Description as a Tweet:

Cornucopia helps users find recipes by taking a picture of an ingredient. We wanted to create a place where users can discover recipes that use ingredients they already have, and a tool that makes shopping simpler.

Inspiration:

Currently, we are living in challenging times. Parents are struggling to balance their work and home lives. Now more than ever, it is important to make do with what you already have. Grocery shopping not only requires a lot of time; it also increases the chance of being exposed to COVID-19. Our app, Cornucopia, makes finding recipes as easy as a few clicks.

What it does:

Cornucopia is an app that helps users turn accessible ingredients into new dishes. Based on the ingredients it identifies, the app points users to recipes that contain them. Using multi-class image recognition and web scraping, Cornucopia recommends recipes built around ingredients the user may already have on hand.
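
For illustration, the recipe-matching step can be expressed as a short database query. The sketch below assumes hypothetical Django models named Ingredient and Recipe; the names and schema are our illustration, not the actual code.

    # A minimal sketch of the recipe lookup, assuming hypothetical Django
    # models named Ingredient and Recipe; the real schema may differ.
    from django.db import models
    from django.db.models import Count

    class Ingredient(models.Model):
        name = models.CharField(max_length=100, unique=True)

    class Recipe(models.Model):
        title = models.CharField(max_length=200)
        url = models.URLField()
        ingredients = models.ManyToManyField(Ingredient, related_name="recipes")

    def recipes_for(ingredient_names):
        """Rank recipes by how many of the given ingredients they use."""
        return (
            Recipe.objects
            .filter(ingredients__name__in=ingredient_names)
            .annotate(matches=Count("ingredients"))
            .order_by("-matches")
        )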

How we built it:

We used a pre-trained ResNet50 model to classify ingredient images and integrated it into our frontend iOS application, which we wrote in Swift. Our backend API is served by Django with a PostgreSQL database, and Scrapy web spiders supply the recipe data for the API. The backend is deployed on an AWS web server for security and availability.
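
As a rough illustration of the classification step, here is a minimal Keras sketch using the stock ImageNet-trained ResNet50 (the ImageNet labels stand in for our ingredient labels here):

    # A minimal sketch of the classification step, using the pre-trained
    # ImageNet ResNet50 that ships with Keras.
    import numpy as np
    from tensorflow.keras.applications.resnet50 import (
        ResNet50, preprocess_input, decode_predictions,
    )
    from tensorflow.keras.preprocessing import image

    model = ResNet50(weights="imagenet")

    def classify(path):
        """Return the top-3 (class_id, label, probability) predictions."""
        img = image.load_img(path, target_size=(224, 224))  # ResNet50 input size
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        return decode_predictions(model.predict(x), top=3)[0]

    # classify("photo.jpg") might return [("n07753592", "banana", 0.98), ...]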

Technologies we used:

  • SQL
  • Swift
  • Python
  • Django
  • AI/Machine Learning

Challenges we ran into:

We faced many challenges while building and training our machine learning image recognition software. Not only did it take a lot of time, it was also challenging to code: testing any change meant rerunning training, which took over an hour. As a group, we ran and fine-tuned the model many times, and we made the most of the waiting time by having whoever was idle work on the pitch and the application. Although there were many problems throughout the hackathon, we believe we managed our time very well and created an amazing, fully functional app.
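
For context, a training run along these lines is what took over an hour per attempt. The sketch below is illustrative rather than our exact script; the data directory and class count are placeholders.

    # An illustrative fine-tuning setup, not our exact script; the data
    # directory and class count are placeholders.
    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import ResNet50
    from tensorflow.keras.applications.resnet50 import preprocess_input

    NUM_CLASSES = 20  # placeholder number of ingredient categories

    base = ResNet50(weights="imagenet", include_top=False, pooling="avg")
    base.trainable = False  # freeze the pre-trained backbone

    model = models.Sequential([
        layers.Lambda(preprocess_input, input_shape=(224, 224, 3)),
        base,
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])

    train = tf.keras.preprocessing.image_dataset_from_directory(
        "data/ingredients",  # placeholder: one folder per ingredient class
        image_size=(224, 224),
        label_mode="categorical",
    )
    model.fit(train, epochs=10)  # runs like this took well over an hour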

Accomplishments we're proud of:

We are happy that we were able to combine the frontend and backend seamlessly. We are proud that we built a TensorFlow Sequential model from scratch in such a short amount of time, although we ultimately did not ship it because we could not convert it into a format usable from Swift. We are also proud that we used a pre-trained model to predict the ingredient in a picture, and of a UI that came together quickly and is very simple to use.
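
For reference, a typical path for getting a TensorFlow model into a Swift app is a Core ML conversion with coremltools, sketched below. Whether this matches our exact attempt is an assumption, and the layer sizes are illustrative; unsupported layers and TensorFlow/coremltools version mismatches are common reasons this step fails under time pressure.

    # A sketch of a typical TensorFlow-to-Core-ML conversion path using
    # coremltools; whether this matches our exact attempt is an assumption,
    # and the layer sizes are illustrative.
    import coremltools as ct
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(20, activation="softmax"),  # placeholder class count
    ])

    # coremltools can convert tf.keras models directly; unsupported layers
    # or version mismatches often break this step.
    mlmodel = ct.convert(model, convert_to="neuralnetwork")
    mlmodel.save("Cornucopia.mlmodel")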

What we've learned:

We learned how to use TensorFlow, Keras, Swift, Django, and Amazon Web Services, and we picked up UI design skills along the way.

What's next:

We want to elevate the app by recognizing multiple ingredients in a single photo efficiently and accurately, so users need to take fewer pictures. We would also like to add a preview for each recipe that includes cooking time, directions, and more.

Built with:

Xcode, Amazon Web Services, Django, Supervisor, uWSGI, Scrapy, PostgreSQL, Splash, NGINX, and TensorFlow

Prizes we're going for:

  • Best Documentation
  • Best Web Hack
  • Best Mobile Hack
  • Best Machine Learning Hack

Team Members

Aditya Pawar
Vivek Nadig
Brayden Tam
Akhil Datla

Table Number

Table TBD