
Student Innovation - OnionBot: Building a Robot Sous-Chef

OnionBot is a robotic sous-chef that automates pan-cooking tasks, giving you multitasking superpowers so you can focus on culinary creativity. It was inspired by a vision for a robot that softens the onions while you prep the next ingredients!

Parts list

Qty Product Part number
1 Raspberry Pi 4 4G Model B 182-2096
1 Google Coral AI USB Accelerator 201-7821
1 Raspberry Pi Touchscreen, LCD Touch Screen 7in Module for Raspberry Pi 899-7466
1 RS PRO 1.75mm Grey PLA 3D Printer Filament, 1kg 832-0251
1 RS PRO White LED Strip 5m 12V 153-3637
1 Verbatim 32 GB MicroSDHC Card Class 10 750-4802
2 RS PRO Grey PVC Cat5 Cable F/UTP, 1m Male RJ45/Male RJ45 333-6554
1 Power Adapter UK Plug In 12V 6W 175-3294
1 Wide angle (110°) – MLX90640 Thermal Camera Breakout
1 Feedback 360° High Speed Servo
1 Wide Angle Camera Module for Raspberry Pi
1 12V DC Centrifugal blower
4 Female RJ45 Connectors
2 LM13 Adjustable DC-DC Converter
1 Any induction cooker with a power knob


I am Ben, a Design Engineering student who is mad about cooking with robotics. I worked on this project in my 4th year at Imperial College London, and I use it to cook my pasta meals (so... most days!). For a project overview and demo, check out the video below or watch it on YouTube.


How can we apply robotics to home cooking?

Automation tech in the food industry is well known to reduce physical and cognitive demands on production-line operators. Perhaps the same technology could also reduce errors and help decision-making for home cooking? For example, how might robots augment the cooking skills of busy parents and professionals?

The problem: kitchens pose very different design engineering challenges from production lines, because home cooking requires multi-purpose tools, not specialised machines.

  • Robot arms can mimic human-kitchen interaction, but these are currently far too large and expensive to be feasible for the home.
  • For multi-purpose sensing, cameras can detect a wide variety of cooking information, but there are currently no datasets for training cooking image classification algorithms.

Project inspiration: With OnionBot, I wanted to see if there was a way to integrate industry automation techniques and machine vision into a simple robot that fits on a countertop.

Prototype design

OnionBot tackles the challenge of automating pan cooking. Cooking progress is tracked by a Raspberry Pi 4, using a wide-angle camera and thermal camera mounted above the induction cooker. Recipes are processed by a Python script, and a servo motor directly drives the stove control knob to adjust the power setting.
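The knob-driving step can be sketched as a simple linear mapping from a desired power level to a servo pulse width. This is a minimal sketch: the pulse widths, number of power levels, and GPIO details below are illustrative assumptions, not OnionBot's actual calibration values.

```python
# Hedged sketch: map an induction-hob power level to a servo pulse width.
# MIN/MAX pulse widths and the number of power levels are assumptions for
# illustration; a real build would calibrate these against the knob's travel.

MIN_PULSE_US = 1000   # assumed pulse width at the knob's minimum position
MAX_PULSE_US = 2000   # assumed pulse width at the knob's maximum position
POWER_LEVELS = 9      # assumed number of discrete power settings

def power_to_pulse(level: int) -> int:
    """Linearly map a power level (0..POWER_LEVELS) to a pulse width in microseconds."""
    if not 0 <= level <= POWER_LEVELS:
        raise ValueError(f"power level must be 0..{POWER_LEVELS}")
    span = MAX_PULSE_US - MIN_PULSE_US
    return MIN_PULSE_US + round(span * level / POWER_LEVELS)

# On the Pi, a PWM library such as pigpio could then drive the servo, e.g.:
#   pi.set_servo_pulsewidth(SERVO_GPIO, power_to_pulse(level))
```

Keeping the mapping as a pure function makes it easy to calibrate and test without hardware attached.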

The goal of this project is automation without unnecessary complexity. After all, we are not replacing the chef but simply giving them multi-tasking superpowers! Since there is no actuator more flexible and dextrous than the human, a touch screen interface called ‘sous-chef’ shows instructions, reminders, and alerts. OnionBot watches the pan so that you can focus on culinary creativity!


OnionBot hardware overview

The final hardware design was reached through several prototype iterations, improving the aesthetic design and resolving signal interference and lens condensation issues. The electronics are mounted on a display board to make the system easier to understand. Through this project, I learnt a huge amount about circuits, Raspberry Pi, and how hard it is to build something that really works reliably!


Hardware prototyping process

Machine Vision Model Training

Food image classification is tricky, as food images often have numerous difficult-to-define features and a lot of environmental variation. A new cooking device must tackle these perception problems.

OnionBot introduces two improvements:

  • The fixed stove-top camera view ensures a consistent environment
  • With general classification, a model must identify characteristics from thousands of potential classes. Instead, we classify only the key events at which actions occur (milestones) for a single recipe. Each model must identify tens of classes (or fewer!), dramatically simplifying the perception challenge.
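The milestone approach boils down to picking the highest-scoring class from a small, recipe-specific label set. A minimal sketch, assuming illustrative milestone names and confidence threshold (not OnionBot's actual labels):

```python
# Hedged sketch: select a recipe milestone from classifier scores.
# The milestone names and threshold are illustrative assumptions; a real
# model would output scores for the recipe's own trained labels.

MILESTONES = ["onions_raw", "onions_softened", "sauce_added", "sauce_reduced"]

def detect_milestone(scores, threshold=0.8):
    """Return the milestone whose score clears the threshold, else None."""
    best = max(range(len(scores)), key=scores.__getitem__)
    return MILESTONES[best] if scores[best] >= threshold else None
```

With so few classes per recipe, even a modest classifier can be confident, and low-confidence frames can simply be ignored until the next one arrives.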

As these datasets don’t currently exist, I built an interface for easy creation of labelled datasets of cooking images. With the control panel, you simply click along with each milestone as you cook, and labelled images are automatically uploaded to Google Cloud, where they can be accessed by model training platforms. Training is simplified using Google AutoML; AutoML allows models to be trained for new recipes with just a few clicks, no ML expertise required!
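One common way to organise such uploads is to encode the recipe and milestone label in the object path, so the folder structure itself carries the class labels for the training platform. This is a hedged sketch of that idea, not OnionBot's actual naming scheme:

```python
# Hedged sketch: build a Cloud Storage object path that encodes the recipe
# and milestone label, so training tools can read labels from the path.
# The path layout is an illustrative assumption.

from datetime import datetime, timezone

def labelled_image_path(recipe: str, label: str, ts=None) -> str:
    """Return an object path of the form recipe/label/timestamp.jpg."""
    ts = ts or datetime.now(timezone.utc)
    return f"{recipe}/{label}/{ts.strftime('%Y%m%d_%H%M%S')}.jpg"
```

The timestamp keeps filenames unique as you click along with the recipe, while the recipe/label prefixes group images into classes.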


A) Examples from image classes created for the BBC GoodFood tomato sauce recipe. B) The Live Labelling interface makes creating labelled datasets as simple as clicking along with recipe milestones.

System Overview

The diagram below shows the complete system. I wrote the firmware in Python, accessible through a Python Flask API. The interfaces for the control devices are built in HTML/CSS/JavaScript, using the Bootstrap library.
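A minimal sketch of what such a Flask API might look like; the route names and state fields here are assumptions for illustration, not OnionBot's actual endpoints:

```python
# Hedged sketch: a tiny Flask API exposing cooker state on the local network.
# Routes and state fields are illustrative assumptions.

from flask import Flask, jsonify, request

app = Flask(__name__)
state = {"recipe": None, "power": 0}   # illustrative in-memory state

@app.route("/status")
def status():
    # Report the current cooker state to any control device on the network
    return jsonify(state)

@app.route("/power", methods=["POST"])
def set_power():
    # Accept a new power level, e.g. POST {"level": 6}
    state["power"] = int(request.json["level"])
    return jsonify(state)
```

Serving the API over HTTP is what lets any device with a browser (touchscreen, phone, laptop) act as a control panel.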


The complete system makes OnionBot accessible through an API on the local network, so that new control devices can easily be added in the future.

What can OnionBot v0.2 do?

Check out the video above to see a demonstration of the prototype! OnionBot can:

  • Automatically proceed through a complete pasta with tomato sauce recipe
  • Advance through instructions hands-free, using image classification
  • Autonomously control pan temperature using PID feedback control
  • Detect when the pasta is boiling over and turn down the heat!
  • Remind you if you haven’t stirred in a while, preventing your food from burning!
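The temperature control above follows the standard discrete PID pattern: the controller compares the thermal camera's pan reading against a setpoint and outputs a correction that drives the power knob. This is a textbook sketch; the gains and any anti-windup logic in OnionBot's firmware are assumptions not shown here.

```python
# Hedged sketch: a minimal discrete PID controller for pan temperature.
# Gains here are placeholders, not OnionBot's tuned values.

class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint       # target pan temperature
        self.integral = 0.0            # accumulated error
        self.prev_error = None         # for the derivative term

    def update(self, measured, dt):
        """Return a control output given a temperature reading and timestep."""
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a loop, the output would be clamped and mapped to the hob's discrete power levels before being sent to the knob servo.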

This simple recipe functionality only scratches the surface of what could be possible in the future with machine vision!


Delicious pasta sauce coming to a robot near you!

The future of autonomous cooking

The prototype demonstrates an initial proof-of-concept showing the possibilities of automation in home cooking, but large training datasets will be critical to the success of machine vision systems. I propose that autonomous cooking should be approached in the same way that Tesla approaches autonomous driving. Rolling out a ‘fleet’ of networked OnionBot devices would enable crowd-sourcing of massive labelled pan-cooking image databases.

Large cooking image databases don’t currently exist. The OnionBot dataset (which would also include rich metadata on ingredients, temperature, recipes, corrective inputs, and so on) could enable new research into cooking with AI.


A fleet learning model, inspired by Tesla, could facilitate large datasets and advanced deep learning models.

Open source cooking robots

OnionBot is open-sourced to encourage further research into home cooking automation. This project is perfect for makers, as it comprises off-the-shelf components and accessible Python-based code. Long term, a community of collaborators and early adopters could crowd-source:

  • The massive cooking image dataset
  • A database of recipe vision models
  • Advanced deep learning functionality, beyond classification
  • New ‘product’ hardware design

Check out the project's open-source repository to get involved! Code, 3D files, and system information are available there.

What's next?

In the short term, I am continuing to develop OnionBot prototypes. With further hardware iterations, I aim to bring down component costs and expand functionality. My goal is to design an early adopter DIY device that costs under £250, to enable research, education, and robotic cooking for all makers! Going forward, I would love your feedback, support, and ideas. Please reach out to me through the details on my DesignSpark profile!

I always wanted to be an inventor as a kid, and now I'm making it happen! Design Engineering MEng at Imperial College London. Interested in single board computers, cooking robots, farming automation, and campervan conversions! Contact me on LinkedIn.