Sponsored by: NVIDIA

Industrial AI - Part 3: Key ML Terminology: Are You 'Sorting Ingredients' or 'Baking Cakes'?

by Hey Jude

Supervised vs Unsupervised Learning.

The Lion Vision example in Part 1 is what you’d call ‘Supervised’ Machine Learning. For easy reference, this is ‘Monkey See, Monkey Do’, where the computer is the ‘monkey’. Another example is sorting ‘Good’ vs ‘Bad’ apples in an apple-sorting application. In practice, it might look like:

  1. Input 1000 ‘Good Apple’ photos, and 1000 ‘Bad Apple’ photos, each labelled as such.
  2. You ‘show’ it (input) an ‘Unclassified Apple’. It will, statistically speaking, ‘guess’ whether it’s ‘Good’ or ‘Bad’. 
    (This is why you see boxes with 96% in demos - that figure is the ‘guess’, or more precisely the ‘confidence level’).
  3. You can then have a trapdoor, pneumatic blower, etc. to ‘extract’ the ‘Bad’ apple from the batch.
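
The three steps above can be sketched in a few lines of Python. This is a toy illustration, not the real vision pipeline: each ‘photo’ is reduced to two made-up features (a colour score and a bruise score), and a simple nearest-centroid rule stands in for the trained neural network.

```python
# Toy sketch of the 'Good vs Bad Apples' supervised workflow.
# Real systems train a neural network on photos; here each "photo" is
# just two hypothetical feature values: (colour_score, bruise_score).

def centroid(samples):
    """Average feature vector of a list of (colour, bruise) samples."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(2))

def classify(sample, centroids):
    """Return (label, confidence) for the nearest class centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    distances = {label: dist(sample, c) for label, c in centroids.items()}
    label = min(distances, key=distances.get)
    # Crude stand-in for the '96%' confidence figure seen in demos.
    confidence = 1 - distances[label] / sum(distances.values())
    return label, confidence

# Step 1: labelled training data (illustrative feature values).
good = [(0.9, 0.1), (0.8, 0.2), (0.95, 0.05)]
bad = [(0.3, 0.8), (0.2, 0.9), (0.4, 0.7)]
centroids = {"Good": centroid(good), "Bad": centroid(bad)}

# Step 2: show it an unclassified apple; it 'guesses'.
label, conf = classify((0.85, 0.15), centroids)
print(label, round(conf, 2))  # Good 0.95

# Step 3: act on the guess (trapdoor, pneumatic blower, etc.).
if label == "Bad":
    print("Divert apple off the line")
```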

Now, the system would not know what to do if you showed it a Banana, an Orange or a Star Fruit!

This is like asking a 2-year-old child about a ‘Galaxy’ when it’s only just grasped the concept of ‘Mummy’ and ‘Daddy’. Your system would need a bit of code to say ‘reject anything that isn’t either a Good Apple or a Bad Apple’, or you could command it to ‘Stop All Processes’ and call a human to inspect why you have a Banana in an Apple Factory. (Perhaps monkey business is afoot!)
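
In code, that ‘call a human’ guard is often just a confidence threshold. A minimal sketch, with an illustrative cut-off value:

```python
# Sketch of the 'Banana in the Apple Factory' guard: if the model's
# confidence is below a threshold, stop and call a human rather than
# forcing a Good/Bad guess. The threshold value is illustrative.

CONFIDENCE_THRESHOLD = 0.90  # hypothetical cut-off

def decide(label, confidence):
    if confidence < CONFIDENCE_THRESHOLD:
        return "STOP: call a human - this may not be an apple at all"
    return f"Route as {label} apple"

print(decide("Good", 0.96))  # confident apple -> sort it
print(decide("Good", 0.51))  # a banana confuses the model -> stop
```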

The model is only good if you have trained it sufficiently, with enough examples that its ‘confidence level’ is as high as possible. For some situations, say apples, you may only need a couple of hundred photos; for others, say cancer screening on X-rays / MRIs, you may need tens of thousands or more to gain sufficient confidence to be useful. And even then, the ‘Action’ you take is different: incorrectly rejecting an apple is not a big deal, but misdiagnosing a lump in someone’s body is far more serious - hence why humans still check AI/ML medical data thoroughly.

Where this process gets interesting is whether you start out knowing the end result you desire, or use Machine Learning in a more open-ended mode of working…

Unsupervised Learning

In contrast to ‘Supervised’ Machine Learning is ‘Unsupervised’ Machine Learning. To use another fruit-based example, this is where you’d do things slightly differently:

  1. Input 1000 photos of Apples, 1000 photos of Pears, 1000 photos of Bananas, etc. 
    (The system would hopefully create ‘groups’ or ‘sets’, of 3 distinct categories).
  2. You’d show it at random either an Apple, Pear or Banana. It would ‘guess’ which it was.
  3. Hopefully, it’d guess right, and sort your fruit into the correct one of three boxes.
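
A minimal sketch of that grouping step, using a tiny k-means pass on made-up (length, yellowness) features. The point is that no labels are supplied: the three ‘sets’ emerge from the data itself.

```python
# Toy sketch of the unsupervised version: no labels are given up front,
# so the system must discover the fruit 'groups' itself. A tiny k-means
# pass clusters hypothetical (length_cm, yellowness) features.
import random

random.seed(0)
# 30 unlabelled 'photos', each reduced to two made-up features.
fruit = ([(7.0 + random.random(), 0.2) for _ in range(10)]      # apple-ish
         + [(9.0 + random.random(), 0.3) for _ in range(10)]    # pear-ish
         + [(18.0 + random.random(), 0.9) for _ in range(10)])  # banana-ish

def kmeans(points, k=3, iters=10):
    """Group points into k clusters; returns (centres, groups)."""
    pts = sorted(points)
    # Simple deterministic init: spread starting centres across the data.
    centres = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p[0] - centres[i][0]) ** 2
                                                + (p[1] - centres[i][1]) ** 2)
            groups[nearest].append(p)
        # Move each centre to the mean of the points assigned to it.
        centres = [(sum(q[0] for q in g) / len(g), sum(q[1] for q in g) / len(g))
                   if g else centres[i] for i, g in enumerate(groups)]
    return centres, groups

centres, groups = kmeans(fruit)
print(sorted(len(g) for g in groups))  # [10, 10, 10] - three groups found
```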

An interesting example I came across in 2021, through Make Magazine, was from Benjamin Cabé, who created a ‘smart nose’. It sampled various batches of sourdough bread mix (all slightly different), using a Gas Sensor array.

Then he went on to bake these various batches of bread mix - all having slightly different quantities of yeast, sugar, water, temperature, proving time, etc. Then he ate them all, and personally rated which were the best/worst tasting loaves of sourdough bread that resulted from the various mixes.

This ‘ranked output’ was then used to effectively ‘reverse engineer’ what the best combination of ingredients was. This is why you often hear the phrase ‘Feedback Loop’, where output data (usually favourable) is ‘fed back’ into the system to make it more inclined towards said ‘favourable’ result. (Of course, it can also be tuned for the ‘unfavourable’ results, as in the ‘Bad Apples’ example).
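
The feedback loop can be sketched as a simple ‘tweak, rate, keep the favourable result’ cycle. Here the rate function is a hypothetical stand-in for Benjamin tasting each loaf, and yeast quantity is the only variable being tuned:

```python
# Minimal sketch of a 'feedback loop': rank each batch, feed the
# favourable results back, and nudge the recipe towards them.
import random

random.seed(1)

def rate(yeast_g):
    """Hypothetical taste score: best loaves around 10 g of yeast."""
    return -abs(yeast_g - 10)

recipe = 2.0  # starting yeast quantity in grams (illustrative)
for _ in range(200):
    candidate = recipe + random.uniform(-1, 1)  # bake a tweaked batch
    if rate(candidate) > rate(recipe):          # favourable result?
        recipe = candidate                      # ...feed it back in
print(round(recipe, 1))  # ends close to the best-tasting ~10 g
```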

Make Magazine - Smart Nose

Above: Make featuring Benjamin Cabé’s ‘smart nose’, running on TinyML. Plus my interview with him.

What I like about Benjamin’s [unsupervised] Machine Learning example is that it is somewhat more interesting than the ‘Monkey See, Monkey Do’ approach. This is not to be a snob, but rather to say it gets us closer to where the future may take us with Machine Learning. More on that later.

Design of Experiments. (Why Data Scientists are the ‘OGs’ of Machine Learning).

On a note of ‘credit’ for this Unsupervised Learning process: if you know any Chemists or Data Scientists and enthusiastically start telling them about this example (as I have naively done in the past!), they will likely roll their eyes and scoff with irritation - as if Machine Learning has unfairly, and only recently, ‘taken all the credit’ for a process that has in fact been around for years. To them, this is simply known as ‘Design of Experiments’, or DoE.

Components of Design of Experiments

Above: Example of making Cake, (similar to Benjamin’s sourdough approach). Credit: SafetyCulture.

DoE is where you create matrices of multi-variable inputs, run theoretical models on thousands of combinations, and then see if these hold up in the real world, before ‘feeding back’ that verification data to keep improving the model.
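
A minimal sketch of such a matrix in Python: a full-factorial combination of a few hypothetical ingredient levels, with a toy scoring function standing in for the physical ‘bake it and taste it’ experiment.

```python
# Sketch of a DoE-style full-factorial matrix: every combination of a
# few ingredient levels, scored by a stand-in 'taste model'. In a real
# study the scores would come from physical experiments, fed back in.
from itertools import product

yeast = [5, 10, 15]      # grams (hypothetical levels)
water = [300, 350]       # millilitres
prove_hours = [2, 4, 8]

def taste_model(y, w, p):
    """Toy scoring function: penalise distance from an ideal mix."""
    return -(abs(y - 10) + abs(w - 350) / 10 + abs(p - 4))

runs = [((y, w, p), taste_model(y, w, p))
        for y, w, p in product(yeast, water, prove_hours)]
runs.sort(key=lambda r: r[1], reverse=True)

print(len(runs))   # 3 * 2 * 3 = 18 combinations in the matrix
print(runs[0][0])  # best-scoring mix: (10, 350, 4)
```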

I originally trained in Chemistry (before retraining in Design Engineering), which is basically ‘cooking’ to a really complex level - only you usually don’t get to eat the result (perhaps why I quit).

Anyway, years later, when I led the R&D Department at Sugru (a mouldable glue formulation), the team I managed used these methods to completely reformulate the compound to be ‘Family Safe’. It required dozens of inputs, and even more experiments to ‘feed back’ positive physical characteristics, before homing in on the best-performing formulations. What is important to stress is that everything is a ‘compromise’. We may have found the *most adhesive* glue formulation, let’s say “#126” - but sadly, it also had the lowest shelf life! Formulation “#53” may have had the best shelf life - but it was horribly smelly, and users hated it! Formulation “#218” may have been the best across all factors, but too close to the compliance limit for a given chemical catalyst, so too risky! And so on…

The point is, the question of ‘what is the best Sugru formulation?’ is as subjective as ‘what is the best Sourdough?’ was for Benjamin Cabé. (Perhaps even a ‘Good’ Apple is up for debate - the ‘Wonky Fruit’ counter-movement argues that small deformities are not a problem and such fruit should not be wasted; see Oddbox, who make a virtue of this very point).

We can create highly optimal formulations, but there is rarely a ‘perfect’ formula, any more than there is a perfect human being! We do the best we can, and ultimately ‘reframe’ things in a positive light, which (without going off on a whole new tangent), is pretty much why ‘Marketing’ exists!

To use another adhesive example: Post-It Notes were developed by reframing a ‘failed glue’ - too weak to permanently stick things together - as a ‘repositionable glue’, perfect for the temporary adhesion of the little yellow notes we now know and love! So again, this is another reason to believe the ‘Robots’ are not able to do this sort of creative lateral thinking, ‘reframing’ a ‘failure’ as an ‘opportunity’. This is why I’m still optimistic about the future, but I do think it’ll be a ‘hybrid’ future - which is why I urge people to experiment with Machine Learning now, and not to think you have to ‘master’ everything in order to get started, or be a billionaire playboy to buy your first ML kit (NVIDIA Jetson computers start at around £150/$180 on RS Online (252-0055)).

To recap on the basics, using food-based examples - these can ‘translate’ to your own workplace’s needs:

  1. The ‘Good vs Bad Apples?’ upgrade (easy)
    (What automated processes do you have that have ‘pass / fail’ features?)
  2. The ‘What Fruit is this?’ upgrade (medium)
    (What categories of product can you group more easily?)
  3. The ‘Make The Best Cake/Sugru/Bread’ upgrade, using DoE (hard)
    (Can you reverse-engineer/optimise your ingredients-to-product formulations?)

Remember, most companies in 2025 are not even doing 1 or 2, let alone 3! For all the stratospheric hype coming out of Silicon Valley, the fact is ‘ground-level’ innovation is always much more gradual. This is not to advocate complacency - the time is certainly now - but it’s also no longer rocket science.

So, as they say, “you don’t have to run faster than a hungry bear in the woods to not get eaten - you just have to run faster than the slowest person next to you”. For the C-Suite readers, it takes some humility, and even more courage to acknowledge that the best application of Artificial Intelligence is not ‘Human vs Machine’ but ‘Human + Machine’. It was ever thus.

AI as a Proxy. When a Small Thing relates to a Big Thing.

Presently, I’m working with a friend on making a ‘Smart Birdbox’, using AI/ML to identify features and movement of the birds automatically. Having done this ‘the labour-intensive way’, manually recording things from my shed, I can say it was fun but hard work - and I did miss one chick fledge! So clearly this is where AI can help, ensuring I automatically record/capture all the best bits through motion detection and more, even though I am likely to still be watching them myself whenever possible.

However, what is more interesting (even if you’re not into ornithology) is that AI data can be used as a proxy for something much bigger. Suppose I collect data on what the chicks get fed - say 200 caterpillars, 150 spiders, 100 peanuts - in 2025, but in 2026 we see a 15% decline in caterpillars and spiders, and an increase in bird nuts. Does this indicate a worrying ecological disruption is afoot? With successive years of data, if the trend continues, it may become a serious data set that foretold a climate issue, or perhaps pesticide use - or perhaps the birds just got ‘lazy’ and prefer birdfeeders… is this their equivalent of a McDonald’s?!
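
As a sketch, using the 2025 counts above plus imagined 2026 counts, the year-on-year ‘proxy’ check is just a percentage-change calculation with a flag for worrying drops:

```python
# Sketch of the 'proxy' idea: year-on-year change in what the chicks
# get fed. 2025 counts are from the text; 2026 counts are imagined.
feed_2025 = {"caterpillars": 200, "spiders": 150, "peanuts": 100}
feed_2026 = {"caterpillars": 170, "spiders": 127, "peanuts": 130}

for item, count_25 in feed_2025.items():
    change = (feed_2026[item] - count_25) / count_25 * 100
    flag = "  <- worth investigating?" if change < -10 else ""
    print(f"{item}: {change:+.0f}%{flag}")
```

A 10% decline threshold is arbitrary here; the point is that a simple automated tally, collected for fun, becomes an early-warning signal over successive seasons.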

Whatever the hypothesis, one can appreciate this may also hold true for your company, and this is where ‘Unsupervised’ learning gets especially interesting: it can watch for trends that you may not have anticipated at first.

AI Bird Box project

Above: AI Bird Box project, a work in progress. Follow on Instructables. Using Geti software from Intel.

Why I love seemingly ‘non-Engineering’ projects like this in any company is that they also foster two important aspects of innovation and change management:

  1. It does not matter what you train your AI/ML system to do, for £200 you can do remarkable things which were impossible a decade ago. Getting playful and creative is where it starts.
  2. It took me about 2 hours to learn this software and start getting results, and I wrote no code - it was all graphical user interface (GUI). The point being, things move fast, and things can get simpler or cheaper ‘overnight’, without a CEO or even CTO realising it. (If in doubt, read up on DeepSeek, which, although not without its faults and data privacy issues, is certainly a disruptor to the norm!)

My point again for C-Suite / managers / non-techy people is this: if employees are ‘tinkering’ with things like AI Bird Boxes, it really shows the technology is mature, and at a scale where it is becoming ubiquitous and ever-present.

Furthermore, it is not only the domain of the super-rich elites: school kids will be doing ML projects in a few years’ time. Oh wait, my mistake - they already are, and have been for the past couple of years, with the BBC Micro:Bit (233-6797), a £25 computer doing ML, running Python / BlockCode.

“Tomorrow” it seems, is always happening, Today!

This is why I personally feel that even having ‘hack-a-thons’ at work - where fun projects like this can be done, say, twice a year (even if your core business has nothing to do with bird boxes or insects) - means it’s not long before people connect the dots and it sparks a business idea. It’s human nature to ‘circle back’ to the day-to-day. But managers need to resist the temptation to demand ‘instant ROI’ on day one. Trust that inspiration will flow, like karma, back to them. It always has in my experience at Dyson, LEGO, Sugru and more besides. Indeed, sometimes it’s precisely because it’s not a ‘business idea’ that people relax at a hack-a-thon, don’t feel as self-conscious, and good ideas start.


Visit our AI Hub Sponsored by NVIDIA

Industrial AI Blog Series Contents:

Part 1: Lion Vision, AI vs Automation, and Why a Game of ‘Go’ Changed Everything.

Part 2: “Dirty, Dangerous, Difficult & Dull” - The Case for Ethical AI Automation.

Part 3: Key ML Terminology: Are You 'Sorting Ingredients' or 'Baking Cakes'?

Part 4: ML Lessons from Lion Vision. AI Failures, and ‘Sensing Like A Robot’.

Part 5: Getting Started with Jetson Nano / Orin. And Why Octopus Brains Are ML Marvels.

Part 6: A *Shiny* Idea, Whilst at Lion Vision: “Hi Vis Batteries”. And Why You Need Underdog Engineers.

Are you looking for additional information on Lion Vision?

Winner of the 2020 Alastair Graham-Bryce "Imagineering" Award (IMechE), Jude thrives in high risk collaborations, uncertainty and pressure - drawing from global networks and experiences to deliver high profile campaigns and digital/physical products. A leading Creative Technologist & Physical Prototyping Expert, Jude has worked for NHS, Dyson, LEGO, and a number of start-ups. He is one of the eight featured inventors in BBC Two's Big Life Fix. More at: https://www.judepullen.com/