AI has moved from the margins to the mainstream - both metaphorically and physically. Once, AI adoption was confined to clusters of high-power compute facilities such as data centres, but in recent years it has spread to the "edge" of industrial operations, enabled by the integration of smart sensors and machine learning (ML). Now, this democratisation of AI is extending its reach across interconnected industrial settings, with a broad range of applications.
Indeed, according to the latest statistics, the edge AI market stood at $20.45 billion in 2023 and is projected to reach $269.82 billion by 2032, an impressive compound annual growth rate of 33%. This rapid growth spans multiple sectors, including automotive, manufacturing, healthcare, energy, and utilities. The research attributes adoption to the enhanced real-time decision-making and reduced network traffic associated with edge devices, in applications such as improving energy efficiency and enabling predictive maintenance.
Hardware heroes of edge AI
But what systems and solutions are underpinning the stellar growth of edge AI, particularly in relation to smart sensors? And how easy is it for engineers to experiment with such technologies in industrial settings? Let's answer those questions by diving deeper into the fundamentals of edge AI and the benefits that can be derived.
As the name suggests, edge computing encourages a decentralised architecture, with distributed computing frameworks positioning enterprise applications closer to data sources such as IoT devices. Edge AI processes data locally using onboard computing resources, such as CPUs. Here, machine learning algorithms analyse data streams in real time without sending them wirelessly to off-site centralised servers. Decisions are based on pre-trained models fully optimised for edge devices, reducing latency and improving privacy.
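To make the idea concrete, here is a minimal sketch of edge-style local inference. The model parameters, threshold, and feature layout are all illustrative placeholders, not any particular product's API: the point is that a pre-trained model's weights live on the device, so each sensor reading is scored locally and only the decision, not the raw data, would need to leave the device.

```python
# Hypothetical pre-trained model parameters, stored on the edge device.
WEIGHTS = [0.8, -0.5, 0.3]
BIAS = 0.1
ALERT_THRESHOLD = 0.5

def infer(reading):
    """Score one sensor reading (a list of features) entirely on-device."""
    return sum(w * x for w, x in zip(WEIGHTS, reading)) + BIAS

def process_locally(reading):
    """Decide locally; only the decision (not raw data) leaves the device."""
    return "alert" if infer(reading) > ALERT_THRESHOLD else "normal"

print(process_locally([1.0, 0.2, 0.5]))  # prints "alert"
```

In a real deployment the linear model would be replaced by a trained neural network exported for the target hardware, but the pattern - data in, local inference, decision out - is the same.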
This edge AI architecture depends on hardware from the likes of NVIDIA – initially known for its graphics processing chips but now a frontrunner in the AI revolution. The NVIDIA Jetson series, for example, comes equipped with GPUs and other components optimised for running deep learning and processing at the edge. The Jetson Nano (252-0055), specifically, is a small, low-cost AI device designed for learning and prototyping AI projects at the edge, while other Jetson hardware offers near server-grade performance. NVIDIA and others provide a full range of AI modules specifically designed for edge devices, making them ideal for robotics, smart cameras, and industrial automation applications.
TinyML for Constrained Devices
However, these devices aren’t always best suited for edge AI applications. Sometimes, ML models need to be run on ultra-constrained devices with minimal processing power and memory, like microcontrollers (MCUs). This is where the concept of TinyML, a field of machine learning technologies and applications that perform on-device sensor data analytics at extremely low power, comes in.
Historically, traditional sensors collected data on various parameters, such as temperature, humidity, vibration and motion in industrial IoT. This information would have been transmitted, often via wireless networks, to a central processing unit or cloud server for analysis. However, this approach has drawbacks, most notably around energy consumption, latency and the need for robust communication networks.
Now, though, according to the TinyML Foundation, the global non-profit professional organisation focused on ultra-low-power machine learning at the edge, sensors can have onboard ML capabilities designed specifically for low-power, resource-constrained devices. Data collected by an edge sensor can be pre-processed by TinyML algorithms and fed into a trained TinyML model, which makes predictions or classifications based on its training. The edge device can then take real-time action based on the model's output. For instance, a pressure sensor with TinyML anomaly detection could raise an alert if it detects anomalous readings.
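The pressure-sensor scenario can be sketched with a simple statistical approach of the kind a TinyML deployment might run on a microcontroller: keep a small rolling window of recent readings to define "normal", and flag any reading that sits far outside it. The window size and z-score threshold below are illustrative assumptions, not values from any specific product.

```python
from collections import deque
import math

WINDOW = 16        # illustrative: number of recent readings kept on-device
Z_THRESHOLD = 3.0  # illustrative: how many standard deviations counts as anomalous

class PressureMonitor:
    """Tiny on-device anomaly detector using rolling mean and deviation."""

    def __init__(self):
        self.history = deque(maxlen=WINDOW)

    def update(self, reading):
        """Return True if the reading is anomalous relative to recent history."""
        anomalous = False
        if len(self.history) == WINDOW:
            mean = sum(self.history) / WINDOW
            var = sum((x - mean) ** 2 for x in self.history) / WINDOW
            std = math.sqrt(var) or 1e-9  # guard against zero variance
            anomalous = abs(reading - mean) / std > Z_THRESHOLD
        self.history.append(reading)
        return anomalous
```

A trained TinyML model would typically replace the hand-written statistics with a compact neural network, but the control flow - sense, score locally, act - is identical, and the memory footprint here (a 16-value buffer) shows why such logic fits comfortably on an MCU.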
This architecture effectively allows AI to be optimised at the edge in environments and applications that wouldn't previously be suitable. It also provides flexibility when combined with previously mentioned solutions, such as NVIDIA's more powerful GPUs or AI chips. For instance, TinyML could be deployed on NVIDIA hardware for more diverse AI edge computing solutions, from high-performance applications requiring complex algorithms to more lightweight, energy-efficient models. This technology is democratising AI and extending its reach across industrial applications – and there is a flourishing TinyML community, with many open-source projects and forums.
Experimenting with edge AI
The potential is there, then, for a revolution in edge AI, using accessible technology with plug-and-play solutions that encourage fast prototyping and testing of AI-driven solutions. Let's look at how an engineer might experiment with AI at the edge, highlighting some practical steps that could be taken to bring ideas to life. According to digital engineering group Encora, these steps can include:
- Work out your use case: Look at specific areas where AI can introduce efficiency or innovation within your organisation. For instance, objectives might include initiating predictive maintenance to reduce downtime and costs, or improving energy management to enhance operational efficiency and safety.
- Choose the proper hardware: Selecting the right hardware is crucial for balancing power consumption with computational needs. As discussed earlier in this piece, high-performance GPUs are suited for intensive computations, whereas MCUs are ideal for low-power, real-time applications. It depends on the requirements of your use case.
- Focus on the data: The quality and type of data collected are critical. Ensure the data is comprehensive and accurately represents the operational conditions you aim to monitor or predict. This data collection might involve integrating various sensors. Also, ensure the data is clean and well-organised for effective analysis.
- Be sure to experiment: Training your model requires selecting the right machine-learning algorithms and tuning parameters to fit your specific dataset. This process might involve experimenting with different models to find the one that best predicts or identifies the patterns relevant to your use case.
- Ensure model efficiency: Optimising your model for inference is about making it as efficient as possible without sacrificing accuracy. Techniques like quantisation can reduce the model's size and computational demands, making it more suitable for edge deployment.
- Test, test and test again: It is crucial to test the model to ensure it performs as expected. This process might involve setting up a pilot program or deploying in phases to monitor performance and gather feedback for refinement.
- Conduct performance reviews: AI models can drift over time as operational conditions change. Implement regular performance reviews and updates to the model to ensure it continues to effectively meet your needs. This proactive approach helps maintain the relevance and accuracy of your edge AI system.
- Get feedback from users: Involve multi-disciplinary stakeholders to assess internal sentiment towards the edge AI project and chart the way forward.
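The "ensure model efficiency" step above usually relies on quantisation, so it is worth seeing the arithmetic. This is a minimal sketch of 8-bit affine (asymmetric) quantisation - mapping float32 weights onto the 0-255 integer range via a scale and zero point - which is the core idea behind shrinking a model roughly fourfold for edge deployment. Frameworks apply this automatically; the helper functions here are purely illustrative.

```python
def quantise(weights, num_bits=8):
    """Map float weights to integers in [0, 2**num_bits - 1]."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin) or 1.0   # guard against a degenerate range
    zero_point = round(qmin - lo / scale)       # integer that represents 0.0
    q = [max(qmin, min(qmax, round(w / scale + zero_point))) for w in weights]
    return q, scale, zero_point

def dequantise(q, scale, zero_point):
    """Recover approximate float values from the quantised integers."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, zp = quantise(weights)
recovered = dequantise(q, scale, zp)
# Each recovered value approximates its original to within one quantisation step.
```

The trade-off visible here - each value is only accurate to within one quantisation step (the scale) - is exactly the accuracy-versus-size balance the step describes, which is why quantised models are validated against the original before deployment.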
So there we have it. That's a topline overview of the shift of AI to the edge in industrial operations, enabled by integrating smart sensors and ML. We can see that the continual evolution of traditional sensors to smart sensors with edge processing capabilities is having a profound effect. Ultimately, this shift is paving the way for widespread innovation – driven in no small part by the ingenuity of the DesignSpark community.
For the latest articles on AI and NVIDIA, take a look at the AI Hub.