Guest Author: Arijit Das, 15-year-old from India, Ambassador at Edge Impulse, Co-Organizer for tinyML India.
The rise of IoT and AI
In today’s technology-driven world, where nearly every part of our lives depends on cutting-edge innovation, there are estimated to be more than 21.5 billion IoT devices in operation around the world, and that number keeps growing year after year. Almost all of these devices run on microcontrollers, which usually have less than 100KB of memory and less than 1MB of storage. Yet they are found working in places like factories, industrial plants, weather stations, and so much more. They work well in these places and don’t need to be replaced by more powerful alternatives because they are well suited to harsh environments.
Presently, we’re living in a world surrounded by AI-based applications. Over the course of a day, we use them more than we realise. Tasks like scrolling through your socials, checking the weather, and even taking a picture depend on machine learning models. Training these models is computationally expensive, and running them can be resource intensive as well. At the rate at which we’re using ML services, we need computational systems that are fast enough to keep up.
Big isn’t always the best
When you take a picture or record a video, the effects, visualizations, or even the bokeh happen instantly. You don’t have to wait for the media to go to a data centre, get post-processed, and come back to you. The model now runs locally on your phone or any other device.
When you say “Alexa” or “Ok Google”, you want the device to respond immediately. But today the device records your audio, sends it to cloud instances, and then responds with information from the data centres. Again, you want to run the ML model locally here to improve latency.
Small is becoming the better choice
This is where AIoT, and TinyML in particular, comes into play. TinyML is a field of study focused on deploying models that can run on small, low-powered devices like microcontrollers. It enables low-latency, low-power, and low/no-bandwidth model inference on edge devices. A standard consumer CPU consumes between 65 and 85 watts and a standard consumer GPU consumes between 200 and 500 watts, whereas a typical MCU (microcontroller) consumes only milliwatts, roughly a thousand times less power. This is part of what makes MCUs excellent for running TinyML workloads, allowing them to run on a small battery for months or even years.
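To make this concrete, here is a minimal, hypothetical sketch of how a model might be shrunk to fit those constraints using TensorFlow’s TFLite converter with full-integer quantization. The model architecture, input shapes, and calibration data below are illustrative placeholders, not taken from any real project.

```python
# A sketch (assumed example) of converting a tiny Keras model for an MCU.
import numpy as np
import tensorflow as tf

# A tiny, keyword-spotting-style classifier over small feature maps.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 10, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])
# model.fit(...) would normally happen here, typically in the cloud or on a workstation.

# Representative samples let the converter calibrate full-integer quantization.
def representative_data():
    for _ in range(100):
        yield [np.random.rand(1, 49, 10, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
print(f"Quantized model size: {len(tflite_model)} bytes")  # far below a 1MB flash budget
```

Full-integer quantization trades a little accuracy for a model that is roughly four times smaller and uses integer-only math, which is exactly what small MCUs handle well.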
Small can be used anywhere
Well, you can carry your phone or tablet anywhere, and they can run full-fledged applications much like your laptop or PC. Edge devices take that portability even further: some of the most common uses of EdgeAI are found in critical places like factories, the healthcare sector, the agriculture industry, and a ton more! This makes edge devices far easier to deploy anywhere than a traditional data centre.
The Big Question
So here comes the final point, and the topic of this blog post: is the Cloud going to fly away, or is it going to join hands with the IoT ecosystem?
From my point of view, the Cloud will accompany the AIoT industry. People will still use the Cloud to build new ML models and then deploy them onto smaller Edge devices. There will be immense collaboration between the teams managing cloud instances, ML engineers, and IoT engineers.
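To illustrate that hand-off, here is one hedged sketch of the last step of such a pipeline: embedding a converted model file into MCU firmware as a C array, the same result the xxd -i command would give. The file names and symbol names are assumptions for the example, not part of any standard.

```python
# Embed a quantized .tflite file (e.g. from the sketch above) into firmware as a C array.
with open("model.tflite", "rb") as f:  # placeholder file name
    model_bytes = f.read()

lines = ["#include <stddef.h>", "", "const unsigned char g_model[] = {"]
for i in range(0, len(model_bytes), 12):
    chunk = ", ".join(f"0x{b:02x}" for b in model_bytes[i:i + 12])
    lines.append(f"  {chunk},")
lines.append("};")
lines.append(f"const size_t g_model_len = {len(model_bytes)};")

with open("model_data.cc", "w") as f:
    f.write("\n".join(lines) + "\n")
# The IoT engineer then compiles model_data.cc into the firmware and runs
# inference with a library such as TensorFlow Lite for Microcontrollers.
```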
We will soon get to see how all three of these ecosystems merge together, helping humans evolve alongside ever more interconnected technology, for sustainability and for good.
LF AI & Data Resources
- Learn about membership opportunities
- Explore the interactive landscape
- Check out our technical projects
- Join us at upcoming events
- Read the latest announcements on the blog
- Subscribe to the mailing lists
- Follow us on Twitter or LinkedIn
- Access other resources on LF AI & Data’s GitHub or Wiki