The Downsides of Edge Computing for IoT
There is still a large vacuum of standards and processes in the IoT, especially in edge computing. Many vendors are working on processes and protocols here, but they lag years behind the infrastructure we take for granted when building on the open web.
Developing AI products that can be deployed to the edge requires capabilities not only in AI/ML, but also in hardware, software, networking, and of course security. Over time, this will become easier as projects such as LF Edge are more widely adopted. These standards will allow more tools to be developed in the open-source space, and companies can then use those tools to build powerful systems without engineers having to reinvent solutions on the networking and security side.
What about my existing IoT network?
Can you implement edge AI on your existing IoT network? It depends heavily on the existing infrastructure.
If your current architecture already includes programmable gateways, you may be able to deploy ML models to these nodes, provided their compute capabilities are sufficient. Chances are, however, that if the architecture wasn't designed from the ground up to support advanced edge computing applications, retrofitting it will be difficult.
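Deciding whether an existing gateway can host a model usually comes down to a quick resource audit. The sketch below is purely illustrative: the `Gateway` and `Model` classes, the resource figures, and the 1.5x RAM headroom factor are all assumptions for the example, not vendor specifications.

```python
# Illustrative sketch: can an existing IoT gateway host a given ML model?
# All field names and numbers here are hypothetical, not real device specs.
from dataclasses import dataclass


@dataclass
class Gateway:
    free_ram_mb: int        # RAM still available after existing workloads
    free_storage_mb: int    # flash/disk left for the model artifact
    has_accelerator: bool   # e.g. an onboard NPU or GPU


@dataclass
class Model:
    ram_mb: int             # peak memory needed during inference
    size_mb: int            # size of the model file on disk
    needs_accelerator: bool


def can_deploy(gw: Gateway, model: Model, headroom: float = 1.5) -> bool:
    """Conservatively require `headroom` times peak RAM to remain free."""
    if model.needs_accelerator and not gw.has_accelerator:
        return False
    return (gw.free_ram_mb >= model.ram_mb * headroom
            and gw.free_storage_mb >= model.size_mb)


# Example: a gateway with 256 MB free can take a 64 MB model (needs 96 MB
# with headroom), but not if the model requires a missing accelerator.
gateway = Gateway(free_ram_mb=256, free_storage_mb=512, has_accelerator=False)
tiny_model = Model(ram_mb=64, size_mb=20, needs_accelerator=False)
print(can_deploy(gateway, tiny_model))  # True
```

In practice this kind of check would be fed by real telemetry from the gateway fleet, but the shape of the decision is the same: measure what is free, compare against what the model needs, and keep a safety margin.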
What's the best way to get started with edge AI in IoT?
Start by considering the full spectrum of devices: the IoT devices or “things” aren’t the only new hardware you need to think about. A wide variety of components, including micro-datacenters and IoT gateways in the field, will become part of networked environments. Your edge infrastructure needs will depend on how much latency your system can tolerate, along with the complexity of the operations you need to perform on the data.
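The latency-tolerance trade-off described above can be sketched as a simple tier-selection rule. The tiers and their round-trip latencies below are rough illustrative assumptions (real numbers vary widely by network and deployment); the idea is just that each workload runs at the most capable tier whose latency still fits the budget.

```python
# Illustrative sketch: pick a processing tier for a workload based on the
# round-trip latency it can tolerate. Latency figures are assumed examples.

# Approximate round-trip latency per tier, in milliseconds (assumptions).
TIER_LATENCY_MS = {
    "on-device": 1,          # inference on the sensor/"thing" itself
    "gateway": 10,           # local IoT gateway on the same network
    "micro-datacenter": 30,  # nearby micro-datacenter at the network edge
    "cloud": 150,            # regional cloud datacenter
}


def choose_tier(max_latency_ms: float) -> str:
    """Return the most capable (furthest) tier that still meets the budget."""
    # Tiers ordered from most capable/furthest to least capable/closest.
    for tier in ("cloud", "micro-datacenter", "gateway", "on-device"):
        if TIER_LATENCY_MS[tier] <= max_latency_ms:
            return tier
    raise ValueError("latency budget is tighter than any available tier")


print(choose_tier(200))  # "cloud": the budget fits even a round trip to cloud
print(choose_tier(50))   # "micro-datacenter"
print(choose_tier(5))    # "on-device": only local inference fits
```

A real placement decision would also weigh compute complexity, bandwidth, and cost, but latency budget is usually the first filter applied.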
AI on the edge is just now starting to break out of research and into implementation. You can expect lots of exciting developments in this space over the next few years. To learn more about IoT development frameworks and processes, download our free guide here.