From cars and TVs to lightbulbs and doorbells, many of the objects in everyday life have ‘smart’ functionality because manufacturers have built chips into them.

But what if you could also run machine learning models on something as small as a golf ball dimple? That’s the reality being enabled by TinyML, a broad movement to run machine learning algorithms on embedded devices, that is, hardware with extremely low power requirements.

Heavy hitters such as Google, Qualcomm, and ARM recognise TinyML’s potential to transform the way we think about machine learning. It subverts the premise that ML is inherently power-hungry and resource-intensive, requiring swathes of cloud-hosted processing power to run anything remotely useful.
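One reason TinyML models can fit in kilobytes of microcontroller memory is post-training quantization: replacing 32-bit floating-point weights with 8-bit integers, roughly quartering model size. The sketch below illustrates the idea in plain Python; the function names and the simple affine scheme are illustrative, not the API of any particular TinyML framework.

```python
# Illustrative sketch of affine 8-bit quantization, a technique commonly
# used in TinyML to shrink models for memory-constrained devices.

def quantize(weights, num_bits=8):
    """Map float weights onto signed integers in [-2^(b-1), 2^(b-1)-1]."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(weights), max(weights)
    # Scale maps the float range onto the integer range; guard against
    # a zero range when all weights are identical.
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    quantized = [
        max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights
    ]
    return quantized, scale, zero_point

def dequantize(quantized, scale, zero_point):
    """Recover approximate float weights from the integer encoding."""
    return [(q - zero_point) * scale for q in quantized]

weights = [-0.42, 0.0, 0.13, 0.9]
q, scale, zp = quantize(weights)
approx = dequantize(q, scale, zp)
# Each recovered weight differs from the original by at most half a scale step.
```

Each 8-bit integer costs a quarter of the storage of a 32-bit float, at the price of a small, bounded rounding error per weight.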

Read the full article on CIO.com.

Join the tinyML community group.