(Jeremy Vale coauthored this post.)
Cellphone manufacturers are struggling to differentiate themselves. In ads from the late 2000s, new phones were sleeker and faster, had more memory, and might even sport a touchscreen. Fast-forward to today, and only the almighty camera receives attention. Apple’s acquisition of Xnor.ai is the latest move in the battle for phone photo supremacy, and AI-on-the-edge is the weapon of choice.
What Is Xnor.ai, And Why Does Apple Want It?
Xnor.ai’s technology enables you to run deep learning models on edge devices, which, in English, means you can run software that extracts insights from photos and videos on the kind of low-power processors you have in your smartphone. Why would you want this? Privacy and bandwidth, for starters. Wouldn’t you like to be able to search your photos (i.e., have them autotagged) without them having to be uploaded to a server in the cloud? But it also enables a host of photo editing features on your phone. These deep learning models make it possible to autosegment your photos and videos, which allows you to, for example, easily put yourself against an entirely different background or apply a filter to just your friend (and, just maybe, create deepfakes).
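The company’s name hints at one trick for squeezing deep learning onto low-power chips: in the XNOR-Net line of research associated with Xnor.ai’s founders, weights and activations are binarized to ±1, so an expensive floating-point dot product collapses into a bitwise XNOR followed by a popcount. A minimal sketch of that idea (an illustration, not Xnor.ai’s actual implementation):

```python
import numpy as np

def binarize(x):
    """Sign-binarize a float vector to +1/-1."""
    return np.where(x >= 0, 1, -1)

def xnor_dot(a_bits, b_bits):
    """Dot product of two +/-1 vectors via XNOR + popcount.

    Encode +1 as bit 1 and -1 as bit 0; then XNOR of two bits is 1
    exactly when the values agree. The dot product is simply
    (#agreements) - (#disagreements).
    """
    n = len(a_bits)
    agreements = int(np.sum(a_bits == b_bits))  # popcount of the XNOR
    return 2 * agreements - n

a = binarize(np.array([0.7, -1.2, 0.1, -0.4]))
b = binarize(np.array([0.3, 0.8, -0.5, -0.9]))

# The cheap bitwise version matches the ordinary dot product.
assert xnor_dot(a, b) == int(np.dot(a, b))
```

On real hardware the agreement count runs as a single popcount instruction over packed bits, which is a big part of why such models fit the power budget of a phone.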
Is This Just About Smartphones?
No, we are just getting started on the applications for deep learning on edge devices. From toys that engage with the people who use them and Roombas that recognize furniture to smarter security cameras and retail and manufacturing applications, there are countless new, valuable uses. When it comes to smart speakers, Apple could use Xnor.ai to make, for example, HomePod speakers that recognize users visually and support more precise vision-based gestures (e.g., for adjusting volume) — and could even do this while allaying many people’s privacy concerns. Alas, it probably won’t: New photo-related capabilities will likely show up in iOS devices eventually, but Apple has rarely had the stamina to break out beyond its core product offerings.