Ambient Computing: Google Closes the Gap at Google I/O 2022
At Google I/O this year, Google announced that 1 billion Android devices were newly activated in 2021, bringing the total to just over 3 billion active devices. For comparison, Apple has only 1.8 billion active devices in the market. But Android has always led in volume, so this is not news. By opening their mobile OS to other hardware manufacturers, Google ensured widespread adoption but has suffered from wildly fragmented and uneven user experiences, a fate Apple has guarded against by restricting iOS to run only on Apple devices. Announcements at I/O this year indicate that Google is decidedly changing course.
The race for parity with Apple was the overarching theme at this year's annual developer conference, as most of the new devices, services, and features Google announced have already existed in the Apple ecosystem for some time.
The big hardware announcements this year were a new tablet, a new watch, and new wireless earbuds with noise canceling. While watches and tablets running the Android OS have existed for several years, this is the first time these devices will be manufactured by Google. Hardware manufacturers have historically taken great liberties in their implementations of Android, which at best led to inconsistent user experiences and at worst to bloated UIs with a circus tent of adware, privacy, and security issues.
Apple’s walled garden strategy, while criticized for locking users in, has given those same users the invaluable benefit of everything just working together out of the box. This interoperability between various devices like phones, watches, cars, and televisions is the foundation for what is being called ambient computing. Put simply, ambient computing is the ability to seamlessly switch between devices without losing the context of what you were doing, and it is the new battleground for acquiring and retaining mobile customers.
On the software front coming out of Google I/O, the larger story was still about catching up with Apple, with announcements like Google Wallet and a custom UI for the new Pixel tablet that allows side-by-side, windowed apps. Yet Android continues to differentiate itself as an experience you can personalize. With the introduction last year of Material You, a design system and philosophy built on top of Material Design that seeks to make the Android UI more expressive and emotionally responsive, Google continues striving to make the experience of using their platform more personal. This year, further progress on this journey was announced: the ability to choose from preset color palettes and apply a custom theme to your device, including all app icons, not just the icons for Android system apps.
In the past year, Google has been thinking a lot about color, and not just as it applies to buttons and icons. Their partnership with esteemed Harvard sociology professor Dr. Ellis Monk, whose skin tone scale is used to train Google's computer vision A.I. platform, continues to advance the cause of true representation and inclusion for all people regardless of the color of their skin. This means cameras that take truer portraits of people and make-up searches that return photos and videos featuring people of color, not just white women.
It is here, in the underlying platforms and the machine learning models that run on them, that Google shines. Their powerful computer vision, mapping, and predictive modeling capabilities, coupled with an increasing commitment to building not just engaging but life-saving digital experiences for everyone, are bearing fruit. From reducing carbon emissions by a half-million metric tons since last year by offering eco-friendly directions in Google Maps to broadcasting early warnings for earthquakes and flooding, Google is making a positive impact on the world.
There was perhaps no better example of Google's power to create life-changing digital experiences than the closing moments of Sundar Pichai's keynote address, when he unveiled a prototype of Google's mixed-reality glasses. The demonstration video shows real-time translations projected onto the lenses of the glasses during a natural conversation. The result is undeniably powerful and emotional, arguably the pinnacle of how technology can augment our lives.
In a few short months, Apple will likely introduce their smart glasses to the world. They will be beautiful and sleek, and they will be available in every Apple store. But without the powerful translation engine and other machine learning platforms Google has assembled over the course of the past decade, they will be just a novelty.
While nothing overtly splashy was unveiled at Google I/O this year, it was a watershed moment for the company. By finally offering a seamless, connected experience across native Android devices in this new ambient computing world where we transition from smart car to smart home, Google is poised to truly compete with Apple. With the powerful A.I. infrastructure they’ve built, Google could dominate the future of mobile computing.