I’m eternalizing a tweet I made this morning — I think it has a fitting place in this archive.
"Meta ditched VR and pivoted to AI!"
I hear this assertion a lot, but it's off-target. VR and AI are symbiotic developments, and Meta knows that.
Given AI's well-understood impact, here is a very brief, AI-centric argument for why new (i.e. 3D) hardware is a necessity for AI advancement:
AI models (especially multimodal ones) can generate insights and understanding from perceiving our environments and the real world.
Insights, however, are only as valuable as the actions they enable, and as spatial beings, we have an innate drive to act on our perceptions. Current AI systems (for the most part) cannot act on the insights and understanding they generate, which means they are not fully intelligent systems (yet).
Flat-screen devices (i.e. status quo devices) are not capable of spatial action. Software is now ahead of consumer hardware. (E.g. your phone-based AI can tell you the dishes are clean, but it will never be able to empty your dishwasher.)
This means we need a new UI that can properly act on the AI's understanding. (While I think the attempts are misguided, this is the energy being channeled into "AI wearables" startups.)
On one hand, this need leads to robotics: systems that can build and execute action plans from the AI's perceived understanding.
On the other hand, you have AR/VR: an interface that enables humans to act directly on that same understanding.
I am asserting here that VR and robotics are opposite ends of the same spectrum: the UI for AI. (E.g. many robotics startups run their simulations in 3D engines, the same engines VR devs use to make games.)
As such, Meta's investment in AR/VR will be a valuable interface for it to capture value from its AI in the long run.
Tangentially: Meta has invested in AI research for many years now, research that is responsible for salient OSS contributions like PyTorch. So describing Meta's venture into AI as a pivot is ... not right.