Meta Unveils V-JEPA 2: A New AI World Model for Understanding the Physical World

Edited by: Veronika Radoslavskaya

Paris, France - Meta has launched V-JEPA 2, an advanced AI "world model" designed to understand and predict the physical world.

This new open-source model can comprehend 3D environments and the movement of objects. It's a significant step beyond large language models, enabling AI to learn, plan, and make decisions more like humans.

V-JEPA 2 can recognize that a ball rolling off a table will fall. It reasons in a simplified "latent" space to understand how objects move and interact.

Meta's chief AI scientist, Yann LeCun, highlighted the difference between understanding language and understanding the physical world. He explained that a world model acts as an abstract digital twin of reality, allowing AI to predict the consequences of its actions.

Meta is investing heavily in AI, including a planned $14 billion investment in Scale AI, as it ramps up efforts to compete with other tech giants such as OpenAI and Google.

V-JEPA 2 is intended for applications such as delivery robots and self-driving cars, machines that need to understand their surroundings in real time in order to navigate.

Other companies are also developing world models. Google DeepMind is working on Genie, which can simulate games and 3D environments, and Fei-Fei Li has raised $230 million for a startup called World Labs, focused on large world models.

Sources

  • NBC Boston
