
Apple Researchers Introduce Matrix3D, a Unified AI Model That Can Turn 2D Photos Into 3D Objects

Matrix3D can perform several photogrammetry subtasks, including pose estimation, depth prediction, and novel view synthesis.


Photo Credit: Reuters

Researchers said that Matrix3D was trained using the masked learning technique

Highlights
  • Matrix3D utilises a multimodal diffusion transformer (DiT)
  • The model was developed in partnership with Nanjing University and HKUST
  • It is an open-source model available for download on GitHub

Apple researchers released a new artificial intelligence (AI) model that can generate 3D views from multiple 2D images. The model, dubbed Matrix3D, was developed by the company's Machine Learning team in collaboration with Nanjing University and the Hong Kong University of Science and Technology (HKUST). The Cupertino-based tech giant has made the AI model available to the open community, and it can be downloaded via Apple's listing on GitHub. With Matrix3D, the researchers have unified the 3D generation pipeline to reduce the risk of errors accumulating across separate processing stages.

Apple's Matrix3D Innovates Multi-Task Photogrammetry

In a post, the tech giant detailed the research that went into the development of the Matrix3D AI model. While several 3D rendering models already exist, this one innovates on the existing space by unifying the pipeline used to create 3D views. Instead of relying on multiple models and components, a single model performs several photogrammetry subtasks, such as pose estimation, depth prediction, and novel view synthesis.

Notably, photogrammetry is the technique of obtaining accurate measurements and 3D information about physical objects and environments by analysing images. It is commonly used to create maps, 3D models, and measurements from 2D images taken from different angles.
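
To make the idea concrete, here is a toy example (all numbers invented, not from the paper) of one photogrammetry primitive: given a pixel, its depth, and pinhole-camera intrinsics, recover the corresponding 3D point in camera coordinates.

```python
# Toy sketch of pixel unprojection with a pinhole camera model.
# fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
def unproject(u, v, depth, fx, fy, cx, cy):
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return (x, y, depth)

# The principal point maps straight onto the optical axis.
point = unproject(u=320, v=240, depth=2.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(point)  # (0.0, 0.0, 2.0)
```

Repeating this for every pixel of a depth map, across images with known poses, is how 2D photos are fused into a 3D reconstruction.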

The researchers have also published a paper about the new model on the online preprint server arXiv. According to the researchers, Matrix3D is based on a multimodal diffusion transformer (DiT) architecture, which can integrate data across multiple modalities such as image data, camera parameters, and depth maps.
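
A hypothetical sketch (names invented, not Apple's API) of how a multimodal transformer can treat different data types as one input: each token is tagged with its modality, so a single model can attend across images, camera parameters, and depth maps in one sequence.

```python
# Toy illustration of unifying modalities into a single token sequence.
def tokenize(modality, values):
    # Tag each value with its modality so one model can mix them.
    return [(modality, v) for v in values]

image_tokens  = tokenize("image",  [0.1, 0.5, 0.9])  # toy patch embeddings
camera_tokens = tokenize("camera", [0.0, 1.0])       # toy pose parameters
depth_tokens  = tokenize("depth",  [2.3, 2.1, 1.8])  # toy depth patches

# One unified sequence replaces separate per-task models.
sequence = image_tokens + camera_tokens + depth_tokens
print(len(sequence))  # 8
```

In a real DiT each "value" would be a learned embedding vector, but the unification principle is the same.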

In the paper, the Apple researchers highlight that the model was trained using a masked learning strategy, in which parts of the input are hidden and the AI model is trained to predict the pixels that fill the gap.
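
The masking step can be illustrated with a minimal sketch (not Apple's code; token values are invented placeholders): hide a random subset of input tokens, then train a model to reconstruct what was hidden.

```python
import random

def mask_tokens(tokens, mask_ratio, rng):
    """Replace a random subset of tokens with a "MASK" placeholder."""
    n_mask = int(len(tokens) * mask_ratio)
    hidden = rng.sample(range(len(tokens)), n_mask)
    masked = list(tokens)
    for i in hidden:
        masked[i] = "MASK"
    return masked, hidden

tokens = list(range(10))  # stand-in for image patches
masked, hidden = mask_tokens(tokens, 0.4, random.Random(0))
# Training objective: predict tokens[i] for every index i in `hidden`.
print(masked.count("MASK"))  # 4
```

Because the supervision signal comes from the input itself, this style of training needs no manual labels.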

The researchers found that the model can generate an entire 3D object or scene view from just three images taken from different angles. While the dataset used to train the model was not disclosed, the model itself is available to download, modify, and redistribute under a permissive Apple licence via the company's GitHub listing.



Akash Dutta
Akash Dutta is a Senior Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, metaverse, and fediverse. In his free time, he can be seen supporting his favourite football club, Chelsea, watching movies and anime, and sharing passionate opinions on food.

© Copyright Red Pixels Ventures Limited 2025. All rights reserved.