Meta releases AI model that can identify items within images

The Segment Anything Model can identify objects even in cases where it has not encountered those items in its training

Facebook owner Meta published an artificial intelligence model on Wednesday that can pick out individual objects from within an image, along with a dataset of image annotations that it said was the largest ever of its kind.

The company's research division said in a blog post that its Segment Anything Model, or SAM, could identify objects in images and videos even in cases where it had not encountered those items in its training.

With SAM, users can select objects by clicking on them or by writing text prompts. In one demonstration, typing the word "cat" prompted the tool to draw boxes around each of several cats in a photo.
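
For readers who want to experiment, the rough sketch below shows how a single click prompt might be passed to the released model through Meta's open-source segment-anything Python package. The checkpoint filename, image path and click coordinates are illustrative placeholders, and the sketch uses a point prompt rather than the text prompts shown in Meta's demo.

```python
# Minimal sketch of point-prompt segmentation with Meta's released
# "segment-anything" package. The checkpoint file, image path and click
# coordinates below are placeholders, not values from the article.
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load a SAM checkpoint downloaded from Meta's release (placeholder filename).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Read an image and hand it to the predictor (it expects an RGB array).
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground click at pixel (x=500, y=375) serves as the prompt.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),  # 1 marks a foreground point
    multimask_output=True,       # return several candidate masks
)
print(masks.shape, scores)  # e.g. (3, H, W) boolean masks with confidence scores
```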

Big tech companies have been trumpeting their artificial intelligence breakthroughs since Microsoft-backed OpenAI's ChatGPT chatbot became a sensation in the fall, triggering a wave of investments and a race to dominate the space.

Meta has teased several features that deploy the type of generative AI popularized by ChatGPT, which creates brand-new content rather than simply identifying or categorizing data, but the company has not yet released such a product.

Examples include a tool that spins up surrealist videos from text prompts and another that generates children's book illustrations from prose.

Chief Executive Mark Zuckerberg has said that incorporating such generative AI "creative aids" into Meta's apps is a priority this year.

Meta already uses technology similar to SAM internally for activities such as tagging photos, moderating prohibited content and determining which posts to recommend to Facebook and Instagram users.

The company said SAM's release would broaden access to that type of technology.

The SAM model and dataset will be available for download under a non-commercial license. Users uploading their own images to an accompanying prototype likewise must agree to use it only for research purposes.
