
Offline ready: Microsoft debuts Phi-3 Mini AI model




Shouvik Das 3 min read 23 Apr 2024, 08:53 PM IST

Sebastien Bubeck, senior vice-president, Microsoft GenAI

Summary

Microsoft's new family of AI models is narrower but purpose-built, and can even be used offline

NEW DELHI: Microsoft Corp.'s new family of AI models, unveiled on Tuesday, is aimed at devices such as smartphones and other consumer-facing applications, including situations where users are offline.

The new AI models are meant "for resource constrained environments and offline environments where local inference may be needed", said Sebastien Bubeck, senior vice-president of Microsoft Generative AI, in an interview with Mint.

With the first of the new AI models, called Phi-3 Mini, Microsoft is targeting devices such as smartphones, Bubeck said. Microsoft introduced its new AI models less than a week after rival Meta Platforms unveiled its own new generation of small AI models.

The launch of the Phi-3 Mini comes amid an influx of small AI models that consume fewer resources to run on consumer-end devices, thereby making them viable for common use cases such as photo and video editing, creative illustrations, note-taking, voice transcription and more.

Also read: AI advisory: Why investors and Big Tech are upset

On 18 April, Mark Zuckerberg unveiled Llama 3, whose smallest model has 8 billion parameters.

Microsoft's Phi-3 Mini has 3.8 billion parameters. Citing internal benchmarks and tests, Bubeck said the Phi-3 model can outperform competing AI models twice its size.

For now, the smallest model on Microsoft's platform will be available to enterprises through its own Azure cloud platform as well as third-party sources. Bubeck did not specify whether the models will become available in any consumer-facing interfaces yet.

"We do a lot of work to make our models available to consumers, but we cannot divulge any details on this just yet," he said.

The case for small AI models

OpenAI's generative pre-trained transformer, the GPT-3 AI model, which popularized the generative AI, or GenAI, industry and set off a race for AI adoption around the world, had 175 billion parameters.

The latest version, GPT-4, is reported to have nearly 3 trillion data points in its training database.

Such large models, however, are extremely difficult to run locally on devices. Even on the cloud, running such models is expensive, making them commercially unviable for enterprises and consumers alike.

In September, Mint reported that 2024 would be the year when large tech vendors would begin offering smaller AI models that run locally on devices to bring monetizable generative AI services to more users, a demand Microsoft now hopes to tap with the Phi-3 Mini.

"We are looking to make it easier for developers to deploy AI models," Bubeck said. "With Azure AI, we're bringing a selection of high-performing open and frontier models to enterprise customers to help them create responsible, cost-efficient, multimodal generative AI solutions that benefit people and organizations."

Also read: Inside the race to build desi GPTs

However, the model is not quite ready for use with India-specific regional languages. "Phi-3 Mini was predominantly trained and optimized for English. Its capabilities in other languages are limited, meaning it could understand but will not be as fluent as English," Bubeck said.

It is for this reason that the central government, on 7 March, notified the India AI Mission. The ₹10,372-crore AI mission seeks to develop indigenous AI models that are natively trained in local languages and can be deployed across a range of consumer-facing public platforms. 

Microsoft, meanwhile, will likely find an audience for its Phi-3 Mini model among developers looking to build applications that run natively on smartphones and can generate content even without internet connectivity.
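For developers, such offline use could look roughly like the sketch below, which assumes the model is pulled from a third-party catalogue such as Hugging Face using the open-source transformers library; the model identifier and settings shown are illustrative assumptions, not Microsoft's documented setup. After a one-time download, generation runs locally with no internet connection.

    # Minimal sketch: download a small model once while online, then generate text locally.
    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

    model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed Hugging Face identifier

    tokenizer = AutoTokenizer.from_pretrained(model_id)    # cached locally after first download
    model = AutoModelForCausalLM.from_pretrained(model_id)  # subsequent loads need no network

    generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
    result = generator("Summarize: the meeting has moved to 3pm on Friday.", max_new_tokens=60)
    print(result[0]["generated_text"])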

Also read: American AI bill: Is it a boon or bane for global innovation?

While such platforms have over time raised questions regarding usage of copyrighted data, Bubeck said the training of the Phi-3 series of AI models used "a variety of data sources, including publicly available information, in a manner consistent with copyright and intellectual property laws… to responsibly scale AI."

However, new laws may change how such use is regulated. Earlier this month, US congressman Adam Schiff introduced a Bill that seeks to compensate creators for their work being used by Big Tech firms in training AI models. The Bill also seeks to penalize and hold tech firms accountable for failing to do so.
