
MediaTek Announces Optimisation of Microsoft’s Phi-3.5 AI Models on Dimensity Chipsets

MediaTek announced on Monday that it has optimised several of its mobile platforms for Microsoft’s Phi-3.5 artificial intelligence (AI) models. The Phi-3.5 series of small language models (SLMs), comprising Phi-3.5 Mixture of Experts (MoE), Phi-3.5 Mini, and Phi-3.5 Vision, was released in August, with the open-source models made available on Hugging Face. Rather than typical conversational models, these are instruct models that require users to input specific instructions to get the desired output.
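For context, here is a minimal sketch of what instruct-style input looks like in practice, assuming the Hugging Face repo id microsoft/Phi-3.5-mini-instruct and the transformers library; this is an illustration, not part of MediaTek’s announcement:

```python
# Illustrative only: build a role-tagged instruction for an instruct-tuned SLM.
# Assumes the Hugging Face repo id "microsoft/Phi-3.5-mini-instruct".
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")

# Instruct models expect explicit instructions as role-tagged messages,
# rather than free-form text to be continued.
messages = [
    {"role": "system", "content": "You are a concise technical assistant."},
    {"role": "user", "content": "Summarise the benefits of on-device AI in two sentences."},
]

# The model's chat template turns the messages into the prompt string it expects.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```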

MediaTek Optimises Dimensity Chipsets for Phi-3.5 SLMs

In a blog post, MediaTek announced that its Dimensity 9400, Dimensity 9300, and Dimensity 8300 chipsets are now optimised for the Phi-3.5 AI models. As a result, these mobile platforms can efficiently run inference for on-device generative AI tasks using MediaTek’s neural processing units (NPUs).

Optimising a chipset for a specific AI model involves tailoring the hardware design, architecture, and operation of the chipset to efficiently support the processing power, memory access patterns, and data flow of that particular model. Once optimised, the model runs with lower latency, lower power consumption, and higher throughput.

MediaTek highlighted that its processors are optimised not only for Microsoft’s Phi-3.5 MoE but also for Phi-3.5 Mini, which offers multilingual support, and Phi-3.5 Vision, which comes with multi-frame image understanding and reasoning.

Notably, the Phi-3.5 MoE comprises 16 experts of 3.8 billion parameters each (roughly 60.8 billion in total), but only about 6.6 billion parameters are active when two experts are used, the typical configuration. Phi-3.5 Vision, meanwhile, features 4.2 billion parameters along with an image encoder, and Phi-3.5 Mini has 3.8 billion parameters.
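The parameter figures above can be sanity-checked with some simple arithmetic; the numbers below merely restate those quoted in this article:

```python
# Back-of-the-envelope check of the parameter counts quoted above.
experts = 16
params_per_expert_b = 3.8                 # billions of parameters per expert
total_params_b = experts * params_per_expert_b
print(f"Phi-3.5 MoE total parameters: ~{total_params_b:.1f}B")   # ~60.8B

# With only two experts routed per token, a fraction of the weights is used;
# Microsoft quotes roughly 6.6B active parameters for that configuration.
active_params_b = 6.6
print(f"Phi-3.5 MoE active parameters (2 experts): ~{active_params_b}B")
```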

On the performance front, Microsoft claimed that Phi-3.5 MoE outperformed both the Gemini 1.5 Flash and GPT-4o mini AI models on the SQuALITY benchmark, which tests readability and accuracy when summarising a block of text.

While developers can access Microsoft’s Phi-3.5 models directly via Hugging Face or the Azure AI Model Catalogue, MediaTek’s NeuroPilot SDK toolkit also offers access to these SLMs. The chipmaker stated that the latter will enable developers to build optimised on-device applications capable of generative AI inference using these models across the above-mentioned mobile platforms.
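For developers going the Hugging Face route, a minimal sketch of running Phi-3.5 Mini with the transformers text-generation pipeline is shown below; the model id, precision, and generation settings are assumptions for illustration, and this does not use MediaTek’s NeuroPilot SDK:

```python
# Illustrative sketch: run Phi-3.5 Mini locally via Hugging Face transformers.
# Assumes the repo id "microsoft/Phi-3.5-mini-instruct" and suitable hardware.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3.5-mini-instruct",
    torch_dtype=torch.bfloat16,   # assumed precision; adjust to the hardware
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain what a small language model is in one paragraph."},
]

# Recent transformers versions accept chat-style messages and apply the
# model's chat template internally before generating.
output = generator(messages, max_new_tokens=128, do_sample=False)
print(output[0]["generated_text"][-1]["content"])
```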
