Qualcomm Unveils On-Device Generative AI Features for Android Smartphones at MWC 2024

Qualcomm has showcased a range of new generative artificial intelligence (AI) features for Android smartphones at the Mobile World Congress (MWC) 2024 event. These features are powered by Snapdragon and Qualcomm platforms and run entirely on the device. Besides unveiling a dedicated large language model (LLM) for multimodal responses and an image generation tool, the company also made more than 75 AI models available for developers to build task-specific apps.

In a post, Qualcomm announced all the AI features it revealed at MWC. A major highlight is that, unlike most modern AI services such as ChatGPT, Gemini, and Copilot, which process information on remote servers, Qualcomm’s AI models run entirely on the device. Features and apps built with these models can therefore be personalised for individual users, and because data never has to leave the handset, privacy and reliability concerns are reduced. For this purpose, the chipmaker has made more than 75 open-source AI models, including Whisper, ControlNet, Stable Diffusion, and Baichuan 7B, available to developers through Qualcomm AI Hub, GitHub, and Hugging Face.

The company says these AI models also require less computational power and cost less to build apps on, since they are optimised for its platforms. Their small size contributes to this as well: each of the models is compact and built for a particular job. So, while users will not get a one-stop-shop chatbot, the models offer ample use cases for niche tasks such as image editing or transcription.
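For instance, Whisper, one of the open-source models in the collection, handles speech-to-text. A minimal sketch of that kind of task-specific use, assuming the reference Hugging Face checkpoint rather than Qualcomm’s device-optimised build, and a placeholder audio file name, would look like this:

```python
# Local transcription with the open-source Whisper model via the
# transformers library (the reference checkpoint, not Qualcomm's
# device-optimised build). "meeting_audio.wav" is a placeholder.
from transformers import pipeline

transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-small")
result = transcriber("meeting_audio.wav")
print(result["text"])
```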

To speed up app development with these models, Qualcomm has built multiple automation steps into its AI library. “The AI model library automatically handles model translation from source framework to popular runtimes and works directly with the Qualcomm AI Engine direct SDK, then applies hardware-aware optimizations,” it stated.
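Qualcomm AI Hub exposes this workflow through a Python client, qai_hub. The sketch below follows the client’s published usage pattern: trace a PyTorch model, then submit it for framework translation and hardware-aware compilation against a target handset. The specific device name and model are illustrative assumptions.

```python
# Sketch of the Qualcomm AI Hub flow: trace a PyTorch model locally,
# then submit it for framework translation and hardware-aware compilation.
# The device name below is an assumption; AI Hub lists supported devices.
import torch
import torchvision
import qai_hub as hub

torch_model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
traced_model = torch.jit.trace(torch_model, torch.rand(1, 3, 224, 224))

compile_job = hub.submit_compile_job(
    model=traced_model,
    device=hub.Device("Samsung Galaxy S23"),
    input_specs=dict(image=(1, 3, 224, 224)),
)
```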

Apart from the small AI models, the American semiconductor company also unveiled LLM tools. These are currently in the research phase and were only demonstrated at the MWC event. The first is Large Language and Vision Assistant (LLaVA), a multimodal LLM with more than seven billion parameters. Qualcomm said it can accept multiple types of data input, including text and images, and hold multi-turn conversations with an AI assistant about an image.
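Qualcomm’s on-device LLaVA demo is not publicly available, but the open-source LLaVA weights can be run through the standard transformers library. A minimal sketch, assuming the community llava-hf/llava-1.5-7b-hf checkpoint and a placeholder image file, shows the text-plus-image prompting pattern described above:

```python
# Sketch of LLaVA-style multimodal prompting using the open-source
# llava-hf checkpoint (an assumption; Qualcomm's on-device variant is
# a research demo). "photo.jpg" is a placeholder file name.
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

image = Image.open("photo.jpg")
prompt = "USER: <image>\nWhat is happening in this picture? ASSISTANT:"
inputs = processor(text=prompt, images=image, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```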

Another tool demonstrated was Low Rank Adaptation (LoRA). Demoed on an Android smartphone, it can generate AI-powered images using Stable Diffusion. LoRA is not an LLM itself; rather, it is a fine-tuning technique that reduces the number of trainable parameters in AI models, making them more efficient and easier to scale. Beyond image generation, Qualcomm claimed it can also be used to customise AI models for tailored personal assistants, improved language translation, and more.
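LoRA’s parameter saving is easy to see with Hugging Face’s peft library, which implements the same general technique. This is an illustration, not Qualcomm’s on-device pipeline; the base model, rank, and target module are arbitrary example choices.

```python
# Illustration of LoRA's parameter reduction using Hugging Face's peft
# library (the general technique, not Qualcomm's on-device pipeline).
# Rank r=8 and the gpt2 attention module name are example choices.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], lora_dropout=0.05)
lora_model = get_peft_model(base_model, config)

# Prints trainable vs total parameters: typically well under 1% trainable.
lora_model.print_trainable_parameters()
```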

