Categories: Gadgets360

Hugging Face Introduces Open-Source SmolVLM Vision Language Model Focused on Efficiency

Hugging Face, the artificial intelligence (AI) and machine learning (ML) platform, introduced a new vision-focused AI model last week. Dubbed SmolVLM (where VLM is an acronym for vision language model), it is a compact model focused on efficiency. The company claims that, due to its smaller size and high efficiency, it can be useful for enterprises and AI enthusiasts who want AI capabilities without investing heavily in infrastructure. Hugging Face has also open-sourced the SmolVLM vision model under the Apache 2.0 licence for both personal and commercial usage.

Hugging Face Introduces SmolVLM

In a blog post, Hugging Face detailed the new open-source vision model. The company called the AI model “state-of-the-art” for its efficient usage of memory and fast inference. Highlighting the usefulness of a small vision model, the company noted the recent trend of AI firms scaling down models to make them more efficient and cost-effective.

[Image: Small vision model ecosystem. Photo Credit: Hugging Face]

The SmolVLM family has three AI model variants, each with two billion parameters. The first is SmolVLM-Base, which is the standard model. Apart from this, SmolVLM-Synthetic is a fine-tuned variant trained on synthetic data (data generated by AI or a computer), and SmolVLM-Instruct is the instruction-tuned variant that can be used to build end-user-centric applications.

Coming to technical details, the vision model can operate with just 5.02GB of GPU RAM, which is significantly lower than Qwen2-VL 2B's requirement of 13.7GB and InternVL2 2B's 10.52GB. As a result, Hugging Face claims that the AI model can run on-device on a laptop.

SmolVLM can accept a sequence of text and images in any order and analyse them to generate responses to user queries. It encodes each 384 x 384 pixel image patch into 81 visual data tokens. The company claimed that this enables the AI to encode text prompts and a single image in 1,200 tokens, as opposed to the 16,000 tokens required by Qwen2-VL.
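Taken at face value, these figures make the prompt budget easy to estimate. The sketch below uses only the numbers quoted above; the constant reflects the stated 81 visual tokens per patch, and the helper function is purely illustrative, not part of any SmolVLM API:

```python
# Illustrative arithmetic based on Hugging Face's published SmolVLM figures.
# TOKENS_PER_PATCH is the claimed 81 visual tokens per 384 x 384 image patch.
TOKENS_PER_PATCH = 81

def estimated_prompt_tokens(text_tokens: int, num_patches: int) -> int:
    """Estimate total prompt size: text tokens plus visual tokens per patch.

    Hypothetical helper for back-of-the-envelope budgeting only.
    """
    return text_tokens + num_patches * TOKENS_PER_PATCH

# A single-patch image contributes 81 tokens, so a text prompt of a little
# over 1,100 tokens plus one image lands near the ~1,200-token figure the
# company cites, versus the ~16,000 tokens it attributes to Qwen2-VL.
print(estimated_prompt_tokens(text_tokens=1119, num_patches=1))  # 1200
```

On these assumptions, the per-image cost scales linearly with the number of patches, which is why the small fixed cost per patch matters for running on modest hardware.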

With these specifications, Hugging Face highlights that SmolVLM can be easily used by smaller enterprises and AI enthusiasts, and deployed on local systems without requiring a major upgrade to the tech stack. Enterprises will also be able to run the AI model for text- and image-based inference without incurring significant costs.
