
Epoch AI Launches FrontierMath AI Benchmark to Test Capabilities of AI Models

Epoch AI, a California-based research institute, launched a new artificial intelligence (AI) benchmark last week. Dubbed FrontierMath, the new AI benchmark tests large language models (LLMs) on their reasoning and mathematical problem-solving capabilities. The AI firm claims that existing math benchmarks are not very useful due to factors such as data contamination, which lets AI models achieve very high scores on them. Epoch AI claims that even the leading LLMs have scored less than two percent on the new benchmark.

Epoch AI Launches FrontierMath Benchmark

In a post on X (formerly known as Twitter), the AI firm explained that it collaborated with more than 60 mathematicians to create hundreds of original and unpublished math problems. Epoch AI claims that these questions would take even mathematicians hours to solve. The firm cited the limitations of existing benchmarks such as GSM8K and MATH, on which AI models generally score very highly, as the reason for developing the new benchmark.

The company claimed that the high scores achieved by LLMs are largely due to data contamination. This means the questions were somehow already part of the AI models' training data, allowing them to solve the questions easily.

FrontierMath addresses this by including problems that are unique and have not been published anywhere, mitigating the risk of data contamination. Further, the benchmark covers a wide range of questions, including computationally intensive problems in number theory, real analysis, and algebraic geometry, as well as topics such as Zermelo–Fraenkel set theory. The AI firm says all the questions are “guess proof”, meaning they cannot be solved accidentally without strong reasoning.

Epoch AI highlighted that to measure AI's aptitude, benchmarks should be built around creative problem-solving in which the AI has to sustain reasoning over multiple steps. Notably, many industry veterans believe that existing benchmarks are not sufficient to correctly measure how advanced an AI model is.

Responding to the new benchmark in a post, Noam Brown, an OpenAI researcher who worked on the company's o1 model, welcomed it and said, “I love seeing a new eval with such low pass rates for frontier models.”
