
OpenAI Employees Say Company Is Neglecting Safety and Security Protocols: Report

OpenAI has been at the forefront of the artificial intelligence (AI) boom with its ChatGPT chatbot and advanced Large Language Models (LLMs), but the company’s safety record has sparked concerns. A new report claims that the AI firm rushed through and neglected safety and security protocols while developing new models. The report highlighted that the negligence occurred before OpenAI’s latest GPT-4 Omni (or GPT-4o) model was launched.

Some anonymous OpenAI employees recently signed an open letter expressing concerns about the lack of oversight in building AI systems. Notably, the AI firm has also created a new Safety and Security Committee, comprising select board members and directors, to evaluate and develop new protocols.

OpenAI Said to Be Neglecting Safety Protocols

However, three unnamed OpenAI employees told The Washington Post that the team felt pressured to speed through a new testing protocol designed to “prevent the AI system from causing catastrophic harm, to meet a May launch date set by OpenAI’s leaders.”

Notably, these protocols exist to ensure AI models do not provide harmful information, such as instructions for building chemical, biological, radiological, and nuclear (CBRN) weapons, or assist in carrying out cyberattacks.

Further, the report highlighted that a similar incident occurred before the launch of GPT-4o, which the company touted as its most advanced AI model. “They planned the launch after-party prior to knowing if it was safe to launch. We basically failed at the process,” the report quoted an unnamed OpenAI employee as saying.

This is not the first time OpenAI employees have flagged an apparent disregard for safety and security protocols at the company. Last month, several former and current staffers of OpenAI and Google DeepMind signed an open letter expressing concerns over the lack of oversight in building new AI systems that can pose major risks.

The letter called for government intervention and regulatory mechanisms, as well as strong whistleblower protections from employers. Two of the three so-called godfathers of AI, Geoffrey Hinton and Yoshua Bengio, endorsed the open letter.

In May, OpenAI announced the creation of a new Safety and Security Committee, tasked with evaluating and further developing the AI firm’s processes and safeguards on “critical safety and security decisions for OpenAI projects and operations.” The company also recently shared new guidelines for building a responsible and ethical AI model, dubbed Model Spec.
