OpenAI’s Superalignment team, charged with controlling the existential danger of a superhuman AI system, has reportedly been disbanded, according to Wired on Friday. The news comes just days ...
An open letter signed by former and current employees at OpenAI and other AI giants calls for whistleblower protections as ...
The announcement comes soon after a recent exodus of two senior OpenAI executives focused on AI safety and an open letter ...
A group of former and current OpenAI employees released a letter online expressing concerns about the effects and serious ...
OpenAI said on Friday that it has disbanded the team devoted to mitigating the long-term dangers of superintelligent artificial intelligence (AI). The company began dissolving the so-called “Superalignment” group ...
Signed by current and former employees of OpenAI, Google DeepMind and Anthropic, the open letter cautioned that “AI companies ...
On Monday, OpenAI announced the formation of a new "Safety and Security Committee" to oversee risk management for its projects and operations. The announcement comes as the company says it has ...
OpenAI's Superalignment team, responsible for developing ways to govern and steer "superintelligent" AI systems, had been promised 20% of the company's compute resources, according to a person from ...
OpenAI today announced the formation of a committee that will be tasked with ensuring its machine learning research is carried out safely. The Safety and Security Committee, as the panel is called ...