Sam Altman Trust Questions Raised as AI Power Grows
Former OpenAI board members and AI researchers are questioning whether CEO Sam Altman can be trusted to make decisions about artificial intelligence that could affect humanity's future, arguing he holds too much power over technology that poses existential risks.

A growing number of AI experts and former OpenAI insiders are questioning whether Sam Altman, the CEO of ChatGPT-maker OpenAI, should have so much control over artificial intelligence development.
Helen Toner, a former OpenAI board member, and other critics argue that Altman cannot be trusted with decisions about humanity's future, even as his role places him at the center of AI development that could reshape society.
AI researcher Gary Marcus went further, writing that he "cannot see a future where individuals like him have the ability to make decisions about existential threats to our society." Marcus argues this reflects a bigger problem: too few people have too much power over technology that affects everyone.
OpenAI has become one of the most influential AI companies in the world since launching ChatGPT in 2022. The company's decisions about AI safety, capabilities, and access could determine whether artificial intelligence helps or harms society.
The criticism comes as AI systems become more powerful and integrated into daily life, from answering questions to writing code to making business decisions.
AI tools like ChatGPT are already changing how people work, learn, and communicate. If one person controls the direction of this technology without meaningful oversight, the consequences for jobs, privacy, and safety could be difficult to predict or reverse.
Watch for more scrutiny of AI company leadership and calls for government oversight of artificial intelligence development.