OpenAI Reassigns Aleksander Madry to AI Safety Role Amidst Senatorial Concerns

Written by: Farwa Mehmood
Posted: 25-07-2024

This is an AI-generated image created with Midjourney by Molly-Anna MaQuirl

OpenAI CEO Sam Altman has revamped the company's teams by reassigning Aleksander Madry, a top AI safety executive, to a pioneering role focused on AI reasoning. The strategic move reflects OpenAI's commitment to addressing emerging risks in the artificial intelligence era.

Madry’s previous and new roles

Previously, Madry led Preparedness, the team responsible for tracking, forecasting, evaluating, and helping protect against catastrophic risks associated with cutting-edge AI models. He is now working on a new research project, with executives Joaquin Quinonero Candela and Lilian Weng taking over oversight of the Preparedness team. Madry will, however, continue to contribute to core AI safety work in his new role.

According to MIT's website, Aleksander Madry also serves as a co-lead faculty member of the MIT AI Policy Forum and director of MIT's Center for Deployable Machine Learning, roles from which he is currently on leave.

OpenAI’s response to senatorial concerns 

The decision to reorganize OpenAI's teams came just before a group of Democratic senators sent a letter to Altman with concerns and questions about how OpenAI addresses emerging safety challenges and cybersecurity threats. OpenAI did not immediately respond to requests for comment, but the lawmakers have asked for specific information about its safety practices by August 13.

Addressing AI safety challenges 

The increasing capabilities of AI chatbots have heightened safety concerns, prompting Madry's position change. The adjustment aligns with OpenAI's commitment to improving safety and governance in response to emerging problems.

Microsoft's exit from OpenAI's board

Earlier this month, Microsoft gave up its observer seat on OpenAI's board, expressing confidence in the company's restructured board and its strong emphasis on artificial intelligence safety and governance. Madry's reassignment fits within those broader safety and governance efforts. Even so, a group of current and former employees recently published an open letter raising concerns about the lack of industry oversight needed to ensure the responsible development of artificial intelligence technologies.
