The US government is turning to leading artificial intelligence (AI) companies for guidance on safeguarding critical infrastructure from potential AI-powered attacks. The move comes amid growing concern about the risks AI poses, as well as its potential to enhance national security. The CEOs of OpenAI, Google, and Microsoft are among the tech leaders enlisted for the federal AI safety panel.
The Department of Homeland Security (DHS) announced on Friday the formation of a panel comprising CEOs from prominent companies across various industries. Notable figures include Sundar Pichai of Google, Satya Nadella of Microsoft, and Sam Altman of OpenAI, alongside executives from defense contractors like Northrop Grumman and major air carrier Delta Air Lines.
Collaboration with Private Sector
The collaboration underscores the government’s recognition of the crucial role played by the private sector in addressing the challenges posed by AI. With no targeted national AI law in place, this joint effort aims to harness industry expertise to mitigate risks associated with AI while maximizing its potential benefits.
Focus on Critical Sectors
The panel will provide recommendations to key sectors such as telecommunications, pipeline operations, and electric utilities on the responsible use of AI. Additionally, it will assist in preparing these sectors for potential disruptions caused by AI-related threats.
Importance of AI Safety and Security
DHS Secretary Alejandro Mayorkas emphasized the transformative potential of AI while acknowledging the associated risks. He stressed the importance of adopting best practices and concrete actions to mitigate these risks and advance national interests.
Composition and Objectives of the AI Safety and Security Board
The AI Safety and Security Board, comprising 22 members, was established under President Joe Biden's 2023 executive order on AI. Its mandate is to strengthen security, resilience, and incident response wherever AI is used across critical infrastructure sectors.
Addressing Concerns about Misinformation
The emergence of deepfake technology, particularly in audio and video content, has raised concerns about misinformation and election security. Mayorkas highlighted the risks posed by adversarial nation-states exploiting AI for malicious purposes and emphasized efforts to counter such threats.
Looking Ahead
As the government continues to navigate the complexities of AI adoption and regulation, the collaboration with industry leaders signals a proactive approach to addressing emerging challenges. The board's insights and recommendations are expected to shape future policies and practices in AI governance.