By the end of this year, airline passengers should be able to opt out of facial recognition scans at airport security checkpoints without facing delays or other consequences, under new federal requirements announced Thursday.
The ability to refuse biometric surveillance is one of several concrete safeguards the Biden administration is implementing across the U.S. government regarding the use of artificial intelligence technology. The move represents the first major step by officials to prevent potential abuse or discrimination as AI systems are rapidly adopted for a range of public services and decision-making.
The new AI oversight policies, detailed by Vice President Kamala Harris, also aim to indirectly shape practices in the private sector AI industry through the federal government’s formidable purchasing power.
Under the binding requirements taking effect December 1st, federal agencies using AI tools must verify that the systems do not infringe on the public’s rights or safety. Each agency must also publish an online list of the AI systems it uses, its reasons for using them, and risk assessments of those technologies.
The policies from the White House Office of Management and Budget (OMB) additionally direct agencies to appoint a chief AI officer to oversee AI adoption and use.
“Leaders across governments, civil society and the private sector have an ethical duty to ensure AI protects the public from harm while allowing everyone to benefit from the technology,” Harris told reporters Wednesday. She stated the administration aims for the new policies to serve as a global model.
The announcements arrive as the federal government rapidly embraces AI across agencies. Systems using machine learning already monitor volcanoes worldwide, track wildfires, count wildlife from drone imagery and much more, with hundreds of other use cases planned. Just last week, the Department of Homeland Security disclosed an expansion of AI programs for training immigration officers, securing infrastructure and investigating crimes.
Responsible development and deployment of AI can enhance public services and tackle major challenges like climate change and economic inequity, said OMB Director Shalanda Young. The government is hiring “at least” 100 AI experts by this summer to assist in implementation, she added.
“These requirements will drive greater transparency,” said Young, referencing agency reporting mandates. “AI presents risks, but also huge opportunities to improve how government operates and serves the American people.”
The policies follow swiftly on an array of recent White House actions attempting to grapple with AI’s dual potential as both a societal boon and, if unchecked, a danger. Last fall, President Biden signed a sweeping executive order that, among other directives, tasked the Commerce Department with developing watermarking standards to combat deceptive AI-generated content.
Previously, the administration secured voluntary commitments from leading AI firms to conduct outside safety testing of their models. The OMB guidelines themselves have been years in the making, with Congress first mandating such rules for agencies in 2020 legislation.
While voluntary commitments and executive actions can only go so far, the administration’s policies governing federal procurement and use of commercial AI systems will likely have an outsized influence across the tech industry given Washington’s immense buying power.
Officials said Thursday that OMB will take further regulatory action around federal AI contracting practices, soliciting public input on how to proceed. The administration also continues working with lawmakers considering comprehensive AI legislation to establish national guardrails, though those efforts have moved slowly on Capitol Hill.