Bureaucrats shouldn't impose global AI policy at 'fancy, high-level' meetings, expert warns
The Biden administration's AI side deal is 'not good for the public,' expert warns
U.S. Secretary of State Antony Blinken’s announcement that he is working with European partners to outline a voluntary artificial intelligence (AI) conduct code has left some experts concerned about how the government plans to handle such delicate policies in the future.
"A lot of us believe that this should be done through legal institutions, through democratic institutions and not simply as a side agreement at a trade meeting between governments and industry," Marc Rotenberg, executive director at the Center for AI and Digital Policy, told Fox News Digital.
"I don't think that's good for the public," Rotenberg stressed. "I think the public has a right to expect that whatever these decisions will be for artificial intelligence, they'll be made through political institutions and not just at these fancy high-level meetings."
Blinken made the announcement after a meeting of the EU-US Trade and Technology Council (TTC) with European trade partners in Sweden. European Commission Vice President Margrethe Vestager said generative AI was "a complete game changer" and there needs to be "accountable artificial intelligence."
HOW TO GET A BETTER UNDERSTANDING OF ARTIFICIAL INTELLIGENCE WITH BLOGS, COURSES AND MORE
The effort appears to position the TTC to play "an important role" in establishing the codes, which "all like-minded countries" could then join.
Rotenberg said he found the voluntary conduct codes frustrating because the U.S. is already part of the Organization for Economic Co-operation and Development (OECD) AI principles, which the U.S. "led the effort on."
The U.S. even gathered support from countries around the world, including China, Russia and Brazil. The OECD principles established a governmental standard on AI in 2019, which served as the basis for the G-20 AI Principles established in the same year.
The OECD called for an inclusive platform on public policy on AI focused on three core ideas: multidisciplinary work examining the opportunities and challenges posed by current and future AI developments; evidence-based analysis of AI development to help create stronger methodologies; and a global partnership aligning the private, public, civic and academic sectors on AI policy.
BIDEN EDUCATION DEPARTMENT WORRIES AI IN THE CLASSROOM MIGHT BE USED TO SPY ON TEACHERS
Vestager said a preliminary draft of the new voluntary conduct code would be published within a matter of weeks. Officials will seek feedback from industry players and invite parties to sign up, and she promised "very, very soon a final proposal for industry to commit to voluntarily."
A State Department spokesperson told Fox News Digital that the OECD recommendation on AI "is a testament to how like-minded democracies can come together to chart a path forward for the responsible use of emerging technologies in line with our shared values" and called the OECD principles "a cornerstone of global discussions," but the spokesperson did not elaborate on why a new conduct code is being developed.
Rotenberg voiced concerns that policymakers and lawmakers "are not sufficiently familiar with what’s happened previously." As of 2021, Rotenberg said, he could count some 800 different codes of conduct for AI, with companies such as Google, Microsoft and other AI developers establishing their own internal conduct codes.
"I think it’s very important to build on the earlier commitments," Rotenberg said.
‘GODFATHER OF AI’ ISSUES STARK WARNING THAT IT WON'T BE LONG BEFORE TECH IS SMARTER THAN HUMANS
"We were focusing on establishing the necessary guardrails for artificial intelligence. There's widespread support. It's truly nonpartisan at this point, and it's also global. So, you would look to political leaders to say, 'You know, let's put in place the necessary legal framework, and let's create the institutions that are necessary to make sure it's followed.'
"If we're not moving in that direction, then you see we're not actually making progress," he argued. "That's the concern that comes out of the Trend in Technology Council meeting."
One of the issues around AI guardrail policy comes from the varying levels of development in different countries. Europe, for example, has an AI Act "near the finish line" after three years of work, according to Rotenberg, while China is "much farther along" in developing its own AI rules, which could produce frameworks that compete with the U.S.-led voluntary conduct code.
"Most U.S. policymakers see the competition with China in terms of innovation and market dominance. That competition is real, there's no doubt about it," Rotenberg said. "What China is also doing is put in place regulatory frameworks that they tend to extend to the Belt and Road Initiative and other countries where they're seeking to establish trade.
"That is not a code of conduct, by the way," he noted. "It's regulation for generative air. It's regulation for recommendation algorithms. It's a regulation for data protection. And it's understandable from a government's perspective that they would want regulations that are aligned with their national goals and industrial strategy.
"You can have conflicting principles, which is actually one of the things we're trying to avoid," he concluded. "We want coherent principles. So, if the OECD has managed to get 50 countries behind a good framework, we think that should be implemented."
The State Department did not respond to a Fox News Digital request for comment by time of publication.
Fox News Digital’s Danielle Wallace contributed to this report.