AI Regulation and the Quest for Safe Innovation
The world of artificial intelligence is evolving at an unprecedented pace, and as it does, governments and tech giants are grappling with the need to strike a balance between innovation and safety. Amid this global transformation, a new wave of AI regulations is taking shape. This article delves into the dynamics of AI regulation, focusing on the competition between tech powerhouses Google and Microsoft, while also exploring the United Kingdom's groundbreaking "AI Safety Summit." As 2023 draws to a close, the global race to harness AI's potential intensifies, and this article sheds light on the challenges and opportunities in the quest for responsible AI innovation.
The Global Shift Towards AI Regulation
On October 30th, the Group of Seven (G7) nations reached a consensus on guiding principles and a "voluntary" code of conduct for AI developers to abide by. This week, the United Kingdom is hosting a global summit on AI governance at Bletchley Park, where U.S. Vice President Kamala Harris is scheduled to speak. The United Nations (UN) announced last week the creation of a new advisory body to examine AI governance. The Biden-Harris Administration has also been concentrating on AI safety, securing "voluntary commitments" from the leading AI developers. However, those commitments were always meant to be a prelude to an executive order, which is what is being announced today.
The order specifically states that developers of the "most powerful AI systems" must share the results of their safety testing and related information with the US government. As AI capabilities grow, so will the ramifications for safety and security. Invoking the Defense Production Act of 1950, the order particularly targets foundation models that could endanger public health, economic security, or national security. It may be considered the first real step toward protecting people from powerful AI.
Returning to the UK: on Wednesday and Thursday at Bletchley Park, the United Kingdom is holding the "AI Safety Summit," which it describes as the first of its kind. That high-level ambition is reflected in the attendee list, which is anticipated to include prominent researchers in the field, captains of industry, and senior government officials. (The most recently confirmed names include Olaf Scholz, President Biden, Justin Trudeau, and Elon Musk.) "We will draw even more of the new jobs and investment that will come from this new wave of technology if we establish the United Kingdom as a global leader in safe AI," Sunak said.
Debunking Myths and Misconceptions about AI Risks
The claim that AI poses an "existential risk" has arguably been exaggerated, possibly even deliberately, to divert attention from more pressing AI concerns. Professor Frank Kelly of the University of Cambridge's Mathematics of Systems department noted: “A false impression is not new. However, that's one of the areas where we believe there may be short- and medium-term hazards associated with AI”. In his speech last week, Sunak stated, "At the moment, we don't have a shared understanding of the risks that we face." The summit may at least attempt to build that shared understanding. Still, for the UK the gathering may serve an economic purpose, cementing its position as an AI leader, as much as a safety one.
The Tech Titans: Google and Microsoft's AI Rivalry
Microsoft CEO Satya Nadella has made clear that the company feels positively about artificial intelligence (AI); it embraces it. In the corporation's annual report, his letter to shareholders extols the virtues of AI. As Nadella writes in that letter:
The coming generation of AI will transform every software category and every business, including our own. Microsoft, founded 48 years ago, remains a significant firm today because we have consistently adapted to shifting technological paradigms, from PC/Server to Web/Internet to Cloud/Mobile. Once again, we are leading this new era.
By luck or foresight, Microsoft backed OpenAI, the industry leader in natural-language artificial intelligence. What if the roles were reversed and Microsoft had been left out in the cold by a fortunate Google partnership with OpenAI? Speaking of which, Google has invested $2 billion in Anthropic, an OpenAI competitor. The funding arrangement involves $500 million upfront and up to $1.5 billion later, depending on what, if any, timing or conditions are met, according to people familiar with the matter cited by The Wall Street Journal.
“We've hardly been operational for more than 2.5 years. We have raised an impressive $1.5 billion throughout that time. Even though we have a considerably lesser team, we have managed to compete," Anthropic CEO and co-founder Dario Amodei remarked. In addition, in Amazon's quarterly report to the US Securities and Exchange Commission, the online retailer said it had invested $1.25 billion in an Anthropic note that can be converted into equity. Such a move, of course, serves Amazon's own strategic interests.
By the end of 2023, we are in the middle of an AI race not only between big tech companies like Microsoft and Google but between countries as well. With the "AI Safety Summit," the UK hopes to become the leading AI business centre in Europe and even to challenge the USA. Still, will they unite against China, given that Zhipu AI, a Chinese developer of foundation models, declared it had raised a total of 2.5 billion yuan ($340 million) in funding so far this year? With regulations not yet slowing AI development, Google and Microsoft will spend the rest of 2023 doing their best to make as much progress as possible before heavier regulation arrives in 2024.