
Musk and Trump, EU AI Act


US President-elect Donald Trump and Elon Musk watch the launch of the sixth test flight of the SpaceX Starship rocket in Brownsville, Texas on November 19, 2024.

Brandon Bell | Via Reuters

The US political landscape is set to undergo some changes in 2025 — and those changes will have some big implications for the regulation of artificial intelligence.

President-elect Donald Trump will be inaugurated on January 20. He will be joined at the White House by a number of top business advisers — including Elon Musk and Vivek Ramaswamy — who are expected to influence policy thinking around emerging technologies such as AI and cryptocurrencies.

On the other side of the Atlantic, a tale of two jurisdictions has emerged, with the UK and the European Union diverging in their regulatory thinking. While the EU has taken a tougher line with the Silicon Valley giants behind the most powerful artificial intelligence systems, Britain has adopted a more light-touch approach.

In 2025, the state of AI regulation globally could be in for a major overhaul. CNBC takes a look at some of the key developments to watch — from the evolution of the EU’s landmark AI Act to what the Trump administration might do for the U.S.

Musk’s influence on US politics

Elon Musk walks on Capitol Hill on the day of a meeting with Senate Republican Leader-elect John Thune (R-SD), in Washington, U.S., on December 5, 2024.

Benoit Tessier | Reuters

Although artificial intelligence was not an issue that featured heavily in Trump’s election campaign, it is expected to be one of the key sectors to benefit under the incoming US administration.

First, Trump appointed Musk, the CEO of the electric car maker Tesla, to co-lead his “Department of Government Efficiency” with Ramaswamy, an American biotech entrepreneur who dropped out of the 2024 presidential race to back Trump.

Matt Calkins, CEO of Appian, told CNBC that Trump’s close relationship with Musk could put the US in a good position when it comes to AI, citing the billionaire’s experience as a co-founder of OpenAI and CEO of xAI, his own artificial intelligence lab, as positive indicators.

“We finally have one person in the US administration who actually knows about AI and has an opinion about it,” Calkins said in an interview last month. Musk has been one of Trump’s most prominent supporters in the business community, even appearing at some of his campaign rallies.

There is no confirmation yet of what Trump has planned in terms of possible presidential directives or executive orders. But Calkins thinks it’s likely Musk will seek to propose guardrails to ensure the development of artificial intelligence doesn’t threaten civilization — a risk he has warned about many times in the past.

“He’s unquestionably hesitant to allow artificial intelligence to cause catastrophic human outcomes — he’s definitely concerned about that, he’s talked about it long before he took a political position,” Calkins told CNBC.

There is currently no comprehensive federal legislation on artificial intelligence in the US. Instead, there is a patchwork of regulatory frameworks at the state and local levels, with numerous AI bills introduced in 45 states plus Washington DC, Puerto Rico and the US Virgin Islands.

EU AI Act

The European Union is so far the only jurisdiction globally to introduce comprehensive rules for artificial intelligence with its AI Act.

Jaque Silva | Nurphoto | Getty Images

The European Union has so far been the only jurisdiction globally to move forward with comprehensive legal rules for the artificial intelligence industry. Earlier this year, the Artificial Intelligence Act — the first regulatory framework for artificial intelligence — officially entered into force.

The law has not yet fully taken effect, but it is already causing tension among major US technology companies, which are concerned that some aspects of the regulation are too strict and could stifle innovation.

In December, the EU’s AI Office, the newly created body overseeing models under the AI Act, published a second draft of the code of practice for general-purpose AI (GPAI) models, which applies to systems like OpenAI’s GPT family of large language models, or LLMs.

The second draft included exemptions for providers of certain open-source AI models. Such models are typically made freely available to the public so that developers can build their own customized versions. It also included a requirement that developers of GPAI models posing “systemic” risks undergo rigorous risk assessments.

The Computer and Communications Industry Association — whose members include Amazon, Google and Meta — warned that the draft code “contains measures that go far beyond the agreed scope of the Act, such as far-reaching measures to protect copyright.”

The AI Office was not immediately available for comment when contacted by CNBC.

It is worth noting that the EU AI Act is still a long way from being fully implemented.

As Shelley McKinley, chief legal officer of the popular code repository platform GitHub, told CNBC in November, “the next phase of work has begun, which may mean there’s more ahead of us than behind us at this point.”

For example, the first provisions of the Act will enter into force in February. These provisions cover “high-risk” AI applications such as remote biometric identification, loan decision-making and educational scoring. A third draft of the code on GPAI models is scheduled for publication in the same month.

European tech leaders are worried about the risk that punitive EU measures against US tech companies could provoke a backlash from Trump, which in turn could lead the bloc to soften its approach.

Take antitrust regulation, for example. The EU has been an active enforcer, taking action to curb the dominance of US tech giants — but that’s something that could result in a backlash from Trump, according to Andy Yen, CEO of Swiss VPN company Proton.

“[Trump’s] view is that he probably wants to regulate his own technology companies,” Yen told CNBC in an interview at the Web Summit technology conference in Lisbon, Portugal, in November. “He doesn’t want Europe to interfere.”

UK copyright review

British Prime Minister Keir Starmer gives an interview to the media while attending the 79th United Nations General Assembly at the United Nations Headquarters in New York, USA, on September 25, 2024.

Leon Neal | Via Reuters

One country to watch is the United Kingdom. Britain has previously shied away from introducing legal obligations for AI model makers over fears that new legislation could be too restrictive.

However, Keir Starmer’s government has said it plans to legislate for artificial intelligence, although details are slim for now. The general expectation is that the UK will take a more principles-based approach to AI regulation, as opposed to the EU’s risk-based framework.

Last month, the government gave its first major indication of where regulation is headed, announcing a consultation on measures to regulate the use of copyrighted content for training AI models. Copyright is a particularly big issue for generative AI and LLMs.

Most LLMs use public data from the open web to train their AI models. But this often includes examples of artwork and other copyrighted material. Artists and publishers like The New York Times allege that these systems are unfairly scraping their valuable content without consent to generate their outputs.

To address this issue, the UK government is considering an exception to copyright law for AI model training, while allowing rights holders to opt out of having their works used for training purposes.

Appian’s Calkins said the UK could end up as a “global leader” on the issue of copyright infringement by AI models, adding that the country is not “susceptible to the same massive lobbying attack by domestic AI leaders as is the case in the US.”

US-China relations a possible point of tension

U.S. President Donald Trump, right, and Xi Jinping, Chinese President, walk past members of the People’s Liberation Army (PLA) during a welcoming ceremony outside the Great Hall of the People in Beijing, China, Thursday, Nov. 9, 2017.

Qilai Shen | Bloomberg | Getty Images

Finally, as world governments seek to regulate rapidly growing artificial intelligence systems, there is a risk that geopolitical tensions between the US and China could escalate under Trump.

In his first term as president, Trump introduced a number of tough policy measures on China, including the decision to add Huawei to a trade blacklist barring it from doing business with US technology suppliers. He also launched a bid to ban TikTok, which is owned by the Chinese company ByteDance, in the US — although he has since softened his stance on TikTok.

China is racing to beat the US for dominance in AI. At the same time, the US has taken steps to restrict China’s access to key technologies, mainly chips like those designed by Nvidia, which are needed to train more advanced AI models. China has responded by attempting to build its own domestic chip industry.

Technologists worry that the geopolitical rift between the US and China over artificial intelligence could result in other risks, such as the possibility that one of the two could develop a form of artificial intelligence smarter than humans.

Max Tegmark, founder of the non-profit Future of Life Institute, believes that in the future the US and China could create a form of artificial intelligence that can improve itself and design new systems without human oversight, potentially forcing the governments of both countries to individually come up with rules on AI safety.

“My optimistic way forward is for the US and China to unilaterally impose national security standards to prevent their own companies from harming and building uncontrolled AGI, not to please rival superpowers, but just to protect themselves,” Tegmark told CNBC in an interview in November.

Governments are already trying to work together to figure out how to create regulations and frameworks around artificial intelligence. In 2023, Britain hosted a global AI safety summit, attended by both the US and Chinese administrations, to discuss potential guardrails around the technology.

– CNBC’s Arjun Kharpal contributed to this report


