Singapore Is Not Looking to Regulate A.I. Just Yet, Says the City-State's Authority

Singapore’s Marina Bay waterfront.

  • Singapore is not rushing to set AI regulation, even amid repeated calls for government intervention to address the technology's risks.
  • "We are currently not looking at regulating AI," Lee Wan Sie, director for trusted AI and data at Singapore's Infocomm Media Development Authority, told CNBC.
  • The Singapore government is making efforts to promote the responsible use of AI.
  • It is calling for companies to collaborate on the world's first AI testing toolkit — called AI Verify.

As governments deliberate on whether artificial intelligence poses risks or dangers and whether it needs regulating, Singapore is taking more of a wait-and-see approach.

"We are currently not looking at regulating AI," Lee Wan Sie, director for trusted AI and data at Singapore's Infocomm Media Development Authority, told CNBC. IMDA promotes and regulates Singapore's communication and media sectors.

The Singapore government is making efforts to promote the responsible use of AI.

It is calling for companies to collaborate on the world's first AI testing toolkit — called AI Verify — which enables users to conduct technical tests on their AI models and record process checks.

AI Verify was launched as a pilot project in 2022. Tech giant IBM and Singapore Airlines have already started pilot testing as part of the program.

Calls for regulation

In recent months, AI buzz has gathered pace after the chatbot ChatGPT went viral for its ability to generate humanlike responses to users' prompts. It reached 100 million users within two months of its launch.

However, there have been repeated calls globally for government intervention to address the potential risks of AI.

Tech leaders such as OpenAI's CEO Sam Altman and Tesla CEO Elon Musk have warned about the dangers of the technology.

"At this stage, it is quite clear that we want to be able to learn from the industry. We will learn how AI is being used before we decide if more needs to be done from a regulatory front," said Lee, adding that regulation may be introduced at a later stage.

"We recognize that as a small country, as the government, we may not have all the answers to this. So it's very important that we work closely with the industry, research organizations and other governments," said Lee.

Haniyeh Mahmoudian, an AI ethicist at DataRobot and an advisory member of the U.S. National AI Advisory Committee, said "it really benefits" both businesses and policymakers.

"The industry is more hands-on when it comes to AI. Sometimes when it comes to regulations, you see the gap between what the policymakers are thinking about AI versus what's actually happening in the business," said Mahmoudian.

"So having this type of collaboration specifically creating these types of toolkits has the input from the industry. It really benefits both sides," she added.

Google, Microsoft and IBM are among the tech giants that have already joined the AI Verify Foundation — a global open-source community set up to discuss AI standards and best practices, as well as to collaborate on AI governance.

"We at Microsoft applaud the Singapore government's leadership in this area," said Brad Smith, president and vice chair at Microsoft, in a press release.

"By creating practical resources like the AI governance testing framework and toolkit, Singapore is helping organizations build robust governance and testing processes," said Smith.

Collaborative approach

At the Asia Tech x Singapore summit in June, Singapore's Minister for Communications and Information Josephine Teo noted that while the government recognizes the potential risks of AI, it cannot promote the ethical use of AI on its own.

"The private sector with their expertise can participate meaningfully to achieve these goals with us," she said.

While there are "very real fears and concerns about AI's development," the country will need to actively steer AI toward beneficial uses and away from bad ones, said Teo. "This is core to how Singapore thinks about AI."

Meanwhile, some nations are quickly cracking down on AI.

The European Union became the first to set minimum standards with its Artificial Intelligence Act. On Wednesday, European Parliament members agreed to bring generative AI tools like ChatGPT under greater restrictions.

France's President Emmanuel Macron and his ministers have expressed a need for AI regulation. "I think we do need a regulation and all the players, even the U.S. players, agree with that," Macron told CNBC last week.

China has already developed draft rules designed to manage how companies develop generative AI products like ChatGPT.

Innovation in a safe environment

Singapore could act as a "steward" in the region, enabling innovation within a safe environment, said Stella Cramer, APAC head of international law firm Clifford Chance's tech group.

Clifford Chance works with regulators on guidelines and frameworks across a range of markets.

"There's just this consistent approach that we're seeing around openness and collaboration. Singapore is viewed as a jurisdiction that is a safe place to come and test and roll out your technology with the support of the regulators in a controlled environment," said Cramer.

The city-state has launched several pilot projects, such as the FinTech Regulatory Sandbox and a healthtech sandbox, for industry players to test their products in a live environment before going to market.

"These structured frameworks and testing toolkits will help guide AI governance policies to promote safe and trustworthy AI for businesses," said Cramer.

"AI Verify may potentially be useful for demonstration of compliance to certain requirements," said IMDA's Lee. "At the end, as a regulator, if I want to enforce [regulation], I must know how to do it."

Copyright CNBC