Universities in China and elsewhere in Asia have belatedly begun joining international alliances to promote ethical practices in artificial intelligence, or AI, an area previously studied in university research centres only in a fragmented way.
Countries such as South Korea, Japan, China and Singapore are making huge investments in AI research and development, for example at the interface of AI and robotics, and in some areas are quickly narrowing the gap with the United States. But crucially, there are still no global guidelines and standards in place for the scientific investigation, design and use of AI and automated systems.
China’s universities in particular are turning out large numbers of researchers versed in AI. Whereas previously they would head for Silicon Valley in the US, many now prefer to remain in the country to work for home-grown tech giants such as Alibaba, Tencent and Baidu, firms which collect and use huge amounts of consumer data with few legal restrictions.
In July Chinese President Xi Jinping unveiled a national plan to build AI into a US$152.5 billion industry by 2030 and said the country was aiming for international dominance in the field.
“China’s pace of AI research and adoption is fast; it is possibly the market that adopts AI technologies the fastest, so there is a great deal of advanced research being done,” Pascale Fung, a professor in the department of electronic and computer engineering at the Hong Kong University of Science and Technology, or HKUST, told University World News.
“Our prime concern is to look at the ethical adoption of AI in terms of setting up standards. Do we need regulations, and if so, what kind? This conversation has not yet taken place in this region.
“There is no transparency regarding data flow. And there is no certification of AI safety,” she says.
Leading US technology firms Google, Facebook, Amazon, IBM and Microsoft last year set up an industry-led non-profit consortium, the ‘Partnership on AI to Benefit People and Society’, to develop ethical standards for researchers in AI in collaboration with academics and specialists in policy and ethics.
HKUST announced earlier this month that it was the first Asian university partner in the alliance. The previous absence of Asian involvement, academic or otherwise, is surprising given the fast pace of AI development in the region.
The international focus on AI ethics is “only beginning and it is a global effort but with very little involvement from Asian countries”, says Fung. “My role is to bring the biggest adopters of AI technology, namely the East Asian countries, to the table and to co-lead this effort.”
Researchers have also become concerned about regional initiatives, including in the European Union, to regulate AI systems, particularly autonomous robots, in order to establish accountability. The European Parliament, for instance, has put forward proposals to recognise robots as legal entities, for example in the case of driverless cars.
It was announced last week that a robot developed by a leading Chinese AI company, iFlytek, had passed the written test of China’s national medical licensing examination. Though iFlytek said its robot is not meant to replace doctors but to assist them, the development has brought the issue of AI ethics to the fore in a country with a huge shortage of doctors, particularly in rural areas.
“We advocate that AI should not be the one making life-and-death decisions. AI can advise doctors, who are the ones certified to practise medicine,” Fung says. “But to date these ideas have not been adopted internationally.”
“There have to be good practice guidelines and standards for how we use AI, for example in healthcare. Right now there are absolutely no guidelines. We are just playing it by ear,” says Fung. “If we don’t start working on this today, I am afraid there is going to be an enormous accident, and then the regulations will come and that will be a little too late.”
The World Economic Forum’s Global Risks Report 2017, which surveyed 745 leaders in business, government, academia and non-governmental and international organisations, including members of the Institute of Risk Management, identified AI and robotics as “the emerging technology with the greatest potential for adverse consequences over the coming decade”.
Fung believes Asian participation in setting ethical guidelines is essential if internationally acceptable guidelines are to be adopted within the region. “There are standards institutions around the world and they are international, but there has been very little involvement so far by East Asian countries, such as China,” she notes.
The principal work on international standards and ethical best practice for automated and intelligent systems is being carried out by the Institute of Electrical and Electronics Engineers, or IEEE.
“Our vision is to enable the technical and scientific community to take into consideration at least the principles of society, and this is not being done,” says Konstantinos Karachalios, managing director of the IEEE Standards Association.
The race to be first in creating AI systems “is the big temptation of our time: just do it before others do it”, Karachalios adds. The assumption is that whatever is being researched and created is good, and the prevailing view is “if there’s a problem with this latest project it’s not our problem, it’s the damn folks who use it”, Karachalios told University World News. “This isn’t right.”
The first version of the IEEE’s global standards, released last year, incorporated the views of over 150 experts in AI, law, ethics and policy. Nevertheless, it was seen as based mostly on Western principles. This is being rectified with a new version, to be published next month, based on feedback including from non-Western countries, particularly in Asia.
Cultural relevance is key to international adoption of ethical standards for the design of systems. Karachalios says the need is for ethical values to be embedded, “but we don’t state which values to embed”.
Sara Mattingly-Jordan, an assistant professor in the Centre for Public Administration and Policy at Virginia Tech in the US, who is collating the inputs and responses to the IEEE standards document, says AI ethics “is still very much an intellectual’s topic”, involving mainly university professors.
Within the AI industry, “right now we are relying upon people’s professional judgment and professional experience at an individual level of ethics. That’s what is controlling the system right now and it is pretty brittle.”
“The hazard of people working in small disaggregated groups with global reach in a vacuum is a serious potential threat,” she says. “If every individual state or every individual university attempts to release its own code of ethical data standards, how is anybody going to function as a vendor in that environment? It will create significant problems.”
But firms, including law firms, are starting to join the discussion, and the need to include the major Asian AI powerhouses (South Korea, Japan, Singapore and China) is also recognised. “It would be amazing if we could get China on board; nobody disputes that they are a significant player,” she says. “But that does not mean that we are demanding that China change its outlook.”
Experts say the Chinese authorities would balk at any principles or guidelines that challenge the supremacy of the state in controlling such technology, as well as anything that smacks of individual privacy rights that might infringe on the government’s authority over its own citizens.
“There is a significant interest from their [the Chinese] side to engage with the ethical aspects rather than the political,” says the IEEE’s Karachalios. “The political dimension is sensitive because in the end it is all about liberty, and liberty also has an ethical dimension. This might not be something that is appealing to them and we must respect it.
“We must still find a way to engage with each other and have a fruitful dialogue,” he says, pointing out that “our standards are not laws, they are peer-reviewed recommendations”.
If producers of AI systems “can demonstrate they can make something that is trustworthy and respects privacy, then perhaps people will preferentially choose it, even if it is more expensive because they spend more time and energy on those [ethical] aspects,” Karachalios says.
With its international AI ambitions, China definitely wants to be part of this process, says HKUST’s Fung. “On standards and regulations you can bet the Chinese don’t want to be left behind.”