AI technology poses new risks for enterprises in the form of human rights issues. Corporate ESG ratings will be affected by how well companies prepare AI utilization guidelines and explain their use to users.

On Dec. 13, 2018, the Cabinet Office of Japan held a meeting on basic principles for an AI-driven, human-centric society, releasing a draft proposal on artificial intelligence (AI) utilization.

The draft positions AI as a key technology that could help achieve the 17 Sustainable Development Goals (SDGs) and attain a sustainable world. It goes on to set out seven principles for implementation based on human-centric design, emphasizing that AI utilization must never infringe on human rights.

Rules for AI utilization are being formulated in many nations. The European Union is also drawing up AI utilization guidelines, while firms in the US are developing their own in-house rules. The Organization for Economic Co-operation and Development (OECD), hoping to develop a single common global ruleset, held an expert meeting in September 2018 and announced a mid-2019 target date for finalization.

The draft Cabinet proposal will be opened for public comment and is scheduled for formal adoption in fiscal 2018. It will then be presented at the G20 Summit in Osaka in June 2019, where it will be recommended as a main pillar of the international ruleset.

Google employees resign in protest

One of the reasons there is so much interest in formulating rules is fear that AI utilization could lead to human rights violations and privacy infringement.

In March 2018, Google drew criticism both internally and externally over its utilization of AI. The trouble began when the firm signed a contract with the US Department of Defense (DoD) to provide its AI technology for military drones. After US media reported on the deal, protest signatures were collected from the general public and from over 3,000 employees, and a number of Google employees resigned.

In response, Google CEO Sundar Pichai announced in June 2018 that the firm had developed guidelines for AI development and use, prohibiting the utilization of its AI technology in weaponry and espionage. The firm later announced that it would not renew the problematic contract with the DoD.

The use of AI in hiring, for example, has also been cited as a possible source of bias by gender or various cultural criteria.

Amazon had been developing AI for use in hiring, but cancelled the project in 2017. The automated system evaluated résumés but was found to favor men over women, and although attempts were made to improve the software, it proved impossible to dispel the suspicion that it remained biased.

AI also poses risks in how it handles the information it needs to make decisions. Misuse of personal information, in particular, would clearly be condemned as a privacy violation.

In April 2018, it was reported that Facebook had leaked the personal information of at least 87 million users, and that the information had been used in Donald Trump's 2016 presidential election campaign. CEO Mark Zuckerberg was forced to testify before the US Senate in public session, and the firm's stock dropped by about 15%. The leaked information reportedly included not only age and gender, but also individuals' Facebook likes, personal likes and dislikes, and activity data.

Professor Osamu Sudoh of the University of Tokyo's Graduate School of Interdisciplinary Information Studies, who chairs the Cabinet Office meeting, predicts that industry and corporate guidelines will be refined based on an internationally accepted set of basic rules for AI utilization.

Google CEO Sundar Pichai. After massive protests both internally and publicly over the firm's participation in an AI development project for the US DoD, Google announced guiding principles for AI utilization.

Management pressed to announce AI guidelines

Japanese corporations are also hurrying to develop their own in-house rules. Sony Corp., for example, announced the Sony Group AI Ethics Guidelines in September 2018, stressing respect for the rights of customers and other stakeholders regardless of race, culture, location, or religion.

In October 2018, NEC Corp., recognizing the possible effects of AI on human rights, established a Digital Trust Business Strategy Division staffed with people knowledgeable in human rights and related legal issues, with a mandate to build respect for human rights into the supply chain. An External Expert Council will also be formed, bringing together a range of stakeholders, including experts, non-profit organizations (NPOs), and consumers.

NEC has positioned safety, especially urban safety, as a key engine for growth, and plans to actively develop products utilizing AI-driven technologies such as facial recognition systems. The firm has said that, as a manufacturer, it bears the responsibility to explain to users, for example, what data its AI bases its decisions on. It is currently developing AI utilization guidelines, slated for disclosure before the end of fiscal 2018, and believes the development and application of AI-driven products and services will be a key to future growth.

The trend is also attracting the attention of ESG-sensitive investors. Mitsuo Wakameda, chief manager at NEC's Digital Trust Business Strategy Division, comments: "The era when management is questioned on AI usage is already here. I think AI utilization and information disclosure will also become part of ESG evaluations, as part of the social rating. NEC plans to use information disclosure to gain trust."

AI is already being used in a wide range of corporate activities, and the approach to protecting human rights in AI utilization is emerging as a new concern for ESG management.