19 Nov 2019

TECH MATTERS

The Ethical Path to Singularity - Governance of Artificial Intelligence; an Eye on the Digital World

In the Tech Matters Series, we collaborate with leading corporate personalities from the global tech ecosystem to consider the latest in technology law and disputes, trending challenges faced by technology companies, and other technology-related issues. For this piece, we consider cross-border perspectives on the governance of Artificial Intelligence with Mr Paul Yee, Legal Counsel for SenseTime. SenseTime is one of the world’s leading Artificial Intelligence companies, described by the South China Morning Post as the world’s most valuable AI unicorn. The views expressed herein are Mr Shaun Leong’s and Mr Paul Yee’s own and are not made on behalf of SenseTime.

The rise of Artificial Intelligence (AI)

Data is the basic building block of the digital economy. In recent years, the exponential growth in data volume and computing power has led to a rise in data-driven technologies such as Artificial Intelligence (“AI”).

From the lauded technological achievements of SenseTime and Google’s DeepMind in the area of AI, to the growing presence of, and reliance on, virtual assistants like Apple’s Siri and Amazon’s Alexa, AI is fast becoming an integral part of our lives. AI has many visible benefits: organisations can use it to provide new goods and services, boost productivity and enhance competitiveness, ultimately leading to economic growth and a better quality of life. However, as with any new technology, AI also introduces new ethical, legal and governance challenges. Recent advances in AI-enabled technologies have prompted a wave of responses globally by governments and tech companies alike to tackle these emerging ethical and legal issues.

The Model Framework in Singapore

In January 2019, the Personal Data Protection Commission (“PDPC”) introduced the first edition of A Proposed Model Artificial Intelligence Governance Framework (“Model Framework”) with guidelines on how AI can be ethically and responsibly used. The Model Framework is the first of its kind in Asia and seeks to provide greater certainty to industry players and promote the adoption of AI while ensuring that regulatory imperatives are met.

Definition of AI:

The Model Framework defines AI as “a set of technologies that seek to simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning. AI technologies rely on AI algorithms to generate models. The most appropriate model(s) is/are selected and deployed in a production system.”

Some examples of AI include:

  • SenseTime’s facial recognition technologies
  • Google’s DeepMind
  • Apple’s Siri
  • Amazon’s Alexa

2 Core Principles:

The Model Framework is based on 2 overarching guiding principles that deepen trust in AI and understanding of the use of AI technologies:

  • Decisions made by or with the help of AI should be explainable, transparent and fair to consumers. This will build consumer trust and confidence in the use of AI.
      • “Explainable”: It is crucial for automated algorithmic decisions, and the data that drives such decisions, to be explained to end-users in simple, non-technical terms.
      • “Transparent”: There should be transparency at every stage of the AI value chain, to build trust in the entire AI ecosystem. Consumers should be kept informed of how AI technology is applied in decisions affecting them.
      • “Fair”: AI algorithms and models embedded in decision-making systems should incorporate fairness at their core.
  • AI solutions should be human-centric. As AI is used to amplify human capabilities, the protection of the interests of human beings, including their well-being and safety, should be the primary consideration in the design, development and deployment of AI.

4 key areas:

The Model Framework provides guidance on measures promoting the responsible use of AI that organisations should adopt in 4 key areas:

  • Internal Governance Structures and Measures: Organisations should adapt or implement new internal governance structures and measures to ensure robust oversight of the organisation’s use of AI. For example, risks associated with the use of AI can be managed within the enterprise risk management structure. Ethical considerations can be introduced as corporate values. Organisations can consider delineating clear roles and responsibilities for the ethical deployment of AI and having a system of risk management and internal controls.
  • Determining AI Decision-Making Model: Organisations should weigh their commercial objectives for using AI against the risks of using AI, and then select a decision-making model with the appropriate level of human oversight to achieve those objectives. For example, a “human-in-the-loop” model could be most suitable for medical diagnoses: a doctor may use AI to identify possible diagnoses for a medical condition but will ultimately make the final diagnosis (a minimal sketch of this pattern appears after this list).
  • Operations Management: To ensure the effectiveness of an AI solution, relevant departments within an organisation must work together to establish good data accountability practices. Such practices could include keeping a data provenance record to understand and archive the lineage of data (see the second sketch below), and/or minimising inherent bias in AI. For example, in facial recognition technologies, there may be a selection/omission bias if a dataset that includes only Asian faces is used for facial recognition in populations that include non-Asians.
  • Customer Relationship Management: The Model Framework also suggests some strategies for organisations to communicate their use of AI to consumers and customers, in order to build customer trust and relationships.
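
To make the “human-in-the-loop” model described above more concrete, here is a minimal illustrative sketch in Python. It is not drawn from the Model Framework itself; the names (Suggestion, suggest_diagnoses, human_in_the_loop_diagnosis) and the example data are hypothetical.

    # Illustrative sketch only: a "human-in-the-loop" decision-making model.
    # All names and data are hypothetical, not taken from the Model Framework.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Suggestion:
        diagnosis: str
        confidence: float  # model confidence, between 0 and 1

    def suggest_diagnoses(symptoms: List[str]) -> List[Suggestion]:
        # Stand-in for a real AI model returning ranked candidate diagnoses;
        # a production system would call a trained model here.
        return [Suggestion("condition A", 0.72), Suggestion("condition B", 0.21)]

    def human_in_the_loop_diagnosis(
        symptoms: List[str],
        clinician_decision: Callable[[List[Suggestion]], str],
    ) -> str:
        suggestions = suggest_diagnoses(symptoms)
        # The AI only assists: the clinician reviews the ranked suggestions
        # and retains sole authority over the final diagnosis.
        return clinician_decision(suggestions)

    # Example: the doctor reviews the AI's suggestions and concurs with the top one.
    final_diagnosis = human_in_the_loop_diagnosis(
        ["fever", "cough"],
        clinician_decision=lambda s: s[0].diagnosis,
    )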
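
A data provenance record, similarly, can be as simple as a structured lineage entry archived alongside each dataset. The sketch below is again purely illustrative: the Model Framework does not prescribe any particular format, and every field name here is an assumption.

    # Illustrative sketch only: a simple data provenance (lineage) record.
    # Field names are hypothetical and not prescribed by the Model Framework.
    import json
    from datetime import datetime, timezone

    def provenance_record(dataset_name, source, collected_by, transformations):
        """Return a simple lineage entry for a dataset."""
        return {
            "dataset": dataset_name,
            "source": source,                    # where the data came from
            "collected_by": collected_by,        # team accountable for collection
            "transformations": transformations,  # cleaning/labelling steps applied
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        }

    record = provenance_record(
        dataset_name="face_images_v2",
        source="vendor X, licensed 2019",
        collected_by="data engineering team",
        transformations=["deduplicated", "balanced across ethnic groups"],
    )
    print(json.dumps(record, indent=2))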

There is a dynamic and evolving global discussion on the ethics of AI amidst rapid technological advances and the increased integration of AI into business models. In this age, policy makers and regulators must keep pace with the evolution of AI technologies and their attendant ethical and governance issues.

The Model Framework is Singapore’s contribution to the global discussion on the ethics of AI. Singapore presently tops the Government Artificial Intelligence Readiness Index 2019 (“GAIR Index 2019”), ranking first worldwide for government readiness to adopt AI.

A Cross-Border Look at Developing AI Guidelines and Frameworks

China

The Beijing AI Principles, issued by the Beijing Academy of Artificial Intelligence in May 2019, set out various principles relating to the research and development, use, and governance of AI.

Some of the key principles include the following:

  • The R&D of AI should serve human values and the overall interests of humankind.
  • AI R&D should take ethical design approaches, including but not limited to making the system as fair as possible, reducing possible discrimination and biases, improving its transparency, explainability, and predictability, and making the system more traceable, auditable and accountable.
  • Use of AI should ensure that stakeholders of AI systems give sufficient informed consent regarding the impact of the system on their rights and interests.
  • An inclusive attitude should be taken towards the potential impact of AI on human employment.

Hong Kong

The Privacy Commissioner for Personal Data (“PCPD”) in Hong Kong issued an Ethical Accountability Framework in October 2018 (the “Framework”). The Framework expressly links the issue of data ethics to AI.

The Framework includes a series of ‘Enhanced Elements’ which call on organisations to, amongst other things:

  • Define data-stewardship values
  • Use an “ethics by design” process to translate these values into their data analytics and data-use design processes so as to benefit the end user
  • Be transparent about their processes

The Framework further recommends the 3 Hong Kong Data Stewardship Values of ‘Respectful, Beneficial and Fair’, which were developed with industry consultees, along with two assessment models for use by stakeholders.

The Hong Kong Monetary Authority has since encouraged ‘authorised institutions’ to adopt the Framework in respect of personal data in the context of fintech development.

Japan

On 28 July 2017, Japan published the Draft AI R&D Guidelines for International Discussions in preparation for the Conference toward AI Network Society (the “Guidelines”). The Guidelines are non-binding soft law that seeks to promote the benefits and reduce the risks of AI. They centre on 5 core philosophies, including a commitment to a human-centred society and to periodic review of the Guidelines as needed to keep pace with advances in AI technologies and AI utilisation.

The Guidelines further outline 9 key principles for the sound development of AI networking, the mitigation of risks associated with AI systems, and the improvement of user uptake of AI:

  • Collaboration: Developers should pay attention to the interconnectivity and interoperability of AI systems.
  • Transparency: Developers should pay attention to the verifiability of inputs/outputs of AI systems and their judgments.
  • Controllability: Developers should pay attention to the controllability of AI systems.
  • Safety: Developers should take into consideration that AI systems will not harm the life, body, or property of users or third parties through actuators or other devices.
  • Security: Developers should pay attention to the security of AI systems.
  • Privacy: Developers should take into consideration that AI systems will not infringe the privacy of users or third parties.
  • Ethics: Developers should respect human dignity and individual autonomy in the research and development of AI systems.
  • User assistance: Developers should consider that AI systems will support users and increase users’ opportunities for choice where appropriate.
  • Accountability: Developers should make efforts to fulfil their accountability to stakeholders, including users of AI systems.

Australia

The Australian Government published an AI Ethics Framework in 2019 to identify key principles and measures that can be used to optimise results from the use of AI and maintain the wellbeing of Australian citizens (the “Ethics Framework”).

The Ethics Framework espouses the following core principles for AI:

  • Generates net-benefits
  • Do no harm
  • Regulatory and legal compliance
  • Privacy protection
  • Fairness
  • Transparency & Explainability
  • Contestability
  • Accountability

UK

The UK published its Data Ethics Framework on 30 August 2018 to guide the design of appropriate data use in government and the wider public sector. The Data Ethics Framework centres on several core principles, including:

  • Starting with a clear user need and public benefit
  • Using data proportionate to the user need
  • Understanding the limitations of data
  • Embedding data use responsibly

The UK has also published a collection of guidelines on how to build and use AI in the public sector (the “Guideline Collection”). Amongst other things, the Guideline Collection covers how to implement AI ethically, fairly and safely. One example is the Guide on Understanding AI Ethics and Safety published on 10 June 2019, which centres on the actionable principles of fairness, accountability, sustainability and transparency.

Conclusion

Almost eight decades ago, Isaac Asimov famously laid down the Three Laws of Robotics: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. While these laws were once firmly rooted in the realm of science fiction, they are becoming increasingly relevant in the real world with the advent of AI. One must not be misled, however: it is not really AI that needs to be governed so much as humanity’s use of AI. If the Singularity is indeed to arrive in our lifetime, the need for ethical regulation is all the more imminent.

For further information, contact:

Paul Yee

Legal Counsel, SenseTime

Shaun Leong

Partner, Eversheds Harry Elias

ShaunLeong@eversheds-harryelias.com

+65 6361 9369

Chua Ting Fang

Legal Associate, Eversheds Harry Elias

TingFangChua@eversheds-harryelias.com

+65 6361 9808

For more information, please contact our Business Development Manager, Ricky Soetikno at rickysoetikno@eversheds-harryelias.com