Ethical principles and artificial intelligence

In the 11th episode of the series Better cities: The contribution of digital technology, I will apply the ethical principles from episode 9 to the design and use of artificial intelligence.

Before doing so, I will briefly summarize the main features of artificial intelligence, such as big data, algorithms, deep learning, and machine learning. For those who want to know more: Radical Technologies by Adam Greenfield (2017) is a very readable introduction, also regarding technologies such as blockchain, augmented and virtual reality, the Internet of Things, and robotics, which will be discussed in upcoming episodes.

Artificial intelligence

Artificial intelligence has valuable applications but also gross forms of abuse. Valuable, for example, is its use in the layout of houses and neighborhoods with Spacemaker's AI technology, which takes into account ease of use, views, and sunlight, or the measurement of noise in the center of Genk with Nokia's Scene Analytics technology. Reprehensible is how police forces in the US discriminate against population groups with programs such as PredPol, and how the Dutch government acted in the so-called 'toeslagenaffaire', the childcare benefits scandal.

Algorithms
Thanks to artificial intelligence, a computer can recognize patterns independently. Recognizing patterns as such is nothing new; it has long been possible with computer programs written for that purpose. For example, to distinguish images of dogs and cats, a programmer created an 'if ... then' description of all relevant characteristics of dogs and cats that enabled a computer to tell pictures of the two species apart. The number of errors depended on the level of detail of the program. With more types of animals, photographed from different angles, writing such a program becomes very complicated. In that case, a computer can be trained to distinguish the relevant patterns itself; only then do we speak of artificial intelligence. People still play an important role: first in writing an instruction - an algorithm - and then in composing a training set, a selection of a large number of examples, for instance of animals labeled as dog or cat and, if necessary, lion, tiger, and so on. The computer then searches 'itself' for the associated characteristics. If there are still too many errors, new images are added.
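
To make the contrast concrete, here is a minimal sketch in Python. The features (ear pointedness, snout length) and the tiny training set are invented assumptions; the point is only to contrast a hand-written 'if ... then' rule with a classifier that finds the distinguishing characteristics itself.

```python
# A hand-written rule versus a trained classifier; features and data are
# illustrative assumptions, not a real dataset.
from sklearn.tree import DecisionTreeClassifier

# The classic approach: a programmer writes the 'if ... then' rule.
def classify_by_rule(ear_pointedness, snout_length_cm):
    if ear_pointedness > 0.7 and snout_length_cm < 5:
        return "cat"
    return "dog"

# The AI approach: supply labeled examples (the training set) and let the
# computer search 'itself' for the distinguishing characteristics.
training_set = [[0.9, 4.0], [0.8, 3.5], [0.3, 9.0], [0.4, 11.0]]
labels = ["cat", "cat", "dog", "dog"]
model = DecisionTreeClassifier().fit(training_set, labels)

print(classify_by_rule(0.85, 4.2))      # rule written by a person
print(model.predict([[0.85, 4.2]])[0])  # rule found by the computer
```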

Deep learning
The way in which the animals are depicted can vary endlessly, whereby it is no longer about their characteristics but about shadow, movement, the position of the camera or, in the case of moving images, the nature of the movement. The biggest challenge is to teach the computer to take these contextual characteristics into account as well. This is done by imitating neural networks: image recognition takes place, just as in our brains, in successive layers, ranging from the detection of simple lines, patterns, and colors to differences in sharpness. Because of this layering, we speak of 'deep learning'. This obviously involves large datasets and a lot of computing power, but it is also a labor-intensive process.
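
As an illustration of this layering, the sketch below defines a small convolutional network in Keras. The library choice, image size, and layer sizes are all assumptions for the sake of the example; the idea is that each successive layer can pick up increasingly abstract patterns.

```python
# A minimal sketch of the layered idea behind 'deep learning'.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),          # small color images
    layers.Conv2D(16, 3, activation="relu"),  # early layer: simple lines, edges
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),  # middle layer: textures, patterns
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),  # later layer: object parts
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),    # final choice: dog or cat
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()  # shows how the layers build on each other
```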

Unsupervised learning
Learning to apply algorithms under supervision produces reliable results, and the instructor can still explain the result after many iterations. But as situations become more complicated and several processes take place at the same time, supervised instruction is no longer feasible. Think, for example, of animals that attack each other, where the computer must predict which kinds of animals have the best chance of survival under which conditions. Or think of the patterns that a car's computer must be able to distinguish to drive safely: given their almost unlimited variation, supervised learning no longer works.

In the case of unsupervised learning, the computer is fed with data from many millions of realistic situations - in the case of cars, recordings of traffic situations and the way drivers reacted to them. Here we can rightly speak of 'big data' and 'machine learning', although these terms are often used more broadly. The car's computer 'learns', for example, how and when it must stay within its lane, when it can pass, how pedestrians, cyclists, and other 'objects' can be avoided, what traffic signs mean and what the corresponding action is. Teslas still pass all this data on to a data center, which distills patterns from it and regularly updates the 'autopilots' of the whole fleet. In the long run, every Tesla, anywhere in the world, should recognize every imaginable pattern, respond correctly, and thus guarantee the highest possible level of safety. This is apparently not yet the case, and Tesla's 'autopilot' may therefore not be used without a driver 'in control'. Nobody knows by what criteria a Tesla's algorithms work.
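
One of the techniques the term 'unsupervised learning' covers is clustering, in which a computer groups similar situations without any labels attached. The sketch below is purely illustrative - invented recordings of speed and following distance - and says nothing about Tesla's actual pipeline, which, as noted, is not public.

```python
# Clustering as a sketch of learning patterns from unlabeled data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Invented recordings: [speed in km/h, distance to the car in front in m].
city = rng.normal([30, 10], [5, 3], size=(300, 2))
highway = rng.normal([110, 40], [10, 8], size=(300, 2))
situations = np.vstack([city, highway])  # no labels attached

# The computer groups similar situations 'itself'.
clusters = KMeans(n_clusters=2, n_init=10).fit(situations)
print(clusters.cluster_centers_)  # two discovered driving regimes
```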

Unsupervised learning is also applied in the prediction of (tax) fraud, of the chance that certain people will 'make a mistake', or of the places where the risk of a crime is greatest at a certain moment - but also in the assessment of job applicants and the allocation of housing. For all these purposes, the value of artificial intelligence is overestimated. Here too, the 'decisions' that a computer makes are a 'black box'. Partly for this reason, it is difficult, if not impossible, to trace and correct any errors afterwards. This is one of the problems with the infamous 'toeslagenaffaire'.

The cybernetic loop
Algorithmic decision-making is part of a new digital wave, characterized by a 'cybernetic loop' of measuring (collecting data), profiling (analyzing data), and intervening (applying data). These aspects are present in every decision-making process, but whereas politicians and representatives of the people make conscious choices step by step, the algorithmic process is now partly a black box.
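
Expressed as a sketch, with placeholder functions and an invented noise threshold, one pass through the loop looks like this:

```python
# A minimal sketch of the 'cybernetic loop': measuring, profiling, intervening.
# The functions are placeholders, meant only to make the three steps visible.
def measure():
    """Collect data, e.g. sensor readings or case records."""
    return [{"noise_db": 72}, {"noise_db": 55}]

def profile(data):
    """Analyze the data: here, flag measurements above a threshold."""
    return [record for record in data if record["noise_db"] > 70]

def intervene(flagged):
    """Apply the analysis: here, just report; in practice, act."""
    for record in flagged:
        print(f"Intervention triggered for: {record}")

# One pass through the loop; real systems run it continuously.
intervene(profile(measure()))
```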

The role of ethical principles

Meanwhile, concerns are growing about the disregard of ethical principles in the use of artificial intelligence. This applies to nearly all principles discussed in the 9th episode: violation of privacy, discrimination, lack of transparency, and abuse of power resulting in great (partly unintentional) suffering, risks to the security of critical infrastructure, the erosion of human intelligence, and the undermining of trust in society. It is therefore necessary to formulate guidelines that realign the application of artificial intelligence with these ethical principles.

An interesting impetus to this end is given in the publication of the Institute of Electrical and Electronics Engineers (IEEE), Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. The Rathenau Institute has also published several guidelines in various publications.

The main guidelines that can be distilled from these and other publications are:

1. Placing responsibility for the impact of the use of artificial intelligence on both those who make decisions about its application (political, organizational, or corporate leadership) and the developers. This responsibility concerns the systems used as well as the quality, accuracy, completeness, and representativeness of the data.

2. Preventing designers from (unknowingly) applying their own standards when instructing learning processes. Teams with diverse backgrounds are a good way to achieve this.

3. Tracing 'decisions' by computer systems back to the algorithms used, understanding their operation, and being able to explain them.

4. Scientifically substantiating the model that underlies the algorithm and the choice of data.

5. Manually verifying 'decisions' that have a negative impact on the data subject.

6. Excluding all forms of bias in the content of datasets, the application of algorithms and the handling of outcomes.

7. Accounting for the legal basis of combining datasets.

8. Determining whether the calculation aims to minimize false positives or false negatives (a minimal sketch of this trade-off follows the list).

9. Giving personal feedback to clients when computerized 'decisions' lack clarity.

10. Applying the principles of proportionality and subsidiarity, which means determining on a case-by-case basis whether the benefits of using artificial intelligence outweigh the risks.

11. Prohibiting applications of artificial intelligence that pose a high risk of violating ethical principles, such as facial recognition, persuasive techniques, and deepfake techniques.

12. Revoking legal provisions if it appears that they cannot be enforced in a transparent manner due to their complexity or vagueness.
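
The trade-off named in the eighth guideline can be made tangible with a few lines of Python. The risk scores and labels below are invented; the point is only that moving the decision threshold lowers one type of error while raising the other.

```python
# Shifting a decision threshold trades false positives against false negatives.
import numpy as np

rng = np.random.default_rng(7)
truth = rng.integers(0, 2, size=1000)  # 1 = actual fraud (invented)
scores = np.clip(truth * 0.3 + rng.normal(0.4, 0.2, 1000), 0, 1)

for threshold in (0.3, 0.5, 0.7):
    flagged = scores >= threshold
    false_pos = np.sum(flagged & (truth == 0))   # innocent people flagged
    false_neg = np.sum(~flagged & (truth == 1))  # fraud that goes unseen
    print(f"threshold {threshold}: {false_pos} false positives, "
          f"{false_neg} false negatives")
```

Which of the two errors weighs more heavily is an ethical and political choice, not a technical one; that is why the guideline asks for it to be determined explicitly.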

The third, fourth, and fifth guidelines must be seen in conjunction. I explain why below.

The scientific bypass of algorithmic decision-making

When using machine learning, computers adapt and extend the algorithms themselves and combine data from different datasets. As a result, the final 'decisions' made by the computer cannot be explained. This is only acceptable once it has been proven that these decisions are 'flawless' - for example, in the case of 'self-driving' cars, once they turn out to be many times safer than ordinary cars, which, by the way, is not yet the case.

Unfortunately, this was not the case in the 'toeslagenaffaire' either. The fourth guideline could have provided a solution: scientific design-oriented research can be used to reconstruct the steps of the decision-making process that determines who is entitled to an allowance. By applying this decision tree to a sufficiently large sample of cases, the (degree of) correctness of the computer's 'decisions' can be verified. If they prove correct, the criteria used in the manual calculation may be used to explain the processes in the computer's 'black box'. If there are too many deviations, the computerized calculation must be rejected altogether.
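
A hedged sketch of this verification idea: fit an explicit, explainable decision tree to a sample of the system's 'decisions' and measure how often it reproduces them. All data and names below are invented stand-ins for the real system.

```python
# Checking a black-box system against an explicit, explainable decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))  # stand-in for applicant characteristics
# Stand-in for the black box's output on a sample of cases.
black_box_decisions = (X @ rng.normal(size=6) > 0).astype(int)

surrogate = DecisionTreeClassifier(max_depth=4)  # small, explainable tree
surrogate.fit(X, black_box_decisions)

fidelity = (surrogate.predict(X) == black_box_decisions).mean()
print(f"Agreement with the black box: {fidelity:.1%}")
# High agreement: the tree's explicit criteria may explain the black box.
# Too many deviations: the computerized calculation must be rejected.
```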

Governance

In the US, the use of algorithms in the public sector has fallen into disrepute, especially because of the facial recognition practices that will be discussed in the next episode. The city of New York has therefore appointed an algorithm manager, who investigates whether the algorithms used comply with ethical and legal rules. In Amsterdam, KPMG has a supervisory role. In other municipalities, that role is increasingly fulfilled by an ethics committee.

In the European public domain, steps have already been taken to combat the excesses of algorithmic decision-making. The General Data Protection Regulation (GDPR), which came into effect in 2018, has significantly improved privacy protection. In April 2019, the European High-Level Expert Group on AI published ethical guidelines for the application of artificial intelligence. In February 2020, the European Commission followed with such guidelines of its own in the White Paper on Artificial Intelligence, later followed by a proposal for an AI regulation. The Dutch government, for its part, adopted the national digitization strategy, the Strategic Action Plan for AI, and the policy letter on AI, human rights, and public values.

I realize that binding governments and their executive bodies to ethical principles is grist to the mill of those who flout those principles. Therefore, the search for legitimate ways to use artificial intelligence to detect crime, violations, or abuse of subsidies - among many other applications - continues to deserve broad support.

Follow the link below to find one of the previous episodes or see which episodes are next, and this one for the Dutch version.

https://www.dropbox.com/s/3u002oqccv5bs99/Preliminary%20overview%20of%20articles.docx?dl=0
