# Artificial Intelligence

Topic within Digital City
Zoë Spaaij, Project manager, posted

19 times the GDPR: what does that mean for you? Learn more at our European legislation webinar on June 2


In the coming years, Europe is taking up its role in the field of digitalization and technology. No fewer than 19 European laws are on the way, each one as far-reaching as the GDPR. During a webinar on June 2 from 16:00 to 17:00, Jonas Onland (VNG) will explain what this means for your organization.

These European laws will have a major impact on the power of tech companies, but also on the way the smart city is developed. Are we well prepared for the arrival of these new laws? What exactly do they entail? What are their consequences for the development of smart cities? Are companies and municipalities prepared?

On June 2, 2022, Jonas Onland (Program Leader Digital Transformation & Europe, VNG) will give a presentation on this topic and discuss it with you.

Date: June 2, 16:00 – 17:00

Location: Online (you will receive the link one day in advance)

Participation is free

SIGN UP NOW

Do you have questions for Jonas Onland? Submit them in advance via the registration form.

Online event on Jun 2nd
Herman van den Bosch, professor in management development, posted

New and free e-book: Better cities and digitization


For 23 weeks I have published weekly episodes of the series Better Cities: The role of digital technology on this site. I have edited and compiled these episodes into an e-book (88 pages), which you can download for free via the link below. The book has 17 chapters, grouped into seven parts:
1. Hardcore: Technology-centered approaches
2. Towards a human-centric approach
3. Misunderstanding the use of data
4. Ethical considerations
5. Embedding digitization in urban policy
6. Applications (government, mobility, energy and healthcare)
7. Wrapping up: Better cities and technology

#DigitalCity
Hanna Rab, Communication advisor at City of Amsterdam: Chief Technology Office, posted

Metaverse and Amsterdam


Come to the first Metaverse Meetup in Amsterdam on Thursday 19 May and marvel at the new reality of the Metaverse. Together with experts, we will discuss the influence this new dimension of the internet may have on our lives. The event is a collaboration between the City of Amsterdam and the Sharing Cities Alliance.

Program:
16:00 walk-in
16:15 opening by Douwe Schmidt (Digitalization & Innovation, City of Amsterdam) and Harmen van Sprang (co-founder, Sharing Cities Alliance)
16:30 presentations by the speakers (details to follow)
18:00 drinks
19:00 closing

About the metaverse
Does the metaverse offer us new opportunities for the future, or is it a threat to the lives we have built? Is it a place of and for all of us, or will platforms increasingly determine how we live, work and play? The metaverse appeals to the imagination of many, but it also raises many questions. The City of Amsterdam wants to respond to these technological changes in a responsible way.

About this event
During this free event, together with Amsterdammers, we explore how we can shape the metaverse so that it enriches our lives and our city. Guided by inspiring speakers from different organizations, we dive deeper into the theme. Each speaker briefly shares his or her vision and knowledge of the subject and engages in conversation with the audience.
Afterwards we have drinks together and talk about what your metaverse looks like.

More info
https://www.sharingcitiesalliance.com/

Meet-up on May 19th
Liza Verheijke, Community Manager at Amsterdam University of Applied Sciences, posted

Dutch Applied AI Award 2022


Do you have an innovative initiative in the field of applied Artificial Intelligence? Then submit your innovation for the Dutch Applied AI Award 2022!

This award has been handed out during the annual Computable Awards since 2020 and is an initiative of Computable (the platform for ICT professionals), De Dataloog (the Dutch podcast about data and AI) and the Centre of Expertise Applied Artificial Intelligence of Amsterdam University of Applied Sciences.

A jury of five experts in the field of applied AI (the award is not decided by public vote) selects five initiatives from all entries to compete for the Dutch Applied AI Award. After a pitch round in De Dataloog, the jury decides on the winner. All five nominees get a platform to present themselves and their initiative in an episode of the widely listened-to podcast. The Dutch Applied AI Award is not only a prize, but also a mark of recognition!

Do you want to compete for the Dutch Applied AI Award 2022? Then submit your innovative AI application! Deadline: Friday, July 1, 2022.

The award will be handed out during the Computable Awards on Wednesday, 5 October 2022.

#DigitalCity
Anonymous posted

Hyperion Lab Kick-Off Party Spring Edition


May 12, 6 pm, join us for the KICK-OFF SURPRISE PARTY, where we will reveal the hottest AI and HPC Innovations while enjoying cocktails made by a robot bartender. Shaken not stirred. 🥂

What to expect?
See with your own eyes how technology brings your ideas into 3D universes.
Learn how AI is training itself to become smarter.
Discover how AI will transform the future of the fashion industry.
...But let’s not reveal too much 😏

Attend the event to say goodbye to our first batch of showcased startups and celebrate the success they have gained through their work with Hyperion Lab, and meet the new startups that will rock the stage of our showcase program.

😎 Drinks and food on us! You come with enthusiasm.

🚀 Sign up today with the link below!
https://events.hyperionlab.nl/kickoff-spring

Hyperion Lab is a community-driven project aiming to become the go-to place for AI and HPC innovations.
Our space hosts startup innovations from all around Europe, supporting them with hardware and expertise.
In addition, we host training and events related to the AI and HPC community.
Our mission at Hyperion Lab is to bring together the Dutch and International AI and HPC community within a large-scale smart city in Amsterdam South East. Join our community!

Meet-up on May 12th
Herman van den Bosch, professor in management development, posted

Risks and opportunities of digitization in healthcare


The 21st episode of the series Better Cities: The contribution of digital technology is about priorities for digital healthcare, often referred to as eHealth.

The subject is broader than what will be discussed here. I won't talk about the degree of automation in surgery, the impressive equipment available to doctors (ranging from the high-tech dentist's chair to the MRI scanner in hospitals), or the research into microbes in air, water and sewage that has exploded due to the covid pandemic. Even the relationship with the urban environment remains somewhat in the background; it simply does not play a prominent role when it comes to digitization in healthcare. The subject, on the other hand, lends itself well to illustrating the ethical and social problems associated with digitization, as well as the solutions that have become available in the meantime.

The challenge: saving costs and improving the quality of care

The Netherlands is fortunate to be one of the countries with the best healthcare in the world. However, there are still plenty of challenges: a greater focus on health instead of disease, giving citizens more responsibility for their own health, increasing the resilience of hospitals, paying attention to the health of the poorer part of the population, whose number of healthy life years is significantly lower, and, above all, limiting the increase in costs. Over the past 20 years, healthcare in the Netherlands has become 150% more expensive, not counting the costs of the pandemic. Annual healthcare costs now amount to €100 billion, about 10% of GDP. Without intervention, this will rise to approximately €170 billion by 2040, mainly due to an aging population. Meanwhile, healthcare costs are very unevenly distributed: 80% of healthcare costs go to 10% of the population.
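As a quick sanity check on these figures (a simplification, since it assumes steady exponential growth), the jump from €100 billion today to roughly €170 billion in 2040 implies an annual growth rate of about 3%:

```python
# Implied annual growth rate if healthcare costs rise from EUR 100 billion
# now to EUR 170 billion in 2040 (about 18 years), assuming constant
# exponential growth. Illustrative arithmetic only.
years = 18
rate = (170 / 100) ** (1 / years) - 1
print(f"{rate:.1%}")  # about 3.0% per year
```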

The most important task facing the Netherlands and other rich countries is to use digitization primarily to reduce healthcare costs, without forgetting the other challenges mentioned. This concerns a series of, often small, forms of digital care. According to McKinsey, savings of €18 billion by 2030 are within reach, even counting only forms of digitization with proven effect. Most gains can be made by reducing the administrative burden and by shifting care to less specialized centers, to home treatment and to prevention.

Information provision

There are more than 300,000 health sites and apps on the internet, providing comprehensive information about diseases and options for diagnosis and self-treatment. More and more medical data can also be viewed online. The information in apps is often incomplete, however, which can result in misdiagnosis. Doctors in the Netherlands especially recommend the website Thuisarts.nl, which they developed themselves.

Many apps use gamification, such as exercises to improve memory. A good example of digital social innovation is Mirrorable, a program to treat children with motor disorders because of brain injury. This program also enables contact between parents whose inputs continuously help to improve exercises.

Process automation

Process automation in healthcare resembles automation elsewhere in many respects, for instance in personnel, logistics and financial management. More specific to healthcare is the integrated electronic patient file. The Framework Act on Electronic Data Exchange in Healthcare, adopted in 2021, obliges healthcare providers to exchange data electronically and prescribes standards. However, data exchange will be minimal and will only take place at a decentralized level to address privacy concerns. The complexity of the organization of healthcare and the constant discussions about the content of such a system were also immense obstacles. That is a pity, because a central system lowers costs and increases quality. Meanwhile, new technological developments can guarantee privacy with great certainty, for example the use of federated (decentralized) forms of data storage combined with blockchain. TNO conducts groundbreaking research in this area: the institute applies the principles of federated learning along with multi-party computation technology. These innovative technologies enable learning from sensitive data from multiple sources without sharing that data.
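To make the idea concrete, here is a minimal sketch of federated averaging, the family of techniques this kind of research builds on. Everything here is invented for illustration (three simulated "hospitals", a simple linear model); a real deployment would add secure aggregation and multi-party computation on top, so that even the model updates reveal nothing about individual records.

```python
# Minimal federated-averaging sketch: each "hospital" trains locally on
# private data; a coordinating server only ever averages model parameters,
# never seeing the raw records. Data and model are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Three hospitals, each holding data they never share.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)  # global model held by the server

for _ in range(100):  # communication rounds
    local_ws = []
    for X, y in clients:
        lw = w.copy()
        for _ in range(5):  # a few local gradient steps on private data
            grad = 2 * X.T @ (X @ lw - y) / len(y)
            lw -= 0.05 * grad
        local_ws.append(lw)
    # The server aggregates parameters only.
    w = np.mean(local_ws, axis=0)

print(w)  # close to the true weights [2.0, -1.0]
```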

Video calling

The recent eHealth monitor of the RIVM shows that by 2021 almost half of all doctors and nurses had had contact with patients via video calling, while this hardly happened in 2019. Incidentally, this concerns a relatively small group of patients. In the US there was an even larger increase, which has since turned into a sharp decline. It seems that primary health care in the US is reinventing itself: Walgreens, the largest US drugstore chain, will begin offering primary care in 1,000 of its stores. Apparently, in many cases, physical contact with a doctor is irreplaceable, even if (or perhaps because) the doctor is relatively anonymous.

Video calling is not only important for care providers, but also for informal caregivers, family and friends, and it helps combat loneliness. Virtual reality (metaverse!) will further expand the possibilities here. TNO is also active in this field: the TNO media lab is developing a scalable communication platform in which the person involved (patient or client), using only an upright iPad, has the impression that the doctor, district nurse or visitor is sitting at the table or on the couch right in front of them.

Self-diagnosis

The effectiveness of a remote consultation is of course enhanced if the patient has already made a few observations him- or herself; 8% of patients with chronic conditions already do this. There is a growing range of self-tests available for, for example, fertility, urinary tract infections, kidney disorders and, of course, Covid-19. There are also home devices such as smart thermometers, mats that detect diabetic foot complications, and blood pressure meters: basically, everything that doctors often routinely do during a visit. The GGD AppStore provides an overview of relevant and reliable apps in the field of health.

Wearables, for example built into an i-watch, can collect part of the desired data, store it for a longer period and, if necessary, exchange it with the care provider.

More advanced are the mobile diagnosis boxes for emergency care by nurses on location, such as ambulances. With a fast Internet connection (5G), specialist care providers can watch if necessary.

A small but growing group of patients, doctors, and researchers, with substantial financial support from Elon Musk, sees the future mainly in chip implants. These would allow not only more complete diagnoses but also treatments to be carried out. Neuralink has developed a brain implant intended to improve communication with speech- and hearing-impaired people. The Synchron brain implant helps people with brain disorders perform simple movements. For the time being, resistance to brain implants is high.

Remote monitoring

Meanwhile, all these low-threshold amenities can lead us to become fixated on disease rather than on health. But what if we never had to worry about our health again? Instead, the local health center watches over our health thanks to wearables: Our data is continuously monitored and analyzed using artificial intelligence. They are compared with millions of diagnostic data from other patients. By comparing patterns, diseases can be predicted in good time, followed by automated suggestions for self-treatment or advice to consult a doctor. Until then, we have probably experienced nothing but vague complaints ourselves. Wouldn't that be an attractive prospect?
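A hypothetical miniature of this idea: flag a wearable reading that deviates sharply from the wearer's own recent baseline. Real systems compare patterns against millions of patients with far richer models; this rolling z-score only illustrates the principle, and the heart-rate stream is invented.

```python
# Flag wearable readings that deviate strongly from the wearer's own
# recent baseline (a rolling z-score). Purely illustrative.
from statistics import mean, stdev

def monitor(readings, window=20, threshold=3.0):
    """Yield (index, value) for readings more than `threshold` sigmas
    away from the mean of the preceding `window` readings."""
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            yield i, readings[i]

# Resting heart rate hovering around 62 bpm, then a sudden spike.
stream = [62, 61, 63, 62, 60, 62, 63, 61, 62, 62,
          61, 63, 62, 62, 61, 62, 63, 62, 61, 62, 95]
alerts = list(monitor(stream))
print(alerts)  # the spike at index 20 is flagged
```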

Helsinki is experimenting with a Health Benefit Analysis tool that anonymously examines patients' medical records to evaluate the care they have received so far. The central question here is: can the municipality proactively approach people based on the health risks that such analyses bring to light?

Medics participating in a large-scale study by the University of Chicago and the company Verily were amazed at the accuracy with which algorithms were able to diagnose patients and predict diseases ranging from cardiovascular disease to cancer. In a recent article, oncologist Samuel Volchenboom wrote that it is painful to note that the calculations came from Verily, a subsidiary of Alphabet, which used not only medical data (with patients' consent) but also all the other data that sister company Google had already stored about them. He adds that it is unacceptable for the ownership and use of such valuable data to become the province of only a few companies.

Perhaps even more problematic is that these predictions are partly based on patterns in the data that the researchers cannot fully explain. It has therefore been argued that the use of these types of algorithms should be banned. But how would a patient feel if such an algorithmic recommendation were his or her last hope? It is better to invest in more transparent artificial intelligence.

Implementing digital technology

Many patients and healthcare professionals still have doubts about the added value of digital technology. The media report new cases of data breaches and theft every day, and most people are not confident that blockchain technology, among other things, can prevent this. Most medical specialists doubt whether ICT will reduce their workload; it is often seen as an add-on. Numerous small-scale pilot projects are taking place, which consume a lot of energy but are rarely scaled up. The supply of digital healthcare technologies exceeds their use.

Digital medicine will have to connect better than it currently does with the needs of health professionals and patients. In addition to concerns about privacy, the latter are especially afraid of further reductions in personal attention; the idea of a care robot terrifies them. As with all forms of digitization, there is a need for a broadly supported vision and for priorities set on that basis.

Against this background, a plea for even more medical technology in our part of the world, including e-health, is somewhat embarrassing. Growth in healthy years due to investment in health care in developing countries will far exceed the impact of the same investment in wealthy countries.

Nevertheless, it is desirable to continue deliberately on the chosen path, whereby expensive experiments for the benefit of a small group of patients have, in my opinion, lower priority than investments in a healthy lifestyle, prevention, and self-reliance. Healthcare cannot and should not be taken over by robots; digitization and automation are mainly there to support and improve the work of the care provider and to make it more satisfying and efficient.

One of the chapters in my e-book Future cities, always humane, smart if helpful also deals with healthcare and offers examples of digital tools. In addition, it provides much more contextual information about the global health situation, particularly in cities. You can download it via the link below. The Dutch edition is here.

#DigitalCity
Manon den Dunnen, Strategic digital specialist, posted

Sensemakers latest on Metaverse/Extended Reality


This evening we will meet at NewBase on the Marineterrein. After a short introduction about the Metaverse (context and hype), Daniel Doornik and others will share the latest insights and their latest applications in VR, AR and MR.
There will also be the opportunity to try it out!

Really looking forward to welcoming you offline again! As catering is limited, feel free to bring your own. Presentations usually start at 19:00 and end around 20:30 with an "open mic", when you can share your own story, event or question.

Meet-up on Apr 20th
Maarten Sukel, AI Lead at City of Amsterdam, posted

Artificial Intelligence Work Lab Amsterdam


Join the conversation and help decide on the application of artificial intelligence in Amsterdam. During the work lab, you will discuss a concrete case about the use of a sensor register in the city. How do you think we should handle it?

An extra pair of eyes sometimes comes in handy. In Amsterdam there are many extra eyes: cameras and sensors. These cameras and sensors belong to the municipality, but also to the neighborhood supermarket, that large international company, or your neighbor. The City of Amsterdam wants to know how many cameras and sensors there are in Amsterdam. It is possible to map data from cameras and sensors using artificial intelligence. But what about the privacy of Amsterdammers when such data is collected? And is automatically mapping cameras and sensors a good idea at all?

We warmly invite you to the AI work lab, the place where Amsterdammers think together about the use of smart applications in the city. The work lab is organized by Netwerk Democratie.

Meet-up on Apr 6th
Maarten Sukel, AI Lead at City of Amsterdam, posted

City of Amsterdam AI Graduation Research Fair @Datalab


In this DemoDonderdag edition, we invite you to help us steer 20 graduation research projects towards valuable AI solutions for Amsterdam!

Program:
15:45 - Doors open
16:00 - Short Introduction
16:10 - Interactive Poster Session
17:00 - Networking & Snacks

Every year, we give master's students from the field of AI and Data Science the opportunity to conduct their graduation research on real-life problems together with the City of Amsterdam.

This year, we collaborate with 20 students from the University of Amsterdam, the Vrije Universiteit, and the University of Twente, on topics such as measuring the accessibility of our city, creating a healthier, greener, and cleaner environment, optimizing the maintenance of public assets and infrastructure, as well as improving internal processes such as document management.

During this event, the students will present their research directions and current findings, as well as their plans for the remainder of their theses. In a poster session setup, everyone will be able to explore the different projects, enjoy short demonstrations, and have open discussions about their favorite topics.

What we need from you is an open mind, constructive feedback, and fresh ideas, so that together we can help all projects crystallize and, eventually, turn them into valuable AI solutions for our city.

Last but not least, this will be a moment for all of us to reconnect and meet each other at a fully physical event.

Meet-up on Apr 7th
Hanna Rab, Communication advisor at City of Amsterdam: Chief Technology Office, posted

Responsible Sensing Lab at the Arcam exhibition Private_Eye_Butler_Spy


Last week, the exhibition Private Eye Butler Spy opened at Arcam. It shows prototypes of three projects by the Responsible Sensing Lab, a collaboration between AMS Institute and the City of Amsterdam. The exhibition examines the impact of technology in and around the home.

The exhibition

Applications in and around the home are becoming increasingly intelligent: instead of a house key, access systems use biometric data to decide whether a door opens. In the city, scan cars check whether parking fees have been paid, and sensors register crowding on the street and air quality. The exhibition Private_Eye_Butler_Spy examines the changing relationship between technology and people. Through various themes, visitors explore the ethical questions and design challenges that a high-tech future brings.

Visiting

Private_Eye_Butler_Spy can be visited free of charge from 12 March to 26 June 2022 at Arcam, Prins Hendrikkade 600.

#DigitalCity
Mark Siebert, Business Development, posted

Vrije Universiteit Amsterdam - Smart Campus LivingLab


Join us on a virtual walk through LivingLab projects at the Marineterrein and discover the power of analytics for campus development. Tom van Arman and Tom Griffioen will touch upon open (research) questions of our future Digital Society and illustrate opportunities how connected data can deliver new insights, while respecting privacy.

VU Amsterdam is bursting with data that can make teaching, administrative and facilities processes more efficient and therefore, make work and study easier and more enjoyable. So why aren’t we using it on a massive scale yet?

Tom van Arman is a Smart City Architect based in Amsterdam. As an urban planner and technologist, Tom uses IoT, AI, APIs and open data as design tools to create more liveable and inclusive cities. In 2010 he founded Tapp, an award-winning smart city design agency that enables local governments and industries to bridge the gap between the built environment and the new digital landscape. Tom regularly works with local governments, energy companies and mobility partners to rapidly prototype solutions to problems of the 21st-century city.

Tom Griffioen is CEO of the VU spin-off Clappform, a data analytics platform active in various sectors including the built environment, which enables companies to use artificial intelligence in their daily work. The flexible cloud-based platform enables the extraction of valuable insights from both structured and unstructured data. Using AI algorithms, data from sensors is analysed and then visualised in easy-to-use dashboards. The visualisations are real-time and updated automatically.

Online event on Mar 8th
Hanna Rab, Communication advisor at City of Amsterdam: Chief Technology Office, posted

City of Amsterdam at MozFest 2022


From 7 to 11 March, MozFest takes place: a virtual festival with an important mission, a fairer and better internet for everyone and trustworthy AI.

Since 2021, Amsterdam has been the home base of MozFest, which is hosted by the Mozilla Foundation. It features hundreds of workshops, discussions and talks about, among other things, building open-source tools, handling data fairly, and solutions to online disinformation and harassment. The City of Amsterdam will again be part of the festival this year.

About MozFest

MozFest is unique in that it is put together by the participants themselves. In more than 400 sessions, thousands of technologists, activists, entrepreneurs, academics and artists map the most urgent internet problems and work together on solutions.

A number of innovative projects from the City of Amsterdam will be part of the program. Through workshops, discussions and talks you can learn more about AI applications, responsible drones and digital rights in Amsterdam, among other things.

MozFest is supported by the City of Amsterdam.

Which activities is the municipality organizing?

7 March, 22:45 – 23:45
Corona Tech: Behind The Scenes, by Siham El Yassini

This session screens a documentary that provides insight into the City of Amsterdam's use of digital technologies at the start of the corona crisis. The screening is followed by a Q&A.

8 March, 16:00 – 17:00
Responsible Drones: discussing the conditions for responsible drone use in cities (and beyond), by Hidde Kamst

What conditions must drones meet to be deployed responsibly, and what is the municipality's role in this? In this session, the results of the 'responsible drones' research project are shared, and we discuss the possible impact of drones in the city. The focus is on making the future impact of drones concrete and on involving residents in the development of this technology.

11 March, 15:00 – 15:45
Cities for Digital Rights Helpdesk & Governance Framework, by Milou Jansen

At MozFest 2020, the first concept for a Digital Rights Helpdesk for Cities was presented. This year it has grown into a subsidized international project: a first version of the Digital Rights Governance Framework has been drawn up, and a digital rights helpdesk for municipalities within the EU has been developed. The framework and the helpdesk offer practical tools to improve and safeguard digital rights in the city. In this session you are invited to think along about the plan and to stay involved with the theme of digital rights.

11 March, 16:00 – 17:30
Building digital spaces in the public interest: where we are and how to move forward, by Sander van der Waal (Waag) and Erik de Vries

This session discusses the possibilities for alternative social platforms. How can we design platforms so that privacy is safeguarded and people keep control over their own data? Together with those present, we take stock of the components for public digital platforms: what already exists, and what still needs to be developed?

Online event from Mar 7th to Mar 11th
Maarten Sukel, AI Lead at City of Amsterdam, posted

Launch of the 'Amsterdam for All' project


On Demo Thursday, 3 March at 16:00, the City of Amsterdam, together with World Enabled, launches the 'Amsterdam for All' project. In this project we use artificial intelligence to measure the inclusive accessibility of the city.

For example, we can predict which sidewalks have obstacles and where lowered crossings are located; useful for communicating about the accessibility of shopping streets and hospitality venues. In this way we contribute to an inclusive city, accessible to all Amsterdammers.

Speakers include:

  • Ger Baron, CTO and Director of Digitalization and Innovation at the City of Amsterdam.
  • Dr. Victor Pineda, President of World Enabled, an organization committed to the inclusive accessibility of cities.
  • Maarten Sukel, AI Lead at Digitalization and Innovation of the City of Amsterdam and PhD researcher at the University of Amsterdam. Maarten will explain why we should use artificial intelligence in this way and give some tangible examples of projects.
  • Dr. Jon Froehlich, Associate Professor in Human-Computer Interaction at the University of Washington, where he works on Project Sidewalk, among other things. He will talk about the project and how the same tool is used to map accessibility in Amsterdam.

Will you join us?

Meet-up on Mar 3rd
Herman van den Bosch, professor in management development, posted

Abuse of artificial intelligence by the police in the US: more than bias


The 16th episode of the series Building sustainable cities - The contribution of digital technology reveals what can happen if the power of artificial intelligence is not used in a responsible manner.

The fight against crime in the United States has been the scene of the abuse of artificial intelligence for years. As will become apparent, this is not only the result of bias. In episode 11, I discussed why artificial intelligence is a fundamentally new way of using computers. Until its advent, computers were programmed to perform operations such as structuring data and making decisions; with artificial intelligence, they are trained to do so. However, it is still people who design the instructions (algorithms) and are responsible for the outcomes, even though the way the computer performs its calculations is increasingly becoming a 'black box'.

Applications of artificial intelligence in the police

Experienced detectives are traditionally trained to compare the 'modus operandi' of crimes to track down perpetrators. Given the labor-intensive nature of doing this manually, the question soon arose whether computers could assist. A first attempt, made in 2012 in collaboration with the Massachusetts Institute of Technology, resulted in grouping past crimes into clusters that were likely to have been committed by the same perpetrator(s). The intuition of experienced police officers was the starting point for the algorithm. Sometimes it proved possible to predict where and when a burglar might strike, leading to additional surveillance and an arrest.
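By way of illustration only (this is not the MIT system, and the features are invented), grouping crimes by modus-operandi features can be sketched with a tiny k-means:

```python
# Toy k-means that groups crimes by invented modus-operandi features:
# (hour of day / 24, forced entry?, ground floor?). Purely illustrative.
def kmeans(points, k, iters=20):
    centers = list(points[:k])  # deterministic initialization for the sketch
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each crime to the nearest cluster center.
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # Recompute centers as the mean of each cluster.
        centers = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

crimes = [(0.10, 1, 1), (0.12, 1, 1), (0.09, 1, 1),   # night-time forced entries
          (0.60, 0, 0), (0.58, 0, 0), (0.62, 0, 0)]   # daytime sneak-ins
centers, clusters = kmeans(crimes, k=2)
print([len(c) for c in clusters])  # the two modus operandi separate: [3, 3]
```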

These first attempts were soon refined and taken up by commercial companies. The two most used techniques that resulted are predictive policing (PredPol) and facial recognition.

In the case of predictive policing, patrols are directed to the neighborhood or even the street where they should patrol at a given moment, because it has been calculated that the risk of crimes (vandalism, burglary, violence) is then greatest. Anyone who behaves 'suspiciously' risks being arrested. Facial recognition also plays an important role in this.

Both predictive policing and facial recognition are based on a "learning set" of tens of thousands of "suspicious" individuals. At one point, the New York police had a database of 48,000 individuals: 66% of them were black, 31.7% Latino and only 1% white. This composition has everything to do with the working methods of the police. Although drug use in US cities is common in all neighborhoods, policing based on PredPol and similar systems focuses on a few neighborhoods (of color). It is then not surprising that most drug-related crimes are found there, and, as a result, the composition of the database becomes even more skewed.
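That feedback loop can be made tangible with a toy simulation (all numbers invented): two districts with identical true offence rates, where patrols are sent to whichever district the database flags as the hotspot, and only patrolled offences get recorded.

```python
# Toy feedback loop: identical true offence rates in both districts, but only
# the patrolled "hotspot" district's offences enter the database, so the
# initial skew reinforces itself. Invented numbers, for illustration only.
def simulate(rounds=20, discoveries_per_round=5):
    recorded = [60, 40]  # slightly skewed starting database
    for _ in range(rounds):
        # Patrols follow the data; recorded offences follow the patrols.
        hotspot = 0 if recorded[0] >= recorded[1] else 1
        recorded[hotspot] += discoveries_per_round
    return recorded

final = simulate()
print(final, round(final[0] / sum(final), 2))  # [160, 40] 0.8: skew grew from 0.6
```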

Overcoming bias

In these cases, 'bias' is the cause of the unethical effects of applying artificial intelligence. Algorithms always reflect the assumptions, views, and values of their creators. They do not predict the future, but ensure that the past is reproduced. This also applies to applications outside the police force. The St. George's Hospital Medical School in London employed disproportionately many white males for at least a decade because its learning set reflected the incumbent staff. The criticized Dutch System Risk Indication system also uses historical data about fines, debts, benefits, education, and integration to search more effectively for people who abuse benefits or allowances. This is not objectionable in itself, but it should never lead to 'automatic' incrimination without further investigation, or to the exclusion of less obvious suspects.

The simple fact that the police are disproportionately present in alleged hotspots and are very keen on any form of suspicious behavior means that the number of confrontations with violent outcomes has risen rapidly. In 2017 alone, police actions in the US resulted in an unprecedented 1,100 casualties, of whom only a limited number were white. In addition, the police have been engaged in racial profiling for decades. Between 2004 and 2012, the New York Police Department stopped and checked more than 4.4 million residents. Most of these checks resulted in no further action. In about 83% of the cases, the person was black or Latino, although the two groups together make up just over half of the population. For many citizens of color in the US, the police do not represent 'the good' but have become part of a hostile state power.

In New York in 2017, a municipal provision to regulate the use of artificial intelligence, the Public Oversight of Surveillance Technology (POST) Act, was proposed. The Legal Defense and Educational Fund, a prominent US civil rights organization, urged the New York City Council to ban the use of data obtained through discriminatory or biased enforcement policies. This wish was granted in June 2019, and as a result the number of persons included in the database was reduced from 42,000 to 18,000: all persons who had been included in the system without concrete suspicion were removed.

San Francisco, Portland, and a range of other cities have gone a few steps further and banned the use of facial recognition technology by police and other public authorities. Experts recognize that the artificial intelligence underlying facial recognition systems is still imprecise, especially when it comes to identifying the non-white population.

The societal roots of crime

Knowledge of how to reduce bias in algorithms has grown, but instead of solving the problem, this has raised awareness of a much deeper one: the causes of crime itself, and the realization that the police can never remove them.

Crime and recidivism are associated with inequality, poverty, poor housing, unemployment, use of alcohol and drugs, and untreated mental illness. These are also dominant characteristics of neighborhoods with a lot of crime, and as a result, residents of these neighborhoods are often unable to lead a decent life. These conditions are stressors that also affect the quality of the parent-child relationship: attachment problems, insufficient parental supervision, tolerance of alcohol and drugs, lack of discipline or an excess of authoritarian behavior. All in all, these conditions increase the likelihood that young people will become involved in crime, and they diminish the prospect of a successful career in school and elsewhere.

The ultimate measures to reduce crime in the longer term and to improve security are sufficient income, adequate housing, affordable childcare, especially for 'broken families' and unwed mothers, and ample opportunities for girls' education. But also care for young people who have come into contact with crime for the first time, to prevent them from reoffending.

Beyond bias

This will not solve the problems in the short term. A large proportion of those arrested by the police in the US are addicted to drugs or alcohol, are severely mentally disturbed, have serious problems in their home environment, if they have one, and have given up hope for a better future. Based on this understanding, the police in Johnson County, Kansas, have for years called in mental health professionals rather than immediately handcuffing those they arrest. This approach proved successful and caught the attention of the White House during the Obama administration. Lynn Overmann, then a senior advisor in the president's technology office, therefore started the Data-Driven Justice Initiative. The immediate reason was that prisons turned out to be crowded with seriously disturbed psychiatric patients. Coincidentally, Johnson County had an integrated data system that stores both crime and health data; in other cities, these are kept in incompatible data silos. Together with the University of Chicago's Data Science for Social Good program, artificial intelligence was used to analyze a database of 127,000 people. The aim was to find out, based on historical data, which of those involved was most likely to be arrested within a month. Not with the intention of hastening an arrest through predictive techniques, but instead to offer them targeted medical assistance. This program was picked up in several cities, and in Miami it resulted in a 40% reduction in arrests and the closing of an entire prison.

What does this example teach us? The rise of artificial intelligence led Wired editor Chris Anderson to proclaim the end of theory. He could not have been more wrong! Theory has never disappeared; at most it has disappeared from the consciousness of those who work with artificial intelligence. In his book The End of Policing, Alex Vitale concludes: "Unless cities alter the police's core functions and values, use by police of even the most fair and accurate algorithms is likely to enhance discriminatory and unjust outcomes" (p. 28). Ben Green adds: "The assumption is: we predicted crime here and you send in police. But what if you used data and sent in resources?" (The smart enough city, p. 78).

The point is to replace the dominant paradigm of identifying, prosecuting, and incarcerating criminals with the paradigm of finding potential offenders in time and giving them the help they need. It turns out to be cheaper, too. The need for artificial intelligence is not diminishing, but the training of the computers, including the composition of the training sets, must change significantly. It is therefore recommended that diverse and independent teams design such a training program based on a scientifically grounded view of the underlying problem, rather than leaving it to the police itself.

This article is a condensed version of an earlier article, The Safe City (September 2019), which you can read by following the link below, supplemented with material from Chapter 4, "Machine learning's social and political foundations", from Ben Green's book The smart enough city (2020).

Herman van den Bosch's picture #DigitalCity
Anonymous posted

Webinar: Developer Retention: Beyond compensation

Featured image

On average, software developers stay in a role for less than two years, and the tech industry has the highest talent turnover of any industry. The cost of lost productivity, delayed digital initiatives and time to replace developers runs into the billions globally.

On 3 March, we'll be chatting with Jieke Pan, CTO EMEA & APAC at Mobiquity, about:

👉 How to connect customer pain points to individual engineering outputs, resulting in more meaningful and fulfilling work
👉 Why psychological safety and boundaries are key to a developer’s experience at work
👉 A proactive approach to structured and unstructured growth for your engineers

Online event on Mar 3rd
MozFest 2022, posted

MozFest 2022: the international tech event on the importance of transparent algorithms

Featured image

After a successful online edition in 2021, the international virtual tech event MozFest takes place this year from 7 to 11 March. Thousands of participants from all over the world will work together on one mission: a fairer, more transparent, free and inclusive internet and trustworthy AI.

The complete festival programme comprises more than 350 sessions. Themes such as privacy, trustworthy AI and digital rights form the main pillars of the festival. The shared festival goal is to break through the status quo and reshape the online world. In the run-up to the festival, various Fringe events for the MozFest community will take place. After MozFest it is also possible to join new Fringe sessions and rewatch existing festival sessions (until 25 June). The physical event MozFest House in Amsterdam has been cancelled due to COVID-19.

About MozFest

MozFest is hosted by the Mozilla Foundation, a non-profit organization whose task is to support and steer the open source project Mozilla. The Mozilla Foundation's core values are openness and inclusiveness, with the goal of jointly ensuring that the internet remains a public resource, open and accessible to everyone. Every year Mozilla publishes the Internet Health Report, an annual update on, among other things, the security, inclusiveness and public accessibility of the internet. The Internet Health Report is open source and is based on research by experts from all over the world. During MozFest 2022, parts of the Internet Health Report podcast will be shared. The full report will be released in April.

MozFest 2022's picture Online event from Mar 7th to Mar 11th
Manon den Dunnen, Strategisch specialist digitaal , posted

Sensemakers Latest on VR/AR/MR

Featured image

Sensemakers can finally meet offline again. This evening we'll meet at NewBase on the Marineterrein to learn about the latest in VR, AR and MR, and depending on the number of people, some will also have the opportunity to try it out!

Looking forward to talking to you all again!

Manon den Dunnen's picture Meet-up on Feb 16th
Cornelia Dinca, International Liaison at Amsterdam Smart City, posted

Open Call for city governments and experts to pilot city-wide implementations of digital rights

Featured image

The Cities Coalition for Digital Rights, UN-Habitat, UCLG and Eurocities, in partnership with Open Society Foundation, are launching an open call to find experts and city governments in Europe to pilot the Digital Rights Governance Framework.

Selected cities will receive support to design and implement the Digital Rights Governance Framework at the local level, and will be provided with technical advice, ad hoc support and advisory input.

The goal of the project is to tackle local challenges in digital-rights thematic areas such as digital inclusion, individuals' control and autonomy over their data, transparency and accountability, public participation and community engagement, privacy, digital public goods and open digital infrastructures, and procurement, by developing a Digital Rights Governance Framework and a capacity-building programme on these topics.

Using challenge-driven innovation, the pilot process will include assessments, workshops and a capacity-building approach that will be carried out in collaboration with the city’s local staff.

The call is open to:

  • “CITIES”, where city governments in Europe are welcome to apply by sharing local challenges in the different digital rights areas 
  • “EXPERTS”, for professionals with a background in the thematic areas related to digital rights

Find out more and submit your application via: https://citiesfordigitalrights.org/opencall

Deadline: Sunday, 27 February 2022

Cornelia Dinca's picture #DigitalCity
Liza Verheijke, Community Manager at Amsterdam University of Applied Sciences, posted

AIMD receives over 2 million euros for human-centered AI research

Featured image

CLICK HERE FOR ENGLISH

The AI, Media & Democracy Lab, a collaboration between UvA, HvA and CWI, has been awarded a grant of 2.1 million euros within the NWO call 'Human-centered AI for an inclusive society: towards an ecosystem of trust'. With it, researchers in the so-called ELSA Labs will work together with media companies and cultural institutions to increase knowledge about the development and application of trustworthy, human-centered AI.

In total, NWO is funding five proposals in this call, together amounting to more than 10 million euros. HvA professors Nanda Piersma and Tamara Witschge and Senior Lecturer Responsible AI Pascal Wiggers, together with many others, went to great lengths to achieve this.

The AI, Media & Democracy ELSA Lab is one of the funded projects in the category Economy, Domestic Governance and Culture & Media, and investigates the impact of AI on the democratic function of the media. Together with journalists, media professionals, designers, citizens, fellow researchers and public and societal partners, the lab develops and tests value-driven, human-centered AI applications and ethical and legal frameworks for the responsible use of AI.

The Lab's goal is to stimulate innovative AI applications that strengthen the democratic function of the media. It collaborates with partners such as RTL, DPG Media, NPO, Beeld en Geluid, Media Perspectives, NEMO Kennislink, Waag Society, the City of Amsterdam, the Ministry of the Interior and Kingdom Relations, the Ministry of Education, Culture and Science, the Commissariaat voor de Media, Hogeschool Utrecht, Utrecht University, Cultural AI Lab, the Koninklijke Bibliotheek, the BBC and the Bayerischer Rundfunk AI Lab.

AN ENORMOUS BOOST

Prof. Natali Helberger, university professor of Law and Digital Technology at the UvA and co-founder of the AI, Media & Democracy Lab: "This grant enables us, together with our partners, to investigate how AI can play a role in the democratic and independent role of the media, the public sphere, and citizens who want to inform themselves. With the AI, Media & Democracy Lab we can contribute to independent innovation, but also to shaping a vision of the future of the media in our digital society."

Dr. Nanda Piersma, scientific director of the Centre of Expertise Applied AI, HvA professor of Responsible IT and researcher at CWI: "We want to make a difference in the current media landscape by putting AI into practice responsibly, together with the media partners and the public partners. This grant enables us to create an experimental space where we can try out AI and, when results are good, implement it in practice with the partners. That will give the Dutch media landscape an enormous boost."

Dr. Tamara Witschge, HvA professor of Creative Media for Social Change: "With this consortium of knowledge institutions, the HvA can really make an important contribution, because it is about developing technological innovations that safeguard public values and fundamental rights and respect human rights, and testing them in journalistic practice. The project brings together the various areas of expertise of the Faculty of Digital Media and Creative Industries: from AI to media and design."

FUNDAMENTAL RIGHTS, HUMAN RIGHTS AND PUBLIC SUPPORT

The ELSA Labs (ELSA: 'Ethical, Legal and Societal Aspects') are co-creative environments for interdisciplinary, interconnected research into the various technological and economic challenges facing society. The research funded by the NWO grant should not only contribute to technological innovations that safeguard public values and fundamental rights and respect (and where possible strengthen) human rights, but should also be able to count on public support. Nanda Piersma: "We are proud that the AI, Media & Democracy Lab first received the NLAIC ELSA label and now this NWO grant as well. We feel we have been given this trust and want to live up to it fully in the coming years."

ABOUT HUMAN-CENTERED AI

As part of the Dutch National Research Agenda (NWA), NWO and the Netherlands AI Coalition have launched the programme 'Artificial Intelligence: Human-centered Artificial Intelligence (AI) for an inclusive society, towards an ecosystem of trust'. The programme promotes the development and application of trustworthy, human-centered AI.

In this public-private partnership, government, industry, education and research institutions and civil society organizations work together to accelerate national AI developments and connect existing initiatives. This NWA research programme links AI as a key technology to AI research for an inclusive society. The national research agenda AIREA-NL and societal and policy questions play an important role in this.

Liza Verheijke's picture #DigitalCity
Herman van den Bosch, professor in management development , posted

Ethical principles and artificial intelligence

Featured image

In the 11th episode of the series Better cities: The contribution of digital technology, I apply the ethical principles from episode 9 to the design and use of artificial intelligence.

First, I will briefly summarize the main features of artificial intelligence, such as big data, algorithms, deep learning, and machine learning. For those who want to know more: Radical Technologies by Adam Greenfield (2017) is a very readable introduction, also covering technologies such as blockchain, augmented and virtual reality, the Internet of Things, and robotics, which will be discussed in the next episodes.

Artificial intelligence

Artificial intelligence has valuable applications but also gross forms of abuse. Valuable, for example, is the use of artificial intelligence in the layout of houses and neighborhoods, taking into account ease of use, views and sunlight, with AI technology from Spacemaker, or measuring noise in the center of Genk using Nokia's Scene Analytics technology. Reprehensible is how the police in the US discriminate against population groups with programs such as PredPol, and how the Dutch government acted in the so-called 'toelagenaffaire' (childcare benefits scandal).

Algorithms
Thanks to artificial intelligence, a computer can independently recognize patterns. Recognizing patterns as such is nothing new: it has long been possible with computer programs written for that purpose. For example, to distinguish images of dogs and cats, a programmer created an "if...then" description of all relevant characteristics of dogs and cats that enabled a computer to distinguish between pictures of the two animal species. The number of errors depended on the level of detail of the program. When it comes to more types of animals, or animals photographed from different angles, writing such a program becomes very complicated. In that case, a computer can be trained to distinguish the relevant patterns itself; this is when we speak of artificial intelligence. People still play an important role: first in writing an instruction, an algorithm, and then in composing a training set, a selection of a large number of examples, for instance of animals labeled as dog or cat and, if necessary, lion, tiger, and so on. The computer then searches 'by itself' for the associated characteristics. If there are still too many errors, new images are added.
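The contrast between a hand-written "if...then" program and a trained classifier can be sketched in a few lines of Python (the features, thresholds, and training set are invented for illustration):

```python
# Hand-written rule: the programmer encodes the distinguishing features.
def rule_based(weight_kg, ear_shape):
    # An explicit "if...then" description, written by a person.
    if weight_kg > 8 or ear_shape == "floppy":
        return "dog"
    return "cat"

# Trained rule: the computer derives the decision boundary from labeled examples.
training_set = [(3, "cat"), (4, "cat"), (5, "cat"),
                (9, "dog"), (12, "dog"), (20, "dog")]

def train_threshold(examples):
    # Pick the weight threshold that misclassifies the fewest examples.
    def errors(t):
        return sum(("dog" if w > t else "cat") != label for w, label in examples)
    return min((w for w, _ in examples), key=errors)

threshold = train_threshold(training_set)
learned = lambda w: "dog" if w > threshold else "cat"
print(threshold, learned(10))  # → 5 dog
```

The hand-written version needs the programmer to already know the distinguishing characteristics; the trained version only needs labeled examples, which is exactly what makes it attractive when the number of categories and viewpoints explodes.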

Deep learning
The way in which the animals are depicted can vary endlessly, and then it is no longer only about their characteristics, but about shadow, movement, the position of the camera or, in the case of moving images, the nature of the movement. The biggest challenge is to teach the computer to take these contextual characteristics into account as well. This is done by imitating neural networks: image recognition takes place, just as in our brains, through successive layers, ranging from distinguishing simple lines, patterns, and colors to differences in sharpness. Because of this layering, we speak of 'deep learning'. This obviously requires large datasets and a lot of computing power, but it is also a labor-intensive process.
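A minimal sketch of such a layered computation, with two hand-picked weight matrices standing in for the millions of weights a real network learns: the first layer responds to simple contrasts between neighboring 'pixels', the second combines those responses.

```python
def relu(xs):
    # Activation function: negative responses are switched off.
    return [max(0.0, v) for v in xs]

def dense(inputs, weights, biases):
    # One layer: every output neuron is a weighted sum of all inputs.
    return [sum(w * i for w, i in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

pixels = [0.0, 1.0, 1.0, 0.0]           # a tiny 'image'
# Layer 1: two neurons, each detecting a contrast between two pixels.
h = relu(dense(pixels, [[1, -1, 0, 0], [0, 0, 1, -1]], [0.0, 0.0]))
# Layer 2: one neuron combining the detected features.
out = dense(h, [[1.0, 1.0]], [0.0])
print(h, out)  # → [0.0, 1.0] [1.0]
```

Real deep networks stack dozens of such layers and learn the weights from data, but the principle is the same: each layer transforms the output of the previous one into progressively more abstract features.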

Unsupervised learning
Applying algorithms under supervision produces reliable results, and the instructor can still explain the outcome after many iterations. As situations become more complicated and several processes proceed at the same time, supervised instruction is no longer feasible. For example, when animals attack each other, surviving or not, and the computer must predict which kinds of animals have the best chance of survival under which conditions. Or think of the patterns that a car's computer must be able to distinguish to drive safely: given the almost unlimited variation, supervised learning no longer works.

In the case of unsupervised learning, the computer is fed with data from many millions of realistic situations; in the case of cars, recordings of traffic situations and the way drivers reacted to them. Here we can rightly speak of 'big data' and 'machine learning', although these terms are often used more broadly. For example, the car's computer 'learns' how and when it must stay within its lane, when it can pass, how pedestrians, cyclists or other 'objects' can be avoided, what traffic signs mean and what the corresponding action is. Teslas still pass all this data on to a data center, which distills patterns from it that regularly update the 'autopilots' of the whole fleet. In the long run, every Tesla, anywhere in the world, should recognize every imaginable pattern, respond correctly and thus guarantee the highest possible level of safety. This is evidently not yet the case, and Tesla's 'autopilot' may therefore not be used without a driver 'in control'. Nobody knows by what criteria a Tesla's algorithms work.

Unsupervised learning is also applied to the prediction of (tax) fraud, to the chance that certain people will 'make a mistake', or to the places where the risk of a crime is greatest at a certain moment. But also to the assessment of applicants and the allocation of housing. For all these purposes, the value of artificial intelligence is overestimated. Here too, the 'decisions' a computer makes are a 'black box'. Partly for this reason it is difficult, if not impossible, to trace and correct errors afterwards. This is one of the problems with the infamous 'toelagenaffaire'.

The cybernetic loop
Algorithmic decision-making is part of a new digital wave, characterized by a 'cybernetic loop' of measuring (collecting data), profiling (analyzing data) and intervening (applying data). These aspects are present in every decision-making process, but where the parties involved, politicians and representatives of the people, normally make conscious choices step by step, the entire process is now partly a black box.

The role of ethical principles

Meanwhile, concerns are growing about the way the use of artificial intelligence ignores ethical principles. This applies to nearly all principles discussed in the 9th episode: violation of privacy, discrimination, lack of transparency, and abuse of power resulting in great (partly unintentional) suffering, risks to the security of critical infrastructure, the erosion of human intelligence and the undermining of trust in society. It is therefore necessary to formulate guidelines that realign the application of artificial intelligence with these ethical principles.

An interesting impetus to this end is the publication by the Institute of Electrical and Electronics Engineers, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. The Rathenau Institute has also formulated several guidelines in various publications.

The main guidelines that can be distilled from these and other publications are:

1. Placing responsibility for the impact of the use of artificial intelligence on both those who make decisions about its application (political, organizational, or corporate leadership) and the developers. This responsibility concerns the systems used as well as the quality, accuracy, completeness, and representativeness of the data.

2. Prevent designers from (unknowingly) using their own standards when instructing learning processes. Teams with a diversity of backgrounds are a good way to prevent this.

3. To be able to trace back 'decisions' by computer systems to the algorithms used, to understand their operation and to be able to explain them.

4. To be able to scientifically substantiate the model that underlies the algorithm and the choice of data.

5. Manually verifying 'decisions' that have a negative impact on the data subject.

6. Excluding all forms of bias in the content of datasets, the application of algorithms and the handling of outcomes.

7. Accountability for the legal basis of the combination of datasets.

8. Determine whether the calculation aims to minimize false positives or false negatives.

9. Personal feedback to clients in case of lack of clarity in computerized ‘decisions’.

10. Applying the principles of proportionality and subsidiarity, which means determining on a case-by-case basis whether the benefits of using artificial intelligence outweigh the risks.

11. Prohibiting applications of artificial intelligence that pose a high risk of violating ethical principles, such as facial recognition, persuasive techniques and deep-fake techniques.

12. Revocation of legal provisions if it appears that they cannot be enforced in a transparent manner due to their complexity or vagueness.

The third, fourth and fifth guidelines must be seen in conjunction; I explain why below.
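Guideline 8, the choice between minimizing false positives or false negatives, can be made concrete with some simple arithmetic (all counts invented): the two error types correspond to the familiar precision and recall metrics, and tuning a system toward one usually costs the other.

```python
# Hypothetical fraud screening of 1,000 applications, of which 50 are fraudulent.
tp, fn = 40, 10      # fraudulent cases caught / missed (false negatives)
fp, tn = 90, 860     # honest applicants wrongly flagged (false positives) / cleared

precision = tp / (tp + fp)   # share of flagged cases that are truly fraudulent
recall = tp / (tp + fn)      # share of fraudulent cases that are caught

print(f"precision: {precision:.2f}, recall: {recall:.2f}")
```

In this invented example, 80% of fraud is caught, but fewer than a third of the people flagged are actually fraudulent: exactly the kind of trade-off that, per guideline 5, calls for manual verification of negative 'decisions' rather than automatic incrimination.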

The scientific by-pass of algorithmic decision making

When using machine learning, computers adapt and extend the algorithms themselves and combine data from different datasets. As a result, the final 'decisions' made by the computer cannot be explained. This is only acceptable once it has been proven that these decisions are 'flawless': for example when, in the case of 'self-driving' cars, they turn out to be many times safer than ordinary cars, which, by the way, is not yet the case.

Unfortunately, this was not the case in the 'toelagenaffaire' either. The fourth guideline could have provided a solution. Scientific design-oriented research can be used to reconstruct the steps of the decision-making process that determines who is entitled to an allowance. By applying this decision tree to a sufficiently large sample of cases, the (degree of) correctness of the computer's 'decisions' can be verified. If they prove correct, the criteria used in the manual calculation may be used to explain the processes in the computer's 'black box'. If there are too many deviations, the computer calculation must be rejected altogether.
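The verification procedure described here can be sketched as follows (the eligibility rule and the 'black box' are both invented stand-ins; in a real audit the black box would be the production system and the sample far larger):

```python
def transparent_rule(income, has_childcare):
    # Reconstructed, explainable decision tree for the allowance.
    return income < 30_000 and has_childcare

def black_box(income, has_childcare):
    # Stand-in for the opaque model under audit.
    return income < 30_000 and has_childcare  # happens to agree here

# A (far too small) sample of cases: (income, uses childcare).
sample = [(20_000, True), (40_000, True), (25_000, False), (28_000, True)]

agreement = sum(
    transparent_rule(i, c) == black_box(i, c) for i, c in sample
) / len(sample)
print(f"agreement on sample: {agreement:.0%}")
assert agreement > 0.95, "too many deviations: reject the computer calculation"
```

If the assertion holds, the transparent rule may serve as the explanation of the black box's behavior; if it fails, the opaque calculation should be rejected, exactly as the guideline demands.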

Governance

In the US, the use of algorithms in the public sector has fallen into disrepute, especially because of the facial recognition practices that will be discussed in the next episode. The city of New York has therefore appointed an algorithm manager, who investigates whether the algorithms used comply with ethical and legal rules. In Amsterdam, KPMG has a supervisory role. In other municipalities, we increasingly see that role fulfilled by an ethics committee.

In the European public domain, steps have already been taken to combat the excesses of algorithmic decision-making. The General Data Protection Regulation (GDPR), which came into effect in 2018, has significantly improved privacy protection. In April 2019, the European High-Level Expert Group on AI published ethical guidelines for the application of artificial intelligence. In February 2020, the European Commission followed with guidelines of its own, including the White Paper on Artificial Intelligence and a proposal for an AI regulation. The Dutch government, for its part, adopted the national digitization strategy, the Strategic Action Plan for AI and the policy letter on AI, human rights, and public values.

I realize that binding governments and their executive bodies to ethical principles is grist to the mill of those who flout those principles. That is why the search for legitimate ways to use artificial intelligence to detect crime, violations or abuse of subsidies, and for many other applications, continues to deserve broad support.

Follow the link below to find one of the previous episodes or see which episodes are next, and this one for the Dutch version.

Herman van den Bosch's picture #DigitalCity