Herman van den Bosch

Highlight from Herman van den Bosch, professor in management development, posted

The first part of the series 'Better cities - The role of technology' is online


Six weeks ago, I started a new weekly series on how digitization can contribute to the development of better cities and their surroundings. Technology alone cannot achieve this goal. Far-reaching social and economic reforms are needed, also to ensure that the benefits of digitization are shared by everyone.

Below you will find links to the articles published so far:

Part A: Digital technology as a challenge

1. Prologue to a new series: Better cities. The role of digital technologies

2. Scare-off the monster behind the curtain: Big Tech’s monopoly

3. Ten years of smart city technology marketing

4. Digital social innovation: For the social good (and a moonshot)

5. Collect meaningful data and stay away from dataism

6. The Boston Smart City Playbook

7. The Future of Urban Tech. A project of Cornell University in New York

Next up:
Part B: Digital instruments and ethics

8. Digital technology and the urban sustainability agenda. A frame

9. Ethical principles for digital technology

10. Accessibility, software, digital infrastructure, and data. The quest for ethics

11. Ethical principles and artificial intelligence

12. Ethical principles and applications of digital technology

13. Amsterdam benchmarked

14. ‘Agenda stad’ and digital instruments

Part C: Applications
15. Artificial intelligence abused
16. Government: services and participation
17. Mobility
18. Circular economy: Construction
19. Circular economy: Waste
20. Resilience
21. Energy transition
22. Health
23. Smart cities from scratch
24. Epilogue

Links to the Dutch versions can be found below:

Herman van den Bosch, professor in management development, posted

Ethical principles and artificial intelligence


In the 11th episode of the series Better cities: The contribution of digital technology, I will apply the ethical principles from episode 9 to the design and use of artificial intelligence.

First, I will briefly summarize the main features of artificial intelligence, such as big data, algorithms, deep learning, and machine learning. For those who want to know more: Radical Technologies by Adam Greenfield (2017) is a very readable introduction, also regarding technologies such as blockchain, augmented and virtual reality, the Internet of Things, and robotics, which will be discussed in upcoming episodes.

Artificial intelligence

Artificial intelligence has valuable applications but also gross forms of abuse. Valuable, for example, is the use of artificial intelligence in the layout of houses and neighborhoods, taking into account ease of use, views, and sunlight with AI technology from Spacemaker, or measuring noise in the center of Genk using Nokia's Scene Analytics technology. It is reprehensible how the police in the US discriminate against population groups with programs such as PredPol and how the Dutch government acted in the so-called 'toelagenaffaire' (the benefits scandal).

Algorithms
Thanks to artificial intelligence, a computer can independently recognize patterns. Recognizing patterns as such is nothing new; this has long been possible with computer programs written for that purpose. For example, to distinguish images of dogs and cats, a programmer created an "if ... then" description of all relevant characteristics of dogs and cats that enabled a computer to distinguish between pictures of the two animal species. The number of errors depended on the level of detail of the program. When more types of animals are involved, photographed from different angles, writing such a program becomes very complicated. In that case, a computer can be trained to distinguish the relevant patterns itself, and we speak of artificial intelligence. People still play an important role in this: first by writing an instruction - an algorithm - and then by composing a training set, a selection of a large number of examples, for instance of animals labeled as dog or cat and, if necessary, lion, tiger, and so on. The computer then searches 'itself' for the associated characteristics. If there are still too many errors, new images are added.
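
To make the difference between a hand-written "if ... then" program and a trained classifier concrete, here is a minimal sketch, assuming Python with scikit-learn; the features, thresholds, and labels are made-up illustrations, not anything from the article.

```python
# Minimal sketch (illustration only): a hand-written "if ... then" rule versus
# a classifier that learns distinguishing characteristics from labeled examples.
from sklearn.tree import DecisionTreeClassifier

def rule_based_label(weight_kg, pointed_ears):
    # The programmer spells out every relevant characteristic in advance.
    if weight_kg > 10 and not pointed_ears:
        return "dog"
    return "cat"

# Learned alternative: supply a labeled training set and let the model
# find the associated characteristics itself.
X_train = [[30, 0], [4, 1], [12, 0], [3, 1]]   # [weight_kg, pointed_ears]
y_train = ["dog", "cat", "dog", "cat"]

model = DecisionTreeClassifier().fit(X_train, y_train)
print(rule_based_label(8, False), model.predict([[8, 0]])[0])
```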

Deep learning
The way in which the animals are depicted can vary endlessly, and then it is no longer about their characteristics but about shadow effects, movement, the position of the camera or, in the case of moving images, the nature of the movement. The biggest challenge is to teach the computer to take these contextual characteristics into account as well. This is done by imitating neural networks. Image recognition takes place, just as in our brains, in successive layers, ranging from the detection of simple lines, patterns, and colors to differences in sharpness. Because of this layering, we speak of 'deep learning'. This obviously involves large data sets and a lot of computing power, but it is also a labor-intensive process.
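
As a rough illustration of such layering, here is a minimal sketch assuming PyTorch; the layer sizes are arbitrary and the network is untrained, it only shows how recognition is built up from stacked layers.

```python
# Minimal sketch, assuming PyTorch: a small convolutional network whose stacked
# layers illustrate the "deep" in deep learning. Sizes are arbitrary.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early layers: simple lines and colors
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper layers: more complex patterns
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),                   # final scores, e.g. "cat" vs "dog"
)

scores = model(torch.randn(1, 3, 64, 64))         # one 64x64 RGB image
print(scores.shape)                                # torch.Size([1, 2])
```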

Unsupervised learning
Learning to apply algorithms under supervision produces reliable results, and the instructor can still explain the result after many iterations. As situations become more complicated and different processes proceed at the same time, guided instruction is no longer feasible. Think, for example, of animals attacking each other and surviving or not, where the computer must predict which kinds of animals have the best chance of survival under which conditions. Or think of the patterns that a car's computer must be able to distinguish to drive safely: because of the almost unlimited variation, supervised learning no longer works.

In the case of unsupervised learning, the computer is fed with data from many millions of realistic situations, in the case of cars recordings of traffic situations and the way drivers reacted to them. Here we can rightly speak of 'big data' and 'machine learning', although these terms are often used more broadly. For example, the car's computer 'learns' how and when it must stay within the lanes, when it can pass, how pedestrians, cyclists or other 'objects' can be avoided, what traffic signs mean and what the corresponding action is. Teslas still pass all this data on to a data center, which distills patterns from it that regularly update the 'autopilots' of the whole fleet. In the long run, every Tesla, anywhere in the world, should recognize every imaginable pattern, respond correctly and thus guarantee the highest possible level of safety. This is apparently not the case yet, and Tesla's 'autopilot' may therefore not be used without a driver 'in control' being present. Nobody knows by what criteria a Tesla's algorithms work.

Unsupervised learning is also applied in the prediction of (tax) fraud, in estimating the chance that certain people will 'make a mistake', or in determining in which places the risk of a crime is greatest at a certain moment, but also in the assessment of applicants and the allocation of housing. For all these purposes, the value of artificial intelligence is overestimated. Here too, the 'decisions' that a computer makes are a 'black box'. Partly for this reason, it is difficult, if not impossible, to trace and correct any errors afterwards. This is one of the problems with the infamous 'toelagenaffaire'.
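
As a purely illustrative sketch of finding patterns without labels (assuming Python with NumPy and scikit-learn), the toy example below clusters random stand-in records; real fraud-prediction or fleet-learning systems are of course vastly more complex.

```python
# Toy illustration of unsupervised pattern discovery: the records are random
# stand-ins for the millions of cases or traffic situations mentioned above.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
records = rng.normal(size=(10_000, 5))         # 10,000 records, 5 features each

model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(records)
print(model.labels_[:10])                       # the cluster assigned to each record
```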

The cybernetic loop
Algorithmic decision-making is part of a new digital wave, characterized by a 'cybernetic loop' of measuring (collecting data), profiling (analyzing data) and intervening (applying data). These aspects are also present in every ordinary decision-making process, but there the parties involved, politicians and representatives of the people, make conscious choices step by step, whereas now the entire process is partly a black box.
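
A minimal sketch of this loop in Python (my illustration, not an actual city system) makes the three steps explicit; all names and thresholds are invented.

```python
# The cybernetic loop as three functions: measure -> profile -> intervene.
def measure():
    """Collect data, e.g. sensor readings or case records."""
    return [{"id": 1, "noise_db": 72}, {"id": 2, "noise_db": 55}, {"id": 3, "noise_db": 81}]

def profile(records, threshold=70):
    """Analyze the data: flag records that exceed a threshold."""
    return [r for r in records if r["noise_db"] > threshold]

def intervene(flagged):
    """Apply the analysis: trigger an action for each flagged record."""
    for r in flagged:
        print(f"location {r['id']}: dispatch inspection ({r['noise_db']} dB)")

intervene(profile(measure()))   # one pass through the loop
```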

The role of ethical principles

Meanwhile, concerns are growing about the way the use of artificial intelligence ignores ethical principles. This applies to nearly all principles discussed in the 9th episode: violation of privacy, discrimination, lack of transparency, and abuse of power resulting in great (partly unintentional) suffering, risks to the security of critical infrastructure, the erosion of human intelligence and the undermining of trust in society. It is therefore necessary to formulate guidelines that realign the application of artificial intelligence with these ethical principles.

An interesting impetus to this end is given in the publication of the Institute of Electrical and Electronics Engineers (IEEE), Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. The Rathenau Institute has also published several guidelines in various publications.

The main guidelines that can be distilled from these and other publications are:

1. Placing responsibility for the impact of the use of artificial intelligence on both those who make decisions about its application (political, organizational, or corporate leadership) and the developers. This responsibility concerns the systems used as well as the quality, accuracy, completeness, and representativeness of the data.

2. Prevent designers from (unknowingly) using their own standards when instructing learning processes. Teams with a diversity of backgrounds are a good way to prevent this.

3. To be able to trace back 'decisions' by computer systems to the algorithms used, to understand their operation and to be able to explain them.

4. To be able to scientifically substantiate the model that underlies the algorithm and the choice of data.

5. Manually verifying 'decisions' that have a negative impact on the data subject.

6. Excluding all forms of bias in the content of datasets, the application of algorithms and the handling of outcomes.

7. Accountability for the legal basis of the combination of datasets.

8. Determine whether the calculation aims to minimize false positives or false negatives (a short sketch of this trade-off follows the list).

9. Personal feedback to clients in case of lack of clarity in computerized ‘decisions’.

10. Applying the principles of proportionality and subsidiarity, which means determining on a case-by-case basis whether the benefits of using artificial intelligence outweigh the risks.

11. Prohibiting applications of artificial intelligence that pose a high risk of violating ethical principles, such as facial recognition, persuasive techniques and deep-fake techniques.

12. Revocation of legal provisions if it appears that they cannot be enforced in a transparent manner due to their complexity or vagueness.
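
The trade-off mentioned in guideline 8 can be made tangible with a small sketch (my example, using invented toy data): the same set of predictions produces both wrongly accused people (false positives) and wrongly cleared ones (false negatives), and deciding which of the two to minimize is a policy choice rather than a technical one.

```python
# Toy illustration of guideline 8: counting false positives and false negatives.
def confusion_counts(y_true, y_pred):
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # wrongly accused
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # wrongly cleared
    return fp, fn

y_true = [0, 0, 1, 1, 0, 1]   # 1 = actual fraud, 0 = no fraud (invented data)
y_pred = [1, 0, 1, 0, 0, 1]   # the system's output

fp, fn = confusion_counts(y_true, y_pred)
print(f"false positives: {fp}, false negatives: {fn}")
```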

The third, fourth, and fifth guidelines must be seen in conjunction. I explain why below.

The scientific by-pass of algorithmic decision making

When using machine learning, computers themselves adapt and extend the algorithms and combine data from different data sets. As a result, the final 'decisions' made by the computer cannot be explained. This is only acceptable after it has been proven that these decisions are 'flawless', for example because 'self-driving' cars turn out to be many times safer than ordinary cars, which, by the way, is not yet the case.

Unfortunately, this was not the case in the 'toelagenaffaire' either. The fourth guideline could have provided a solution. Scientific design-oriented research can be used to reconstruct the steps of a decision-making process that determines who is entitled to receive an allowance. By applying this decision tree to a sufficiently large sample of cases, the (degree of) correctness of the computer's 'decisions' can be verified. If they prove correct, then the criteria used in the manual calculation may be used to explain the processes in the computer's 'black box'. If there are too many deviations, then the computer calculation must be rejected altogether.
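
A minimal sketch of this verification idea (my illustration, not the procedure used in practice): rebuild the eligibility rules as an explicit, explainable function, apply it to a sample of cases, and measure how often the opaque system agrees. The eligibility rule itself is invented.

```python
# Illustrative verification of a black-box system against a reconstructed rule.
def reconstructed_rule(case):
    """Explicit, explainable reconstruction of the decision process (invented)."""
    return case["income"] < 30_000 and case["children"] > 0

def black_box(case):
    """Stand-in for the opaque system; in reality only its outputs are observable."""
    return case["income"] < 30_000 and case["children"] > 0

sample = [
    {"income": 25_000, "children": 2},
    {"income": 40_000, "children": 1},
    {"income": 28_000, "children": 0},
]

agreement = sum(reconstructed_rule(c) == black_box(c) for c in sample) / len(sample)
print(f"agreement on sample: {agreement:.0%}")   # too low => reject the computer calculation
```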

Governance

In the US, the use of algorithms in the public sector has fallen into disrepute, especially because of the facial recognition practices that will be discussed in the next episode. The city of New York has therefore appointed an algorithm manager, who investigates whether the algorithms used comply with ethical and legal rules. In Amsterdam, KPMG has a supervisory role. In other municipalities, we see that role fulfilled more and more often by an ethics committee.

In the European public domain, steps have already been taken to combat the excesses of algorithmic decision-making. The General Data Protection Regulation (GDPR), which came into effect in 2018, has significantly improved privacy protection. In April 2019, the European High-Level Expert Group on AI published ethical guidelines for the application of artificial intelligence. In February 2020, the European Commission also established such guidelines, including in the White Paper on Artificial Intelligence and an AI regulation. The Dutch government, in turn, adopted the national digitization strategy, the Strategic Action Plan for AI and the policy letter on AI, human rights, and public values.

I realize that binding governments and their executive bodies to ethical principles is grist to the mill for those who flout those principles. Therefore, the search for the legitimate use of artificial intelligence to detect crime, violations or abuse of subsidies and many other applications continues to deserve broad support.

Follow the link below to find one of the previous episodes or see which episodes are next, and this one for the Dutch version.

Herman van den Bosch, professor in management development, posted

10 Accessibility, software, digital infrastructure, and data. The quest for ethics


The 10th episode in the series Better cities: The contribution of digital technology deals with the impact of ethical principles on four pillars of digitization: accessibility, software, infrastructure and data.

In the previous episode, I discussed design principles - guidelines and values - for digital technology. The Rathenau Instituut report Opwaarderen - Borgen van publieke waarden in de digitale samenleving (Upgrading: safeguarding public values in the digital society) concludes that government, industry, and society still make insufficient use of these principles. Below, I will consider their impact on four pillars of digitization: accessibility, software, infrastructure, and data. The next episodes will focus on their impact on frequently used technologies.

Accessibility

Accessibility refers to the availability of high-speed Internet for everyone. This goes beyond just technical access. It also means that a municipality ensures that digital content is understandable and that citizens can use the options offered. Finally, everyone should have a working computer.

Free and safe Internet for all residents is a valuable amenity, including Wi-Fi in public areas. Leaving the latter to private providers such as the LinkNYC advertising kiosks in New York, which are popping up in other cities as well, is a bad thing. Companies such as Sidewalk Labs tempt municipalities by installing these kiosks for free. They are equipped with sensors that collect a huge amount of data from every device that connects to the Wi-Fi network: Not only the location and the operating system, but also the MAC address. With the help of analytical techniques, the route taken can be reconstructed. Combined with other public data from Facebook or Google, they provide insight into personal interests, sexual orientation, race, and political opinion of visitors.

The huge Internet that connects everything and everyone also raises specters, related to privacy uncertainties and forms of abuse, which have even included the hacking of equipment that regulates a person's heartbeat.

That is why there is a wide search for alternatives. Worldwide, P2P neighborhood initiatives are setting up private networks. Many of these are part of The Things Network. Instead of Wi-Fi, this network uses a protocol called LoRaWAN. Robust end-to-end encryption means that users don't have to worry about securing wireless hotspots, mobile data plans, or faltering Wi-Fi connectivity. The Things Network manages thousands of gateways, provides coverage to millions of people and offers a suite of open tools that enable citizens and entrepreneurs to build IoT applications that are low-cost, secure, and easy to scale.
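
As an impression of how such open tools are used, here is a hedged sketch that subscribes to uplink messages of a The Things Network application over MQTT with the paho-mqtt library; the application id, API key, host and topic format are placeholder assumptions and should be taken from TTN's own documentation.

```python
# Hedged sketch: reading device uplinks from a The Things Network application
# via MQTT. Application id, API key, host and topic are placeholder assumptions.
import json
import paho.mqtt.client as mqtt

APP_ID = "my-neighborhood-sensors"        # hypothetical application id
API_KEY = "REPLACE_WITH_API_KEY"

def on_message(client, userdata, message):
    payload = json.loads(message.payload)
    print(message.topic, payload.get("uplink_message", {}).get("decoded_payload"))

client = mqtt.Client()
client.username_pw_set(f"{APP_ID}@ttn", API_KEY)
client.tls_set()
client.on_message = on_message
client.connect("eu1.cloud.thethings.network", 8883)   # assumed public cluster address
client.subscribe(f"v3/{APP_ID}@ttn/devices/+/up")     # all device uplinks (assumed topic)
client.loop_forever()
```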

Software

Computer programs provide diverse applications, ranging from word processing to management systems. Looking for solutions that best fit the guidelines and ethical principles mentioned in the previous episode, we quickly arrive at open-source software, as opposed to proprietary products from commercial providers. Not that the latter are objectionable per se, or that they are always more expensive. The most important thing to pay attention to is interchangeability (interoperability) with products from other providers, to prevent being unable to get rid of them (lock-in).

Open-source software offers advantages over proprietary solutions, especially if municipalities encourage city-wide use. Barcelona is leading the way in this regard. The city aims to fully self-manage its ICT services and radically improve digital public services, including privacy by design. This results in data sovereignty and in the use of free software, open data formats, open standards, interoperability and reusable applications and services.

Anyone looking for open-source software cannot ignore the FIWARE community, which is similar in organization to Linux, consists of companies, start-ups, and freelance developers, and originated from an initiative of the EU. FIWARE provides open and sustainable software built around public, royalty-free, and implementation-driven standards.

Infrastructure

Computers are no longer the largest group of components of the digital infrastructure. Their number has been surpassed by so-called ubiquitous sensor networks (USN): smart meters, CCTV, microphones, and other sensors. Sensor networks have the most diverse tasks: they monitor the environment (air quality, traffic density, unwanted visitors) and they are built into machines, trains, and cars, and even into people, to transmit information about the functioning of vital components. Mike Matson calculated that by 2050 a city of 2 million inhabitants will have as many as a billion sensors, all connected by millions of kilometers of fiber-optic cable or via Wi-Fi to data centers and carrier hotels (nodes where private networks converge), and eventually to the Internet.

This hierarchically organized cross-linking is at odds with the guidelines and ethical principles formulated in the previous post. Internet criminals are given free rein, and data breaches can spread like wildfire, as can denial-of-service (DoS) attacks. In addition, energy consumption is enormous, even apart from blockchain. Edge computing is a viable alternative: data is processed locally and only results are uploaded on demand. This applies to sensors and mobile phones, and possibly to automated cars as well. A good example is the Array of Things initiative in Chicago, which will ultimately include 500 sensors, installed in consultation with the population. The data is stored in each sensor separately and can be consulted online when needed, usually involving several sensors and only part of the data. Federated data systems are comparable: data is stored in a decentralized way, but authorized users can use all of it thanks to user interfaces.
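
A minimal sketch of the edge-computing principle (my illustration, not the Array of Things software): raw readings stay on the device, and only an aggregate is handed over on request.

```python
# Illustrative edge node: raw data never leaves the device; only summaries do.
import statistics

class EdgeSensor:
    def __init__(self):
        self._readings = []               # raw readings stay local

    def record(self, value):
        self._readings.append(value)

    def summary(self):
        """The only thing a data center may request: an aggregate result."""
        return {"count": len(self._readings),
                "mean": round(statistics.mean(self._readings), 1)}

node = EdgeSensor()
for value in [41.2, 39.8, 44.5]:          # e.g. noise or air-quality readings
    node.record(value)

print(node.summary())                      # {'count': 3, 'mean': 41.8}
```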

Data

There is a growing realization that when it comes to data, not only quantity, but also quality counts. I will highlight some aspects.

Access to data
Personal data should only be available with permission from the owner. To protect this data, the EU project Decode proposes that owners manage their data via blockchain technology. Many cities now have privacy guidelines, but only a few conduct privacy impact assessments as part of their data policy (p.18).

Quality
There is growing evidence that much of the data used as 'training sets' in artificial intelligence is flawed. This had already become painfully clear from facial recognition data in which minority groups are disproportionately represented. New research shows that this is also true in the field of healthcare. This involves data cascades: a sum of successive errors, the consequences of which only become clear after some time. Data turned out to be irrelevant, incomplete, incomparable, and even manipulated.

Data commons
Those for whom high-quality data is of great importance will pay extra attention to its collection. In this case, initiating a data commons is a godsend. Commons are shared resources managed by empowered communities on the basis of mutually agreed and enforced rules. An example is the Data and Knowledge Hub for Healthy Urban Living (p.152), in which governments, companies, environmental groups and residents collect data for the development of a healthy living environment, using a federated data system. These groups are not only interested in the data itself, but also in the impact of its application.

Open data
Many cities apply the 'open by default' principle and make most of their data public, although user-friendliness and efficiency sometimes leave something to be desired. Various open-source data management systems are available for building such portals. One of the most prominent is CKAN, administered by the Open Knowledge Foundation. It contains tools for managing, publishing, finding, using, and sharing data collections. It offers an extensive search function and makes it possible to view data in the form of maps, graphs, and tables. There is an active community of users who continue to develop the system and adapt it locally.
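
To give an impression of how such portals are queried, here is a hedged sketch using the CKAN Action API with the requests library; the portal URL is a placeholder, and individual portals may restrict or extend the API.

```python
# Hedged sketch: searching datasets on a CKAN portal via its Action API.
# The portal URL is a placeholder; check the portal's own documentation.
import requests

PORTAL = "https://demo.ckan.org"           # replace with a city's own portal
resp = requests.get(f"{PORTAL}/api/3/action/package_search",
                    params={"q": "air quality", "rows": 5}, timeout=10)
result = resp.json()["result"]

print(result["count"], "datasets found")
for dataset in result["results"]:
    print("-", dataset["title"])
```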

To make the data accessible, some cities also offer training courses and workshops. Barcelona's Open Data Challenge is an initiative for secondary school students that introduces them to the city's vast data collection.

Safety
As the size of the collected data, the number of entry points and the connectivity on the Internet increase, the security risks also become more severe. Decentralization, through edge computing and federated storage with blockchain technology, certainly contributes to security, but there is still a long way to go. Only half of the cities have a senior policy officer in this area. Techniques for authentication, encryption and signing, which together form the basis for attribute-based identity, are applied only incidentally. This involves determining identity on the basis of several characteristics of a user, such as function and location. Something completely different is Me and my shadow, a project that teaches Internet users to minimize their own trail and thus their exposure to Internet criminality.

There is still a long way to go before the guidelines and ethical principles mentioned in the previous episode are sufficiently met. I emphasize again that concepts such as 'big data', 'data-oriented policy' and the sheer size of data sets should not be over-accentuated. Instead, it is advisable to re-examine the foundations of scientific research. First and foremost comes knowledge of the domain (1), resulting in research questions (2), followed by the choice of an appropriate research method (3), defining the type of data to be collected (4), the collection of these data (5), and finally their statistical processing to find evidence for substantiated hypothetical connections (6). The discussion of machine learning in the next episode will reveal that automatic processing of large data sets is mainly about discovering statistical connections, and that can have dire consequences.

Follow the link below to find one of the previous episodes or see which episodes are next, and this one for the Dutch version.

Herman van den Bosch, professor in management development, posted

Policy guidelines and ethical principles for digital technology


The 9th episode of the series Building sustainable cities: the contribution of digital technology deals with guidelines and related ethical principles that apply to the design and application of digital technology.

'One thing that keeps me awake at night is the speed at which artificial intelligence is developing and the lack of rules for its development and application', said Aleksandra Mojsilović, director of IBM Science for Social Good. The European Union has a strong focus on regulations to ensure that technology is people-oriented, ensures a fair and competitive digital economy and contributes to an open, democratic and sustainable society. This involves more than legal frameworks; it also concerns political choices, ethical principles, and the responsibilities of the profession. That is what this post is about.

Politicians are ultimately responsible for the development, selection, application and use of (digital) technology. In this respect, a distinction must be made between:
• Coordinating digital instruments with the vision on the development of the city.
• Drawing up policy guidelines for digitization in general.
• Recognizing related ethical principles alongside these policy guidelines.
• Creating the conditions for democratic oversight of the application of digital technology.
• Appealing to the responsibility of the ICT profession.

Guidelines for digitization policy

In the previous post I emphasized that the digital agenda must result from the urban policy agenda and that digital instruments and other policy instruments must be seen in mutual relation.
Below are five additional sets of guidelines for digitization policy formulated by the G20 Global Smart Cities Alliance. Thirty-six cities are involved in this initiative, including Apeldoorn as the only Dutch municipality. The cities involved will elaborate these guidelines soon. In the meantime, I have listed some examples.

Equity, inclusiveness, and social impact
• Enabling every household to use the Internet.
• Making information and communication technology (ICT) and digital communication with government accessible to all, including the physically/mentally disabled, the elderly and immigrants with limited command of the local language.
• Assessing the impact of current digital technology on citizens and anticipating the future impact of this policy.
• Facilitating regular education and institutions for continuous learning to offer easily accessible programs to develop digital literacy.
• Challenging neighborhoods and community initiatives to explore the supportive role of digital devices in their communication and actions.

Security and resilience
• Developing a broadly supported vision on Internet security and its consequences.
• Mandatory compliance with rules and standards (for instance regarding IoT) to protect digital systems against cyberthreats.
• Becoming resilient against cybercrime by developing back-up systems that can seamlessly take over services if necessary.
• Building resilience against misuse of digital communication, especially bullying, intimidation and threats.
• Reducing the multitude of technology platforms and standards, to limit entry points for cyber attackers.

Privacy and transparency
• The right to move and stay in cities without being digitally surveilled, except in case of law enforcement with legal means.
• Establishing rules in a democratic manner for the collection of data from citizens in the public space.
• Minimal collection of data by cities, limited to what is needed to deliver services.
• Citizens' right to control their own data and to decide which ones are shared and under which circumstances.
• Using privacy impact assessment as a method for identifying, evaluating, and addressing privacy risks by companies, organizations, and the city itself.

Openness and interoperability
• Providing depersonalized data from as many organizations as possible to citizens and organizations, as a reliable evidence base to support policy and to create open markets for interchangeable technology.
• Public registration of devices, their ownership, and their aim.
• Choosing adequate data architecture, including standards, agreements, and norms to enable reuse of digital technology and to avoid lock-ins.

Operational and financial sustainability
• Ensuring a safe and well-functioning Internet.
• A coordinated approach ('dig once') to the construction and maintenance of digital infrastructure, including Wi-Fi, wired technologies and the Internet of Things (IoT).
• Exploring first, whether the city can develop and manage required technology by itself, before turning to commercial parties.
• Cities, companies, and knowledge institutions share data and cooperate in a precompetitive way on innovations for mutual benefit.
• Digital solutions must be feasible: results are achieved within an agreed time and budget.

Ethical Principles

The guidelines formulated above partly converge with the ethical principles that underlie digitization according to the Rathenau Institute. Below, I will summarize these principles.

Privacy
• Citizens' right to dispose of their own (digital) data, collected by the government, companies and other organizations.
• Limitation of the data to be collected to those that are functionally necessary (privacy by design), which also prevents improper use.
• Data collection in the domestic environment only after personal permission and in the public environment only after consent by the municipal council.

Autonomy
• The right to decide about information to be received.
• The right to reject or consent to independent decision making by digital devices in the home.
• No filtering of information except in case of instructions by democratically elected bodies.

Safety
• Ensuring protection of personal data and protection against identity theft through encryption and biometric recognition.
• Preventing unwanted manipulation of devices by unauthorized persons.
• Providing adequate warnings against risks by providers of virtual reality software.
• Securing the exchange of data.

Public oversight
• Ensuring public participation in policy development related to digitization.
• Providing transparency of decision-making by algorithms and the opportunity to influence these decisions through human intervention.
• Decisions taken by autonomous systems should always include an explanation of the underlying considerations and provide the option to appeal against them.

Human dignity
• Using robotics technology mainly for routine, dangerous, and dirty work, preferably under the supervision of human actors.
• Informing human actors who collaborate with robots about the foundations of their operation and the possibilities to influence them.

Justice
• Ensuring equal opportunities, accessibility, and benefits for all when applying digital systems.
• If autonomous systems are used to assess personal situations, the result is always checked for its fairness, certainty, and comprehensibility for the receiving party.
• In the case of autonomous analysis of human behavior, the motives on which an assessment has taken place can be checked by human intervention.
• Employees in the gig economy have an employment contract or an income as self-employed in accordance with legal standards.

Power relations
• The possibility of updating software as long as equipment is still usable, even if certain functionalities are no longer available.
• Companies may not use their monopoly position to antagonize other companies.
• Ensuring equal opportunities, accessibility, and benefits for all when applying digital systems.

The above guidelines and ethical principles partly overlap. Nevertheless, I have not combined them, as they represent different frames of reference that are often referred to separately. The policy guidelines are particularly suitable for assessing digitization policy as a whole; the ethical principles are especially useful when assessing individual technologies. That is why I will use the latter in the following episodes.
In discussing the digitalization strategy of Amsterdam and other municipalities in later episodes, I will use a composite list of criteria, based on both the above guidelines and ethical principles. This list, titled 'Principles for a socially responsible digitization policy' can already be viewed HERE.

Democratic oversight

Currently, many municipalities still lack sufficient competencies to supervise the implementation and application of the guidelines and principles mentioned above. Moreover, they are involved as a party themselves. Therefore, setting up an independent advisory body for this purpose is desirable. In the US, almost every city now has a committee for public oversight of digitization. These committees are strongly focused on the role of the police, in particular practices related to facial recognition and predictive policing.
Several cities in the Netherlands have installed an ethics committee, which is a good initiative. I would also have such a committee supervise the aforementioned policy guidelines and not just the ethical principles. According to Bart Wernaart, lecturer in Moral Design Strategy at Fontys University of Applied Sciences, such a committee must be involved in digitization policy at an early stage, and it should also learn from past mistakes in the field of digitization.
The latter is especially necessary because, as the Dutch Data Protection Authority writes, the identity of an ethically responsible city is not set in stone. The best way to connect ethical principles and practice is to debate and question the implications of policy in practice.

Experts’ own responsibility

A mature professional group has its own ethical principles, monitors their implementation, and sanctions members who violate them. In this respect, the medical world is most advanced. As far as I know, the ICT profession has not yet formulated its own ethical principles. This has been done, for example, by the Institute of Electrical and Electronics Engineers (IEEE) in the field of artificial intelligence. Sarah Hamid notes that data scientists are generally concerned more with the abstract score metric of their models than with the direct and indirect impact these can have on the world. Yet experts often understand the unforeseen implications of government policy earlier than politicians do. Hamid addresses the implications for professional action: 'If computer scientists really hoped to make a positive impact on the world, then they would need to start asking better questions.' The road to technology implementation by governments is paved with failures. Professionals have often seen this coming, but have rarely sounded a warning, afraid of losing an assignment. Self-confident professionals must therefore say 'no' much more often to a job description. Hamid: 'Refusal is an essential practice for anyone who hopes to design sociotechnical systems in the service of justice and the public good.' This might even result in a better relationship with the client and more successful projects.

Establishing policy guidelines and ethical principles for municipal digitization requires a critical municipal council and an ethics committee with relevant expertise. But it also needs professionals who carry out the assignment and enter the debate if necessary.

The link below opens a preliminary overview of the already published and upcoming articles in the series Building sustainable cities: the contribution of digital technology. Click HERE for the Dutch version.

Herman van den Bosch, professor in management development, posted

Digital technology and the urban sustainability agenda. A frame


The eighth episode in the series Better cities - The contribution of digital technology provides a frame to seamlessly integrate the contribution of (digital) technology into urban policy. The Dutch versions of this and already published posts are here.

From the very first publication on smart cities (1992) to the present day, solving urban problems has been cited as a motive for the application of (digital) technology. However, this relationship is anything but obvious. Think of the discriminatory effect of the use of artificial intelligence by the police in the US – to which I will come back later – and of the misery caused in the allowance affair (toelagenaffaire) in the Netherlands.

The choice and application of (digital) technology should therefore be part of a careful and democratic process, in which priorities are set and resources are weighed. See also the article by Jan-Willem Wesselink and Hans Dekker, Smart city enhances quality of life and puts citizen first (p.15). Below, I propose a frame for such a process, on which I will build in the next five posts.

My proposal is an iterative process in which three clusters of activities can be distinguished:
• Developing a vision of the city
• The development and choice of objectives
• The instrumentation of the objectives

Vision of the city

The starting point for a democratic urban policy is a broadly supported vision of the city and its development. Citizens and other stakeholders must be able to identify with this vision, and their voices must have been heard. The vision of the city is the result of a multitude of opposing or conflicting insights, wishes and interests. Balancing the power differences between the parties involved is a precondition for making the city more just, inclusive, and democratic, and its residents happier.

The concept of the donut economy is the best framework I know of for developing a vision of such a city. It has been elaborated by the British economist Kate Raworth in a report entitled A Safe and Just Space for Humanity. The report takes the simultaneous application of social and environmental sustainability as the principle for policy.

If you look at a donut, you see a small circle in the middle and a larger circle on the outside. The small circle represents 12 principles of social sustainability (basic needs), which are in line with the UN's development goals. The larger circle represents 9 principles of the earth's long-term self-sustaining capacity. A table with both types of principles can be viewed here. Human activities in cities must not overshoot the ecological ceiling, thereby harming the earth's self-sustaining capacity. At the same time, these activities must not fall short of the city's social foundation, harming its long-term well-being. Between the two circles, a safe and just space for humanity - now and in the future - is created. These principles relate both to the city itself and to its impact on the rest of the world. Based on these principles, a city can determine in which areas it falls short of the social foundation, think of housing or gender equality, and where it overshoots the ecological ceiling, for instance in the case of greenhouse gas emissions.
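
The logic of the donut can be summarized in a few lines of code (my illustration of the idea, not Raworth's model); the indicator names and thresholds are invented examples.

```python
# Illustrative donut check: compare a city's indicators against a social floor
# and an ecological ceiling. All values are invented.
social_floor = {"housing_adequacy": 0.90, "gender_pay_ratio": 1.00}
ecological_ceiling = {"co2_tonnes_per_capita": 2.0}

city = {"housing_adequacy": 0.80, "gender_pay_ratio": 0.95, "co2_tonnes_per_capita": 5.1}

shortfalls = [k for k, floor in social_floor.items() if city[k] < floor]
overshoots = [k for k, ceiling in ecological_ceiling.items() if city[k] > ceiling]

print("falls short of the social foundation on:", shortfalls)
print("overshoots the ecological ceiling on:", overshoots)
```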

Amsterdam went through this process together with Kate Raworth. During interactive sessions, a city donut was created, in which citizens from seven different neighborhoods, civil servants and politicians took part. The Amsterdam city donut is worth exploring closely.

The urban donut provides a broad vision of urban development, in particular because of its reference to both social and ecological principles and to the city's global footprint. The first version is certainly not the final one. It is obvious that Amsterdam struggled with describing the impact of the international dimension.

The formulation of desired objectives

Politicians and citizens will name the most important bottlenecks in their city, even without the city donut. For Amsterdam these are themes like the waste problem, the climate transition, reduction of car use, affordable housing, and inclusion. The Amsterdam donut invites us to look at these problems from multiple perspectives: the wide range of social implications, the ecological impact, and the international dimension. This lays the foundation for the formulation of objectives.

Five steps can be distinguished in the formulation of objectives:
• Determine where the most important bottlenecks are located for each of the selected themes, partly based on the city donut (problem analysis), for example insufficient greenery in the neighborhoods.
• Collect data on the existing situation about these bottlenecks. For example, the fact that working-class neighborhoods have four times fewer trees per hectare than middle-class neighborhoods.
• Make provisional choices about the desired improvement of these bottlenecks. For example, doubling the number of trees in five years.
• Formulate the way in which the gap between existing and desired situation can be bridged. For example, replacing parking spaces with trees or facade vegetation.
• Formulate (provisional) objectives.

This process also takes place together with stakeholders. More than 100 people were involved in the development of the circular economy plans in Amsterdam, mainly representatives of the municipalities, companies, and knowledge institutions.

Prioritizing objectives and their instrumentation

Given the provisional objectives, the search can begin for available and desirable resources, varying from information and legal measures to reorganization and (digital) techniques. The expected effectiveness, desired coherence, acceptability, and costs must be considered. With this knowledge, the goals can be formulated definitively and prioritized. It is also desirable to distinguish between a short-term and a long-term perspective, to enable the development of innovative solutions.

The inventory, selection and ethical assessment of resources and the related fine-tuning of the objectives is best done in the first instance by teams representing different disciplines, including expertise in the field of digital technology, followed of course by democratic sanctioning.

My preference is to transfer the instrumentation process to an 'Urban Development and Innovation Department', modeled on the Mayor's Office of New Urban Mechanics (MONUM) in Boston. Changing teams can be assembled from this office, which has strong ties with the other departments. In this way, the coherence between the individual goals and action points and the input of scientific research can be safeguarded. According to Ben Green, the author of The Smart Enough City, who has worked at MONUM for years, it has been shown time and again that the effect of technological innovation is enhanced when it is combined with other forms of innovation, such as social innovation.

From vision to action points: Overview

Below I give an overview of the most important building blocks for arriving at a vision and developing action points based on this vision:

1. The process from vision to action points is both linear and iterative. Distinguishing between the phases of vision development, formulating objectives and instrumentation is useful, but these phases influence each other mutually and eventually form a networked process.

2. Urban problems are always complicated, complex, and full of internal contradictions. There are therefore seldom single solutions.

3. The mayor (and therefore not a separate alderman) is primarily responsible for coherence within the policy agenda, including the use of (digital) technology. This preferably translates into the structure of the municipal organization, for example an 'Urban Development and Innovation Department'.

4. Formulating a vision, objectives and their instrumentation is part of a democratic process. Both elected representatives and stakeholders play an important role in this.

5. Because of their complexity and coherence, the content of the policy agenda usually transcends the direct interests of the stakeholders, but they must experience that their problems are being addressed too.

6. Ultimately, each city chooses a series of related actions to arrive at an effective, efficient, and supported solution to its problems. The choice of these actions, especially when it comes to (digital) techniques, can always be explained as a function of the problems being addressed.

7. The use of technology fits seamlessly into the urban agenda, instead of (re)framing problems to match tempting technologies.

8. Implementation is at least as important as grand plans, but without a vision, concrete plans lose their legitimacy and support.

9. In the search for support for solutions and the implementation of plans, there is collaboration with stakeholders, and they can be given the authority and resources to tackle problems and experiment themselves (‘right to challenge’).

10. In many urban problems, addressing the harmful effects of previously used technologies (varying from greenhouse gas emissions, air pollution to diseases of affluence) is a necessary starting point.

Back to digital technology

(Digital) technology is here to stay and it is developing at a rapid pace. Sometimes you wish it would slow down. It is very regrettable that not democratically elected governments, but Big Tech is the driving force behind the development of technology and that its development is therefore primarily motivated by commercial interests. This calls for resistance against Big Tech's monopoly and for reticence towards their products. By contrast, companies working on technological developments that support a sustainable urban agenda deserve all the support.

In my e-book Cities of the Future. Humane as a choice. Smart where that helps, I performed the exercise described in this post based on current knowledge about urban policy and urban developments. This has led to the identification of 13 themes and 75 action points, where possible with references to potentially useful technology. You can download the e-book here.

Herman van den Bosch, professor in management development, posted

7. The Future of Urban Tech project


The seventh edition of the series Better cities. The contribution of digital technology is about forecasts, trends and signals regarding the role of technology in the development of cities, as seen by Cornell University's Future of Urban Tech project. The Dutch versions of this and other already published posts are here.

A source of new insights

Technology has changed the planet for better and for worse. Will this change continue, and which direction will dominate? To answer this question, scientists at the Jacobs Institute at Cornell University in New York developed a horizon scan, named The Future of Urban Tech. First, they made a content analysis of hundreds of recent scientific publications, from which they distilled 217 signals. These signals were grouped into 49 trends, full of contradictions. Each trend is tagged with an indication of time frame, probability, and societal impact. Finally, they modeled six forecasts, which describe dominant directions of change.

Readers can use the site in their own way. I started from the 17 sectors such as built environment, logistics, mobility, and energy and explored the related trends. It is also possible to start top-down with one of the six forecasts and examine its plausibility considering the related trends and signals. I will show below that each of the forecasts is challenging and invites further reading.

Content selection is supported by dynamic graphics, which connect all signals, trends and forecasts and enable the reader to see their interrelationships. Just start scrolling, unleash your curiosity and decide after some explorations how to proceed more systematically.

The website briefly describes each of the forecasts, trends, and signals. Each signal reflects the content of a handful of (popular) scientific publications, which are briefly summarized. Read the articles that intrigue you or limit yourself to the summary.

Take the time to explore this site as you will encounter many new insights and opinions. The link to the project is at the end of this article.

Below I will explain some aspects of the content of the project, followed by some caveats.

Six forecasts

The forecasts reflect the multiplicity of views in contemporary scientific literature, stimulating readers to form a judgment. The wording of the forecasts is reproduced in abbreviated form below.

1. All buildings, houses, means of transport, infrastructure, but also trees and parks will be connected with sensors and cameras and form one web.
Many buildings, buses, trains, and roads are already equipped with digital detection, but they are not yet linked up at city scale. The next decade will change this, which will, for example, mean a breakthrough in the management of energy flows, but will also raise questions regarding privacy.

2. Cities will use advanced biotechnology to take livability to new heights.
Growing understanding of human dependence on nature will lead to mapping the physical-biological world as well as its threats and its blessings to humans. City authorities will equip trees, parks, and waterways with sensors to measure and control the vitality of ecosystems.

3. Resilient corridors will mitigate the impact of climate change, but citizens will be prepared for the inevitable shocks to come.
Cities will reduce CO2 emissions but also prepare for the inevitable consequences of climate change. Political and financial centers of power will be concentrated in places where the impact of climate change can be controlled by technical means.

4. Artificial neural networks provide advanced forms of machine learning with unparalleled predictive capabilities that will bring order to the chaos of urban life.
Machine learning and artificial intelligence will become inscrutable black boxes that make decisions without giving explanations. The ultimate questions are whether the machines to which we outsource our decisions can themselves still be controlled, and whether the impact of spontaneous encounters and human ideas disappears once computers produce the best solutions anyway.

5. A new Screen Deal will redistribute the risks and benefits of urban technology.
“Everything remote” – learning, healthcare, work, and entertainment – is becoming the new normal. The predictive power of AI will lead to conflicts over the concentration of wealth and power that digital platforms cause. But on the other hand, new stakeholders will focus on equity.

6. A global supply chain for city-building technologies will 'crack the code of the city'.
In the smart cities movement there is a tension between top-down and bottom-up, between proprietary and open, and between Big Tech and makers. A new urban innovation industry will become dominant but will be more attuned to societal concerns. Governments, in turn, will have a clearer picture of the problems that the industry needs to solve. A public-private structure for investments and governance is indispensable to counter the power of Big Tech.

A few notes

As mentioned, each of the six forecasts is based on trends: nine of them, in the case of the last forecast mentioned above. Each trend is illustrated by a handful of signals, documented by various publications. One of the nine trends supporting this forecast is "Regional clustering from enterprises to ecosystems", for example in New York, London, Berlin, and Amsterdam. This refers to the growing power of local technology hubs, supported by regional capital and involving governments, start-ups, knowledge institutions and citizens. This concentration could even lead to a new "space race" between cities instead of countries. However, the underlying signals show that this "trend" is more open-ended and uncertain than its description suggests.

I went through many of the publications documenting the signals and concluded that the "trends" essentially map the bandwidth within which developments in a domain will occur. To me, this does not detract from the value of the exercise, because the more doubts there are about the future and the more insight we have into the forces that shape it, the more opportunities we have to influence it.

As the six forecasts must match the open nature of the trends, I have reformulated each of these forecasts as pairs of conflicting directions for development.

1. The commercial or political interests behind urban technology versus the well-being and privacy of citizens.

2. The struggle between 'Big Tech' and (supra)national politics over leadership of technological development.

3. The infusion of technology into all domains of society versus acceptance of unpredictable outcomes of human interactions resulting from creativity, inner motives, and intuitive decisions.

4. Controlling nature through biotechnology versus restoring a balance between humans and natural ecosystems.

5. The concentration of power, political influence, and wealth through control over technology versus open licensing that allows technology to be used for the benefit of the entire world population.

6. Autonomous decision-making through machine learning and artificial intelligence versus the primacy of democratic and decentralized decision-making over the application of technology.

Studying the Future of Urban Tech project has been a rich and thought-provoking learning experience and has helped fuel the insights underlying this series.

You can find the Future of Urban Tech project via the link below:

https://futureofurbantech.org/introduction/
