Herman van den Bosch, professor in management development, posted

Ethical principles and artificial intelligence


In the 11th episode of the series Better cities: The contribution of digital technology, I will apply the ethical principles from episode 9 to the design and use of artificial intelligence.

Before doing so, I will briefly summarize the main features of artificial intelligence, such as big data, algorithms, deep learning, and machine learning. For those who want to know more: Radical Technologies by Adam Greenfield (2017) is a very readable introduction, also regarding technologies such as blockchain, augmented and virtual reality, the Internet of Things, and robotics, which will be discussed in upcoming episodes.

Artificial intelligence

Artificial intelligence has valuable applications but also gross forms of abuse. Valuable, for example, is the use of artificial intelligence in the layout of houses and neighborhoods, taking into account ease of use, views and sunlight with AI technology from Spacemaker, or measuring the noise in the center of Genk using Nokia's Scene Analytics technology. Reprehensible is how the police in the US discriminate against population groups with programs such as PredPol, and how the Dutch government acted in the so-called 'toeslagenaffaire' (the childcare benefits scandal).

Algorithms
Thanks to artificial intelligence, a computer can recognize patterns independently. Recognizing patterns as such is nothing new; this has long been possible with computer programs written for that purpose. For example, to distinguish images of dogs and cats, a programmer created an "if...then" description of all relevant characteristics of dogs and cats that enabled a computer to distinguish between pictures of the two animal species. The number of errors depended on the level of detail of the program. When it comes to more types of animals, and animals photographed from different angles, making such a program becomes very complicated. In that case, a computer can be trained to distinguish the relevant patterns itself; this is when we speak of artificial intelligence. People still play an important role: first in writing an instruction (an algorithm), and then in composing a training set, a selection of a large number of examples, for instance of animals labeled as dog or cat and, if necessary, lion, tiger and so on. The computer then searches 'itself' for the associated characteristics. If there are still too many errors, new images are added.
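To make this concrete, here is a minimal sketch in Python of both routes, with hypothetical features and toy data: a hand-written "if...then" rule versus a classifier that learns the distinguishing characteristics from labeled examples (scikit-learn's decision tree stands in for the learning step):

```python
from sklearn.tree import DecisionTreeClassifier

# Route 1: the programmer enumerates the distinguishing features by hand.
def rule_based_classifier(animal):
    if animal["weight_kg"] > 30 or animal["snout_length_cm"] > 10:
        return "dog"
    return "cat"

# Route 2: people only label the examples (the training set);
# the algorithm finds the distinguishing characteristics itself.
X = [[35, 12], [4, 3], [8, 11], [3, 2]]   # [weight_kg, snout_length_cm]
y = ["dog", "cat", "dog", "cat"]          # labels supplied by people
model = DecisionTreeClassifier().fit(X, y)

print(rule_based_classifier({"weight_kg": 6, "snout_length_cm": 9}))
print(model.predict([[6, 9]]))            # generalizes to unseen animals
```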

Deep learning
The way in which the animals are depicted can vary endlessly, and then it is no longer about their characteristics but about shadow effects, movement, the position of the camera or, in the case of moving images, the nature of the movement. The biggest challenge is to teach the computer to take these contextual characteristics into account as well. This is done by imitating neural networks. Image recognition then takes place just as in our brains, thanks to distinct layers, varying from the recognition of simple lines, patterns, and colors to differences in sharpness. Because of this layering, we speak of 'deep learning'. This obviously involves large data sets and a lot of computing power, but it is also a labor-intensive process.
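As an illustration, a minimal sketch of such a layered network in Keras (assuming TensorFlow is installed; the image size, layer counts and filter numbers are illustrative only): early convolutional layers respond to simple lines and colors, deeper layers to ever more abstract combinations.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),           # small RGB images
    layers.Conv2D(16, 3, activation="relu"),  # low level: edges, colors
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),  # mid level: textures, shapes
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),      # high level: combinations
    layers.Dense(2, activation="softmax"),    # e.g. dog vs. cat
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()                               # shows the stack of layers
```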

Unsupervised learning
Learning to apply algorithms under supervision produces reliable results, and the instructor can still explain the result after many iterations. As the situation becomes more complicated and different processes proceed at the same time, supervised instruction is no longer feasible. Think, for example, of animals that attack each other, surviving or not, where the computer must predict which kinds of animals have the best chance of survival under which conditions. Or think of the patterns that a car's computer must be able to distinguish to drive safely: given the almost unlimited variation, supervised learning no longer works.

In the case of unsupervised learning, the computer is fed with data from many millions of realistic situations; in the case of cars, recordings of traffic situations and the way drivers reacted to them. Here we can rightly speak of 'big data' and 'machine learning', although these terms are often used more broadly. For example, the car's computer 'learns' how and when it must stay within the lanes, when it can pass, how pedestrians, bicycles or other 'objects' can be avoided, what traffic signs mean and what the corresponding action is. Teslas still pass all this data on to a data center, which distills patterns from it that regularly update the 'autopilots' of the whole fleet. In the long run, every Tesla, anywhere in the world, should recognize every imaginable pattern, respond correctly and thus guarantee the highest possible level of safety. This is apparently not yet the case, and Tesla's 'autopilot' may therefore not be used without the presence of a driver 'in control'. Nobody knows by what criteria a Tesla's algorithms work.

Unsupervised learning is also applied in the prediction of (tax) fraud, the chance that certain people will 'make a mistake', or the places where the risk of a crime is greatest at a certain moment, but also in the assessment of applicants and the allocation of housing. For all these purposes, the value of artificial intelligence is overestimated. Here too, the 'decisions' that a computer makes are a 'black box'. Partly for this reason, it is difficult, if not impossible, to trace and correct any errors afterwards. This is one of the problems with the infamous 'toeslagenaffaire'.

The cybernetic loop
Algorithmic decision-making is part of a new digital wave, characterized by a 'cybernetic loop' of measuring (collecting data), profiling (analyzing data) and intervening (applying data). These aspects are present in every decision-making process, but whereas politicians and representatives of the people used to make conscious choices step by step, the entire process is now partly a black box.

The role of ethical principles

Meanwhile, concerns are growing about the disregard of ethical principles in the use of artificial intelligence. This applies to nearly all principles discussed in the 9th episode: violation of privacy, discrimination, lack of transparency and abuse of power resulting in great (partly unintentional) suffering, risks to the security of critical infrastructure, the erosion of human intelligence and the undermining of trust in society. It is therefore necessary to formulate guidelines that realign the application of artificial intelligence with these ethical principles.

An interesting impetus to this end is given in the publication of the Institute of Electrical and Electronics Engineers (IEEE), Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. The Rathenau Institute has also published several guidelines in various publications.

The main guidelines that can be distilled from these and other publications are:

1. Placing responsibility for the impact of the use of artificial intelligence on both those who make decisions about its application (political, organizational, or corporate leadership) and the developers. This responsibility concerns the systems used as well as the quality, accuracy, completeness, and representativeness of the data.

2. Prevent designers from (unknowingly) using their own standards when instructing learning processes. Teams with a diversity of backgrounds are a good way to prevent this.

3. To be able to trace back 'decisions' by computer systems to the algorithms used, to understand their operation and to be able to explain them.

4. To be able to scientifically substantiate the model that underlies the algorithm and the choice of data.

5. Manually verifying 'decisions' that have a negative impact on the data subject.

6. Excluding all forms of bias in the content of datasets, the application of algorithms and the handling of outcomes.

7. Accountability for the legal basis of the combination of datasets.

8. Determine whether the calculation aims to minimize false positives or false negatives (a minimal illustration follows right after this list).

9. Personal feedback to clients in case of lack of clarity in computerized ‘decisions’.

10. Applying the principles of proportionality and subsidiarity, which means determining on a case-by-case basis whether the benefits of using artificial intelligence outweigh the risks.

11. Prohibiting applications of artificial intelligence that pose a high risk of violating ethical principles, such as facial recognition, persuasive techniques and deep-fake techniques.

12. Revocation of legal provisions if it appears that they cannot be enforced in a transparent manner due to their complexity or vagueness.
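Guideline 8 can be made tangible with a minimal sketch (the scores are made up): the same model output produces different error profiles depending on the decision threshold, so someone must decide which kind of error weighs heavier.

```python
scores = [0.1, 0.3, 0.45, 0.6, 0.8, 0.95]  # model scores for six cases
truth  = [0,   0,   1,    0,   1,   1]     # 1 = actual fraud

for threshold in (0.4, 0.7):
    flagged = [s >= threshold for s in scores]
    fp = sum(f and not t for f, t in zip(flagged, truth))  # innocent, flagged
    fn = sum(t and not f for f, t in zip(flagged, truth))  # fraud, missed
    print(f"threshold {threshold}: {fp} false positives, {fn} false negatives")
```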

The third, fourth and fifth guidelines must be seen in conjunction. I explain why below.

The scientific bypass of algorithmic decision-making

When using machine learning, computers themselves adapt and extend the algorithms and combine data from different data sets. As a result, the final 'decisions' made by the computer cannot be explained. This is only acceptable after it has been proven that these decisions are 'flawless', for example because, in the case of 'self-driving' cars, they turn out to be many times safer than ordinary cars, which, by the way, is not yet the case.

Unfortunately, this was not the case in the 'toeslagenaffaire' either. The fourth guideline could have provided a solution. Scientific design-oriented research can be used to reconstruct the steps of a decision-making process that determines who is entitled to receive an allowance. By applying this decision tree to a sufficiently large sample of cases, the (degree of) correctness of the computer's 'decisions' can be verified. If they prove correct, the criteria used in the manual calculation may be used to explain the processes in the computer's 'black box'. If there are too many deviations, the computer calculation must be rejected altogether.
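A sketch of this validation route in code (all criteria and data are hypothetical): reconstruct the decision logic as a transparent rule, apply it to a sample, and measure how often it agrees with the black box.

```python
def transparent_rule(case):
    # reconstructed, explainable eligibility criteria (hypothetical)
    return case["income"] < 30000 and case["children"] > 0

def agreement(black_box, sample):
    hits = sum(black_box(c) == transparent_rule(c) for c in sample)
    return hits / len(sample)

sample = [{"income": 25000, "children": 2},
          {"income": 40000, "children": 1},
          {"income": 28000, "children": 0}]

# stand-in for the opaque system under scrutiny
black_box = lambda c: c["income"] < 30000 and c["children"] > 0

# high agreement: the transparent rule may serve as an explanation;
# too many deviations: reject the computer calculation altogether
print(agreement(black_box, sample))
```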

Governance

In the US, the use of algorithms in the public sector has fallen into disrepute, especially because of the facial recognition practices that will be discussed in the next episode. The city of New York has therefore appointed an algorithm manager, who investigates whether the algorithms used comply with ethical and legal rules. In Amsterdam, KPMG has a supervisory role. In other municipalities, we see that role fulfilled more and more often by an ethics committee.

In the European public domain, steps have already been taken to combat the excesses of algorithmic decision-making. The General Data Protection Regulation (GDPR), which came into effect in 2018, has significantly improved privacy protection. In April 2019, the European High-Level Expert Group on AI published ethical guidelines for the application of artificial intelligence. In February 2020, the European Commission also established such guidelines, among others in the White Paper on Artificial Intelligence and an AI regulation. The Dutch government, in turn, adopted the national digitization strategy, the Strategic Action Plan for AI and the policy letter on AI, human rights, and public values.

I realize that binding governments and their executive bodies to ethical principles is grist to the mill for those who flout those principles. Therefore, the search for the legitimate use of artificial intelligence to detect crime, violations or abuse of subsidies and many other applications continues to deserve broad support.

Follow the link below to find one of the previous episodes or see which episodes are next, and this one for the Dutch version.

#DigitalCity
Herman van den Bosch, professor in management development, posted

10. Accessibility, software, digital infrastructure, and data: the quest for ethics


The 10th episode in the series Better cities: The contribution of digital technology deals with the impact of ethical principles on four pillars of digitization: accessibility, software, infrastructure and data.

In the previous episode, I discussed design principles (guidelines and values) for digital technology. The Rathenau Instituut report Opwaarderen - Borgen van publieke waarden in de digitale samenleving ('Upgrading: safeguarding public values in the digital society') concludes that government, industry, and society still make insufficient use of these principles. Below, I will consider their impact on four pillars of digitization: accessibility, software, infrastructure, and data. The next episodes will focus on their impact on frequently used technologies.

Accessibility

Accessibility refers to the availability of high-speed Internet for everyone. This goes beyond just technical access. It also means that a municipality ensures that digital content is understandable and that citizens can use the options offered. Finally, everyone should have a working computer.

Free and safe Internet for all residents is a valuable amenity, including Wi-Fi in public areas. Leaving the latter to private providers such as the LinkNYC advertising kiosks in New York, which are popping up in other cities as well, is a bad idea. Companies such as Sidewalk Labs tempt municipalities by installing these kiosks for free. They are equipped with sensors that collect a huge amount of data from every device that connects to the Wi-Fi network: not only the location and the operating system, but also the MAC address. With the help of analytical techniques, the route taken can be reconstructed. Combined with other public data from Facebook or Google, this provides insight into the personal interests, sexual orientation, race, and political opinions of visitors.

The huge Internet that connects everything and everyone also raises specters, related to privacy and forms of abuse, including the hacking of equipment that regulates your heartbeat.

That is why there is a wide search for alternatives. Worldwide, P2P neighborhood initiatives are setting up private networks. Many of these are part of The Things Network. Instead of Wi-Fi, this network uses a protocol called LoRaWAN. Robust end-to-end encryption means that users don't have to worry about insecure wireless hotspots, mobile data plans, or faltering Wi-Fi connectivity. The Things Network manages thousands of gateways, provides coverage to millions of people, and offers a suite of open tools that enable citizens and entrepreneurs to build IoT applications at low cost, with maximum security, and that are easy to scale.
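For an impression of what building on these open tools can look like, here is a minimal sketch that reads sensor uplinks from a Things Network application over MQTT with the paho-mqtt library. The host, topic scheme and credentials are assumptions based on TTN v3 and should be checked against the TTN documentation:

```python
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    uplink = json.loads(msg.payload)
    print(msg.topic, uplink.get("uplink_message", {}).get("decoded_payload"))

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.username_pw_set("my-app@ttn", password="NNSXS.XXXX")  # hypothetical
client.on_message = on_message
client.connect("eu1.cloud.thethings.network", 1883)  # assumed EU cluster
client.subscribe("v3/my-app@ttn/devices/+/up")       # uplinks of all devices
client.loop_forever()
```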

Software

Computer programs provide diverse applications, ranging from word processing to management systems. Looking for solutions that best fit the guidelines and ethical principles mentioned in the former episode, we quickly arrive at open-source software, as opposed to proprietary products from commercial providers. Not that the latter are objectionable per se, or that they are always more expensive. The most important thing to pay attention to is interchangeability (interoperability) with products from other providers, to prevent being unable to get rid of them (lock-in).

Open-source software offers advantages over proprietary solutions, especially if municipalities encourage city-wide use. Barcelona is leading the way in this regard. The city aims to fully self-manage its ICT services and radically improve digital public services, including privacy by design. This results in data sovereignty and in the use of free software, open data formats, open standards, interoperability and reusable applications and services.

Anyone looking for open-source software cannot ignore the FIWARE community, which is similar in organization to Linux: it consists of companies, start-ups and freelance developers, and originated from an initiative of the EU. FIWARE provides open and sustainable software around public, royalty-free and implementation-driven standards.
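To give an impression: FIWARE's central component, the Orion Context Broker, exposes a REST API (NGSI v2) for context data. A minimal sketch, assuming a broker runs locally on its default port 1026 and using a hypothetical noise sensor entity:

```python
import requests

entity = {
    "id": "urn:ngsi-ld:NoiseSensor:001",              # hypothetical id
    "type": "NoiseSensor",
    "soundLevel": {"value": 62.5, "type": "Number"},  # dB(A) reading
}

# create the entity in the broker's context database
resp = requests.post("http://localhost:1026/v2/entities", json=entity)
print(resp.status_code)  # 201 means the broker accepted the entity
```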

Infrastructure

Computers are no longer the largest group of components of the digital infrastructure. Their number has been surpassed by so-called ubiquitous sensor networks (USN), such as smart meters, CCTV, microphones, and sensors. Sensor networks have the most diverse tasks: they monitor the environment (air quality, traffic density, unwanted visitors) and they sit in machines, trains, cars and even in people, to transmit information about the functioning of vital components. Mike Matson calculated that by 2050 a city of 2 million inhabitants will have as many as a billion sensors, all connected by millions of kilometers of fiber-optic cable or via Wi-Fi to data centers and carrier hotels (nodes where private networks converge), and eventually to the Internet.

This hierarchically organized cross-linking is at odds with the guidelines and ethical principles formulated in the previous post. Internet criminals are given free rein, and attacks such as denial of service (DoS) can spread like wildfire. In addition, the energy consumption is enormous, blockchain aside. Edge computing is a viable alternative: the processing of the data is done locally, and only results are uploaded on demand. This applies to sensors, mobile phones and possibly automated cars as well. A good example is the Array of Things initiative in Chicago, which will ultimately comprise 500 sensors, installed in consultation with the population. Their data is stored in each sensor separately and can be consulted online, where a query typically involves several sensors and only part of the data. Federated data systems are comparable: data is stored in a decentralized way, but authorized users can use all data thanks to user interfaces.
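The edge pattern itself is simple to express. A minimal sketch with made-up noise readings: raw measurements stay on the node, and only a small aggregate ever leaves it:

```python
import statistics

class EdgeNode:
    def __init__(self):
        self.readings = []         # raw data stays on the device

    def ingest(self, value):
        self.readings.append(value)

    def summary(self):
        # only this small result is uploaded on demand
        return {"n": len(self.readings),
                "mean": round(statistics.mean(self.readings), 1),
                "max": max(self.readings)}

node = EdgeNode()
for value in (41.2, 44.8, 39.9, 52.3):  # local measurements
    node.ingest(value)
print(node.summary())                    # uploaded instead of the raw series
```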

Data

There is a growing realization that when it comes to data, not only quantity, but also quality counts. I will highlight some aspects.

Access to data
Personal data should only be available with permission from the owner. To protect this data, the EU project Decode proposes that owners manage their data via blockchain technology. Many cities now have privacy guidelines, but only a few conduct privacy impact assessments as part of their data policy (p. 18).

Quality
There is growing evidence that much of the data used in artificial intelligence as “learning sets” is flawed. This had already become painfully clear from facial recognition data in which minority groups are disproportionately represented. New research shows that this is also true in the field of healthcare. This involves data cascades, a sum of successive errors, the consequences of which only become clear after some time. Data turned out to be irrelevant, incomplete, incomparable, and even manipulated.

Data commons
Those for whom high-quality data is of great importance will pay extra attention to its collection. In. this case, initiating a data common is a godsend. Commons are shared resources managed by empowered communities based on mutually agreed and enforced rules. An example is the Data and Knowledge Hub for Healthy Urban Living (p.152), in which governments, companies, environmental groups and residents collect data for the development of a healthy living environment, using a federated data system. These groups are not only interested in the data, but also in the impact of its application.

Open data
Many cities apply the 'open by default' principle and make most of their data public, although user-friendliness and efficiency sometimes leave something to be desired. Various data management systems are available as open-source portals. One of the most prominent is CKAN, administered by the Open Knowledge Foundation. It contains tools for managing, publishing, finding, using, and sharing data collections. It offers an extensive search function and makes it possible to view data in the form of maps, graphs, and tables. There is an active community of users who continue to develop the system and adapt it locally.
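Every CKAN portal exposes the same Action API, which makes datasets searchable by script as well as through the web interface. A minimal sketch, shown against CKAN's public demo instance:

```python
import requests

resp = requests.get(
    "https://demo.ckan.org/api/3/action/package_search",
    params={"q": "transport", "rows": 5},   # free-text search, first 5 hits
)
result = resp.json()["result"]
print(result["count"], "datasets found")
for package in result["results"]:
    print("-", package["title"])
```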

To make the data accessible, some cities also offer training courses and workshops. Barcelona's Open Data Challenge is an initiative for secondary school students that introduces them to the city's vast data collection.

Safety
As the size of the collected data, the number of entry points and the connectivity on the Internet increase, the security risks also become more severe. Decentralization, through edge computing and federated storage with blockchain technology, certainly contributes to security. But there is still a long way to go. Only half of the cities have a senior policy officer in this area. Techniques for authentication, encryption and signing, which together form the basis for attribute-based identity, are applied only incidentally. This involves determining identity based on several characteristics of a user, such as function and location. Something completely different is Me and my shadow, a project that teaches Internet users to minimize their own trail and thus their visibility to Internet criminals.

There is still a world to be won before the guidelines and ethical principles mentioned in the previous episode are sufficiently met. I emphasize again not to over-accentuate concepts such as 'big data', 'data-oriented policy' and the size of data sets. Instead, it is advisable to re-examine the foundations of scientific research. First and foremost comes knowledge of the domain (1), resulting in research questions (2), followed by the choice of an appropriate research method (3), defining the type of data to be collected (4), the collection of these data (5), and finally their statistical processing to find evidence for substantiated hypothetical connections (6). The discussion of machine learning in the next episode will reveal that automatic processing of large data sets is mainly about discovering statistical connections, and that can have dire consequences.

Follow the link below to find one of the previous episodes or see which episodes are next, and this one for the Dutch version.

#DigitalCity
Herman van den Bosch, professor in management development, posted

Policy guidelines and ethical principles for digital technology


The 9th episode of the series Building sustainable cities: the contribution of digital technology deals with guidelines and related ethical principles that apply to the design and application of digital technology.

"One thing that keeps me awake at night is the speed at which artificial intelligence is developing and the lack of rules for its development and application", said Aleksandra Mojsilović, director of IBM Science for Social Good. The European Union has a strong focus on regulations to ensure that technology is people-oriented, ensures a fair and competitive digital economy and contributes to an open, democratic and sustainable society. This relates to more than legal frameworks; it also concerns political choices, ethical principles, and the responsibilities of the profession. This is what this post is about.

Politicians are ultimately responsible for the development, selection, application and use of (digital) technology. In this respect, a distinction must be made between:
• Coordinating digital instruments with the vision on the development of the city.
• Drawing up policy guidelines for digitization in general.
• Recognizing related ethical principles next to these policy guidelines.
• Creating the conditions for democratic oversight of the application of digital technology.
• Appealing to the responsibilities of the ICT professional group.

Guidelines for digitization policy

In the previous post I emphasized that the digital agenda must result from the urban policy agenda and that digital instruments and other policy instruments must be seen in mutual relation.
Below are five additional guidelines for digitization policy formulated by the G20 Global Smart Cities Alliance. Thirty-six cities are involved in this initiative, including Apeldoorn as the only Dutch municipality. The cities involved will elaborate these guidelines soon. In the meantime, I have listed some examples.

Equity, inclusiveness, and social impact
• Enabling every household to use the Internet.
• Making information and communication technology (ICT) and digital communication with government accessible to all, including the physically/mentally disabled, the elderly and immigrants with limited command of the local language.
• Assessing the impact of current digital technology on citizens and anticipating the future impact of this policy.
• Facilitating regular education and institutions for continuous learning to offer easily accessible programs to develop digital literacy.
• Challenging neighborhoods and community initiatives to explore the supportive role of digital devices in their communication and actions.

Security and resilience
• Developing a broadly supported vision on Internet security and its consequences.
• Mandatory compliance with rules and standards (for instance regarding IoT) to protect digital systems against cyberthreats.
• Becoming resilient to cybercrime by developing back-up systems that can seamlessly take over services if necessary.
• Building resilience against misuse of digital communication, especially bullying, intimidation and threats.
• Reducing the multitude of technology platforms and standards, to limit entry points for cyber attackers.

Privacy and transparency
• The right to move and stay in cities without being digitally surveilled, except in case of law enforcement with legal means.
• Establishing rules in a democratic manner for the collection of data from citizens in the public space.
• Minimal collection of data by cities, limited to what is needed to enable services.
• Citizens' right to control their own data and to decide which ones are shared and under which circumstances.
• Using privacy impact assessment as a method for identifying, evaluating, and addressing privacy risks by companies, organizations, and the city itself.

Openness and interoperability
• Providing depersonalized data from as many organizations as possible to citizens and organizations, as a reliable evidence base to support policy and to create open markets for interchangeable technology.
• Public registration of devices, their ownership, and their purpose.
• Choosing adequate data architecture, including standards, agreements, and norms to enable reuse of digital technology and to avoid lock-ins.

Operational and financial sustainability
• Ensuring a safe and well-functioning Internet.
• A coordinated approach ('dig once') to constructing and maintaining digital infrastructure, including Wi-Fi, wired technologies and the Internet of Things (IoT).
• Exploring first whether the city can develop and manage the required technology itself, before turning to commercial parties.
• Cities, companies, and knowledge institutions sharing data and cooperating in a precompetitive way on innovations for mutual benefit.
• Keeping digital solutions feasible: results are achieved within an agreed time and budget.

Ethical Principles

The guidelines formulated above partly converge with the ethical principles that underlie digitization according to the Rathenau Institute. Below, I will summarize these principles.

Privacy
• Citizens' right to dispose of their own (digital) data, collected by the government, companies and other organizations.
• Limitation of the data to be collected to those that are functionally necessary (privacy by design), which also prevents improper use.
• Data collection in the domestic environment only after personal permission and in the public environment only after consent by the municipal council.

Autonomy
• The right to decide about information to be received.
• The right to reject or consent to independent decision making by digital devices in the home.
• No filtering of information except in case of instructions by democratically elected bodies.

Safety
• Ensuring the protection of personal data and protection against identity theft through encryption and biometric recognition.
• Preventing unwanted manipulation of devices by unauthorized persons.
• Providing adequate warnings against risks by providers of virtual reality software.
• Securing the exchange of data.

Public oversight
• Ensuring public participation in policy development related to digitization.
• Providing transparency of algorithmic decision-making and the opportunity to influence these decisions through human intervention.
• Decisions taken by autonomous systems always include an explanation of the underlying considerations and provide the option to appeal against this decision.

Human dignity
• Using robotics technology mainly for routine, dangerous, and dirty work, preferably under supervision of human actors.
• Informing human actors collaborating with robots of the foundations of their operation and the possibilities to influence them.

Justice
• Ensuring equal opportunities, accessibility, and benefits for all when applying digital systems
• If autonomous systems are used to assess personal situations, the result is always checked for its fairness, certainty, and comprehensibility for the receiving party.
• In the case of autonomous analysis of human behavior, the motives on which an assessment has taken place can be checked by human intervention.
• Employees in the gig economy have an employment contract or an income as self-employed in accordance with legal standards.

Power relations
• The possibility of updating software as long as equipment is still usable, even if certain functionalities are no longer available.
• Companies may not use their monopoly position to obstruct other companies.
• Ensuring equal opportunities, accessibility, and benefits for all when applying digital systems.

The above guidelines and ethical principles partly overlap. Nevertheless, I have not combined them, as they represent different frames of reference that are often referred to separately. The policy guidelines are particularly suitable for assessing digitization policy; the ethical principles are especially useful when assessing different technologies. That is why I will use the latter in the following episodes.
In discussing the digitalization strategy of Amsterdam and other municipalities in later episodes, I will use a composite list of criteria, based on both the above guidelines and ethical principles. This list, titled 'Principles for a socially responsible digitization policy' can already be viewed HERE.

Democratic oversight

Currently, many municipalities still lack sufficient competencies to supervise the implementation and application of the guidelines and principles mentioned above. Moreover, they are involved as a party themselves. Therefore, setting up an independent advisory body for this purpose is desirable. In the US, almost every city now has a committee for public oversight of digitization. These committees are strongly focused on the role of the police, in particular practices related to facial recognition and predictive policing.
Several cities in the Netherlands have installed an ethics committee, a good initiative. I would have such a committee supervise the aforementioned policy guidelines as well, and not just the ethical principles. According to Bart Wernaart, lecturer in Moral Design Strategy at Fontys University of Applied Sciences, such a committee must be involved in digitization policy at an early stage, and it should also learn from past mistakes in the field of digitization.
The latter is especially necessary because, as the Dutch Data Protection Authority writes, the identity of an ethically responsible city is not set in stone. The best way to connect ethical principles and practice is to debate and question the implications of policy in practice.

Experts’ own responsibility

A mature professional group has its own ethical principles, monitors their implementation, and sanctions discordant members. In this respect, the medical world is most advanced. As far as I know, the ICT profession has not yet formulated its own ethical principles. This has been done, for example, by the Institute of Electrical and Electronics Engineers (IEEE) in the field of artificial intelligence. Sarah Hamid: Data scientists are generally concerned more with the abstract score metric of their models than with the direct and indirect impact these can have on the world. However, experts often understand the unforeseen implications of government policy earlier than politicians. Hamid addresses the implications for professional action: If computer scientists really hoped to make a positive impact on the world, then they would need to start asking better questions. The road to technology implementation by governments is paved with failures. Professionals have often seen this coming but have rarely warned, afraid of losing an assignment. Self-confident professionals must therefore say 'no' much more often to a job description. Hamid: Refusal is an essential practice for anyone who hopes to design sociotechnical systems in the service of justice and the public good. This might even result in a better relationship with the client and more successful projects.

Establishing policy guidelines and ethical principles for municipal digitization requires a critical municipal council and an ethics committee with relevant expertise. But it also needs professionals who carry out the assignment and enter the debate if necessary.

The link below opens a preliminary overview of the already published and upcoming articles in the series Building sustainable cities: the contribution of digital technology. Click HERE for the Dutch version.

#DigitalCity
Herman van den Bosch, professor in management development, posted

5. Collect meaningful data and stay away from dataism


The fifth episode of the series Better cities: The role of technology is about the sense and nonsense of big data. 'Data is the new oil' is the worst cliché of the big data hype yet, even worse than 'data-driven policy'. In this article, I investigate, with digital twins as a thread, what the contribution of data to urban policy can be, and how dataism, a religion in which data takes over policymaking itself, can be prevented (must-read: Harari, Homo Deus).

I am a happy user of a Sonos sound system. Nevertheless, the helpdesk must be involved occasionally. Recently, it knew within five minutes that my problem was the result of a faulty connection cable between the modem and the amplifier. As it turned out, the helpdesk was able to remotely generate a digital image of the components of my sound system and their connections and saw that the cable in question was not transmitting any signal. A simple example of a digital twin. I was happy with it. But where is the line between the sense and nonsense of collecting masses of data?

What is a digital twin?

A digital twin is a digital model of an object, product, or process. In my training as a social geographer, I had a lot to do with maps, the oldest form of 'twinning'. Maps have laid the foundation for GIS technology, which in turn is the foundation of digital twins. Geographical information systems relate data based on geographical location and provide insight into their coherence in the form of a model. If data is permanently connected to reality with the help of sensors, then the dynamics in the real world and those in the model correspond and we speak of a 'digital twin'. Such a dynamic model can be used for simulation purposes, monitoring and maintenance of machines, processes, buildings, but also for much larger-scale entities, for example the electricity grid.
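In code, the core idea is small. A toy sketch (asset, fields and values are hypothetical): a model object that mirrors its physical counterpart through sensor updates and can answer what-if questions without touching the real asset:

```python
class DigitalTwin:
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {}                      # mirrors the real object

    def on_sensor_update(self, reading):
        self.state.update(reading)           # keep model and world in sync

    def simulate(self, scenario):
        # explore a hypothetical situation on the model, not the asset
        return {**self.state, **scenario}

twin = DigitalTwin("grid-substation-7")
twin.on_sensor_update({"load_kw": 180, "temp_c": 41})
print(twin.simulate({"load_kw": 250}))       # e.g. test a peak-load scenario
```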

From data to insight

Every scientist knows that data is indispensable, but also that there is a long way to go before data leads to knowledge and insight. That road starts even before data is collected. The first step is making assumptions about the essence of reality and thus about the method of knowing it. There has been much discussion about this within the philosophy of science, from which, in brief, two points of view have crystallized: a systems approach and a complexity approach.

The systems approach assumes that reality consists of a stable series of actions and reactions in which law-like connections can be sought. Today, almost everyone assumes that this only applies to physical and biological phenomena. Yet there is also talk of social systems. This is not a question of law-like relationships, but of generalizing assumptions about human behavior at a high level of aggregation. The homo economicus is a good example. Based on such assumptions, conclusions can be drawn about how behavior can be influenced.

The complexity approach sees (social) reality as the result of a complex adaptive process that arises from countless interactions, which - when it comes to human actions - are fed by diverse motives. In that case it will be much more difficult to make generic statements at a high level of aggregation and interventions will have a less predictable result.

Traffic models

Traffic policy is a good example to illustrate the distinction between a systems and a complexity approach. Simulation using a digital twin in Chattanooga of flexible lane assignment and traffic-light phasing showed that congestion could be reduced by 30%. Had this experiment been carried out in practice, the result would probably have been very different. Traffic experts note time and again that every newly opened road fills up after a short time, while the traffic picture on other roads hardly changes. In econometrics this phenomenon is called induced demand. In a study of urban traffic patterns between 1983 and 2003, economists Gilles Duranton and Matthew Turner found that car use increases proportionally with the growth of road capacity. The cause only becomes visible to those who use a complexity approach: every road user reacts differently to the opening or closing of a road. That reaction can be to move the trip to another time, to use a different road, to ride with someone else, to use public transport or to cancel the trip.

Carlos Gershenson, a Mexican computer scientist, has examined traffic behavior from a complexity approach, and he concludes that self-regulation is the best way to tackle congestion and to maximize the capacity of roads. If the simulated traffic changes in Chattanooga had taken place in the real world, thousands of travelers would have changed their driving behavior in a short time. They would have started trying out the smart highway and, due to induced demand, congestion there would have increased to old levels in no time. Someone who wants to make the effect of traffic measures visible with a digital twin should feed it with the results of research into the induced-demand effect, instead of just manipulating historical traffic data.

The value of digital twins

Digital twins prove their worth when simulating physical systems, i.e. processes with a parametric progression. This concerns, for example, the operation of a machine or, in an urban context, the relationship between the amount of UV light, the temperature, the wind (speed) and the number of trees per unit area. In Singapore, for example, digital twins are being used to investigate how heat islands arise in the city and how their effect can be reduced. Schiphol Airport has a digital twin that shows all moving parts at the airport, such as conveyor belts and stairs. This enables technicians to get to work immediately in the event of a malfunction. It is impossible to say in advance whether the costs of building such a model outweigh the benefits. Digital twins often develop from small to large, driven by proven needs.

Boston also developed a digital twin of part of the city in 2017, with technical support from ESRI. A limited number of processes have been merged into a virtual 3D model. One is the shadowing caused by the height of buildings. One of the much-loved green spaces in the city is the Boston Common. For decades, it has been possible to limit the development of high-rise buildings along the edges of the park and thus to limit shade. Time and again, project developers came up with new proposals for high-rise buildings. With the digital twin, the effect of the shadowing of these buildings can be simulated in different weather conditions and in different seasons (see image above). The digital twin can be consulted online, so that everyone can view these and other effects of urban planning interventions at home.
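The core calculation behind such a shadow study is elementary trigonometry: the lower the sun, the longer the shadow. A back-of-the-envelope sketch (building height and sun elevations are illustrative, not Boston-specific):

```python
import math

def shadow_length(building_height_m, sun_elevation_deg):
    # a building casts a shadow of height / tan(elevation)
    return building_height_m / math.tan(math.radians(sun_elevation_deg))

for season, elevation in (("summer noon", 70), ("winter noon", 24)):
    print(season, round(shadow_length(100, elevation)), "m")  # 100 m tower
```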

Questions in advance

Three questions precede the construction of a digital twin, and data collection in general: first, what does the user want to achieve with it; second, which processes are involved; and third, what knowledge is available about these processes and their impact. Chris Andrews, an urban planner working on the ESRI ArcGIS platform, emphasizes the need to limit the number of elements in a digital twin and to pre-calculate the relationships between them: To help limit complexity, the number of systems modeled in a digital twin should likely be focused on the problems the twin will be used to solve.

The examples of traffic forecasts in Chattanooga, the formation of heat islands in Singapore and the shadowing of the Boston Common all show that raw data is insufficient to feed a digital twin. Instead, data is used that is the result of scientific research, after the researcher has decided whether a systems approach or a complexity approach is appropriate. In the words of Nigel Jacob, former Chief Technology Officer in Boston: For many years now, we've been talking about the need to become data-driven… But there's a step beyond that. We need to make the transition to being science-driven in ...... It's not enough to be data mining to look for patterns. We need to understand root causes of issues and develop policies to address these issues.

Digital twins are valuable tools. But if they are fed with raw data, they provide at best insight into statistical connections and every scientist knows how dangerous it is to draw conclusions from that: Trash in, trash out.

If you prefer the Dutch version of the Better cities series, find an overview of the already published episodes via the link below.

#SmartCityAcademy
Herman van den Bosch, professor in management development, posted

2. Scare off the monster behind the curtain: Big Tech’s monopoly


This post is about the omnipotence of Big Tech. So far, resistance mainly results in regulation of its effects. The core of the problem, the monopoly position of the technology giants, is only marginally touched. What is needed is a strict antitrust policy and a government that once again takes a leading role in setting the technology agenda.

A cause of concern

In its recent report, the Dutch Rathenau Institute calls the state of digital technology a cause for concern. The institute advocates a fair data economy and a robust, secure Internet that is available to everyone. This is not the case now; in fact, we are getting further and further away from it. The risks become more pressing each day: inscrutable algorithms, deepfakes and political micro-targeting, inner-city devastation through online shopping, theft of trade secrets, unbridled data collection by Google, Amazon and Facebook, poorly paid taxi drivers working for Uber and other gig-economy service providers, the effect of Airbnb on the hotel industry, and the energy consumption of bitcoin and blockchain.

The limits of legislation

Numerous publications are calling on the government to put an end to the growing abuse of digital technology. In his must-read The New Digital Deal, Bas Boorsma states: In order to deploy digitalization and to manage platforms for the greater good of the individual and society as a whole, new regulatory approaches will be required… (p. 46). That is also the view of the Rathenau Institute, which lists three spearheads for a digitization strategy: strong legislative frameworks and supervision, value-based digital innovation based on critical parliamentary debate, and a say in this for citizens and professionals.

More than growing inconvenience

In recent years, the European Commission has launched a wide range of legislative proposals, such as the Digital Services Act package, the Digital Markets Act and the General Data Protection Regulation (GDPR). However, these measures do not get to the core of the problem. The near-monopoly position of Big Tech is the proverbial monster behind the curtain. The Rathenau Institute speaks in guarded terms of "the growing inconvenience" of reliance on American and Chinese tech giants. Even the International Monetary Fund is clearer, stating that the power of Big Tech inhibits innovation and investment and increases income inequality. Due to the power of the big technology companies, society is losing its grip on technology.

Surveillance capitalism

To curb the above-mentioned risks, the problem must first be named, and measures must then be tailored accordingly. This is done in two recent books: Shoshana Zuboff's The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019) and Cory Doctorow's How to Destroy Surveillance Capitalism (2021). Zuboff describes in detail how Google, Amazon and Facebook collect data with only one goal: to entice citizens to buy goods and services. Big Tech's product is persuasion. The services — social media, search engines, maps, messaging, and more — are delivery systems for persuasion.

Big tech's monopoly

The unprecedented power of Big Tech results from the fact that these companies have become almost classic monopolies. Until the 1980s, the US had strict antitrust legislation, the Sherman Act, notorious among big business. Ronald Reagan quickly wiped it out in his years as president, and Margaret Thatcher did the same in the UK, Brian Mulroney in Canada and Helmut Kohl in Germany. While Sherman saw monopolies as a threat to the free market, Reagan believed that government interference threatened the free market. It suits Facebook to present itself as a 'natural monopoly': you want to be on the network where your friends are. But you could also reach your friends if there were more networks that were interoperable. Facebook has used all economic, technical and legal means to combat the latter, including the takeover of potential competitors: Messenger, Instagram and WhatsApp.

In the early 21st century, there was still a broad belief that emerging digital technology could lead to a better and more networked society. Bas Boorsma: The development of platforms empowered start-ups, small companies and professionals. Many network utopians believed the era of 'creative commons' had arrived and with it, a non-centralized and highly digital form of 'free market egalitarianism' (New Digital Deal, p.52). Nothing has come of this: Digitalization-powered capitalism now possesses a speed, agility and rawness that is unprecedented (New Digital Deal, p.54). Even the startup community is becoming one big R&D lab for Big Tech. Many startups hope to be acquired by one of the tech giants and then cash in on millions. As a result, Big Tech is on its way to acquire a dominant position in urban development, the health sector and education, in addition to the transport sector.

Antitrust legislation

Thanks to its monopoly position, Big Tech can collect unlimited data, even if European legislation imposes restrictions and occasional fines. After all, a lot of data is collected without citizens objecting to it. Mumford had already realized this in 1967: many consumers see these companies not only as irresistible, but ultimately also as beneficial. These two conditions are the germ of what he called the 'megatechnics bribe'.

The only legislation that can break the power of Big Tech is a strong antitrust policy, unbundling the companies, an absolute ban on acquisitions and rigorous taxation.

Technology agenda

Technology does not develop autonomously. At the moment, Big Tech is indisputably setting the technology agenda in the Western hemisphere; China is a different story. With Mariana Mazzucato, I believe that governments should take back control of technological development, as they did until the end of the last century. Consider the role of institutions such as DARPA in the US, the Fraunhofer Institute in Germany and TNO in the Netherlands. Democratic control is an absolute precondition!

In the chapter 'Digitally just cities' in my e-book 'Cities of the future: Always humane, smart where it helps' (link below), I show, among other things, what Facebook, Amazon and Google could look like after a possible unbundling.

#SmartCityAcademy
Roelof Hellemans posted

Siemens Mobility builds national MaaS platform Rivier: NS, RET and HTM commit to digitally unlocking the whole of the Netherlands

NS, RET and HTM are having their national MaaS platform built by Siemens Mobility. The platform makes it possible to plan, book and pay for a journey with different modes of transport online in one go. RET director Maurice Unck, on behalf of Rivier, the joint venture of the three parties: "After the pandemic, our travel behavior is changing. We travel, work and learn more flexibly: in time, place and choice of transport mode. That is exactly why we are investing now in the best travel options for the consumer. We want to lower the threshold for planning, booking and paying for a journey with multiple modes of transport in a simple, digital way. That is why we call on all Dutch mobility providers to join."

The first apps from MaaS providers with which consumers can plan their multimodal journey throughout the Netherlands are expected to appear this autumn.

Suppose you want to visit a friend, a customer or someone else, and you want an easy way to know how to get there fastest and what it costs. How do you manage that? You can check for traffic jams, book a shared car, find out whether public transport is running well, consider the bicycle as an alternative, and more. But composing a journey in which these different modes of transport from mobility providers are used optimally is still something you have to do entirely yourself. That is quite a complex puzzle that many people prefer to skip, even though it is precisely the combination of transport modes that offers travelers considerable time savings and freedom of movement. Such a combination also helps to make the best possible use of our infrastructure.

All travel options visible at once, one payment

The new platform will make it much easier for consumers to use the available modes of transport. The platform can be connected to existing apps from MaaS providers such as NS, RET and HTM, but it can also serve other existing apps and new apps. The speed gain lies in the fact that all individual transport options along the route become visible at once. Do you like to use a shared scooter, for example, or do you prefer to travel by train or metro? The app takes everyone's personal preferences into account and adjusts its advice accordingly. Moreover, there is no hassle with different tickets and payments: you can arrange all of that easily from your favorite app or website.

Open to all mobility providers

The initiators want to bring together the mobility services of as many providers in the Netherlands as possible, whether taxi companies or shared bicycles, e-scooters or even private car owners. The more parties offer their services, the more it becomes truly possible to digitally unlock the whole of the Netherlands. Providers benefit from the convenience of a one-off, low-threshold connection and immediately gain national reach with a platform that has been developed further to optimize the customer experience. That is why the initiators call on all providers to join.

#Mobility
Eline Meijer, Communication Specialist, posted

Metropolitan Mobility Podcast with Maurits van Hövell: from walkie-talkies to the Operationeel Mobiliteitscentrum


"Previously, people simply phoned around: 'We are at the inflow of the ArenA. We now have 20,000 people inside. How are things on the street at your end?'" In the eighth episode of the series A Radical Redesign for Amsterdam, Carin ten Hage and Geert Kloppenburg talk with Maurits van Hövell (Johan Cruijff ArenA). How do you keep a district with the three largest event venues in the country accessible and safe? They meet in the Operationeel Mobiliteitscentrum (Operational Mobility Center) to discuss the role of the city of Amsterdam, data sharing and keeping control. A Radical Redesign for Amsterdam is commissioned by the City of Amsterdam.

Listen to the podcast here: http://bit.ly/mvhovell

#DigitalCity
Yvonne Roos, Smart Health Amsterdam at Smart Health Amsterdam, posted

Smart Health Amsterdam is looking for an intern Communication & Events

Looking for an internship where you can develop new skills in communications, marketing, PR and event management? Do you have an interest in how AI & data science can contribute to a healthier society and better medical care? Want to work as part of a fun and inspiring team?

As Amsterdam's key network for data- and AI-driven innovation in the life sciences and health sector, Smart Health Amsterdam (Gemeente Amsterdam & Amsterdam Economic Board) is looking for an intern. Interested? Get in touch today.

https://smarthealthamsterdam.com/p/jobs-at--smart--health--amsterdam

#DigitalCity
Dimitri Bak, Strategic Communication Advisor at City of Amsterdam, posted

Amsterdam: circular city in 2050


Despite the corona crisis, numerous companies in the Amsterdam region are working on circular projects, business cases and research. Like the City of Amsterdam, they strive for a circular city by 2050.

Curious? Watch the video Amsterdam: circulaire stad in 2050. For more information, see the CACR page or amsterdam.nl/circulair.

#CircularCity
AMS Institute, Re-inventing the city (urban innovation) at AMS Institute, posted

Accelerating circularity: monitoring tool geoFluxus helps cities turn company waste into value

Featured image

Amsterdam 100% circular by 2050
The City of Amsterdam wants to be fully circular by 2050. That means that everything we use on a daily basis – from coffee cups to building materials – must consist of materials that have already had a previous life.

When it comes to household waste (among others vegetable, fruit and garden waste, paper, glass and textiles), the City has a duty to collect and process it. To give an impression: total household waste amounts to about 380 kg per person per year.

When comparing the amounts of household and company waste produced in the Amsterdam Metropolitan Area (AMA), only 11% is household related, whereas 89% is company waste, such as sludge, scrap metal, wood and scrap lumber, and waste flows very specific to particular company processes.

Compared to consumer waste flows, these company waste materials often enter the waste stream in relatively good condition. This holds, for instance, for glass and wood, which are suitable for making window frames. If managed differently, these used materials in company 'waste' flows could be integrated directly at the start of the design process for new products.

So… How to boost the efficient re-use of company waste materials within the AMA?

geoFluxus: Turning data into comprehensible maps and graphs
With geoFluxus, incomprehensible waste data tables (including import, export and treatment methods) are converted into comprehensible maps and graphs. This is extremely valuable for spatial strategies in many other cities worldwide, which is why TU Delft researchers Rusne Sileryte and Arnout Sabbe founded the spin-off company of the same name, geoFluxus, which recently went through the Arcadis City of 2030 Accelerator powered by Techstars.

Next to mapping waste, the geoFluxus team has connected open EU data on GHG emissions to the mapped waste flows, using transport, economic-sector and waste-treatment statistics. The resulting tool can provide governments with data evidence on which economic sectors, materials and locations hold the highest potential, not only for waste reduction but also for reducing carbon emissions. Governments can use the tool to monitor progress towards circularity.

One company’s waste could be another one’s gain
The insights into the waste data generated by geoFluxus enable users to develop and test the impact of spatial strategies for very specific locations, before actually implementing them. In addition, geoFluxus takes on a 'matchmaking' role: having companies select materials from other actors close by and re-use these, instead of transporting the materials for waste treatment outside the AMA... Click on the link to read the full article >>
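As an aside, the matchmaking idea can be illustrated with a toy sketch (this is not geoFluxus's actual method; companies, coordinates and the distance measure are made up): match each demand for a material to the nearest offer of the same material.

```python
import math

offers  = [{"company": "A", "material": "wood",  "xy": (0, 0)},
           {"company": "B", "material": "glass", "xy": (5, 2)}]
demands = [{"company": "C", "material": "wood",  "xy": (1, 1)},
           {"company": "D", "material": "glass", "xy": (9, 9)}]

for demand in demands:
    candidates = [o for o in offers if o["material"] == demand["material"]]
    best = min(candidates, key=lambda o: math.dist(o["xy"], demand["xy"]))
    print(f'{best["company"]} supplies {demand["material"]} '
          f'to {demand["company"]}')
```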

#CircularCity