Trust and Mistrust in a Digital Society


How intelligent systems put our trust to the test and what to do about it

By Erik Hoorweg and Martijn van de Ridder

The Digital Society is based on the trust of customers and citizens that private and public organizations handle data with care and integrity, that personal data does not leak, and that the truth is not manipulated.

 

For a long time we trusted the digital society blindly. Not so much because citizens had great faith in institutions and companies, but because the knowledge and awareness needed to recognize that this trust might be unjustified were lacking. Due to the increase of (and news about) data leaks, fake news, denial-of-service attacks, ransomware, troll factories and attention to the GDPR, customers and citizens are finally becoming more aware of the risks of the Digital Society. Research by Capgemini [1] and GfK shows that trust is a delicate balance: 60% of Dutch society has ‘some’ trust in the digital society, while the share with ‘little’ trust (23%) exceeds the share with ‘much’ trust (19%) by only a small margin.

 

Since most business models are now based on (personal) data, the digital society is coming under pressure. What happens when customers and citizens withdraw their consent? What if they no longer allow organizations access to their data? And what is the next big risk to our confidence in the Digital Society? Our personal autonomy is being threatened by applications of intelligent systems powered by artificial intelligence (AI). According to Gartner [2] and Forrester [3], this is one of the most important trends for the coming years.

 

These applications combine specific algorithms that together behave intelligently, for example:

– The rise of smart public spaces such as smart cities. Most Dutch municipalities are busy equipping public space with sensors (with or without face recognition), analysing the data and influencing citizens’ behaviour, for example to improve traffic flow, air quality and services to residents.

– The use of ‘predictive policing’, in the Netherlands for example through the Crime Anticipation System. Based on forecasts of the ‘hot times’ and ‘hot spots’ of crime, the police can deploy their capacity more effectively. This works preventively, but can also be used reactively, with the police catching perpetrators in the act. What matters is not how many thieves are caught, but how much crime can be prevented. (A minimal sketch of such a forecast follows this list.)

– Personal assistants such as Siri, Alexa and Google Assistant, which understand people’s intentions and anticipate their needs.
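To make the predictive policing example concrete, the following minimal Python sketch shows the basic idea of a ‘hot spots / hot times’ forecast: count historical incidents per grid cell and time slot and rank the busiest combinations. This is an illustrative simplification, not the actual Crime Anticipation System; the incident data and the grid size are assumptions.

```python
# Minimal illustration of a grid-based "hot spots / hot times" forecast.
# NOT the Crime Anticipation System itself; incident data and the ~250 m
# grid size are assumptions for the sake of the example.
from collections import Counter
from datetime import datetime

# Hypothetical historical incidents: (latitude, longitude, timestamp)
incidents = [
    (52.3791, 4.9003, datetime(2018, 3, 5, 22, 15)),
    (52.3794, 4.9011, datetime(2018, 3, 12, 23, 40)),
    (52.3702, 4.8952, datetime(2018, 3, 6, 9, 5)),
]

def grid_cell(lat, lon, size=0.0025):
    """Snap a coordinate to a coarse grid cell (roughly 250 m, an assumption)."""
    return (round(lat / size), round(lon / size))

# Count incidents per (grid cell, weekday, hour) bucket.
counts = Counter(
    (grid_cell(lat, lon), ts.weekday(), ts.hour) for lat, lon, ts in incidents
)

# The highest-scoring buckets are the predicted 'hot spots' and 'hot times',
# which a planner could use to schedule extra patrol capacity.
for (cell, weekday, hour), n in counts.most_common(3):
    print(f"cell {cell}, weekday {weekday}, {hour}:00 -> {n} incident(s)")
```

Real systems use far richer models and data, but the essence is the same: the forecast is a statistical extrapolation of where and when incidents were recorded before.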

 

From the perspective of citizens’ and customers’ trust, a number of observations can be made about the application of AI. Traditionally, the government responds to actual acts of citizens: violations are penalized by fines, imprisonment or other measures. Now the government in general, and the security domain in particular, will increasingly react to citizens based on predicted behaviour. In this scenario, the criterion for intervention becomes predicted ‘deviant behaviour’ rather than ‘punishable behaviour’. Moreover, based on AI, investigative services will focus more on behaviour that is meaningless in itself, the so-called weak signals: correlations and combinations of behaviour that statistically can lead to criminal activity. Investigative services then act as a kind of psychiatrist, and no longer only on reasonable grounds of suspected guilt. If this trend continues, innocent citizens will have to make more of an effort to appear visibly innocent and predictable. In this way, the government ultimately deprives its citizens of their personal autonomy. And let’s be honest: social and economic progress is not driven by ordinary, predictable citizens. It takes innovators who keep pushing the boundaries and who are unpredictable. Deviant behaviour cannot, therefore, be viewed only as undesirable.

 

It is vitally important to be aware that the algorithms of intelligent systems are not automatically reliable or responsible. They cannot be deployed indiscriminately, as the Internet initially was, without any awareness of the security and privacy consequences. Three points deserve attention:

 

– It is important to realize that intelligent systems do not measure causality but correlations. The system assumes that there is a relationship, but whether that relationship is real and what causes it, the system does not know (see the sketch after this list).

 

– The correlations are only as reliable and representative as the data on which they are based. For example, the first virtual beauty contest with an AI jury [4] yielded mostly white winners. The system turned out, of its own accord, to rate darker skin less favourably, because the photos on which it was trained contained too little ethnic variation.

 

– One of the biggest misunderstandings about intelligent algorithms is that they are neutral and therefore make fair decisions. Leading mathematician Cathy O’Neil tackles this myth in her bestseller ‘Weapons of Math Destruction’ [5]. The essence of her message is that algorithms contain prejudices and moral assumptions of which we are often insufficiently aware. This awareness is an essential starting point for preventing the application of intelligent algorithms from going wrong.
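The first two points can be made tangible with a minimal Python sketch using purely illustrative, assumed data: two variables that are strongly correlated only because a hidden common cause drives both, and a training set whose skewed composition already predicts skewed outcomes.

```python
# Minimal sketch: correlation is not causation (illustrative, assumed data).
# Ice-cream sales and drowning incidents are both driven by temperature
# (a hidden common cause); the system only "sees" that they move together.
from statistics import correlation  # Python 3.10+

temperature = [14, 18, 22, 26, 30, 33]           # the hidden common cause
ice_cream_sales = [10 * t + 5 for t in temperature]
drownings = [t // 4 for t in temperature]

print(correlation(ice_cream_sales, drownings))   # close to 1.0, yet no causal link

# The same caveat applies to representativeness: a simple count of the
# (assumed) training labels already reveals what the system can learn.
from collections import Counter
training_photos = ["light", "light", "light", "light", "light", "dark"]
print(Counter(training_photos))                  # heavily skewed training set
```

A system trained on such data will faithfully reproduce the correlation and the skew; it has no way of knowing that the one is spurious and the other unrepresentative.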

 

To maintain trust in the application of AI, it is important not to lose sight of the human dimension. One of the most essential steps to enhance accountability is a data-lab approach in which every decision on the path from innovative idea to the implementation of intelligent, algorithmic applications is traceable and transparent. This approach has the following critical success factors:

 

– Use a case study funnel approach: every step from idea to implementation of intelligent applications goes through a fixed, structured process that can be traced at any time, showing which decisions were taken and when. Even when an action is taken on the basis of a model, logging can be used to determine which data and steps led to that result (a minimal logging sketch follows this list).

– Model management: as soon as an algorithmic model has been developed, it is actively managed. The moment a model is applied in practice, it is adjusted on the basis of the results it produces there.

– Artificial Intelligence Impact Assessment: each algorithmic model is tested prior to development by means of an impact assessment. This consists not only of a Privacy Impact Assessment (PIA) [6], but also of the norms arising from other relevant laws and regulations and an ethical framework of standards, as proposed for example in the UK report ‘AI in the UK: ready, willing and able?’ [7]. In short: is it possible, is it allowed and do we want it? Finally, it is made explicit which undesirable effects could occur and how these will be dealt with.

– Governance: the case study funnel approach is managed by evaluating, after each phase, whether the project is still working towards the stated objectives according to the guidelines. After each phase, an initiative can be stopped, for example when there is not enough evidence to validate the hypothesis or when there are privacy concerns.
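As an illustration of the traceability described under the case study funnel approach, the following minimal Python sketch logs each go/no-go decision so that it can be reconstructed afterwards. The phases, field names and example entries are our own assumptions, not a specific Capgemini tool.

```python
# Minimal sketch of a traceable case funnel: every go/no-go decision is logged
# so it can be reconstructed later. Phases and field names are assumptions.
import json
from datetime import datetime, timezone

audit_log = []  # in practice: an append-only, tamper-evident store

def record_gate(phase, proceed, rationale, data_sources):
    """Log the decision taken at the end of a funnel phase."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "phase": phase,
        "proceed": proceed,
        "rationale": rationale,
        "data_sources": data_sources,
    }
    audit_log.append(entry)
    return proceed

# Example run: the idea passes exploration, but is stopped at validation
# because the hypothesis is not supported and a privacy concern remains.
record_gate("exploration", True, "clear use case, PIA started", ["open data"])
record_gate("validation", False, "hypothesis not validated; privacy concern", ["sensor feed"])

print(json.dumps(audit_log, indent=2))
```

In practice such a log would also record the model version and the exact data behind each automated action, so that governance evaluations after each phase, and questions from regulators, can be answered from an auditable trail.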

 

This approach not only ensures a transparent process within organizations; it also ensures transparency towards regulators, customers and citizens. Actively involving citizens and customers in the design and control of intelligent systems helps to ensure social acceptance, maintain trust and avoid unpleasant surprises at a later stage.
 
About the authors

Drs. Erik Hoorweg MCM is Vice President at Capgemini Invent, the innovation and strategy brand of the Capgemini Group. Erik is responsible for the Public Market sector and likes to discuss societal issues: erik.hoorweg@capgemini.com

 

Martijn van de Ridder MSc works at Capgemini Insights & Data. Martijn is a principal consultant active in the field of (Business) Intelligence, Big Data & Analytics, with a focus on public order and the security sector.

 

  1. https://www.trendsinveiligheid.nl/
  2. https://www.gartner.com/newsroom/id/3812063
  3. https://go.forrester.com/blogs/top-technology-trends-2018-2020
  4. www.beauty.ai
  5. Cathy O’Neil, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’, Crown Publishing Group/Penguin Random House
  6. https://autoriteitpersoonsgegevens.nl/nl/zelf-doen/privacycheck/privacy-impact-assessment-pia
  7. https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/10002.htm