The privacy advocate for the dark side

By Roel van Rijsewijk

 

And then we had the lockdown and her message started to resonate, loud and clear

Inspiring conversations at my kitchen table with Lokke Moerel

 

It is the third of March and I am waiting for my next ‘kitchen-table conversation’, this time with Lokke Moerel: lawyer, professor and a very effective privacy advocate. There is no privacy without security, so we are two sides of the same coin. And I am particularly looking forward to this meeting of both sides, since Lokke is an influential person: when companies have a data breach, she is the one invited into the boardroom for advice. People who know her describe her as someone with strong opinions, which she will share with you in no uncertain terms. I get all of that in the next hour. An intense and sometimes humbling experience.

 

At the time of our meeting I was still in denial about the spread of the coronavirus, as I was in denial about the possible consequences of data hunger in the advertising industry. She painfully exposed my naivety regarding the advertising ecosystem. And then we had the lockdown a couple of weeks later, with all the discussions around digital solutions and privacy, and her message started to resonate, loud and clear.

 

How a lawyer becomes involved in technology
Lokke arrives on her bike right on time and I serve her a strong coffee. After the second cup the caffeine makes her even feistier than she already is. I start by asking how a lawyer ends up in a technical domain like cyber security.

 

“Well, sometimes I myself wonder how I got here. I started as a lawyer in advertising and copyright. You know, ‘which diapers can claim the driest baby-buttocks’. That’s already a technical question for which you need to involve engineers and experts. I also got involved in copyright cases for software. I learned that I like working with technical people and got more and more involved in technical and IT-related cases.”

 

“When companies started to use ERP systems”, she continues, “privacy became a big topic. Before ERP, every country and department had its own administration; now all this data was being combined in one global database, raising significant privacy concerns and legal challenges with all the required data transfers. For example, in Germany you have to register the religion of your employees for the payment of church taxes, while this is illegal in other countries. How do you deal with that technically, ‘by design’, in the structure of the database and the allowed views of different users? That is when I introduced the concept of ‘Binding Corporate Rules’ to deal with these issues.”
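To make that ‘by design’ idea concrete, here is a minimal sketch of my own (not Lokke’s actual design): a hypothetical field-level policy in which a sensitive attribute is only visible to users in jurisdictions where processing it is lawful.

```python
# Hypothetical illustration of field-level privacy by design: the record,
# the policy and all names are invented for this sketch.
EMPLOYEE = {"name": "A. Schmidt", "country": "DE", "religion": "rk"}

# Assumed policy: only German users may see the religion field, because it
# is processed there solely for church-tax withholding; elsewhere it is hidden.
FIELD_POLICY = {"religion": {"DE"}}

def view_for(record: dict, viewer_country: str) -> dict:
    """Return the record as a user from the given country may see it."""
    visible = {}
    for field, value in record.items():
        permitted = FIELD_POLICY.get(field)  # None means unrestricted
        if permitted is None or viewer_country in permitted:
            visible[field] = value
    return visible

print(view_for(EMPLOYEE, "DE"))  # includes the religion field
print(view_for(EMPLOYEE, "NL"))  # religion is filtered out by design
```

In a real ERP system this logic would live in the database itself, as restricted views or row- and column-level security, rather than in application code.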

 

For those of you who don’t know: Binding Corporate Rules are a concept whereby a company, instead of trying to comply with all the local privacy laws and regulations and asking permission for every cross-border transfer of personal data, agrees with the authorities on one set of self-defined rules to govern its data transfers. An impressive legacy indeed.

 

“A constructive dialogue with regulators and supervisors is possible?”, I ask. You often hear the opposite narrative about archetypal, inflexible supervisors and rules-based regulators.

 

“Oh yes, that’s perfectly possible. We negotiated a template with the authorities on behalf of a couple of multinationals like Philips, Heineken, Shell and DSM, and they approved it, in the EU and globally. And we also negotiated with SAP to have them solve some of the issues by design. And that’s how this lawyer became involved in technology.”

 

“Regulations and technology are converging”, I add, “an alignment of legal code and software code.”

 

“I think this is even the core of my success: the fascination with technology and the capability to make it secure and private by design. This is what drives me.”

 

Stressed-out, ‘blue’ lawyers
Another important focus of her work is the ethics involved, the universal principles.

 

“As a multinational you can’t deal separately with all the different regulations and authorities in the different jurisdictions in which you operate. That is what I teach in Tilburg: instead of talking about the different international and national laws, you view it as one global system by looking for commonalities between the different jurisdictions. What are the common problems they try to regulate, and what are the universal principles behind this? If you design your technology based on these principles, which are ethical by nature, the technology can scale globally.”

 

“And then compliance is the outcome”, I pitch one of my favorite taglines.

 

“Well, let’s say for 80% yes. There will be some details left at the national level that you need to take care of, based on careful considerations of the costs and benefits involved.”

 

“For some lawyers risk-based compliance is a very difficult concept, like being a little bit pregnant”, I look at her and say. “That concept doesn’t seem to bother you at all.”

 

“You have to be forward-looking and risk-based. I do see that this leads to a lot of anxiety and stress for many lawyers, who are more ‘blue’ by nature. When I had colleagues from the litigation department join our team, they would find our approach, well, difficult. My team was more ‘yellow’: forward-looking and without a rigid frame. Because that is the complex reality that companies have to deal with. And if that leads to anxiety and stress then you are not fit for this job”, she concludes.

 

The business-savvy, creative and innovative compliance officer
“As a privacy officer you have to be forward-looking, think outside of the box and enable innovation. The old-school privacy officer is extremely conservative and compliance-driven. In the US the chief privacy officers are more business-savvy, creative and innovative thinkers.”

 

This surprises me. I thought that in Europe we tend to be more flexible and principle-based, and with the US being a more rules-based society, I would expect more old-fashioned, ‘tick-the-box’ type privacy officers there.

 

She explains that in the US there is no law that requires companies to have a privacy officer, but they all have one. “In Europe you have to have a privacy officer by law, which makes it more of a compliance function. The US might be more rules-based, but companies there are first and foremost worried about their reputation and brand, which makes it more of a risk function, focusing on privacy by design. You can’t wait until the business has developed something and then deliver a verdict on whether it’s OK or not. It’s a different type of privacy officer, one who is involved in the design from the start, so that you can jointly agree that all privacy risks are mitigated. This really requires upskilling and reskilling of the typical privacy officers we have here”, she concludes, and then asks, “Do you see?”, to check if I got the message.

 

“Yes, I see it in security as well: the same old-school security officers who build paper-based security systems and perform security tests on new technology after the fact”, I can confirm. “It also has to do with what they are told to do; if your brief is to make sure nothing goes wrong, that will make you very risk-averse and conservative by nature.”

 

“There is a growing divide between this type of compliance officer and the business”, she continues. “These compliance officers are deliberately left out of projects because they will obstruct and slow things down. That is a big problem. We need to stop with these rigid three lines of defense, because they hinder innovation.”

 

“Hallelujah!”, I exclaim. “These three lines of defense were intended as a defense in depth against a common threat. In reality every line hides in its own trench and they throw bombs at each other.” A frustration that comes with working for many years as a security consultant for financial institutions.

 

Lokke also works with financial institutions all over the globe. “They often ask for my advice when there is an internal conflict concerning some kind of innovation the business wants and compliance says ‘no’. The board wants to take a certain risk and compliance states it is against the rules. The only way to solve these types of conflicts is to innovate on compliance in multidisciplinary teams and I train them how to do this. It is strange how a lawyer like me steers them away from rules and regulations to focus more on design and behavior.”

 

“Then you are more a consultant than a lawyer”, I observe.

 

“I am totally a consultant!”, she exclaims. “Risk-based compliance is not about choosing between option A or option B; it is about creating option C to make it even better. So instead of saying ‘no’, think of another way to achieve the desired outcome in a compliant way. For the old generation of compliance officers this is a bridge too far; they simply don’t have the skills for it. It stresses them out and leads to endless discussions. Don’t see the rules as a constraint, but innovate to do what you want to do in compliance with the GDPR.”

 

Working for the dark side
And then she gets really excited: “You know, the totally annoying thing is that in the US they are so much better at this than we are here. These are OUR rules, and they are better at complying with them, with a superior user experience, than we are. We are more constrained by these rules than they are. The GDPR has become a competitive advantage for the big US tech firms.”

 

It’s ironic that these strict EU privacy laws, also intended to curb the dominance of US tech firms, have the opposite effect. “Do you help US companies with European regulations?”, I ask.

 

“Yes, I work for the dark side too”, she smiles. “As a privacy advocate you can sit on the sideline and complain but working for the big tech firms makes you more effective. Of course, they will not always follow my advice completely, but I have influence and can guide them in the right direction. And I learn a lot and bring that expertise back to Europe.”

 

“And what is the right direction according to you?”, I venture. “What is it you are trying to achieve?”

 

What makes her angry
“I am both a technophile and a technophobe. I am fascinated by new technology and the huge benefits it can bring to our lives. But it’s like the first car, without brakes: I am also very aware of the downsides, and we need to mitigate these on the fly. Otherwise we end up in a society I don’t want to live in.”

 

“Could you describe this society you don’t want to live in?”, I ask, feeling we are getting to the heart of the matter.

 

“What makes me really angry is the whole online advertising ecosystem”, she says, coming to the point. “That the Googles of this world have their own system with your data, to help them improve their services and make some money with advertising, is fine with me. What bothers me is that they also collect our personal data outside of their perimeter, monitoring our behavior on other websites and sucking up all this data to build profiles covering our behavior across the whole internet. For what? Just to show me even more targeted advertising, making it a one-way mirror. With all that data and their advanced data analytics, they know me even better than I know myself, just for advertising purposes. And then this data is sold to the highest bidder in online auction systems, ending up in the wrong places. There is a global personal data exchange where companies do nothing more than make money trading personal data. It’s easy money, made at the expense of people who are not even aware of what is happening. Let all those smart data scientists do something useful like improving health care, instead of selling advertisements. It’s such a waste.”

 

“Let all those smart data scientists do something useful like improving health care, instead of selling advertisements. It’s such a waste.”

 

I am not totally convinced that online advertising is the big problem we should all worry about most. “Maybe it’s a waste of all that brain power, but they give me more targeted advertising. Is that really such a bad thing?”

 

“Don’t be naive. What is essential for a good life is access to all kinds of very critical commodities: health care, education, government services, money and financial services. These big tech firms that collect all this personal data about you are already moving into health care, education and financial services. It’s no longer the case that you have access to all these critical services and then Google does some advertising on the side. The online advertising ecosystem is converging with all these essential goods where data plays an important role, using algorithms that are incomprehensible at best and unethical at worst. If you are on the right side of these equations it’s all fine. But if you end up on the wrong side you enter a Kafkaesque world from which there is no escape.”

 

I get the point, I think. “So what you are saying is that the current online advertising ecosystem is a warning: we should ensure that this ecosystem, with its data markets and dominant tech firms, is not transplanted to other, more critical sectors.”

 

“These types of concentrations of data, with such a huge range of products and services, I mean, look at Amazon! And they are already standing up and saying: ‘we could provide you with the digital platform in healthcare, education and financial services’. And they could, if we let them. They are extremely well organized, have all the best talent and unlimited budgets for innovation. Talent and money that our universities and hospitals don’t have. Money they made doing things that are, in my view, illegal. And they are moving beyond the purely online, digitizing our offline world with smart homes and autonomous cars. You see what I mean?”

 

“Ok, I see what you mean”, I admit. “Maybe advertising is innocent, but they are moving outside of that business model.”

 

“Totally, but advertising is also not that innocent. It’s based on free personal data collection outside of our control. Maybe you wonder why I make such a fuss about the use of cookies? Because it’s the root of the problem. If we can’t control the collection of personal data, we will never control the ecosystem.”

 

Consent as permission to do evil

 

“What are your views on the cookie-law?”, I ask. “I find these cookie-walls extremely annoying.”

 

“We don’t need a cookie-law, it’s all covered by the GDPR”, she says, and continues by explaining the problem with consent. “A cookie-wall is not free consent. Nobody feels good clicking ‘yes’ on the use of cookies before going to a website. There are clear rules governing what is allowed and what is not. That we now let people, who have no idea what they are giving permission for, decide on what is allowed is just wrong. It encourages the tech firms to use every trick in the book to snatch your consent for unlimited data collection and use. I mean, we have even had discussions in the European Council about whether pre-ticked boxes count as consent, for crying out loud! Let’s not discuss what is legitimate consent; let’s discuss what we think is a legitimate process for data collection to start with. Collecting huge amounts of personal data and using it for misleading advertising and other harmful purposes is wrong, with or without your consent.”

 

“A cookie-wall is not free consent. Nobody feels good clicking ‘yes’ on the use of cookies before going to a website.”

 

“Consent as blanket permission to do bad things”, I summarize. And as the second cup of coffee starts to kick in, she continues on this thread.
“Yeah, excuse me but we completely lost it! We need to return to what is legitimate, not what is legal. It’s a crazy world where our expensive judicial system has to have an opinion on a pre-ticked box. And these deep-pocketed companies keep on litigating, because it distracts us from the real question. The longer they keep up this legal maneuvering, the more we lose focus on where to draw the line. That’s what I have against all of this: ethics hidden behind a cookie-consent wall”, and she slaps her hand on my kitchen table.

 

Moral Mist
It seems that this is what drives and motivates her, what she is most passionate about. “So, this is your fight?”

 

“Well I am not an activist if that’s what you mean”, she responds. “I need to operate from the inside, to really see what is happening in the whole ecosystem and then try to restrain it.”

 

“How can we restrain it?”, I wonder.

 

“We need to draw a clear line: collect and use data only for the service you offer, and stop snatching consent from consumers to collect data all over the internet for all kinds of purposes for which there is no legitimate interest. And don’t hide your illegal practices in unreadable terms and conditions. It can’t be that you need to read a book full of legal text to understand that you are giving your children away. That is not how you protect the consumer.”

 

“Yeah, excuse me but we completely lost it! We need to return to what is legitimate, not what is legal.”

 

“Yes, ‘I have read and agreed with the terms and conditions’ is maybe the biggest lie on the internet”, I concur. “But drawing a clear line is not that easy. All these technological innovations are new and we don’t know exactly where to draw the line yet; we are discovering it as we go. At the core of these discussions is what I call the digital dilemma: the more data is collected and used, the more value it can potentially bring, while creating privacy and security issues at the same time”, I try.

 

“Well, it’s not a digital dilemma at all, and if you like alliterations, let’s call it moral mist”, she counters immediately. “If you steal money from my wallet that is obviously wrong. If you follow me all day and night with a camera and record everything I do, that is clearly undesirable. But stealing my personal data with invisible technology is right? I don’t think so. In the digital world morality is being watered down compared to what we find acceptable in the physical world. There is no sense or awareness that these online practices are clearly wrong. When you are inside that world this is fascinating and disturbing at the same time. How can it be that you don’t see the evil of what you are doing? So, it needs people like me to say all the time: what the …. are you doing!?”

 

“That lack of morality, is that on purpose or just plain ignorance?”, I would like to know. “Do these big tech firms have the right intentions but lack the competence, or are they just untrustworthy and evil people?”

 

“It’s more subtle than that. It’s an incremental process in which morality is stretched to the breaking point. What is the best way to get consent from our users? What if we use this type of box, or that color scheme? If there is no clear line you should not cross and no ethical guidance from the top, then you as a developer will go as far as needed to get consent, to collect all that data and use it in any way you can. Then the regulators have to intervene hard with the big tech firms, and the process starts all over again. And they are just doing what everybody else in the market is already doing; they have lost all sense of right and wrong. It is all these little decisions by all the engineers and data scientists involved that eventually lead to an outcome that is just horrible.”

 

“So it’s incompetence, bordering on criminal negligence”, I conclude. “That is what makes you angry: incompetence, combined with a huge responsibility they apparently can’t handle.”

 

“Compare it with Volkswagen cheating on emissions tests: a couple of engineers tinkering with the software to achieve optimal results. Was the top aware of what was going on? Maybe not, but they did demand optimal results, with no clear ethical guidance.”

 

“And everybody was doing it.”

 

“And everybody was doing it! The same with online marketing: they are just engineers and data scientists trying to achieve optimal results, crossing all kinds of ethical boundaries in the process.”

 

I am not that innocent (Oops! I did it again)
Then I try to challenge her one last time. “Optimal results in online advertising are not necessarily a bad thing. What annoys me is when the result is not optimal. Take retargeting, for example. You have already booked that hotel in Paris, and for the rest of the week you are being targeted with hotels in Paris. I wish it was optimal.”

 

“Do you not see you’re being manipulated?”

 

Lokke drains the last sip of her coffee and goes for the kill. “Now let me come back to your view on the innocence of advertising. Of course it is annoying if Zalando targets you with shoes you do not want. It becomes a real problem when the advertising becomes very relevant. Do you not see you are being manipulated? The real danger is when the targeting is optimal. Think about it.”

 

I think about it and respond: “That is what I tried to say: then you receive relevant information. Nothing wrong with that, is there?”

 

“There is a very thin line between preferential targeting and manipulation. Abusing your sensitivities instead of catering to your preferences. Manipulating the sensitivities of pregnant women, impotent men, insecure teenagers; I can go on and on and on. Facebook can measure emotions in real time to manipulate you. Is this still preferential targeting, or persuasion profiling? And they abuse these technologies for political purposes, and for access to financial services, healthcare and government services. It is not that innocent anymore. And the real danger is when you don’t notice it, when they have become really good at it.”

 

“Yes, then it has become relevant. I still don’t have a problem with that.”

 

“That is the view of a very privileged person”, she retorts.

 

Ouch… that last remark puts me squarely on the defensive. I continue by pointing to other areas where personal data is collected on a massive scale outside of the advertising industry: tax services, health care, public transport.

 

Big brother is a public-private partnership
“Look at what is happening in the US”, is her reply. “If the data has already been collected by the tech firms, then the government will demand access and use it for its own purposes. What was that quote in the Guardian? ‘Big brother is a public-private partnership.’” She has made her point and I need to revise some of my beliefs and opinions. We continue to talk for a while about the privacy and security issues in healthcare and the problems with governance in decentralized networks and open source projects. Both topics have become very relevant in the current discussions on proposed tracing apps to fight the coronavirus.

 

Between our conversation at my kitchen table and the writing of this article, the world has changed dramatically. We wave goodbye as she climbs on her bike, both of us blissfully unaware of the worldwide pandemic that will break out in the following weeks, shining a floodlight on her message.

 

Epilogue: privacy by design after the lockdown
As the pandemic unfolds and we enter the lockdown, I follow the discussions on the use of data to fight the pandemic and the related privacy concerns. Fresh from my conversation with her, I can’t help thinking ‘what would Lokke Moerel think of all this?’.

 

I see the transatlantic divide over how to use location data to fight the coronavirus. The US Centers for Disease Control and Prevention (CDC) has turned to data provided by the mobile advertising industry to analyze population movements in the midst of the pandemic, as Lokke predicted. Owing to the lack of systematic privacy protections in the US, the data collected by advertising companies is often extremely detailed: companies with access to GPS location data, such as weather apps or some e-commerce sites, already sell that data on for ad-targeting purposes. That data provides much more granular information on the location and movement of individuals than the mobile-network metadata received by European governments. All things Lokke has warned me about.

 

And then our health minister Hugo de Jonge announced the desire for two apps as a prerequisite for easing the lockdown: one app for self-diagnosis and remote guidance, and one app for contact tracing. What would Lokke Moerel think of these, I wonder.

 

The now infamous ‘appathon’ is announced, where seven vendors present and discuss their proposals for a contact tracing app in full transparency. The whole weekend I binge-watch the tragicomedy unfold during the live stream, followed by thousands. One of the vendors presents a proposal based on the DP-3T protocol, an open source project that enables contact tracing via Bluetooth without collecting any personal or location data centrally. Google and Apple are working on an API and some changes to their respective mobile operating systems to make contact tracing based on the DP-3T protocol possible. And as further proof that the world has turned upside down since our meeting, France has called on Google and Apple to relax their privacy protections to enable a more centralized app, ‘tied’ to the country’s healthcare system.
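For readers curious what ‘decentralized by design’ means here, below is a much-simplified sketch of my own, loosely modeled on the DP-3T idea; the real protocol and the Google/Apple API handle key rotation, epochs and Bluetooth broadcasting far more carefully, and all names in this toy version are mine.

```python
# A toy, DP-3T-style decentralized contact tracing sketch (illustration only).
import hashlib
import hmac
import os

EPOCHS_PER_DAY = 96  # e.g. a fresh ephemeral ID every 15 minutes

def next_day_key(sk: bytes) -> bytes:
    """Tomorrow's secret key is a one-way hash of today's (a hash chain)."""
    return hashlib.sha256(sk).digest()

def ephemeral_ids(sk: bytes) -> list:
    """Derive the day's rotating ephemeral IDs from the daily secret key."""
    seed = hmac.new(sk, b"broadcast key", hashlib.sha256).digest()
    return [
        hmac.new(seed, i.to_bytes(2, "big"), hashlib.sha256).digest()[:16]
        for i in range(EPOCHS_PER_DAY)
    ]

# Phone A broadcasts its ephemeral IDs over Bluetooth; phone B only stores,
# locally, what it has overheard. No central server learns who met whom.
sk_a = os.urandom(32)
heard_by_b = {ephemeral_ids(sk_a)[7]}  # B overheard one of A's IDs

# If A tests positive, A publishes only its daily key(s). Every phone then
# recomputes the ephemeral IDs and checks for matches on the device itself.
exposed = any(eid in heard_by_b for eid in ephemeral_ids(sk_a))
print("Phone B had a risky contact:", exposed)
```

The key property: a server, if there is one at all, only ever sees the daily keys that people who tested positive choose to upload; who met whom is computed on the phones themselves.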

 

“What concerns me about such an app is the number of false alarms it will generate, or no alarm when it should give one.”

 

The technophile in me gets really enthusiastic about the decentralized solution. This is a great example of privacy by design as advocated by Lokke. While I try to find out whether Lokke Moerel is one of the experts involved in the appathon, I discover a recent Volkskrant article featuring her. It appears that right after our meeting, she and her husband went into self-quarantine after receiving notice of corona infections among the attendees of a workshop in Brabant that her husband had attended. Her husband falls ill, and a week later so does Lokke. They use the OLVG corona check-app to fill out their symptoms and daily temperatures. Medical staff look at the results and provide remote guidance.

 

“A useful app”, is her comment in the article. “The app gives good insight into the course of the disease, and it is comforting that medical professionals will contact you if your condition deteriorates. Moreover, medical experts can use the data for more insight into the spread of the disease and the related symptoms. And your privacy is safeguarded.”

 

However, she is very critical of a contact tracing app. She confirms that developers can build such an app ‘private by design’ using Bluetooth technology. So far so good. But as we have learned, Lokke is not only a technophile, she is also a technophobe.

 

“What concerns me about such an app is the number of false alarms it will generate, or no alarm when it should give one. And we don’t know how such an app will influence the behavior of people, businesses and institutions. If you ease the lockdown measures with the idea that this app will control infections, you create a false sense of security”.

 

And her comments in the Volkskrant on the announced appathon are spot-on in hindsight: “The app is not the solution, it’s only a means to an end. You can’t develop an app separately from all the measures you need to take outside of the app. For example: what can businesses demand of their employees? Would access to public transport be conditioned on the use of this app? Will there be extra testing facilities available for people who have received a contact alert? How people and organizations will react to such an app is complex. I am afraid you will never figure this out in one weekend while staring blindly at the technology. The design of the app should be the end of the process, not the start.”

 

So, after the appathon it’s back to the drawing board. The Minister has committed to quickly forming a team with the right developers, and also with experts in areas such as information security, privacy, fundamental rights, national security and inclusion. I strongly recommend and hope they invite Lokke Moerel to this team, so she can do her good work from the inside, as is her preferred modus operandi. I personally would love to work with her on an open source project to realize a digital application to support contact tracing.

 

About the author
Roel van Rijsewijk is a cyber security consultant and evangelist with over 20 years of experience helping organizations become cyber resilient. He is a keynote speaker and the author of ‘Cyberrisico als Kans’ (The Upside of Cyber Risk).