It’s all about digital transformation

By Joris Willems – Partner and Head of Technology Group at NautaDutilh


NautaDutilh is an international law firm that advises clients under Dutch, Belgian and Luxembourg law. It has offices in Amsterdam, Rotterdam, Brussels, London, Luxembourg and New York. With over 400 lawyers, notaries and tax advisers, it is one of the larger law firms in the Benelux. For this edition, we spoke with Joris Willems, Partner and Head of Technology Group at NautaDutilh, about technology, smart contracts, digital transformation and more. Joris deals with complex multi-jurisdictional transactions on a daily basis.


Question 1) The brief introduction above already gives a good impression of you. Can you tell me a bit more about your career path and your current position at NautaDutilh?
When I started out at Höcker Advocaten in 1999, I was regularly mistaken for a kind of helpdesk, even though I was in fact an IT/data lawyer. Could I perhaps help with computer malfunctions and problems, please? It still makes me smile thinking about back then. Given all the developments of the past decades, it’s now clear to everyone that IT/data law is a serious discipline, not just the helpdesk.


I moved to NautaDutilh, a ‘local elite firm’, in March this year to lead the Technology Group. I think that this type of independent firm is the way forward. For truly strategic issues, such as digital transformation and (new) technologies, you can see a flight to quality taking place – towards firms that can provide that coverage across the board. You often hear it said that ‘nobody gets fired for hiring IBM’. That still holds true in the legal profession.


The Technology Group plays an important part in this. Every company faces technological issues, and we want to connect with them as effectively as possible. In order to make that connection, the Technology Group brings together various areas of legal expertise. NautaDutilh’s ESG department (Environmental, Social & Governance) has a special role to play in this regard. That department is like a thread running right through the fabric of the Technology Group, because we consider it an important starting point in all digital issues. Whether it’s the inherent bias of algorithms or pollution caused by data centers, ESG plays an increasingly important role – and not only in the boardroom.


Question 2) You are involved in digital transformation and (new) technologies in data and privacy, as well as in cybersecurity matters, on a daily basis. What issues are you most occupied with at the moment, and do you see a certain trend or development coming?
Digital transformation is what occupies most of my time. Automation has become the core of digital transformation. With AI, machine learning and robotics being integrated into businesses, it will be possible to automate a whole variety of processes and models. At the same time, such projects touch upon data, privacy and cybersecurity. It brings together all the various disciplines that make my work fun.


There are a number of interesting developments that are important for our clients. Big data is one such issue. It sounds like a trendy term, but it belongs in every business. Financial institutions and platform companies have been battling for years over the relationship with the customer. Who builds that relationship and is therefore better able to reach the customer? That battle over data is becoming more and more subject to restrictions. We are on the threshold of a surge in regulation of the tech market. New legislation is being busily drafted in America and Brussels that will force superpowers like Google, Facebook, and Twitter to be more transparent about their operations. That kind of legislation is going to affect other companies too. There’s also another development that may not appear directly linked to technology and data, but which will have a major impact on it. Geopolitical relations are becoming increasingly influential. Take Huawei, for example, whose technology had to be (partly) removed from critical Dutch infrastructure after pressure from the US government. Or the World Trade Organization’s ruling condemning Saudi Arabia’s role in facilitating the transmission of pirated content from Qatari-owned sports network BeIN Media Group. Technology and data are becoming increasingly important in those political relations.


Question 3) I often read that there is a great legal and ethical challenge regarding the combination of deep learning and smart contracts. What exactly is this challenge, and has it shifted compared to two years ago?
Actually, there is not one challenge but many when it comes to deep learning and smart contracts. There are issues with bias, security and responsibility, to name only a few.


Bias seems to be the most frequently discussed ethical challenge. The problem is that deep learning has the ability to independently make decisions—such as which financial products to trade—and continuously adapt in response to new data, without making any ethical considerations. Depending on how a dataset is compiled, it is possible that the data reflects certain biases—such as gender, racial, or income biases—that could influence the behavior of a system based on that data. These systems’ developers intend no bias, but there are various practical examples of bias or discrimination in fields such as credit scoring, judicial sentencing and recruiting. Algorithms typically rely on probability, such as whether someone will default on a loan, and this bias may lead to unfair outcomes.
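The mechanism can be made concrete with a minimal, self-contained Python sketch. The data, group labels and "model" here are entirely hypothetical: a naive system that simply learns historical approval rates per group will faithfully reproduce whatever bias was already baked into its training data, without any intent on the developer's part.

```python
# Illustrative sketch (hypothetical data): a naive "model" that learns
# approval rates per group from biased historical loan decisions and
# then reproduces that bias in its own predictions.
from collections import defaultdict

def train(history):
    """Learn the historical approval rate for each group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in history:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def predict(rates, group):
    """Approve whenever the group's historical approval rate is at least 50%."""
    return rates[group] >= 0.5

# Historical decisions already skewed against group B,
# even though creditworthiness is assumed identical.
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 3 + [("B", 0)] * 7

rates = train(history)
print(predict(rates, "A"))  # True  -> group A applicants approved
print(predict(rates, "B"))  # False -> group B applicants rejected
```

Nothing in the code mentions the groups' actual creditworthiness; the unfair outcome follows purely from the skew in the historical dataset, which is exactly the point made above.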


In addition, bugs and vulnerabilities in smart contracts are a real challenge. These can lead to serious financial losses, for instance in blockchain-based financial and business transactions. One of the main attributes of smart contracts is of course their immutability, but when a mistake has been made at the programming stage, that mistake cannot, in principle, be changed or rectified. A smart contract is still a human product, with its flaws and unwanted outcomes. The ability to change it might then be preferable to its immutability.
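A toy Python sketch can illustrate why immutability turns a programming mistake into a permanent one. This is a hypothetical class, not any real blockchain API: it merely mimics the property that a contract's logic is frozen at deployment.

```python
# Toy sketch of a "smart contract" whose terms are frozen at deployment:
# once deployed, even a buggy payout rule can no longer be patched.
class ToyContract:
    def __init__(self, payout_rule):
        self._payout_rule = payout_rule
        self._deployed = False

    def deploy(self):
        """After this point, the contract's logic is immutable."""
        self._deployed = True

    def update_rule(self, new_rule):
        if self._deployed:
            raise RuntimeError("contract is immutable once deployed")
        self._payout_rule = new_rule

    def execute(self, amount):
        return self._payout_rule(amount)

# Deploy with a buggy rule that pays out double by mistake.
contract = ToyContract(lambda amount: amount * 2)
contract.deploy()
print(contract.execute(100))  # 200 -- the bug executes exactly as written

try:
    contract.update_rule(lambda amount: amount)  # attempted fix
except RuntimeError as e:
    print(e)  # contract is immutable once deployed
```

On a real blockchain the "deployed" flag is enforced by the network itself rather than by a class, which is why in practice a flawed contract can often only be abandoned and redeployed, not repaired.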


Responsibility is also a challenge, faced by lawmakers in particular. Because machine learning is typically embedded within a complex system, it is difficult to establish what led to a certain error and which party (for example, the algorithm developer, the system deployer, or a partner) was responsible. Was there an issue with the algorithm, with some data fed to it by the user, or with the data used to train it, which may have come from multiple third-party vendors? And when it comes to blockchain, with its special feature of decentralization, there may not be any single person to point at, as responsibility is shared among all entities of the blockchain network.


“At the end of the day, a person wants to interact (also) with a person and not only a computer.”


Question 4) What I find interesting is the liability for acts and omissions of AI platforms or AI agents. What is the status quo? Do you notice a change in views since this year’s publication of the European AI Proposal?
We just talked about the challenge of responsibility. In the context of the AI Proposal, this is also a big point of discussion. It is the ability of a technical device to make autonomous decisions that challenges traditional assumptions of the liability system. ‘Someone’, a natural person or other legal entity, must be held responsible for an artefact making its ‘own’ decisions.  


The proposed Regulation intends to apply to public and private providers placing AI systems on the EU market, users of AI systems located in the EU, and providers and users of AI systems located outside the EU where the output produced by the AI system is used in the EU. Requirements are introduced that may apply to a variety of actors – providers, importers, distributors, and users – for the development, marketing and commissioning of high-risk AI systems. For other, non-high-risk AI systems, only very limited transparency obligations are imposed, for example the provision of information to flag the use of an AI system when interacting with humans. So if you take, for instance, a CV-screening tool, which can be considered high risk as it has potential fundamental rights implications relating to recruitment, the Regulation is meant to apply both to the developer of the CV-screening tool and to a bank buying and using the tool.


However, when it comes to the question of liability it is more interesting to look at the European Parliament’s resolution of 20 October 2020. In this resolution, the EU Parliament made recommendations to the European Commission (EC) on a civil liability regime for AI. The recommendations acknowledge that by its very nature AI could present significant difficulties to injured parties wishing to prove their case and seek redress. In common with the Commission’s proposal, the Parliament’s liability recommendation also refers to high-risk AI systems, subjecting these to a standalone strict liability, compulsory insurance-backed compensation system. Fault-based liability is proposed for systems causing ordinary risks. Under the system proposed by the EU Parliament, the front- and/or back-end operator of a high-risk AI system would be jointly and severally liable to compensate any party up to EUR 2,000,000 where a person has suffered harm caused by a physical or virtual activity or process driven by that AI system.


It will be interesting to see how the Commission’s proposal meshes with any forthcoming instruments tackling liability. The aim is to reach a General Approach by the end of the Slovenian Presidency’s mandate in December 2021, but most people expect these complex deliberations to last well into 2022.


I think the AI Proposal shows that there is increased recognition of the benefits that AI can bring to society. Where people used to look at AI as something scary, we now see it as something that can actually help us in many areas, such as improved medical care or better education. But because some AI systems obviously create risks, a new regulatory regime is necessary in order to protect users, including from a fundamental rights and user safety perspective. The fact that the European Commission also aims to facilitate investment and innovation in AI shows, to me, that its approach towards AI has shifted from seeing it as mere science fiction to something that is actually happening.


Question 5) In the previous edition of DCSP, we published the announcement of AI-Lawyerbots. What do you think of AI-Lawyerbots and their use as a substitute for human lawyers?
I think AI-Lawyerbots can serve as a beneficial tool to improve efficiency and reduce manual tasks. There are even things that AI systems can do better than their human counterparts, such as conducting legal research. AI-Lawyerbots are smart at processing details, summarizing cases and looking up references. Moreover, where error rates increase when human lawyers get tired, and humans are bound by their own schedules, a machine doesn’t have those problems. Although AI-Lawyerbots can help predict the outcome of a case, I don’t think they are ready yet to make the actual decisions. That is because in law there are many gray areas that require interpretation, which in turn requires emotional intelligence and advanced problem-solving skills – ones that no machine (in its current state) seems able to perform yet.


Also, practicing law effectively requires a complex set of interactions between human beings. AI cannot mimic these human-to-human interactions (yet). I also doubt whether humans will trust AI-Lawyerbots enough to put their lives or their business’ crown jewels in their hands. Not to mention high-profile clients who want the best possible representation: if they get a robot, they get a lawyer anyone can hire.


That being said, AI-Lawyerbots can be very helpful in dealing with drafting, contracting, reviewing and editing legal documents. They could certainly be trusted companions to lawyers, helping them reduce the manual effort required in legal proceedings. This obviously saves costs and frees up precious time to take on more important tasks, such as caring for your clients.


Question 6) If we look at NautaDutilh as a company. Do you pursue a specific policy when it comes to the deployment and use of automated systems for the purpose of data, privacy and cybersecurity?
Information Security and Data Protection are key elements of NautaDutilh’s business strategy. This also includes the deployment and use of automated systems where they serve a purpose. We consider it our responsibility towards our clients to define how we address this. Information Security is part of our day-to-day responsibilities. We weigh up the efforts required (both in financial terms and in terms of the necessity for business operations and the requirements of our clients) against the risks, as best we can. Information Security and Data Protection within NautaDutilh are never finished. We are constantly trying to improve ourselves. We obviously take measures against the known risks, but also try to protect ourselves against the unknown risks. We distinguish ourselves in particular in the following areas:


• First (and so far the only) legal service provider with a Responsible Disclosure Policy (see website).
• 24×7 Security Operations Centre.
• Anti-Malware techniques based on Artificial Intelligence.
• Dedicated Information Security and Data Protection Management Team.
• Continuous Vulnerability Monitoring.
• Chair of the Dutch Legal-ISAC (Information Sharing and Analysis Centre, in collaboration with the Dutch National Cyber Security Centre).


Question 7) Let’s take a look at the future. What do you think the future of the legal profession will look like if digital developments (e.g. automation) continue to increase the way they do now?
We spoke about many things digital. At the end of the day, I do believe that the legal profession will stand the test of time, as long as it enhances its learning capabilities to adapt and change. It will always have people at its core – hopefully an increasingly diverse group of people though. Many processes will be automated and there will be many more smart technology tools to deliver services. At the end of the day, a person wants to interact (also) with a person and not only a computer. As Charles Dickens eloquently said over 150 years ago: “Electric communication will never be a substitute for the face of someone who with their soul encourages another person to be brave and true.”


About the author
Joris Willems is Partner and Head of Technology Group at NautaDutilh. He focusses on digital transformation involving legal challenges around new technologies, big data, cybersecurity and commercial contracts. He often acts as trusted advisor at boardroom level for (international) TMT companies and financial services firms. Joris has extensive experience with complex multi-jurisdictional transactions.