The Law of Mass Surveillance in the Age of Artificial Intelligence

2024-03-23

The result is comprehensive judgements on personalities and characteristics, with often uncertain end goals. In the European Union, legislation such as GDPR seeks to advance the rights of the individual in this space. For the opening keynote of our Governance and Policy track, we're delighted to welcome Paul Nemitz, Principal Advisor at the European Commission's Directorate-General for Justice and Consumers, to discuss the law of mass surveillance in the age of artificial intelligence.

Participants

Paul Nemitz

Transcript

This is a provisional transcript. Copy may not be in its final form.

Framework For The Future

Legislation In The Age of Artificial Intelligence and Quantum Computing:

With Paul Nemitz, European Commission Principal Advisor on Justice Policy and a principal architect of GDPR

Continuing our series of Q and A sessions and building on the discussions started at our Forum last October, we were delighted to be joined again in January by Paul Nemitz from the European Commission. His keynote address at WEDF2022 gave rich insight into the world of policy making and the role the EU has in regulating technology. One of the principal architects of GDPR, Paul has been at the forefront of the legal and regulatory space for decades, and this session proved one of our most thought-provoking. Taking questions from our community, what follows is a summary of the Q and A, featuring some of our favourite insights.

Paul, to start simply: how do we determine what actually is personal data? And what about the inferences drawn from that data?

According to GDPR, it is very simple: it's personal data if the data either identifies an individual or allows you to identify an individual. Take, for example, pseudonymous data, where the name is just replaced by a number but somewhere there is a list which re-matches names and numbers: that does not stop the data from being personal data. Pseudonymization does not take data out of the scope of personal data; anonymization does. Anonymization means you can never return to the individual, no matter what operation you do afterwards. And inferences are among the most important personal data. It would be ludicrous to say that personal data which is inferred data is not personal data anymore. Then we might as well close down and not have GDPR, because inferences are the goal of processing personal data; this is where most money is made, and this is what everybody wants to have, you know, the inferences: what is the guy or the woman going to buy soon? What is their mood? What will be their political inclination? How will they act? It's all inferences, and I see no wording in GDPR which says inferences are not personal data. If they identify, or make it possible to identify, an individual, that is the definition of personal data.
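
To make the distinction concrete, here is a minimal illustrative sketch, not from the discussion itself; the data, names and functions are all hypothetical. It shows why pseudonymised data stays personal data (the re-matching list survives) while genuinely anonymised data does not:

```python
# Hypothetical sketch: pseudonymisation keeps a re-identification key;
# anonymisation destroys the link to the individual entirely.

records = [{"name": "Alice", "purchases": 12}, {"name": "Bob", "purchases": 3}]

def pseudonymise(records):
    """Replace names with numbers, but keep the matching list.
    Because the key exists somewhere, individuals remain identifiable,
    so under GDPR this is still personal data."""
    key = {}   # the list that re-matches names and numbers
    out = []
    for i, rec in enumerate(records):
        key[i] = rec["name"]
        out.append({"id": i, "purchases": rec["purchases"]})
    return out, key

def anonymise(records):
    """Keep only aggregate statistics; no later operation can return to
    the individual, so the result is no longer personal data.
    (In practice, aggregates over small groups may still allow
    re-identification; this is a toy illustration only.)"""
    total = sum(rec["purchases"] for rec in records)
    return {"n_records": len(records), "total_purchases": total}

pseudo, key = pseudonymise(records)
print(pseudo, key)         # re-identification is trivial: key[0] -> "Alice"
print(anonymise(records))  # nothing here points back to Alice or Bob
```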

So what are the most important laws relating to artificial intelligence today?

Already in place, it's definitely the law on the protection of data and privacy in Europe, that is the General Data Protection Regulation, and it is important because AI very often collects and processes personal data, and so this law applies basically to all those processes in which AI deals with people and personal data. And there are many other laws which apply already today to artificial intelligence: most importantly, of course, when it's used on a bigger scale, competition law, but also consumer protection law. When it runs certain operations, like markets, it's the law relating to that market.

So to make it short, AI is a multipurpose technology, it can be used in different contexts and one has to look both at the general law which will apply (which may or may not contain the word AI), and then one has to look at all the sectoral law where AI is applied to know which laws have to be respected.

What are the legal and legislative frameworks around AI ethics that you find the most advanced as of today?

Well, my opinion is: ethics are good, law is better. Law has democratic legitimacy; it goes through a process in a democratic country, which gives it a different legitimacy than ethics. And law is binding, so it can be enforced also against those who don't want to play ball. And that's important in the age of artificial intelligence, in which power distribution between individuals and those who develop the technology becomes a big issue, and we must be able to bind those who have the power. …Ethics is a personal choice, or the choice of a group; it's voluntary… Of course, it's great if people have ethics such as wanting to comply with the laws, and company ethics are important, in contrast to the ethics of "Let's move fast and break things", you know, or "If you make an omelette you have to break some eggs". That type of ethics leads people also to disrupt the law as part of disruptive innovation. And that's the type of ethics which is not good ethics in the technological sphere in which we live, in which technology is becoming so powerful. So it's important that, from the outset, technology accepts the primacy of democracy and the law, and that the technology is built, from the outset and by design, in line with the law, to comply with it and not to break it.

Then in terms of ethical principles, what are the standard ethical principles, and the standard legal and legislative frameworks around ethics, that anyone working in tech or tech-adjacent roles should be familiar with?

Well, there are of course codes of ethics for certain professions; also in the engineering profession there are associations which have codes of ethics and, as I said, all of this is good. But let's not confuse ethics with law. There is a long history of regulating technology by law, and as if by miracle, when AI came around, suddenly everybody started to talk about ethics. Well, it wasn't of course a miracle; it was a lot of money which flowed into this. You know, a lot of academics have been paid to start thinking about the ethics of AI, and even governments and the European Commission jumped on this, and we had a high-level group on ethics in AI. I was myself a member of the German government's data ethics group, but the outcome of these exercises has shown that we need binding law.

So if you ask me, as engineers and tech developers, what do you need to look at in your work? The first question is indeed: what law applies in the area in which I'm developing? And part of your ethics should be that you always obey the law which pertains to the area in which your technology operates, and basically that you follow the principle that where AI takes over functions which previously were carried out by humans, the program has to comply with all the laws with which the human had to comply before. I think that's probably the easiest answer. And beyond that, I am not against, [in fact] I think it's good, to engage in engineering ethics and, you know, to look at the codes of conduct; the high-level group on AI of the European Union developed a number of principles on an ethics for AI, and all of this is good. But when you see that there is a lot of ethics talk but not a lot of talk of law… then you have to become suspicious, because it means people want to keep the democratic process out and in particular they don't want to be bound by law… If we don't have laws which we can enforce against those who develop or operate technologies which don't comply with our rules, we are lost, and democracy doesn't function either… because democracy speaks through the instrument of law. The law is the most noble instrument of democracy.

And markets don't function either. If we want a level playing field between competitors in the European Union, for example, the internal market between 27 countries, and we don't have law, if everything is voluntary ethics, well, then we have some companies which comply with the ethics and others that don't, and what happens then is maybe a race downwards, to the bottom of ethics, because of course the rules which need to be complied with sometimes cost money.

You know, environmental protection costs money, paying proper taxes costs money, and data protection also costs money; that is normal. We need companies to comply whether they like it or not; there cannot be a free choice. And that's the problem with ethics: it's voluntary, and therefore it often will not work. You can see this very concretely: you know, Google created an ethics board and then closed it again, as they saw fit. And so I think it's good, you know, to have company in-house ethics; all this is fine. But the first key point of the in-house ethics has to be: "we want to comply with the law". And the next thing could be: "we don't want to comply with the law only at the edge, you know, just barely legal, taking a little risk here and there; we want to be in the centre of the law. We want to pay our taxes properly as they are due; we're not going to try to make constructions, you know, with the Cayman Islands and what have you, the paradises of low taxation. And in data protection, for example, we are not going to go to the edge of what GDPR may allow, but we will go straight to the centre, because we want to set an example of how you can perfectly well make money while fully complying with the law." I think that's the type of attitude which I like and which we need, and I am actually convinced that one can make a lot of money in digital while fully complying with the law.

What do you believe are the greatest obstacles for representatives to make more time-efficient legislation on data privacy? Is there as much lobbying from Big Tech companies as in the US?

Yeah, Europe has become a favoured lobbying ground of US companies in particular. I mean, Microsoft's office is just here next to mine; it's huge and, you know, there are a lot of very interesting events and this and that. They have even engaged a former Danish ambassador as the head of the office, so, you know, they pretend it's all in the public interest. Well, I think this is all cosmetics in the end. It's a company driven by the imperatives of the market which wants to maximise its profits, and already in the past, also in the case of Microsoft, we have seen what they are able and ready to do to maximise profits and make money. So I would say, yes, the lobbying has come to Europe much more. Vice-President Reding of the Commission, at the time when we worked on GDPR (I was the lead director and was working under her), said she had never seen a lobby storm on Brussels like the one on GDPR, and since GDPR this lobby storm has, let's say, been replicated on the DSA, on the DMA, on the AI regulation.

The main difference from America is that in the end we're still able to make a law. We're still able to agree in the European Parliament and in the Council, while in America it seems unfortunately impossible to agree on effective laws. It would all be much easier in this world, and many things would be better, including for American companies and American citizens, if the American Congress could get itself around to agreeing on the laws which are necessary in democratic societies to maintain the good functioning of democracy, fundamental rights, citizens' rights, consumer protection and so on in the digital world. So, you know, we look to America to address this issue effectively. We look to the Congress, and maybe one day laws will pass there. Certainly the attitude of the American government to the laws we are passing in Europe has completely changed. In the past we were always very much criticised, and let's say that Big Tech was singing from one hymn sheet with the government and the Chambers of Commerce of the US, and one wondered who writes the hymn sheet the government is singing from. This was the case under Obama and also under Trump. And now this has really changed. I see an ambition in the US government to do the right things. But it seems to be very, very difficult in Congress to get things done, and that's very unfortunate, because if America doesn't regulate, the issues which are not addressed in America have to be addressed in other parts of the world; you know, then we have to make the laws.

And then, of course, the whole thing starts again about, you know, protectionism and transatlantic this and that, and that's very unfortunate. If America would deal with these issues itself, we could save a lot of energy, and our community of values, which we certainly have, would be much more concrete and much more tangible for citizens to feel.

Speaking of the United States: during the WEDF Q and A last week hosted by Bruce Schneier, he mentioned that when GDPR came into force he was at IBM, and instead of dividing European and non-European customers, they applied best practice worldwide. Is the potentially global influence of EU policy taken into consideration when drafting legislation, or is the focus always Europe-centric first, with wider influence as a bonus?

I would say that obviously the EU is constantly in a global dialogue, with the UN, with the OECD, with all of our bilateral relations all across the world... We are always very transparent about what we do, and we try to learn from others. So I would say we are not inward-oriented, and we are not only Europe-centric; we try to do policy development at the cutting edge, with the cutting edge defined worldwide and globally. So we really look at bad experiences and good experiences. We look at science, articles and books from all over the world. Obviously, our task is first of all to deliver the promise of the European Union treaties, and also the European Charter of Fundamental Rights: the promise to our people to live with respect for their fundamental rights, and to live in a democracy and under the rule of law, and this is what we focus on. So to be very clear, in our lawmaking we definitely do not have the ambition to conquer the world; we don't have an ambition of a "Brussels effect", as one book is called. That's not our ambition. We don't aim for such impacts, but of course we're happy if others want to adopt our laws, and we are ready to explain them. And we want to ensure, also in relation to what happens outside of Europe, that if it touches on the interests of European citizens, the rules are complied with.

If a company operates entirely from outside the EU but, for example, monitors the behaviour of Europeans… then our law applies, because their operations in the end touch Europeans in Europe. So it's true that our law, also from that point of view, may have a worldwide reach. And when it comes to exporting personal data from Europe to other countries or continents, the principle is that the protection level which we have here travels with the data. So the data can only be processed outside the EU if, where it is processed, the level of protection is adequate, meaning substantially similar to the one in the EU. So if we then see afterwards that companies like IBM (we were talking about this), Microsoft, Apple... promise that they will give GDPR levels of data protection to their clients all over the world, I think it's a bonus.

But it also shows that, contrary to some loud voices which very often are against laws, you know, classic neoliberal advocates, our law is not hindering innovation and competitiveness; on the contrary, our law creates trust, and this trust is necessary to be able to sell products and services in the modern, global, digital economy. So the fact that IBM says "we will give GDPR standards to all our clients worldwide" is, for me, proof that GDPR is not only not of the devil, but that actually there are some companies, in this case American companies, who are smart enough (if I may phrase it like this) to pay their data protection officers out of the marketing budget. Because they have understood that the good protection of personal data, of confidentiality of behaviour, of confidentiality of communication, is actually a selling point…

We know from statistics and opinion polls, which we have not conducted ourselves but others have, that all over the world people don't want to be snooped on by the government, and they don't want to be snooped on by powerful private companies or oligarchs, or whatever you want to call them. It doesn't matter whether it's private power or public power: people want to have their private space, and that's also what modernity is about.

300 years ago, the king had the right to know everything, and we didn't have any right to know what the king was doing in his castle. The progress of humanity in terms of enlightenment (and in terms of citizens' rights and fundamental rights) is exactly this: the powerful have to be transparent, but the individuals have a private space to which nobody has access. That's why we have all the rules that the state cannot just come into our house and search it, no taxation without representation, and the rules on data protection. This means that there cannot be a "stalker economy", where those who want to make money with our data can just look into our private life and make us naked, and we don't know what they are doing… I think the empirics are pretty clear: people don't want to be snooped on, and it doesn't matter whether the snooping comes from the state or from the big monoliths of the digital economy.

At a conceptual level, can mass data collection and individual categorization for the purpose of advertising ever be truly ethical? Is there a way forward here?

That's a good question. There are many who say we should simply have a prohibition of targeted advertisement, and I must say I have sympathy for this movement; this is my personal opinion. GDPR allows some degree of targeted advertisement without mentioning this word, so it is possible to do targeted advertising while complying with all the rules of GDPR. But it's not easy, because we have seen that some of the great associations bringing companies together on standards and ethics of advertisement, like the Interactive Advertising Bureau, had a code of conduct for advertising (and also targeted advertising) which was not in compliance with GDPR…

Where targeted advertising really becomes a problem is in elections and campaigning, because it's very untransparent. Targeted advertising in election campaigning means everybody gets to hear what they want to hear, and the public doesn't know anything about it, because what do I know about what my neighbour gets fed on Facebook, Twitter, Instagram and other social media regarding politics? In the United States, there is a newsroom, an internet journal, called The Markup, run by Julia Angwin. She's a former Wall Street Journal journalist, and they have developed something called the Citizen Browser. It's an add-on, a plug-in, which you can put into your browser. In this way, they have collected data voluntarily from people who wanted to go along with this, and they have found that in the last election campaigns in America, people really have been presented with completely different realities depending on where they stand politically. And that's a problem, because it leads to a confrontational society, it leads to a polarisation where people don't understand each other anymore, because they really do believe in different worlds and they don't realise that what they're getting is just a very one-sided picture of things.

For democracy and public discourse on a big scale to function, we need a common base of information and facts. These have to be reliable, so not fake news, and not alternative news as some have called it, but real news investigated by journalists, you know with good rules of journalism, two sources and so on. So I think when it comes to elections, we certainly have to be even more critical of targeted advertisement. What we're doing in the European Union is we're trying to regulate election advertisements on the internet in such a way that this problem of untransparent communications of political players is addressed.

AI and total mass surveillance are a concern, of course. Is the EU currently considering the looming issue of quantum computing and its implications for privacy?

Yes, we are looking into this and, as I said initially, we're trying to be at the cutting edge, but I cannot tell you anything specific relating to privacy, or to GDPR, for example. I think the wording of GDPR is sufficiently technology-neutral that it will still have meaning when quantum computing arrives, and it can be applied in that context.

It's great that EU constitutional law and jurisprudence are providing a defence against the predation of global corporations. Do you think the implementation of such rights and laws is effective, or do the global tech companies remain too big to control?

No, there is no company which is too big to be controlled, but we must maintain this ambition to control them and to make them subject to the law. We need to maintain the primacy of democracy and of law over money, over the markets, and over technology. So, you know, it must be very clear that these corporations are not outside the law. This is vital for the legitimacy of democracy, and I would say to those who are in those corporations: if you want democracy, then you should not do anything which undermines the ability of democracy to enforce the law, also against you. Because the discourse of corporations controlling legislation and the making of the law, and actually also controlling enforcement afterwards, so that in the end they can do whatever they want, is a very dangerous discourse, in the end also for these corporations, and for well-functioning markets, and for international trade and so on.

So if we are serious about open markets, fair competition, and democracy, we all have an interest in supporting these mechanisms and not talking them down, not talking about the law only as a cost factor and an obstacle to innovation and so on, but in constructively contributing to making good laws which make a difference, which have an effect, which can be enforced, and which are enforced; in creating good institutions which can enforce the law; and then also in making an active effort to comply voluntarily.

In your opinion, what are the steps that need to be taken to set up assurances and accountability in major, and then also smaller, tech companies? How should we think about our own principles in this light?

Well, I mean, I'm talking about the ethics of compliance. This is something we need to teach and to get ourselves around to. Disruptive innovation is good as long as it pertains to technology, but not when it pertains to not complying with the law, or to exploiting people. It may be very disruptive, you know, to start developing business models where you can pay lower wages, or where you don't employ people but pretend they are all individual entrepreneurs, like in the Uber driver debates, or the people delivering food on bicycles. This is the type of disruption which I would say is not okay and which society cannot live with. So first we need, let's say, an ethics of compliance with the law, but also an ethics of wanting a fair and socially just society, one which distributes revenues, income, wealth and happiness in a fair way.

And then, of course, we need institutions which make sure that this happens. So I very much believe in a good mix of public and private enforcement. We need public authorities which have the right to investigate and the right to sanction non-compliance, and we need people to be able to choose whether to complain to such authorities, which then investigate the company in question, or, and this is also important, to go directly to the courts and get the courts to order, and maybe also sanction, non-compliance directly, and to obtain damages, both material and non-material. So private enforcement and public enforcement are both very important; if you only have one or the other, that's already a rotten compromise, because it reduces the effectiveness of the law.

You know, in the end it's very simple. A law which companies don't like will be evaded by some. And the likelihood of compliance depends on the likelihood of being caught if you don't comply, multiplied by the size of the sanction you are likely to incur if you are caught. The product of these two factors determines compliance. So if it's very likely that you will not be caught, the sanction has to be much higher, so that the product of the two has a high deterrent, one could say positively motivating, value towards complying with the law…
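
The arithmetic Nemitz describes can be written down directly. A minimal sketch follows; the function names and the numbers are invented purely for illustration:

```python
# Sketch of the deterrence arithmetic described above:
# expected penalty = P(being caught) * size of the sanction.
# If detection is unlikely, the sanction must rise so that the
# product still exceeds the gain from breaking the law.

def expected_penalty(p_caught: float, sanction: float) -> float:
    return p_caught * sanction

def deters(p_caught: float, sanction: float, gain_from_violation: float) -> bool:
    """Compliance is rational when the expected penalty outweighs the gain."""
    return expected_penalty(p_caught, sanction) > gain_from_violation

# Illustrative numbers only: with a 5% chance of being caught, the fine
# must exceed 20x the illicit gain before it deters.
print(deters(p_caught=0.05, sanction=100e6, gain_from_violation=10e6))  # False: 5M < 10M
print(deters(p_caught=0.05, sanction=500e6, gain_from_violation=10e6))  # True: 25M > 10M
```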

When AI is blindly applied based on such biased data, and alignment efforts may be biased towards US and Chinese values, what can the EU and the rest of the world do, assuming regulatory efforts and legal cases will come along too late to curb the further rise in power of these states and large corporates? How can we actually counter what is happening right now?

Well, I would first of all agree with one thought expressed here, namely that we live in a world in which the values of the society and the people who create a technology now flow into the technology to a much bigger degree than previously. And this has something to do with the fact that we are now talking about technologies of human thinking, of human cooperation, of human communication, which are, let's say, much closer to what is touched by our values in their deep essence. If that is so, I think it is fair to say that the same problems which we see in the hardware discussion, for example the questions about the security of networks built by Huawei, a company from outside Europe and outside the US which raises security concerns because of its location and ownership and so on, the same type of questions will be raised when AI in the future comes from China, or for that matter from Russia. And I think these questions are legitimate, because these companies' technologies are very untransparent. They're complex.

So, you know, who knows what they are doing in the background; who knows how they influence us. And so I think we have to learn to be critical. And on this point, regarding the discourse of, you know, "let's learn to trust" and "trustworthy AI": well, I wish it good luck. My personal view is that it pays, in a democracy, to have citizens who are critical of power. We have to teach our children over and over: be critical, ask questions, don't take for granted that you're always being told the truth. And this is true for governments. This is true for private parties, private corporations. And this is also true for technology developers. We need people who can think for themselves, who ask questions and have a basic critical attitude, and we don't need, to overemphasise this, let's say an ideology of having to learn to trust in technology; that would be the greatest mistake in light of all the issues which you're raising here in your questions. How do we then regulate to make sure that there is no abuse, intentional or unintentional, and that there is no bias, intentional or unintentional? Very difficult, and, you know, there will be no perfection in this. And for high-risk AI we will have to take a much closer look than for low-risk programs; that's why we are dividing AI into four risk groups in the AI regulation. And let's hope that what comes out of the political deliberations between the Parliament and the Council, the Council representing the member states, the Parliament the people, is a law which in the end is really meaningful and is not watered down into gesture politics: an instrument which is called the AI regulation but actually has no or very little impact in addressing the many challenges which we are facing with AI, like bias, for example, as you have just described here in the question…
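
For readers who want the shape of the four-tier structure he refers to, here is a minimal sketch. The example systems are the illustrations commonly cited from the Commission's proposal; the mapping is a simplification for orientation, not a statement of the final law:

```python
from enum import Enum

class AIActRisk(Enum):
    """The four risk tiers of the draft EU AI regulation (simplified sketch)."""
    UNACCEPTABLE = "prohibited outright, e.g. social scoring by public authorities"
    HIGH = "strict obligations before market entry, e.g. CV-screening tools"
    LIMITED = "transparency duties, e.g. chatbots must disclose they are machines"
    MINIMAL = "essentially unregulated, e.g. spam filters or AI in video games"

def obligations(tier: AIActRisk) -> str:
    """Hypothetical helper: summarise what a given tier entails."""
    return f"{tier.name}: {tier.value}"

print(obligations(AIActRisk.HIGH))
```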

Stanford Internet Observatory recently published a report on generative AI and the dangers of its use in influence operations. Despite appearing comprehensive, it fails to mention the vital point that LLMs in particular are based on data sets which are fundamentally biased, being predominantly US- and Chinese-created. Meanwhile, researchers are ramping up efforts on AI alignment to counter the profound risk of AI misalignment with human values. But who is deciding what those values are: researchers, their funders, or society?

Well, I mean, you know, I'm not a philosopher, and in these data ethics groups there were some people who thought we were now going to reinvent the values for social existence in the digital and AI world. But as a lawyer I say something very simple: the values according to which we live are found, first of all, in the constitutions. So in America, I would say: read the American Constitution. In Europe, I would say: read the Fundamental Rights Charter. One can also read the global documents on human rights. One can read the UN Charter. These are documents in which people have tried to agree on the basic rules, and thus the basic values, according to which they want to live. And so these documents contain principles like democracy and how it functions, they contain the fundamental rights, and they contain the protections against the state overreaching into our lives, and so on. These are the values which I think we need to maintain and strengthen in the digital world. And that would also be my advice: if you have a values issue, the first thing is to look at the constitutional settlement in which you are working, and at the globally relevant values which are laid down in such consensus documents. That will already bring you very, very far.

And then it can get more complicated, let's say on non-discrimination, because there is not only primary law, the constitutions, but also secondary law, namely law which has been made by parliaments based on the constitution. And there's already literature on this. For example, there's an article by Sandra Wachter which shows, I think she has analysed 18 mathematical models of fairness, and she has come to the conclusion that most of these models, maybe all of them, don't fully encompass the notion of non-discrimination which we have in European Union law, which includes indirect, covert discrimination, even unintentional discrimination. And so I would also say that the notion of values and fairness (including some of those laid down in our laws) may not be fully apprehendable by algorithms and mathematics. In fact, I read in Stuart Russell's book Human Compatible: Artificial Intelligence and the Problem of Control that he thinks that whenever an algorithm is faced with a value, it will not be able to apply the value to the facts at hand but needs to go back to a human. And so if you enter the values sphere with your programming, I think you have to admit to yourself that, as imprecise as human spoken language may be, it also has some great advantages over mathematics. For example, its meaning can be reinterpreted over time.
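
To illustrate the kind of "mathematical model of fairness" at issue, here is a minimal sketch of one of the simplest such metrics, demographic parity; the data and function names are invented for illustration. It is exactly the sort of measure that, on the argument summarised above, does not by itself capture the EU-law notion of indirect or unintentional discrimination:

```python
# Sketch: demographic parity, one common mathematical fairness metric.
# It compares favourable-outcome rates across groups; a small gap under
# this metric does NOT establish the absence of indirect discrimination
# in the EU-law sense.

def positive_rate(outcomes: list) -> float:
    """Share of favourable decisions (1 = favourable, 0 = unfavourable)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a: list, group_b: list) -> float:
    """Absolute difference in favourable-decision rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Invented toy data: loan approvals for two demographic groups.
approvals_group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # rate 0.625
approvals_group_b = [1, 0, 0, 1, 0, 1, 0, 0]   # rate 0.375
print(demographic_parity_gap(approvals_group_a, approvals_group_b))  # 0.25
```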

So I think in the values debate in technology, we also have to rehabilitate natural language. We have to admit that the openness of our language, the richness of all language, and in particular the openness of the meaning of words, including the words in the law, to reinterpretation in the future as technology and business models move on, are a great advantage, because it means we don't constantly need new laws: if we have a law which contains technology-neutral principles, these can simply be reinterpreted and reapplied in the future to the next technology, and we don't know today what the next technology will be. And so this interaction between human language, the law and democracy on the one hand, and the principles of mathematics on the other, these two worlds, is something we will have to work on a lot. But in the end, the decision on the rules according to which we live, based on our values, is a decision which in a democracy falls to the lawmaker, because the law concretises our values; and it feeds the constitutional courts, who check whether the law adopted in parliaments is in line with the constitutions. That's how it works. We don't need technologists to reinvent values, and we certainly don't need company ethics boards to reinvent values. If they just comply with the law…

Is there a danger of exaggerating the dangers of AI or surveillance? Is the danger being correctly recognised, or even not sufficiently recognised, in that we are still not fully aware of the implications in both areas?

Well, I think both are true. Of course one can overemphasise the risks and dangers, and at the same time we can also have the phenomenon that we don't fully recognise the risk. This is the beast of the complexity of these activities and technologies, and the other problem is that it's not only the complexity, it's the arcaneness. Surveillance by its very nature is secret. And, you know, the state may lie about it, and corporations also lie about it. If you read Shoshana Zuboff's book The Age of Surveillance Capitalism, it describes the lying culture at Google, and, you know, that's part of the problem, and it compounds the already great difficulty coming from the complexity of the technology and the business models. So I think your question is an appeal to wise judgement, which means that on the one hand, in the description of the risks, we need to be precise, but we should not limit ourselves to, let's say, empirically proven risks. We need to allow ourselves to have imagination; I would call this technological imagination.

What could be done, and what could be happening, with this technology? It is very, very important that the theoretical description of risks is taken seriously. And so, from that point of view, the call for evidence-based policymaking has to be modulated: policymaking today, in these fast-moving and very complex technologies, can also be based on theories of potential abuse and theories of potential risk. This is one of the great differences, by the way, between AI, artificial intelligence, and human intelligence. Human intelligence includes the ability to imagine something which has never been there. We can imagine the non-existent, and this power of imagination, whether it comes from hard thinking or from our dreams, is something AI doesn't have. AI just works with the facts of yesterday; it's trained on the facts of yesterday. It neither asks critical questions nor imagines. And so this technological imagination, you can call it technological fantasy, the ability to bring together technological knowledge and societal knowledge and then imagine what could happen in our society, what could be done with this technology, this ability to think about long-term consequences, is a very important skill, and we need people to be trained in it. This is something Hans Jonas already said in his book on the principle of responsibility: that we must enable ourselves, by investments today, to understand the impacts of technology even if the technology may not yet be in place. So this is about theoretical capacity combined with the capacity of imagination.

One can be unrealistic there, surely that is an issue, but if we don't allow ourselves this imagination, we may fall short of regulating the risks early enough. And I think we cannot afford not to do risk mitigation quickly in this fast-moving world, so I think we need to regulate much faster and much more efficiently than in the past in this area of new technology.

Making everybody a suspect is not just illegitimate, it's dangerous. Are there any other jurisdictions that are doing good work in this area, or is Europe alone, in your opinion?

Well, I mean, domestically for Americans, after the Snowden revelations, if one looks at the domestic law in the United States, progress has been made in reducing domestic mass surveillance by the state or for the purposes of state investigations. I don't know the situation all around the world, but in Europe, let's not kid ourselves either: it's often only the highest courts which get the balance right, and it is astonishing to see that, over and over, laws are made by legislators in Europe, on the EU level and also on the national level, in Germany for example, where the highest courts then have to say "Stop, this goes too far" on surveillance.

So I think we live in a time where technological capability, the fact that something is technologically possible, creates such a hunger and wish among the powerful to use this technology for their purposes, namely for security purposes, that people lose a little bit the balance between the need for security in a society and the need for protection of individuals: namely, you know, their freedom not to be seen, not to be recorded, not to be controlled. And I think that's a problem. We need to have, let's say, a more overarching questioning and debate: why is this actually happening? Why are legislators, over and over again, even though the jurisprudence is ample, putting up barriers, going back and testing the jurisprudence again and again?

So I would say it's important to remind ourselves that here we see the rule of law, and the law-based state with independent judges, at work. And, you know, it's very important, because unfortunately, in this complex subject matter of technology and how to use it for good purposes without going overboard, it's often in the end only the judges who get it right.

What do you make of the European Commission proposal "CSA Regulation", being discussed in the European Parliament, which would mandate age verification and scanning of every private message of everyone in every app (chat control)?

Yeah, I would say the debate about the right mix of security, in this case the protection of children, and liberty is a very, very difficult debate. And it's the same, by the way, with all the general debates about upload filters: do we have some, do we not have them? I think we have to be honest: in the commercial world, upload filters are totally present and a reality. I mean, the business models of Google, Facebook and so on don't work without upload filters, because the business model is to place advertisements as fast as possible in the right place. So whenever somebody uploads something or types something, the upload filter, the content recognition system of these companies, immediately recognises what it is about and places advertisements in that context. So, you know, the battle cry "No upload filters! No upload filters! No upload filters!" is a little bit disingenuous, because it pretends there are no upload filters unless they are introduced by law. That's wrong: upload filters have been invented and are being used by Google, Facebook and others to make billions of euros of profit. And the moment the question is asked whether upload filters could actually be used for the public interest, for example to avoid illegal content on a platform, the platforms, together with certain parts of civil society, start screaming about upload filters. So I would say, you know, it is a difficult subject and we need the help of technologists. It is not good enough to just say "No, no, no", because, I mean, obviously we don't want to have infrastructures where illegal content is spread, and I think one has to differentiate between the nature of the illegal content. It's a different thing, I would say, whether copyright is broken,

on the one hand, or whether child pornography is spread or people are recruited for terrorism on the other. Where the worst kind of illegal content, which also has huge, terrible human implications, is spread, I think we should all work together to see how this can be addressed in an effective way…
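
To make concrete what a commercial "upload filter" does in the ad-placement pipeline described above, here is a minimal, hypothetical sketch. Real systems use large-scale machine-learned classifiers and hash-matching databases, not keyword lists; every name, hash and category here is invented:

```python
# Hypothetical sketch of the dual use of content recognition on upload:
# the same classification step can drive ad placement (the commercial
# use described as ubiquitous above) or flag known illegal material
# (the public-interest use under debate).

import hashlib
from typing import Optional

AD_CATEGORIES = {"football": "sports gear ads", "recipe": "grocery ads"}
BLOCKLIST_HASHES = {"9f86d081884c7d65"}  # stand-in for a known-illegal-content hash DB

def classify(text: str) -> Optional[str]:
    """Toy content recognition: match the upload against known topics."""
    for keyword, ad in AD_CATEGORIES.items():
        if keyword in text.lower():
            return ad
    return None

def on_upload(content: str) -> str:
    """Filter at upload time: block known illegal content, else place ads."""
    digest = hashlib.sha256(content.encode()).hexdigest()[:16]
    if digest in BLOCKLIST_HASHES:
        return "blocked: matches known illegal content"
    ad = classify(content)
    return f"published, serving: {ad}" if ad else "published, no targeted ad"

print(on_upload("My favourite football match of the year"))
```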

In terms of the law, what do we do about the fact that fines and penalties are seen as just the cost of doing business? The EU is better than most, but still the fines are hardly significant relative to profitability. What kind of fine is more than symbolic?

Are fines the way ahead at all? Well, here I have to pay a compliment to the FTC, the Federal Trade Commission in America: the highest fine on data protection and privacy has still been imposed by the FTC, the fine of, I think, 5 billion dollars on Facebook. And so, you know, in some of the enforcement mechanisms of the United States I see a lot of rigour, and we can certainly look to them and learn, including from having in one house the enforcement of privacy, competition and consumer protection. One can of course say, and I think that's right, that the FTC needs more resources and so on, but on principle these areas of enforcement should not be separated. I think that's the future: we need to bring areas of enforcement together, because we are moving into a world of platform economy where platforms basically do everything. They are multisectoral. They use multipurpose technologies, and we cannot have efficient enforcement in the future with fragmented regulators, fragmented in a double way, both across borders and then also across subject matters. So, you know, in Europe you have enforcers on data privacy, then there's another enforcer on competition, and another enforcer on consumer protection; there will be new enforcers on digital and AI; and then of course we have banking regulation; too many. All of them deal with the platforms, and all of them deal with the multipurpose technology AI. The platforms, on the other hand, are very efficient: they see all the enforcers, and they see all the issues AI creates in relation to this and in relation to that.

While on the side of the enforcers, through this fragmentation, you have a lack of capitalisation of experience, and that's a problem. So this is what we have to work on, and I would say it's true that fines are not everything, but without fines everything else is nothing. So, you know, in the debate about whether we should have fines or not, I would say: yes, we need fines, and they have to be high, and they have to be a real deterrent. But in addition we also need all the other measures.

Of course, it's true that in the enforcement of GDPR, ordering the deletion of a whole body of data can be much more painful than a fine, but that doesn't mean you don't need both. I would say our regulators have to learn, in the area relating to data, to have as much guts as the European competition regulator in terms of the size of fines. We need to come to a situation where, for big companies, fines of billions of euros are nothing unusual, in order that a few years later these fines no longer have to be imposed. The problem is that if regulators don't act fast and rigorously from the outset, a culture of non-compliance sets in and everything becomes very tedious. I would much wish that our regulators, and I see a certain tendency in this direction, now become much more rigorous. And, you know, once a regulator has taken two or three really tough decisions, afterwards the regulator can turn to much more constructive work, because the effect of deterrence means that a lot of quite ludicrous cases of breaking the law will not happen anymore.

So, you know, where regulators complain about a lack of resources and "oh, it's all so difficult" and so on, well, if they would get themselves around to imposing a few big fines, maybe combined with other measures, that would be a very efficient way to address their resource issue, because afterwards they would have far fewer cases of non-compliance.

Looking ahead, are you optimistic about the future of rights and fundamental freedoms in the context of online life? What are the main challenges ahead?

Yes, I am optimistic. Fighting for law and for rights is something which never ends, and it's also okay that it's a fight; it has always been a fight… Democratic engagement, too, is something which is necessary, and it continues, and it's very hard. So one needs to have the frustration tolerance of a technologist, if we want to take that example: those people who are engaged in democracy have to be as tenacious and as ready to stick with the challenge as those who write a complex program and work day and night and so on. So I am optimistic, but I think we have to regain the readiness and power of engagement. We must understand that politics is not about a flash mob, or just a tweet here and a little "I like this" button there; it is about continuous engagement in dialogue with other people. And many political projects, good political projects of reform, don't work the first time, they don't work the second time, they don't work the third, fourth or fifth time, but then the sixth time, yes. So one has to stick with it or, as Winston Churchill said, KBO, keep buggering on, and that's very important.

And of course, in the modern world, politics and democratic engagement compete with entertainment and, you know, with just hanging on your social networks ten hours a day and basically doing nothing useful except maybe working a little bit here and there. Entertainment takes a lot of time, and of course some of these social networks are programmed for dependency; there are a lot of reasons why people don't engage in politics anymore. And I would say to them: no human is perfect, and it's okay if you don't engage constantly, but have at least one engagement in an organisation with a democratic structure. This can be certain associations, it can be self-organised civil society, it can be some forms of churches (not all churches are democratic, but some are), it can be a political party, it can be a union.

...Here in the European Parliament in Brussels it is fascinating to see how civil society engages over and over and makes a difference. It's probably fair to say that the legislation here would look very different if we didn't have this active civil society. The corporations, and also the executive organisations of governments, you know, when it's about security, interior ministries and so on, are very, very strong; and companies like Google and Facebook and Amazon and Microsoft have huge hordes of lobbyists and spend an incredible amount of money in Washington, in Brussels, and in many other capitals to influence legislation. And for democracy, this is a problem. So we have to talk about this very loudly.

We have to be very clear that many forms of this influence with money are not okay. And I think we need to take these examples of capitalist and commercial engagement in politics as an additional motivation to engage in politics for other purposes, namely for rights, for equal treatment, for democracy. That means that the content of policies certainly cannot and should not be shaped according to the business models, or for that matter the technological inventions, of these companies. Because if we are serious about democracy, then the laws have to be shaped in such a way that the majority of people benefit from innovations and from a well-running market, that there is good competition, and that revenues are distributed across the regions, not only a few companies making huge money while everybody else can be happy if they get a few pennies, and so on. These are complex exercises, but I would say it pays to engage.

We have seen very young people like Max Schrems, for example: he started to engage as a student and he is still only in his 30s. He has made a huge difference, together with his friends and colleagues from many other member states. The history of the engagement of Max Schrems, for example, and maybe a group of 30-40 civil society activists, in the shaping of GDPR still has to be written, but I am a witness to the fact that this engagement in the European Parliament makes a huge difference, and I can say to everyone who's interested in the subject: get engaged.

…You know, if there are organisations coming up and claiming they're doing good things, my first question is always: where does the money come from? And in the many campaigns here in Brussels which have tried to influence the Parliament, and also the thinking in the European Commission, I have learned the term "astroturfing". It's an American technique of creating something which looks like grassroots but is not grassroots: AstroTurf. What does it mean for lobbying? It means an organisation is created which claims it acts for small and medium enterprises, or for rights, or for equity and fairness in AI, but in reality it's financed by the big guys who have a commercial interest behind it, and I think there we have to be extremely rigorous and also talk about this very loudly. And I think we also have to say very clearly that there is too much of the money of Google, Apple, Facebook, Amazon, Microsoft in our universities. The job of the university is to work for the public, not for these companies, and I think they are using their money to buy sympathies. They buy sympathies in the universities; you know, the life of professors who work with them just becomes more comfortable if you have all the money you need for your research. They buy sympathies through press programmes, through parties, through events. And so I think it's something to worry about, and it's good that we have organisations like, you know, Lobby Watch and Corporate Watch and so on, who even compile statistics about how much money they spend and how many people they have.

And with the concentration of power and profits in these digital companies, the issue is not only the technology. We have to see the concentration of power in technology together with the power which comes from money, and the power which comes from their lobbying activities, their public relations activities, their, you know, infiltration of academic life, the press and so on. In my book Prinzip Mensch, I have talked about the eight factors of power of big digital. These power factors have to be seen together, and it's probably fair to say that we have never seen such a concentration of power in so few companies as we now see in the digital world. But I'm still optimistic.

If there is a critical mass of people who engage in democracy, they can make a difference. I still believe this, and it's based on experience: it is worth it to engage.

To watch the full Q and A, and Paul’s keynote address from WEDF 2022, please visit forum.worldethicaldata.org, and be sure to follow us across social media and sign up to our newsletter to be notified of future WEDF events.