Realworld

R071 - Governance of Cyberspace, with Manuela Battaglini

Podcast 54 min


Listen to the episode on Spotify, Apple Podcasts, or Google Podcasts.
Subscribe to our YouTube channel so you don't miss an episode.

Political and economic leaders of old Europe are raising their voices. The most advanced economies produce technology, while less developed countries depend on them to consume it. Resources and research accumulate in the great powers, with China and the United States at the forefront, and corporations concentrate ever more economic power, preventing a much-needed redistribution of wealth. Have Europe's decision-making processes expired? And in all of this, what role does cyberspace play? Today we have the immense fortune to discuss it with Manuela Battaglini: lawyer, consultant specializing in digital ethics, researcher on artificial intelligence and its impact on democracy and society, and co-founder and CEO of Transparent Internet.

What is the real world for you?

For me, the real world is a world with a lot of inequality, and increasingly more of it. It is a world that is screaming in pain, but it is also a world that is becoming more aware of the neoliberal, savage capitalism that is punishing it.

And this awareness is the beginning of the change in consciousness that is already happening. So, despite living through a very tough moment, I clearly see optimism at the end of the tunnel.

We have four years left of rampant neoliberalism.

I see it in this change of consciousness, this change of mindset within high institutions, in some key and bold members who have already made a quite clear paradigm shift. I estimate that neoliberalism as we know it has four more years of suffering left, but things are already changing, because neoliberalism has taken off its mask. Before, it was the cool company; we all wanted to be Zuckerberg, we all wanted to be Steve Jobs. Today they are all under scrutiny. And that is already a change in the way of thinking, because the cards are on the table. There was a plan B, a Machiavellian plan, for a long time; many of us didn't know it, and now neoliberalism and the people leading it say openly that the population has to suffer for them to earn more money, that there has to be more unemployment to punish workers who demand too many rights, that a certain political class has many privileges, and that the middle class must be eliminated so that there is a lot of inequality and power keeps concentrating in a few hands. It is said without any kind of restraint. We have four years left of rampant neoliberalism.


Regulation and Innovation

What do you think of what the CEO of Ericsson says, this narrative that the European Union's competition rules are repelling companies and killing innovation?

It is precisely when there is no market structure, when power is concentrated in a few hands, in a few actors, that innovation does not occur. I'll give you the example of Apple. Apple practically no longer invests in innovation; it invests much more in its own stock, and that is because it has no incentive, because it is a monopoly. Monopoly and oligopoly really foster non-innovation, and the regulation we just talked about is what fosters innovation. So they only say this because they don't want their own monopolies to be crushed; it doesn't suit their oligopolies.

I think we are regulating the wrong part of the iceberg, completely wrong.

The great need to regulate this is not only to redistribute a bit of that captured value, and for justice itself; we are also seeing how all of this feeds even authoritarian drifts. It has a very significant social consequence.

Privacy and data protection issues also stem from there. I lean more towards competition, antitrust, and copyright than towards data protection and privacy to solve the problem, because in Europe, I think, we have a problem of lack of knowledge of the business models.

And then academia spends three years discussing what reliability, trustworthiness, and transparency are. Besides, that approach solves individual problems, not collective ones.

For example, the data protection authority of some European country imposes a fine of so many thousands of euros because the data controller did not ask for consent, or whatever. These are practically individual cases; with competition law, you strike at the root. When we talk about what happened at the beginning of the twentieth century with the railroads and all those monopolies, it was nothing but a problem of competition and monopoly. There are physical technologies, which are technologies, and there are social technologies, such as laws, institutions, universities, education, culture, narratives, and business models, which move in different ways and directions. The solution lies in the social technologies, where the social technology is regulation: but regulation of the lower layers (or higher, depending on how you look at it), of the market-structure layer, of what we call artificial intelligence or the technology that surrounds it.

And if we explain it from the past, everything, or almost everything, has been a competition problem: an antitrust problem, about preventing monopolies precisely so that there is competition and therefore innovation.

What is cyberspace?

If we are absolutely practical and pragmatic, it can be defined as the network of servers and cabling. This is very important to say, because when I compare it to the radio spectrum, some administrative law professors have gotten angry with me and told me that the radio spectrum is a finite good while cyberspace is infinite. Nothing could be further from the truth.

Cyberspace is the backbone of all critical infrastructures. But cyberspace is the only one that is not governed.

It is an absolutely finite space. From a more philosophical point of view, why should there be democratic governance? Because our life is our analog life plus our digital life, and there is already an extreme parallelism between them. It is no longer that we live 75% in an analog way and 25% in cyberspace; our life has tipped over, and we do absolutely everything in the digital world. What happens is that our resources and our critical infrastructures are governed democratically. For example, access to electricity, water, roads, and public transport are critical infrastructures, as are communications and the radio spectrum itself. Access is democratic: we all have the right to receive and access these supplies and these critical infrastructures, which are managed by the Government, by the State.

Cyberspace is the backbone of all critical infrastructures, because everything just mentioned is managed through the internet. But cyberspace is the only one that is not governed, that is not regulated per se, as a whole. And that also does not depend on any ministry and is not regulated in a way that we can access democratically.

Article 128 of the Spanish Constitution, as an example of constitutional law, establishes that all the wealth generated in our country is subordinated to the general interest and that the State can manage essential resources in different ways, such as through the public domain, a public service, and so on. From there comes the question: what is cyberspace, legally? Is it a public domain good, as the radio spectrum is, or is it a public service? The radio spectrum is considered a public domain good because of the fundamental rights at stake and because it is a finite good; we must ask what part of nature is present in the radio spectrum for it to be considered public domain, like the coasts. The air through which all the waves propagate is public domain. But that has been called into question, because the air we breathe is supposedly the same air through which the waves propagate. So, what about the air we breathe? What about our movements that take place in that air?

So this nebulous area surrounding the radio spectrum is somewhat in question, since it does not have all the characteristics of a public domain (demanial) good, such as being subject to expropriation. If we consider cyberspace a public domain good, a very problematic figure comes into play: expropriation. Does the State have the money to buy the servers and cabling of the big tech companies? Absolutely not. If we consider it a public service instead, that is a much less problematic figure and can be managed much better. It would be a public service because the big tech companies manage the public agora: there is nothing more public than the great conversation that is taking place, and the management of that conversation and, therefore, of people, of communities, and of people's lives.

We are talking about billions of users, so there is nothing more public than this, no matter how much we want to consider it a private business. Once we get here, we have to think about how to regulate it. I think the best way is as a critical infrastructure, because critical infrastructures are governed democratically so that access is democratic: we all have the same access, and the actors with power cannot harm the actors with less power, which is what is happening now in every sense, with monopoly, oligopoly, and deregulation, depending on which layer we are talking about. And, on the other hand, so that it is regulated with reliability: so that citizens themselves trust the regulation of that critical infrastructure that is cyberspace.

And as for the regulatory framework, the global legal framework, I will address that in my thesis.

What other business models exist?

We could talk about the market structure that exists in the world of cyberspace: the whole iceberg, not just the tip, which is where we have really stayed. In fact, when you described who I am in the introduction, delete that, because the thesis has made me rethink my profession completely. Digital ethics, I don't know to what extent it serves: first, it is voluntary, and second, it serves more to clean the facade than anything else. At the bottom of the iceberg we have the hardware. That is, we are now going into deep waters, where the deep-water sharks are.

This is pure geopolitics, because many countries are involved. In the hardware world alone there is NVIDIA, with a market share of 80 to 95% in the research and development of these chips, which in turn are manufactured in Taiwan by TSMC. NVIDIA is already a monopoly due to the market power it has, and TSMC likewise in manufacturing. On the other hand, to make certain microchips you need lithography equipment, developed by a Dutch company called ASML, and each piece of equipment sold to TSMC costs between 150 and 200 million dollars. It is the only company in the world that manufactures this type of equipment. We are talking about monopolies in deep waters, where these deep-water sharks roam freely, unregulated. That is one business model.

Then there is the world of cloud infrastructure, which is another completely different business model: hosting is provided, data is offered, infrastructure is offered, models are offered, access to all of this is offered, the hosting of all the information. Amazon has between 30 and 40% market share and Microsoft has 20%; we are talking about an oligopoly. Google has much less, and then there is Alibaba and the others, which have much less too. And it is practically unregulated.
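An editor's aside: the degree of concentration described here can be made concrete with the Herfindahl-Hirschman Index (HHI), the standard antitrust measure of market concentration. The sketch below uses rough midpoints of the shares quoted in the conversation; the Google and Alibaba figures and the long tail of small providers are hypothetical assumptions added only for illustration.

```python
# The HHI is the sum of the squared percentage market shares of all
# firms in a market: 10,000 means pure monopoly, and under the 2023
# US merger guidelines a market above 1,800 counts as highly
# concentrated. Shares marked "assumed" are not from the episode.

def hhi(shares):
    """Sum of squared percentage market shares."""
    return sum(s ** 2 for s in shares)

cloud_shares = [
    32,  # Amazon: "between 30 and 40% market share" (midpoint)
    20,  # Microsoft: "20% market share"
    11,  # Google: "much less" (assumed)
    4,   # Alibaba: "much less too" (assumed)
]
# Treat the remaining ~33% as many small providers of ~1% each,
# which contributes almost nothing to the index.
long_tail = [1] * 33

print(hhi(cloud_shares + long_tail))  # 1594
```

On these assumed numbers the index lands near the highly-concentrated boundary, and it only rises if the long tail is in fact a few mid-sized firms; that is the quantitative intuition behind calling the cloud market an oligopoly.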

They have control of critical digital infrastructure, and whoever has total control of these critical infrastructures governs us all. That is why this democratic governance of cyberspace is so important, because cyberspace is all of this.

What we need is regulation that structures the market. We need it to allow us democratic access. We need ex-ante regulation and not ex-post.

Then we have the artificial intelligence layer, where intelligence is really developed: the development of the models themselves. And finally, we have access to these models, whether through hubs or through APIs, but both are pure control. So these are different business models, and we are really only focusing on the tip of the iceberg. Governments and tech giants are very comfortable, and it is a bad sign if they are comfortable. I no longer want to waste any more of my life discussing what algorithmic transparency is or what a reliable algorithm is, because I think there is no more sterile conversation than that: it does not solve any problem.

What we need is regulation that structures the market, that allows us democratic access: ex-ante regulation, not ex-post. The regulation we have now is ex-post; that is, it regulates behavior. You have behaved badly; since you have behaved badly, then comes enforcement. And enforcement is ex-post, and ex-post does not solve that much. Ex-ante measures are really much more beneficial, and that is what happened at the beginning of the twentieth century: if we remember when the railroads started, they were monopolies. Those monopolies were regulated, and that regulation was a structuring of the market with a series of rules. For example, what is called self-preferencing, preferring yourself, was completely prohibited.

Here we see it clearly in these business models: if I own the cloud as a company, then I will favor the whole structure of my own companies, because I will prefer myself over my competition. That creates a brutal, almost immovable monopolistic (or, in this case, oligopolistic) structure. We therefore need ex-ante rules. With ex-ante, from the outset you cannot self-prefer, you can no longer give privileges to your own companies; from the outset, the market has to be structured.

When they say that innovation goes too fast and regulation too slow, I say: what do you want to regulate? Do you want to regulate ChatGPT? I don't think it's very smart to regulate ChatGPT. Let's regulate the hardware monopoly, let's regulate the cloud oligopoly. Because from that monopoly and that oligopoly arise all the discrimination, all the biases, all the monopolies, all the social inequality, everything that is really happening at the tip of the iceberg. And this cloud and hardware infrastructure is very stable: it does not change as fast as technological innovation, but all technological innovation rests on these deep waters.

So now talk to me about the AI Act. Talk to me about the Digital Services Act, which is practically a law that manages the conversation. And then talk to me about the Digital Markets Act, which is supposed to be the antitrust law, but really does not... So, I think we are regulating the wrong part of the iceberg, completely wrong.

 

And so concludes this incredible conversation with Manuela Battaglini. I hope you found it as fascinating as I did. Personally, I believe that the fact that Manuela is a lawyer is what allows her to see the global picture with such clarity. A few episodes ago, we talked with Simona Levi about the importance of knowing the laws and legislative structures as a tool to transform, shape, and guarantee democratic quality and the freedoms conquered through centuries of effort and struggle.

Hopefully there will be more public conversations like this one, more Manuelas helping us read reality, and more leaders listening and guiding our societies towards a fairer place. A big hug, and I'll see you in the real world.

Coming back to copyright: one of the big problems is that they are extracting all the information from authors' works without having paid them a fee beforehand. That is, they are stealing the works from the artists.

And they have no legal basis: they do not cite a single piece of jurisprudence (which is, after all, judicial precedent), not a single law, absolutely nothing; they simply help themselves.

That is for articles, but then we have the image-generating models, which extract the images and works of many artists without having paid them a fee. They want to dodge this responsibility in many ways. Now they are trying, and this started with OpenAI and its lawyers, to argue: look, what I did was train the model; the one who is really going to use your article is the end user, so I am not really the one using it. And the answer is: wait, wait, the infringement here is really in the training, in the input, not in the output, so don't try to be clever. The image and video generator models are doing the same: claiming that those who commit the infringement are the designers or the people who use all these images to create their advertising or their stories, while the counter-argument says no, the infringement is really in the collection of all that information without paying the artist a fee.

So what does Mistral do? Mistral has bypassed all this and collected that information to generate a dataset, Cosmopedia, made of synthetic data: not real data, but data generated by the Mistral model, which nevertheless serves to train other foundational models, image generators, video generators, and so on, which would then no longer be infringing copyright.

What happens is that there are differences in behavior. I remember talking to Steen (Dr. Steen Rasmussen), who is very Americanized after living 20 years in the United States, and he says: if someone is going to shake things up, it will be the Americans; the European is weaker. Well, they are indeed shaking things up. The FTC, the antitrust body, with Lina Khan at the helm (currently my heroine), is crushing monopolies in the United States, and not only the technological ones but also in the world of groceries and supermarkets; they are breaking down many barriers. For example, there was a standard legal agreement in the United States, the non-compete clause, under which a worker who was fired or left could not work in the same sector for a certain time, so as not to compete, and was therefore out of the market for that period. Well, that has been eliminated too. Of course, companies are furious with this woman and with the FTC. They are doing an impressive job. On top of that, the United States Department of Justice sued Apple for monopolistic and anti-competitive practices, and Apple is sweating bullets, because this also strikes it at the root. Meanwhile, in Europe we have the Digital Markets Act, which is supposed to be a self-compliance law: I publish the law and now you... comply with it.

And of course, that is not going to happen, because these guys don't care about laws; court rulings don't matter to them. When that lawsuit came out, Tim Cook was not in his office surrounded by lawyers, asking: gentlemen, what do we do? He was in China, because China is really where he wants to gain market share; due to the super-apps and other circumstances there, he does not have as much share as in the United States or the Western world, and that is where he is focused on growing. He didn't care about the lawsuit; what matters is that they are also hitting him at the root. In contrast, in Europe we worry about the blue bubble and the green bubble. That is, when an iPhone and an Android communicate, when one sends a message to the other, what happens? The message is not encrypted, the videos cannot be viewed properly, and maybe you can't even read the message. And that is the famous false privacy that Apple promises.

Privacy exists either between iPhones, or on Apple's own terms. The point is for everyone to buy an Apple so that we are all in protected territory. Or, for example, the famous prompt: do you allow Facebook or Meta to track you? No, I do not allow it. Suddenly, with that click, Apple made Meta lose billions of dollars, which frankly doesn't bother me. But what Apple really did there was lock all its users into a completely walled space: now it's privacy under my terms, now I am the one who watches you, and I will use your information for advertising too. The same goes for monopolistic practices in, for example, the automotive industry, this more in the United States, where everything has to be compatible with the iPhone, because if it is not, you cannot even launch a new vehicle model. These are absolutely anti-competitive practices. So, in the United States they are crushing companies at the root, where it really matters, while in Europe we worry about the green bubble and the blue bubble, which is focusing not even on the tree but on a single leaf in an immense forest. We lose all perspective of the context and the situation. All the lawsuits over copyright, over monopoly and anti-competitive practices, the ones that are really worthwhile, are happening right now in the United States. And it is very interesting to read them, because reading them one understands the business models much better: how these giants and their fights work, and how they control us through their tentacles.

 

 

Legislation in Cyberspace

The problem you are describing is global, yet regulation is rather local. In the United States, there are localized initiatives in some states; in Europe, we have the new AI Act, the Digital Markets Act, and the Digital Services Act. What do these laws focus on? Are they enough? Do they approach the problem from a critical-infrastructure perspective, or do they focus on what these infrastructures produce?

It is not enough, and I think that is because these laws have been structured with a lack of knowledge of the business models. The business model is not just the brutal extraction of data for advertising.

That is only a part of it. The tech giants have many business models, and not knowing all of them, depending on the context, means these regulations do not cover the full spectrum. Besides, in Europe we have a narrower vision than they have in the United States, where it is broader.

The idea of the AI Act is that it regulates risk, but not social impact. It is divided into different risk tiers: some technologies are prohibited outright; some are high-risk, which is what the law really regulates in detail; and then there are medium-risk and practically no-risk tiers.

It is a law with a lot of bureaucracy; it will be very difficult to implement, and many steps must be followed. One of the things I found very problematic came with the arrival of generative artificial intelligence, of the foundational models. It generated a lot of concern about the concentration of power, and about the power these companies have when negotiating with the European Commission. 75% of the European Commission's meetings prior to the negotiation were with industry, and only 11% were with civil society, that is, with organizations representing the civil population. We were badly underrepresented and the large corporations were overrepresented. That on the one hand. On the other, when the foundational models arrived, we could not keep the law as it was, because then the foundational-model companies were going to get away with it, and those who were going to pay the price were the SMEs, the medium-sized companies in Europe implementing these tools in their businesses, because all the bureaucratic and economic weight of the law was going to fall on them.

Tech giants have many business models. Not knowing all these business models also makes these regulations not contemplate the total spectrum.

The great danger was that it could practically wipe out the European business fabric. Not only that: France, Italy, and Germany wanted to boycott the law, saying that either it adapted to their circumstances or they would not approve it, because France and Germany are developing their own foundational models and claimed the law would kill their technological innovation. The richest part of the French case is that Cédric O, one of the founders of Mistral (the foundational model developed in France), had the previous year been Secretary of State for the Digital Sector in France, where he defended regulating the tech giants.

And then it turns out that Cédric O, once in private enterprise, decided to boycott those who had been his allies the previous year, because he wanted to push his foundational model forward. So a block was added to regulate foundational models, but it was regulation tailored to these gentlemen. Several things were done: the models, the general-purpose AI, are divided into standard models, open-source (open-license) models, and models with systemic risk, which are the really dangerous ones.

The mechanism used to determine whether a foundational model entails systemic risk is FLOPs, the computing power used to train the model, which is nonsense, because it tells you absolutely nothing. Moreover, the threshold is set at 10 to the power of 25. What happens? Mistral is below that threshold, and so are many of the large foundational models; only GPT-4 and a few more are at or above it.
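To make the threshold concrete, here is a back-of-the-envelope sketch from the editor. The AI Act presumes systemic risk when a model's cumulative training compute exceeds 10^25 FLOPs, and training compute is commonly approximated with the rule of thumb 6 × parameters × training tokens. The model sizes below are hypothetical round numbers chosen for illustration, not official figures for any real model.

```python
# A hedged sketch of the AI Act's systemic-risk compute test.
# Rule of thumb: FLOPs ~ 6 * N * D, where N is the parameter count
# and D the number of training tokens, counting the forward and
# backward passes. Model sizes below are hypothetical.

THRESHOLD = 1e25  # the 10^25 FLOPs presumption discussed above

def training_flops(params, tokens):
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

def presumed_systemic(params, tokens):
    return training_flops(params, tokens) > THRESHOLD

# A 7-billion-parameter model trained on 2 trillion tokens:
print(training_flops(7e9, 2e12))     # ~8.4e22, well below 1e25
print(presumed_systemic(7e9, 2e12))  # False

# A hypothetical 1.8-trillion-parameter model on 13 trillion tokens:
print(presumed_systemic(1.8e12, 13e12))  # True (~1.4e26 FLOPs)
```

This also illustrates the criticism in the conversation: the test keys on a single training-compute number, so a model can sit just under the threshold regardless of its actual social impact.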

Then it turns out that the law is passed, and two weeks after the negotiations ended, Microsoft announced a multimillion-dollar investment in Mistral. It was a goal scored right past Europe into the top corner: Microsoft used Mistral to do all the dirty work while they smoked a cigar with their feet on the table.

And now, in addition, the trend is precisely to lower that threshold while the power is maintained. With which, many of them are going to get away with it.

I hope that all this brings certain urgent solutions in terms of sustainability, conservation of the planet, and redistribution dynamics.

It depends on where. For example, in Southern Europe we consider Scandinavia a reference, but when you live there you realize, as with everything, that it is not such a reference. They are very neoliberal, very capitalist countries. They try to disguise it by pouring effort into the common good; yet, for example, they do not want to invest in human interaction with people at risk of social exclusion, but expect ChatGPT to answer any question citizens have for the public administration. I have discussed this at events where I spoke alongside Danish politicians, and it made my hair stand on end.

On the other hand, it is true that they have an idea of the common good that is still better than ours, but it also serves to cover up the big brother they have, because there is extreme surveillance of the citizen, disguised as good vibes: we are the happiest country in the world, which is not true either. It is the calmest country, but that calm does not lead to happiness, for reasons we can discuss another day, a bit outside this conversation. It is a country that I like, where I like living, a civilized country, but it is not a humane country, because neoliberalism is very individualistic. If you fall, it is because you are clumsy; you get up on your own, it is your fault. If you are sad, then decide to be happy. That is pure neoliberalism. If I am sad, it is because the context I am in causes this sadness. The famous "people first" is pure neoliberalism: I am giving you all the tools to be happy, and if you are not, it is your fault, and I, who am the one really causing your ills, wash my hands of them.

 

Jun 18, 2024

Carlos Iglesias

CEO at Runroom | Academic Director at Esade | Co-founder at Stooa | Podcaster at Realworld
