Realworld
R062 - The Artificial Revolution, with Alessia Rullo
Creativity, the essence of the human spirit, intertwines with technology and innovation in a constantly evolving bond. But we must ask: do these synergies contribute to people's well-being, or do they alienate them in a frenzy of hyper-productivity? Where does art stand as a manifestation of this creative spirit, beyond the utilitarian vision that prioritizes efficiency over empathy and the worship of innovation and technology over humanity itself? To what extent does it empower us or distance us from our true essence? To what extent does it elevate us or reduce us to a merely automated, dehumanized subsistence?
We kick off a new Realworld with more questions than absolute answers, so we'll have to surf through them with as much attention as intuition. And there's no one better than Alessia Rullo for that.
How do you see the current moment of technology?
From a pragmatic, and I would also say historical, point of view, I think we are at a turning point. We are beginning to experience a revolution announced more than 30 years ago that, for a few months now, is finally starting to manifest in something that makes sense, something that is becoming tangible for everyone, because until now it has been only a promise. I see it as a revolution because, from my point of view, and everyone's, since we are no longer talking about anything else, it is not just a technological one. The technological revolution has never been just a technological revolution. It has always been a social, organizational, economic, infrastructural, political revolution. With what we are going through, we are beginning to plant a more concrete seed of changes that will probably affect us on all levels. Being very pragmatic, I think these changes need to be interpreted. We have to somehow not reject them, but manage them. And we have to have the ability to, and here I use a word that may sound a bit strong, manipulate them.
The technological revolution has never been just a technological revolution. It has always been a social, organizational, economic, infrastructural, political revolution.
What is the real world for you?
It is what excites me, what moves me. The real world for me is clearly what we can experience with our senses, which is the most obvious answer. And it is true that when our emotions are at their strongest and most brutal, anger, rage, pain, but also joy, it is here that we feel and perceive them most. I also believe that our perception is clearly partial, limited, imperfect. And I think the risk of an imperfect, perhaps altered perception is even greater today, with so much stimulation that somehow distorts our perceptions. Personally, I believe that as human beings we have not yet developed our full perceptive capacity.
The future of AI in the world of health
The world of health, the world of diagnostics, is transforming very quickly and there is indeed a huge acceleration. There are two or three directions we need to consider.
The first: the healthcare systems built in the 50s and 60s, post-war, are obsolete, because they were designed for a population with a certain demographic profile in terms of age and race, a very homogeneous population, and for diseases that killed you. Now we have a much more heterogeneous, much older population with chronic diseases. In other words, the existing healthcare system is no longer sustainable. So one of the possible directions is how we transform the concept of care into something much more distributed. Very likely, diagnostics and care will move to where we live, which means completely rethinking our spaces in a different way.
It is also interesting to think about the actors who will provide this care. Nurses, for example, will play a fundamental role, because they will probably be at the center of this system. It is very curious that precisely this population is at a very high risk of burnout right now. A nurse can do their job for a maximum of eight and a half years, because the cognitive load and the stress are enormous. And one factor in that stress is the tools we design for them: the famous clinical record, for instance, is a tool that, in terms of both interaction and usability, is extremely complicated.
We have to think about intelligent interactions so that the people who provide care can actually be present for it. And there will probably be intelligence mechanisms that allow us to have much more advanced and distributed diagnostics: advanced in the sense of being able to anticipate and prevent the possibility of disease.
We have a lot of data, and this data is often unreliable. From data we must derive good information; from information, knowledge; and from knowledge, wisdom.
These would be the logical steps. Right now there is often a leap straight from data to wisdom, and we transform data without much wisdom, without it being true. So empowerment in this case also depends on the quality of the information, and therefore on the quality of how the patient can consume that information. Because this empowerment will probably lead to decision-making.
And here we return to the previous point: the ethical component and the political component, with its regulatory dimension, which must accompany any socio-technical decision. Tools are never neutral; that is the issue. For now, it is we as human beings who give them meaning. I hope we can continue to do so, that this does not change. That is my hope, at least.
How to eliminate or reduce the friction of using artificial intelligence
The starting point for me is to understand what we are talking about. These are intelligences based on deep learning mechanisms that we train, and we tell them exactly what they have to learn and what they do not. Sometimes it seems like we are talking about a person, or a god.
The starting point for me is: artificial intelligence for what? What is the problem we are trying to solve, or the emerging behavior we want to promote? So the starting point would be to first understand the context, in its human, social, emotional, hard and soft elements, in which we are applying it. I do design research in my work. And from there, understand what elements we need to transform to introduce this type of intelligence.
First is, what do we have to do? And then, what level and what type of intelligence can we introduce to solve this specific problem?
There are many cases of implementation. The starting point is: people, the planet, the context, what we are doing to solve what. And from there, as always, use the power of imagination.
Design is "to make possible the impossible." It is to be able to design, project, imagine, shape, making tangible, what we believe is impossible.
There was a professor I adored at the Eindhoven Polytechnic who said a phrase that stuck with me: design is "to make possible the impossible." It is being able to design, project, imagine, shape, make tangible, what we believe is impossible. So this is how I would approach that type of project: giving shape to something that does not exist, but starting from a specific problem or starting point. And I insist, there is no concept of artificial intelligence as a superior entity.
Old claims in a disruptive moment
The point is that, in general, I do not believe that governments and institutions are truly prepared to answer these questions. And not being prepared is an act of significant irresponsibility. Without wanting to sound too philosophical, we may even need new ontological categories to describe what exists and what does not. And we cannot afford not to have a point of view on all this, one that helps guide the process. That point of view will clearly have ethical implications. And of course, it is not black and white; it will probably remain gray, especially because we are still learning what it means. One example, now with ChatGPT, is the topic of copyright, which affects content creation. Until now an industry has been built around content creation, around authorship. Now this concept can become completely obsolete. And above all, how can authentic content be recognized?
I do not believe that governments, institutions, are truly prepared to answer these questions.
What does authentic content mean? I don't have the answer, but I think these are questions that need to be put on the table, so we can start taking positions and giving guidelines. They will be imperfect, but imperfect guidelines are better than no guidelines. That is my point of view, which I also apply in my work: done is better than perfect. We are in a moment of change. Surely we are not prepared, but we have to rely on a point of view that gives us guidelines to decide what is next, what comes right after. The risk, otherwise, is a potential anarchy, which to some extent will happen anyway.
User-centric perspective
We have a lot of frontend developers and UI designers, jobs that in the future will probably diminish, but we are not talking about interaction designers, the profession that, working at this meeting point between the physical and the digital, could use emerging technologies to develop interaction models that truly make sense. Technology, I have always believed, has to be meaningful, anchored to a meaning, to a sense; otherwise it is an exercise in style. That is the important challenge that I think we as designers should take on, embrace, assume.
We have to develop a multidimensional and systemic perspective
This topic is multidimensional and highly complex, and we have to develop a multidimensional and systemic perspective. Because unfortunately it is not black and white; it probably works not in three dimensions but in four, five or six, in dimensions we still cannot even imagine. So yes, there is a lot of complexity, but I think the topic demands it somehow.
What is the cause of digital fatigue?
When we talk, for example, about usability or user experience, we are talking about reducing the cognitive effort needed to carry out a task, in this case a task with a digital interface. Cognitive load, cognitive fatigue, is part of how we function as human beings. It is true that this cognitive load, in this wave of information that reaches us from the digital world, is a bit out of control. I think this became very tangible during COVID, because during COVID, for various reasons, especially everyone's safety, the amount of information multiplied, materializing in a life that at some moments was almost exclusively digital for many people and many jobs. And this overexposure to screens has turned into the phenomenon of digital fatigue, which is now almost a buzzword: how it can be reduced, how it can be compensated for.
We always talk about artificial intelligence taking away our jobs or empowering us. What if it allows us to develop our lives in another way?
When I think about the future of technology, I see technology in its capacity not so much to overload us with information, but to give us the information we need, when we need it, in the way that suits us best. This was a bit the idea of ambient computing, from Weiser in the early 90s: a technology that is, so to speak, transparent. Without friction, or with only the necessary friction. The friction that allows us to have the correct level of attention to be in that state of optimal flow, which is not only a productive state but a creative one. Here technology can have a role in empowering us, in helping us develop our maximum creative and cognitive potential as human beings. And what role does artificial intelligence play in all this?
To what extent can we minimize these cognitive frictions so we can develop our potential in another way, or have more time to live? You started talking about work-life balance. What a great topic, absolutely unresolved and without any real willingness to solve it. At least until we have a technology that perhaps opens another path. We always talk about artificial intelligence taking away our jobs or empowering us. What if it allows us to develop our lives in another way? For this to happen we need ethics. We need decisions that are political.
Do we have that control in manipulation?
The easy answer would be to say no, we don't know how to manipulate those changes. But I think that is precisely the challenge. We are clearly talking about an artificial revolution; I am talking about artificial intelligence. There are all kinds of applications. I am a designer, and every day in the team chat, applications appear that tell us we will no longer have work. It is not just a matter of being afraid, or of thinking about the most advanced jobs or professions, which I still think will have a place. It is also about understanding what kind of future we desire.
I will make a connection that may not seem obvious. In France there is a general strike over the issue of pensions, because the retirement age is rising from 62 to 64. And I sometimes wonder whether the question is really the retirement age, or whether it is about what model of life we want to have and what role we want work to play in our lives. That is the question I think we are all asking ourselves a bit, so I also wonder what role artificial intelligence, or this other type of intelligence, could have in making us live, saying "work less" is a bit reductive, but live in a deeper, more conscious way, more connected with nature, with the world. These are the questions I think it would be interesting to start answering. But to answer them we cannot avoid taking a conscious position on what is happening, which means putting all the ethical and political issues on the table, along with everything that will be the intelligence of the future, the work of the future, the professions of the future.
What model of life do we want to have, and what role do we want work to play in our lives? That is the question we have to ask ourselves.
Is the digital divide widening?
The issue of the divide, artificial or digital, whatever we want to call it, for me has two factors or dimensions. One is in the use of these tools. Here is where I believe design can have fundamental value, because it is through the interaction models we propose that we can minimize this divide. That is, in some way we are getting closer and closer to what a few years ago was called natural interaction, which presupposes no learning or code, which is closer to how you and I are interacting right now. Take, for example, the research being done at Google on an intelligence capable of recognizing certain contextual cues: if you see me carrying a full tray near a door, you will open the door for me; I don't have to prompt anything. So I think there is a design challenge in approaching a truly transparent interaction model, which does not mean invisible, but one with an adaptive, intelligent level of visibility that allows me to move between the physical and the digital in a way that presupposes no effort. And this is one dimension of the problem space.
The other dimension, sorry if I repeat myself, is economic; it is socio-technical and therefore economic. Because, as is happening now, there will be a part of the population, probably very large, that at first will be excluded or will adopt this technology differently, as happened with mobile technology. It is also true that if we think, for example, about the adoption of mobile technology in emerging countries, or in countries with a different level of development from, so to speak, Western countries, mobile technology has been a great element of democratization. Here, too, it is very difficult to predict the future. What we can do is look at the patterns that developed in the past and again try to regulate in the best way we can, knowing already that the regulation we implement today will not be valid tomorrow. So it is continuous learning: learning, learning and relearning.
Taking on this awareness at an infrastructural and governmental level is the step that is not being taken. Therefore, I am not so optimistic. Here I return to being pragmatic and realistic. That is, this is the important challenge. I do not believe, and I go back to one or two previous questions, that we can solve it by saying we stop for six months. Instead, let's use those six months to get to work, because this does not stop, and it is difficult for it to stop with such a great disruption.
I have the feeling that this is washing over me and I am spinning in the wave.
The image that comes to my mind is someone trying to dry the sea with a mop. Rather than trying to block this virality, I believe we have to manage it on the go, with limited information, taking risks, even the risk of making mistakes.
It is this democratization of access to this technology that we need to work on. I believe the only way to do it is to do it on the go.
You said that somehow it has arrived like a giant wave. And it is true that we have this feeling, but artificial intelligence has been worked on for 30 years, and little by little we began to see the first signs. What happens is that this arrives, grows, reaches a sweet spot where it becomes disruptive, and ends up democratizing. It is this democratization of access to this technology that we need to work on. I believe the only way to do it is on the go. I do not see it as possible for an open paradigm like open AI to be blocked.
I believe history also teaches us that when the wave comes, surfing is the only thing you can do. Yes, it is complicated, it can be scary, it can generate a lot of concern. But I also believe there are people who have been working on these issues for years. So maybe it is time to put on the table what has been learned, or not learned. But this presupposes a serious, professional point of view that is, I insist, conscious of what is happening. It is this concept of awareness, which for me means taking responsibility and action.