Realworld
R063 - Internet and the Dark Forest, with Glòria Langreo
The universe is a dark forest, and each civilization is an armed hunter watching from behind the trees, moving slowly, pushing aside branches, trying to advance without making a sound. Even breathing is done with care.
The novel The Dark Forest by Liu Cixin inspired Yancey Strickler to use this image to describe what the internet has become in recent years: a place shaped by the fear of predatory behavior, amplified by technology that keeps getting better at tracking its users, segmenting audiences, making hate speech go viral and creating echo chambers.
I discovered this concept through a bold talk by Glòria Langreo, Design Director at GitHub and closely involved with GitHub Copilot. A new Realworld episode that is a call to responsibility for those of us who create technology.
What is the real world to you?
The real world, to me, is actually a pleonasm. It's like saying "go up upstairs" or "go down downstairs". The real world, to me, is everything. I understand the distinction people make when they contrast the real world with the digital world, but for me the digital world is simply a format, another format in which we are living, and it is still the real world. We talk about media, we talk about formats, we talk about different experiences, but at the core, everything is the real world.
We are creating permanent marks in this digital society, and I don't know what will happen in the future if we don't take these small moments for reflection.
Glòria Langreo, Design Director at GitHub
User Frameworks
In this podcast we have talked about archetypes and the persona framework, the modeling of user archetypes. I have heard you criticize it openly and head-on; you position yourself somewhat against how it is being used. Why?
I think there are many ways to use them... I simply think they are frameworks. I mean, personas, focus groups, most of the frameworks we use in research: where do they all come from? What is their origin? It's marketing. It's marketing and selling things. You've seen Mad Men; it's that thing of how we're going to trick the ladies into buying lipstick. In the end, it's not about helping people have a better experience, the user experience we sell; it's about how to manipulate people so we can sell our own product. So I think we are using a framework that was not designed for the purpose we end up giving it. The simple act of taking a huge mass of thousands of people, putting them in a blender and condensing them into a single person who is perfect, has no flaws, is the ideal neighbor who lends you salt and would never commit any crime or anything... I mean, they are beings of light. And it amuses me a lot, because I've taught a lot in bootcamps and such, and when students come to you with their user persona it's always: needs, buy my product.
Normally, the user needs we define: why are they always related to my product? Humans have endless needs. This synthesis turns them, to me, into fictitious beings that end up being a little excuse to justify your work. I'm not saying they aren't useful as the concept of a type of user; they are frameworks that can be useful at certain times. But this thing of having a person named Juan who likes hockey and whatever does not seem to me the right framework to work with.
What approach do you prefer?
Well, I think that primarily user research... another hot take. Normally the purpose of user research, what we want when we do it, is to have empathy towards the person you are working with: understand where they come from, what their needs are, how you can help them. But we end up generating empathy from the sidelines. As I mentioned the other day in a talk, it is, in the end, a very colonialist view of society. You go to an interview and take notes and it's the "them" and the "us". You are generating an otherness, a distance, that is completely contrary to empathy. How can you empathize with someone when you are observing them from your podium, from your pedestal as an expert in the field? I've been in a lot of focus groups. When we were building Tuenti, we ran focus groups in the office. And it's like that: you are observing a group of teenagers interact with each other and, in the end, it's like being at the zoo, watching how the gorillas relate to each other. What empathy does that generate?
That generates distance. So I think one of the most effective ways to generate empathy is to involve users in your creation process: have them sitting at the table with you, as one more member of the creation process. Having diverse teams is key for me, having people from all kinds of backgrounds, experiences and situations. And, in general, taking a more human approach towards people: deep interviews, sitting down, chatting, understanding. Not this more clinical, analytical stance of examining the other.
My favorite phrase in the history of music is by Ari Puello in La ley de Murphy, which says: "If something has to go wrong, it will." And it's like that. You can never foresee everything that can happen to a user, never. But there are mechanisms that let you position yourself so that you can foresee, more or less, what situations can happen. For example, at GitHub we use a framework that I find much more interesting than any of these, which I didn't know before joining and fell in love with. Most companies use postmortems. When something happens, when something goes wrong and there's a big screw-up, a postmortem is done and the team gets together and well, what happened? At GitHub, we do premortems, which I think is a wonderful concept. A premortem is getting together with as many people as you can and thinking about everything that could go wrong. It's like being pessimists in life: what can happen? Well, someone can use it to abuse another person. We can be crucified on Twitter. Anything; everything that can go wrong. Then you can evaluate whether the idea or solution you have thought of is worth it, looking at its possible risks and at which of those risks you can stop and prevent from happening.
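To make the premortem idea a bit more concrete, here is a minimal sketch, not taken from GitHub's actual process, of how the output of such a session might be captured and used to decide whether an idea is worth shipping. The Risk type, the example risks and the worthShipping check are all hypothetical.

```typescript
// Hypothetical sketch of premortem output: ways a feature could go wrong,
// each with an estimated impact and a planned mitigation.
type Risk = {
  scenario: string;                      // what could go wrong
  impact: "low" | "medium" | "high";
  mitigation?: string;                   // how we would contain it, if we know how
};

const trackerPremortem: Risk[] = [
  { scenario: "Device is used to stalk a person", impact: "high",
    mitigation: "Alert nearby phones when an unknown tracker travels with them" },
  { scenario: "Public backlash over privacy", impact: "medium",
    mitigation: "Publish the data-handling policy before launch" },
];

// Ship only if every high-impact risk has a mitigation we can actually build.
const worthShipping = trackerPremortem
  .filter(risk => risk.impact === "high")
  .every(risk => risk.mitigation !== undefined);

console.log(worthShipping ? "Proceed, with mitigations" : "Rethink the idea");
```

The point is not the code itself but the habit it encodes: every risk gets written down before launch, and anything severe without a workable mitigation blocks the idea.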
They are always perfect people. I've never seen a persona defined in any company who is a sexual predator, for example, or, I don't know, a pedophile. And on social networks that persona should exist, because those people are there. Yet we don't define them. Why? Because it's a very delicate topic. I mean, how do I define a persona who is a pedophile? In reality it's an anti-persona: someone you define precisely because you don't want them to access your product, so you have to create mechanisms to keep them out. But it's something that is not usually done, and this has led to a lot of failures everyone knows about, like Apple's AirTags, the key-tracking thing, which they probably thought up so you don't lose your car keys or the keys to the mountain apartment or whatever. Someone upper-middle class, white, full of privilege. And AirTags are being used to track women to their homes. They are being put in car wheels, in jacket pockets, to follow women to their homes.
We are constantly patching instead of considering that this can happen before it happens, which is quite easy.
Regulation and Disruption
I see a huge gap between the speed of technological change, which we talked about a moment ago, and the cruising speed of regulation or reflection... It's something that scares me.
In fact, you can see it a lot in the way people are starting to move around the Internet, and around digital products in general. There used to be a generalized carelessness, which still exists quite a bit, but many people are starting to hide in these safe spaces, from Telegram channels to a Discord server with four friends. People are already afraid to go outside in this world, into the dark forest. They are afraid that at any moment the predator will come, and you say, but from where? Where is the attack going to come from? And this is people who more or less know what the story is about, because then there are people who don't even consider these things. Sometimes it amuses me: in the real world, many people are very careful with the protection of their data. We have very clear reactions about what our space is, where our red lines are. However, on the Internet: what do you want? My age, my ID, whatever. Yes, take it, take it. My bra size? Take it, everything. Yes to everything. And even if we read it, we don't care.
At what point did we give up? At what point did we surrender and say, look, that's it, I give up, take it all? It's curious that we are so careful in the physical world and yet have such a huge disconnect about where the conflict actually originates.
Let me quote something you cited in one of your talks, from Edward Tufte: "There are only two industries that call their customers 'users': illegal drugs and software." And you say that when we talk about users, we do it from a colonialist perspective. You put it that way. What does this mean?
Well, exactly that: this observing from the sidelines, from a distance. Them and us. We arrived here with the ship, let's see what these people are doing, let's colonize them and teach them how they should behave to use my product correctly. Please, what kind of disconnect is that? And it's this perspective of them and us that, in the end, I think doesn't help anyone.
Speed and Digital Divide
We are at a time where the speed of change is brutal. Is it a problem?
In my opinion, it definitely is. It's wonderful that we have this constant innovation and that we can look back: six months ago DALL·E was producing horrible artwork, broken-looking people with 26 fingers, and now, just a few versions later, you can hardly tell whether an image is real or not.
I think speed is dangerous, not in terms of how quickly we are improving technology, which is wonderful, but who is thinking about the consequences of these changes, of these new paradigms we are introducing and at what cost these changes are being introduced.
We saw it with Facebook. At the time nobody worried; it was all "how great, Facebook, I'm going to talk to my friends, social networks, how cool". Everyone was building social networks, likes, comments and so on. And after a few years we said, oh my God, what have we done? I've worked on two social networks, and after a while you look back and say, "Wow, maybe we should have been more careful with many of the things we were doing."
And we are talking about the late 90s and early 2000s; those times were much slower. That doesn't mean they were safer or more thought-out environments, because they weren't, but there was reaction time: when we hit a problem, an ethical issue, a stumbling block, we had time for reflection, time to think about how to solve it. From there came ethics committees, codes of conduct, red lines that we as an industry never want to cross; a series of departments emerged that took all these issues into account. Right now there is no time for that, because by the time one thing has been solved, everything has already changed 23 times. All the information we are storing right now in these models is already stored. You can't say, oh, sorry, I didn't mean to put it in. It's already there. There's no going back.
We are creating permanent marks in this digital society, and I don't know what will happen in the future if we don't take these small moments for reflection.
I've never seen a persona defined in any company who is a sexual predator, for example, or, I don't know, a pedophile. And on social networks that persona should exist, because those people are there. Yet we don't define them. Why? Because it's a very delicate topic.
Glòria Langreo, Design Director at GitHub
Ethics and Leadership
Where should the leadership for these necessary reflections come from?
Well, in fact, most of the companies that have been working on these topics, on Artificial Intelligence, for many years, like Alphabet, had ethics departments with people raising their hands, writing papers and saying: watch out, be careful with this and with that. And those people have been dismissed and removed from the picture. So clearly there are commercial interests in that not happening. What continues to fascinate me is that as a society we accept it, that we say, well, fine, and we don't go out into the street with torches. It can't be that there are people worrying about how we store data, what data we store, how we use it, what biases these models have, and we just don't care. Sometimes I get the feeling that we are all watching the great spectacle of image and color, these image, video and sound models, while what is underneath keeps happening and, well, at some point we'll see what happens there. Who is responsible? I'm not very clear on that.
Ethics in Product Creation
How do we incorporate that ethical reflection of data from a product creation perspective?
I think more and more companies are concerned about this and are trying to store only the data they need. At GitHub, for example, it's like that: practically nothing is stored. At Sketch, the same. In fact, we had a lot of trouble defining some features because we didn't have usage data; we didn't store anything. Then you have companies like Plausible; there are a lot of companies that are increasingly aware of this, and in the end it's a product value. Many people choose one product over another because of its privacy policies. Perhaps as users we don't give it all the value we should. Many people don't even bother to make the comparison: I have these two applications that interest me equally, which one will be more respectful of my information? We don't do it because it's tiring, and in the end that's the problem: you're putting the responsibility on the user.
How do I make sure I'm building a good product that, ethically, tries not to have a negative impact?
I think we always have to talk about trying, because it's very difficult to achieve it 100%. The intention has to be there, and it's important that it is. And, being pessimistic again, I personally think it's impossible to create a product that can in no way generate these situations. Utopias don't exist, those perfect worlds where nothing happens and everything is wonderful. That doesn't exist; it's another pleonasm, the unattainable utopia. So I think we have to try to design resilient products and strategies: since we can't prevent bad things from happening, we have to design strategies and products that are capable of reacting to those negative moments, that know how to recover and react at a reasonable speed. Simply take into account what can go wrong and have mechanisms to fix those situations as they happen. The other day, in a talk at Bilbostack, I called it the pse-topia or the meh-topia. It's not a utopia, but it's not a dystopia either; it's life, in the end. So I think it's important to have these mechanisms of resilience and quick recovery.
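One common mechanism behind the "react at a reasonable speed" idea is a remotely controlled kill switch for a feature. The sketch below is only an illustration of that pattern, not anything taken from GitHub or Sketch; the flag endpoint, flag names and UI strings are hypothetical.

```typescript
// Hypothetical kill-switch sketch: remote flags let you disable a misbehaving
// feature quickly, without waiting for a full release cycle.
const FLAGS_URL = "https://flags.example.com/flags.json"; // hypothetical endpoint

type Flags = Record<string, boolean>;

async function fetchFlags(): Promise<Flags> {
  try {
    const res = await fetch(FLAGS_URL);
    return (await res.json()) as Flags;
  } catch {
    return {}; // if the flag service is unreachable, fall back to everything off
  }
}

async function renderSharing(): Promise<string> {
  const flags = await fetchFlags();
  if (!flags["public-sharing"]) {
    // Feature was switched off after abuse reports; degrade gracefully.
    return "Sharing is temporarily unavailable.";
  }
  return "Share this item publicly.";
}
```

The design choice is the one described above: you accept that something will eventually go wrong and make sure turning it off is a configuration change, not an emergency release.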
I really like what you've explained about the premortem. And from there, I understand, come features or iterations on the application itself, on the tool.
Simply developing mechanisms to avoid the negative interactions you don't want on your platform often leads to this: new functionality you hadn't even thought of, but that makes perfect sense. It's an exercise I recommend everyone do, because it's super interesting.
It seems to me an exercise in maturity, solidity, values and principles that is really hard to find.
If you have a set of principles and values and they cost you nothing, they are free, they don't cost you money, they don't affect your work, they don't affect anything, then they are probably not values. They are probably slogans, nice phrases for your company's landing page. Real principles are the ones that truly affect the business, the profit you make. I often give the example of when I worked at Sketch: despite being a private company with a paid, subscription-based product, the file format is open, it is open source, because its founders firmly believe that your creations are yours. Your designs are yours; they are not the owners of what you have generated. So why should they have a closed file format that you can only use on their platform and never do anything else with?
That, which is a super nice thing and kind of a no-brainer, because in the end it's something I created, that I birthed, that is mine or my company's or whatever, opened the door to a lot of applications and services built on top of Sketch: Abstract, InVision; Figma was born importing Sketch files.
That led to the birth and diversification of a market, which is a super good thing and something they believe in: market diversification rather than monopolies. But, in turn, it is obviously affecting the business economically; they are losing users and losing customers. However, it is a core value they are not willing to give up. And that, to me, is one of the things that made me fall in love with the company's culture.
I firmly believe that whoever buys also has a lot of responsibility in the way things are defined.
Totally. There are several examples. A colleague on my team who worked at Blinkist talks a lot about how the app's trial was a bit shady: people didn't quite understand when it renewed and when it didn't, and so on. And besides having very angry customers, of course, the churn was what you would expect. So they did the work of taking a much more ethical approach to the trial, clearly explaining how it works, when you will be charged and how. They did it in a super clear, well-explained way, and conversion skyrocketed. Of course. Many times we think of it as something negative, "how am I going to invest in this now, it won't give me anything, it's better to keep deceiving people", and then you do it in a respectful, ethical way, making sure people understand what is happening, and the business goes much better. Because otherwise, in the end, you are burning people.
If you have a horrible service, no one will recommend you, no one will re-subscribe, no one will stay with you. These are numbers that work for the quarter: this quarter we had such and such revenue, but then what happened with it? Many metrics, on many occasions, especially when they are tied to the bonuses of the C-level, of the most senior people in the company, only serve to hit the number at the end of the quarter. What is the follow-up on that number? It doesn't matter, I've already been paid. Sometimes we are very short-termist; we look very closely at numbers that seem wonderful and then, in the end, bring horrible returns.
What is your current challenge as a designer at GitHub?
At GitHub I'm not focused on Copilot, on the product itself; I'm in the area of communities and open source, the whole part that doesn't make money for the platform but is the biggest entry point of users to our platform and the reason open source exists.
Recently I'm also working on growth, which is the part that does make money, that brings in revenue. As you said, the paradigm has changed: the way of developing, the way of writing code, is changing at an incredible speed. And it's interesting to see in what ways we can use this technology not only to program but to help in other parts of the development cycle. For example, in open source we often see how a lot of people maintain projects for the love of it, for the love of the community, because they believe in what they are doing. However, it often becomes a full-time job.
There are many people who could perfectly well be doing that eight hours a day, but there is no economic return, so very few people can dedicate themselves to open source unless they are inside a company that is really creating open source products. At GitHub a lot of work has been done to support open source maintainers, creating initiatives like Sponsors, where you can give money to help someone maintain a project. However, this is a double-edged sword, because if a company gives you money to maintain an open source project, it creates a kind of dependency where you have to respond to that company at certain speeds, at certain rhythms. And open source is not that; it's not a company giving money to a person, it's someone working for the love of it. So I think products like Copilot, and artificial intelligence in general, are a very powerful tool for this type of use case.
If we are able to help people working in open source respond quickly to these needs, to create a pull request directly in the code when someone reports a bug... Many people want to collaborate on a project and make a pull request, but don't write tests. If Copilot can write the tests for you and they can be merged directly, you are shortening that chain of discussion. If people, instead of waiting for answers in discussions, can ask a question in a Copilot chat, for example, you are lightening this weight, this burden that people maintaining open source projects carry. One of the challenges we have now is to see how we can apply this technology in a lot of these areas.
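The Copilot integrations described here are product direction rather than a public API, but as a rough sketch of the kind of maintainer automation involved, here is a hypothetical bot that drafts a reply to a newly opened issue. It uses the real Octokit client; answerWithModel is an assumed placeholder for whatever model-backed service would generate the draft, not an actual GitHub API.

```typescript
import { Octokit } from "@octokit/rest";

// Hypothetical placeholder: in reality this would call whatever model or
// Copilot-backed service is available; no such public API is assumed here.
declare function answerWithModel(question: string): Promise<string>;

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// Draft a reply to a newly opened issue so the maintainer doesn't have to.
async function replyToIssue(owner: string, repo: string, issueNumber: number) {
  const { data: issue } = await octokit.rest.issues.get({
    owner, repo, issue_number: issueNumber,
  });
  const draft = await answerWithModel(issue.title + "\n\n" + (issue.body ?? ""));
  await octokit.rest.issues.createComment({
    owner, repo, issue_number: issueNumber,
    body: draft + "\n\n_(automated draft, a maintainer will review)_",
  });
}
```

The same pattern, with a human reviewing the output, is what shortens the discussion chain she describes: the maintainer edits a draft instead of writing every answer from scratch.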
Sometimes I get the feeling that we are all watching the great spectacle of image and color, these image, video and sound models, while what is underneath keeps happening and, well, at some point we'll see what happens there.
Glòria Langreo, Design Director at GitHub
Data and Privacy
I think a key question we have to ask ourselves every time we store data is: do I need this data for my application to work? Because if the answer is no, you probably don't need to store it.
We are giving data that is being stored. Probably for nothing, just out of inertia. Is it dangerous?
I think it is dangerous in the sense that many times we don't stop to consider what data we need and what data we don't. There is no prior reflection. Data collection itself is not bad, it is not negative; what is negative is collecting data out of inertia, without considering why we need it or what it is useful for. I think a key question we have to ask ourselves every time we store data is: do I need this data for my application to work? Because if the answer is no, you probably don't need to store it.
And the issue is not the data collection you do right now in your application. Right now you store some data in your database and absolutely nothing happens. What happens is that your company may be sold in the future and that data passes into someone else's hands. Your company can be hacked, there can be a data breach, and that data ends up exposed on the Internet. Your government can change the law and require you to hand that data over. This happens constantly in the United States with the issue of abortion, the legalization of abortion.
A lot of women who used apps to track their menstruation gave data like their location, a series of data points that are now being requested by the police to identify whether a person was pregnant and then stopped being pregnant, basically so they can go and arrest her and put her in jail. Or insurance companies in the United States using the fitness data from your watch, data you are happily handing to Apple, to decide whether to deny you health insurance or give it to you. Was the menstruation-tracking company storing that data to screw you over and put you in jail if you had an abortion? No. Had they thought this could happen? No. Were they storing your data for anything in particular? No. Could we have avoided this? Yes, if you didn't need all that information: why do you want to know where I am?
I think that prior reflection is missing, and in the end we collect data just because it's free. It's information that comes for free. Like when you try to sign up on ProductHunt and log in with Twitter, and it asks for permission to follow people, unfollow people, create lists, delete lists, send messages, and you say, wait a minute, why do you want all this? Well, because Twitter's API allows it, so why not? Maybe I won't do anything with it, but since it's free, I'll keep it. And in the end we click okay, yes to everything, because we want to see that content.
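To make the "do I need this data for my application to work?" question concrete, here is a hedged sketch of a signup handler that keeps only what the feature needs and requests only read access from an OAuth provider. All field names, scope names and the hash helper are illustrative, not taken from any specific product.

```typescript
// Hypothetical signup payload: the client may offer more than the product needs.
type SignupRequest = {
  email: string;
  password: string;
  birthdate?: string;   // offered, but no feature actually uses it
  location?: string;    // same: tempting "just in case" data
};

declare function hash(value: string): string; // assume a real KDF here (e.g. bcrypt)

// Store only what the application actually needs to function.
function toStoredUser(req: SignupRequest) {
  return {
    email: req.email,
    passwordHash: hash(req.password), // never the raw password
    // birthdate and location are deliberately dropped, never persisted
  };
}

// Same principle for OAuth: request the narrowest scope that still works.
const AUTH_SCOPES = ["read:profile"]; // not write, not messages, not lists
```

The interesting part is what is absent: anything you never persist cannot leak in a breach, be demanded later, or change hands when the company is sold.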
How do you see the near future?
Yesterday I saw a beer ad generated by artificial intelligence that seemed crazy to me, as if it had been directed by David Cronenberg. If the future looks like something directed by David Cronenberg, all in. I'm there.
When something happens, when something goes wrong and there's a big screw-up, a postmortem is done and the team gets together and well, what happened? At GitHub, we do premortems, which I think is a wonderful concept.
Glòria Langreo, Design Director at GitHub