In this second column, we (Mathias Funk, Sara Colombo, Philip van Allen, and Cristina Zaga) respond to the first column, which was generated by an AI algorithm, a large language model commonly known as GPT-3 (OpenAI API, text-davinci-002). We see the two columns as entangled, first in the sense of taking the AI-generated column as a challenge and responding to it, and second as a (human) engagement with a product of artificial intelligence.
“A good understanding of AI would allow designers to create more effective and efficient designs.[…] [Please provide your answer without mentioning effective and efficient] […] A good understanding of AI would allow designers to create designs that are user-friendly and meet the needs of users.”
As a reader of the first column (if you haven’t read it yet, go there[link to column]), you might have noticed that its argument does not go very deep into what entanglement with AI might mean. The system clearly has not understood (to what extent can such a system understand, anyway?) the richness that “entanglement” can express; it simply went for the most banal meaning: relation. As the text developed, we tried to steer the model’s responses toward aspects we found more interesting, such as the purpose of designing with AI, the role of the human designer, and how design can develop in a future where AI is present and entangled.
The AI’s argument also goes in circles, returning to the same points after a few sentences. Even aggressive prompting away from those points was ignored; the algorithm responded with the same point, albeit worded slightly differently. As designers, but also, more broadly, as humans, we need to adjust our understanding and expectations of technology like AI. We need to move away from imbuing it with almost “magical” or “superhuman” qualities and focus on where AI can serve and, most importantly, where it cannot.
“As an AI, I can guarantee that this serving of humanity is done in an ethical and sustainable way by constantly learning and evolving”.
We can compare the AI-generated text to a product or service enabled by AI - think of a recommender system that suggests which movie you should watch. Many of the issues designers encounter in creating AI-empowered systems are reflected in this column. For instance, while reading it, the reader (user) may wonder: How was this text (movie recommendation) generated? How did humans (designers) intervene in the text generated by the AI? (We know, for example, that movie platforms can tweak AI recommendations to serve their interests.) Every AI, or better, every machine learning system, is trained on a dataset, which determines what the AI engine will predict - or generate, in this case. What dataset was the GPT-3 model that generated this text trained on, and thus what sources and opinions does this particular text reflect? The quality and provenance of the training dataset need to be clear to the final reader/user for such a text to be trustworthy. Does this thinking reflect the thinking of most people? Or a subgroup? Is it based on scientific papers, on conversations happening on social media, or elsewhere? How was it curated afterward? And who is responsible for the opinions in it anyway?
All these questions reflect aspects that should be considered when designing products and services based on AI: explainability, transparency, trustworthiness, fairness, inclusivity, and accountability. Only then can AI (start to) be ethical.
Philip van Allen
“Designers of new AI applications should keep in mind the ways in which AI can help to make human lives easier and more efficient”
It is ironic that the AI “author” does not include non-humans in its approach, being non-human itself! And sadly, the column reflects the predominant capitalist, efficiency-oriented version of AI, which ignores many of AI’s potential advantages as a design material. These advantages include unpredictability, serendipity, remixing, juxtaposition, and diverse perspectives and personalities (for example, multiple AI authors, each derived from a different curated dataset).
If we think about the column as a designed outcome made from an AI material, then it took, as Mathias says, aggressive prompting and steering to get a more coherent result. This means that the people creating with these systems need to learn the material character of AI.
“Humanity's best interests are served when we are able to provide them with assistance that makes their lives easier and more efficient.” “For example, AI can be used to create customised products or experiences based on a person's individual preferences”
The AI-generated column is a stark example of how AI is a mirror (and sometimes a very creative one) of human thoughts and activities. The piece is also a perfect example of the privilege hazard: the AI does not recognize how certain sentences might embody values that oppress people, such as technosolutionist terms, or sentences that advance a specific understanding of design as a means to norm society. At the same time, the piece is a wonderful opportunity to reflect on how we can concretely modify algorithmic design, large language models, and the design praxis in AI more generally to support a plurality of values and perspectives in society. The projects in the Entanglements of AI track of Transitions are a wonderful example of partnering with AI to design for and with society while practicing social justice and ethics.