The spotlight on generative AI has gotten brighter than ever in recent months, and it’s sparking both excitement and fear among designers because of what it means for the future of creativity. Creativity is not the exclusive domain of designers, of course — it also helps marketers, engineers, strategists, and others do their jobs. But there’s a reason why people in other roles often refer to designers as “the creatives”: For designers, creativity is the bedrock of their work. So questions about the impact of generative AI are not just intriguing or mildly unnerving to them — they feel like an existential threat.
Thanks to the heavy media focus on OpenAI’s DALL·E 2 and ChatGPT, the two types of design-related work where generative AI’s relevance has gotten the most attention so far are illustration artwork and chatbot conversation design. For conversation design, for example, tech providers such as Cognigy and Voiceflow have integrated their tools with OpenAI’s GPT-3 to free chatbot designers from the task of manually composing variants of user and bot utterances, generating them automatically instead. And for artwork, Adobe introduced “Project All of Me” in October, which can use generative AI to fill in areas of images that are missing or need to be replaced with visual content that makes sense in relation to the rest of the image.
These are just two examples from many design-related use cases, and they’re relatively minor advances compared to generative AI’s potential. But they’re a sign of things to come, which is why designers in other fields should pay attention. Generative AI will affect every design subdiscipline, ranging from 3D asset design for extended reality to interaction design for user interfaces across modalities and even high-level strategic product design and design-inspired activities such as design thinking — any domain for which there is a large corpus of digitized content that can therefore be used to train neural networks.
There are lots of thorny problems to overcome, however. Two that I believe are among the most significant for design are that:
- Generative AI’s output will get stale without human replenishment.
- Generative AI holds up a mirror: Its greatest flaws are human flaws.
Generative AI’s Output Will Get Stale Without Human Replenishment
The content that the neural networks underlying generative AI produce is the result of remixing material created by humans in the first place. So unless humans continue to create original material that is fed into neural networks to refresh them, generative AI’s pools of inspiration will become stagnant, growing only incestuously through the addition of material that itself comes from generative AI. Content generated by the likes of ChatGPT and DALL·E 2 that today feels original and creative will come to seem outdated, repetitive, even inbred.
In a way, though, this is an exceptionalist view of creativity: It assumes that only humans can be creative, that creativity is something no machine could ever be capable of.
There’s also an evolutionist view of creativity. The way GPT-3 emulates human creativity is through a parameter called “temperature”: Set it higher, and the output becomes more random and less predictable while remaining reasonably probable; set it lower, and the output becomes safer and more repetitive. Is that what people do when they’re being creative — make connections between disparate things that are less predictable yet not entirely improbable or absurd? Is every human breakthrough just a remix, if you look close enough? Is all creativity essentially derivative and recombinant? Maybe it’s true that “there is nothing new under the sun.”
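The mechanics of temperature are simple enough to sketch. The toy code below is an illustration of the general technique (temperature-scaled softmax sampling), not OpenAI’s actual implementation; the logit values are made up for the example.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into a probability distribution.

    Dividing by temperature before the softmax is the whole trick:
    temperature < 1 sharpens the distribution (safer, more predictable picks),
    temperature > 1 flattens it (more random, more "creative" picks).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores for three candidate words.
logits = [2.0, 1.0, 0.5]
cautious = softmax_with_temperature(logits, 0.2)     # strongly favors the top token
adventurous = softmax_with_temperature(logits, 2.0)  # spreads probability around
```

At low temperature, nearly all the probability mass lands on the highest-scoring token; at high temperature, lower-scoring (less predictable) tokens get a real chance of being sampled — which is exactly the “still reasonably probable but somewhat more random” behavior described above.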
These are deep questions, and you should not trust anyone who claims to know the answers for sure. But I’m inclined to think the reality lies somewhere between these two extremes: Machines are now, at times, somewhat creative; their creativity is shallow yet surprisingly interesting and useful; and it’s nowhere close to the depth of human creativity, for now. Some experts, such as Yann LeCun, argue that this gap stems from machines’ lack of embodied sensory experience of the physical world. But that’s just one hypothesis, and it’s an open question whether such experience could eventually be supplied to machines in some form.
Generative AI Holds Up A Mirror: Its Greatest Flaws Are Human Flaws
Generative AI is rightly criticized for sometimes being biased, inaccurate, and even self-contradictory. But these are not traits of generative AI itself. The root cause of the problems is that the human-created content we train neural networks to emulate is biased, inaccurate, and self-contradictory. Why? Because we, the people who created it, each have our own biases, make mistakes, and contradict each other in the content we create and publish, which is then used as a corpus to train neural networks.
One possible solution is to make sure that when we train a neural network, the content we feed it is unbiased, accurate, and logically consistent. Unless we do this, we just emulate and perpetuate the mistakes of the past: garbage in, garbage out. The problem has afflicted every model in OpenAI’s GPT series so far, since each was trained on snapshots of large portions of the web as a basis for the content it should mimic. The web is certainly not unbiased, accurate, and logically consistent, which is why OpenAI has coded wrappers around its models in an attempt to prevent users from eliciting the worst of what the models could produce — an attempt that, so far, has been only partly successful.
Unfortunately, making sure that the content used for training the neural network is “clean” is easier said than done. An alternative and probably more realistic approach is to identify where a neural network stores its beliefs and then edit them to be accurate, or delete false beliefs that would otherwise compete with true ones the network also holds.
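The core idea behind this line of research (often called model editing, e.g., rank-one model editing) can be shown with a toy sketch. Here a single linear layer stands in for the network’s “belief store,” mapping concept keys to value vectors — a drastic simplification of how real transformers store facts, offered only to illustrate the principle of surgically changing one association while leaving others intact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "belief store": a linear layer W mapping concept keys to value vectors,
# a simplified stand-in for the layers that model-editing research targets.
d = 8
W = rng.normal(size=(d, d))

def rank_one_edit(W, key, new_value):
    """Overwrite what W returns for `key` with `new_value`, disturbing W as
    little as possible: a rank-one update in the direction of `key`."""
    key = key / np.linalg.norm(key)
    residual = new_value - W @ key       # what the layer currently gets wrong
    return W + np.outer(residual, key)   # after this, W @ key == new_value

k_edit = np.eye(d)[0]        # key for the false belief we want to correct
k_other = np.eye(d)[1]       # an orthogonal key: a different, true belief
v_new = rng.normal(size=d)   # the corrected value

W2 = rank_one_edit(W, k_edit, v_new)
```

After the edit, the layer returns the corrected value for the edited key, while its output for the orthogonal key is unchanged — the toy version of fixing one false belief without damaging the true ones stored alongside it.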
Whatever the approach, the even thornier challenge will be to work out who decides what is biased, inaccurate, or illogical and what is not — which is less a technological challenge than a cultural and social challenge.
Generative AI Needs Human-Centered Design
For generative AI to fulfill its potential, rather than sinking into irrelevance because of stagnation in the pools of content it recombines and remixes, the creativity of human designers is more important than ever. And to work through the complex process of deciding which parts of a corpus or a model are biased, inaccurate, or illogical, the human-centered design process will be essential: It emphasizes understanding the genuine needs and lived experiences of the people who use a system, and it iterates frequently to converge on higher quality.
That’s why I hope good designers step up to the challenge to help steer generative AI in the right direction. It doesn’t matter whether the word “design” appears in their job titles. What matters is that they understand and master the design mindset and skill set.
If your company has expertise to share on this topic, feel free to submit a briefing request. And if you’re a Forrester client and you would like to ask me a question about the topic, you can set up a conversation with me. You can also follow or connect with me on LinkedIn if you’d like.