Storya’s Manifesto on AI and Creativity (Part 2)
Is AI capable of creativity? What does this mean for artists? How can we leverage what is happening at the intersection of Art and tech to create opportunities for diversity, equity and inclusion?
We published the first part of Storya’s Manifesto on AI and Creativity here. In it we covered:
Why call it a Manifesto?
The three big Questions
Creators vs Big Tech
What “art” can this technology “create”, right now?
Feel free to read that before diving into this second part, where we lay out our views on what is for us the first key question in the difficult conversation around art and AI technology.
Let’s dive in.
Question 1: Is generative AI capable of creativity?
This is very controversial because on the one hand, AI models can produce, or help produce, “artifacts”, as discussed in Part 1: from digital paintings to songs, from music videos to novels, from contemporary art installations to whatever this hilariousness is. On the other hand, there is fierce pushback against the very idea of AI creativity.
We approach this question as follows: if AI is capable of true creativity, there will be challenges but also positive outcomes, as with every creative innovation before it, from the printing press to the camera. New categories of art will emerge that did not exist before. If the answer is no, artists and creators will discard this tool after an initial period of experimentation, as happened with shadow puppetry, Vaudeville, ferrotypes, and other art forms that have become obscure or been entirely forgotten.
We believe AI is indeed creative because it can formulate, experiment with, and iterate on complex creative ideas (for now, with a human “copilot”).
Let us break down the reasoning. Are we taking this stance because these forms of ideation, experimentation and iteration are identical to the human ones?
No, as AIs do not “experience” the world as we do. These systems generate “meaning” through mathematical methods that even the engineers who built them do not fully understand. Until recently, the way AIs form associations remained a black box. But that is changing.
To quote from a May 2024 Anthropic announcement on the latest research in this area:
We successfully extracted millions of features from the middle layer of Claude 3.0 Sonnet, […], providing a rough conceptual map of its internal states halfway through its computation. This is the first ever detailed look inside a modern, production-grade large language model. […] The features we found in Sonnet have a depth, breadth, and abstraction reflecting Sonnet’s advanced capabilities.
Research like the above suggests that AI creativity is driven by patterns learned during training and by the manipulation of neuron activations. Put differently, AI can mimic creative processes by generating novel and contextually relevant outputs, but it lacks the conscious, emotional, and experiential aspects of human creativity. AI represents a form of synthetic creativity that is distinct from human creativity.
In the Anthropic case, the model is shown to have developed its own complex conceptual frameworks that map to human notions, which it can flexibly draw upon and combine to create novel responses. This suggests emergent creativity, where the model finds new ways to combine its learned building blocks that go beyond its training data. The abstract nature of some of these features, and their relational structure, at the very least seem to hint at the foundations of analogical and metaphorical thinking.
Do these findings definitively resolve the deeper questions around machine creativity?
Clearly not. At a minimum, though, they suggest the foundations are in place for a form of artificial creativity, even if it differs from human creativity in important ways. Further research is needed to probe the extent and nature of creative cognition in AI systems as they continue to scale up in size and capability.
AI could become the first form of synthetic intelligence we come across as a species, tucked away as we are in a corner of the Milky Way. It just so happens we were the ones to build it, partly in our own image, to use a Biblical reference, through the use of the “neural network” architecture.
Confronting the Biases
Here is a deeper issue for us humans: the impulse to disregard this synthetic intelligence as inferior, or as “fake”, is, at least in some ways, an echo of the in-group-out-group lizard brain instincts that, throughout human history, have led to slavery, colonialism, racism, religious wars, nationalism, industrial animal farming, and more.
We have not even overcome those challenges in relating to our fellow humans. The daily news offers plenty of proof of that. Worse, the very way in which AI was built reflects existing biases and inequality. There are many studies on this, but we will just mention the library of the Algorithmic Justice League and the writings of Dr. Joy Buolamwini, author of Unmasking AI, for anyone who wants to dig deeper into how companies and governments perpetuate racism and stereotypes through carelessly built AI software.
But does the fact that we are still failing in this regard toward fellow humans, and that those failings are leaving footprints over our technology, give us reason to treat the first steps of artificial intelligence with disregard or contempt? We argue we should not. Who knows, by learning how to relate to the first limited form of synthetic intelligence, one that we have built, we may even learn how to better relate to each other and to the rest of nature’s earthly species in the process.
This is not just some hypothetical “alien civilization contact”-type of scenario. Indeed, the conversation around how we build and interact with AI has deep connections to issues of diversity and representation in the real world.
Fixing the Data
Embracing AI in the arts necessitates confronting the bias inherent in the data these systems are trained on. AI tools reflect existing inequalities and perspectives. If creators take a Luddite approach to AI, industry and capitalism will go on building it anyway, and AI will become another tool for replicating structures of oppression and exclusion of many groups.
Even if creators do take a proactive role, we see the risk of it remaining confined to privileged artistic communities in developed economies. There are arguments for approaching AI as the new electricity of our century, a foundational technology whose access must be made affordable. Governments, often crowded with octogenarians who do not even own a computer, are slow to act; creator communities and organizations must therefore push for AI education programs that empower artists from all backgrounds to experiment and create.
Of course, with great power comes great responsibility. We must be mindful not to use AI to cheaply mimic or appropriate styles rooted in specific cultures, especially those historically exploited by the art world. Permission and collaboration are key. If AI-assisted artwork profits from the cultural heritage of a community, that community deserves to be part of the conversation and share in the benefits. We are seeing little evidence of this happening at the moment, which makes it all the more pressing.
The technology itself can help build bridges between individuals and communities like never before, thanks to barely-explored capabilities like real-time translation and vision. Building bridges is a big part of the mission of Storya, and we truly believe this technology, however flawed its starting point, can evolve to become a tool for the whole of humankind to open new horizons. And it all begins with the foundational aspect of all art forms: storytelling.
By addressing these issues, we not only create more inclusive and equitable AI technologies but also begin to dismantle harmful practices. This approach aligns with the broader goal of learning to relate better to all forms of intelligence, human and synthetic alike. In doing so, we can foster a more inclusive and just world, where every voice has the opportunity to contribute to the rich tapestry of human and synthetic creativity.
Conclusion
Today we took one step further in thinking about how AI affects creative communities, trying to take a progressive approach that balances benefits and harms. We believe the key to achieving this is for creative organizations to support each other across borders, steering the conversation firmly toward making progress and reducing harm. The alternative is simply following the lead of whichever tech company happens to be taking cheques from venture capitalists or from existing Big Tech firms, who have every incentive either to preserve the status quo, which is highly exploitative of artists, or to introduce changes that benefit themselves ahead of anyone else.
In Part 3 of our Manifesto, we will address the next big question: is generative AI a threat to artists’ livelihoods?
As always, we are building this document as a living, evolving record of our own thinking. We look forward to your comments, thoughts, and criticisms as we move toward defining our “Commandments” for how AI and artists can co-exist.
Peace,
Paolo