Artificial Creativity: How AI Is Changing Cultural Jobs

November 5, 2021

By Jessa Gamble and Briana Brownell • Photos Courtesy of Celeste Lanuza and Xander Steenbrugge

There’s a lot of concern about how artificial intelligence (AI) and automation could soon replace people in jobs like truck-driving or proofreading. But in creative fields like art, music and dance, pioneers are experimenting with AI as a tool to expand the limits of creative possibility. Their work points to new ways to unite the arts, challenge patriarchal systems, and form healthy relationships with our new non-human collaborators. 

Celeste Lanuza has long been combining dance and social justice commentary, with a special focus on addressing racial inequities. Though she has always felt the importance of personalizing her dance to express her identity, culture and emotions, Lanuza has never been a fan of the ever-present mirror in the life of a dancer, whether it lines the walls of a traditional dance studio or reflects the dancer on a screen.

“We have the mirror in the dance studio as a tool, which many times is a crutch,” she says. “Students and professional dancers I work with just stare at it, and it becomes a source of judgemental comparison.”  

“I love working without the mirror. And especially during COVID, I’ve been exploring dancing outside in the natural air.”

Recently, Lanuza started exploring motion-capture technology as a way to push boundaries even further. Using cameras to track dancers’ real-life motion, the technology transfers those movements onto a digital avatar, allowing the dancers she choreographs to move with entirely different bodies. Through the use of AI, dancers can even create partners who are uniquely attuned to their movements, either onscreen or in the form of a robot.

“It liberated the dancers into feeling that freedom of expression, which is where dance comes from. So that was really exciting,” says Lanuza.

The Future of Dance

Coming from a Latinx family, Lanuza struggled in middle school with English as a second language, but dance was a great equalizer that allowed her to speak fluently. Children don’t need any equipment, like pianos or basketballs, to dance—they simply use the body they already have. Lanuza feels that, started early enough, dance can teach children essential skills: connection with themselves, relationships with each other, and connection to the histories of the places where they dance.

Younger children can start with structured improvisation, an alternative approach to the strict conventions of ballet. They can experiment with time, energy, rhythm, melody, emotion and space—all areas that are full of potential for productive play. Improvising stories and themes unleashes the imagination of children before they have started any critical evaluation of their own and each other’s dances. Then they can move on to formations and follow-the-leader exercises that allow them to practise leadership.

If they decide to incorporate movement into their creative lives later on, the technology will be there for them to bridge enormous gaps of space and time: they will be able to dance in other bodies, perform a duet with their great-granddaughter, or even—as we will soon see—create music as they move. One way this can happen is through AI-enabled “pose estimation,” which allows a computer to perceive the position and movement of a body without the spatial reference points required by previous generations of motion-capture technology.
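To give a feel for what a pose-estimation system hands to downstream software, here is a deliberately tiny sketch. The keypoints are made-up values, not output from any real model (a system such as MediaPipe or OpenPose would supply them per video frame); the point is only that once a body is reduced to coordinates, qualities like a joint angle become control signals.

```python
import numpy as np

# Hypothetical 2-D keypoints (x, y) for a single video frame, in the
# form a pose-estimation model would emit them: here, the shoulder,
# elbow and wrist of one arm (made-up values, not real model output).
shoulder = np.array([0.50, 0.30])
elbow = np.array([0.62, 0.45])
wrist = np.array([0.55, 0.60])

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by the segments b-a and b-c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# A motion-transfer or music-mapping layer could treat this angle as a
# control signal -- e.g. driving an avatar's arm or a synthesizer filter.
angle = joint_angle(shoulder, elbow, wrist)
print(f"elbow angle: {angle:.1f} degrees")
```

The same few lines of geometry work whether the coordinates come from a camera, a motion-capture suit or a recorded performance decades old.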

Blending the Senses through AI Art

Xander Steenbrugge lives in Ghent, Belgium, and works remotely for a US company, applying machine learning to problems in biology. He’s been watching the development of technology for generative adversarial networks, or GANs. 

Introduced by the researcher Ian Goodfellow and his colleagues in 2014, a GAN pairs two networks. A discriminative network is shown a collection of representative images—for instance, the works of Dutch masters such as Vermeer, Bruegel the Elder and Rembrandt—to “train” it to look at a fresh image and determine its authenticity. A complementary generative network creates images and tries to fool the discriminating network into accepting them as Dutch masterpieces. Essentially, this looks like two duelling systems playing a game of visual Balderdash.

Around 2019, GANs gained the ability to create high-resolution images. They have also been used to create realistic imagery for computer games and fake videos (for example, digitally recreating the late actor Carrie Fisher for the most recent Star Wars film).

“That’s when I started paying attention and started playing around with it,” says Steenbrugge. “I’ve always been driven by looking for new ways of finding beautiful aesthetics using technology.”

Under the name Neural Synesthesia, Steenbrugge used the networks to create videos that blend one image into another and allow music to determine the flow of images. “This project is an attempt to explore new approaches to audiovisual experience based on artificial intelligence,” says Steenbrugge. It became a new form of music visualization that, in a way, collaborates with the brain, which is so busy integrating information from all five senses that it readily sees ways they might correspond and blends them into a kind of synesthesia.

As visually spectacular as these results can be (as seen on Steenbrugge’s Vimeo page), one thing that’s usually missing from AI-generated visuals, music and even stories, is narrative structure. Networks are very good at generating patterns—say, creating a song with the general feel of a Joni Mitchell composition—but they often feel like a recurring snapshot of a style; they don’t begin with an introduction, include riffs and motifs that banter throughout, or build to a climax. Theoretically, an AI that developed those capabilities could tip the balance into producing works that genuinely compete with human-made songs for our affections.

Those works might not even have to be original to have artistic value. A perfect imitation of a master’s style could create art that moves us just as much as the works in that master’s œuvre. 

Andy Warhol was one of the first to anticipate this development when he said, “I think somebody should be able to do all my paintings for me. I think it would be so great if more people took up silk screens so that no-one would know whether my picture was mine or somebody else’s.”

Leading AI historian Pamela McCorduck agrees: “Do we really have all the late Verdi and Shakespeare that we want? Of course not: We have only what accidents of history permitted us to have,” she writes in her book Aaron’s Code: Meta-Art, Artificial Intelligence and the Work of Harold Cohen.

As far back as human memory stretches, we have worked side by side with animals. Whether it’s a shepherd whistling complex directives to her sheepdog or a cowboy at one with his horse, there’s a human competency that allows us to empathize with minds and hearts vastly different from our own and align ourselves toward a common goal. 

In the future, creative professionals who rely on AI to co-create their art might experience something similar. A lot of the human contribution in AI art comes from the dataset the artist chooses to feed the AI collaborator; that training set defines a large part of the output. The artist can also set the initial parameters of what they want to see, and often has an intention or direction in mind and a rough idea of what the computer might come back with.

But once the network starts churning out options, the creative process becomes much more of a dance, a kind of two-way banter between human and machine. The human tweaks the parameters to set a slightly new direction, and the machine provides serendipity that sparks both delight and some new ideas in the human mind.

“Very often I’ve been surprised by what it generates,” says Steenbrugge. “The model comes back with feedback that is somewhat unpredicted, and then this cycle is really interesting because it sometimes leads you to places you would never have gotten on your own. It’s really a dialogue between man and machine.”

The Future of Multimedia Art

When Steenbrugge realized that some trained professional artists had no access to the tools he was using, he decided to address that by building a platform, WZRD, where anyone with some music, some images and an idea can make their own trippy videoscape.

He explains the three-step process on the WZRD website:

  1. “Audio analysis – The audio analysis pipeline can detect percussion and harmonic elements, which are used to drive different parts of the video.
  2. “Visual engine – WZRD’s magic works by using audio elements to drive a machine learning technique called GAN. This results in an entirely new kind of visual experience.
  3. “Video render – Finally, a full video render is performed and combined with your audio.”

Source: WZRD
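The three steps above can be sketched in miniature. This is not WZRD’s actual code: a synthetic pulsing tone stands in for a decoded audio file, and random vectors stand in for a trained generator’s latent space. It shows only the core idea of audio-reactive visuals: loudness in the track controls how fast the imagery morphs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins: a synthetic pulsing tone instead of a decoded audio file,
# and random vectors instead of a trained GAN's latent space.
sr = 22050                              # audio sample rate (Hz)
t = np.arange(sr * 4) / sr              # four seconds of "audio"
pulse = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * 2 * t))   # on/off at 2 Hz
audio = np.sin(2 * np.pi * 220 * t) * pulse

fps, latent_dim = 24, 512
hop = sr // fps                         # audio samples per video frame

# Step 1 -- audio analysis: RMS energy per frame, a crude stand-in for
# WZRD's percussion/harmonic detection.
n_frames = len(audio) // hop
energy = np.array([np.sqrt(np.mean(audio[i * hop:(i + 1) * hop] ** 2))
                   for i in range(n_frames)])
energy /= energy.max()

# Step 2 -- visual engine: energy sets how fast we travel between two
# anchor points in latent space; loud frames move quickly (big visual
# change), quiet frames linger.
z_a, z_b = rng.normal(size=latent_dim), rng.normal(size=latent_dim)
position = np.clip(np.cumsum(energy) / energy.sum(), 0.0, 1.0)
latents = np.array([(1 - p) * z_a + p * z_b for p in position])

# Step 3 -- video render: each latent row would be decoded by the GAN
# into one frame, then the frames are muxed with the original audio.
print(latents.shape)                    # one latent vector per frame
```

Swapping the heartbeat of a listener, or the joints of a dancer, in place of the RMS energy is what makes the approach so portable across art forms.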

Above all, Steenbrugge was curious to see what other creative people, without the preconceptions of those in his own field, might do with these types of models. It’s the way he expects things to be for the next generation. His prediction is that within the next five years, the tools he uses will be available in a simple, non-intimidating form, accessible to any creative person interested in engaging with digital tech.

“We’ve seen this happen in technology a lot of times. When something is new, it’s usually kind of techie and you have to be a little bit nerdy and specifically trained to use these things, but as time progresses, layers and layers of abstraction are put on top of it and then it becomes usable to a larger audience,” he says.

Steenbrugge has been exploring and working with other forms of digital tech tools such as virtual reality headsets to turn his strange and compelling multimedia pieces into immersive spaces. 

When the Body Is a Keyboard

Celeste Lanuza’s work in the world of dance and Xander Steenbrugge’s in visual art and music might occupy separate silos in arts and culture, but as far as computers are concerned, they speak translatable languages. For that matter, anything that changes over time can be an equivalent source of information, as long as it’s measurable. Steenbrugge’s art can be informed and shaped by the beat of an instrumental percussion track, but could just as easily be driven by the heartbeat of the listener.

What’s exciting for Lanuza is the fact that her dancers could one day be influencing the music they dance to in real time, composing sound with their movements—and being moved, in turn, by that music. 

“I even teach that way now—I find myself using the same vocabulary, saying our body is an orchestra. We need to think of the different parts of our body being different instruments, and thinking of ourselves as music rather than a separate entity,” says Lanuza.

The possibilities straddle discipline and form: a disabled musician who uses a wheelchair might choreograph a virtual dancer through the medium she knows best, or two dancers’ pas de deux could generate a written love story in a particular writer’s literary style. 

Artificial Intelligence Itself Is a Creative Artform

The idea of artificial intelligence does not belong to computers and algorithms. Its roots run much deeper, to the ancient human wish to forge the gods. The arts have expressed these ideas through myths that go back millennia—and for just as long there have been conflicting feelings about the prospect of an independent intelligence designed by human minds.

“This has been a human impulse for thousands of years, to create something outside the human cranium that has intelligence,” says Pamela McCorduck on Lex Fridman’s podcast (Fridman is an AI researcher working on autonomous vehicles, human-robot interaction, and machine learning at MIT). “Homer has robots in the Iliad, and the Odyssey is full of robots. How do you think Odysseus gets his ship from one place to another?… So we’ve had this notion of AI for a long time.”

The Ancient Greeks had a positive take on it, largely celebrating the robots that served as helpers in Hephaestus’s forge. The Ancient Hebrews looked to the Second Commandment, which forbids making graven images, interpreted to include any imitation of humans, including the non-visual. Much like other transgressive art forms, AI faces accusations of blasphemy. Stories of lifelike automatons in the Zhou dynasty were also interpreted with negative connotations, but instead of blasphemy they highlighted trickery. A story in the Book of Liezi tells of a lifelike automaton that, though initially met with wonder by the court, drew the wrath of King Mu when it flirted with his royal concubines. Our literary conventions bear this out in their habit of pitting human heroes against villainous machines. Think of that most famous of artificially intelligent characters, Frankenstein’s monster.

Two key innovators, Allen Newell and Herbert Simon, broadly accepted as the “godfathers” of the artificial intelligence field, were not exclusively—nor even primarily—focused on its technological aspects. They were cognitive psychologists, and Simon was a serious painter who felt forced to give up his art because it was taking too much time away from his AI research.

For McCorduck, best known for her seminal 1979 text on AI, Machines Who Think, the process of creating human art is a lot like what a generative system does. Art begins with the impulse to make something special, taking something that could be minimal and functional, but creating something that transcends that purpose. It’s an urge she feels is as essential to life as protein.

“Art-making behaviour is universal among humans. It amazes me to consider that there is no such thing as a no-frills human culture, with clothing only to cover, and food only to eat and housing only to shelter. We decorate and design and present artifacts to one another constantly, every last one of us,” she writes.

Most of that expression takes place not in the elitist European realm of High Art, but as a daily practice by non-specialists: storytelling, music-making and adorning oneself.

To Lanuza, that daily artistic practice is a lesson in equity, which is at the heart of the question of technology in the arts. When AI can let people transcend their language differences and even translate the vocabulary of one art form to another, something special happens. Even the idea of “good” and “bad” dancing comes into question as computer algorithms make up their own strange versions of dance.

Maybe that’s why most of the commercial robots released today that are designed to interact with humans come with a dance mode. Whether these robots dance for you or with you, they touch a place in your heart reserved for those who allow themselves to be vulnerable by making and sharing art. The robot knows that if you and she are to live together, that’s how you first connect. 

This article originally appeared in the fourth issue of Root & STEM, Pinnguaq’s free print and online STEAM resource supporting educators in teaching digital skills.

Briana Brownell

About the author

Briana Brownell is a Canadian data scientist and technology artist. Her technology-enhanced creative projects span multiple areas, including AI-assisted Shakespearean sonnets, and AI-enhanced, assisted and generated visual art pieces.

Jessa Gamble

About the author

Jessa Gamble is an internationally award-winning science and technology journalist and Penguin Random House author. Her writing has appeared in The Atlantic, Nature, New Scientist and The Guardian.