Carnegie Mellon’s robotic painter is a step toward AI that can learn art techniques by watching people

Can a robot painter learn from observing a human artist’s brushstrokes? That’s the question Carnegie Mellon University researchers set out to answer in a study recently published on the preprint server arXiv.org.

They report that % of people found that the proposed approach successfully captured characteristics of an original artist’s style, including hand-brush motions, and that only % of the same group could discern which brushstrokes were drawn by the robot.

AI art generation has been exhaustively explored. An annual international competition — RobotArt — tasks contestants with designing artistically inclined AI systems.

Researchers at the University of Maryland and Adobe Research describe an algorithm called LPaintB that can reproduce hand-painted canvases in the style of Leonardo da Vinci, Vincent van Gogh, and Johannes Vermeer.

Nvidia’s GauGAN enables an artist to lay out a primitive sketch that’s instantly transformed into a photorealistic landscape via a generative adversarial AI system.

And artists including Cynthia Hua have tapped Google’s DeepDream to generate surrealist artwork.

But the Carnegie Mellon researchers sought to develop a “style learner” model by focusing on the techniques of brushstrokes as “intrinsic elements” of artistic styles. “Our primary contribution is to develop a method to generate brushstrokes that mimic an artist’s style,” they wrote. “These brushstrokes can be combined with a stroke-based renderer to form a stylizing method for robotic painting processes.”

The team’s system comprises a robotic arm, a renderer that converts images into strokes, and a generative model to synthesize the brushstrokes based on inputs from an artist.

The arm holds a brush, dips it into buckets of paint, and puts it to the canvas, cleaning off excess paint between strokes.
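As a rough illustration only, the sketch below shows how those three components might fit together in code. All class and function names are hypothetical stand-ins, not the authors’ implementation, and the stroke parameterization is an assumption.

```python
from dataclasses import dataclass
from typing import List

# All names below are illustrative stand-ins, not the authors' code.

@dataclass
class Stroke:
    """A hypothetical stroke parameterization: endpoints plus width (canvas units)."""
    x0: float
    y0: float
    x1: float
    y1: float
    width: float

class StrokeRenderer:
    """Stand-in for the learned renderer that decomposes an image into strokes."""
    def plan_strokes(self, target_image) -> List[Stroke]:
        # A learned policy in the real system; here, a single placeholder stroke.
        return [Stroke(x0=0.1, y0=0.1, x1=0.4, y1=0.3, width=0.02)]

class StyleGenerator:
    """Stand-in for the generative model that restyles strokes after an artist."""
    def stylize(self, stroke: Stroke) -> Stroke:
        # The real model would reshape the stroke toward the artist's learned style.
        return stroke

class PaintingArm:
    """Stand-in for the robot arm driver."""
    def dip_and_clean(self) -> None:
        print("dipping brush, wiping off excess paint")

    def execute(self, stroke: Stroke) -> None:
        print(f"stroke ({stroke.x0:.2f}, {stroke.y0:.2f}) -> ({stroke.x1:.2f}, {stroke.y1:.2f})")

def paint(target_image, renderer: StrokeRenderer,
          stylizer: StyleGenerator, arm: PaintingArm) -> None:
    """Decompose the target into strokes, restyle each one, and send it to the arm."""
    for stroke in renderer.plan_strokes(target_image):
        arm.dip_and_clean()
        arm.execute(stylizer.stylize(stroke))

if __name__ == "__main__":
    paint(target_image=None, renderer=StrokeRenderer(),
          stylizer=StyleGenerator(), arm=PaintingArm())
```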

The renderer is trained with reinforcement learning to generate a set of strokes based on the canvas and a given image, while the generative model identifies the patterns of an artist’s brushstrokes and creates new ones accordingly.
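For intuition, here is a minimal sketch of the kind of per-stroke reward such a reinforcement-learning renderer could use: how much a stroke reduces the pixel-wise error between the canvas and the target image. This is a generic stroke-based-rendering formulation; the paper’s exact state, action, and reward definitions may differ.

```python
import numpy as np

def stroke_reward(canvas_before: np.ndarray, canvas_after: np.ndarray,
                  target: np.ndarray) -> float:
    """Reward for a single stroke: how much it reduced the pixel-wise error to the target.

    A generic stroke-based-rendering reward; the CMU paper's exact state, action,
    and reward definitions may differ.
    """
    error_before = float(np.mean((canvas_before - target) ** 2))
    error_after = float(np.mean((canvas_after - target) ** 2))
    return error_before - error_after  # positive if the stroke moved the canvas closer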

To train the renderer and generative models, the researchers designed and 3D-printed a brush fixture equipped with reflective markers that could be tracked by a motion capture system.

An artist used it to create strokes of different lengths, thicknesses, and forms on paper, which were indexed in grid-like sheets and paired with motion capture data.
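A hedged sketch of how such stroke-to-motion pairs might be stored for training follows; the field names and array shapes are illustrative assumptions, not the paper’s actual dataset format.

```python
from dataclasses import dataclass
from typing import List, Sequence
import numpy as np

@dataclass
class BrushstrokeSample:
    """One training example pairing a scanned stroke with its captured brush motion.

    Field names and shapes are illustrative, not the paper's actual dataset format.
    """
    stroke_image: np.ndarray      # one cell cropped from the gridded sheet, e.g. (H, W) grayscale
    brush_trajectory: np.ndarray  # (T, 6) motion-capture poses: x, y, z, roll, pitch, yaw

def build_dataset(stroke_images: Sequence[np.ndarray],
                  trajectories: Sequence[np.ndarray]) -> List[BrushstrokeSample]:
    """Pair each scanned stroke cell with its synchronized motion-capture trajectory."""
    if len(stroke_images) != len(trajectories):
        raise ValueError("each stroke image needs exactly one recorded trajectory")
    return [BrushstrokeSample(img, traj) for img, traj in zip(stroke_images, trajectories)]
```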

In an experiment, the researchers had their robot paint an image of the fictional reporter Misun Lean.

They then asked respondents unaware of the images’ authorship, recruited from Amazon Mechanical Turk and from students at three universities, to determine whether a robot or a human had painted each image. According to the results, more than half of the participants couldn’t distinguish the robotic painting from an abstract painting by a human.

In the next stage of their research, the team plans to improve the generative model by developing a stylizer model that directly generates brushstrokes in the style of artists.

They also plan to design a pipeline to paint stylized brushstrokes using the robot and enrich the learning dataset with the new samples.

“We aim to investigate a potential ‘artist’s input vanishing’ phenomena,” the coauthors wrote. “If we keep feeding the system with generated motions without mixing them with the original human-generated motions, there would be a point that the human-style would vanish on behalf of a new generated-style.

“In a cascade of surrogacies, the influence of human agents vanishes gradually, and the affordances of machines may play a more influential role.

“Under this condition, we are interested in investigating to what extent the human agent’s authorship remains in the process.”