Faces is a machine-learning-driven art project, inspired by a hallucination, that produces images using algorithms similar to those our cameras use to detect faces.
When I stumbled on this drawing of a hallucination, I was struck by how it distilled the human face into light and dark. My next thought was, “I bet a camera’s face-finding algorithm would go crazy on this.” From there, I wondered whether I could create similar images using such algorithms.
Cameras use computationally cheap (fast) face detection. Considering small sub-regions of the image at a time, they ask questions like, “Is the mean luminance in A greater than that in B?” A and B, and many other pairs of rectangles like them, have been chosen by engineers such that the answer will generally be yes when a face is present in the image.
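A minimal sketch of one such rectangle-pair test, in Python rather than the camera's actual firmware. The function names and the `(top, left, height, width)` rectangle encoding are my own illustration, assuming a grayscale image stored as a 2-D NumPy array:

```python
import numpy as np

def mean_luminance(img, rect):
    """Mean pixel value inside rect = (top, left, height, width)."""
    top, left, h, w = rect
    return img[top:top + h, left:left + w].mean()

def rect_pair_test(img, rect_a, rect_b):
    """True when region A is brighter, on average, than region B."""
    return mean_luminance(img, rect_a) > mean_luminance(img, rect_b)

# Toy example: a bright stripe above a dark stripe.
img = np.zeros((10, 10))
img[:5, :] = 1.0
print(rect_pair_test(img, (0, 0, 5, 10), (5, 0, 5, 10)))  # True
```

A real detector asks hundreds of such questions per candidate window, which is what keeps it cheap: each one is just a handful of additions and a comparison.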
The first task was to find a set of rectangle pairs, like A and B, that generally predicted the presence of a face. I preprocessed several images of friends and generated random rectangle pairs in Matlab. If the mean luminance inside the first was greater than that inside the second for most of my pictures, I kept that pair, along with a measure of its performance.
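The selection step described above might look like the following sketch, translated into Python/NumPy since I don't have the original Matlab. The threshold `keep_above`, the candidate count, and all function names are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rect(shape, rng):
    """A random (top, left, height, width) rectangle inside an image."""
    h = int(rng.integers(1, shape[0] // 2 + 1))
    w = int(rng.integers(1, shape[1] // 2 + 1))
    top = int(rng.integers(0, shape[0] - h + 1))
    left = int(rng.integers(0, shape[1] - w + 1))
    return (top, left, h, w)

def mean_luminance(img, rect):
    top, left, h, w = rect
    return img[top:top + h, left:left + w].mean()

def train_pairs(faces, n_candidates=2000, keep_above=0.8, rng=rng):
    """Keep rectangle pairs whose 'A brighter than B' test holds on at
    least `keep_above` of the face images; store that fraction as the
    pair's performance score."""
    shape = faces[0].shape
    kept = []
    for _ in range(n_candidates):
        a, b = random_rect(shape, rng), random_rect(shape, rng)
        score = np.mean([mean_luminance(f, a) > mean_luminance(f, b)
                         for f in faces])
        if score >= keep_above:
            kept.append((a, b, float(score)))
    return kept
```

With only a handful of training images, many accidental pairs survive, which is part of what gives the generated faces their strange looseness.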
To create the new images, I sampled from this set of successful rectangle pairs. I started with a new, black image: where a rectangle pair preferred light, I lightened the image; where it preferred dark, I darkened.
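The synthesis step can be sketched like so, again as an assumed reconstruction rather than the project's actual code. It takes pairs in the `(rect_a, rect_b, score)` form from the training sketch; the `step` size is an invented parameter:

```python
import numpy as np

def synthesize(pairs, shape, step=0.05):
    """Start from a black image; for each rectangle pair, lighten the
    region the pair prefers light (A) and darken the region it prefers
    dark (B), then clip to the valid luminance range."""
    img = np.zeros(shape)
    for (top_a, left_a, h_a, w_a), (top_b, left_b, h_b, w_b), _score in pairs:
        img[top_a:top_a + h_a, left_a:left_a + w_a] += step
        img[top_b:top_b + h_b, left_b:left_b + w_b] -= step
    return np.clip(img, 0.0, 1.0)
```

Where many pairs agree, the lightening and darkening reinforce each other and face-like structure emerges; where they disagree, the contributions wash out toward gray.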
Some of the results were very “facey”, some were not:
I painted two of these faces onto canvas with the help of a projector, but the photos I have of this are horrible.
Programmers designing computer-perception systems try to reduce entities to their most defining characteristics, which helps their programs identify those entities efficiently. The same principles make for good brain design, and so we sometimes find that our most efficient machine-learning algorithms have some kind of implementation in the brain.
This project explores some overlap between defining characteristics and minimal representations in the biological and mechanical domains.