I feel like the best nights are the ones when you're working on a project and you don't realize how late it's gotten. You've been adding and improving, bit by bit, and the end is in sight.
People find these moments of flow through jogging or meditation or whatever. But for me, there is something special about convincing a computer to do what I want, especially when that means making images dance on a screen.
The spark was lit for me by seeing incredible work on Twitter by people like Zach Lieberman and Inconvergent. They both freely offered information online about how they had convinced their computers to make such beautiful work. They shared enough to make me think that maybe, just maybe, this is something I could do.
I've been visualizing data for years. I've created bar charts and pie charts and Sankey diagrams and network graphs and all sorts of visualizations on any topic that could be summed up in data.
But this new project, generative art, was freeing. I wasn't visualizing data; I was visualizing randomness. I was sifting through the white noise on a television set and using that as my data.
The art becomes deciding what parameters to give up to the computer and to randomness. I would build things that I felt looked nice and then slowly change the code and say "okay, now you decide how to do that" or "now you decide where we start that line". A lot of the time is spent deciding just how much control to give over.
You can take a canvas and give randomness full control. You can say "computer, draw 100 points, and put those points wherever you want (as long as they're on the canvas)."
But what you end up with is a lot like that white noise that comes on your TV when you're tuned into the electromagnetic hum of the galaxy instead of the local news. The real art is in deciding how much control to give over to the computer, and in what places. In a lot of ways this felt like a form of education.
As I wrote new lines of code I felt myself teaching my robot the unspoken rules of art the way you'd teach a child. "These colors work well together", "try to make it so your lines don't overlap", "if you reach the edge of the canvas, go in a different direction", and so on.
Sometimes I feel like I understand God better when I work with my art robot. I often wonder, if there were a creator, why he would allow such catastrophic imperfection in the world.
But you realize that the best things in life are the things that you do not expect. Things that happen when the right combination of factors occur at just the right second and something you never anticipated happens.
But to get to those gold-plated moments of synchronicity you have to sort through a lot of things that aren't good. It requires wading through all those failures to find a little piece of magic. Or maybe all those failures are what make finding it feel so good.
I've been trying to create scripts whose results look very diverse, even though roughly the same logic creates them all.
But I think I am running into the limits of how varied those things can truly be based simply on random numbers.
When I am beginning to make a new script I will start by creating something that I think looks cool without any randomness. Then I begin to pick out parameters that I can give over to randomness.
Things like how many lines to make, how quickly to move them, which direction they should move in, things like that.
This means when the scripts are run you never see the same thing twice. But truly 'creative' outputs are very rare. Most of the outputs will look the same, but every once in a while there will be one that is exceptional.
I think this is part of the reason that following a randomized art bot on Twitter is so enjoyable. It's like a slot machine with a lever being pulled again and again until you get a result you like.
I think the process of running and re-running the art bot scratches a similar itch in my brain.
I create new scripts every once in a while after work or on the weekend. I'll add the ones I like to the list of scripts that the artbot runs every hour.
I check his feed every day and go back through the work he's been making and retweet or fave any that I think are going in a good direction.
But I wish that I could do things like say to him "more of this" and "more of that" and actually affect the process. I want to start teaching him which random numbers work and which don't.
Luckily for me this bumps right into a technology currently in vogue: genetic algorithms. Like nearly everything in my life, I barely understand it, but people smarter than me have explained enough that I feel comfortable enough to operate the levers. I will now attempt to explain this thing I do not understand.
One of the earliest examples I ever encountered was Conway's Game of Life. As a wannabe teenage-hacker I went to a hacker conference in New York and saw someone with 5 dots on their wrist in a formation I would later learn was referred to as a 'Glider', a formation that would continue to move and live forever.
It was also adopted as the hacker emblem by Eric S. Raymond. I'd later tattoo that symbol on my left wrist as a reminder to stick to my hacker roots. It's worked pretty well.
The Game of Life is made up of a grid of cells. The cells follow 4 simple rules:
Any live cell with fewer than two live neighbours dies, as if by underpopulation.
Any live cell with two or three live neighbours lives on to the next generation.
Any live cell with more than three live neighbours dies, as if by overpopulation.
Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
From these 4 simple rules, you can make incredibly complicated patterns and even things like clocks.
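The four rules can be sketched in a few lines of JavaScript. This is a toy version on a small wrapping grid (the function and variable names are my own); the rules collapse to: a cell is alive next generation if it has exactly three live neighbours, or two live neighbours and is already alive.

```javascript
// One generation of Conway's Game of Life on a small wrapping grid.
// Cells are 1 (alive) or 0 (dead).
function step(grid) {
  const rows = grid.length;
  const cols = grid[0].length;
  return grid.map((row, r) =>
    row.map((cell, c) => {
      let neighbours = 0;
      for (let dr = -1; dr <= 1; dr++) {
        for (let dc = -1; dc <= 1; dc++) {
          if (dr === 0 && dc === 0) continue; // don't count yourself
          neighbours += grid[(r + dr + rows) % rows][(c + dc + cols) % cols];
        }
      }
      return neighbours === 3 || (neighbours === 2 && cell === 1) ? 1 : 0;
    })
  );
}

// The five-dot glider, far from the grid's edges:
let grid = [
  [0, 1, 0, 0, 0],
  [0, 0, 1, 0, 0],
  [1, 1, 1, 0, 0],
  [0, 0, 0, 0, 0],
  [0, 0, 0, 0, 0],
];
grid = step(grid); // the glider shuffles one step on its endless walk
```

Run `step` in a loop and the five dots walk diagonally across the grid forever, which is exactly why they made a good tattoo.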
I want to use a parallel technique to teach my art robot using a genetic algorithm. But I want to make that algorithm something more like: if people like a piece of art, make more art like it; if they don't, make less.
Which sounds simple enough for an idiot like me to try and tackle.
Some of you may be seeing some problems with this plan already. Namely, how do you teach a robot what art is "good"? I feel like this is a difficult thing to teach humans, let alone robots.
Maybe instead of saying we want the robot to make art that is good, we can say we want it to make art that people like. That is something closer to a goal it can understand.
So I tell the art robot that every time he tweets some art, if someone retweets it or faves it, that means somebody probably likes it. If someone likes the art, then make more art like that. In this way he gains his own sort of personality, inspired by his small group of fans (20 followers, a portion of them other art robots).
This sounds simple, but how do I teach him to learn what his followers like?
Take one art script for example. It starts with a bunch of random points, and then connects them all together with a smoothed line. Then it randomly moves all of the points again and draws another line. It does this a couple of thousand times. Random numbers can affect a lot of different things in this relatively simple piece of art.
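As a data-only sketch of that script (the parameter names here are my own stand-ins, and the actual canvas drawing is left out): scatter some random points, then jitter them again and again, keeping one snapshot per iteration. Each snapshot would become one smoothed line on the canvas.

```javascript
// A sketch of the art script: random points, repeatedly nudged,
// one "line" of points collected per iteration.
function makeLines({ pointCount, iterations, jitter, width, height }) {
  let points = Array.from({ length: pointCount }, () => ({
    x: Math.random() * width,
    y: Math.random() * height,
  }));
  const lines = [];
  for (let i = 0; i < iterations; i++) {
    // move every point by a small random amount
    points = points.map((p) => ({
      x: p.x + (Math.random() - 0.5) * jitter,
      y: p.y + (Math.random() - 0.5) * jitter,
    }));
    lines.push(points);
  }
  return lines;
}

const lines = makeLines({ pointCount: 8, iterations: 2000, jitter: 3, width: 800, height: 600 });
```

Every number passed into `makeLines` is a knob that randomness could be allowed to turn.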
If we take those 5 numbers we can store them and think of them as the DNA of that piece of art. Small changes to those numbers can have big effects.
This is the genome of one 'strain' of art for this script. It represents one way of thinking about approaching making this piece of art.
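For illustration, a genome can be as plain as an array of numbers, one gene per random parameter. These five values and names are hypothetical stand-ins, not the script's real parameters.

```javascript
// One genome: points, iterations, jitter, smoothing, hue
const genome = [8, 2000, 3, 0.5, 120];

// A small mutation: scale one random gene by up to ±10%.
// Even a nudge to a single gene can have big effects on the output.
function mutate(parent) {
  const child = parent.slice(); // leave the parent untouched
  const i = Math.floor(Math.random() * child.length);
  child[i] *= 1 + (Math.random() - 0.5) * 0.2;
  return child;
}

const child = mutate(genome);
```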
The way that most genetic algorithms work is by taking some concept and representing it as some numbers. Then it somehow judges the "fitness" of that group of numbers. The more "fit" a genome is, the more likely it is to live on to the next generation.
The program generates thousands of genomes, determines how "fit" they are, and then combines them into a new genome. Two fit parents ideally create an even more fit child.
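That select-and-combine step can be sketched like this (a toy implementation of my own, not Genetic.js). Fitness here is any function that scores a genome; for the art robot it would count retweets and faves.

```javascript
// Tournament selection: the fitter of two random genomes wins.
function pickParent(population, fitness) {
  const a = population[Math.floor(Math.random() * population.length)];
  const b = population[Math.floor(Math.random() * population.length)];
  return fitness(a) >= fitness(b) ? a : b;
}

// Single-point crossover: a prefix of one parent's genes spliced
// onto the rest of the other's.
function crossover(mother, father) {
  const cut = 1 + Math.floor(Math.random() * (mother.length - 1));
  return mother.slice(0, cut).concat(father.slice(cut));
}

// One generation: every child is bred from two tournament winners.
function nextGeneration(population, fitness) {
  return population.map(() =>
    crossover(pickParent(population, fitness), pickParent(population, fitness))
  );
}

const population = [
  [1, 1, 1, 1, 1],
  [2, 2, 2, 2, 2],
  [3, 3, 3, 3, 3],
];
const sum = (genome) => genome.reduce((a, b) => a + b, 0);
const next = nextGeneration(population, sum);
```

Because fitter genomes win more tournaments, their genes gradually crowd out the rest of the population.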
To us, something is "fit" if people on Twitter like it.
At least that's how I'd like my art robot's mind to work.
He goes on and on for hundreds and hundreds of generations of creations until he has evolved to have his work featured in the MoMA. A parent can dream.
By the way, though I often refer to my art robot as "him" because it delights me, he obviously has no gender because I have not taught him about sex, genders, or even language yet.
Speaking of which, I suppose some malicious party on Twitter could work diligently to teach the art robot to only make works that, say, looked like penises.
But I guess we'll cross that bridge when we come to it. I think in the future we will have AI trainers like we have dog trainers today. You will call in the Cesar Milan of artificial intelligence when your robot develops particularly problematic behaviors.
You might be wondering "that sounds great, but how are you actually going to figure out how to build a mind for your art robot?" Which is a pretty good question. The answer, as always, is Google.
Another stroke of luck for me is that the programming language I learned when I was 14 to make animated navigation menus on my myspace-era website can now be used to simulate neural networks and genetic algorithms.
I was able to find a nice library called, fittingly, Genetic.js. Thanks again, smart people.
The smart person who made this has done all of the hard work of whatever genetic algorithms do and all I need to do is give it genomes and tell it how to judge fitness.
Unfortunately, most genetic algorithms are built to run hundreds or thousands of times a second, rapidly evolving and improving. They are using things like MATH to figure out how "fit" their genomes are. We are using HUMANS and as we know humans are slow, especially when we compare them to computers.
Our art robot runs once every hour. Every hour it should take a moment to sit back and think back to every piece of art it has ever made before, and think about the pieces that people liked.
Then it should take the ones people liked the most and make a new piece of art based on that.
This means that we need to give the art robot a memory.
To be continued...