The Way of the Algorithm
I just participated in an invigorating and intellectually challenging debate at the MoMA R&D Salon on “The Way of The Algorithm”. MoMA R&D is “Part internal think tank and part external connector and incubator,” according to its Director Paola Antonelli.
While we interact with algorithms all day long, knowingly/willingly or unknowingly/unwillingly, they are rarely the focus of public discourse, partly because of their invisible nature and partly because of the lack of a common understanding of what exactly algorithms are or do. The goal of the evening was, and of course I cannot speak for Mrs. Antonelli here, to enlighten the audience about the nature of algorithms, where and how they touch our lives, and also to have some critical conversations about their limitations and about the propagation of biases that their human creators might have inadvertently, or maybe deliberately, left behind.
Paola opened the evening by going back to the roots of algorithms (a book, On the Calculation with Hindu Numerals, written around 825 AD by al-Khwārizmī, a Persian mathematician, astronomer, and geographer), connected them to ‘the mother of all algorithms’ - the execution of our DNA that ultimately creates us - and also touched on some exciting modern art, including dresses printed on 3D printers that unfold to full size when worn. My fellow panelists included Adam Bly, Founder and CEO of Seed Scientific; Hugo Liu, Principal Research Scientist at eBay Research; and Heather Dewey-Hagborg, artist and “bio-hacker”.
What is an algorithm?
One of the simplest ways to understand algorithms is to think of them as recipes - like the one for grandma’s famous chocolate cake. A recipe is a set of instructions that, if executed correctly, should enable anybody - you, or by the same token one of the new cooking robots - to recreate your favorite cake.
And in the end you can eat the cake and hopefully it tastes the same as if your grandma had made it.
So far the analogy is quite adequate and useful: the order in which Google returns your search results is determined by a set of instructions that assigns a score to each candidate page; the order of your Facebook news feed is determined similarly; and even at Dstillery, the decisions about whom to show an ad to, and where, are based on a score that resulted from a set of instructions. Those instructions would not fit on this page, but they are sequences of instructions nevertheless.
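To make “a set of instructions that assigns a score” concrete, here is a minimal sketch of such a scoring recipe in Python. The features and weights are entirely made up for illustration and have nothing to do with how Google, Facebook, or Dstillery actually score things:

```python
# A toy "scoring recipe": give every candidate page a score, then sort by it.
# The feature names and weights are invented for illustration only.

def score(page):
    return (3.0 * page["keyword_matches"]
            + 2.0 * page["inbound_links"]
            - 1.0 * page["load_time_seconds"])

pages = [
    {"url": "a.example", "keyword_matches": 4, "inbound_links": 10, "load_time_seconds": 1.2},
    {"url": "b.example", "keyword_matches": 7, "inbound_links": 2, "load_time_seconds": 0.4},
    {"url": "c.example", "keyword_matches": 5, "inbound_links": 6, "load_time_seconds": 2.0},
]

for page in sorted(pages, key=score, reverse=True):
    print(page["url"], round(score(page), 1))
```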
However, once you start to ask in more detail where these instructions came from (obviously not from my grandma), the analogy starts to fall short. Modern algorithms were not written down by any human. Nobody invented ‘the recipe’. These instructions are created by other algorithms - meta-algorithms, so to speak - and their complexity exceeds not only what we could write ourselves but often what we can understand.
The rise of the meta-algorithm (or machine learning)
Let’s try to continue with the robot-and-cake analogy: we introduce a second set of instructions, the meta-algorithm. It tells the robot how to change the recipe depending on the feedback of the tasters. It could, for instance, suggest changing factors like the baking time, the ingredients, or the order of the ingredients; the robot then bakes a new cake from the new recipe and has the tasters try it again. Imagine this going on for a while ... after a few iterations, you can end up with a cake that has none of the original ingredients but has become what the tasters wanted. Which ultimately also means that the result depends very much on whom I hired for tasting - if it is a birthday party of 5-year-olds, the preferred cake will be very different from the one shaped by a group of Michelin tasters.
Ultimately, while the original single recipe (if there ever was one) was one-size-fits-all, the resulting recipe can even be differentiated by taster: the robot recognizes a 5-year-old taster - heavy up on the sugar; a slim woman - maybe more fruit and less heavy cream; and so on. Welcome to the brave new world of personalization, machine learning, and predictive modeling.
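A minimal sketch of what such a meta-algorithm could look like, under the assumption that taster feedback can be boiled down to a score. Both scoring functions below are made up; the point is only that the same feedback loop, fed by different tasters, drifts toward very different recipes:

```python
import random

# A toy meta-algorithm: no human writes the final recipe; taster feedback does.
# Both scoring functions are stand-ins for real feedback and are entirely made up.

def kids_score(recipe):
    return recipe["sugar"] - abs(recipe["fruit"] - 1)          # 5-year-olds: more sugar, please

def michelin_score(recipe):
    return recipe["fruit"] + min(recipe["sugar"], 3) - recipe["cream"]  # balance and restraint

def evolve(recipe, score_fn, iterations=2000):
    """Randomly tweak one ingredient at a time; keep the change if the tasters prefer it."""
    best = dict(recipe)
    for _ in range(iterations):
        candidate = dict(best)
        ingredient = random.choice(list(candidate))
        candidate[ingredient] = min(10, max(0, candidate[ingredient] + random.choice([-1, 1])))
        if score_fn(candidate) > score_fn(best):
            best = candidate
    return best

base = {"sugar": 2, "fruit": 2, "cream": 2}
print(evolve(base, kids_score))      # drifts toward one cake...
print(evolve(base, michelin_score))  # ...and a very different one
```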
Who knows what’s in the cake?
Nobody, really. You could maybe observe the robot and figure it out. But that is not even the important question. What we usually want to know is WHY the final recipe is what it is. Why did I receive that recommendation while you got a different one? The new recipe was created by feedback and the meta-algorithm. Many modern algorithms are not a singular creation but are built out of a process of meta-algorithms and the input of the tasters - you. And we kind of assumed that the meta-algorithm was trying to make the cake taste good ... but maybe I instructed it NOT to have my five-year-old like the cake too much, but instead to eat only a little and fall asleep soon. Again - the final recipe will be very different depending on the objective I build into the meta-algorithm.
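Continuing the sketch above: swapping in a different (equally made-up) objective changes what the same meta-algorithm produces, without touching its mechanics. This reuses the hypothetical evolve() loop and base recipe from the earlier sketch:

```python
# Same feedback loop, different objective: "eat little and fall asleep soon"
# instead of "taste good". The scoring is, again, purely illustrative.

def sleepy_score(recipe):
    return -recipe["sugar"] - recipe["fruit"]   # bland cake, small appetite

print(evolve(base, sleepy_score))   # yet another recipe, same machinery
```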
What does this mean for the proclaimed neutrality of algorithms - or, on the flip side, the propagation of the creators’ human biases?
Ultimately, the resulting algorithm is VERY objective (or neutral, if you like) with respect to the objective it was asked to optimize for (taste, amount consumed, sleepiness) and the set of tasters involved. By the same token, it is biased to the extent that I pick the tasters from whom the meta-algorithm draws its feedback and the objective I ask it to pursue.
But just looking at the final outcome - the recipe of the cake - nobody can tell whether there is a bias. What I do know, as a designer of meta-algorithms, is that the algorithms that interact with us on a daily basis - recommending music, restaurants, jobs, and even friends; deciding what appears in the news feed and what goes into the spam filter; choosing the order in which search results are presented - are the culmination of influences from the data (the taste testers), the objective, the rules of the meta-algorithm, and even the original algorithm. All of these can introduce objectivity or bias.
There is one more detail: most people don’t understand what algorithms tell them - we misinterpret the results, and that in itself creates bias. We take the predictions of algorithms as truth: “I think you will like this cake with a probability of 72%.” Algorithms are really good at correctly quantifying probabilities (for tasters like the ones they learned from). But 72% is not really certain ... there is almost a one-in-three chance that we will not like it. Yet we tend to just default to the extreme (this is related to the lens model in psychology). Algorithms ‘know’ when they do not know something - but we as humans often don’t listen.
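A quick back-of-the-envelope check of what a 72% prediction actually implies, assuming the model is well calibrated (the 72% figure is the one from the text; everything else is simulated):

```python
import random

# If a calibrated model says "72% chance you'll like this cake", then out of
# many such predictions roughly 28% should still turn out wrong. This simulation
# only makes that base rate concrete; it says nothing about any real model.

random.seed(0)
n = 10_000
disliked = sum(1 for _ in range(n) if random.random() > 0.72)
print(f"disliked despite a 72% score: {disliked} of {n}")   # roughly 2,800
```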
Can I tell if an algorithm is neutral or was biased?
Not really - unless you know what bias to look for. That is the important thing to understand. For better or worse, the issue isn’t that the human creator is biased. The issue is that algorithms are no longer being created by humans. Now we have algorithms being created by algorithms: what I called a meta-algorithm. And even the creator of the meta-algorithm has, at best, a vague notion of what the data did to determine the final outcome.
But all the gloom aside - it is simply astonishing what meta-algorithms can achieve. It ultimately looks like magic that nobody can explain. Whether it is Amazon's recommendations, modern medical diagnosis, Google's ranking, Dstillery’s ability to identify the right audience, or even machines composing music - the underlying technology of predictive modeling - meta-algorithms that tune toward whatever you ask of them - can be awe-inspiring. But you be the judge: how good can music be that was created by a meta-algorithm that learned to compose like Bach? Already in 1997, a program called EMI (Experiments in Musical Intelligence) by David Cope fooled a human audience into believing that its composition had in fact been written by a human (the audience also mistakenly dismissed a piece composed by a reasonably skilled human as the uninspired product of a computer). Could you have told that it was composed by EMI?
About MoMA R&D Salons
An important part of the MoMA R&D initiative is a series of salons on themes that are relevant to both MoMA and the wider world—topics that straddle the physical and the digital and apply to the experience of artists, visitors, and citizens alike.