Studying complex systems via video games

This article caught my eye because it was in the MIT alumni publication yet was written by a Williams College professor — a rare collision of my two alma maters.

That was my excuse… but then it turned out to also be a really interesting article.  Author Morgan McGuire writes, “modern video games… are arguably the most complex systems engineered in any discipline, ever.”  That had never occurred to me before.  As one example, he points out that the US federal tax code is about a third the length of a standard video game’s source code (not to mention the graphics, textures, maps, etc. that accompany it).

Unlike most engineering disciplines (including software engineering) where the goal is to make the solution as simple as possible, in game design complexity is often desirable because it makes the game more interesting to play.  Often, amazing complexity can be achieved with just a few interacting rules.  I remember reading that the game designer behind the Sim series (SimCity, The Sims, etc.) was always looking for simple yet powerful sets of rules that put the user in control of an essentially infinite number of options — consider the limitless number of possible cities that can be built in SimCity.
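
A toy illustration of that point (mine, not McGuire’s) is Conway’s Game of Life, where a couple of short rules produce behavior nobody could guess just by reading them. A minimal sketch in Python:

```python
# Conway's Game of Life: a couple of simple rules, endlessly surprising behavior.
# Illustrative sketch only; no relation to how SimCity is actually written.
from collections import Counter

def step(live_cells):
    """Advance one generation. live_cells is a set of (x, y) tuples."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        # Rules: a live cell survives with 2 or 3 neighbors; a dead cell is born with exactly 3.
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "glider": five live cells whose pattern walks across the grid forever.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same glider shape, shifted one square diagonally
```

The glider is the classic surprise: five cells whose pattern rebuilds itself one square over every few generations, a behavior that appears nowhere in the rules.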

McGuire’s thesis is that game design strategies should be better formalized so that they can be applied to designing or improving complex systems in the real world such as government policy, economic regulation, social and technical networks, etc.  We need to be careful with this analogy, though, because the goal in most of these disciplines is still to simplify if possible.  So the hope is that by analyzing complex games, we’ll be better able to understand the complexities that inevitably arise despite our best efforts in real-world systems.

Socially Relevant Computing

I recently read an article in Communications of the ACM about making computer science in the classroom more socially relevant.  The author, Michael Buckley of U. Buffalo, points out that “there isn’t a textbook out of the 60 I have on my shelf that makes me see computing as socially relevant…. If I was a student, beginning these important four years, and I was taught programming via doughnut machines [, pet stores, and games — the types of examples he finds in the textbooks], I would quit and go do something important.  Major in some field that had an impact.”

The author argues that when intro computer science is taught using silly, simple examples, students just don’t see the potential power and relevance of the tool.  “Even… pure mathematics has me counting and measuring planets and populations.”  He’s created an alternative set of examples that are much more socially relevant, involving, for example, voting systems (counting), pollution in the Great Lakes (2-d arrays), disaster evacuation (optimal paths), and drug interactions (databases).
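
To get a feel for the difference, here is a guess at the flavor of the vote-counting example. This is my own toy version in Python, not one of Buckley’s actual assignments:

```python
# The flavor of a "socially relevant" intro exercise: plurality vote counting.
# My own toy version, not one of Buckley's actual assignments.

def tally(ballots):
    """Count a list of ballots (candidate names); return the counts and the winner."""
    counts = {}
    for choice in ballots:
        counts[choice] = counts.get(choice, 0) + 1
    winner = max(counts, key=counts.get)
    return counts, winner

ballots = ["Rivera", "Chen", "Rivera", "Okafor", "Chen", "Rivera"]
counts, winner = tally(ballots)
print(counts)   # {'Rivera': 3, 'Chen': 2, 'Okafor': 1}
print(winner)   # Rivera
```

Same loops and dictionaries you’d practice on a doughnut-machine example, except the student is counting something that matters.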

These observations struck me as stunningly accurate.  I think a big part of why I was drawn to computer science was that I had a strong sense of the power of programming long before I took an actual CS class. I think I saw very early on that you could learn how to do math, and then you could program the computer to do that math a billion times in a second.  It could do all these things for you.  It just seemed like the ultimate tool.

Also, I remember thinking that the coolest part about taking Statistics 201 in college was getting to use all sorts of real-world data sets.  Historical SAT scores come to mind.  We ended the course by doing a project where we had to gather some data in the real world and analyze it with statistics.  My team looked at whether people’s close friends have the same sibling status (i.e. only child, older sibling, younger sibling).  Not rocket science, but certainly socially relevant!
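
For what it’s worth, that kind of analysis is now just a few lines of code, which is itself a nice argument for teaching with real questions. Here is roughly what our question looks like as a chi-square test of independence; the counts are invented for illustration and are not our actual project data:

```python
# A chi-square test of independence between a respondent's sibling status and
# their close friend's. The counts below are made up for illustration only;
# they are not the data from our class project.
from scipy.stats import chi2_contingency

#          friend: only child, older sibling, younger sibling
table = [
    [8,  5,  4],   # respondent: only child
    [6, 12,  7],   # respondent: older sibling
    [5,  6, 11],   # respondent: younger sibling
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p-value = {p:.3f}")
# A small p-value would suggest friends' sibling statuses are not independent.
```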

Conversely, I remember spotting contrived examples from miles away.  In algebra and pre-calculus, you spend a fair amount of time learning how to do vector math, and most of the examples involve things like canoes in a fast-flowing stream.  It always seemed bizarre to me that we were spending so much time on something with so little real-world applicability.  Finally, I got to calculus and realized that the real reason we had to learn all that vector math was because it was vital for calculus, which allowed us to model physics, economics, biology, on and on.  My thought was “why didn’t you just tell us about calculus, instead of boring us with canoe examples?”

Mr. Buckley has gone even further by setting up a lab where more advanced students work on “socially relevant” problems, including educational tools and devices for the disabled.  That’s fine, but it seems to me that the critical insight here is about how students are introduced to the field.  Once you’re advanced enough to work on real problems, you should already be well past needing to be convinced that computer science is interesting and relevant.

I share his outrage that none of the textbooks are up to the task. Let’s get moving.

Software that makes software

I just read an Interactions magazine article that was a bit muddled but that pointed out:

The Industrial Revolution did not occur when we built steam engines, it occurred when we used steam engines to build steam engines.  The true information and computing revolution will not occur until we use software to build software; until we really use executable knowledge to create executable knowledge.

It’s hard to know specifically what this will look like.  Will it be more machine-learning based, probabilistic like the human brain?  Analogy-centric?  Evolutionary?  Based on internet-scale knowledge?  In any case, I don’t doubt that the essential “meta” argument is true.  Without software that can build software, computers will remain pretty dumb.

Simplifying the disciplines

Here’s what I’m thinking.

Science is all about gradually figuring things out by trial and error — making predictions and seeing if they hold up. You hope to be surprised. Engineering is all about clever hacks — breaking the rules while following the real rules. Design is about constraints — tradeoffs, compromises, and innovative solutions that manage to satisfy all of them. Art is about free association — thinking outside the box at all costs and doing something simply because it is new.

What is liberal arts? I think liberal arts is all about critical thinking — pushing an argument to its full logical conclusion. This is important in all of the above areas. For example:

  • Science: if light travels at a constant speed, the logical conclusion is that time slows down as objects speed up (special relativity; see the light-clock sketch after this list).
  • Engineering: if transistors can perform a basic bitwise operation, then lots and lots of transistors can produce artificial intelligence (or at least Windows Vista).
  • Design: if computers are good for organizing information but bad for portability, pocket devices should be fully managed using a computer (iPod).
  • Art: if everyone paints only important or unusual objects, why not paint objects that are fully mass-produced and pedestrian? (Andy Warhol)
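
To see the science bullet in action, the standard light-clock argument carries the reasoning out in one step (this is my sketch of the textbook derivation, not something quoted from anyone above): a clock “ticks” by bouncing light across a gap of length L; watched from the ground, that light has to travel a diagonal because the clock slides sideways mid-tick, so one tick satisfies

```latex
% One tick of a moving light clock, as seen from the ground.
% L = gap between the mirrors, v = speed of the clock, c = speed of light,
% \Delta t = tick measured on the ground, \Delta t' = 2L/c = tick on the clock itself.
\left(\frac{c\,\Delta t}{2}\right)^{2} = L^{2} + \left(\frac{v\,\Delta t}{2}\right)^{2}
\quad\Longrightarrow\quad
\Delta t = \frac{2L/c}{\sqrt{1 - v^{2}/c^{2}}} = \frac{\Delta t'}{\sqrt{1 - v^{2}/c^{2}}}.
```

Since the square root is less than one for any moving clock, the tick seen from the ground is longer than the tick on the clock itself: the moving clock runs slow, and the only assumption used was that c is the same for both observers.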

Do you see what I mean? It’s just another vindication of the liberal arts. I kind of got sidetracked from my original point, though, which was just new thinking about the essence of science and engineering.

Ray Kurzweil and the Future

I went to a talk given by Ray Kurzweil today. This is a man who helped shape the way I think, because I read one of his books at age 15 or so, and it was startling. As soon as he started talking today I knew I had seen him before… it must have been a similar talk at MIT last year, or maybe a recording I watched online. The funny thing about him is that he talks about these crazy things that will happen in the future in a totally droning voice, like he’s so bored with all these obvious predictions. Also, you can’t argue with the guy. His numerical evidence is just way too strong. You have no grounds whatsoever to disagree. The only way you can beg to differ is by going outside the game — finding what he’s not talking about.

There are a couple of his points that I wanted to touch on here. An audience member said that a century ago people predicted that they would have more leisure time in the future; why isn’t that the case? And Ray basically pointed out that it is the case — most people work a lot because they want to, not because they have to in order to survive. Their jobs are a big part of who they are, of what gives them gratification. So, in many senses, that’s leisure. It struck me as slightly profound. Not working is boring. And if no one depends on you, what is the meaning of your life?

Another interesting aspect of Kurzweil is that he talks about all these exponential trends as if they are completely inevitable. Computer power doubling every year, gene sequencing doubling every year, brain imaging resolution doubling every year. And I agree with him 100% — when you look at those numbers, it does seem inevitable. But you also can’t forget that it only happens because real people do it!

He talked a fair bit about renewable energy. Apparently the amount of electricity in the world coming from solar power has been doubling every 2-3 years. Right now it accounts for around 1% of our electricity. If you follow the exponential trend, this means that in 15-20 years, just about all our energy will come from solar. I think this prediction has an excellent chance of coming true. But it’s interesting to compare Kurzweil with Ted Nordhaus and Michael Shellenberger, who argue a very similar thing in their recent book: climate change will be solved by massive investment in technology. The difference is that N&S are closer to the ground, advocating for more research funding for renewable energy technology. To Kurzweil, it will just happen, because people like N&S and all the scientists will do their thing and make it so. It’s kind of amazing how he’s lifted himself out of the big complex mess of actually doing it. But if everything is preordained, what’s the meaning of life?
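
The arithmetic behind that 15-20 year figure is easy to check against the numbers as he stated them (this is my own back-of-the-envelope version, not anything from his slides):

```python
# Back-of-the-envelope check of the solar projection, using the figures as
# stated in the talk: roughly 1% of electricity today, doubling every 2-3 years.
import math

current_share = 0.01                                # ~1% today
doublings_needed = math.log2(1.0 / current_share)   # about 6.6 doublings to reach 100%

for years_per_doubling in (2, 3):
    total = doublings_needed * years_per_doubling
    print(f"{years_per_doubling} years per doubling -> ~{total:.0f} years to reach 100%")
# 2 years per doubling -> ~13 years
# 3 years per doubling -> ~20 years
```

So the timeline is just what the doubling rate gives you, provided the doubling keeps going, which is, of course, the whole argument.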

If I’m coming off as giving Kurzweil a hard time, I don’t mean to. Not only is he brilliant, he is in fact working hard with many companies on new research and advocating for research funding from his positions on government panels. I should mention that he became famous partly because he invented the first realistic music synthesizer, the first flatbed scanner, and the first robust optical character recognition system… among others. (And he wrote some damn good books — because, as he says, he can’t work on systems that are 15 years away, so he can only write about them.)

Based on current exponential-growth information technology trends and the estimated computational capacity of the human brain, Ray estimates that a computer will pass the Turing test (be able to simulate a human) around the year 2029. Many people argue that such a thing could never happen, but I see no good reason why not. I will be 45 in 2029. Who knows what the world will be like when extra human intelligence is cheap. Ray was quick to point out that we will use these technologies to extend ourselves — as we have with all past technologies — not to build artificial intelligence robots that take over the world — as is popular in sci-fi.

One last thing. It occurred to me that regardless of all the technological progress that has taken place, all the “social networking” that goes on online, I still just really want to cuddle with real humans. As the world becomes more connected and more overwhelming, we need to figure out how to make sure people feel loved and involved in their real world communities, and interact with real physical people. Yes, maybe in 50 years we will have realistic physical virtual reality, but that is too far away to worry about (I will be 74). I wonder what I could do in the meantime to help create more loving, physical, local communities.

There was a couple sitting next to me, about my age, and they were bored and “passing notes” by typing them on a cell phone and passing it back and forth. It seemed vaguely ironic… here was Ray Kurzweil, telling us that 10 years ago only a few people even owned cell phones. It had never even occurred to me to pass notes on a cell phone. It makes me wonder if I will read this 10 years from now and think, typing on cell phones, how old-fashioned…

When the details make the difference

I attended a talk some weeks ago that I’ve been meaning to write about. It was given by one of the chief something-or-others of Continuum, a design firm in the Boston area. He used the analogy of the light spectrum to describe a spectrum of design — “ultraviolet” at one end, referring to impractical but beautiful artsy design, and “infrared” at the other, focusing on “design that’s almost invisible, but it makes you feel warm, so you go back and buy more.” This “infrared” end is what Continuum is all about.

For me, the most interesting part of his talk was his point that his favorite design problems are very constrained, with very little wiggle room to change anything. He pointed out that when this is the case, the little details can make all the difference. For example, his team worked on disposable diapers (for Pampers), and had a huge impact on their market share by carefully adjusting the smell of the diapers and by introducing slightly different diaper shapes for different developmental stages of the baby (because parents love to talk about the development of their baby).

The reason this is interesting is that the more a product becomes commoditized (a process only intensified by globalization), the more the design settles, the less room there is for change, and the more the details make the difference.  This means that software interfaces will improve as applications like word processing become commodities, because the competitive advantage will be gained on “little things” like the details of the user experience — beyond the feature set to how good the users feel when they use the product.  As the Continuum guy put it: as technological differentiation between products decreases, the value of customer experience differentiation increases.

Apple has always been very good at the user experience aspects of product design, so I think they will do well in the future. Indeed, Apple does not tend to develop new paradigms per se, but rather crucial tweaks that enhance the usability, the feel, and the efficiency of the product. They do well in consumer markets (as opposed to corporate) because that is where products are sold on the whole experience rather than on some sort of price/feature tradeoff decision made by managers.

Two kinds of stress

Stress is generally thought of as a bad thing, but it occurred to me last night that there are really two very different kinds of stress. The type I find very unpleasant is based on fear: fear of not finishing something you are supposed to do, or, more generally, fear of being unable to prevent an unwanted scenario. For example, “Can I finish my thesis and graduate in time?”

In contrast, the other type of stress I see as part of what makes life worth living: the desire to do more than might be possible. This type of stress is proactive, based on things you want to do, and comes with excitement. I think it is part of what Cervantes means by the “impossible dream.” For example, “Can I make my thesis really excellent in the time I have left?”

Stress about grades can also illustrate the difference: worrying about the negative consequences of getting bad grades vs. striving for good grades to achieve positive consequences. It’s related to the optimistic vs. pessimistic explanatory styles described in positive psychology.

Writing in the electronic margins

I came across an interesting article about the importance of being able to “write in the margins” and how that functionality has been neglected in computer systems in favor of the designer trying (in vain) to figure it all out ahead of time. I love this quote:

The fuzzy intersection of official and unofficial data has never been a comfort zone for information technologists.

How true…

It also follows a theme that has been recurring lately, which might be summed up as “computer science is hard”: in a sense we are trying to re-create the world from scratch, which is a very difficult job!  Paper gives you all sorts of abilities “for free,” but every new function on a computer (such as the ability to “write in the margins”) must be explicitly designed.

Advice from Steve Jobs

Some really important advice from Steve Jobs was published recently. Well, he was just answering interview questions. But I take it as advice. On getting things right:

“At Pixar when we were making Toy Story, there came a time when we were forced to admit that the story wasn’t great. It just wasn’t great. We stopped production for five months…. We paid them all to twiddle their thumbs while the team perfected the story into what became Toy Story. And if they hadn’t had the courage to stop, there would have never been a Toy Story the way it is, and there probably would have never been a Pixar.

“We called that the ‘story crisis,’ and we never expected to have another one. But you know what? There’s been one on every film. We don’t stop production for five months. We’ve gotten a little smarter about it. But there always seems to come a moment where it’s just not working, and it’s so easy to fool yourself – to convince yourself that it is when you know in your heart that it isn’t.

“Well, you know what? It’s been that way with [almost] every major project at Apple, too…. Take the iPhone. We had a different enclosure design for this iPhone until way too close to the introduction to ever change it. And I came in one Monday morning, I said, ‘I just don’t love this. I can’t convince myself to fall in love with this. And this is the most important product we’ve ever done.’

“And we pushed the reset button. We went through all of the zillions of models we’d made and ideas we’d had. And we ended up creating what you see here as the iPhone, which is dramatically better. It was hell because we had to go to the team and say, ‘All this work you’ve [done] for the last year, we’re going to have to throw it away and start over, and we’re going to have to work twice as hard now because we don’t have enough time.’ And you know what everybody said? ‘Sign us up.’

“That happens more than you think, because this is not just engineering and science. There is art, too. Sometimes when you’re in the middle of one of these crises, you’re not sure you’re going to make it to the other end. But we’ve always made it, and so we have a certain degree of confidence, although sometimes you wonder. I think the key thing is that we’re not all terrified at the same time. I mean, we do put our heart and soul into these things.”

On management style:

“My job is to not be easy on people. My job is to make them better. My job is to pull things together from different parts of the company and clear the ways and get the resources for the key projects. And to take these great people we have and to push them and make them even better, coming up with more aggressive visions of how it could be.”

On managing economic downturns:

“We’ve had one of these before, when the dot-com bubble burst. What I told our company was that we were just going to invest our way through the downturn, that we weren’t going to lay off people, that we’d taken a tremendous amount of effort to get them into Apple in the first place — the last thing we were going to do is lay them off. And we were going to keep funding. In fact we were going to up our R&D budget so that we would be ahead of our competitors when the downturn was over. And that’s exactly what we did. And it worked. And that’s exactly what we’ll do this time.”

Of course, that’s why it’s amazing to have a huge pot of cash you can draw from. College endowments are very useful for the same reason (plus the fact that they earn interest).

Apple has really done some amazing work. And they are remarkably good at focusing on a small number of products.

“I’m actually as proud of many of the things we haven’t done as the things we have done. The clearest example was when we were pressured for years to do a PDA, and I realized one day that 90% of the people who use a PDA only take information out of it on the road. They don’t put information into it. Pretty soon cellphones are going to do that, so the PDA market’s going to get reduced to a fraction of its current size, and it won’t really be sustainable. So we decided not to get into it. If we had gotten into it, we wouldn’t have had the resources to do the iPod. We probably wouldn’t have seen it coming.”

“Things happen fairly slowly, you know. They do. These waves of technology, you can see them way before they happen, and you just have to choose wisely which ones you’re going to surf. If you choose unwisely, then you can waste a lot of energy, but if you choose wisely it actually unfolds fairly slowly. It takes years.

“We don’t get a chance to do that many things, and every one should be really excellent. Because this is our life. Life is brief, and then you die, you know? So this is what we’ve chosen to do with our life. We could be sitting in a monastery somewhere in Japan. We could be out sailing. Some of the [executive team] could be playing golf. They could be running other companies. And we’ve all chosen to do this with our lives. So it better be damn good. It better be worth it. And we think it is.”