I finally read The Mom Test, a lovely little book about how to do effective customer research for startups. It kept reminding me of the quote that confirmation bias is the single biggest problem in business. The Mom Test adds the related finding that social norms are full of white lies, which help keep the peace but also mask the truth about bad ideas.
In other words, as a decision maker, you’re not only up against your own tendency to look for confirmatory evidence; you’re also up against everyone else’s tendency to give you confirmatory evidence as their way of being supportive.
The book also reminded me about all the other human needs and motivations that have so little to do with the presumed goal of maximizing the chance of business success. Autonomy, meaning, connection, learning, safety, creative expression and mastery… all needs that can be met regardless of whether a startup actually succeeds.
Being supportive is more important to most people than being honest or objective, so it’s really helpful to know how to dig for objective data when you need it.
The mistake that engineers sometimes make is that they assume by destroying [inefficient] institutions, you would necessarily be able to replace them with something better, which is not always the case — and it also ignores a lot of the wisdom that accretes slowly within the idiosyncratic practices and byways of an institution. Institutions sometimes have a hard time expressing their wisdom, but oftentimes it’s there, and [although] the things that institutions preserve may indeed be antiquated by the standards of optimization, they have enduring values as well.
I’m surprised how often I hear well-informed people argue that modern AI models are still “just repeating things that are in the training data.” This is simply not true. Large language models routinely solve math problems they have never seen, write poems that have never been written, and program software algorithms that are not in the training set.
Of course, these outputs are similar to things seen in the training data — but in the same way that humans mostly only solve math problems that are similar to ones we have seen, write poems that only use familiar elements of language, and write computer programs based on strategies we learned from other programs.
This is not to say that AI models have reached or exceeded the limit of what humans can do. For instance, I am not aware of any AI models that have invented entirely new fields of research. Indeed, AI models are not yet competitive with most (if not all) experienced professionals. But in terms of everyday creativity — which involves copying, combining, and transforming known ideas — current AI models are quite capable.
Part of the confusion may come from the need to “un-bundle creativity” as I wrote about previously. We may not be used to viewing creativity on a spectrum from “somewhat creative” to “as creative as an experienced professional.”
Another reason for misunderstanding may simply be our natural resistance to the idea that some of the things that used to be uniquely human are no longer so. (Though some animal species also exhibit creative problem solving.)
We might also see generic or cliché outputs from AI models and mistakenly attribute them to a lack of creativity. However, these generic responses likely come from models that are specifically trained to be neutral and multi-purpose. By default, popular systems do not veer far off the beaten path — for the same reason that most corporations do not hire erratic geniuses as spokespeople. They are still capable of creativity if prompted.
Finally, we might unnecessarily conflate creativity with agency. Part of being an artist is being moved — knowing what you want to create. Chatbots are designed to be assistants, only responding when prompted, so they do not have this type of intrinsic agency. A human needs to specify the goal and the constraints, but this still leaves plenty of room for the AI assistant to create novel solutions.
If we define the very concept of creativity as something only humans can do, then the word becomes mostly useless in discussions about AI. To meaningfully discuss the impact of the technology now, we need to acknowledge the spectrum of creativity and the AI models’ very real capabilities for creative problem solving and artistic expression.
Something I hear a lot in debates about AI are variations of: “sure, this chatbot can [do online research, tutor you in chemistry, search for drug candidates, …], but it’s not really intelligent.”
A similar sentiment was common in the 1960s and ’70s when electronic computers were first becoming widespread. “Sure, it can solve thousands of equations in one second, but it’s not really intelligent.”
We would have previously said such a performance would make a person extraordinarily intelligent, but we needed to un-bundle this capability of super speedy calculation from “intelligence” so that the word could keep its everyday meaning as “what only humans can do”. The field of artificial intelligence has thus been getting the “intelligence” rug pulled out from under it for decades, as we discovered how to make computers ever smarter.
If “intelligence” is defined as mental abilities that only humans have, then saying that a chatbot is “not really intelligent” is a tautology — one equals one. We figured out how to make a computer do it and thus it no longer fits in this definition of “intelligent”. This doesn’t tell us anything about the more impactful questions of how this technology will affect the world.
In order to have more meaningful conversations about the new capabilities of AI systems, we need to get more comfortable with the un-bundling of intelligence and stop getting distracted by words whose meanings are ambiguous in the computer age.
One of my photographs was recently featured in the Seattle Times. Someone asked if they could purchase prints, so I went ahead and created a storefront where you can order art prints and other products displaying my landscape photographs.
I consider it to be just a hobby, but I keep an eye out for beautiful scenes, and every now and then I get lucky. If you make a purchase, most of the money goes to the print shop and I get a small commission.
“Spirituality is recognizing and celebrating that we are all inextricably connected to each other by a power greater than all of us, and that our connection to that power and to one another is grounded in love and compassion. Practicing spirituality brings a sense of perspective, meaning, and purpose to our lives.”
Every time I see a news update on self-driving cars, I wonder, “When are autonomous vehicles actually going to become mainstream?”
To try to answer that question — and to provide another example of putting numbers in context — I performed the following analysis.
There are many definitions of “autonomous”, but for this analysis I’m going to focus on “Level 4” or above, meaning the driver is not required to pay attention to the road. This does not include current Tesla vehicles, nor most other commercially available driver-assist systems.
Waymo is by far the leading company in the US right now as measured by level 4 (or above) autonomous miles driven. They are frequently in the tech news and have a conspicuous presence on the streets of San Francisco. Waymo’s press releases have become more indirect over time, but my analysis estimates that their autonomous vehicle fleet drove about 27 million miles in 2024.
That’s just another “big number”, so let’s put it in context. According to the Federal Highway Administration, in 2024 the total distance driven by all US drivers was 3.2 trillion miles. If we conservatively assume that Waymo makes up 1/4 of all autonomous miles today, that implies that only about 1 out of every 30,000 miles driven was autonomous.
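The back-of-envelope arithmetic can be sketched in a few lines of Python. Note that the 1/4 market-share figure is an assumption from this analysis, not a measured number:

```python
# Back-of-envelope: what fraction of 2024 US driving was autonomous?
waymo_miles = 27e6       # estimated Waymo autonomous miles in 2024
total_us_miles = 3.2e12  # FHWA: total US miles driven in 2024
waymo_share = 0.25       # assumption: Waymo is 1/4 of all autonomous miles

autonomous_miles = waymo_miles / waymo_share  # ~108 million
miles_per_autonomous_mile = total_us_miles / autonomous_miles

# Roughly 1 in 30,000
print(f"About 1 in every {miles_per_autonomous_mile:,.0f} miles was autonomous")
```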
If we assume that growth proceeds exponentially (for example, doubling every year), how long will it take for autonomous vehicles to become mainstream?
I plotted the numbers. Note that the vertical axis is logarithmic! If it weren’t, all of the data points in the lower-left would be smooshed along the bottom of the graph. Straight lines on a logarithmic scale signify exponential growth.
Waymo’s autonomous mileage approximately tripled each year between 2014 and 2019 (from 100,000 to about 8 million). If we use that optimistic growth rate, autonomous mileage could approach ubiquity around 2034. If we instead use Waymo’s average 75% yearly growth rate from 2014 to 2024, we approach ubiquity around 2044.
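The crossover years can be sketched under the same assumptions: treating “ubiquity” as matching the 3.2 trillion total US miles, and taking the estimated 27 million Waymo miles in 2024 as the starting point (both simplifications).

```python
import math

def year_of_ubiquity(annual_growth, start_miles=27e6, start_year=2024,
                     target_miles=3.2e12):
    """Years of compounding at annual_growth until start_miles reaches target_miles."""
    years = math.log(target_miles / start_miles) / math.log(1 + annual_growth)
    return start_year + years

print(round(year_of_ubiquity(2.00)))  # tripling yearly: mid-2030s
print(round(year_of_ubiquity(0.75)))  # 75% yearly growth: mid-2040s
```

This ignores saturation effects near the end (growth must slow as autonomous miles approach the total), so the true crossover would come somewhat later than the pure exponential suggests.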
In other words, even if everything goes extremely well for the autonomous car industry, it will probably take at least another decade before driverless vehicles become mainstream.
This is largely determined simply by the massive scale of the auto industry. Waymo’s website states, “We have over 40 million miles of real-world driving experience — that’s enough to drive to the Moon and back 80 times.” Then again, Americans as a whole drove to the moon and back 6.4 million times in 2024 alone.
[When] we become aware of the high costs of assuming responsibility for others’ feelings and trying to accommodate them at our own expense… we may get angry. I refer jokingly to this stage as the obnoxious stage because we tend toward obnoxious comments like, “That’s your problem! I’m not responsible for your feelings!” when presented with another person’s pain. We are clear what we are not responsible for, but have yet to learn how to be responsible to others in a way that is not emotionally enslaving. […]
At the third stage, emotional liberation, we respond to the needs of others out of compassion, never out of fear, guilt, or shame. Our actions are therefore fulfilling to us, as well as to those who receive our efforts. […] At this stage, we are aware that we can never meet our own needs at the expense of others. Emotional liberation involves stating clearly what we need in a way that communicates we are equally concerned that the needs of others be fulfilled.
Marshall Rosenberg, Nonviolent Communication (3rd ed.), pp. 59–60
“[A need] is a perceived lack, something that is missing. Needfinding is thus a paradoxical activity—what is sought is a circumstance where something is missing. In order to find and articulate a need, this missing thing must be seen and recognized by someone.”