In search of the limits of AI
posted in: AI, Programming, Machine Learning and AGI.
~695 words, about a 4 min read.
AI has limits, even if many AI people can't see them. Henry Farrell goes beyond the good and bad of AI, and instead shares his notes on the book, Irrational Decision: How We Gave Computers the Power to Choose for Us by Benjamin Recht. One question is how we should think about machine learning:
"And machine learning itself is no more and no less than a powerful statistical tool. I found this passage maybe the most clarifying explanation of what it does that I’ve ever read."
To frame the prototypical machine learning problem, I like to think about a hypothetical spreadsheet. Each row of the spreadsheet corresponds to some unit or example. But I don’t care what the units mean. I just know that I have a bunch of columns filled in with data. And I’m told one of the columns is special. I am about to get a load of new rows in the spreadsheet, but someone downstairs forgot to fill in the special column. Management has tasked me with writing a formula to fill in what should be there. For whatever reason, I don’t get to see these new rows and have to build the formula from the spreadsheet I have. The formula can use all sorts of spreadsheet operations: It can assign weights to different columns and add up the scores, it can use logical formulas based on whether certain columns exceed particular values, it can divide and multiply. … I’ll do an experiment. I’ll take the last row of my spreadsheet and pretend I don’t have the special column. I’ll write as many formulas as I can. … But why single out that last row? I can do something similar for every row! I’ll invent a set of plausible functions. I’ll evaluate how well they predict on the spreadsheet I have. I’ll choose the function that maximizes the accuracy. This is more or less the art of machine learning.
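The procedure in that passage — invent a set of plausible formulas, score each one on the rows you already have, and keep the most accurate — can be sketched in a few lines. Everything below (the toy rows, the column names, the candidate formulas) is invented purely for illustration; it is not from the book, just one minimal reading of the quote:

```python
# Toy sketch of the "spreadsheet" framing of machine learning:
# propose candidate formulas, score them against the rows we have,
# and keep the formula that best predicts the special column.

# Each row: (col_a, col_b, special) -- 'special' is the column
# we must learn to fill in for future rows.
rows = [
    (1.0, 2.0, 1),
    (3.0, 0.5, 1),
    (0.2, 0.1, 0),
    (2.5, 2.5, 1),
    (0.4, 0.3, 0),
    (0.1, 1.9, 0),
]

# A small family of plausible formulas, in the spirit of the quote:
# threshold rules and weighted sums compared against a cutoff.
candidates = {
    "a > 1":             lambda a, b: int(a > 1.0),
    "b > 1":             lambda a, b: int(b > 1.0),
    "a + b > 2":         lambda a, b: int(a + b > 2.0),
    "0.8*a + 0.2*b > 1": lambda a, b: int(0.8 * a + 0.2 * b > 1.0),
}

def accuracy(formula, data):
    """Fraction of rows where the formula matches the special column."""
    hits = sum(formula(a, b) == special for a, b, special in data)
    return hits / len(data)

# Score every candidate and keep the most accurate one.
scores = {name: f for name, f in candidates.items()}
scores = {name: accuracy(f, rows) for name, f in candidates.items()}
best_name = max(scores, key=scores.get)
print(best_name, scores[best_name])
```

Real machine learning replaces the hand-written candidate list with a parameterized family of functions and the exhaustive scoring with optimization, but the shape of the problem is the same.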
Finding the sweet spot in AI and its limits:
As per the quote at the beginning of this post, Ben doesn’t really engage with the question of whether AI is good or bad in any general sense. Instead, he proposes that it can carry out many tasks, including tasks that we might not anticipate right now, but that there are limits. AI, like mathematical rationality more generally, has a sweet spot: problems that are complicated enough that they can’t be solved by other computationally cheaper approaches, but that have enough regularities to be workable. Within that sweet spot, it can do extraordinary things. Outside the sweet spot, it may be redundant or completely useless. And there is an ambiguous zone in between, where it can do stuff but imperfectly.
I found this pull quote very much related to the "Software Brain" that Nilay Patel argues has taken hold across my profession:
"Equally, there are challenges that appear to be fundamentally resistant to mathematical rationality, including bureaucracy and politics:"
societies are not computer chips. While I noted in chapter 2 that computer chips were often analogized as microscopic cities, chips were always designed to be hermetically sealed and perfectly controlled. This is what made them optimizable. Real societies, on the other hand, had people. While it’s convenient to model and view the population, its health, and its market flows as mathematical abstractions, these run into the limits of the messiness that people bring to bear.
The AI hype machine really wants you to believe it can solve for the messiness in human enterprise, creativity, and ingenuity. At the same time, these propagandists know very little about what normal people actually do and are really out of touch with everyday people. It's why they haven't made a good product out of billions of dollars in AI.
This piece is the most insightful thing I've read so far about when AI is useful and when it isn't. It also reassures me a little that AGI is not going to come from a fancy statistical model. I ordered the book soon after reading the piece. Here's an affiliate link to it.