The Reasoning Computer
The Turing test is dead, and we killed it. The Turing test measures a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. From the 1940s 1 to the 2010s people programmed computers, and computers could only do what they were programmed to do in a rules-based, deterministic manner. Sometimes a person would program the computer and it would do something unexpected, but 100 times out of 100 the computer was doing what it was programmed to do, whether the person liked it or not. While there has been experimentation with what today we call AI since the 1950s, those machines were a long way away from passing the Turing test.
Why does using ChatGPT feel more like a conversation with the smartest person you know than like using a computer? It's because ChatGPT doesn't solve problems deterministically the way a programmed computer does; it solves them probabilistically. 2 ChatGPT demonstrates the ability to think about something in a logical, sensible way, which is the definition of reasoning. 3
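To make that distinction concrete, here's a toy sketch of the two modes side by side. This is my illustration, not how OpenAI implements anything, and the token probabilities are invented for the example:

```typescript
// A toy contrast between deterministic and probabilistic computation.
// The distribution below is made up; a real LLM derives its probabilities
// from billions of learned parameters.

// A programmed computer: the same input always produces the same output.
function add(a: number, b: number): number {
  return a + b;
}

// A reasoning computer, roughly: given the text so far, weigh every
// candidate next token and sample one according to its probability.
const nextTokenDistribution: Record<string, number> = {
  " mat": 0.62,
  " sofa": 0.21,
  " roof": 0.17,
};

function sampleNextToken(distribution: Record<string, number>): string {
  let roll = Math.random();
  for (const [token, probability] of Object.entries(distribution)) {
    roll -= probability;
    if (roll <= 0) return token;
  }
  return Object.keys(distribution)[0]; // fallback for floating-point rounding
}

console.log(add(2, 2)); // always 4
console.log("The cat sat on the" + sampleNextToken(nextTokenDistribution)); // varies run to run
```

Run it twice and the second line can come out differently; that variability is the whole point.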
We've created something completely new here, a reasoning computer. 4
Working With A Reasoning Computer
There are so many political, societal, economic, and ethical implications of Large Language Models (LLMs) that 5,000 words wouldn't be enough to cover them all. (Trust me, there's a much longer post sitting in my drafts.) But what's really captivated me is why a reasoning computer really is different from anything we've used before, a conclusion I could only arrive at through experience.
ChatGPT has been an essential tool for me over the last month, especially over the last week as I've been building Plinky's browser extension. I'm a very experienced iOS developer but have little experience with web development. I know enough TypeScript and React to cobble together something with lots of help and guidance, but it takes me much longer than it would take someone who knows what they're doing.
A browser extension is important for Plinky to be successful though, which presents a unique challenge: I know what I want, I know how to describe it, and I don't quite know how to get it, but I will know when ChatGPT gives me the wrong answer, so with some nudging I can get what I'm looking for. Here's why the process of pairing with ChatGPT works, and how it helped me build a fully functional browser extension that lives up to my standards in less than a week. (With far less frustration than if you took away the tool and gave me a whole month.)
- A simple browser extension to save links to Plinky's database is a much smaller problem than building a whole app. The problem is self-contained, which makes it quick and easy to test ChatGPT's results and see if the output matches my expectations. (There's a sketch of what such an extension looks like after this list.) In fields like mathematics or computer science it's generally easier to verify a solution's correctness than to come up with a solution in the first place.
- I may be a novice web developer but I'm a great programmer. Even in a domain where I’m not comfortable I can describe the problem I'm trying to solve, assess whether a solution is good, do some research (on my own or with the aid of Perplexity and ChatGPT), and nudge the reasoning computer in the right direction.
- This isn't a process where I ask for something and am given exactly what I want, but I can promise you it's much easier than becoming a good enough TypeScript developer to build the high-quality browser extension I want.
- Little by little the browser extension looks and works more and more the way I want it to, until it does exactly what I want it to do.
- The whole process is interactive, so I'm learning how to get to the right solution. Not only do I have what I want, but the iteration made me a better web developer: I started off only knowing what the wrong output looked like, but now I also know what the correct solution should look like.
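To ground the example, here's roughly the shape of what I ended up with. This is a minimal sketch rather than Plinky's actual code, and the API endpoint is a hypothetical stand-in for the real one:

```typescript
// background.ts, the extension's Manifest V3 service worker.
// PLINKY_API is a hypothetical placeholder, not Plinky's real endpoint.
const PLINKY_API = "https://api.plinky.example/links";

// When the toolbar icon is clicked, save the current tab's link.
chrome.action.onClicked.addListener(async (tab) => {
  if (!tab.url || tab.id === undefined) return;

  // POST the URL and title to the (assumed) save endpoint.
  const response = await fetch(PLINKY_API, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url: tab.url, title: tab.title }),
  });

  // Badge the toolbar icon so it's obvious whether the save worked.
  chrome.action.setBadgeText({ tabId: tab.id, text: response.ok ? "✓" : "!" });
});
```

The point isn't the code itself; it's that a problem this size is small enough that every round trip with ChatGPT can be tested in seconds.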
This is just one example of how I was able to accomplish something I previously wouldn't have been able to do, thanks to an LLM, and the number of tasks I turn to LLMs for is growing every day. The same way that GPS becoming ever-present means I haven't opened a map in almost two decades, I find myself turning to ChatGPT or Perplexity rather than opening Google and clicking a bunch of links to find answers. I used to do my own research, I used to be the reasoning machine, but now I'm offloading more and more of that work to Large Language Models.
How Can A Reasoning Computer Even Work?
People will say that ChatGPT can't do math, and that's true in the most literal sense. A Large Language Model may not know what addition and subtraction mean to a human, but it can synthesize the correct results to add and subtract numbers better than a person. Similarly, people point out that ChatGPT can't read, that because it's just a stochastic parrot it can't provide intelligible output. It's true that LLMs are complex statistical models, yet despite ChatGPT not knowing English from Urdu the way people do, it's still capable of translating from English to Urdu to Russian to French in a way that I never would be able to. The fact that GitHub Copilot 5 doesn't actually know the difference between JavaScript and Swift hasn't stopped it from making programmers 55% faster at coding.
Large Language Models use a different form of problem solving, one that starts with inputs and extrapolates technique. That's the reverse of how humans believe they develop their skills: if you study hard, read a lot, and put in enough hours as a writer, you too can become the next Faulkner or Shakespeare. But think about the way you first learned your native language: you listened to and watched the world around you for 1-2 years, then reverse-engineered how the technique works. We're reasoning machines too; the difference is that the entirety of the internet wasn't preloaded into our brains the way it was into an LLM. (For the best. I don't know if you know, but there's some bad shit on the internet.)
When we say ChatGPT can't do this or ChatGPT can't do that, what we're doing is anthropomorphizing flaws onto the system, derived from our own experiences of solving problems successfully. The problem solving process may be difficult for people to understand because this is the first computer that doesn't do exactly what you tell it to do. Our intuitions may view this as a flaw, but OpenAI loading the whole internet into ChatGPT and creating a simple model for how to think, rather than directly programming the machine, is the reason this computer is incredibly useful in new and previously unexplored ways.
Simon Willison says that these tools make you more ambitious with what you can accomplish, and I'd like to build upon his axiom. When you have a reasoning computer you only have to know what the wrong result looks like, not how to get the right result, and that alone has the power to change how society solves problems.
- Ada Lovelace deserves credit for writing the world's first computer program 100 years before ENIAC, but in this context I'm using the timeframe of the 1940s to focus the post on generally programmable computers.↩
- It's perfectly fair to debate whether this is how the inner machinations of ChatGPT work, but I feel very strongly that, at a minimum, you can say this about the output ChatGPT provides.↩
- This isn't because ChatGPT is sentient, but in all likelihood because it was trained on a corpus of human-generated data. It's difficult to define "thinking" in this context; my personal view is that there is no thinking without sentience. But what I call thinking here isn't the low-level internal machinations of ChatGPT, it's one level higher: the step-by-step token output process that people using ChatGPT see in the process of getting their result.↩
- I'd like to co-credit Joe Ugowe with coining this term; it stemmed from a wide-reaching discussion we had last night about our experiences with ChatGPT and Large Language Models.↩
- GitHub Copilot is a Large Language Model product like ChatGPT, but trained with a coding-specific focus, which allows it to be integrated into a whole suite of Microsoft's programming-related tools and platforms.↩