I’ll be honest: I wasn’t all that impressed by ChatGPT in the beginning. When the first open beta came out, I played with it for a few queries and found it a bit underwhelming. So while my colleagues and friends seemed to run around with their hair on fire in pure doomerism, I kind of… just enjoyed the show. That’s changed a bit. Not because the model is beating out law students on the LSAT, but because some genuinely interesting use-cases have finally come out of it: stories of people using it to correctly diagnose a pet’s illness when their vet couldn’t, people using it for boilerplate copy-and-paste legal contracts, and even authors using it to spellcheck and tone-check their articles. I’ll add my own use-case: I gave it 5 artists I like and asked for song and artist recommendations, and it gave better results than Spotify. A low bar, I’m sure. But I pay for Spotify, and a free chatbot does better.
The thing that really jumped out at me was how this usefulness came to be: it emerged from the AI being made available to the mass public. We’ve always thought about how AI could be used to scale XYZ, but we never really thought about how people could be used to scale AI. And that’s an interesting thought, because if there is an answer to it, and if AI really is the economic future that may displace most of our current labor, then whatever ways humans can scale AI are the remaining ways humans will be able to deliver unique economic value. In other words, it’s the only way for those of us paranoid about being replaced or made redundant in tomorrow’s economy to still have a job, or a reason for our silicon overlords not to reduce us to inefficient Matrix batteries.
So then: what are those things? How can we scale AI?
Apologies in advance if this is a sort of linguistic jiu-jitsu, but the way we scale AI is by letting it scale us. We scale the technology by using it to scale something that doesn’t scale: ourselves, and humanity as a whole.
There are three things that I think are (for now) uniquely human: the ability to make new theories or connections from disparate bases of knowledge, our personal histories and passions/love, and our ability and initiative to do things that don’t and cannot scale.
New theories are hopefully the obvious one. This is how we’ve made all scientific progress: we take some discovery from physics, combine it with something from chemistry or biology, and voilà, we get a new medicine or invention. This doesn’t happen with AI, because AI is trained on the information we already have, to make connections that already exist, to answer questions we already know to ask. Even if you asked it a question you couldn’t think of yourself, the way it would answer is by looking for past instances in its training data where someone asked another PERSON that question, and giving you a mutated form of that answer. AI can make connections, but a person still ultimately needs to lead it to the right ones. It’s why, even when ChatGPT is used to write code, an engineer still needs to be there to query it properly, check the work, and guide it along. Here, AI is a multiplier.
Personal history and passions/love is a much vaguer and fuzzier element than I ever thought I’d write about. But it’s true. Ultimately we’re human, and we seek groups and empathy. We want to associate with people we trust, and that trust is generated by factors like knowing their history, seeing their emotions, and experiencing the authenticity of both in tangible ways. Obviously, AI can’t do that yet. The closest AI has come might be virtual personalities like Vocaloids, but those personalities and histories are choreographed entirely by humans. Given a choice between a book written by an AI and one written by a human author on the topic they’re most passionate about, I think most people would take the latter.
The last one is doing things that don’t scale. I’ve written about this in a previous article on how individuals can compete with giant monopolies (https://www.fwdthoughts.com/airbnb-vs-amazon-a-dualistic-approach-to-the-future-of-employment/), but it applies to AI too. Do things that don’t scale. The most notable example is Apple (or at least the Apple of old), whose products had small touches that were utterly seamless and inspired love from their users: little experiences like the wiggle of icons when you rearrange apps on your iOS device. These features don’t win new users, don’t increase revenue, and don’t really move any direct business metric, but they entrench user love.
AIs are trained on an existing wealth of data, with models built for specific purposes. They do not (yet) do nuance, nor can they replace humans in the ways that we are imperfect and yet somehow better for it. And we’ll have tons of time before AI gets to that point. Luckily for me and this article: when that time does come, I fully intend to use some more linguistic jiu-jitsu and rename that AI to something else. At the point when they have emotions, passions, irrationality, and even a moment in time as a birth to engender a personal, individual history, I’ll call them my silicon superiors, not AI.