Why AI isn’t going to take your job.
What two ancient Greek ideas about knowledge can teach us about the limits of automation.
I know that this is a subject that so many people smarter than me have written about, so rather than offering inflated wisdom, I’m just peddling my own perspective: something old presented as something new. This whole idea came from listening to Arvind Narayanan talking to the 20VC folks a couple of weeks ago, on a podcast sent to me by a good friend and fellow admirer of Arvind as a teller of truths that not so many people want to hear.
He said something I view as incredibly important and interesting on the subject of AI taking jobs. His observation can be roughly put as follows:
AI is good at automating tasks. And even then, not all tasks, but some narrow tasks it can do astonishingly well. However, a job is not a single task - it’s a bundle of tasks, often held together by some less repeatable, task-specific glue work. AI might be able to do some of the tasks in that bundle rather well, but it’s pretty unlikely that it will be able to do every bit with good fidelity.
This got me thinking about a parallel track: another reason why it will be tough to train an AI to take any one particular job. Permit me a slight sidebar to discuss a couple of ideas, and then I’ll get back to the point, dear reader, I promise.
One of my favourite books of recent times, and one that has really changed my perspective, is “Seeing Like a State” by James C. Scott. In it, he covers, amongst a vast swath of other fascinating material, two ancient Greek concepts: metis and techne. Metis is the capacity you build by repeated, effortful practice. We can view it as the unconscious accumulation of having seen a similar situation (although rarely the same situation) many times within a changing environment. Such skills can rarely be taught directly. By contrast, techne is all about hard knowledge, and could be characterised as book learning - it is the technical capability to understand a given situation and know what to do about it using reason rather than intuition. This knowledge can be expressed through language alone, and further ideas deduced from it by reason.
There’s a clue in there already as to what machines can learn, and what they will struggle with, given that we tend to train them only on enormous quantities of static data scraped from the internet. So, for example, ChatGPT has proven quite good at designing travel itineraries for holidays - you can give it a destination and a style of travel and it will spit out things you should do and see, and how to find good restaurants or hotels. It does this by taking an “internet average” of what similar people have done or described online. But if you ask it how to interact with the people you meet on your travels, it’s likely to fail - when you go, you’ll meet different individuals with different motivations, and different capacities to add to (or subtract from!) your experience. Nowhere is it written how to talk to an intransigent restaurateur or an extroverted tour guide to get the best out of your holiday - this is the metis we all have to assimilate by experience.
How does this make us safe in our jobs? Well, almost all human work involves some degree of interaction with other humans. We don’t write down on the internet our innermost thoughts, or our understanding of ourselves when talking to others (that would be somewhat of an overshare, apart from anything else), and we often talk around subjects rather than about them (see, for example, the excellent “The Culture Map” by Erin Meyer). So how exactly are machine learning models going to deal with this unwritten knowledge? In short, with the current approaches, they aren’t.
A more likely outcome, unless some unexpected new development happens in the world of machine learning research in the next couple of years, is that we will move up from being the players of the orchestra to being the conductors, orchestrating the simpler work of mostly bots. This could be good and bad for humanity - good because there’s only so much conducting work to do, so if we manage to break with our history of not dealing well with automation (working hours getting longer after the introduction of water-powered mills, anyone?), it could be a path to a Keynesian “15-hour week”. Bad because we have a poor record with these things, and conducting work is significantly more taxing, so if we’re doing 40 (or 60…) hour weeks of it, that’s a recipe for burnout.
I guess time will tell. But rest assured, it’s very unlikely that you can be replaced by AI alone.