AI and the Pointy-Haired Boss

As Artificial Intelligence gains momentum, a natural question is whether it is still worth acquiring knowledge and cognitive skills when these will be available on tap. The answer depends on how far AI will go, so let me begin by stating my views. As of now we have no evidence one way or the other on whether AI can ever reach the level of genuinely major breakthroughs - discovering relativity or quantum mechanics, inventing the transistor, or even the huge breakthroughs within a single field (in my field, mathematics, something like the creation of symplectic topology by Gromov) - and, if it can, when and how this will happen. But in my view the evidence is overwhelming that any cognitive task short of this will be within the capabilities of AI, on a timescale of at most 5-10 years.

The people who will ever make such major breakthroughs number at most in the hundreds or thousands, and they should obviously acquire the knowledge and skills they need. This post addresses the question for the rest of us, and takes as a premise that my estimate is roughly correct.

Role of humans in the age of AI

As Ethan Mollick, a professor at the Wharton School of Business and one of the most perceptive writers on AI, has pointed out, we can learn much about our relations with AI (especially its agentic form) from present human organizations. From this view, my best guess is that our relation to AI systems will be that of a technical lead with a team of really smart workers who have sound knowledge and even judgement. It will still be the role of the technical lead to decide what to actually try to accomplish - not because of some mysterious ability to ask the right questions, but simply because goals are ultimately rooted in human needs and aspirations.

If indeed this is the human role, then I would argue that it will still be of great value to acquire knowledge and skills, including (maybe especially) in science and technology. A twist is that core scientific knowledge may become relatively more important than technological knowledge, since technology may change rapidly while the scientific laws of nature are eternal.

Before making the general case, I discuss some examples - real and fictional.

The Master Chef and the Pointy-Haired Boss

In Subroto Bagchi’s book The Professional, the author quotes a story narrated by David Ogilvy, the founder of a top advertising agency. Ogilvy began his professional life working under the head chef at the Majestic Hotel in Paris. The chef had a large number of assistants to do everything needed to prepare the dishes served by the restaurant. Still, to show that he could do it and to keep himself sharp, the chef would cook a dish himself from time to time.

The other side of this is the infamous Pointy-Haired Boss of the Dilbert comics, who manages a group of technical workers while understanding nothing of their work. Needless to say, much of what he does is daft.

Technology and the Limits of Science

In his marvellously entertaining book “Surely You’re Joking, Mr. Feynman!”, the physicist Richard Feynman relates an anecdote about an “innovative” solution proposed by an army officer. Since tanks frequently run out of fuel, the officer suggested that sand could be used as fuel; all that was left to the scientists was to work out the practical details! Since physicists had just figured out how to use uranium as a fuel, the idea may not be as daft as it first sounds, but it certainly had no chance of working.

In the other direction, when the rockets of SpaceX were being designed, Elon Musk’s reaction to being told that something was impossible (say, at a given cost, or under other constraints) was to ask for an argument from the principles of physics showing why it really was impossible. Indeed, many things turned out to be possible after all.

The CEO of the notorious company Theranos did the opposite. Theranos was based on the (as it turned out, fraudulent) claim that it could perform a whole range of medical tests with tiny amounts of blood in miniature equipment. A senior chemist at Theranos tried to argue that the laws of thermodynamics meant some chemical tests simply could not be performed this way, but was ignored.

The above cases are about possibilities limited by science. But just as important in the real world are judgements of difficulty. For instance, in an interaction in Bangalore, Demis Hassabis said he hoped that with the coming of AGI (which he expected in 4-8 years), the time to discover a drug could drop by an order of magnitude, from a decade to a few months, or optimistically even a few weeks. While a hopeful scenario, this is far more modest than the claims of many AI boosters that all diseases will be cured within two years! Part of the difference is simply Hassabis’s honesty, but much of it is that Hassabis actually understands what science involves.

Knowledge and Agency

AI is likely to reach and exceed expert human level in all cognitive tasks within a few years at most, and may well reach or exceed even genius human level - what is commonly called AGI/ASI (Artificial General Intelligence/Artificial SuperIntelligence). Yet it is important to realise that even ASI is very far from omniscience (knowing everything), let alone omnipotence (being able to do everything). The problems of both science and the real world are far beyond our current reasoning abilities, and many are limited not by cognition but by nature - biology, the energy required, thermodynamics, and so on. So there will still be a lot to be done.

It is frequently pointed out that as more and more tasks can be accomplished by AI, the most important human quality will be agency - bridging between what is worth doing in the real world and the actions needed to accomplish it. What I have argued here is that it will be very helpful for the person exercising this agency and directing the AI to have a good scientific, technical and real-world understanding. A person who knows whether something can be done in principle, and how much time and effort it will take (perhaps after optimistically assuming the necessary ideas come along the way), will be in a position to know where directing AI will bear the most fruit. This will allow the new powers to be used to accomplish a lot rather than frittered away on the impossible.

Such knowledge is not that different from what education, in the true sense, means today.

Practical lessons?

This post has been written more in the spirit of what can be, and maybe what should be. From the point of view of a young person today, I hope it is an argument for why they should take the trouble to develop their knowledge. But practically there is a more important consideration - while the cognitive skills of AI have already reached human expert level in many cases, our hands are far better at dealing with the physical world than those of robots. So learning something connected to the physical world - biology, chemistry, mechanical engineering, nursing, plumbing, carpentry - is much safer than a purely white-collar skill.