Artificial Intelligence (AI) is a topic that is frequently discussed in the media. These media stories usually fall into two categories:

  • The optimistic stories: The possibilities of AI are limitless and its benefits enormous. It will all happen much sooner than you think; no, actually, “it’s here now” (whatever that may mean);
  • The pessimistic stories: AI is going to disrupt every business. Many people will be out of a job. Some even predict the robot apocalypse.

Although these two extremes make for sensational stories, they simply cannot be true. I will demonstrate that AI cannot solve every problem and replace every job, because there will always be people working alongside the AI.

The AI doesn’t know what to do; you do

The great thing about AI is that it teaches itself, but most people forget that it does not do so from nothing. You need to supply a training set, baseline, reference, or whatever you want to call it, and that data needs to be supplied and vetted by humans. Some people argue that you can have the training done by an AI that acts as a teacher, but that only moves the problem: the teacher-AI still needs a training set of its own. And even if it learns from real-life data… that is still a training set.
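As a minimal illustration of this point: even a model that “teaches itself” only does so from examples a human supplied and labelled. The sketch below uses scikit-learn, and the data and labels are hypothetical.

```python
# A model "teaches itself" a rule, but only from a human-vetted
# training set: someone had to choose the inputs and decide the labels.

from sklearn.tree import DecisionTreeClassifier

# Human-supplied training set: inputs plus the "correct" answers.
X_train = [[0, 0], [0, 1], [1, 0], [1, 1]]
y_train = [0, 1, 1, 0]  # a human decided what the right outcome is

model = DecisionTreeClassifier()
model.fit(X_train, y_train)      # the "self-teaching" step

print(model.predict([[1, 0]]))   # -> [1], learnt from the human labels
```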

These training sets need to be checked for:

  • Completeness: If you train an AI on too narrow a set, the AI will behave strangely once the inputs move beyond that initial training set;
  • Bias: If you train an AI with a biased training set (e.g. previous hires that were biased towards men), your AI will adopt that bias (i.e. hire only men);
  • Correctness: For every training set, it needs to be established what constitutes a good and a bad outcome. This was an issue with Microsoft’s AI chatbot Tay, which quickly learnt many swearwords and vulgarities because nobody had taught it that this was “bad”. Making these choices is ultimately a question of ethics and philosophy.

For all the points above, a computer programme can help you with the analysis, but the final judgement is something only a human can make.
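A minimal sketch of that division of labour, using pandas (the hiring dataset and column names are hypothetical): a programme can measure skew in a training set, but deciding whether that skew is acceptable remains a human call.

```python
import pandas as pd

# Hypothetical training set for a hiring model.
data = pd.DataFrame({
    "gender": ["male", "male", "male", "female", "male", "female"],
    "hired":  [1, 1, 0, 0, 1, 0],
})

# Bias check: compare historical hire rates per group.
print(data.groupby("gender")["hired"].mean())

# Completeness check: how well is each group represented at all?
print(data["gender"].value_counts(normalize=True))

# The numbers can flag a skew; only a human can judge whether the
# historical data reflects a bias the AI should not learn.
```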

Who checks the AI?

As the Roman poet Juvenal asked: “Quis custodiet ipsos custodes?” – who watches the watchmen? As AIs start to make more and more decisions, who will verify that those decisions are correct? It is not for nothing that Pega provides an inspection option (what they call “the transparency button”) for their AI module.
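To make that idea concrete, here is a generic sketch of what such an inspection might look like. This is not Pega’s actual transparency mechanism; it simply uses scikit-learn feature importances on hypothetical data to show the kind of question an inspector asks.

```python
# Inspecting a model: which inputs drive its decisions? The programme
# reports the numbers; a human judges whether those drivers are acceptable.

from sklearn.ensemble import RandomForestClassifier

feature_names = ["years_experience", "interview_score", "referral"]
X_train = [
    [1, 6, 0],
    [5, 8, 1],
    [3, 4, 0],
    [7, 9, 1],
    [2, 5, 0],
    [6, 7, 1],
]
y_train = [0, 1, 0, 1, 0, 1]  # hypothetical human-made hiring decisions

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```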

“An AI is only as good as the input it’s given. So if you currently don’t know what you are doing, neither does your AI.”

So, agreed: you need someone (a human) to control the AI. But if you think you can control the AI as if you were its manager, you are sorely mistaken. Even introducing something as simple as an emergency stop button is problematic, as the YouTube channel Computerphile points out. In a series of videos ([1] [2] [3] [4] [5] [6]), they explain the many issues with AI safety, starting with a robot that fetches tea and ending with that robot running over a baby.

Instead, you need someone to work with the AI: someone who understands how it works, can genuinely check whether it is operating correctly, and can verify that it has adapted correctly to any changes that occur.

Not a silver bullet

I have no doubt AI will change the future; however, I have a hard time believing the hype stories in the media. In my view, AIs will become workers that work alongside human workers, because they need to learn from them. As AIs become more intelligent, this dependency will lessen, but it will never fully disappear.

Remember: an AI is only as good as the input it’s given. So if you currently don’t know what you are doing, neither does your AI.

Curious about working at BPM Company?

Take a look at our vacancies or ask Hans Steenwijk.
