What is AI?
To understand what is coming our way, we first need to dig deeper into the meaning of that mysterious concept of AI, Artificial Intelligence. In short: it is a computer program that does the things you do. It may do them worse than you, as well as you, or better. And it may do so in one specific area, in several areas, or across a very wide range. The better the program works, the more "intelligent" we consider it.
The shortest possible definition is therefore: AI is software. It is our own thinking power that we have externalised - in the form of code - and put to work for specific purposes.
Take Excel: a marvel of computing power. We could do what Excel does ourselves, but we would need extra time for it. Excel is AI in one specific area, and it does its work faster and better (because error-free) than we do. In the English-language literature this is called an ANI, an Artificial Narrow Intelligence.
We are already surrounded by countless expressions of artificial intelligence (that is, ANIs), but we don't call them that. With the term artificial intelligence we usually refer to programs that can read, see, reason and think - in short, programs that can do roughly everything we can do, and do it no worse than we do. In theory this is called an AGI, an Artificial General Intelligence. The more ANIs there are, the smaller the jump to an AGI becomes. Google's self-driving car is such a combination of ANIs: many smart systems that together form a new system. But they remain ANIs; more is needed for the jump to an AGI.
That leaves us with the task of simplifying the field and bringing it into day-to-day operations, with simple machine learning for everybody. Only then can we ask people to buy into the new and evolving field of AI.
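To make "simple machine learning for everybody" concrete, here is a minimal sketch of one of the simplest learning methods there is: a 1-nearest-neighbour classifier in plain Python. The weather data and labels are invented for illustration; a real application would use a library such as scikit-learn.

```python
# A minimal sketch of "simple machine learning": classify a new point
# by copying the label of the closest example seen so far.

def nearest_neighbour(train, labels, point):
    """Return the label of the training point closest to `point`."""
    def dist(a, b):
        # Squared Euclidean distance is enough for comparing closeness.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(train)), key=lambda i: dist(train[i], point))
    return labels[best]

# Toy data: (hours of sunshine, rainfall in mm) per day.
train = [(8, 0), (7, 1), (1, 12), (2, 9)]
labels = ["dry", "dry", "wet", "wet"]

print(nearest_neighbour(train, labels, (6, 2)))  # → dry
```

The whole "model" is the stored examples: there is no training step, which is exactly why this kind of method is a good first encounter with machine learning.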
Who we are and what we do is increasingly public property. The underlying mechanism is data collection, or Big Data. This raises new moral questions. What role can ethics play? Last year, Google's AlphaGo computer defeated a Chinese grandmaster in the extremely complicated board game Go - a breakthrough for artificial intelligence (AI). According to Prof. Dr. Ronald Jeurissen, professor of business ethics, and Jan Veldsink MSc, core lecturer in AI and Cyber at Nyenrode Business University, AI is playing an increasingly important role.
The impact of AI will leave no one indifferent. AI is causing a landslide: power, income and knowledge fall into the hands of those who control the data. These are the 'Zuckerbergs', but also the government. The individual seems to be drawing the short straw, and the concept of a free society is at stake. On the other hand, awareness is growing: the Pirate Party and Bits of Freedom are raising a warning finger. Ethics demands alertness. That is why it is important that ethics be anchored within business. At Rabobank, for example, it is the ethics committee that measures technological decisions against the ethical yardstick. Expertise is necessary for a fine-grained moral compass. 'Every company should appoint a Chief Ethics Officer to formulate the right search questions,' says Jeurissen.
Big Data is at odds with our deep need to keep control over our own lives. In the ethical discussion, privacy is firmly at the top of the list. Protection against invasion of privacy takes on an extra dimension now that technology can follow us everywhere. For companies, this offers an opportunity to go further in personalized recommendations. But who owns that mountain of data? The system is out of balance, in favor of the corporates. Who has my data, and more importantly, what happens to it? The consumer is groping in the dark, partly because much of this happens at an aggregated, invisible level. 'The right to property must be overhauled; it must rest with the individual,' says Jeurissen.
According to Jeurissen and Veldsink, data privacy leads to new issues, such as behavioral choice. Do you skip a visit to a dating site because your (potential) employer might be watching? People tend to adjust their behavior to the safe middle, which is a restriction of freedom. Big Data also promotes a so-called hyper-identity, because it recognizes patterns in digital click behaviour. If a vegetarian looks up certain information or products, for example, he will receive feedback that matches his existing preferences. In this way you look more and more into your own mirror - call it 'The Daily Me'. According to Veldsink, this is an impoverishment; “a technological society needs a diversity of opinions.”
The key question, according to Veldsink and Jeurissen, is: how do I relate to technology as a person? In Big Data, the human dimension has been lost. Companies sail on the golden principle of customer trust, but that line is wearing thin. The consumer swallows data collection without question, but also has the freedom to swap search engine Google for DuckDuckGo. Jeurissen's advice: be crystal clear about your data policy, and as a company you will increasingly have an advantage.
According to Veldsink, companies are already making industry agreements about data policy. The next step would be to set up a general quality mark, and to consult consumers in a panel. That way you can rise above the developments and look at the impact of AI on society. This future technological society does not have to be a worse one; assuming so would not do justice to wonderful AI applications such as medical diagnostics and treatment, and smart farming.
'It is enlightening to see that we are in a power game of three questions: is it worth it, is it allowed, and is it appropriate? The so-called triangle of market, law and ethics,' according to Jeurissen. The legislation on organizations' data policy is strong. In May 2018, for example, the General Data Protection Regulation (GDPR) will apply, which means that from that date there will be one privacy law for the entire European Union. Among other things, this regulation stipulates that data may not be processed without explicit permission. However, legislation is by definition a reactive instrument and cannot cover everything.
Staying critical of AI
'It is therefore essential that the parties in the force field enter into a dialogue with each other,' Veldsink concludes. Technology is getting smarter; devices can give answers that fit the context. Consider, for example, Amazon's Alexa: handy for all kinds of facts, but in the meantime it is listening in the house. We should ask questions about that. The same goes for superintelligent computers, which transcend human ingenuity. Should the computer serve and obey man?