Opinions & People

Staying In Focus with… Professor Anton Nijholt

by Samar Almont

You should be aware of AI’s shortcomings, why we need it and how it uses our preferences to make decisions, recommends Professor Anton Nijholt, a Dutch expert on the subject and a panellist at WIEF’s Global Discourse on artificial intelligence this May.

Professor Dr Anton Nijholt,
Computer Scientist at University of Twente, Netherlands and Global Research Fellow at Imagineering Institute, Johor, Malaysia

Define artificial intelligence in less than five words.
Understanding human abilities and beyond.

Why does AI interest you?
In my case, interest started when I was writing a book on computers and languages: what kind of intelligence is needed for a computer to understand natural language; how can formal and model-based approaches to natural language help us understand language use; how can a computer translate from one language to another?

How has this interest evolved?
Later, my interest shifted to human-computer interaction in general, sparked by various projects funded by the European Union in which we researched human verbal and nonverbal behaviour in face-to-face and multi-party interaction.

Machine learning methods were used in attempts to determine affective (emotional) or cognitive states from such behaviour. Being able to recognise emotions in humans and to generate emotional behaviour in robots or virtual agents is a branch of AI that doesn’t really focus on solving complex problems, although one can argue that human-like modelling of common sense and emotional behaviour is more difficult than well-defined problems such as chess and Jeopardy.

How do we use AI in our daily life?
The interesting thing is that the more capable computing devices become at a certain task, the less willing we are to say that the task requires intelligence. AI, though more often labelled as computer vision, multimodal interaction, machine learning or animation, has become part of video games, entertainment, home and car environments, and wearables such as smartphones and sometimes watches, glasses, health and fitness devices and smart clothes.

What will you be bringing to the table at the Global Discourse on AI on 15 May 2017?
Currently, in computer science we use AI methods to make applications a little more intelligent. Usually, in this so-called weak AI, the ‘intelligence’ is limited to a narrow domain, and rather than the system making decisions on its own, some human intelligence has to be added in order to use its results in an appropriate way. But we clearly see a trend towards autonomous decision-making by integrating various AI techniques, moving in the direction of strong AI, where we explicitly attack the problem of building a system with human-level or human-like intelligence.

In computer science, in particular in human-computer interaction research, the usual starting point is modelling human behaviour in a particular situation, for example verbal and nonverbal behaviour in human face-to-face communication, and then using those models in an attempt to realise more natural interactions between humans and computers such as robots and virtual agents. But artificial general intelligence requires more than simulating human-like behaviour and human-level intelligence in very restricted domains.

In recent years there have been many discussions, usually not initiated by the more down-to-earth AI researchers, about AI that surpasses human-level intelligence and becomes a threat to humanity. The worry is that such superintelligent systems will pursue their own goals, and these goals are not necessarily compatible with human survival. This has led to discussions about how AI can be controlled and how we can ensure that future AI remains human-friendly.


Why?
Because it is worthwhile to know more about modelling human behaviour in order to pave the way for human-friendly AI.

You see, discussions about the future of AI focus on intelligence, where intelligence usually refers to super-intellectual capabilities: logical reasoning and being able to process enormous amounts of information in a very short time in order to make decisions. It may well be that all of that is also necessary to understand and simulate human behaviour, or to have AI that can interact with humans, understand human behaviour and maybe act like a human.

And your observations on AI development in Malaysia compared to the Netherlands?
In the Netherlands, there’s a very strong background in knowledge representation formalisms, in computational linguistics with applications in natural language processing, and in language modelling research in multimodal application scenarios, including interactions in virtual environments.

I’m not fully familiar with the Malaysian situation but I’ve seen language and speech processing activities, as well as machine learning and rule-based artificial intelligence activities, in Malaysia. It’s not always easy to determine how important certain developments are. Research at the Imagineering Institute in Johor Bahru is different from what we see in many other countries. Its research approach focusses on the future of the internet, where scent, taste and touch experiences need to be modelled, understood and generated in order to have them mediated through the internet.

Clearly, such experiences are essential in friendly and affective human-human interaction. Modelling them is no less important than being able to model intellectual capabilities.

What, in your opinion, is the future of AI in Malaysia?
The future of AI in Malaysia will not necessarily differ from the future of AI elsewhere in the world but we should also be aware of certain differences. Many new cities and city environments are appearing while at the same time traditional city environments remain and are appreciated.

Malaysia has universities and institutes that are able to attack AI problems in fresh ways that are more original than traditional views and approaches to AI, for example: how can we introduce AI and smartness into the new cities and urban environments that will appear? Can this smartness be introduced in such a way that the urban environments become or remain child-friendly at the same time? Can AI play a role in the development of children’s social and physical skills? Can we make cities that are not only ‘smart’ and efficient, but also playful, inviting friendly social behaviour and interactions among city dwellers?

Why should people be interested in AI?
[First of all], current AI research focusses on machine learning approaches to ‘big data’. Collecting data to enable such approaches will become a main issue in AI research. Google, Facebook, Twitter and other social media will use our data and knowledge about our interests and interaction behaviour to tell us, using AI technology, who we are and what our interests are. We may not always agree with it. We may want to tell the AI that interacts with us that our assessment of a situation allows ambiguity, that there is not always a rational decision, and that we don’t always agree with decisions that are made for us using AI research and technology.

[We should be interested because] AI-guided social robots may know about our preferences, but we should be aware of how they use our preferences to make decisions. We need to be aware of how AI technology embedded in our smart environments and devices makes decisions for us, or persuades us to make certain decisions. Do we really agree with the decisions that are made for us? People need to be aware of the shortcomings of AI and of possible future dangers when no attention is paid to ‘controlling’ AI.

Why do we need AI?
Why do we need science and technology? AI is part of science and technology, and there is interest in AI research because its results can be employed in existing or new applications that increase efficiency in professional environments, add to safety in public and domestic environments, and can make such environments more liveable and more human-friendly.

Read more from other panellists of WIEF’s Global Discourse on A.I., Professor Nadia Thalmann, and Professor Dr Zyed Zalila.
___________________
Find out a little more on A.I.

April 13, 2017