Three artificial intelligence experts discussed their creations and, among other things, legal identity for intelligent robots, whether AI can render humans obsolete and AI’s shortcomings, at WIEF’s 7th Global Discourse 2017 in Kuala Lumpur.*
Professor Zyed Zalila of Intellitech in France believed AI would lead to the disappearance of a large number of skilled occupations. Professor Nadia Thalmann of Nanyang Technological University in Singapore, who built Nadine, a robot in her likeness, disagreed. ‘I don’t think so at all. AI with deep learning methods works very differently from how we think ourselves. We’re more adaptive and subtle and more aware of everything. Robots using AI are tools to help us but it’ll not make us obsolete at least for decades, if at all,’ she said.
Nadia’s 25-year interest in social robots gave rise to Nadine, through whom she continues to ‘model humans and understand their behaviour and show a result on virtual humans or social robots [such as Nadine].’
Zyed, on the other hand, said, ‘AI will undoubtedly lead to the disappearance of a large number of skilled or even highly skilled occupations based on expertise. In 2015, the big international banks quickly realised that the new generation of customers required premium online banking services at a low price but with maximum availability. That’s why they’re implementing their digital transformation at high velocity. Job losses happened because AI is starting to be sensible.’
This will, he predicted, happen in other industries as well. ‘I imagine future medical practitioners will no longer be selected on their mere ability to learn knowledge by heart. Instead, they’ll become engineers capable of using AI systems specialised in medical diagnosis but have a high level of empathy or emotional intelligence to detect the emotions of their patients.’
It would be similar for the world of finance and insurance, predicted Zyed, ‘where the human advisor, actuary and trader may be replaced by intelligent robots that are more efficient and available 24/7 on smartphones’. With autonomously driven vehicles, people ‘won’t want to buy a vehicle. Instead, they’ll lease the services of a virtual driver. Also, the legal sector may have a virtual judge robot that downgrades the noble profession of lawyers by predicting legal decisions.’
Zyed continued, ‘But can we accept a “virtual investigator” who’ll anticipate a person preparing a crime or an offence and can stop it before it happens? The foundation of justice, at least the French one, will be completely disrupted! Another danger would be the malicious misappropriation of AI. Do we want our future employer or insurer to know about the future diseases we may develop through analysis of our genome and living environment? Can we accept that parents decide a voluntary abortion just because a robot predicted their child will develop several cancers by age 40?’ Can we? That’s the question.
Using human preferences to make decisions
Incredulity aside, AI does have its shortcomings. According to Professor Anton Nijholt of the University of Twente in the Netherlands, Google, Facebook, Twitter and other social media platforms use our data, and their knowledge of our interests and our interaction behaviour, to tell us, using AI technology, who we are and what our interests are. ‘We may not always agree with it. We may want to tell the AI that interacts with us that our assessment of a situation allows ambiguity, that there’s not always a rational decision and that we don’t always agree with decisions that are made for us using AI research and technology,’ he said.
We’ll always need AI. ‘It’s part of science and technology and there’s interest in AI research because its results can be employed in existing or new applications that increase efficiency in professional environments, add to safety in public and domestic environments and can be used to make such environments more liveable and more human-friendly,’ Nijholt added.
‘But,’ he reminded, ‘we should be interested in it because AI-guided social robots may know about our preferences but we should be aware how they use our preferences in order to make decisions. We need to be aware of how AI technology embedded in our smart environments and our devices makes decisions for us or persuades us to make certain decisions. Do we really agree with the decisions that are made for us? People need to be aware of shortcomings of AI and possible, future dangers when there is no attention for “controlling” AI.’
Legal identity for intelligent robots
Zyed created xtractis® because it seemed necessary to propose an AI capable of doing the same work as a human scientist, one ‘who sets up equations to model a process based on a set of observations and an inductive reasoning that he leads through cognitive abilities,’ he explained. To that end, Zyed combined the theory of fuzzy relations of order N with proprietary machine-learning algorithms, which allowed the creation of the intelligent robot xtractis®.
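xtractis® is proprietary and its algorithms are not public, but the fuzzy-logic side of such a combination can be illustrated with a minimal sketch. The example below is entirely invented for illustration: triangular membership functions grade an input against fuzzy categories, and a weighted average of rule outputs (Sugeno-style defuzzification) produces a decision — it is not a description of xtractis® itself.

```python
# Toy fuzzy inference: grade an income against fuzzy "low"/"high" categories
# and combine two invented rules into a risk score. Illustrative only;
# unrelated to xtractis(R)'s actual proprietary algorithms.

def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def risk_score(income):
    """Two fuzzy rules: low income -> high risk (0.9), high income -> low risk (0.1)."""
    low = triangular(income, 0, 20, 50)     # degree of membership in "low income"
    high = triangular(income, 30, 80, 120)  # degree of membership in "high income"
    total = low + high
    if total == 0:
        return 0.5  # no rule fires: return a neutral score
    # Weighted-average (Sugeno-style) defuzzification over the fired rules
    return (low * 0.9 + high * 0.1) / total
```

With these made-up membership ranges, an income of 20 fully matches ‘low’ and yields a risk of 0.9, an income of 80 fully matches ‘high’ and yields 0.1, and intermediate incomes blend both rules in proportion to their membership degrees.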
‘It does the same job as a scientist, except that it’s absolutely not limited by the number of simultaneous variables that it’s capable of analysing and exploring. It’s also able to estimate the robustness of new knowledge it discovers,’ he elaborated. For Zyed, giving legal identity to intelligent robots would allow a regulator to evaluate their performance objectively, so that it could be officially recognised in the same way as a human’s diploma or expertise. ‘This will allow them to be co-authors of publications that present their findings,’ he said.
Another worrying prospect is the development of autonomous robots granted permission to kill, such as robot soldiers, smart drones and smart tanks, deployed to avoid endangering human soldiers. Who, then, will be responsible for military errors on the battlefield?
If a company exploits the intelligence of its robots to create value and money, why shouldn’t it pay social charges and taxes on these robots? If an intelligent robot is the inventor of a major discovery, who’ll be the author: the robot itself, its designer or its operator? ‘Many of these questions remain unanswered. This is because if we regulate too much, we’ll eventually kill the innovation and the development of AI and if not regulated, the population can be duped by having a blind confidence in a technology that’s not mature and, therefore, unreliable,’ he said.
Future of AI
‘As such, not really a future, except that AI methods will be used more and more and become more efficient all the time. There’ll hardly be a new method developed without using AI algorithms and deep learning,’ Nadia said.
For Zyed, the plan was to extend the cognitive abilities of xtractis®. ‘Today, it can be considered a universal solver of predictive-modelling problems for complex processes and phenomena. But it can only deal with qualitative and quantitative structured data on the process to be studied,’ he explained. ‘By 2018, we’re planning an important extension that’ll enable xtractis® to take advantage of unstructured data such as free text or speech, allowing it to evaluate the robustness and veracity of the text it analyses. And I hope that during my lifetime, xtractis® will be the first intelligent robot to win a Nobel Prize for all its discoveries and all the economic, societal, scientific and technological advances it has brought to humanity.’
When it came to human-level intelligence, Anton believed that artificial general intelligence requires more than simulating human-like behaviour and human-level intelligence in very restricted domains. ‘In recent years there have been lots of discussions, usually not initiated by more down-to-earth AI researchers, about AI that surpasses human-level intelligence and becomes a threat to humanity, because these super-intelligent systems will pursue their own goals and these goals are not necessarily compatible with human survival. This has led to discussions about how AI can be controlled and how we can take care that future AI remains human-friendly,’ he said. In other words, wait and see.
*Read more from panellists of WIEF’s Global Discourse on A.I., Professor Nadia Thalmann, Professor Dr Zyed Zalila and Professor Anton Nijholt or find out more about the 7th WIEF Global Discourse and other WIEF events from the WIEF Foundation Report 2017.