Is artificial intelligence already here, or is it just machine learning?

Source: eKapija Tuesday, 29.10.2019. 11:02
Kriti Sharma (Photo: Kaspersky)
It's one thing what a machine recommends we do, and quite another how we interpret it. That is why I am deeply convinced that we are now at a stage where we shouldn't be debating what artificial intelligence (AI) is and what machine learning is. Instead, we should be trying to prevent algorithms that would harm humankind, as has happened before, Kriti Sharma said during her presentation at this year's NEXT conference, organized by the Russian antivirus company Kaspersky.

The conference, held in Lisbon, brought together experts from the fields of AI, robotics, machine learning, cyber security, as well as journalists from Europe, including a representative of eKapija.

Kriti Sharma is the founder of the AI for Good platform and one of the leading voices on the global AI scene, its implementation and, most importantly, its ethical side. She has dedicated her career to examining how AI can help solve current everyday problems. Forbes magazine included her on its famous “30 Under 30” list. Three years ago, she was included on the “Recode 100” list of the most influential people in the world of technology, right next to Elon Musk, Mark Zuckerberg, Jeff Bezos... She is currently a vice president at Sage Group, one of the leading British tech companies.

– Artificial intelligence needs to be aligned with ethical standards. In my opinion, it is very important for countries not to see this field as a kind of competition over who's first in AI development – China, the USA, Japan, India... All ethical principles for the use of AI should be coordinated globally, which is very difficult – Sharma emphasized.

Last year, Sharma created the rAInbow platform, whose purpose is to help victims of domestic violence in South Africa, and which has proven to be a successful example of AI in practice. As she says, in that country one in three women experiences violence at home, at the hands of a partner. The team wanted to create a tool which would help women share their problems, get advice and thereby stop the violence. They were surprised to see how much more open women were to talking about their problems with “someone behind the screen” than in person. That level of trust convinced them that they had built a good tool.

– When we talked to the victims on site, we wondered why they hadn't reported the violence earlier. We saw, for example, that by the time the violence was reported, the victim had been exposed to it 35 times on average. When we put together all the data and the facts we had collected, we discovered several patterns – the women were ashamed to admit that there had been violence in the family, they feared being told that they might have provoked it, and they would be referred to a phone service which operated between 9 am and 5 pm, after which they would only get a voice message. I wonder, how is it that, in an era of highly developed technology, we're not using it to solve some very important issues in the society we live in?

In addition to rAInbow, the team of experts gathered by Sharma has created an online tool for informing young people in India about reproductive health.

– Last year, we carried out more than a million interactions on our platforms, which is an enormous number. We saw that 50% of those who use our tools are ready to take action and prevent bad situations from happening. I therefore emphasize that our idea is to use AI for the public good.

Although the use of machine learning and AI is apparent worldwide, Sharma is aware that opinion is divided when it comes to this field.

– There are plenty of prejudices and there is a lot of fear, which could be a consequence of a lack of knowledge or of stereotypes created by Hollywood. Scientists have realized that they need to include ordinary people in the use of AI. It has turned out that, by participating, people free themselves of their fears and recognize its practical value. This is in fact the big challenge for those of us who work with such technologies – how to create an accurate perception.

Alexey Malanov (Photo: Kaspersky)
Sharma also points out that large companies are giving this topic more and more consideration, in contrast to the earlier, widespread perception of AI as something used mainly for production automation or face recognition technology.

– At the moment, AI is truly less expensive than it was five or ten years ago, but our ability to use it to influence society is not a simple issue, because you need to have an investor. That is an opportunity for young people to launch their businesses, which eventually has a positive impact on the economy as well. Of course, we need knowledge to create good tools, but knowledge is more available today than it used to be. We have, for example, brought together young people of about 15 years of age around the FutureMakers platform. They create algorithms in the fields of climate change, medicine etc. – says Sharma.


When will the real AI arrive?

Alexey Malanov of Kaspersky openly said at the beginning of his presentation at the NEXT conference – there is no such thing as artificial intelligence.

– There's the so-called strong artificial intelligence and weak AI, that is, machine learning. Strong AI has consciousness and is able to think and solve tasks... Machine learning means that the machine carries out a certain task the way we taught it. The fact is that strong AI does not exist, and we don't know precisely when it will arrive. Some scientists believe it will take around 50 years, in the USA they say 70, whereas in Asia they believe it will happen 30 years from now – Malanov said.

As for machine learning, he says, it's what we have for now.

This is, in fact, creating programs without programming. What could go wrong? Systems can be created with bad intentions, such as autonomous drones that can kill people. Ethics are only one dimension of the use of machine learning. The biggest challenge for scientists is that what's ethical and allowed in one country is not necessarily ethical and allowed in another. Some countries don't allow face recognition technology, while others do.
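
For readers wondering what "creating programs without programming" looks like in practice, below is a minimal sketch, not taken from Malanov's talk: a tiny perceptron in Python that learns its decision rule from labelled examples instead of having the rule written by hand. The data, feature names and thresholds are invented purely for illustration.

# Hypothetical labelled examples, invented for illustration:
# (hours of suspicious activity, number of failed logins) -> label
training_data = [
    ((0.1, 0), 0),   # benign
    ((0.2, 1), 0),   # benign
    ((3.0, 7), 1),   # malicious
    ((2.5, 9), 1),   # malicious
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    # The learned rule: a weighted sum compared against a threshold
    score = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if score > 0 else 0

# Training: the decision rule emerges from the examples, not from the coder
for _ in range(20):
    for x, label in training_data:
        error = label - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print(predict((2.8, 8)))   # expected: 1 (flagged)
print(predict((0.15, 0)))  # expected: 0 (not flagged)

The point of the sketch is that the programmer never writes the final if/else behaviour; it emerges from the weights the training loop extracts from the data, which is exactly why such systems need the kind of oversight Malanov describes.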

Malanov also points out that algorithms influence our everyday decisions today, from movie recommendations based on our preferences to more delicate situations.

– Algorithms make recommendations, but it is the human being who ultimately makes the decision. A human is still smarter than an algorithm.

Prejudices need to be overcome

In addition to the principles guiding the implementation of AI, Sharma also addressed the importance of gender equality in this field and of including various parts of society. A team consisting of psychologists, scientists, sociologists and anthropologists can, as she says, create very useful tools. Algorithms themselves also need to be tested to see whether they are based on prejudices; recruiting tools, for example, are more likely to pick men over women, or white people over black people.
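
As an illustration of the kind of testing Sharma describes, here is a minimal sketch in Python of one common bias check for a recruiting tool: comparing selection rates across groups. The candidate records, the group labels and the use of the "four-fifths" rule of thumb are assumptions added for illustration, not part of Sharma's work.

from collections import defaultdict

# Hypothetical output of a hiring model, invented for illustration:
# (group, was the candidate selected by the model)
decisions = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", False), ("women", True), ("women", False), ("women", False),
]

selected = defaultdict(int)
total = defaultdict(int)
for group, picked in decisions:
    total[group] += 1
    if picked:
        selected[group] += 1

# Selection rate per group
rates = {group: selected[group] / total[group] for group in total}
print(rates)  # {'men': 0.75, 'women': 0.25}

# Rule of thumb (the "four-fifths rule"): flag the model if any group's
# selection rate falls below 80% of the best-treated group's rate
highest = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * highest:
        print(f"Possible bias against {group}: {rate:.0%} vs {highest:.0%}")

A check like this only reveals unequal outcomes; deciding whether those outcomes are justified still requires exactly the mixed team of psychologists, sociologists and anthropologists that Sharma mentions.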

– I believe that, one day, AI technology will overcome inequalities and entrenched prejudices. AI definitely reflects the time we live in, what we see around us, and historic prejudices and circumstances – Sharma concluded.

Teodora Brnjos