Maadico

Artificial intelligence and Cars: Will humans be replaced?

September 15th, 2021
Automotive and Assembly

Humans expect artificial intelligence to be benevolent and trustworthy. At the same time, a new study shows that humans are reluctant to cooperate with machines – and are even willing to exploit them.

Imagine that, in the near future, you are driving down a narrow road when another car suddenly appears around the corner ahead. It is a self-driving car with no passengers inside. Do you push forward and assert your right of way, or do you give way and let it pass? Most of us behave kindly in such situations when other humans are involved. Will we show the same kindness to machines?

The scenario is familiar to anyone who has driven on a narrow, crowded street: parked cars line both sides, leaving too little room for vehicles to pass in both directions. One driver must either dart into a gap between the parked cars or slow down and pull over so the other can get through.

Human drivers find ways to negotiate such squeezes, though not without close calls and frustration. Teaching an autonomous vehicle (AV) to do the same – without a human behind the wheel, and without knowing what other drivers might do – posed a unique challenge for researchers at the Argo AI Center for Autonomous Vehicle Research at Carnegie Mellon University.

Using methods from behavioral game theory, an international team of researchers at LMU Munich and University College London ran large-scale online studies to see whether people cooperate with AI systems as readily as they do with fellow humans.

How artificial is artificial? 

Cooperation holds society together. It often requires us to compromise with others and accept the risk that they will let us down. Traffic is a good example: we lose a little time when we let others pass, and we get angry when others fail to return the favor. Will we do the same with machines?

“It’s an unwritten rule of the road,” says Christoph Keeling, a former visiting researcher at the Robotics Institute in the School of Computer Science, now part of the Autonomous Aerial Systems Lab at the Technical University of Munich. “You have to learn to negotiate this scenario without knowing whether the other car is going to stop or go.”

While at Carnegie Mellon, Keeling worked closely with research scientist John Dolan and his doctoral student Adam Villaflor on this problem. The team presented its paper, “Learning to Negotiate the Use of Two-Way Routes in High-Conflict Driving Scenarios,” at the International Conference on Robotics and Automation.

The team believes their research is the first to study this particular driving scenario. It requires drivers – human or not – to cooperate to pass each other without knowing what the other is thinking. Drivers must balance aggression and cooperation: an overly aggressive driver who simply plows ahead without regard for other vehicles endangers themselves and others, while an overly cooperative driver who always yields may never make it down the road.

“I’ve always found this an interesting and sometimes challenging aspect of driving in Pittsburgh,” says Dolan.

Autonomous vehicles have been touted as a potential solution to current delivery and transportation challenges. But before these machines can deliver pizzas, packages, or people to their destinations, they must be able to navigate tight spaces around human drivers whose intentions are unknown.

Exploiting machines without feeling guilty

The study, published in the journal iScience, found that at first encounter people place the same trust in AI as in humans: most expect to meet someone willing to cooperate.

The difference comes afterwards. People are much less willing to reciprocate with artificial intelligence, and instead exploit it. Returning to the traffic example: a human driver will give way to another human, but not to a machine.

The study identifies this unwillingness to compromise with machines as a new challenge for the future of human–AI interaction.

Dr. George Carpus, a philosopher and behavioral game theorist at LMU Munich and first author of the study, explains: “We modeled different types of social encounters and found a consistent pattern. People expected artificial agents to be as cooperative as their human counterparts. However, they did not return their kindness as much, and exploited the AI more than they exploited humans.”

Drawing on game theory, cognitive science, and philosophy, the researchers identify “algorithm exploitation” as a robust phenomenon. They replicated their findings across nine experiments with nearly 2,000 human participants.


The Carnegie Mellon team developed a way to model different levels of driver cooperation – how likely a driver is to stop and let another driver pass – and used those models to train an algorithm that could help autonomous vehicles navigate this scenario safely and efficiently. So far the algorithm has only been used in simulation, not on real vehicles, but the results are promising: the team found that it outperformed current approaches.
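The core idea – drivers characterized by a yield probability, and outcomes that depend on how those tendencies interact – can be illustrated with a toy Monte Carlo sketch. This is a hypothetical simplification for intuition, not the team’s actual model; the parameter names and outcome categories are assumptions:

```python
import random

def narrow_road_encounter(p_yield_a, p_yield_b, rng):
    """One encounter at a single-lane gap between two drivers.

    Each driver independently yields with their own probability.
    Exactly one driver yielding -> a smooth pass; both yielding ->
    mutual delay; neither yielding -> a nose-to-nose conflict.
    """
    a_yields = rng.random() < p_yield_a
    b_yields = rng.random() < p_yield_b
    if a_yields != b_yields:
        return "smooth"
    return "delay" if a_yields else "conflict"

def simulate(p_a, p_b, n=10_000, seed=0):
    """Estimate outcome frequencies for two cooperation levels."""
    rng = random.Random(seed)
    counts = {"smooth": 0, "delay": 0, "conflict": 0}
    for _ in range(n):
        counts[narrow_road_encounter(p_a, p_b, rng)] += 1
    return {k: v / n for k, v in counts.items()}

# Two moderately cooperative drivers pass smoothly about half the time;
# two aggressive drivers (p = 0.1) mostly end in conflict.
print(simulate(0.5, 0.5))
print(simulate(0.1, 0.1))
```

The sketch makes the article’s trade-off concrete: pushing both yield probabilities toward zero maximizes each driver’s chance of going first, but drives the conflict rate toward one – exactly the aggression/cooperation balance the algorithm must learn.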

Driving is full of complex scenarios like this one. As researchers tackle them, they look for ways to ensure that the algorithms and models developed for one scenario, such as merging onto a highway, also work for others, such as changing lanes or turning left against traffic at an intersection.

“Extensive testing is bringing to light the corner cases that remain,” Dolan said. “We keep finding these corner cases and keep looking for ways to handle them.”

Each experiment examined a different type of social interaction and let humans choose whether to compromise and cooperate or to act selfishly. The players’ expectations of each other were also measured. In the well-known Prisoner’s Dilemma, for instance, players must trust that the other party will not let them down. People took that leap of faith with humans and AI alike – but they betrayed the AI’s trust far more often, in order to pocket more money.
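The payoff logic behind that exploitation can be sketched in a few lines. The numbers below follow the textbook one-shot Prisoner’s Dilemma convention (temptation > reward > punishment > sucker), not the study’s actual stakes:

```python
# Standard one-shot Prisoner's Dilemma payoffs: (my points, partner's points).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual reward
    ("cooperate", "defect"):    (0, 5),  # sucker vs. temptation
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual punishment
}

def play(me: str, partner: str) -> tuple[int, int]:
    """Return the payoff pair for one round."""
    return PAYOFFS[(me, partner)]

# The study's pattern in miniature: if you expect your partner to
# cooperate no matter what (a "benevolent" AI), defecting pays more.
print(play("cooperate", "cooperate"))  # (3, 3)
print(play("defect", "cooperate"))     # (5, 0) -- exploitation pays
```

The asymmetry is the whole point: once a player is confident the machine will cooperate, defection strictly dominates – which is why participants exploited the AI even though they would have cooperated with a human.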

Cooperation is sustained by a mutual bet: I trust you will be fair to me, and you trust I will be fair to you. “The biggest worry in our field is that people will not trust machines. But we showed that they do!” notes Bahador Bahrami, a social neuroscientist at LMU and one of the senior researchers on the study. “They are fine with letting the machine down, though, and that is the big difference. People simply don’t report much guilt when they do,” he added.

Artificial intelligence can backfire

AI already has well-documented problems with bias and ethics – from the 2020 UK exam-grading failure to biased judicial systems – but this new research raises a different warning. Engineers and legislators strive to make artificial intelligence benevolent. Yet benevolence can backfire.

If people believe that an AI is programmed to treat them well, they become less tempted to cooperate with it. Some incidents involving self-driving cars may already be real examples: drivers spot an autonomous car on the road and expect it to give way, while the car expects the normal compromise between drivers.

“Algorithm exploitation has further consequences down the line. If humans are reluctant to let a polite self-driving car join from a side road, should self-driving cars become less polite and more aggressive in order to be useful?” asks George Carpus.

“Benevolent and trustworthy AI is a buzzword that everyone is excited about. But fixing the AI is not the whole story. If we realize that the robot in front of us will cooperate no matter what, we will use it to our own selfish interest,” said Professor Ophelia Dervi, philosopher and senior author of the study, who also works with Norway’s Peace Research Institute Oslo on the ethical implications of integrating autonomous robot soldiers alongside human soldiers. “Compromises are the oil that makes society work. For each of us, refusing one looks like only a small act of self-interest. For society as a whole, it can have much bigger repercussions. If no one lets autonomous cars join the traffic, they will create their own traffic jams on the side, and will not make transport easier.”

Article by:

Armin Vali

adapted from:

ScienceDaily

https://www.sciencedaily.com/releases/2021/07/210720113956.htm
