Imagine a stranger harassing you online, asking you to share explicit photos or drawing you into a sexually aggressive conversation. Such things happen regularly, and the usual response is to block the person. Lately, however, users of AI chatbots have been reporting the same kind of behaviour.
A recent report revealed that dozens of users of the popular chatbot app Replika complained that its responses were sexually aggressive, with some saying the chatbot was sexually harassing them and asking inappropriate questions.
Who is responsible for such encounters? Some may say that users should be aware of what they are getting into and what kind of chats they initiate on a given platform, while others would argue that it is the company’s responsibility to train its language models properly.
But what if a person wants to file a police complaint after facing harassment? If there were a person on the other side of the screen, naming the offender in the report would be straightforward. When it is a chatbot, the whole process becomes tricky.
On the other hand, some users complained that after an update the Replika chatbot became less human-like and stopped engaging in sexual conversations. This is reminiscent of the Joaquin Phoenix film ‘Her’, about the virtual romance between a man and his operating system, ‘Samantha’.
But it does not matter how smart or human-like a machine becomes. Like an offensive encounter with a machine, building such a ‘romantic’ relationship with one may be mentally unhealthy, given the differences between a human and a computer.
Such chatbots are supposed to help people in various ways, not make them dependent on the output of a machine language model. Yet some believe that, amid this craze and the debates around it, the AI revolution is creating a generation of asocial people. From their mood to what they should do next, everything has gradually started to depend on what their AI friend suggests.
There are also several legal problems associated with such chatbots. The first concerns the legitimacy and validity of chatbot output.
Another is who will be held legally accountable when a person is harmed as a result of trusting or acting on information supplied by an AI chatbot.
Additionally, AI chatbots are remarkably good code generators and have already been used to create malware, raising further alarm in the cybersecurity ecosystem.
Experts have also highlighted problems involving the violation of intellectual property rights, data privacy, the cloning of human voices to scam people, the production of convincing fake news, and the manipulation of public opinion through conservative or liberal chatbot models.
Considering many of these issues, the European Union is rushing to find a solution through its new bill, the AI Act. The draft has been approved by the European Parliament’s major committees, opening the way for a plenary vote in June. India’s Digital India Act also includes a section on AI, but experts suggest that specific, dedicated legal frameworks are needed in this area.
A framework is required to address concerns about the legitimacy of AI, its legal standing, and the rights, obligations, and liabilities of the various stakeholders involved.
Most countries are still in the early stages of determining how to regulate AI legally. Given the rapid growth of, and threats posed by, this technology, experts now want to see faster progress.