What would a society look like with no place for ethics and moral values? A place with no limits or punishment for theft, robbery, or hurting human sentiments. Humans need to be empathetic and compassionate towards each other and every living being around them. And as the world keeps moving forward, it becomes pertinent that every aspect that touches human life passes ethical tests. The same holds for Artificial Intelligence. AI has near-endless scope, and the prospects for innovation grow every day. Tech giants like Alphabet, Amazon, Microsoft and Facebook have transformed how humans interact with AI. But with great innovation and progress comes a much greater responsibility: managing the development of AI ethically, in light of human behaviour.
With the advancement of technology and the arrival of Big Data and, above all, AI, it is increasingly these systems that make judgements, and those judgements sometimes turn out badly for the human good. Machines are being given the right to make decisions instead of humans. Machines can act like humans, they can be made to learn to act like humans and to perform tasks like humans, but can they make decisions like a human? Will AI be able to make decisions based on human ethics? Will it actually be able to distinguish between what is good and what is bad? The following are some of the areas where the need for ethics in AI becomes unquestionable. Let’s dive deeper into this!
Top 5 Ethical Issues in Artificial Intelligence
Ethical Issues with Self-Driving Cars
The world is looking forward to using self-driving cars on a regular basis, but are we ready for it? Is autonomous driving safe, does it respect AI principles, and does it have the capacity to make decisions? Autonomous cars are trained on tons of data, but consider a situation where a person or an animal suddenly jumps right in front of the car. The car’s computer-vision system will alert it to the presence of a living being on the road; what will the ML algorithms then tell the car to do? Will it save the person or animal in front of the car, or the person inside it? Will it drive straight into the pedestrian, or risk the life of the occupant by slamming on the emergency brake? Finally, should such vulnerabilities be ignored and the machine still be given the right to make decisions?
An AI Robot Compromising Ethics as an HR Manager
Let’s think of a hypothetical situation: a company wants to run a recruiting process to find the best worker within the company for a particular job that needs to be filled. This person could be the one with the highest income, the one who advances the fastest in the company, or the one who is the most knowledgeable. The first step, which we often fail to take into consideration, is that the human data stored and fed to the system to make the decision will be biased. Ethically, are we choosing the right person for the job? An example is Amazon’s recruiting algorithm, which proposed only men for certain positions because its training data contained a majority of male staff. The accuracy of the model will differ depending on the choice of features, and it will not always be easy to understand the implicit bias contained in these data.
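To make the point concrete, here is a minimal, hypothetical sketch of how skewed historical data alone produces a biased recommendation. The numbers are invented for illustration; any model trained to reproduce these historical hiring rates would simply inherit the skew.

```python
from collections import defaultdict

# Hypothetical historical records: (group, hired) pairs in which most
# past hires were male, mirroring the kind of skew in Amazon's data.
history = ([("male", True)] * 80 + [("male", False)] * 20
           + [("female", True)] * 10 + [("female", False)] * 20)

def hire_rate_by_group(records):
    """Fraction of candidates hired, per group."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, was_hired in records:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

rates = hire_rate_by_group(history)
# A naive model fitted to these rates would recommend men roughly
# 2.4 times as often as women: the bias is inherited, not invented.
print(rates)  # {'male': 0.8, 'female': 0.333...}
```

The point of the sketch is that no one wrote a rule saying "prefer men"; the disparity comes entirely from the data the system was given.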
Ethics Being Compromised When Court, Law and Judgment Are in the Hands of AI
Governments these days are looking to Artificial Intelligence for help in speeding up the enforcement of law and regulation. In the last few years there has been talk of letting bots play a major role in putting court work on a fast-track mode. Robots would run ML and NLP algorithms to perform text and voice analysis, compare the data collected from the present case with the data they have been fed on similar past cases, and come up with the decision to be taken. But again, will the bot be able to understand the intricacies of human situations? All humans are different, their circumstances are different, and the actions taken in those scenarios will differ, so will the bot be able to make a fair decision in such a case? Finally, is AI ethical enough to understand human individuality?
Gender Bias Embedded in AI
Not a new topic, but one that still demands serious discussion: human biases have corrupted Artificial Intelligence as well. AI lacks the ethics it would need to avoid distinguishing between humans on the grounds of their gender or sexuality. From text analysis to voice assistants to chatbots, we see traces of gender bias in artificial intelligence everywhere. From voice assistants almost always being female, to chatbots suggesting male emoticons for the post of CEO, Artificial Intelligence reflects the stereotypes fed to it along with the data used to train it.
Racist Outputs: The Tay Bot Incident
The topic came into the limelight with the tweets posted by Microsoft’s Tay bot in 2016. The world was exposed to one of the darkest sides of artificial intelligence when the Tay bot started posting racist tweets. It became evident that AI lacks the ethics and sentiment to treat the human race equally: the data fed to the bot was racist in so many ways that it could cause mass disruption to human values and morals. The problem has only continued over time; several improvements have been made, but the question is, are they enough?
On the research side, scientific papers are being published and many AI development companies are starting to build tools to measure these biases and to find ways to explain the decisions made by their algorithms. While it is true that most technology brands have issued ethical codes, and at the European level there are the seven key requirements of the Ethics Guidelines for Trustworthy AI, there is still a long way to go in terms of ethics.
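As a flavour of what such bias-measurement tools compute, here is a minimal sketch of one common metric: the demographic parity difference, the gap in positive-outcome rates between two groups. The function name and the data are illustrative, not taken from any specific toolkit.

```python
def demographic_parity_difference(outcomes_a, outcomes_b):
    """Absolute gap between the positive-outcome rates of two groups.
    0.0 means parity; values near 1.0 mean extreme disparity."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return abs(rate_a - rate_b)

# Hypothetical model decisions (1 = shortlisted) for two groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 6 of 8 shortlisted
group_b = [0, 1, 0, 0, 1, 0, 0, 0]   # 2 of 8 shortlisted
print(demographic_parity_difference(group_a, group_b))  # 0.5
```

Real fairness toolkits offer many such metrics, and a single number is never the whole story, but even this simple gap makes an otherwise invisible disparity auditable.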