On the Future of Discrimination and Racism
By N. Monk, Futurist and Philosopher
Discrimination in the future will no longer be only human versus human but also human versus machine, driven by a strong bias toward humanity's sense of species superiority. In the perhaps not-so-distant future, A.I. might finally pass the Turing test or gain full sentience. Yet even if that happens, we have a long history of treating the other with contempt. Consciousness, or sentience, is one of the main things that makes humans the dominant animal on the planet. If another form of sentient life appeared, it would call into question what humanity is.
Consider this quote, often attributed to Alan Turing, the father of modern computing: “You like strawberries. I hate ice-skating. You cry at sad films. I like ice cream. What is the point of different tastes, different preferences if not to say that our brains work differently, that we think differently. And if we can say that about one another why can’t we say the same for brains made of copper and metal?” This is a good argument in favour of machine thinking. I agree with Turing that asking whether machines can think is a meaningless question. Of course they can think.
If our thinking machines gained sentience, several problems would arise. Would they receive the same basic rights that we do? Would they have to obey humans? Would these sentient machines be allowed to do any job they wanted? These questions have yet to be answered fully. If machines do have to obey humans, then people are already assuming that humans are the superior race. If machines were denied the opportunity to work as teachers, or barred from other professions so that some humans would always remain in the field, this too would be an example of favouring, or giving privilege to, one form of life. Denying them even basic rights obviously favours the human species as well.
Ending speciesism towards machines would require humans to relinquish their control over them completely, and that is not something I can imagine anyone doing. Imagine you have a sentient calculator: you ask it to compute 345 × 746, and it tells you it does not feel like doing that. You would likely decommission this calculator or find another one. Decommissioning it could be considered murder, even though most people would think otherwise. Similarly, if a sentient android walking down the street refused to do what someone told it, people might smash it to bits, which is also a form of murder, though once again most people would not consider it murder at all.
Given how racism is addressed and treated today, it is highly likely this attitude will carry over to machines, and history will repeat itself. However, I hope we will have progressed far enough to solve that problem before it occurs, by first solving the issues currently plaguing our society.