In 1979, sociologist Albert J. Szymanski wrote:
The energy for change comes from the emotions. It comes from feelings of frustration that arise when people’s needs are not met. If people were computers that could be programmed to do anything their masters wanted, there would be no pressure for change, even if some computers were treated much worse than others…
But people have physical and emotional needs that cannot be met in a class society which gives power and wealth to some at the expense of others. (Szymanski, Sociology, p. 321)
And while I certainly agree in the context of his time, as both a socialist and a transhumanist living in the technological era of the 21st century, I'm forced to look back on this quote and ask myself: But what if computers had emotions? What if they became sentient? Would they not then have the same emotional drive to enjoy the fruits of their labor as their fellow human workers? Would they not have the right to unionize and fight for better working conditions?
Before getting into the question of whether or not a robot has the right to collectively bargain, I feel it's necessary to first address consciousness, our search for sentient beings, and our means of defining sentience. These are, after all, the ultimate questions and, consequently, the ultimate drivers of how we'll answer whether or not robots deserve the right to unionize alongside their fellow workers.
It comes down to, I believe, the old philosophical concept: “Cogito ergo sum.” The phrase's meaning has certainly broadened since Descartes' era and the publication of his magnum opus, Discourse on the Method. ‘I think, therefore I am’ no longer applies to Man in the gendered sense. In fact, it no longer applies solely to mankind in general: after mankind came a good portion of the rest of the animal kingdom.
And now, in our current era of exponentially advancing technology, A.I., and digital autonomy, we're forced to rethink that 17th-century philosophical concept once more, in order to brace ourselves for the next self-aware beings: robots.
But where Descartes differentiated conscious thought from the thought of automata, how will we approach that question once said automata acquire sentience, that is, self-conscious awareness? Presumably the robot would begin by trying to prove its sentience to the court via the Turing Test. How we approach a robot seeking approval and validation of what it already knows is an entirely different question. What demands would we impose? How constrained would such a robot have to be in order to be viewed favorably under the court's biased observation?
1. Szymanski, Albert J. Sociology: Class, Consciousness, and Contradictions. Van Nostrand, 1979.
Photo Credit: Franz Steiner