I’m going out on a limb here to talk about something I know very little about. I came across a story in The Economist called “Morals and the machine”, which explores the development of robotics in certain fields and the emergence of “machine ethics”.
The article outlines a number of areas where robots have high levels of interaction with humans, and where a robot’s actions or behaviour could have negative implications or consequences for humans. Examples include the use of robotics in the military, transport and health care. As robots become more sophisticated and begin making ‘decisions’ that have an ethical dimension, so too will their programming need to become equally sophisticated.
But how does one determine what kind of ethical program (paradigm or framework) to install in a robot? If you are creating a robot in the US for use in another part of the world, do you install an ‘Americentric’ ethical framework? Or values and ethics that are appropriate to the culture in which it will be used? If one thinks of ethics as being what is right to most people, then who are “most people”? I assume that the people making robots are making them for global consumption, in which case the user would not necessarily be part of the maker’s ‘most people’.
I wonder how significant masculinity and femininity are in shaping ethical thinking or behaviour. Given that robots are gender neutral, are the installed ethical frameworks biased more towards the masculine or the feminine? Or age, for that matter: the ethical decisions made by someone at age 10 compared to age 40 or 80 would be entirely different.