I’m going out on a limb here to talk about something I have very little idea about. I came across a story earlier in The Economist called “Morals and the machine”, which explores the development of robotics in certain fields and the emergence of “machine ethics”.
The article outlines a bunch of areas where robotics have high levels of interaction with humans, and where a robot’s actions or behaviour could have negative implications or consequences for humans. Examples provided include the use of robotics in the military, transport and health care. As robots become more sophisticated and begin making ‘decisions’ that have an ethical dimension, the programming of those robots will need to become equally sophisticated.
But how does one determine what kind of ethical program (paradigm or framework) to install in a robot? If you are creating a robot in the US for use in another part of the world, do you install an ‘Americentric’ ethical framework, or values and ethics that are appropriate to the culture in which it will be used? If one thinks of ethics as being what is right to most people, then who are most people? I assume that the people making robots are making them for global consumption, in which case the user would not necessarily be part of the maker’s ‘most people’.
I wonder how significant masculinity and femininity are in shaping ethical thinking or behaviour. As robots are gender neutral, are the installed ethical frameworks more biased towards the masculine or feminine? Or age for that matter: the ethical decisions made by someone at age 10 compared to age 40 or 80 would be entirely different. Who is going to be on the programming committee? I would have nominated my grandmother, but not my grandfather (and for reasons beyond gender).
Does robotics allow for collective learning? If a robot makes a binary decision between ‘right’ and ‘wrong’ in a given situation, and the programmer or owner later determines that the decision was not the best one, can that learning be added to the collective robot consciousness? Are these robots operating off the cloud? Or are all programs part of the hardware?
I guess some people may presume that humans are better equipped to make ethical decisions as it is. Given the state of the world, I’m not sure we can make that assumption. Evidence the world over shows vastly different choices and responses to very similar situations. After all, aren’t humans only acting out the programming that they have adopted or developed over the course of their life or lives?
And this makes me wonder what other elements are at play in helping me make ethical decisions. Is it just a matter of having the ‘right’ programming? How much do environmental factors (field of use), diet (periodic upgrades) and tiredness (battery charge) influence how I make choices? I have learned not to make ethical choices until after lunch, and that I am even more effective once I have slept on it.
I know, for myself at least, that the ethical decisions I make are not 100% repeatable and predictable, even though they are probably within the realm of my values and thinking capacity. I wonder, when they design an ‘ethics for robots’ program, whether humans would be able to have that installed too? That would be cool. I guess that takes ethics education to a whole new space. Perhaps one day, robots could even teach us some things about ethics.