As automation takes over jobs across various sectors, questions arise about the legitimacy of the work these machines perform, and that doubt has fed a growing resentment toward nascent AI technologies. Amid the speculation and warnings about the manifold ethical issues surrounding artificial intelligence, one thing is clear: there will be social and economic consequences, quite possibly overwhelmingly negative ones.
Driverless cars illustrate how quickly the questions pile up against this emerging technology: Should a driverless AI be programmed to preserve its passengers, or to minimize the total number of deaths? Should it favor self-preservation or the greater good? And are we, as owners of driverless cars, at least partially responsible for the decisions they make?
When a typical "Hamlet situation" occurs, driverless cars are forced into lose-lose scenarios and tasked with making difficult decisions. They must weigh two evils and choose the lesser one, which highlights the importance of moral programming.