Reading List on Sci-Tech and Ethics

Fenster writes:

What ethical assumptions will need to be written into the programs for driverless cars, and who will make the decisions?

Is it moral to print a gun?

Is the return of eugenics in some form inevitable?

About Fenster

Gainfully employed for thirty years, including as one of those high-paid college administrators faculty complain about. Earned a Ph.D. late in life and converted to the faculty side. Those damn administrators are ruining everything.
This entry was posted in Philosophy and Religion, Science, Technology. Bookmark the permalink.

10 Responses to Reading List on Sci-Tech and Ethics

  1. Toddy Cat says:

    You got questions, I got answers (or opinions, anyway)…
    1. The morality of your driverless car should be up to each individual driver, just the way your morality while driving is now. Nothing else is morally acceptable. I’ll bet it won’t be, though.

    2. If it’s moral for you to own a gun, it’s moral to print one; it’s what you do with it that enters the realm of morality. Besides, anyone anywhere can get a gun if they really want one right now. I mean, it’s technically illegal for convicted felons to own a gun today in most jurisdictions, and yet, I don’t think I’ve ever known a convicted felon who didn’t have one. Besides, sixty years ago, almost anyone could buy a gun through the mail, and crime rates were actually lower than today. Of course, in actual fact, we’re still a long way from being able to “print” guns. See this link:

    3. We have eugenics now, we just don’t want to talk about it. How many people with IQs of 130 marry people with IQs a standard deviation lower? Hell, I’ll bet most people don’t even know all that many people with IQs more than a standard deviation below or above them, let alone have sex with them. Assortative mating = soft eugenics.


  2. Fenster says:

    I do find the question of morality and driverless cars a fascinating one. For sure, it is not yet on the radar screen, so to speak, of the public. Yet with driverless cars not that far around the bend, so to speak again, I expect there will be quite a collision (sorry) between individual and collective values. People tend to think of driving as an individual act, and that’s how we have tended to treat it–regulated in certain ways but otherwise left open to individual choices. Yet in fact there is no escaping the legitimacy of a collective viewpoint. “A Sunday Drive” is an individual conception; “traffic” is a collective one.

    Driverless cars will have to entail a shift in the direction of a collective POV–thinking of cars on the road more like cars in a train, with traffic optimized via shared programs regarding safe distance between cars at high speeds and with common approaches to accident conditions. I’m not saying good or bad, but I do think driverless will entail a larger shift relative to individualism and its opposite than most people currently think.


    • Toddy Cat says:

      You’re probably right, but this is also going to involve a lot of philosophy-class-style arguments to which there is really no right or wrong answer, such as “are you morally obligated to sacrifice your life for others? If so, under what circumstances?” Given our record as a society in dealing with such questions, I can only say that I don’t look forward to it.


  3. Fenster says:

    I agree we are not well equipped as a society to have those kinds of discussions. We tend toward individualism. More critically, we are now a highly diverse society, a fact that cuts against the ability to achieve consensus on tough issues. Sure, in a diverse society we can all agree, more or less, on the need for a driver’s license and on a test to get one. Those things are trivial. But life-or-death decisions that go to the core of a culture’s values? Well, you have to have a core to consult. What if it is atrophied? When you’ve happily traded away shared values for diversity, good luck dealing with the tough ones.

    So I expect, as the article points out, that if we do get round to programming driverless cars for extreme situations, we will end up relying on the experts: a medical ethicist here, a philosopher there, with a retired race car driver thrown in for expert testimony. It will be interesting to see, though, what happens the first time one of these cars goes over a cliff to save a bus, a kid or a bunny rabbit. Vox populi in the courtroom.


  4. Maule Driver says:

    Ethics for driver-less cars? An interesting philosophical question, but I would submit the interesting ethics are meaningless in the real world. In the real world, the cars are guided by rules. Even in the case of conflicts, rules will guide the actions of the driver-less cars. Only unforeseen or unpredictable problem situations (e.g. deer) might require actions beyond the rules-based system. As long as we believe there are such problems out there, we will give the driver the option to intervene.
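    The rules-first approach described here, with a human fallback for the unforeseen, can be sketched very roughly. Every situation name and response below is made up purely for illustration; no real autopilot or driving stack works from a lookup table this simple.

```python
# Toy sketch of a rules-first controller with a human fallback.
# All situation names and responses are hypothetical illustrations.

RULES = {
    "lead_car_braking": "brake",
    "lane_drift": "steer_to_center",
    "obstacle_mapped": "reroute",
}

def control_step(situation):
    """Follow the rule if one exists; otherwise hand control to the driver."""
    if situation in RULES:
        return RULES[situation]
    # e.g. a deer in the road: outside the rule set, so alert the human
    return "alert_driver_to_intervene"

print(control_step("lead_car_braking"))  # brake
print(control_step("deer_in_road"))      # alert_driver_to_intervene
```

    The ethics, on this view, live in which situations make it into the rule table at all, and which are left to the fallback.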

    This is going on right now in the skies. Most aircraft are flown by an autopilot from the moment after they leave the ground to seconds before landing. Many are landed automatically. Airliners all carry traffic detection and avoidance equipment. No ethics are involved. There are rules, machines follow rules more reliably than people, and autopilots fly better than pilots.

    Known but unpredictable problem requiring human intervention:

    Rules-based automation would have saved the day…if it hadn’t been overridden:

    The ethical considerations seem to lie outside piloting, driving, or automation:


  5. If we’re in a driverless-car accident, will we get to sue the experts who set up the devices and the rules they followed? Oh boy, more opportunities for lawyers.


    • Blowhard, Esq. says:

      >>will we get to sue the experts who set up the devices and the rules they followed? Oh boy, more opportunities for lawyers.

      You bet they will. Negligence and strict liability — defective design, for starters. Besides, it’s a Google car, right? In the biz we call that a “deep pocket.”


  6. Fenster says:


    I have been mulling your comment and come back to the notion that driverless cars will pose some tough ethical issues–legal ones, too.

    You write that ethics are not likely to be a problem because “the cars are guided by rules.” But that is just it: what rules? What do they prescribe in emergency situations? Who writes them? How clear are they to the driver in terms of acceptance of risk, etc.? The ethics are bound up in the rule-writing.

    I take it that in your view the rules are mostly cut and dried. You write that “unforeseen” problem situations may come up, but in those cases “we will give the driver the option to intervene.” Two points here.

    First, it seems to me that driving is likely to give rise to lots and lots of unforeseen problems, and not just deer. Something comes out quickly between two cars parked on the right just ahead of you. A ball? A dog? A child? Then there is the vehicle coming at you in the left lane, where you might swerve. Bicyclist? Motorcyclist? Large semi? School bus?

    As I understand the technology, the idea is for a camera to see all and for onboard computation to calculate what needs to be known. Identify the object. If a ball, one path. If a dog, another. If a child, a third. But then further calculations and assessments are needed. A bicycle, semi or bus? Distances, spatial assessment of the area including road and off-road options, speeds, directions, probabilities.
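    The classify-then-act loop just described might be sketched, very loosely, as follows. Every name, object class, and response below is a hypothetical stand-in, not anything an actual autonomous-driving system uses; the point is only that the “further calculations” are where the value judgments hide.

```python
# Toy sketch of identify-the-object, then choose-a-path, as described above.
# All classes and responses are hypothetical illustrations.

OBJECT_RESPONSES = {
    "ball": "slow_and_cover_brake",   # a child may follow the ball
    "dog": "brake_in_lane",
    "child": "emergency_stop_or_swerve",
}

def choose_action(detected_object, oncoming, clear_shoulder):
    """Pick a maneuver from the object class and the surroundings."""
    base = OBJECT_RESPONSES.get(detected_object, "brake_in_lane")
    if base == "emergency_stop_or_swerve":
        # The thorny part: swerving trades one risk for another,
        # and someone had to decide that trade in advance.
        if clear_shoulder and oncoming is None:
            return "swerve_to_shoulder"
        return "emergency_stop"
    return base

print(choose_action("child", oncoming="semi", clear_shoulder=False))
# emergency_stop
```

    Even in a cartoon like this, the branch that weighs the child against the oncoming semi is a programmed ethical choice, made long before the split second arrives.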

    Second, I don’t think it is feasible to give the driver the option to intervene. He may be distracted or asleep, and a split second is in any event not enough time to take control. It seems to me unavoidable that the rules will have to specify different situations and simply play them out. Many of these, not just a few, will be thorny.

    As far as legal liability is concerned, I will defer to the counselor in our midst, but my guess is that creators of driverless programs, like Google, will insist that all reasonable eventualities be programmed, with outcomes and probabilities described, and with drivers required to sign that they accept the risks. Google will always be subject to suit if a program does not work as described–no avoiding that–but what they will want to minimize is their legal liability when the program does work as advertised.

    If a program chooses to ram a semi rather than run over a child (and that situation will happen), you can’t have the survivors arguing they don’t like the outcome. Google will argue that the outcome is what was programmed under those unusual and tragic circumstances, sorry.

    From a legal POV, though, I remain vexed by the question of liability for people who are not licensed and, in effect, not parties to the risks of the new technology. Kids and bicyclists are not part of the game, yet people will be writing programs that place values on their health and safety. It is one thing to suppose the new rules represent a consensus on the part of the people who are party to the risks, but there are a lot of folks who won’t or can’t consent to whatever rules get established.


  7. Pingback: Here and There | Uncouth Reflections
