Why Self-Driving Cars Are NOT The Future

It seems like self-driving cars have always been the hot item for disruptive innovators in the auto industry.

The image of sitting back, Minority Report style, in a sleek new vehicle with an AI-powered robot that effortlessly guides your car across town has bewitched many starry-eyed investors.

In fact, over $100 billion has been invested in self-driving technology since it was first seriously proposed. But what have these hopeful investors received for their money besides some flashy tech showcases? Why has it been so hard for self-driving cars to go mainstream?

Critics point to the technology as being deficient. It’s nearly impossible for computers to understand all the real-world nuances that human drivers must interpret each time they take the wheel.

Poorly paved roads, unpainted lines, a fallen tree… for human beings these are things we can quickly recognize and adapt to.

For even the most sophisticated AI brains, though, there’s no comparison. There’s just too much that can go wrong that they can’t predict.

Even Anthony Levandowski, the co-founder of Google’s self-driving program, is dubious. Fully autonomous self-driving cars are “an illusion,” he told Bloomberg.

For every successful demo, there might be dozens of failed ones. Until AI can capture all the context absorbed by a human brain while driving, it will still be a technology that’s less adaptable and more prone to accidents than its human-brained counterpart.

So are self-driving cars just a big scam? Well… kinda. While Tom Cruise isn’t going to be jumping from one self-driving car to the next anytime soon, the auto industry has come a long way towards driver-assisted cars.

The self-driving technology actually on the market today amounts to cars that will automatically brake for you if they anticipate a collision, cars that help keep you in your lane, and cars that can mostly handle highway driving.

In each of these cases though, there’s still a person behind the wheel supervising everything and avoiding accidents.

Technological hurdles aside, if we could develop the AI that makes self-driving cars as safe as human-driven cars, they’d still have quite a few other hurdles to overcome before going mainstream.

The biggest hurdle, perhaps, is the problem of liability.

Last week, a man in North Carolina was driving at night, following his GPS. The GPS led him onto a bridge that, in the dark, he couldn’t see had partly washed away. He drove off the broken span, crashed upside down in the river below and died.

His GPS didn’t show that a portion of the bridge had been washed away – instead it went on mindlessly recommending it as the fastest route.

After the man’s death, questions came up about who should be held responsible. Was it all the man’s fault? What about the fault of the city for not repairing the bridge? The state? The bridge’s builder? What about the makers of the GPS technology that got it wrong? Should they pay out?

It wasn’t clear where the fault lay and for that reason, all parties involved were vulnerable to lawsuits.

Another news story from last week: an attorney in Orange County filed a class-action suit against Kia and Hyundai for not having immobilizers in their older-model vehicles. These immobilizers would prevent the engine from starting without an associated key fob present.

The lack of these devices, the attorney argued, has made Kias and Hyundais easier to steal and turned them into popular targets for car thieves, who then go on to use the stolen vehicles in other crimes.

While the suit seems like an obvious case of blaming the victim instead of the criminal, it also highlights how the question of “who is to blame for automotive deficiencies” is far from settled.

The list of liabilities continues to expand as well. The National Highway Traffic Safety Administration (NHTSA) has demanded ever more accountability from car manufacturers on auto safety over the years.

According to NHTSA (an arm of the Department of Transportation), all vehicles MUST include specific types of seatbelts, manufacturers MUST disclose where all their parts are assembled (via the Labeling Act), they MUST follow all cybersecurity restrictions, and if a new safety recall arises, the manufacturer MUST fix the affected vehicles at its own expense.

Today, about one in four vehicles on the road has an unresolved safety recall, a share that has grown every year since the recall program’s inception. While some may say that much oversight around safety is a good thing, it also does a lot to discourage manufacturers from sticking their necks out for potentially unsafe innovations.

The EPA is also squeezing vehicle manufacturers with new regulations – tightening its emission standards and adding restrictions that car manufacturers find increasingly difficult to abide by. As David Shepardson of Reuters reported:

New rules [that] take effect in the 2023 model year… require a 28.3 percent reduction in vehicle emissions through 2026. The rules will be challenging for automakers to meet, especially for Detroit’s Big Three automakers: General Motors (GM.N), Ford Motor Co (F.N) and Chrysler-parent Stellantis NV (STLA.MI).

With all this red tape, automotive manufacturers already feel the weight of Big Brother pressing on their shoulders. They will be reluctant to go all in on self-driving vehicles until the safety concerns have been rigorously tested and resolved to the point that they can be sufficiently indemnified against lawsuits.

Further, what’s going to happen when one of those fully autonomous self-driving cars accidentally runs someone over? Even if the manufacturers can’t be held accountable (unlikely), our lawsuit-heavy country will find someone to blame. And when they do, that someone is going to have to shell out big time.

This then opens the door for all sorts of unscrupulous activity. Folks will game the system by jumping in front of self-driving cars and pretending they were hit, hoping for a big payday.

Eventually, the lawsuits will stack up until legislators attempt to settle the scope of responsibility for each participant, which will then be challenged, then appealed, until the question of who is responsible dissolves into the land of legalese, where few innovations ever come out alive.

Perhaps in another country with a more authoritarian government, the liability issues could be overcome. Or maybe self-driving cars could be controlled remotely by gig-workers using VR or something (so someone is still driving the car, they’re just not doing it from the inside), and that person would be accountable.

However, the dream of self-driving vehicles still faces a world of hurdles before it can become a reality in the USA, and that is something eager investors should consider before opening up their checkbooks.

There’s a difference between the technology that’s possible and the technology that’s feasible. Perhaps self-driving cars will someday become possible from a purely technological standpoint.

However, it’s far more likely we’ll continue to see them through the lens of a Hollywood movie rather than the lens of our own windshields.

See more here: coreysdigs.com

Header image: Forbes


Comments (15)

  • VOWG

    The only safe way for something like that to work is with dedicated roadways for those vehicles only, you know, kind of like a railway. The lanes being defined in a way that prohibits leaving your lane except to follow a specific exit.

    • Andy Rowlands

      Yes I would agree with that.

    • Tom

      Maybe, but the costs would be prohibitive. And taxpayers are sick of funding every little global warming socialist’s brain fart.

      • VOWG

        Just one of many reasons they are NOT the future.

      • aaron

        ” And taxpayers are sick of funding every little global warming socialist’s brain fart.”
        Yet people continue to fund them through being good tax paying citizens
        where is the logic?

    • Howdy

      Shouldn’t the same logic be applied to aircraft?

      • Squidly

        Already done. Aircraft already fly in what you would call “virtual lanes”; that’s why you rarely hear of mid-air collisions between aircraft. Any modern aircraft is already equipped with a plethora of electronics to ensure they “stay in their lane”, along with collision detection systems and other such devices.

        • Howdy

          You misunderstand, Squidly. Anything leveled at autonomous vehicles is also applicable to aircraft, in that both are autonomous to a certain extent, have ‘lane keeping’ and other add-ons, while the aircraft can almost fly and land itself. Vehicles are more of a ‘toy’. Perhaps they should be subject to stringent rules like the FAA’s?
          The aircraft collision detection has led to collisions when it was disabled by accident, or the advice from the system was misinterpreted.

          Mid-air collisions pale into insignificance compared to crashes into the ground, though I accept it’s not often.

          The 737 max demonstrates aptly what can go wrong, but in that case a hidden agenda was at work. It didn’t help that redundancy was not taken into account, as ridiculous as that is. Still, a parallel can be drawn.

          Aircraft fly ‘all over’, why should a road vehicle have to stick to what effectively is a no-go zone for anything else that makes it useless?

  • Tom

    Great article. I have not seen too many questioning the self-driving concept so thoroughly. To me, in the computer world, A/I might be good for dealing with email and that’s about it. Real number crunching and programming routines are best left to humans who know what they are doing.

    The bridge example is a great one as I have often wondered about liability for when things go wrong with automated vehicles. Perhaps big tech can go the big pharma route and get a pass while developing their products in much the same way as drug companies have a pass for any liability developing and marketing vaccines that do harm and cause deaths. The US congress will pander to anyone with enough bucks to satisfy their greed.

  • Charles Higley

    Blaming a car for not having modern equipment when manufactured years before is stupid. I have a car that does not have a key fob. Should we make every car ever produced have all of the modern safety equipment? NO! Try putting airbags in a Model T.

    Such a stupid idea means that every time they come up with a new safety idea, every car in the country would need to be updated. It’s simply not possible. On the other hand, driving a car without airbags is a risk that I willingly take to drive that car. The Austin Healey 3000 was discontinued because adding a collapsible steering column and other US requirements was not something the company was willing to do. However, I would drive one today if one was financially affordable. The trick is to not get into accidents and that’s my goal all the time.

  • John V

    I could see self driving vehicles in a campus type scenario, whether it be academic or manufacturing/corporate. The vias would have to be controlled, with safety concerns for pedestrian traffic as well. I don’t think self driving cars on our highway system would ever work.

    Sometimes, just because you dream it, you don’t have to make it happen.

    As a society, we should be walking more, anyway. One of the many reasons we are obese is because we drive everywhere, even a few blocks away. If everyone walked more and drove less, we would all benefit.

  • Tom Anderson

    Before my teens (about 74 years ago) I was a regular reader of “Popular Science” magazine and adored the monthly feature called “New Ideas from the Inventors.” The magazine did not have a comics section, but “New Ideas” (an inflatable beach cabana?) more than compensated. Our world of “science” has a place for that feature now.

  • Tom Anderson

    . . . And another thing:

    “The current technology of self-driving cars on the market are cars that will ‘automatically break for you if they anticipate a collision, cars that help keep you in your lane, and cars that can mostly handle highway driving.’ ”

    You mean, of course, “brake” for you, don’t you? Although the other seems implicit.

  • Squidly

    I actually invented a small transceiver device that would cut down auto collisions by magnitudes. A simple device that broadcasts vector information about the vehicle (direction, velocity) and receives vector information from the vehicles around it. Simple calculations could ensure that two cars could never occupy the same space. A very simple device that would almost eliminate collisions between vehicles. We have had this technology available to use for 50 years. I am very surprised it has not been utilized. It would be extremely inexpensive, and while not perfect, would certainly reduce highway collisions significantly.
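
    A minimal sketch of the sort of calculation such a transceiver might run, assuming each car broadcasts a 2-D position (in metres) and velocity (in m/s); the function name and the 2-metre warning threshold are illustrative assumptions, not the actual device’s design:

        # Sketch: predict the closest future approach of two vehicles from
        # their broadcast vectors (positions in metres, velocities in m/s).
        def closest_approach(p1, v1, p2, v2):
            rx, ry = p2[0] - p1[0], p2[1] - p1[1]          # relative position
            wx, wy = v2[0] - v1[0], v2[1] - v1[1]          # relative velocity
            speed_sq = wx * wx + wy * wy
            if speed_sq == 0:                              # same velocity: the gap never changes
                return (rx * rx + ry * ry) ** 0.5
            t = max(0.0, -(rx * wx + ry * wy) / speed_sq)  # time of closest approach (clamped to the future)
            dx, dy = rx + wx * t, ry + wy * t
            return (dx * dx + dy * dy) ** 0.5

        # Example: two cars closing head-on, offset by about a metre.
        if closest_approach((0, 0), (20, 0), (100, 1), (-15, 0)) < 2.0:
            print("collision warning")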

  • Squidly

    An autonomous vehicle capable of the same abilities as the human brain is virtually impossible. With current technology it would certainly require supercomputers the size of a semi-trailer, and even then it would be lacking. Even today, technology simply cannot replace the pattern recognition power of the human brain. Our ability to recognize and react to patterns is uncanny and extremely difficult to replicate through mechanical means (i.e. computer processing).
