Tesla Full Self Driving requires human intervention every 13 miles

Snark218

Ars Legatus Legionis
32,812
Subscriptor
Mercedes is doing it with a full suite of sensor tech, not merely cameras. It's the idea that cameras are enough that I find utterly ridiculous. The human brain's ability to extrapolate spatial relations from visual information is remarkable, and trying to replicate that functionality on silicon seems nearly impossible. Radar and lidar, however, already operate in the spatial realm, so no such extrapolation is needed: that enables easier (faster) decision processing instead of spending most of the processing power on the exceedingly difficult visual-to-spatial calculations. For some reason, creepy Elon doesn't seem to get that.
Oh, on some level, he gets that. Or he has at least been presented with that information. But he publicly dissed LIDAR to justify yanking it for cost-cutting, and he has spent the intervening years stridently insisting that relying solely on a set of cameras whose resolution wouldn't do justice to a picture of my junk is not only feasible and desirable but the only conceivable valid solution. He is constitutionally incapable of admitting a mistake or fault. And he promised all his customers that their existing hardware would be sufficient for unsupervised FSD, and he made a lot of money on the stock pump that promise caused, so he's riding that bomb like Major Kong.
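To make the quoted contrast concrete, a minimal Python sketch (all function names and numbers are invented for illustration): a lidar return is already a spatial measurement and converts to a 3D point with plain trigonometry, while a camera pixel only fixes a viewing ray and needs a depth estimate from somewhere else before the same back-projection is possible.

```python
import math

# A lidar return is already a spatial measurement: a range plus two angles.
# Turning it into a 3D point is a single line of trigonometry.
def lidar_to_xyz(range_m, azimuth_rad, elevation_rad):
    """Convert one lidar return (polar form) to Cartesian coordinates."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return x, y, z

# A camera pixel only pins down a ray through the scene; the depth along
# that ray has to be inferred first (stereo, parallax, or a neural net)
# before the pinhole model below can be inverted.
def pixel_to_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a pixel, but only once depth_m has been estimated."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

print(lidar_to_xyz(42.0, 0.1, 0.02))          # spatial data, no inference needed
print(pixel_to_xyz(640, 360, 42.0, 1000, 1000, 640, 360))  # useless without depth_m
```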
 
Upvote
20 (22 / -2)

aapis

Ars Scholae Palatinae
1,242
Subscriptor++
I’m still not convinced human drivers should be tested one single time and then allowed to do whatever they want until they (inevitably drive full speed into the car ahead of them and) die.

But if you are going to allow the terrible computer to drive terribly, the least you can do is mandate a one-strike system: if the computer fucks up one single time, the feature is permanently disabled. Obviously this wouldn't happen (Tesla wouldn't implement it), but since this does happen with exceptionally dangerous human drivers, I'm a bit surprised the regulators haven't insisted on more overrides/control.
 
Upvote
6 (6 / 0)

ERIFNOMI

Ars Tribunus Angusticlavius
15,621
Subscriptor++
Isn't that the final warning it gives, though? I've never had Google Maps not tell me at 2 miles, which is a reasonable distance. The 1/4 mile is effectively just an "alright, we're finally here."
Not only that, don't you pay attention to where you're going anyway? I actually have the announcements disabled because I don't need someone to tell me three or four times that the exit is coming up. I like the built-in nav in one of our cars because it chimes to let you know an exit is coming, but when I'm using Google Maps I have alerts only (speed traps, disabled vehicles, etc.). It always shows you what the next exit is and how far away it is. If it's "take exit 20 in 100 miles," it's not like it's sneaking up on me.
 
Upvote
6 (6 / 0)

Uragan

Ars Tribunus Angusticlavius
9,797
We are in fact computation machines, and we cannot solve NP-hard problems any more effectively than a computer can.

I'm trying to make the point that anything a human can do, a computer can do; the question is how powerful the computer needs to be and how much input it needs.
A computer cannot make leaps in logic like humans can.
 
Upvote
7 (10 / -3)
I guess I don't know enough people with Teslas and FSD - specifically on HW4.

Everyone I've discussed it (12.5.x) with IRL loves it. Each update just keeps making it better and better.

I currently won't put any of them into the vegan or CrossFit category, but I am afraid that FSD 13 may push some of them into that "can't shut up about it" category.
 
Upvote
-18 (3 / -21)

Avalon

Ars Scholae Palatinae
1,400
Regulation exists because your average private citizen doesn't have the time, resources, and bandwidth to sue corporations every time they do something irresponsible, negligent, or monstrous.
Absolutely, but in the case of a car accident your legally required car insurance takes care of it for you because it's motivated to make someone else pay. And unlike with most collisions, they can get all the precise details surrounding the collision from the system/OEM.
 
Upvote
2 (2 / 0)

ERIFNOMI

Ars Tribunus Angusticlavius
15,621
Subscriptor++
I guess I don't know enough people with Teslas and FSD - specifically on HW4.

Everyone I've discussed it (12.5.x) with IRL loves it. Each update just keeps making it better and better.

I currently won't put any of them into the vegan or CrossFit category, but I am afraid that FSD 13 may push some of them into that "can't shut up about it" category.
I worked with people who had early Model Ss and Xs (before the Model 3 even existed) and they would never shut up about how great their car was either. Even when it was getting the screen replaced for the second time because it couldn't survive being exposed to the Sun (in a car with a glass roof) or some tech had to come fix the door handles so they could actually get in the fucking thing. They're just trying to justify it to themselves.
 
Upvote
25 (25 / 0)

aikouka

Ars Scholae Palatinae
1,109
Subscriptor
"Whether it's a lack of computing power, an issue with buffering as the car gets "behind" on calculations, or some small detail of surrounding assessment, it's impossible to know. These failures are the most insidious. But there are also continuous failures of simple programming inadequacy, such as only starting lane changes toward a freeway exit a scant tenth of a mile before the exit itself, that handicaps the system and casts doubt on the overall quality of its base programming," Mangiamele said.

I think this really reflects how hard driving is to model, because there's a myriad of small factors that we take into account. In the example above, changing lanes a tenth of a mile before the exit may not be an issue... in an ideal situation; however, as human drivers, we'll do things like look at the traffic up ahead to gauge where we might even attempt to merge over.

Essentially, there's a lot of nuance in driving. I've likened FSD to riding along with a kid on their learner's permit. They might do okay in some situations, but they likely lack the experience, situational awareness, or even just knowledge of the area to drive well.

Now, I think it's also worthwhile to add that the Tesla system also has to properly recognize certain things. For example, after they added a lane to a highway, they -- for some reason -- never put the speed limit signs back up until just recently. So, now there's actually a sign stating 65 MPH. Last I checked, the car failed to recognize the sign on a nice, sunny day even with a sign on both sides of the road. Not only that, it -- for some reason -- picked up the wrong speed limit ahead, and dropped my top speed to 35 MPH. (The speed limit along that road goes from 35 -> 45 -> 50 -> 65 over the course of about 3 miles.)
 
Upvote
5 (5 / 0)

jandrese

Ars Legatus Legionis
13,459
Subscriptor++
Really unbelievable how guys just love to bash Elon. This guy has done more for humanity than 99.9% of us will ever dream of. That is a fact. Haters are gonna hate I guess.
I'm far more neutral on Elon than the majority of this site (I have the downvotes to prove it), but even I'm highly skeptical of his FSD claims. I'm in the camp that we are already into the diminishing-returns part of AI training, and Elon doesn't realize just how hard it will be to cross that last 10% gap. It's one of those classic problems where the last 10% of the problem requires 90% of the work. I fully expect Robotaxi events to be continually pushed back for years and years while the FSD dream remains just barely out of reach despite exponentially more resources being thrown at it every year.
 
Last edited:
Upvote
15 (15 / 0)

jeremyp66

Ars Scholae Palatinae
902
Subscriptor
That implies it's also impossible for people to drive.
No, it implies that Uragan doesn't understand what NP-complete means or how it might apply to the problem of full self driving.

A computer (including the human brain) doesn't have to find the best solution to a problem in autonomous driving. It merely has to find any solution that works. Take the travelling salesman problem: it's NP-hard, but given a list of cities to visit, I can easily work out a plausibly good itinerary that visits them all. It won't necessarily be the shortest itinerary, but that doesn't matter in real life.
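To make that concrete, here's a minimal sketch of one such "good enough" approach, the classic nearest-neighbor heuristic (Python, with made-up coordinates; an illustration, not anyone's production planner):

```python
import math

def nearest_neighbor_tour(cities):
    """Greedy heuristic: always visit the closest unvisited city next.
    O(n^2), no optimality guarantee -- just a plausibly good itinerary."""
    unvisited = set(range(1, len(cities)))
    tour = [0]  # arbitrarily start at the first city
    while unvisited:
        here = cities[tour[-1]]
        nearest = min(unvisited, key=lambda i: math.dist(here, cities[i]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour

# Made-up coordinates, purely for illustration
cities = [(0, 0), (5, 1), (1, 4), (6, 5), (2, 2)]
print(nearest_neighbor_tour(cities))  # a workable tour, not necessarily the shortest
```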
 
Upvote
13 (14 / -1)

flying heath

Seniorius Lurkius
1
Subscriptor++
That's bonkers IMO. My experience (and personal standard) is 0.5 miles minimum, 1 mile target, and 2 miles in heavy traffic.

When I have Google Maps giving me audio instructions for which exits to take, I am always infuriated when it says "in a quarter mile, take exit 123." That is wholly insufficient at highway speeds.
I appreciate that Apple CarPlay gives me a 2km (1.24 mile) warning before an exit. In heavy urban traffic anything less is not enough warning.
 
Upvote
4 (4 / 0)

meisanerd

Ars Centurion
1,051
Subscriptor
We should put Elon in the driver's seat of a Cybertruck and send him from California to his cows in Texas with FSD enabled, but with his hands and feet tied up!

Whatever happens - we'd get what we want! (to get him the hell out of here, and hopefully this planet!)
And who else ends up off this planet at the same time? Did they also deserve to die?
 
Upvote
3 (3 / 0)

Snark218

Ars Legatus Legionis
32,812
Subscriptor
Have you ever asked a chatbot to work out a completely novel problem? They'll make incredible (but wrong) leaps all the time. Just outright confident bullshit.
I think what he was referring to was more the intuitive decisions made with the benefit of experience, e.g. a leap of logic I made just this morning. I was approaching a cross street. A box truck turning left across my lane of traffic made its move, and briefly blocked the line of sight of the driver of a Dodge Ram turning right from the cross street into my lane. I eased back on the gas and covered the brake even though the box truck had already cleared my path - for no objective reason, but based on my suspicion that the Ram driver might not update his mental radar with my new position after the box truck blocked me, and frankly my honest bias against the situational awareness of Ram drivers. Sure enough, the Ram hammered it off the line and lurched into my lane, cutting me off just as I had kind of expected he might. I'd probably still have avoided him either way, but I didn't have to launch my backpack at the dash slamming on the brakes.
 
Upvote
17 (17 / 0)

AusPeter

Ars Praefectus
3,974
Subscriptor
We should put Elon in the driver's seat of a Cybertruck and send him from California to his cows in Texas with FSD enabled, but with his hands and feet tied up!

Whatever happens - we'd get what we want! (to get him the hell out of here, and hopefully this planet!)
Cybertrucks ~~don't~~ can't¹ do FSD.

1. A whole different WTF?!?!? from Tesla
 
Upvote
8 (8 / 0)

Snark218

Ars Legatus Legionis
32,812
Subscriptor
My prediction: either the taxis they bring on stage will be driven by humans (probably in those funky robot suits) or there will be a minor ding when two of them collide.
"Why does Optimus have a junk-bulge?"

Or it'll be some dipshit two-door thing that can't fit a family going out to dinner or a bunch of friends going to pregame before a concert or whatever.
 
Upvote
0 (0 / 0)

Perardua

Wise, Aged Ars Veteran
192
Subscriptor
Well, my dude, it's been ten fucking years, and if it's such a simple, easy improvement, that really raises the question of why Tesla hasn't done it.
Especially since Waymo seems to have found the right combination of sensors and algorithms. I get the feeling that Tesla is seriously off track. While Waymo may need human intervention at times, its cars appear to always maintain a safe state… or at least they manage it much better than human drivers do.
 
Upvote
9 (9 / 0)

msawzall

Ars Tribunus Angusticlavius
6,414
I find the level of animosity vs. the actual harms quite striking. At least the goal of FSD is to deliver huge risk-reduction and productivity benefits.

I hope the same people are highly concerned about the very clear excess lethality to other people from heavy cars like pick-ups and large SUVs (https://www.economist.com/interacti...ans-love-affair-with-big-cars-is-killing-them). I hope they are vocal in pushing for cars to have geo-fenced speed limiters that prevent top speeds well over any speed limit in the country, given that speed is a huge factor in road fatalities. That would seem to be consistent with the FSD concerns they are presenting.
Nice whataboutism...
 
Upvote
9 (9 / 0)

Melon of Troy

Wise, Aged Ars Veteran
185
Our roads are designed around human perception, human reaction times, and human ergonomics. And these ill-conceived attempts at "full self driving" are a monkey wrench in a system that barely works to begin with.

Tesla's FSD is fundamentally a failure to produce a product that meets our base expectations of safety in one of the deadliest activities humans engage in. This is the sort of failure that should not be met with fines, but with a legally enforced shuttering of the project and an audit to determine who needs certifications stripped and who needs to be barred from the industry altogether.
Agreed, it's insane that no regulator has shut down their FSD racket.
 
Upvote
7 (7 / 0)

balthazarr

Ars Tribunus Angusticlavius
6,209
Subscriptor++
I'm late to this one, and haven't yet watched the videos - seems that most of the testing was done in SoCal, so I'm presuming bright, sunny, clear weather throughout?

Where's the testing - and results - in heavy fog, snow, driving rain, etc.?

Commenters in these sorts of threads tend to rag on human drivers, but we're (mostly) capable of driving a vehicle in a whole host of conditions (and that's part of the problem - most drivers would 'chance it' and continue to drive in conditions where they should really pull over and let the worst pass).
 
Upvote
3 (3 / 0)
For self driving, there shouldn't be a need for regulation if the courts are functioning. And I've yet to see indications that they aren't. That said, I do think the US (and ideally some states) should be setting up test courses full of foreseeable gotchas for OEMs and other organizations to use. States could then allow self driving based on scores achieved there with a given car, sensors, and fw/sw load. That would be a pretty reasonable bar IMO (admittedly gameable, but with significant liability), but it would disallow Tesla's model of frequent OTA updates (IMO, a good thing for something safety-critical).

Also, I've always felt AI/ML is the wrong way to solve this problem. I have a background in designing safety-critical systems for aircraft, including sensor fusion for collision avoidance. While self driving is a harder problem to solve, it seems very doable with current technology using a traditional iterative engineering approach. I feel like all the companies are trying to avoid spending a decade and a billion dollars in R&D working their way to a marketable product, hoping that AI/ML will do it faster and with less outlay, but that's not how it's been playing out anywhere. Even when they do succeed, they'll have a product that no one fully understands (lead/chief engineer), no one actually responsible for any piece of it (subsystem lead/product owner), and no great domain knowledge/IP other than AI/ML that can be applied to related problems. This also points to an even bigger societal issue, because it's often in the process of creatively solving problems with new technology/techniques that those involved come up with other novel applications for them. Can AI/ML be expected to take over this role as well (and do it well)?
Courts are to clean up the mess after the fact. Regulation is to prevent the mess. "Mess" in this case is humans (Tesla customers or other innocent drivers / pedestrians) being killed. Money awarded in court does not bring those people back.
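As an aside on the sensor-fusion background mentioned in the quote, here's a toy Python sketch of the simplest possible fusion step, inverse-variance weighting of two independent, noisy range estimates (all numbers invented; real collision-avoidance stacks are vastly more involved):

```python
def fuse_ranges(r_radar, var_radar, r_cam, var_cam):
    """Inverse-variance weighting of two independent, noisy range
    estimates -- the 1-D core of a Kalman-style measurement update."""
    w_r, w_c = 1.0 / var_radar, 1.0 / var_cam
    fused = (w_r * r_radar + w_c * r_cam) / (w_r + w_c)
    fused_var = 1.0 / (w_r + w_c)   # always tighter than either input alone
    return fused, fused_var

# Invented numbers: radar says 42 m (confident), camera says 45 m (fuzzy)
dist, var = fuse_ranges(42.0, 0.25, 45.0, 4.0)
print(f"{dist:.1f} m, variance {var:.2f}")  # ~42.2 m, variance ~0.24
```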
 
Upvote
19 (19 / 0)