> Mercedes is doing [it] with a full suite of sensor tech, not merely cameras. It's the idea that cameras are enough that I find utterly ridiculous. The human brain's ability to extrapolate spatial relations from visual information is remarkable. Trying to replicate that functionality on silicon seems nearly impossible. However, because radar and lidar already operate in the spatial realm, no such extrapolation is needed, which enables easier (faster) decision processing rather than spending most of the processing power on the exceedingly difficult visual-to-spatial calculations. For some reason, creepy Elon doesn't seem to get that.

Oh, on some level, he gets that. Or he has at least been presented with that information. But he publicly dissed LIDAR to justify yanking it for cost-cutting, and has spent the intervening years stridently insisting that relying solely on a set of cameras whose resolution wouldn't do justice to a picture of my junk is not only feasible and desirable but the only conceivable valid solution. He is constitutionally incapable of admitting mistake or fault. And he promised all his customers that their existing hardware would be sufficient for unsupervised FSD, and he made a lot of money on the stock pump that promise caused, so he's riding that bomb like Major Kong.
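To illustrate the camera-versus-lidar point above: a lidar return is already a distance, while a camera system has to infer depth, e.g. from stereo disparity, where a single pixel of matching error blows up at range. A minimal sketch (the focal length, baseline, and disparity values are made-up numbers for illustration, not any real rig):

```python
# Depth from stereo disparity: Z = f * B / d
# (f: focal length in pixels, B: camera baseline in meters, d: disparity in pixels).
# All numbers here are invented for illustration.

def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from its disparity between two horizontally offset cameras."""
    return focal_px * baseline_m / disparity_px

f_px, baseline = 1000.0, 0.3                 # hypothetical camera rig
true_disparity = 6.0                         # corresponds to Z = 50 m
z = stereo_depth_m(f_px, baseline, true_disparity)
z_off = stereo_depth_m(f_px, baseline, true_disparity - 1.0)  # 1 px matching error
print(f"estimated depth: {z:.1f} m; with 1 px disparity error: {z_off:.1f} m")
# A lidar return at the same target would simply report ~50 m directly,
# with no per-pixel correspondence problem to solve first.
```

One pixel of disparity error turns a 50 m estimate into 60 m, which is the "extrapolation" burden the comment above is pointing at.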
> Isn't that the final warning it gives though? I've never had Google Maps not tell me at 2 miles, which is a reasonable distance. The 1/4 mile is effectively just an "alright, we're finally here."

Not only that, don't you pay attention to where you're going anyway? I actually have the announcements disabled because I don't need someone to tell me three or four times that the exit is coming up. I like the built-in nav in one of our cars because it does a chime to let you know an exit is coming, but when I'm using Google I have alerts only (which is speed traps, disabled vehicles, etc.). It always shows you what the next exit is and how far away it is. If it's "take exit 20 in 100 miles," it's not like it's sneaking up on me.
> We are in fact computation machines, and we cannot solve NP-hard problems any more effectively than a computer can.

A computer cannot make leaps in logic like humans can.
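To make the NP-hard point above concrete: neither brains nor silicon have a known shortcut for problems like subset sum — the straightforward approach checks every subset, and the work doubles with each added item. A toy sketch (the numbers are arbitrary):

```python
from itertools import combinations

# Brute-force subset sum: does any subset of `nums` add up to `target`?
# This checks all 2^n subsets -- exponential work, which is the hallmark
# of NP-hard problems when no exploitable structure is available.

def subset_sum(nums: list[int], target: int) -> bool:
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return True
    return False

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```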
I'm trying to make the point that anything a human can do, a computer can do; the question is how powerful the computer needs to be and how much input it needs.
> Regulation exists because your average private citizen doesn't have the time, resources, and bandwidth to sue corporations every time they do something irresponsible, negligent, or monstrous.

Absolutely, but in the case of a car accident your legally required car insurance takes care of it for you, because it's motivated to make someone else pay. And unlike with most collisions, they can get all the precise details surrounding the collision from the system/OEM.
> I guess I don't know enough people with Teslas and FSD - specifically on HW4.
>
> Everyone I've discussed it (12.5.x) with IRL loves it. Each update just keeps making it better and better.
>
> I currently won't put any of them into the vegan or CrossFit category, but I am afraid that FSD 13 may push some of them into that "can't shut up about it" category.

I worked with people who had early Model Ss and Xs (before the Model 3 even existed) and they would never shut up about how great their cars were either. Even when the screen was getting replaced for the second time because it couldn't survive being exposed to the Sun (in a car with a glass roof), or some tech had to come fix the door handles so they could actually get in the fucking thing. They're just trying to justify it to themselves.
> A computer cannot make leaps in logic like humans can.

Have you ever asked a chatbot to work out a completely novel problem? They'll make incredible (but wrong) leaps all the time. Just outright confident bullshit.
"Whether it's a lack of computing power, an issue with buffering as the car gets "behind" on calculations, or some small detail of surrounding assessment, it's impossible to know. These failures are the most insidious. But there are also continuous failures of simple programming inadequacy, such as only starting lane changes toward a freeway exit a scant tenth of a mile before the exit itself, that handicaps the system and casts doubt on the overall quality of its base programming," Mangiamele said.
> Really unbelievable how guys just love to bash Elon. This guy has done more for humanity than 99.9% of us will ever dream of. That is a fact. Haters are gonna hate I guess.

I'm far more neutral on Elon than the majority of this site (I have the downvotes to prove it), but even I'm highly skeptical of his FSD claims. I'm in the camp that we are already into the diminishing-returns part of AI training, and Elon doesn't realize just how hard it will be to cross that last 10% gap. It's one of those classic problems where the last 10% of the problem requires 90% of the work. I fully expect Robotaxi events to be continually pushed back for years and years while the FSD dream remains just barely out of reach despite exponentially more resources being thrown at it every year.
> That implies it's also impossible for people to drive.

No, it implies that Uragan doesn't understand what NP-complete means or how it might apply to the problem of full self-driving.
> That's bonkers IMO. My experience (and personal standard) is 0.5 miles minimum, 1 mile target, and 2 miles in heavy traffic.

I appreciate that Apple CarPlay gives me a 2 km (1.24 mile) warning before an exit. In heavy urban traffic anything less is not enough warning.
When I have Google Maps giving me audio instructions for which exits to take, I am always infuriated when it says "in a quarter mile, take exit 123". That is wholly insufficient at highway speeds.
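For a rough sense of why a quarter mile feels insufficient, here's a quick back-of-envelope calculation (the speeds and distances are my own illustrative assumptions, not anything from the nav apps' documentation):

```python
# Rough lead time an exit warning gives at highway speed.
# Distances and the 70 mph speed are illustrative assumptions.

def warning_seconds(distance_miles: float, speed_mph: float) -> float:
    """Seconds of warning for an alert fired `distance_miles` before
    the exit while traveling at `speed_mph`."""
    return distance_miles / speed_mph * 3600.0

for dist in (0.25, 0.5, 1.0, 2.0):
    t = warning_seconds(dist, 70.0)
    print(f"{dist:>4} mi at 70 mph -> {t:5.1f} s of warning")
```

At 70 mph, a quarter mile buys you under 13 seconds — barely enough to check mirrors and cross a couple of lanes, which matches the complaint above; 2 miles buys well over a minute and a half.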
> Did you suddenly forget what site you are on? This is Ars, where the circle-jerk for Elon hate is the pastime.

Pedantry is the pastime; "Elon hate" is child's play.
> We should put Elon in the driver's seat of a Cybertruck and send him from California to his cows in Texas with FSD enabled, but with his hands and feet tied up!
>
> Whatever happens - we'd get what we want! (to get him the hell out of here, and hopefully off this planet!)

And who else ends up off this planet at the same time? Did they also deserve to die?
Downtown San Francisco is "a less complex environment"? Seriously?
EDIT: Apparently it was Scottsdale, disregard.
> The Robotaxi launch is certainly going to be lots of fun.

My prediction: either the taxis they bring on stage will be driven by humans (probably in those funky robot suits) or there will be a minor ding when two of them collide.
> Have you ever asked a chatbot to work out a completely novel problem? They'll make incredible (but wrong) leaps all the time. Just outright confident bullshit.

I think what he was referring to was more intuitive decisions made with the benefit of experience and intuition - e.g., a leap of logic I made just this morning. I was approaching a cross street. A box truck turning left across my lane of traffic made its move, and briefly blocked the line of sight of the driver of the Dodge Ram turning right from the cross street into my lane. I eased back on the gas and covered the brake, even though the box truck was clear of my lane. And the Ram hammered it off the line and lurched into my lane, cutting me off just as I had kind of expected he might - for no objective reason, but based on my suspicion that he might not update his mental radar with my new position after the box truck blocked me, and frankly my honest bias against the situational awareness of Ram drivers. I'd probably still have avoided him, but I didn't launch my backpack at the dash hitting the brakes.
> We should put Elon in the driver's seat of a Cybertruck and send him from California to his cows in Texas with FSD enabled, but with his hands and feet tied up!
>
> Whatever happens - we'd get what we want! (to get him the hell out of here, and hopefully off this planet!)

Cybertrucks.
> And who else ends up off this planet at the same time? Did they also deserve to die?

The highway divider or semi will be fine!
"Why does Optimus have a junk-bulge?"My prediction: either the taxis they bring on stage will be driven by humans (probably in those funky robot suits) or there will be a minor ding when two of them collide.
> Cybertrucks ~~don't~~ can't do FSD.

This will never not be hilarious to me.
> Well, my dude, it's been ten fucking years, and if it's a simple, easy improvement, then that really begs the question of why Tesla hasn't done it.

Especially since Waymo seems to have found the right combination of sensors and algorithms. I get the feeling that Tesla is seriously off track. While Waymo may need human intervention at times, the cars appear to always maintain a safe state… well, at least much better than human drivers do.
> Did you suddenly forget what site you are on? This is Ars, where the circle-jerk for Elon hate is the pastime.

And yet here you are…
> I firmly believe actual FSD will come in the next few thousand days!

I'm so glad that dumb-fuck quote is getting memed.
> I find the level of animosity vs. the harms quite striking. At least the goal for FSD is to have huge positive risk and productivity benefits.
>
> I hope the same people are highly concerned about the very clear excess lethality to other people from heavy cars like pickups and large SUVs (https://www.economist.com/interacti...ans-love-affair-with-big-cars-is-killing-them). I hope they are vocal in pushing for cars to have geo-fenced speed limiters preventing them from reaching top speeds well over any speed limit in the country, given that speeding is a huge factor in road fatalities. That would seem to be consistent with the FSD concerns they are presenting.

Nice whataboutism...
> This will never not be hilarious to me.

The fact that people are paying for it anyway is the funny part for me.
> Our roads are designed around human perception, human reaction times, human ergonomics. And these ill-conceived attempts at "full self driving" are a monkey wrench in a system that barely works to begin with.
>
> Tesla's FSD is fundamentally a failure to produce a product that meets our base expectations of safety in one of the deadliest activities humans engage in. This is the sort of failure that should be met not with fines, but with a legally enforced shuttering of the project and an audit to determine who needs certifications stripped and who needs to be barred from the industry altogether.

Agreed, it's insane that no regulator has shut down their FSD racket.
> The fact that people are paying for it anyway is the funny part for me.

Yep. And the light bar, and all the other shit they threw in and then failed to actually include on the fucking car.
> For self driving, there shouldn't be a need for regulation if the courts are functioning. And I've yet to see indications that they aren't. That said, I do think the US (and ideally some states) should be setting up test courses full of foreseeable gotchas for OEMs and other organizations to utilize. States could then allow self driving based on scores achieved there with a given car, sensors, and fw/sw load. That would be a pretty reasonable bar IMO (admittedly gameable, but with significant liability), but it would disallow Tesla's model of frequent OTA updates (IMO, a good thing for something safety-critical).
>
> Also, I've always felt AI/ML is the wrong way to solve this problem. I have a background in designing safety-critical systems for aircraft, including sensor fusion for collision avoidance. While self driving is a harder problem to solve, it seems very doable with current technology using a traditional iterative engineering approach. I feel like all the companies are trying to avoid spending a decade and a billion dollars in R&D working their way to a marketable product, hoping that AI/ML will do it faster and with less outlay - but that's not how it's been playing out anywhere. Even when they do succeed, they'll have a product that no one fully understands (lead/chief engineer), no one actually responsible for any piece of it (subsystem lead/product owner), and no great domain knowledge/IP other than AI/ML that can be applied to related problems. This also points to an even bigger societal issue, because often in the process of creatively solving problems with new technology/techniques, those involved come up with other novel applications for them. Can AI/ML be expected to take over this role as well (and do it well)?

Courts are to clean up the mess after the fact. Regulation is to prevent the mess. "Mess" in this case is humans (Tesla customers or other innocent drivers/pedestrians) being killed. Money awarded in court does not bring those people back.
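For a flavor of the sensor fusion mentioned above, here's the textbook building block: inverse-variance weighting of two independent range measurements, the core idea behind Kalman-style fusion. This is a generic illustration, not anything from an actual avionics or automotive stack, and the sensor noise figures are invented:

```python
# Fuse two independent, noisy range estimates by inverse-variance weighting.
# The fused estimate has lower variance than either input measurement.
# All sensor values and noise figures below are invented for illustration.

def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Combine two measurements (value, variance) into one estimate."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

radar_range, radar_var = 49.2, 0.25   # radar: good range accuracy
camera_range, camera_var = 52.0, 4.0  # camera depth estimate: much noisier

r, v = fuse(radar_range, radar_var, camera_range, camera_var)
print(f"fused range: {r:.2f} m, variance: {v:.3f}")
```

The fused estimate lands close to the more trustworthy sensor while still incorporating the other, and its variance is smaller than either input's — which is why fusing dissimilar sensors beats relying on any single one.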