Apple Intelligence, Apple Intelligence, Apple Intelligzzzzzzzzzzzzzzzz

Mhorydyn

Ars Tribunus Angusticlavius
9,988
Subscriptor
When I try it on myself and my wife, the characters don’t look like us, they just look like people that have the same features. Does that make sense? Like if you told an artist to sketch a portrait of your friend, but you just described them verbally and didn’t share a photo for reference.

But whatever. As a free tool to play around with for fun, it’s fine. It’s just not very impressive.
I found the same thing after playing with it a bunch more. In a few cases it caught aspects of me and my wife that were accurate, but it took a ton of iterations.
 

effgee

Ars Praefectus
4,358
Subscriptor
I found the same thing after playing with it a bunch more. In a few cases it caught aspects of me and my wife that were accurate, but it took a ton of iterations.

Plus one’d. Just worse in my case.

Fed it a selfie, putzed around with it for a few minutes, chuckled in disgust and turned it off. That’s what it must feel like to ask a blindfolded, lobotomized Gibbon (*) to paint you a Rembrandt.

Now I just need to find out how to remove the 3+ GB of AI crud downloaded onto my poor MacBook Pro.


(* – sorry, Gibbons. Please do accept my sincere apologies)
 

gabemaroz

Ars Tribunus Militum
1,689
Image Playground is under:

/System/Library/AssetsV2/com_apple_MobileAsset_UAF_FM_Visual

The Writing Tool models are under:

/System/Library/AssetsV2/com_apple_MobileAsset_UAF_FM_GenerativeModels

Magic Cleanup is under:

/System/Library/AssetsV2/com_apple_MobileAsset_UAF_Photos_MagicCleanup

And the guardrails seem to be in:

/System/Library/AssetsV2/com_apple_MobileAsset_UAF_FM_Overrides

You’ll need to use Terminal to remove them.
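
Before deleting anything, it’s worth checking how much space each one actually takes. A quick Terminal sketch using the paths above (they may vary by macOS version, and System Integrity Protection may still block removal even with sudo):

# Show the on-disk size of each Apple Intelligence asset directory
du -sh /System/Library/AssetsV2/com_apple_MobileAsset_UAF_FM_Visual \
  /System/Library/AssetsV2/com_apple_MobileAsset_UAF_FM_GenerativeModels \
  /System/Library/AssetsV2/com_apple_MobileAsset_UAF_Photos_MagicCleanup \
  /System/Library/AssetsV2/com_apple_MobileAsset_UAF_FM_Overrides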
 

Bonusround

Ars Tribunus Militum
1,831
Subscriptor
If you're concerned about the spread of disinformation and its role in all kinds of catastrophe, this is surely nearing "international crisis" levels: Apple's AI Is Constantly Butchering Huge News Stories Sent to Millions of Users
From the story:
The AI alert also claimed that Florida senator Marco Cubio had been sworn in as secretary of state, which is also false as of the time of writing.

Did AI change the Senator's last name, or does Futurism need a copy editor?

And don't miss their (current) front-page story:

Screenshot 2025-01-15 at 4.19.48 PM.png
 

Honeybog

Ars Tribunus Militum
2,411
From the story:


Did AI change the Senator's last name, or does Futurism need a copy editor?

And don't miss their (current) front-page story:

View attachment 100134

Looks like they need a copy editor to edit their headlines.

But in all fairness, that hackneyed joke has been around the Internet since, like, Seanbaby and Old Man Murray in the 90s.
 

gabemaroz

Ars Tribunus Militum
1,689
When the AI hype bubble finally implodes and the fallout lays waste to the industry, I hope the top executives in charge of this fiasco are culled first rather than the thousands of tech workers who were forced to integrate it.

But who am I kidding.

Honestly where is the need for summarizing news articles? Isn’t that what headlines are for? How short are our collective attention spans now that people can’t even be bothered to check the headline and first paragraph?

Newspaper writing was always built around putting the lede right up front. What is AI bringing to the table here?

I mean why do people even have news articles as a notification in the first place? How can anyone get anything done if some hypothetical electrician in Iowa is getting constant pings about the Gaza ceasefire and the fires in LA? It’s madness.
 

dspariI

Smack-Fu Master, in training
84
I've tried using the writing tools a little bit just to see what they give, and I have seen them make small beneficial tweaks. However, since it's my own writing, I also know when it's messing up in a variety of ways. I can understand how some of the mistakes were made, but other times it's wholesale invention of information. It does feel a little negligent to let it loose on third-party text.
 

gregatron5

Ars Legatus Legionis
11,765
Subscriptor++
When the AI hype bubble finally implodes and the fallout lays waste to the industry, I hope the top executives in charge of this fiasco are culled first rather than the thousands of tech workers who were forced to integrate it.

But who am I kidding.
Oh some of them will, but with double- or triple-digit million-dollar exit packages :mad:

I was initially hopeful, but am getting increasingly incensed. How do I turn off the suggestions in Messages? At best they're banal, at worst they're insulting. I just want them to go away.
 

gabemaroz

Ars Tribunus Militum
1,689
Without Apple Intelligence, I would never have seen this emoji.
The only emojis we should be making are pitchforks and torches.

Man, living in the future sure is insanely great.
Doctorow calls this a reverse centaur.

Let’s pause for a little detour through automation theory here. Automation can augment a worker. We can call this a “centaur” — the worker offloads a repetitive task, or one that requires a high degree of vigilance, or (worst of all) both. They’re a human head on a robot body (hence “centaur”). Think of the sensor/vision system in your car that beeps if you activate your turn-signal while a car is in your blind spot. You’re in charge, but you’re getting a second opinion from the robot.
That’s centaurs. They’re the good automation. Then there’s the bad automation: the reverse-centaur, when the human is used to augment the robot. Humans are good at a lot of things, but they’re not good at eternal, perfect vigilance.
The vigilance problem is pretty fatal for the human-in-the-loop gambit, but there’s another problem that is, if anything, even more fatal: the kinds of errors that AIs make. AI doesn’t just make errors — it makes subtle errors, the kinds of errors that are the hardest for a human in the loop to spot, because they are the most statistically probable ways of being wrong. Sure, we notice the gross errors in AI output, like confidently claiming that a living human is dead.

But the most common errors that AIs make are the ones we don’t notice, because they’re perfectly camouflaged as the truth. These are the hardest kinds of errors to spot. They couldn’t be harder for a human to detect if they were specifically designed to go undetected. The human in the loop isn’t just being asked to spot mistakes — they’re being actively deceived. The AI isn’t merely wrong, it’s constructing a subtle “what’s wrong with this picture”-style puzzle. Not just one such puzzle, either: millions of them, at speed, which must be solved by the human in the loop, who must remain perfectly vigilant for things that are, by definition, almost totally unnoticeable.
Some paraphrasing. Emphasis (underlines, not italics) mine.
 

daGUY

Ars Tribunus Militum
2,917
I've tried using the writing tools a little bit just to see what they give, and I have seen them make small beneficial tweaks. However, since it's my own writing, I also know when it's messing up in a variety of ways. I can understand how some of the mistakes were made, but other times it's wholesale invention of information. It does feel a little negligent to let it loose on third-party text.
I haven’t used the writing tools much, but the other day I had a big list of items in a text document and I wanted to reorder them alphabetically. Sure, I could have copy/pasted them into a Numbers spreadsheet, sorted the column, and copy/pasted back to my document, but I thought – this is the perfect thing to ask the writing tools to do, no? Just have it do it inline right in my document.

So I highlighted the text and told it “sort the items in this list alphabetically,” and it did. Cool! But I didn’t notice until later that out of maybe 30 or so items in the list, two of them were off by one position. It had sorted the items like this: Aardvark, Bear, Cat, Dog, Fish, Elephant, Giraffe, Hyena, Iguana, Kangaroo, Jaguar, Lemur, Monkey, etc. So yeah, these things still need some work!
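
(For what it’s worth, the deterministic fix was one Terminal pipe away, assuming one item per line: copy the list, run the line below, and paste the result back. sort -f folds case, so “aardvark” and “Aardvark” land together.)

pbpaste | sort -f | pbcopy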

New AI feature just dropped in the 18.3 beta!

Man, living in the future sure is insanely great.
I honestly think they should ditch the idea of notification summaries for news in general, and I’m surprised they shipped it in the first place. I just don’t see how using AI like this will ever be fully accurate, because it doesn’t understand meaning or context. Telling people “these summaries might not be accurate” but then showing them anyway isn’t a solution, it’s a copout. What purpose does a summary serve if I have to double-check it against every individual headline anyway in case the summary itself isn’t accurate? It’s the type of feature that only has any value if it works 100% of the time or close to it.
 

xoa

Ars Legatus Legionis
12,209
Subscriptor++
Might as well take note of Ars covering yet another worrisome move: Apple Intelligence, previously opt-in by default, enabled automatically in iOS 18.3. That really doesn't feel like a good classic Apple move, more a Microsoft one. So far Apple Intelligence, along with the AVP, are to me real worrying signs of MBA/next-quarter-ization at Apple: more stock-price/trend driven, without real long-term vision or the willingness/ability to disrupt themselves, to lead, and to have people adopt organically out of enthusiasm. Apple as much as any huge company has really earned some level of "it's fine to wait years until it's really ready," yet here it just feels so rushed.

I can't help but wonder if to some extent, though, this is also the end result of a bunch of long-building issues. Some of the most useful things I can imagine Apple doing in theory are hampered by their failure to develop certain ecosystems years ago. If they had a really solid self-hosted smart home ecosystem instead of the half-baked, warm/cold HomeKit system, slotting secure, private, self-hosted (on high-margin Apple hardware) AI into security and monitoring could be really useful with the current capabilities. But they aren't in a great position for that, and instead are stuck viewing everything through the window of the iPhone.
Doctorow calls this a reverse centaur.
I think he gets some of that analysis wrong, in particular with regard to self-driving cars, where with genuine irony he's completely blind to an already-existing "reverse centaur" situation. But a lot of interesting ideas and neat terms, thanks for the link!
 

gregatron5

Ars Legatus Legionis
11,765
Subscriptor++
I found one thing that worked well today: I've had several appointments move, and Siri rescheduled them exactly like I asked; it even asked me to confirm one move because it would have double-booked me. E.g., I said, "Hey Siri, move Coffee with <friend> from tomorrow to next Tuesday," and it Just Worked™. I was actually kind of impressed. Moving events to a different date and time is a real PITA in any calendar app.
 

Chris FOM

Senator
10,394
Subscriptor
Here’s a gripe with it: when using Apple Intelligence for proofreading (which should be genuinely useful even if you’re not otherwise a fan of generative AI stuff), it’s supposed to put a glowing line under any changes so you can review them. It doesn’t. The arrows to jump between each suggestion and the buttons for accepting or rejecting individual changes are also missing. End result: using AI to proofread here is an all-or-nothing affair that you have to take on faith, since it doesn’t highlight what’s changed. That is…not helpful at all. Maybe it works better in other apps, but I’m using Safari. That’s a first-party app!
 

gregatron5

Ars Legatus Legionis
11,765
Subscriptor++
Here’s a gripe with it: when using Apple Intelligence for proofreading (which should be genuinely useful even if you’re not otherwise a fan of generative AI stuff), it’s supposed to put a glowing line under any changes so you can review them. It doesn’t. The arrows to jump between each suggestion and the buttons for accepting or rejecting individual changes are also missing. End result: using AI to proofread here is an all-or-nothing affair that you have to take on faith, since it doesn’t highlight what’s changed. That is…not helpful at all. Maybe it works better in other apps, but I’m using Safari. That’s a first-party app!
I've found it works the way you describe it's supposed to work in Mail.

That said, in Mail it makes the changes and you have to undo the ones you don't want. (Which in my experience so far is about half of them.) I think I'd prefer if it were to preview the changes and let you choose to accept them.
 

daGUY

Ars Tribunus Militum
2,917
There is no world in which the current implementation of LLM models will be able to alphabetize a sub-infinite list of words. They can't count and they can't order either. It's not so much 'needs some work' as it is 'needs a different paradigm.'
Right, but that’s what I mean: Apple should account for this and handle it appropriately. If you ask the writing tools to do something that an LLM can’t handle well – sorting, counting, math, etc. – they should recognize that and handle your request via a different method so that you at least get a correct response. Sorting a list of items alphabetically is an utterly trivial task for a computer in general, but just not a good use of an LLM.

If I tell it “sort this list alphabetically,” it should either actually sort the list alphabetically or just tell me that it can’t do so, rather than this middle ground where it claims to have done what I asked but in fact got it wrong.
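
A toy shell sketch of what that routing could look like, just to make the idea concrete (call_llm is hypothetical, and reliably recognizing intent is of course the actual hard part):

#!/bin/sh
# Toy router: send sort requests to a deterministic tool, everything else to the model
request="$1"; shift
case "$request" in
  sort*) printf '%s\n' "$@" | sort -f ;;   # deterministic, always correct
  *) call_llm "$request" "$@" ;;           # hypothetical LLM fallback
esac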
 

gabemaroz

Ars Tribunus Militum
1,689
... they should recognize that and handle your request via a different method so that you at least get a correct response. If I tell it “sort this list alphabetically,” it should either actually sort the list alphabetically or just tell me that it can’t do so, rather than this middle ground where it claims to have done what I asked but in fact got it wrong.
This is implying a contextual general intelligence that can make decisions. So if that existed, why would it need to pass that task off to a different function? In other words, a different paradigm (that doesn't exist) is necessary.

Every time one of the big AI companies comes out and proclaims (read: lies) that AGI is right around the corner (while asking for another round of cash injections to keep their money-burning hype machines going), you can just look at the state of self-driving cars to get an idea of how 'close' we are. Self-driving cars would be a subset of AGI.
 

Bonusround

Ars Tribunus Militum
1,831
Subscriptor
This is implying a contextual general intelligence that can make decisions. So if that existed, why would it need to pass that task off to a different function? In other words, a different paradigm (that doesn't exist) is necessary.

Every time one of the big AI companies comes out and proclaims (read: lies) that AGI is right around the corner (while asking for another round of cash injections to keep their money-burning hype machines going), you can just look at the state of self-driving cars to get an idea of how 'close' we are. Self-driving cars would be a subset of AGI.

Agree with your assessment of AI startups, but not of self-driving. I take a Waymo across town once or twice a week. I encounter them every day as a pedestrian, and find them to be the most consistently considerate and careful drivers on the road. The improvements self-driving taxis have made over the past two years are real and impressive.
 

gabemaroz

Ars Tribunus Militum
1,689
I take a Waymo across town once or twice a week. I encounter them every day as a pedestrian, and find them to be the most consistently considerate and careful drivers on the road.
And the failures are just as real and impressive. My point was more in terms of order. When self-driving cars are solved (first), then I will believe we are on the way to AGI.

That means off-road, in construction zones, unmarked or poorly marked roads, adverse conditions, etc. They do great in a relatively controlled North American urban environment.

Now do the driving conditions of the remaining ~5 billion people on Earth:

IMG_6216.jpeg
 

Bonusround

Ars Tribunus Militum
1,831
Subscriptor
And the failures are just as real and impressive. My point was more in terms of order. When self-driving cars are solved (first), then I will believe we are on the way to AGI.
Autonomous self-driving is such a different challenge than whatever we'll choose to define/accept as "AGI"... no connection, IMO.

That means off-road, in construction zones, unmarked or poorly marked roads, adverse conditions, etc.
That sounds like a tall and completely unnecessary order. Autonomous off-roading? 🤣

They do great in a relatively controlled North American urban environment.

Now do the driving conditions of the remaining ~5 billion people on Earth:
Mmmm... bit of a strawman, don't you think?
 

iPilot05

Ars Praefectus
3,229
Subscriptor++
Autonomous self-driving is such a different challenge than whatever we'll choose to define/accept as "AGI"... no connection, IMO.
That’s sort of the problem, isn’t it? AGI to laymen is the Matrix: a system that can beat a human at anything. Sure, for you and me the metric will be an expert system that’s better than a human at something, but that’ll be hard to sell to the public as general intelligence.

Even then, just like autonomous airliners or driverless cars, the bar to clear is impossibly high. The moment a pilotless airliner crashes, or even a self-driving car runs over some old lady, it’ll be all over the news. No amount of “well ashkully! Statistically speaking they’re still safer than a human!” will overcome that. People will clutch their pearls and demand self-driving be so absurdly much safer than a human that it may never make it into the real world.

Edit to add: this is the problem with Apple Intelligence and the current batch of AI solutions. They never quite come close to human level. Every glitch in search results, every oddly written paragraph, every Siri brain fart is really obvious to humans. It’s the Uncanny Valley of thought. I can spot an LLM-written email a mile away. It’s hard to justify using these tools when you’ll ultimately be caught red-handed by a savvy reader.
 

wrylachlan

Ars Legatus Legionis
13,699
Subscriptor
My point was more in terms of order. When self-driving cars are solved (first), then I will believe we are on the way to AGI.
Why? AGI is wholly different than self-driving. If you want an intelligence that can write you a hard mathematical proof or choose an optimal treatment regimen for a patient, there are no split-second decisions needed. For self-driving it’s all split-second decisions. I see these as almost wholly unrelated.
 

gabemaroz

Ars Tribunus Militum
1,689
I would define AGI the way Robert Heinlein defines human nature.
A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.

AGI to laymen is the Matrix: a system that can beat a human at anything. Sure, for you and me the metric will be an expert system that’s better than a human at something, but that’ll be hard to sell to the public as general intelligence.
Not really. Artificial General Intelligence has nothing to do with beating a human at anything. It just means it can take on an (apparently) structureless, paradigm-light, novel situation, break it down into some sort of internal symbolic framework, build a (possibly erroneous) model of what is happening, and then operate from there. It's not about success or expertise; it's about flexibility, hypothesizing, extrapolating, and logical reasoning.

Even then, just like autonomous airliners or driverless cars, the bar to clear is impossibly high. The moment a pilotless airliner crashes, or even a self-driving car runs over some old lady, it’ll be all over the news. No amount of “well ashkully! Statistically speaking they’re still safer than a human!” will overcome that. People will clutch their pearls and demand self-driving be so absurdly much safer than a human that it may never make it into the real world.
That's because every failure is explained in retrospect as "this situation was under-represented in the training data." Meaning it is unadaptable, inflexible, and ultimately, dumb.

Cruise initially said that its self-driving car “braked aggressively to minimize impact” but later said the vehicle’s software made a mistake in registering where it hit the woman. The car tried to pull over but continued driving 7 mph for 20 feet with the woman still under the vehicle.
So no, the bar is not impossibly high, but it is certainly higher than you would set for a single person because these are deployed en masse. If a single driver hits someone, they stop the vehicle and the authorities are called. If that driver does not stop, their license would almost certainly be revoked (among other things). So if a software system does the same, should it not also lose its license? Or should the public continue serving as implicit beta testers while a software company 'works out the kinks'? Can the software (or the developers) even reason in retrospect about the decision making process? Or do they just add dummies to the underside of the car as part of future training and (metaphorically) input – bad, stop.

Mmmm... bit of a strawman, don't you think?
Am I misrepresenting the situation? Are these Waymos not operating in a North American urban environment? Are any of them operating in South America, the Middle East, Africa, continental Asia? Is this not at all pertinent to what most people would agree is 'autonomous self-driving'? How close are we really when the company has been operating for nearly twenty years (in various forms) and is now covering... five metropolitan areas?

Waymo, as of 2024, operates commercial robotaxi services in Phoenix (Arizona), San Francisco (California), and Los Angeles (California) with new services planned in Austin, Texas, Miami, Florida and Tokyo, Japan.
Why is a self-driving car different from what AGI would cover? If you drew a Venn diagram of what AGI should be able to do, would driving a car be firmly within it or not? Overlapping? Outside of the circle? Why?

AGI is wholly different than self-driving ... there are no split-second decisions needed... I see these as almost wholly unrelated.
I concede that the speed of processing is not inherently related to what AGI would (should) cover. But decision making in general absolutely is. As far as AGI being wholly different from self driving, see my questions above.

If we are going to talk about artificial general intelligence, then we are going to need to at least put together some boundaries and definitions, otherwise we are all talking past each other.

Here's a thought experiment: If you took a North American driver and put them into a car in Australia or Japan, most would generally be able to adapt to the change in traffic flow, perhaps after a period of overt caution and an occasional (accident-free) mistake, without prior training. That's general intelligence. It's not genius, it's just extrapolating from the known to the novel. Could a Waymo only trained in North America do the same? Why not?
 

Bonusround

Ars Tribunus Militum
1,831
Subscriptor
Am I misrepresenting the situation? Are these Waymos not operating in a North American urban environment? Are any of them operating in South America, the Middle East, Africa, continental Asia? Is this not at all pertinent to what most people would agree is 'autonomous self-driving'? How close are we really when the company has been operating for nearly twenty years (in various forms) and is now covering... five metropolitan areas?
You are not. But attempting to claim "self-driving taxis don't yet exist" because they aren't operating globally, or taking their customers off-roading, seems a strawman to me.

Why is a self-driving car different from what AGI would cover?
I'll give you one: autonomous driving can work without understanding human language. Traffic signs, numbers, place names are just discrete tokens.
 

xoa

Ars Legatus Legionis
12,209
Subscriptor++
We're admittedly straying heavily into SB territory now vs something Apple focused, but:
Even then, just like autonomous airliners or driverless cars, the bar to clear is impossibly high. The moment a pilotless airliner crashes, or even a self-driving car runs over some old lady, it’ll be all over the news. No amount of “well ashkully! Statistically speaking they’re still safer than a human!” will overcome that. People will clutch their pearls and demand self-driving be so absurdly much safer than a human that it may never make it into the real world.
Nah, you're absolutely 100% wrong on this one. Driving is one of the few areas where insurance imposes a rare dose of cold hard reality, and where the utility value is so tremendously high and immediate that everyone can see it. Tens of millions of people (including extremely powerful constituencies like parents and the elderly) who'd desperately like to have their own arbitrary car usage can't have it under the current "reverse centaur" situation where human brains have to do the driving. Even the vast majority of us drivers would love to be doing something else while in the car most of the time, if only sleeping. Everyone is familiar with the risks of DUI, yet also with the social pressures of being at a party, having a drink, and then needing to get home.

We've had a long history where only the rich had their own personal "self-driving" cars (via paying to have another human brain deal with it instead of their own). Lots of people have used taxis, with lots of very gnarly experiences, and even those are only available in a few areas at anything but exorbitant cost. Witness how fast and hard the adoption of Uber, Lyft, etc. was, despite tons of very well-publicized safety issues. It just doesn't matter, because even that is so useful.

So in fact I'm going to go farther: people would easily accept more, not less, danger vs humans. That Waymo demonstrates the obvious, that it can be dramatically less (since entire classes of major human crash causes cease to exist), is almost beside the point. Humanity only bleats about safety when it doesn't cost too much.
Edit to add: this is the problem with Apple Intelligence and the current batch of AI solutions. They never quite come close to human level. Every glitch in search results, every oddly written paragraph, every Siri brain fart is really obvious to humans. It’s the Uncanny Valley of thought. I can spot an LLM-written email a mile away. It’s hard to justify using these tools when you’ll ultimately be caught red-handed by a savvy reader.
Eh. Apple's efforts for sure, but the problems with others are for better and for much worse far more varied than that. On topic though I find Apple's efforts particularly disappointing because they shouldn't be having to take this crappy bandwagony next-quarter-shareholder-meeting approach vs focusing on specific actual useful deliverables that work.
 

gabemaroz

Ars Tribunus Militum
1,689
But attempting to claim "self-driving taxis don't yet exist" because they aren't operating globally, or taking their customers off-roading
I never said they don’t exist. I said they are deployed in an incredibly limited geographical area that is also well-regulated and highly predictable.

Let me also clarify what “off-road” means in this context. I’m not talking about dirt rallies; what I mean here is more akin to poorly marked: gravel, dirt, unmarked, unpaved roads, like you might see in rural driveways, farm roads, back country, or otherwise. Last-mile kinds of areas where the path is flat and regular but underused, and long enough that paving is uneconomical.

I’m still waiting for some explanation of how AGI would not be a superset of self-driving.

Let me reiterate for emphasis and clarity:

1. Autonomous self-driving cars are not a solved problem
2. Self-driving is an easier problem to solve than artificial general intelligence
3. Therefore, we are not close to artificial general intelligence
4. When self-driving cars are solved then we will be closer to AGI

And then the rest was about the edge cases where self-driving cars fail, why the gap remains quite large, and time-frame context for what “soon” and “imminent” mean outside of hype and lies.

In the larger context, as an example, Tesla has (falsely) claimed full autonomy is around the corner for about a decade. So if that subset of general intelligence (e.g. fully autonomous vehicles) remains unsolved, claims of AGI being imminent are likely even larger piles of bullshit.
 

gabemaroz

Ars Tribunus Militum
1,689
Where the utility value is so tremendously high and immediate that everyone can see it.
Absolutely. Everybody wants fully autonomous vehicles, myself included. It would remake the world.

I’m not pooh-poohing the state of the industry out of derision. The problem is just harder than assumed, progress has slowed, and the current paradigm isn’t working.

And so if an imminent solution isn’t coming for autonomous vehicles, AGI is even further out.

Apple has more money than some entire nations and they threw the towel in on their electric vehicle project because autonomy just wasn’t feasible.
 

xoa

Ars Legatus Legionis
12,209
Subscriptor++
Absolutely. Everybody wants fully autonomous vehicles, myself included. It would remake the world.

I’m not pooh-poohing the state of the industry out of derision. The problem is just harder than assumed, progress has slowed, and the current paradigm isn’t working.
I guess I don't get where you're coming from on this at all. Waymo seems to be ramping up just fine as planned. They did something like 25 million miles with 4 million passengers last year, which is a massive leap over 2023, itself a massive leap over 2022, etc. I have family and friends out there and all of them say the rides are excellent now, itself a major leap over the first days. Like SpaceX, they seem to be proceeding with deliberation, attacking some of the trickiest parts with the highest initial customer margin first and rolling forward as they build confidence. They started doing freeway testing last year and demoed it to CBS, CNET, and other news orgs like a month ago. They're doing testing in Florida, seeing how it does in that weather.

Not like I have any inside info but I see no signs that "the current paradigm isn't working", it looks more like the start of yet another technology S-curve (albeit done at a pace appropriate for the life/safety implications).
 

gabemaroz

Ars Tribunus Militum
1,689
I guess I don't get where you're coming from on this at all.
Uber sold off their self-driving stake. Apple is out. Cruise / GM quit the field. Tesla has yet to deliver and probably never will. So that really just leaves Waymo. And well.... they still have lots of problems.

I'm not sure where you are getting that 25 million miles number from for Waymo. This is what I'm seeing for California.

463457641_1306909680688223_3977988194464156873_n.jpg

And just for comparison, based on 2019 data, there are about 350 billion miles traveled annually by all vehicles on California roads. So that means self-driving Waymo cars are covering less than 0.001% of them, given that not all their vehicles are fully driverless.

They've been doing fully self-driving cars on public roads since 2015 – almost a decade.

And my point is that this problem is far from being solved after massive amounts of investment, research, development, etc. Many of the biggest names (with large war chests) have abandoned it entirely. So, again, this is still easier than AGI would be, and yet people are hyping its (AGI) arrival in 'a few thousand days.'

It's complete and utter bullshit. And the longer these estimations remain disconnected from reality and the hype bubble continues to be inflated, the more dangerous it becomes not just for the tech industry, but also the economy writ large.

There needs to be a serious discussion about whether we even have general intelligence at all given how much everyone is buying into the hype.
The creator of ChatGPT, OpenAI, is teaming up with another US tech giant, a Japanese investment firm and an Emirati sovereign wealth fund to build $500 billion of artificial intelligence (AI) infrastructure in the United States.
Insanity.

How does this relate to Apple? They gave up on their self-driving project. Out. Done. Most likely Tim Cook finally took a hard look at the numbers, the environment, the development pace, and said, enough is enough. Good.

Level 3 driver assist? Sure. Great. Largely functional and problem-free. But Level 5, fully autonomous? Nope.

Apple has, unfortunately, partially embraced the AI hype cycle. They've been more conservative than their competitors, absolutely. But unlike Project Titan, which lived and died in (relative) secrecy, the public at large has had Apple Intelligence foisted upon them.

Most of the use cases they have done so far will definitely see improvement over time, but there is no path to AGI from the models / paradigm they are using now. Furthermore, while Apple Maps was half-baked upon release, it is solid after more than ten years of work. Siri has been around almost as long, yet remains quite poor. That does not bode well.

Which is more dangerous? Spending $500 billion and succeeding ... or spending that amount (and possibly more) and failing? Personally, I think either path is equally terrible (at those eye-watering sums), but that's veering way, way off into the woods, much farther than I've already taken this thread.
 

xoa

Ars Legatus Legionis
12,209
Subscriptor++
Uber sold off their self-driving stake. Apple is out. Cruise / GM quit the field. Tesla has yet to deliver and probably never will. So that really just leaves Waymo. And well.... they still have lots of problems.
That link is from a year and a half ago, so the only thing it shows is how rapidly they're improving.
I'm not sure where you are getting that 25 million miles number from for Waymo. This is what I'm seeing for California.
I'm getting it from Waymo, who are operating in more places than just California.
And just for comparison, based on 2019 data, there are about 350 billion miles traveled annually by all vehicles on California roads. So that means self-driving Waymo cars are covering less than 0.001% of them, given that not all their vehicles are fully driverless.
Yes, and? There was a time when cars had driven less than 0.001% of the distance of horse-drawn carriages too. When people had used a GUI on a computer for less than 0.001% of the hours people had used command-line interfaces. The nature of technology S-curves is that they go very slowly, right up until they don't.
They've been doing fully self-driving cars on public roads since 2015 – almost a decade.

And my point is that this problem is far from being solved after massive amounts of investment, research, development, etc. Many of the biggest names (with large war chests) have abandoned it entirely.
Again, this sounds just like SpaceX and the Falcon 9 and now Starship, or endless other examples. Tipping points abound in technology. Maybe we're just using different definitions of "far" or "solved," but as far as I'm concerned, a product that's already in commercial deployment under very challenging real-world conditions, and positively reviewed, is well into the iteration game, even if it has plenty of distance to go. Apple itself has repeatedly been a master of this. The original 2007 iPhone, with no third-party applications and a host of other restrictions, was far, far from the device that ultimately became one of the two behemoths dominating the post-PC computing landscape, but it certainly got the ball rolling. And speaking of that, Microsoft, Nokia, RIM, and others certainly put "massive amounts of investment, research, development, etc." into iPhone/Android competitors. And failed. The market is full of areas where just a few players end up dominating, and throwing a lot of money at it is no guarantee of success.

So, again, this is still easier than AGI would be, and yet people are hyping its (AGI) arrival in 'a few thousand days.'
About AGI I have nothing to say at all, that seems like a completely different class of problem. I would certainly indulge in a precautionary principle precisely because it seems nobody actually has any real path, the potential risks are so high, and in terms of theory it feels like we're getting to around the computation/memory point that it could maybe happen if anyone knew how. But I'm definitely not hyping it.

How does this relate to Apple? They gave up on their self-driving project. Out. Done. Most likely Tim Cook finally took a hard look at the numbers, the environment, the development pace, and said, enough is enough. Good.
I agree, because I thought it was completely stupid for Apple to ever even think about it in the first place. It made zero sense relative to any core competency they've got, nor was it clear how they'd bring anything special to it. It seemed entirely a product of lots of money without any real vision of where to go next. Which I still consider an extremely worrisome trend at Apple under the last 5 years of Cook. Apple Intelligence itself is looking like an example of that, as is the unfortunate-seeming lack of serious effort with Apple Vision. Wearable displays are a much clearer next step and disruptor for Apple's core, and that's where I'd have expected an all-out effort under Jobs: do something like refine retinal-scanning displays and get any temporary thing like the AVP iterating fast to build up to that.

But Apple failing at cars says nothing about the rest of the industry. There was no reason to think they'd be any good at it (and indeed they weren't). This is a company that has had major issues just doing decent online services basically since iTools, that totally gave up on developing a Mac business market, that remains a damp squib in smart home stuff, all of which should be far more basic to their business. But their corporate DNA isn't really set up to walk and chew gum at the same time. Which has made them very good at ultra-polishing their core priority product, but leaves everything else on the back burner.
 

singebob

Ars Scholae Palatinae
766
Agreed. You see autonomous driving at the inflection point of an S-curve and I see it at the edge of a cliff.

Only time will tell but I think we’ve exhausted that avenue of discussion for the here and now.
I think the tipping point has only begun. I feel like the solutions to date have effectively been things like this.

1737885396824.jpeg

It may have gotten off the ground, but it wasn't a really workable solution. The approach wasn't right.
There will be a Wright Flyer moment where it all comes together - and it will be in the next few years.

And that's what tangentially concerns me most about personal AI assistants as long as everyone is grinding away at it. I'm already seeing implicit trust in what GPT churns out among "regular Joes," despite their stated distrust of AI. And given the centralised control that can be exercised over the models, it's a totalitarian state's wet dream endgame.