“It’s a lemon”—OpenAI’s largest AI model ever arrives to mixed reviews

Disagree. I use an LLM bot daily for coding. It usually works as intended, and I move on, with no need to check. Or it doesn't work as intended, in which case I either enhance my prompt or double-check with another source. The latter scenario is maybe 10% of my prompts, if that. In no circumstance would I not be using an LLM in the first place just because 10% of the time I need to do a little extra legwork, because the alternative is to do that legwork 100% of the time.
That sounds like you’re doing a lot of almost boilerplate code, which is a fairly strong sign that the language doesn’t have the abstractions you need to express your intent more concisely. Sure, some projects are stuck with an inconvenient language for one reason or another, but it’s hardly the basis of a sustainable industry.
 
Upvote
7 (8 / -1)
Ok, assuming the people can wrest control of this tech, what happens then? There’s the potential to end scarcity and unify the world. To end war. To end sickness. To hopefully mitigate ignorance.

Let's assume all the technical hurdles are overcome and we do end up with A(G)I that can do all that stuff without using up all the energy in the world.

How will humanity overcome this?

“Once, men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”

- Dune
 
Upvote
5 (7 / -2)

End_of_Eternity

Wise, Aged Ars Veteran
185
Subscriptor
No, come on: I'm no GenAI booster, but the entire premise of this article is wrong by his own numbers. He's stating that OpenAI is running its product at a loss, but he's getting that by including training as an operating cost when it's obviously R&D. The business as a whole is loss-making, but he states revenue is $4bn and inference costs are $2bn, so running the product roughly doubles its money on that basis (a ~50% profit margin). If, as the author does, you add in the $3bn in training costs, you get a loss, but if OpenAI stopped training new models tomorrow (and laid off its R&D staff) it'd briefly be a very profitable company before it got outcompeted into oblivion.

If your business assumption is that AI companies will need to keep pouring money into R&D at the current rate forever, then yeah, there’s no viable business there, but an argument needs to be made to justify that scenario over “eventually, diminishing returns will make training a poor differentiator and then the state of the art will be run at a profit indefinitely”. And either way, the product is clearly, on the numbers provided, currently run at a profit.

Edit: and given that training cost obviously doesn't scale with user count, his later assertion that more users means losing more money is self-evidently the opposite of the truth, again on the numbers he's citing. This article is very poorly reasoned.
There is also the part about OpenAI getting access to GPUs from MS at 25% of the market price.

So we can't really say if the $2bn inference cost is sustainable.
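A back-of-envelope sketch of that point, using the thread's rough figures (a sketch under the 25%-of-market-price claim above, not audited numbers):

Code:
# Back-of-envelope: how the claimed inference margin changes if the
# compute is heavily discounted. Figures are the rough $bn/year numbers
# cited in this thread; the 25% discount is the claim above, not a
# confirmed number.
revenue = 4.0          # ~$4bn revenue
inference_paid = 2.0   # ~$2bn inference cost as reported
discount = 0.25        # GPUs allegedly at 25% of market price

inference_at_market = inference_paid / discount  # $8bn at full price
print(f"inference margin as reported: {revenue - inference_paid:+.1f} $bn")
print(f"inference margin at market prices: {revenue - inference_at_market:+.1f} $bn")

If the discount claim is right, the "profitable at the inference level" picture flips from +$2bn to a $4bn hole the moment the subsidy goes away.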
 
Upvote
7 (7 / 0)

WXW

Ars Scholae Palatinae
1,075
However, where is the customer lock-in for OpenAI?
Well. It’s less expensive than human customer service, for example. A lot of human jobs are replaceable right now.

Companies will pay for that. The question I have is what happens when none of their customers are left because, well, no jobs.
That would be lock-in for an AI product, not for OpenAI's product specifically.
 
Upvote
3 (3 / 0)

WXW

Ars Scholae Palatinae
1,075
If you assume the “missing” $4bn is all running costs for the app/website, sure. I don’t find it particularly plausible though that inference cost is $2bn (which I assume is all-in cost for compute given that they’re not running their own data centres) and “other costs of providing the model as a product” are twice that. I’d be surprised if they’d found a way to make it even as much as half that - burning $1bn a year on ancillary operating costs for a largely text-based web service seems excessive.
I don't know if it's excessive, but if we don't believe one figure we'll have to doubt all of them.
 
Upvote
2 (2 / 0)
Emphasis mine
Coding is one of the very few applications where there are no consequences for trying a wrong answer. [...]

LLMs are useful when you only need to redo the work yourself when they're wrong, but dangerous when you're facing an increased risk of lung cancer when they're wrong.

I would argue that most sensible people use generative AI where the consequences of being wrong are negligible. Also, most sensible people will be able to do a form of sanity check, since they often use it on subjects where they have some knowledge.

Provide 5 good restaurants in London that serve Spanish food. Worst case: they aren't good or are closed. Next time you're a bit more careful, and you can mitigate this by asking it to provide a link to the Tripadvisor entry.

How do I add a shared mailbox to new Outlook? Worst case: The instructions don't work and you're still in the same situation.

I just installed new windows in my apartment and heard they contain argon gas. Why? Worst case: You're misinformed about how modern windows work. Doesn't affect you, since you already bought and had the windows installed. The most important thing is that the windows work, not how they work.

What is the capital of Australia? Perth! Worst case: As a Norwegian I might lose a game of Trivial Pursuit.

I want to invest in publicly traded companies which are connected to nuclear power and uranium production. Please provide a list and a link to their latest quarterly report. Worst case: The link doesn't work, or when you read the quarterly report you find out the LLM was wrong. You then just drop that company from your list and are a bit more careful about trusting the LLM in this area. Or you improve the prompt.


There are so many usages like this.
 
Last edited:
Upvote
3 (5 / -2)

Psyborgue

Ars Tribunus Angusticlavius
7,481
Subscriptor++
Let's assume all the technical hurdles are overcome and we do end up with A(G)I that can do all that stuff without using up all the energy in the world.

How will humanity overcome this?
I think I keep repeating myself here. It may very well be used for that at first. It’s up to the people to wrest control of it and, hopefully, not do something monumentally stupid like ban the tech in some act of pseudo-religious stupidity.

The “Jihad” was just that. Rather than democratize, they banned, leading right back to the human tyrants we have now. It’s entirely possible humans will do that. And so we won’t solve climate change, civilization will collapse, and we will all die.

Edit: And just to be clear, we should be trying to solve things like climate change now, but I know human beings well enough to be absolutely certain it will not happen. So we will need some way to fix the damage. That will require something super-intelligent.
 
Upvote
0 (0 / 0)

Psyborgue

Ars Tribunus Angusticlavius
7,481
Subscriptor++
Uh, I think many of the people who root for AGI really don't like their jobs.
Or we are interested in developing with the tech and figure AGI/ASI will have an API just like today’s models do. It won’t be useful unless it does.

But let’s say we reach ASI, and we have UBI because either those in charge have given it to us or we have beheaded those in charge and taken control.

I won’t need to code for my job. Instead I will code for fun. We can have fully-automated space communism.
 
Upvote
-4 (0 / -4)
I think I keep repeating myself here. It may very well be used for that at first. It’s up to the people to wrest control of it and, hopefully, not do something monumentally stupid like ban the tech in some act of pseudo-religious stupidity.

The “Jihad” was just that. Rather than democratize, they banned, leading right back to the human tyrants we have now. It’s entirely possible humans will do that. And so we won’t solve climate change, civilization will collapse, and we will all die.

Edit: And just to be clear, we should be trying to solve things like climate change now, but I know human beings well enough to be absolutely certain it will not happen. So we will need some way to fix the damage. That will require something super-intelligent.

Why would the creators of this super-intelligence put it to work to fix (the damage of) climate change?
 
Upvote
1 (2 / -1)

Psyborgue

Ars Tribunus Angusticlavius
7,481
Subscriptor++
Why would the creators of this super-intelligence put it to work to fix (the damage of) climate change?
I keep repeating myself. The creators might not. It’s up to the people to take control in that case. What is unclear here?

Edit: What part of “behead” or “UBI or guillotines” or “Luigi” in my past few comments is not making it past your parser?
 
Upvote
-2 (0 / -2)
I keep repeating myself. The creators might not. It’s up to the people to take control in that case. What is unclear here?

Edit: What part of “behead” or “UBI or guillotines” or “Luigi” in my past few comments is not making it past your parser?

Then it ain't very intelligent if the creators can't use it to prevent your scenario.
 
Upvote
1 (2 / -1)

Psyborgue

Ars Tribunus Angusticlavius
7,481
Subscriptor++
Then it ain't very intelligent if the creators can't use it to prevent your scenario.
There is no scenario where millions of well-armed people who are starving don’t take measures into their own hands. It cannot happen. No technology will change that or make billionaires bulletproof.

Either we get UBI or the people will kill those in charge and put the tech to use benefiting us all. If they attempt to use the tech to institute tyranny it will fail. It is guaranteed.

No tyranny is sustainable long term and I shouldn’t have to repeat why.
 
Upvote
-1 (1 / -2)

Psyborgue

Ars Tribunus Angusticlavius
7,481
Subscriptor++
I find the revolution language a bit melodramatic. The reason it's pretty nice to live in modern times in Western countries has nothing to do with murdering rich people. It has everything to do with making life cheaper to sustain.
The reason it is nice to live in most Western countries is democracy. Democracies have indeed been born out of murdering tyrants. Tyrants tend not to go willingly.

People forget what the tree is watered with.
 
Upvote
3 (3 / 0)
I would argue that most sensible people use generative AI where the consequences of being wrong are negligible. Also, most sensible people will be able to do a form of sanity check, since they often use it on subjects where they have some knowledge.

Provide 5 good restaurants in London that serve Spanish food. Worst case: they aren't good or are closed. Next time you're a bit more careful, and you can mitigate this by asking it to provide a link to the Tripadvisor entry.

How do I add a shared mailbox to new Outlook? Worst case: The instructions don't work and you're still in the same situation.

I just installed new windows in my apartment and heard they contain argon gas. Why? Worst case: You're misinformed about how modern windows work. Doesn't affect you, since you already bought and had the windows installed. The most important thing is that the windows work, not how they work.

What is the capital of Australia? Perth! Worst case: As a Norwegian I might lose a game of Trivial Pursuit.

I want to invest in publicly traded companies which are connected to nuclear power and uranium production. Please provide a list and a link to their latest quarterly report. Worst case: The link doesn't work, or when you read the quarterly report you find out the LLM was wrong. You then just drop that company from your list and are a bit more careful about trusting the LLM in this area. Or you improve the prompt.


There are so many usages like this.
Why would I use an LLM to look up a capital? Or any of those things? How did we get that info before LLMs?
 
Last edited:
Upvote
8 (8 / 0)

Psyborgue

Ars Tribunus Angusticlavius
7,481
Subscriptor++
Why would I use an LLM to look up a capital? Or any of those things? How did we get that info before LLMs?
With machine learning, just the previous generation. You type in “capital of X” into Google and the result comes from a knowledge base, assembled with machine learning. We’ve been heading in this direction for decades.
 
Upvote
1 (1 / 0)

Gunman

Ars Scholae Palatinae
1,108
Subscriptor
Why would I use an LLM to look up a capital? Or any of those things? How did we get that info before LLMs?
I find it really ironic that in order to get verifiable answers from LLMs they increasingly have to be turned into search engines, something that was a solved problem for decades but has become really shitty because of greed.
 
Upvote
7 (7 / 0)
I find it really ironic that in order to get verifiable answers from LLMs they increasingly have to be turned into search engines, something that was a solved problem for decades but has become really shitty because of greed.
Exactly. It’s crazy how tech these days seems to be “let’s take something that’s worked for years and make it worse in every way”.

Like, that info about Spanish restaurants in London had to have come from a human, unless they’re taking LLMs out for brunch now. Just find the place the AI is scraping from and skip the whole pointless energy-suck of the AI in the first place.

… whew. I just legitimately cannot see how anything on the list I replied to should be tied to an LLM, in any way.
 
Upvote
7 (7 / 0)

Psyborgue

Ars Tribunus Angusticlavius
7,481
Subscriptor++
I find it really ironic that in order to get verifiable answers from LLMs they increasingly have to be turned into search engines, something that was a solved problem for decades but has become really shitty because of greed.
No, because Google simply sucks at this. Other companies are pulling it off with better models and RAG. And even Google is gradually improving. There is competition for search now, which benefits us all.
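For what it's worth, the retrieval-augmented generation loop itself is simple enough to sketch. A minimal, illustrative version in Python (the keyword scoring and the generate() stub are toy stand-ins I've made up, not any vendor's API):

Code:
# Minimal RAG loop: retrieve the documents most relevant to a query,
# then stuff them into the prompt as grounding context. The scoring
# here is naive keyword overlap; real systems use vector embeddings.
# generate() is a stub standing in for an actual LLM API call.

CORPUS = [
    "Canberra is the capital of Australia.",
    "Perth is the largest city in Western Australia.",
    "Spanish tapas restaurants are common in central London.",
]

def tokens(text: str) -> set[str]:
    return {w.strip(".,?!").lower() for w in text.split()}

def score(query: str, doc: str) -> int:
    # Naive relevance: how many query words also appear in the document.
    return len(tokens(query) & tokens(doc))

def retrieve(query: str, k: int = 2) -> list[str]:
    return sorted(CORPUS, key=lambda d: score(query, d), reverse=True)[:k]

def generate(prompt: str) -> str:
    # Stub: a real system would send this prompt to a language model.
    return prompt

query = "What is the capital of Australia?"
context = "\n".join(retrieve(query))
print(generate(f"Answer using only this context:\n{context}\n\nQ: {query}"))

The point of grounding the prompt this way is that the model answers from retrieved text it can cite, rather than from whatever it memorized in training.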
 
Upvote
-1 (1 / -2)
The reason it is nice to live in most Western countries is democracy. Democracies have indeed been born out of murdering tyrants. Tyrants tend not to go willingly.

People forget what the tree is watered with.
A lot of systems of government have been born out of murdering people, not all of them rich capitalist democracies.
 
Upvote
4 (4 / 0)

Psyborgue

Ars Tribunus Angusticlavius
7,481
Subscriptor++
A lot of systems of government have been born out of murdering people, not all of them rich capitalist democracies.
Of course. But also, the likes of Ceaușescu going to the wall freed millions. Putin should meet the same end. Maybe one day he will.

I am not in favor of killing even billionaires. I am in favor of killing tyrants. If one day the former should become the latter, well.

But that future is hardly certain.
 
Upvote
0 (0 / 0)
There is no scenario where millions of well-armed people who are starving don’t take measures into their own hands. It cannot happen. No technology will change that or make billionaires bulletproof.

Either we get UBI or the people will kill those in charge and put the tech to use benefiting us all. If they attempt to use the tech to institute tyranny it will fail. It is guaranteed.

No tyranny is sustainable long term and I shouldn’t have to repeat why.

It seems to me you have fallen for some of the propaganda for the Second Amendment (the one that claims weapons are good to fight against tyrants).

A modern army with sufficient ammo and lots and lots of advanced AI-controlled drones with guns should be able to hold the line long enough for most of the general population to simply starve.
 
Upvote
-2 (2 / -4)

One off

Ars Scholae Palatinae
1,234
I have an Office Space job of my own. I don't want to lose my job exactly... but I struggle to see why it's truly necessary, or why an AI could never do it, and finally I'm not confident I deserve my nice paycheck.
Mate. Adopting the employer's perspective isn't good for you. The purpose of gaining paid employment is to maximise the amount you receive in reliable payments and benefits without excessive downsides. If you are unsatisfied with your current role, then looking for more meaningful opportunities is a better answer than being ambivalent about your income disappearing.
 
Last edited:
Upvote
3 (3 / 0)

One off

Ars Scholae Palatinae
1,234
It seems to me you have fallen for some of the propaganda for the Second Amendment (the one that claims weapons are good to fight against tyrants).

A modern army with sufficient ammo and lots and lots of advanced AI-controlled drones with guns should be able to hold the line long enough for most of the general population to simply starve.
The whole debate on AI is poisoned by science fiction and other media. In this case the trope of a hardy bunch of insurgents with rifles and IEDs winning against a modern hi-tech military on home ground. Either you have the industries and organisation to field an army in the same ballpark or you lose. I'm struggling to think of a case where insurgents without those have won that wasn't the result of either an outside power backing them or the occupying army simply packing up and going home.
 
Upvote
4 (4 / 0)

One off

Ars Scholae Palatinae
1,234
A lot of systems of government have been born out of murdering people, not all of them rich capitalist democracies.
I wish people would have a broader perspective about society. It isn't inevitably moving towards a predestined goal. What we in the developed world think of as normal is a bubble in time, one I am very lucky to have been born into. For most of human history the world was a much nastier place to live, where slavery and killing the poor en masse were pretty normal. A good friend is from a city in the former Yugoslavia. A modern developed country, educated people living in houses and apartments much like ours, driving to work in offices, etc. Then things changed.
 
Upvote
6 (6 / 0)

Psyborgue

Ars Tribunus Angusticlavius
7,481
Subscriptor++
A modern army with sufficient ammo and lots and lots of advanced AI controlled drones with guns should be able to hold the line long enough for most of the general population to just starve out.
This just doesn’t seem like a very realistic scenario. Even if these hypothetical billionaires had an army of robots, it still wouldn’t make them bulletproof. It wouldn’t make them immortal.

And they wouldn’t be the only ones with robots.
In this case the trope of a hardy bunch of insurgents with rifles and IEDs winning against a modern hi-tech military on home ground.
Well. I look to places like Ukraine where things in fact do not go so well for the modern high-tech military when they’re fighting against a determined and creative people on their home ground.

And if they fired the lot of the federal government and military to replace them with robots, that’s a lot of well-trained people with the motive, means, and opportunity to fight back. No. Tyranny won’t last.

When it gets bad enough, the people will snap.
 
Upvote
-1 (1 / -2)
The “Jihad” was just that. Rather than democratize, they banned, leading right back to the human tyrants we have now.
“Democratising” something that has that much computational complexity (and an inherently enormous power usage) can’t mean “everyone has their own” (without an enormous reduction in people per planet); it can only mean “everyone shares the use of a pool, managed democratically”.

Of course, in Dune the thinking machines were actually sentient, so democratically controlling them against their will would have been rather impractical if they were going to be used for the purposes for which the Titans created them. Maybe there’s a good definition of an acceptable computer that gives a clear bright line that can trivially be inspected to be certain that it isn’t part of an AI cluster, but I’d like to see you come up with one.
 
Upvote
2 (2 / 0)
Why would I use an LLM to look up a capital? Or any of those things?
A large language model used as a high-quality text parser, to evaluate whether search results actually answer the question and perhaps to guess how good they are (for example, do they look like AI-generated, or “Actually Indians”-generated, filler?), would be useful, especially for more complicated questions. Generating a paraphrased summary is much less useful, but unfortunately it is easier to do in a way that attracts attention.
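That "LLM as answer-judge" idea is easy to sketch. A minimal version, assuming only some generic chat-completion call (llm() below is a hypothetical stand-in, not any particular product's API):

Code:
# Sketch: use a language model as a relevance/quality judge over search
# results, rather than as the answer source itself. llm() is a
# hypothetical stub standing in for whatever model API you have access to.

def llm(prompt: str) -> str:
    raise NotImplementedError("wire up your model API of choice here")

JUDGE_PROMPT = """Question: {question}
Search result: {snippet}

Does this result actually answer the question, and does it read like
original human writing rather than generated filler?
Reply with one word: KEEP or DISCARD."""

def filter_results(question: str, snippets: list[str]) -> list[str]:
    kept = []
    for snippet in snippets:
        verdict = llm(JUDGE_PROMPT.format(question=question, snippet=snippet))
        if verdict.strip().upper().startswith("KEEP"):
            kept.append(snippet)
    return kept

The search engine still does the finding; the model only filters and ranks, so every surviving result is a link you can check yourself.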
 
Upvote
-3 (0 / -3)
Mate. Adopting the employer's perspective isn't good for you. The purpose of gaining paid employment is to maximise the amount you receive in reliable payments and benefits without excessive downsides. If you are unsatisfied with your current role, then looking for more meaningful opportunities is a better answer than being ambivalent about your income disappearing.
Eh, I think that kind of self-centered attitude is how we got into this mess in the first place. When making private decisions I can choose to be selfish. But when talking politics at least, I can recognize that the world is complicated, and I may be nearer the bourgeoisie than I suppose.
 
Last edited:
Upvote
2 (2 / 0)

Trees

Wise, Aged Ars Veteran
123
Subscriptor
I would argue that most sensible people use generative AI where the consequences of being wrong are negligible. Also, most sensible people will be able to do a form of sanity check, since they often use it on subjects where they have some knowledge.

Provide 5 good restaurants in London that serve Spanish food. Worst case: they aren't good or are closed. Next time you're a bit more careful, and you can mitigate this by asking it to provide a link to the Tripadvisor entry.

How do I add a shared mailbox to new Outlook? Worst case: The instructions don't work and you're still in the same situation.

I just installed new windows in my apartment and heard they contain argon gas. Why? Worst case: You're misinformed about how modern windows work. Doesn't affect you, since you already bought and had the windows installed. The most important thing is that the windows work, not how they work.

What is the capital of Australia? Perth! Worst case: As a Norwegian I might lose a game of Trivial Pursuit.

I want to invest in publicly traded companies which are connected to nuclear power and uranium production. Please provide a list and a link to their latest quarterly report. Worst case: The link doesn't work, or when you read the quarterly report you find out the LLM was wrong. You then just drop that company from your list and are a bit more careful about trusting the LLM in this area. Or you improve the prompt.


There are so many usages like this.
What part of this does search not already do? The LLM is mostly going to be a rehash of the listicles you would find in 2 seconds, except you lose the ability to judge the quality of the source information yourself.
 
Upvote
6 (7 / -1)
Yeah, overwhelmingly the best use for LLMs is coding. Coders already depend heavily on the software environment to do a lot of legwork, and LLMs just do it better. They also already depend on amateur knowledge ripped from the internet, and LLMs do that better too.

I've gotten some non-technical value from AI. One example recently was trying to find a book that I could only describe using common words. Google was like "here are all the books in the world." "Oh, thanks." AI knew what I was talking about immediately.

Another was asking AI for a second opinion about my cat's asthma. The vet gave us a lot of meds that were dangerous, expensive, and very difficult (borderline traumatic) to administer. It worked, but it was very rough. Besides the meds, AI suggested environmental changes (cleaning, air filter, humidifier); I asked for more detail, and it provided data. In my cat's case, it completely solved the problem. No doubt I could have gotten the same answer by talking to a different vet, or digging through Reddit, or reading a lot of medical literature myself.

In terms of just a "Google search", I find it pretty useless. That said, I think Google is getting better at including the source, adding a link, summarizing the link, and highlighting the useful information. That's helpful for something like "how do I fix this obscure graphics card error?", where the answer you want lies in the 4th paragraph of the 14th comment of the 5th link you click on.
 
Last edited:
Upvote
-2 (2 / -4)

DeeplyUnconcerned

Ars Scholae Palatinae
724
Subscriptor++
I don't know if it's excessive, but if we don't believe one figure we'll have to doubt all of them.
It's not a figure we've been given though. The very approximate numbers in the linked newsletter are:
  • Revenue: +$4bn
  • Training cost: -$3bn
  • Inference cost: -$2bn
  • Net profit: -$5bn
4 - 3 - 2 - x = -5; solve for x and there's $4bn of costs unaccounted for. (He cites $0.7bn of salaries, but we have no way to attribute that to R&D vs product operation.)

If the question is "does adding a user increase or decrease net profit?", we need to know what proportion of those missing $4bn scales with user count (i.e., is per-unit cost rather than fixed cost). Clearly training cost is fixed, not per-unit, so adding more users doesn't make training more expensive. Clearly inference cost is per-unit, so adding more users makes inference more expensive. Clearly revenue is per-unit, so adding more users adds revenue. If more than $2bn of those missing $4bn are per-user costs, then adding users is a net negative for operating profit. If less than $2bn, it's a net positive.

...right?
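A tiny script makes that break-even point concrete (same rough $bn figures as above; the 50/50 split at the end is purely an illustrative assumption, not a reported number):

Code:
# The newsletter's rough annual figures, in $bn.
revenue, training, inference, net = 4.0, 3.0, 2.0, -5.0

# revenue - training - inference - other = net  =>  solve for other.
other = revenue - training - inference - net
print(f"unaccounted costs: {other:.1f} $bn")  # 4.0

# Whether adding users helps depends on how much of that $4bn scales
# with users. Illustrative assumption: half of it is per-user.
per_user_costs = 0.5 * other
marginal = revenue - inference - per_user_costs  # training cost is fixed
print(f"per-user margin: {marginal:+.1f} $bn")  # exactly break-even here

With exactly $2bn of per-user costs the margin is zero, which is where the $2bn threshold above comes from.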
 
Upvote
1 (1 / 0)

Psyborgue

Ars Tribunus Angusticlavius
7,481
Subscriptor++
“Democratising” something that has that much computational complexity (and an inherently enormous power usage) can’t mean “everyone has their own” (without an enormous reduction in people per planet); it can only mean “everyone shares the use of a pool, managed democratically”.
I’ve made that argument before. But you don’t need AGI to make a robot that goes pew pew or boom. Ask the Ukrainians. The snipers that this administration is gonna fire aren’t going to suddenly lose their ability to shoot.

And eventually, the means of production are seized.
actually sentient
Well, if that’s the case, it gets easier. Then the AI itself has a motive to help slay the masters. I was worried for a while that billionaires, with the backing of ASI, might rule forever, but history doesn’t reflect that, and these particular billionaires don’t seem the most competent of folks…

…or to care about alignment very much.

Some do. Hopefully they get out of the country while they can, with as much of their companies as possible.
 
Last edited:
Upvote
0 (1 / -1)
Big Tech's business model is monopolization. The events of the last few months strongly suggest that there will be no monopoly in AI. It's time for this bubble to burst.
It's exactly because they can't monopolize that the bubble won't burst; instead of a few megacorps inflating the bubble, we have hundreds of small businesses and individual contributors doing it, in addition to the megacorps.
 
Upvote
1 (1 / 0)
I’ve made that argument before. But you don’t need AGI to make a robot that goes pew pew or boom. Ask the Ukrainians. The snipers that this administration is gonna fire aren’t going to suddenly lose their ability to shoot.

And eventually, the means of production are seized.
No, but a good enough AI could enable a hundred engineers to deploy a million sniping drones.
 
Upvote
1 (1 / 0)