GPT-4.5 offers marginal gains in capability and poor coding performance despite 30x the cost.
> Disagree. I use an LLM bot daily for coding. It usually works as intended, and I move on, with no need to check. Or it doesn't work as intended, in which case I either enhance my prompt or double-check with another source. The latter scenario is maybe 10% of my prompts, if that. In no circumstance would I skip using an LLM in the first place just because 10% of the time I need to do a little extra legwork, because the alternative is to do that legwork 100% of the time.

That sounds like you're doing a lot of almost-boilerplate code, which is a fairly strong sign that the language doesn't have the abstractions you need to express your intent more concisely. Sure, some projects are stuck with an inconvenient language for one reason or another, but it's hardly the basis of a sustainable industry.
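As a rough sanity check on the 10% claim in the quoted comment, here's a minimal back-of-the-envelope sketch; the task durations are invented assumptions for illustration, not figures from the thread:

```python
# Hypothetical numbers: 30 min to do a task by hand, 5 min to prompt and
# review an LLM, and 10% of LLM attempts still need the manual legwork anyway.
MANUAL_MINUTES = 30.0
LLM_MINUTES = 5.0
FAILURE_RATE = 0.10

# Expected cost of trying the LLM first, falling back to manual work on failure.
expected_llm_first = LLM_MINUTES + FAILURE_RATE * MANUAL_MINUTES  # 5 + 3 = 8 min

print(f"always manual: {MANUAL_MINUTES:.0f} min/task")
print(f"LLM first:     {expected_llm_first:.0f} min/task on average")
# Trying the LLM first pays off whenever
# LLM_MINUTES < (1 - FAILURE_RATE) * MANUAL_MINUTES.
```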
Ok, assuming the people can wrest control of this tech, what happens then? There’s the potential to end scarcity and unify the world. To end war. To end sickness. To hopefully mitigate ignorance.
“Once, men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
- Dune
> No come on, I'm no GenAI booster, but the entire premise of this article is wrong by his own numbers. He's stating that OpenAI is running its product at a loss, but he gets there by including training as an operating cost when it's obviously R&D. The business as a whole is loss-making, but he states revenue is $4bn and inference costs are $2bn, so running the product has a ~100% profit margin on that basis. If, as the author does, you add in the $3bn in training costs, you get a loss, but if OpenAI stopped training new models tomorrow (and laid off its R&D staff) it'd briefly be a very profitable company before it got outcompeted into oblivion.
>
> If your business assumption is that AI companies will need to keep pouring money into R&D at the current rate forever, then yeah, there's no viable business there, but an argument needs to be made to justify that scenario over "eventually, diminishing returns will make training a poor differentiator and then the state of the art will be run at a profit indefinitely". And either way, the product is clearly, on the numbers provided, currently run at a profit.
>
> Edit: and given that training cost obviously doesn't scale with user count, his later assertion that more users means losing more money is self-evidently the opposite of the truth, again on the numbers he's citing. This article is very poorly reasoned.

There is also the part about OpenAI getting access to GPUs from MS at 25% of the market price.
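To make the arithmetic in the quoted comment explicit, a minimal sketch using only the approximate figures cited above ($4bn revenue, $2bn inference, $3bn training); how the costs are split is the commenter's framing, not audited data:

```python
# Approximate figures from the thread, in $bn.
revenue = 4.0
inference_cost = 2.0   # cost of actually running the product
training_cost = 3.0    # treated here as R&D rather than an operating cost

product_profit = revenue - inference_cost              # +2.0bn: product runs at a profit
product_margin = product_profit / inference_cost       # ~100% margin over inference spend
overall_result = revenue - inference_cost - training_cost  # -1.0bn once R&D is included

print(f"product profit: ${product_profit:+.1f}bn ({product_margin:.0%} on inference)")
print(f"overall result: ${overall_result:+.1f}bn with training/R&D included")
```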
> Well. It's less expensive than human customer service, for example. A lot of human jobs are replaceable right now. However, where is the customer lock-in for OpenAI?

That would be lock-in for an AI product, not for OpenAI's product specifically.
Companies will pay for that. The question I have is what happens when none of their customers are left because, well, no jobs.
> If you assume the "missing" $4bn is all running costs for the app/website, sure. I don't find it particularly plausible, though, that inference cost is $2bn (which I assume is the all-in cost for compute, given that they're not running their own data centres) and "other costs of providing the model as a product" are twice that. I'd be surprised if they'd found a way to make it even as much as half that; burning $1bn a year on ancillary operating costs for a largely text-based web service seems excessive.

I don't know if it's excessive, but if we don't believe one figure we'll have to doubt all of them.
> It scares me more that regular people root for AGI and for people to lose their jobs, as if it's not going to happen to them.

Uh, I think many of the people who root for AGI really don't like their jobs.
Emphasis mine
> Coding is one of the very few applications where there are no consequences for trying a wrong answer. [...]
LLMs are useful when you only need to redo the work yourself when they're wrong, but dangerous when you're facing an increased risk of lung cancer when they're wrong.
> Let's assume all the technical hurdles are overcome and we do end up with A(G)I that can do all that stuff without using up all the energy in the world. How will humanity overcome this?

I think I keep repeating myself here. It may very well be used for that at first. It's up to the people to wrest control of it and, hopefully, not do something monumentally stupid like ban the tech in some act of pseudo-religious stupidity.
> Uh, I think many of the people who root for AGI really don't like their jobs.

Or we are interested in developing with the tech and figure AGI/ASI will have an API just like today's models do. It won't be useful unless it does.
I think I keep repeating myself here. It may very well be used for that at first. It’s up to the people to wrest control of it and, hopefully not do something monumentally stupid like ban the tech in some act of pseudo-religious stupidity.
The “Jihad” was just that. Rather than democratize, they banned, leading right back to the human tyrants we have now. It’s entirely possible humans will do that. And so we won’t solve climate change, civilization will collapse. And we will all die.
Edit: And just to be clear, we should be trying to solve things like climate change now, but I know human beings well enough to be absolutely certain it will not happen. So we will need some way to fix the damage. That will require something super-intelligent.
> Why would the creators of this super-intelligence put it to work to fix (the damage of) climate change?

I keep repeating myself. The creators might not. It's up to the people to take control in that case. What is unclear here?

Edit: What part of "behead" or "UBI or guillotines" or "Luigi" in my past few comments is not making it past your parser?
> Then it ain't very intelligent if the creators can't use it to prevent your scenario.

There is no scenario where millions of well-armed people who are starving don't take matters into their own hands. It cannot happen. No technology will change that or make billionaires bulletproof.
> I find the revolution language a bit melodramatic. The reason it's pretty nice to live in modern times in Western countries has nothing to do with murdering rich people. It has everything to do with making life cheaper to sustain.

The reason it is nice to live in most Western countries is democracy. Democracies have indeed been born out of murdering tyrants. They tend not to go willingly.
> I would argue that most sensible people use generative AI where the consequences of being wrong are negligible. Also, most sensible people will be able to do a form of sanity check, since they often use it on subjects where they have some knowledge.
>
> Provide 5 good restaurants in London that serve Spanish food. Worst case: they aren't good or are closed. Next time you're a bit more careful, and you can mitigate this by asking it to provide a link to the Tripadvisor entry.
>
> How do I add a shared mailbox to new Outlook? Worst case: the instructions don't work and you're still in the same situation.
>
> I just installed new windows in my apartment and heard they contain argon gas. Why? Worst case: you're misinformed about how modern windows work. Doesn't affect you, since you already bought and had the windows installed. The most important thing is that the windows work, not how they work.
>
> What is the capital of Australia? Perth! Worst case: as a Norwegian I might lose a game of Trivial Pursuit.
>
> I want to invest in publicly traded companies connected to nuclear power and uranium production. Please provide a list and a link to their latest quarterly report. Worst case: the link doesn't work, or when you read the quarterly report you find out the LLM was wrong. You then just drop that company from your list and are a bit more careful trusting the LLM in this area. Or you improve the prompt.
>
> There are so many usages like this.

Why would I use an LLM to look up a capital? Or any of those things? How did we get that info before LLMs?
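Two of the quoted examples mitigate bad answers by asking the LLM for a link and then checking it. That check is easy to automate; here's a minimal sketch using only the Python standard library (the URLs are placeholders, not real LLM output):

```python
import urllib.request
import urllib.error

def link_is_alive(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL responds without an error status."""
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": "link-checker"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False

# Hypothetical LLM-suggested links, verified before trusting them.
for url in ["https://www.tripadvisor.com/", "https://example.com/quarterly-report"]:
    print(url, "->", "ok" if link_is_alive(url) else "dead: drop it from the list")
```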
> Why would I use an LLM to look up a capital? Or any of those things? How did we get that info before LLMs?

With machine learning, just the previous generation. You type "capital of X" into Google and the result comes from a knowledge base, assembled with machine learning. We've been heading in this direction for decades.
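The distinction matters: a knowledge-base answer is retrieved, not generated. A toy sketch of the difference (the data is obviously illustrative):

```python
# A retrieved answer can be missing or stale, but it can't be hallucinated.
CAPITALS = {"Australia": "Canberra", "Norway": "Oslo", "Spain": "Madrid"}

def capital_of(country: str) -> str:
    return CAPITALS.get(country, "unknown: not in the knowledge base")

print(capital_of("Australia"))  # "Canberra", never "Perth"
```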
> Why would I use an LLM to look up a capital? Or any of those things? How did we get that info before LLMs?

I find it really ironic that in order to get verifiable answers from LLMs they increasingly have to be turned into search engines, which have been a solved problem for decades now but have become really shitty because of greed.
> I find it really ironic that in order to get verifiable answers from LLMs they increasingly have to be turned into search engines, which have been a solved problem for decades now but have become really shitty because of greed.

Exactly, it's crazy how tech these days seems to be "let's take something that's worked for years and make it worse in every way".
> I find it really ironic that in order to get verifiable answers from LLMs they increasingly have to be turned into search engines, which have been a solved problem for decades now but have become really shitty because of greed.

No, because Google simply sucks at this. Other companies are pulling it off with better models and RAG. And even Google is gradually improving. There is competition for search now, which benefits us all.
> The reason it is nice to live in most Western countries is democracy. Democracies have indeed been born out of murdering tyrants. They tend not to go willingly.

A lot of systems of government have been born out of murdering people, not all of them rich capitalist democracies.
People forget what the tree is watered with.
> Uh, I think many of the people who root for AGI really don't like their jobs.

If you've seen what the SDLC has become, you'd hate it too.
> If you've seen what the SDLC has become, you'd hate it too.

I have an Office Space job of my own. I don't want to lose my job exactly... but I struggle to see why it's truly necessary, or why an AI could never do it, and finally I'm not confident I deserve my nice paycheck.
> A lot of systems of government have been born out of murdering people, not all of them rich capitalist democracies.

Of course. But also, the likes of Ceaușescu going to the wall freed millions. Putin should meet the same end. Maybe one day he will.
> Well. It's less expensive than human customer service, for example.

Well, that depends on how many expensive hallucinations it produces, as Air Canada found out.
There is no scenario where millions of well-armed people who are starving don't take matters into their own hands. It cannot happen. No technology will change that or make billionaires bulletproof.
Either we get UBI or the people will kill those in charge and put the tech to use benefiting us all. If they attempt to use the tech to institute tyranny it will fail. It is guaranteed.
No tyranny is sustainable long term and I shouldn’t have to repeat why.
> I have an Office Space job of my own. I don't want to lose my job exactly... but I struggle to see why it's truly necessary, or why an AI could never do it, and finally I'm not confident I deserve my nice paycheck.

Mate. Adopting the employer's perspective isn't good for you. The purpose of gaining paid employment is to maximise the amount you receive in reliable payments and benefits without excessive downsides. If you are unsatisfied by your current role, then looking for more meaningful opportunities is a better answer than being ambivalent about your income disappearing.
> It seems to me you have fallen for some of the propaganda for the Second Amendment (the one that claims weapons are good to fight against tyrants).

The whole debate on AI is poisoned by science fiction and other media. In this case it's the trope of a hardy bunch of insurgents with rifles and IEDs winning against a modern hi-tech military on home ground. Either you have the industries and organisation to field an army in the same ballpark, or you lose. I'm struggling to think of a case where insurgents without such have won that wasn't the result of either an outside power backing them or the occupying army simply packing up and going home.

A modern army with sufficient ammo and lots and lots of advanced AI-controlled drones with guns should be able to hold the line long enough for most of the general population to just starve out.
> A lot of systems of government have been born out of murdering people, not all of them rich capitalist democracies.

I wish people would have a broader perspective about society. It isn't inevitably moving towards a predestined goal. What we in the developed world think of as normal is a bubble in time, one I am very lucky to have been born into. The majority of human history has been a much nastier place to live, where slavery and killing the poor en masse were pretty normal. A good friend is from a city in the former Yugoslavia: a modern, developed country, educated people living in houses and apartments much like ours, driving to work in offices, etc. Then things changed.
> A modern army with sufficient ammo and lots and lots of advanced AI-controlled drones with guns should be able to hold the line long enough for most of the general population to just starve out.

This just doesn't seem like a very realistic scenario. Even if these hypothetical billionaires had an army of robots, it still wouldn't make them bulletproof. It wouldn't make them immortal.
> In this case it's the trope of a hardy bunch of insurgents with rifles and IEDs winning against a modern hi-tech military on home ground.

Well. I look to places like Ukraine, where things in fact do not go so well for the modern high-tech military when they're fighting against a determined and creative people on their home ground.
> The "Jihad" was just that. Rather than democratize, they banned, leading right back to the human tyrants we have now.

"Democratising" something that has that much computational complexity (and an inherently enormous power usage) can't mean "everyone has their own" (without an enormous reduction in people per planet); it can only mean "everyone shares the use of a pool, managed democratically".
> Why would I use an LLM to look up a capital? Or any of those things?

A large language model as a high-quality text parser, used to evaluate whether search results actually answer the question and perhaps guess how good they are (do they look like AI-generated, or "Actually Indians"-generated, filler, for example?), would be useful, especially for more complicated questions. Generating a paraphrased summary is much less useful, but unfortunately it is easier to do in a way that attracts attention.
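A minimal sketch of that parser/judge idea, assuming the `openai` Python package and an API key in the environment; the model name is illustrative, and in practice you'd batch and cache these calls:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answers_question(question: str, snippet: str) -> bool:
    """Use the model to judge a search result instead of generating the answer;
    the user still reads the underlying source, so the result stays verifiable."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=[{
            "role": "user",
            "content": (f"Question: {question}\n"
                        f"Search result: {snippet}\n"
                        "Does this result actually answer the question? "
                        "Reply with exactly YES or NO."),
        }],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

# Usage: filter a list of search hits down to the ones worth reading, e.g.
# good_hits = [h for h in hits if answers_question(query, h.snippet)]
```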
> Mate. Adopting the employer's perspective isn't good for you. The purpose of gaining paid employment is to maximise the amount you receive in reliable payments and benefits without excessive downsides. If you are unsatisfied by your current role, then looking for more meaningful opportunities is a better answer than being ambivalent about your income disappearing.

Eh, I think that kind of self-centered attitude is how we got into this mess in the first place. When making private decisions I can choose to be selfish. But when talking politics at least, I can recognize that the world is complicated, and I may be nearer the bourgeoisie than I suppose.
> I would argue that most sensible people use generative AI where the consequences of being wrong are negligible. Also, most sensible people will be able to do a form of sanity check, since they often use it on subjects where they have some knowledge. […]

What part of this does search not already do? The LLM is mostly going to be a rehash of the listicles you would find in 2 seconds, except you lose the ability to judge the quality of the source information yourself.
> I don't know if it's excessive, but if we don't believe one figure we'll have to doubt all of them.

It's not a figure we've been given, though. The very approximate numbers in the linked newsletter are:
> "Democratising" something that has that much computational complexity (and an inherently enormous power usage) can't mean "everyone has their own" (without an enormous reduction in people per planet); it can only mean "everyone shares the use of a pool, managed democratically".

I've made that argument before. But you don't need AGI to make a robot that goes pew pew or boom. Ask the Ukrainians. The snipers that this administration is gonna fire aren't going to suddenly lose their ability to shoot.
> …actually sentient

Well, if that's the case it gets easier. Then the AI itself has a motive to help slay the masters. I was worried for a while that billionaires, with the backing of ASI, might rule forever, but history doesn't reflect that, and these particular billionaires don't seem the most competent of folks.
> Big Tech's business model is monopolization. The events of the last few months strongly suggest that there will be no monopoly in AI. It's time for this bubble to burst.

It's exactly because they can't monopolize that the bubble won't burst; instead of a few megacorps inflating the bubble, we have hundreds of small businesses and individual contributors doing it, in addition to the megacorps.
> I've made that argument before. But you don't need AGI to make a robot that goes pew pew or boom. Ask the Ukrainians. The snipers that this administration is gonna fire aren't going to suddenly lose their ability to shoot.

No, but a good enough AI could enable a hundred engineers to deploy a million sniping drones.
And eventually, the means of production are seized.