Battlefront 2025 predictions

Horatio

Ars Legatus Legionis
24,224
Moderator
Ok, it's that time of year again! Time to make some guesses.

AR/VR
Meta does not release a headset this year
Apple does not either
Vision Pro remains at best a hobby
At least one waveguide-based glasses product is released

AI
At least 10 new AI wearable devices are released
Apple Intelligence continues to suck
There is a major breakthrough in AI
AGI is not achieved


Misc:
Apple crosses 4T in market cap
Microsoft crosses 4T in market cap
X sees a major drop in MAU
Companies can now pay to have antitrust cases dropped
Social media companies are pressured to surveil pregnant women from the Gilead-like states
 

wrylachlan

Ars Legatus Legionis
13,705
Subscriptor
AI
  • OpenAI and Google release SoTA models built on Mamba
  • At least one major player claims AGI
  • New Siri - better than the skeptics predict while still… bad
  • Apple nails UX of how an AI should interact with the OS.
Apple
  • Annual M series updates - M5 focused on ray tracing and ML inference
  • Apple's push into the smart home is… fine
  • No AVP2
  • First steps to sunsetting Objective-C on Apple platforms
WinComm… er… QualDows… what are we going with here?
  • Slow and steady increase in number of Windows ARM SKUs.
  • There’s a notable swing in corporate purchasing towards ARM.
  • Intel continues its downward trajectory - more delays of 18A.
Gaming
  • Switch 2 sucks all the air out of the room in gaming
  • Next gen gaming GPUs are stupidly expensive
  • First major game that leans fully into AI NPCs.
Other
  • Bluesky user numbers start to hockey stick
  • Twitter meaningfully loses users
  • AR/VR treads water in both the market and mindshare
  • Health monitoring treads water
  • EVs have major market setbacks in the US while continuing to accelerate everywhere else
  • 2025 is the year self-driving cars take off
 

koala

Ars Tribunus Angusticlavius
8,015
AI: oracles continue to hallucinate. Applications that are not oracles (image/video/audio generation and manipulation) continue to incrementally improve. AI continues to be mostly a hype/scam/bubble due to expectations about oracles + blatant copyright laundering. Besides Google/Microsoft/OpenAI, investment moderates. 10% chance of some major initiative imploding spectacularly; 1% chance of the bubble bursting.

edit: Self-driving: it continues to require a person behind a curtain or a highly-controlled environment.

Social networks: Twitter continues to lead public discourse despite scandals and complaints. Perhaps a few major relevant accounts abandon it, but most major relevant accounts continue tweeting. Bluesky grows, but is still not the town square that Twitter is.

Apple: stock has a 2022-like performance.
 
Last edited:

LordDaMan

Ars Legatus Legionis
10,785
Let's see:

AI
Image generation and movie generation will improve massively. By the end of the year there will be some porn website that allows you to upload a few photos to create your own custom porn movie.
Many of the major networks will report some AI-generated photo of Trump as real. Then, when it's found out, they will still accuse Trump of it, later shifting to how AI is bad. This will last a week or so, and within six months the same cycle will repeat.
Someone in the defense industry shows off drones, tanks, and other devices all run by AI. Massive blowback over this, with the Terminator series getting popular again.

Apple:
Vision Pro becomes a "hobby"
The push to be a gaming platform is yet again abandoned by Apple.
Some sort of new major security flaw like Cuckoo.

Microsoft:
New Xbox announced.
They try to make a big PC gaming push.
Some sort of Microsoft mockup of a future version of Windows. Turns out this mockup is really Windows 12.

Nvidia:
The RTX 5xxx series comes out. It's really powerful but insanely expensive. Intel and AMD capitalize on this and release much cheaper cards that offer about 90% of the performance for about 40% of the cost.
Nvidia doesn't care that their gaming cards are doing badly, as they make massive amounts of money on AI-related chips.

Social Media:
Bluesky grows and grows in numbers of new users, but no one actually uses the platform. It becomes yet another niche platform.
Truth Social grows in membership, mostly just people following whatever Trump posts.
Many famous people who said they would no longer post on Twitter start posting again.

Other:
It will come out that there was extensive (illegal) spying on American citizens by various three-letter agencies during the Biden administration.
Some foreign government will ban all Americans from using some website run by one of its citizens.
Half-life 3 (!)
 
This view is the most likely of what I see for this year. I'd agree with all but the points quoted below:

By the end of the year there will be some porn website that allows you to upload a few photos to create your own custom porn movie.
A school district and law enforcement/the AG in my PA county are currently navigating something similar that happened to 60 minors. While I don't doubt such a site will inevitably exist, the political and social blowback may be massive because of that risk, and many new laws will be needed to deal with it. I don't see it this year.

Vision Pro becomes a "hobby"
I see it being niche, and popular for a week or two, then everyone will forget about it.

The only other thing I'd add: I agree with your statement on Nvidia/AMD/Intel, but I don't see AMD caring as much as you make it sound (pretty sure they've said they're no longer concentrating on higher-end cards), and Arc is too young to get enough traction to make a dent in the market yet. It would be wonderful if AMD changed that mindset, but the mid-tier could be really interesting given the positive reviews the Battlemage series has had so far (in its line of one card, as of this post).
 
Last edited:

Shavano

Ars Legatus Legionis
63,947
Subscriptor
There is a major breakthrough in AI
I have trouble with this one. It's unquestionable that some AI vendor will claim a major breakthrough, but that's just marketing: every little advance gets called a major breakthrough because of the hype. I think more specificity is required to decide whether you were right.
  • New Siri - better than the skeptics predict while still… bad
  • Apple nails UX of how an AI should interact with the OS.
Those points seem partially contradictory.

My predictions:
AI hype starts to level out but doesn't really start declining yet
Intel gives up on native x86 and does it all on a RISC architecture, with emulation that outperforms native x86 but obscures that in its marketing. It continues to name processor architectures after water.
Intel starts to close the gap with TSMC on process.
AMD makes inroads on NVIDIA's AI market share
AMD moves past Zen, introducing a new marketing name for a new architecture family.

edit:
Microsoft tries to introduce an AI-based operating system, and it's horrifying.
 
  • Like
Reactions: GeekyCheese

Louis XVI

Ars Legatus Legionis
11,151
Subscriptor
This year, I think tech will be more about politics, society, and culture than about tech itself. My predictions:
  • Elon Musk and Donald Trump will have a big, nasty, public falling out. Trump will purge Musk from government, and will possibly deport or imprison him. Very few people will stand up for Musk or mourn his fate.
  • Facebook will undergo a rightward lurch similar to Twitter’s, becoming a full-on Nazi bar. (This is cheating a little bit, because Zuckerberg just announced the dissolution of fact-checking, internal DEI initiatives, and radically reduced moderation standards). The vast majority of users will continue using it, as there is not a viable alternative means of communicating with distant friends and family.
  • Bluesky and Twitter’s status will solidify as mirror microblogging sites, with left-leaning folks congregating on Bluesky while conservatives stay on Twitter. Government agencies and large media organizations will cross-post everything to the two sites.
  • We’ll see more bifurcated services depending on whether users are in the US or EU, as the US strips back regulation and the EU increases and enforces regulation.
  • AI will continue to suck. Nobody will really want to use it, because its results will be unreliable, often false, and grotesque. Nonetheless, the major tech companies will continue to push it hard on consumers.
  • In the video game world, Microsoft will finally go full Sega, abandoning Xbox hardware and releasing all new games on PC, PS5, and Switch 2.
  • iOS will continue to dominate video gaming, raking in the lion’s share of gaming profits.
 

wrylachlan

Ars Legatus Legionis
13,705
Subscriptor
  • New Siri - better than the skeptics predict while still… bad
  • Apple nails UX of how an AI should interact with the OS.

those points seem partially contradictory.
Nah, the model and how the software interacts with the model are two very different things. I'm saying that the model may not be that great, but the design work of how the model interacts with its context will feel right: how it lets you know what it's interpreting from the screen, how it interacts with App Intents, transparency about what it's doing for you, etc.

And I think more importantly I predict that Apple will get right how its model interacts with third party software.
 
I'm gonna jump on the bandwagon here.

Public-facing consumer AI will continue to be a toy that is as much mocked as it is used.
No major AI breakthroughs will occur

However, enterprise businesses will see AI tools like Copilot become widespread (it's already started), and their use will slowly undermine ancillary tech departments, technical writing in particular. Stealth layoffs or zero growth in these fields will abound.

Similarly, no purely AI-based entertainment will make a big splash, but writing rooms across the industry will rely on AI to improve script quality, again with stealth layoffs or a lack of growth in writing rooms.
 

Exordium01

Ars Praefectus
4,087
Subscriptor
All the “AI” tools are toys. I don’t think the bubble is going to burst in 2025, but I do think this is the year of disillusionment.

More people roll their eyes at Sam Altman's newest bullshit claim, and consumers start pushing back on worthless "features" nobody asked for.

NVDA will undergo a minor correction by the end of the year based on slowing revenue growth.

Gamers will be disappointed with the 5000 series because most of the claimed benefit comes from frame generation, not actual performance, but nobody else will care. AMD will shrink the ray-tracing performance gap, and I still won't trust their hardware in anything that's not a game console.
 
All the “AI” tools are toys. I don’t think the bubble is going to burst in 2025, but I do think this is the year of disillusionment.

You're not a programmer, or have even a passing need for programming/scripting anything yourself.

Nor are you a data scientist, nor do you work in any way in an academic environment.

Of course all you see is gimmickry.
 
Last edited:

blubyu

Seniorius Lurkius
23
Subscriptor
You're not a programmer, or have even a passing need for programming/scripting anything yourself.

Nor are you a data scientist, nor do you work in any way in an academic environment.

Of course all you see is gimmickry.
Spoken as somebody who isn't a programmer or anybody who needs help programming or scripting. "AI" (and let's be honest, it isn't AI) simply helps those who can't code, code. Nothing wrong with that. AI "can" code or script, just not as well as people who actually know how to code or script. Know why? Because it is "AI".
 

Exordium01

Ars Praefectus
4,087
Subscriptor
You're not a programmer, or have even a passing need for programming/scripting anything yourself.

Nor are you a data scientist, nor do you work in any way in an academic environment.

Of course all you see is gimmickry.
I have more than a passing familiarity with MATLAB and Python and use SQL pretty much daily, but I am not a programmer and have been out of academia for about a decade (though I do have a PhD in EE). I'm guessing, based on this answer, that you don't work in any of these fields. Not sure why you're so touchy about it, though.
You know how people who can’t code learn to code?

By getting help coding.

In a freshman electronics class we had an assignment in MATLAB, which none of us had ever used before. We asked the professor where to start, and he shrugged and told us to figure it out. I was a bit frustrated at the time, but he wasn't wrong. You select a task and figure it out.

Anyway, what OpenAI is selling is not AI. It's pattern recognition wrapped around a way to launder plagiarized material. And they've just about found the limits of the approach. Whether you think their technology is useful or not, it is massively overvalued and uses way too much electricity relative to its utility.
 
Last edited:
  • Like
Reactions: CarterNomad

LordDaMan

Ars Legatus Legionis
10,785
Spoken as somebody who isn't a programmer or anybody who needs help programming or scripting. "AI" (and let's be honest, it isn't AI) simply helps those who can't code, code. Nothing wrong with that. AI "can" code or script, just not as well as people who actually know how to code or script. Know why? Because it is "AI".
I forget the specifics, but I once asked Bing how to do something in PowerShell, and instead of just showing me links it wrote out a PowerShell script for me. It also did it for a DOS batch file, which it got somewhat wrong because NT's CMD has a slightly different syntax than COMMAND.COM.
 

zyyn

Ars Praetorian
523
Subscriptor++
I don't see how using a tool that is quite likely to spit out plausible-sounding bullshit would be a good way to learn how to code. Even worse, it could give you something mostly correct, but with a subtle bug. If you're just learning, how would you even know to look for something like that? I think if LLMs had been around 25 years ago when I was first learning to code, they would have been more of an impediment than a helpful tool.

Even if the “AI” does produce some code that works well, the user is going to learn a lot less than if they had been forced to figure the answer out for themselves. Reading code written by someone or something else is an awful experience in the best case.
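For what it's worth, the "mostly correct, but with a subtle bug" failure mode has a classic concrete shape in Python. This is my own illustration, not something from the thread, but it's exactly the kind of thing a beginner would never know to look for:

```python
def append_tag(tag, tags=[]):   # bug: the [] default is created once, at definition time
    tags.append(tag)
    return tags

print(append_tag("a"))   # ['a']        -- looks correct in isolation
print(append_tag("b"))   # ['a', 'b']   -- surprise: the same list was reused

def append_tag_fixed(tag, tags=None):   # the conventional fix
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

print(append_tag_fixed("a"))   # ['a']
print(append_tag_fixed("b"))   # ['b']
```

The buggy version passes any one-call smoke test; it only misbehaves across calls, which is precisely where a novice stops checking.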

I agree with the predictions about a coming collapse of the big companies like OpenAI and Anthropic. I think it’s coming sooner than some others do though. Maybe as soon as this summer.
 
Last edited:

tb12939

Ars Tribunus Militum
1,843
They're not toys, they provide real value for professionals right now IF you know how to use them.
And if your domain suits them, i.e. one with copious examples and a forgiving tolerance of suboptimal output - so ideally an automated Gartner Report generator.

And yeah, they can make a decent effort at variations of common coding problems in popular languages/frameworks. I'm impressed that so many of these are solvable as statistical problems, but that seems to be all it is.

On the other hand, take something niche, inconsistent and picky (e.g. configuring Azure B2C custom policies, possibly the most Microsoft technology ever), and they're beyond useless, even the latest commercial models like o1 or o3-mini. They can't even follow the documentation to produce valid XML structure, never mind achieve the requirements (or alternatively, state that they are not possible for whatever reason) - it's just vaguely plausible hallucinations all the way down.
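Azure B2C specifics aside, one cheap guard against that particular failure is to at least check that model-generated XML is well-formed before arguing about whether it meets the requirements. A minimal sketch (illustrative only; passing this check says nothing about schema validity, let alone correctness):

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Cheap sanity check for model-generated XML: it parses or it doesn't."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

print(is_well_formed("<TrustFrameworkPolicy><BasePolicy/></TrustFrameworkPolicy>"))  # True
print(is_well_formed("<TrustFrameworkPolicy><BasePolicy>"))                          # False
```

A gate like this at least catches the "can't even produce valid XML structure" case automatically, so a human only reviews output that parses.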
 

zyyn

Ars Praetorian
523
Subscriptor++
Every fucking textbook I've ever seen on programming is described in that paragraph.
The fact that reading textbooks is also not a great way to learn to code has no bearing on the level of usefulness of LLMs. With the textbook though the examples have presumably at least been read over by a human with programming knowledge.
 

Mark086

Ars Legatus Legionis
10,897
The fact that reading textbooks is also not a great way to learn to code has no bearing on the level of usefulness of LLMs. With the textbook though the examples have presumably at least been read over by a human with programming knowledge.
I've gotten substantially better results from an LLM than any textbook.

You use it to build test cases, you can use it to build code, and then you use those test cases to validate the code. And yes, it is possible to validate the test cases before you assume they're sufficient; test cases are generally simplistic.

I know a couple people around here have a clue when it comes to using LLMs effectively, the majority do not.

It isn't going to help the type of developer who simply cuts and pastes from Stack Overflow, but then they were always useless anyway.

You know what else they simplify? A shit ton of boilerplate code. We've long moved past the days of

10 print "Hello World!";
20 goto 10

But we've expanded boilerplate to the point of ridiculous. I can concentrate on 10 lines of code, instead of the dozens of lines required to just set up shit to work.
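The generate-tests-then-generate-code loop described above can be sketched in miniature. Everything here is a hypothetical illustration: `llm_generated_slugify` stands in for code that came back from a model, and the hand-auditable cases stand in for generated tests you reviewed first:

```python
# Pretend this function body came back from the model.
def llm_generated_slugify(title):
    return "-".join(title.lower().split())

# Step 1: review the test cases yourself; they're simple enough to audit.
cases = [
    ("Hello World", "hello-world"),
    ("  Already  spaced  ", "already-spaced"),
    ("one", "one"),
]

# Step 2: only accept the generated code if every case passes.
for title, expected in cases:
    assert llm_generated_slugify(title) == expected, (title, expected)
print("generated code accepted")
```

The key point of the workflow is that the tests, not your reading of the generated code, are what you actually trust, so validating the (much simpler) tests is the human's job.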
 

zyyn

Ars Praetorian
523
Subscriptor++
I've gotten substantially better results from an LLM than any textbook.

You use it to build test cases, you can use it to build code, and then you use those test cases to validate the code. And yes, it is possible to validate the test cases before you assume they're sufficient; test cases are generally simplistic.

I know a couple people around here have a clue when it comes to using LLMs effectively, the majority do not.

It isn't going to help the type of developer who simply cuts and pastes from Stack Overflow, but then they were always useless anyway.

You know what else they simplify? A shit ton of boilerplate code. We've long moved past the days of

10 print "Hello World!";
20 goto 10

But we've expanded boilerplate to the point of ridiculous. I can concentrate on 10 lines of code, instead of the dozens of lines required to just set up shit to work.
This seems a bit backward. Copilot and Gemini are very well suited for and somewhat ok at helping with the kind of task where an inexperienced or lazy programmer would normally just copy from stack overflow.

Programs are complicated now. Woah. But if you find yourself writing too much boilerplate that’s a pretty good indication there might be something wrong architecturally. An LLM could be a good way to put a band-aid on it if you don’t care about the poor sob who’s going to be maintaining it in a few years.

This assumption that everyone who dislikes LLMs just doesn’t know how to use them got tired a long time ago.
 

zyyn

Ars Praetorian
523
Subscriptor++
It's proven accurate.
The answer to the question “Huh, this class has 27 dependencies. I need to write some tests now, but it's going to be a pain. What should I do about that?” should be “Maybe I should rethink my architecture. Why is it handling all these different things? Could it be split up?”

Not “Eh, it's fine, I'll just have the tool generate 1500 lines of crap.”

Prediction: option B is going to be the source of much frustration in the years to come. And it’s going to get even worse when the financial issues with building all those fancy data centers finally meet the reality of a lack of revenue.
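The “rethink the architecture” option can be shown in miniature (all names here are hypothetical, for illustration): instead of a class that wires up its many concrete dependencies internally, inject the one collaborator it needs, and the test setup shrinks to a stub.

```python
class ReportBuilder:
    def __init__(self, fetch_total):      # injected dependency instead of internal wiring
        self._fetch_total = fetch_total

    def summary(self, account_id):
        total = self._fetch_total(account_id)
        return f"account {account_id}: {total:.2f}"

# Production would pass in the real data source; a test passes a stub --
# no 27-dependency setup, no 1500 generated lines.
builder = ReportBuilder(fetch_total=lambda account_id: 12.5)
print(builder.summary("acme"))   # account acme: 12.50
```

The point isn't the pattern's name; it's that the test burden itself was the design feedback, and the tool-generated scaffolding silences that feedback.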
 

Mark086

Ars Legatus Legionis
10,897
The answer to the question “Huh, this class has 27 dependencies. I need to write some tests now, but it's going to be a pain. What should I do about that?” should be “Maybe I should rethink my architecture. Why is it handling all these different things? Could it be split up?”

Not “Eh, it's fine, I'll just have the tool generate 1500 lines of crap.”

Prediction: option B is going to be the source of much frustration in the years to come. And it’s going to get even worse when the financial issues with building all those fancy data centers finally meet the reality of a lack of revenue.
Maybe you don't belong in this problem space.
 
  • Like
Reactions: analogika

zyyn

Ars Praetorian
523
Subscriptor++
Maybe you don't belong in this problem space.
It's okay. Once you have a few more years of experience you may begin to understand the importance of maintainability.

It is interesting to be able to have a conversation with possibly one of the very people responsible for some of the messes I've been paid to clean up. Thank you.
 

Mark086

Ars Legatus Legionis
10,897
It's okay. Once you have a few more years of experience you may begin to understand the importance of maintainability.

It is interesting to be able to have a conversation with possibly one of the very people responsible for some of the messes I've been paid to clean up. Thank you.
Your ignorance is bleeding all over the floor.
 
And if your domain suits them, i.e. one with copious examples and a forgiving tolerance of suboptimal output - so ideally an automated Gartner Report generator.

And yeah they can make a decent effort at variations of common coding problems in popular languages / frameworks - I'm impressed that so many of these are solvable as statistical problems, but that's seems to be all it is.

On the other hand, take something niche, inconsistent and picky (e.g. configuring Azure B2C custom policies, possibly the most Microsoft technology ever), and they're beyond useless, even the latest commercial models like o1 or o3-mini. They can't even follow the documentation to produce valid XML structure, never mind achieve the requirements (or alternatively, state that they are not possible for whatever reason) - it's just vaguely plausible hallucinations all the way down.
If you create enough Cursor rules for something esoteric like that, it could work. You just have to embrace prompt engineering.
 

Shavano

Ars Legatus Legionis
63,947
Subscriptor
I don't see how using a tool that is quite likely to spit out plausible-sounding bullshit would be a good way to learn how to code. Even worse, it could give you something mostly correct, but with a subtle bug. If you're just learning, how would you even know to look for something like that? I think if LLMs had been around 25 years ago when I was first learning to code, they would have been more of an impediment than a helpful tool.

Even if the “AI” does produce some code that works well, the user is going to learn a lot less than if they had been forced to figure the answer out for themselves. Reading code written by someone or something else is an awful experience in the best case.
Depends on the person who wrote it. The best programmers (you're about to hear the reason I consider them best) document their code in an organized way that helps the reader understand how the code works and why it was written the way it was. This makes their code much more readily maintainable, adaptable, and reusable, even if you're the only one who ever ends up maintaining it because over years, people forget why they did things and misremember their own work.

Not all code that is documented that way is good code: good documentation won't make poorly designed code well designed, but it will make it easy to identify the ways in which it isn't. In my experience, though, coders who take that kind of care usually also write well-thought-out and well-designed code. Not that my experience is that extensive; I'm a hardware guy, and coding, or even reading code, has always been a small part of my work.