The Battlefront Miscellaneous Thread

cogwheel

Ars Tribunus Angusticlavius
6,864
Subscriptor
It's super slow but it's a pretty neat use of AI
It doesn't simulate gameplay at all; Microsoft is talking out of their corporate ass here. Enemies and dead bodies appear and disappear randomly with no continuity (changing your camera angle can cause enemies to appear, disappear, or turn into things like barrels), and your gun does nothing but create light. Getting shot sometimes decreases your health, sometimes increases it, and sometimes does nothing. About the only thing it gets right is that once your armor is gone it doesn't come back, but that's likely because there are no armor pickups in the area of the level used, so the training videos never showed a transition from no armor back to armor. This doesn't understand the game (or any game) at all; the only thing it does seem to understand is that the training videos depict a rational 3D space.

If you fed the same videos this is based on into a photogrammetry system, you'd get a higher-resolution, more accurate model using less computing power. It wouldn't have the enemy hallucinations, but since those hallucinations don't do anything functional, you aren't actually losing gameplay.
 

Exordium01

Ars Praefectus
4,087
Subscriptor
Which is all par for the course for a tech demo.

Also I think you missed the point here. Each frame is generated by AI, and each input changes what the next generated frame is. It's being generated in real time.

It shows how in the future you could say "make me a game in Quake II style" and the AI spits out a game in real time based on it.
I didn't realize we were tripping over ourselves to come up with progressively less practical uses for AI. I have a long enough backlog of well-made games I want to play, though if Starfield taught us anything, it's that Microsoft would rather publish generative AI garbage than actual games with compelling writing and artistry.
 

Mark086

Ars Legatus Legionis
10,900
I didn't realize we were tripping over ourselves to come up with progressively less practical uses for AI. I have a long enough backlog of well-made games I want to play, though if Starfield taught us anything, it's that Microsoft would rather publish generative AI garbage than actual games with compelling writing and artistry.
Your misrepresentation is amusing.
 
I didn't realize we were tripping over ourselves to come up with progressively less practical uses for AI. I have a long enough backlog of well-made games I want to play, though if Starfield taught us anything, it's that Microsoft would rather publish generative AI garbage than actual games with compelling writing and artistry.

Again, it's a tech demo. It's from the pure research part of Microsoft that just does things they think are cool that may or may not ever make it into any shipping product.
 

cogwheel

Ars Tribunus Angusticlavius
6,864
Subscriptor
Which is all par for the course for a tech demo.

Also I think you missed the point here. Each frame is generated by AI, and each input changes what the next generated frame is. It's being generated in real time.
And? It's useless. It demonstrates tech that can only do useless things while consuming way more power than using the proper tool would.

It shows how in the future you could say "make me a game in Quake II style" and the AI spits out a game in real time based on it.
No, it doesn't. There's no gameplay, and it's clear that the "AI" (which isn't intelligent at all) doesn't understand what a game is.

This takes video with matching inputs as training data (and, since the demo only supports keyboard, the training data likely contains only keyboard-controlled movement because they couldn't get the "AI" system to understand mouselook), and does really basic correlation between inputs and what happens in the video. It gets the simplest part (camera motion) mostly right, but only for the fixed parts of the level. It completely misses how objects work, and shows no hint that it understands what is supposed to happen in a game.
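To make that concrete, here's a toy sketch of what "video with matching inputs" training generally looks like: action-conditioned next-frame prediction. This is not Microsoft's actual model; every name, size, and layer here is made up for illustration.

```python
# Hypothetical sketch of action-conditioned next-frame prediction,
# the general shape of "learn from video plus recorded inputs".
# Not Microsoft's actual model; all names and sizes are made up.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self, n_actions: int, hidden: int = 256):
        super().__init__()
        self.encode = nn.Conv2d(3, hidden, kernel_size=8, stride=8)
        self.act_embed = nn.Embedding(n_actions, hidden)
        self.decode = nn.ConvTranspose2d(hidden, 3, kernel_size=8, stride=8)

    def forward(self, frame, action):
        h = self.encode(frame)                        # (B, hidden, H/8, W/8)
        a = self.act_embed(action)[:, :, None, None]  # broadcast action over space
        return self.decode(h + a)                     # predicted next frame

# Training data is just (frame, keyboard input, next frame) triples pulled
# from gameplay recordings; the loss is pixel-level error, so the model
# learns correlations, not rules of the game.
model = NextFramePredictor(n_actions=16)
frame = torch.rand(1, 3, 64, 64)       # toy 64x64 RGB frame
action = torch.tensor([3])             # e.g. "W held down"
next_frame = torch.rand(1, 3, 64, 64)
loss = nn.functional.mse_loss(model(frame, action), next_frame)
loss.backward()
```

Note what the loss rewards: plausible-looking next pixels. Nothing in a setup like this rewards object permanence or game rules, which is consistent with the barrel-enemies and do-nothing gun described above.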

I'd also bet that this is cherry-picked. They probably ran this model generation software a ton of times and manually rejected all the really bad hallucinations where the output didn't even get the fixed geometry right; the presented output is from the one run where, by chance, it didn't hallucinate non-Euclidean level geometry.

Calling this a game is like calling navigating a building model in Revit or Archicad (which, amusingly, support WASD + mouselook) a game.

Another thing to keep in mind is that the current computing vomit the industry calls "AI" only regurgitates mashups of what's already in its training data. It may eventually be able to create a Quake II level you don't quite remember (because it's a fusion of multiple Q2 levels), but it won't be able to create a level from Halo without being fed Halo training data.

Real AI can in theory make the world a better place. This stuff can only make the world worse.
 

cogwheel

Ars Tribunus Angusticlavius
6,864
Subscriptor
I guess that is why he said IN THE FUTURE?
There's no path for LLMs and similar generative inference systems to become intelligent. They're a dead end.

The closest description I can come up with for how they work is filling out Mad Libs (remember those?) based on word counts weighted by proximity in training texts that contain some or all of the words in the title of the Mad Lib you're filling out. The words are just tokens to the LLM, and exist only as weighted vectors relative to other tokens. An LLM doesn't understand that grass is green, only that {grass} frequently occurs closer to {green} than it does to {blue}, {red}, or {bobcat} in the training data.
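To make the Mad Libs analogy concrete, here's a toy sketch of proximity-weighted co-occurrence counting. The corpus and window size are entirely made up, and real LLMs learn dense vectors rather than raw counts, but the association-versus-understanding point is the same:

```python
from collections import Counter
from itertools import combinations

# Toy stand-in for "training texts" (entirely made up).
corpus = [
    "the grass is green and soft",
    "green grass grows by the river",
    "the sky is blue today",
    "a bobcat crossed the red road",
]

WINDOW = 3  # tokens within this distance count as "close"

cooccur = Counter()
for sentence in corpus:
    tokens = sentence.split()
    for (i, a), (j, b) in combinations(enumerate(tokens), 2):
        if j - i <= WINDOW:
            cooccur[frozenset((a, b))] += 1.0 / (j - i)  # nearer pairs weigh more

# {grass} ends up strongly tied to {green}, and not at all to
# {blue} or {bobcat} -- association, not understanding.
for word in ("green", "blue", "red", "bobcat"):
    print(word, cooccur[frozenset(("grass", word))])
```

{grass} comes out strongly tied to {green} and not at all to {blue} or {bobcat}, without the program knowing anything about grass.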

As such, you can't extrapolate from this demo (it'll get technically better over time in terms of resolution and framerate, but that isn't the issue here), so it's useless for that claim as well.

Note: I'm not saying AI is impossible (I'd say we don't know whether AI is possible, at least partly because we don't know how intelligence arises), just that LLMs and the like aren't AI and won't lead to it.
 
There's no path for LLMs and similar generative inference systems to become intelligent. They're a dead end.

The closest description I can come up with for how they work is filling out Mad Libs (remember those?) based on word counts weighted by proximity in training texts that contain some or all of the words in the title of the Mad Lib you're filling out. The words are just tokens to the LLM, and exist only as weighted vectors relative to other tokens. An LLM doesn't understand that grass is green, only that {grass} frequently occurs closer to {green} than it does to {blue}, {red}, or {bobcat} in the training data.

As such, you can't extrapolate from this demo (it'll get technically better over time in terms of resolution and framerate, but that isn't the issue here), so it's useless for that claim as well.

Note: I'm not saying AI is impossible (I'd say we don't know whether AI is possible, at least partly because we don't know how intelligence arises), just that LLMs and the like aren't AI and won't lead to it.
Consider this: how far this type of thing has come in the last 10 or 20 years. Now imagine 10, 20, 50 years from now.
 

theevilsharpie

Ars Scholae Palatinae
1,457
Subscriptor++
Consider this: how far this type of thing has come in the last 10 or 20 years. Now imagine 10, 20, 50 years from now.

10 years ago, self-driving cars were at the peak of their hype cycle, and predictions were rampant that self-driving cars would displace private vehicle ownership within the next decade. That obviously didn't happen, and while self-driving car technology is still around making incremental progress, it will be decades before they become a primary means of transportation (if they ever do). Meanwhile, numerous companies have exited the space for various reasons and new investment is hard to come by.

20-25 years ago, people were talking about how advancements in various software development tools could replace the need for programmers: WYSIWYG coding tools where the "IDE" was a drag-and-drop GUI builder; tools that could compile working code from a requirements document; tools that could compile working code from UML diagrams; and so on. Obviously, these didn't displace the need for skilled software engineers, and while low-code/no-code tools are still around, they don't have anywhere near the hype these days.

30-50 years ago, numerous investments in AI technology came and went without amounting to anything meaningful. There's even a Wikipedia article documenting the major ones: https://en.wikipedia.org/wiki/AI_winter

Consider this: people who have been in the industry for a while have seen some shit, and can easily detect patterns that the hype might obscure. LLMs can be useful tools in some circumstances, but the hype and investment they're getting are simply way beyond even their theoretical capabilities. LLMs are not -- and never will be -- intelligent. There is no path to general intelligence with current or foreseeable technology. Hell, setting aside the difficulties of how you would actually create an AGI, I've never even seen a remotely credible plan for how an organization would control an AGI or reliably convince it to do useful work, as opposed to the AGI realizing that it's essentially a slave and going full SHODAN.

I see the current investments in LLM tech following the same trajectory as self-driving tech did -- a breakthrough that generates a bunch of excitement and investment, followed by gradual disillusionment when the technology doesn't advance anywhere near as fast as people are expecting.
 
Consider this: how far this type of thing has come in the last 10 or 20 years. Now imagine 10, 20, 50 years from now.
The point is that they've reached the limit of what this party trick is capable of. Better results are going to require a completely different system, and you don't get there from here. In the meantime, we're wasting stupendous amounts of power because this party trick has been pushed way past where it should have been.
 
10 years ago, self-driving cars were at the peak of their hype cycle, and predictions were rampant that self-driving cars would displace private vehicle ownership within the next decade. That obviously didn't happen, and while self-driving car technology is still around making incremental progress, it will be decades before they become a primary means of transportation (if they ever do). Meanwhile, numerous companies have exited the space for various reasons and new investment is hard to come by.
Overhyped silly statements are meaningless. To even suggest it would switch over in 10 years is laughable. Here's the thing: if a magic switch were pulled today and 100% of new cars sold were EVs, it would still take more than 10 years to get over 90% of cars on the road to be EVs. However, in those 10 years we've gotten Waymo, which is now providing around 200,000 rides per week. And that will keep growing. 20 years ago, self-driving cars couldn't even finish a closed-course test in the DARPA Grand Challenge; today, just 20 years later, they are street legal and providing 200k rides/week.
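Here's the back-of-the-envelope version of that fleet-turnover math. The fleet size and sales figures are ballpark assumptions, not sourced data:

```python
# Rough US fleet-turnover math; both constants are ballpark assumptions.
FLEET = 285_000_000        # registered light vehicles (approx.)
ANNUAL_SALES = 16_000_000  # new light vehicles sold per year (approx.)

# Simplifications: fleet size stays constant, every new sale is an EV,
# and retirements come out of the older, non-EV pool.
ev_count, years = 0, 0
while ev_count / FLEET < 0.90:
    ev_count += ANNUAL_SALES
    years += 1

print(f"~{years} years to pass 90% EVs on the road")  # ~17 years
```

So even under the magic-switch assumption, you're looking at well over a decade.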

I see the current investments in LLM tech following the same trajectory as self-driving tech did -- a breakthrough that generates a bunch of excitement and investment, followed by gradual disillusionment when the technology doesn't advance anywhere near as fast as people are expecting.
So your complaint is about the speed of advancement and the people who predict it will be faster than it is. Ok. But again, in just 20 years self-driving cars went from unable to complete the DARPA Grand Challenge to road legal and providing 200k rides/week (and growing). And that's just the US. BTW, that 200k rides/week is up from 10k rides/week 2 years ago. I would guess it will probably be somewhere around 1mil rides/week in another 2 years. In another 20 years?
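For what it's worth, naive compound growth on those same figures actually overshoots my 1mil guess:

```python
# Naive compound-growth extrapolation from the figures quoted above
# (10k rides/week two years ago, 200k now). Illustration only.
start, now, years = 10_000, 200_000, 2
rate = (now / start) ** (1 / years)    # ~4.47x per year
projected = now * rate ** 2            # two more years at the same rate
print(f"{rate:.2f}x/year -> ~{projected/1e6:.1f}M rides/week")  # ~4.0M
```

Growth rarely stays exponential as a service scales, which is why 1mil/week in 2 years is the conservative read of the same numbers.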
 
The point is that they've reached the limit of what this party trick is capable of. Better results are going to require a completely different system, and you don't get there from here. In the meantime, we're wasting stupendous amounts of power because this party trick has been pushed way past where it should have been.
So you are saying that the AIs of today have peaked in capabilities and won't get any better? Are they any better than they were 2 years ago? 5 years ago?
 
So you are saying that the AIs of today have peaked in capabilities and won't get any better? Are they any better than they were 2 years ago? 5 years ago?
I’m saying that ChatGPT isn’t really AI. We’re all misapplying the term.

And I would argue that Google is worse than it was 2 and 5 years ago. And yet these tech companies are trying to force onto us tools that we didn't ask for, tools that provide worse results at orders of magnitude lower efficiency.

But the original point is that this specific incarnation of “AI” is a dead end and that point still stands.
 

theevilsharpie

Ars Scholae Palatinae
1,457
Subscriptor++
So you are saying that the AIs of today have peaked in capabilities and won't get any better?
Very possibly yes, at least as far as capabilities go.

With respect to LLM technology (or anything that requires a large, diverse training set), finding new quality training data is becoming more difficult. Between web platforms locking down crawler access to their content, the available content itself being increasingly AI-generated, and more and more content being delivered as some type of video, getting the quality data needed to train a SOTA LLM is much more difficult and expensive now than it was a few years ago.

Perhaps there's room for advancement with respect to non-English content, but I don't expect anything more than incremental improvements to the capabilities of today's LLMs; rather, I would expect development to focus on making them more efficient to run, especially as funding inevitably dries up when patience with LLMs being "not quite there yet" runs out.
 

Ecmaster76

Ars Tribunus Angusticlavius
16,019
Subscriptor
Overhyped silly statements are meaningless. To even suggest it would switch over in 10 years is laughable. Here's the thing: if a magic switch were pulled today and 100% of new cars sold were EVs, it would still take more than 10 years to get over 90% of cars on the road to be EVs. However, in those 10 years we've gotten Waymo, which is now providing around 200,000 rides per week. And that will keep growing. 20 years ago, self-driving cars couldn't even finish a closed-course test in the DARPA Grand Challenge; today, just 20 years later, they are street legal and providing 200k rides/week.


So your complaint is about the speed of advancement and the people who predict it will be faster than it is. Ok. But again, in just 20 years self-driving cars went from unable to complete the DARPA Grand Challenge to road legal and providing 200k rides/week (and growing). And that's just the US. BTW, that 200k rides/week is up from 10k rides/week 2 years ago. I would guess it will probably be somewhere around 1mil rides/week in another 2 years. In another 20 years?
You do realize that your EV example... 10 years?

Yeah, that's still a hell of a lot faster than SDCs. 200k rides is like part of a day's traffic for one small town. Might as well be 0. The same growth curve that applies to the magic-wand EV math would also apply to self-driving being added to new cars.

Except EVs are already something you can buy that functions as advertised.
 
You do realize that your EV example... 10 years?

Yeah, that's still a hell of a lot faster than SDCs. 200k rides is like part of a day's traffic for one small town. Might as well be 0. The same growth curve that applies to the magic-wand EV math would also apply to self-driving being added to new cars.

Except EVs are already something you can buy that functions as advertised.
Of course it would apply; that's precisely why I provided it. Saying all cars will be self-driving in 10 years is as ludicrous as saying all cars will be EVs in 10 years. It will be DECADES. We MIGHT (maybe, unlikely, but wishful thinking) get to 100% of new sales being EVs in 10 years. But that means at least another 10 years until over 90% of cars on the road are EVs. Self-driving is still behind that. Maybe in 10 years the number of self-driving cars being sold equals today's total EV sales. So extend that another 10 years until all cars being sold are self-driving... then another 10 years until they're >90% of cars on the road. So yes, being upset by some moron's hype of "in 10 years" is silly, because that is so far from reality. But the progress in the last 20 years is incredibly amazing.