I think ML has lots of potential in this area specifically.
Imagine a game with bare-bones graphics and lighting, and an NN that converts it into something pretty. Indie developers can make AA-looking games, and all game developers can devote more effort to design and logic. Artists will still be needed for art direction and possibly fine-tuning, although fewer will be needed for each game (and fewer developers too, with AI agents and better tools).
Related, ML also has potential for AI enemies (and allies). Lots of players still prefer multiplayer, in part because humans are more realistic enemies (but also because they want to beat real humans); but multiplayer games struggle because good netcode is nontrivial, servers are expensive, some players are obnoxious, and most games don’t have a consistent enough playerbase.
mdp2021 15 hours ago [-]
> Imagine a game with bare-bones graphics and lighting
https://community.arm.com/arm-community-blogs/b/mobile-graph...
An upscaling solution mainly targeted at mobile gaming, with an 'AI pipeline' for upscaling graphics (they claim 540p upscaled to 1080p at 4 ms per frame). I'm a bit skeptical because this is a press release for chips that are still in the works, claimed to release in DEC-26 and to appear in actual devices after that. So it sounds more like a strategic/political move (perhaps stock-price-related manoeuvring).
An Unreal Engine 5 plugin will allow previewing the upscaled effects, though, which will be nice for game developers.
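For a rough sense of what that 4 ms claim means per frame (back-of-the-envelope numbers; the frame-rate targets below are my own assumptions, not from the announcement):

```python
# Back-of-the-envelope: how much of a frame budget the claimed 4 ms upscaling
# pass would consume at common mobile frame-rate targets (assumed, not Arm's).
UPSCALE_MS = 4.0  # Arm's claimed cost for 540p -> 1080p

for fps in (30, 60, 120):
    frame_budget_ms = 1000.0 / fps
    remaining_ms = frame_budget_ms - UPSCALE_MS
    print(f"{fps:>3} fps: {frame_budget_ms:5.1f} ms/frame, "
          f"{remaining_ms:5.1f} ms left to render the 540p frame")
```

At 120 fps the upscaling pass alone would be roughly half the frame budget, so the 4 ms figure presumably targets 30-60 fps titles.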
wmf 16 hours ago [-]
It's a copy of DLSS/FSR4 which are pretty well understood by now. As for the schedule, Arm always announces IP ahead of time.
ksec 5 hours ago [-]
Is DLSS really that mature by now? I thought only DLSS 4 was good enough, and that there is still plenty of room to improve on it.
And there seems to be a lot of hate towards DLSS from the gaming community.
wmf 4 hours ago [-]
DLSS 2.x is pretty good; I'd expect Arm NSS 1.0 to be similar to that.
bobajeff 4 days ago [-]
It sounds like this is geared towards games. However, I like the idea of exposing all of the ML features through Vulkan extensions rather than some proprietary API, though I think exposing them through OpenCL extensions would work for me as well.
ksec 5 hours ago [-]
At this point, IMG / PowerVR isn't even used by MediaTek, which means mobile GPUs are basically just Apple, Qualcomm Adreno, and Arm Mali. I still wish Arm had rebranded their Mali range.
Samsung Exynos uses AMD RDNA graphics, but I'm not sure how much that is actually being used. Nvidia seems to have no interest in the market.
ryao 5 hours ago [-]
Nvidia has the Tegra line, but the market is not interested in it outside of game consoles.
msh 3 hours ago [-]
Or Qualcomm used their monopoly to keep nvidia out of phones.
TiredOfLife 2 hours ago [-]
Before the Switch, basically every company made 1-2 Tegra products only to never use Nvidia again. Tegra was late and bad.
TiredOfLife 2 hours ago [-]
Apple's GPU is basically PowerVR with the serial numbers partially filed off.
imbusy111 15 hours ago [-]
I figured there is a need for generating a lot of samples and building a predictive model per game for best results. Documentation confirms:
> Most of these corner cases can be resolved by providing the model with enough training data without increasing the complexity and cost of the technique. This also enables game developers to train the neural upscalers with their content, resulting in a completely customized solution fine-tuned for the gameplay, performance, or art direction needs of a particular title.
Source: https://developer.arm.com/documentation/111019/latest/
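As a very rough illustration of that "train the upscaler on your own content" idea, here is a toy PyTorch sketch (not Arm's actual pipeline; the network, loss, resolutions, and random stand-in data are all my own assumptions):

```python
# Toy per-title upscaler training loop. In a real pipeline the random tensors
# would be replaced by captured (low-res, ground-truth) frame pairs rendered
# from the actual game, which is where the per-title fine-tuning comes from.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Stand-in for a per-title neural upscaler: 2x spatial upsampling."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4 = 2x2 upscale factor
            nn.PixelShuffle(2),                  # rearrange channels into 2x resolution
        )

    def forward(self, x):
        return self.net(x)

model = TinyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

for step in range(100):
    low_res = torch.rand(4, 3, 270, 480)    # placeholder low-res frames
    high_res = torch.rand(4, 3, 540, 960)   # placeholder ground-truth frames
    opt.zero_grad()
    loss = loss_fn(model(low_res), high_res)
    loss.backward()
    opt.step()
```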
There are now at least three ways to accelerate machine learning models on consumer hardware:
- GPU compute units (used for LLMs)
- GPU "neural accelerators"/"tensor cores" etc (used for video game anti-aliasing and increasing resolution or frame rate)
- NPUs (not sure what they are actually used for)
And of course models can also be run, without acceleration, on the CPU.
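Which of those paths you actually end up on usually comes down to which backend your inference runtime was built with. A minimal sketch with ONNX Runtime (the model path is a placeholder, and which providers show up depends on your build and hardware):

```python
# Pick an execution backend with ONNX Runtime: NPU if available, else GPU, else CPU.
# "model.onnx" is a placeholder; provider availability depends on the ORT build.
import onnxruntime as ort

preferred = ["QNNExecutionProvider",    # e.g. Qualcomm NPU builds
             "CUDAExecutionProvider",   # Nvidia GPU builds
             "CPUExecutionProvider"]    # always available
available = ort.get_available_providers()
providers = [p for p in preferred if p in available]

session = ort.InferenceSession("model.onnx", providers=providers)
print("Running on:", session.get_providers())
```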
colejohnson66 14 hours ago [-]
An "NPU" is a matrix multiplier accelerator. It removes some general-purpose stuff that GPUs provide in favor of more "AI"-useful units, like support for values a byte or smaller (i.e., FP4, INT4, etc.).
cubefox 14 hours ago [-]
All three of them accelerate matrix multiplications actually.
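For concreteness, the core operation is the same low-precision multiply-accumulate-then-rescale pattern in all three cases; here is an illustrative NumPy version of the arithmetic (not any particular accelerator's API):

```python
# Illustrative sketch of the quantized matmul that tensor cores / NPUs implement
# in hardware: multiply in int8, accumulate in int32, rescale back to float.
import numpy as np

rng = np.random.default_rng(0)
a_f32 = rng.standard_normal((64, 128)).astype(np.float32)
b_f32 = rng.standard_normal((128, 32)).astype(np.float32)

# Quantize to int8 with simple per-tensor scales.
a_scale = np.abs(a_f32).max() / 127.0
b_scale = np.abs(b_f32).max() / 127.0
a_q = np.round(a_f32 / a_scale).astype(np.int8)
b_q = np.round(b_f32 / b_scale).astype(np.int8)

# int8 multiplies, int32 accumulation (what the MAC arrays actually do).
acc = a_q.astype(np.int32) @ b_q.astype(np.int32)
c_approx = acc.astype(np.float32) * (a_scale * b_scale)

print(np.max(np.abs(c_approx - a_f32 @ b_f32)))  # small quantization error
```

The differences between the three are mostly about where those multiply-accumulate arrays sit and what feeds them, not the arithmetic itself.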
almostgotcaught 12 hours ago [-]
Anything that computes a matmul faster than doing it by hand technically accelerates matmul - so what's your point?
atq2119 12 hours ago [-]
At least for desktop gaming, the tensor cores are in the GPU compute units (SM), same as for the big data center GPUs.
It seems ARM believe it makes sense to go a different route for mobile gaming.
catgary 11 hours ago [-]
From what I can tell, NPUs are mostly being used by Microsoft to encourage vendor lock-in to the MicrosoftML/ONNX platform (similar to their DirectX playbook).
jonas21 10 hours ago [-]
They're used a lot on mobile. Apple uses their "neural engine" NPU to power their on-device ML stuff and Samsung does something similar in their Exynos processors. Apple also exposes the NPU to developers via CoreML.
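A minimal sketch of what that CoreML path looks like from the developer side (macOS/iOS only; the model path is a placeholder, and I'm going from the coremltools docs here):

```python
# Steer a Core ML model toward the Apple Neural Engine via coremltools.
# "MyModel.mlpackage" is a placeholder; whether the NPU is actually used is
# still up to the Core ML scheduler.
import coremltools as ct

model = ct.models.MLModel(
    "MyModel.mlpackage",
    compute_units=ct.ComputeUnit.CPU_AND_NE,  # prefer CPU + Neural Engine over the GPU
)
```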
westurner 5 hours ago [-]
How many TOPS/WHr?
ltbarcly3 12 hours ago [-]
"Arm neural technology is an industry first, adding dedicated neural accelerators to Arm GPUs"
HiSilicon Kirin 970 had an NPU in like 2017. I think almost every performance-oriented Arm chip released in the last 5 years has had some kind of NPU on it.
I suspect they are using Arm here to mean "Arm-the-company-and-brand" not "Arm the architecture", which is both misleading and makes the claim completely meaningless.
atq2119 12 hours ago [-]
The marketing speak isn't exactly clear, but I believe the point is that this is like an NPU inside of the GPU instead of next to it as a separate device. That would indeed be new, and I can see how it'd be beneficial to integration with games.
> Imagine a game with bare-bones graphics and lighting, and an NN that converts it into something pretty. [...] Artists will still be needed for art direction and possibly fine-tuning
> Related, ML also has potential for AI enemies (and allies).
https://eu-images.contentstack.com/v3/assets/blt740a130ae3c5...
# The Art Of Braid: Creating A Visual Identity For An Unusual Game
https://www.gamedeveloper.com/design/the-art-of-braid-creati...
First, porn.
Second, artificial botting to make your game look active.
Third, hire an art developer in India, VPN them to your AI tool, and fire them when the game is done.
You really should check the prescription on those rose-colored glasses.
There's no reason to involve an NN in this one. We've had convincing bots with varied behaviours for ages.
- https://github.com/KhronosGroup/Vulkan-Docs/blob/5d386163f25... Adding tensor ops to the shader kernel vocabulary (SPIR-V). Promising.
- https://github.com/KhronosGroup/Vulkan-Docs/blob/5d386163f25... Adding a TensorFlow/NNAPI-like graph API. Good luck.