Nvidia's DLSS 5 is a Slap in the Face to the Art of Video Game Design


Mar 17, 2026 - 22:34
So, Nvidia just revealed DLSS 5, its new AI graphics tech that uses generative systems to “enhance” video games with more photo-realistic effects, and I’m not going to worry about mincing my words here: I think it looks shit. Yes, we’ve barely seen a minute of it in action, but if what’s teased is where technology giants think the future of graphics tech in games is going, then I’m afraid I’m out.

The first shot of Nvidia’s DLSS 5 announcement trailer gives us a good look at the impact the technology has on Capcom’s latest, Resident Evil Requiem. It’s already a stunning-looking video game, so I can’t say I ever felt it was in need of enhancements, but lo and behold, as that green bar sweeps across the screen, a yassified Grace Ashcroft is left staring back at us, devoid of any discernible character, as if the light behind her eyes has been switched off by the technology. It’s the sort of smoothed-over face and unrealistic lighting that we’ve become accustomed to seeing in the corners of the app store, or on the advertising banners of websites you’d only look at in incognito mode. It takes a character so carefully crafted by the art team at Capcom, and says “no, we can do that better,” adding a layer of sheen that makes Grace stand out in Requiem’s world, rather than feel a part of it.

Not once on my playthrough of Resident Evil Requiem did I think that it didn’t look photo-realistic enough or that either of its two protagonists was in need of a glow-up, and I have definitely never played a match of EA FC 26 where I wished that Virgil Van Dijk looked less like his real-life counterpart. I play games to experience a crafted piece of artistry, whether the developers are aiming to take me to far-flung fantasy worlds, or recreate real life with as much fidelity as possible. But DLSS 5 offers none of this to me, instead replacing the paintbrush held in a human hand with an AI slopping a big vat of oil over the canvas. What are we doing here?

AI has no artistic or authorial intent. What it does is read an image as if it were purely zeros and ones and overwrite it according to its training data, theoretically in an attempt to make it look “better”. In Nvidia’s accompanying blog post, the company explains that the model is trained to "end-to-end understand complex scene semantics such as characters, hair, fabric and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast – all by analyzing a single frame." The idea, in theory, is to improve the look of characters while also keeping them grounded in the scene they already stand in. The results are just off-putting to my eye, though. Each one of the Hogwarts Legacy characters in the trailer looks like they’re now spotlit from behind the camera in a way that by no means looks natural. Yes, we now live in a world where game environments are largely dynamically lit, but the developers and technical artists behind those systems still have ultimate control over how they look. They can decide the mood they’re trying to set and will spend much of their time making sure it fits the game’s vision, but Nvidia and the tech behind this AI filter obviously think they know better.

Art direction is such a huge part of video game design. The worlds and characters that these developers spend years handcrafting are what root us in the experience. I very recently started a replay of Uncharted 4, and it still strikes me in this nearly decade-old game how incredibly nuanced Nathan Drake’s face is during its cutscenes. There are little wrinkles, bashes, and bruises that come and go throughout its story that reflect his place in the world and the struggles he’s going through. I couldn’t imagine ever wanting an AI filter layered on top, one that would no doubt smooth over his wrinkles and remove his blemishes, recalibrating Naughty Dog’s flawed hero to better reflect the “perfect” men who are promoted by society and thus flood its training data. But cuts, scuffs, and genetic “imperfections” are the small details that make us connect to characters, and they are the intent of the artist who made them.

The technology behind DLSS 5 threatens not only to make games visibly distracting but also to completely alter the emotions of a story if it is embraced by the corporations that employ these artists. I can only imagine the collective sigh let out by the majority of video game developers when this trailer was released, but I fear that the people in charge of the money may have let out a little smile instead.

This feels like the dawn of a new era, and a saga that will stretch on far beyond Nvidia’s announcement this week. Already we’re seeing pushback from fans slamming DLSS 5 as “AI-generated slop,” and Bethesda has quickly committed to “further adjusting the lighting and final effect” of Starfield’s implementation of the tech after it showed multiple characters suffering from the same smooth-faced fate in its space RPG. “This will all be under our artists’ control, and totally optional for players”, Bethesda Game Studios added on social media. It may well be completely optional for now on existing games, but I fear for what happens when studios start having to use this tech more in step with the development process.

If we allow this sort of technology to thrive, are we giving the go-ahead for companies to place less importance on curated art direction and instead do the bare minimum and let AI fill in the gaps? I don’t know about you, but I like my art to be made by humans. I want to know if someone decided to light a scene in a certain way or if the small details on a character’s face were sculpted with intention. So I’ll continue to say that visual “upgrades” like this look like shit — it’s not like the tech behind it has any feelings to hurt anyway.

Simon Cardy is a Senior Editor at IGN who can mainly be found skulking around open world games, indulging in Korean cinema, or despairing at the state of Tottenham Hotspur and the New York Jets. Follow him on Bluesky at @cardy.bsky.social.

