AI Slop or the Future of Gaming?
NVIDIA had everyone's attention at GTC 2026 when it unveiled DLSS 5, the next evolution of its Deep Learning Super Sampling upscaling tech. But where previous DLSS versions took lower-resolution frames and made them look sharper, DLSS 5 does something far more aggressive: it uses generative AI to add lighting, materials, textures, and visual details that weren't in the original frame at all [1]. The demo footage showed characters like Resident Evil's Grace with smoother skin, more detailed faces, and enhanced environmental lighting. Some viewers called it impressive. Others — including developers whose own games were being showcased — called it "AI slop." The backlash was instant, loud, and, unusually, included the people whose games were supposedly being improved.
Here's what DLSS 5 is actually doing under the hood, and why that matters. According to communications between YouTuber Daniel Owen and an NVIDIA representative named Jacob, the input for DLSS 5 is simply a rendered frame plus motion vectors [1]. That's it. The system doesn't receive geometry data, PBR material maps, or lighting-source information. It infers all of those things using generative AI — and then adds details based on what it thinks should be there. The Nerd Nest podcast, a gaming hardware deep-dive show, broke this down clearly: "The generative AI is inferring all of these things — PBR, geometry, light sources — based on what the Gen AI can see in that frame." That's not enhancement. That's invention. And when the AI decides Grace from Resident Evil 9 should look like an Instagram filter, there's no artist who approved that call.
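To make the distinction concrete, the input contrast described above can be sketched in code. This is purely illustrative — NVIDIA has not published a DLSS 5 API, and every name below (FrameInput, SceneData, the two functions) is an assumption invented for this example, modeling only the inputs the article reports:

```python
from dataclasses import dataclass

# Hypothetical types for illustration only; not NVIDIA's actual API.

@dataclass
class FrameInput:
    """Per-frame data DLSS 5 reportedly receives: pixels + motion vectors."""
    color: list            # rendered frame (pixel data)
    motion_vectors: list   # per-pixel motion between frames

@dataclass
class SceneData:
    """Ground-truth data the engine has but DLSS 5 reportedly never sees."""
    geometry: object
    pbr_materials: object
    light_sources: object

def conventional_upscale(frame: FrameInput) -> str:
    # Earlier DLSS versions reconstruct detail the engine actually
    # rendered, guided by motion vectors (plus, in practice, other
    # engine-supplied buffers). Nothing new is invented.
    return "sharpened version of what the engine drew"

def generative_enhance(frame: FrameInput) -> str:
    # The DLSS 5 flow as described: no SceneData parameter at all.
    # Geometry, materials, and lighting are *inferred* from the pixels,
    # then new detail is synthesized on top of that guess.
    inferred = {"geometry": "guessed", "pbr": "guessed", "lights": "guessed"}
    return f"frame with invented detail based on inferred {sorted(inferred)}"
```

The point of the sketch is the function signatures: `generative_enhance` never takes `SceneData`, so any "PBR" or "light source" it acts on is a guess reconstructed from pixels, not information the developer's engine supplied.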




