• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

So what's with all the Zippo hate? Wasn't he a well-liked leaker back on ERA, or am I misremembering?

What changed?

Yes, he was a leaker back in ERA. Heavy emphasis on "was", because he was banned for spreading incorrect and misleading information.
 
While this is true, it's technically not Nintendo's ONLY source of revenue now. They get royalties from Universal for Super Nintendo World, and they also have NSO subs, which I imagine can be a pretty decent revenue source.
Plus they now have a billion dollar movie franchise, with probably more on the way.

Nintendo replaced its second pillar (dedicated handheld devices GB/DS) with movie/tv entertainment business.

They added Illumination's CEO to their board, they bought that little film studio Dynamo Pictures, and they started a branch of their business to focus on films/TV.

Additionally, Nintendo is extremely shrewd with their finances and money. They invest wisely in more real estate and buildings for studios, they save a lot of cash and don't overextend themselves, Switch will continue to bring in money through game sales for a couple of years, they are using their IPs a lot more now for merchandise…

Nintendo has so many revenue streams and is so diversified that they don't really need two video game platforms anymore. They aren't really that risk averse when you take into account all their business endeavors. They can weather a storm if a piece of hardware doesn't take off.

Also the NG Switch will have Pokémon, the largest IP in the world. It’ll sell.
 
Yes, he was a leaker back in ERA. Heavy emphasis on "was", because he was banned for spreading incorrect and misleading information.

Can someone please give me proper examples of misleading info being spread by Zippo? :) He was also banned from Reddit but every time I asked for sources/receipts, no one could bring any up :/

I've been extensively collecting the claims of Zippo to get to the bottom of it, and so far, if one looks close enough and separates the "insider info" from personal speculation, the only things he got wrong were the next Mario Kart game coming soon (which he said in January 2022), the state of the next Fire Emblem projects and which companies were developing what, and the developer of the next 2D Sonic game (presumably Sonic Superstars), which he said was developed 100% internally (we now know it's developed by Arzest). Apart from that, most of his info is broad and "safe", and he usually only misses on the release windows, which may or may not be personal speculation at times 😅

I've been collecting the claims from his blog, and I've almost been through all the posts. Does anyone have links to his claims in other forums such as ResetEra? Gotta collect 'em all ;)
 
Didn't that Nick person say Nintendo would reveal the Switch Pro at one of the Geoff Keighley shows? The Switch Pro ended up being the OLED Switch.
Which also wasn't revealed at a Keighley show lmao
No, he claimed it would be revealed prior to E3 2021. That got several people; even Nate said it was 100% certain to be there at one point. There are much more egregious examples from Nick though, so I still wouldn't put much stock into what he says.
 
Could the Unreal Engine thing just be an optimization option for the Switch 2 (UE5), much like how UE4 has one?

Not sure how relevant it is to Switch 2, but I do think the reason Nintendo has used Unreal Engine for some of their games is to help pass along optimizations to further improve the engine's performance on their hardware. I am not convinced that Nintendo couldn't have achieved similar results with Pikmin 4 using their own bespoke engine. On top of that, Nintendo may be moving towards developing certain games with more of the work being done through outsourcing. By using an engine that other developers are very familiar with, it would help streamline development. Development team sizes have grown exponentially over the years and, honestly, many workers in game development these days are primarily experienced with using game engines rather than creating their own with high-level C++ programming.
 
If the Switch successor has a similar design to what's shown in this video, it's gonna confuse so many people (especially the more casual consumers, who thought the Wii U was an accessory for the Wii).
I don't know about that. I feel like the name alone will communicate to consumers that this is the next system without any confusion. Not that I don't expect the appearance to change at all, but there are levels to this.
 
I don't know about that. I feel like the name alone will communicate to consumers that this is the next system without any confusion. Not that I don't expect the appearance to change at all, but there are levels to this.
Then Nintendo had better not give the successor a stupid name such as Switch U, New Switch, or something like that.
 
Can someone please give me proper examples of misleading info being spread by Zippo? :) He was also banned from Reddit but every time I asked for sources/receipts, no one could bring any up :/

I've been extensively collecting the claims of Zippo to get to the bottom of it, and so far, if one looks close enough and separates the "insider info" from personal speculation, the only things he got wrong were the next Mario Kart game coming soon (which he said in January 2022), the state of the next Fire Emblem projects and which companies were developing what, and the developer of the next 2D Sonic game (presumably Sonic Superstars), which he said was developed 100% internally (we now know it's developed by Arzest). Apart from that, most of his info is broad and "safe", and he usually only misses on the release windows, which may or may not be personal speculation at times 😅

I've been collecting the claims from his blog, and I've almost been through all the posts. Does anyone have links to his claims in other forums such as ResetEra? Gotta collect 'em all ;)
I don't think anyone cares enough about him to waste time searching up his posts.
 

Thanks, I've been through that thread, but that's exactly where no one was able to provide any concrete evidence 😅 As for the Sonic Stadium forum, thanks for that, I'm adding it to my list :)

He is someone who might've known one or two things a few years ago and he just makes stuff up, there are still people who believe him even though he just makes guesses at stuff and his predictions are either very wrong or extremely safe. Awful track record and his 'leaks' have been banned from the gamingleaksandrumours subreddit lol.

He was banned from that subreddit because a mod got "emotional" over Zippo claiming that a Chrono Trigger remake is in the works. In that claim he simply said, "A new Nintendo Direct will likely be airing in about 48 hours. I don't know when this is being announced, but we can hope and pray it's announced there." It didn't air in the Direct, and that upset a lot of people, including that mod, who then banned him and other "leakers".

I'm not claiming Zippo's track record is solid, but there's a propensity for people to jump to general conclusions without looking at the finer details, even if the "leaker" could've worded it better. In my personal research, most of the things Zippo has claimed are either yet-to-be-verified or came true, but one thing he consistently gets wrong is the release window, which may or may not be his personal speculation. And this is what usually upsets people, that their favorite game X didn't show up at event Y.
 
CPU and GPU frame time can (and should) overlap. In fact, in an ideal scenario, they're independent and can each take up to 100% of the frame time simultaneously. That limit is not usually achievable in practice, but in general I wouldn't say it's true that the GPU has "16.6 ms minus CPU time" to render a frame.
Even back in the ol' days of the retro systems, the CPU and PPU worked simultaneously outside of the VBlank.
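To make the overlap concrete, here's a tiny sketch in Python with made-up numbers (they aren't measurements of any real hardware): once the pipeline is full, frame N's CPU work runs while frame N-1 is on the GPU, so the frame interval is set by the slower of the two rather than their sum.

Code:
# Hypothetical per-frame costs in milliseconds, purely illustrative.
CPU_MS = 10.0
GPU_MS = 14.0

def serial_frame_time(cpu_ms, gpu_ms):
    # Naive loop: the GPU only gets whatever is left of the frame after the CPU.
    return cpu_ms + gpu_ms

def pipelined_frame_time(cpu_ms, gpu_ms):
    # Pipelined loop: while the GPU renders frame N-1, the CPU prepares frame N,
    # so a new frame completes every max(cpu, gpu) ms once the pipe is full.
    return max(cpu_ms, gpu_ms)

print(serial_frame_time(CPU_MS, GPU_MS))     # 24.0 ms -> ~41.7 fps
print(pipelined_frame_time(CPU_MS, GPU_MS))  # 14.0 ms -> ~71.4 fps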
 
The fact that Game Freak is experimenting with Unreal Engine is nothing new. I hope that with the Switch 2, Gen 10 will take advantage of Epic's engine.

 
The fact that Game Freak is experimenting with Unreal Engine is nothing new. I hope that with the Switch 2, Gen 10 will take advantage of Epic's engine.

Hard to say. They have Unity experience that they used for their own games, but that hasn't translated to Pokémon being on Unity. It's possible that UE5 is for their Private Division game.
 
The fact that Game Freak is experimenting with Unreal Engine is nothing new. I hope that with the Switch 2, Gen 10 will take advantage of Epic's engine.

The “Nintendo hire this man” crowd are eating rn
 
He was banned from that subreddit because a mod got "emotional" over Zippo claiming that a Chrono Trigger remake is in the works. In that claim he simply said, "A new Nintendo Direct will likely be airing in about 48 hours. I don't know when this is being announced, but we can hope and pray it's announced there."
That’s exactly how confidence tricks work. The con man/woman takes a known fact (direct airing in 48 hrs) and associates that with something people really want and/or likely to happen (Chrono Trigger remake fits both bills), all the while maintaining his/her deniability (“let’s hope and pray”).
 
I hadn't really thought about this before, but arguably, DLSS is especially well-suited to the paradigm where the CPU moves on to frame N while the GPU is still rendering frame N-1.

The main obstacle to achieving good CPU-GPU overlap with a naive approach is that the in-memory resources the CPU computed for the current frame -- positions and state of all the game objects, lighting, particles, etc. -- need to be kept around until the GPU is done using them to render. One solution is to double buffer some of your game state (which means more memory usage and increased code complexity), another is to allow overlap but use barriers to ensure that specific resources don't start getting updated in the current frame until the GPU's previous frame is done with them (which could slow you down a lot from the ideal overlap).

DLSS, on the other hand, pretty much only needs a few image buffers and the motion vectors from the previous frame. Those are trivial to buffer. Even with a fully naive approach to your pipeline, once the GPU is done with native rendering and ready for DLSS, you can allow the CPU to move on to the next frame without worrying that it will overwrite things the GPU still needs. There will be some rendering steps after DLSS -- Nvidia usually uses UI elements and tonemapping as examples -- but all the expensive native scene rendering is already done first. So the however many hypothetical milliseconds of DLSS execution could, in an ideal but not merely theoretical scenario, be almost free.
It's very much like an assembly line (sort of). Just because the stage at the end of the pipeline is working on what it was given doesn't mean the stages prior to it aren't doing something. They are working on the "next" part so the stages after them have something to work on.

The CPU works on the logic of the frame, and prepping the GPU with the information it needs to render the scene. After it finishes, it immediately begins work on the next frame for prepping the GPU again. When the GPU gets that information, it renders it into a buffer, which then gets displayed to the TV/monitor.

DLSS is really just an additional step in the pipeline between the GPU and the TV/monitor, taking what the GPU produced, and generating an upscale. It doesn't halt the GPU just like the GPU doesn't halt the CPU.
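To put rough numbers on that assembly-line picture, here's a small sketch (all stage costs are hypothetical, and it assumes DLSS can overlap the next frame's work, which is exactly the point debated in the posts below): each stage adds to the latency of an individual frame, but the output rate is limited only by the slowest stage.

Code:
# Hypothetical stage costs in ms; none of these are measured values.
STAGES = {"cpu_sim": 10.0, "gpu_render": 12.0, "dlss": 3.0}
FRAME_BUDGET_MS = 1000.0 / 60.0  # ~16.7 ms per frame at 60 fps

latency_ms = sum(STAGES.values())   # time from game logic to displayed pixel
interval_ms = max(STAGES.values())  # one finished frame per slowest stage

print(f"per-frame latency: {latency_ms:.1f} ms")
print(f"frame interval:    {interval_ms:.1f} ms "
      f"({'meets' if interval_ms <= FRAME_BUDGET_MS else 'misses'} 60 fps)")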
 
CPU and GPU frame time can (and should) overlap. In fact, in an ideal scenario, they're independent and can each take up to 100% of the frame time simultaneously. That limit is not usually achievable in practice, but in general I wouldn't say it's true that the GPU has "16.6 ms minus CPU time" to render a frame.
I certainly wasn't trying to be universal about it, just illustrative about DLSS still being a trade off.

But I would say it's true in general. While you can interleave tasks, like asset decompression and loading, sound, etc, you still have to run all physics, AI, and input sampling before GPU can render. UE4's poor threading model puts most of the tasks that could be interleaved in the mainthread by default. It still remains the norm, until UE5 is widely adopted.

DLSS, on the other hand, pretty much only needs a few image buffers and the motion vectors from the previous frame. Those are trivial to buffer.
On the one hand, this is pretty much how Frame-Gen works, by interpolating during CPU time. On the other hand, it also forces Nvidia reflex, which creates faux-backpressure to force synchronization between the CPU and the GPU, in order to guarantee the CPU is delivering frames just-in-time, to minimize latency.
 
@oldpuck Does DLSS run in parallel with rendering? I would assume it almost has to, because it's essentially filling in the blanks. For example, if the DLSS time is 6ms to get its work done and the actual rendering takes 8ms, I don't think you would just add the two together. By the time the GPU is ready to rasterize the image, the DLSS work has to be done in order to fill in the blanks. Is this correct?
 
I certainly wasn't trying to be universal about it, just illustrative about DLSS still being a trade off.

But I would say it's true in general. While you can interleave tasks, like asset decompression and loading, sound, etc, you still have to run all physics, AI, and input sampling before GPU can render. UE4's poor threading model puts most of the tasks that could be interleaved in the mainthread by default. It still remains the norm, until UE5 is widely adopted.


On the one hand, this is pretty much how Frame-Gen works, by interpolating during CPU time. On the other hand, it also forces Nvidia reflex, which creates faux-backpressure to force synchronization between the CPU and the GPU, in order to guarantee the CPU is delivering frames just-in-time, to minimize latency.
You have to run those tasks before the GPU can begin rendering the first frame, but you don't have to wait for the GPU to finish that frame before you start running those tasks for the second frame. This is a fundamental principle.

And like I said, even in the case of a pipeline where the CPU is stupidly waiting for all rendering to finish before doing anything, it would be trivial to amend that so it doesn't wait for DLSS, since you don't even have to worry about the rest of the pipeline state -- all you need to preserve is the inputs to DLSS (and maybe a little extra game data like HUD element position, which you could just entirely double buffer if you needed to).

I'm assuming the most pessimistic and outdated kind of pipeline here and I still think the DLSS cost can basically be amortized entirely within the next frame's CPU time. In fact, it's specifically when assuming a bad pipeline that this becomes a really notable advantage, since a good pipeline hypothetically doesn't need to wait for native rendering either, but a bad pipeline that has that issue should have no trouble at least doing it with DLSS.
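A minimal sketch of the "just preserve the DLSS inputs" idea (the structure and names are made up for illustration, not any real API): ping-pong between two slots for the handful of buffers DLSS reads, so the CPU can start building frame N+1 while DLSS is still consuming frame N's inputs.

Code:
# Illustrative double buffering of DLSS inputs; not a real graphics API.
class DlssInputs:
    def __init__(self):
        self.color = None           # low-res color buffer
        self.motion_vectors = None  # per-pixel motion vectors
        self.depth = None           # depth buffer

slots = [DlssInputs(), DlssInputs()]  # two slots, used alternately

for frame in range(4):
    inputs = slots[frame % 2]  # frame N's DLSS inputs live in this slot
    # ... native rendering fills `inputs` ...
    # The CPU can now move on to frame N+1, which writes the *other* slot,
    # so DLSS can still read `inputs` for frame N without them being stomped on.
    print(f"frame {frame}: DLSS reads slot {frame % 2}, "
          f"frame {frame + 1} writes slot {(frame + 1) % 2}")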
 
But I would say it's true in general. While you can interleave tasks, like asset decompression and loading, sound, etc, you still have to run all physics, AI, and input sampling before GPU can render. UE4's poor threading model puts most of the tasks that could be interleaved in the mainthread by default. It still remains the norm, until UE5 is widely adopted.
But it still isn't really taking away from the other parts of the system. The CPU could process its logic in 16.66ms, and the GPU could render a scene in 16.66ms, but that doesn't mean it's displaying a new frame every 33.33ms as if the CPU is stalled until the GPU finishes. While the GPU is processing that current graphical frame, the CPU is processing the next logical frame. So the CPU is processing at 60fps, and the GPU is processing at 60fps. What we have is latency of 16.66ms after a logical frame is processed. If DLSS were added and it spent 6ms doing its job, the game would still run at 60fps. The latency after a processed logical frame, however, would increase by 6ms to become 22.66ms.
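Worked through with those numbers (plus the hypothetical 6 ms DLSS cost, and assuming, as the post above does, that DLSS overlaps the next frame's work rather than extending the GPU's own 16.66 ms):

Code:
# Numbers from the post above; the DLSS figure is hypothetical.
CPU_MS, GPU_MS, DLSS_MS = 16.66, 16.66, 6.0

frame_interval_ms = max(CPU_MS, GPU_MS, DLSS_MS)  # 16.66 ms -> still ~60 fps
latency_after_logic_ms = GPU_MS + DLSS_MS         # 22.66 ms from end of game logic to display

print(frame_interval_ms, latency_after_logic_ms)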
 
The fact that Game Freak is experimenting with Unreal Engine is nothing new. I hope that with the Switch 2, Gen 10 will take advantage of Epic's engine.

I would LOVE to see them take notes from the Pikmin 4 engine somehow.

But yes, to keep expectations reasonable, it's probably related to their PS5 game more than Pokémon.
 
But it still isn't really taking away from the other parts of the system. The CPU could process its logic in 16.66ms, and the GPU could render a scene in 16.66ms, but that doesn't mean it's displaying a new frame every 33.33ms as if the CPU is stalled until the GPU finishes. While the GPU is processing that current graphical frame, the CPU is processing the next logical frame. So the CPU is processing at 60fps, and the GPU is processing at 60fps. What we have is latency of 16.66ms after a logical frame is processed. If DLSS were added and it spent 6ms doing its job, the game would still run at 60fps. The latency after a processed logical frame, however, would increase by 6ms to become 22.66ms.
I think this concept of latency is a little too optimistic because it ignores vsync, which the CPU and GPU must respect to avoid tearing unless VRR is available. But I also think the overall scenario, leading to a latency increase in the first place, is too pessimistic because DLSS isn't going to be an added number of milliseconds. The whole point of image upscalers is to spend less time getting a reasonable quality version of an image at a resolution that would take way longer if you actually had to render it. A game that's already at 16.6 ms GPU frame time wouldn't just add DLSS as-is, they would decrease the rendering resolution and then apply as much DLSS as they could fit into the time they saved from the resolution decrease.
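As a back-of-the-envelope version of that last point (the numbers are invented and the cost model is crude, roughly scaling GPU time with pixel count): dropping the internal resolution frees up far more GPU time than the upscale pass costs.

Code:
# Very rough model: GPU time scales ~linearly with pixel count. Illustrative only.
NATIVE_4K_MS = 16.6   # hypothetical cost of rendering natively at 2160p
DLSS_MS = 3.0         # hypothetical cost of the DLSS pass itself

def render_ms(height):
    # Width scales with height, so pixel count scales with height squared.
    return NATIVE_4K_MS * (height / 2160) ** 2

for internal in (2160, 1440, 1080):
    total = render_ms(internal) + DLSS_MS
    print(f"{internal}p internal + DLSS: {total:4.1f} ms  (native 4K: {NATIVE_4K_MS} ms)")
# 1440p lands around ~10.4 ms and 1080p around ~7.2 ms -- the saved time
# pays for the upscale and then some.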
 
If we’re getting Prime info, maybe a Marketing Department received some promo material

And this close to September Direct and Gamescom?

Yeah, I think the game could be coming within the next five to six months
Unless MP4 is not a cross-gen game (unlikely, but who knows), things starting to leak out about this title could be indicative that Drake is coming sooner rather than later.

the floodgates truly are opening.
 
You have to run those tasks before the GPU can begin rendering the first frame, but you don't have to wait for the GPU to finish that frame before you start running those tasks for the second frame. This is a fundamental principle.
Ah, yeah, I see what you're saying. Yeah, the GPU needs all the data to render the frame, but the CPU doesn't have to wait for the frame to render to begin its tasks for the next frame - unless it receives backpressure from the render queue, which is essentially a signal from the driver saying "woah woah woah, you're getting too far ahead, stop."

I'm sure DLSS can be amortized away in CPU, in many cases.
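For anyone curious what that backpressure looks like in code, here's a toy sketch (the queue and the 16 ms sleep are stand-ins, not a real driver or graphics API): cap how many frames the CPU may get ahead, and block it when the cap is hit.

Code:
# Illustrative frames-in-flight cap, not a real driver interface.
import queue
import threading
import time

MAX_FRAMES_IN_FLIGHT = 2
render_queue = queue.Queue(maxsize=MAX_FRAMES_IN_FLIGHT)

def gpu_worker():
    while True:
        frame = render_queue.get()
        if frame is None:              # sentinel: no more frames
            break
        time.sleep(0.016)              # pretend rendering takes ~16 ms
        print(f"GPU finished frame {frame}")

gpu = threading.Thread(target=gpu_worker)
gpu.start()

for frame in range(5):
    # put() blocks once MAX_FRAMES_IN_FLIGHT frames are queued -- that's the
    # "woah woah woah, you're getting too far ahead" signal.
    render_queue.put(frame)
    print(f"CPU submitted frame {frame}")

render_queue.put(None)
gpu.join()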

@oldpuck Does DLSS run in parallel with rendering? I would assume it almost has to, because it's essentially filling in the blanks. For example, if the DLSS time is 6ms to get its work done and the actual rendering takes 8ms, I don't think you would just add the two together. By the time the GPU is ready to rasterize the image, the DLSS work has to be done in order to fill in the blanks. Is this correct?
DLSS 2? No, it runs after the frame is rendered. Without a rendered frame, there is no image to upscale and no "blanks" to fill in.

Take a 1080p frame and try to upscale it to 4k. 4k is exactly 4 times as big as 1080p, so the fast, simple way to upscale it is to turn every pixel into a 2x2 square of pixels

Code:
1080p pixel => 4kpixel

*               **
                **

But that gets you jaggies, and you don't get any extra detail that wasn't in the first image.

Code:
1080p pixels => 4k pixels

*               **
 *              **
  *               **
                  **
                    **
                    **

See how the small angled line on the left becomes a jagged stair step on the right? Anti aliasing comes around and says "hey, I can guess where edges are, and then smooth them out"
Code:
1080p pixels => 4k pixels

*               *
 *               *
  *                *
                     *
                     *

You see how you get a smoother line, but it's not perfect? There are lots of antialiasing techniques, but there are limits. And again, you're getting smoother edges, but you aren't getting new details. That's why some folks complain that Anti Aliasing looks blurry - it smudges the edges of things that it thinks should be smoothed, but it can get it wrong, removing detail, without adding anything new.

Ever been playing a video game and seen grass fizzle and pop? Those are little tiny details that are smaller than a pixel. As you move around, some of these little subpixel details will pop into and out of existence. At a higher res, where the pixels are actually smaller, that's a detail that might stay on the screen all the time. DLSS watches old frames, and captures all that subpixel detail, and then uses AI to add it back to the current frame - but it needs the current frame's information to know when to put that detail in and where.
Code:
1080p
frame 1 => frame 2 => frame 3

*             *           *
 *                       * 
  *           *         *

See that pixel in the middle that vanishes and then comes back? Maybe this is a zigzag pattern that would show up in 4K. DLSS tries to catch that. But it takes a couple frames to have enough data to work with. That's why DLSS 2 makes things ugly for like 2 frames right after a camera cut.

Code:
1080p
frame 1 => frame 2 => frame 3

*             *           *
 *                       * 
  *           *         *

...becomes 4k upscaled
frame 1 =>      frame 2 =>      frame 3

**                **               *
**                **             *
  **                               *
  **                                 *
    **            **               *
    **            **             *

That's what's amazing about DLSS 2 (and FSR 2, and XeSS) - that it smartly finds the zigzag the artist created and makes a 4K version of it, despite the fact that the zigzag itself never fully appears in any of the frames it was working from. But you can see that the zigzag is tending to move from right to left in the last upscaled frame. That data came from the last 1080p frame, which just gave us an angled line moving from right to left.

DLSS needs the current frame's data in order to upscale. It doesn't just fill in the blanks with the prior frame's data; it tries to learn what the underlying image is supposed to be by combining the current, completely rendered frame with information from the past.
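For completeness, the "fast, simple" 2x2 duplication described at the top of that post looks like this as a toy sketch (this is the jaggy baseline, nothing to do with how DLSS itself works):

Code:
# Toy nearest-neighbour 2x upscale: each source pixel becomes a 2x2 block.
def upscale_2x(image):
    out = []
    for row in image:
        wide = [px for px in row for _ in range(2)]  # duplicate each pixel horizontally
        out.append(wide)
        out.append(list(wide))                       # duplicate the whole row vertically
    return out

low = [[1, 0],
       [0, 1]]
for row in upscale_2x(low):
    print(row)
# [1, 1, 0, 0]
# [1, 1, 0, 0]
# [0, 0, 1, 1]
# [0, 0, 1, 1]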
 
I think this concept of latency is a little too optimistic because it ignores vsync, which the CPU and GPU must respect to avoid tearing unless VRR is available. But I also think the overall scenario, leading to a latency increase in the first place, is too pessimistic because DLSS isn't going to be an added number of milliseconds. The whole point of image upscalers is to spend less time getting a reasonable quality version of an image at a resolution that would take way longer if you actually had to render it. A game that's already at 16.6 ms GPU frame time wouldn't just add DLSS as-is, they would decrease the rendering resolution and then apply as much DLSS as they could fit into the time they saved from the resolution decrease.
I was just making a simple case with some static numbers. Certainly there's more to it than that, but all in all, the idea I wanted to convey is that these different components are more or less working simultaneously, with each working on what the next stage will work on.
 
Ah, yeah, I see what you're saying. [...]

DLSS 2? No, it runs after the frame is rendered. Without a rendered frame, there is no image to upscale and no "blanks" to fill in. [...]

DLSS needs the current frame's data in order to upscale. It doesn't just fill in the blanks with the prior frame's data; it tries to learn what the underlying image is supposed to be by combining the current, completely rendered frame with information from the past.
I believe what @Goodtwin meant is that while DLSS is upscaling the previous frame, the GPU can already draw the next one, without having to wait for the other.
 
But it still isn't really taking away from the other parts of the system. The CPU could process its logic in 16.66ms, and the GPU could render a scene in 16.66ms, but that doesn't mean it's displaying a new frame every 33.33ms as if the CPU is stalled until the GPU finishes. While the GPU is processing that current graphical frame, the CPU is processing the next logical frame. So the CPU is processing at 60fps, and the GPU is processing at 60fps. What we have is latency of 16.66ms after a logical frame is processed. If DLSS were added and it spent 6ms doing its job, the game would still run at 60fps. The latency after a processed logical frame, however, would increase by 6ms to become 22.66ms.
I think this concept of latency is a little too optimistic because it ignores vsync, which the CPU and GPU must respect to avoid tearing unless VRR is available. But I also think the overall scenario, leading to a latency increase in the first place, is too pessimistic because DLSS isn't going to be an added number of milliseconds. The whole point of image upscalers is to spend less time getting a reasonable quality version of an image at a resolution that would take way longer if you actually had to render it. A game that's already at 16.6 ms GPU frame time wouldn't just add DLSS as-is, they would decrease the rendering resolution and then apply as much DLSS as they could fit into the time they saved from the resolution decrease.
My original point - which might be genuinely irrelevant now - is that DLSS upscaling isn't free, and its cost varies with output resolution. Yes, DLSS 2's upscaling time can be hidden inside of CPU time.

But that would be at the cost of rendering additional features at the lower upscaled resolution. No matter where you hide DLSS's upscaling time, that will be true. If rendering time of this frame extends into the next frame's CPU work, that rendering could be anything, not just DLSS. 4K as the output target doesn't guarantee the highest quality image. Many devs will be deciding between balancing next gen visual features vs final output resolution, just like they do now. DLSS doesn't make that calculus go away, it just makes the effects significantly cheaper.

I recognize my example was beyond simplistic - but the calculus remains. There will be plenty of reasons to select a less-than-4k output
 
I believe what @Goodtwin meant is that while DLSS is upscaling the previous frame, the GPU can already draw the next one, without having to wait for the other.
Sorta? It's more complicated than that.

DLSS absolutely increases the amount of time it takes one frame to render, period.

Can some of that time be hidden in empty gaps elsewhere? Absolutely.

Is that a new idea? No, video games are already doing everything they can to use every empty gap on the hardware.

In theory, Drake can render 12 frames at a time, with 12 queues each mapped to an SM. But that doesn't mean it can render all those frames just as fast as it could render 1 frame that utilized multiple SMs.

Existing engines are optimized - they might be poorly optimized, but that doesn't mean that DLSS will magically optimize them. Either the renderer is very parallel and the GPU is well optimized, in which case room will need to be made for DLSS (like by knocking down the base resolution, as LiC says, or adjusting rendering features), or the renderer is pretty serial, in which case DLSS will get shoved into that shitty, serial pipeline and will not automatically increase GPU utilization.
 
I'll always wonder why people mistake art style for technical prowess... You'd expect the distinction would be obvious enough at this point, especially in a hardware thread.

Most people have a poor understanding of the technical nuts and bolts of videogames, but they do know what looks good. For example, Metroid Prime Remastered looks amazing, but most of the lighting and shadows are baked in, so technically it's faking a lot of things instead of doing the more demanding real-time effects. From this understanding, it's easy to see how a game like this could actually look worse if it used a lot of low-quality real-time effects. So while it would be technically more ambitious, what the gamer sees will be perceived as worse. This is common among most things; take professional sports, for example: fans will have a limited understanding of the play calling and execution, but they know when a team looks bad. Just because you don't have the knowledge needed to be a coach or a player doesn't mean the eye test is wrong; if the team looks bad it probably is performing poorly, even if the reasons are more nuanced than the fan understands.
 
Most people have a poor understanding of the technical nuts and bolts of videogames, but they do know what looks good. For example, Metroid Prime Remastered looks amazing, but most of the lighting and shadows are baked in, so technically it's faking a lot of things instead of doing the more demanding real-time effects. From this understanding, it's easy to see how a game like this could actually look worse if it used a lot of low-quality real-time effects. So while it would be technically more ambitious, what the gamer sees will be perceived as worse. This is common among most things; take professional sports, for example: fans will have a limited understanding of the play calling and execution, but they know when a team looks bad. Just because you don't have the knowledge needed to be a coach or a player doesn't mean the eye test is wrong; if the team looks bad it probably is performing poorly, even if the reasons are more nuanced than the fan understands.
Well, I don't think any of those games look particularly bad to begin with; quite the opposite. Now, you're right that this thread has a huge gap in knowledge between posters, but if the distinction is still not obvious in one dedicated exclusively to the specs and capabilities of hardware... that's not looking very good for the rest of the places, just saying.
 
Well, I don't think any of those games look particularly bad to begin with; quite the opposite. Now, you're right that this thread has a huge gap in knowledge between posters, but if the distinction is still not obvious in one dedicated exclusively to the specs and capabilities of hardware... that's not looking very good for the rest of the places, just saying.
Where does the assumption come from that Nintendo's own internal teams are lacking from a tech perspective? The games they have been releasing on Switch are usually among the best looking while pushing either impressive visuals or scale relative to the platform. At some point it becomes more of a manpower problem than a lack of technical knowledge, hence why they're ramping up hiring.
 
Where does the assumption come from that Nintendo's own internal teams are lacking from a tech perspective? The games they have been releasing on Switch are usually among the best looking while pushing either impressive visuals or scale relative to the platform. At some point it becomes more of a manpower problem than a lack of technical knowledge, hence why they're ramping up hiring.
Well, the thing is that Nintendo's internal teams are wildly inconsistent in that department and only a few can reach those levels of fidelity on a regular basis (which generally means 5-6 years of development, like everyone else). Still, historically speaking, keep in mind that no studio that has ever taken a generational leap this huge has been able to release something comparable to the best of the previous generation until at least a couple of years in. That said, I'm super excited for that news since the date they're giving is actually pretty close; 2028 won't be anywhere near the end of the 9th gen by any stretch.
 
Where does the assumption come from that Nintendo's own internal teams are lacking from a tech perspective? The games they have been releasing on Switch are usually among the best looking while pushing either impressive visuals or scale relative to the platform. At some point it becomes more of a manpower problem than a lack of technical knowledge, hence why they're ramping up hiring.

It's somewhat difficult to directly compare Nintendo's best work to a team like Naughty Dog. Nintendo has basically been working with hardware a generation behind Naughty Dog ever since the PS3 hit the market. While a game like Zelda TotK isn't going to wow anyone in 2023 based on its visuals, on a technical level it had developers commenting on how impressive it was. The physics system was already impressive, very impressive relative to the hardware it's on, but then they threw in a crazy crafting system that was probably a nightmare to keep from causing countless game-breaking glitches. I believe Nintendo is very competent when it comes to their technical prowess, but even better at managing their resources. So many developers let the game get away from them and try to find optimizations towards the end of development, often coming up short of maintaining the target framerate. Nintendo more often than not hits its desired framerate and holds it.

Well, the thing is that Nintendo's internal teams are wildly inconsistent in that department and only a few can reach those levels of fidelity on a regular basis (which generally means 5-6 years of development, like everyone else). Still, historically speaking, keep in mind that no studio that has ever taken a generational leap this huge has been able to release something comparable to the best of the previous generation until at least a couple of years in.

I think it's fair to point out that Naughty Dog is a team, not a publisher with lots of teams. Is Naughty Dog more technically capable than most Nintendo teams? Yes, but Nintendo has a lot of teams and a few of them are top tier. Still, the PS5 is out and exclusives for the hardware are going to be a thing. No matter how talented Nintendo's teams are, they can't make up for a 6 Tflop deficit and CPU performance that is less than half. Nintendo will make some terrific looking games, but on a technical level they can't be on the same level as Sony's top games; how could they, when they have less to work with?
 
Well, the thing is that Nintendo's internal teams are wildly inconsistent in that department and only a few can reach those levels of fidelity on a regular basis (which generally means 5-6 years of development, like everyone else). Still, historically speaking, keep in mind that no studio that has ever taken a generational leap this huge has been able to release something comparable to the best of the previous generation until at least a couple of years in. That said, I'm super excited for that news since the date they're giving is actually pretty close; 2028 won't be anywhere near the end of the 9th gen by any stretch.
I guess it does depend on the teams. If we're purely talking about EPD, I'd say it's as good as it gets, closely followed by Monolith Soft and Next Level Games (still waiting to see how MP4 turns out after MPR). And as Goodtwin mentioned, there will always be a hardware gap due to Nintendo's choice to focus on mobile hardware first and foremost.

If we're talking about other teams such as Game Freak, HAL, or Intelligent Systems, then Nintendo has less control over how they manage their tech besides offering assistance and supervision where they can (I imagine).
 
It's somewhat difficult to directly compare Nintendo's best work to a team like Naughty Dog. Nintendo has basically been working with hardware a generation behind Naughty Dog ever since the PS3 hit the market. While a game like Zelda TotK isn't going to wow anyone in 2023 based on its visuals, on a technical level it had developers commenting on how impressive it was. The physics system was already impressive, very impressive relative to the hardware it's on, but then they threw in a crazy crafting system that was probably a nightmare to keep from causing countless game-breaking glitches. I believe Nintendo is very competent when it comes to their technical prowess, but even better at managing their resources. So many developers let the game get away from them and try to find optimizations towards the end of development, often coming up short of maintaining the target framerate. Nintendo more often than not hits its desired framerate and holds it.



I think it's fair to point out that Naughty Dog is a team, not a publisher with lots of teams. Is Naughty Dog more technically capable than most Nintendo teams? Yes, but Nintendo has a lot of teams and a few of them are top tier. Still, the PS5 is out and exclusives for the hardware are going to be a thing. No matter how talented Nintendo's teams are, they can't make up for a 6 Tflop deficit and CPU performance that is less than half. Nintendo will make some terrific looking games, but on a technical level they can't be on the same level as Sony's top games; how could they, when they have less to work with?
That's why the baseline in this regard has always been matching the best of the 8th generation, rather than the one we're currently in. There's no way to compare to current gen; I think even Ubisoft's next offerings showed how impossible that will be... But the jump to 8th-gen fidelity is already very huge, and that gen is home to some of the best looking games ever. Their PBR workflows will take an insane jump in fidelity that, paired with their cartoon art styles, might actually kill the gap for the end user. This time for real, not because you have an affinity for their cartoon visuals and are therefore willing to overlook shortcomings.
 
I'll always wonder why people mistake art style for technical prowess... You'd expect the distinction would be obvious enough at this point, especially in a hardware thread.
First, I’m not “people”. My expression of disgust was more at the idea that Nintendo EPD haven’t accomplished anything which holds up to Naughty Dog and Guerilla on a technical level, when they’ve surpassed both. The narrative of them being behind the industry and playing perpetual catch-up, when they’ve been leaders on technical levels, among others. I also find it hella wild that you’re telling me, someone in creative events, that I’ve mistaken “art style” for “technical prowess”. All I wrote was “URGH”. So, the fact that you took THAT much from it, which is wrong, by the way, then tried to techbrosplain to me is telling, hilarious and too cute. In the case of Guerilla especially, please, don’t take my word for it when I tell you Breath surpassed Horizon: Zero Dawn on multiple accounts in 2017. I’ll let a past post and the video speak. Whatever. As You Were.

Not my video, but yes, they have. Surpassed it, even. Horizon being put on a pedestal is always amusing, especially after watching this. BTW, photorealism is one of many art styles. You can draw it and fake it on Art Academy: Sketchbook. You don’t need “more power” for it, and it won’t melt your Switch. More power is needed to bring the characters and world to life. Breath already blows TLOU, Horizon, and a ton of AAA titles out of the water, while developers across the gaming spectra are in awe of what was accomplished in Tears, on the Switch. This is what I mean about overshooting the capacities of PS/XBox hardware while undershooting Nintendo hardware. Also ignoring that PS first party publications targeted 30FPS more often than not, while Nintendo publications targeted 60FPS. Take Super Mario 3D World, for example. A Wii U title which launched around the time of the XB1/PS4 launches. Sackboy on PS4 is 720p and 60FPS - When the PS4 targets 60FPS, there isn’t THAT much between their games and Nintendo’s output. Try to tell someone that Sackboy is orders of magnitude more than that game, and you would rightly be laughed off the face of this planet. Of course, PS is totally fine with putting that game on PS4 NOW, because the discourse around that system is done. But seeing such claims on enthusiast forums persist is a kind of reach that would have Dhalsim, Stretch Armstrong and Elastigirl tapping out in submission, it’s painful to read and watch.

 
The technical discussion on DLSS is a bit over my head (interesting though!), but considering Nintendo potentially has their own custom version of the technology along with custom (within reason) hardware... is there any chance that upscaling on NG could be more efficient or even lower latency than what we've seen so far, i.e. running on desktop GPUs with extra overhead?
 
Nice, hopefully this means their teams will be Naughty Dog/Guerrilla tier by then. Doesn't seem like Drake will be underutilized after all..
Nintendo's already cooking just fine. Their texture work has been on the level since the Wii, possibly the GameCube. I find Nintendo tends to do well with texture and lighting techniques across the board. Modeling is still important, but it's icing by comparison.
 
First, I’m not “people”. My expression of disgust was more at the idea that Nintendo EPD haven’t accomplished anything which holds up to Naughty Dog and Guerilla on a technical level, when they’ve surpassed both. The narrative of them being behind the industry and playing perpetual catch-up, when they’ve been leaders on technical levels, among others. I also find it hella wild that you’re telling me, someone in creative events, that I’ve mistaken “art style” for “technical prowess”. All I wrote was “URGH”. So, the fact that you took THAT much from it, which is wrong, by the way, then tried to techbrosplain to me is telling, hilarious and too cute. In the case of Guerilla especially, please, don’t take my word for it when I tell you Breath surpassed Horizon: Zero Dawn on multiple accounts in 2017. I’ll let a past post and the video speak. Whatever. As You Were.
I didn't take all that much, and the ways it "surpassed" it are still not the technical department per se; extensive physics systems are not for everyone and every type of game (even funnier to claim such a thing since ZD actually had a lot more interactivity than similar open worlds at the time). To give you an idea of how ridiculous it is to say BOTW surpassed ZD in the technical department: an XC3 model that's already higher quality than any BOTW model is only around 20k triangles, while just Aloy's hair in that game was around 100k triangles. It may have surpassed it in other ways, but it sure as hell wasn't this one, and for good reasons (Wii U vs. mid-gen PS4 game, how are you even supposed to compare them?).
 
hey nate, today is my mom's dog's birthday. Do you have a scoop to give me please? THANKS!
How nuts would it be if he gave you one.
Like in the late 90s when in France we started having TV shows where people could talk live to the presenter. The first person who asked for a car got one, just because they were the first to ask.
 
So what's with all the Zippo hate? Wasn't he a liked leaker back in ERA or am I misremembering?

What changed?
In addition to what others pointed out, he developed a tendency to use his blog to trash talk this community back when it was a part of ResetERA, and continued trashing us once Famiboards was created. He tends to take arrogant victory laps around those who doubt his claims when stuff happens that could be seen as lining up with his guesses, and he tends to piggyback off of other people like Nate or Emily and then will write up blog posts about how actually he got that info waaaaay before they did, etc.

And one of the most surprising things I saw him do was when a DK thread was started here on Fami and people in the thread were discussing stuff about potential new DK games, he went on an angry tirade on his blog, even naming specific Fami users from that thread and cussing them for stuff they were saying about upcoming or cancelled DK games simply because what they were saying didn't line up with the DK stuff he had already "leaked".

So regardless of what he may or may not have known in the past, or what he may or may not have gotten "right" with his guesses and piggybacks, the dude has it out for this community and actively attacks us when he sees a chance. He is not a good source for leaks and just generally not a good internet personality, full stop.

Anyway sorry yall, back to hardware 😅
 
In addition to what others pointed out, he developed a tendency to use his blog to trash talk this community back when it was a part of ResetERA, and continued trashing us once Famiboards was created. He tends to take arrogant victory laps around those who doubt his claims when stuff happens that could be seen as lining up with his guesses, and he tends to piggyback off of other people like Nate or Emily and then will write up blog posts about how actually he got that info waaaaay before they did, etc.

And one of the most surprising things I saw him do was when a DK thread was started here on Fami and people in the thread were discussing stuff about potential new DK games, he went on an angry tirade on his blog, even naming specific Fami users from that thread and cussing them for stuff they were saying about upcoming or cancelled DK games simply because what they were saying didn't line up with the DK stuff he had already "leaked".

So regardless of what he may or may not have known in the past, or what he may or may not have gotten "right" with his guesses and piggybacks, the dude has it out for this community and actively attacks us when he sees a chance. He is not a good source for leaks and just generally not a good internet personality, full stop.

Anyway sorry yall, back to hardware 😅
Sheesh, I knew Zippo was controversial, but not to this extent.
 
I believe what @Goodtwin meant is that while DLSS is upscaling the previous frame, the GPU can already draw the next one, without having to wait for the other.
I believe Nvidia shared some slides about that, running DLSS in parallel, upscaling the previous frame, and comparing it to a more serial approach.
There were some sizable gains in GPU utilization.
I couldn't find a source, not even sure how to search for it.

But we're effectively delaying the output by one frame, adding 16ms to input latency at 60fps and an even worse 33ms at 30fps, on top of the already existing hardware and engine latency.

A quick "maximum input lag" search shows the following:
Professional competitive gamers try to keep input lag under 15 milliseconds. Casual gamers and enthusiasts are usually comfortable with latency under 40 milliseconds. Beyond 50 milliseconds, the delay becomes more noticeable.
We could be blowing past our budget in one go.
This would be disastrous in online shooters but wouldn't matter much in most Nintendo games IMO.
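Putting rough numbers on that (the "about two frames of existing latency" baseline is an assumption for illustration, not a measurement): one extra frame of delay pushes even a 60fps game near or past a 40 ms budget, and a 30fps game well past it.

Code:
# Illustrative latency budget; the baseline pipeline latency is an assumed figure.
BUDGET_MS = 40.0  # the "casual" comfort threshold quoted above

for fps in (60, 30):
    frame_ms = 1000.0 / fps
    baseline_ms = 2 * frame_ms            # assume ~2 frames of existing engine/display latency
    with_extra_frame_ms = baseline_ms + frame_ms
    print(f"{fps} fps: baseline ~{baseline_ms:.0f} ms, "
          f"+1 frame -> ~{with_extra_frame_ms:.0f} ms (budget {BUDGET_MS:.0f} ms)")
# 60 fps: ~33 ms -> ~50 ms; 30 fps: ~67 ms -> ~100 ms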
 
Please read this new, consolidated staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.