
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Honestly, it's all largely a moot argument anyway. In terms of raw performance it could be PS4+, but with DLSS (be it 2 or 3.5) the performance and resolution will be far beyond what the PS4 or even a PS4 Pro could achieve. Factor in RT and such, and the system will be competing with the PS5/Series line. It's more than capable hardware and will be exceptionally modern. If it rivals Series S fidelity and performance with DLSS enabled, that bodes well for the hardware and the support it'll receive.
BRB, need to make a YouTube video "Renowned Nintendo insider confirms Next Gen Switch as capable as PS5 / Series X"

Just need to snap the perfect picture with my mouth wide open in shock for the thumbnail.
 
I ain't worried.
Nor should anyone else be.
The one aspect I am worried about is full-phat backwards compatibility.

At one point I was convinced it was a no-brainer for Nintendo to push for this.
Then, thanks to a few NateDrake podcasts coupled with some of MVG's "How such-n-such hardware was compromised" videos, one common vulnerability vector seems to be the backwards-compatibility allowances on some of those machines.

I feel confident some form of backwards compatibility will be offered, specifically that we can play enhanced versions of supported digital Switch titles. However, those security concerns, coupled with the engineering challenges of old games running on the new architecture, have me currently doubting that a cartridge slot compatible with my Switch physical library will be there.

I'm not sure how to feel about that, if the above comes to pass.
 
When do we expect the announcement video? If late 2024 is true, I would think we see a teaser, maybe at The Game Awards, but more likely I would say Feb '24 ;)
It will happen at a time and place fully under Nintendo's control, where they don't have to share the spotlight. Will most likely be in a dedicated trailer drop, with an outside chance of a Nintendo Direct.
 
That's fair then. My argument would then be: are Nintendo going to release bleeding-edge hardware that struggles to come in at under $500, when they've just hit a massive home run by releasing a moderately (at the time) powerful hybrid console which they could sell for $349, but which even then barely lasted 2.5 hours in its flagship launch game? Also, the tensor cores aren't free to manufacture, and we know it has them, so that has to be added to the BoM.

How long is this 1.5 TFLOPS mobile GPU going to last in terms of steady performance while gaming? Does this chip even sustain its performance, or is it in a phone that throttles when it gets too hot after 20 minutes, at which point you lose 60% of your gaming performance? And if it's not in a phone, how big is the device? We get back to a brick like the Steam Deck again. I just don't see it personally, but I will be glad to be wrong!

A device with the exact same dimensions as the current Switch that does 900 GFLOPS handheld / 1.8 TFLOPS docked with tensor cores for DLSS/RT, lasts 3 hours when mobile, costs $399, and plays every current Switch game will be a phenomenal value proposition for 99% of their target audience. There is no need to push further than that, and if there's one thing we know about corporations, it's that they do the bare minimum, especially in a post-Covid / current-war economy.

I think you're overselling how advanced Drake will be while underselling the TX1 inside the Switch. Neither the A78 nor 4nm is a new technology, and the TX1, while old, was still one of the best chips of its time. There were other chips that claimed better performance back then, but benchmark performance is very different from what is stable for games.

Steam Deck is definitely bulky, but Nintendo has many advantages that Valve doesn't. Most important is ARM: it is much more energy efficient by design, and impossible for Valve to use since PC games don't support ARM. Nintendo can also order custom chips, while Valve has to work with what is available. Nintendo will also probably use eUFS storage, which is more energy efficient than NVMe, since they don't have to worry about it being user replaceable. Finally, economies of scale mean they can get parts for cheaper.
 
The one aspect I am worried about is full-phat backwards compatibility.

At one point I was convinced it was a no-brainer for Nintendo to push for this.
Then, thanks to a few NateDrake podcasts coupled with some of MVG's "How such-n-such hardware was compromised" videos, one common vulnerability vector seems to be the backwards-compatibility allowances on some of those machines.

I feel confident some form of backwards compatibility will be offered, specifically that we can play enhanced versions of supported digital Switch titles. However, those security concerns, coupled with the engineering challenges of old games running on the new architecture, have me currently doubting that a cartridge slot compatible with my Switch physical library will be there.

I'm not sure how to feel about that, if the above comes to pass.
A cartridge slot isn't really a vector for vulnerabilities. What would hurt cartridge BC is the ability to make an updated game card. I think they can definitely make one and still be compatible with Switch cards, though.
 
BRB, need to make a YouTube video "Renowned Nintendo insider confirms Next Gen Switch as capable as PS5 / Series X"

Just need to snap the perfect picture with my mouth wide open in shock for the thumbnail.

This best not happen. Any channel that does so... I'll blast.

The one aspect I am worried about is full-phat backwards compatibility.

At one point I was convinced it was a no-brainer for Nintendo to push for this.
Then, thanks to a few NateDrake podcasts coupled with some of MVG's "How such-n-such hardware was compromised" videos, one common vulnerability vector seems to be the backwards-compatibility allowances on some of those machines.

I feel confident some form of backwards compatibility will be offered, specifically that we can play enhanced versions of supported digital Switch titles. However, those security concerns, coupled with the engineering challenges of old games running on the new architecture, have me currently doubting that a cartridge slot compatible with my Switch physical library will be there.

I'm not sure how to feel about that, if the above comes to pass.
All hurdles can be solved for and addressed. As we said recently, BC needs to be there. They cannot afford not to have it. I do believe BC will be there. It's not a concern that weighs on my mind.
 
The one aspect I am worried about is full-phat backwards compatibility.

At one point I was convinced it was a no-brainer for Nintendo to push for this.
Then, thanks to a few NateDrake podcasts coupled with some of MVG's "How such-n-such hardware was compromised" videos, one common vulnerability vector seems to be the backwards-compatibility allowances on some of those machines.

I feel confident some form of backwards compatibility will be offered, specifically that we can play enhanced versions of supported digital Switch titles. However, those security concerns, coupled with the engineering challenges of old games running on the new architecture, have me currently doubting that a cartridge slot compatible with my Switch physical library will be there.

I'm not sure how to feel about that, if the above comes to pass.
Same. This is my #1 wish for Switch 2. I feel pretty confident, but you just never know.
 
Why do some people think that 1.5 teraflops is not possible for Switch 2 in portable? lol
I'm starting to think this is one of those "because Nintendo" instances without understanding the situation. I mean, the Switch was initially going to be much weaker (roughly half a Wii U) until Nvidia showed up on their doorstep with the TX1, so we ended up getting something much stronger in the process. Now we're getting something custom-designed specifically for Nintendo's needs. The comparisons made between the PC-based portables and Switch 2's theoretical power really miss one of the more important aspects. The PC-based portables use architectures that have higher power consumption. ARM is quite a bit more power efficient than x86 (which has decades of holding onto legacy support), and Nvidia is known for being a bit more power efficient than AMD at the very least.

We had a very thorough look at possible clocks, and the reasoning behind them, for Switch 2 by Thraktor some time back. Perhaps these folks should take a look at it.
 
I think you're overselling how advanced Drake will be while underselling the TX1 inside the Switch. Neither the A78 nor 4nm is a new technology, and the TX1, while old, was still one of the best chips of its time. There were other chips that claimed better performance back then, but benchmark performance is very different from what is stable for games.

Steam Deck is definitely bulky, but Nintendo has many advantages that Valve doesn't. Most important is ARM: it is much more energy efficient by design, and impossible for Valve to use since PC games don't support ARM. Nintendo can also order custom chips, while Valve has to work with what is available. Nintendo will also probably use eUFS storage, which is more energy efficient than NVMe, since they don't have to worry about it being user replaceable. Finally, economies of scale mean they can get parts for cheaper.
Also, Nvidia is more efficient than AMD. The difference is no longer massive like it was pre-RDNA 2, but is still there.

Edit: Here's a good article about it:


Here, RDNA 2 vs Ampere seems to be very even, but Ampere is on Samsung 8nm while RDNA 2 is on TSMC 7nm, which is very significant. If Switch 2 is on TSMC 4nm, then the difference vs Steam Deck in performance/watt would be large.
 
Honestly, it's all largely a moot argument anyway. In terms of raw performance it could be PS4+, but with DLSS (be it 2 or 3.5) the performance and resolution will be far beyond what the PS4 or even a PS4 Pro could achieve. Factor in RT and such, and the system will be competing with the PS5/Series line. It's more than capable hardware and will be exceptionally modern. If it rivals Series S fidelity and performance with DLSS enabled, that bodes well for the hardware and the support it'll receive.
Until Nintendo announces its next hardware and we all see some games for the console, it will be hard to measure how powerful or weak it can be. If the next 3D Mario looks like the Super Mario Bros. movie art style, that will give us an idea of the console's graphical/technical performance.
 
Have we any indications yet of an S2 prototype or final-configuration SDK being shown privately later this week, or whether it may have already happened today? I was hoping this forum would have an insider attending in person!
 
Why do some people think that 1.5 teraflops is not possible for Switch 2 in portable? lol
I'm starting to think this is one of those "because Nintendo" instances without understanding the situation.
Because Steam Deck is 1.6 and the Steam Deck is so big. And because, back in the 8nm days, we spent so long talking about how much of a push even 1 TFLOP would be.

For the record, I don't think 1.5 TFLOP is where we'll land. @Thraktor is right that 1.5 TFLOP is maximum efficiency, but max efficiency isn't max battery life. And 3 TFLOPS seems like the docked max before the GPU starts to get bottlenecked across the board. So I wouldn't be surprised to find something a little shy of that in both modes.

The difference between 400MHz GPU and 500MHz GPU is yawn. It's not that it doesn't matter - it's a 25% increase in perf - but it's not fundamental. Especially with so many other aspects of the system seemingly in place.
 
Have we any indications yet of an S2 prototype or final-configuration SDK being shown privately later this week, or whether it may have already happened today? I was hoping this forum would have an insider attending in person!
Nintendo might show the finalized devkit of the Switch successor at Gamescom/Tokyo Game Show.
 
Nintendo's presence at Gamescom is uhhhh... what you'd expect?


[surprise-whats-in-the-box.gif]
 
Honestly, it's all largely a moot argument anyway. In terms of raw performance it could be PS4+, but with DLSS (be it 2 or 3.5) the performance and resolution will be far beyond what the PS4 or even a PS4 Pro could achieve. Factor in RT and such, and the system will be competing with the PS5/Series line. It's more than capable hardware and will be exceptionally modern. If it rivals Series S fidelity and performance with DLSS enabled, that bodes well for the hardware and the support it'll receive.

It will always be a conversation for debate leading up to the release of the hardware. I remember when it was confirmed that the Switch would be using the standard Tegra X1 and there was a significant amount of worry about how this would affect third party support. We had already seen the Shield TV struggle to run some Xbox 360 ports, so when we found out the Switch would be using the same Tegra X1 processor but downclocked, the outlook didn't look great. In the end the Switch became wildly popular, and third party support, while certainly not on par with PS or Xbox, was still far more significant than the specs would have made you believe. There were plenty of skeptics who swore up and down that a game like The Witcher 3 could not run on the Switch, no matter how many compromises were made.

SNG will be in a similar spot, but thanks to the Xbox Series S, it has a partner in crime within a stone's throw of it. At minimum, we appear to be looking at a PS4+ with a better CPU and probably more RAM. As long as SNG is successful, and it's hard to imagine it flopping seeing as the Switch is still selling nearly as well as the PS5, developers will have plenty of incentive to accommodate these lower-spec machines when developing their games. So regardless of whether SNG is 2 TFLOPS or 3.4 TFLOPS, it will have little to no impact on the number of ports it sees. People's natural assumption is that a smaller power differential means more ports, but the reality is that a 3.4 TFLOPS SNG that doesn't sell well would get fewer ports than a 2 TFLOPS SNG that is selling very well, just like its predecessor. This is nothing new, of course; the conversation will persist, regardless of its performance, until the hardware is released and the games start to roll in.
 
Well, you don't show up at an event and book out floor space for a private room unless you have something to show someone. It seems very plausible at this point that Nintendo's only reason for attending Gamescom was to meet with developers behind closed doors. Between Gamescom this month and TGS next month, information is bound to start trickling out in the very near future.
 
No one (or outlet) is going to rush to share or report anything they get told during Gamescom.
Gonna assume these things get talked about and cross-referenced to make sure everyone is hearing the same thing, then once enough people know, shit starts going down?
 
Shuntaro Furukawa: "Nintendo have a harsh battle to convince people to buy our games"

There is a cottage industry devoted to translating any Nintendo executive quote in the most sensational manner for engagement. For example, Furukawa didn't actually say that every year is "do or die" for Nintendo, when "critical" is a more accurate translation. And for this quote from NHK, "harsh battle" is really a stretch, when a simple "tough competition" suffices.
 
Some Brief Expectation Setting

Hi, I'm Old Puck. You may know me from such films as DLSS 2: Electric Boogaloo and A Midsummer Night's Stream. If you've been here a while, you've probably formed your own expectations for the REDACTED, A New Console By Nintendo(tm).

But if you're new, or maybe a little techno-intimidated, maybe you haven't. Maybe you want to be excited, but you've been burned by Nintendo before. Maybe you've seen a lot of arguing about numbers but you don't know what those numbers mean. Maybe you just want expectations that are reasonable, but also don't set you up for disappointment.

Don't worry, I'm here to help.

TL;DR: Last Gen ports are easy, Next Gen ports are hard, Nintendo games look great

Every discussion here comes down to this, we're just arguing over the definitions of "easy" and "hard" - not the fundamental truth of this statement. You stay there, you'll be alright.

It'll be the best damn handheld on the market. But "best" doesn't mean "most powerful" in every way.

We know a lot about Drake, the chip inside the new console, but we don't know everything. There are a lot of estimates to fill in those blanks. But even the most pessimistic estimates make a pretty excellent handheld.

The Steam Deck is an amazing piece of kit, but its magic trick is making a PC that fits in your hand. The Switch's magic trick is to make a powerful handheld that plugs into your TV. That's not the same trick, and when you understand that, you can start to see the advantages and disadvantages Nintendo has.

Nintendo will let developers code down to the hardware, taking advantage of every specific feature that Redacted offers. When developers do that, there will be enough power to offer what Steam Deck does, but in a smaller form factor, with better battery life.

It'll be "last gen +" when you dock it, but not "last gen Pro +."

How the hell did The Witcher III get on Switch? The answer is "a lot of hard development work." But the other answer is "the Switch may not be powerful, but it is modern." All those modern features let the development studio perform clever optimizations that wouldn't be possible on the 360, even though the Switch isn't way ahead of the 360 in raw horsepower.

The optimists who expect 3.5 TFLOPS of Raw GPU Compute!!! and the pessimists who expect 2 TFLOPS of Widdle Baby Console, For Babies? Both of them are in this range.

There is only so much electricity, silicon, and money to go around. Even if Nintendo chooses to make any single metric competitive with the last gen Pro consoles, or the Series S, or whatever - that doesn't leave enough power to make every metric competitive. We're discussing how big the "plus" in "last gen plus" might be, and where it might be - RAM, CPU, GPU? - but we're not going to get all of them all at once.

The Pro consoles are shit comparison points, anyway

The Pro consoles didn't have any exclusives; they were always "enhanced" base model games. There is not a single game ever published that actually takes full advantage of a Pro system's hardware. The One X has an absolute monster of a GPU because Microsoft was trying to baseball bat the Xbox One's library up to 4K.

The Pro consoles were also pretty crap at things that the base consoles were also crap at. The One X has the CPU of a potato, because the One had a potato for a CPU, and improving that potato wasn't worth it for enhanced Xbox One games.

Winning some sort of point-for-point spec battle with a Monster Potato isn't a victory. There are better paths to games that look as good as Monster Potato Games, paths that open up more modern ports.

What about the Series S?

It's like the Pro console but backwards. Again, there are no exclusives; it's designed to receive cut-down next gen games. Complaints about the Series S from developers have more to do with being required to make Next Gen Magic happen on Series X, while achieving Feature Parity on Series S.

There are places where Drake simply can never compete with the Series S no matter how much you tweak the numbers - like CPU performance. There are places where Drake can beat it quite easily - like the memory architecture. And then there are places where maybe Drake could be competitive, but at a cost that in practice you probably aren't willing to pay - like GPU performance.

What about Frame Generation?

The thing you probably think Frame Gen does? It doesn't do that.

Maybe it's possible on Drake, maybe it isn't. But it's not magic, and the best case scenarios I've seen describe a technology less useful than the Switch's IR camera.

IS NINTENDOOOOOOOMED?

Best damn handheld on the market. Huge upgrade for Nintendo games on your TV. Great last gen ports you can take on the go. Pokemon. The exact recipe that made you buy a Switch in the first place.

I ain't worried.
Oldpuck continuing to be my favourite poster in this thread, explaining stuff that even MY dumb baby brain can understand
 
In fact, RT uses so few rays that the raw image generated looks worse than a 2001 flip phone camera taking a picture at night. It doesn't even have connected lines, just semi-random dots. The job of the denoiser is to go and connect those dots into a coherent image. You can think of a denoiser kinda like anti-aliasing on steroids - antialiasing takes jagged lines, figures out what the artistic intent of those lines was supposed to be, and smoothes it out.

The problem with anti-aliasing is that it can be blurry - you're deleting "real" pixels, and replacing them with higher res guesses. It's cleaner, but deletes real detail to get a smoother output. That's why DLSS 2 is also an anti-aliaser - DLSS 2 needs that raw, "real" pixel data to feed its AI model, with the goal of producing an image that keeps all the detail and smoothes the output.

And that's why DLSS 2 has not always interacted well with RT effects. The RT denoiser runs before DLSS 2 does, deleting useful information that DLSS would normally use to show you a higher resolution image. DLSS 3.5 now replaces the denoiser with its own upscaler, just as it replaced anti-aliasing tech before it. This has four big advantages.
This post is a great explainer, but there are some fundamental differences between denoising and antialiasing that are worth clarifying. For example, I wouldn't say that anti-aliasing is deleting "real" detail. Both noise and aliasing are sampling artifacts, but their origin is very different. I'll elaborate on why:

Spatial frequency

Just like how signals that vary in time have a frequency, so do signals that vary in space. A really simple signal in time is something like sin(2 * pi * t). It's easy to imagine a similar signal in 2D space: sin(2 * pi * x) * sin(2 * pi * y). I'll have Wolfram Alpha plot it:

[plot of sin(2 * pi * x) * sin(2 * pi * y)]


The important thing to notice here is that the frequency content in x and y is separable. You could have a function that has a higher frequency in x than in y, like sin(5 * pi * x) * sin(2 * pi * y):

[plot of sin(5 * pi * x) * sin(2 * pi * y)]


So just like time frequency has dimensions 1/[Time], spatial frequency is a 1-dimensional concept with dimensions 1/[Length]. The way the signal varies with x and the way it varies with y are independent. That's true in 1D, 2D, 3D... N-D, but we care about 2D because images are 2D signals.
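To make that separability concrete, here's a minimal numpy sketch (the grid size and frequencies are just illustrative choices, nothing rendering-specific) that evaluates both of those signals on a discrete sampling grid:

```python
import numpy as np

# Evaluate the separable 2D signal sin(2*pi*x) * sin(2*pi*y) on a 256x256 grid.
# The x and y frequencies are independent, so you can raise one
# (e.g. sin(5*pi*x)) without touching the other.
x = np.linspace(0.0, 1.0, 256)
y = np.linspace(0.0, 1.0, 256)
X, Y = np.meshgrid(x, y)

signal_a = np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)
signal_b = np.sin(5 * np.pi * X) * np.sin(2 * np.pi * Y)  # higher frequency in x only

# A 2D FFT of the sampled grid puts the energy at separable (fx, fy) pairs.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(signal_a)))
print(spectrum.shape)  # (256, 256)
```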

What is aliasing, really?

Those sine functions above are continuous; you know exactly what the value is at every point you can imagine. But a digital image is discrete; it's made up of a finite number of equally spaced points. To make a discrete signal out of a continuous signal, you have to sample each point on the grid. If you want to take that discrete signal back to a continuous signal, then you have to reconstruct the original signal.

Ideally, that reconstruction would be perfect. A signal is called band-limited if its highest frequency is finite. For example, in digital music, we think of most signals as band-limited to the frequencies that human ears can hear, the upper limit of which is generally accepted to be around 20,000 Hz. A very important theorem in digital signal processing, the Nyquist-Shannon theorem, says that you can reconstruct a band-limited signal perfectly if you sample at more than twice the highest frequency in the signal. That's why music with a 44.1 kHz sampling rate is considered lossless; 44.1 kHz is more than twice the limit of human hearing at 20 kHz, so the audio signal can be perfectly reconstructed.

When you sample at less than twice the highest frequency, it's no longer possible to perfectly reconstruct the original signal. Instead, you get an aliased representation of the data. The sampling rate of a digital image is the resolution of the sensor, in a camera, or of the display in a computer-rendered image. This sampling rate needs to be high enough to correctly represent the information that you want to capture/display; otherwise, you will get aliasing.
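Here's a small, self-contained numpy sketch of that undersampling effect in 1D; the 9 Hz tone and the two sample rates are made-up illustration values, not anything specific to images or games:

```python
import numpy as np

f_signal = 9.0    # tone frequency in Hz (illustrative)
fs_good = 24.0    # > 2 * f_signal, satisfies the Nyquist criterion
fs_bad = 12.0     # < 2 * f_signal, will alias

def dominant_frequency(fs, duration=4.0):
    """Sample sin(2*pi*f_signal*t) at rate fs and return the strongest FFT bin."""
    n = int(fs * duration)
    t = np.arange(n) / fs
    samples = np.sin(2 * np.pi * f_signal * t)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

print(dominant_frequency(fs_good))  # ~9 Hz: reconstructable
print(dominant_frequency(fs_bad))   # ~3 Hz: the 9 Hz tone folds down to |12 - 9| = 3 Hz
```

The aliased 3 Hz result is the 1D analogue of jaggies: once the sampling grid is too coarse, the high-frequency content doesn't disappear, it folds down and masquerades as something else.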

By the way, this tells us why we get diminishing returns with increasing resolution. Since the x and y components of the signal are separable, quadrupling the "resolution" in the sense of the number of pixels (for example, going from 1080p to 2160p) only doubles the Nyquist frequency in x and in y.

So why does aliasing get explained as "jagged edges" so often? Well, any discontinuity, like a geometric edge, in an image is essentially an infinite frequency. With an infinite frequency, the signal is not band-limited, and there's no frequency that can satisfy the Nyquist-Shannon theorem. It's impossible to get perfect reconstruction. (https://pbr-book.org/3ed-2018/Sampling_and_Reconstruction/Sampling_Theory) But you can also have aliasing without a discontinuity, when the spatial resolution is too low to represent a signal (this is the reason why texture mipmaps exist; lower resolution mipmaps are low-pass filtered to remove high frequency content, preventing aliasing).

You can even have temporal aliasing in a game, when the framerate is too low to represent something moving quickly (for example, imagine a particle oscillating between 2 positions at 30 Hz; if your game is rendering at less than 60 fps, then by the Nyquist-Shannon theorem, the motion of the particle will be temporally aliased).

So what do we do to get around aliasing?

The best solution, from an image quality perspective, is to low-pass filter the signal before sampling it. Which yes, does essentially mean blurring it. For a continuous signal, the best function is called the sinc function, because it acts as a perfect low pass filter in frequency space. But the sinc function is infinite, so the best you can do in discrete space is to use a finite approximation. That, with some hand-waving, is what Lanczos filtering is, which (plus some extra functionality to handle contrast at the edges and the like) is how FSR handles reconstruction. Samples of the scene are collected in each frame, warped by the motion vectors, then filtered to reconstruct as much of the higher frequency information as possible.
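As a rough illustration of that idea (and not FSR's actual implementation), here's a tiny sketch of the Lanczos kernel plus a toy 1D downsample that uses it; the lobe count a=3 and the decimation factor of 2 are arbitrary choices:

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Windowed sinc: sinc(x) * sinc(x/a) for |x| < a, zero outside.
    np.sinc is the normalized sinc, sin(pi*x)/(pi*x)."""
    x = np.asarray(x, dtype=float)
    kernel = np.sinc(x) * np.sinc(x / a)
    return np.where(np.abs(x) < a, kernel, 0.0)

def lanczos_downsample_1d(samples, factor=2, a=3):
    """Toy 1D downsample: low-pass with a Lanczos kernel, then decimate."""
    taps = np.arange(-a * factor, a * factor + 1) / factor
    weights = lanczos_kernel(taps, a)
    weights /= weights.sum()                   # preserve overall brightness
    filtered = np.convolve(samples, weights, mode="same")
    return filtered[::factor]

row = np.sin(np.linspace(0, 20 * np.pi, 512))  # a high-frequency test row
print(lanczos_downsample_1d(row).shape)        # (256,)
```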

The old-school methods of anti-aliasing, like supersampling and MSAA, worked similarly. You take more samples than you need (in the case of MSAA, you do it selectively near edges), then low-pass filter them to generate a final image without aliasing. By the way, even though it seems like an intuitive choice, the averaging filter (e.g. taking 4 4K pixels and averaging them to a single 1080p pixel) is actually kind of a shitty low-pass filter, because it introduces ringing artifacts in frequency space. Lanczos is much better.

An alternative way to do the filtering is to use a convolutional neural network (specifically, a convolutional autoencoder). DLDSR is a low-pass filter for spatial supersampling, and of course, DLSS does reconstruction. These are preferable to Lanczos because, since the signal is discrete and not band-limited, there's no perfect analytical filter for reconstruction. Instead of doing contrast-adaptive shenanigans like FSR does, you can just train a neural network to do the work. (And, by the way, if Lanczos is the ideal filter, then the neural network will learn to reproduce Lanczos, because a neural network is a universal function approximator; with enough nodes, it can learn any function.). Internally, the convolutional neural network downsamples the image several times while learning relevant features about the image, then you use the learned features to reconstruct the output image.
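For anyone who hasn't seen one, here's a minimal, purely illustrative convolutional autoencoder in PyTorch: downsample the image while learning features, then reconstruct the output from those features. The layer sizes are arbitrary, and this is nothing like the real DLSS/DLDSR networks, just the general shape of the idea:

```python
import torch
import torch.nn as nn

class TinyConvAutoencoder(nn.Module):
    """Illustrative convolutional autoencoder: encode to a lower-resolution
    feature map, then decode back up to the input resolution."""
    def __init__(self, channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, stride=2, padding=1),  # H/2
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),        # H/4
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),        # H/2
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, channels, kernel_size=4, stride=2, padding=1),  # H
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

frame = torch.randn(1, 3, 128, 128)        # stand-in for a rendered frame
print(TinyConvAutoencoder()(frame).shape)  # torch.Size([1, 3, 128, 128])
```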

What's different about ray tracing, from a signal processing perspective?

(I have no professional background in rendering. I do work that involves image processing, so I know more about that. But I have done some reading about this for fun, so let's go).

When light hits a surface, some amount of it is transmitted, and some amount is scattered. To calculate the outgoing light, you have to solve what's called the light transport equation, which is essentially an integral over a function that describes how the material scatters incoming light. But in most cases, this equation does not have an exact, analytic solution. Instead, you need to use a numerical approximation.
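For reference, this is the integral in its common textbook form (essentially how the PBR book linked above writes it): outgoing radiance at a point is the emitted radiance plus the incoming radiance from every direction, weighted by the material's scattering function f:

```latex
% Light transport (rendering) equation:
% outgoing radiance = emitted radiance
%   + incoming radiance weighted by the BSDF f, integrated over all directions
L_o(p, \omega_o) = L_e(p, \omega_o)
    + \int_{S^2} f(p, \omega_o, \omega_i)\, L_i(p, \omega_i)\, \lvert\cos\theta_i\rvert \, d\omega_i
```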

Monte Carlo algorithms numerically approximate an integral by randomly sampling over the integration domain. Path tracing is the application of a Monte Carlo algorithm to the light transport equation. Because you are randomly sampling, you get image noise, which converges with more random samples. But if you have a good denoising algorithm, you can reduce the number of samples for convergence. Unsurprisingly, convolutional autoencoders are also very good at this (because again, universal function approximators). Again, I'm not in this field, but I mean, Nvidia's published on it before (https://research.nvidia.com/publica...n-monte-carlo-image-sequences-using-recurrent). It's out there!
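To make the "noise shrinks with more samples" point concrete, here's a toy Monte Carlo estimate of a simple 1D integral; the integrand is an arbitrary example with nothing to do with light transport, it just shows the spread of the estimate falling off roughly as 1/sqrt(N):

```python
import numpy as np

def mc_estimate(f, n_samples, rng):
    """Monte Carlo estimate of the integral of f over [0, 1] with uniform samples."""
    x = rng.random(n_samples)
    return f(x).mean()  # for uniform samples on [0, 1], E[f(X)] equals the integral

f = lambda x: np.sin(np.pi * x)  # true integral over [0, 1] is 2/pi ~= 0.6366
rng = np.random.default_rng(0)

for n in (4, 64, 1024, 16384):
    estimates = np.array([mc_estimate(f, n, rng) for _ in range(200)])
    # The mean converges to 2/pi; the standard deviation (the "noise") shrinks ~1/sqrt(n).
    print(n, round(estimates.mean(), 4), round(estimates.std(), 4))
```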

And yes, you can have aliasing in ray-traced images. If you took all the ray samples from the same pixel grid, and you happen to come across any high-frequency information, it would be aliased. So instead, you can randomly distribute the Monte Carlo samples, using some sampling algorithm (https://www.pbr-book.org/3ed-2018/Monte_Carlo_Integration/Careful_Sample_Placement).

As for what you do once you have the samples: DLSS was already very similar in structure to a denoising algorithm. If, for example, the Halton sampling algorithm (https://pbr-book.org/3ed-2018/Sampling_and_Reconstruction/The_Halton_Sampler) for distributing Monte Carlo samples sounds familiar, it's because it's the algorithm that Nvidia recommends for subpixel jittering in DLSS. So temporal upscalers like DLSS already exploit randomly distributed samples to reconstruct higher frequency information. It makes sense, then, to combine the DLSS reconstruction passes for rasterized and ray-traced samples because, in many ways, the way the data are structured and processed is very similar.
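If you're curious what that looks like, the Halton points come from a "radical inverse" construction: mirror the digits of the sample index across the decimal point in some base. Here's a minimal toy implementation (my own sketch, not Nvidia's or pbrt's code) of the 2D sequence used for that kind of jitter:

```python
def radical_inverse(index, base):
    """Mirror the base-`base` digits of `index` across the decimal point."""
    result, inv_base = 0.0, 1.0 / base
    while index > 0:
        index, digit = divmod(index, base)
        result += digit * inv_base
        inv_base /= base
    return result

def halton_2d(n):
    """First n points of the 2D Halton sequence (bases 2 and 3)."""
    return [(radical_inverse(i, 2), radical_inverse(i, 3)) for i in range(1, n + 1)]

# Low-discrepancy offsets: well spread over the unit square, unlike purely
# random samples, which tend to clump and leave gaps.
for point in halton_2d(6):
    print(point)
```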

tl;dr

Aliasing is an artifact of undersampling a high-frequency signal. Good anti-aliasing methods filter out the high frequency information before sampling to remove aliasing from the signal. Temporal reconstruction methods, like DLSS and FSR, use randomly jittered samples collected over multiple frames to reconstruct high frequency image content.

Noise in ray tracing is an artifact of randomly sampling rays using a Monte Carlo algorithm. Instead of taking large numbers of random samples, denoising algorithms attempt to reconstruct the signal from a noisy input.
 
Translation: "We're in the entertainment business and we're constantly fighting every other entertainment company for your time and attention."
Sony or Microsoft (I forgot which) said basically the same thing.

Their competitor isn't just Sony or Microsoft... it's Netflix, it's books, it's going outside, it's everything else you could be doing instead of playing their games.
 
Nintendo might show the finalized devkit of the Switch successor at Gamescom/Tokyo Game Show.
If you're seeing dev kits at these shows, you're definitely lower tier in Nintendo's eyes. Also, I don't think they would be at these venues specifically. Maybe they use them as cover to invite people to their HQs for briefings.
 
I’m sorry but I AM an OLED lover and I WILL complain if the Switch 2 ships with only an LCD, I don’t care how good of an LCD it is.

Nintendo has no one to blame but themselves for shipping the Switch OLED Model! They showed me just how good a mobile screen CAN look, they should have expected complaints if they’re gonna downgrade us like that!
OLED is cool and all for a phone or a mid-gen upgrade that only lasts about 2-3 years, but it's impractical for a device meant to be used for 4-6 years (and yes, I have seen WULFF DEN's video). I have owned multiple OLED devices, all burning in within a year from apps like Chrome and Instagram that are used maybe 2 hours max a day (the Switch equivalent would be the home menu and Nintendo Switch Online borders).

Personally, if they do go with OLED, we should all hope that it's LG making the panel, as they know how to get the best longevity out of an OLED screen.


 
Also, Nvidia is more efficient than AMD. The difference is no longer massive like it was pre-RDNA 2, but is still there.

Edit: Here's a good article about it:


Here, RDNA 2 vs Ampere seems to be very even, but Ampere is on Samsung 8nm while RDNA 2 is on TSMC 7nm, which is very significant. If Switch 2 is on TSMC 4nm, then the difference vs Steam Deck in power consumption would be large.
I'm not sure this is saying what you are suggesting it is saying. Not that I'm calling you out, this is tricky stuff.

@Samuspet is talking about FLOPS per Watt. How much does it cost to go past 1 TFLOP? But that's not what this article is talking about - this is talking about benchmark performance per watt.

And the article isn't benchmarking architectures, it's benchmarking GPUs. That matters, because different cards in the same architecture might have radically different efficiencies. In general, wider but slower designs are more power efficient than faster but narrower designs. You can build cards both ways with either architecture.

Let's take a look at two cards that perform very similarly on the benchmark. The RX 6700 XT and the RTX 3070 have nearly identical benchmark results and power draw numbers. On paper, for this test, the cards look basically exactly the same.

But they have totally different designs. The RX 6700 XT is made up of 2560 RDNA 2 cores, running at 2.6 GHz. The RTX 3070 is made up of 5888 Ampere cores, running at 1.7 GHz. The RDNA 2 card is going for that traditionally inefficient narrow but fast design, and yet is totally keeping up with Ampere. This actually implies the opposite. The RDNA 2 is much more efficient than Ampere.

If you look at it per TFLOP, the way Samuspet was, it's even more stark. Every single RX 6000 series card absolutely stomps the RTX 30 equivalent in efficiency here.

Card | Watts | TFLOPS | TFLOPS/Watt
RTX 3070 | 219.3 | 20.31 | 0.093
RTX 3060 Ti | 205.5 | 16.2 | 0.079
RTX 3060 (12GB) | 171.8 | 12.74 | 0.074
RTX 3080 | 333 | 29.77 | 0.089
RTX 3090 | 361 | 35.59 | 0.099
RX 6800 | 235.4 | 32.33 | 0.137
RX 6700 XT | 215.5 | 26.42 | 0.123
RX 6900 XT | 308.5 | 46.08 | 0.149
RX 6800 XT | 303.4 | 41.47 | 0.137

You can also see that both sets of cards have lots of variation. Details in the way those cores are arranged and clocked can have huge implications for power consumption.

If we look at frames/Watt, what this benchmark was designed to do, we can see that the two arches are roughly equivalent.

Card | Watts | Frames | Frames/Watt
RTX 3070 | 219.3 | 116.6 | 0.532
RTX 3060 Ti | 205.5 | 106.3 | 0.517
RTX 3060 (12GB) | 171.8 | 83.6 | 0.487
RTX 3080 | 333 | 142.1 | 0.427
RTX 3090 | 361 | 152.7 | 0.423
RX 6800 | 235.4 | 130.8 | 0.556
RX 6700 XT | 215.5 | 112 | 0.520
RX 6900 XT | 308.5 | 148.1 | 0.480
RX 6800 XT | 303.4 | 142.8 | 0.471

But that begs an obvious question. If frames/watt are similar, but TFLOPS/watt are different, doesn't that imply that there is a difference in Frames/TFLOP? Yes it does.

Card | TFLOPS | Frames | Frames/TFLOP
RTX 3070 | 20.31 | 116.6 | 5.74
RTX 3060 Ti | 16.2 | 106.3 | 6.56
RTX 3060 (12GB) | 12.74 | 83.6 | 6.56
RTX 3080 | 29.77 | 142.1 | 4.77
RTX 3090 | 35.59 | 152.7 | 4.29
RX 6800 | 32.33 | 130.8 | 4.05
RX 6700 XT | 26.42 | 112 | 4.24
RX 6900 XT | 46.08 | 148.1 | 3.21
RX 6800 XT | 41.47 | 142.8 | 3.44

The Ampere cards are stomping all over the face of the RDNA 2 cards, not in the number of TFLOPS, but their quality.

What the hell does all this mean?

The first thing to take away is that there is huge variation in these devices. There isn't one Rosetta stone that allows us to definitively determine which arch is more efficient, or more powerful, by X%, all the time every time.

The second is that the numbers are a bit deceiving. We get hung up on comparing TFLOPS to TFLOPS, but an RDNA2 TFLOP isn't an Ampere TFLOP... but from this chart we can see a 3060 TFLOP isn't a 3090 TFLOP!
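If you want to sanity-check the arithmetic, here's a tiny Python sketch that recomputes the three ratios from two of the rows above (values exactly as posted). It also makes the earlier point explicit: frames/TFLOP is just (frames/W) divided by (TFLOPS/W), which is why similar frames/W but different TFLOPS/W forces a difference in frames/TFLOP:

```python
# (watts, TFLOPS, benchmark frames) taken from the tables above
cards = {
    "RTX 3070":   (219.3, 20.31, 116.6),
    "RX 6700 XT": (215.5, 26.42, 112.0),
}

for name, (watts, tflops, frames) in cards.items():
    tflops_per_watt = tflops / watts
    frames_per_watt = frames / watts
    frames_per_tflop = frames / tflops   # == frames_per_watt / tflops_per_watt
    print(name, round(tflops_per_watt, 3), round(frames_per_watt, 3),
          round(frames_per_tflop, 2))
```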
 
I just found that Intel had published a paper last year that also combined denoising and supersampling into one network. I haven't read it through yet but, as the Facebook paper was for supersampling, this should be useful for figuring out how DLSS ray reconstruction is working behind the scenes.

[figure: the combined denoising-and-supersampling network from the Intel paper]


And, no surprise, the core feature extraction network is a convolutional autoencoder (specifically, a U-Net, which is a common architecture for image segmentation and the like). Plenty of modifications in the other blocks, though:

[figure: block diagram of the network architecture from the Intel paper]
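For anyone unfamiliar with the term, the thing that distinguishes a U-Net from the plain autoencoder sketched earlier is the skip connections that carry full-resolution encoder features straight across to the decoder. A purely illustrative PyTorch fragment (nothing to do with Intel's or Nvidia's actual networks):

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Illustrative U-Net-style block: one downsample, one upsample,
    with a skip connection joining matching resolutions."""
    def __init__(self, channels=3):
        super().__init__()
        self.enc = nn.Conv2d(channels, 32, 3, stride=1, padding=1)
        self.down = nn.Conv2d(32, 64, 3, stride=2, padding=1)          # H/2
        self.up = nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1)   # back to H
        self.out = nn.Conv2d(64, channels, 3, stride=1, padding=1)     # 64 = 32 upsampled + 32 skip

    def forward(self, x):
        skip = torch.relu(self.enc(x))             # full-resolution features
        bottleneck = torch.relu(self.down(skip))   # lower-resolution features
        upsampled = torch.relu(self.up(bottleneck))
        # Skip connection: concatenate encoder features with decoder features.
        return self.out(torch.cat([upsampled, skip], dim=1))

print(TinyUNet()(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```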
 
This post is a great explainer, but there are some fundamental differences between denoising and antialiasing that are worth clarifying. For example, I wouldn't say that anti-aliasing is deleting "real" detail. Both noise and aliasing are sampling artifacts, but their origin is very different. I'll elaborate on why:
This is why Anatole is my favorite poster, because they make me feel like I almost understand math
 
If you're seeing dev kits at these shows, you're definitely lower tier in Nintendo's eyes. Also, I don't think they would be at these venues specifically. Maybe they use them as cover to invite people to their HQs for briefings.
Nintendo always shows dev kits for a future console at important trade shows such as E3, Gamescom, or Tokyo Game Show. Did you forget that Nintendo showed a dev kit for the Switch at E3 2015?
 
Why would they need to move people from the Gameboy to the Gameboy Advance?
Why would they need to move people from the DS to the 3DS?
Why would they need to move people from the Wii to the Wii U?

Cause in all those cases, they had to start diverting software development away from the old architecture onto the new one, as well as focusing on a new set of services and an ecosystem different from the previous one.

That wouldn't be the case here. Same ecosystem, same shop, same services… nothing to try and move people away from or towards.

And most importantly, the new hardware uses the same base architecture, with a design specifically geared to run low-profile games and output them at a high profile.

There is no good reason for Nintendo NOT to continue to put those low profiles on the current models.

So what would the rush be to move people over?

Like, even putting aside all the graphical power discussion for a moment, the good reason is simply that Switch sales are steadily declining because of market saturation and will eventually flatline. You know, just like every video game console ever?
It doesn't matter how successful a console is, you need to move from it when its life cycle is about to end.

It’s slowing down because of hitting saturation, not because it’s losing engagement.

I think people are overestimating how much of the Switch userbase will really NEED to play Mario Kart and Animal Crossing and such in 4K with better graphics. And willing to pay another $400 any time soon to do it. So why not cater to them for another 5-6 years?

Even with the other consoles, the PS4 and One got support for most games for 3 years. The current models are going to have much larger engagement and a longer tail than those machines.


Nintendo will stop making first-party games for the Switch by the end of 2025, and even then it will only be small cross-gen titles and small remakes/remasters.
Switch, Switch Lite and Switch OLED will all be out of production by 2026 at the latest.

But why, though.

I can see them stop making TX1+ hardware by 2026, sure. But why stop putting most of their big games on them?

I believe Nintendo when they say this will have an abnormally long lifecycle.

For the same reason Mario Galaxy/New Super Mario Bros Wii didn't come out on the Gamecube and Mario 3D world/New Super Mario Bros U didn't come out on the Wii even though they could have, because they need to push people to buy the new hardware.

lol. Gamecube is easy. It’s the same reasons Nintendo published ZERO games for the Gamecube after 2006. None of those reasons apply in this situation.

Even the people who don’t like what I’m saying will agree that most Nintendo games will appear on the OLED/Lite for at least 3 years after this new model releases.

Similar to the Wii. Same reasons they never published a game for the Wii after the Wii U came out. Wii engagement was pretty much dead for Nintendo games by the time 2012 rolled around. Yes, they embedded a chip into the Wii U to play Wii games, but to still make new post-2012 games for the Wii would have required two different teams making two different versions (and then having to deal with the design difference of the Wii not having a GamePad screen).

In this particular Switch situation, we have none of that. It would be the exact same team and development process making a low-end profile and then using the new hardware to make a high-end profile; they don't have to focus on two different versions built from the ground up separately.
 

