• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Please be more careful about discussing content hidden in Hide tags. People hide their posts for a reason. -xghost777, meatbag, Dardan Sandiego, Party Sklar, MissingNo, big lantern ghost, MondoMega
* Hidden text: cannot be quoted. *
I'm surprised that, according to those links, 6GB LPDDR5X is cheaper than LPDDR5 modules even though X has more bandwidth.
 
I wanted to run a personal, non-scientific simulation of how 'current-gen' may compare to a possible Switch 2 average / worst case scenario. I've been playing on the Deck docked a lot and have realized that it's often 'good enough' for me. It has kind of felt like having a preview of the Switch 2 years in advance.

Lies of P on Steam Deck:

Top - native 1800p upscaled to the Deck's 2160p 60 output, High Settings to mimic the PS5's performance mode.
Bottom - FSR2 performance mode (internal res 540p) upscaled to the Deck's 1080p 120 output, spatially upscaled to my TV's 4K canvas. Low-Medium settings, FSR sharpness at max.

Compression ruins the sharpness of this comparison but ironically makes it more poignant since it reflects what I see at a living room distance / handheld screen. If you zoom in you can see the FSR artifacting on entities like the chandelier, cat, and butler.

[attached image: p4k1.jpg - Lies of P comparison screenshots]



Harkening back to the oh so infamous 'comparable' wording from that Gamescom report. This is absolutely comparable.

At the six-feet plus distance I play at, the first image is obviously sharper. But the second image is still sharp, still pleasant. And most importantly - it's playable. At my pseudo PS5 settings the game cannot even hit 10 FPS, while the latter image is running at 50-60 FPS (bless VRR) and still looks the way it does. The game is very scalable and 'Low' settings are still fantastic.

The FSR artifacting is more apparent in motion - but again, living room or handheld display distance, I don't pay any mind. Also - the comparisons between DLSS and FSR at this point are well-established, plenty of 540p -> 1080p DLSS vids out there. This is underselling how well a DLSS upscale on Switch 2 may look. And honestly underselling how a Lies of P port may look, since it would be a native port and not a Windows game crammed under a compatibility layer.

The point of this comparison is to (mostly show how pretty this game is but also) show how far the gap can be in resolution and settings while still maintaining a great image with upscaling, in a current-gen game that is well optimized but still relatively demanding, and running on a mobile device. For context, as far as I'm aware the game does not have a performance mode on PS4 / PS4 Pro, and the Series S performance mode is dynamic 1080p.

So what does it mean for when Switch 2 gets the 'worst' version of a multiplatform game? I'm thinking it'll border on 'who gives a shit' territory. The above degree of difference is 100% worth it for me to have a convenient portable version of a game.
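Just to put rough numbers on the rendering gap between those two shots (nothing official here, just pixel arithmetic on the resolutions quoted above):

```python
# Rough pixel-count math for the two Lies of P configurations above.
# Resolutions are the ones quoted in the post; the rest is plain arithmetic.

def pixels(width: int, height: int) -> int:
    return width * height

native_1800p = pixels(3200, 1800)   # the 'High' run, rendered at native 1800p
internal_540p = pixels(960, 540)    # FSR2 Performance at 1080p renders internally at 540p
output_1080p = pixels(1920, 1080)   # the Deck's docked 1080p output, before the TV's 4K scaling

print(f"1800p render: {native_1800p:,} px per frame")
print(f"540p render:  {internal_540p:,} px per frame")
print(f"The 'High' run shades roughly {native_1800p / internal_540p:.0f}x more pixels,")
print(f"while FSR2 only has to bridge a {output_1080p / internal_540p:.0f}x gap to reach 1080p.")
```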
 
Hate the Eight

I think you mean The Hateful Eight



I dunno! With the state of generative AI on one hand, and "neural shaders" on the other, it certainly seems possible. It's Nvidia's stated goal. But I think that's a couple leaps beyond where we are now.

The core idea of DLSS is: 3D video games have patterns. Those patterns exist across video games, and within a video game. These patterns exist because they are made by human artists, for human players, using hardware that simulates the way the real world works.

In classic rendering, an artist is trying to create a moving image using that hardware simulator of light and 3D objects. The resolution and frame rate of the Universe is very high, and your neurovisual system is highly trained to perceive it, and notice when something is off. It isn't practical to run any game engine at the resolution and frame rate of the Universe. In modern games there is detail and richness that is the artist's intent, lost behind the limitations of silicon.

And you can feel it missing! If I gave you a low res image, even if you're a non-artist, you could point to pixels that were wrong. Incorrect edges and aliasing. Fuzzy text that you could just barely read and ought to be sharp. Popping and fizzing pixels that ought to be fine strands of hair. You could likely correct some of these problems by hand, with a little help. All of this without talking to the artist, or knowing what's going on in the game engine.

Intuitively, it makes sense that a program, also trained on these patterns, could fill in the blanks too. If it can do it at a decent enough level of quality, faster than running the simulation, then we have a performance win, a way to extend visual quality beyond the limits of the hardware.

That's what DLSS Upscaling does. It fills in pixels that are missing in a low resolution image. It uses training on Ultra High Res 3D scenes, and information about the video game you're currently playing (previous frames, data about how things are moving) to make its guesses.
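For a sense of the 'no knowledge' baseline that DLSS is improving on, here's the dumbest possible way to fill in missing pixels - plain nearest-neighbour repetition. This is only a toy illustration, not anything resembling Nvidia's actual network:

```python
import numpy as np

def nearest_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Fill in the missing pixels by simple repetition.

    This is the zero-knowledge baseline: every new pixel copies its low-res
    neighbour, which is where jaggies and fuzzy text come from. DLSS attacks
    the same gap, but its guesses come from training data, previous frames
    and motion vectors instead of repetition.
    """
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

low_res = np.array([[0, 255],
                    [255, 0]], dtype=np.uint8)   # a 2x2 'checkerboard'
print(nearest_upscale(low_res, 2))               # 4x4 result, edges still blocky
```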

This is what DLSS Frame Generation does. It takes two frames and fills out a middle frame like an animation inbetweener.

This is what DLSS Ray Reconstruction does. It takes a couple points of light from a ray tracer and tries to infer what the same scene would look like with thousands of points of light.
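Again purely as a toy illustration (not Nvidia's method): the crudest possible 'inbetweener' just averages two frames, with no motion vectors and no learned prior, which is exactly why it ghosts where the real thing doesn't:

```python
import numpy as np

def naive_inbetween(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Synthesize a middle frame by averaging the two neighbours.

    No motion vectors and no learned prior, so anything that moves becomes a
    ghosted double image. DLSS Frame Generation solves the same interpolation
    problem, but with per-pixel motion estimates and a network trained on what
    real game frames look like.
    """
    mid = (frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2
    return mid.astype(frame_a.dtype)

# Two tiny 4x4 greyscale 'frames' with one bright pixel moving one step right.
f0 = np.zeros((4, 4), dtype=np.uint8); f0[1, 1] = 255
f1 = np.zeros((4, 4), dtype=np.uint8); f1[1, 2] = 255
print(naive_inbetween(f0, f1))   # ghosting: two half-bright pixels instead of one moved pixel
```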

In terms of how far neural rendering can go - all of these paths are still trying to uncover a "ground truth": what the game engine would generate if the settings were really as high as DLSS is trying to act like they are. That ground truth is set by an artist - and its inputs still need to be a crystal clear representation of the artist's visual intent.

Right now, the best way to say what a visual artist wants is to have that artist create some visual art. Neural rendering may insert itself into deeper parts of the pipeline, but until there is a radical overhaul of the way the physical hardware works, some level of traditional rendering is always going to be necessary.

One thing I already see is that the closer we get to photorealistic rendering, the further away it ironically feels. In other words, we've become quite good at detecting when something is "off" or "incorrect", similar to what you mentioned. Artists can try, and they do succeed in many ways at making games "look" and "feel" real, but it never really gets there. And I think we're approaching that plateau where no matter how many pixels, polygons, shaders, lights, shadows, effects, whatever you can throw at it, we'll always look at it and think, "something feels off."

Like you also mentioned, our brains, and by extension our eyes, are extremely good at filling in details, and since 3D became a thing, many consumers have become good at seeing those details, but also at detecting when something is incorrect.

In a weird mathematical way, imagine dividing the number 2 by 2; the result is 1. Now divide that result by 2, say, 15 more times: you always get closer to zero, but never truly reach 0. That is how I feel realistic graphics will be. Always closer, but never truly there.
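(If you want that analogy in code form - purely illustrative, obviously:)

```python
x = 2.0
for step in range(1, 16):
    x /= 2
    print(f"after {step:2d} halvings: {x:.10f}")
# After 15 halvings x is about 0.00006: tiny, but still not zero.
# (Floats do eventually underflow to 0.0, but the point of the analogy stands:
# each step closes most of the remaining gap without ever closing all of it.)
```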
 
Why am I seeing so much doomposting about Switch 2 on Twitter all of a sudden? People are already complaining about "PS4 power" and shit.
I think it's mostly people trying to farm interactions/clicks, either that or people who expect a handheld device to be as powerful as a PS5 (or both).
 
Is it more likely Nintendo will reveal Switch 2 in April rather than in March? April is the start of their new fiscal year, and it would put the announcement some distance from the Peach game's release in March as well.
 
I think people, by and large, are ignorant of the realities of tech.
Sometimes I forget how much this forum is a bubble on the internet. T239 is public information and has been discussed to death here, but elsewhere, even people who make content for a living don't have a single clue about what to expect or what they're talking about.
 
Is it more likely Nintendo will reveal Switch 2 in April rather than in March? April is the start of their new fiscal year, and it would put the announcement some distance from the Peach game's release in March as well.

If it releases in H2, they can wait till April/May for a reveal, yes.

So we have another 3-4 months of madness πŸ™ƒ
 
Why am I seeing so much doomposting about Switch 2 on Twitter all of a sudden? People are already complaining about "PS4 power" and shit.
I think it's because for a while people haven't really talked about tech specs, and then one random article comes out saying "8GB" and everyone (including Nintendo YouTubers who should know better) is covering and spreading that spec around.
 
I think it's because for a while people haven't really talked about tech specs, and then one random article comes out saying "8GB" and everyone (including Nintendo YouTubers who should know better) is covering and spreading that spec around.
Honestly, while it's bad now that people are dooming and glooming over "8 GB" (and yeah, YouTubers who know better are really trying to get those clicks), it'll be nice when they find out it's gonna have 12 or maybe even 16 GB and there's a nice wave of positivity (though I don't wanna hear anyone say "Nintendo listened to our whining and gave us more RAM!")
 
One thing I already see is that the closer we get to photorealistic rendering, the further away it ironically feels. In other words, we've become quite good at detecting when something is "off" or "incorrect", similar to what you mentioned. Artists can try, and they do succeed in many ways at making games "look" and "feel" real, but it never really gets there. And I think we're approaching that plateau where no matter how many pixels, polygons, shaders, lights, shadows, effects, whatever you can throw at it, we'll always look at it and think, "something feels off."

Like you also mentioned, our brains, and by extension our eyes, are extremely good at filling in details, and since 3D became a thing, many consumers have become good at seeing those details, but also at detecting when something is incorrect.

In a weird mathematical way, imagine dividing the number 2 by 2; the result is 1. Now divide that result by 2, say, 15 more times: you always get closer to zero, but never truly reach 0. That is how I feel realistic graphics will be. Always closer, but never truly there.
You're talking about the uncanny valley? Yeah Robert Zemeckis has been tryina figure out how to get past that for years.
 
There will always be some ulterior reason presented as to why the hardware will be better or worse than expected. With the Switch it was often "Nintendo got a deal on bargain bin scrapped TX1s" and "Capcom forced Nintendo's hand on 4GB of RAM". In this instance we may hear that Nvidia was the one who forced Nintendo's hand this time, or that the Switch 2 is built on shaven car parts. 12 GB of RAM may be seen as Nintendo cheaping out in comparison to the Steam Deck's 16 GB because context doesn't matter.

It's futile. But I think enough folks on average will be satisfied with the performance of a Switch 2 that these voices will be seen for what they are: a fringe group of unreasonable folk who can be more easily ignored. Right now it's annoying to hear them because they show up frequently in mainstream conversation and theirs is the overwhelming narrative being presented. As I said above, we're nearing "who gives a shit" territory; I just wanna play games, man.
 
I suspect it's more about development timelines than intended release dates.

LPDDR5X spec revealed in July 2021.

Nvidia Grace revealed with LPDDR5X memory support in April 2021.

I'd say there was enough time to incorporate a newer memory controller, if we assume these standards take a couple of years to finalize.
I think, with what we know of T239 being taped out around the same time as Grace and the Ada Lovelace cards, that could definitely lend itself to the memory controller being more in line with those products.

* Hidden text: cannot be quoted. *


Also, I'm kinda surprised we haven't seen many devs do performance/fidelity modes on Switch. It'd be more of a docked-exclusive thing, but having handheld settings available in docked mode would give a boost to performance if the game isn't bound by something other than the GPU.

I was actually wondering if this is something that we see more of this time around...
 
You're talking about the uncanny valley? Yeah Robert Zemeckis has been tryina figure out how to get past that for years.

Yup. Pretty much.

Quite frankly, I think Robert Zemeckis should go take Flight, and make Contact with whatever uncanny valley he's trying to get past. He might have to go Back to the Future though. The Walk of Romancing the Stone that is this kind of singularity is probably What Lies Beneath for him. But I think we can let him Cast Away into the sunset.
 
It's funny because a year ago people were saying that it wouldn't even reach Steam Deck level. Nintendo's always gonna Nintendo if you keep on raising your standards.
 
Can a 4MB L2 cache on the GPU significantly improve bandwidth efficiency in comparison to 1MB? I understand having a larger cache takes up more die area, since there are diminishing returns on SRAM scaling on more advanced nodes. Just curious if a 4MB GPU cache can alleviate bandwidth constraints if Switch 2 utilizes LPDDR5 modules instead of LPDDR5X.
Theoretically, yes.

As Nvidia mentioned, a larger L2 cache on the GPU increases the cache hit rate and reduces the miss rate when data can't be found in the GPU's L1 cache, simply because a larger L2 can hold more data. And since more data can be found there, the GPU has to go out to RAM less often, which can theoretically increase the amount of RAM bandwidth that's effectively available.

And a larger L2 cache on the GPU can also be beneficial for LPDDR5X, not just LPDDR5.

But as mentioned before, a larger L2 cache does take up more die space, which can also reduce chip yields (here and here).
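To put some hand-wavy numbers on why the hit rate matters (the hit rates and request figures below are invented for illustration, not T239 specs): every request the L2 absorbs is one that never touches LPDDR5.

```python
# Toy model of how much GPU memory traffic a larger L2 can absorb.
# The hit rates and the request stream are illustrative assumptions only.

def dram_traffic(requested_gbps: float, l2_hit_rate: float) -> float:
    """Only L2 misses have to go out to LPDDR5."""
    return requested_gbps * (1.0 - l2_hit_rate)

gpu_demand_gbps = 150.0   # hypothetical request stream coming out of the SMs
for cache, hit_rate in [("1MB L2", 0.30), ("4MB L2", 0.55)]:
    to_dram = dram_traffic(gpu_demand_gbps, hit_rate)
    print(f"{cache}: {to_dram:.1f} GB/s actually hits RAM, "
          f"{gpu_demand_gbps - to_dram:.1f} GB/s is served from cache")
```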
 
It's funny because a year ago people were saying that it wouldn't even reach Steam Deck level. Nintendo's always gonna Nintendo if you keep on raising your standards.
I remember DF saying it much less than a year ago. Paraphrasing: "the Steam Deck is a 20 watt device, there's just no way Nintendo is going to match its performance".
 
I wanted to run a personal, non-scientific simulation of how 'current-gen' may compare to a possible Switch 2 average / worst case scenario. I've been playing on the Deck docked a lot and have realized that it's often 'good enough' for me. It has kind of felt like having a preview of the Switch 2 years in advance.

Lies of P on Steam Deck:

Top - native 1800p upscaled to the Deck's 2160p 60 output, High Settings to mimic the PS5's performance mode.
Bottom - FSR2 performance mode (internal res 540p) upscaled to the Deck's 1080p 120 output, spatially upscaled to my TV's 4K canvas. Low-Medium settings, FSR sharpness at max.

Compression ruins the sharpness of this comparison but ironically makes it more poignant since it reflects what I see at a living room distance / handheld screen. If you zoom in you can see the FSR artifacting on entities like the chandelier, cat, and butler.






Harkening back to the oh so infamous 'comparable' wording from that Gamescom report. This is absolutely comparable.

At the six-feet plus distance I play at, the first image is obviously sharper. But the second image is still sharp, still pleasant. And most importantly - it's playable. At my pseudo PS5 settings the game cannot even hit 10 FPS, while the latter image is running at 50-60 FPS (bless VRR) and still looks the way it does. The game is very scalable and 'Low' settings are still fantastic.

The FSR artifacting is more apparent in motion - but again, living room or handheld display distance, I don't pay any mind. Also - the comparisons between DLSS and FSR at this point are well-established, plenty of 540p -> 1080p DLSS vids out there. This is underselling how well a DLSS upscale on Switch 2 may look. And honestly underselling how a Lies of P port may look, since it would be a native port and not a Windows game crammed under a compatibility layer.

The point of this comparison is to (mostly show how pretty this game is but also) show how far the gap can be in resolution and settings while still maintaining a great image with upscaling, in a current-gen game that is well optimized but still relatively demanding, and running on a mobile device. For context, as far as I'm aware the game does not have a performance mode on PS4 / PS4 Pro, and the Series S performance mode is dynamic 1080p.

So what does it mean for when Switch 2 gets the 'worst' version of a multiplatform game? I'm thinking it'll border on 'who gives a shit' territory. The above degree of difference is 100% worth it for me to have a convenient portable version of a game.
Good test run, especially for a game that is not rated as optimized for Steam Deck. A few things I'm wondering though:

  • Which wattage mode was this running in?
  • How much in this case was it benefiting from having ~16GB of RAM usable? Is there a way to run a test that limits access to only 12GB of RAM?
  • How much was it maxing out the CPU?
  • Was this a version 1 Steam Deck or an OLED?
 
I don't think Switch 2 supports LPDDR5X so I would be glad if they went with 16GB LPDDR5
Honestly, if Nintendo saw the way the price trajectory was going while working on Drake, and realised that including LPDDR5X would save them a boatload of money (and improve bandwidth) because X would be cheaper, it might be something they'd be willing to delay Drake over. Obviously that decision would have had to be taken well before tape-out.
 
If it's not "all over the place" - curious, does anyone know what kind of monetary agreements are "standard" for Nintendo giving out new IP games to 3rd parties (such as some of the examples OldPuck listed: Fire Emblem Warriors, Mario + Rabbids, Link's Awakening, Cadence of Hyrule, Metroid Dread, Age of Calamity, Pikmin 4)?

Like what % of revenues go to 3rd party, what % goes to Nintendo, etc?

I know there are probably intangible benefits (3rd party name getting more attention, leading to more potential future projects, etc).
We know how much a publisher gets from a $60 game, but I believe how much the developer gets is case-by-case. It might be a fixed amount of money from the publisher with maybe a bonus or a percentage of how much the publisher got from the game sales. Famously, Bethesda paid Obsidian upfront for Fallout New Vegas, and promised them a bonus if they reached 85+ on Metacritic.

In Nintendo's case, the various games have been made under different deals.

For example, Nintendo commissioned Metroid Dread and Pikmin 4 from MercurySteam and Eighting. These developers didn't pay to work on the IP. However, Nintendo was not only the publisher of those games but also worked on them to some extent, so Nintendo might have gotten part of the developer share too.

The Mario + Rabbids and the Warriors games were different. Those were Ubisoft / Koei Tecmo games, and Nintendo licensed its characters to them. Ubisoft likely paid to use Mario. But Koei Tecmo and Ubisoft were also the publishers of the games in some regions, so they got more money on each sale (though they did license the publishing rights to Nintendo in other regions).
 
Good test run, especially for a game that is not rated as optimized for Steam Deck. A few things I'm wondering though:

  • Which wattage mode was this running in?
  • How much in this case was it benefiting from having ~16GB of RAM usable? Is there a way to run a test that limits access to only 12GB of RAM?
  • How much was it maxing out the CPU?
  • Was this a version 1 Steam Deck or an OLED?
I ran another test with the TDP locked to 11 W to mimic the launch docked Switch. Still FSR2 performance mode and a mix of low and medium settings, with texture detail at high. I'm using the built-in performance monitor and went to a busier outdoors area with a bunch of enemies and combat.

  • TDP limit 11W, GPU showing between 5-7 W, CPU 1-1.5 W. The previous test was uncapped TDP, so presumably 15 W.
  • VRAM usage is 5.3 GB, with total RAM usage being 10 GB. Not sure if there's a way to limit the total available RAM to 12 GB. Whether or not the 16 GB of RAM is helpful in this case is harder to figure out, since I'm unsure how much RAM SteamOS takes up in gaming mode.
  • Never maxed out the CPU; most was around 70%, average 50%. The GPU, however, was running at a glorious 99%.
  • OLED Steam Deck

Average FPS is 40-50 with the TDP limit of 11W, and 50-60 without. The game is still pretty.
 
Theoretically, yes.

As Nvidia mentioned, a larger L2 cache on the GPU increases the cache hit rate and reduces the miss rate when data can't be found in the GPU's L1 cache, simply because a larger L2 can hold more data. And since more data can be found there, the GPU has to go out to RAM less often, which can theoretically increase the amount of RAM bandwidth that's effectively available.

And a larger L2 cache on the GPU can also be beneficial for LPDDR5X, not just LPDDR5.

But as mentioned before, a larger L2 cache does take up more die space, which can also reduce chip yields (here and here).
Great article, thanks for linking it!

Cache is wonderful for a bandwidth-constrained system; it is a very smart application of the Pareto principle to improve effective bandwidth. Assets like the main protagonist's clothing textures will be on screen 90%+ of the time, and keeping them in cache rather than fetching them from distant RAM each time the asset temporarily disappears from view helps your bandwidth budget tremendously. I definitely hope they can include a large cache, considering the levels of RAM access alleviation they achieved going from 2MB to 32MB in the Ampere-to-Lovelace transition (50% less RAM utilisation). Do you think that 4MB can have a similar bandwidth alleviation effect for a smaller GPU (with presumably smaller data and texture loads to match) like this?

I wanted to ask a related question: if we assume that Switch 2 can get something like the 4MB L2 cache, how does that fare against the AMD GPUs used in the gen 9 family of consoles? I think that AMD was big on the L3 Infinity (i.e. really big) caches, but do we know how that translates to real-world approximate bandwidth improvements?
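One simple way to frame that question (with invented hit rates, and assuming the commonly rumoured 128-bit bus, since the real figures depend entirely on the workload): if a fraction of requests is served from cache, the same DRAM can feed a proportionally larger request stream.

```python
# 'Effective bandwidth' framing: if a fraction of GPU requests hits the cache,
# the same DRAM can feed a proportionally larger request stream.
# Bus width, module speeds and hit rates are assumptions for illustration.

def effective_bandwidth(dram_gbps: float, hit_rate: float) -> float:
    return dram_gbps / (1.0 - hit_rate)

configs = [("LPDDR5-6400, 128-bit", 102.4), ("LPDDR5X-7500, 128-bit", 120.0)]
for label, raw in configs:
    for hit in (0.3, 0.5):
        print(f"{label} @ {raw:.1f} GB/s with a {hit:.0%} hit rate -> "
              f"~{effective_bandwidth(raw, hit):.0f} GB/s effective")
```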
 
There will always be some ulterior reason presented as to why the hardware will be better or worse than expected. With the Switch it was often "Nintendo got a deal on bargain bin scrapped TX1s" and "Capcom forced Nintendo's hand on 4GB of RAM". In this instance we may hear that Nvidia was the one who forced Nintendo's hand this time, or that the Switch 2 is built on shaven car parts. 12 GB of RAM may be seen as Nintendo cheaping out in comparison to the Steam Deck's 16 GB because context doesn't matter.

It's futile. But I think enough folks on average will be satisfied with the performance of a Switch 2 that these voices will be seen for what they are: a fringe group of unreasonable folk who can be more easily ignored. Right now it's annoying to hear them because they show up frequently in mainstream conversation and theirs is the overwhelming narrative being presented. As I said above, we're nearing "who gives a shit" territory; I just wanna play games, man.
the "nintendo got a deal on tx1's" is kinda true though. that's less Nintendo and more Nvidia doing everything they can to get someone to buy it
 
the "nintendo got a deal on tx1's" is kinda true though. that's less Nintendo and more Nvidia doing everything they can to get someone to buy it
There's a truth in all the statements above but it's not framed with the nuance of Nvidia providing all the software support they could for a product they needed a win on. The frequent implication is that Nintendo went with the cheapest possible option and opted for a low end, outdated, failed tablet SoC.
 
QDEL has a ton of potential but I imagine we're a couple years away from seeing it in consumer goods.
Understood. Maybe I misunderstood what the Digital Trends guy was saying. He made it sound like Sharp was saying it's ready to go and cheap, and didn't need many modifications to the pipeline, especially for smaller devices. But I guess if that was just a prototype shown, that's all it was. Is there a potential that they already have this underway with their clients? Would be really cool though. Emissive displays are awesome, and OLEDs are finally at a point where the average consumer can get their hands on them. Still cool to see new tech advances come forward. This tech does seem like a match made in heaven for Nintendo.


Curious about that prototype's performance compared to OLED. From everything I have read, IGZO and the TFT setup are fast.
 
Understood. Maybe I misunderstood what the Digital Trends guy was saying. He made it sound like Sharp was saying it's ready to go and cheap, and didn't need many modifications to the pipeline, especially for smaller devices. But I guess if that was just a prototype shown, that's all it was. Is there a potential that they already have this underway with their clients? Would be really cool though. Emissive displays are awesome, and OLEDs are finally at a point where the average consumer can get their hands on them. Still cool to see new tech advances come forward. This tech does seem like a match made in heaven for Nintendo.


Curious about that prototype's performance compared to OLED. From everything I have read, IGZO and the TFT setup are fast.
The tech seems ready to commercialize, but it will still take time to develop real products around it, do all the preliminary testing, update production lines, etc. That all takes time and money, and I don't get the impression these industries can really stop on a dime and change course.

Excited to see where it goes, for sure.
 
I haven't spoken French since I dropped out of college, but I've also seen May/Summer being thrown around since last year.
It's hard to get a true consensus on the release date, but I think we'll have a good idea if we get to March 15th without an announcement.

I think May is still very likely but it's not like we have any actual proof about when it'll be released. Nintendo's lineup isn't bright enough in the first half of the year to indicate that it'll release in September, but we're gradually stepping towards March without any rumors in sight about a reveal. In all likelihood it's just Nintendo being stealthy, but without information... yeah it's just kinda hard to tell.

Maybe when we get to next week with a Direct Mini... or maybe when we get to a direct mini in two weeks... or maybe a direct at the end of the month... or maybe a direct in February... or maybe a dir-

the-stanley-parable-game.gif


For all intents and purposes I believe it's going to be a reveal in Feb/Early-March, but there's literally no way of being sure.
 
You would need to go even lower, as these 11W are for the whole system. Steam Deck at 11W for the APU is like drawing what, 23W total?
Since power consumption won't be 1:1, with the Deck being much more power hungry and performance crapping out below 10 W, I didn't try to line up the total wattage being pulled from the wall for both devices; I just picked the docked Switch value for the TDP limit as a starting point to try to limit the power draw. When I set a 5W value, Lies of P drops to 12 FPS, which wouldn't be representative of how a Switch port would run at the equivalent SoC wattage.
 
Theoretically, yes.

As Nvidia mentioned, a larger L2 cache on the GPU increases the cache hit rate and reduces the miss rate when data can't be found in the GPU's L1 cache, simply because a larger L2 can hold more data. And since more data can be found there, the GPU has to go out to RAM less often, which can theoretically increase the amount of RAM bandwidth that's effectively available.

And a larger L2 cache on the GPU can also be beneficial for LPDDR5X, not just LPDDR5.

But as mentioned before, a larger L2 cache does take up more die space, which can also reduce chip yields (here and here).
Thank you for the update. It seems like, based on the article's findings, the risk of defects is higher on 3nm nodes. It makes me wonder if Nintendo will just stick with the rumored 4N (assuming it's 4N) throughout the Switch 2 life cycle, unless 3nm technology becomes more mature in a few years.
 
You would need to go even lower, as these 11W on Switch are for the whole system (minus the screen of course). Steam Deck at 11W for the APU is like drawing what, 23W total?
Since power consumption won't be 1:1, with the Deck being much more power hungry and performance crapping out below 10 W, I didn't try to line up the total wattage being pulled from the wall for both devices; I just picked the docked Switch value for the TDP limit as a starting point to try to limit the power draw. When I set a 5W value, Lies of P drops to 12 FPS, which wouldn't be representative of how a Switch port would run at the equivalent SoC wattage.


There are 3 main reasons why Drake will literally run circles around Van Gogh in power efficiency: node, chip size and architecture.

Node: 4nm vs 7nm brings about a 30% reduction in power consumption.

Chip size: Drake's GPU has about 3 times more shaders running at less than half the clock speed in portable mode. This wide-and-slow design is far more power efficient than Van Gogh's narrower and faster design.

Architecture: Ampere on 8nm is about as efficient as RDNA2 on 7nm; on an equal node, Ampere is more efficient. Similarly, the Arm Cortex-A78 beats x86 Zen 2 handily.

So yeah, giving the SD 11 watts is already being far too kind for a comparison with Drake.
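The wide-and-slow point can be sketched with the usual first-order dynamic power model, P ∝ units × f × V², where voltage has to rise roughly with clock speed. The numbers below are purely illustrative, not Drake or Van Gogh measurements:

```python
# First-order dynamic power model: P ~ units * f * V^2, with voltage rising
# roughly with clock in the range that matters. Purely illustrative numbers,
# not Drake or Van Gogh measurements.

def relative_power(units: float, clock_ghz: float, volts: float) -> float:
    return units * clock_ghz * volts ** 2

def relative_throughput(units: float, clock_ghz: float) -> float:
    return units * clock_ghz

narrow_fast = dict(units=1.0, clock_ghz=1.6, volts=0.95)   # fewer shaders, pushed hard
wide_slow   = dict(units=3.0, clock_ghz=0.6, volts=0.65)   # 3x the shaders at low clock/voltage

for name, cfg in [("narrow & fast", narrow_fast), ("wide & slow", wide_slow)]:
    power = relative_power(cfg["units"], cfg["clock_ghz"], cfg["volts"])
    work = relative_throughput(cfg["units"], cfg["clock_ghz"])
    print(f"{name}: throughput {work:.2f}, power {power:.2f}, perf/W {work / power:.2f}")
```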
 