
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Everything is going in the worst possible direction and I fear the thread is going to replicate the tragic end of Wii U speculation back in the day.
??? This is pretty old, and Kopite has been wrong on multiple occasions regarding Drake too.

Anyway, ILikeFeet stated:

Kopite7Kimi admits to not knowing and assumed 8N based on Orin

I'm (@Hermii as well) pretty familiar with Kopite's tweets (including his "8nm?" and his seemingly-more-confident "SEC8N" tweet), and I don't recall Kopite ever admitting he doesn't know and assuming it's 8N based on Orin.

On the flip side, I've seen Kepler state exactly that - he admits he doesn't know but assumes 8N because Orin is 8N.
 
Everything is going in the worst possible direction and I fear the thread is going to replicate the tragic end of Wii U speculation back in the day.
Honestly, I think all 8nm means is that our performance-per-watt calculations are way off and Nvidia engineers did the impossible. There is simply a limit to how low Ampere can go (I believe 420MHz is the absolute minimum). And even if those are the final clocks, I wouldn't say that's terrible.
 
Everything is going in the worst possible direction and I fear the thread is going to replicate the tragic end of Wii U speculation back in the day.
Why? The GPU and CPU hardware structures and sizes are known. There are also power and bandwidth efficiency features. It's a far cry from the Wii U situation.

Also, it's tiring to see all these blanket-level statements of "X node means this and Y node means that". It's a custom effort. There are so many knobs Nvidia can turn to achieve higher efficiency on whichever node they choose, alongside platform-level power savings. Wherever it's fabbed, performance will be at the level Nintendo wanted.
 
It's still a lot more recent than the question mark thing, and the most recent thing we have from him afaik.
Well yeah, I'm just a bit confused by the "oh no we're doomed now" aura when the tweet was made in September 2023, and we already went through all that dooming back then :)

Circle of life I guess.
 
Honestly, I think all 8nm means is that our performance-per-watt calculations are way off and Nvidia engineers did the impossible. There is simply a limit to how low Ampere can go (I believe 420MHz is the absolute minimum). And even if those are the final clocks, I wouldn't say that's terrible.
It's already pretty bad. This is a machine that's going to sell for close to $400, and I think Nintendo would be replicating the tragic results of the Wii U era if they dropped clock frequencies a lot, trading performance away for cost, power consumption and stability.
 
Neural enemy AI that learns how you play would be cool, but I understand that's a very difficult balance: training them enough to be a challenge, but not so much that they become gods amongst men.
Oh, I would very much love for one game, and only one game, to have a crazy impossible AI. The only thing is, they'd have to market it as such, so I won't feel gaslit and crazy.

So can we talk about Ampere itself? Like, what benefits will it have for developers to use? I am talking about the developers who will strictly take full advantage of the architecture.
 


This was a topic of conversation 1,000+ pages ago.

Everything is going in the worst possible direction and I fear the thread is going to replicate the tragic end of Wii U speculation back in the day.

Until we get new information stating otherwise, the Switch 2 will be 8nm Samsung. While there have been calculations that point out that the Switch 2 might be using a different node, we don't have concrete evidence that explicitly says it's using TSMC 4nm or another node from Samsung. I would recommend you temper expectations based on the current fact that Ampere is built on 8nm Samsung, so that there isn't any disappointment in the future.
 
I'm not sure if Infinity Cache can help Drake erase the clock frequency and horsepower disadvantage; at least oldpuck didn't address this in his previous post.
Infinity Cache is an RDNA2 feature. Redd is pointing out that the APUs that AMD makes - chips that have CPU+GPU, like in Steam Deck, Xbox, Playstation - don't have the Infinity Cache.

Redd doesn't frequent these forums as much anymore, but he's a smart guy and I learned a lot about consoles from him - he's (correctly) pointing out that because of the lack of Infinity Cache, the Steam Deck might underperform what you'd expect, based on benchmarking other AMD cards.

Redd is asking the right questions here - does the AMD "raster advantage" go away without Infinity Cache? Does that bring Steam Deck's raw horsepower down? Does the lack of Infinity Cache affect other parts of the system?

Infinity Cache is a tool that AMD uses to make their chips need less memory bandwidth. Memory bandwidth is expensive, and isn't growing as fast as other parts of the GPU; to get big, powerful GPUs at good cost and good performance, you need to be really bandwidth efficient.

APUs are low-to-mid range devices when it comes to performance. They're not pushing the envelope in terms of what memory technology can deliver. APUs need to support both the GPU and the CPU - you can't reduce bandwidth load by GPU changes alone. And Infinity Cache itself takes up a lot of space on the physical chip, which is at a premium in a laptop, or portable PC, or $500-or-less console. That's probably why APUs don't use Infinity Cache. The performance-per-dollar is better to cut the cache, and give the chips the higher bandwidth they need to compensate.

Ideally, we'd want to know exactly how this affects these custom chips. You can benchmark the Steam Deck, of course, but since its CPU is built in and has no perfect match in PC land, you can't isolate how much the CPU is influencing the benchmark. There is no apples to apples rig, like you'd set up for classic graphics card benchmarks.

Instead we have to do other sorts of tests and guess around. Fortunately, Valve inadvertently gave us one. The OLED Steam Deck upgrades the memory controller, and increases the bandwidth by 10-15%. One thing we could test is, does the increase in bandwidth increase performance consistently? If it does, that would suggest that Steam Deck is hungry for bandwidth, and is underperforming relative to other AMD hardware.

It's not a perfect test, unfortunately, because the updated Deck is also more power and heat efficient. Under heavy load, the Deck prefers to spend as much power as possible on the GPU, and whatever is left goes to the CPU. Since the OLED Deck is more efficient, we will potentially have a slightly boosted CPU. But it's as close as we're gonna get.

Best benchmarks I've seen suggest that Steam Deck OLED is 2.5-8% faster in games, but the tests strongly suggest that it's the CPU-limited cases that get us the higher number. The "AMD raster advantage" is something like 30%. I would bet that the lack of Infinity Cache isn't really hurting the Steam Deck here.
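If anyone wants to eyeball that logic themselves, here's the napkin version in Python, using the rough percentages above rather than real benchmark data:

```python
# Napkin check: if the Steam Deck were badly bandwidth-starved, a bandwidth uplift
# should show up almost 1:1 as a frame-rate uplift. The numbers below are the rough
# figures from this post (LCD -> OLED Deck), not precise benchmarks.

bandwidth_gain = 0.125       # ~10-15% more memory bandwidth on the OLED Deck (midpoint)
perf_gain_gpu_bound = 0.025  # ~2.5% faster in GPU-limited games
perf_gain_cpu_bound = 0.08   # ~8% faster in the CPU-limited cases

for label, perf_gain in [("GPU-bound", perf_gain_gpu_bound),
                         ("CPU-bound", perf_gain_cpu_bound)]:
    # Fraction of the bandwidth uplift that actually shows up as performance.
    scaling = perf_gain / bandwidth_gain
    print(f"{label}: {scaling:.0%} of the bandwidth gain shows up as fps")

# Well under 100% in the GPU-bound case, which is why I don't think the Deck is starved.
```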

I would suspect that it hurts ray tracing, though. RT loves cache. The "pointer chasing" that RT creates is the worst case scenario for memory - lots of small requests get bogged down by latency - while creating a best case scenario for cache - small chunks of data that get hammered repeatedly. I don't have benchmarks here that I trust, unfortunately (though I guess I could do some myself, as I have an OLED Steam Deck), but I suspect this is part of the reason that the Series S (which also does not have an Infinity Cache) has ray tracing disabled in so many games that have it on the bigger consoles.

TL;DR: The horsepower advantage of the Steam Deck (as a handheld) is probably very real. It really really really doesn't matter (probably). Just because it's got a bigger engine doesn't mean a tank is faster than a Honda Civic.
 
Sometimes I wonder how crazy it is that we're in the 8th year of the Switch.

Makes me curious how the OS and the game library will look.
The craziest part is that in a few years, people will feel nostalgic for the Switch. Like, sometimes it's terrifying how fast life can go by.

Like… the Wii is now considered retro. Sometimes I think it's crazy how fast technology can advance.

Lastly, are teraflops meant for resolutions? Like, how important are teraflops, since the PS5 Pro will have 33TF, which is quite crazy.
 
This was a topic of conversation 1,000+ pages ago.



Until we get new information stating otherwise, the Switch 2 will be 8nm Samsung. While there have been calculations that point out that the Switch 2 might be using a different node, we don't have concrete evidence that explicitly says it's using TSMC 4nm or another node from Samsung. I would recommend you temper expectations based on the current fact that Ampere is built on 8nm Samsung, so that there isn't any disappointment in the future.
With the combination of cost, size, and stability, I can fully guess what concessions Nintendo will make to clock frequency and base performance, and if it is 8nm I'd be shocked once again at how conservative Nintendo is with hardware performance; it would prove that optimistic guesses are never reliable.
 
This was a topic of conversation 1,000+ pages ago.



Until we get new information stating otherwise, the Switch 2 will be 8nm Samsung. While there have been calculations that point out that the Switch 2 might be using a different node, we don't have concrete evidence that explicitly says it's using TSMC 4nm or another node from Samsung. I would recommend you temper expectations based on the current fact that Ampere is built on 8nm Samsung, so that there isn't any disappointment in the future.
I'm curious how reliable this guy is.

Also, if it's 8nm, wouldn't that mean Nvidia engineers are working their best to make it the best option for the Switch 2?

Also, the Switch first launched on 20nm, but I'm guessing with the successful launch Nintendo got a deal for the 16nm that most expected it would launch with back in the NX days.

I'm curious if Nvidia somehow found a way to make 8nm the most efficient option for the Switch 2. Like, I'm not an expert, but isn't there a chance that Nvidia and Samsung tweaked 8nm?
 
I know it's been posted before, but we're stuck in a cycle after all💀
edit: When will the cycle be broken?
It's what happens when you have no CDG, no Switch 2 ports, radio silence, and also no new information to talk about.
 
I don't like making Raccoon unhappy. If it helps: 1) just guessing, and 2) not head over heels about it either.
At this point I think I will greatly miss and yearn for the Nintendo Switch

and increasingly it seems like Nintendo themselves are in the same boat!
 
I'm curious how reliable this guy is.

Also, if it's 8nm, wouldn't that mean Nvidia engineers are working their best to make it the best option for the Switch 2?

Also, the Switch first launched on 20nm, but I'm guessing with the successful launch Nintendo got a deal for the 16nm that most expected it would launch with back in the NX days.

I'm curious if Nvidia somehow found a way to make 8nm the most efficient option for the Switch 2. Like, I'm not an expert, but isn't there a chance that Nvidia and Samsung tweaked 8nm?

Kopite has been right about Nvidia's desktop GPUs but has been iffy with anything Tegra related. Knowing that Nintendo is using the T239 with a GPU based on Ampere, the safe answer is Nintendo using 8nm Samsung.
At this point I think I will greatly miss and yearn for the Nintendo Switch

and increasingly it seems like Nintendo themselves are in the same boat!

Nintendo misses the Switch sooo much that they delayed the Switch 2 ;)
 
Until we get new information stating otherwise, the Switch 2 will be 8nm Samsung. While there have been calculations that point out that the Switch 2 might be using a different node, we don't have concrete evidence that explicitly says it's using TSMC 4nm or another node from Samsung. I would recommend you temper expectations based on the current fact that Ampere is built on 8nm Samsung, so that there isn't any disappointment in the future.
Those calculations also say SEC8N is pretty unlikely with 12 SMs in the picture. Not impossible, but if they can make it work with 12 SMs, it'd be an impressive feat of engineering.

However, I have a bigger issue with anyone assuming T239 is 8N simply because Ampere GPUs are.

The original Tegra X1 would have been 28nm instead of 20nm if your statement held true. Maxwell GPUs were 28nm, whereas T210 (original Switch SoC) is 20nm.
 
Kopite has been right about Nvidia's desktop GPUs but has been iffy with anything Tegra related. Knowing that Nintendo is using the T239 with a GPU based on Ampere, the safe answer is Nintendo using 8nm Samsung.
That's if you ignore that this is an entirely custom project, developed on an overlapping schedule with Ada Lovelace.
 
Those calculations also say SEC8N is pretty unlikely with 12 SMs in the picture. Not impossible, but if they can make it work with 12 SMs, it'd be an impressive feat of engineering.

However, I have a bigger issue with anyone assuming T239 is 8N simply because Ampere GPUs are.

The original Tegra X1 would have been 28nm instead of 20nm if your statement held true. Maxwell GPUs were 28nm, whereas T210 (original Switch SoC) is 20nm.
Uh. I didn't know that the Tegra X1 would technically have been 28nm.

Also, with the Switch Lite and later revisions, it's now 16nm.

Also, hasn't there been a job listing suggesting that the Tegra T239 was worked on with 5nm, or am I misremembering?
 
Uh. I didn't know that the Tegra X1 would technically have been 28nm.

Also, with the Switch Lite and later revisions, it's now 16nm.

Also, hasn't there been a job listing suggesting that the Tegra T239 was worked on with 5nm, or am I misremembering?
It was in the time frame, yes, but nobody can truly tie that project to T239. It does offer hope that it's fabbed on TSMC 4N, but yeah, as Hermii said, not conclusive.
 
Uh. I didn't know that the Tegra X1 would technically have been 28nm.

Also, with the Switch Lite and later revisions, it's now 16nm.

Also, hasn't there been a job listing suggesting that the Tegra T239 was worked on with 5nm, or am I misremembering?
All the reasons for speculating on a 4N process are "theory" and speculation on data from a leaked DLSS document from 2022. My question remains: I've said before in this thread not to be optimistic that Nintendo will be more aggressive in their hardware performance choices, but I also don't think a node better than 8nm is the more likely choice; if it does go 8nm, I'd think Nintendo would still be disappointing in terms of hardware performance.
 
The reason for speculating on 4N is that it's just a clearly better node than Samsung 8nm at this point. The later this takes to release, the lower the costs for 4N probably are, to the point where 4N is probably going to be cheaper than Samsung 8nm by the time the Switch 2 is selling a lot of units.

4N (5nm+) is also the last "OK" node ever made (though it's not great at all), which would be nice to have for a system that will probably have an extremely long life cycle. I would expect the Switch 2 to last 8-12 years, so it would be nice to have at least TSMC 7nm+ instead of a clearly worse node from 2025.
 
All the reasons for speculating on a 4N process are "theory" and speculation on data from a leaked DLSS document from 2022.

Calling it a DLSS document is misrepresenting it. It was the entire graphics API for the next Switch, containing almost every spec of the GPU.

We know what the GPU looks like. We know the performance per watt of other Ampere GPUs. From there we can extrapolate the likely power consumption of the GPU on 8nm, and it seems way too high. But of course, it's napkin math compared to what Nvidia is doing.
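For anyone curious what that napkin math looks like, here's a minimal sketch; every reference number in it is a placeholder for illustration, not a leaked spec:

```python
# Rough sketch of the napkin math: scale a known 8nm Ampere desktop GPU down to T239's
# 12 SMs and handheld-ish clocks. Every reference number here is a made-up placeholder,
# and linear scaling ignores voltage/frequency curves, so treat the output as ballpark only.

def gpu_power_estimate(sms, clock_mhz, ref_sms, ref_clock_mhz, ref_power_w):
    """Scale power linearly with SM count and clock relative to a reference card."""
    return ref_power_w * (sms / ref_sms) * (clock_mhz / ref_clock_mhz)

# Placeholder reference card: ~28 SMs, ~1800 MHz, ~130 W for the GPU itself on 8nm.
REF_SMS, REF_CLOCK_MHZ, REF_POWER_W = 28, 1800, 130.0

for clock in (420, 768, 1125):
    watts = gpu_power_estimate(12, clock, REF_SMS, REF_CLOCK_MHZ, REF_POWER_W)
    print(f"12 SMs @ {clock} MHz -> ~{watts:.0f} W for the GPU alone (naive scaling)")
```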

My question remains: I've said before in this thread not to be optimistic that Nintendo will be more aggressive in their hardware performance choices, but I also don't think a node better than 8nm is the more likely choice; if it does go 8nm, I'd think Nintendo would still be disappointing in terms of hardware performance.
As I said, we know the size of the GPU, and it's not possible to underclock it that much.
 
All the reasons for speculating on a 4N process are "theory" and speculation on data from a leaked DLSS document from 2022. My question remains: I've said before in this thread not to be optimistic that Nintendo will be more aggressive in their hardware performance choices, but I also don't think a node better than 8nm is the more likely choice; if it does go 8nm, I'd think Nintendo would still be disappointing in terms of hardware performance.
It's a speculation thread. People can be as optimistic or as pessimistic as they want. You think T239 will be fabbed on 8nm and will thus be disappointing? That's your opinion and it's fine. But don't go policing others.
 
All the reasons for speculating on a 4N process are "theory" and speculation on data from a leaked DLSS document from 2022. My question remains: I've said before in this thread not to be optimistic that Nintendo will be more aggressive in their hardware performance choices, but I also don't think a node better than 8nm is the more likely choice; if it does go 8nm, I'd think Nintendo would still be disappointing in terms of hardware performance.
I have no idea what "leaked DLSS document from 2022" is even referring to.

The basis for me feeling 4N is more likely than 8N is purely mathematical.
 


Everything is going in the worst possible direction and I fear the thread is going to replicate the tragic end of Wii U speculation back in the day.
At least I feel like expectations are tempered this time. Will the Switch 2 be a portable Series S? No. Will it be equivalent to a PS4? Very likely, and I think that's what most here are expecting.
 
Sea of Thieves runs at 4K 60fps on PS5/Xbox Series X, and 1080p 30fps on Xbox Series S. In comparison to last gen: Xbox One at 900p and Xbox One X at 4K 30fps.

Wonder where a Switch 2 will fall. Hopefully 1080p 60fps without DLSS. I wonder how much CPU it uses.

I hope 40fps becomes a common thing. A nice middle ground between 30 and 60fps.
Fairly low. Also, bandwidth becomes more and more of a bottleneck the higher you go, so even if you could, you probably wouldn't go that high.
Unless Switch 2 gets LPDDR5X and near 134 GB/s 🤔
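For reference, that figure is just bus width times data rate; the 128-bit bus in this quick sketch is an assumption for the example, not a confirmed spec:

```python
# Peak memory bandwidth is just bus width x data rate. The 128-bit bus is an assumption
# for the sake of the example, not a confirmed Switch 2 spec.

def peak_bandwidth_gbps(bus_width_bits: int, data_rate_mtps: int) -> float:
    return (bus_width_bits / 8) * data_rate_mtps / 1000  # bytes per transfer * MT/s -> GB/s

for name, rate in [("LPDDR5-6400", 6400), ("LPDDR5X-7500", 7500), ("LPDDR5X-8533", 8533)]:
    print(f"{name} on a 128-bit bus: {peak_bandwidth_gbps(128, rate):.1f} GB/s")

# LPDDR5X-8533 on a 128-bit bus lands around 136 GB/s, i.e. that "near 134 GB/s" ballpark.
```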
 
Those calculations also say SEC8N is pretty unlikely with 12 SMs in the picture. Not impossible, but if they can make it work with 12 SMs, it'd be an impressive feat of engineering.

However, I have a bigger issue with anyone assuming T239 is 8N simply because Ampere GPUs are.

The original Tegra X1 would have been 28nm instead of 20nm if your statement held true. Maxwell GPUs were 28nm, whereas T210 (original Switch SoC) is 20nm.

I am just trying to say that until we have further confirmation, SEC8N is the default node the Switch 2 is using.

That's if you ignore that this is an entirely custom project, developed on an overlapping schedule with Ada Lovelace.

Despite it being a custom project, we don't have evidence that points towards another node outside of the napkin math that was done.

Now, I do believe based on what was shared here that the most optimal node to use is 4nm, but we don't have confirmation on the node. So I am just playing it safe so I don't get let down if Nintendo announces the console and we find out it's on SEC8nm.
 
At least I feel like expectations are tempered this time. Will the Switch 2 be a portable Series S? No. Will it be equivalent to a PS4? Very likely, and I think that's what most here are expecting.
Equivalent to the PS4 in a similar way to how the Switch is equivalent to the 360.

More RAM, a better CPU, a GPU that's generations ahead in features. And that's regardless of node.
 
So can we talk about Ampere itself? Like, what benefits will it have for developers to use? I am talking about the developers who will strictly take full advantage of the architecture.
We've talked about this a lot, but I know you're newish. I'll compare it to other hardware, I think that's the simplest way to look at it.

BIG FAT NOTE HERE, DON'T @ ME WITHOUT READING: I'm not talking about T239, or the Switch 2, I'm talking about Ampere the architecture. I'm not trying to say anything about how powerful these tools are in one machine versus another. I'm just talking about what extra tools are in the toolbox, period. I'm counting the number of widgets on the swiss army knife, not comparing whose toothpick is bigger.

Against RDNA 2, the core of modern consoles like PS5/Xbox Series/Steam Deck

DLSS:
We talk about this a lot. The short version is that it is a technology that allows you to keep most of the detail of high resolution, and most of the frame rate of a low resolution. It has multiple modes, which give you higher and higher frame rates for lower and lower levels of detail. There is similar technology on the other consoles, but DLSS looks much better at each mode, allowing developers to push it much farther.
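If you want a concrete sense of what the modes mean, here's a quick sketch using the standard published scale factors, with a 1080p output purely as an example:

```python
# What the DLSS modes mean in practice: each renders internally at a fraction of the output
# resolution and reconstructs the rest. Scale factors are the standard published ones;
# the 1080p output target is just an example.

DLSS_MODES = {
    "Quality": 0.667,
    "Balanced": 0.580,
    "Performance": 0.500,
    "Ultra Performance": 0.333,
}

out_w, out_h = 1920, 1080  # example output target
for mode, scale in DLSS_MODES.items():
    w, h = round(out_w * scale), round(out_h * scale)
    print(f"{mode:>17}: renders ~{w}x{h} ({scale * scale:.0%} of the output pixels)")
```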

RT Parallelism: Nvidia's RT cores can run simultaneously with shading; AMD's cannot. Even in a case where an Nvidia GPU and an AMD GPU are equally fast at every single step of rendering, a clever programmer (and it does require cleverness to utilize fully) can get better performance out of Nvidia by doing more things at once.

BVH traversal acceleration: Nvidia's RT cores are about 1.7-2.0x as powerful as AMD's, mostly because they accelerate this part of the RT pipeline. Note, I'm not comparing Switch 2 to PS5. I'm comparing Ampere to RDNA2. The number of cores and their clock speed matters. But all else being equal, complex RT effects are cheaper on Ampere hardware.

Against PS5 specifically

Variable Rate Shading:
This is a DirectX 12 feature that didn't make it into the PS5, but did make it into the Xbox Series. (Or, more honestly, it's an Xbox feature, which is why Microsoft put it in DirectX 12.) DLSS and other upscalers have made this seem less useful, though some smart developers are starting to do cool things with it. It allows parts of the screen to render at lower resolution than the rest. That can be useful in areas where there isn't a lot of detail, or where the player is unlikely to be looking, saving performance.
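A toy sketch of the idea, with an arbitrary variance threshold standing in for whatever heuristic a real engine would actually use:

```python
# Toy version of the variable rate shading decision: look at how much detail a screen tile
# has, and shade flat tiles at a coarser rate. "Shading" here is a no-op; the 0.01 variance
# threshold is arbitrary and just for illustration.
import numpy as np

rng = np.random.default_rng(2)
# Stand-in frame: smooth gradient on the left (sky-like), noisy detail on the right.
luma = np.tile(np.linspace(0, 1, 1280, dtype=np.float32), (720, 1))
luma[:, 640:] += rng.random((720, 640), dtype=np.float32) * 0.5

TILE = 16
coarse = fine = 0
for y in range(0, luma.shape[0], TILE):
    for x in range(0, luma.shape[1], TILE):
        tile = luma[y:y + TILE, x:x + TILE]
        if tile.var() < 0.01:   # flat region: could be shaded at 2x2 rate (1/4 the pixel work)
            coarse += 1
        else:                   # detailed region: keep the full 1x1 shading rate
            fine += 1

print(f"{coarse} tiles could drop to a 2x2 rate, {fine} tiles stay at 1x1")
```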

ML acceleration: The tensor cores - mostly exists to support DLSS, but technically it can support any machine learning based operation. Xbox Series consoles have custom machine learning hardware to handle this, but PS5 doesn't. (The rumored PS5 Pro does)

Against Maxwell, the core of the original Switch
Everything above, plus:

Raytracing, period: Above, I talked about Nvidia ray tracing hardware versus AMD ray tracing hardware. But Switch doesn't have RT hardware at all. RT hardware can be used to do clever things that aren't graphics, either, like 3D sound. There are also some obscure features in the GPU that really exist to support RT (like Conservative Rasterization) which I won't get into here.

Mesh Shaders: A more modern way of handling geometry in the GPU. It is designed to allow devs to push much higher polygon count. But some clever developers are using it to do GPU accelerated animations as well.
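Here's a CPU-side toy of the meshlet idea that mesh shaders are built around; the geometry and the "frustum" are made up, and the point is just the structure of testing and discarding whole meshlets before any per-triangle work:

```python
# Toy, CPU-side illustration of the meshlet idea: geometry gets chopped into small
# "meshlets", and whole meshlets can be tested and thrown away before any per-triangle
# work happens. Real mesh shaders do this on the GPU; this only shows the shape of it.
import numpy as np

rng = np.random.default_rng(1)
triangles = rng.uniform(-100, 100, size=(30_000, 3, 3))   # (triangle, vertex, xyz)

MESHLET_TRIS = 64
kept = 0
for start in range(0, len(triangles), MESHLET_TRIS):
    meshlet = triangles[start:start + MESHLET_TRIS]
    verts = meshlet.reshape(-1, 3)
    center = verts.mean(axis=0)
    radius = np.linalg.norm(verts - center, axis=1).max()
    # Pretend "frustum": keep only meshlets whose bounding sphere reaches into x > 0.
    if center[0] + radius > 0:
        kept += len(meshlet)

print(f"kept {kept} of {len(triangles)} triangles after meshlet-level culling")
```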

Sampler Feedback: A nice but small optimization. Lets the game know more about how textures are being drawn to the screen, allowing it to optimize away some high resolution drawing that wouldn't be visible.

Higher quality video: Nvidia has built-in hardware for encoding/decoding video. Ampere supports higher quality versions of the existing formats, and adds AV1 support.

Hardware accelerated JPEG: This is probably the single most useless thing on the list, as it's really for things like Photoshop, and game textures will almost always be in a special format, but Nvidia has added dedicated JPEG support in hardware.

Against GCN, the core of PS4/PS4 Pro/Xbox One/One S/One X
Everything from before, obviously, but also

16 bit precision: Most graphics operations use 32-bit numbers, because they're very precise, and you don't want pixels shaking around the screen because of rounding errors. But there are some operations where a full 32 bits of precision isn't necessary. Ampere can pack two 16-bit instructions into a 32-bit window, and get double performance there. This absolutely takes some cleverness to take advantage of.
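A quick toy illustration of why this is safe for color math (plain numpy, nothing GPU-specific; the blend is just an example workload):

```python
# Blend two colors in fp32 and fp16 and compare. For this kind of math the fp16 error is
# tiny next to one 8-bit display step, which is why halving the precision is "free" here.
# Position math is a different story; fp16 there is how you get vertices jittering.
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((1000, 3), dtype=np.float32)   # color A per pixel
b = rng.random((1000, 3), dtype=np.float32)   # color B per pixel
t = rng.random((1000, 1), dtype=np.float32)   # blend factor

blend32 = a * (1 - t) + b * t
blend16 = (a.astype(np.float16) * (1 - t.astype(np.float16))
           + b.astype(np.float16) * t.astype(np.float16)).astype(np.float32)

print(f"max fp16 error: {np.abs(blend32 - blend16).max():.6f}  vs one 8-bit step: {1/255:.6f}")
```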

Tiled rendering/caching: Old architectures drew the whole screen at once, which was heavy on memory bandwidth and cache hostile. Ampere (and Maxwell, for that matter, which is why it's down here) slices the frame up into tiles that fit in the cache, and draws them piecemeal. This reduces memory load, because it's more friendly to cache, and spreads memory usage out, so that memory is working longer at lower speeds instead of briefly at high speeds. This is basically free, developers don't have to do anything to take advantage of it, and it's why last gen having "higher bandwidth" isn't really a problem.
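And a toy sketch of the tiling idea, with a cheap gradient standing in for real shading:

```python
# Toy sketch of tiling: instead of streaming the whole frame through memory at once, work
# on small tiles that fit in on-chip cache. The "shading" is just a cheap gradient; the
# structure (loop over tiles, keep each tile's data resident while you work on it) is the point.
import numpy as np

FRAME_W, FRAME_H = 1280, 720
TILE = 32  # 32x32 px * 4 bytes = 4 KB per tile, comfortably cache-sized

frame = np.zeros((FRAME_H, FRAME_W), dtype=np.float32)

def shade_tile(tile, y0, x0):
    # Stand-in for real shading, done entirely within the tile.
    ys, xs = np.mgrid[y0:y0 + tile.shape[0], x0:x0 + tile.shape[1]]
    tile[:] = (xs / FRAME_W + ys / FRAME_H) / 2

for y in range(0, FRAME_H, TILE):
    for x in range(0, FRAME_W, TILE):
        shade_tile(frame[y:y + TILE, x:x + TILE], y, x)  # each tile stays cache-resident
```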
 
Are we seriously discussing Kopite's 8nm claim again? Isn't this like the 5th-7th time now?
He's gotten literally every single thing about the T239 wrong, and he even said that he was just assuming 8nm because of Orin. I'm not sure why people are still so hung up on this.
 
Nowhere near Wii U level bad.

Right?

I remember some of the early rumors were that the Wii U, or rather the Wii successor at the time, was going to use a version of IBM's Power7 CPU, and it was projected to be quite the beast of a machine, using a relatively newer AMD GPU (which it did in the end). Instead, Nintendo took the IBM PPC750, which had already been modified from GCN to Wii with I think more cache, a faster clock speed, and a smaller node, to a Frankenstein asymmetric CPU that, while on paper core for core about as strong as the AMD Jaguar cores in the PS4/X1, was still outclassed in the end when those two systems came out. Didn't help that the CPUs in both the 360 and PS3, while more ancient, I think were able to brute force things easier than the Wii U CPU, though my memory could be a little fuzzy on some of the more specific details.

The equivalent in this case would be if Nintendo, rather than using Tegra Orin and by extension the modified T239 chip, instead took the same Tegra X1, modified the piss out of it to have 8 or so cores with more cache and higher clock speeds, and frankensteined an Ampere-class GPU onto it, but without DLSS or ray tracing, where docked mode would be like plugging a Steam Deck into your TV and FSR was the only option for upscaling.

Yeah...that ain't fucking happening this time around.
 
Are we seriously discussing Kopite's 8nm claim again? Isn't this like the 5th-7th time now?
He's gotten literally every single thing about the T239 wrong, and he even said that he was just assuming 8nm because of Orin. I'm not sure why people are still so hung up on this.
There it is again. People are saying this, but nobody has provided any proof he ever did so.
 
I am just trying to say that until we have further confirmation, SEC8N is the default node the Switch 2 is using.



Despite it being a custom project, we don't have evidence that points towards another node outside of the napkin math that was done.

Now, I do believe based on what was shared here that the most optimal node to use is 4nm, but we don't have confirmation on the node. So I am just playing it safe so I don't get let down if Nintendo announces the console and we find out it's on SEC8nm.
Trust the napkin.
 
I am just trying to say that until we have further confirmation, SEC8N is the default node the Switch 2 is using.
For what it's worth, the default assumption indeed used to be 8N. But that changed almost overnight after the Nvidia leak, when it was discovered there were 12 SMs. The assumption for the number of SMs before that was lower than 12.
 
I love how the discourse on the Switch 2's power went from "it'll be between PS4 and Xbox Series S" to "barely reaching Steam Deck performance".
Funnily enough, they can both be true at the same time. Again quoting oldpuck's post:


I’ve got an example that might be illuminating.

Control has a custom Steam Deck setup and it’s great. Mix of medium/low/high settings that runs at native resolution and is a rock solid 30fps. I played the whole game that way and loved it.

Drake could definitely top it. FSR makes the game look like a pixelated mess, but DLSS looks good, and the basic RT settings are right up Drake’s alley. You could absolutely put together a prettier version of Control running 30fps on Switch 2.

After I beat the game, I decided to run some benchmarks because I’m a giant fucking nerd. I pushed every single setting down as low as it could go and went into a firefight that generally tanks the frame rate.

It was fucking fantastic. The game was hitting 90fps most of the time and 75+ in combat. The resolution was 540p but with the frame rate that high, popping and fizzing from low res goes away, and it felt really smooth. Made me want to replay the game from the top.

Drake would struggle with this. A 30% raster perf drop would mean my ultra high frame rate firefight would become sub 50fps. 540p on a 7 inch screen is going to look much better than on a 7.9 inch screen, and without the high frame rate to smooth it out you would see every pixel artifact.

In a world where consoles are offering "fidelity" and "performance" modes, you're looking at a wild arena where Nintendo's bitty handheld can look better than other consoles' fidelity modes while not being able to achieve their performance modes.
 
I love how the discourse on the Switch 2's power went from "it'll be between PS4 and Xbox Series S" to "barely reaching Steam Deck performance".

This is why I would rather wait for either really solid leaks from reliable sources or actual details from Nintendo.

Problem is we are in the eighth year of the Switch's lifespan, and when asked about the Switch successor, Nintendo is still claiming they don't know that girl.

Hoping Nintendo finally confirms new hardware before the end of the fiscal year but at this point I expect nothing.

I basically feel like that guy sitting at the desk in the beginning of Battlestar Galactica waiting for the Cylons to show up...
 
Playing dumb here, but if it was such a shit node as you suggest, then why bother taping out the chip in that node to begin with?

And I think Goodtwin's point is it doesn't matter if a node was shit, god-tier or whatever. Tegra X1 started on 20nm for some reason or another, and it was a node that otherwise wasn't used by Nvidia much.
They bought into 20nm and committed to production. Several companies did, including Qualcomm. They all moved off of it as soon as they could. But since the TX1 was a different type of product than a mobile chip, Nvidia couldn't move until Nintendo could, I guess.

I get what his point is; my point is I don't think the TX1 is really proof of anything other than Nvidia not jumping into nodes immediately. Sure, we don't have proof of any one node being used, so anything is up for grabs, but that doesn't make options "reasonable" given the absence of information. What we know: Nvidia has capacity at TSMC 7nm and 5nm and Samsung 8nm. What we don't know: Nvidia having capacity on other nodes. If anything, the TX1 tells me that Nvidia doesn't want to be stuck in single-product land. Methinks Nintendo wouldn't want to either, not unless Samsung is bending over backwards to sell Nvidia on it.
 
There is just so much bias with this; it basically boils down to:
"If Kopite says it is 4nm then it is true!!!"
"If he says it's 8nm it may be speculation"

Whether or not Kopite states that this thing being 8nm is speculation on his part, we should be taking this with a grain of salt the size of a pear.
Credibility aside, the same goes for all "insiders": unless confirmed by Nintendo, all of this is basically fighting over which headcanon sounds the coolest (aside from the very obvious exception being the Nvidia leak).
 
I love how the discourse on the Switch 2's power went from "it'll be between PS4 and Xbox Series S" to "barely reaching Steam Deck performance".
These can be two completely different contexts: when comparing to the Deck it's naturally based on portable mode, whereas when we say the Switch 2 has a level of performance between PS4 and XSS, it's clearly based on a docked-mode discussion.
 
ML acceleration: The tensor cores - mostly exists to support DLSS, but technically it can support any machine learning based operation. Xbox Series consoles have custom machine learning hardware to handle this, but PS5 doesn't. (The rumored PS5 Pro does)
Ok, this is new. I wasn't aware that Xbox supports ML. What do they use it for?
Mesh Shaders: A more modern way of handling geometry in the GPU. It is designed to allow devs to push much higher polygon count. But some clever developers are using it to do GPU accelerated animations as well
So here's what I want to know. With traditional polygons, you have to use the CPU to issue draw calls for them; is that the same here as well? Or is it still polygons, but you can have a low polygon count model and the mesh shader just boosts it more? Isn't that similar to tessellation?
Whatever happened to tessellation? I thought it was supposed to be the next hip thing when gen 8 came out, or at least around DirectX 10 or 11? Same with POM.
Also, I guess we have a while before mesh shaders are the standard, right? I mean gaming overall.


Tiled rendering/caching: Old architectures drew the whole screen at once, which was heavy on memory bandwidth and cache hostile. Ampere (and Maxwell, for that matter, which is why it's down here) slice the frame up into tiles that fit in the cache, and draw them piecemeal. This reduces memory load, because it's more friendly to cache, and spreads memory usage out, so that memory is working longer at lower speeds instead of briefly at high speeds. This is basically free, developers don't have to do anything to take advantage of it, and is why last gen having "higher bandwidth" isn't really a problem.
I thought the Xbox 360 and One had it, or some interpretation of it?
I love how the discourse on the Switch 2's power went from "it'll be between PS4 and Xbox Series S" to "barely reaching Steam Deck performance".
The Steam Deck is way more advanced than the PS4 and Xbox One. Newer architecture, supports ray tracing, better CPU. Idk what the memory bandwidth is, but it has more RAM.




BIG FAT NOTE HERE, DON'T @ ME WITHOUT READING: I'm not talking about T239, or the Switch 2, I'm talking about Ampere the architecture. I'm not trying to say anything about how powerful these tools are in one machine versus another. I'm just talking about what extra tools are in the toolbox, period. I'm counting the number of widgets on the swiss army knife, not comparing whose toothpick is bigger.
So what changes on the instruction level? Code-wise? From what I understand, a change in architecture is a change in the micro-architecture language. Like, some instruction could take X cycles to finish, but the newer instruction sets take X - Y cycles due to some optimization.
16 bit precision: Most graphics operations use 32-bit numbers, because they're very precise, and you don't want pixels shaking around the screen because of rounding errors. But there are some operations where a full 32 bits of precision isn't necessary. Ampere can pack two 16-bit instructions into a 32-bit window, and get double performance there. This absolutely takes some cleverness to take advantage of.
I've seen an example of this. I remember someone showing a 32-bit texture of a water reflection, and they showed that you can use a 16-bit operation to reduce the quality; while it was noticeable when you look at it by itself, it's largely unnoticeable if you're looking at the overall game environment.
 
This is why I would rather wait for either really solid leaks from reliable sources or actual details from Nintendo.
Dammit Kevin, we need to find it ourselves. We go to Canada and find a major Switch developer. Offer everyone pancakes with maple syrup and they will surely let us into their office.

I remember some of the early rumors were Wii U, or rather the Wii successor at the time, was going to use a version of IBM's Power7 CPU
Lol, I remember that. I think some people even thought it was supposed to use the power of Watson. Watson was that cheating bastard "AI" on Jeopardy that pretty much googled the answers on the internet, while those decent but extremely smart folks were just jobbers, right? I hope Jeopardy paid those folks.


Instead, Nintendo took the IBM PPC750,
Looking back at it, at least for the handhelds, Nintendo always used their previous gen. I guess it made sense for Nintendo to have had that thought in their head, to reuse it again, at that time.
 

