StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST|

So at 11W = 40~50fps
5W = 12fps

?

I didn't know the performance would drop so much, wow
It really does. There's a good reason why you have to edit the wattage per game. It's a bit of a mess, but that's kind of how it is with something like the Steam Deck.

Personally I just keep the wattage on my original model at default and accept that the battery life isn't going to be much longer than 2hrs nine times out of ten. I'm not really brave enough to toggle it much past the basic "Refresh rate" setting. The Steam Deck successor's battery life is probably going to be worse, but it'll be more powerful so I don't particularly care.
 
Node: 4nm vs 7nm brings about a 30% reduction in power consumption.

In this case Serif is using the OLED model. Is there much of a difference between 4N (optimized 5nm?) and 6nm in power consumption?



Even if Drake is indeed on 4N, I'm still expecting the whole system to draw 11~12W in handheld mode. It's hard for me to imagine all that power at ~550MHz drawing like 7W total.
 
“Two Focuses duct taped together!”

But seriously, given Ford’s recent history of shit quality, and leading the pack in recalls, this is unfair to Nintendo.
Wait. Ford Focus. Remember that Nintendo Focus leak? We've come full circle. Focus confirmed.
 
I wanted to ask a related question: if we assume that Switch 2 can get something like the 4MB L2 cache, how does that fare against the AMD GPUs used in the gen 9 family of consoles? I think that AMD was big on the L3 infinity (i.e. really big) caches, but do we know how that translates to real-world approximate bandwidth improvements?
Well, there's no L3 cache (e.g. Infinity Cache) that the GPUs on the current-gen consoles' APUs have access to (here).

Although I don't know how a larger L2 cache quantifiably translates to approximate, real-world RAM bandwidth improvements, if the information TechPowerUp provided about the Xbox Series S's GPU is accurate, then a hypothetical 4 MB of L2 cache on Drake's GPU would be double the L2 cache of the Series S's GPU.
 
In this case Serif is using the OLED model. Is there much of a difference between 4N (optimized 5nm?) and 6nm in power consumption?
I just used the numbers for 5nm, since nobody knows the exact figures for Nvidia's 4nm node. But we can assume it's more efficient. 6nm is also "just" an optimized 7nm.
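For a rough sense of what that ~30% figure would mean in practice, here's a trivial back-of-the-envelope sketch. The 11W baseline and the 30% reduction are just the thread's assumed numbers, not measurements:

```python
# Rough illustration: applying an assumed ~30% node-to-node power
# reduction to a hypothetical handheld power draw.
# Both numbers are assumptions from the thread, not measured values.

def scaled_power(baseline_watts: float, reduction: float) -> float:
    """Power after applying a fractional reduction at iso-performance."""
    return baseline_watts * (1.0 - reduction)

baseline = 11.0    # hypothetical total handheld draw on the older node, in watts
reduction = 0.30   # assumed power reduction from the node shrink

print(round(scaled_power(baseline, reduction), 1))  # -> 7.7
```

Which is why a ~30% node gain alone doesn't obviously get an 11W design down to a Switch-like 4-7W envelope without clock or voltage changes on top.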
 
Well, there's no L3 cache (e.g. Infinity Cache) that the GPUs on the current-gen consoles' APUs have access to (here).

Although I don't know how a larger L2 cache quantifiably translates to approximate, real-world RAM bandwidth improvements, if the information TechPowerUp provided about the Xbox Series S's GPU is accurate, then a hypothetical 4 MB of L2 cache on Drake's GPU would be double the L2 cache of the Series S's GPU.
A potential 4MB of L2 cache for the GPU and 8MB of L3 cache for the CPU would help a lot, I assume.
 
I'm sure this has been talked about a lot here, but it's not exactly something that's easy to find.

Coming from SEC 8nm to TSMC 4N, is it safe to expect at least a 50% reduction in power consumption?

edit: I can't remember if I actually read about it, but I have this strange memory of an 80% power reduction figure. Maybe I dreamed it LOL
 
As I understood it, he was comparing TSMC 7nm vs TSMC 4N.
Ah gotcha, didn't see that part.

Someone just said SEC8N is 10nm, so I guess I'm even more confused. I always thought SEC8N was 7nm for some reason (TSMC 4N is optimized 5nm unless I'm mistaken yet again lol)
 
I'm not really a hardware nerd, so could someone please explain the benefits of cache?

Like what is the difference between a 2MB cache and a 4MB cache, and why is it so small but apparently makes a big difference?

Please explain like I am 5, on the level of an @oldpuck post
 
The importance of optimization and power efficiency.

Nintendo reports the Switch OLED uses 4 W in handheld mode during active gameplay. DOOM (2016) on the Switch runs at a handheld resolution of dynamic 576p, at 30 FPS, at 'low' settings but also some settings dialed back further.

When I set the APU TDP cap to 4 W on the Steam Deck OLED, set the resolution in DOOM to 486p (the lowest setting I could use!), the resolution scale to 74% to match the lowest range of the dynamic 360p-576p of the Switch version, set all settings to low (but the API to Vulkan to give it a leg up), and the FOV to 90 to minimize onscreen rendering, I'm averaging 26 FPS in the heat of battle. Woof!

They did a good job with that port.
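As a quick sanity check on the numbers in that test setup (assuming simple linear scaling of the vertical resolution), the 74% scale really does land on the Switch version's 360p dynamic floor:

```python
# Sanity check: a 74% resolution scale applied to a 486p base should
# land near the 360p floor of the Switch version's dynamic 360p-576p
# range. Simple linear scaling of the vertical resolution is assumed.

base_vertical = 486   # lowest selectable resolution on the Deck, per the post
scale = 0.74          # resolution scale used in the test

effective = base_vertical * scale
print(round(effective))  # -> 360
```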
 
Last I checked (unless they've updated sales numbers), the Deck has only sold like 1 million units

Which I guess is still pretty impressive for a piece of hardware that isn't sold on store shelves

I had the number 3 million in my head. All I was able to find was “multiple millions” from Valve. No real official numbers, but I’d say it’s well over 1 million by now.

It's going to be insanity getting our hands on this thing.

Thank god I'm in Australia.

Same. I feel like anybody I know here that’s cared right at announcement and ahead of launch has gotten the recent devices. If you are at EB after the announcement, you’ll probably get one.
 
0
I'm not really a hardware nerd, so could someone please explain the benefits of cache?

Like what is the difference between a 2MB cache and a 4MB cache, and why is it so small but apparently makes a big difference?

Please explain like I am 5, on the level of an @oldpuck post
A CPU cache is a type of very fast memory that's attached directly to the CPU for the most important data. CPU cache is generally divided into 3 levels, with L1 cache typically being for each CPU core, L2 cache being for groups of cores, and L3 cache being for the CPU as a whole. A larger cache means that more data can be available to the CPU at a time, and thus it isn't as bottlenecked by the RAM bandwidth.
 
I'm not really a hardware nerd, so could someone please explain the benefits of cache?

Like what is the difference between a 2MB cache and a 4MB cache, and why is it so small but apparently makes a big difference?

Please explain like I am 5, on the level of an @oldpuck post

The cache holds data the chip needs, like RAM, but because it's right on the chip it's much faster. The less you need to move data from cache to RAM and back again rather than just keeping it in cache, the better, because moving an asset from cache to RAM could take like 100 nanoseconds vs like 3 nanoseconds for the chip to just read what's on the cache. The reason why it's so important even when it's as small as 2-4MB is because it focuses on data you're constantly using. Basic gameplay code, geometry and texture data for the main character, walking and attacking animations, etc. If you can keep more of that constantly-used data right on the chip, you can cut down on cache-to-RAM lag way more than you'd think.
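Those illustrative latencies plug straight into the standard average memory access time (AMAT) formula. A minimal sketch, using the post's ~3ns/~100ns figures purely as assumptions:

```python
# Average memory access time (AMAT) using the post's illustrative
# numbers: a cache hit costs ~3 ns, while a miss that has to go out
# to RAM costs ~100 ns. AMAT = hit_time + miss_rate * miss_penalty.

def amat(hit_ns: float, miss_penalty_ns: float, miss_rate: float) -> float:
    """Average latency per access for a given cache miss rate."""
    return hit_ns + miss_rate * miss_penalty_ns

# Keeping more hot data in cache (a lower miss rate) pays off quickly:
print(round(amat(3, 100, 0.10), 1))  # 10% misses -> 13.0 ns average
print(round(amat(3, 100, 0.02), 1))  #  2% misses ->  5.0 ns average
```

This is why doubling a cache can matter far more than its few MB would suggest: even a small drop in miss rate cuts the average access time dramatically.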
 
Basically, if there was no game running, there would be no cap. The Wi-Fi chip in the Switch can max out theoretically at ~100MB/s and the Ethernet adapter (when used on other devices) achieves about the same. But the Ethernet adapter is throttled to ~8MB/s in real-world use with Switch and Wi-Fi is limited to about ~9MB/s, so the limit seems to be 10MB/s or less across all Switch network activity.
I wanted to revisit this subject again.
When a device receives network activity, it has to do stuff with it. Network data isn't just automatically passed through to wherever it needs to go. Whether it be writing it to the internal storage or passing it to a game as input data or preparing it for network output, it all passes through RAM and the CPU before coming to its final destination. So it eats into available RAM storage and available CPU clocks. When figuring out what the max addressable CPU and RAM are to developers, network activity might experience a cap to set a clear expectation of what is addressable for games while still leaving enough compute cycles and RAM to process incoming (and outgoing) network packets.
On top of this, if your internal and external storage can only write at a certain speed that is slower than the maximum network speed, you end up with a bottleneck regardless, as you'd have a bunch of data stowed in RAM waiting to be written to the storage medium, so it's rather typical that network traffic is capped on devices with no variation in storage write speeds.
With these 2 considerations in mind, Nintendo opted to put a hard cap on how much data could be retrieved or sent at any given moment, which Switch communicates to the device on the other end of the network connection before it starts trading that data back and forth.

What the next Nintendo hardware's real-world network speed is will be dependent on how much of its hardware Nintendo is willing to allocate strictly to processing network data and/or how well it can write data to the storage methods available.
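The bottleneck logic described above boils down to a min() over the stages in the chain. A minimal sketch with hypothetical throughput numbers (none of these are Nintendo's actual figures):

```python
# Effective transfer speed is limited by the slowest stage in the
# chain: the network link, the CPU/RAM budget reserved for packet
# processing, and the storage write speed. All numbers below are
# hypothetical illustrations, not measured Switch figures.

def effective_throughput(network_mbps: float,
                         processing_budget_mbps: float,
                         storage_write_mbps: float) -> float:
    """Throughput of a pipeline is capped by its slowest stage."""
    return min(network_mbps, processing_budget_mbps, storage_write_mbps)

# A fast Wi-Fi link doesn't help if the system cap or storage is slower:
print(effective_throughput(100, 10, 60))  # -> 10
```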
Nintendo wants to expand NSO with more subscribers. If their current service bottlenecks because of hardware limitations, wouldn't they focus on the areas that made the service bad and add more RAM?
 
I'm not really a hardware nerd, so could someone please explain the benefits of cache?

Like what is the difference between a 2MB cache and a 4MB cache, and why is it so small but apparently makes a big difference?

Please explain like I am 5, on the level of an @oldpuck post

The reason cache exists is that RAM, like the DDR4 in your PC or the LPDDR4 in the Switch, is slow. Not slow in human terms, but slow from the point of view of the CPU. When a CPU core requests data from RAM, it takes something like 50 nanoseconds to arrive (each nanosecond is a billionth of a second). For us, that's an incomprehensibly short amount of time, but for a CPU it's an eternity. If you've got a CPU clocked at 2GHz, then it's capable of running an instruction once every 0.5 nanoseconds. So if the CPU has to wait 50ns for data to come back, it has to wait 100 times as long as it takes to run an instruction. If every instruction needs data (which they all do), then the CPU would spend 99% of its time waiting for data to arrive from RAM, and only 1% of its time actually calculating anything.

So, instead of having to get data all the way from RAM, CPU designers added what's called a cache next to the CPU. This is a much smaller pool of embedded memory right next to the CPU that's designed to return data much more quickly than RAM, potentially as quickly as one clock cycle (0.5ns in our example above). Every time the CPU accesses any data from RAM, it's held in the cache, until it's either used again, or it's not accessed for a while and gets booted out of the cache back to the main RAM. This means that if the CPU is accessing the same data (or instructions, which are also data) over and over again, so long as it's small enough to fit in cache, it will be stored there and can be accessed very quickly. Typically most code does behave like this (access small amounts of data repeatedly), so caches work very well to keep CPUs fed, and they're a very important part of CPU design.

When it comes to the size of a cache, it's a bit more complicated, because the bigger the cache is the slower it is, which means bigger isn't always better when it comes to cache design. In general you want to have as much data as possible close to the CPU so that it can be accessed quickly, but if you just add an absurdly large cache on there it's not going to do much good if it's almost as slow as waiting for data from the RAM itself.

This is the reason you'll see multiple levels of cache on a CPU, typically L1, L2 and L3. L1 is a very small pool of cache, usually around 64KB (with a separate L1 for instruction data as well), with the goal of being as fast as possible, usually returning data within a single clock cycle. Then after that, there's the L2 cache, which is a bit bigger, and a bit slower, typically anywhere from 256KB to 1MB per core on modern CPUs. Finally, there's usually an L3 cache, which is shared between the cores and is even bigger and even slower, ranging from around 2MB on a lower-end phone CPU to as much as 1GB on high-end server CPUs. This cache hierarchy, as it's called, is an attempt to get the best of both worlds with very quick response from the L1 if the data's available there, but also having larger pools that are at least reasonably quick if it's not available from the closest cache.

Regarding how much of an improvement cache makes, it all depends on the software. Software which only works on a few hundred KB of data won't see much benefit moving from 4MB of L3 to 8MB, as the dataset will all fit in cache either way, but if your software is working with datasets of hundreds of MBs, then more cache is pretty much always better. On the PC front, games tend to benefit from increased cache sizes more than other software, but it still depends quite a bit from game to game, and may be a little different on a console where developers can optimise their code around datasets that fit nicely in the cache they have available to them.
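The stall arithmetic from the first paragraph, worked through explicitly (the 2GHz clock, 50ns RAM latency, and one-instruction-per-cycle rate are the post's simplifying assumptions):

```python
# Worked version of the post's stall arithmetic: a 2 GHz core issues
# one instruction per 0.5 ns cycle, while a RAM access takes ~50 ns.
# If every instruction waited on RAM, the core would spend ~99% of
# its time stalled and ~1% doing useful work.

clock_hz = 2e9
cycle_ns = 1e9 / clock_hz        # 0.5 ns per cycle at 2 GHz
ram_latency_ns = 50.0            # assumed round-trip latency to RAM

cycles_per_access = ram_latency_ns / cycle_ns
stall_fraction = cycles_per_access / (cycles_per_access + 1)

print(cycles_per_access)         # -> 100.0 cycles wasted per access
print(round(stall_fraction, 2))  # -> 0.99
```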
 
The reason cache exists is that RAM, like the DDR4 in your PC or the LPDDR4 in the Switch, is slow. Not slow in human terms, but slow from the point of view of the CPU. When a CPU core requests data from RAM, it takes something like 50 nanoseconds to arrive (each nanosecond is a billionth of a second). For us, that's an incomprehensibly short amount of time, but for a CPU it's an eternity. If you've got a CPU clocked at 2GHz, then it's capable of running an instruction once every 0.5 nanoseconds. So if the CPU has to wait 50ns for data to come back, it has to wait 100 times as long as it takes to run an instruction. If every instruction needs data (which they all do), then the CPU would spend 99% of its time waiting for data to arrive from RAM, and only 1% of its time actually calculating anything.

So, instead of having to get data all the way from RAM, CPU designers added what's called a cache next to the CPU. This is a much smaller pool of embedded memory right next to the CPU that's designed to return data much more quickly than RAM, potentially as quickly as one clock cycle (0.5ns in our example above). Every time the CPU accesses any data from RAM, it's held in the cache, until it's either used again, or it's not accessed for a while and gets booted out of the cache back to the main RAM. This means that if the CPU is accessing the same data (or instructions, which are also data) over and over again, so long as it's small enough to fit in cache, it will be stored there and can be accessed very quickly. Typically most code does behave like this (access small amounts of data repeatedly), so caches work very well to keep CPUs fed, and they're a very important part of CPU design.

When it comes to the size of a cache, it's a bit more complicated, because the bigger the cache is the slower it is, which means bigger isn't always better when it comes to cache design. In general you want to have as much data as possible close to the CPU so that it can be accessed quickly, but if you just add an absurdly large cache on there it's not going to do much good if it's almost as slow as waiting for data from the RAM itself.

This is the reason you'll see multiple levels of cache on a CPU, typically L1, L2 and L3. L1 is a very small pool of cache, usually around 64KB (with a separate L1 for instruction data as well), with the goal of being as fast as possible, usually returning data within a single clock cycle. Then after that, there's the L2 cache, which is a bit bigger, and a bit slower, typically anywhere from 256KB to 1MB per core on modern CPUs. Finally, there's usually an L3 cache, which is shared between the cores and is even bigger and even slower, ranging from around 2MB on a lower-end phone CPU to as much as 1GB on high-end server CPUs. This cache hierarchy, as it's called, is an attempt to get the best of both worlds with very quick response from the L1 if the data's available there, but also having larger pools that are at least reasonably quick if it's not available from the closest cache.

Regarding how much of an improvement cache makes, it all depends on the software. Software which only works on a few hundred KB of data won't see much benefit moving from 4MB of L3 to 8MB, as the dataset will all fit in cache either way, but if your software is working with datasets of hundreds of MBs, then more cache is pretty much always better. On the PC front, games tend to benefit from increased cache sizes more than other software, but it still depends quite a bit from game to game, and may be a little different on a console where developers can optimise their code around datasets that fit nicely in the cache they have available to them.
Thanks this is exactly the explanation I was looking for, Even on the part where I was going to ask why we dont just make it multiple GB in size

Thank you
 
The reason cache exists is that RAM, like the DDR4 in your PC or the LPDDR4 in the Switch, is slow. Not slow in human terms, but slow from the point of view of the CPU. When a CPU core requests data from RAM, it takes something like 50 nanoseconds to arrive (each nanosecond is a billionth of a second). For us, that's an incomprehensibly short amount of time, but for a CPU it's an eternity. If you've got a CPU clocked at 2GHz, then it's capable of running an instruction once every 0.5 nanoseconds. So if the CPU has to wait 50ns for data to come back, it has to wait 100 times as long as it takes to run an instruction. If every instruction needs data (which they all do), then the CPU would spend 99% of its time waiting for data to arrive from RAM, and only 1% of its time actually calculating anything.

So, instead of having to get data all the way from RAM, CPU designers added what's called a cache next to the CPU. This is a much smaller pool of embedded memory right next to the CPU that's designed to return data much more quickly than RAM, potentially as quickly as one clock cycle (0.5ns in our example above). Every time the CPU accesses any data from RAM, it's held in the cache, until it's either used again, or it's not accessed for a while and gets booted out of the cache back to the main RAM. This means that if the CPU is accessing the same data (or instructions, which are also data) over and over again, so long as it's small enough to fit in cache, it will be stored there and can be accessed very quickly. Typically most code does behave like this (access small amounts of data repeatedly), so caches work very well to keep CPUs fed, and they're a very important part of CPU design.

When it comes to the size of a cache, it's a bit more complicated, because the bigger the cache is the slower it is, which means bigger isn't always better when it comes to cache design. In general you want to have as much data as possible close to the CPU so that it can be accessed quickly, but if you just add an absurdly large cache on there it's not going to do much good if it's almost as slow as waiting for data from the RAM itself.

This is the reason you'll see multiple levels of cache on a CPU, typically L1, L2 and L3. L1 is a very small pool of cache, usually around 64KB (with a separate L1 for instruction data as well), with the goal of being as fast as possible, usually returning data within a single clock cycle. Then after that, there's the L2 cache, which is a bit bigger, and a bit slower, typically anywhere from 256KB to 1MB per core on modern CPUs. Finally, there's usually an L3 cache, which is shared between the cores and is even bigger and even slower, ranging from around 2MB on a lower-end phone CPU to as much as 1GB on high-end server CPUs. This cache hierarchy, as it's called, is an attempt to get the best of both worlds with very quick response from the L1 if the data's available there, but also having larger pools that are at least reasonably quick if it's not available from the closest cache.

How slow does the L3 server cache get at 1GB? Is that getting to the point of being just as slow as RAM?

Regarding how much of an improvement cache makes, it all depends on the software. Software which only works on a few hundred KB of data won't see much benefit moving from 4MB of L3 to 8MB, as the dataset will all fit in cache either way, but if your software is working with datasets of hundreds of MBs, then more cache is pretty much always better. On the PC front, games tend to benefit from increased cache sizes more than other software, but it still depends quite a bit from game to game, and may be a little different on a console where developers can optimise their code around datasets that fit nicely in the cache they have available to them.

Is it true that the 96MB of V-Cache you get on the 3D Zen chips is about the most that modern games need? I've heard they don't benefit much from going from 96 to 128.
 
I wanted to revisit this subject again.

Nintendo wants to expand NSO with more subscribers. If their current service bottlenecks because of hardware limitations, wouldn't they focus on the areas that made the service bad and add more RAM?
Outside of online play, most of NSO’s value proposition is pretty solid. It’s online play and eShop download speeds that are the big issue. And while I’m fairly sure that there will be a greater hardware allocation available for online just by virtue of far more performant hardware, I’m still expecting a cap, it’ll just be capped at higher than 10MB/s. Nintendo will always prioritize CPU cycles and RAM usage for games and the OS, but at least there’s more breathing room now. At least a doubling can be anticipated, but hopefully more.
 
The reason cache exists is that RAM, like the DDR4 in your PC or the LPDDR4 in the Switch, is slow. Not slow in human terms, but slow from the point of view of the CPU. When a CPU core requests data from RAM, it takes something like 50 nanoseconds to arrive (each nanosecond is a billionth of a second). For us, that's an incomprehensibly short amount of time, but for a CPU it's an eternity. If you've got a CPU clocked at 2GHz, then it's capable of running an instruction once every 0.5 nanoseconds. So if the CPU has to wait 50ns for data to come back, it has to wait 100 times as long as it takes to run an instruction. If every instruction needs data (which they all do), then the CPU would spend 99% of its time waiting for data to arrive from RAM, and only 1% of its time actually calculating anything.

So, instead of having to get data all the way from RAM, CPU designers added what's called a cache next to the CPU. This is a much smaller pool of embedded memory right next to the CPU that's designed to return data much more quickly than RAM, potentially as quickly as one clock cycle (0.5ns in our example above). Every time the CPU accesses any data from RAM, it's held in the cache, until it's either used again, or it's not accessed for a while and gets booted out of the cache back to the main RAM. This means that if the CPU is accessing the same data (or instructions, which are also data) over and over again, so long as it's small enough to fit in cache, it will be stored there and can be accessed very quickly. Typically most code does behave like this (access small amounts of data repeatedly), so caches work very well to keep CPUs fed, and they're a very important part of CPU design.

When it comes to the size of a cache, it's a bit more complicated, because the bigger the cache is the slower it is, which means bigger isn't always better when it comes to cache design. In general you want to have as much data as possible close to the CPU so that it can be accessed quickly, but if you just add an absurdly large cache on there it's not going to do much good if it's almost as slow as waiting for data from the RAM itself.

This is the reason you'll see multiple levels of cache on a CPU, typically L1, L2 and L3. L1 is a very small pool of cache, usually around 64KB (with a separate L1 for instruction data as well), with the goal of being as fast as possible, usually returning data within a single clock cycle. Then after that, there's the L2 cache, which is a bit bigger, and a bit slower, typically anywhere from 256KB to 1MB per core on modern CPUs. Finally, there's usually an L3 cache, which is shared between the cores and is even bigger and even slower, ranging from around 2MB on a lower-end phone CPU to as much as 1GB on high-end server CPUs. This cache hierarchy, as it's called, is an attempt to get the best of both worlds with very quick response from the L1 if the data's available there, but also having larger pools that are at least reasonably quick if it's not available from the closest cache.

Regarding how much of an improvement cache makes, it all depends on the software. Software which only works on a few hundred KB of data won't see much benefit moving from 4MB of L3 to 8MB, as the dataset will all fit in cache either way, but if your software is working with datasets of hundreds of MBs, then more cache is pretty much always better. On the PC front, games tend to benefit from increased cache sizes more than other software, but it still depends quite a bit from game to game, and may be a little different on a console where developers can optimise their code around datasets that fit nicely in the cache they have available to them.
How much cache do you think the Switch 2 CPU will have? Imo 8MB of L3 would be just the best
 
As someone who has recently taken over a (very small-time) gaming news/review site, I'm curious what everyone's thoughts are on the ideal way to report on rumours/leaks. Especially those that might arise from this very forum/thread. There are a lot of (justified) complaints about news sites/YouTube channels being underhanded and lurking to then sensationally report on stuff for clicks, is there any room for a site that does this responsibly?

It may be that everyone feels there's no tangible benefit to this community to draw attention to stuff posted here, which is fair and understandable. Happy to hear that from people too.

For the record, my site is not currently monetised and I have a full-time job completely unrelated to video games. So, in the spirit of full transparency, my interest in getting clicks is not financial, but it would be nice to get more people on the site to check out my reviews. More eyes on the site would help me get more (and earlier) review codes which is fun (for me).

Let me know your thoughts!
 
What are you planning to do with your current Switch 1 once the Switch 2 gets released?

Are you keeping it for your collection, or are you selling it to make the Switch 2 more affordable?

Myself, I'm selling it. I hope stores (GameStop in this case) have decent trade-in deals towards a Switch 2.
 
What are you planning to do with your current Switch 1 once the Switch 2 gets released?

Are you keeping it for your collection, or are you selling it to make the Switch 2 more affordable?

Myself, I'm selling it. I hope stores (GameStop in this case) have decent trade-in deals towards a Switch 2.
A modded Switch is a great retroconsole. You can even inject games into the official NES/SNES/N64/GB/GBA apps.
 
What are you planning to do with your current Switch 1 once the Switch 2 gets released?

Are you keeping it for your collection, or are you selling it to make the Switch 2 more affordable?

Myself, I'm selling it. I hope stores (GameStop in this case) have decent trade-in deals towards a Switch 2.
I'm considering maybe selling it, if the market for the Erista units is strong at that point. I hear those are the ones modders tend to want so 🤷
 
What are you planning to do with your current Switch 1 once the Switch 2 gets released?

Are you keeping it for your collection, or are you selling it to make the Switch 2 more affordable?

Myself, I'm selling it. I hope stores (GameStop in this case) have decent trade-in deals towards a Switch 2.
Considering I have a Day 1 Switch, I'll likely keep it and it will exclusively become a Custom Firmware device (Homebrew :D)
 
What are you planning to do with your current Switch 1 once the Switch 2 gets released?

Are you keeping it for your collection, or are you selling it to make the Switch 2 more affordable?

Myself, I'm selling it. I hope stores (GameStop in this case) have decent trade-in deals towards a Switch 2.
If there’s no BC (very unlikely), I’m keeping my Switch.

Otherwise I’ll need to find reason to keep my Switch if Switch 2 has BC.
 
How slow does the L3 server cache get at 1GB? Is that getting to the point of being just as slow as RAM?
While larger cache is indeed slower like @Thraktor said, I believe that cost is a bigger limiting factor. Cache is very expensive since it's on-die and takes away silicon budget from the CPU itself. It is a very delicate balancing act.
 
What are you planning to do with your current Switch 1 once the Switch 2 gets released?

Are you keeping it for your collection, or are you selling it to make the Switch 2 more affordable?

Myself, I'm selling it. I hope stores (GameStop in this case) have decent trade-in deals towards a Switch 2.

If Switch 2 has full BC, I'll try to mod my V1 to make it like a battery-less Switch TV
 
What are you planning to do with your current Switch 1 once the Switch 2 gets released?

Are you keeping it for your collection, or are you selling it to make the Switch 2 more affordable?

Myself, I'm selling it. I hope stores (GameStop in this case) have decent trade-in deals towards a Switch 2.
Keep it, for once. It's nice to have, like, a Switch that's just a Switch on standby. Upsides: games that aren't patched will probably look better on it, and having multiple save files for Pokémon and Animal Crossing.

The right Joy-Con rail of my OLED Model would disconnect at random; I just flung some contact cleaner in there and bang, it's been perfect. There's been some weird stuff with my OLED Model, like some thermal paste falling out through the SD card reader, or some slight screen retention from the home screen, but to look at it or use it, it's near enough day-one perfection. It's a nice sturdy thing and I'd like to keep it in perpetuity, really, for a bunch of reasons.

If I had all the money in the world I would very much like to have a collection with every revision of every Nintendo handheld, but especially all Switch revisions. I just think they're neat.

Even just having an OLED Model for unpatched games, Joy-Con charging, or to fiddle around with is of value to me, even if I'll absolutely mainly use NG Switch. I also think there's a good chance the Dock with LAN Port and NG Switch will be compatible, so, hopefully, I'll have two docks out the gate for it.
 
Outside of online play, most of NSO’s value proposition is pretty solid. It’s online play and eShop download speeds that are the big issue. And while I’m fairly sure that there will be a greater hardware allocation available for online just by virtue of far more performant hardware, I’m still expecting a cap, it’ll just be capped at higher than 10MB/s. Nintendo will always prioritize CPU cycles and RAM usage for games and the OS, but at least there’s more breathing room now. At least a doubling can be anticipated, but hopefully more.
I am hoping so. That was my biggest gripe with the Switch.
If I had the parts I'd do this mod


Instead of that, I wish Nintendo sold shells of previous Nintendo consoles, so I could have a 64 shell to house my Switch 2.
 
It's not a huge distinction, and I'll admit I'm probably hung up on it just because I've been staring at the data forever. But Drake (T239) is a totally separate design from Orin (T234). They're very similar, but almost every component in T239 is different from its T234 counterpart.

The CPU is a slightly different variant, in a different cluster configuration (8 cores in 1 cluster, instead of 3 clusters of 4 cores each), with a larger L2 cache. This is a slight optimization for workloads with a medium number of threads and a lot of locking - exactly what you'd expect in a video game engine.
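To make the cluster argument concrete, here's a toy model of why one 8-core cluster with a shared L2 helps lock-heavy code. The cycle counts are purely hypothetical illustrative values (not measured numbers for any real part): a lock handed off within a cluster travels through the shared L2, while a cross-cluster handoff has to go over the coherent interconnect.

```python
# Toy model: average cost of a lock handoff between two random distinct
# cores, comparing one 8-core cluster vs two 4-core clusters.
# Both cycle counts below are assumptions for illustration only.

INTRA_CLUSTER_CYCLES = 40   # handoff via the cluster's shared L2 (assumed)
CROSS_CLUSTER_CYCLES = 110  # handoff via the coherent interconnect (assumed)

def avg_handoff(cores: int, clusters: int) -> float:
    """Expected handoff cost for a uniformly random pair of distinct cores."""
    per_cluster = cores // clusters
    # Probability the second core shares a cluster with the first core:
    p_same = (per_cluster - 1) / (cores - 1)
    return p_same * INTRA_CLUSTER_CYCLES + (1 - p_same) * CROSS_CLUSTER_CYCLES

print(avg_handoff(8, 1))  # one 8-core cluster  -> 40.0
print(avg_handoff(8, 2))  # two 4-core clusters -> 80.0
```

Under these assumed latencies, the unified cluster halves the average handoff cost; the real gap depends on actual interconnect latencies, but the direction of the effect is the point.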

The GPU is 1 large GPC instead of 2 medium-sized ones. Orin's design resembles the laptop offerings, while Drake's design looks like the desktop GPUs.

The memory controller includes updates from Lovelace that make it more power efficient. There is also evidence it's been updated to support the newer, faster memory standard.

More UPHY lanes have been given to DisplayPort in order to support 4K HDR. The File Decompression Engine has been added (and likely integrated with the SSD controller).
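As a rough sanity check on why more lanes matter for 4K HDR, here's the uncompressed active-pixel payload for a 4K60 10-bit RGB stream. This is a minimal sketch: it ignores blanking intervals and link-layer overhead, and the link figures are standard DisplayPort HBR2 numbers, not anything confirmed about Drake's configuration.

```python
# Uncompressed active-pixel payload for 4K60 at 10 bits per channel (RGB).
# Ignores blanking and protocol overhead; DP rates per the DP 1.4 spec.

width, height, fps, bpc, channels = 3840, 2160, 60, 10, 3
payload_gbps = width * height * fps * bpc * channels / 1e9
print(f"4K60 10-bit payload: {payload_gbps:.2f} Gbit/s")  # ~14.93 Gbit/s

# Effective throughput of a 4-lane HBR2 link (5.4 Gbit/s/lane, 8b/10b coding):
hbr2_effective = 4 * 5.4 * 8 / 10
print(f"HBR2 x4 effective: {hbr2_effective:.2f} Gbit/s")  # 17.28 Gbit/s
```

So even before blanking overhead, a 4K60 HDR stream eats most of a four-lane HBR2 link, which is consistent with dedicating more UPHY lanes to DisplayPort.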

The whole chip is full of micro-optimizations and tiny features that make it work better as a gaming device, even when the change is surprisingly intrusive. For example, the CPU difference is minor in its performance impact, but major in terms of work - you can't break up CPU clusters on-chip, so finding room for a cluster of 8 can be harder than finding room for 3 clusters of 4.

There isn't any fat left over from Orin, either. Orin has to support lots of different cars and combinations of technologies, so its IO controllers are like a Swiss army knife - there is one of everything. There were a few random bits and pieces in Drake that it seemed reasonable to assume were cheaper to keep than to pay money to design away. But no, it turns out Nintendo has cleverly repurposed all of them, so it only has the ones it needs. And in one of the weirdest micro-optimizations, they swapped out the whole USB controller for a new one, because Orin supports 3 USB ports but Nintendo only needs 2.

Ultimately, "cut down Orin" conveys all the information most of even the biggest tech heads care about, and it's what most of us thought was happening for a long time. And no doubt there's a lot of plumbing in the chip design that they share - a huge portion of chip design is figuring out how the pieces talk to each other as much as it is designing the pieces themselves. But Drake is a different design, and is almost obsessively tuned to Nintendo's needs, down to the tiniest detail.
Is there any way to view all of these details on the minutiae of T239?
 
While slower cache is indeed slower like @Thraktor said, I believe that cost is a bigger limiting factor. Cache is very expensive, since it's on-die and takes away silicon budget from the CPU itself. It is a very delicate balancing act.
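To put a very rough number on "cache is expensive": here's a back-of-envelope estimate for the die area of 4 MB of L2 SRAM. Both the bit-cell area (roughly 5nm-class high-density SRAM) and the overhead factor for tags, ECC, sense amps, and routing are illustrative assumptions, not vendor data.

```python
# Back-of-envelope die area for 4 MB of L2 SRAM.
# Bit-cell area and overhead factor are assumptions for illustration.

BITCELL_UM2 = 0.021  # assumed 6T SRAM bit-cell area in um^2 (~5nm-class)
OVERHEAD = 1.8       # tags, ECC, sense amps, routing (assumed)

bits = 4 * 1024 * 1024 * 8
area_mm2 = bits * BITCELL_UM2 * OVERHEAD / 1e6
print(f"~{area_mm2:.2f} mm^2")  # roughly 1.27 mm^2 under these assumptions
```

A couple of mm² doesn't sound like much, but on a small mobile SoC every mm² of cache is a mm² not spent on CPU or GPU logic, which is exactly the balancing act described above.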
I can see why the PS5/Xbox Series don't have Infinity Cache for their GPUs; I didn't realize that cache needs to be balanced against the rest of the SoC. I find it fascinating just learning how companies design their SoCs.
 
I have a feeling that BC will be digital-only for some reason; the info about the new cartridge format leads me to that.
Is there reason to believe the cartridge will look significantly different? They could still easily maintain backwards compatibility with cartridges if they just do what they did with 3DS cartridges and add a notch so they can't be inserted into a Switch 1.
 
I have a feeling that BC will be digital-only for some reason; the info about the new cartridge format leads me to that.
Extremely unlikely.

But even if that’s true, no biggie; take a look at the collection link in my signature. Over 90% of my library is digital lol
 
Is there reason to believe the cartridge will look significantly different? They could still easily maintain backwards compatibility with cartridges if they just do what they did with 3DS cartridges and add a notch so they can't be inserted into a Switch 1.
I have zero understanding of electrical engineering, but my very limited knowledge tells me the physicality of the Switch cartridge isn't the limitation; the controllers and readers are. If they updated those to support both the Switch and Drake, then they probably won't have to ditch physical BC, like with DS/3DS and GB/GBC/GBA.
 
I have zero understanding of electrical engineering, but my very limited knowledge tells me the physicality of the Switch cartridge isn't the limitation; the controllers and readers are. If they updated those to support both the Switch and Drake, then they probably won't have to ditch physical BC, like with DS/3DS and GB/GBC/GBA.
As long as the general cartridge slot remains the same shape or can fit the old cartridge, I see no reason why they can't retain compatibility.

Much like how DS cards fit in a 3DS slot and GB/GBC games fit in a GBA slot.
Even if they update the connectors, they can retain compatibility.
 
Please read this new, consolidated staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.

