• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.
  • Do you have audio editing experience and want to help out with the Famiboards Discussion Club Podcast? If so, we're looking for help and would love to have you on the team! Just let us know in the Podcast Thread if you are interested!

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

It's so close I'm surprised there wasn't a top-down mandate to fit it into a 16GB card.
I'm definitely making some assumptions: that the Switch firmware is probably in the >100MB range (would be good to get some hard numbers on that), and that all the units are the same. Any reprints that include patches will be unambiguously over the threshold, though, as the day one patch pushes it to 16.2GB. It's really hard to say where the exact threshold actually is, but this game is close enough that it's probably above it.
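
The threshold math, roughly. A back-of-the-envelope sketch where the ~100MB firmware reserve is exactly the assumption being made above, not a confirmed figure:

# Does a game image fit on a 16GB game card once system overhead is reserved?
CART_GB = 16.0
FIRMWARE_RESERVE_GB = 0.1  # assumed ">100MB" firmware data on the card

def fits_on_card(game_image_gb, cart_gb=CART_GB, reserve_gb=FIRMWARE_RESERVE_GB):
    return game_image_gb + reserve_gb <= cart_gb

print(fits_on_card(15.8))  # True: comfortably under the threshold
print(fits_on_card(16.2))  # False: a reprint with the day-one patch baked in needs a 32GB card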
 
Did Smash launch on a 32GB card? Because if not, it crossed the threshold and must have transitioned to 32GB after the first run.
 
Smash's initial cart image was 13.6 GB, probably below the threshold. The full size including updates climbed above 16GB at some point, but I couldn't tell you what version they're printing on the carts now.
 
Just checked: the only documented cart revision I can find is at 2.0.1, but this database is probably not exhaustive. It's possible they haven't spun up a new cart revision since it crossed the threshold.

 
Tears of the Kingdom isn't going to do 22 million in 60 days. Pokemon games do not have long tails, partially because the next Pokemon game is always coming. Arceus sold 12 million in the first 6 weeks... and only 2 million more over the next year. Almost exactly the same numbers for the Diamond/Pearl remakes.

Pokemon burns hot and fast. Zelda has a broader "gamer" appeal, and doesn't compete with itself in the same way. Hence the longer tail. Which is one of the reasons that I don't expect Tears to outsell Breath of the Wild. Zelda's appeal hasn't dimmed, and so new Switch buyers are buying the game. It's hard to imagine there is a large market of folks who own Switches, haven't bought Breath of the Wild but want to give Tears a shot. Maybe the sandboxy nature of the gameplay will set YouTube on fire, and it gains steam from that, but I think the most likely result is that Tears is simply able to match Breath of the Wild's sales, which would be a huge accomplishment.

The larger Switch install base will probably push Tears to have a stronger launch than Breath of the Wild, but if it's 20+ million in 2 months, well, I will be very impressed. I'd say I'd eat my hat, but I actually can see my hat from where I'm sitting, and I just ate a lot of brownie, so I'm full, and the idea of eating my hat makes me kinda sick.
 
It would be near disaster level for it not to have a stronger launch than BOTW. I don't remember the exact number but I wanna say it shipped like 3M in the first week?

I expect TotK will triple that on its first day, just by virtue of the changed install base. Definitely also skeptical about 20M+ in the first 2 months but people are kinda ravenous for Zelda right now so I wouldn't be at all shocked.
 
The attach ratio for BOTW was greater than 1 at launch iirc: more copies of the game sold than consoles.
 
I don't know if it was attach rate or just straight-up copies sold, but yeah. I was under the impression BotW sales numbers exceeded Switch numbers because basically everyone who bought a Switch bought BotW, and then also many people who owned WiiUs bought it.
People like me.
 
You guys are probably overthinking it a bit. It's a massive tentpole release, arguably their biggest ever in terms of up-front sales on a highly popular console, one that under normal circumstances would probably be a holiday game. Nintendo has not shied away from overshipping its games, and they'll have every incentive to stuff the channels with TotK. They'll be perfectly happy to tout that it's their fastest-selling game ever, even if it means sell-through won't catch up until later in the year. I'd fully expect 20m shipped pretty quickly.
 
It would be near disaster level for it not to have a stronger launch than BOTW. I don't remember the exact number but I wanna say it shipped like 3M in the first week?

I expect TotK will triple that on its first day, just by virtue of the changed install base. Definitely also skeptical about 20M+ in the first 2 months but people are kinda ravenous for Zelda right now so I wouldn't be at all shocked.
It's a little hard to game out the numbers, as sales charts only report physical sales, but Nintendo will sometimes drop press releases about total sales for big games. Scarlet/Violet did 10 million in its first 3 days, according to Nintendo. If I had to make a guesstimate, that represents a ceiling on the speed of sales for Switch games, which are somewhat throttled by Nintendo's ability to physically create and ship carts.

Splatoon 3 (slightly more than) doubled sales in its first quarter over Splatoon 2, but seems headed toward roughly the same or slightly lower lifetime sales. In terms of "big game, sequel on Switch, sales speed affected primarily by install base" that is the closest benchmark I can think of. Xenoblade Chronicles 2 vs 3 is a similar doubling of sales pace while on target for similar lifetime sales, but I'm not sure it's a useful datapoint due to it being a much smaller franchise. Fire Emblem Engage saw no real increase in pace over Three Houses, but again, not super trustworthy.

Maybe I'm just overcompensating for the fact I know I live in a Nintendo bubble, but while I'm sure that Tears will be one of the best selling games for Switch not called Mario Kart, I think Breath of the Wild's 30 million is the sales ceiling, that ~3 million a day is the max sales pace that is physically possible until our All Digital Masters have conquered the Earth, and I think a trebling of sales pace represents the optimistic extreme of what the install base will do in terms of churning through those lifetime sales early on.

If Tears does burn as hot and fast as Pokemon then we would expect to see 50% of lifetime sales in the first week or so: 15 million. That's still a pretty far cry from 9 million on launch day; CoD:MWII did 8 million in its opening 3 days, for a cross-platform comparison. I think Tears is going to be the best selling game of 2023, but GTA:V remains the fastest selling game of all time, with 30 million in 6 weeks, and I don't think Tears is gonna break that.
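
Running this post's own numbers, for anyone who wants to poke at them (a sketch; every input is a thread estimate, not an official figure):

# Guesstimate inputs pulled from this post and the quoted ones
botw_first_week = 3.0          # million; BOTW's rough first-week shipment
install_base_multiplier = 3.0  # "triple that on its first day" from the quoted post
daily_ceiling = 10.0 / 3       # Scarlet/Violet: 10M in 3 days -> ~3.3M/day cap
lifetime_ceiling = 30.0        # million; BOTW lifetime as the assumed cap

launch_day = botw_first_week * install_base_multiplier  # the ~9M day-one claim
pokemon_pace_first_week = 0.5 * lifetime_ceiling        # ~15M if it burns hot and fast

# Note the claimed launch day is ~3x the fastest daily pace any Switch game has shown
print(launch_day, daily_ceiling, pokemon_pace_first_week)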
 
If Nintendo is planning a release this year, then they need to start producing hardware really soon. But we haven't heard anything, so it's starting to look unlikely they will launch it this year.

If there is a new Mario game coming out, it will most likely have an OLED model produced alongside it, like the other big games.
 
Yeah it’s too quiet. I don’t see it coming this year.
 
Nintendo doesn't tend to go backwards with screen technologies. Expecting Switch 2 to not be OLED would be like saying the DS could have launched without a backlight. Not bloomin' likely.
ENTER: [image]

(In all seriousness, no way they can revert to LCD in this day and age)
 
Could be a compromise due to rising costs and inflation, and wanting to prioritize the SoC and RAM.

Not saying it's likely, but I wouldn't completely rule it out.
 
As we thought, the big difference between the Z1 and other Phoenix APUs is that the Z1 can go as low as a 9W TDP. At that power, I can't see there being a practical difference from the Steam Deck, and buyers of this device won't play that low.

 
Call me optimistic, but I'm expecting TOTK to wipe BOTW out. It can reach half of BOTW's lifetime sales in a month.
No doubt it will start much bigger, with like 8x as many systems out there as there were by the end of 2017. Matching/exceeding the eventual total of a previous evergreen, though, is always hard to say.
The neural texture compression seems more for offline production than something that happens during the game. You store the compressed textures on the card and then decompress them on the fly.
The compression happens at production time, but the benefits extend to runtime, since it allows higher resolution textures using less RAM. The earlier link said this new method took more processing time, but I have no idea what kind of difference would be considered notable or negligible when it comes to textures.
 
I know, I was the first one to post the paper. But that compression in real time doesn't seem beneficial when you have to store that 256MB source. That's the biggest benefit here: cutting down the sizes so you can save on space. That in turn saves on RAM usage, which you can compress further if needed. It takes more time to compress from the high quality 4K master, but from an already compressed 2K source it'll be much faster (quality is unknown though)
 
Does this work well on Ampere?
 
I know, I was the first one to post the paper. But that compression in real time doesn't seem beneficial when you have to store that 256MB source.
Doesn't seem beneficial, agreed, but why would that be a consideration anyway? Textures aren't being stored uncompressed today, so it's not a change.
That's the biggest benefit here: cutting down the sizes so you can save on space. That in turn saves on RAM usage, which you can compress further if needed.
To me, being able to get higher resolution texture output without needing more RAM/bandwidth seems like a bigger deal. Even for a game installing to 200GB on an arbitrary medium, what the GPU can actually display is a bottleneck. And with RAM increasing much slower than it used to, finding other ways to work around that bottleneck seems like a great thing.
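
To put rough numbers on the RAM side (a sketch; the NTC bits-per-texel figure is taken from the paper's tables and varies with quality settings, the others are the standard block-format sizes):

# VRAM footprint of a single 4096x4096 texture at various encodings
def texture_mib(width, height, bits_per_texel):
    return width * height * bits_per_texel / 8 / 2**20

for name, bpp in [
    ("RGBA8, uncompressed", 32.0),       # 4 bytes per texel
    ("BC7", 8.0),                        # 128-bit block per 4x4 texels
    ("ASTC 12x12", 128 / 144),           # 128-bit block per 12x12 texels
    ("NTC, ~iso-quality per paper", 0.55),
]:
    print(f"{name:30s} {texture_mib(4096, 4096, bpp):7.1f} MiB")

Same texture: ~64 MiB raw, ~16 MiB in BC7, and under 2 MiB at the most aggressive rates, which is where the "higher resolution in the same RAM" argument comes from.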
 
Does this work well on Ampere?
They only tested it on a 4090, so it'll probably be much slower on Ampere

Doesn't seem beneficial, agreed, but why would that be a consideration anyway? Textures aren't being stored uncompressed today, so it's not a change.
This is just to get more quality out of a given file size. So this whole thing isn't really novel, but a better way to do what's already being done.
 
Honestly, the idea of a 2025/26 release baffles me.

I think at this point it's no longer about "how well Switch sales hold up" but it's about "developers will need new hardware".

Not because developers' needs are at the top of their minds but because, in my opinion, by that time (2025/26) most developers will want a product where they can put their game easily and not jump through hoops to adapt Xbox Series S titles onto a console less powerful than the old Xbox One. You would be talking about ultra-miracle ports that probably no software house would want to do.

It would be like having to adapt Android apps for smartphones with old processors and 1GB of RAM. Could it be done? Maybe... would developers do it? Not a chance. (If you don't have a smartphone with 4GB of RAM nothing works these days 🤣)

Another two/three years of Switch seems unthinkable from this point of view.
 
This is just to get more quality out of a given file size. So this whole thing isn't really novel, but a better way to do what's already being done.
If it's doing it in a different way that produces much better results, that seems pretty novel to me. And it's not just getting more quality out of a given file size, but getting more quality out of a given VRAM size.
 
Does this work well on Ampere?
In theory it should work as well on Ampere as on Ada, but there's going to be a significant performance cost either way, as you're moving from work that's done basically for free on texture units (BC decompression) to the tensor cores.

Here's a quote from the section on decompression performance from the paper:
6.5.2 Decompression. We evaluate real-time performance of our method by rendering a full-screen quad at 3840 × 2160 resolution textured with the Paving Stone set, which has 8 4k channels: diffuse albedo, normals, roughness, and ambient occlusion. The quad is lit by a directional light and shaded using a physically-based BRDF model [10] based on the Trowbridge–Reitz (GGX) microfacet distribution [76]. Results in Table 4 indicate that rendering with NTC via stochastic filtering (see Section 5.3) costs between 1.15 ms and 1.92 ms on a NVIDIA RTX 4090, while the cost decreases to 0.49 ms with traditional trilinear filtered BC7 textures.
So rendering a 4K image with trilinear BC7 textures takes 0.49ms on an RTX 4090, and NTC bumps that to between 1.15ms and 1.92ms. I'd be very surprised if the BC7 case is actually decompression limited at all (the RTX 4090's texture units have a throughput of 1.3 trillion texels a second), so let's assume the actual rendering takes 0.49ms, which means the neural texture decompression takes between 0.66ms and 1.43ms. The RTX 4090 tensor cores are capable of around 330 Tflops FP16, whereas on T239 the absolute best case scenario is around 15 Tflops of FP16 tensor core performance. So assuming that it's 100% limited by tensor core throughput, on T239 you'd be looking at around 14.52ms to 31.46ms for the same task. Even at 1080p, that would imply 3.63ms to 7.87ms just for texture decompression. Of course this is a very crude measurement, but that's quite a high cost when BC/ASTC decompression is basically free.
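
For anyone who wants to sanity-check that scaling arithmetic (the 330 and 15 Tflops figures and the "100% tensor-limited" premise are all the simplifying assumptions from the paragraph above, not benchmarks):

# Isolate the NTC decompression cost on a 4090, then scale by tensor throughput
bc7_frame_ms = 0.49                 # full render with trilinear BC7, per the paper
ntc_frame_ms = (1.15, 1.92)         # full render with NTC stochastic filtering
ntc_only_ms = [t - bc7_frame_ms for t in ntc_frame_ms]   # ~0.66 .. ~1.43 ms

scale = 330 / 15                    # RTX 4090 FP16 tensor Tflops / best-case T239, ~22x
t239_4k_ms = [t * scale for t in ntc_only_ms]            # ~14.5 .. ~31.5 ms at 4K
t239_1080p_ms = [t / 4 for t in t239_4k_ms]              # quarter the pixels: ~3.6 .. ~7.9 ms

print(t239_4k_ms, t239_1080p_ms)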

I'm not trying to disparage the technology, or anything like that, but it's an academic paper, not something Nvidia is advertising for commercial use. The way they managed to get random access working is very smart, and I'm impressed that they can get anywhere near real time performance when they require running a neural net to decompress every single texel, but people should temper their expectations if they expect to see it in games soon.

Edit: Also, as I mentioned before, I suspect a large portion of the benefit here isn't actually from the neural representation, but the fact that they're combining together more channels than is possible with any current block compression format. Looking over the paper again, they do suggest as much:
Benefits Proportional to the Channel Count. Our method shows a high compression efficacy for materials with multiple channels. However, for lower channel counts, e.g., just RGB textures, our storage cost is similar at iso-quality (Table 5). This means that our method would lose some of its advantage if it was to be applied to single textures or regular images.
If you look in table 5 in the paper, they claim a PSNR of 28.96 and a bits per pixel of 0.55 if the method is limited to four channels, which is the maximum supported by any current block format. Up in table 3, when comparing different compression algorithms with the same dataset, they show that ASTC 12×12 (which, FYI, Switch supports) achieves a PSNR of 28.90 with a bits per pixel of 0.43. So limited to the same channel count of existing algorithms, ASTC can beat them in terms of compression for a similar PSNR. That's not to say that their method wouldn't win out over a many-channel block format, but I strongly suspect that applying more traditional approaches to higher channel count textures could at least get close in performance to NTC while having the very significant benefit of being decompressible in hardware within texture units.
 
me reading every other Thraktor post
This is very funny and describes me as well whenever I read tech heavy posts, ahahahhahahaha
So, just for funsies, how about a little explainer? This loops back to the "physically based rendering" conversation from the Metroid Prime remaster.

TL;DR: @Thraktor is suggesting - and I agree - that this paper actually does one Bleeding Edge AI thing, and one More Boring Modernization Thing, and that the cool improvements that Nvidia is reporting aren't actually because of Bleeding Edge AI, but the Boring Modernization part.

Longer Version:

One way to think of an image is that it is a map of how light bounces off a surface. Go with me here, it sounds more complicated than it is.

Take a piece of printer paper, a blank white sheet. Go outside in bright, clear sunlight. It practically glows. It can be literally blinding. If you were to make an image of that white printer paper, you'd just have a rectangle of pure, white pixels.

Now say you use that image as a texture in a video game. Just like the white light of the sun bounces off the white piece of paper and projects white light into your eyes, the game engine uses each pixel of the image as a map of the color of light that the texture bounces back.

But a blank white texture looks... terrible. Like, take that white piece of paper out of the bright sun: you pick up the colors of the light in the room a little, you get little shadows where there are tiny creases in the paper, you can see the grain a little bit. So in classical game engines, the artist wouldn't just make a flat white texture for a piece of paper, they'd add all those little details to the image. Much better looking, but it's still just one channel of data, one map.

The problem, then, is what happens when the light changes? You move the piece of paper around in the game engine, or you move the camera around so you see it from a different angle? In real life, you'd see all the little shadows shift, the color from the light change. If you don't account for that in the game engine, then it doesn't look like a piece of paper anymore, it looks like a picture of a piece of paper, if that makes sense.

So, over time, game engines started to add new channels to the textures. Just like you can think of an image as a map that says "when light hits this pixel, send back this color", you can think of these other channels as a second set of maps that tell the game engine more about how light interacts with the object. One channel might be a "roughness" map: an image of our piece of paper that shows the parts of it that are smooth (and shinier) vs the parts that are rough (and less shiny), or the parts that are crumpled (and cast little micro-shadows) versus the parts that are flat (and don't cast micro-shadows).

Engines can then combine all the information from these various maps to decide how to shade the surface. Take the color from the first channel and use the other channels to shade the surface. This is how you get highlights on shiny objects but not on rough ones, or how a brick wall is a single texture, but when you move the camera, the shadows between the bricks shift and move.
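
In code terms, that combination step is just per-texel arithmetic over the channel maps. A toy sketch of the idea (a deliberately simplified Lambert-plus-highlight model; no real engine shades exactly like this, and all the names here are made up for illustration):

import numpy as np

def shade(albedo, normal, roughness, ao, light_dir, light_color):
    # albedo:    (H, W, 3) base color map ("send back this color")
    # normal:    (H, W, 3) unit surface directions per texel
    # roughness: (H, W)    0 = mirror smooth, 1 = fully rough
    # ao:        (H, W)    micro-shadowing, 0 = fully occluded creases
    # How directly each texel faces the light:
    ndotl = np.clip(np.einsum("hwc,c->hw", normal, light_dir), 0.0, 1.0)
    diffuse = albedo * ndotl[..., None]
    # Smooth texels get a tight highlight, rough ones don't:
    specular = (1.0 - roughness[..., None]) * ndotl[..., None] ** 32
    # AO darkens the creases regardless of where the light is:
    return (diffuse + specular) * ao[..., None] * light_color

Change light_dir and the highlights and micro-shadows shift, which is exactly why a single baked image of the paper stops looking like paper once the camera moves.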

Modern texture formats know how to compress 4 of these channels in a smart way, but modern game engines actually need 10 channels most of the time. This is the "physically based rendering" that Metroid Prime: Remastered uses. This Nvidia paper compares those formats to a new format that uses AI to compress/decompress the textures, but also supports 10 channels, instead of the usual 4.

Thraktor's assertion - which I think is correct - is that the advantages that Nvidia's new format has are probably not AI related, but just because they smartly handle the number of channels modern game engines need, and that it's more likely we'll see an emerging format that supports 10 channels, but doesn't use the AI compression/decompression.
 
Can engine channels be compared to Photoshop layers? Like, we see Tears of the Kingdom's box art, but there's a lot going on that's deeper than a single image: it's layers of placement of Link, edited lighting here, contrast there, etc.
 
Been thinking about how obscenely long Pokemon Home is taking for the mainline games, and my gut started telling me they're saving it so they can shadowdrop it while also doing a presentation for the SV DLC. But then I started thinking more, and remembered that a next-gen patch is supposed to arrive with the Indigo Disk - and now I wonder, does that Spring window for Home mean we're going to see the REDACTED reveal in that timeframe? Nintendo gives TotK a few weeks to breathe, and then around (or before) E3 time they reveal the new console for a Nov 2023 - Mar 2024 release. TPC can then finally talk about SV's REDACTED patch a while later, as well as a bunch of info for The Teal Mask and a Home shadowdrop. This timeline assumes a fair amount of things but I wanted to throw it out there

EDIT: Release window for Home is actually "early 2023" and not Spring 2023
 
While I don't think it's likely, I'd love to see it. I think they'll just drop Home out of the blue with a date for more info on the Teal Mask. Then, new console reveal, then a week or two later, Teal Mask trailer, updated release window for Indigo Disk, "with enhancements for [REDACTED]".

Which basically means I agree with the broad strokes. Honestly the Indigo Disk leak is one of our saving graces hope-wise, other than a purely analytical look at the processor's development.
 
Honestly the Indigo Disk leak is one of our saving graces hope-wise,
You mean the person that leaked the Present’s content and also commented on a next gen patch for S/V?

 
Re: textures.

Pikmin 4's bulborbs are less detailed than in Pikmin 3, with no normal map to give their spots depth. :(
Hope that was from an earlier build. Then again, the game was five months out when that trailer was shown, so who knows.
 
You mean the person that leaked the Present’s content and also commented on a next gen patch for S/V?

That's the one, though they also leaked the existence of Tera forms (backed up by the most reliable Pokemon leaker) and that was nowhere in the presentation
 
Been thinking about how obscenely long Pokemon Home is taking for the mainline games, and my gut started telling me they're saving it so they can shadowdrop it while also doing a presentation for the SV DLC. [...]
Ooh, interesting theory. I think this could be possible.
 
Also, do you think more Switch games will get 'next-gen patches/updates' for the next system?
 
Nope. Outside of games that are poised to get DLC, like Zelda, Pokemon, and Splatoon, I don't foresee too many patches. From third parties, maybe if the game is a GaaS, like Apex, Fortnite, and Overwatch.

So, I have been out of the loop for some time regarding the Switch successor. Is there something behind people now discussing 4nm rather than 8nm?
8nm would make the Drake design very difficult to use. It's big, consumes a good deal of power, and reining that in is pointless when it's cheaper to just make a smaller chip.
 
Same reason as it was back when the Nvidia leak happened. Power consumption.
And Nvidia's capacity for 4N >> their capacity for 5LPP/LPE, the only other node suitable for a processor of this size at the same power consumption as the Nintendo Switch (HAC-001)(V1).
 
Please read this staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.