
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

It's only impressive if it's predictive. If you're so smart, what's next?
Education needs to be added to the loop ...
So, just for funsies, how about a little explainer? This loops back to the "physically based rendering" conversation from the Metroid Prime remaster.

TL;DR: @Thraktor is suggesting - and I agree - that this paper actually does one Bleeding Edge AI thing and one More Boring Modernization Thing, and that the cool improvements Nvidia is reporting aren't actually because of the Bleeding Edge AI, but because of the Boring Modernization part.

Longer Version:
... I love this thread, it's not just people arguing, I can actually learn stuff :geek:
 
Can engine channels be compared to Photoshop layers? Like, we see Tears of the Kingdom's box art, but there's a lot going on that's deeper than a single image: it's layers of placement of Link, edited lighting here, contrast there, etc.
Yes and no! Let me show you an example.

Here is the "albedo" channel on a brick wall texture. Albedo is a fancy physics word, but in this case it basically means "raw image." When a gamer thinks of a texture, this is the channel they think of. Placed in a spoiler tag, just so we don't clog everyone's screens

Edited to add: Imgur is a butt sometimes. Try opening the images in a new tab and then refreshing if they don't load for you?



See? Simple, kinda flat looking because of the lighting. If that were slapped on a wall surface in a game it would look... kinda bad? But functional. Here is a "height" channel.



Okay, what the hell is going on here? It looks like a black and white copy of the "albedo" texture, but if you combined these two (like Photoshop layers) you wouldn't get anything good. What gives?

Well, it's not an image that you show the user directly. The "height" channel tells the game engine how tall each part of the brick wall is. So when bricks have bumps that stick out, those pixels are colored white, and the deeper parts where the mortar is, those are colored black. In game, if you look at this brick wall directly, the lighting engine will take the first image, and then have more light on the tall parts and less on the deep parts because that's how light works. So even though the texture is just a single flat image, it lights up like a realistic brick wall.

One way to think about it is it's like the difference between looking at a brick wall in real life, versus looking at a wall that someone has put brick wallpaper on. If you point a light at the real brick, you'd get shadows in the mortar. But if you pointed at the wallpaper, you wouldn't.

This particular texture has more channels, so let's look at some of them. This is the "normal" channel.



"Normal" is once again a fancy physics word. This is like a more complex version of the height map. Each shade of cyan here tells the lighting system what direction all the little surfaces in this texture are facing. If you can imagine our brick wall in real life again, if you put a light on the left then the left sides of the bricks would reflect light and get highlights. This normal map tells the lighting engine how all that works on this surface.

Then we get the "Ambient Occlusion" channel.


"Occlude" just means "blocks the light of". "Ambient occlusion" basically means "the shadows that an object casts on itself," and that's what you see here. In the gutters between the bricks, where the mortar is, we get dark spots where the bricks are going to cast little shadows, and then white at the tops of the bricks, where there are no shadows, and a few small patches of grey in between. You can see how it lines up with the height map, but isn't exactly the same thing.

And our last black and white channel on this image is for "roughness".



Roughness means what it sounds like: how physically rough (or smooth) a surface is. Here, the white parts are the most rough, the dark parts are the most smooth. If you think of a smooth piece of paper, it looks bright when you hit it with light, almost shiny. Now crumple it up and point the same light at it - it doesn't look shiny anymore! That's because the surface is rough - it reflects the same amount of light, but it sends it in all different directions, so it doesn't hit your eye directly, making it less, well, shiny.

So now we put all these channels into our game engine and point the camera at our (totally flat!) brick wall. What we can do is take the first image (the albedo) and then shade it using the rest of the channels. The tallest parts should get the most light, the shallow parts less light (height map). The bumps and ridges facing the light should reflect more light back than the parts facing the camera (the normal map). The light should cast shadows that conform to the shape of the bricks (the ambient occlusion map). The smooth parts of the brick should look shinier than the rough mortar in between (the roughness map).
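In code, that whole paragraph looks roughly like this. This is a toy CPU version in Python/NumPy, purely to show the data flow; real engines do this per pixel in a GPU shader, and every name and constant here is mine, not any engine's actual API:

```python
import numpy as np

def shade(albedo, normal, ao, roughness, light_dir, light_color):
    """Toy per-pixel shading: albedo (H,W,3), normal (H,W,3) packed in 0..1,
    ao and roughness (H,W,1), all floats in 0..1."""
    # Unpack the normal map from 0..1 RGB back into -1..1 direction vectors.
    n = normal * 2.0 - 1.0
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    l = light_dir / np.linalg.norm(light_dir)

    # Diffuse: surfaces facing the light get more of it (the normal map at work).
    n_dot_l = np.clip(np.sum(n * l, axis=-1, keepdims=True), 0.0, 1.0)

    # Specular: smooth (low-roughness) pixels get a tighter, brighter highlight.
    view = np.array([0.0, 0.0, 1.0])                 # camera looking straight on
    half = (l + view) / np.linalg.norm(l + view)
    n_dot_h = np.clip(np.sum(n * half, axis=-1, keepdims=True), 0.0, 1.0)
    specular = n_dot_h ** (2.0 + (1.0 - roughness) * 126.0)

    # AO darkens the self-shadowed crevices regardless of where the light is.
    return np.clip((albedo * n_dot_l + specular) * ao * light_color, 0.0, 1.0)
```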

You get something like this



This is the same flat image as the albedo texture above, just rotated and lit using all those other channels. There is no real geometry here, we don't have a complex 3D object, it's just a flat surface, but the lighting system makes all that complexity possible. And we should be able to change the geometry, change the lighting, and still have it look good. Here is the same texture wrapped around a sphere



So, yes, they are like Photoshop layers, except the lighting system isn't showing them directly; it's using them to inform a complex engine that decides how each pixel will look by combining the various channels with the color of the light, the angle of the light, and the angle of the camera.
 
it's using them to inform a complex engine that decides how each pixel will look by combining the various channels with the color of the light,
Wow. It goes by pixel 🥴

And yes, I managed to see every picture! Thanks for the explanation. Was veery interesting! I can’t imagine how complex it must be for open world games
 
Wow. It goes by pixel 🥴
This is why resolution is kind of a dumb way to talk about consoles. "Is Switch an HD console if it can't really do 1080p? And why can't it, if the Wii U could???" Because "modern" game engines do ~2.5x as many calculations per pixel as "classic" engines do.

And yes, I managed to see every picture! Thanks for the explanation. Was veery interesting! I can’t imagine how complex it must be for open world games
If you wanna get really deep, but still accessible, this is a good video on how Breath of the Wild's lighting system works
 
This is why resolution is kind of a dumb way to talk about consoles. "Is Switch an HD console if it can't really do 1080p? And why can't it, if the Wii U could???" Because "modern" game engines do ~2.5x as many calculations per pixel as "classic" engines do.


If you wanna get really deep, but still accessible, this is a good video on how Breath of the Wild's lighting system works
Can't wait for "Is Switch REDACTED an Ultra HD Console????"
 
This is why resolution is kind of a dumb way to talk about consoles. "Is Switch an HD console if it can't really do 1080p? And why can't it, if the Wii U could???" Because "modern" game engines do ~2.5x as many calculations per pixel as "classic" engines do.


If you wanna get really deep, but still accessible, this is a good video on how Breath of the Wild's lighting system works
Saw the video. Wow. Didn’t know A LOT went into lighting and texture. All these engines must be a hell to code for 🥴
 
Been thinking about how obscenely long Pokemon Home is taking for the mainline games, and my gut started telling me they're saving it so they can shadowdrop it while also doing a presentation for the SV DLC. But then I started thinking more, and remembered that a next-gen patch is supposed to arrive with the Indigo Disk - and now I wonder, does that Spring window for Home mean we're going to see the REDACTED reveal in that timeframe? Nintendo gives TotK a few weeks to breathe, and then around (or before) E3 time they reveal the new console for a Nov 2023 - Mar 2024 release. TPC can then finally talk about SV's REDACTED patch a while later, as well as a bunch of info for The Teal Mask and a Home shadowdrop. This timeline assumes a fair amount of things but I wanted to throw it out there
It's pretty likely that the Home announcement will be a completely standalone thing.

They're already straining the "early 2023" window pretty hard, and I suspect that it's ultimately being held back to ensure stability. The one other thing that would be relevant to Home would be an emulated release, which is still possible, but historically those have had to wait for the next compatibility update for Bank/Home support and not the other way around.
 
It's pretty likely that the Home announcement will be a completely standalone thing.

They're already straining the "early 2023" window pretty hard, and I suspect that it's ultimately being held back to ensure stability. The one other thing that would be relevant to Home would be an emulated release, which is still possible, but historically those have had to wait for the next compatibility update for Bank/Home support and not the other way around.
Forgot that it was just early 2023 instead of Spring, but that makes the silence even more confusing. Imo, it's either they're saving it for the REDACTED Presents or they're being SUPER thorough for cloning bugs after BDSP and launch SV
 
And Nvidia's capacity for 4N >> their capacity for 5LPP/LPE, the only other suitable node for the size of the processor at the same power consumption as Nintendo Switch (HAC-001)(V1)
I think TSMC's N6 process node is also a suitable process node, especially since TSMC's N6 process node still has better performance per watt than Samsung's 5LPP process node. And Nvidia probably still has capacity, especially if TrendForce's estimate of ≥30,000 A100 GPUs being needed for ChatGPT still holds true, and with BlueField-3 being in mass production. The only downside is that TSMC's N6 process node's transistor density (~114.2 MTr/mm²) is not as high as Samsung's 5LPP process node's transistor density (126.89 MTr/mm²).
 
I think TSMC's N6 process node is also a suitable process node, especially since TSMC's N6 process node still has better performance per watt than Samsung's 5LPP process node. And Nvidia probably still has capacity, especially if TrendForce's estimate of ≥30,000 A100 GPUs being needed for ChatGPT still holds true, and with BlueField-3 being in mass production. The only downside is that TSMC's N6 process node's transistor density (~114.2 MTr/mm²) is not as high as Samsung's 5LPP process node's transistor density (126.89 MTr/mm²).
Good catch! Well you're full of beans, aren't you?

How does the density translate? Would 4N result in a die size similar to Tegra X1? Or X1+?
How do 5LPP and N6 compare?
 
Good catch! Well you're full of beans, aren't you?

How does the density translate? Would 4N result in a die size similar to Tegra X1? Or X1+?
How do 5LPP and 6N compare?
With process nodes, it's difficult to give an exact answer, as these are all estimates based on real-world knowledge and data.


On top of that, not all parts of the silicon see the same shrink in size; with SRAM, for example, there's a smaller change going from 7nm to 5nm.


6N isn't really a thing, but N6 is. I bring this up because 4N is Nvidia-only, while N4 is not; N4 is for any other customer. It's minor but important to note. Why? Because 4N has its own density of 121-125 MTr/mm², but N4 can have a different density, depending on who is designing the silicon and how. I forget if N4 is actually an Apple-exclusive node while 4nm is just everyone else and 4N is Nvidia-only, but that's beside the point.

The RDNA3 GCD has a density of about 153 MTr/mm² if I'm not mistaken. That's on 5nm. But Nvidia's Hopper is on 5nm as well, and that is only about 96 MTr/mm².

Samsung, well, they are certainly a thing that exists. Their 5LPE was able to do 126 MTr/mm² or so, but the peak for TSMC based on the reported metrics is supposedly 173 MTr/mm²; of course, not everyone hits that.

As @Dakhil always mentions, this nomenclature is used for marketing purposes. Samsung's 5LPE and its family are more comparable to TSMC 7nm in terms of density and SRAM scaling, not to mention performance. There's always more than meets the eye. This doesn't include the new SEC 4nm, which is supposedly much better, but still worse than TSMC's 5nm class; it's comparable to TSMC 6nm, the evolution of 7nm.


To go back to the question, though, and attempt to give a straightforward answer: the Erista SoC (TX1) was about 118.1 mm² and had 2B transistors or so.

The Mariko SoC (TX1+) was about 100 mm², same transistor count of course.


Drake on SEC 8N, in theory, based on what we can gather from real products and annotated dies, would be around 160-180 mm². 8N is about 43-45.6 MTr/mm² if I'm not mistaken.

If on TSMC 7nm or 6nm, it should be in the range of 90-110 mm².

If on NV 4N, it should be about 60-70 mm².

If on SEC 5LPP, it should be similar to TSMC 6-7nm in size.

4N would be the most efficient, and the most cost-effective if we look at it in terms of functional dies per wafer.


This is only to the best of my knowledge; it's not my specialty. And it assumes that SRAM scaling is also going smoothly and not stagnating in its decrease. Hence why I gave a range.
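(For anyone who wants to sanity-check those ranges, the back-of-envelope math is just area ≈ transistors ÷ density. The Drake transistor count below is my own assumption, back-derived from the 8N range above, and the "effective" densities are likewise rough numbers implied by this post, not official vendor figures.)

```python
# Back-of-envelope die sizes: area (mm²) ≈ transistors (MTr) / density (MTr/mm²).
DRAKE_TRANSISTORS_M = 7_500  # assumed: ~7.5B transistors, implied by 160-180 mm² @ 8N

effective_density = {      # MTr/mm², rough effective numbers, not vendor peaks
    "Samsung 8N":  44.0,   # midpoint of the 43-45.6 quoted above
    "TSMC N7/N6":  80.0,   # consistent with the 90-110 mm² range above
    "Nvidia 4N":  115.0,   # consistent with the 60-70 mm² range above
}

for node, density in effective_density.items():
    print(f"{node:>11}: ~{DRAKE_TRANSISTORS_M / density:.0f} mm²")
```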
 
Samsung, well, they are certainly a thing that exists. Their 5LPE was able to do 126 MTr/mm² or so, but the peak for TSMC based on the reported metrics is supposedly 173 MTr/mm²; of course, not everyone hits that.
Actually, TSMC's N5 process node has a max transistor density of 138.2 MTr/mm², not 171.3 MTr/mm². (Angstronomics has an article explaining why the max transistor density calculation was initially wrong.)
 
6N was a typo on my part.
I meant that if it were 6N, it would be Nvidia-exclusive, but Nvidia doesn't seem to have an exclusive customization of a 6nm-class node; it does for 5nm (4N) and 7nm (7N). I know 6nm is an extension of the 7nm family, but the N placement tells you who it's for, and Nvidia's custom nodes put the N after the number. :p

(Plus they would announce it)

Actually, TSMC's N5 process node has a max transistor density of 138.2 MTr/mm², not 171.3 MTr/mm². (Angstronomics has an article explaining why the max transistor density calculation was initially wrong.)
Thank you for that correction.
 
N4 should be available in general, not exclusive to Apple.
There is no specific node named '4nm'; each of the 3 bleeding edge foundries have their own naming scheme, and none of them use 'nm' anymore.
TSMC likes to use N<number><plus a 1 letter suffix sometimes> (btw, this does make the 4N for Nvidia very unusual, naming-wise)
Samsung likes <number><3 letter suffix>, although sometimes you'll see "SF<number>"
Intel has recently switched to "Intel <number>" more or less (Intel 7, Intel 4, and Intel 3), though after Intel 3, it becomes "Intel <number>A" (see Intel 20A and Intel 18A). Yes, the A is for Angstrom.

And if the above sounds silly to you, reader, then you should check out the names for DRAM nodes :p
(for those, what would normally be some number between 10 and 19 instead get labeled as... 1x, then 1y, then 1z, then it wraps around to 1a (as in alpha), then 1b (as in beta))
 
There is no specific node named '4nm'; each of the 3 bleeding edge foundries have their own naming scheme, and none of them use 'nm' anymore.
TSMC likes to use N<number><plus a 1 letter suffix sometimes> (btw, this does make the 4N for Nvidia very unusual, naming-wise)
Samsung likes <number><3 letter suffix>, although sometimes you'll see "SF<number>"
Maybe not explicitly in the name, but TSMC and Samsung definitely still use "nm" when marketing.

Nvidia has been using <number>"N" ("N" = node) for describing process nodes custom tailored (no matter if TSMC or Samsung is used) for Nvidia since Volta/Turing. (Volta/Turing is the only era Nvidia also used "FF" ("FF" = FinFET) after <number>, but before N, in the process node nomenclature.)

"SF"<number> is a very recent change, which is why "SF"<number> is currently seldom used.
 
I know this is a long shot, but what if someone datamines Tears of the Kingdom to see if there is a file or something pointing to another version for Nintendo Switch Redacted?
 
I know this is a long shot, but what if someone datamines Tears of the Kingdom to see if there is a file or something pointing to another version for Nintendo Switch Redacted?
I think this question was already asked and the answer was no. Maybe I'm misremembering. And since the game has leaked, we would have heard something by now.
 
Here is the "albedo" channel on a brick wall texture. Albedo is a fancy physics word, but in this case it basically means "raw image." When a gamer thinks of a texture, this is the channel they think of. Placed in a spoiler tag, just so we don't clog everyone's screens

Edited to add: Imgur is a butt sometimes. Try opening the images in a new tab and then refreshing if they don't load for you?



See? Simple, kinda flat looking because of the lighting. If that were slapped on a wall surface in a game it would look... kinda bad? But functional.

I thought Albedo was a character from XenoSaga?
 
This is why resolution is kind of a dumb way to talk about consoles. "Is Switch an HD console if it can't really do 1080p? And why can't it, if the Wii U could???" Because "modern" game engines do ~2.5x as many calculations per pixel as "classic" engines do.


If you wanna get really deep, but still accessible, this is a good video on how Breath of the Wild's lighting system works
Zoinks, both your post and that video are excellent. I learned so much, never knew how much game engines could do by combining "flat" images. Thanks!
 
@NateDrake

What's your assumption now regarding SSD tech like the PS5's for the Next Gen Switch?

You think this is a safe bet now?
The bottleneck of the OG Switch, storage-wise, is the CPU, not the storage. So even if REDACTED's storage is not upgraded, real-world performance should be much better.

We also know from the leak that Drake has dedicated file decompression hardware, so whatever storage they go for, actual performance should not be bottlenecked this time.

Personally I hope they hit 1GB/s, as that's plenty for Nanite / fast asset streaming, and is what devs originally asked Sony for with the PS5.
 
The bottleneck of the OG Switch, storage-wise, is the CPU, not the storage. So even if REDACTED's storage is not upgraded, real-world performance should be much better.

We also know from the leak that Drake has dedicated file decompression hardware, so whatever storage they go for, actual performance should not be bottlenecked this time.

Personally I hope they hit 1gb/s, as that's plenty for Nanite / fast asset streaming, and is what devs originally asked Sony for with the PS5.
Has it been confirmed what format the file decompression supports? Does it still use old compression formats or is it the newer ones as discussed/brought up in this thread? I'm curious about compression efficiency.
 
The bottleneck of the OG Switch, storage-wise, is the CPU, not the storage. So even if REDACTED's storage is not upgraded, real-world performance should be much better.

We also know from the leak that Drake has dedicated file decompression hardware, so whatever storage they go for, actual performance should not be bottlenecked this time.

Personally I hope they hit 1gb/s, as that's plenty for Nanite / fast asset streaming, and is what devs originally asked Sony for with the PS5.
1Gb, as in Gigabit or 1GB, as in 1 GigaBYTE?

As for where it'll land: I don't think it'll hit 1GB/s, even after decompression, personally. There are too many hurdles and not enough to gain.
 
1Gb, as in Gigabit or 1GB, as in 1 GigaBYTE?

As for where it'll land: I don't think it'll hit 1GB/s, even after decompression, personally. There are too many hurdles and not enough to gain.
I actually don't know. I always assumed GB stood for GigaByte when storage speeds are advertised, but maybe it's wrong.
 
I actually don't know. I always assumed GB stood for GigaByte when storage speeds are advertised, but maybe it's wrong.
GB - GigaBYTE
Gb - Gigabit

You had "gb" all lower case so I wasn't sure. You didn't make a mistake or anything I just couldn't tell which you meant.

I think 1GB/s is pretty unlikely still.

(I think now you meant Gigabyte, since the Xbox Series drive is multi-Gigabyte per second, while one Gigabit is just one eighth of one GigaBYTE.)
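(And if anyone wants the conversion in code form - it's just divide by 8:)

```python
# Bits vs bytes in transfer speeds: 8 bits = 1 byte.
gigabits_per_s = 1.0
gigabytes_per_s = gigabits_per_s / 8          # 1 Gb/s = 0.125 GB/s
print(f"1 Gb/s = {gigabytes_per_s} GB/s")
print(f"PS5 raw 5.5 GB/s = {5.5 * 8} Gb/s")  # 44.0
```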
 
Yes and no! Let me show you an example.

Here is the "albedo" channel on a brick wall texture. Albedo is a fancy physics word, but in this case it basically means "raw image." When a gamer thinks of a texture, this is the channel they think of. Placed in a spoiler tag, just so we don't clog everyone's screens

Edited to add: Imgur is a butt sometimes. Try opening the images in a new tab and then refreshing if they don't load for you?



See? Simple, kinda flat looking because of the lighting. If that were slapped on a wall surface in a game it would look... kinda bad? But functional. Here is a "height" channel.



Okay, what the hell is going on here? It looks like a black and white copy of the "albedo" texture, but if you combined these two (like Photoshop layers) you wouldn't get anything good. What gives?

Well, it's not an image that you show the user directly. The "height" channel tells the game engine how tall each part of the brick wall is. So when bricks have bumps that stick out, those pixels are colored white, and the deeper parts where the mortar is, those are colored black. In game, if you look at this brick wall directly, the lighting engine will take the first image, and then have more light on the tall parts and less on the deep parts because that's how light works. So even though the texture is just a single flat image, it lights up like a realistic brick wall.

One way to think about it is it's like the difference between looking at a brick wall in real life, versus looking at a wall that someone has put brick wallpaper on. If you point a light at the real brick, you'd get shadows in the mortar. But if you pointed at the wallpaper, you wouldn't.

This particular texture has more channels, so let's look at some of them. This is the "normal" channel.



"Normal" is once again a fancy physics word. This is like a more complex version of the height map. Each shade of cyan here tells the lighting system what direction all the little surfaces in this texture are facing. If you can imagine our brick wall in real life again, if you put a light on the left then the left sides of the bricks would reflect light and get highlights. This normal map tells the lighting engine how all that works on this surface.

Then we get the "Ambient Occlusion" channel.


"Occlude" just means "blocks the light of". "Ambient occlusion" basically means "the shadows that an object casts on itself," and that's what you see here. In the gutters between the bricks, where the mortar is, we get dark spots where the bricks are going to cast little shadows, and then white at the tops of the bricks, where there are no shadows, and a few small patches of grey in between. You can see how it lines up with the height map, but isn't exactly the same thing.

And our last black and white channel on this image is for "roughness".



Roughness means what it sounds like: how physically rough (or smooth) a surface is. Here, the white parts are the most rough, the dark parts are the most smooth. If you think of a smooth piece of paper, it looks bright when you hit it with light, almost shiny. Now crumple it up and point the same light at it - it doesn't look shiny anymore! That's because the surface is rough - it reflects the same amount of light, but it sends it in all different directions, so it doesn't hit your eye directly, making it less, well, shiny.

So now we put all these channels into our game engine and point the camera at our (totally flat!) brick wall. What we can do is take the first image (the albedo) and then shade it using the rest of the channels. The tallest parts should get the most light, the shallow parts less light (height map). The bumps and ridges facing the light should reflect more light back than the parts facing the camera (the normal map). The light should cast shadows that conform to the shape of the bricks (the ambient occlusion map). The smooth parts of the brick should look shinier than the rough mortar in between (the roughness map).

You get something like this



This is the same flat image as the albedo texture above, just rotated and lit using all those other channels. There is no real geometry here, we don't have a complex 3D object, it's just a flat surface, but the lighting system makes all that complexity possible. And we should be able to change the geometry, change the lighting, and still have it look good. Here is the same texture wrapped around a sphere



So, yes, they are like Photoshop layers, except the lighting system isn't showing them directly; it's using them to inform a complex engine that decides how each pixel will look by combining the various channels with the color of the light, the angle of the light, and the angle of the camera.

I believe Retro Studios used a lot of normal maps and ambient occlusion in Metroid Prime Remastered, and possibly Metroid Prime 4; the textures in the game look quite realistic.
 
I believe Retro Studios used a lot of normal maps and ambient occlusion in Metroid Prime Remastered, and possibly Metroid Prime 4; the textures in the game look quite realistic.
you're better off listing games that don't use normal and AO maps as that list would be shorter. even breath of the wild had them for their characters despite how they look
 
I believe Retro Studios used a lot of normal maps and ambient occlusion in Metroid Prime Remastered, and possibly Metroid Prime 4; the textures in the game look quite realistic.
My favourite examples of normal mapping are Pikmin 3 (look up Pikmin 3 Model Trivia, it's AMAZING) and Breath of the Wild.

Breath of the Wild dumbfounded me with how well the sand reacted to light thanks to its normal maps.
 
Maybe @Thraktor can answer.
The short answer is we don't know. The Linux references to the FDE only confirm that it exists, not anything related to supported formats or performance.

If I were to guess, I'd say it will support one standard, general-purpose compression algorithm, and possibly one additional custom algorithm tailored to game data. For the general purpose algorithm, DEFLATE (the algorithm used by Zlib) is a good bet. It's been around a long time, it's supported by everything, and it's likely relatively simple to implement in silicon. Both Sony's PS5 hardware decompressor and MS's XBSX/S hardware decompressor support DEFLATE.

If a second compression algorithm is supported, then an algorithm specifically tailored to texture data would be the most likely bet. Textures make up the bulk of game data, and although they're already compressed using lossy block formats, there's a lot of room for further lossless compression on top of that. This is the approach MS took, and why the XBSX/S hardware decompressor also supports something called BCPACK. Microsoft haven't talked about this much, but it's a custom compression algorithm specifically tailored to textures. Lossy block compression formats for textures in DirectX are referred to as BC1, BC7, etc., and MS called this BCPACK because it takes those BC formats and compresses them further. Here's the description they provide for it from their DirectStorage documentation:

BCPack is a custom entropy coder designed specifically for BCn data. Generally, what this means is that color endpoints are separated from palette indices (that is, weights) and compressed using a rANS algorithm.

Basically what they're doing here is taking information they know about the structure of the texture data, and then being smart about how they feed it into a compression algorithm, which allows them to get better compression ratios than they'd get with a general-purpose compression algorithm like DEFLATE, which doesn't know anything about the structure of the underlying data. In this case they're using a range Asymmetric Numeral Systems (rANS) algorithm, which is just a fancy mathematical compression technique that's become common in modern compression algorithms.

It's probably worth noting that although PS5's hardware decompressor does support an additional algorithm in the form of Kraken, it's not tailored to game data. It's a general-purpose compression algorithm that can be considered a slightly better alternative to DEFLATE. They do advertise Oodle Texture, which improves texture compression ratios, but it's actually using something called Rate Distortion Optimisation (RDO), which is performed prior to compression and is independent of the compression algorithm.

RDO is effectively a way to rearrange the data in a texture to get it to compress more effectively with general-purpose compression algorithms like DEFLATE or Kraken. RDO isn't lossless, though, and you do lose a bit of texture quality in order to achieve higher compression ratios. RDO is also something you can apply to almost any texture on almost any system, because it doesn't require any custom hardware, so it's also applicable to, and already used on, systems like PS4, XBO and Switch. It's made redundant on XBSX/S because BCPACK should allow better compression ratios without any loss in quality.
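(If you want to poke at the general-purpose side of this yourself, Python's zlib module is DEFLATE. A toy demo below; the "texture" is synthetic bytes I made up with deliberate redundancy, so the exact ratio is only illustrative, but the principle is the one described above: DEFLATE knows nothing about texture structure, it just finds repeated byte patterns.)

```python
import random
import zlib

random.seed(42)

# Fake "block-compressed texture": a stream built from a small palette of
# 64-byte blocks, so there is genuine redundancy for DEFLATE to find.
palette = [bytes(random.randrange(256) for _ in range(64)) for _ in range(32)]
texture = b"".join(random.choice(palette) for _ in range(4096))

packed = zlib.compress(texture, 9)  # level 9 = best compression
print(f"raw: {len(texture)} bytes, DEFLATE: {len(packed)} bytes "
      f"({len(packed) / len(texture):.0%} of original)")
```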
 
Metroid Prime Remastered didn't use normal maps? I thought the game used this technique.
no, it did use them. I'm saying normal maps et al. are standardized parts of the pipeline, so it'd be more unusual not to use them. and the games that don't are pretty much in the indie space
 
I was just thinking about something: if Nintendo were to actually announce something in the earnings report, we might hear something in the business articles leading up to it, which would be when Japan wakes up in less than 12 hours.
 
I was just thinking about something: if Nintendo were to actually announce something in the earnings report, we might hear something in the business articles leading up to it, which would be when Japan wakes up in less than 12 hours.
A non-outright denial is the closest we are going to get to a confirmation.
 
@NateDrake

What's your assumption now regarding SSD tech like the PS5's for the Next Gen Switch?

You think this is a safe bet now?
Isn't the biggest issue with SSD storage in the Switch that it gets too hot? The newest generation of SSDs are small; you could easily fit one on a Switch board. But the power consumption is higher by a factor of 10 (0.5 W vs. 5 W), and most PCIe 4 SSDs also have a heatsink. This doesn't matter in a large console like the PS5, but it would definitely be a challenge for the Switch form factor. So I doubt a fast SSD is feasible at all for the Switch, besides the huge price. That's why I see UFS as the more likely solution for Nintendo.
UFS has very low power consumption, but is even more expensive. A modified UFS for the new Switch cards in particular could be an option, though that would depend entirely on what kind of deal Nintendo could work out with the producer. Still, 64GB cards should definitely be cheaper than what current Switch cards are rumored to cost.
 
Isn't the biggest issue with SSD storage in the Switch that it gets too hot? The newest generation of SSDs are small; you could easily fit one on a Switch board. But the power consumption is higher by a factor of 10 (0.5 W vs. 5 W), and most PCIe 4 SSDs also have a heatsink. This doesn't matter in a large console like the PS5, but it would definitely be a challenge for the Switch form factor. So I doubt a fast SSD is feasible at all for the Switch, besides the huge price. That's why I see UFS as the more likely solution for Nintendo.
UFS has very low power consumption, but is even more expensive. A modified UFS for the new Switch cards in particular could be an option, though that would depend entirely on what kind of deal Nintendo could work out with the producer. Still, 64GB cards should definitely be cheaper than what current Switch cards are rumored to cost.
UFS isn't expensive, as it's ubiquitous. making game cards with the format might be off the table though, since Nintendo could have gone with cheaper formats from the jump, but haven't
 
The bottleneck of the OG Switch, storage-wise, is the CPU, not the storage. So even if REDACTED's storage is not upgraded, real-world performance should be much better.

We also know from the leak that Drake has dedicated file decompression hardware, so whatever storage they go for, actual performance should not be bottlenecked this time.

Personally I hope they hit 1GB/s, as that's plenty for Nanite / fast asset streaming, and is what devs originally asked Sony for with the PS5.

"X is a bigger bottleneck than Y at this time so if we improve X, we don't have to worry about Y" is just weird thinking. The question is whether 100 MB/s can actually allow for very quick load times if there's good decompression. The decompression speed being a bigger bottleneck than transfer speed for Switch does not imply that the transfer speed may not be a big bottleneck in the future (especially if Switch 2 games have much higher quality assets they need to load into the RAM).

I don't think the Switch 2 needs to hit 5.5 GB/s or anything, but I do think it's worth asking whether transfer speeds that are 1/55 as fast as the PS5 and 1/24 as fast as the Xbox Series S will allow for very good load times for Switch 2 games with good assets.
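(Where those fractions come from, plus a feel for what they mean in load-time terms. The 100 MB/s figure for Switch 2 is the hypothetical being debated here, not a known spec.)

```python
# Raw (pre-decompression) sequential read speeds, MB/s.
ps5, series_s, hypothetical_switch2 = 5500, 2400, 100

print(ps5 / hypothetical_switch2)       # 55.0 -> "1/55 as fast as PS5"
print(series_s / hypothetical_switch2)  # 24.0 -> "1/24 as fast as Series S"

# Loading 2 GB of compressed assets, ignoring decompression and seek overhead:
for speed in (100, 400, 1000):
    print(f"{speed} MB/s -> ~{2048 / speed:.0f} s")
```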
 
I don't think the Switch 2 needs to hit 5.5 GB/s or anything, but I do think it's worth asking whether transfer speeds that are 1/55 as fast as the PS5 and 1/24 as fast as the Xbox Series S will allow for very good load times for Switch 2 games with good assets.
it depends on what's defined as "good assets". blanket statements can't be applied because every game is different, as are techniques to handle asset streaming. just look at The Last of Us and Plague Tale 2 on PC for example
 
it depends on what's defined as "good assets". blanket statements can't be applied because every game is different, as are techniques to handle asset streaming. just look at The Last of Us and Plague Tale 2 on PC for example

OK, but the gap people are suggesting between the Switch 2 and PS5/XS is still so large that it wouldn't be surprising at all to see load times not a ton better than the PS4.
 
OK, but the gap people are suggesting between the Switch 2 and PS5/XS is still so large that it wouldn't be surprising at all to see load times not a ton better than the PS4.
well that's a mistake those folks are making by thinking assets are on par with PS5. being better than PS4 will be a given, provided the assets are properly scaled. trying to load assets akin to PS5's will no doubt bring problems
 
OK, but the gap people are suggesting between the Switch 2 and PS5/XS is still so large that it wouldn't be surprising at all to see load times not a ton better than the PS4.
All I said was that, worst-case scenario, real-world speeds will be significantly better than the OG Switch due to a much better CPU and the FDE.

Edit: I cannot find a source, but I think I remember that the OLED Switch's internal storage is 400MB/s.

Edit 2: found a source.

 
eMMC 5.1's max transfer speed is 400 MB/s; it's just that the Switch throttles it to a much lower speed (probably to save power, along with keeping parity with the game carts)

It's going to be hard to hit max transfer speeds on mobile hardware due to the electricity limitations.

UFS 4.0's appeal isn't exactly its max transfer speed, just how little electricity it uses to hit reasonably fast transfer speeds.
 
eMMC 5.1's max transfer speed is 400 MB/s; it's just that the Switch throttles it to a much lower speed (probably to save power, along with keeping parity with the game carts)

It's going to be hard to hit max transfer speeds on mobile hardware due to the electricity limitations.

UFS 4.0's appeal isn't exactly its max transfer speed, just how little electricity it uses to hit reasonably fast transfer speeds.
They probably wouldn't have bothered with a custom File Decompression Engine unless they were concerned with getting the most out of their storage.
 
Please read this staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.