• Hey everyone, staff have documented a list of banned content and subject matter that we feel is not consistent with site values and doesn't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

There's been years' worth of R&D on AMD, Intel, and Nvidia's part to make rendering more bandwidth-efficient (e.g. mesh shaders). You can't just slot in the numbers and directly compare them.
There's only so much efficiency improvements can do; 102.7 GB/s from regular LPDDR5 is very constrained on paper either way. It's been a limiting factor since the moment we learned about it (especially when making docked comparisons to, say, PS4 Pro), which is why it keeps being brought up.
 
There's been years' worth of R&D on AMD, Intel, and Nvidia's part to make rendering more bandwidth-efficient (e.g. mesh shaders). You can't just slot in the numbers and directly compare them.

Yes, but that doesn't mean having only 4x the RAM bandwidth of the Switch is going to be "more than enough". Tiled rendering and mesh shaders are great, but an extra 25% of bandwidth is also great.
 
Yes, but that doesn't mean having only 4x the RAM bandwidth of the Switch is going to be "more than enough". Tiled rendering and mesh shaders are great, but an extra 25% of bandwidth is also great.
Not only that, one of the main things that separates good and bad Switch conversions is memory optimization.

If you are targeting other systems with plenty of bandwidth, optimizing for a low-bandwidth SKU is extra work.
 
Not only that, one of the main things that separates good and bad Switch conversions is memory optimization.

If you are targeting other systems with plenty of bandwidth, optimizing for a low-bandwidth SKU is extra work.

Also, hasn't the Switch overclocking community found that the largest gains tend to come from overclocking the RAM, indicating that the Switch is already very bandwidth-constrained?
 
Luigi's Mansion 3 in 4K/120FPS and HDR basically looks like a PS5 game to me (view on a 4K/HDR-capable display)...

It's absolutely wild how much better regular Switch games can look, with damn near no extra development cost, on actually capable hardware. Night and day difference. Wish to god we could move on from Switch. :(
 
Luigi's Mansion 3 in 4K/120FPS and HDR basically looks like a PS5 game to me (view on a 4K/HDR-capable display)...

It's absolutely wild how much better regular Switch games can look, with damn near no extra development cost, on actually capable hardware. Night and day difference. Wish to god we could move on from Switch. :(

looks like someone cranked the bloom slider
 
Luigi's Mansion 3 in 4K/120FPS and HDR basically looks like a PS5 game to me (view on a 4K/HDR-capable display)...

It's absolutely wild how much better regular Switch games can look, with damn near no extra development cost, on actually capable hardware. Night and day difference. Wish to god we could move on from Switch. :(

This reminds me of the first time I ever saw Super Mario 3D Land running at 1080p on the Citra emulator on YouTube. I was absolutely blown away by how good the game looked on a screen that wasn't 240p.
 
There's only so much efficiency improvements can do; 102.7 GB/s from regular LPDDR5 is very constrained on paper either way. It's been a limiting factor since the moment we learned about it (especially when making docked comparisons to, say, PS4 Pro), which is why it keeps being brought up.
That's why I hope (in vain) that T239's GPU has access to 4 MB of L2 cache (the same amount T234's GPU has access to) instead of 1 MB. That way, games wouldn't have to be as reliant on RAM bandwidth.

But of course, SRAM (including GPU L2 cache) takes up, at the very least, a non-trivial amount of die area. A perfect example is AD102, where 96 MB of L2 cache occupies 85.10 mm², whereas one GPC occupies 22.39 mm² in comparison.
 
While I hope for LPDDR5X, LPDDR5's 102.7 GB/s is closer to PS5's 448 than the Switch's 25.6 was to PS4's 176. We can expect a few more "miracle ports" than last gen, for sure.
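For reference, using the commonly cited figures (448 GB/s for PS5, 176 GB/s for PS4, 25.6 GB/s for the docked Switch), the ratios work out to:

$$\frac{102.7}{448} \approx 23\% \quad \text{vs.} \quad \frac{25.6}{176} \approx 15\%$$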
 
That's why I hope (in vain) that T239's GPU has access to 4 MB of L2 cache (the same amount T234's GPU has access to) instead of 1 MB. That way, games wouldn't have to be as reliant on RAM bandwidth.

But of course, SRAM (including GPU L2 cache) takes up, at the very least, a non-trivial amount of die area. A perfect example is AD102, where 96 MB of L2 cache occupies 85.10 mm², whereas one GPC occupies 22.39 mm² in comparison.
4 MB of L2 for the GPU and 8 MB of L3 for the CPU would be perfect.
 
That's why I hope (in vain) that T239's GPU has access to 4 MB of L2 cache (the same amount T234's GPU has access to) instead of 1 MB. That way, games wouldn't have to be as reliant on RAM bandwidth.

But of course, SRAM (including GPU L2 cache) takes up, at the very least, a non-trivial amount of die area. A perfect example is AD102, where 96 MB of L2 cache occupies 85.10 mm², whereas one GPC occupies 22.39 mm² in comparison.
According to my math, 4 MB of L2 cache takes up only ~3.5 mm² of space.
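(That's just scaling AD102's figure linearly, which is an assumption, since SRAM density isn't perfectly uniform across dies and nodes:)

$$85.10\,\mathrm{mm^2} \times \frac{4\ \mathrm{MB}}{96\ \mathrm{MB}} \approx 3.5\,\mathrm{mm^2}$$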
 
While I hope for LPDDR5X, LPDDR5's 102.7 GB/s is closer to PS5's 448 than the Switch's 25.6 was to PS4's 176. We can expect a few more "miracle ports" than last gen, for sure.

It's not really closer when you remember PS5's GPU is already leveraging tile-based rendering and all of the rendering improvements the TX1 had over PS4's GPU. The original Switch being newer allowed it to punch above its weight in this department; Switch 2 will be playing on equal terms.
 
Yes, but that doesn't mean having only 4x the RAM bandwidth of the Switch is going to be "more than enough". Tiled rendering and mesh shaders are great, but an extra 25% of bandwidth is also great.
It does seem like 25% more would be better, but it's probably not. Which isn't intuitive!

The 3070 and the 3070 Ti have nearly identical TFLOPS but a huge gap in memory bandwidth. If that extra memory bandwidth mattered, we'd expect the 3070 Ti to overperform relative to its small compute advantage. Here are Digital Foundry's benchmarks, summed up:

Card          | TFLOPS | Bandwidth (GB/s) | 1080p Avg FPS | 1440p Avg FPS | 4K Avg FPS
RTX 3070      | 20.3   | 448              | 167           | 122           | 76
RTX 3070 Ti   | 21.7   | 608              | 174           | 130           | 82
% Improvement | 6.9%   | 36%              | 4%            | 6.5%          | 7.8%

Basically nothing. Even under the highest possible load, with 4K textures and LODs, the 36% bandwidth increase buys less than 1% of performance beyond what the compute uplift alone explains (7.8% vs. 6.9% at 4K). The 3080 Ti and the 3090 show similar results. But those are huge cards, so the second question is: how does Ampere scale down with regard to memory bandwidth?

Card            | 3050 | 3060 | 3070 | 3080 | 3090
Bandwidth/TFLOP | 24.6 | 28.3 | 22.1 | 25.6 | 26.3
4K FPS/TFLOP    | 3.7  | 3.9  | 3.8  | 3.1  | 3.0
What we can see is that, overall, the architecture scales down better than it scales up. And the 3060, which has 25% more bandwidth (relative to compute) than the 3070, performs identically per TFLOP. So there is no reason to believe that the smaller cards have higher bandwidth needs; in fact, the opposite. There isn't any data to support the idea that the GPU maxes out at 25 GB/s/TFLOP, and it likely sees marginal benefit, if any, over 22 GB/s/TFLOP.

A 3.5 TFLOP Drake could be running balls to the wall and still have 14 GB/s of spare memory bandwidth left over for the CPU; likely closer to 25 GB/s. Entire phones, eight CPU cores plus their GPUs, run on that amount of bandwidth, so this is probably generous for the CPU as well. Even more so if Nintendo takes advantage of the larger cache configuration that the A78C offers.

TL;DR: The extra 25% of bandwidth would likely result in little to no performance gain.

That's why I hope (in vain) that T239's GPU has access to 4 MB of L2 cache (the same amount T234's GPU has access to) instead of 1 MB. That way, games wouldn't have to be as reliant on RAM bandwidth.
So I'm an obsessive (can you tell?) and I put craploads of GPU data in a big spreadsheet. And I turned up something which should have been obvious to me, but wasn't until it stared me in the face.

Card                 | 3050   | 3060   | 3070   | 3080   | 3090   | T239
Cache KB / BW (GB/s) | 9.14   | 8.53   | 9.14   | 6.73   | 6.56   | 10.03
Cache KB / TFLOP     | 225.05 | 241.89 | 201.77 | 172.39 | 172.68 | 292.57

If you look at it this way, even the 1 MB cache in Drake is actually crazy high relative to the rest of the line. This is probably a side effect of scaling Ampere down: cache size scales with the number of memory controllers, not the number of GPU cores or their clock speeds. Drake may only have one GPC, clocked way down, but it's 1 MB of cache no matter what.

Of course, this might not be the best or only way to look at it; I'm not sure these ratios are good proxies for cache hit rate. But again, Ampere just seems to get more efficient as it gets smaller, so I tend to think that Drake will overperform, without any changes to its memory architecture, rather than underperform.
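If anyone wants to poke at these ratios themselves, here's a quick back-of-the-envelope sketch. The card rows use the public spec-sheet figures the tables above appear to be derived from; the T239 row (3.5 TFLOPS, 102.7 GB/s, 1 MB L2) is this thread's assumption, not a confirmed spec, and rounding may differ a hair depending on the exact clocks you assume.

```c
/* Back-of-the-envelope check of the bandwidth/cache ratios above.
 * Card rows use public spec-sheet figures (FP32 TFLOPS at boost clock,
 * memory bandwidth, GPU L2 size); the T239 row is speculative. */
#include <stdio.h>

typedef struct {
    const char *name;
    double tflops;   /* FP32 TFLOPS */
    double bw;       /* memory bandwidth, GB/s */
    double cache_kb; /* GPU L2 cache, KB */
} Card;

int main(void) {
    const Card cards[] = {
        {"RTX 3050",       9.1,  224.0, 2048.0},
        {"RTX 3060",      12.7,  360.0, 3072.0},
        {"RTX 3070",      20.3,  448.0, 4096.0},
        {"RTX 3080",      29.7,  760.0, 5120.0},
        {"RTX 3090",      35.6,  936.0, 6144.0},
        {"T239 (assumed)", 3.5,  102.7, 1024.0}, /* speculative values */
    };
    for (size_t i = 0; i < sizeof cards / sizeof cards[0]; i++) {
        const Card *c = &cards[i];
        printf("%-15s  BW/TFLOP %5.1f  cache KB/BW %5.2f  cache KB/TFLOP %7.2f\n",
               c->name, c->bw / c->tflops, c->cache_kb / c->bw,
               c->cache_kb / c->tflops);
    }
    /* Bandwidth left for the CPU if a 3.5 TFLOP GPU consumed
     * 25 (or 22) GB/s per TFLOP out of 102.7 GB/s total. */
    printf("Drake leftover: %.1f GB/s (at 25), %.1f GB/s (at 22)\n",
           102.7 - 3.5 * 25.0, 102.7 - 3.5 * 22.0);
    return 0;
}
```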
 
It does seem like 25% more would be better, but it's probably not. Which isn't intuitive! […]

TL;DR: The extra 25% of bandwidth would likely result in little to no performance gain.

So I'm an obsessive (can you tell?) and I put craploads of GPU data in a big spreadsheet. […] If you look at it this way, even the 1 MB cache in Drake is actually crazy high relative to the rest of the line. […]
If you look at the TX1 the same way, how does it hold up vs. other Maxwell cards?
 
Slightly off-topic, but let me spoil a bit of the fun, I guess.
Bits of it were added in Windows 11 Insider Build 26052; I was curious and took a look.

One part of the feature was an "Automatic Super Resolution" mode for all games. It was implemented in the operating system itself and was a spatial upscaler (it operated purely in screen space instead of relying on temporal data from the game, similar to AMD's FSR1 or NVIDIA's NIS).

The way it worked is that it changed your desktop rendering resolution to a smaller one (1280x800 on that insider build), and upscaled that to your native resolution.

The feature at the time had both a DirectML model (shipped through the Microsoft Store as Microsoft.AutoSuperResolution_8wekyb3d8bbwe, not currently public/available) and a shader fallback for when the ML model isn't available (which is the case right now, and is what you can test if you enable the feature with ViVeTool).
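To illustrate what "spatial" means here, a minimal sketch of a single-channel bilinear upscale: the output depends only on the current frame, with no motion vectors or history, which is the defining trait of the FSR1/NIS family. Real upscalers layer edge-adaptive filtering and sharpening on top of this.

```c
/* Minimal spatial upscaler: plain bilinear filtering of one grayscale
 * frame. Only the current low-res frame is read (no temporal data),
 * which is what puts FSR1/NIS-style upscalers in the "spatial" bucket. */
void upscale_bilinear(const float *src, int sw, int sh,
                      float *dst, int dw, int dh)
{
    for (int y = 0; y < dh; y++) {
        /* Map output pixel center back into source coordinates. */
        float fy = (y + 0.5f) * sh / dh - 0.5f;
        if (fy < 0.0f) fy = 0.0f;
        int   y0 = (int)fy;
        int   y1 = (y0 + 1 < sh) ? y0 + 1 : sh - 1;
        float ty = fy - (float)y0;
        for (int x = 0; x < dw; x++) {
            float fx = (x + 0.5f) * sw / dw - 0.5f;
            if (fx < 0.0f) fx = 0.0f;
            int   x0 = (int)fx;
            int   x1 = (x0 + 1 < sw) ? x0 + 1 : sw - 1;
            float tx = fx - (float)x0;
            /* Blend the four nearest source texels. */
            float top = src[y0 * sw + x0] * (1 - tx) + src[y0 * sw + x1] * tx;
            float bot = src[y1 * sw + x0] * (1 - tx) + src[y1 * sw + x1] * tx;
            dst[y * dw + x] = top * (1 - ty) + bot * ty;
        }
    }
}
```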

The other half is the "DirectSR runtime". It's entirely unused in the insider build right now and seems to be a temporal upscaler for Direct3D 12 titles (no Direct3D 11, sadly), similar to XeSS/FSR2/DLSS. Looking at the DLL in a decompiler like IDA Pro, a game feeds it motion vectors, a depth buffer, a reactive mask, and an exposure scale, and it outputs a "super resolution" frame, although I have not found any mention of DirectML in the DirectSR runtime available in that insider build. (Keep in mind that this is pre-release code that isn't supposed to be used by the public, so we might not have the full thing, and so on; who knows. I also haven't tested the DirectSR runtime in a game, since implementing an undocumented API in a game engine is too much work and my curiosity doesn't go that far.)
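To make that contract concrete, here is a purely hypothetical sketch of the inputs being described. These names and types are illustrative only, not the actual (undocumented) DirectSR ABI:

```c
/* Hypothetical sketch of the data flow described above. The names and
 * types are illustrative; the real DirectSR runtime is undocumented
 * pre-release code, so its actual interface is unknown. */
typedef struct {
    const void *color;          /* current low-resolution frame */
    const void *motion_vectors; /* per-pixel motion, supplied by the game */
    const void *depth;          /* depth buffer */
    const void *reactive_mask;  /* pixels whose history should be discounted */
    float       exposure_scale; /* normalizes color before accumulation */
} SuperResInputs;

/* A temporal upscaler reprojects its internal history buffer along the
 * motion vectors, rejects stale samples using depth and the reactive mask,
 * and accumulates the new frame into a higher-resolution output (the same
 * contract as XeSS/FSR2/DLSS). Declaration-only sketch; body omitted. */
void super_res_evaluate(const SuperResInputs *inputs, void *out_frame);
```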
 
Since we know Switch 2 will support mesh shading, how much does that put the Switch 2 in a better place than the PS5, which doesn't support it but, AFAIK, uses some alternative approach?
 
Since we know Switch 2 will support mesh shading, how much does that put the Switch 2 in a better place than the PS5, which doesn't support it but, AFAIK, uses some alternative approach?
Zero. PS5 doesn't have mesh shading in the most technical sense, but in a practical sense it has it. Alan Wake 2 is proof.
 
The PS5 doesn't have the Direct3D version of "mesh shaders", but that's because the PS5 doesn't use Direct3D at all and has its own rendering API. Instead of relying on the Direct3D spec of mesh shading, it directly exposes AMD's NGG primitive shader support.

Here's some literature about how Mesh Shading works on AMD RDNA GPUs.

Instead of going Direct3D -> NGG/HW (Primitive Shaders), it likely just exposes NGG directly to the application.

Edit: Oh yeah, the Switch 2 would have NVIDIA's version of mesh shaders, so look at the VK_NV_mesh_shader Vulkan extension; the NVN2 implementation should look similar to it.
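For the curious, this is roughly what a draw looks like through that extension. A minimal sketch, assuming a pipeline already built with task/mesh stages; note that in real code the draw function pointer has to be loaded with vkGetDeviceProcAddr, since it's an extension entry point:

```c
/* Minimal sketch of recording a mesh-shader draw via VK_NV_mesh_shader.
 * Assumes `meshPipeline` was created with VK_SHADER_STAGE_TASK_BIT_NV /
 * VK_SHADER_STAGE_MESH_BIT_NV stages instead of a classic vertex stage. */
#include <vulkan/vulkan.h>

void record_meshlet_draw(VkCommandBuffer cmd,
                         PFN_vkCmdDrawMeshTasksNV pfnDrawMeshTasks,
                         VkPipeline meshPipeline,
                         uint32_t meshletGroupCount)
{
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, meshPipeline);

    /* Launches `meshletGroupCount` task workgroups; each decides which
     * meshlets to emit, replacing vkCmdDraw/vkCmdDrawIndexed entirely.
     * `pfnDrawMeshTasks` must be fetched via vkGetDeviceProcAddr. */
    pfnDrawMeshTasks(cmd, meshletGroupCount, /*firstTask=*/0);
}
```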
 
It does seem like 25% more would be better, but it's probably not. Which isn't intuitive! […] A 3.5 TFLOP Drake could be running balls to the wall and still have 14 GB/s of spare memory bandwidth left over for the CPU; likely closer to 25 GB/s. […]

TL;DR: The extra 25% of bandwidth would likely result in little to no performance gain. […]

So would that leftover 14-25 GB/s definitely be enough for the CPU? That's the thing I was most concerned about: while 102 GB/s should be enough for T239's GPU (even when targeting 2160p output resolutions via DLSS), the bandwidth has to be shared between the CPU and the GPU.

Also, will the lower latency of LPDDR5/X assist with the CPU and GPU's sharing of the RAM? Or is that not really what the low latency is for?
 
The PS5 doesn't have the Direct3D version of "mesh shaders", but that's because the PS5 doesn't use Direct3D at all and has its own rendering API. Instead of relying on the Direct3D spec of mesh shading, it directly exposes AMD's NGG primitive shader support. […]

So basically, the PS5 "doesn't have mesh shading" in the same way that it "doesn't have DirectStorage".
 
Nikkei
" but priority was given to securing the initial inventory of the successor machine and the lineup of leading software at the beginning of release, such as measures to prevent resale."

The interesting part, translated by Safari.
 
Nikkei
Do they report from their own sources, or just piggyback on Bloomberg, Eurogamer/VGC, and Brazil (while adding some speculation)?
 
" but priority was given to securing the initial inventory of the successor machine and the lineup of leading software at the beginning of release, such as measures to prevent resale."

The interesting part, translated by Safari.

Are they saying it was to prevent scalpers? Fuck the scalpers, I want the Switch 2 this year.
 
It does seem like 25% more would be better, but it's probably not. Which isn't intuitive! […]

So I'm an obsessive (can you tell?) and I put craploads of GPU data in a big spreadsheet. […] Ampere just seems to get more efficient as it gets smaller, so I tend to think that Drake will overperform, without any changes to its memory architecture, rather than underperform. […]
I think it would be interesting to have a similar comparison between the Tegra X1 and NVIDIA's Maxwell line.
 
Thank you for your concern about my ability to read. The very interesting quotes (I promise I can read) that support your argument certainly demonstrate that third-party publishers are important to Nintendo. But I'm still wondering who was ever claiming here that this isn't the case.

If I wanted to be caricatural and hyperbolic, I could oversimplify things, too: do you think that with the addition of GTA V, the latest Final Fantasy games, Elden Ring, and Call of Duty, but without Mario, Zelda, Smash, or Pokémon, the Nintendo Switch would have sold better, or as well as it does now? Personally I don't think so, and that's absolutely all I mean when I say first-party games are more important. Nothing else. Not "other games aren't important"; just that, as a Nintendo console, you can survive without Call of Duty or the latest Final Fantasy, but not without Mario or Zelda.

I don't doubt that being left behind by third parties is a big problem. What I'm saying is that if you don't start by selling your console, then third parties won't come anyway. The presence of third parties was certainly insufficient on the Wii, as your quote of Iwata points out, but if the console hadn't sold to begin with, it would simply have been a non-existent market for third-party publishers, like the Wii U.

You rightly mention portable consoles. Nintendo's best-selling console to date is the DS, which benefited from strong support from third-party publishers. Yet at the same time, 19 of the 20 best-selling games on the console, including the 14 best-selling games, are first-party titles, the exception being Dragon Quest. So acknowledging the proven importance of third-party support in no way contradicts the fact that first-party games are, indeed, predominant. These are not two opposing things, unless you are responding to arguments that have never actually been made.

Now, of course, if third-party publishers abandon you completely, you suffer enormously, whether in revenue or in sales. However, I find it interesting to note that the GameCube sold far less than the Nintendo 64, even though it objectively enjoyed much better third-party support, so things are undoubtedly more nuanced, and competition also comes into play. There are some indicators that would be interesting to obtain in order to get a more detailed picture. For example, I wonder what proportion of Switch owners only have third-party games, and what proportion only have first-party games. We obviously know that the Switch's success is based on a combination of the two, as is Nintendo's revenue. However, the installed base that provides a market for third-party publishers was probably primarily built by first-party games, although once again, precise, quantified indicators would be more relevant.
See, your point, even in its hyperbolic oversimplification, was already addressed: hardware platforms are inextricable from first-party content, and no one making the point you were arguing against ever claimed otherwise. You were effectively arguing a point no one was making; the original point that started this (if you follow it all the way back) was that third parties are equally important to a successful console. Why you were making an existential case, I don't know, but it's not actually relevant.
Nintendo considered the Wii's performance in Japan not a success (hence the provided quotations), so any argument that Nintendo can be successful even with diminished third-party performance goes out the window, which is what you originally posited at the start of this debate when claiming that this statement…
I’m pretty sure they realized that third parties are just as important to a console’s success as their first party titles
… was incorrect in your estimation. You then went on to say that Nintendo can't survive without first-party content, which… yeah, no kidding, but that wasn't the argument you were originally making and wasn't what was being argued to begin with. And survival on first-party title sales is a debatable theory when you look at single platforms while excluding concurrent successful platforms, with a far more equitable mix of first- and third-party sales, that were subsidizing the existence of the flagging platform.
Again, Nintendo had a hot-selling piece of hardware with lots of first-party software sales in Japan, and their response in that instance? It was negative; they basically said they wish for that never to happen again. So, to reiterate: even Nintendo themselves don't consider first-party sales with minimal third-party sales to be a success, and because the discussion was about what makes a console a success, Nintendo considers strong third-party sales to be of equal importance to a hardware platform's success. If you wish to retract your initial claim that this is false, feel free to do so, but hopefully we can stop talking past each other now that it's been made clear what you're actually arguing against, instead of a point no one made.
Only used... By one of the world's biggest camera manufacturers, that's a LOT better than dead.

Conflating UFS and UFS Card is also not being particularly forthcoming about the reality of the second format. I'd go so far as to say it's dishonest framing. eUFS is huge. UFS Card is dead.

CFe Type A is neither dead nor embedded.
Used in, at my last count, three cameras. I can actually name more devices in the wild that accept UFS Card; enough to need more than one hand to count them, at least.
And there's no conflation: the primary difference between eUFS and UFS Card is packaging, which means mature production effectively already exists, existing embedded production lines can simply be converted, and the format is far more mature than CFexpress Type A is likely to be any time soon. CFexpress Type B, though? Yeah, that's effectively an M.2 2230 SSD in a special casing, so it is a mature product. But in that case, you may as well just use an M.2 2230 SSD, since it'd be a hell of a lot cheaper.
 
Nikkei didn't reveal their source at all, but they insist that this article is a "Nikkei special report", so it sounds like they have some degree of confidence.
 
VGC article on the Nikkei report

Nikkei notes that Switch 2 could yet slip beyond March 2025, dependent on manufacturing and how much software is ready for launch.

Now this I don't believe. You don't go from early-to-late 2024, to early 2025, with a possibility of it being delayed past that.
 
My theory is that there may be some trouble with its production? That, the launch lineup, or both.

Sure, but this is something that Nintendo would be able to know beforehand. How much time does the 3D Mario, and possibly other titles, need?

I swear to gaming god, if we have to wait until Holiday 2025 to play the Switch 2, I (and many others on here, for sure) will freak the fuck out lol.
 
I don't think I'm excited anymore for Switch 2 lol; between production possibly not going well, games not being ready, and constant delays, this feels like a disaster waiting to happen.
 
The article makes it clear that Nintendo wants to start manufacturing Switch 2 earlier than they started manufacturing Switch 1, because that would be the only way to have more Switch 2 units available at launch than they had Switch units available in 2017. So we should start seeing things heat up pretty soon.
 
I don't think I'm excited anymore for Switch 2 lol; between production possibly not going well, games not being ready, and constant delays, this feels like a disaster waiting to happen.

What production issues? The article literally states that it's to have more hardware units ready for launch. And what constant delays? Technically the Switch 2 hasn't even been officially delayed, and as for software, the Switch 1 was also delayed because of that.
 
I don't think I'm excited anymore for Switch 2 lol; between production possibly not going well, games not being ready, and constant delays, this feels like a disaster waiting to happen.
The article doesn't say anything about production not going well. It says Nintendo wants to avoid scalping to the extent it happened with the Switch 1, by producing more units and making more Switch 2 stock available for the launch window, which takes more time. Software not being ready is something that has always happened when Nintendo releases new hardware, so nothing new in any way.
 
What production issues? The article literally states that it's to have more hardware units ready for launch. And what constant delays? Technically the Switch 2 hasn't even been officially delayed, and as for software, the Switch 1 was also delayed because of that.
Didn't Nikkei mention the possibility of another set of delays?
 
Didn't Nikkei mention the possibility of another set of delays?
Yes, the Switch 2 can be further delayed if Nintendo is not able to produce the amount of hardware they want available for the launch window. What Nintendo wants to avoid is going years before they can start filling demand for the system, like what happened with the PS5. If Nintendo wants to release heavy-hitting first-party titles out of the gate, it makes sense that they want to avoid a situation with few Switch 2 units available, because that would limit the number of buyers for those brand-new Nintendo games for a long period of time.
 
Didn't Nikkei mention the possibility of another set of delays?

If they don't get enough hardware in time; but even at that point, I think Nintendo will say fuck it and release it. This year is going to be a slight struggle for the Switch; going all of 2025 with Switch 1? Sales of the system will start crashing.

I really think that if Nintendo got GTA VI on Switch 2, they'd want to make sure it's released before then.
 
Yes, the Switch 2 can be further delayed if Nintendo is not able to produce the amount of hardware they want available for the launch window. What Nintendo wants to avoid is going years before they can start filling demand for the system, like what happened with the PS5. If Nintendo wants to release heavy-hitting first-party titles out of the gate, it makes sense that they want to avoid a situation with few Switch 2 units available, because that would limit the number of buyers for those brand-new Nintendo games for a long period of time.
In that case, wouldn't it be pushed to 2026 at most, be it Q1 or Q3?
And in that case, we'll get Switch 2 spec leaks during this year, be it from Ubisoft or some other companies.
 
If they don't get enough hardware in time; but even at that point, I think Nintendo will say fuck it and release it. This year is going to be a slight struggle for the Switch; going all of 2025 with Switch 1? Sales of the system will start crashing.

I really think that if Nintendo got GTA VI on Switch 2, they'd want to make sure it's released before then.
Well, if there's something we've learned about Nintendo, it's that they're sitting on finished games they could use to fill any gap they have. So I think even if Switch 2 isn't out in 2025, they still have some games to fill that gap.
 
Please read this new, consolidated staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.