
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

The main thing we should have learned from the DF video is this: when they say "don't treat this video as gospel," stop treating the fucking video as gospel.
I'll be honest, I just think the video is kind of a mistake. I just don't think Digital Foundry's audience is very smart; it only cares about the numbers and none of the details.

The people who have enough intelligence on this stuff probably didn't need this video to figure things out.
 


What it do bruh

If we ignore everything else (power draw, heating concerns, etc.) and are forced to make an assumption based only on history, then with the PS5 being out for 3 years now on 6nm, Nintendo would likely choose 4N (5nm).

Sony and MS had moved to 16nm about half a year before the Switch came out.
 
Digital Foundry made it very clear that their findings were at best a very rough estimate, and they acknowledged that NVIDIA and Nintendo could employ technology solutions they simply don't know about. They also showed their setup couldn't run the Matrix demo even completely pared back, when official reports said that chip was running the Matrix demo, and seemingly well at that.

I honestly wouldn't be surprised if, at the end of the day, the Switch successor is actually a lot more capable than the setup Digital Foundry used. We won't know until Nintendo finally unveils it. I believe we are going to be impressed.
 
So if it's 660MHz in handheld mode and 1.1GHz in docked mode, what do you guys think the CPU clocks will be? 2GHz would be perfect imo
We'll see... Most of us are expecting around 1GHz for the GPU... hopefully we get at least 1.5GHz for the CPU.
What process nodes were PS4 and XBone?
I think the pro variants (and maybe slim) were 16nm though.

Edit: someone beat me to it.
 
Sony and MS had moved to 16nm about half a year before the Switch came out.
I think the video is simply exploring what process nodes were being used by other consoles every time Nintendo released a new console.

Both the PS4 and Xbox One were 28nm; those were the current consoles at the time the Switch was released.

Assuming the info in the video was correct, Nintendo has always gone with a lower (better) process node than the current-gen Sony and Microsoft consoles at the time the new Nintendo console is released. However, as some have pointed out, the Wii U was MIA in this video - because the Wii U is the only one that wasn't using a better process node than the Sony/Microsoft consoles that were active at the time.

It's just an interesting look at the history of which process nodes Nintendo went with for its consoles relative to the existing Sony/Microsoft consoles at the time - it doesn't mean that pattern will continue.
 
there's also a chance we'll get 16GB of RAM; it could be even better
it's also possible that docked clocks are higher than we estimate here. the Switch OLED at least can already be pumped up pretty high and it doesn't actually seem to get too hot. who can say
 
There are so many unknowns with this chip seemingly designed with Nintendo in mind. While I applaud Digital Foundry's efforts and think they did the best job they could based on so little information, I have no doubt that the actual finished hardware will punch past Digital Foundry's test rig and we will all be left quite impressed and hyped. I wouldn't even be surprised if the end result is a significant jump over the test setup Digital Foundry used. Time will tell.
 
4N (5nm) really isn't that "bleeding edge". If that's "bleeding edge" then so was the 20nm Tegra X1 in 2017.
It may not be new, but it's a lot better than 20nm was in 2017.

20nm is closer in performance to the 28nm of the launch PS4 than to the 16nm of the PS4 Slim.

The main reason the jump to Mariko was so awesome was how much 20nm sucked in the first place.
 
I'm talking raw performance. All the things listed are only possible because DLSS is helping performance.
  • Cyberpunk is not that much higher than base PS4 performance WITH DLSS ENABLED.
  • Fortnite runs at 30fps with RT WITH DLSS ENABLED.
  • All the games listed are running using DLSS. Native is on par with or worse than the PS4.
When Fortnite comes to Switch 2, I highly doubt Epic would use RT at 30fps again on Switch. It would be no RT at 60fps, which is more or less the base PS4 version.
PS5 settings.
Pee, esse, fayve.

Get it? Not PS4, PS5.
 
Regarding the DF video, I think a direct comparison to PC handhelds on the same settings would do a lot to help visualize where this makeshift "Switch 2" falls in comparison.
 
I'd recommend putting on ignore the users who aren't here to actually have a conversation with others, or who are simply unable to digest the data points provided to them and unwilling to contextualize the information they're given.

Anyway, one way or another I hope we hear reports of manufacturing in the coming weeks, as it'd hint towards a console release in June at the latest.
 
Ok dude, I doomposted at first too, but it's not just PS4 level. Their tests on A Plague Tale: Requiem were outclassing the Series S version of it.
Now of course, keeping expectations grounded is a good thing, and DF did note that the actual system specs could be weaker than the card they tested, but even if it was marginally weaker I wouldn't say it's "PS4 level"
I'd argue it won't be weaker, and that what they tested was the lowest of the lowballed specs. DF/Rich still doesn't know what the final process node is and is running with the assumption that it's 8nm based on kopite7kimi's claims, but as people here have mentioned, kopite has gotten a few facts wrong before, and other hints in the data already point to something smaller than 8nm (not to mention that the viability of 8nm, in terms of yields and being able to produce enough successor consoles to meet demand, makes it very improbable unless Nintendo is getting a MASSIVE discount, as it would literally be burning them cash per wafer).

So yeah, I remain cautiously optimistic.
 
If there's no announcement ahead of or at the investors' meeting on the 7th, is it safe to say there'll be no announcement for the rest of the year?

When Nintendo revealed all those bundles, I felt that squashed any chance of a 2023 announcement. It was weird seeing so many here expecting an announcement before Nov 7. I'd like to be wrong, as I'm ready to see this thing, but the investors' meeting will probably just have the usual runaround and non-answers if anyone brings up a successor or the TGS rumors.
 
I'd recommend putting on ignore the users who aren't here to actually have a conversation with others, or who are simply unable to digest the data points provided to them and unwilling to contextualize the information they're given.

Anyway, one way or another I hope we hear reports of manufacturing in the coming weeks, as it'd hint towards a console release in June at the latest.
People who aren't willing to listen to anything past their initial thoughts are frustrating. We're not denying your disappointment, but we are trying to lessen it.

Regardless, despite my satisfaction with the 2050 information that DF provided, I do look forward to further information on manufacturing or shipping. I'd like a confirmation as to when the device will arrive or at least when we'll get a first look at it.
 
I'll be honest, I just think the video is kind of a mistake. I just don't think Digital Foundry's audience is very smart; it only cares about the numbers and none of the details.

The people who have enough intelligence on this stuff probably didn't need this video to figure things out.

Harsh but true imo
 
We'll see... Most of us are expecting around 1GHz for the GPU... hopefully we get at least 1.5GHz for the CPU.

I think the pro variants (and maybe slim) were 16nm though.

Edit: someone beat me to it.
I'm curious what the wattage looks like on 4N for the A78C; I'm hoping for 1.7-2.0GHz.
 
I'll be honest, I just think the video is kind of a mistake. I just don't think Digital Foundry's audience is very smart; it only cares about the numbers and none of the details.

The people who have enough intelligence on this stuff probably didn't need this video to figure things out.
Kind of insulting to suggest their audience isn't smart. Judging by the reception, most of them are actually excited for NG and don't share Multiverse's thoughts.

The video is just speculative content. I think some people are reading too much into it. Rich isn't trying to declare much by making it; it's simply theorizing what the NG might be capable of with what we've got.

I think many posters got soured by Multiverse's misinformed post and are now blaming the video, when the video is fairly innocuous and you can't control how people respond to it.

Anyone who's been in this thread long enough should be well read on what we can expect by now.
 
I'll be honest, I just think the video is kind of a mistake. I just don't think Digital Foundry's audience is very smart; it only cares about the numbers and none of the details.

The people who have enough intelligence on this stuff probably didn't need this video to figure things out.
As annoying as the reactions outside here can be (and from a poster likely trolling), it's nice for those who can figure it out to see in practice what a 3TF Ampere card with relatively low bandwidth is capable of.

At the same time, it's good for the uninformed to have a tangible ballpark. Hopefully it can get rid of the most extreme views from them (Steam Deck at best vs PS5 level with DLSS magic).

The only thing I would add is an initial comparison with PS4 and PS4 Pro, aimed at people who don't understand how graphical settings work, before moving to cross-gen games running at PS5 settings.
 
Kind of insulting to suggest their audience isn't smart. Judging by the reception, most of them are actually excited for NG and don't share Multiverse's thoughts.

The video is just speculative content. I think some people are reading too much into it. Rich isn't trying to declare much by making it; it's simply theorizing what the NG might be capable of with what we've got.

I think many posters got soured by Multiverse's misinformed post and are now blaming the video, when the video is fairly innocuous and you can't control how people respond to it.

Anyone who's been in this thread long enough should be well read on what we can expect by now.
I think LuigiBlood was mostly referring to those who are into console wars; those people aren't too smart and will only pick out the big numbers without looking too deeply into the details. (They're in every DF video.)
 
Kind of insulting to suggest their audience isn't smart. Judging by the reception, most of them are actually excited for NG and don't share Multiverse's thoughts.

The video is just speculative content. I think some people are reading too much into it. Rich isn't trying to declare much by making it; it's simply theorizing what the NG might be capable of with what we've got.

I think many people got soured by Multiverse's misinformed post and are now blaming the video, when the video is fairly innocuous and you can't control how people respond to it.

Anyone who's been in this thread long enough should be well read on what we can expect by now.
Sadly, the general audience for something like the T239 video won't be made up of people like those in this thread, and some of those people won't be willing to listen to the people in this thread, despite better judgement saying to take second opinions. People will see "Nintendo Switch 2", skip to the parts where they do the tests, and draw their conclusions from those, either positive or negative. Also, people sometimes just have the wrong perspective on what the PS4 and Steam Deck are actually capable of doing.

While I won't deny that the PS4 and Steam Deck are impressive devices, they both have plenty of shortcomings that don't really line up with what people expect. The Steam Deck is an impressive 800p device, but it struggles to run current-gen games at low settings/30fps even with FSR on, with a lot of the blurriness and technical hiccups masked by the small screen. The PS4 is able to squeeze out a lot of performance, but there are a lot of shortcuts taken in various games, some of which aren't even able to hit the 1080p baseline despite expectations saying they do. It becomes genuinely clear when you look at DF's findings on the various PS4 games they test in the video itself.

I mean... for fuck's sake, Control on PS4 (900p 30fps), PS4 Pro (1080p 30fps) and Xbox Series S (900p 60fps), all with RT off, is outclassed by the 2050 laptop tests, which run at 1080p 30fps with ray tracing on when matching the PS5 settings. People overlook the specifics of the testing to see the general numbers, and that's easily the biggest problem that arises from the audience when they watch these videos. There are level heads that look at and interpret the results, but they are, without a doubt, the minority.

I think LuigiBlood was mostly referring to those who are into console wars; those people aren't too smart and will only pick out the big numbers without looking too deeply into the details. (They're in every DF video.)
Oh yeah, also this. Console warriors suck and they're black holes in rational conversations. I doubt that's what some of the people in this thread are doing, but those kinds of people will always exist.
 
DF's video was pretty disappointing. If leakers said the Switch 2 hardware was capable of running the Matrix demo well, why didn't they just use a 3050 laptop with 6GB of VRAM instead when they saw how poorly the game was running due to VRAM limitations?
The 3050 might have double the bandwidth of the T239, but Nintendo's console will have advantages over the 3050 laptop, such as a shared memory pool and software optimizations.

They also didn't bother trying to equalize the laptop's CPU to match what was already speculated to be the T239's CPU Geekbench score (based on the publicly available Orin devkit scores). I know, entirely different architectures, and even Switch 1 ports managed to perform well with the TX1 CPU clocked at a mere 1GHz. But my point is that the results found in the video might have been carried a bit by the CPU (especially considering how modern the one they picked is).
 
So we might not get much at the investors' meeting, but I wonder if Furukawa will give us a release window expectation. If he says don't expect anything until H2, a ton of avatar bets will be lost tomorrow.
 
Has any PC game actually used DLSS concurrency?

Edit: it sounds great in theory. "Free" DLSS for an extra frame of latency; however, theory and practice are often different things.
I have no idea, but I don't expect any. Most RTX cards can do DLSS within a couple ms, or are laptop cards where you don't need 4K. And even the cards not made for 4K can still do a good job at it by fine tuning settings. PC gamers are a lot more picky about latency than handheld gamers.

The DLSS cost is quite different for NG Switch though and the extra latency might be worth it to devs now.

For a start, it solves the problem of 2~2.5x more power in docked but 4x more pixels to fill. You don't have to waste GPU power in handheld nor have different settings; you just keep the 2~2.5x difference in pixels for native rendering but scale DLSS higher for docked, since it doesn't matter anymore if DLSS takes longer in docked (as long as it is within the frame time). With that said, they can simply opt to not DLSS all the way to 4K and use a simple upscaler on top.

Another benefit: if you're doing the CPU work for the next frame in parallel with the GPU in a 30fps game, you get 66.6ms of latency. If you can get the CPU work and DLSS each within 16.6ms, then just lower the resolution until native rendering takes sub-16.6ms too and you get 60fps at 50ms of latency (which is worse than 60fps without DLSS but better than 30fps without DLSS).
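Since this kind of frame-pipelining math trips people up, here's a tiny sketch of the arithmetic above. It just multiplies frame time by the number of pipeline stages; the stage counts and frame times are the ones described in this post, not measurements.

```python
# Rough input-to-display latency for a pipelined frame loop.
# Assumes every stage (CPU, GPU render, DLSS) takes a full frame interval,
# as in the scenarios described above; purely illustrative.

def latency_ms(fps, stages):
    return (1000 / fps) * stages

print(latency_ms(30, 2))  # CPU + GPU pipelined at 30fps          -> ~66.6ms
print(latency_ms(60, 3))  # CPU + render + concurrent DLSS, 60fps -> ~50ms
print(latency_ms(60, 2))  # 60fps without the extra DLSS stage    -> ~33.3ms
```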
 
I would also add a MAJOR caveat on the whole video.

The 2050M tested just isn't all that relevant to T239 at this point; there are too many factors of divergence to account for, which makes results from it, while neat in an "oh, this is the absolute worst-case scenario" way, not really useful.

Sure, T239 would lose 25% of the CUDA cores versus the 2050M; however, from all the things in the NVIDIA hack/NVN2 data and the more recent rumors out of Gamescom/Nate/NecroLipe, it will gain:

  • Far higher clocks than the 750MHz Rich tested (~1.1GHz up to 1.38GHz docked, from the DLSS testing program)
    • This not only puts it up in the 3.3TFLOP-4.2TFLOP range for raw shader perf (see the quick math after this list); being narrower but faster-clocked may also have its benefits, as PS5 vs Series X shows when not memory bound. It may also help boost DLSS, as there is room to consider that DLSS's resolve speed is influenced more by clock than by the raw number of Tensor Cores (it's a specific part of the pipeline, so the faster it can be clocked through, the better).
  • 12GB of low-latency LPDDR versus the 4GB of high-latency GDDR6, at around the same bandwidth.
    • Yes, that 12GB is divvied up between the CPU, GPU, and OS, but with a worst-case OS size of 2GB and 4GB for the CPU, that's 6GB for the GPU - and assuming Nintendo keeps a slim OS, that could be up to 7GB for the GPU with 4GB still going to the CPU (which would be well more than double what the OG Switch had for CPU tasks).
      • This has a major effect, as DLSS can increase memory usage as much as it saves. If you use DLSS to 4K, you are still fitting 4K frames into the memory buffer, so the 4GB of the 2050M is 100% being overwhelmed and paging out to system memory.
    • Another interesting consideration: both are ~100GB/s (102.4GB/s for T239 and 112GB/s for the 2050M), but the massive latency difference will probably have a big impact on memory performance - it's the same bandwidth, but T239 can access it in roughly half the time.
      • This matters because of how latency-sensitive modern GPU architectures are; it's a major reason RDNA2 has Infinity Cache on desktop and why NVIDIA flooded Lovelace with L2 cache - it's all to lower effective latency.
      • We know that ray tracing is a latency-sensitive task in isolation, but the whole GPU may perform better versus its ""closest"" counterpart due to the massive difference in latency between them.
      • Latency is reduced even further outside of memory, because this is an SoC rather than a GPU having to communicate with the CPU over a long PCIe trace - the GPU and CPU are right next to each other.
  • A very light OS and a low-level API
    • Windows and DirectX are resource hogs, and their high-level operation worsens things, as they can't access the GPU as efficiently as a properly deployed use of Vulkan or custom APIs like NVN/NVN2/whatever PS5 uses.
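To make the clock bullet above concrete, here's the standard Ampere FP32 arithmetic (128 FP32 lanes per SM, 2 FLOPs per fused multiply-add) applied to T239's 12 SMs. The SM count comes from the Nvidia leak; the clocks (the 660MHz handheld figure floated earlier in the thread and the ~1.1-1.38GHz docked figures above) are rumors, so treat the outputs as ballpark numbers only.

```python
# FP32 throughput for an Ampere GPU: SMs * 128 cores/SM * 2 FLOPs (FMA) * clock.
def ampere_tflops(sms, clock_ghz):
    return sms * 128 * 2 * clock_ghz / 1000

for clock in (0.66, 1.10, 1.38):   # rumored handheld / docked / max docked clocks
    print(f"12 SMs @ {clock:.2f} GHz -> {ampere_tflops(12, clock):.2f} TFLOPs")
# ~2.0, ~3.4 and ~4.2 TFLOPs, i.e. the 3.3-4.2 TF docked range mentioned above
```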

We so back.

As for RT/DLSS vs PCs, I'm optimistic. Not a huge percentage of the PC gaming population has cards powerful enough to run these features, so devtime spent on them is no doubt limited.

In contrast, the Switch 2 will be a closed box where every single user is running the exact same hardware. The possibilities for RT/DLSS open up there in a way that's different to PCs - loads of devtime can be spent on features that will actually be seen/utilised by everyone.
 
I would really like an opinion from @Thraktor about CPU clocks, since he makes very good analytical graphs of wattage on 4N for the GPU.

My opinion is that I don't really know, unfortunately.

The analysis on the GPU side is a bit easier, because Nvidia's GPUs aren't really built with ultra-low power consumption in mind. Even in the most power-constrained laptop GPU you're unlikely to see any Ampere graphics card running under 1GHz in game (they advertise lower "base clocks", but in practice always run well above them). So, the particular thing I was looking at on the GPU side is "what's the lowest clock they could use on Samsung 8N before power efficiency drops off?", and that clock was easy enough to figure out due to Nvidia's power tools for Orin giving us the power curves. I could also see that a 12SM GPU would consume way more power at those clocks than the original Switch GPU did, which is why I don't think 8nm is likely.

The issue with trying to do the same on the CPU side is that ARM's A78 CPU cores are designed with ultra-low power consumption in mind. While the intended power draw of an Ampere SM is somewhere in the single-digit Watts, the intended power draw of an ARM A78 is somewhere in the hundreds of milliwatts, or about a factor of ten lower. They're designed to be able to hit much lower power draw than Switch 2 would require, so we can't really make any estimations based on minimum viable clocks. There will still be some minimum clock below which they can't reduce voltage, and therefore there aren't any efficiency gains to clocking lower, but that will be much lower than any clock Nintendo would use, and certainly a lot lower than the 1GHz clock they used on Switch's CPU, even on 8nm.

There are some SoCs which use A78 cores on TSMC's 5nm/4nm processes (a few from MediaTek, at least), but unfortunately I haven't come across any power measurements from them. If someone did pull power curves from one of these SoCs we could use them to make an educated guess about what clocks Nintendo might choose to use, but without it it's a bit of a stab in the dark.
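For what it's worth, if power curves for A78 cores on TSMC 5nm/4nm did surface, the estimation would look something like the sketch below. Every number in it is invented purely to show the shape of the calculation (per-core power at each clock, times eight cores, checked against an assumed CPU power budget); none of it is real MediaTek or T239 data.

```python
# Hypothetical per-core A78 power points: clock (GHz) -> milliwatts per core.
# These values are made up for illustration; real measurements would replace them.
hypothetical_curve = {1.0: 150, 1.5: 280, 1.8: 400, 2.0: 520}

CORES = 8              # T239's rumored A78C core count
BUDGET_MW = 3000       # assumed handheld CPU power budget; also just a guess

for clock, per_core in sorted(hypothetical_curve.items()):
    total = per_core * CORES
    verdict = "within" if total <= BUDGET_MW else "over"
    print(f"{clock:.1f} GHz -> {total} mW for 8 cores ({verdict} the assumed budget)")
```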

My own guess is somewhere around 1.7-1.8GHz, although that's very much just a guess off the top of my head.
 
DF's video was pretty disappointing. If leakers said the Switch 2 hardware was capable of running the Matrix demo well, why didn't they just use a 3050 laptop with 6GB of VRAM instead when they saw how poorly the game was running due to VRAM limitations?
The 3050 might have double the bandwidth of the T239, but Nintendo's console will have advantages over the 3050 laptop, such as a shared memory pool and software optimizations.

They also didn't bother trying to equalize the laptop's CPU to match what was already speculated to be the T239's CPU Geekbench score (based on the publicly available Orin devkit scores). I know, entirely different architectures, and even Switch 1 ports managed to perform well with the TX1 CPU clocked at a mere 1GHz. But my point is that the results found in the video might have been carried a bit by the CPU (especially considering how modern the one they picked is).
The 3050 6GB has a higher TGP and higher bandwidth. The RTX 2050 is really the best apples-to-apples comparison because it can be clocked lower to hit a certain TFLOP figure (3 TFLOPs), which is basically the floor of expectations for Switch 2 docked. The RTX 2050 also has comparable bandwidth.

That's why the RTX 2050 was chosen in favor of the RTX 3050 6GB and 3050 4GB Max-Q.
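As a quick illustration of the "clocked lower to hit 3 TFLOPs" point: the RTX 2050 has 2048 CUDA cores, so the clock needed for a given FP32 target is just the throughput formula rearranged. The 3 TFLOP target is the floor figure from the post above.

```python
# Clock (MHz) an RTX 2050 (2048 CUDA cores) needs to hit a target FP32 figure.
def clock_for_tflops(target_tflops, cuda_cores=2048):
    return target_tflops * 1e12 / (cuda_cores * 2) / 1e6

print(f"{clock_for_tflops(3.0):.0f} MHz")  # ~732 MHz, close to the ~750MHz downclock DF used
```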
 
Both the PS4 and Xbox One were 28nm; those were the current consoles at the time the Switch was released.
Both PS4 and Xbox One had switched to a 16nm FinFET process in 2016 with PS4 slim, PS4 Pro, and Xbox One S.
Xbox 360 jumped to 45nm in 2010 with Xbox 360S, and Xbox One launched at 28nm, next to Wii U's 45nm in 2012. 3DS was on a 45nm process in 2011, to Vita's 32nm.

Nintendo Switch launched on a 20nm planar process in 2017, jumping to 16nm FinFET in 2019 with the V2 and Lite.

It launched on a larger node despite being a handheld, so I wouldn't say Nintendo has a history of being persnickety about nodes.

I still think 4N is extremely likely; it was prototyped and sampled alongside 4N Lovelace chips, after all, among a small mountain of other information pointing in such a direction.
 
Has any PC game actually used DLSS concurrency?

Edit: it sounds great in theory. "Free" DLSS for an extra frame of latency; however, theory and practice are often different things.

At >1440p, DLSS has mostly only cost 0.5ms to 1ms on my 3080, so I have to imagine that no dev thought it was worth the extra dev time to do this instead of just replacing their TAA with DLSS.

But we will (maybe? Not sure actually) see if it's possible I guess in the future.
 
At >1440p, DLSS has mostly only cost 0.5ms to 1ms on my 3080, so I have to imagine that no dev thought it was worth the extra dev time to do this instead of just replacing their TAA with DLSS.

But we will (maybe? Not sure actually) see if it's possible I guess in the future.
It kind of does feel like a feature that's made for Nintendo, because arguably no (or very few) other RTX cards exist where DLSS runtime is a serious issue.
 
This is probably going to be a bit out there as a post, but I had a small realisation when looking through oldpuck's massive Digital Foundry note:
[In reference to A Plague Tale: Requiem's findings]: (...) the GPU isn't all that matters. Plague Tale is rough on the GPU, sure, but it's famously CPU limited. Pairing even this weak GPU with a modern CPU and storage solution, and suddenly 9th gen exclusives become viable
This quote got me thinking a bit. How many current games are still releasing for PS4/Xbox One? I decided to look at games that released or are due to release this year and found... quite a lot. There are naturally exceptions to this, mainly console-exclusive titles like Final Fantasy 16, Spider-Man 2, Hi-Fi Rush (edit: this game runs on a laptop 1050 Ti), Redfall (edit: badly optimised) and Starfield, but there are a lot of notable releases still making it to PS4/XBO despite being very taxing. God of War: Ragnarok from last year is a pretty blatant example, but so are Resident Evil 4, Like a Dragon: Ishin/The Man Who Erased His Name/Infinite Wealth, Assassin's Creed Mirage, Lies of P, Armored Core 6 and so on.

So, I started to ask, "Why are some games not releasing on PS4?" The easiest exclusives I noticed were Spider-Man 2, Dead Space Remake and Baldur's Gate 3. It doesn't take a genius to realise that all three of those games are taxing graphically, but the other problem has to do with the SSD.
(Gamespot interview with Dead Space Remake devs): "Campos-Oriola went on to mention that the new SSDs in the PS5 and Xbox Series X|S consoles allow the game to "load and unload really fast." There won't be any loading screens, it seems.

"Our intention is to offer a fully unbroken experience, it will be an uninterrupted sequence shot, from the start screen to the end credit, without interruption," the developer said"

There are also some other cases of "we didn't want to be restricted". Alan Wake 2 and Forspoken are both notable for their use of ray tracing. (Delta note: this is the only time I'll ever compare Forspoken to Alan Wake 2; I feel dirty just talking about it.) Final Fantasy 16 is also in a similar boat (according to Wikipedia; I can't really check the source because it's from Famitsu and Google Translate is jank), but I don't know enough about the game to comment on why.

So, how would the Switch 2 counteract this?

Well, the Switch 2 has ray-tracing capabilities on par with the PS5 and can still reach 30fps, in addition to being able to achieve fast loading times (see: Breath of the Wild tech demo)...


So this is basically a long-form way of saying that we probably shouldn't worry about a lack of third parties on Switch. There are loads of games still coming to PS4, there are a lot of games where the limitations come down to a specific feature of the PS5/XSX, and the Switch 2 doesn't limit a game's ambitions, especially with the UE5 games that are up and coming (according to the DF team). I'm also happy to say that Nintendo's ambitions will likely be accomplished given the tools they're being given with this device, especially since most of Nintendo's games don't strive for realism as much as they strive for gameplay.
 
Back then, yeah, there was a reason for that.

That said, if you mean DLSS 2.0... that was around the time of Drake's conception. Though whether it's because of Drake or because 1.0 was garbo - probably the latter. But Drake support was a nice bonus.
Pretty sure Nvidia would have continued to improve DLSS without Nintendo.

Will be interesting to see if DLSS concurrency finds a new life on Drake, even though its original intended use never happened.
 
Back then, yeah, there was a reason for that.

That said, if you mean DLSS 2.0... that was around the time of Drake's conception. Though whether it's because of Drake or because 1.0 was garbo - probably the latter. But Drake support was a nice bonus.

They're talking about running DLSS one frame after the GPU finishes its work, so that the CUDA cores and tensor cores can run at the same time.
 
This is probably going to be a bit out there as a post, but I had a small realisation when looking through oldpuck's massive Digital Foundry note:

This quote got me thinking a bit. How many current games are still releasing for PS4/Xbox One? I decided to look at games that released or are due to release this year and found... quite a lot. There are naturally exceptions to this, mainly console-exclusive titles like Final Fantasy 16, Spider-Man 2, Hi-Fi Rush (edit: this game runs on a 1050 Ti), Redfall (edit: badly optimised) and Starfield, but there are a lot of notable releases still making it to PS4/XBO despite being very taxing. God of War: Ragnarok from last year is a pretty blatant example, but so are Resident Evil 4, Like a Dragon: Ishin/The Man Who Erased His Name/Infinite Wealth, Assassin's Creed Mirage, Lies of P, Armored Core 6 and so on.

So, I started to ask, "Why are some games not releasing on PS4?" The easiest exclusives I noticed were Spider-Man 2, Dead Space Remake and Baldur's Gate 3. It doesn't take a genius to realise that all three of those games are taxing graphically, but the other problem has to do with the SSD.


There are also some other cases of "we didn't want to be restricted". Alan Wake 2 and Forspoken are both notable for their use of ray tracing. (Delta note: this is the only time I'll ever compare Forspoken to Alan Wake 2; I feel dirty just talking about it.) Final Fantasy 16 is also in a similar boat (according to Wikipedia; I can't really check the source because it's from Famitsu and Google Translate is jank), but I don't know enough about the game to comment on why.

So, how would the Switch 2 counteract this?

Well, the Switch 2 has ray-tracing capabilities on par with the PS5 and can still reach 30fps, in addition to being able to achieve fast loading times (see: Breath of the Wild tech demo)...


So this is basically a long-form way of saying that we probably shouldn't worry about a lack of third parties on Switch. There are loads of games still coming to PS4, there are a lot of games where the limitations come down to a specific feature of the PS5/XSX, and the Switch 2 doesn't limit a game's ambitions, especially with the UE5 games that are up and coming (according to the DF team). I'm also happy to say that Nintendo's ambitions will likely be accomplished given the tools they're being given with this device, especially since most of Nintendo's games don't strive for realism as much as they strive for gameplay.
I agree on the whole, though I think the PS4 comparison remains unfavorable at best. This isn't just a handheld PS4 with some modern bells and whistles; at the end of the day, it's a next-gen console, with all its features, then a scoop more, crunched down (mainly sacrificing raw speed) to fit in a handheld.

As has been rumoured, come next year, "coming to Xbox Series X|S, PS5 and Nintendo Switch 2" is likely to be extremely common, perhaps even the default if Nintendo has pulled the right strings, demoed to the right people and had Nvidia provide the right tools.
 
DF's 18.3ms figure was a best-effort attempt, since they don't have a way to measure it properly, and it also doesn't align well with Nvidia's documentation for stronger cards, so it might not be accurate...

But if it's true, then it's too funny (and scary) how stars are even more aligned than I thought.

- Work started right on time for it to be designed with DLSS in mind, not to mention it being the first Nvidia SoC designed specifically for Nintendo.

- ~3.4TF seems like the sweet spot for LPDDR5 bandwidth, if we look at the bandwidth-per-TF ratio of Ampere cards (quick sanity check after this list).

- And now ~3.4TF seems about the minimum you need for DLSS 4K at under 16.6ms.

- ~3.4TF is also 12SM @1.1GHz, which is roughly twice what we expected the peak efficiency to be, aligning with best battery life for handheld. Of course, it's way more likely they chose the count and clocks after deciding the dock target than this to be a coincidence, but still.

- The 2020 tech (Ampere, A78, 5nm) is still very comparable with current bleeding edge tech and allows that ~3.4TF in a tablet form factor
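Here's the quick sanity check referenced in the bandwidth point above: bandwidth divided by FP32 throughput for the rumored T239 docked figures, next to a couple of desktop Ampere cards. The desktop numbers are approximate boost-clock specs; the T239 numbers are the rumored values from this thread, so treat this as a rough comparison only.

```python
# GB/s per TFLOP: a crude way to see whether ~3.4TF is "balanced" for ~102GB/s.
cards = {
    "T239 docked (rumored)": (102.4, 3.4),
    "RTX 3060 (approx.)":    (360.0, 12.7),
    "RTX 3070 (approx.)":    (448.0, 20.3),
}
for name, (bandwidth_gbs, tflops) in cards.items():
    print(f"{name}: {bandwidth_gbs / tflops:.1f} GB/s per TFLOP")
# T239 lands around 30 GB/s per TF, in the same neighborhood as desktop Ampere.
```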
 
I agree on the whole, though I think the PS4 comparison remains unfavorable at best. This isn't just a handheld PS4 with some modern bells and whistles; at the end of the day, it's a next-gen console, with all its features, then a scoop more, crunched down (mainly sacrificing raw speed) to fit in a handheld.

As has been rumoured, come next year, "coming to Xbox Series X|S, PS5 and Nintendo Switch 2" is likely to be extremely common, perhaps even the default if Nintendo has pulled the right strings, demoed to the right people and had Nvidia provide the right tools.
Oh definitely, but I was mainly operating with the knowledge that people are still doubting the Switch 2's potential power. Even in the worst case scenario, the hard and fast facts are that a fuck load of games can run on PS4 already with a bit of elbow grease, and even more can run on the Switch 2 thanks to the toolset that Nvidia itself provides.

Not only are we getting games, but current-gen releases that look stellar on a small screen and still look great on a bigger one. And that's before considering Nintendo's mentality on how they make games. If nothing else, Nintendo games will look stunning and play well on this device.
 
I'm starting to truly believe that it's more along the lines of PS4 Pro when docked. DF's video did nothing but reinforce that belief, and I've been trying to tell people for the longest time to keep their expectations at that level. Then someone blabbed about the Matrix demo and that was that 🤣
 
If the announcement is truly to be on the 6th, and Nintendo keeps the time of day consistent with the Switch reveal in 2016, the announcement should happen at 5:30 PM PST, 8:30 PM EST tonight.
Welp, here's to hoping. Not going to expect it to happen, but if it does... hoo doggy that'll be funny.

I'm not staying up late for it this time though. I don't want to annihilate my sleep schedule for the second time in a week.
Crazy to think the end could be near.
Probably best not to get your hopes up yet. To quote a pony-tailed guy with a red sword that conceals another sword, "This isn't the end, this is the decider!"
 
I'll never understand how this one barely-there tech that costs more money, uses more power and requires a whole separate bus configuration is constantly seen as preferable to another barely-there tech that costs less, uses less power, would utilize the exact same bus setup as the most likely internal storage, is royalty-free, can leverage existing (and booming) eUFS production and, y'know, actually has devices that can use it, paltry sum as that may be and at its original spec.

For a device that is already using eUFS for internal storage and does not require external storage to outclass it, but does need to vaguely match it, UFS Card 3.0 is the logical design choice between two standards both lacking in consumer and equipment-manufacturer adoption, as well as consistent production by card makers (something Nintendo could resolve with some phone calls and a contract signing with SanDisk for either of them).

Meanwhile, CFexpress cards (the more likely contender than SD Express, because there's actually plenty of equipment that uses them) also gobble power, are bloody expensive and won't outdo UFS Card 3.0 at a similar size until the CFexpress 4.0 cards arriving this year.

I get that the SD brand enjoys familiarity among consumers and thus engenders a more favourable outlook for SD Express, I do, but it's probably time to give up the ghost now; it's not the ideal choice, and brand familiarity does not override that.
SD Express's high power consumption looks to correlate with trying to pull peak speeds, which Drake doesn't have to do, especially with the new minimum speed tiers. That's been my biggest caveat with the whole "uses too much power" thing: just go slower.

Embedded UFS isn't being reflected in external UFS, which brings us back to square one. microSD and external UFS can be made in equal measure, but while there are no UFS slots out there, microSD Express can at least be used in non-Express slots (just at slower speeds).

CFexpress is widely used, but it's bulkier than microSD or external UFS. The communication protocol is the same as SD Express anyway, so it comes down to which card reader is more viable.

Maybe you should give up the ghost on equally non-existent external UFS and the bulky CFexpress cards. But I don't tell anyone that, because they all come with caveats. Shit, I still posit that Nintendo sticks with regular microSD cards, since there's still a good deal of speed left to pull from them now that they won't be hobbled by a slow CPU. All options are on the table; throwing any one out is dumb.
 
I'm starting to truly believe that it's more along the lines of PS4 Pro when docked. DF's video did nothing but reinforce that belief, and I've been trying to tell people for the longest time to keep their expectations at that level. Then someone blabbed about the Matrix demo and that was that 🤣
+ a better CPU and a much more modern GPU.

And if GPU concurrency ends up being a thing (treating it as an if at this point), the results from DF would be very different.
 
SD Express's high power consumption looks to correlate with trying to pull peak speeds, which Drake doesn't have to do, especially with the new minimum speed tiers. That's been my biggest caveat with the whole "uses too much power" thing: just go slower.

Embedded UFS isn't being reflected in external UFS, which brings us back to square one. microSD and external UFS can be made in equal measure, but while there are no UFS slots out there, microSD Express can at least be used in non-Express slots (just at slower speeds).

CFexpress is widely used, but it's bulkier than microSD or external UFS. The communication protocol is the same as SD Express anyway, so it comes down to which card reader is more viable.

Maybe you should give up the ghost on equally non-existent external UFS and the bulky CFexpress cards. But I don't tell anyone that, because they all come with caveats. Shit, I still posit that Nintendo sticks with regular microSD cards, since there's still a good deal of speed left to pull from them now that they won't be hobbled by a slow CPU. All options are on the table; throwing any one out is dumb.
Yeah, unless they want to force installs from the Game Card, 300MB/s is probably the target minimum raw read speed, given the Game Card interface maxes out there and we have evidence they're sticking with its MMC interface. Meanwhile, microSD UHS-II is available with 300MB/s read speeds, and internal storage at that speed or above is readily available and quite cheap. As far as I know, UE5's minimum speed target is 1GB/s, which could be possible at 300MB/s reads with effective compression. I mean, I doubt the FDE would exist if it were unnecessary and/or too slow to achieve Nintendo's goals.
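To put a rough number on the compression point above: turning 300MB/s raw card reads into the often-quoted ~1GB/s effective streaming figure implies an average compression ratio in the ballpark of 3.3:1, which is the sort of work a hardware decompression block like the FDE would be there to do. The two inputs are just the figures from the post above.

```python
# Average compression ratio needed to turn raw reads into an effective stream rate.
raw_read_mb_s = 300            # Game Card / UHS-II ballpark from the post above
effective_target_mb_s = 1000   # often-quoted UE5 streaming target

print(f"required ratio ~ {effective_target_mb_s / raw_read_mb_s:.1f}:1")  # ~3.3:1
```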
 