• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

With all respect to MVG, he is wrong about backwards compatibility. He is relying on an appeal to authority. SciresM said it won't work because the drivers are part of the software package, and instead of fact checking, he's taking SciresM's word for it. If you actually research it and start to understand the UDA, you learn that the drivers being part of the software package doesn't really matter. The UDA works in such a way that it has unified instructions, microcode, and so on meant specifically to make these issues a non-issue. You don't need to update the drivers. The UDA will handle that stuff, almost like it was made for that purpose...

Here is a twitter thread where I shared various proofs that the UDA is an integral core component of Nvidia's hardware AND software architecture. There is no reason to assume or believe that the UDA would have been removed from the Tegra SoCs. It has been a core component of Nvidia's architecture for 20 years or so. It's in the microcode, the hardware, and the drivers. It would be more difficult to remove than it would be worth; they would need to redesign almost everything to account for it being missing. Ffs, it uses unified instruction sets for the very purpose of not having compatibility issues.

The UDA uses a unified instruction set across all Nvidia hardware and drivers. This means that a lot of the potential issues are already mitigated. Newer instructions would not be usable by older hardware, but part of the UDA is obviously to give older hardware a process for handling instructions that are too new. How it specifically does that, I haven't dug into deeply. The point is that Nvidia figured this all out some 20 years ago, and they're ahead of MVG and all of us.

Basically, backwards compatibility should be guaranteed. If Nintendo doesn't allow it, it's because of greed, not technical reasons. Forwards compatibility, though, is not guaranteed. That becomes trickier; it really depends on Nintendo and devs there. Odds are we will be able to play OG Switch games on the New Switch/Switch Home*, but new games won't be playable on the OG Switch models using the X1. It will take a lot more work from Nintendo and devs to have new games work on the older console. But then again, there are also advantages to it. That factor truly depends on Nintendo's decisions and thinking.

Having old games work on the new Switch is much easier than making new games run on all models. Devs need to do extra work regardless of what Nintendo does. Even if Nintendo allows forwards compatibility in a technical sense, it is very possible that devs won't put in the extra work to make it happen. With the UDA, backwards compatibility should require only that precompiled shaders are available, and that's relatively easy and simple.

*It is my personal speculation that Nintendo will drop a dedicated home console that pairs with existing Switches. The Switch will work like a Wii U tablet when paired with the Switch Home. This is based on only my personal intuition and observing how Nintendo behaves. It feels like a very Nintendo thing to do to me. This would be in addition to a New Switch(Super Switch?) portable/hybrid model. This way they can sell people a dedicated and more powerful home console, keep it in the "Switch Family" to pad those sales numbers, and not invalidate the concept & existence of hybrid/portable Switches. But that's a whole different discussion...

Images:

[image] Source: http://docplayer.net/63348968-Technical-brief-quadro-vs-geforce-gpus-features-and-benefits.html

[image] Source: https://patents.google.com/patent/US8154554B1/en

[image] Source: https://patents.google.com/patent/US8154554B1/en
 
It's not obvious that all of the shaders could be reliably identified via static analysis, and you have to deal with the fact that some of the shaders aren't precompiled, but would seemingly be compiled with an old compiler shipped with the game.

Ultimately I think some solution that can work at runtime will probably be required to cover the full library.
It really depends. Maxwell and Ampere do share binary compatibility if the CUDA core was kept above a certain version, and it's possible that this was maintained. A lot of the differences in the ISA between these GPUs were actually in legacy instruction sets, and it is also quite possible that Drake was customized to add the ISA functions needed for Switch games. The reality is that not every function is used on a chip; most go unused. And while 4k+ games is a large compatibility hurdle, full compatibility likely isn't the target anyway: devs can patch games that they want to continue to sell that do not work, and ~90% compatibility shouldn't take too much work.

There is also the solution of literally just adding 300M transistors to the SoC for 256 Maxwell shaders, and switching over to them for perfect compatibility.
 
I suppose you're right - my thinking was that Nintendo can't control what 3rd parties do, and if they want to manage battery life, disabling the cores prevents their use entirely


(Caveat - @ILikeFeet is the resident RT expert here)

Imagine a picture of a basketball. If you're not American you might not have played with a basketball but they're heavily dimpled so they're easy to grip onto.

Now imagine rendering that picture as a texture in a video game. And imagine that, because of the resolution of the game, you're going to lose some detail. That dimpling gets lost, or turns into a low res mush.

What if the player steps very, very slightly to the left? It's still a low res mush, but a different low res mush. Why? Because all those dimples are sub-pixel detail. The tiny curves and shadows have detail that is smaller than a single pixel on the final screen, so when the player moves, some of the detail gets captured - sampled - by the new camera angle, and other detail gets lost. If you've ever been playing a game and something like a fence or trees in the distance seem to fizz, this is why. An edge of a leaf is suddenly appearing, but the tiny, 1 pixel branch which connects it to the tree vanishes.

One of the things DLSS does is keep that sub-pixel detail from previous frames. In fact, developers introduce tiny, unnoticeable camera jitter every frame, so even if the player is standing still, DLSS sees new detail every frame. In the case of the basketball, that gives DLSS a full picture of all those dimples. This is the super sampling part of Deep Learning Super Sampling.
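
For anyone curious what that "tiny, unnoticeable camera jitter" looks like in practice, here's a minimal sketch, not taken from any engine or the DLSS SDK: a low-discrepancy Halton(2,3) sequence is one common choice for generating sub-pixel offsets that get folded into the projection each frame, so even a stationary camera samples slightly different positions over time.

Python:
# Illustrative sketch of per-frame sub-pixel camera jitter for a temporal
# upscaler. Not DLSS SDK code; the Halton(2,3) sequence is simply a common
# choice of low-discrepancy pattern.

def halton(index: int, base: int) -> float:
    """Return the index-th value of the Halton sequence, in [0, 1)."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def jitter_offset(frame: int, render_w: int, render_h: int) -> tuple[float, float]:
    """Clip-space offset to fold into the projection matrix this frame."""
    i = (frame % 8) + 1                      # cycle through 8 jitter positions
    jx = halton(i, 2) - 0.5                  # sub-pixel offset in [-0.5, 0.5)
    jy = halton(i, 3) - 0.5
    return 2.0 * jx / render_w, 2.0 * jy / render_h

for frame in range(4):
    print(frame, jitter_offset(frame, 1280, 720))

The upscaler is told each frame's jitter so it can line those samples back up when it accumulates them into the higher-resolution output.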

Now, let's add a Ray Traced reflection of that basketball texture. Ray Tracing draws lines (rays) between light sources and the various objects in a scene. When a ray hits an object it samples the color at that pixel, and then carries that color data along the rest of the ray's path. This emulates how light takes on the color of things it bounces off. In the case of a reflection, you take all the bounced colors off the basketball texture, and apply them to the reflective surface. Ta dah! Ray Traced reflection.
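
As a toy illustration of that "the ray carries the colour of what it hits" idea (none of this is a real RT API; toy_trace just pretends the basketball floats above a mirrored floor):

Python:
# Toy, single-bounce version of a ray-traced reflection. Purely illustrative.

def reflect(d, n):
    """Reflect direction d about unit normal n (both 3-tuples)."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def shade_mirror_pixel(view_dir, normal, trace):
    """Colour of a mirror pixel = colour of whatever the bounced ray hits."""
    bounced = reflect(view_dir, normal)
    return trace(bounced)                     # the expensive part: one ray cast

def toy_trace(direction):
    # Rays bouncing upward hit the basketball; everything else hits "sky".
    return (0.85, 0.45, 0.15) if direction[1] > 0 else (0.5, 0.7, 1.0)

# Looking down at the mirrored floor (normal points straight up):
print(shade_mirror_pixel((0.0, -0.7, 0.7), (0.0, 1.0, 0.0), toy_trace))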

Each ray cast is expensive, and more rays mean higher resolution reflections. By default, most games increase the number of rays with increased resolution, and drop them with lower resolution. So far so good. But let's add DLSS to the mix.

The game starts with a 1080p image, and an appropriate number of rays for 1080p. It displays the basketball texture and its reflection on frame one. On frame two, the camera gets jittered, and DLSS begins combining the frames to generate a 4K image. Our basketball gets more and more detailed. But the reflection doesn't.

That's because the reflection's maximum level of detail is determined not by the texture, but by the number of rays you cast. When you move the camera, you get a new angle on the texture, and the texture is higher res than the game is actually displaying, so more sub-pixel detail gets exposed. But the reflection doesn't have that deeper detail to expose.

Developers have two options - keep the reflection low res, or cast a number of rays based on the output resolution, rather than the input resolution, which would give DLSS more detail to work with. And, to (finally) bring this around to [REDACTED] both of those situations favor handheld mode over docked.

When you slow down the GPU for handheld mode, you slow down the tensor cores and the RT cores by the same amount. For RT, as long as the ratio of performance matches the ratio of resolution, then there isn't really a compromise. If you're half as powerful, but running at half the res, you can probably just cast half as many rays, and your RT effects will scale along with the res of the image.

Tensor cores see a similar drop in performance, but DLSS performance isn't linear in the same way. When switching between handheld mode and docked mode, you probably want to change the DLSS scaling factor as well. So a 4x factor (1080p->4K) in docked mode probably becomes something like a 2x factor (540p->720p) in handheld mode.

Okay, so remember, devs have two options - run RT at the input resolution or the output resolution. If you choose input resolution, then in the handheld case RT is half the resolution of the final image - but in docked mode it is only a quarter. On the other hand, if they choose output resolution, consider the gap between 720p and 4K: that's a 9x difference in pixel count. No way handheld mode is only 1/9th of docked mode's power. Anything that a game can do in docked mode at 4K* should be a breeze at 720p.

* I'm not suggesting that [REDACTED] can do 4k RT reflections, by the way. I'm just saying "whatever RT effects developers choose to enable" at 4K.
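
To put quick numbers on the scaling factors and the input-vs-output gap above (the resolutions are just the ones used in the example, not confirmed targets for any hardware):

Python:
# Back-of-the-envelope pixel counts behind the input-vs-output resolution
# choice above. Resolutions are from the example, not hardware targets.

res = {"540p": (960, 540), "720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}
px = {name: w * h for name, (w, h) in res.items()}

print("1080p -> 4K :", px["4K"] / px["1080p"], "x")             # 4.0x  (docked example)
print("540p -> 720p:", round(px["720p"] / px["540p"], 2), "x")  # ~1.78x (handheld example)
print("720p vs 4K  :", px["4K"] / px["720p"], "x")              # 9.0x pixel-count gap
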
Wow this was a great in depth read thank you!
 
Nobody expected the Switch's level of success; perhaps they thought at the time that the generation after the Switch would be totally different again.

Considering that Nintendo was willing to merge their 2 lines of consoles with the Switch and have nothing to fall back on, I'd say that they very much expected the Switch to be a success. Maybe not to be one of the most successful pieces of hardware of all time, which it now is, but a success nonetheless.
 
if Drake isn't a "Switch 2" in performance, then I'm scared to know what would be, because it would probably break the laws of thermodynamics

Hey, I’m willing to call this upgrade model a “Switch 2” successor console… one that will never lose cross-gen games for its entire lifecycle, with those being its main library. If that makes people feel better :p
 
With all respect to MVG he is wrong about backwards compatibility. He is going off of an appeal to authority fallacy. SciresM said it won't work because the drivers are part of the software package. Instead of fact checking he's taking SciresM word for it. If you actually research it and start to understand the UDA you learn that the drivers being part of the software package doesn't really matter. The UDA works in such a way that it has unified instructions and microcode and so on meant specifically to make these issue not an issue at all. You don't need to update the drivers. The UDA will handle that stuff, almost like it was made for that purpose...
The driver model on HorizonOS is different from the Windows driver model, and doesn't use the UDA or its shader microcode.
 
The driver model on HorizonOS is different from the Windows driver model, and doesn't use the UDA or its shader microcode.
Yeah I remembered the UDA idea being brought up before but there was a reason to believe it wouldn't be the case here.

But is there some custom version of that Nvidia and Nintendo could've implemented here? Or is that something that would have to be visible to developers currently?
 
The driver model on HorizonOS is different from the Windows driver model, and doesn't use the UDA or its shader microcode.
I should clarify, since this was overly brief. UDA is several pieces, and it would be disingenuous to say that Horizon doesn't use it. It would be more accurate to say that it doesn't use the HAL in such a way that the driver stack can be carried anywhere. Games are allowed some raw access to the device, including initializing state in ways that aren't portable.

Games generally ship with raw Maxwell microcode for shaders as well. When a compiler turns a high level language into microcode, it usually passes through something called IR, an intermediate representation, that allows easier optimization. Then that IR is turned into final microcode. Nvidia's shader portability patent essentially works by standardizing that IR, allowing a driver to generate microcode without games having to ship raw GLSL source or requiring a lengthy compile process.

But Nintendo Switch games don't ship that IR, they ship the Maxwell microcode.
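
To make that distinction concrete, here's a purely conceptual sketch of the two shipping strategies; every name in it is invented and no real shader format or driver path is being modelled:

Python:
# Conceptual sketch only: shipping portable IR vs. shipping baked microcode.

def driver_load_shader(blob, gpu_arch):
    if blob["kind"] == "ir":
        # Portable path (the portability-patent idea): the driver lowers a
        # standardized IR to whatever GPU is actually installed.
        return f"compile IR -> {gpu_arch} microcode at load time"
    if blob["kind"] == "maxwell_microcode":
        # Switch-style path: the game already contains final Maxwell microcode,
        # so newer hardware needs a translation step (or hardware compatibility).
        if gpu_arch == "maxwell":
            return "run as-is"
        return f"needs Maxwell -> {gpu_arch} translation"

print(driver_load_shader({"kind": "ir"}, "ampere"))
print(driver_load_shader({"kind": "maxwell_microcode"}, "ampere"))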

These are all solvable problems, but they are problems. SciresM isn't a "dataminer", he's the primary developer of the entire Switch homebrew stack. His analysis of what Nintendo will do isn't unbiased, but he is the authority on how HorizonOS works (who doesn't work at Nintendo, anyway). And MVG is someone I consistently disagree with in his analysis, but the guy has written commercial console emulators that ship on Switch. He knows what he's talking about.

Where I think MVG specifically falls down is that his emulation experience is from systems where a solid interpreter implementation is sufficient to get good performance, and which have sprite driven graphics stacks. Tracing/JITing compilers and shader emulation are very different beasts, and while he nails the problem, there are a number of well understood solutions he seems less familiar with.
 
Yeah I remembered the UDA idea being brought up before but there was a reason to believe it wouldn't be the case here.

But is there some custom version of that Nvidia and Nintendo could've implemented here? Or is that something that would have to be visible to developers currently?
Horizon splits the driver into two parts, one of which lives in the game and another that lives in the OS. I am honestly a little fuzzy on what does what there. What I am 100% certain about is that games have some privileged access that UDA drivers don't have, and that games hardcode some Maxwell microcode which uses those elevated permissions to set themselves up. That bypasses the HAL at least in that step, and won't work automagically on new hardware.

I don't think it's a particularly challenging fix, but it will require some work on the OS side to get it there.
 
Yeah I've already seen comments parroting "MVG said T239 has nothing to do with Nintendo" etc.

The NVN2 leak and T239 commits and their implications are basically not talked about anywhere but here. Large news sites I can understand not covering it for fear of legal retaliation (though DF has mentioned it in their articles already) but that doesn't apply to discussion forums. I've read many a comment along the lines of "Nintendo and DLSS? Do they even know what that is?". lmao

I actually wrote a piece (in Dutch) that I want to put on our website. Of course crediting Famiboards as well. Cause I think the speculation with all the proof and noise is quite interesting and exciting to talk about. A summary of what there is to know and what to expect.
 
Yeah I remembered the UDA idea being brought up before but there was a reason to believe it wouldn't be the case here.

But is there some custom version of that Nvidia and Nintendo could've implemented here? Or is that something that would have to be visible to developers currently?
There's nothing stopping them from just implementing all the missing/modified instructions from Maxwell, but I don't think there's any direct evidence they've done that. Driver compatibility is likely a non-issue, since there must be some level of that already for older games to work on the most recent firmware.
I should clarify, since this was overly brief. UDA is several pieces, and it would be disingenuous to say that Horizon doesn't use it. It would be more accurate to say that it doesn't use the HAL in such a way that the driver stack can be carried anywhere. Games are allowed some raw access to the device, including initializing state in ways that aren't portable.

Games generally ship with raw Maxwell microcode for shaders as well. When a compiler turns a high level language into microcode, it usually passes through something called IR, an intermediate representation, that allows easier optimization. Then that IR is turned into final microcode. Nvidia's shader portability patent essentially works by standardizing that IR, allowing a driver to generate microcode without games having to ship raw GLSL source or requiring a lengthy compile process.

But Nintendo Switch games don't ship that IR, they ship the Maxwell microcode.

These are all solvable problems, but they are problems. SciresM isn't a "dataminer", he's the primary developer of the entire Switch homebrew stack. His analysis of what Nintendo will do isn't unbiased, but he is the authority on how HorizonOS works (who doesn't work at Nintendo, anyway). And MVG is someone I consistently disagree with in his analysis, but the guy has written commercial console emulators that ship on Switch. He knows what he's talking about.

Where I think MVG specifically falls down is that his emulation experience is from systems where a solid interpreter implementation is sufficient to get good performance, and which have sprite driven graphics stacks. Tracing/JITing compilers and shader emulation are very different beasts, and while he nails the problem, there are a number of well understood solutions he seems less familiar with.
I get the impression that the homebrew/hobbyist crowd are stuck in the hobbyist emulator mindset, where shader emulation has to be quite complicated and heavy to be able to run anywhere. Nintendo just has to make shaders compiled for one ISA run on a different, but related, one. The problem is not the same, and the solution doesn't have to be either.
 
I feel like Nintendo would follow Samsung and Apple's example and jump a few numbers.


The Nintendo Switch 10 could be that product.


Nintendo could claim it's ten times more powerful than the current Switch (which it would be close to, or even exceed). And it would definitely host far more technically advanced games thanks to its higher-fidelity visuals, which would improve its overall 3rd party support.

10 is a win.
Nintendo Switch 10.

Releasing 10/20/23

My dream scenario btw.
How about 12? That's 2 better.

1. NES
2. GB
3. SNES
4. N64
5. GBA
6. GCN
7. DS
8. Wii
9. 3DS
10. Wii U
11. Switch

Considering that Nintendo was willing to merge their 2 lines of consoles with the Switch and have nothing to fall back on, I'd say that they very much expected the Switch to be a success. Maybe not to be one of the most successful pieces of hardware of all time, which it now is, but a success nonetheless.
I think that's what he's saying. They expected "a success", not "the Switch success" that actually occurred.
 
have nintendo come out and flat out said “y’all the switch 2 ain’t happening, keep dreaming” since they’ve said bloomberg told us a bunch of porkies about the devkits? like generally they’re pretty damn good at damage control so i wonder if in this instance they’re just letting people run riot and cba, or there’s something coming r/tomorrow, which obviously there is, but how r/tomorrow is r/tomorrow? only nintendo knows
 
There's nothing stopping them from just implementing all the missing/modified instructions from Maxwell, but I don't think there's any direct evidence they've done that. Driver compatibility is likely a non-issue, since there must be some level of that already for older games to work on the most recent firmware.

I get the impression that the homebrew/hobbyist crowd are stuck in the hobbyist emulator mindset, where shader emulation has to be quite complicated and heavy to be able to run anywhere. Nintendo just has to make shaders compiled for one ISA run on a different, but related, one. The problem is not the same, and the solution doesn't have to be either.
Yeah, they can just treat the Maxwell ISA as if it were a new unified ISA and go to town. Unlike Yuzu, they don't need to decompile back to GLSL, generate an ubershader for latency reasons, then run that GLSL through a compiler to generate new microcode. They can do a relatively quick transpile which will mostly be "yep, same".
 
How about 12? That's 2 better.

1. NES
2. GB
3. SNES
4. N64
5. GBA
6. GCN
7. DS
8. Wii
9. 3DS
10. Wii U
11. Switch


I think that's what he's saying. They expected "a success", not "the Switch success" that actually occurred.
Hmm, maybe it could be all three at once?

The Nintendo Switch X2!

Nintendo Switch 2
Nintendo Switch 10
Nintendo Switch 12!
 
One of the things DLSS does is keep that sub-pixel detail from previous frames. In fact, developers introduce tiny, unnoticeable camera jitter every frame, so even if the player is standing still, DLSS sees new detail every frame. In the case of the basketball, that gives DLSS a full picture of all those dimples. This is the super sampling part of Deep Learning Super Sampling.

Now, let's add a Ray Traced reflection of that basketball texture. Ray Tracing draws lines (rays) between light sources and the various objects in a scene. When a ray hits an object it samples the color at that pixel, and then carries that color data along the rest of the ray's path. This emulates how light takes on the color of things it bounces off. In the case of a reflection, you take all the bounced colors off the basketball texture, and apply them to the reflective surface. Ta dah! Ray Traced reflection.

Each ray cast is expensive, and more rays mean higher resolution reflections. By default, most games increase the number of rays with increased resolution, and drop them with lower resolution. So far so good. But let's add DLSS to the mix.

The game starts with a 1080p image, and an appropriate number of rays for 1080p. It displays the basketball texture and its reflection on frame one. On frame two, the camera gets jittered, and DLSS begins combining the frames to generate a 4K image. Our basketball gets more and more detailed. But the reflection doesn't.

Because the reflection's maximum level of detail is determined not by the texture, but by the number of rays you cast. When you move the camera, you get a new angle on the texture, and the texture is higher res than the game is actually displaying, so more sub-pixel detail gets exposed. But the reflection doesn't have that deeper detail to expose.
It seems like you know a lot more about this than me so maybe you're absolutely right, but my head is having a hard time getting around it. I don't get why the rays don't get as much advantage from DLSS as everything else.

I know DLSS doesn't work this way, but as a simplified way of thinking about it I'm pretending it's using data from the most recent four frames by newly rendering a quarter of the screen (like Mario Kart multiplayer) each frame. It seems like each quarter would need fewer rays to render its visible portion at equivalent quality to rendering the full screen. And if that's the case, it should hold for a more complicated way of combining frames as well.

Maybe simplified terminology is part of the problem. I know "ray tracing" can mean a lot of things done a lot of ways. Looking at some info now, the way I've been thinking about this is "eye based" or "backwards" ray tracing.
 
The technical talk regarding Switch 2 and MVG’s video has been confusing. Am I understanding this correctly that backwards compatibility IS an issue but can be dealt with? Easily?
I wouldn't say it's an issue. It's something that was built in from the start; they knew what they were doing.

It's not as straightforward as being natively compatible, but it shouldn't be terribly difficult for them to get working.
 
The technical talk regarding Switch 2 and MVG’s video has been confusing. Am I understanding this correctly that backwards compatibility IS an issue but can be dealt with? Easily?
Not easily. But it can be.

The point is that he's acting like it's a definite "no they won't".
 
Implementing BC is hard but so is like, GPU development. All that matrix math 😥
 
I wouldn’t be so sure about that last part, honestly. That feels like another Wii U debacle waiting to happen. If marketing isn’t clear it’s a proper successor, there’s a real chance it won’t sell like one.

Well that’s the thing. It doesn’t have to sell as well as a “gen breaking successor” model is expected to.

If you aren’t indicating to the consumer that this new model is where all development and services are expected to shift in a couple of years… it selling well out of the gate isn’t the point.

As long as the new model keeps software engagement high for years in a way it wouldn’t have had it not existed, it’s a success. This model will keep a bunch of people buying Switch games who would have otherwise gone “eh, the hardware’s too old. Looks too dated. I’m tired of the suffering framerates.”

That’s why I expect it to be priced on the higher side. Cause it doesn’t need to sell well out of the gate like a brand new console does. It only appeals to a segment of the Switch userbase; much of the userbase still doesn’t care that much.

If it sells somewhere between Wii U sales and N64 sales… I think that’s a success. The PS4 Pro, Xbox One, New 3DS, DSi, and Game Boy Color all sold in that spectrum.
 
The technical talk regarding Switch 2 and MVG’s video has been confusing. Am I understanding this correctly that backwards compatibility IS an issue but can be dealt with? Easily?
Ridiculously easy if Nintendo doesn't mind lower margins.

Nintendo used to bundle the previous system's hardware with their newest systems: the DS had the GBA chipset, the 3DS had the DS chipset, and the Wii U had the Wii chipset.

Now, if Nintendo wants to do it entirely with the new chip, it's certainly not impossible either. Considering their close relationship with Nvidia, either Nintendo's engineers will figure it out, or Nvidia's will.
 
Yeah, they can just treat Maxwell ISA as if it were a new unified ISA and go to town. Unlike Yuzu they don't need to decompile back to GLSL, generate an Ubershader for latency reasons, then involve GLSL to generate new microcode. They can do a relatively quick transpile which will mostly be "yep, same"
Yeah, at this point, I think the most likely solution is a fairly "dumb" transpiler that just goes through the shaders line by line and drops in some prepared templates when it hits problematic instructions. That should work just fine so long as they didn't do anything too nasty wrt control flow. The resulting code won't be fast, but all that's really important is that it's correct and doesn't slow things down enough to outstrip the increase in GPU power.
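
Something like the sketch below is what that "dumb" line-by-line pass could look like in spirit. Every opcode and template here is invented purely for illustration; it doesn't reflect real Maxwell or Ampere microcode or any actual tooling.

Python:
# Purely illustrative "dumb transpiler": walk the shader one instruction at a
# time, pass most lines through unchanged, and splice in a prepared template
# when a problematic opcode shows up. Opcodes and templates are invented.

TEMPLATES = {
    "OLD_FANCY_OP": [   # hypothetical Maxwell-only instruction
        "NEW_OP_A r0, r1   # emulate first half of OLD_FANCY_OP",
        "NEW_OP_B r0, r0   # emulate second half",
    ],
}

def transpile(shader_lines):
    out = []
    for line in shader_lines:
        parts = line.split()
        opcode = parts[0] if parts else ""
        if opcode in TEMPLATES:
            out.extend(TEMPLATES[opcode])   # drop in the prepared template
        else:
            out.append(line)                # "yep, same" for everything else
    return out

example = ["MOV r1, c0", "OLD_FANCY_OP r0, r1", "ST [r2], r0"]
print("\n".join(transpile(example)))

As noted above, this only stays simple as long as the problematic instructions don't interact badly with control flow; templates that change instruction counts inside branchy code would need more care.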
 
Is anyone able to just summarize what we know versus what we speculate versus real world comparisons for the layperson to understand?
With all due respect, I still think the summary post by @oldpuck is probably the most technically up-to-date summary of the rumours that we have.
Agreed that the post is probably the best summary of known facts and rumors regarding T239. Interviewing @oldpuck and @Z0m3le may be a great idea as well. One thing that I'd like to add to the aforementioned summary is some indirect evidence of the T239 hardware having been taped out and tested in H1 2022. For privacy reasons, I don't include direct links to the following LinkedIn pages.

Hidden content is only available for registered users. Sharing it outside of Famiboards is subject to moderation.

In addition, others have commented on why certain aspects of Nvidia's Linux kernel code suggest that the T239 hardware was tested in 2022. I'll leave that to people more knowledgeable than me.
 
It seems like you know a lot more about this than me so maybe you're absolutely right, but my head is having a hard time getting around it. I don't get why the rays don't get as much advantage from DLSS as everything else.
No, you got it. You're tripping over my example being simplified, not your understanding.

When you cast rays backwards from the camera, it seems like jittering the camera should give you the extra data DLSS needs. But at multiple steps of the process this goes wrong.

The biggest one is just that RT already introduces a kind of noise that looks identical to the behavior of a jittering camera, so the RT denoiser actually prevents the data from getting to DLSS. The other is that ray tracing introduces unpredictability in which pixels are effectively being sampled.

The inevitable endgame here is to get DLSS to replace the denoiser, but mixing raster and RT effects makes this tricky.
 
Agreed that the post is probably the best summary of known facts and rumors regarding T239. Interviewing @oldpuck and @Z0m3le may be a great idea also. One thing that I'd like to add to the aforementioned summary is some indirect evidences of the T239 hardware being taped out and tested in H1 2022. For privacy reasons, I don't include direct links to the following LinkedIn pages.

* Hidden text: cannot be quoted. *

In addition, others have commented on why certain aspects of the Nvidia Linux kernel suggests that the T239 hardware was tested in 2022. I'll leave that to people more knowledgeable than me.
I find it very interesting that T239 and the 40 series are linked together. Man I know I'm dreaming but DLSS 3 would be icing on the cake
 
Hmm, maybe it could be all three at once?

The Nintendo Switch X2!

Nintendo Switch 2
Nintendo Switch 10
Nintendo Switch 12!
Nintendo Switch X-2

Return of Yuna and Tidus as AI assistants powered by Tensor Cores.


And an awkward laugh as the start up sound.
 
what we know
  • 1536 Ampere cores
    • RT cores and tensor cores included
  • 8 A78 cores
  • 128-bit memory bus
  • hardware decompression block
what we don't know
  • clock speeds
  • memory type, amount, and speed
if you want to do comparisons, you'd have to do a lot of extrapolation. the closest gpu out there is a laptop gpu, the RTX 2050. that's listed up on 3D Mark at least
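
If you do want a rough way to extrapolate, the usual back-of-the-envelope is FP32 throughput ≈ CUDA cores × 2 ops per clock × clock speed. Since clock speed is exactly what we don't know, the clocks below are pure placeholders, not leaks or estimates from anywhere:

Python:
# Rough FP32 throughput extrapolation for a 1536-core Ampere-style GPU.
# The clock speeds here are placeholder guesses; the real clocks are unknown.

CUDA_CORES = 1536
OPS_PER_CORE_PER_CLOCK = 2  # one FMA counted as two FP32 operations

for label, clock_ghz in [("placeholder portable clock", 0.6), ("placeholder docked clock", 1.0)]:
    tflops = CUDA_CORES * OPS_PER_CORE_PER_CLOCK * clock_ghz / 1000.0
    print(f"{label}: {clock_ghz} GHz -> {tflops:.2f} TFLOPS")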

With all due respect, I still think the summary post @oldpuck is probably the most technically up to date rumours we have summarized. It was last updated back in December 2022, but realistically I don't think anything has necessarily changed or popped up beyond the usual cyclical discussions.

No offense, but I agree that the oldpuck summary is probably the best way to go.

But, even better, give credit to someone who is active in the discussion and interview @Z0m3le.

You should consider doing an interview with a thread regular here, and aspiring youtuber, Z0m3le. He has a fantastic grasp on the tech, and perhaps a discussion-style interview on the topic would make for some good content?

Yeah, although I would argue that looking at the RTX 2050, MX570, and RTX 3050 Laptop GPU SKUs isn't super representative, not only because of the power/core differences but also because those GPUs are limited in a way Drake/T239 wouldn't be.

The GPUs listed above are extremely limited by a 64-bit, 4GB framebuffer, even though it is GDDR6. Drake would have some benefits which we can see from, oddly enough, the Steam Deck.
Having a unified memory layout allows developers to target greater allocations of RAM for the GPU if they need it and can trim the CPU allocation enough.

Not only that, but Drake may have 10+GB of RAM, allowing >4GB for GPU allocation even assuming a simple 50/50 split (which won't be the case).

Another thing to note is ray tracing: looking at Steam Deck RT performance, it can actually do RT decently despite how small the GPU is and despite AMD's deficit in RT performance versus NVIDIA.

This is even more interesting considering that the Series S, when it does RT, performs not particularly well despite having a GPU more than double the size.

To the best of my analysis, this comes down to LPDDR memory vs GDDR memory, the former having far lower latency than the latter.
So RT seemingly likes low-latency memory.
And Drake's GPU is an NVIDIA one, so it is already ahead of an equivalently sized AMD GPU using high-latency GDDR memory.

We have to talk about Drake in terms of potential performance, because we do lack some important concrete information, but we can give a good indication of what it should be capable of, and there is some info in the hack that could indicate clocks, which is the key thing missing at the moment.

I'll type up some information for you later, I know some people have suggested interviewing me, however I am in the process of moving, so it's just really difficult to make that happen. I have a lot of downtime at work, so I'll get it to you tomorrow morning and you can ask any questions you might have, just tag this post or DM me.

I'll go over the frequencies found in a DLSS test inside NVN from the hack; I went over them with LiC and I'll give his point of view on them, and I'd suggest you let your audience take in the information. Basically, at face value they sound like Drake's GPU clocks in handheld and docked modes, but there isn't enough context to confirm that's what they are. However, they are also within the estimates we have for the clocks on a 5nm process.

You could certainly interview @Z0m3le! You could use my summary! I also have a job in radio and have a mic setup at home if you wanted to have a discussion about it on video, or even just chat.

I think the most important thing is to get strong visibility on the technical facts, just so that the discussion is at least well informed. It can be frustrating to watch genuinely smart outlets like DF say things that don't track, simply because they are (understandably) not up on the fine points of Linux commits about T239 or whatever.

Edited to add: There are a few purely speculative topics that I'd have to keep off limits, but none of them are hardware related.

Agreed that the post is probably the best summary of known facts and rumors regarding T239. Interviewing @oldpuck and @Z0m3le may be a great idea also. One thing that I'd like to add to the aforementioned summary is some indirect evidences of the T239 hardware being taped out and tested in H1 2022. For privacy reasons, I don't include direct links to the following LinkedIn pages.

* Hidden text: cannot be quoted. *

In addition, others have commented on why certain aspects of the Nvidia Linux kernel suggests that the T239 hardware was tested in 2022. I'll leave that to people more knowledgeable than me.

Just wanted to say thank you. All of you have been incredibly helpful in pointing me in the right direction. I did reach out to Z0m3le and oldpuck via DMs - we'll see what can happen. If Z0m3le wants to come on for a video and just sort of explain it all out, that would be great. I didn't know he was an aspiring youtuber, so it could help get some attention for his channel.

I don't want the video to be SUPER long, because I have a podcast for that. But having a small discussion where one, the other, or both summarize things and help explain it would be wonderful. I have that summary post open now, which is fantastic. Would love to get the video out by the end of the week. Thank you again - I was afraid to ask, but I think a quality video explaining this has some value in reaching new viewers on youtube.
 
Well that’s the thing. It doesn’t have to sell that well like a “gen breaking successor” model is expected to do.

If you aren’t indicating to the consumer that this new model is where all development and services are expected to focus to in a couple of years…it selling well out of the gate isn’t the point.

As long as the new model keeps software engagement high for years in a way it wouldn’t have had it not existed, it’s a success. This model will keep a bunch of people buying Switch games who would have otherwise gone “eh, too old hardware. Looks too dated. I’m tired of the suffering framerates”

That’s why I expect it to be priced higher than not. Cause it doesn’t need to sell well out of the gate like a brand new console does. It only appeals to a segment of the Switch userbase, much of the userbase still doesn’t care that much.

If it sells between the Wii U sales and n64 sales…I think that’s a success. Ps4 pro and Xbox One and n3ds and DSi and Gameboy Color sold in this spectrum.
I think at this point the idea of a Switch Pro coming out is pretty much dead and buried. If the rumours of T239 having awkward backwards compatibility with Mariko are true, then it would be a dumb idea to sell it as a Switch Pro in the first place. And realistically, I think third-party devs are pretty close to just abandoning Switch 1 (i.e. not T239) altogether considering how far behind the hardware is at this point. (2015 mobile chipset in 2023… yikes.) Even first party efforts would be held back by the need to target both T239 and Mariko, unless they do what they did with the New 3DS multiplied tenfold, which I can’t see them doing. So it makes no sense for T239 to be the chip in some PS4 Pro/New 3DS/DSi equivalent.

New hardware selling between the Wii U and the N64, especially off the back of literally the 3rd-most successful console of all time, would be an unmitigated disaster for Nintendo, plain and simple. If the potential consequences of not marketing [REDACTED] as Nintendo’s next big thing are that serious, you bet your sweet ass they’re gonna make sure you know [REDACTED] is their next big thing.
 
Yeah I've already seen comments parroting "MVG said T239 has nothing to do with Nintendo" etc.

The NVN2 leak and T239 commits and their implications are basically not talked about anywhere but here. Large news sites I can understand not covering it for fear of legal retaliation (though DF has mentioned it in their articles already) but that doesn't apply to discussion forums. I've read many a comment along the lines of "Nintendo and DLSS? Do they even know what that is?". lmao
I think that a part of it is enthusiast desktop PC culture kind of bleeding into things a bit.
You know how so much of the focus/attention is on the flagship parts? I think the sheer amount of time spent on those parts also pushes a subconscious bias in their direction when it comes time to think about features. That is, for not an insignificant number of people, the default is to interpret features like DLSS through the lens of being utilized on flagship/high end parts. Some people, but still distinctly a minority, can fight off that bias and think from the other direction (i.e. how useful DLSS can be for the lower end of the power spectrum).

And there's that weird ass belief about how Nintendo won't use X feature or must be arbitrarily outdated by Y time for random ideological reasons.
 
Agreed that the post is probably the best summary of known facts and rumors regarding T239. Interviewing @oldpuck and @Z0m3le may be a great idea also. One thing that I'd like to add to the aforementioned summary is some indirect evidences of the T239 hardware being taped out and tested in H1 2022. For privacy reasons, I don't include direct links to the following LinkedIn pages.

* Hidden text: cannot be quoted. *

In addition, others have commented on why certain aspects of the Nvidia Linux kernel suggests that the T239 hardware was tested in 2022. I'll leave that to people more knowledgeable than me.
Hidden content is only available for registered users. Sharing it outside of Famiboards is subject to moderation.
 
I feel like no one even watched that MVG video and is forming their opinion pieces just by the thumbnail.
I did. Jokey dismissals aside, I don't agree with his skeptical framing, and I don't think he presents all the possibilities for how BC could work; there's not much comparison with how other consoles implement BC, other than a mention of leaked PS5 clockspeed modes. Other folks in this thread have laid out more detailed rebuttals.
 
ARM and Tencent will go into mobile RT with Tencent's SmartGI global illumination solution for Unreal Engine 5


there's a short article that went up about the solution not long ago, with some pretty pictures (and stats turned on!)

[Three ray tracing screenshots from the linked article]

 
I feel like no one even watched that MVG video and is forming their opinion pieces just by the thumbnail.
If you're talking about the video, that's not an MVG video, and secondly, yes I am forming opinions about the thumbnail because I don't care to watch it.
 
If you're talking about the video, that's not an MVG video, and secondly, yes I am forming opinions about the thumbnail because I don't care to watch it.
I think they were referring to the MVG's video from yesterday.
 
I feel like no one even watched that MVG video and is forming their opinion pieces just by the thumbnail.
I watched MVG's video and his option 5, an upclocked TX1, seems to be the option he has convinced himself is going to happen because it is the easiest. He dismisses the idea of T239 actually being used for Nintendo as others have discussed earlier. As I have said before, he puts out good content, I just disagree with his video.
 
I find it very interesting that T239 and the 40 series are linked together. Man I know I'm dreaming but DLSS 3 would be icing on the cake
And 5nm!

I've pointed out a few times that 40 Series and T239 were sampled and developed in tandem. I don't think the Ampere/Ada difference is really all that significant performance wise, but inheriting the node from Ada would help.

DLSS 3.0... that would be madness. Brilliant, incredible madness.
I don't think they'll do it, but I'll be damn amazed if they do. 7/8ths of all pixels being generated rather than rendered. Even at 2 or 3 teraflops, that would have well optimised games looking close to even Xbox Series X: 1080p30, or perhaps even 720p30, rendering, displayed at 4K60 with nothing but some noise and mild controller delay.

It would be a dream come true to see this thing hit 3-4TF and have DLSS 3.0

Of course, teraflops depend on the node. I suspect 5nm more strongly than ever. But if it was made alongside Ada, tested alongside Ada, inherited features from Ada... maybe the OFA from Ada isn't so absurd.
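
For anyone wondering where the 7/8ths figure comes from, it's just the multiplication below, assuming 1080p rendering upscaled to 4K plus DLSS 3-style frame generation; both assumptions are speculation on top of speculation:

Python:
# Where "7/8ths of pixels not natively rendered" comes from, assuming 1080p
# rendering upscaled to 4K plus one generated frame per rendered frame.
# Both assumptions are speculative, not known capabilities of the hardware.

rendered_px = 1920 * 1080          # pixels shaded per rendered frame
displayed_px = 3840 * 2160         # pixels shown per displayed frame
generated_frames_per_rendered = 1  # frame generation: one extra frame per rendered frame

native_share = rendered_px / (displayed_px * (1 + generated_frames_per_rendered))
print(f"natively rendered: {native_share:.3f}")      # 0.125 = 1/8
print(f"upscaled/generated: {1 - native_share:.3f}") # 0.875 = 7/8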
 
Please read this staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.