For the record, I still believe that Tegra 239 is one of the best mobile chips available, and that I would take Nvidia at 8nm over Qualcomm at 7-6nm.
> At least this will give some room for a Mariko-tier upgrade in the future.

What upgrade? There is literally nothing of note which isn't there already.
> We take it for granted, as members of an enthusiast forum, our ability to look past a name and figure out the hardware behind it.
> A name like "New Nintendo Switch", we think "oh, like the New 3DS, which was upgraded 3DS hardware with exclusives". That's not immediately clear to everyone. Heck, when it was first revealed I thought the big deal was improved 3D and a second stick. I didn't even want one because most of my games would look and feel the same. A name like that undersells this device's power.
> Same with "Switch 4K model": "So it has a 4K screen?" Nope. "Oh, so it does 4K for every game?" Nope. We here know about DLSS, that a new chip is required for 4K to be feasible, and that the new chip is a substantial leap. "Why should I buy this? I only play portable / don't have a 4K display." You see these kinds of comments even on enthusiast forums.
> Drake is an upgrade across the board; it is the next-gen Switch. If they give it a low-key name, it's just going to work against them, even if the rest of their marketing is solid. A lot of folks never look past the name.

I agree, but I truly hope they don't go with "New"; they most likely won't anyway.
Wait a moment, so even at the second-lowest estimated GPU clock (handheld mode), Drake, even on 8nm, would be comparable to a Steam Deck at its maximum, power-hungry 15W setting?
> The amount of TFLOPS we are talking about, is it in FP16 or FP32? Thanks.

Always assume FP32 unless someone says otherwise.
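For anyone who wants to sanity-check the TFLOPS numbers being thrown around, the FP32 figure falls out of the standard cores × 2 FLOPs per cycle × clock formula. A rough sketch in Python, using the 1536 CUDA cores from the leaked T239 config; the clocks are purely illustrative guesses, not confirmed figures:

```python
# Back-of-envelope FP32 throughput for an Ampere-style GPU.
# Each CUDA core can retire one FMA per cycle, which counts as 2 FLOPs.
# 12 SMs x 128 cores matches the leaked T239 configuration; the clocks
# below are illustrative guesses, not confirmed numbers.
CUDA_CORES = 12 * 128  # 1536

def fp32_tflops(clock_ghz: float) -> float:
    return CUDA_CORES * 2 * clock_ghz / 1000.0  # GFLOPS -> TFLOPS

for clock_ghz in (0.66, 1.0, 1.125):
    print(f"{clock_ghz:.3f} GHz -> {fp32_tflops(clock_ghz):.2f} TFLOPS FP32")

# FP16 marketing numbers, where quoted, are often double the FP32 figure,
# but the exact ratio depends on the GPU's FP16 path.
```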
Ultimately I think you are right in that the console will be engineered around its biggest bottleneck, that being the memory bandwidth; pushing too far beyond 3 TFLOPS from the GPU doesn't make sense if the memory bandwidth becomes an issue. No node change is going to solve that problem, and even using LPDDR5X would only help a little. So there is a theoretical ceiling for performance based on the above, and they then just use a node that can hit, or get close to, that ceiling at their target battery life.
So it doesn't matter if it's 8nm Samsung or 4nm TSMC, that memory bottleneck still dictates what the new Switch's performance will be like.
This is likely going to remain a bottleneck for this style of hardware for some years too. Unless Nintendo and Nvidia somehow crack reading and writing data to DNA or something.
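To put the bandwidth ceiling into rough numbers, here's a quick bytes-per-FLOP comparison using the figures discussed in this thread (102 GB/s of LPDDR5, roughly 3-3.5 TFLOPS); the LPDDR5X row assumes a 128-bit bus at 8533 MT/s, which is purely hypothetical:

```python
# Rough bandwidth-per-compute comparison. All of these figures are speculative
# thread numbers, and real workloads also hit on-die caches, so treat this as
# a feel for why bandwidth rather than raw TFLOPS is the likely ceiling.
configs = {
    "Drake, 3.0 TFLOPS @ 102 GB/s (LPDDR5)":          (3.0, 102.0),
    "Drake, 3.5 TFLOPS @ 102 GB/s (LPDDR5)":          (3.5, 102.0),
    "Hypothetical, 3.5 TFLOPS @ 136 GB/s (LPDDR5X)":  (3.5, 136.0),
}
for name, (tflops, gb_per_s) in configs.items():
    bytes_per_flop = gb_per_s / (tflops * 1000.0)  # GB/s divided by GFLOPS
    print(f"{name}: {bytes_per_flop:.3f} bytes/FLOP")
```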
The amount of TFLOPS we are talking about, is it in FP16 or FP32? Thanks.
I think they’ll be a little bit more focused on the dramatic performance increase this time, because it enables a bunch of ports of third party games in the way that the New 3DS getting a CPU/RAM bump really didn’t. But there will be more to the new Switch hardware story than just a better SoC, and they’re not ever going to choose a name that suggests that the new SoC is the only or primary improvement. They know a lot of their market does not really care that much about visual fidelity.
> 4K is also not a thing I like, and the usual "Super, Advance" just feel like nostalgia from certain users.

I'm not particularly nostalgic for Super or Advance (wasn't even alive for Super), but they are precedent: the previous two times Nintendo marketed a more capable next-gen device while sharing the brand name of the previous one.
> So Yuzu devs are expected to clown Nintendo? Huh. Neat.

Emulators always, always, always have more flexibility than real machines doing backwards compatibility in a more straightforward way. But also usually far worse compatibility.
> Now "Switch Pro"... tell that to the people trying to get a Nintendo Switch Pro Controller. Do they mean the system or the controller?

I 0% think Pro will be used, but I also think this part matters about as much as people getting the Nintendo Switch confused with the RF Switch used for NES.
> @Anatole this should be an interesting read for you, especially the DLSS part.

Sure, I think this outlet made assumptions that aren't supported by what Nvidia has released. The "pairs of images" thing is confusing people, but I'm confident now that it's calculating the optical flow field between the current frame and previous frames, not between the current frame and the next frame. It's clear from the flow diagram and their description of the inputs.
> It's surely intermediate frames rather than future ones, or else there would be horrible issues any time you panned/rotated the camera quickly, right?

I think if the hardware is fast enough, it can perhaps reduce the appearance of that. However, I do feel like further testing needs to be done to see whether there is any trailing or other defects in what the algorithm produces.
All evidence up until now has pointed to a TSMC node. What has changed that suddenly everyone is so sure of Samsung's involvement?
> All evidence up until now has pointed to a TSMC node. What has changed that suddenly everyone is so sure of Samsung's involvement?

I wouldn't call it evidence, more like reasons for us to believe it's one or the other.
> It's surely intermediate frames rather than future ones, or else there would be horrible issues any time you panned/rotated the camera quickly, right?

Engine motion vectors, the depth buffer, and the optical flow field should theoretically give enough information to handle that transformation except at the edge of the frame, where all the pixels rendered would be brand new. We’ll see how well it works in practice.
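For anyone trying to picture what "use the flow field to handle that transformation" means mechanically, here's a toy sketch. This is not Nvidia's actual frame generation algorithm, just the naive version of the idea: warp the most recent finished frame halfway along the flow field to fake the in-between frame. Note how pixels at the edge of the frame have no source data, which is exactly the hard part mentioned above.

```python
import numpy as np

# Toy flow-based frame interpolation (NOT the real DLSS 3 pipeline):
# synthesize an in-between frame by sampling the latest rendered frame
# half an optical-flow step "back in time".
def warp_half_step(frame: np.ndarray, flow: np.ndarray) -> np.ndarray:
    h, w, _ = frame.shape
    out = np.zeros_like(frame)
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = (xs - 0.5 * flow[..., 0]).round().astype(int)
    src_y = (ys - 0.5 * flow[..., 1]).round().astype(int)
    valid = (src_x >= 0) & (src_x < w) & (src_y >= 0) & (src_y < h)
    out[valid] = frame[src_y[valid], src_x[valid]]
    return out  # pixels whose source falls outside the frame stay black
```

The real thing presumably blends engine motion vectors, depth, and the OFA's flow field, and has to fill in those disoccluded/edge regions, which is the part that's hard to do well.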
> All evidence up until now has pointed to a TSMC node. What has changed that suddenly everyone is so sure of Samsung's involvement?

Nah, there's actually been no evidence pointing to anything other than Samsung 8nm, which is what Orin is on.
Hey, so could DLSS support VR? I know VR needs 4K, so could it allow that?
> Hey, so could DLSS support VR? I know VR needs 4K, so could it allow that?

DLSS is available for VR.
> I feel like outside of some unique experimenting like with LABO, Nintendo is years away from being ready to release something high-end VR related.

Right. DLSS is available to VR headsets that use Nvidia’s product stack (why wouldn’t it be?) but the addition of DLSS does not suddenly turn the new Switch hardware into a compelling VR platform. For…lots of reasons, the largest of which is that I don’t think Nintendo really likes or cares about VR that much.
> Right. DLSS is available to VR headsets that use Nvidia’s product stack (why wouldn’t it be?) but the addition of DLSS does not suddenly turn the new Switch hardware into a compelling VR platform. For…lots of reasons, the largest of which is that I don’t think Nintendo really likes or cares about VR that much.

Not to mention we shouldn't really be expecting a screen resolution that can do VR all that well.
> This is…a really weird thing to say, but an especially weird thing to say after listing a long list of third party games that are not on Switch but saying they don’t count because “the ship has sailed” or whatever. There are notable third party games that are not (yet) on Switch. I don’t really see how that’s a controversial point…?

Also, for my money, Elden Ring would absolutely be a hardware sales driver for the new Switch, especially if DLC is released for it.
> Engine motion vectors, the depth buffer, and the optical flow field should theoretically give enough information to handle that transformation except at the edge of the frame, where all the pixels rendered would be brand new. We’ll see how well it works in practice.

I'm just going by their Cyberpunk picture, but it implies per-frame latency going from 3.6 frames when DLSS 3/RTX is off to 5.6 frames when DLSS/RTX is on. The individual frames themselves are >4x more frequent, though, which would give room for the claim that responsiveness is improved.
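Converting "latency in frames" into milliseconds makes the responsiveness claim easier to see. The frame rates below are made up purely for illustration, chosen only to match the ">4x more frequent" observation:

```python
# Hypothetical conversion of latency-in-frames to milliseconds.
off_fps, off_latency_frames = 22.0, 3.6  # DLSS 3 / RTX off (illustrative)
on_fps, on_latency_frames = 95.0, 5.6    # DLSS 3 / RTX on (illustrative, >4x fps)

print(f"off: {off_latency_frames / off_fps * 1000:.0f} ms")  # ~164 ms
print(f"on:  {on_latency_frames / on_fps * 1000:.0f} ms")    # ~59 ms
```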
> That's a bit unfair. I have always tried to be reasonable with my expectations. I think it's reasonable to be disappointed to get an older node when much better ones are mature and available.

It's the context that you are missing: there is a limit to how powerful the device can be without being very expensive, you are also spreading despair, and 8nm isn't even confirmed yet... I can understand it if the performance could have run away with, say, double, but because the memory bandwidth is limited to 102 GB/s, the 3.2 TFLOPS being discussed here is already twice the Steam Deck's performance before DLSS... It's really good if it's 8nm, and it's even better if it's Samsung 5nm or TSMC, but Drake being as powerful as it is? That's a huge win.
> I'm no expert on APIs, custom processors, architectures, etc., but how likely is it that Nvidia has future-proofed Drake with parts of Lovelace (according to early rumours) and something from DLSS 3.0? Not that DLSS 2.0 isn't already a great thing for a handheld.

Don't get your hopes up.
> All evidence up until now has pointed to a TSMC node. What has changed that suddenly everyone is so sure of Samsung's involvement?

Orin is on 8nm and its power draw was very fat based on Nvidia's docs. Estimates for the PVA and DLA power consumption were pretty small (I did them, and as it turns out I was pretty close!), leaving the GPU eating more power than we thought Switch could reasonably use. @BlackTangMaster has access to an Orin AGX, and got finer-grained power numbers that showed how much of Orin's power draw actually goes to the GPU, and now the power draw looks more reasonable.
> I'm no expert on APIs, custom processors, architectures, etc., but how likely is it that Nvidia has future-proofed Drake with parts of Lovelace (according to early rumours) and something from DLSS 3.0?

I've been going through the recent Linux drops for Tegra, where they've added support for Drake. Drake's OFA driver (the OFA is what DLSS 3.0 uses for its special sauce) is the same as Orin's, unlike some other parts of T239, where it overrides the Orin driver.
> Not to mention we shouldn't really be expecting a screen resolution that can do VR all that well.

There's no reason the hardware couldn't be paired with an external headset, though. The hardware in the Meta Quest is nowhere near as powerful as what we're dealing with here and it still offers a very impressive VR experience.
Everyone, I'm telling you, it's going to be The Nintendo Switch Up.
UPscales games with AI for better performance.
UP to 4K resolution on the TV.
UP to 120 FPS in docked mode.
UP to you how you want to play.
Oh, and my only concern with Drake now is how broke I am going to be buying all of the Resident Evil ports when they inevitably all land...
> At least this will give some room for a Mariko-tier upgrade in the future.

The problem with this is that Nintendo will aim for a lower clock speed as a base for the next system. Further die shrinks will only improve battery life, because the speed cannot be changed anymore for compatibility's sake.
> @BlackTangMaster has access to an Orin AGX, and got finer-grained power numbers that showed how much of Orin's power draw actually goes to the GPU, and now the power draw looks more reasonable.

It's not access to an Orin AGX, it's just access to their power tool software. If you have an account with Nvidia you can also play with it.
What is the difference between Orin and Drake? What is each thing? I thought it was the same (Nvidia graphics chip)
> Interesting Timing Bits: Drake Linux was being developed on software simulation in January of last year. In April, there was a set of Drake-related updates that seem to indicate that actual engineering samples were being produced. In July, the code was branched to consolidate Orin changes for public release so that Drake work could continue. Any further references to Drake in the various public repos since are entirely updates to places where Drake shares Orin's driver but needs a Drake-specific exception (like the cpu-freq updates). This is likely so that there is a One True Source for Orin drivers and Drake dev can simply pull it from upstream, rather than maintaining multiple forks at Nvidia that are constantly cross-merging.

Engineering samples being out in April 2021 is interesting. Either that means devs were working on simulated kits or they were working on analogues. I would bet on there being Orin-based kits before then.
> What is the difference between Orin and Drake? What is each thing? I thought it was the same (Nvidia graphics chip)

Orin is a family of AI chips for AI developer boards. Drake is the code name for Switch 2. They are expected to share a lot of the design.
Orin refers to the series of automotive-oriented systems-on-chip developed by Nvidia.
Drake is a specific system-on-chip developed by Nvidia for Nintendo; as far as we know it's based on Orin but separate from the Orin family.
> What upgrade?

You are correct, of course, but I think we're stuck with that no matter what. Ampere-B has been on Samsung 8nm the entire time, Orin was built on 8nm, and we know T239 shares design with T234. There is likely not a cost advantage to porting to TSMC now over porting to TSMC later, because it's going to be a port either way, as opposed to being built fresh on TSMC.
8N is a dead-end node; there is nothing after it unless they pay hundreds of millions to rebuild it at a different foundry, or on a process with completely different technology at the same foundry.
> Thank you very much for your answer.
> So Drake is a chip based on Orin (Orin is an automotive series of chips).
> Drake's architecture (based on Orin) is Ampere.
> And what does the name T239 refer to then?

T = Tegra, the range of systems-on-chip that Nvidia makes. T234 is the internal name for Orin. T239 is the internal name for Drake. The chip in the original Switch was T210.
> Thank you very much for your answer.
> So Drake is a chip based on Orin (Orin is an automotive series of chips).
> Drake's architecture (based on Orin) is Ampere.
> And what does the name T239 refer to then?

T239 is the internal identifier for Drake. Drake is just a codename.
> Orin is a family of AI chips for AI developer boards. Drake is the code name for Switch 2. They are expected to share a lot of the design.

Not quite: Drake is the code name of the chip in the next Switch, not the code name of the entire device.
> Thank you very much for your answer.
> So Drake is a chip based on Orin (Orin is an automotive series of chips).
> Drake's architecture (based on Orin) is Ampere.
> And what does the name T239 refer to then?

T239 is the Tegra series number for the Drake chip. Basically, T239 = Drake, like T234 = Orin.
> I'm just going by their Cyberpunk picture, but it implies per-frame latency going from 3.6 frames when DLSS 3/RTX is off to 5.6 frames when DLSS/RTX is on. The individual frames themselves are >4x more frequent, though, which would give room for the claim that responsiveness is improved.

Oh sure, I had thought you were talking about the image quality breaking down with camera rotation. I don't know enough about how the scheduling works to say anything on how the latency shakes out.
> It's surely intermediate frames rather than future ones, or else there would be horrible issues any time you panned/rotated the camera quickly, right?

If it were just creating in-between frames from two finished frames, it would be little different than a higher quality version of what most TVs already offer. It would also necessarily add image latency, since you'd need to render frame A to create in-between frame A-1, and hold back frame A another 1/60 or 1/120 of a second or whatever the case may be.
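A quick sketch of that hold-back cost, with purely illustrative frame rates: if the interpolator strictly needs finished frame A before it can show the in-between frame, frame A ends up displayed one output interval later than it otherwise would be.

```python
# Illustrative only: extra display latency from holding back frame A so the
# interpolated frame A-1 can be shown first.
native_fps = 60.0                   # hypothetical rendered frame rate
output_fps = 2 * native_fps         # one generated frame per rendered frame
hold_back_ms = 1000.0 / output_fps  # frame A waits one output interval
print(f"Added latency from holding back frame A: {hold_back_ms:.1f} ms")  # ~8.3 ms
```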