The problem, as @ILikeFeet pointed out, is that there's conflicting information in the Jetson Orin Series Module Data Sheet about whether Orin's GPU has 1 RT core or 2 RT cores per TPC. So the number of RT cores on Orin is not necessarily cut and dried.

> I would like to get everyone on the same page. This is the Ampere GPU in Orin. Note that there are clearly 16 SMs and 16 RT cores across 8 TPCs and 2 GPCs. Please stop saying that Orin has 8 RT cores.
I do wonder how they will implement the scrolling wheels, though, and whether they will be free-rolling or have incremental pauses, sort of like how my mouse lets you choose between free scrolling and incremental steps.

> There are two things I want from Super Switch controllers:
> - Those new magnetic analog sticks.
> - Those scrolling shoulder buttons they also have a patent for.
The language Nvidia originally used to describe how many RT cores Orin has was not very clear. I read it myself and also wasn't sure. Fast forward to March 2022: Nvidia has since provided much more detailed technical PDFs, including a very clear diagram of the GPU, and in that picture, dated Feb 2022, there are 16 RT cores in the full Orin. (That works out to 14 RT cores in the Orin 32 module, which has 7 TPCs, and with 6 TPCs you get 12 RT cores and 48 Tensor cores.)

> The problem, as @ILikeFeet pointed out, is that there's conflicting information in the Jetson Orin Series Module Data Sheet about whether Orin's GPU has 1 RT core or 2 RT cores per TPC. So the number of RT cores on Orin is not necessarily cut and dried.
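If it helps, the per-TPC arithmetic here can be sketched in a few lines of Python. This assumes Ampere's usual ratios (2 SMs per TPC, with 128 CUDA cores, 1 RT core, and 4 Tensor cores per SM), which is exactly the point under dispute for Orin, so treat it as the "16 RT core" reading:

```python
# Back-of-the-envelope Ampere GPU counts from a TPC count.
# Assumed ratios (standard for Ampere): 2 SMs per TPC; per SM:
# 128 CUDA cores, 1 RT core, 4 Tensor cores.
def gpu_counts(tpcs):
    sms = tpcs * 2
    return {
        "SMs": sms,
        "CUDA cores": sms * 128,
        "RT cores": sms,
        "Tensor cores": sms * 4,
    }

print(gpu_counts(8))  # full Orin: 16 SMs, 16 RT cores
print(gpu_counts(7))  # Orin 32 module: 14 RT cores
print(gpu_counts(6))  # 12 RT cores, 48 Tensor cores
```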
But of course, how many RT cores Orin has doesn't necessarily affect Drake since Drake has been shown through the leaks of confidential Nvidia files by Lapsus$ to be a custom variation of Orin with 12 SMs and 12 RT cores.
How would they be pressed without scrolling on accident?

> I do wonder how they will implement the scrolling wheels, though, and whether they will be free-rolling or have incremental pauses, sort of like how my mouse lets you choose between free scrolling and incremental steps.
> Man, I've always loved the GameCube's squishy analog shoulders with a click.
> The only controller to ever replicate it was the Steam Controller, but even the Deck basically copies the typical XB360 style with added trackpads.
> I do hope they implement GameCube shoulders + increment wheels for the top buttons.
It is relevant what Orin actually has, though, since the source that says Drake has 12 RT cores also says Orin has 16. If Orin actually has 8, then Drake presumably has 6.

> But of course, how many RT cores Orin has doesn't necessarily affect Drake since Drake has been shown through the leaks of confidential Nvidia files by Lapsus$ to be a custom variation of Orin with 12 SMs and 12 RT cores.
That's disabling 2 SMs in 2 GPCs.

> If you take the full stock Orin and disable 2 TPCs, you get this:
> GPU: 1536 CUDA cores, 12 2nd-generation RT cores, 48 3rd-generation Tensor cores, 12 SMs, 2 GPCs, 6 TPCs
> Is it random chance that people understood T239 to have exactly the same: 1536 CUDA cores, 12 RT cores and 48 Tensor cores?
This.

> That's disabling 2 SMs in 2 GPCs.

T239 only has 1 GPC and all 12 SMs.
It’s not a binned chip regardless of how you’re trying to slice this.
The codename should be enough of an indication that it isn’t a binned chip.
A binned chip follows the same codename.
T239 is not T234. The only thing you can glean from it is that they exist in the same family of Ampere-based Tegra SoCs.
Look, I get that you seem very attached to this "T239 is actually a binned T234" theory, but the leaked files from the Nvidia hack directly contradict it. We know that they're two separate chips beyond a reasonable doubt.

> If you take the full stock Orin and disable 2 TPCs, you get this:
> GPU: 1536 CUDA cores, 12 2nd-generation RT cores, 48 3rd-generation Tensor cores, 12 SMs, 2 GPCs, 6 TPCs
> Is it random chance that people understood T239 to have exactly the same: 1536 CUDA cores, 12 RT cores and 48 Tensor cores?
The artifacts you are talking about are not from upscaling from tiny resolutions; when the base resolution goes that low, the entire image becomes unstable. 360p as the base resolution doesn't look very good, and going lower looks awful.

> I'd like to point out that since DLSS 2.3, a lot of said artifacting has been resolved. I think the trails in Death Stranding and the after-images in Cyberpunk have been fixed?
> Plus, if nobody is bothered by dithered transparency patterns in Mario Odyssey, I doubt people will be too bothered by hard-to-spot combing artifacts, especially if the rest of the image is pretty detailed.
I believe one of the possibilities in the patents was them not being physical scroll wheels, but buttons with touch sensitivity. Combine that with haptics and a variety of implementations would be possible, like a scroll-wheel version of the Steam trackpads.

> I do wonder how they will implement the scrolling wheels, though, and whether they will be free-rolling or have incremental pauses, sort of like how my mouse lets you choose between free scrolling and incremental steps.
> Man, I've always loved the GameCube's squishy analog shoulders with a click.
> The only controller to ever replicate it was the Steam Controller, but even the Deck basically copies the typical XB360 style with added trackpads.
> I do hope they implement GameCube shoulders + increment wheels for the top buttons.
Also, one would have to reconcile the idea of increased R&D expenditure at Nintendo with unfounded theories about binned chips.

> Look, I get that you seem very attached to this "T239 is actually a binned T234" theory, but the leaked files from the Nvidia hack directly contradict it. We know that they're two separate chips beyond a reasonable doubt.
LOL, OMG, I was most definitely bothered by those dithered transparency patterns in Odyssey! Pokémon Legends: Arceus has them as well. I really hope the next generation of Nintendo hardware can handle something better.

> I'd like to point out that since DLSS 2.3, a lot of said artifacting has been resolved. I think the trails in Death Stranding and the after-images in Cyberpunk have been fixed?
> Plus, if nobody is bothered by dithered transparency patterns in Mario Odyssey, I doubt people will be too bothered by hard-to-spot combing artifacts, especially if the rest of the image is pretty detailed.
What I want.

> Some general thread questions: So, given what appears to be confirmed in the leaks, and other available information, is anybody daring enough to try to nail down some numbers, or make some predictions? If not predictions, what are your hopes, now that the leaks look better than earlier estimates?
> CPU (Variant, Clock Speed):
> GPU (Portable Clock, Home Clock, FLOPS estimate):
> RAM (Amount):
> Screen (Size, Resolution):
> Storage:
> Price:
It'll depend a lot on whether the chip is stuck on 8nm Samsung, or if they have something better like TSMC 7nm, Samsung 5nm, or even TSMC 5nm.

> Some general thread questions: So, given what appears to be confirmed in the leaks, and other available information, is anybody daring enough to try to nail down some numbers, or make some predictions? If not predictions, what are your hopes, now that the leaks look better than earlier estimates?
> CPU (Variant, Clock Speed):
> GPU (Portable Clock, Home Clock, FLOPS estimate):
> RAM (Amount):
> Screen (Size, Resolution):
> Storage:
> Price:
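For the FLOPS line of the template, the usual back-of-the-envelope estimate is 2 FP32 operations (one fused multiply-add) per CUDA core per clock. A quick sketch, using Drake's leaked 1536 CUDA cores and purely hypothetical clocks:

```python
# FP32 TFLOPS estimate: cores x 2 ops per clock (one FMA) x clock rate (GHz).
def fp32_tflops(cuda_cores, clock_ghz):
    return cuda_cores * 2 * clock_ghz / 1000.0

# 1536 CUDA cores (Drake's leaked count) at hypothetical clocks:
print(fp32_tflops(1536, 0.768))  # ~2.36 TFLOPS
print(fp32_tflops(1536, 1.0))    # 3.072 TFLOPS
```

The clocks here are guesses for illustration only; actual portable and docked clocks are unknown.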
I think there is a distance between being "related" and being "binned". The latter implies a basically identical design and manufacturing process down to the circuits, while the former has more wiggle room. Right now I'm in the former camp.

> T210 and T214 have different names, but the specs of these 2 Tegra chips are essentially the same.
> Tegra X1 (T210) was made using TSMC 20SoC ("20 nm")
> Tegra X1 (T214) was made using TSMC 16FF ("16 nm")
> TM660M-A2 is the name on the chip inside the NVIDIA Jetson Nano. It is a binned Tegra X1.
> These products have different names, but the key specs are the same (quad-core A57s with a 256-core NVIDIA Maxwell GPU).
> These are facts. So how can you say with 100% certainty that T234 and T239 don't have a similar relationship?
What I noticed is that T239 seems like the end of the Tegra-Ampere line. Any future successor would need to be a different architecture, with a name like T24X.

> T210 and T214 have different names, but the specs of these 2 Tegra chips are essentially the same.
> Tegra X1 (T210) was made using TSMC 20SoC ("20 nm")
> Tegra X1 (T214) was made using TSMC 16FF ("16 nm")
> TM660M-A2 is the name on the chip inside the NVIDIA Jetson Nano. It is a binned Tegra X1.
> These products have different names, but the key specs are the same (quad-core A57s with a 256-core NVIDIA Maxwell GPU).
> These are facts. So how can you say with 100% certainty that T234 and T239 don't have a similar relationship?
This last week I watched an Nvidia GTC talk with this updated slide:
What do we know at this point about "Nano Next"?
How would you read this slide in terms of where in 2023 it will release?
What if "Nano Next" is T239, and it's designed to be a lower-cost Orin chip with a smaller die size?
Why would it be overkill? The first Nano had half its GPU cores disabled. 768 GPU cores would be fitting for a binned T239.

> This is an interesting one. Jetson Nano Next has been on their roadmap for a while, and several people have speculated that it will use the same SoC as the next Switch model. However, now that we know the GPU configuration of Drake, it seems, well, overkill for a Jetson Nano product. The first Jetson Nano was designed to hit a $99 price point, and even if that drifts upwards for Nano Next, it's still very much a place where they'd use the cheapest SoC they can make. Nvidia had previously announced an "Orin ADAS" chip, which was advertised at 10 TOPS/5W, so if that still exists it may be the one used in Nano Next.
> If T239 is used in Jetson Nano Next in 2023, though, even in a heavily binned form and with a price bump to $149+, then I wouldn't be at all worried about the cost of this chip.
There's a possibility that Drake (T239) could use the Cortex-A78C instead of the Cortex-A78AE for the CPU.

> So how can you say with 100% certainty that T234 and T239 don't have a similar relationship?
Obviously I cannot say anything with 100% certainty. But in my mind, there is almost no possibility of Drake being a binned Orin.

> Tegra X1 (T210) was made using TSMC 20SoC ("20 nm")
> Tegra X1 (T214) was made using TSMC 16FF ("16 nm")
> TM660M-A2 is the name on the chip inside the NVIDIA Jetson Nano. It is a binned Tegra X1.
> These products have different names, but the key specs are the same (quad-core A57s with a 256-core NVIDIA Maxwell GPU).
> These are facts. So how can you say with 100% certainty that T234 and T239 don't have a similar relationship?
Orin also has AV1 encode support, which consumer Ampere GPUs don't have.

> Worth noting that binned T239 chips could also make their way into a new Shield product. It's a niche product, but it does aid in selling Nvidia GeForce Now to that target audience, and I imagine they could leverage the AV1 decode in T239 to market it once again as a super-premium streamer.
A die-shrunk chip with the same configuration is still a different chip, made on a different process line with different techniques. A binned chip is literally, physically the same chip. It wouldn't have a different number.

> T210 and T214 have different names, but the specs of these 2 Tegra chips are essentially the same.
> Tegra X1 (T210) was made using TSMC 20SoC ("20 nm")
> Tegra X1 (T214) was made using TSMC 16FF ("16 nm")
> TM660M-A2 is the name on the chip inside the NVIDIA Jetson Nano. It is a binned Tegra X1.
> These products have different names, but the key specs are the same (quad-core A57s with a 256-core NVIDIA Maxwell GPU).
> These are facts. So how can you say with 100% certainty that T234 and T239 don't have a similar relationship?
Really? This video shows Death Stranding being upscaled from 240p, and the results are definitely impressive given its base output; it most certainly isn't "awful". Or at least, it doesn't look like Xenoblade Chronicles 2's Gormott on a rainy day...

In fact, the 240p-to-720p example looks much like a lot of the current Switch ports that run at sub-native resolutions even higher than 240p. So imagine what a game upscaled from 480p would look like.

So yeah, I'd rather take a DLSS upscale than a non-DLSS sub-native solution.

> The artifacts you are talking about are not from upscaling from tiny resolutions; when the base resolution goes that low, the entire image becomes unstable. 360p as the base resolution doesn't look very good, and going lower looks awful.
The first Jetson Nano had half its GPU disabled because that was literally the only option for binning: it had 2 SMs, so all you can do is disable one of them. Drake has 12 SMs, and if the next Switch uses all 12, then there are probably a lot of extra dies where 8 or 10 SMs are usable, so they don't necessarily have to chop it all the way down to 6.

> Why would it be overkill? The first Nano had half its GPU cores disabled. 768 GPU cores would be fitting for a binned T239.
Honestly, I feel NVIDIA could capitalize on the market the Steam Deck opened up, which they and MediaTek are targeting, by making a T239 laptop or Steam Deck competitor with GeForce Now as a major element.

> Worth noting that binned T239 chips could also make their way into a new Shield product. It's a niche product, but it does aid in selling Nvidia GeForce Now to that target audience, and I imagine they could leverage the AV1 decode in T239 to market it once again as a super-premium streamer.
As an Nvidia Shield fan, I would be all over a T239-based product. It should make GameCube, Wii, PS2, etc. emulation viable and give the best possible experience for streaming apps.
Yup, I can see this.

> Honestly, I feel NVIDIA could capitalize on the market the Steam Deck opened up, which they and MediaTek are targeting, by making a T239 laptop or Steam Deck competitor with GeForce Now as a major element.
> AKA a Shield Portable 2 or Shield Laptop.
> Heck, in the latter scenario they could even bin the GPU down to 8 SMs but push the clocks way higher to offset it, to have that PS4 Pro w/DLSS GPU again.
It would pay for itself in the long run.

> Obviously I cannot say anything with 100% certainty. But in my mind, there is almost no possibility of Drake being a binned Orin.

While the upfront R&D cost of a semi-custom chip is obviously higher, I feel it would more than pay for itself in the long run. Orin would have a lot of wasted silicon for a game console, and it's not worth it because Nintendo's next device will sell a fuckton more units than any car or robot using Orin. There aren't going to be enough Orins to bin.
T210 and T214 have different names, but the specs of these 2 Tegra chips are essentially the same.
Tegra X1 (T210) was made using TSMC 20SoC ("20 nm")
Tegra X1 (T214) was made using TSMC 16FF ("16 nm")
TM660M-A2 is the name on the chip inside the NVIDIA Jetson Nano. It is a binned Tegra X1.
These products have different names, but the key specs are the same (quad-core A57s with a 256-core NVIDIA Maxwell GPU).
These are facts. So how can you say with 100% certainty that T234 and T239 don't have a similar relationship?
I think the main reason that only Nintendo and now Valve have been able to make this work is that the real money is in controlling the software ecosystem, not selling the hardware. Nvidia tried to curate software for the Shield handheld/tablet without much success.

> Honestly, I feel NVIDIA could capitalize on the market the Steam Deck opened up, which they and MediaTek are targeting, by making a T239 laptop or Steam Deck competitor with GeForce Now as a major element.
> AKA a Shield Portable 2 or Shield Laptop.
> Heck, in the latter scenario they could even bin the GPU down to 8 SMs but push the clocks way higher to offset it, to have that PS4 Pro w/DLSS GPU again.
NVN2 has Orin as a compilation target, along with desktop Turing, desktop Ampere, and Drake. In fact, a comment states that Orin is currently the default compilation target. However, in the places where hardware values like SM count, cache, etc. need to be used by the driver, it's always using GA10F's values, not GA10B's. In other words, it only runs on targets other than Drake to the extent they can be made to behave like Drake -- for the reference implementation on desktop, and hypothetically for a devkit on Orin.

> The hacked info only connects NVN2, the API for the next Nintendo hardware, with GA10F (correct me if I'm wrong, but I don't remember seeing it in connection with GA10B anytime in this thread; I'll be glad to take the L). Theoretically you could bin a GPU to the point where it's functionally the same as a different model (see the RTX 3060 changing from GA106 to GA104), but that's not what's happening here. Orin contains a GA10B GPU. The next Switch runs a GA10F.
I'm on Thraktor's side on this. Nano Next will probably be based on the Orin ADAS (5W-10W), which is scheduled to launch in 2023 too. I wrote a while back that the next Lite model (and even the hybrid) might use a new SoC based on the Orin ADAS if they cannot reduce the cost of Drake, but that is baseless speculation, of course. Another possibility is that a binned Drake will be used in the next Lite, rather than going into Nano Next, which, as Thraktor opined, would be overkill.

> I suppose more generally I was making a point about the expected cost of the chip. It's quite a lot more powerful than any of us were expecting, and therefore there's a general expectation that the new model is going to be much more expensive. But the original Switch launched at $299, and a Jetson Nano with a heavily cut-down version of the chip arrived two years later. If Nano Next is using a binned version of Drake, and it's arriving within a year of the new model, then something doesn't add up. Either Nano Next isn't using Drake, or Drake can't be much more expensive than TX1 was in 2017.
This. Nvidia is pushing GeForce Now as their gaming platform nowadays, and doesn't seem likely to release a powerful Shield to support someone else's gaming platform (Steam or Android). I think the next Shield might use the Orin ADAS mentioned above: a low-TDP device for cloud gaming and streaming media, with AV1 support for future-proofing.

> I think the main reason that only Nintendo and now Valve have been able to make this work is that the real money is in controlling the software ecosystem, not selling the hardware. Nvidia tried to curate software for the Shield handheld/tablet without much success.
Dakhil, Alovon, 10k and I go over Drake and the currently known information. I think speculation can sometimes move us away from what we actually know, so hopefully this discussion helps frame what this successor is for everyone who watches it.
Mr. Polaris was Supermetaldave, I thought.

> heh, 10k. I recognize Mr Polaris.
> Will watch it later.
To be fair, would you actually notice it that much on such a small screen?

> That 240p footage looks terrible. I'd rather games actually look clean and stable in handheld on Drake; I'm tired of blurry games.
The real question is: would you prefer Xenoblade Chronicles 2's worst-case scenarios as they are now, or would you prefer it use some AI upscaling to achieve a much better result in handheld mode, with a much smoother framerate to boot?

> That 240p footage looks terrible. I'd rather games actually look clean and stable in handheld on Drake; I'm tired of blurry games.
There's a possibility that Drake (T239) could use the Cortex-A78C instead of the Cortex-A78AE for the CPU.
According to Arm, the Cortex-A78C allows up to 8 CPU cores per cluster, as opposed to up to 4 CPU cores per cluster for the Cortex-A78AE. This means the Cortex-A78C requires fewer clusters for the same number of CPU cores (e.g. 1 cluster for 8 cores vs 2 clusters on the A78AE, 2 clusters for 12 cores vs 3, and 2 clusters for 16 cores vs 4). And I imagine more CPU clusters take up more die space.
And the Cortex-A78C happens to allow up to 8 MB of L3 cache vs up to 4 MB of L3 cache for the Cortex-A78AE.
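The cluster arithmetic here is just a ceiling division; a tiny sketch, using the per-cluster limits mentioned above (8 cores for the Cortex-A78C, 4 for the Cortex-A78AE):

```python
import math

# Clusters needed = ceil(total cores / max cores per cluster).
# Assumed limits from Arm: Cortex-A78C allows up to 8 cores per
# cluster, Cortex-A78AE up to 4.
def clusters_needed(cores, per_cluster):
    return math.ceil(cores / per_cluster)

for cores in (8, 12, 16):
    print(cores, "cores:",
          clusters_needed(cores, 8), "A78C cluster(s) vs",
          clusters_needed(cores, 4), "A78AE cluster(s)")
```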
It looks bad on a screen smaller than the Switch's to me.

> To be fair, would you actually notice it that much on such a small screen?
Blurry games are eternal!
I'd rather we just have clean, stable-looking games.

> The real question is: would you prefer Xenoblade Chronicles 2's worst-case scenarios as they are now, or would you prefer it use some AI upscaling to achieve a much better result in handheld mode, with a much smoother framerate to boot?