
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Those devkits are probably not using final silicon, since they're probably using Orin. (I don't know whether AGX Orin or Orin NX is used.)
Dev kits aren't always final hardware themselves, but they're still dev kits of the final hardware. The point is that Nintendo didn't send out dev kits for hardware they aren't going to release, like some scrapped Pro model or hardware that is currently a prototype. Those kits, and the software stack Nvidia has made that runs on them, are for hardware that is going to be released, and soon.
 
Remains TBA.

Still doing research to check all the boxes. I don't want to talk this topic more than once. Would rather have a cover-all, single episode than countless updates. As of now, prior episode information remains accurate: dev kits exist, have been distributed, and are being worked with.
And are the Switch 4K versions of games being worked on with these devkits still targeting 2022?
 
I hold my phone at ~6" distances all the time. A "tight" hold on my Switch (if I've been playing for a while and have eye fatigue) is 15 inches, and that's already sacrificing some peripheral acuity.

I have never encountered a single person who could see the pixel gutters on the Switch. I've talked to hundreds of people on forums who think they can, but ask them about it and they aren't seeing pixel gutters (the gaps between pixels); they're seeing jaggies. The clue is that they can only see the pixels "sometimes" or in "some games" - because they're not actually seeing the pixels, they're seeing aliasing artifacts and/or upscaling artifacts from not running the game at the Switch's native 720p resolution. Increasing the resolution of the device could make this worse, because the game's native resolution was likely chosen to look good on a 720p screen and might not upscale as cleanly to 1080p.

These problems are solved not by increasing screen resolution, but by using the additional power of the new device to drive the native resolution and frame rates of all games right up to the native resolution of the screen (eliminating upscaling artifacts), and by running a more sophisticated anti-aliasing solution (fixing in-engine jaggies). As rendering a native 1080p image is 2.25x as much pixel-pushing work as 720p, games would need to dedicate that much power just to stay in place in terms of AA and upscaling artifacts, for visual detail that would literally be invisible. Meanwhile, existing games that hit 720p and don't change at all - the best games currently on the Switch - would gain upscaling artifacts at 1080p.

All this power could be spent on new, more sophisticated effects, higher draw distances, etc. 720p isn't just "good enough"; it is very close to "as good as possible".
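
For the arithmetic behind that 2.25x figure, here's a quick back-of-the-envelope sketch (raw pixel counts only; real per-pixel cost varies by game):

```python
# Raw pixel-count comparison between common render targets. Treat the ratios
# as ballpark scaling factors, not exact performance multipliers.
resolutions = {
    "720p":  (1280, 720),
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

base = resolutions["720p"][0] * resolutions["720p"][1]
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / base:.2f}x 720p)")
# 1080p works out to 2.25x the pixels of 720p, and 4K to 9x.
```
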
6"? Jeez, the only screens I put that close are in VR headsets. I fully get that for most 3D game visuals, a 720p image with a high image quality is going to be hard to tell from a 1080p one at that size, but to get such a high quality 720p image without things like noticeable aliasing would take as much work as rendering over 720p anyway. But games can include lots of text and small icons that are just never going to be as distinct at lower resolutions.

Switch games are designed, and we can assume the next Switch games will be designed, with parity between TV and portable modes in these regards. They're not going to do something like give portable mode double draw distance. So it comes down to: what resolution would it be possible to do this with? If we expect many docked games will be rendering twice as many pixels as they do on base Switch, the same should be true of portable.
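
As a rough way to put numbers on how much viewing distance matters here, below is a small pixels-per-degree sketch. It assumes the original Switch's 6.2-inch, 16:9, 1280x720 panel; the distances echo the ones mentioned above, and the ~60 ppd line is just a common rule of thumb for where extra pixels stop being resolvable.

```python
import math

# Approximate pixels per degree (ppd) for a handheld screen at a given
# viewing distance, assuming a 6.2-inch 16:9 panel like the original Switch.
def pixels_per_degree(diagonal_in, horiz_px, distance_in, aspect=(16, 9)):
    width_in = diagonal_in * aspect[0] / math.hypot(*aspect)
    pixel_pitch_in = width_in / horiz_px
    # Angle subtended by a single pixel, in degrees.
    pixel_angle_deg = 2 * math.degrees(math.atan(pixel_pitch_in / (2 * distance_in)))
    return 1 / pixel_angle_deg

for dist in (6, 15, 24):
    for horiz_px in (1280, 1920):
        ppd = pixels_per_degree(6.2, horiz_px, dist)
        print(f'{dist}" away, {horiz_px} px wide: ~{ppd:.0f} ppd')
# ~60 ppd is a common "can't resolve individual pixels" threshold, so a 720p
# panel of this size already clears it at ~15", while at 6" even 1080p falls short.
```
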
The current Switch weighs around 300 g, while PSVR weighs around 600 g, so in exactly what sense is it, or would it be, too heavy?
It's not just the size, it's how you use it. Sticking a Switch in something you strap to your head results in more weight being slightly farther away and straight forward from your face than something like Quest, resulting in a very different balance.
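
To illustrate the balance point with some made-up numbers (the masses and distances below are illustrative guesses, not measurements of any real headset), the same mass sitting farther in front of the head's pivot produces proportionally more torque on the neck:

```python
# Simple moment-arm comparison. All figures are hypothetical round numbers
# chosen only to show the effect of moving the same mass farther forward.
G = 9.81  # m/s^2

def neck_torque(mass_kg, distance_m):
    return mass_kg * G * distance_m  # newton-metres about the head's pivot

close = neck_torque(0.40, 0.08)  # display mass mounted close to the face
far   = neck_torque(0.40, 0.13)  # a Switch-sized slab sitting farther out
print(f"close: {close:.2f} N·m, far: {far:.2f} N·m ({far / close:.2f}x)")
# Same weight, ~1.6x the torque just from sitting farther forward.
```
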
Has anyone tried BotW and SMO VR modes? What's it like?
BOTW VR is bad, but as much as the resolution, it's the low frame rate and the fact that head movement doesn't work like you'd expect head movement to.
 
Remains TBA.

Still doing research to check all the boxes. I don't want to talk this topic more than once. Would rather have a cover-all, single episode than countless updates. As of now, prior episode information remains accurate: dev kits exist, have been distributed, and are being worked with.
NateDrake throws a Hyper Potion
 
It did. Like, we have had a ton of leaks about this over the past two years.

I don't get this whole "why didn't it leak" thing when there are 3+ Bloomberg articles alone talking about developers confirming this exists, not to mention the devkit discussions from Nate and Imran and such.

Didn’t Imran’s comment on ‘Switch Pro’ devkits suggest it wouldn’t be that much of a leap in power over the OG Switch? But then these leaked specs seem to dispute that?

I may be completely wrong on that though, may have been someone else.
 
Didn’t Imran’s comment on ‘Switch Pro’ devkits suggest it wouldn’t be that much of a leap in power over the OG Switch? But then these leaked specs seem to dispute that?

I may be completely wrong on that though, may have been someone else.
I think Imran Khan was talking about how Nintendo is pitching the devkits to third-party developers from a marketing standpoint, not necessarily in terms of performance.
 
Remains TBA.

Still doing research to check all the boxes. I don't want to talk this topic more than once. Would rather have a cover-all, single episode than countless updates. As of now, prior episode information remains accurate: dev kits exist, have been distributed, and are being worked with.
Are you at GDC by any chance?
Didn’t Imran’s comment on ‘Switch Pro’ devkits suggest it wouldn’t be that much of a leap in power over the OG Switch? But then these leaked specs seem to dispute that?

I may be completely wrong on that though, may have been someone else.
He only mentioned a resolution and FPS boost, which you need more potent hardware than the current Switch for.
 
Sorry for the double post:

Compare Jetson Orin and Jetson Xavier Specifications

Modules (left to right): Jetson Xavier NX Series (Jetson Xavier NX 16GB, Jetson Xavier NX); Jetson AGX Xavier Series (Jetson AGX Xavier 64GB, Jetson AGX Xavier, Jetson AGX Xavier Industrial); Jetson Orin NX Series (Jetson Orin NX 8GB, Jetson Orin NX 16GB); Jetson AGX Orin Series (Jetson AGX Orin 32GB, Jetson AGX Orin 64GB). Where a row lists fewer values than modules, a value spans several modules.

AI Performance: 21 TOPS; 32 TOPS; 30 TOPS; 70 TOPS; 100 TOPS; 200 TOPS; 275 TOPS
GPU: 384-core NVIDIA Volta GPU with 48 Tensor Cores; 512-core NVIDIA Volta GPU with 64 Tensor Cores; 1024-core NVIDIA Ampere GPU with 32 Tensor Cores; 1792-core NVIDIA Ampere GPU with 56 Tensor Cores; 2048-core NVIDIA Ampere GPU with 64 Tensor Cores
CPU: 6-core NVIDIA Carmel Armv8.2 64-bit CPU, 6MB L2 + 4MB L3; 8-core NVIDIA Carmel Armv8.2 64-bit CPU, 8MB L2 + 4MB L3; 6-core Arm Cortex-A78AE v8.2 64-bit CPU, 1.5MB L2 + 4MB L3; 8-core Arm Cortex-A78AE v8.2 64-bit CPU, 2MB L2 + 4MB L3; 8-core Arm Cortex-A78AE v8.2 64-bit CPU, 2MB L2 + 4MB L3; 12-core Arm Cortex-A78AE v8.2 64-bit CPU, 3MB L2 + 6MB L3
DL Accelerator: 2x NVDLA; 2x NVDLA; 1x NVDLA v2; 2x NVDLA v2; 2x NVDLA v2
Vision Accelerator: 2x PVA; 2x PVA; 1x PVA v2; 1x PVA v2
Safety Cluster Engine: --; 2x Arm Cortex-R5 in lockstep; --; --
Memory: 16GB 128-bit LPDDR4x, 59.7GB/s; 8GB 128-bit LPDDR4x, 59.7GB/s; 64GB 256-bit LPDDR4x, 136.5GB/s; 32GB 256-bit LPDDR4x, 136.5GB/s; 32GB 256-bit LPDDR4x (ECC support), 136.5GB/s; 8GB 128-bit LPDDR5, 102.4 GB/s; 16GB 128-bit LPDDR5, 102.4 GB/s; 32GB 256-bit LPDDR5, 204.8 GB/s; 64GB 256-bit LPDDR5, 204.8 GB/s
Storage: 16GB eMMC 5.1; 32GB eMMC 5.1; 64GB eMMC 5.1; - (Supports external NVMe); 64GB eMMC 5.1
Camera: Up to 6 cameras (24 via virtual channels), 14 lanes MIPI CSI-2, D-PHY 1.2 (up to 30 Gbps); Up to 6 cameras (36 via virtual channels), 16 lanes MIPI CSI-2 | 8 lanes SLVS-EC, D-PHY 1.2 (up to 40 Gbps), C-PHY 1.1 (up to 62 Gbps); Up to 6 cameras (36 via virtual channels), 16 lanes MIPI CSI-2, D-PHY 1.2 (up to 40 Gbps), C-PHY 1.1 (up to 62 Gbps); Up to 4 cameras (8 via virtual channels*), 8 lanes MIPI CSI-2, D-PHY 1.2 (up to 20Gbps); Up to 6 cameras (16 via virtual channels*), 16 lanes MIPI CSI-2, D-PHY 2.1 (up to 40Gbps) | C-PHY 2.0 (up to 164Gbps)
Video Encode (H.265): 2x 4K60, 10x 1080p60, 22x 1080p30; 4x 4K60, 16x 1080p60, 32x 1080p30; 2x 4K60, 12x 1080p60, 24x 1080p30; 1x 4K60, 3x 4K30, 6x 1080p60, 12x 1080p30; 1x 4K60, 3x 4K30, 6x 1080p60, 12x 1080p30; 2x 4K60, 4x 4K30, 8x 1080p60, 16x 1080p30
Video Decode (H.265): 2x 8K30, 6x 4K60, 22x 1080p60, 44x 1080p30; 2x 8K30, 6x 4K60, 26x 1080p60, 52x 1080p30; 2x 8K30, 4x 4K60, 18x 1080p60, 36x 1080p30; 1x 8K30, 2x 4K60, 4x 4K30, 9x 1080p60, 18x 1080p30; 1x 8K30, 2x 4K60, 4x 4K30, 9x 1080p60, 18x 1080p30; 1x 8K30, 3x 4K60, 7x 4K30, 11x 1080p60, 22x 1080p30
PCIe: 1 x1 (PCIe Gen3) + 1 x4 (PCIe Gen4); 1 x8 + 1 x4 + 1 x2 + 2 x1 (PCIe Gen4, Root Port & Endpoint); 1 x4 + 3 x1 (PCIe Gen4, Root Port & Endpoint); Up to 2 x8 + 2 x4 + 2 x1 (PCIe Gen4, Root Port & Endpoint)
Networking: 10/100/1000 BASE-T Ethernet; 1x GbE; 1x GbE + 4x 10GbE
Display: 2 multi-mode DP 1.4/eDP 1.4/HDMI 2.0 (no DSI support); 3 multi-mode DP 1.4/eDP 1.4/HDMI 2.0 (no DSI support); 1x 8K60 multi-mode DP 1.4a (+MST)/eDP 1.4a/HDMI 2.1; 1x 8K60 multi-mode DP 1.4a (+MST)/eDP 1.4a/HDMI 2.1
Power: 10W | 15W | 20W; 10W | 15W | 30W; 20W | 40W; 10W | 15W | 20W; 10W | 15W | 25W; 15W | 20W | 50W; 5W | 30W | 50W (up to 60W max)
Mechanical: 69.6mm x 45mm, 260-pin SO-DIMM connector; 100mm x 87mm, 699-pin connector, Integrated Thermal Transfer Plate; 69.6mm x 45mm, 260-pin SO-DIMM connector; 100mm x 87mm, 699-pin Molex Mirror Mezz Connector, Integrated Thermal Transfer Plate

*Virtual channel related camera information for Jetson Orin modules is not final and subject to change.

Jetson Orin NX Module Technical Specifications

Jetson Orin NX 8GB / Jetson Orin NX 16GB
AI Performance: 70 TOPS (INT8) / 100 TOPS (INT8)
GPU: NVIDIA Ampere architecture with 1024 NVIDIA CUDA cores and 32 Tensor Cores
Max GPU Freq: 765 MHz / 918 MHz
CPU: 6-core Arm Cortex-A78AE v8.2 64-bit CPU, 1.5MB L2 + 4MB L3 / 8-core Arm Cortex-A78AE v8.2 64-bit CPU, 2MB L2 + 4MB L3
CPU Max Freq: 2 GHz
DL Accelerator: 1x NVDLA v2.0 / 2x NVDLA v2.0
DLA Max Freq: 614 MHz
Vision Accelerator: PVA v2.0
Memory: 8GB 128-bit LPDDR5, 102.4 GB/s / 16GB 128-bit LPDDR5, 102.4 GB/s
Storage: Supports external NVMe
CSI Camera: Up to 4 cameras (8 via virtual channels*), 8 lanes MIPI CSI-2, D-PHY 1.2 (20 Gbps)
Video Encode: 1x 4K60 | 3x 4K30 | 6x 1080p60 | 12x 1080p30 (H.265); H.264, H.265, AV1
Video Decode: 1x 8K30 | 2x 4K60 | 4x 4K30 | 9x 1080p60 | 18x 1080p30 (H.265); H.264, H.265, VP9, AV1
UPHY: 3 x1 + 1 x4 PCIe Gen 4; 3x USB 3.2 Gen2
Networking: 1x GbE
Display: 1x 8K60 multi-mode DP 1.4a (+MST)/eDP 1.4a/HDMI 2.1
Other I/O: 3x USB 2.0; 3x UART | 2x SPI | 4x I2C | 1x CAN | DMIC | DSPK | 2x I2S | 15x GPIOs
Power: 10W | 15W | 20W / 10W | 15W | 25W
Mechanical: 69.6mm x 45mm, 260-pin SO-DIMM connector

*Virtual Channel related camera information for Jetson Orin NX is not final and subject to change.


 
Didn’t Imran’s comment on ‘Switch Pro’ devkits suggest it wouldn’t be that much of a leap in power over the OG Switch? But then these leaked specs seem to dispute that?

I may be completely wrong on that though, may have been someone else.
I'm mainly referring to him saying he's surprised it hasn't leaked more since a ton of devs have devkits.
 
Sorry for the double post:
[Jetson Orin / Xavier comparison and Jetson Orin NX module spec tables quoted above]

ELI5?
 
Anyway, once someone gets a nice picture and/or measurements of Jetson Orin NX, we can get a really good idea of what the die size of Drake would be on 8nm.
 
Sorry for the double post:
[Jetson Orin / Xavier comparison and Jetson Orin NX module spec tables quoted above]

That Jetson Orin NX 8GB one looks really on spec for what I expect at this point. I don't see a list of profiles to match those Power values near the bottom of the spec, though.

EDIT: 25W is probably hitting those max clocks. I'd love to see what the 10W and 15W values correspond to. I'm guessing that 10W is our portable mode - and that leaves headroom for NVMe storage, the screen, and other miscellany.
 
BOTW VR is bad, but as much as the resolution, it's the low frame rate and the fact that head movement doesn't work like you'd expect head movement to.

Yeah, I forgot DF did a video on it. 480p with drops to 20fps 😕 Sounds awful for VR. I got major headaches from Resident Evil VII on PS4 Pro, partially because it was just 720p.

Nintendo have to be working on VR features for the next console, because their games and properties would translate really well, especially the likes of Mario Kart, Star Fox, F-Zero and Metroid. With their ideology behind making games and new experiences it's too obvious not to do, and the Joy-Cons are perfect for VR too.

Nvidia, I imagine, have also put a lot of time, money and engineering into VR.

I'm guessing in a worst-case scenario they could make a separate VR device using Drake specs, which would be able to render BotW at 1080p/60fps with battery life like the OLED model.
 
Remains TBA.

Still doing research to check all the boxes. I don't want to talk this topic more than once. Would rather have a cover-all, single episode than countless updates. As of now, prior episode information remains accurate: dev kits exist, have been distributed, and are being worked with.
Thank you very much. One aspect of the discussion I am interested in is how surprising the leaked specs are to you and MVG. We had long thought that Dane would be capable of exchanging blows with a GTX 750Ti. Drake seems to be more like a 1050Ti. That gulf is massive.
 
That Jetson Orin NX 8GB one looks really on spec for what I expect at this point. I don't see a list of profiles to match those Power values near the bottom of the spec, though.

EDIT: 25W is probably hitting those max clocks. I'd love to see what the 10W and 15W values correspond to. I'm guessing that 10W is our portable mode - and that leaves headroom for NVMe storage, the screen, and other miscellany.
The whole Switch system consumes less than 10W in portable mode, though. Adding NVMe alone adds something like 4W to the package, and the screen and the rest of the components aren't zero either - they can be 5W all combined.

I don't really see that happening.

Maybe Nintendo would be more generous and allow at most 12W in portable mode for absolutely everything, but I don't see it being 10W for just the SoC with everything else, which has its own power concerns, on top.
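
A quick budget sketch with the rough figures above (every value is an illustrative guess, and the ~16 Wh battery is simply an original-Switch-class pack):

```python
# Hypothetical worst-case handheld power budget using the figures discussed
# above. All of these numbers are illustrative guesses, not measurements.
budget_watts = {
    "SoC profile":     10.0,  # a hypothetical 10W portable SoC profile
    "NVMe (active)":    4.0,  # worst-case figure quoted above
    "display + misc.":  5.0,  # screen, Wi-Fi, audio, Joy-Con charging, ...
}

total = sum(budget_watts.values())
battery_wh = 16.0  # roughly an original-Switch-class battery
print(f"worst-case draw: {total:.1f} W -> ~{battery_wh / total:.1f} h runtime")
```
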
 
Looking forward to the next Nate the Hate episode on this when it does come out.

Along with a little bump in performance and resolution, I’d really love backwards compatibility. Hoping we start getting “Switch Pro/Switch 2” boxart along the spines of the physical copies too.
 
The whole Switch system consumes less than 10W in portable mode, though. Adding NVMe alone adds something like 4W to the package, and the screen and the rest of the components aren't zero either - they can be 5W all combined.

I don't really see that happening.

Maybe Nintendo would be more generous and allow at most 12W in portable mode for absolutely everything, but I don't see it being 10W for just the SoC with everything else, which has its own power concerns, on top.
The thing about NVMe is that the spec says 4W may be typical, but individual designs should be able to go lower. I haven't been following the thread for a while, but I remember from when I was that lots of designs were lower, and that there's a question of whether the power budget is a hard limit or based on average power usage. A quick search found this spec sheet - https://image.semiconductor.samsung...PM971_BGA_NVMe_SSD_product_biref_170201-0.pdf - which shows 3W during read/write operations but 60mW idle. Most games are going to spend most of their time idle. Additionally, there is a RAM cache on the drive; if that cache is removed and its job is done in software, in system RAM, then the power usage goes down.
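
As a rough illustration, here's the duty-cycle math with the 3W active / 60mW idle figures from that sheet (the active fractions are made-up examples):

```python
# Average drive power as a function of how much time it spends actively
# reading/writing. Figures: 3W active, 60mW idle, per the sheet linked
# above; the duty cycles themselves are arbitrary examples.
ACTIVE_W = 3.0
IDLE_W = 0.06

def average_power(active_fraction):
    return ACTIVE_W * active_fraction + IDLE_W * (1 - active_fraction)

for active in (1.0, 0.5, 0.1, 0.02):
    print(f"{active:>4.0%} active -> {average_power(active):.2f} W average")
# At a few percent active time the average draw is closer to 0.1W than 3W.
```
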
 
Looking forward to the next Nate the Hate episode on this when it does come out.

Along with a little bump in performance and resolution, I’d really love backwards compatibility. Hoping we start getting “Switch Pro/Switch 2” boxart along the spines of the physical copies too.
No BC would be such a colossal fuckup that I just can't see any way it doesn't happen. People can discuss how they'd accomplish it all they want, but any talk of it not having BC is a waste of time to me.
 
The Tegra X1's GPU is interesting, architecturally speaking, since it borrows at least one feature from the Pascal architecture (e.g. FP16) despite using the Maxwell architecture as its foundation.

And to play devil's advocate, there were rumours that Nvidia and AMD had originally planned to have GPUs fabricated on TSMC's 20 nm process node. But there was too much power leakage, and yield rates were too low, on TSMC's 20 nm process node for discrete GPU dies, to the point where Nvidia and AMD publicly complained about it. And Nvidia and AMD ultimately skipped TSMC's 20 nm process node in favour of a 16 nm process node from TSMC and a 14 nm process node from GlobalFoundries for the fabrication of Pascal GPUs and RX Vega 10 GPUs respectively.

So I think the Tegra X1 is an exception.

I don't think it's quite correct to say that TX1 "borrowed" FP16 support from Pascal. As you can read in the article, while consumer Pascal cards technically support FP16, they only have a single FP16 core per SM for compatibility purposes, so performance of actual FP16 code would be pretty terrible. The HPC-oriented P100 had full double-rate FP16 support, but had a very different SM setup to TX1, with FP64 support, and only 64 (rather than 128) FP32 "cores" per SM.

I strongly suspect that the TX1's FP16 support was designed specifically for TX1, motivated by its use-case. It was designed for low-power mobile use (like tablets), and mobile GPUs from the likes of ARM, Qualcomm, Imagination Technologies, etc, have been supporting FP16 for quite a while, as it's generally quite a lot more power-efficient than FP32. Hence, it made sense for Nvidia to also support FP16 on their SoC GPUs to remain competitive. Nvidia added a minimal number of these 2xFP16 compatible cores to consumer Pascal later purely for compatibility with code compiled for TX1 or P100. It's also worth noting that TX1, TX2 and P100 are the only GPUs Nvidia made that used this approach of running paired FP16 ops on FP32 cores. From Turing onwards FP16 has been run on the tensor cores instead (with non-RTX Turing getting their own standalone FP16 cores).

Conversely, we do know of precisely one feature that Drake has which is featured in Ada but in no other Ampere GPUs: FLCG. Of course, it's possible that this is a minor feature completely unrelated to manufacturing process which just happened to be feasible for Drake due to timing, but if we're comparing feature compatibility with the following architecture, then it's as relevant to manufacturing process as anything else (which is probably not very much).

Anyway, while Tegra X1 is clearly an exception, my point is that if there is one exception, there can be more exceptions. Another rule we could claim is that "every Nvidia SoC has used a manufacturing process at least as advanced as their most recently launched GPU (either HPC or gaming)". That technically hasn't been broken, and as of today their most recently launched GPU is Hopper, which is actually TSMC N4, as it turns out (although it's effectively just an improved version of their 5nm processes). That doesn't mean we're guaranteed to get N4, but if we're trying to find patterns on a small dataset like this we can always find one to suit any particular outcome. And in Drake's case it is already unique; it's the only SoC Nvidia have designed for a specific client, so there's no guarantee that they'll adhere to past patterns anyway.

Edit:
Sorry for the double post:
[Jetson Orin / Xavier comparison and Jetson Orin NX module spec tables quoted above]

The most relevant part of the updated specs (which doesn't seem to be in this table) is that Jetson AGX Orin now supports GPU clocks up to 1.3GHz (with an updated max power draw of 60W). It shouldn't really be surprising at all that the GPU can clock that high, and the original 1GHz clock was clearly very conservative, but it's worth noting when considering Drake clock speeds.
 
Remember when Dragon Quest 11 was announced for the NX before we even knew what it was?

Then Eiyuden Chronicle developers, Rabbit & Bear, leaked Switch 2:

 
Then Eiyuden Chronicle developers, Rabbit & Bear, leaked Switch 2:

That isn't a leak; it's just them saying that their game will come to whatever comes after the Switch.
 
The thing about NVMe is that the spec says 4W may be typical, but individual designs should be able to go lower. I haven't been following the thread for a while, but I remember from when I was that lots of designs were lower, and that there's a question of whether the power budget is a hard limit or based on average power usage. A quick search found this spec sheet - https://image.semiconductor.samsung...PM971_BGA_NVMe_SSD_product_biref_170201-0.pdf - which shows 3W during read/write operations but 60mW idle. Most games are going to spend most of their time idle. Additionally, there is a RAM cache on the drive; if that cache is removed and its job is done in software, in system RAM, then the power usage goes down.
Wait, 3 watts for up to 1,400 MB/s sequential read on the 128 GB and 256 GB models? Not even 500 MB/s per watt? That's... below average for NVMe SSDs in general, I think?
(never mind that Samsung's own 64 GB eMMC does up to 330 MB/s sequential read at 0.5 watts)

Is there a more recent model? Because the random IOPS are also really bad by modern NVMe SSD standards. They're not far enough ahead of eUFS 3.1 to make a tangible difference.
 

[Jetson Orin NX module spec table quoted above]


Interesting.. Do the 8 and 16GB both have 12 SMs for GPU? Guessing it doesn't say.
Also,

What's DL accelerator? 16GB variant has 2 of them

What did you see for the L2 cache again for Drake, @LiC?

That Jetson Orin NX 8GB one looks really on spec for what I expect at this point. I don't see a list of profiles to match those Power values near the bottom of the spec, though.

EDIT: 25W is probably hitting those max clocks. I'd love to see what the 10W and 15W values correspond to. I'm guessing that 10W is our portable mode - and that leaves headroom for NVMe storage, the screen, and other miscellany.
Yeah, I wouldn't be surprised if we get the 8GB version. That means we would get a 6-core A78AE CPU and a GPU clocked at only 765MHz max, which sounds eerily similar to the TX1 in the Switch...

Maybe Nintendo will increase the RAM to 10GB and have 2GB dedicated to the OS, like the PS4 Pro.
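
For a sense of scale, here's the raw FP32 math at the clocks in that table; the 12-SM/1536-core line is only a speculative what-if based on the Drake SM count discussed above, not a confirmed spec.

```python
# FP32 throughput = CUDA cores x clock x 2 (an FMA counts as two ops).
def fp32_tflops(cuda_cores, clock_ghz):
    return cuda_cores * clock_ghz * 2 / 1000

configs = [
    ("Orin NX @ 765 MHz",       1024, 0.765),
    ("Orin NX @ 918 MHz",       1024, 0.918),
    ("12-SM what-if @ 765 MHz", 1536, 0.765),  # speculative Drake-style config
]
for name, cores, clk in configs:
    print(f"{name}: {fp32_tflops(cores, clk):.2f} TFLOPS")
```
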
 
Switch games are designed, and we can assume the next Switch games will be designed, with parity between TV and portable modes in these regards. They're not going to do something like give portable mode double draw distance. So it comes down to: what resolution would it be possible to do this with?
720p :(

If we expect many docked games will be rendering twice as many pixels as they do on base Switch, the same should be true of portable.
720p/60fps would already be double many, many games' resolution in handheld mode. Odyssey interlaces frames, so it's only rendering half a 720p image every frame. BotW runs at 720p but only 30fps - that is, when it's not dropping down to 640p, or dropping frames in the Korok Forest. Animal Crossing runs at 30. Sword and Shield hits as low as 1024x576, also at 30fps. Splatoon 2 hits 60fps - and drops to 548p any time the screen gets busy.

Third party is often worse. Doom Eternal barely hits 720p at 30fps in docked mode, much less handheld; Witcher 3 is the same. Generations-old ports like Assassin's Creed run with limited draw distance and no fog, and 2nd-party games like Arceus and Hyrule Warriors offer pop-in like nobody's business.

There is still plenty of "meat" on the 720p bone, even without adding features like reflections/TAA/ray tracing. In docked mode there is plenty of juice in 1080p. Moreover, DLSS likes a consistent resolution, as it uses data across frames; if you want to be upscaling to 4K, you really want to lock in the base resolution you're rendering at. If all of these games run at an uncompromised 720p handheld / 1080p docked / 4K docked + DLSS, that is a HUGE jump, and roughly preserves the existing power gap between docked and handheld mode.
 
Remains TBA.

Still doing research to check all the boxes. I don't want to talk this topic more than once. Would rather have a cover-all, single episode than countless updates. As of now, prior episode information remains accurate: dev kits exist, have been distributed, and are being worked with.
I know you've already commented on this, but it would be nice to know more about the software.

For example, are third parties actually using the hardware for exclusive games, or will it be another New 3DS situation where only Nintendo releases a few exclusives that barely justify the upgrade?
 
Not gonna tag Nate but I'm personally more curious about why devs haven't seemingly made many comments to reporters or folks like Nate/Imran/Grubb about how powerful this is for a handheld. If we're really talking about something that'll be somewhat close to a Series S in docked mode.
 
720p :(


720p/60fps would already be double many, many games' resolution in handheld mode. Odyssey interlaces frames, so it's only rendering half a 720p image every frame. BotW runs at 720p but only 30fps - that is, when it's not dropping down to 640p, or dropping frames in the Korok Forest. Animal Crossing runs at 30. Sword and Shield hits as low as 1024x576, also at 30fps. Splatoon 2 hits 60fps - and drops to 548p any time the screen gets busy.

Third party is often worse. Doom Eternal barely hits 720p at 30fps in docked mode, much less handheld; Witcher 3 is the same. Generations-old ports like Assassin's Creed run with limited draw distance and no fog, and 2nd-party games like Arceus and Hyrule Warriors offer pop-in like nobody's business.

There is still plenty of "meat" on the 720p bone, even without adding features like reflections/TAA/ray tracing. In docked mode there is plenty of juice in 1080p. Moreover, DLSS likes a consistent resolution, as it uses data across frames; if you want to be upscaling to 4K, you really want to lock in the base resolution you're rendering at. If all of these games run at an uncompromised 720p handheld / 1080p docked / 4K docked + DLSS, that is a HUGE jump, and roughly preserves the existing power gap between docked and handheld mode.

Also, the potential for better AF, HDR, higher-res textures, higher fps, and downsampling from higher resolutions thanks to DLSS will allow the current 720p panel to reach its peak potential far more than it does today, and will provide a noticeable step up.
 
The thing about NVMe is that the spec says 4W may be typical, but individual designs should be able to go lower. I haven't been following the thread for a while, but I remember from when I was that lots of designs were lower, and that there's a question of whether the power budget is a hard limit or based on average power usage. A quick search found this spec sheet - https://image.semiconductor.samsung...PM971_BGA_NVMe_SSD_product_biref_170201-0.pdf - which shows 3W during read/write operations but 60mW idle. Most games are going to spend most of their time idle. Additionally, there is a RAM cache on the drive; if that cache is removed and its job is done in software, in system RAM, then the power usage goes down.
That requires Nintendo implementing a custom NVMe protocol, which I don't see them doing at all. And games do not stay idle with asset streaming, especially in scenarios where you're streaming in a new area or environment.

Interesting.. Do the 8 and 16GB both have 12 SMs for GPU? Guessing it doesn't say.
Also,

What's DL accelerator? 16GB variant has 2 of them

What did you see for the L2 cache again for Drake, @LiC?
The Deep Learning Accelerator is a dedicated ML block that Nvidia has in their automotive SoCs. It lets devs run their own ML inference workloads.
 
720p :(


720p/60fps would already be double many, many games' resolution in handheld mode. Odyssey interlaces frames, so it's only rendering half a 720p image every frame. BotW runs at 720p but only 30fps - that is, when it's not dropping down to 640p, or dropping frames in the Korok Forest. Animal Crossing runs at 30. Sword and Shield hits as low as 1024x576, also at 30fps. Splatoon 2 hits 60fps - and drops to 548p any time the screen gets busy.
I'm a big proponent of 60fps, but portable resolution isn't what decides that. If they're stuck designing a game for 30, it will be 30 on TV and on portable. They're not going to make Animal Crossing 4K30 docked and 720p60 undocked. Nor, if they make a Drake Zelda 60fps, will it still be stuck at 900p max like BOTW. If they managed to make it a 1440p60 game, the equivalent increase for portable would be 1152p60 with drops to 1024p60 - except in Korok Forest.
Third party is often worse. Doom Eternal barely hits 720p at 30fps in docked mode, much less handheld; Witcher 3 is the same. Generations-old ports like Assassin's Creed run with limited draw distance and no fog, and 2nd-party games like Arceus and Hyrule Warriors offer pop-in like nobody's business.
"Miracle ports" are always going to be limited. But holding back screen tech so they don't seem as far behind first party games seems a disappointing way to cover it up. If the next-gen equivalent of a Witcher port only manages to be 720p... OK. It's not going to be hurt much more by a 1.5x scale
 
Not gonna tag Nate but I'm personally more curious about why devs haven't seemingly made many comments to reporters or folks like Nate/Imran/Grubb about how powerful this is for a handheld. If we're really talking about something that'll be somewhat close to a Series S in docked mode.
Which devs? I'm guessing by now many big and renowned studios have already got their devkits and are under heavy NDAs. Embracer owns hundreds of studios, and they probably got each of them these devkits to port their games without the need for an external port studio. I don't expect anyone to comment on the subject; they'll act like such a device never existed.
 
Which devs? I'm guessing by now many big and renowned studios have already got their devkits and are under heavy NDAs. Embracer owns hundreds of studios, and they probably got each of them these devkits to port their games without the need for an external port studio. I don't expect anyone to comment on the subject; they'll act like such a device never existed.
I mean the ones talking to Nate and Bloomberg et al.

Obviously nobody is gonna publicly talk about it themselves but they're already breaking NDA by talking to these reporters.
 
I mean the ones talking to Nate and Bloomberg et al.

Obviously nobody is gonna publicly talk about it themselves but they're already breaking NDA by talking to these reporters.
I'm under the impression that the people who leaked this were not under NDA directly; word of mouth just reached them. I don't expect people who actually have access to devkits to talk at all. That's why I gave the Embracer example: the studios that actually have the devkits won't talk, but this is a massive company, and word will eventually get out.
 
I'm under the impression that the people who leaked this were not under NDA directly; word of mouth just reached them. I don't expect people who actually have access to devkits to talk at all. That's why I gave the Embracer example: the studios that actually have the devkits won't talk, but this is a massive company, and word will eventually get out.
I guess that could generally be the case, yeah. Although even then someone along the chain is certainly breaking NDA.
 
It's not just the size, it's how you use it. Sticking a Switch in something you strap to your head results in more weight being slightly farther away and straight forward from your face than something like Quest, resulting in a very different balance.

Balance would be better with some kind of head holder or something similar, which Nintendo would probably provide.
I don't think we'd be talking only about a strap in front, but also one over the top.

In any case, I don't see around 300 g at the front being too heavy for use as a VR headset.
 
and roughly preserves the existing power gap between docked and handheld mode.
I don't think the gap will be preserved. DLSS scales with output resolution, so if it's able to do 4K docked, 720p in handheld should be easier to achieve.

Unless they turn off the tensor cores in handheld mode (and there's no indication they even designed it to be able to do that), I'm fairly sure devs will save more battery with 540p + DLSS than with 720p + idle TCs.

So, IMO the gap which makes more sense is 4x:
  • Least demanding games: 1080p handheld (downsampled to 720p) and 4K docked native.
  • Demanding games without DLSS: 720p/1440p
  • Very demanding games: 540p/1080p native, 720p/4K after DLSS (if it has DLSS)
  • Impossible ports: at worst 360p/720p native, before DLSS.

By targeting such a gap, they can deliver better battery life for handheld and better resolution for TV.
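
Putting rough numbers on that 4x gap (native pixel counts only; a sketch of the pairs listed above):

```python
# Native pixel-count ratios for the handheld/docked pairs suggested above.
# Raw pixel arithmetic only - DLSS adds its own roughly fixed per-output-pixel
# cost on top of the native render.
pairs = [
    ("least demanding", (1920, 1080), (3840, 2160)),  # 1080p handheld / 4K docked
    ("no DLSS",         (1280,  720), (2560, 1440)),  # 720p / 1440p
    ("very demanding",  ( 960,  540), (1920, 1080)),  # 540p / 1080p native
    ("impossible port", ( 640,  360), (1280,  720)),  # 360p / 720p native
]
for name, (hw, hh), (dw, dh) in pairs:
    ratio = (dw * dh) / (hw * hh)
    print(f"{name}: docked renders {ratio:.1f}x the handheld pixels")
# Every pair above is an exact 4x gap in native pixels.
```
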
 
I mean the ones talking to Nate and Bloomberg et al.

Obviously nobody is gonna publicly talk about it themselves but they're already breaking NDA by talking to these reporters.

It might be the case that sources are close to the project but not deeply involved, e.g. artists or other staff who work on future hardware titles but are not deeply involved in the tech side of things. So they would know of devkits and a game in development, but not much about the actual specs.

I know of studios where people worked on unannounced hardware in a black box (behind a door with a key card and no windows). Everyone knew they were working on new hardware from a specific first party, and everyone knew about the game, but no one was able to see it or get more details.
 
That requires Nintendo implementing a custom NVMe protocol, which I don’t see them doing at all.
How do you mean? Caching has been a thing forever.
And games do not stay idle with asset streaming. Especially in the scenario that has games where you stream a new area or environment.
Sure they do. It's just for shorter periods of time.
Think about it this way. You're reading from storage constantly on some other technology that is 1/6 the speed you'd get with an NVMe SSD, drawing 500mW constantly. In that same game with the NVMe SSD, you'd spend 2.5 of every 3 seconds idle, and the power usage is the same, just broken up differently. If you're loading new rooms, the amount of data you read into system RAM is the same; it just happens more quickly. Same math as above.
 
Not gonna tag Nate but I'm personally more curious about why devs haven't seemingly made many comments to reporters or folks like Nate/Imran/Grubb about how powerful this is for a handheld. If we're really talking about something that'll be somewhat close to a Series S in docked mode.
Depends on the developers. If a lot of Japanese studios got kits, then you probably won't hear much due to NDAs. It's mostly western companies like Ubisoft that do most of the leaking.
 
How do you mean? Caching has been a thing forever.

Sure they do. It's just for shorter periods of time.
Think about it this way. You're reading from storage constantly on some other technology that is 1/6 the speed you'd get with an NVMe SSD, drawing 500mW constantly. In that same game with the NVMe SSD, you'd spend 2.5 of every 3 seconds idle, and the power usage is the same, just broken up differently. If you're loading new rooms, the amount of data you read into system RAM is the same; it just happens more quickly. Same math as above.
That's if the MBps/W is the same. The specific model you linked to averages out to sub 500 MBps/W for the 256 GB and 512 GB versions.
Samsung's eMMC at 330 MB/s sequential read for 0.5 W would hit a theoretical 660 MBps/W.

To be fair, the model you linked to is from 2017. There's probably a more recent model whose efficiency is less bad, even in comparison to other PCIe gen 3 drives.
It's also annoyingly hard to find eUFS examples.
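
For reference, the efficiency comparison being made here, using the rough datasheet figures quoted above:

```python
# Sequential-read efficiency from the rough figures discussed above
# (PM971-class NVMe: ~1400 MB/s at ~3W; Samsung eMMC: ~330 MB/s at ~0.5W).
drives = {
    "PM971-class NVMe": (1400, 3.0),  # MB/s, W
    "eMMC 5.1":         (330, 0.5),
}
for name, (mbps, watts) in drives.items():
    print(f"{name}: {mbps / watts:.0f} MB/s per watt")
# ~467 vs ~660 MB/s per watt: the NVMe part is much faster in absolute terms
# but less efficient per watt, which is the point being made above.
```
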
 
That's if the MBps/W is the same. The specific model you linked to averages out to sub 500 MBps/W for the 256 GB and 512 GB versions.
Samsung's eMMC at 330 MB/s sequential read for 0.5 W would hit a theoretical 660 MBps/W.

To be fair, the model you linked to is from 2017. There's probably a more recent model whose efficiency is less bad, even in comparison to other PCIe gen 3 drives.
It's also annoyingly hard to find eUFS examples.
eUFS would be a good solution for internal memory. UFS Card looks like it never got picked up by anyone and might have died on the vine. Maybe SD Express will be established well enough by the time this thing comes out.

I fully admit that I'm making the math up to support my argument. I don't know how much wattage the controller uses, and it could be significant. My understanding, however, is that the power draw of NAND is fairly closely linked to read and write activity - i.e. the faster you read or write, the higher the power draw. If you're not reading or writing, then most of the power draw is the controller staying active.
 
Oh right, the time when you're not reading or writing, i.e. idle. The other consideration is that NVMe's idle draw is different from eMMC's, and probably different from eUFS's too.
eMMC idle power draw should be... sub 1 milliwatt under average conditions? Maybe between 1 and 2 milliwatts at 85 Celsius? I'm not too sure exactly, but the couple of times I've seen idle mentioned in eMMC specification pdf files, the current's measured in 2-3 digit microamperes (whereas active reading/writing's in milliamperes). And voltage should be a single digit.
What messes me up when looking at this stuff is that I'm not too sure how to handle the numbers given for VCC and VCCQ (and VCCQ2 for some versions of eUFS). Do I do something like add together VCC*current and VCCQ*current_2? Or do VCC and VCCQ apply in separate situations?
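
My understanding (not an authoritative reading of the spec) is that VCC and VCCQ are separate supply rails - NAND core vs. I/O - each with its own voltage and current, so you do add the per-rail products together. A tiny sketch with placeholder idle currents:

```python
# Device power as the sum of per-rail voltage x current. The currents below
# are placeholder idle-range values, not taken from any particular datasheet.
rails = [
    # (name, voltage_V, current_A)
    ("VCC (NAND core)", 3.3, 200e-6),  # e.g. a couple hundred microamps idle
    ("VCCQ (I/O)",      1.8, 100e-6),
]
total_w = sum(volts * amps for _, volts, amps in rails)
print(f"idle power ~= {total_w * 1000:.2f} mW")
# With 2-3 digit microamp currents and single-digit voltages, idle lands
# around or below a milliwatt, in line with the estimate above.
```
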
 
How do you mean? Caching has been a thing forever.

Sure they do. It's just for shorter periods of time.
Think about it this way. You're reading from storage constantly on some other technology that is 1/6 the speed you'd get with an NVMe SSD, drawing 500mW constantly. In that same game with the NVMe SSD, you'd spend 2.5 of every 3 seconds idle, and the power usage is the same, just broken up differently. If you're loading new rooms, the amount of data you read into system RAM is the same; it just happens more quickly. Same math as above.
Caching has been a thing for a long time, yes. An NVMe SSD, which is what you suggested, has not. In order to have fast internal storage that consumes low power, Nintendo would need to implement the protocol in-house, like Sony or Apple did to make a custom SSD for their use case, and that is something I do not see them doing, as that is not trivial R&D.

And even then, if they were to go the route you suggested, there are other solutions that could do it at much lower power, like eUFS, which is designed for lower-powered mobile devices - and the Switch fits that mold perfectly.
 
Sorry for the double post:
[Jetson Orin / Xavier comparison and Jetson Orin NX module spec tables quoted above]

Was taking a look into this, and two things stand out to me: Orin NX 8GB, which has 6 CPU cores, 1x NVDLA and a max GPU frequency of 765MHz, tops out at 20W, while Orin NX 16GB, with 8 CPU cores (+2), 2x NVDLA (+1) and a max GPU frequency of 918MHz, tops out at 25W. That indicates to me that these clocks are still within the efficient range of the V/f and perf/W curves. When you look at AGX Orin, the 64GB SKU, with 4 more CPU cores and a GPU max frequency of 1.3 GHz, consumes 20W more (60W) compared to the 32GB SKU. Nothing conclusive, of course, but it gives us a rough idea for Drake/T239.
Honestly, after seeing this, I'm more relaxed about T239 on 8nm. Nvidia and Nintendo will cut all of the automotive/AI hardware the SoC doesn't need and will clock it quite a bit lower (mainly on the CPU side).
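
A rough sketch of that scaling using the module figures above (module power covers the CPU, memory and the rest, so this is only a loose indicator):

```python
# How Orin NX module power scales with GPU clock (the 16GB part also adds
# 2 CPU cores and a second NVDLA), using the figures from the table above.
orin_nx = [
    # (name, GPU clock MHz, module power W)
    ("Orin NX 8GB",  765, 20),
    ("Orin NX 16GB", 918, 25),
]
(base_name, base_clk, base_w), (big_name, big_clk, big_w) = orin_nx
print(f"{big_name} vs {base_name}: "
      f"clock +{(big_clk / base_clk - 1) * 100:.0f}%, "
      f"power +{(big_w / base_w - 1) * 100:.0f}%")
# ~20% more GPU clock (plus the extra CPU cores and NVDLA) for ~25% more
# power - consistent with these clocks sitting in the efficient part of the
# V/f curve rather than being pushed hard.
```
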
 