
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Then in that case we'd probably have to assume that Bloomberg, DF and Nate all share at least 1 source? Definitely possible but I couldn't fathom what odds it would be.
Nintendo insiders sharing the same few sources is something that's been said to have happened in the past. Just see the Star Fox Racing thing, for example.

Nintendo playing 4D chess
that's not 4D chess, that's a fool playing checkers in a chess tournament
 
Nintendo playing 4D chess
Actually, shit... I didn't even think about Nintendo tracking down who leaked and then making them think the hardware was cancelled so that "Switch Pro cancelled" would then leak.

The return of Star Fox Grand Prix. 🤣😂🤣
(which totally should have existed, btw. I woulda bought the hell outta that)
 
Drake has a max of 102 GB/s. So at 3 TFLOPS, you're starting to hit your limit of memory bandwidth for the GPU.

My idea is getting to 1GHz / 3TF. I think at this point we are all aware of the BW limiting everything T_T

But we still could be getting lpddr5x... that would give some ~34% extra juice for the BW.

When I say to extract every bit of Flop it can give, I'm thinking of at least 1GHz...
But 800MHz (or less) docked would be... well... expected from Nintendo =/

edit: also, how accurate is comparing a windows environment with a closed hardware/environment like the switch, in terms of bandwidth usage?
I mean, I would believe Nvidia has some features to mitigate the scarcer bandwidth, features that would hardly be adopted by developers in the PC space but would be essential in the development environment of the Switch. I think it's hard to draw this comparison between an open platform like the PC and the Switch, but I could be very wrong.
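For anyone who wants to sanity-check those numbers, here's rough napkin math in Python. The 12 SM count, 128 FP32 cores per SM, and 128-bit bus are assumptions based on the leaked figures discussed in this thread, not confirmed specs:

```python
# Back-of-the-envelope math for the "1 GHz ~= 3 TFLOPS" and "~34% extra BW" figures.
# Assumptions (not confirmed): 12 Ampere SMs, 128 FP32 cores per SM,
# FMA counted as 2 FLOPs per core per clock, 128-bit memory bus.

SMS = 12
CORES_PER_SM = 128
FLOPS_PER_CORE_PER_CLOCK = 2

def gpu_tflops(clock_ghz: float) -> float:
    """Peak FP32 throughput (TFLOPS) at a given GPU clock."""
    return SMS * CORES_PER_SM * FLOPS_PER_CORE_PER_CLOCK * clock_ghz / 1000

def bandwidth_gbs(data_rate_mts: int, bus_bits: int = 128) -> float:
    """Peak memory bandwidth (GB/s) for a given data rate and bus width."""
    return data_rate_mts * (bus_bits / 8) / 1000

print(f"1.0 GHz -> {gpu_tflops(1.0):.2f} TFLOPS")    # ~3.07
print(f"0.8 GHz -> {gpu_tflops(0.8):.2f} TFLOPS")    # ~2.46
lpddr5  = bandwidth_gbs(6400)                        # ~102.4 GB/s
lpddr5x = bandwidth_gbs(8533)                        # ~136.5 GB/s
print(f"LPDDR5 {lpddr5:.1f} GB/s, LPDDR5X {lpddr5x:.1f} GB/s "
      f"(+{(lpddr5x / lpddr5 - 1) * 100:.0f}%)")     # ~+33%, the "~34% extra juice"
```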
 
Where did the ‘devkits recalled and product cancelled’ narrative first originate?
NateDrake and John Linneman

Actually, shit... I didn't even think about Nintendo tracking down who leaked and then making them think the hardware was cancelled so that "Switch Pro cancelled" would then leak.

The return of Star Fox Grand Prix. 🤣😂🤣
(which totally should have existed, btw. I woulda bought the hell outta that)
that's been a thing before, so it's possible

My idea is getting to 1GHz / 3TF. I think at this point we are all aware of the BW limiting everything T_T

But we still could be getting lpddr5x... that would give some ~34% extra juice for the BW.

When I say to extract every bit of Flop it can give, I'm thinking of at least 1GHz...
800MHz docked would be... well... expected from Nintendo =/
that's not "expected of nintendo" that's just the smart thing to do. if you're going to squeeze every drop of performance, then you need to design your hardware differently. at that point, you might as well go back to two pools of ram and separate cpu and gpu
 
When I'm reviewing the transcript and podcast notes, the impression I'm getting is that plans for a late 2022 / early 2023 device had been cancelled; maybe I missed it, but I don't see anything about devkits being pulled. Cancelling plans for a specific timeframe doesn't mean pulling the plug entirely on a project, right?

That's what I thought.
Will have to go back and listen again, but I got the feeling that each time they mention devkits being recalled, it was just speculation on what they think happened. No one came out and stated that they were told a dev had their devkit recalled.

When the DF guy is talking about recalls, it definitely sounds like he's just speaking generally. As in, when consoles are in their development phase it's a possibility that devkits can be recalled and then further iterative work is done.

But then Nate says that the "rumblings" he heard in the "summer months" were, along with "the hardware is no longer scheduled to release", that those devs were no longer working with those devkits (adding he's surprised because MVG had heard about devs having them in March). Which I assume implies he heard they were recalled.

Then MVG starts talking about how devkits can be just a bunch of motherboards tied together, prototypes etc. And it's clear he's just throwing shit out there without knowing or hearing what exactly has happened.

And then Nate posts this reply on the subject of recalled devkits:

We were going to mention a specific example but opted against it, as the risk was considered too high.

So I would guess Nate was told by at least one dev that they had their devkit recalled. But it's possible he's just assuming that, since the DF guy also mentioned the cancellation, whoever their source was also had their devkit recalled. Maybe they could have come out and stated it more strongly, without giving a name or specific example. Because it appears we're sticklers for clarity (and/or have nothing better to do lol)
 
I can't imagine Nintendo pulling devkits and not replacing them, knowing full well devs were preparing games. They would risk losing support/titles for the launch period, plus I've not heard of a console manufacturer doing that before?
 
that's not "expected of nintendo" that's just the smart thing to do.

Well, you're saying this assuming it can't clock above 800MHz because of memory bandwidth, which we don't know to be the case (and for a possible 2024 release, it could have more BW with LPDDR5X).

I'm saying it is something I expect from Nintendo because of the node (which we also don't know yet, but I'm already expecting the worst)
 
It depends on how close the source was to the devkits and what exactly they said.
There's a world of difference between "My friend heard from their friend they had the devkits for an early 2023 Nintendo device and apparently it was cancelled recently" and "I was developing a game using the devkit, aiming for an early 2023 release. Nintendo cancelled the device, recalled the devkits and now the game is on hold indefinitely".
Also, I can't imagine that neither Nate nor John asked their sources whether they'd heard anything about Nintendo's plans after the cancellation. If their sources didn't hear anything about that, then either they weren't close enough to know, or the next device is far away (prompting the late 2024/early 2025 prediction)
 
I mean, polygon made it into the OP.
While that's a fair point, Polygon had no track record whatsoever, while MILD from what I understand had a very poor track record. I'd consider unproven info to be a bit more worthwhile to consider than info from someone who is typically proven wrong.
 
From a product management perspective I want to note Switch ended 2022 with some very strong ports. Nier, No Man's Sky, and Crisis Core were kind of amazing, all releasing around the same time. The perceived lack of excitement seems to be that Nintendo themselves didn't have a tentpole game and had Bayo3/Pokemon.
 
While that's a fair point, Polygon had no track record whatsoever, while MILD from what I understand had a very poor track record. I'd consider unproven info to be a bit more worthwhile to consider than info from someone who is typically proven wrong.
I agree, but now that similar things are corroborated by other sources, that might add some validity to it.

And I found the original post, it was from March 25.


https://famiboards.com/threads/futu...-staff-posts-before-commenting.55/post-197685
 
The idea that new hardware could ever be cancelled at the stage where you've sent out devkits already is so stupid it's not even worth discussing to me.
feeling very bad for LiC right now

the current discourse is your personal hell and we're all agents of satan
 
The lack of camera support rules out being used for self-driving cars, correct? But what about for infotainment units on cars for navigation, games, car play, etc?

In theory they could use it for infotainment, but it's worth noting that Nvidia's infotainment solution (DRIVE IX) advertises driver safety monitoring (via internal cameras) as one of its key features. I would also imagine that if they were developing it as a cheaper alternative to Orin primarily for infotainment use-cases, they would still leave the door open to ADAS (something TX1 has been used for), as I assume you would still need to pass the same safety certification either way.

To go back to the post to which you are replying--the information in the Linux commits implies that this SoC physically existed in mid-2022, and the SoC existing in mid-2022 implies, based on typical manufacturing timelines, that a device using the SoC would launch sometime in 2023. Is that correct?

If so, and assuming that Nintendo truly is the only consumer of this SoC, is there any way to reconcile the SoC existing in mid-2022 with the only device using it launching in 2024 or 2025?

If Nintendo is the only customer for T239, and it existed in silicon in 2022, but Nintendo won't release anything until 2024, then the only conclusion I could think of is that Nintendo delayed or cancelled their device after T239 had already taped out and hit early manufacturing. That's something that would have seemed very unlikely to me before, but it's technically possible. If the decision was unilateral from Nintendo (ie the chip was good to go but Nintendo just decided not to go ahead) then it would be a very expensive decision, as they'd either have to sit on manufactured chips for a couple of years or presumably pay Nvidia some penalty for pulling out of their contract.

On the other hand, it could have been something outside of Nintendo's control, like the chip not meeting agreed-upon requirements. I've said before that I don't think there's any reasonable possibility that they could have manufactured it on Samsung 8nm and been surprised that a 12 SM GPU would be too power hungry when it came out of manufacturing. I also don't think they would have been surprised by TSMC 7nm/6nm, or TSMC's 5nm/4nm processes. However if they had planned to manufacture on one of Samsung's 5nm or 4nm processes, then they could well have been surprised by how power hungry it ended up being. We know that Qualcomm were so unhappy with Samsung's 4LPE for the Snapdragon 8 Gen 1 that they migrated the entire chip over to TSMC N4 (probably at great expense, given the short timescale). We also know that Samsung themselves had to significantly reduce clocks on the Exynos 2200 GPU to get it out the door.

Conceivably they could have planned to manufacture on a Samsung 5nm or 4nm process, presumably expecting much better performance and power efficiency when they started in 2019/2020 than what they got after tapeout in 2022. I've heard that Nvidia's contract with Samsung has them paying for yielded die rather than per wafer (sorry, can't find the source for that at the moment), and depending on how that contract is written, if none of the dies meet Nvidia and Nintendo's yield criteria, Nvidia wouldn't have to pay Samsung anything. Or, Samsung would be in a position where they have to pump out a huge number of expensive to manufacture dies at a very low yield rate to hit Nvidia's requirements, and they'd be happier to renegotiate their way out of the contract rather than operate at a loss.

In theory we could have a situation where T239 was designed for Samsung 4LPE (let's say), and after taping out in early 2022, early silicon was way behind expectations in power/perf. With Nvidia having an advantageous contract with Samsung, and Nintendo seeing the Switch still selling well, they decide to push back to 2024, on either Samsung 3GAP (which will ship in 2024, with Nvidia rumoured to be one of the customers) or TSMC 4N. Nvidia might even decide to continue with small-scale manufacturing on 4LPE, as their other use-cases (eg Shield TV) don't have such strict yield requirements.

My main issue with this theory is that Nvidia have nothing else being manufactured on any Samsung 5nm or 4nm process. Generally they would want to share manufacturing processes across many product lines, both for economies of scale and to maximise flexibility, so manufacturing a single chip on Samsung 5nm or 4nm while you're planning to manufacture almost everything else on TSMC 4N would have been a pretty strange decision.

I don't think it's out of the question that Nvidia might want to run Linux on the chip entirely for internal development and/or continuous integration purposes.

Yeah, I wouldn't rule that out, but in that case I wouldn't expect to see any T239 code upstreamed to the Linux kernel. Perhaps the upstreamed references are only there incidentally, to ensure that T234 behaviour is consistent between mainline Linux and their internal branches which also support T239, but there have been a few of them now, and it wouldn't be difficult to do so in a way which avoids any T239 references being upstreamed at all.

2 words: micro PC.
For the same reason it’s not well-suited for gaming, A78AE CPUs are not well-suited for general purpose consumer computing.
Unlike most other micro PCs, because Nvidia is the SoC maker, they have a better cost for assembly (paying cost instead of wholesale price for a chip and able to leverage binning of an existing product for another customer). For example: Jetson Nano (which Tom’s Hardware called the Raspberry Pi of AI) was sold at some sort of a profit for $99 in 2019. Meanwhile, a 2-core Celeron N3350 micro-PC with 4GB LPDDR3 RAM from Bee-Link will run you $119.

I think Nvidia has an opportunity to basically scoop up nearly all of the binned Drake chips and compete with a far more capable SoC in the "cheap office micro PC" market at the VERY least, now that ARM Windows is a LOT better than it was when it started.

It’s not even an unprecedented occurrence, AMD used binned PS console chips similarly (and priced themselves out of a lot of customers at $400 a pop, methinks).

One of the big issues with T239 being used in a (Windows) PC is that Windows on ARM is currently exclusive to Qualcomm chips. Microsoft would have to be talked out of that exclusivity, and I don't think T239 offers much that they can't get from Qualcomm.

The real advantage T239 (or any Nvidia SoC) would have in the PC space is a capable integrated GPU, but without games being compiled natively for ARM you're still going to end up with pretty poor gaming performance. Perhaps if Nvidia is planning to move into desktop ARM CPUs after Grace, they could use T239 to get their foot in the door, and work with developers to get games ported over to ARM before they launch a desktop lineup.

Project Indy is the Nintendo Switch. Project Indy went through several iterations before landing on what we got, and the name "Nintendo Switch" was given to a product that didn't yet have the TX1 inside it.

There isn't enough data in the gigaleak to say what the hell was going on there, and how that device evolved (and I have my own theories) but here is what is clear.

In the beginning of 2014, Nintendo was building Indy around a custom SoC from ST called Mont Blanc. Mont Blanc had two years of development behind it, a custom GPU, and a complete spec sheet with pinouts (though this doesn't mean final hardware).

Mont Blanc was a hybrid 3DS/Wii U sorta deal, running 4 ARM cores, but with a "decaf" version of the Wii U's GPU. We can't be certain, but the design seems unlikely to have been as powerful as the Wii U, and so at least at the start of SoC development, the idea was still for a device that would continue the handheld line independently of the TV line.

By August of 2014, this device was called Nintendo Switch. By the end of 2014, the "New Switch" had replaced Mont Blanc with a TX1.

I do not believe there is any document in the gigaleak that indicates when this transition happened, but those who have reviewed it say that Nintendo's discussions did influence TX1's design (IIRC it's security concerns, not performance).

Sometime in Q1 2015, Nintendo builds functional prototype 1 for the Switch; in March, Iwata says the NX is coming during the DeNA press conference; and by June, they're planning to launch in holiday 2016.



So, yes, Nintendo had moved to a TX1 based design before the TX1 launched.

Yes, Nintendo can scrap an SOC with 2 years of custom development.

No, it's unlikely that the SOC manufacturer is going to put your custom chip in rando hardware when you do - Mont Blanc never showed up anywhere.

Some version of the Switch was being developed for 5 years before launch, and yes, some of the hardware designed in that first year wound up in the final product (the DRM hardware on the cartridge reader in the switch was designed in 2012 as part of Indy).

From pivot to final hardware took anywhere from 2 to 2.5 years, depending on how you look at it. So yeah, the idea that a device scrapped in mid-2022 would get a replacement out the door in 2024 is precedented.

Again, I'm trying to dodge the narrative a bit here. I suspect that none of the puzzle pieces seem to fit because not only do we not have all the puzzle pieces, we also don't know what the box looks like.

I haven't read super deep into this, but my understanding is that, although Nintendo chose TX1 before it was manufactured, and may have had some influence in its design, it was already well along the road by that point, and would have launched, and probably looked very similar, regardless of whether Nintendo signed up or not.

My point here is that, while TX1 wasn't strictly an off-the-shelf chip for the Switch, it might as well have been. It was a chip that was already well in development and would be manufactured well ahead of the planned launch of the Switch, and this allowed them to be so aggressive with their timescale for pushing the Switch out the door. I don't think there's a similar option available to them today.

If they were to drop BC, they could probably get an SoC from Qualcomm or elsewhere that would suit their purposes at short notice, but presumably BC is much more important to them now than it was post-Wii U, which means a chip from Nvidia. In which case, for Nintendo to execute a similar two-year turnaround for the new model, there would need to be a suitable chip already in development at Nvidia. That seems very unlikely to me.

Post-Switch, Nvidia have released only one SoC suitable for consumer devices, TX2 (Parker), which was already in development long before Nintendo signed up to use the TX1. Since signing Nintendo, the only SoCs Nvidia have announced are Xavier, Orin and Atlan/Thor, all large automotive SoCs unsuitable for consumer devices. As far as we're aware, the only SoC they've made since signing Nintendo that would be suitable for a device like the Switch is T239, which looks very much like it was designed specifically for Nintendo. It would be very surprising to me if they just so happened to have another consumer SoC waiting in the wings for a 2024 release that Nintendo hadn't been involved in at all.
 
It definitely feels like there may be some truth to the reports, but also that we don't have the full story at all. I would expect big devs, your Capcoms and Squares, to have kits of Nintendo's next device right now. It could be that the pulled devkits were due to input from said bigger devs on the previous kits, and Nintendo made some changes based on the feedback.
 
Bloomberg I dunno about (plus as far as I know they haven't said anything about cancellation), but I'm not convinced that Nate and DF were necessarily talking about the same device. Did DF ever specify the cancelled device they heard about was the same 4k DLSS one that Nate and Mochi have talked about? As far as I heard they just said they knew of hardware that was shelved. Could've been the beefed up Mariko that has been talked about here, while Nate's been describing something different.
Nate clarifies here:
I asked John if he believed the hardware discussed by myself and Bloomberg was the same device he had said was cancelled and he answers in the affirmative that the timelines match.
I'm still skeptical of a 'cancellation'. But there is some consensus of changed plans.
 
Something I don't get: let's say things were canceled in 2022, how the hell can they go back to the drawing board and release something in 2 years?! It would take far longer, wouldn't it?
 
My idea is getting to 1GHz / 3TF. I think at this point we are all aware of the BW limiting everything T_T

But we still could be getting lpddr5x... that would give some ~34% extra juice for the BW.

When I say to extract every bit of Flop it can give, I'm thinking of at least 1GHz...
But 800MHz (or less) docked would be... well... expected from Nintendo =/
I'm going to agree with @ILikeFeet here, if you get every TFLOP by skimping on CPU, and by overloading the bandwidth situation, those last FLOPS aren't giving you as much bang for your buck as more CPU, and keeping the bandwidth situation under control. PS5 generally outperforms Xbox Series X, despite the extra 2 TFLOPS the Series X has, likely because of the bandwidth situation on Series X.

While the bandwidth situation on Switch wasn't great, part of the reason that it has done so well despite its small size is that it's better balanced than the PS4 and Xbone. Those machines had atrocious CPUs, and it let Switch keep up with them so that all games had to do was cut down the graphics side of the picture.

But PS5 and Xbox Series consoles have a pretty great CPU. If you want ports from Series S to Drake you need that extra CPU power much more than you need extra FLOPS. You don't want to put all this power into FLOPS that don't actually do anything because the CPU can't keep up.

edit: also, how accurate is comparing a windows environment with a closed hardware/environment like the switch, in terms of bandwidth usage?
I mean, I would believe Nvidia has some features to mitigate the scarcer bandwidth, features that would hardly be adopted by developers in the PC space but would be essential in the development environment of the Switch. I think it's hard to draw this comparison between an open platform like the PC and the Switch.
Switch will need more bandwidth than an equivalent graphics card on PC. 100%, there is no question. GPUs work on uncompressed textures. If you want the same quality texture on Switch it needs to occupy the same amount of RAM, and will need to be accessed by the GPU the same number of times, in the same way. There is no console specific magic there.

Where there is console specific magic is on the CPU. In a PC game, you have two pools of memory, RAM and VRAM, with two separate buses, with their own memory bandwidth. The CPU is often stuck using some of its RAM "preparing" data (usually decompressing it) for the GPU then copying it over. That puts a strain on the CPU's pool.

Consoles have a unified memory pool (usually), and can eliminate that copying. But that doesn't provide relief for the GPU, just the CPU. And because you have one pool of RAM, the two share the same bus/bandwidth. That's why the amount of bandwidth needed for console is more than just the graphics card bandwidth by itself.
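To make that shared-bus point concrete, here's a toy comparison; every number is made up purely for illustration, not a measurement of any real game or hardware:

```python
# Toy comparison: split memory pools (PC) vs one unified pool (console).
# All figures below are illustrative placeholders, not real measurements.

# PC: the GPU has VRAM on its own bus; the CPU has system RAM on another.
pc_vram_bw_gbs   = 192.0   # hypothetical discrete-GPU VRAM bandwidth
pc_sysram_bw_gbs = 51.2    # hypothetical dual-channel system RAM bandwidth

# Console: one pool, one bus, shared by CPU and GPU.
console_bw_gbs  = 102.4    # e.g. 128-bit LPDDR5 @ 6400 MT/s (assumed)
cpu_traffic_gbs = 20.0     # hypothetical CPU demand on that same bus

gpu_leftover = console_bw_gbs - cpu_traffic_gbs
print(f"PC GPU keeps its full {pc_vram_bw_gbs} GB/s of VRAM bandwidth")
print(f"PC CPU has a separate {pc_sysram_bw_gbs} GB/s pool for its own work")
print(f"Console GPU is left with ~{gpu_leftover:.1f} GB/s after the CPU's share")
```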
 
Ultimately, a TV only model is still going to be in the place where supporting it, and supporting the handheld model, is easy. Pushing that power gap even further makes that tough. I suspect the only place for a "more powerful" TV only switch is as a pro revision which releases well after the "base" TV mode is established.


I agree with you on the CPU, though I don't think it'll quite get there.

Drake actually has a max TFLOP range which isn't obvious. All the RTX 30 cards have 30-35 GB/s of memory bandwidth per TFLOP. Drake has a max of 102 GB/s. So at 3 TFLOPS, you're starting to hit your limit of memory bandwidth for the GPU.

But Drake, unlike a graphics card, also needs bandwidth for the CPU. And as the CPU gets faster, it also needs more bandwidth. And while the multicore performance of an 8 core A78C cluster is pretty excellent, single core is lagging relative to the other 9th gen consoles.

If Drake's GPU gets up to 800 MHz, you've still got a healthy amount of CPU bandwidth, and you're in excellent shape for 4K-DLSS versions of 8th gen games. If there is anything left over in the power budget, I would much prefer it to go to the CPU than continue to push the GPU further and further.
Not that it changes your point, but some of the RTX 30 cards dip into the 20s for GB/s-per-TFLOP ratio. A few at base clocks, a few more at boost clocks.
I do agree that getting docked to about 2.4-2.5 TFLOPS is perfectly fine for regular LPDDR5, and whatever's left over can go into getting that CPU up.
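Putting that ratio argument into numbers (the 25-35 GB/s per TFLOP range comes from the posts above; the CPU reservation is my own placeholder):

```python
# If Ampere cards sit around 25-35 GB/s of bandwidth per TFLOP, and part of
# Drake's 102.4 GB/s bus has to feed the CPU, the comfortable GPU ceiling
# lands in the 2.5-3.5 TFLOPS range. The CPU reservation here is a guess.

TOTAL_BW_GBS = 102.4
CPU_RESERVED_GBS = 15.0      # hypothetical CPU share of the shared bus

gpu_bw = TOTAL_BW_GBS - CPU_RESERVED_GBS
for gbs_per_tflop in (25, 30, 35):
    print(f"{gbs_per_tflop} GB/s per TFLOP -> ~{gpu_bw / gbs_per_tflop:.2f} TFLOPS max")
```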
My idea is getting to 1GHz / 3TF. I think at this point we are all aware of the BW limiting everything T_T

But we still could be getting lpddr5x... that would give some ~34% extra juice for the BW.

When I say to extract every bit of Flop it can give, I'm thinking of at least 1GHz...
But 800MHz (or less) docked would be... well... expected from Nintendo =/

edit: also, how accurate is comparing a windows environment with a closed hardware/environment like the switch, in terms of bandwidth usage?
I mean, I would believe Nvidia has some features to mitigate the scarcer bandwidth, features that would hardly be adopted by developers in the PC space but would be essential in the development environment of the Switch. I think it's hard to draw this comparison between an open platform like the PC and the Switch, but I could be very wrong.
Heck, even a ~+17% from 7500 MT/s LPDDR5X would be nice too, if full 8533 MT/s isn't available.

So, starting from the version of DynamIQ introduced with the A75 in 2017, ARM started using L3 cache. This L3 cache can be accessed by the GPU. So in theory there should be some benefit, but it'd vary a lot depending on the workload.
Then we fast forward to 2021 when ARM announces an update to DynamIQ. One of the updates is raising L3 cache from 4 to 8 MB (and slicing to potentially increase bandwidth). And there's this slide:
[ARM slide: SystemIP_7.png]

So ARM claims a 28% hit rate for the GPU with an 8 MB L3 cache. In a vacuum, that... seems awful, actually?
And again, it's super variable depending on what's going on at the moment, plus you are sharing that cache with the CPU. Still, from the perspective of squeezing every last drop out, it's something.
As for the A78C itself, the L3 cache did get bumped up to 8 MB. I also suspect that slicing is used to improve bandwidth, but ARM never gave too much detail about the A78C.
(my long shot dream? Nvidia does a bit of customizing and expands the L3 cache to 16 MB. I think I wrote before that expanding a pre-existing block of SRAM from 8 MB to 16 MB should cost only a few mm^2 on N7 or N5? Something like that; totally worth it to me! :p
...heck, maybe even expand L2 as well; ARM did it with the transition to A710, so clearly it's doable on current nodes <.<)
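As a rough way to think about what that 28% figure buys you (illustrative only; real hit rates swing wildly with the workload, and the bandwidth figure is an assumption):

```python
# If 28% of the GPU's memory requests hit a shared L3, only the remaining 72%
# ever reach DRAM, which stretches the effective bandwidth accordingly.
# Illustrative math only; real hit rates vary a lot by workload.

raw_bw_gbs = 102.4      # assumed 128-bit LPDDR5 @ 6400 MT/s
gpu_hit_rate = 0.28     # ARM's quoted GPU hit rate with an 8 MB L3

dram_fraction = 1 - gpu_hit_rate
effective_bw = raw_bw_gbs / dram_fraction
print(f"DRAM only sees {dram_fraction:.0%} of GPU requests")
print(f"Effective GPU bandwidth: ~{effective_bw:.0f} GB/s")   # ~142 GB/s
```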
 
I agree, but now that similar things are corroborated by other sources, that might add some validity to it.

And I found the original post, it was from March 25.

Off-topic note: when sharing posts, it's better to use the URL from the Share option, as it will lead to the right place regardless of how many posts per page a person uses.
 
One of the big issues with T239 being used in a (Windows) PC is that Windows on ARM is currently exclusive to Qualcomm chips. Microsoft would have to be talked out of that exclusivity, and I don't think T239 offers much that they can't get from Qualcomm.
A rumour said that Qualcomm's exclusivity deal with Microsoft on Windows on Arm has already ended. And considering that Mediatek mentioned a couple of months ago that it has plans to design Windows on Arm SoCs, I think that rumour is accurate. It probably takes a while to design a Windows on Arm SoC competitive with Apple's and Qualcomm's offerings.
 
There is no console specific magic there.

I remember when, in 2016, I was reading about how the Tegra X1 could mitigate the BW issue because of TBR. I just thought Nvidia would come up with solutions to mitigate the lower BW on a Switch 2. But I'm probably wrong.

Anyway, my layman guess is that even if Switch 2 is on the "right" node, heat dissipation will still be the limiting factor - before memory BW - for clocks higher than 1GHz on the GPU and closer to 2GHz on the CPU. Unless Nintendo comes out with a phat form factor like these handheld PCs, which I will be perplexed by if they do lol
 
I remember when, in 2016, I was reading about how the Tegra X1 could mitigate the BW issue because of TBR. I just thought Nvidia would come up with solutions to mitigate the lower BW on a Switch 2. But I'm probably wrong.

Anyway, my layman guess is that even if Switch 2 is on the "right" node, heat dissipation will still be the limiting factor - before memory BW - for clocks higher than 1GHz on the GPU and closer to 2GHz on the CPU.
It did to some degree, but mitigate and eliminate are two completely different things.

What is definitely true for TX1, and I suspect will be true for Drake, is that developers will have to optimize the memory pipeline a lot more carefully, as that's probably the main thing that separates good ports from bad ones on OG hardware.
 
I wouldn't be surprised if the new system (including joycons) is noticeably thicker or larger in order to make room for better performance and features.
 
A rumour said that Qualcomm's exclusivity deal with Microsoft on Windows on Arm has already ended. And considering that Mediatek mentioned a couple of months ago about having plans to design Windows on Arm SoCs, I think that rumour is accurate. Probably takes a while to design a Windows on Arm SoC competitive with Apple's and Qualcomm's offerings.
And speaking about Qualcomm's offerings, here's a rumour about the Snapdragon 8cx Gen 4(?), which is purportedly using Nuvia's Oryon design.







I wonder where Nvidia's planning on using Estes.
 
I haven't been reading everyone's posts for the past few pages, can someone give a tldr of current events? Been playing Persona 3 Portable.
 
I remember when, in 2016, I was reading about how the tegra x1 could mitigate the BW issue because of TBR. I just thought Nvidia would come with solutions to mitigate the lower BW on a switch 2. But I'm probably wrong.
I actually think you're right. It's just that the tricks are baked into Ampere already. TBR does reduce bandwidth usage, but it's baked into Maxwell, and Switch was a little short relative to Maxwell cards + CPU.

It's possible that Drake gets a fatter cache, which would help matters, but I'm not sure there is much more they can do without reengineering Ampere.
 
I don't think this was posted.

New Nintendo Specific Call Of Duty Studio?



It may take time to ramp up the studio, I don't think they'd be targeting the current Switch.

When Microsoft approached Nintendo for a 10-year CoD deal, Nintendo was like: "How many devkits do you need"?
 
I don’t think we’re going to get genuine key news on Nintendo’s next console for quite a while now.

Makes me wonder what the future of this thread is for the time being since there’s now almost nothing tangible to discuss and things seem to just be going in circles.

If the full Tears of the Kingdom reveal basically confirms it’s not launching with a new Switch, maybe it’s time to close this thread for a while until we have something real to discuss?
 
Ultimately, a TV only model is still going to be in the place where supporting it, and supporting the handheld model, is easy. Pushing that power gap even further makes that tough. I suspect the only place for a "more powerful" TV only switch is as a pro revision which releases well after the "base" TV mode is established.


I agree with you on the CPU, though I don't think it'll quite get there.

Drake actually has a max TFLOP range which isn't obvious. All the RTX 30 cards have 30-35 GB/s of memory bandwidth per TFLOP. Drake has a max of 102 GB/s. So at 3 TFLOPS, you're starting to hit your limit of memory bandwidth for the GPU.

But Drake, unlike a graphics card, also needs bandwidth for the CPU. And as the CPU gets faster, it also needs more bandwidth. And while the multicore performance of an 8 core A78C cluster is pretty excellent, single core is lagging relative to the other 9th gen consoles.

If Drake's GPU gets up to 800 MHz, you've still got a healthy amount of CPU bandwidth, and you're in excellent shape for 4K-DLSS versions of 8th gen games. If there is anything left over in the power budget, I would much prefer it to go to the CPU than continue to push the GPU further and further.

In a TV-only version the bus width would be bigger, i.e. 256-bit, vs the limited space on the Switch hybrid PCB where 128-bit surely would be the max. That would give it around 204 GB/s.


Unrelated to quote:
Many want the next console to be very powerful. But any talk about a TV-only lead platform is treated like heresy. I just find that a little contradictory.
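The 204 GB/s figure is straightforward bus math, assuming LPDDR5 at 6400 MT/s on both configurations (the data rate is an assumption):

```python
# Bandwidth scales linearly with bus width at the same data rate.
# Data rate (6400 MT/s LPDDR5) is an assumption.

def bandwidth_gbs(bus_bits: int, data_rate_mts: int = 6400) -> float:
    return data_rate_mts * (bus_bits / 8) / 1000

print(f"128-bit: {bandwidth_gbs(128):.1f} GB/s")   # ~102.4 (hybrid)
print(f"256-bit: {bandwidth_gbs(256):.1f} GB/s")   # ~204.8 (hypothetical TV-only box)
```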
 
I don’t think we’re going to get genuine key news on Nintendo’s next console for quite a while now.

Makes me wonder what the future of this thread is for the time being since there’s now almost nothing tangible to discuss and things seem to just be going in circles.

If the full Tears of the Kingdom reveal basically confirms it’s not launching with a new Switch, maybe it’s time to close this thread for a while until we have something real to discuss?
Why would they do that?

Calling for a thread to be closed because YOU feel Nintendo won't launch their next console soon doesn't seem right to me. 😕
 
I don’t think we’re going to get genuine key news on Nintendo’s next console for quite a while now.
I think we will get news, but the next source will be official.

I'm only speculating, but I expect sometime in May, we might hear something. Again, I have to preface this with the reminder that we're in a speculation thread, lest people get crazy.

Ah, who am I kidding...
Calling for a thread to be closed because YOU feel Nintendo won't launch their next console soon doesn't seem right to me. 😕
Like I said, I've given up on any semblance of sanity on the internet.
 
In a TV-only version the bus width would be bigger, i.e. 256-bit, vs the limited space on the Switch hybrid PCB where 128-bit surely would be the max. That would give it around 204 GB/s.


Unrelated to quote:
Many want the next console to be very powerful. But any talk about a TV-only lead platform is treated like heresy. I just find that a little contradictory.
Well that's because people want a powerful Nintendo Switch, not a Nintendo brand Xbox Series X. 😅
 
In a TV-only version the bus width would be bigger, i.e. 256-bit, vs the limited space on the Switch hybrid PCB where 128-bit surely would be the max. That would give it around 204 GB/s.


Unrelated to quote:
Many want the next console to be very powerful. But any talk about a TV-only lead platform is treated like heresy. I just find that a little contradictory.
That's not the only change you'd make. You'd probably also go from 1 to 2 GPCs (so, from 12 to 24 SMs). So that's a separate, more expensive die.

Is 3 dev profiles + 2 separate chips to manufacture to lead a generation even commercially viable these days? Particularly when your more expensive SKU has to directly butt heads against the PS5 and Series X (no, not the Series S, price-wise).
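For scale, here's what doubling the GPU config would mean on paper; the SM math assumes Ampere's 128 FP32 cores per SM, and the clocks are arbitrary examples:

```python
# Peak FP32 throughput for a hypothetical 2 GPC / 24 SM die vs the 12 SM hybrid,
# assuming 128 FP32 cores per SM and FMA (2 FLOPs per core per clock).

def tflops(sms: int, clock_ghz: float) -> float:
    return sms * 128 * 2 * clock_ghz / 1000

for sms in (12, 24):
    print(f"{sms} SMs @ 1.0 GHz -> {tflops(sms, 1.0):.1f} TFLOPS")
# 12 SMs -> ~3.1 TFLOPS, 24 SMs -> ~6.1 TFLOPS: a noticeably bigger, pricier die.
```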
 
I don’t think we’re going to get genuine key news on Nintendo’s next console for quite a while now.

Makes me wonder what the future of this thread is for the time being since there’s now almost nothing tangible to discuss and things seem to just be going in circles.

If the full Tears of the Kingdom reveal basically confirms it’s not launching with a new Switch, maybe it’s time to close this thread for a while until we have something real to discuss?

I think people have some reason to hope for 2H 2023 until after E3.

But if E3 passes with no news, then it's probably not going to be very interesting to talk about for a long time.
 
Unrelated to quote:
Many want the next console to very powerfull. But any talk about a tv only lead platform is like heresy. I just find a little contradicting.

If it is a different SoC for the TV-only model, doubling the performance, I would love it. If it is just a TV-only Switch 2, I don't see higher clocks being converted into visible upgrades in games (too much work for it). Also, the sales of a device like this would certainly (IMO lol) be much lower, to the point that support for the extra juice would be non-existent.

That's why I think it's best to extract everything they can from the T239 on the hybrid (the node being the most important part as I see it, and then getting clocks that won't be limited by memory bandwidth anyway), because this way we'll probably see the results of the extra TFLOPS when docked. In other words, the baseline (the hybrid) should be as high as is viable.
 
Why would they do that?

Calling for a thread to be closed because YOU feel Nintendo won't launch their next console soon doesn't seem right to me. 😕

Not calling for anything. Just a suggestion.

I suppose it’ll give people somewhere to be for a year or so, maybe more, until we have a new system in our hands.
 

I'm still excited by the introduction of Hall effect sensors, but every review video fails to touch on Gulikit's claim that, because these new joysticks use less electricity, the batteries in the Joy-Cons naturally last longer. If there's a measurable improvement over the current batteries, it might mean the battery in future Joy-Cons could be shrunk down to maintain the same battery life as existing Joy-Cons (~20 hours), giving some other components a little more room to breathe, or letting more tech be crammed in. Or the schematics stay untouched and, with enough battery density improvements, they last upwards of 30 hours, should Nintendo utilize Hall effect sticks officially.

Or maybe I'm foolishly optimistic
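Just to put the battery claim in perspective, here's toy arithmetic; every figure is an assumption for illustration (Gulikit hasn't published power numbers, and the battery capacity is a ballpark):

```python
# Toy arithmetic for the "longer battery life" claim. All numbers are assumptions.

BATTERY_WH = 1.9        # ballpark Joy-Con-class battery capacity (assumed)
BASELINE_HOURS = 20     # roughly the quoted Joy-Con battery life

baseline_draw_w = BATTERY_WH / BASELINE_HOURS          # ~0.095 W average draw
for stick_saving_mw in (10, 20, 30):                   # hypothetical stick power savings
    new_hours = BATTERY_WH / (baseline_draw_w - stick_saving_mw / 1000)
    print(f"Save {stick_saving_mw} mW -> ~{new_hours:.0f} hours")
# Hitting "upwards of 30 hours" would need roughly 30 mW of average savings,
# which gives a sense of how big the stick's share of the power budget would need to be.
```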
 
If Nintendo is the only customer for T239, and it existed in silicon in 2022, but Nintendo won't release anything until 2024, then the only conclusion I could think of is that Nintendo delayed or cancelled their device after T239 had already taped out and hit early manufacturing. That's something that would have seemed very unlikely to me before, but it's technically possible. If the decision was unilateral from Nintendo (ie the chip was good to go but Nintendo just decided not to go ahead) then it would be a very expensive decision, as they'd either have to sit on manufactured chips for a couple of years or presumably pay Nvidia some penalty for pulling out of their contract.

Thank you for the detailed response. Is it safe to say it's too late in the game for Nintendo to make any changes in the design of the chip? Meaning, if they were unhappy with the results, their only course of action is to try a different node and hope for better results?

If so, how long of a delay is typically incurred just from changing nodes? And do you have insight on how realistic it would be to change other aspects of the device assuming the SoC is now more efficient/cooler/lower power on a better node? Like could Nintendo decide to change the form factor or size, or is it too late for something like that?
 
This thread is for speculation and has its purpose even if there were zero leaks, rumors, or news. It's also the only place on the Internet with a good summarization and level-headed discussion of the leaks so far. Folks can come in any time and have their questions answered. Leaks are unpredictable too. Imagine if this thread had been closed right before the Lapsus hack last February, and having to close and reopen the thread between periods of silence. There's always something to talk about, even if it's as simple as "look at these new Hall effect sticks, how feasible is it for Nintendo to manufacture and incorporate them into their next Joy-Con".

Closing it is the last thing I want.
 