• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

yeah. i had written something about their experience with AMD vs Nvidia being night and day before deleting it, realizing the Frankenstein Wii U MCM probably had more to do with the inherited AMD/IBM partnerships than with any fault of AMD's. I'm sure AMD could have offered up their own CPU/GPU combo for the Wii U that was cheaper and more performant than what we got, but one thing AMD didn't offer was good tools.
I don't know about that. Are PS/Xbox toolsets made entirely in house? Or do they use some AMD made tools?
 
I don't know about that. Are PS/Xbox toolsets made entirely in house? Or do they use some AMD made tools?
I believe the balance is different, I remember the talk leading up to the Series X|S launch being about how Microsoft made Direct X12(U) in collaboration with AMD but that's Direct X development built on years of Direct X, while Sony made their own tools built on top of PS4 tools.

As far as I'm aware, those two are more the console makers creating tools with support from AMD, while for the Nintendo Switch it was Nvidia making tools with input from Nintendo.

Ultimately both parties are always involved, but put simply, Direct X development is closer to Microsoft than AMD, and NVN(2) is closer to Nvidia than Nintendo.
 
I have an Nvidia developer account through work, but I think they're free for individuals. You may be locked out of certain docs without licenses though
Thing is, I don't really have a job, and I'm not in a position where I can claim access to this content.
But that's no problem: not only has this presentation been reported on by the press, and the Chinese version is freely accessible, but also... let's just say life finds a way.
 
I don't remember if this has been discussed. Nvidia announced "RTX Video Super Resolution" during CES. It seems similar to Shield TV's AI upscaling tech, but upgraded for RTX.

Since nobody drops a grand on a GPU solely to watch YouTube videos, I suspect that Nvidia developed the software for a brand-new RTX-based Shield TV. I'm bringing this up now because the feature will finally be released soon. Does it indicate a new Shield TV is getting ready behind the scenes? Using binned Drake?

It's also worth noting that GeForce Now has been supporting "AI-enhanced" upscaling on Shield TV Pro and selected GPUs. Since its performance is deemed good enough for GFN, it might also be good enough for Switch 2 as a supplement to DLSS; for example, DLSS from 540p-720p to 1080p-1440p, and then RTX VSR to 2160p. Legacy games that don't support DLSS could also benefit.
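To put rough numbers on that chained setup (all resolutions here are illustrative picks, not confirmed specs), the scale factors work out like this:

```python
# Pixel-count arithmetic for a hypothetical two-stage chain:
# DLSS upscales the game's render, then a VSR-style pass finishes
# the job to 4K. None of these resolutions are confirmed.

def pixels(w, h):
    return w * h

def axis_scale(src, dst):
    """Linear scale factor per axis between two (w, h) resolutions."""
    return dst[0] / src[0]

render = (960, 540)      # native render target
dlss_out = (1920, 1080)  # DLSS output
vsr_out = (3840, 2160)   # final VSR output

print(axis_scale(render, dlss_out))        # 2.0x per axis via DLSS
print(axis_scale(dlss_out, vsr_out))       # 2.0x per axis via VSR
print(pixels(*vsr_out) / pixels(*render))  # 16.0x total pixel count
```

So each stage only has to do a modest 2x-per-axis job, even though the end-to-end chain multiplies pixel count 16x.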
 
I've got the ultimate data for DLSS estimation.
The One document to rule them all.

Can't believe it took me this long to find this, even though looking up GitHub repos should have been one of the first things to do.
Massive thanks to u/oginer on Reddit for directing me to that document.
Gotta go to sleep, but tomorrow I'm fixing the calculator.

At a quick glance it would seem the current calculator is not that far off (at least for 4K; my assumption that the 1440p numbers were bogus turns out to be true), but for 4K expect the fixed version to be more optimistic (probably; I'd have to look into it more to confirm that).
 
Can someone with more knowledge than me answer some technical questions?

1) what does “taped out” mean?
2) what features CAN’T change after being “taped out”? (Ex: GPU/CPU/RAM/Speed/type?)
3) what features can?
4) How many months/years are in between “taping out” and mass release? (Trying to figure out when Drake is taped out if released in ‘23/‘24/‘25)

thanks!
 
I don't remember if this has been discussed. Nvidia announced "RTX Video Super Resolution" during CES. It seems similar to Shield TV's AI upscaling tech, but upgraded for RTX.


As DLSS 2 has evolved, it has leaned more and more on the OFA to provide motion vectors instead of the engine - this is why ghosting especially around particle effects has been reduced in later versions. I suspect this is an extension of the same tech.

While it might indicate something like a new Shield, I think Nvidia is just trying to find as many possible uses of non-raster hardware as possible. RTX cards are highly valuable in video editing, but that's an area where, currently, the tensor cores and RT cores are providing little value. Pushing into video upscaling feels of a piece with the rest of their hardware strategy.
 
I don't remember if this has been discussed. Nvidia announced "RTX Video Super Resolution" during CES. It seems similar to Shield TV's AI upscaling tech, but upgraded for RTX.

Since nobody drops a grand on a GPU solely to watch YouTube videos, I suspect that Nvidia developed the software for a brand-new RTX-based Shield TV. I'm bringing this up now because the feature will finally be released soon. Does it indicate a new Shield TV is getting ready behind the scenes? Using binned Drake?

It's also worth noting that GeForce Now has been supporting "AI-enhanced" upscaling on Shield TV Pro and selected GPUs. Since its performance is deemed good enough for GFN, it might also be good enough for Switch 2 as a supplement to DLSS; for example, DLSS from 540p-720p to 1080p-1440p, and then RTX VSR to 2160p. Legacy games that don't support DLSS could also benefit.

I recently checked and found that past Shield TV models (and similar Nvidia products) showed up in FCC filings about 3-5 months before announcement/release. If a new Shield TV were coming, you might expect an announcement at GTC on March 20, but that and the next few months are probably ruled out by the lack of FCC filings, as well as by other channels that have tended to leak Shield products in the past, such as Google Play documentation and the Android kernel.

Thing is, I don't really have a job, and I'm not in a position where I can claim access to this content.
But that's no problem: not only has this presentation been reported on by the press, and the Chinese version is freely accessible, but also... let's just say life finds a way.

You don't need to do anything or claim anything to have access to the developer portal. You just need to sign up. Individual programs (like Drive AGX) require business information and/or NDAs but things like that DLSS video are freely available.
 
Can someone with more knowledge than me answer some technical questions?

1) what does “taped out” mean?
It basically means the physical layout of a chip's design is done.


2) what features CAN’T change after being “taped out”? (Ex: GPU/CPU/RAM/Speed/type?)
Of the chip? None of them can be changed after. In Drake's case that means
  • The GPU
  • The CPU
  • The memory controller
    • Which means the type and speed of memory
  • The I/O controllers
    • The kind and speed of built in storage
    • The number and type of USB ports
    • The kind and speed of the DisplayPort adapter
    • The number and kinds of things that can talk to the chip overall.

3) what features can?
Of the chip, again, nothing. But you probably mean of the console itself.

Of your list, just the amount of RAM can change. But think about the differences between a Pixel C tablet, every model of Switch, and an Nvidia Shield. They all use the same chip, but that is a huge variety of devices.


4) How many months/years are in between “taping out” and mass release? (Trying to figure out when Drake is taped out if released in ‘23/‘24/‘25)
However long you want. The tape-out is a final blueprint, but you don't have to go to manufacturing immediately. And the gap between the SoC being ready and the device being ready varies heavily between hardware.
 
Can someone with more knowledge than me answer some technical questions?

1) what does “taped out” mean?
2) what features CAN’T change after being “taped out”? (Ex: GPU/CPU/RAM/Speed/type?)
3) what features can?
4) How many months/years are in between “taping out” and mass release? (Trying to figure out when Drake is taped out if released in ‘23/‘24/‘25)

thanks!
I'll try to answer despite not being the most technically savvy person here.

1) Tape out means that the design of the integrated circuit (IC) is finalised before being sent to the foundry company (e.g. TSMC, Samsung, etc.) for fabrication.

2)
  • CPU
  • Amount of CPU cores
  • Amount of CPU cache (e.g. L1, L2, L3)
  • GPU
  • Amount of arithmetic logic units (ALUs) on the GPU (e.g. CUDA cores, CUs, etc.)
  • Amount of GPU cache (e.g. L1, L2, L3)
  • Amount of system cache
  • Memory controller for RAM
  • Type of RAM
  • Memory bus width for RAM
  • Type of external and/or internal flash storage (e.g. eMMC, UFS, SD, etc.)
  • Expansion bus standard (e.g. PCIe, etc.)
  • I/O (USB 3.2, etc.)

3)
  • CPU frequency
  • GPU frequency
  • RAM frequency
  • Amount of RAM
  • Amount of internal flash storage

4) I think that's hard to say with certainty since it varies from company to company. There's one example of Nvidia taking around 6 months from tape out to product release (e.g. tape out of GA102 in March 2020, with the RTX 3090 and RTX 3080 released in September 2020).
 
It basically means the physical layout of a chip's design is done.



Of the chip? None of them can be changed after. In Drake's case that means
  • The GPU
  • The CPU
  • The memory controller
    • Which means the type and speed of memory
  • The I/O controllers
    • The kind and speed of built in storage
    • The number and type of USB ports
    • The kind and speed of the DisplayPort adapter
    • The number and kinds of things that can talk to the chip overall.


Of the chip, again, nothing. But you probably mean of the console itself.

Of your list, just the amount of RAM can change. But think about the differences between a Pixel C tablet, every model of Switch, and an Nvidia Shield. They all use the same chip, but that is a huge variety of devices.



However long you want. The tape-out is a final blueprint, but you don't have to go to manufacturing immediately. And the gap between the SoC being ready and the device being ready varies heavily between hardware.
Clock speeds can be changed, if I'm not mistaken. Not wildly, since there will be a range for suitable yields, but they can still be changed a bit.
 
I've got the ultimate data for DLSS estimation.
The One document to rule them all.

Can't believe it took me this long to find this even though looking up github repos should have been one of the first thing to do.
Massive thanks to u/oginer on Reddit for directing me to that document.
Gotta go to sleep, but tomorrow I'm fixing the calculator.

At a quick glance it would seem the current calculator is not that far off (at least for 4K, my assumption that 1440p was bogus turns out to be true), but for 4K expect the fixed version to be more optimist (probably, I'd have to look into it more to confirm that).

Thanks for sharing this. I had a quick skim through, and one interesting thing I wasn't aware of is that DLSS has both HDR and LDR modes, with the former running at 16-bit precision and the latter at 8-bit. I'm assuming that almost every PC game is using the 16-bit HDR version (particularly because DLSS is designed to operate before tone-mapping), and the performance numbers provided in the document are for the 16-bit HDR mode. However, it is interesting that LDR is an option, and in theory it would offer double the performance of the HDR version we're familiar with.
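Back-of-the-envelope, assuming plain RGBA layouts (the actual internal formats are Nvidia's implementation detail, so treat this as an illustration only), the LDR mode would halve the data moved per frame:

```python
# Color buffer sizes at 4K for a 16-bit-per-channel (FP16 RGBA) vs an
# 8-bit-per-channel (UNORM8 RGBA) buffer. The formats are assumptions
# for illustration, not what DLSS actually allocates internally.

W, H = 3840, 2160
FP16_RGBA_BYTES = 4 * 2    # four channels, two bytes each
UNORM8_RGBA_BYTES = 4 * 1  # four channels, one byte each

hdr_mib = W * H * FP16_RGBA_BYTES / 2**20
ldr_mib = W * H * UNORM8_RGBA_BYTES / 2**20
print(f"HDR: {hdr_mib:.1f} MiB, LDR: {ldr_mib:.1f} MiB per buffer")
# exactly half the bytes to read and write per frame in LDR mode
```

Halving the bytes per pixel is presumably where the theoretical 2x comes from, since tensor throughput also doubles at lower precision on most of these parts.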
 
As DLSS 2 has evolved, it has leaned more and more on the OFA to provide motion vectors instead of the engine - this is why ghosting especially around particle effects has been reduced in later versions. I suspect this is an extension of the same tech.
I like this theory a lot. So it might be possible to use the OFA to generate motion vectors even if a game itself does not provide them? Sort of a hybrid spatial-temporal upscaling? This could be a good use case for Drake’s OFA, since frame generation might not be practical on Switch 2.
 
It basically means the physical layout of a chip's design is done.



Of the chip? None of them can be changed after. In Drake's case that means
  • The GPU
  • The CPU
  • The memory controller
    • Which means the type and speed of memory
  • The I/O controllers
    • The kind and speed of built in storage
    • The number and type of USB ports
    • The kind and speed of the DisplayPort adapter
    • The number and kinds of things that can talk to the chip overall.


Of the chip, again, nothing. But you probably mean of the console itself.

Of your list, just the amount of RAM can change. But think about the differences between a Pixel C tablet, every model of Switch, and an Nvidia Shield. They all use the same chip, but that is a huge variety of devices.



However long you want. The tape-out is a final blueprint, but you don't have to go to manufacturing immediately. And the gap between the SoC being ready and the device being ready varies heavily between hardware.

I'll try to answer despite not being the most technically savvy person here.

1) Tape out means that the design of the integrated circuit (IC) is finalised before being sent to the foundry company (e.g. TSMC, Samsung, etc.) for fabrication.

2)
  • CPU
  • Amount of CPU cores
  • Amount of CPU cache (e.g. L1, L2, L3)
  • GPU
  • Amount of arithmetic logic units (ALUs) on the GPU (e.g. CUDA cores, CUs, etc.)
  • Amount of GPU cache (e.g. L1, L2, L3)
  • Amount of system cache
  • Memory controller for RAM
  • Type of RAM
  • Memory bus width for RAM
  • Type of external and/or internal flash storage (e.g. eMMC, UFS, SD, etc.)
  • Expansion bus standard (e.g. PCIe, etc.)
  • I/O (USB 3.2, etc.)

3)
  • CPU frequency
  • GPU frequency
  • RAM frequency
  • Amount of RAM
  • Amount of internal flash storage

4) I think that's hard to say with certainty since it varies from company to company. There's one example of Nvidia taking around 6 months from tape out to product release (e.g. tape out of GA102 in March 2020, with the RTX 3090 and RTX 3080 released in September 2020).
Thank you both! Does the Nvidia leak mean that Drake/T239 has been fully taped out, then? Long story short, I'm trying to figure out what could possibly change (or become more affordable) if Drake is released in '23 vs. '24 or '25.

So it sounds like LPDDR5 vs LPDDR5X couldn't change, but the amount of RAM could. Nor could there be a die shrink, if I'm reading everything correctly.

If I'm hopping off the 2023 bandwagon, I'm hoping that a 2024 release at least has 5nm and 16GB of LPDDR5X. (Not that I assume Nintendo delayed Drake or anything, just that it was ALWAYS planned that way if released during or after 2024.)
 
I don't know about that. Are PS/Xbox toolsets made entirely in house? Or do they use some AMD made tools?
seems like it? i just know the Eurogamer article talking about how tools for the Wii U were a complete shitshow and late.
 
@oldpuck as it pertains to DLSS, I think a lot of the assumption is that Switch 2 games will render at low resolutions, but when you look at a lot of Nintendo's first-party games on Switch rendering at 900p-1080p natively, I believe Nintendo will be able to natively hit 1440p with games like Mario Kart 9 and Splatoon 4 and then use DLSS to scale to 4K. I always expected the DLSS implementation to be lightweight compared to the higher-end PC implementations.

We also need to evaluate the performance cost of Nintendo's own inhouse image reconstruction that was implemented in XC3 compared to DLSS. It's becoming pretty evident that the frame time slice for DLSS on Drake won't be negligible, and it's fair to assume it won't always be in play.
 
I don't know if the Cortex-A78C has any power advantage(s) over the Cortex-A78AE.
But I think the Cortex-A78C does offer a latency advantage over the Cortex-A78AE since the Cortex-A78C can have 8 CPU cores per cluster whereas the Cortex-A78AE needs two clusters to have 8 CPU cores in total since the Cortex-A78AE can only have 4 CPU cores per cluster. Therefore, there's less hardware components needed to communicate with if the Cortex-A78C is used instead of the Cortex-A78AE.
Oh, I forgot actually... supposedly some A78C setups came with more cache vs. the regular A78/A78AE. Guess we'll see...
 
I'll try to answer despite not being the most technically savvy person here.
This is a better answer than mine
I like this theory a lot. So it might be possible to use the OFA to generate motion vectors even if a game itself does not provide them?
Yeah, I'm 90% certain that's what's already happening. Things like particle effects don't have motion vectors in the engine, and what would happen is DLSS 2 would see the image of a particle, but not have motion vectors. That would make it assume the particle wasn't moving, and then preserve a "ghost" of the particle on the subsequent frames

DLSS 2.3 improved this issue, and they're almost definitely doing it by inferring motion vectors. Whether they use the OFA to do it, I don't know. But that is what the OFA does, take a series of 2D frames, and infer the 3D motion vectors. And I believe the OFA is available on all RTX cards that have tensor cores.

Sort of a hybrid spatial-temporal upscaling? This could be a good use case for Drake’s OFA, since frame generation might not be practical on Switch 2.
Yeah, there are some indications that some performance optimizations are being made for DLSS on Tegra. It'll be interesting to see what the performance is like, and how much Nvidia can squeeze out of the more esoteric features they've been sticking in their cards
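For intuition, the kind of computation this involves (finding where a block of pixels came from in the previous frame) can be sketched as toy block matching; the real OFA is dedicated hardware doing something far more sophisticated at full frame rates, so this is purely a conceptual sketch:

```python
# Toy block-matching motion estimator: the general idea behind inferring
# a motion vector from two consecutive frames. Frames are plain 2D lists
# of brightness values; block size and search radius are kept tiny.

def estimate_motion(prev, curr, bx, by, size=2, radius=2):
    """Find the (dx, dy) shift with the lowest SAD (sum of absolute
    differences) matching curr's block at (bx, by) against prev."""
    h, w = len(curr), len(curr[0])
    best, best_vec = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sad = 0
            for y in range(size):
                for x in range(size):
                    py, px = by + y + dy, bx + x + dx
                    if not (0 <= py < h and 0 <= px < w):
                        sad = None  # candidate block falls off the frame
                        break
                    sad += abs(curr[by + y][bx + x] - prev[py][px])
                if sad is None:
                    break
            if sad is not None and (best is None or sad < best):
                best, best_vec = sad, (dx, dy)
    return best_vec

# A bright 2x2 "particle" sits at x=2 in prev and at x=4 in curr:
prev = [[0] * 8 for _ in range(8)]
curr = [[0] * 8 for _ in range(8)]
for y in range(2):
    for x in range(2):
        prev[2 + y][2 + x] = 255
        curr[2 + y][4 + x] = 255

print(estimate_motion(prev, curr, 4, 2))  # (-2, 0): the block came from 2px left
```

No engine-provided vectors anywhere in that: the motion is recovered purely from the two images, which is exactly what would help particles that engines don't tag.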
 
Yeah, I'm 90% certain that's what's already happening. Things like particle effects don't have motion vectors in the engine, and what would happen is DLSS 2 would see the image of a particle, but not have motion vectors. That would make it assume the particle wasn't moving, and then preserve a "ghost" of the particle on the subsequent frames

DLSS 2.3 improved this issue, and they're almost definitely doing it by inferring motion vectors. Whether they use the OFA to do it, I don't know. But that is what the OFA does, take a series of 2D frames, and infer the 3D motion vectors. And I believe the OFA is available on all RTX cards that have tensor cores.
I don't think they are doing that right now, although they could in the future. Look at section 3.14 in the programming guide linked above. It seems like the current solution is that you can pass a binary mask to DLSS to bias the image for the current frame on problematic areas like particle effects.
3.14 Biasing the Current Frame
NVIDIA is continuing to research methods to improve feature tracking in DLSS. From time to time, the DLSS feature tracking quality can be reduced and DLSS may then “erase” a particular feature or can produce a “ghost” or trail behind a moving feature. This can occur on:

1. Small particles (like snow or dust particles).
2. Objects that display an animated/scrolling texture.
3. Very thin objects (such as power lines).
4. Objects with missing motion vectors (many particle effects).
5. Disoccluding objects with motion vectors that have very large values (such as a road surface disoccluding from under a fast-moving car).

If a problematic asset is discovered during testing, one option is to instruct DLSS to bias the incoming color buffer over the colors from previous frames. To do so, create a 2D binary mask that flags the nonoccluded pixels that represent the problematic asset and send it to the DLSS evaluate call as the BiasCurrentColor parameter. The DLSS model then uses an alternate technique for pixels flagged by the mask. The mask itself should be the same resolution as the color buffer input, use R8G8B8A8_UNORM /R16_FLOAT/R8_UNORM format (or any format with an R component) and have a value of “1.0” for masked pixels with all other pixels set to “0.0".
NOTE: Only use this mask on problematic assets after the DLSS integration has been completed and confirmed as fully functional. There may be increased aliasing within the mask borders.

In section 9.3, which details some parameters that might be added to DLSS in the future, there's also the option to pass a mask that is specifically identifying particles, although currently that doesn't do anything.
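As a toy illustration of what that mask amounts to: flag the problematic asset's pixels, leave everything else zero. The function name and the CPU-side list-of-lists representation are made up for the example; a real integration builds this on the GPU at the color buffer's resolution and hands it to the DLSS evaluate call as the BiasCurrentColor parameter.

```python
# Hypothetical sketch of the binary bias mask described in section 3.14:
# 1.0 over the problematic asset's screen pixels, 0.0 everywhere else.

def build_bias_mask(width, height, asset_rects):
    """asset_rects: list of (x, y, w, h) screen rectangles covering the
    problematic asset (e.g. a particle emitter's bounding box)."""
    mask = [[0.0] * width for _ in range(height)]
    for (x, y, w, h) in asset_rects:
        for row in range(y, min(y + h, height)):
            for col in range(x, min(x + w, width)):
                mask[row][col] = 1.0
    return mask

# Flag a 4x2 snow-particle region on a 16x8 "screen":
mask = build_bias_mask(16, 8, [(6, 3, 4, 2)])
print(sum(v for row in mask for v in row))  # 8.0 flagged pixels
```

Per the note in the guide, you'd only reach for this after confirming the base integration works, since the masked region trades ghosting for extra aliasing.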
 
They never would though. They are a business, not fans.
They removed the original blog post, but this should indicate otherwise...


As for discounts, weren't the Tegra X1s in the original Switch sold to Nintendo at a discount?
 
we really knew too much too soon...

better than wust I suppose

"a rushed switch pro has a bad tdp forever. a delayed switch pro is the switch 2." - miyamoto, probably
 
I've got the ultimate data for DLSS estimation.
The One document to rule them all.

Can't believe it took me this long to find this, even though looking up GitHub repos should have been one of the first things to do.
Massive thanks to u/oginer on Reddit for directing me to that document.
Gotta go to sleep, but tomorrow I'm fixing the calculator.

At a quick glance it would seem the current calculator is not that far off (at least for 4K; my assumption that the 1440p numbers were bogus turns out to be true), but for 4K expect the fixed version to be more optimistic (probably; I'd have to look into it more to confirm that).

Great find and post; this was the updated information I was wondering about regarding the newer DLSS cost.
The 2060S in particular costs a little less than what was pointed out in the 2.0 chart from the 2020 GTC talk.

Just speculating: but even if Switch 2.0 ends up at half the performance of a 2060S (while docked), there's definitely greater hope of this device being capable of using DLSS for a 4K/30fps quality mode and 1440p/60fps or 1080p/60fps for a performance mode.

Edit: Also, theoretically, seeing as how DLSS can have a dynamic input resolution but needs a fixed output resolution, how would this work on the fly in-game (by docking and un-docking) without going into a UI menu? Is this something that can be programmed for on-the-fly switching, to take advantage of the hardware when it does or doesn't have extra GPU performance? Would games that utilize DLSS offer a prompt of options when switching from handheld to docked mode or back again?
 
@oldpuck as it pertains to DLSS, I think a lot of the assumption is that Switch 2 games will render at low resolutions, but when you look at a lot of Nintendo's first-party games on Switch rendering at 900p-1080p natively, I believe Nintendo will be able to natively hit 1440p with games like Mario Kart 9 and Splatoon 4 and then use DLSS to scale to 4K. I always expected the DLSS implementation to be lightweight compared to the higher-end PC implementations.

We also need to evaluate the performance cost of Nintendo's own inhouse image reconstruction that was implemented in XC3 compared to DLSS. It's becoming pretty evident that the frame time slice for DLSS on Drake won't be negligible, and it's fair to assume it won't always be in play.
I doubt that.

In earlier comments, I claimed DLSS cost is not purely related to output resolution, using the DF numbers as the basis of that claim.
After more research, it turns out this assumption is wrong (which also means DF's methodology for measuring DLSS cost just doesn't work).
What this means is that DLSS Quality 4K will perform as badly as DLSS Performance 4K (for just the DLSS cost).

I think DLSS on Switch will mainly be performance mode, simply because DLSS is costly enough that you might as well render the game at native 4K instead of Quality 4K DLSS; the final performance will probably not improve.
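To illustrate the trade-off with made-up numbers (nothing below is a measured figure): raster cost scales with the input resolution, but the DLSS cost is pinned to the output resolution, so Quality mode pays the same DLSS bill as Performance mode while saving less raster time. With a DLSS cost that's large relative to the frame, as might be the case on a handheld-class chip, Quality mode barely beats native:

```python
# Hypothetical frame-time arithmetic. The raster time at native 4K and
# the fixed DLSS cost at 4K output are both invented for illustration.

def frame_ms(raster_ms_at_native, input_scale, dlss_cost_ms):
    """Raster cost scales ~linearly with pixel count rendered
    (input_scale is the fraction of native pixels); DLSS cost does not."""
    return raster_ms_at_native * input_scale + dlss_cost_ms

native_4k = 30.0   # made-up raster time at native 4K, in ms
dlss_cost = 16.0   # made-up fixed DLSS cost at 4K output, in ms

quality = frame_ms(native_4k, (1440 / 2160) ** 2, dlss_cost)      # 1440p input
performance = frame_ms(native_4k, (1080 / 2160) ** 2, dlss_cost)  # 1080p input
print(f"native {native_4k:.1f} ms, quality {quality:.1f} ms, "
      f"performance {performance:.1f} ms")
```

With these numbers Quality 4K lands within a millisecond of native 4K, while Performance 4K still wins comfortably, which is the shape of the argument above.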
 
Edit: Also, theoretically, seeing as how DLSS can have a dynamic input resolution but needs a fixed output resolution, how would this work on the fly in-game (by docking and un-docking) without going into a UI menu? Is this something that can be programmed for on-the-fly switching, to take advantage of the hardware when it does or doesn't have extra GPU performance? Would games that utilize DLSS offer a prompt of options when switching from handheld to docked mode or back again?
I have zero knowledge of how games work, so take this with a big pinch of salt, but I don't think it's an issue. A UI menu simply sends a message to the game: "hey, do stuff". The way this would work is that instead of the UI menu sending that message, it'd be sent by the console.
Something like this:

Code:
On system interruption "hey, I've just been undocked" {
    set output_resolution to whatever
    set input_resolution to whatever
}

The Switch can already communicate with games (this is how they know to change resolution), so there's no reason changing the DLSS output resolution would be any harder than switching the output from 1080p to 720p like every Switch 1 game does.
 
Great find and post; this was the updated information I was wondering about regarding the newer DLSS cost.
The 2060S in particular costs a little less than what was pointed out in the 2.0 chart from the 2020 GTC talk.

Just speculating: but even if Switch 2.0 ends up at half the performance of a 2060S (while docked), there's definitely greater hope of this device being capable of using DLSS for a 4K/30fps quality mode and 1440p/60fps or 1080p/60fps for a performance mode.

Edit: Also, theoretically, seeing as how DLSS can have a dynamic input resolution but needs a fixed output resolution, how would this work on the fly in-game (by docking and un-docking) without going into a UI menu? Is this something that can be programmed for on-the-fly switching, to take advantage of the hardware when it does or doesn't have extra GPU performance? Would games that utilize DLSS offer a prompt of options when switching from handheld to docked mode or back again?
Page 7 of the development documentation which @Paul_Subsonic shared indicates that DLSS has to be reinitialized if the output resolution changes. My best guess is that NV ships separate neural nets, one per output resolution. This could complicate porting DLSS-ready games to Drake, since both the game engine and the API (probably NVN2) must cooperate so that both DLSS instances (docked and handheld) are initialized and can be switched between at runtime.
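A hedged sketch of that idea: one pre-initialized instance per output resolution, created up front because changing the output requires re-initialization, with the dock event just selecting between them. DlssInstance, the event hook, and the resolutions are all made-up stand-ins, not a real NVN2 or DLSS API:

```python
# Sketch of the "two pre-initialized DLSS instances" idea. Everything
# here is a hypothetical stand-in for whatever the real API looks like.

class DlssInstance:
    def __init__(self, out_w, out_h):
        self.out = (out_w, out_h)  # output resolution is fixed per instance

    def evaluate(self, in_w, in_h):
        # Input resolution may vary per frame (dynamic res); output may not.
        return f"upscale {in_w}x{in_h} -> {self.out[0]}x{self.out[1]}"

class Upscaler:
    def __init__(self):
        # Pay the expensive initialization once, for both modes, at boot.
        self.instances = {
            "docked": DlssInstance(3840, 2160),
            "handheld": DlssInstance(1280, 720),
        }
        self.mode = "docked"

    def on_dock_event(self, docked):
        # Switching modes is just picking the other pre-built instance.
        self.mode = "docked" if docked else "handheld"

    def render_frame(self, in_w, in_h):
        return self.instances[self.mode].evaluate(in_w, in_h)

u = Upscaler()
print(u.render_frame(1920, 1080))  # upscale 1920x1080 -> 3840x2160
u.on_dock_event(False)
print(u.render_frame(640, 360))    # upscale 640x360 -> 1280x720
```

The design cost is holding two sets of DLSS state in memory at once, which is presumably why engine and API would need to cooperate on it.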
 
I still think a TV model could make sense if they got it down really cheap
Absolutely, I think the $100-120 range. I mean, they had $80 consoles on shelves in 2017; after inflation that's not over $120. It's also apparently the price Microsoft was aiming for with Project Keystone, the Xbox game-streaming box.

I mean, let's put this in simple terms: the Nvidia Shield TV, with the same processor and a remote, is well under $100. I have no doubt Nintendo could price it aggressively.

That said, if it is to follow previous patterns for their home consoles, the SNES Jr. and Wii Mini both came out AFTER the launch of their successors. I would expect similar here, with a Switch Mini TV clearing out the end of Switch 1 production after they've moved on to the 2 as the primary device.
 
Seems like a Tesco-only thing; maybe for some reason they plan to stop selling Nintendo products. No way in hell the price for the OLED drops to £152 everywhere.
I read they're also massively discounting PS5 games; they might just be scaling back video game sales in general.
 
Absolutely, I think the $100-120 range. I mean, they had $80 consoles on shelves in 2017; after inflation that's not over $120. It's also apparently the price Microsoft was aiming for with Project Lockhart, the Xbox game-streaming box.

I mean, let's put this in simple terms: the Nvidia Shield TV, with the same processor and a remote, is well under $100. I have no doubt Nintendo could price it aggressively.

That said, if it is to follow previous patterns for their home consoles, the SNES Jr. and Wii Mini both came out AFTER the launch of their successors. I would expect similar here, with a Switch Mini TV clearing out the end of Switch 1 production after they've moved on to the 2 as the primary device.
Wasn't Lockhart the code name for the Series S?
 
Absolutely, I think the $100-120 range. I mean, they had $80 consoles on shelves in 2017; after inflation that's not over $120. It's also apparently the price Microsoft was aiming for with Project Keystone, the Xbox game-streaming box.

I mean, let's put this in simple terms: the Nvidia Shield TV, with the same processor and a remote, is well under $100. I have no doubt Nintendo could price it aggressively.

That said, if it is to follow previous patterns for their home consoles, the SNES Jr. and Wii Mini both came out AFTER the launch of their successors. I would expect similar here, with a Switch Mini TV clearing out the end of Switch 1 production after they've moved on to the 2 as the primary device.
Microsoft canned Project Keystone because they weren't able to get that low iirc
 
I still think a TV model could make sense if they got it down really cheap
Since we're finally at the point where they can supply significantly more than the existing demand, there's no better time for a cheap TV model to squeeze out some extra sales.

I'd go further and use the NES Mini form factor this year and the SNES one in 2024, with a themed controller (maybe a Wii Classic Controller-like design instead of a Pro Controller, for further savings and nostalgia?).
 
Please read this staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.