
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

"lazy devs" will never not be a bad and untruthful rethoric for the majority of developers out there working with constraints you're not considering.
Yeah, I know I shouldn't blame devs; it was just easier to type on mobile than spelling out that some ports don't actually take the whole game and the two different target systems into consideration, and that making a proper port that works properly means tweaking everything in the game under constraints such as money, time, effort, and development schedule, most of which you as a dev won't have a say in.

🤷‍♀️
 
yup, it's the unfortunate state of the industry for a lot of studios, with people lower on the totem pole being put through the grinder
 
Hello, I have been a lurker for a few months now.
I would like to ask a question: is it possible to slow down an M.2 SSD from 5.4 GB/s to 1 GB/s (the minimum requirement for UE5) or 2.1 GB/s (UFS 3.1), in order to reduce power consumption and heat to an acceptable level for use as memory expansion in the Switch 2?
Theoretically yes.

But I don't know if reducing the sequential read speeds for an M.2 2230 SSD necessarily brings down the power consumption to acceptable levels, since the SK Hynix BC501A, with a max sequential read speed of 1.5 GB/s, has an active power consumption of 2.5 W.

In comparison, Micron says its UFS 3.1 modules have an active power consumption of 960 mW (0.96 W).
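A rough back-of-the-envelope way to frame that comparison, using only the figures quoted above (peak spec-sheet numbers, so this is an illustration rather than a benchmark), is energy per gigabyte moved:

```python
# Energy spent moving one gigabyte at full speed, using the figures quoted above.
# Spec-sheet peak numbers, not measurements, so treat this as illustrative only.

def joules_per_gb(active_power_w: float, throughput_gb_s: float) -> float:
    """Active power divided by throughput gives energy per gigabyte (J/GB)."""
    return active_power_w / throughput_gb_s

ssd_m2 = joules_per_gb(active_power_w=2.5, throughput_gb_s=1.5)    # SK Hynix BC501A
ufs31  = joules_per_gb(active_power_w=0.96, throughput_gb_s=2.1)   # Micron UFS 3.1

print(f"M.2 SSD: {ssd_m2:.2f} J/GB")   # ~1.67 J/GB
print(f"UFS 3.1: {ufs31:.2f} J/GB")    # ~0.46 J/GB
```

Even at full speed the UFS part moves data for roughly a quarter of the energy, which is why simply capping the SSD's sequential speed may not be enough on its own.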
 
I mean that's not really lowering expectations imo. Honestly 1080p with really good AA is fine on a 4K screen, unless you're putting your nose on it.

Yep, I have played around with this even with 720p on PC. When you apply 4x MSAA to a game, even at 720p, it's a very clean image. It will still result in a softer image, especially for details in the distance, but I have often found I prefer a 720p image with 4x MSAA to a 1080p image with no AA. It does seem likely that DLSS 1440p is fast enough to make it viable for most games. I wouldn't even write off DLSS 4K, since that appears to be what they did with the Matrix demo. If that is an accurate representation of what Drake can do, and it was indeed outputting 4K, then 4K DLSS seems perfectly viable for 30fps games. Perhaps that will be how things play out: 4K DLSS will mostly be limited to 30fps games and 1440p DLSS will be used for 60fps. This will be a monumental jump from the typical image quality of Switch games.
 

It would be nice to see performance modes in that case. 1440p is plenty of pixels for me, and I'd rather choose between frames (which will usually win) and some RT effects.

I'm not sure I can remember a game on a Nintendo machine that lets you choose, though. Not since Perfect Dark on the N64.
 
Yeah, the last part is what I'm worried about: lazy devs making straight ports and thinking DLSS will just fix everything!
it's important to understand the difference between resolution-dependent effects and "bad optimization". if an effect is designed to scale with the number of pixels, then there's no avoiding that full-screen effects are expensive. that's not bad optimization; if anything, that was the norm before temporal upscaling was a thing
 
no one actually thinks that DLSS will be a magic fix-it-all button, right? of course some effort needs to be made beyond it. I hope devs don't solely depend on DLSS and we end up with badly running games/ports on Switch
 
It would be nice to see performance modes in that case. 1440p is plenty of pixels for me, and I'd rather choose between frames (which will usually win) and some RT effects.

I'm not sure I can remember a game on a Nintendo machine that lets you choose, though. Not since Perfect Dark on the N64.

Some of the estimations here have suggested that DLSS will take around 6-8 ms to render up to 4K, and it doesn't matter what the internal rendering resolution is; it will take that long regardless. I believe DLSS scaling to 1440p was estimated to take about 2 ms, which would be much more practical for 60fps games where they only have about 16.7 ms to render a frame. There will certainly be some 4K 60fps games, but will those games even bother with DLSS? For example, take a game like Super Mario Bros. Wonder that renders at 1080p on Switch. Drake is likely powerful enough to render that game at a native 4K without using DLSS, and seeing as how DLSS would take 6-8 ms to upscale it anyway, would the hardware be just as fast doing it natively as it would with DLSS? I do think Nintendo is holding back a few games for cross-gen releases that will be marketed as rendering at 4K on SNG. I believe this is why both Zelda WW HD and Zelda TP HD have been held back. They will easily render at 4K on SNG with or without DLSS. Perhaps the missing-in-action port of F-Zero GX will be another example of a game that makes sense to be cross-gen and deliver on 4K graphics for SNG.
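To put those estimates in frame-budget terms, here is a minimal sketch; the 6-8 ms and ~2 ms figures are the thread's estimates, not confirmed numbers:

```python
# Frame-budget sketch using the DLSS cost estimates discussed in this thread
# (~6-8 ms to reconstruct 4K, ~2 ms for 1440p). These are not official figures.

FRAME_BUDGET_MS = {30: 1000 / 30, 60: 1000 / 60}  # 33.3 ms and 16.7 ms

def time_left_ms(fps: int, dlss_cost_ms: float) -> float:
    """Frame time left for the game itself after paying the DLSS cost."""
    return FRAME_BUDGET_MS[fps] - dlss_cost_ms

for fps in (30, 60):
    for label, cost in (("4K DLSS (high estimate)", 8.0), ("1440p DLSS", 2.0)):
        print(f"{fps} fps, {label}: {time_left_ms(fps, cost):.1f} ms left for rendering")
```

At 60fps the 4K estimate leaves only around 8-9 ms for everything else, which is why 4K DLSS reads as mostly a 30fps option in the post above.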
 
Wait, are you saying DLSS would take as long to scale from 1399p to 1400p as from 720p to 1400p, to take an extreme example?
 
Nvidia says the output resolution is the determinant of the render time; they haven't talked much about the size of the jump being made
 
Some of the estimations here have suggested that DLSS will take around 6-8 ms to render up to 4K, and it doesn't matter what the internal rendering resolution is; it will take that long regardless. I believe DLSS scaling to 1440p was estimated to take about 2 ms, which would be much more practical for 60fps games where they only have about 16.7 ms to render a frame.
Since it mostly scales with output resolution, you're exaggerating the difference between 1440p and 4K timing a bit. We could probably expect the former to take 4/9 as long as the latter. But yeah, it holds that as a percentage of frame time, 1440@60 and 4K@30 should have similar costs.
Wait, are you saying DLSS would take as long to scale from 1399p to 1400p as from 720p to 1400p, to take an extreme example?
Basically. Even 1400->1400 should be the same, which means its job is just anti-aliasing, what they usually call DLAA on PC: Deep Learning Anti-Aliasing vs. Deep Learning Super Sampling.
 
Wait, are you saying DLSS would take as long to scale from 1399p to 1400p as from 720p to 1400p, to take an extreme example?

It doesn't make sense to me either, and that would be an extreme example, but as for the DLSS frame slice for taking 1080p versus 540p up to 1440p, the frame slice would be the same, from what I have read here; someone correct me if I am wrong on this. This is basically how you get the different quality presets: the more pixels rendered, the higher the quality of the final resolve. Games that render at 1080p and use DLSS to scale to 4K will essentially be getting DLSS Performance Mode results (see the quick preset breakdown after this post).

Since it mostly scales with output resolution, you're exaggerating the difference between 1440p and 4K timing a bit. We could probably expect the former to take 4/9 as long as the latter. But yeah, it holds that as a percentage of frame time, 1440@60 and 4K@30 should have similar costs.

I could be misremembering, but I remembered being startled by the difference in frame time for 1440p versus 4K. 4K took more than twice as long. Again, these were estimates, so time will tell just how accurate they are.
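For reference, the usual DLSS presets pick the input resolution as a fixed fraction of the output per axis (roughly 2/3 for Quality, 0.58 for Balanced, 1/2 for Performance, 1/3 for Ultra Performance), which is why 1080p into 4K lines up with Performance mode. A quick sketch of the implied inputs for a 4K output:

```python
# Input resolutions implied by the standard DLSS presets for a 4K output.
# Per-axis scale factors are the commonly published values; the takeaway is
# that 1080p -> 2160p is exactly the 0.5x "Performance" ratio.

PRESETS = {
    "Quality":           2 / 3,
    "Balanced":          0.58,
    "Performance":       0.5,
    "Ultra Performance": 1 / 3,
}

OUT_W, OUT_H = 3840, 2160  # 4K output

for name, scale in PRESETS.items():
    print(f"{name}: {round(OUT_W * scale)}x{round(OUT_H * scale)} input")
# Performance prints 1920x1080, i.e. a native-1080p game upscaled to 4K.
```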
 
Yeah, but the wording/intent of the legislation is so your average consumer could do it as easily as possible.

Removing and reinstalling a glass backplate is not easy for your average consumer.
The goal is to make it easier, sure, but the way the legislation is phrased outright states that adhesive and heat guns are OK, as well as having carve-outs for waterproof devices.

Based on a direct reading of the current EU rules Nintendo is fine, both for the Switch and Joy-Cons; heck, the current iPhone is fine (save for perhaps its pentalobe screws).

As a result, additional interpretation/clarification is needed if the EU intends to enforce the rules more narrowly than they are currently written. I suspect we won't see such revisions, as some amount of adhesive is necessary to help with water/dust proofing. Consumer-friendly battery replacement is a laudable goal, but one that must be tempered against the very useful water/dust resistance enjoyed by most phones.
 
It doesn't make sense to me either, and that would be an extreme example, but as for the DLSS frame slice for taking 1080p versus 540p up to 1440p, the frame slice would be the same, from what I have read here; someone correct me if I am wrong on this.
Not wrong. The amount of change doesn't really matter, because it's not like DLSS is only generating the difference between what's been natively rendered and the desired output. You tell it to create a 2560x1440 image and it will do so from scratch using whatever data is at hand, with a certain time cost per pixel. It will just come up with better results if you've given it more to work with, with diminishing returns as your input resolution gets closer to the output resolution.
I could be misremembering, but I remembered being startled by the difference in frame time for 1440p versus 4K. 4K took more than twice as long. Again, these were estimates, so time will tell just how accurate they are.
I'm not sure which estimates you're referring to (as this has come up... a time or two or twenty over the years), but 4K is 225% the pixels of 1440p, so a little more than twice as long sounds about right.
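The pixel counts behind the "4/9" and "225%" figures, for anyone who wants to check the arithmetic:

```python
# Pixel counts behind the 4/9 and 225% figures above.
uhd = 3840 * 2160   # 8,294,400 pixels (4K)
qhd = 2560 * 1440   # 3,686,400 pixels (1440p)

print(qhd / uhd)    # 0.444... -> 1440p is 4/9 of 4K
print(uhd / qhd)    # 2.25     -> 4K is 225% of 1440p
```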
 
AI can do a lot of CPU magic tricks, like physics for example. As long as an algorithm can be replaced by a trained AI and does not need 100% accuracy, then it can be optimized.
I want to see how many developers can even afford to do that, especially without Nvidia's direct support like Cyberpunk and Alan Wake 2 had. Its tensor cores are limited and DLSS/RR will already be pegging them quite hard; I'm not holding my breath there.
 
Some of the estimations here have suggested that DLSS will take around 6-8ms to render up to 4K,
I have done some personal benchmarks using GeForce Now, and I think it's higher than that, in fact. Huge grain of salt, but I think 10ms+ is a more accurate baseline.

Wait, are you saying DLSS would take as long to scale from 1399p to 1400p as from 720p to 1400p, to take an extreme example?
DLSS uses AI to answer the question "what color should this pixel be?" for every pixel* in the final image. The AI runs in fixed time. DLSS upscaling a 4k image on an otherwise unloaded GPU will take the same amount of time, every time.

Having higher resolution inputs just improves the AI's ability to make good decisions, but the number of decisions is the same. That's why there is a line beyond which 4k60fps** is not "difficult" but "impossible". There are many tricks you can use to hide the DLSS cost - for example, by increasing latency and buffering a frame, upscaling one frame while you prepare and render the next one (sketched after this post). But once DLSS's frame time cost exceeds 16.6ms, you can't hit 60fps no matter what you do.

That's why I keep arguing that 1440p + a secondary spatial upscale will be common. I expect that 4k DLSS will be possible, but there will be plenty of cases where that extra frame time can be used to make a prettier image, if not a higher resolution one. Will 4k Ultra Performance (a 9x upscale) look better or worse than 1440p performance mode (a 4x upscale + bilinear interp) at 8 feet away? What about 1440p with RT shadows and reflections, vs 4k with those things off?

It's also why I don't expect to see DLSS in literally every title. There are plenty of titles built for last gen that use checkerboarding already. DLSS might look better, but if the existing, well tuned solution already runs at reasonable frame rates, then why not?


*I believe it actually runs once for every 2x2 pixel block, but the principle is the same
** With DLSS at least.
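A tiny model of the buffering trick mentioned above, with made-up render and DLSS times just to show the shape of the trade-off (one extra frame of latency in exchange for overlapping the two workloads):

```python
# Illustrative model of hiding DLSS cost behind a one-frame buffer.
# Serial: each frame pays render + upscale back to back.
# Pipelined: upscaling frame N overlaps rendering frame N+1, so throughput is
# set by the slower stage, at the cost of one extra frame of latency.

def serial_frame_time(render_ms: float, dlss_ms: float) -> float:
    return render_ms + dlss_ms

def pipelined_frame_time(render_ms: float, dlss_ms: float) -> float:
    return max(render_ms, dlss_ms)

render_ms, dlss_ms = 12.0, 8.0  # made-up numbers, purely for illustration

print(serial_frame_time(render_ms, dlss_ms))     # 20.0 ms -> misses the 16.7 ms 60fps budget
print(pipelined_frame_time(render_ms, dlss_ms))  # 12.0 ms -> 60fps possible, +1 frame of latency
# If dlss_ms alone exceeds 16.7 ms, even the pipelined case cannot reach 60fps.
```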
 
I have done some personal benchmarks using GeForce Now, and I think it's higher than that, in fact. Huge grain of salt, but I think 10ms+ is a more accurate baseline.

The rumor from Gamescom was that the demos were using DLSS to render at 4K. If Drake is taking 10ms for DLSS to 4K, does Drake have the overhead to render BotW in less than 6ms? The Matrix demo was always 30fps or less for the other consoles, so 10ms for that seems very doable.
 
it was the Zelda demo that was confirmed for 4K after DLSS. we don't know what resolution The Matrix was. I'm not expecting 4K for that demo since higher end consoles struggled even with upsampling
 
The rumor from Gamescom was that the demos were using DLSS to render at 4K. If Drake is taking 10ms for DLSS to 4K, does Drake have the overhead to render BotW in less than 6ms? The Matrix demo was always 30fps or less for the other consoles, so 10ms for that seems very doable.
Honestly taking that rumor (the 4k bit) with a grain of salt.

Edit: or maybe it was dlss to 1440p then fsr 1 the rest of the way. Or some trickery like that.

Or it is possible Drake has the overhead to render BOTW in 6 ms.
 
Most people can't really tell, I assume, whether the output is 4K either, so if the rumor is that it was 4K, I would have to assume someone told them it was as part of the demo.

I am in agreement this could be a best case scenario demo showing what Switch 2 can do and may not reflect games actually hitting that target in real world performance.
 
Some quick math regarding BOTW 4k 60 DLSS (and DLSS upscaling to 1080p in portable mode):

  • On switch it's 900p (648p portable mode) at 30FPS, which equates to a frametime of 33.33 ms
  • Assuming both the GPU and CPU of the Switch 2 have at least 6x the performance of the Switch before DLSS and assuming it's DLSS upscaling from the same base resolution as Switch 1, it would take, before DLSS, around 5.55 ms***
  • Add the 10ms from DLSS and you get 15.55 ms
  • Frametime for 60fps is 16.66 ms

***This is obviously the key part. If I'm misreading and the Switch 2 is only 6x as powerful after DLSS then this won't work.
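The same arithmetic written out as a script, with the thread's assumptions (6x speedup, ~10 ms for 4K DLSS) clearly marked as assumptions:

```python
# Back-of-envelope check of the BOTW 4K60 reasoning above.
# Assumptions taken from this thread, not confirmed specs:
#   - the Switch version renders a frame in 33.33 ms (30fps)
#   - Switch 2 is ~6x faster on the non-DLSS part of the frame
#   - DLSS to 4K costs ~10 ms

switch_frame_ms = 1000 / 30   # 33.33 ms
speedup = 6
dlss_cost_ms = 10.0

render_ms = switch_frame_ms / speedup   # ~5.56 ms
total_ms = render_ms + dlss_cost_ms     # ~15.56 ms
budget_60fps_ms = 1000 / 60             # 16.67 ms

print(f"render {render_ms:.2f} ms + DLSS {dlss_cost_ms:.2f} ms = {total_ms:.2f} ms "
      f"({'fits' if total_ms <= budget_60fps_ms else 'misses'} the {budget_60fps_ms:.2f} ms budget)")
```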
 
The rumor from Gamescom was that the demos were using DLSS to render at 4K. If Drake is taking 10ms for DLSS to 4K, does Drake have the overhead to render BotW in less than 6ms? The Matrix demo was always 30fps or less for the other consoles, so 10ms for that seems very doable.
this report of a 4K60 Breath of the Wild and a Matrix demo with visuals comparable to current-gen consoles made me very skeptical that all of this is true
 
With how much Switch 2 seems to be leaning on Nvidia-specific tech (RT cores for ray tracing, tensor cores for DLSS and Ray Reconstruction) I wonder how emulation-resistant this thing is going to be? Obviously, future freebooters will be able to just chuck a 5090 and a 16900K at the problem, but will it reach the point where mid and entry level PCs can do so before the end of the system's life?
 
The rumor from Gamescom was that the demos were using DLSS to render at 4K. If Drake is taking 10ms for DLSS to 4K, does Drake have the overhead to render BotW in less than 6ms?
Naive math? Yeah, if Drake is 6x faster across the board (CPU and GPU), then yeah, Switch 30fps becomes Drake 180fps - or 5.5ms per frame

But we also don't know what base resolution was used in that demo. Nintendo isn't stuck using the 900p that the game rendered at on Switch; they could bump it down to 720p. That's another ~20% more performance.

Also, it's possible to run tensor core load, shader load, and RT load simultaneously. For a one-frame latency cost, DLSS can be upscaling a buffered frame, while the game engine goes on to begin working on the next frame. This means anything less than 16.6ms is at least potentially manageable, though you wouldn't want to do it on a fighting game, or a rhythm game, where latency is a Really Big Deal.
 
Replying myself to add something here - it's optimizations like this that make PC benchmarking a limited proxy for what games will actually be able to do on the hardware. On PC, this sort of optimization is almost never worth it. On PC, you expect users to tune their config to get the framerate and resolution that looks good on their combination of hardware, and on even a lowly 2060, DLSS probably costs less than 3ms.

This sort of buffering would make the game feel worse across the board, only to give a tiny boost in performance that a user could never power through by throwing more and more hardware at it. But on a fixed platform like a console, you can make this sort of trade off - or it might not even be a tradeoff. "No buffering, 30fps" and "buffering, 60fps" are the same latency, and thus basically free.

Benchmarking Cyberpunk on a 3 TFLOP Ampere PC might actually give you pretty good data on how much DLSS, RT, and modern rasterization will cost on Drake. It probably won't tell you how well a port of Cyberpunk would perform, as devs have a different set of optimization choices in front of them for a console.
 
Oct 23 (Reuters) - Nvidia (NVDA.O) dominates the market for artificial intelligence computing chips. Now it is coming after Intel's longtime stronghold of personal computers.

Nvidia has quietly begun designing central processing units (CPUs) that would run Microsoft's (MSFT.O) Windows operating system and use technology from Arm Holdings (O9Ty.F), two people familiar with the matter told Reuters.

The AI chip giant's new pursuit is part of Microsoft's effort to help chip companies build Arm-based processors for Windows PCs. Microsoft's plans take aim at Apple, which has nearly doubled its market share in the three years since releasing its own Arm-based chips in-house for its Mac computers, according to preliminary third-quarter data from research firm IDC.

Advanced Micro Devices (AMD.O) also plans to make chips for PCs with Arm technology, according to two people familiar with the matter.

Nvidia and AMD could sell PC chips as soon as 2025, one of the people familiar with the matter said. Nvidia and AMD would join Qualcomm (QCOM.O), which has been making Arm-based chips for laptops since 2016. At an event on Tuesday that will be attended by Microsoft executives, including vice president of Windows and Devices Pavan Davuluri, Qualcomm plans to reveal more details about a flagship chip that a team of ex-Apple engineers designed, according to a person familiar with the matter.

Nvidia shares rose 4.4% and Intel shares dropped 2.9% after the Reuters report on Nvidia's plans. Arm's shares were up 3.4%.

Nvidia spokesperson Ken Brown, AMD spokesperson Brandi Marina, Arm spokesperson Kristen Ray and Microsoft spokesperson Pete Wootton all declined to comment.

Nvidia, AMD and Qualcomm's efforts could shake up a PC industry that Intel long dominated but which is under increasing pressure from Apple (AAPL.O). Apple's custom chips have given Mac computers better battery life and speedy performance that rivals chips that use more energy. Executives at Microsoft have observed how efficient Apple's Arm-based chips are, including with AI processing, and desire to attain similar performance, one of the sources said.

In 2016, Microsoft tapped Qualcomm to spearhead the effort for moving the Windows operating system to Arm's underlying processor architecture, which has long powered smartphones and their small batteries.

Microsoft granted Qualcomm an exclusivity arrangement to develop Windows-compatible chips until 2024, according to two sources familiar with the matter.

Microsoft has encouraged others to enter the market once that exclusivity deal expires, the two sources told Reuters.

"Microsoft learned from the 90s that they don't want to be dependent on Intel again, they don't want to be dependent on a single vendor," said Jay Goldberg, chief executive of D2D Advisory, a finance and strategy consulting firm. "If Arm really took off in PC (chips), they were never going to let Qualcomm be the sole supplier."

Microsoft has been encouraging the involved chipmakers to build advanced AI features into the CPUs they are designing. The company envisions AI-enhanced software such as its Copilot to become an increasingly important part of using Windows. To make that a reality, forthcoming chips from Nvidia, AMD and others will need to devote the on-chip resources to do so.

There is no guarantee of success if Microsoft and the chip firms proceed with the plans. Software developers have spent decades and billions of dollars writing code for Windows that runs on what is known as the x86 computing architecture, which is owned by Intel but also licensed to AMD. Computer code built for x86 chips will not automatically run on Arm-based designs, and the transition could pose challenges.

Intel has also been packing AI features into its chips and recently showed a laptop running features similar to ChatGPT directly on the device.

Intel spokesperson Will Moss did not immediately respond to a request for comment. AMD's entry into the Arm-based PC market was earlier reported by chip-focused publication SemiAccurate.
 
Also, it's possible to run tensor core load, shader load, and RT load simultaneously. For a one-frame latency cost, DLSS can be upscaling a buffered frame, while the game engine goes on to begin working on the next frame. This means anything less than 16.6ms is at least potentially manageable, though you wouldn't want to do it on a fighting game, or a rhythm game, where latency is a Really Big Deal.

This would essentially result in the same latency as games that use triple-buffered vsync, correct? Zelda BotW and TotK are considered to be pretty responsive for 30fps because of the double-buffered vsync. Assuming they are using the additional buffer, the jump from 30fps to 60fps in BotW would essentially have the same latency as it does at 30fps with double-buffered vsync. Controller response wouldn't actually be any faster, but the game would still look much smoother on the TV, so I think the perception to the player would still be a reduction in latency.
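A quick way to see why the latency comes out about the same (counting frame-time contributions only; real input latency has more stages than this):

```python
# Frame-time contribution to latency in the two cases described above.

fps30_no_extra_buffer = (1000 / 30) * 1   # one 30fps frame before display
fps60_one_frame_buffer = (1000 / 60) * 2  # two 60fps frames: render + buffered upscale

print(fps30_no_extra_buffer)   # ~33.3 ms
print(fps60_one_frame_buffer)  # ~33.3 ms
```

Same wait in milliseconds, but twice as many displayed frames in between, which matches the "feels smoother even if the controller isn't faster" point.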
 
With how much Switch 2 seems to be leaning on Nvidia-specific tech (RT cores for ray tracing, tensor cores for DLSS and Ray Reconstruction) I wonder how emulation-resistant this thing is going to be? Obviously, future freebooters will be able to just chuck a 5090 and a 16900K at the problem, but will it reach the point where mid and entry level PCs can do so before the end of the system's life?
Life uh, finds a way
 
This is the sort of move that reminds me of the long-standing recommendation to never try to predict where tech markets are headed. Our present day view is particularly clouded, because there have been a bunch of changes to previously held assumptions, coming from both the rapid rise of AI and the slowing down of Moore's Law.

Most companies are standing at a crossroads, with a variety of paths they can take and, by the same token, more places where they can make mistakes. By the end of this decade, it's not absurd to think we could be talking about multiple devices powered by Nvidia CPUs and Intel GPUs.

Intel in particular is hard to predict. A company as large as Intel would normally need a more in-depth analysis of all its different divisions to describe properly, but so much of Intel's current strategy has been to give full priority to its manufacturing process pipeline and a streamlined list of internal products that will take advantage of it once it's ready. It's hard to tell if they'll be able to pull it off, especially since their public timeline is very aggressive and would actually put them a bit ahead of TSMC as the world's leading-edge fab if the target performance and yields are there.

As one of Dakhil's previously posted links showed, TSMC seems to be taking note and commenting on it quite often, as they should, because if Intel's 20A and 18A nodes end up being viable then they'll likely have a long list of customers that will want to place some orders, most of which are currently running almost all their leading edge production on TSMC nodes. With Intel's fabs mostly located in the United States and Europe, placing some production on Intel fabs also adds an extra layer of geopolitical risk mitigation, which is increasingly becoming a priority. Again, assuming the nodes are viable and on time, which is far from a given.

Edit: Spelling
 
this report of a 4K60 Breath of the Wild and a Matrix demo with visuals comparable to current-gen consoles made me very skeptical that all of this is true
It’s good to be skeptical, but how are the skeptics deciding to reconcile the Gamescom leaks?

Would Nintendo reps be misrepresenting the showing - i.e., it wasn’t actually 4K? Trade floor reps have said some dubious things, but it’s usually when asked by journalists about a show floor demo. I assume they’d have more structure around that here.

Do you think it wasn’t representative of the final hardware? What is the purpose of a demonstration that isn’t at least proximal to the real thing?
 
It would not be the first time that tech demos aren't very accurate to what the final product is capable of, to be fair.
 

We’ve definitely had public tech demos during showcases that the final hardware doesn’t live up to, but do we really have any insight into this type of behind-closed-doors demo?

The purpose is different and the public may never see these.
 
It’s good to be skeptical, but how are the skeptics deciding to reconcile the Gamescom leaks?

Would Nintendo reps be misrepresenting the showing - i.e., it wasn’t actually 4K? Trade floor reps have said some dubious things, but it’s usually when asked by journalists about a show floor demo. I assume they’d have more structure around that here.

Do you think it wasn’t representative of the final hardware? What is the purpose of a demonstration that isn’t at least proximal to the real thing?
these tech demos represent a work in progress for the console; its specs can change before the console launches, so in a way they are not a final representation of the Switch successor's final graphical/technical specs
 
I don’t think anyone here is arguing that the Switch 2 will do full path tracing like Cyberpunk 2077 Overdrive.

But I believe Nintendo will totally make use of the RT cores for at least some ray tracing features.

I’m sorry but if third-parties can get some RT running on the Series S, then Nintendo will be able to get some RT running on the Switch 2.


In that video DF says one of the reasons the Series S doesn't have more ray tracing games is its small 10 GB of memory. If Switch 2 has 12 or 16 GB, maybe it can have more ray-traced games than the Series S in the end.
 