
StarTopic: Future Nintendo Hardware & Technology Speculation & Discussion |ST|

Hi everyone! First time posting here, but I've been lurking since the Linux kernel stuff and unhealthily consuming every single bit of information y'all figure out.

I have a few questions about DLSS that I'd like to bring up, and who knows, maybe someone can share their thoughts.
  • I believe Drake will be the first time DLSS is available to developers in a mainstream console, correct? Could this result in games being developed (or ported) with DLSS in mind, thus optimizing its efficacy?

  • Is there a chance we'll get "specialized" versions of DLSS for Drake, or would it use the currently available solutions from the PC space? In this DF video that I've rewatched probably a dozen times, Alex suggests something like this in the conclusion.

  • Is it safe to assume that future Drake firmware updates could introduce newer versions of DLSS with improved performance? And if so, could this squeeze out more potential later in its lifecycle for even more "impossible ports"?
 
I believe Drake will be the first time DLSS is available to developers in a mainstream console, correct? Could this result in games being developed (or ported) with DLSS in mind, thus optimizing its efficacy?
The specific technique of DLSS is new, and certainly nothing else on consoles is as hardware-accelerated at the moment, but algorithms attempting to accomplish similar things have been popping up on consoles for some time now.
Is there a chance we'll get "specialized" versions of DLSS for Drake, or would it use the currently available solutions from the PC space? In this DF video that I've rewatched probably a dozen times, Alex suggests something like this in the conclusion.
The DLSS software will have to be ported to run on the Switch OS, but that doesn't necessarily imply any functional differences. They certainly could customize it for Drake if they choose to do so, however.
Is it safe to assume that future Drake firmware updates could introduce newer versions of DLSS with improved performance? And if so, could this squeeze out more potential later in its lifecycle for even more "impossible ports"?
Based on how Switch software is currently distributed, the DLSS code will probably be included with games and not as part of the firmware, but certainly, the versions of the DLSS library available to ship with games could be updated over time to deliver whatever enhancements the hardware can support. I'd expect changes more along the lines of incremental refinements rather than dramatic improvements, but it depends on how the DLSS software itself evolves over time.
 
With the RDNA3 event now shown, I don't really see what Sony or Microsoft can do to easily improve their RT performance enough to make the investment worth it, imo. RDNA3 is helped by the massive bandwidth from the MCDs, which MS and Sony won't have because of the increased complexity and cost to manufacture, I'm sure.

So, for the moment I really don't see anything helping them enough to make a mid-gen console worth it with RT as a focus. In raster? Yeah, there should be gains, but that is also helped by bandwidth the consoles will not really have.

And it would require them increasing the GPU sizes to see a real benefit. That said, it also seems like legacy GCN support is dropped in RDNA3, so there would be a bit of a compatibility issue for GCN code I think; they'd need to run a translation layer of sorts for that.

They’d need to brute force this basically.


Unless… the Pro consoles were priced well above the current ones. I'm only bringing this up for the sake of conversation. I find RDNA3 to be a cool uArch though, all things considered. But only in the PC sense.
 
Let's set some performance expectations*. Drake will not be a PS4 Pro, even in docked mode. It certainly will not be a Series S.

Series S and PS4 Pro are 4 TFLOPS. If you really pushed docked mode, bleeding 20W+ of power, with a giant fan in the dock, you could get there. But you wouldn't be able to match their 200+ GB/s of memory bandwidth**. You wouldn't be able to match the Series S's 3.6 GHz octa-core Zen 2 CPU. You wouldn't be able to match the PS4 Pro's pixel and texel fill rates.

The gap between Series S and Series X is massive, and devs are complaining. While you might be able to run enough power in docked mode to light the NuSwitch up like a Christmas tree, handheld would still be constrained - pushing docked mode to 4 TFLOPS would create a gap between handheld and docked mode just as big as the one between Series S and Series X - which just means performance left on the table in one mode, or juddery frame rate garbage in the other.

DLSS can help cover that gap, somewhat. A good example would be Death Stranding.
I'm not sure what it will take to get to a theoretical GPU max of 4 TFLOPS. Maybe 5nm or 4nm TSMC? Who knows if a 1.3 GHz GPU and something like 7x A78C cores at 2 GHz (Beerus, I would love both of these) is possible at 20 watts in docked mode. Maybe not...

But anyway, if the GPU is at 4 TFLOPS in docked mode, then handheld mode would be lifted as well. Not sure why we should worry about power gaps. If we follow the same 2.5x gap as between Switch handheld and docked, 1.6 TFLOPS for handheld should be expected and is definitely doable. If the Steam Deck can average around 1.3 TFLOPS on TSMC 7nm with AMD tech, it shouldn't be an issue on TSMC 5nm with Nvidia architecture. First-gen 5nm TSMC offers 1.8x the logic density (a 45% area reduction) and is either 15% faster (at the same complexity and power) or 30% lower power (at the same frequency and complexity) than 7nm, with second-gen 5nm being 7% faster than first-gen, or a 10% power reduction. I believe going from 5nm to 3nm is similar.
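A minimal sketch of that arithmetic, assuming the post's own numbers (the 4 TFLOPS docked target, the ~2.5x docked/handheld clock ratio, and TSMC's published 5nm-vs-7nm claims; none of these are confirmed Drake specs):

```python
# Speculative inputs from the post above, not confirmed specs.
docked_tflops = 4.0   # assumed docked target
clock_ratio = 2.5     # rough Switch docked-to-handheld GPU clock ratio

handheld_tflops = docked_tflops / clock_ratio
print(f"Implied handheld throughput: {handheld_tflops:.1f} TFLOPS")  # ~1.6

# TSMC's first-gen 5nm vs 7nm claims (iso-design): +15% frequency at the
# same power, OR -30% power at the same frequency.
base_clock_ghz, base_power_w = 1.0, 10.0
print(f"Same power on 5nm: {base_clock_ghz * 1.15:.2f} GHz")
print(f"Same clock on 5nm: {base_power_w * 0.70:.1f} W")
```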
NuSwitch, running at something like 2.4 TFLOPS in docked mode, just can't get there. It's like 1.3x a PS4, not 2.2x like the PS4 Pro. But here is the magic trick. DLSS can get a decent looking 4k from only a 1080p input.
To be fair, the gap will be wider than 1.3x per FLOP, because the Ampere architecture is newer and more efficient than the PS4's. Even the Maxwell TX1 is more efficient (was it 30%?), but I'm not sure what the performance gains are going from Maxwell --> Ampere. Perhaps it's possible 3 TFLOPS could match the 4 TFLOPS PS4 Pro? I dunno. I do remember that the PS4 Pro uses the Polaris architecture, which also has a mixed precision mode like the TX1 (which the Xbone and base PS4 do not).
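As a quick check of the paper-FLOPS ratios in question (the PS4's ~1.84 TFLOPS and PS4 Pro's ~4.2 TFLOPS are public figures; the 2.4 TFLOPS docked number is the speculation quoted above):

```python
ps4, ps4_pro, drake_docked = 1.84, 4.2, 2.4  # TFLOPS; the Drake figure is speculative
print(f"Drake docked vs PS4: {drake_docked / ps4:.2f}x")  # ~1.30x
print(f"PS4 Pro vs PS4:      {ps4_pro / ps4:.2f}x")       # ~2.28x
```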

But anyway, I shouldn't expect 4 TFLOPS. 2.5-3 TFLOPS I'd be pretty happy with. If they can maintain the same GPU power gap to current gen as Switch vs PS4/Xbone, while closing the gap on CPU and easing the bottleneck on bandwidth (wish we got LPDDR5X bandwidth), I'd be ecstatic, and we'd be in a better situation than Switch vs PS4/Xbone.

GCN uses immediate rendering, so makes much less efficient use of L2 cache than newer tile-based rendering GPUs. Nvidia moved to tile-based rendering with the Maxwell architecture, and AMD I believe switched over on RDNA1. This is one of the reasons I'm hoping Drake has the larger 4MB cache that's indicated (but seemingly not confirmed) by the leak. For any tile-based renderer, increasing the cache size should be a much more power-efficient way to work around bandwidth limitations than just cranking up the memory clocks or bus width.
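As a crude illustration of that trade-off (the bus width and hit rates below are made-up assumptions, not leaked figures), here is a toy model where cache hits are treated as free, so DRAM only has to serve misses:

```python
# effective_bw = dram_bw / (1 - hit_rate): the higher the share of buffer
# traffic served by on-die cache, the more bandwidth the GPU effectively sees.
dram_bw = 102.4  # GB/s, e.g. a hypothetical 128-bit LPDDR5 bus
for hit_rate in (0.3, 0.5, 0.7):
    print(f"hit rate {hit_rate:.0%}: ~{dram_bw / (1 - hit_rate):.0f} GB/s effective")
```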
Zeno speed. 9_9
 
Given the recent announcements, I put this to the more technologically informed ITT: is FSR 3.0's frame generation tech AMD-exclusive? Could games on Drake theoretically make use of it?
 
I suspect it's more akin to how VR does it, which could introduce a lot of artifacts due to being prediction-based.
The VR motion reprojection solutions are different; they rely on having real rotational/positional data from the headset that describe how much orientation and position have changed since initially rendering the frame. You wouldn’t have that data in a non-VR game.

When we were speculating on DLSS 3 being frame extrapolation, it was basically contingent on having a neural network that could reasonably predict how the motion vectors would evolve in time one frame into the future based on nothing at all but previous frame/motion vectors data. That’s a tall order since it’s not a stable problem dynamically, but using a neural network should at least guarantee that the error is minimized. I don’t think there is any good way to make that prediction classically.
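For illustration only, here's roughly what such an extrapolation model could look like as a fully convolutional network over past motion-vector fields; this is a hypothetical sketch, not DLSS 3's actual (unpublished) architecture:

```python
import torch
import torch.nn as nn

class MotionExtrapolator(nn.Module):
    """Predict the next frame's motion field from a short history of fields."""
    def __init__(self, history: int = 4):
        super().__init__()
        # Each past frame contributes a 2-channel (dx, dy) motion field.
        self.net = nn.Sequential(
            nn.Conv2d(2 * history, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, kernel_size=3, padding=1),  # predicted (dx, dy)
        )

    def forward(self, past_motion: torch.Tensor) -> torch.Tensor:
        return self.net(past_motion)

model = MotionExtrapolator()
past = torch.randn(1, 8, 720, 1280)  # 4 past frames of 720p motion vectors
print(model(past).shape)             # torch.Size([1, 2, 720, 1280])
```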
 
In my opinion, it's believable that we'd see a home-console-only Switch Drake at the beginning of this new generation, and a few years later the hybrid version of it. Or maybe both at the same time.

The thing I see as way too far from happening is a Switch Drake hybrid at $450 or $500. And I don't see Nintendo selling the next console at a loss, either.
Having Drake launch as a home-only version might be a good strategy to keep the Switch relevant for another year or two due to its hybrid appeal. This would also mean that Nintendo can showcase the power of Drake to its fullest (not being hamstrung by portable battery life considerations). Then after two years or so Nintendo can discontinue the Switch and replace it with a hybrid Drake which by then could be engineered to have better battery life due to a die shrink or something.
 
It would also be a good way to have Drake start off with slow hardware sales, and to have any third-party exclusives for it underperform in those first years.
 
Having Drake launch as a home-only version might be a good strategy to keep the Switch relevant for another year or two due to its hybrid appeal. This would also mean that Nintendo can showcase the power of Drake to its fullest (not being hamstrung by portable battery life considerations). Then after two years or so Nintendo can discontinue the Switch and replace it with a hybrid Drake which by then could be engineered to have better battery life due to a die shrink or something.
2 years doesn't make that much of a difference.

If they pushed Drake to its fullest, they wouldn't be able to turn it into a hybrid in just 2 years. The hybrid would still have to be weaker even in docked mode.

And whatever hybrid with good battery life they can get in 2025, they can get with worse battery life in 2023.

Starting with the less desirable product and waiting 2 years for the main product is a big risk for the platform momentum as well. Salvaging a platform after a bad start is very expensive and hard to pull off.
 
It would also be a good way to have Drake start off with slow hardware sales, and to have any third-party exclusives for it underperform in those first years.

Simply put, it’s not a good idea. The powerful TV-only Switch dream that some are clutching to needs to be left behind.

If there's a time for Nintendo to return to TV-only products, it's certainly not now. They're seeing greater success than ever before, with a hardware lineup that is both uncontested and fits their innovative sensibilities. New hardware isn't suddenly going to see them walk away from that with a TV-only device that pits them directly against the far more powerful XSX|S and PS5 just as they really hit their stride.

Sorry, it just sounds ridiculous to me every time I see it. I'll happily eat crow if I'm wrong.
 
A TV-only Switch will never be designed to exceed docked hybrid performance. If Drake TV comes out, before or (more likely) after Drake Hybrid, they will have the same performance. Anything else risks extra profiles for developers to target, fracturing the userbase and making the value proposition of the hybrid console worse.

I admit there is an appeal to a power efficient set top Drake to put under an extra TV, like an even cheaper Series S with Nintendo support. But since the hybrid has done so well, I think the idea has been shelved for the time being.
 
I don't see how a dock-only successor would not be a total failure based on the rumored specs. Even maxed out the wazoo, this will be a machine on par with the Series S while costing more.
 
Releasing a TV-only Drake and ONLY a TV-only Drake, for any length of time, is a bad idea.

But releasing one as an option alongside a pre-existing hybrid? Any argument that could be made against it to call it a bad idea likely can be (or was) used against the existence of the Lite, so let’s not have a repeat of that.
 
Releasing a TV-only Drake and ONLY a TV-only Drake, for any length of time, is a bad idea.

But releasing one as an option alongside a pre-existing hybrid? Any argument that could be made against it to call it a bad idea likely can be (or was) used against the existence of the Lite, so let’s not have a repeat of that.
the lite plays games the same way as the standard model does. if a TV only model played games better it introduces a greater compromise
 
the lite plays games the same way as the standard model does. if a TV only model played games better it introduces a greater compromise
telling devs to make three profiles will probably earn some ire from devs. switch handheld and docked mode is better balanced than Series X and S, don't want to ruin that
 
I don't see how a dock-only successor would not be a total failure based on the rumored specs. Even maxed out the wazoo, this will be a machine on par with the Series S while costing more.
I don't really think it's realistic though, because let's assume the scenario that N releases a TV-only model first, and it runs at the maximum clock speed it can actually sustain. So, game development proceeds to target this new model and its specifications.

So, games are made with that spec in mind; they would need to go higher, not lower, for a smoother transition. Otherwise, games that were already developed would have to be patched and downgraded to account for the handheld mode: lowered CPU frequency, lowered memory bandwidth, etc. On the other end, the later hybrid would have to target performance equal to the TV model's in portable mode, and even higher performance in docked mode.

It's like having a 500ml bowl, and then a bit later (2 years) releasing a new bowl that has to account for 250ml at its worst; if you have water that filled a 500ml bowl and you pour it into a 250ml one, it won't end well. So now any upcoming release has to either skip the 250ml model, be delayed, and/or developers have to update any game they were initially working on to account for the new, lower SKU.

And I don't think it's possible to do that with a die shrink after 2 years. I'd imagine it would need two more shrinks down to be able to do that.


This is different from the Switch Lite, which operates equally to the Switch in portable mode.

Or the Xbox Series consoles, which were released at the same time.


It's like Sony releasing a portable version of the PS5 that matches PS5 performance in TV mode. We know it won't operate at that same level in handheld mode. The only remedy is to release something stronger that can be a PS5 in portable mode, and 2x a PS5 in TV mode.


And you wouldn't really get that in 2 years, probably more like 4, maybe. Even then, it makes a lot of assumptions.
 
the lite plays games the same way as the standard model does. if a TV only model played games better it introduces a greater compromise
telling devs to make three profiles will probably earn some ire from devs. switch handheld and docked mode is better balanced than Series X and S, don't want to ruin that
Yeah, operating at a different spec is ridiculous, just for the added manufacturing complexity to the line alone. But people poo-pooing the idea of a TV-only device are doing so irrespective of that notion.
 
I wrote a long post and then my browser ate it. :(

The quick (for me) version: DLSS changes the power calculations for the device and I think it’s useful to stop thinking of how much power Drake could have and start thinking about how much power DLSS needs.

Consider the PS4 Pro and the Xbox One X. Their performance characteristics were dictated by checkerboard rendering. CBR requires a bespoke resolution exactly one half of 4K. The PS4 Pro GPU is exactly twice as big as the PS4's because that's the factor by which existing games needed to improve for checkerboard rendering to work.

DLSS doesn't work that way. DLSS supports multiple possible input resolutions. Quality mode looks better than checkerboarding. Performance mode looks a little worse. Neither requires as high an input resolution as checkerboarding. Nintendo can offer a super product at 85% of the power.

DLSS also makes dynamic resolution scaling way less useful or actively broken. DLSS really needs a stable base image to draw from, the whole design is built on the idea that you have performance to spare.

Without DRS, the power gap between the two modes is more important. Zelda’s 900p won’t cut it, you need to get to the base resolution for DLSS to work, and you can’t target one mode and let DRS cover any gaps.

So it's highly likely Nintendo will try to keep power per natively rendered pixel about the same. For quality mode output, that's a factor of 3.6. For performance mode it's 2.1.

We know from Orin power data that 460 MHz is pushing it in handheld mode from a power draw perspective. We also know from Orin that the likely max clock for Drake is 1.3 GHz. The bottom of Ampere's power curve is 300 MHz.

So here are our constraints. Nintendo almost definitely wants to target last gen quality as a baseline, and would like to preserve handheld mode battery life.

At 300 MHz, Drake has roughly half the TFLOPS of the PS4, and at 720p it is supporting half the resolution. That leaves just enough room to support DLSS quality mode with docked hitting 1 GHz, and at maximum possible Drake battery life. This is pretty close to a best possible situation for Nintendo, and increased Ampere efficiency only encourages this arrangement further.
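For concreteness, here's how those clocks translate to paper FLOPS if you assume the 12 SM / 1536 CUDA core configuration from the Nvidia leak (still unconfirmed):

```python
# FP32 throughput = 2 ops per clock (FMA) * CUDA cores * clock speed.
cores = 1536  # leaked Drake config; not officially confirmed
for clock_ghz in (0.30, 0.46, 1.00, 1.30):
    tflops = 2 * cores * clock_ghz / 1000
    print(f"{clock_ghz:.2f} GHz -> {tflops:.2f} TFLOPS")
# 0.30 GHz -> 0.92 TFLOPS, roughly half of the PS4's 1.84
# 1.30 GHz -> 3.99 TFLOPS
```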

But perhaps Nintendo wants to leave some GPU room for "PS4 quality visuals with some ray tracing on top." Or perhaps there are some other performance limitations that push the handheld clocks further. If they get past 370 MHz, DLSS quality mode is no longer on the table without sacrificing some visual features in the docked mode presentation.

Performance mode opens up additional possibilities. You could go as far as battery life would let in handheld mode, and still have plenty of room to stay under the cap. Maybe run a 1080p screen and DLSS to it?

So - Nintendo pushes for more power in handheld mode, only to lose DLSS quality mode in the name of SOC yields. Or goes for quality mode, and cuts handheld mode down to preserve battery life. Neither of these solutions requires PS4 Pro level of pixel pushing.
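For anyone following the mode math: DLSS 2.x's standard per-axis scale factors are public (Quality = 2/3, Balanced ≈ 0.58, Performance = 1/2, Ultra Performance = 1/3). The exact per-pixel factors above depend on this post's own assumptions, but the input resolution each mode implies for a 4K output is easy to compute:

```python
modes = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 1 / 2, "Ultra Perf": 1 / 3}
out_w, out_h = 3840, 2160   # 4K output
native_720p = 1280 * 720    # assumed handheld native rendering

for name, s in modes.items():
    w, h = round(out_w * s), round(out_h * s)
    print(f"{name:12s}: {w}x{h} input, {w * h / native_720p:.2f}x the pixels of 720p")
```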
 
DLSS doesn't work that way. DLSS supports multiple possible input resolutions. Quality mode looks better than checkerboarding. Performance mode looks a little worse. Neither requires as high an input resolution as checkerboarding. Nintendo can offer a super product at 85% of the power.

DLSS also makes dynamic resolution scaling way less useful or actively broken. DLSS really needs a stable base image to draw from, the whole design is built on the idea that you have performance to spare.

Without DRS, the power gap between the two modes is more important. Zelda’s 900p won’t cut it, you need to get to the base resolution for DLSS to work, and you can’t target one mode and let DRS cover any gaps.
DLSS does have DRS support (section 3.2.2 of the programming guide). I’m not a game developer, so I can’t say what it takes to implement it in practice.

But what I can say is that one of the primary advantages of a fully convolutional neural network is that it’s independent of input size. Convolutional networks are made up of many filters so, for example, if you are training with 3x3 pixel filters, you can just as easily run those filters over a 900p image as to a 1080p image.

There is a caveat that 3x3 pixels is smaller in screen space at 1080p than 900p (and even smaller at 2160p), but there’s a way to address that problem. The Facebook neural upsampling paper projects the input to the output resolution before warping the image with motion vectors and running the neural network. So you would end up with a 2160p buffer after projection regardless of whether you started from 900p or 1080p; you would just have fewer samples available per frame from the 900p input.
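A tiny, self-contained illustration of that input-size independence (the filter counts here are arbitrary; nothing DLSS-specific):

```python
import torch
import torch.nn as nn

# The same 3x3 convolution filters slide over any spatial resolution.
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)

img_900p = torch.randn(1, 3, 900, 1600)
img_1080p = torch.randn(1, 3, 1080, 1920)

print(conv(img_900p).shape)   # torch.Size([1, 8, 900, 1600])
print(conv(img_1080p).shape)  # torch.Size([1, 8, 1080, 1920])
```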
 
DLSS does have DRS support (section 3.2.2 of the programming guide). I’m not a game developer, so I can’t say what it takes to implement it in practice.

But what I can say is that one of the primary advantages of a fully convolutional neural network is that it’s independent of input size. Convolutional networks are made up of many filters so, for example, if you are training with 3x3 pixel filters, you can just as easily run those filters over a 900p image as to a 1080p image.

There is a caveat that 3x3 pixels is smaller in screen space at 1080p than 900p (and even smaller at 2160p), but there’s a way to address that problem. The Facebook neural upsampling paper projects the input to the output resolution before warping the image with motion vectors and running the neural network. So you would end up with a 2160p buffer after projection regardless of whether you started from 900p or 1080p; you would just have fewer samples available per frame from the 900p input.
Yeah, Spider-Man on PC has DLSS+DRS (all the upscalers in the game work with DRS, actually), and it works great (outside of the fact that the frame rate target for DRS is limited in the options, but that is a game-side issue with how they made the PC port).

So no real problem here.
 
DLSS also makes dynamic resolution scaling way less useful or actively broken. DLSS really needs a stable base image to draw from, the whole design is built on the idea that you have performance to spare.
Not since 2.1:


  • New ultra performance mode for 8K gaming. Delivers 8K gaming on GeForce RTX 3090 with a new 9x scaling option.
  • VR support. DLSS is now supported for VR titles.
  • Dynamic resolution support. The input buffer can change dimensions from frame to frame while the output size remains fixed. If the rendering engine supports dynamic resolution, DLSS can be used to perform the required upscale to the display resolution.
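A toy sketch of how an engine-side dynamic-resolution loop pairs with a fixed-output upscaler as described in that last bullet (the controller logic and numbers are generic illustration, not Nvidia's API):

```python
# Pick the next frame's render height from the last frame's GPU time, while
# the upscaler's output resolution stays fixed. Pixel cost scales ~ height^2.
TARGET_MS, MIN_H, MAX_H = 16.6, 540, 1080  # 60 fps budget, clamp range

def next_render_height(cur_h: int, last_frame_ms: float) -> int:
    scale = (TARGET_MS / last_frame_ms) ** 0.5
    return int(min(MAX_H, max(MIN_H, cur_h * scale)))

print(next_render_height(720, 20.0))  # over budget -> 655 lines next frame
print(next_render_height(720, 12.0))  # headroom    -> 846 lines next frame
```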




The most interesting element about DLSS is that, if they use it in portable mode, I do not expect something like ultra performance mode or performance mode to really be a common use of the feature. I expect something like quality to ultra quality mode, except in the most extreme cases.

Take a game that renders at 540p, and have DLSS target the screen's 720p, taking it up to native res. Developers would need to be more creative with portable mode here though, because what I'm suggesting banks on the 720p 7-inch screen hiding a lot of the issues that would otherwise have been present in TV mode.

And this is assuming they use a 720p display. It could be 1080p, but that doesn't change the idea of starting from an internal resolution of, say, 640p, 720p or 810p and scaling it up to the native 1080p, banking on the screen size hiding a lot of potential visual issues.




This is all theoretical, of course, all things considered.



540p: 518,400 pixels
720p: 921,600 pixels

Going from 540p to 720p requires ~1.78x the pixel throughput.
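The same arithmetic, extended to the 1080p-panel scenario mentioned above (16:9 frames assumed):

```python
res = {"540p": (960, 540), "720p": (1280, 720), "810p": (1440, 810), "1080p": (1920, 1080)}
px = {name: w * h for name, (w, h) in res.items()}
print(f"540p -> 720p : {px['720p'] / px['540p']:.2f}x the pixels")   # ~1.78x
print(f"810p -> 1080p: {px['1080p'] / px['810p']:.2f}x the pixels")  # ~1.78x
```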

Rather than brute-force it, save battery life and scale more efficiently, using the physical screen dimensions and hardware resources to do the job and hide the difference.


Know how people say DLSS is magic? Well, magic is just smoke, tricks and mirrors after all, so best to utilize those tricks as best as possible to give the impression of “Good Image Quality”.


And this is the most extreme case. I'm watching out for memory bandwidth in the long run, not the short term. Though a system-level cache would help reduce the pressure a bit… hopefully they went for it.
 
On the other hand, a more powerful CPU can last multiple generations. You can still run current-gen games at ~PS5 performance with an OC'd i7-5960X from 2014.
It can, but is it worth it? I just looked up the processor you mentioned and it had an MSRP of $999 USD. There’s no point in spending that much on the off chance that your CPU will be able to keep up with the following generation of consoles. With $999, you could have bought a solid i5 k + mobo in 2014 (that would have greatly outperformed the PS4 and PS4 Pro) and still have almost enough money left over to buy another i5 k + mobo in 2022 (that will likely outperform a hypothetical PS5 Pro).

It’s also likely that new standards will be introduced over time, and will have limited or no compatibility with the older platform you’re on.

In my opinion, you should only buy a top-of-the-line CPU because you need the performance now and can afford it, not because you’re hoping to have it last through two console generations.
 
While true, I forget where I read it, but devs opted not to really use it, as it complicated multiplatform development, with the bandwidth actually being 109 GB/s in each direction rather than the stated 218 GB/s (102 GB/s in and out on the "VCR" model).

Perhaps they went with a hybrid approach, keeping some data in the eSRAM and the rest in the slower DDR3 memory pool.


I've been meaning to ask you this for a while, but what are your thoughts on a system-level cache? I am aware that Drake is for video games and is not a mobile processor, but in mobile processors it is pretty common to see a system-level cache of 4 to 8 MB, excluding Apple, who put a much larger SLC on the SoC.


Orin also has a 4MB SysCache and I wonder if that remained. It would certainly reduce memory bandwidth constraints.

Nintendo isn’t necessarily a stranger to having extra cache (that I’m aware of)
I'm not sure about system level cache. My understanding is that it would be helpful in cases where both CPU and GPU are accessing the same data a lot. Orin, for example, is mainly aimed at machine learning use-cases, and that typically involves a lot of data being passed back and forth between the CPU and GPU, so a shared cache makes sense there. The alternative would be larger or extra levels of cache on each of the CPU and GPU, which in that case would likely end up with a lot of the same data being duplicated across the two (plus the cost of keeping them coherent as the data is modified), so just having a single SLC would be a more efficient use of die space.

For a console, I would have assumed that simply enlarging the GPU L2 would have been the way to go. Most of the bandwidth use in a console is going to be the GPU accessing buffer objects, which the CPU normally wouldn't touch. The tile-based rendering approach Nvidia uses is also going to be optimised around using L2 cache for buffers, as no Nvidia GPU other than Orin has a higher-level cache, so it would likely have to be tweaked to make efficient use of a SLC that's significantly larger than its L2. There's also the issue that, unlike the L2, the GPU isn't the only client of the SLC, so it might be somewhat less predictable than the L2 would for that use-case.

Then again, Apple make heavy use of SLC in their SoCs, and as far as I can tell the main driver for this would be the GPU. The 8MB SLC on the M1 is actually a bit smaller than the CPU's 12MB L2, and again I'd expect the GPU is the bigger bandwidth hog, so perhaps leveraging an SLC for tile-based rendering isn't such a big deal.
 
Company name: Creatures Co., Ltd.
Job Title: 3DCG Character Modeler
Products in Charge/Service Overview: [...] Other, R&D considering the next generation hardware
Game Engine: Unity, Unreal Engine
 
 
Creatures has openly talked about experimenting with ray tracing, DLSS, and UE5 months ago.
 
It can, but is it worth it? I just looked up the processor you mentioned and it had an MSRP of $999 USD. There’s no point in spending that much on the off chance that your CPU will be able to keep up with the following generation of consoles. With $999, you could have bought a solid i5 k + mobo in 2014 (that would have greatly outperformed the PS4 and PS4 Pro) and still have almost enough money left over to buy another i5 k + mobo in 2022 (that will likely outperform a hypothetical PS5 Pro).

It’s also likely that new standards will be introduced over time, and will have limited or no compatibility with the older platform you’re on.

In my opinion, you should only buy a top-of-the-line CPU because you need the performance now and can afford it, not because you’re hoping to have it last through two console generations.
The 5960X was admittedly an extreme example. I'm wary of recommending i5s (and AMD equivalents) to people that aim for PS5 performance for the whole generation.

To be clear, no i5 currently outperforms a PS5. The PS5 is a Zen 2 eight-core with unified memory and a lot of dedicated silicon for IO operations. I believe people have a warped view since the PS4 had a very weak CPU, so pretty much any quality CPU would significantly outperform it. So, let's go back a bit: the PS3/360 gen.

At the start of the 360/PS3 gen we had the same discussions: a C2D was outperforming the consoles but struggled at the end of the generation. People that bought a C2Q were comfortable for the whole generation and viable for the next.
 
I mean, there is no doubt Drake is next generation hardware regardless of how it’s positioned by Nintendo.
True. But I don't think in a job listing they are trying to be specific like "you will be working on a machine (whatever it is positioned as) with next generation hardware". I think if they say "next generation hardware" it is because that is what it is intended to be in every sense. It would be weird at a glance to say such a thing and then backpedal and say it is in the same generation as the Switch, even if not hardware-wise.
 
True. But I don't think in a job listing they are trying to be specific like "you will be working on a machine (whatever it is positioned as) with next generation hardware". I think if they say "next generation hardware" it is because that is what it is intended to be in every sense. It would be weird at a glance to say such a thing and then backpedal and say it is in the same generation as the Switch, even if not hardware-wise.
I disagree. I don’t think it relates to positioning at all.
 
Creatures have already been explicit about working with next gen features, so this isn't really new.

That said, as SV development has wrapped up, Gen 10 should be beginning to ramp up around now.
 
I disagree. I don’t think it relates to positioning at all.
I understand where you are coming from. Technically it is next generation hardware, but it does not actually indicate how it will be marketed. Basically, this is true.
Now, imagine if a developer from Sony had the exact same job listing and the PS5 was almost 6 years old; what would you think when reading "next generation hardware" in a job listing? I said it the other day: this positioning talk exists because we don't want to give up on the "Switch Pro" or "a more powerful Switch in the Switch family" thinking, which exists because we heard about that in 2019, when it made sense. Unfortunately, up until now, we haven't wanted to give up on that thinking, as the thing has yet to materialize. That thing we heard about in 2019 was a Switch 2 all along; why do we want to overthink things?

"Next generation hardware". Anybody reading this now in 2022, ignoring the rumors and the discussion we had before late 2021 and the mid gen consoles will automatically think of Switch 2. I think we should not overthink this.
 
Creatures have already been explicit about working with next gen features, so this isn't really new.

That said, as SV development has wrapped up, Gen 10 should be beginning to ramp up around now.
Creatures is probably talking about their own games. They seem to have a desire to put out more games, as they're hiring for a Unity game of some sort. And there's still Detective Pikachu 2, whatever that will be.
 

"Our ambition is to be the No. 2 foundry in the world by the end of the decade, and [we] expect to generate leading foundry margins," Randhir Thakur, the president of Intel Foundry Services, told Nikkei Asia. IFS was set up last year to turn Gelsinger's vision into a reality.
I hope Intel succeeds since there needs to be more competition in the leading edge semiconductor fab sector.

Revealing eBook from TechInsights sheds some light on 5LPE, whose minimum metal pitch is the same as for 8LPP, with only 8 layers (<20%) on EUV: https://lnkd.in/gDhUAGvE
 
Whatever Drake ends up being called, it will be a Switch 2 the same way the 360 was an Xbox 2, the ONE was Xbox 3, and the Series is Xbox 4.
 
To be clear, no i5 currently outperforms a PS5. The PS5 is a Zen 2 eight-core with unified memory and a lot of dedicated silicon for IO operations. I believe people have a warped view since the PS4 had a very weak CPU, so pretty much any quality CPU would significantly outperform it. So, let's go back a bit: the PS3/360 gen.

At the start of the 360/PS3 gen we had the same discussions: a C2D was outperforming the consoles but struggled at the end of the generation. People that bought a C2Q were comfortable for the whole generation and viable for the next.
Not sure what you’re basing this on. A 12600k comfortably outperforms a PS5 in real-world scenarios and a 13600k is a good bit stronger than that.
 

I hope Intel succeeds since there needs to be more competition in the leading edge semiconductor fab sector.

8++

Samsung copying Intel. Maybe 8nm really is on the menu
 