
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

I am tantalized by the idea of a DLSS augmented with game specific training, though I suspect the dev cost there is exceedingly high.
I don’t agree that this would actually be desirable for DLSS 2.0. Training per game was useful for DLSS 1.0 because, as a spatial method, it had to hallucinate any high frequency image content that would be aliased at the lower resolution like fences, wires, grates, text, texture detail at a different mip bias, etc. I believe DLSS 1.0 was probably a deeper network than DLSS 2.0 to allow the network to learn more specific edge cases like this, which would align with DF reporting that it ran slower.

For a temporal method, all of the samples that you need for reconstruction at equivalent quality to a higher resolution will be accumulated after 2-4 frames, so the network’s main job is to weight those samples rather than to hallucinate new information. In this paradigm, high frequency information can be correctly reconstructed from samples as they are rendered in real time. Since you don’t have to bake that information into the network, you can use a shallower, faster architecture, and it will be game agnostic as long as the training set is sufficient.
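To make the "weighting accumulated samples" part concrete, here's a minimal numpy sketch of plain temporal accumulation: reproject the previous frame's history with motion vectors, then blend in the current jittered sample. This is just the classic hand-tuned baseline; a network like DLSS effectively replaces the fixed blend factor with per-pixel weights it has learned (the function name and the constant alpha below are made up for illustration).

```python
import numpy as np

def temporal_accumulate(history, current, motion, alpha=0.1):
    """Blend the current frame's samples into reprojected history.

    history: (H, W, 3) color accumulated over previous frames
    current: (H, W, 3) this frame's jittered, low-sample color
    motion:  (H, W, 2) per-pixel motion vectors in pixels (dy, dx)
    alpha:   weight of the new sample; a learned upscaler effectively
             predicts this per pixel instead of using a constant.
    """
    H, W, _ = history.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Reproject: look up where each pixel was in the previous frame.
    src_y = np.clip(np.rint(ys - motion[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.rint(xs - motion[..., 1]).astype(int), 0, W - 1)
    reprojected = history[src_y, src_x]
    # Exponential accumulation: after a handful of frames the history has
    # integrated several differently-jittered samples per pixel.
    return (1.0 - alpha) * reprojected + alpha * current

# Toy check: static scene, zero motion, noisy samples converge toward truth.
rng = np.random.default_rng(0)
truth = rng.random((4, 4, 3))
noisy = lambda: truth + 0.1 * rng.standard_normal(truth.shape)
hist = noisy()
for _ in range(16):
    hist = temporal_accumulate(hist, noisy(), np.zeros((4, 4, 2)))
print(float(np.abs(hist - truth).mean()))  # noticeably smaller than the per-sample noise
```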
 
For a temporal method, all of the samples that you need for reconstruction at equivalent quality to a higher resolution will be accumulated after 2-4 frames, so the network’s main job is to weight those samples rather than to hallucinate new information. In this paradigm, high frequency information can be correctly reconstructed from samples as they are rendered in real time. Since you don’t have to bake that information into the network, you can use a shallower, faster architecture, and it will be game agnostic as long as the training set is sufficient.
Interesting! (All my TensorFlow work has been in sentiment analysis, so I'm out of my depth here.)

My presumption is that essentially there would be “genre” presets in DLSS, so a model trained on highly abstract games vs. highly realistic ones. But if “more images is better” rather than “specific images is better” for the base training set, then that falls apart.
 
It's a sidecar library, though the integration in NVN2 is 100% specific to DLSS, not some kind of generic interface.

It does raise the question of how DLSS integration will work if developers choose to use OpenGL or Vulkan as their graphics API. Extensions could be provided, but I doubt Nvidia would want to publish those extensions to the spec, and I don't know if they would provide extensions without publishing them. I think it's also possible that Nvidia/Nintendo would require developers to link against the DLSS library on their own if they don't use NVN, which could be part of an effort to get more developers to use NVN than did so previously.
On PC, DLSS is a redistributable library that gets shipped with the game. I imagine all the relevant variants will be included in the SDK.
 
From what I grasped, specific in addition to more would be best, but the benefits are not worth the cost (and I'm not just talking in terms of compute time, but of creating models for specific cases).

And I see it: if it's a lot of extra work for marginal improvements, why even go down that road.
 
Interesting! (All my TensorFlow work has been in sentiment analysis, so I'm out of my depth here.)

My presumption is that essentially there would be “genre” presets in DLSS, so a model trained on highly abstract games vs. highly realistic ones. But if “more images is better” rather than “specific images is better” for the base training set, then that falls apart.
I think there is an important distinction to make there between art style and rendering. You can use the same pipeline to render both “abstract” and “realistic” games, so from a pure image processing perspective, you would expect similar aliasing artifacts regardless of the qualitative style.

Just to make the discussion more quantitative, here's an example from the Facebook neural supersampling paper. I keep going back to this well because it's the deep learning temporal upscaling implementation with the most details publicly available and because Anton Kaplanyan, who is now the head of graphics at Intel, is the last author, so it's probably comparable at least to what Intel is doing with XeSS. It is also a convolutional autoencoder, so we know that the general structure of the architecture is similar to what Nvidia has revealed about DLSS.
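For anyone who hasn't run into the term, here's what a convolutional autoencoder looks like in broad strokes, as a deliberately tiny PyTorch sketch: an encoder that downsamples, a decoder that upsamples back. The channel widths, depth, and input channel count below are placeholders for illustration only; this is not the paper's network and not anything Nvidia has published about DLSS.

```python
import torch
import torch.nn as nn

class TinyUpscalerAE(nn.Module):
    """Toy convolutional autoencoder, for illustration only.

    Assumed input: low-res color plus auxiliary channels (e.g. depth,
    motion), already warped/zero-upsampled to the target resolution.
    Output: reconstructed color at that resolution.
    """
    def __init__(self, in_ch=8, out_ch=3, base=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(),      # /2
            nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1), nn.ReLU(),  # /4
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 4, base * 2, 4, stride=2, padding=1), nn.ReLU(),  # x2
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(),      # x2
            nn.Conv2d(base, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# "Half the channels in every layer" (the Ours-fast idea below) is just base=16.
model = TinyUpscalerAE(base=32)
print(model(torch.randn(1, 8, 128, 128)).shape)  # torch.Size([1, 3, 128, 128])
```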

Anyway, in that paper, they trained their upscaler on four scenes in Unity with different styles, titled Robots, Village, DanceStudio, and Spaceship. They trained several different networks:
  • Ours is the architecture that they show in Fig. 4. of the paper. It is trained on only one scene and evaluated on the same scene.
  • Ours-fast is the same architecture as Ours, but each layer has half as many channels as in Ours.
  • Ours-AllScenes is the same architecture as Ours, but the training set contains all four scenes.
  • Ours-AllButOne is the same architecture as Ours, but the network is trained on three scenes and evaluated on the fourth. For example, it could be trained on Robots, Village, and DanceStudio, then evaluated on Spaceship.
There are two metrics they use to evaluate how good the reconstruction was: peak signal to noise ratio (PSNR) and structural similarity (SSIM). A higher number is better on both. SSIM is probably easier to interpret; it is scaled between 0 and 1, with 1 being the best possible outcome. Here are their results:

[Table: PSNR and SSIM results for Ours, Ours-fast, Ours-AllScenes, and Ours-AllButOne on each scene]


EDIT: I am adding in one more table and one more figure from the paper, which compare the SSIM and PSNR of this approach to various other machine learning upscalers and to UE4's TAAU. I will spoiler tag this to save space.

[Table: SSIM and PSNR comparison to other machine learning upscalers and to UE4's TAAU]

[Figure: comparison to other machine learning upscalers and to UE4's TAAU]

On each scene, we can see a few trends:
  • Ours-AllButOne is the worst performing network, but only by a very small margin. Even though the content and styles of the scenes are different, the network can get very similar performance on a scene that it has never been trained on.
  • Ours-AllScenes actually performs almost identically to Ours. We don't actually see much improvement by specializing the network on a single scene or style.
  • Ours-fast has a marginal loss compared to Ours. A lighter network with fewer channels is probably sufficient in many situations.
The main conclusion here is that a temporal upscaling network like this is fairly agnostic of the actual content of the scene. Even when the network has never seen a scene before like in Ours-AllButOne, just training on different scenes with the same rendering pipeline gives us very similar results.
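As an aside on the metrics above: PSNR is simple enough to compute by hand, and scikit-image ships a standard SSIM implementation, so checking numbers like these yourself is easy. A quick sketch (assuming images scaled to [0, 1]; skimage >= 0.19 for the channel_axis argument):

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((64, 64, 3))
test = np.clip(ref + 0.02 * rng.standard_normal(ref.shape), 0.0, 1.0)
print(psnr(ref, test))                                   # roughly 34 dB
print(ssim(ref, test, channel_axis=-1, data_range=1.0))  # close to 1
```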
 
The tangible evidence we have that Nintendo will not release a successor-type console in the next few years is:

- Nintendo saying Switch is at the midpoint of its lifecycle 6 years in (most consoles declare this 3.5 years in)
I don't think most platforms have a "THIS IS THE MIDDLE" announcement. I think Nintendo is just saying it's neither the beginning nor end.
- Nintendo saying they expect growth in Switch’s 6th year which they admit is unusual since most consoles are in decline by the 6th year
They say hardware will be down for the second year, and at least claim to think software will be down as well.
- Nintendo having yet to have an upgrade for its handheld console (they almost always do)
We can also say: they almost always do so well before their console reaches the Switch's current age.
 
On PC, DLSS is a redistributable library that gets shipped with the game. I imagine all the relevant variants will be included in the SDK.
Of course the DLSS library will be distributed with the SDK, because it will be needed for developers to test their applications on the reference implementations. But with NVN2's usage model, you don't link against that library yourself; NVN does. The question was how other graphics APIs would handle it.

Edit: Well, we don't specifically know if the library will be distributed with the SDK, at least at first, because we don't know if/how the situation will be different from the way it currently is with DLSS -- where developers have to reach out and work with Nvidia one-on-one to use DLSS in their titles.

Nvidia is already providing Vulkan and DirectX DLSS versions, yes? So hopefully something similar occurs here, for portability reasons.
There are Vulkan and DirectX versions of APIs contained in the DLSS (NGX) SDK for PC, so developers need to link against the DLSS library and then call the functions that match the graphics API they're using. So that's backwards from how NVN is going to do it. Even if they were going to keep the direct linking model for Vulkan on Switch, they would need entirely new APIs on the DLSS side, possibly to interoperate with the Switch's version of Vulkan objects, and definitely because memory/resources need to be supplied by the application rather than being managed by the driver as they are on PC (presumably because of the lack of dedicated VRAM, and developers wanting 100% control of their resource usage, this is the pattern for a lot of driver stuff on Switch). So it won't be plug and play from a PC compatibility standpoint, and if they're changing the API surface, they might also choose another means of integrating with the library as they did with NVN.
 
Of course the DLSS library will be distributed with the SDK, because it will be needed for developers to test their applications on the reference implementations. But with NVN2's usage model, you don't link against that library yourself; NVN does.
Ahhhh, I see.
There are Vulkan and DirectX versions of APIs contained in the DLSS (NGX) SDK for PC, so developers need to link against the DLSS library and then call the functions that match the graphics API they're using. So that's backwards from how NVN is going to do it. Even if they were going to keep the direct linking model for Vulkan on Switch, they would need entirely new APIs on the DLSS side, possibly to interoperate with the Switch's version of Vulkan objects, and definitely because memory/resources need to be supplied by the application rather than being managed by the driver as they are on PC (presumably because of the lack of dedicated VRAM, and developers wanting 100% control of their resource usage, this is the pattern for a lot of driver stuff on Switch).
Fascinating. This is very clarifying, thanks!
 
I realize now that it was the mention of NVN2 existing that got leaked, but not the actual NVN2 SDK itself…
 
I don't think most platforms have a "THIS IS THE MIDDLE" announcement. I think Nintendo is just saying it's neither the beginning nor end.

They say hardware will be down for the second year, and at least claim to think software will be down as well.

We can also say: they almost always do well before the age Switch is.

I just assume when they say switch, they mean the brand, and not a specific product with a specific chipset.

I've been assuming this for some time now.
 
I realize now that it was the mention of NVN2 existing that got leaked, but not the actual NVN2 SDK itself…
What do you mean? The source code for NVN2 -- the graphics driver and its associated API, as well as various devtools and testing/sample applications -- are (part of) what was in the Nvidia leak.
 
What do you mean? The source code for NVN2 -- the graphics driver and its associated API, as well as various devtools and testing/sample applications -- are (part of) what was in the Nvidia leak.
Devtools? What devtools? You mean the thing about using Turing GPUs or later but Ampere is preferred and that thing about ORIN being compatible with this?
 
Devtools? What devtools? You mean the thing about using Turing GPUs or later but Ampere is preferred and that thing about ORIN being compatible with this?
Isn’t there a crash dump analyzer and a profiler in the leak? That’s how we knew the recent Nvidia job posting didn’t reflect the beginning of devtool work for Drake
 
@Kenka @Hermii @Skittzo @ReddDreadtheLead

I think the points you've all made regarding indie development on a new Nintendo system are important to keep in mind.

Yes, not all indie devs will have the same expectations and ambitions. Yes, many indie devs may not even use UE5. Yes, some devs like me will not develop for a future Nintendo console that doesn't fully support Nanite. Yes, taking full advantage of UE5's features will ease development even for indies.

The problem is this:

If you all want a future hybrid Nintendo system to "close the gap" with the other consoles in terms of graphics, then you either need to hope such a system supports tech like Nanite, or you need to realign your expectations for what is technically possible with a hybrid console without virtualized geometry, because the difference in apparent geometry in games on PS5/XSX compared to a future Nintendo hybrid console is going to be massive if such a console doesn't support Nanite or something similar to it.
 
Nanite doesn't require high bandwidth. It doesn't really have major requirements, as even XBO and PS4 are supported.

This is a bit disingenuous, don't you think?

You say that as if there is no functional difference between hardware with fast IO vs slow IO using Nanite. I know you have to know that isn't true.
 
In addition to the capability to stream assets from storage as quickly as necessary, does Nanite have other major hardware requirements?

The GPU needs to be able to handle the triangles that actually do get rendered on screen, which scales with resolution. In a hypothetical situation where, for instance, the Wii somehow had sufficient IO bandwidth, it wouldn't be able to handle the geometry (for high-poly assets).
 
This is a bit disingenuous, don't you think?

You say that as if there is no functional difference between hardware with fast IO vs slow IO using Nanite. I know you have to know that isn't true.
Is there a difference between a high-poly model with Nanite and a low-poly model with Nanite?
 
Is there a difference between a high-poly model with Nanite and a low-poly model with Nanite?

Yes, the triangle-to-pixel ratio will likely be different, with the more complex geometry being more likely to end up using 1 triangle per pixel. Since Nanite optimization is designed around the expectation that the ratio will be 1:1, using low-poly assets would end up wasting resources: Nanite would effectively be over-engineered for them, and the triangles wouldn't be able to effectively take advantage of the hierarchical structure.

From the Unreal 5 Documentation:

More specifically, a mesh is an especially good candidate for Nanite if it:

  • Contains many triangles, or has triangles that will be very small on screen
  • Has many instances in the scene
  • Acts as a major occluder of other Nanite geometry
  • Casts shadows using Virtual Shadow Maps


The difference in the triangle-to-pixel ratio will also affect how much data needs to be streamed, so hardware that's capable of rendering more geometry will be expected to stream more data, and that is where IO throughput becomes more important.
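To illustrate the hierarchical/1-triangle-per-pixel idea, here's a toy LOD-selection sketch: pick the coarsest level of a detail hierarchy whose triangles still land at roughly one per covered pixel. This is not Epic's actual cluster algorithm; the halve-per-level scheme and every number in it are made up, but it shows why the same source asset needs far fewer triangles, and therefore far less streamed geometry, when it covers fewer pixels.

```python
import math

def pick_lod(tri_count_lod0, screen_height_px, fov_y_deg, distance, object_size):
    """Toy virtualized-geometry LOD pick (illustrative only).

    Chooses the coarsest level whose triangles still project to roughly
    one pixel, assuming each level halves the triangle count.
    """
    # Approximate on-screen height of the object in pixels.
    projected_px = object_size / (2 * distance * math.tan(math.radians(fov_y_deg) / 2)) * screen_height_px
    target_tris = projected_px ** 2  # ~1 triangle per covered pixel
    level, tris = 0, float(tri_count_lod0)
    while tris / 2 >= target_tris:   # drop detail while we still hit the target
        tris /= 2
        level += 1
    return level, int(tris)

# The same 1M-triangle asset, close up vs. far away: the far instance needs
# only a few hundred triangles, and correspondingly little data streamed in.
print(pick_lod(1_000_000, 1080, 60, distance=2.0, object_size=1.0))   # (2, 250000)
print(pick_lod(1_000_000, 1080, 60, distance=50.0, object_size=1.0))  # (11, 488)
```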
 
Yes, the triangle-to-pixel ratio will likely be different, with the more complex geometry being more likely to end up using 1 triangle per pixel. Since Nanite optimization is designed around the expectation that the ratio will be 1:1, using low-poly assets would end up wasting resources: Nanite would effectively be over-engineered for them, and the triangles wouldn't be able to effectively take advantage of the hierarchical structure.

From the Unreal 5 Documentation: [list quoted above]
The difference in the triangle-to-pixel ratio will also affect how much data needs to be streamed, so hardware that's capable of rendering more geometry will be expected to stream more data, and that is where IO throughput becomes more important.
I already know all that; I'm coming from the perspective of the need for fast IO akin to other systems. Given Drake's other bottlenecks, supporting Nanite is the least of the system's problems (and one seemingly already answered anyway). I'm expecting the assets to be pared back from the higher fidelity models on Series/PS5, hence my question if there's a difference regarding the need for fast IO.
 
I already know all that; I'm coming from the perspective of the need for fast IO akin to other systems. Given Drake's other bottlenecks, supporting Nanite is the least of the system's problems (and one seemingly already answered anyway). I'm expecting the assets to be pared back from the higher fidelity models on Series/PS5, hence my question if there's a difference regarding the need for fast IO.

Mobile hardware has reached a point where the poly budget would be sufficient to handle the high-poly geometry it is being fed through Nanite (after Nanite, what's actually rendered on screen isn't really all that high, relatively speaking). With sufficient IO, I'm pretty confident that the geometry of my assets would not be noticeably pared down on powerful mobile hardware compared to PS5/XSX, even if the textures would be lower resolution and the shader complexity simplified. It would be worth it, imo.
 
In the meantime, we have some exciting news for those who might have missed it!

EIYUDEN CHRONICLE: HUNDRED HEROES IS COMING TO NINTENDO SWITCH​


That's right! Given recent speculation over the Nintendo Switch and potential next generation Nintendo iterations, we wanted to play it safe and investigate what options we had before fully committing to a Nintendo Switch version.

But now the wait is over and we're delighted to confirm that Eiyuden Chronicle: Hundred Heroes will also be landing on Nintendo Switch!

Sure smells like “we can now say with certainty that the new hardware is a powered up revision that is fully back compat, and Switch is still the platform for the foreseeable future”
 
Sure smells like “we can now say with certainty that the new hardware is a powered up revision that is fully back compat, and Switch is still the platform for the foreseeable future”
Or more like: most of our backers chose the Switch platform, so being forced to return money to all of them is not viable and it's better to just make a Switch port.
 
@Kenka @Hermii @Skittzo @ReddDreadtheLead

I think the points you've all made regarding indie development on a new Nintendo system are important to keep in mind.

Yes, not all indie devs will have the same expectations and ambitions. Yes, many indie devs may not even use UE5. Yes, some devs like me will not develop for a future Nintendo console that doesn't fully support Nanite. Yes, taking full advantage of UE5's features will ease development even for indies.

The problem is this:

If you all want a future hybrid Nintendo system to "close the gap" with the other consoles in terms of graphics, then you either need to hope such a system supports tech like Nanite, or you need to realign your expectations for what is technically possible with a hybrid console without virtualized geometry, because the difference in apparent geometry in games on PS5/XSX compared to a future Nintendo hybrid console is going to be massive if such a console doesn't support Nanite or something similar to it.
Does steam deck support nanite?
 
Mobile hardware has reached a point where the poly budget would be sufficient to handle the high-poly geometry it is being fed through Nanite (after Nanite, what's actually rendered on screen isn't really all that high, relatively speaking). With sufficient IO, I'm pretty confident that the geometry of my assets would not be noticeably pared down on powerful mobile hardware compared to PS5/XSX, even if the textures would be lower resolution and the shader complexity simplified. It would be worth it, imo.
I think a lot of nanite expectations are hinged on the "million poly assets", which never seemed realistic to me. I still posit that a lot of the data streaming that people envision is going to be more of a rarity, as far as amount of data per frame goes. Ratchet and Clank was an early example of this and that did quite well on slow (relatively) drives

Does steam deck support nanite?
yes and no. the drivers don't properly support it yet

 
I think a lot of nanite expectations are hinged on the "million poly assets", which never seemed realistic to me. I still posit that a lot of the data streaming that people envision is going to be more of a rarity, as far as amount of data per frame goes. Ratchet and Clank was an early example of this and that did quite well on slow (relatively) drives


yes and no. the drivers don't properly support it yet


The system does support it, then? It just needs its drivers updated?
 
Mobile hardware has reached a point where the poly budget would be sufficient to handle the high-poly geometry it is being fed through Nanite (after Nanite, what's actually rendered on screen isn't really all that high, relatively speaking). With sufficient IO, I'm pretty confident that the geometry of my assets would not be noticeably pared down on powerful mobile hardware compared to PS5/XSX, even if the textures would be lower resolution and the shader complexity simplified. It would be worth it, imo.
Mobile hardware being xbone-ps4 level in GPU at least, ammiright. That's the cut off 🤔
 
Or more like: most of our backers chose the Switch platform, so being forced to return money to all of them is not viable and it's better to just make a Switch port.
Did they even allow backers to choose a Nintendo platform? I remember them saying it won't come to Switch, so allowing backers to choose that as an option would be awful.

"Give us your money and we'll see you when Nintendo says so..."

Either way, it could mean development has been going well and they had more time to investigate/optimize for Switch... or they're also tired of waiting for Nintendo to release new hardware.
 
Does steam deck support nanite?
The hardware is compatible. No idea when it will have the driver support for it.
I think a lot of nanite expectations are hinged on the "million poly assets"

Of course, that's exactly what it was designed for. Why would developers not use Nanite the way it was intended? And why go through the trouble of baking normal maps or using more textures when you don't have to?
which never seemed realistic to me.
Nanite's specifications are entirely realistic. It never would have made it out of development if that weren't the case. Procedural generation in programs like Houdini allows devs to make high-poly assets without much human labor. That is how I've been able to make them. And now they can be used in projects without much hassle. It's completely realistic, which is the point.
I still posit that a lot of the data streaming that people envision is going to be more of a rarity
If by rarity you mean that AAA visuals in games are relatively rare compared to non-AAA visuals, then sure, but that's nothing new. For AAA devs, though, the hierarchical organization of data as implemented with Nanite is likely here to stay. It's just a smart way to handle resources.
Ratchet and Clank was an early example of this and that did quite well on slow (relatively) drives

That has nothing to do with virtualized geometry.
 
Did they even allow backers to choose a Nintendo platform? I remember them saying it won't come to Switch, so allowing backers to choose that as an option would be awful.

"Give us your money and we'll see you when Nintendo says so..."

Either way, it could mean development has been going well and they had more time to investigate/optimize for Switch... or they're also tired of waiting for Nintendo to release new hardware.
They allowed it but promised refunds if they couldn’t make a Switch port in the end.
 
Mobile hardware being xbone-ps4 level in GPU at least, ammiright. That's the cut off 🤔

Nah, current Switch hardware could handle geometry in the millions of polygons per frame just fine (albeit with some limitations on how the triangles were shaded). The storage just wouldn't be able to feed the GPU fast enough.
 
Of course, that's exactly what it was designed for. Why would developers not use Nanite the way it was intended? And why go through the trouble of baking normal maps or using more textures when you don't have to?
I never argued that, but to fill a game with hundreds of million-poly assets to be streamed in is what I'm questioning.

Nanite specifications are entirely realistic. It never would have made it out of the cutting room floor if that weren't the case. Procedural generation in programs like Houdini allow devs to make high poly assets without much human labor. That is how I've been able to make them. And now they can be used in projects without much hassle. It's completely realistic, which is the point.
I'm more referring to bespoke assets that can't always get by solely on procedural modeling. Sculpting, retopo, and UV-ing to the tune of a million polygons is getting to the level of movie-esque work and costs.

Nanite has other uses than just being able to fit million-polygon assets onto the screen, as it also handles asset compression. It's why Epic recommends it even if you aren't making very high-resolution models.
 
Nah, current Switch hardware could handle geometry in the millions of polygons per frame just fine (albeit with some limitations on how the triangles were shaded). The storage just wouldn't be able to feed the GPU fast enough.
Then what would be good storage speeds to potentially handle PS5/XSX ports (using comparable assets and shading on the Switch 2)?
 
but to fill a game with hundreds of million-poly assets to be streamed in is what I'm questioning
Nanite handles this just fine (granted the assets are good candidates for Nanite to begin with).
I'm more referring to bespoke assets that can't always get by solely on procedural modeling. Sculpting, retopo, and UV-ing to the tune of a million polygons is getting to the level of movie-esque work and costs.
To me, this seems less realistic, because the more polygons you have on-screen (which will likely come from the environment geometry), the more likely it is that the majority of those polygons can be generated procedurally. And keep in mind, procedural generation is really powerful these days with machine learning. You can get plenty of 'bespoke' assets with procedural generation. I'm not saying there aren't exceptions, but I don't think they represent the majority of cases in which you'd be working with high-poly assets/scenes. Procedural generation actually becomes more useful for larger-scale environments; smaller, more limited high-poly environments would likely require more manually authored assets.
Nanite has other uses than just being able to fit million-polygon assets onto the screen, as it also handles asset compression. It's why Epic recommends it even if you aren't making very high-resolution models.
Of course, but if you're not using high-resolution models, and the assets don't have lots of triangles/vertices, then the data is already going to be much smaller, and there will be less of a need to rely on Nanite asset compression.

Epic recommends using Nanite with most compatible assets because it simplifies the workflow and doesn't create this situation where developers are worrying too much about which assets to use with Nanite.
Then what would be good storage speeds to potentially handle PS5/XSX ports (using comparable assets and shading on the Switch 2)?

500 MB/s or more
 
Since the product lifecycle has been brought up again, I'd like to point out that the last time Furukawa declared the Switch being "at the mid-point of its lifecycle" was on Nov. 5, 2021. This talking point was absent from the Feb. 3, 2022 and May 10, 2022 earnings reports/Q&As. If we take Furukawa's words at face value and extrapolate:
  • Nov. 2021 was the mid-point of Switch lifecycle—4.5 years after the initial release.
  • The OG Switch may have a 9 year lifecycle (4.5*2=9).
  • If the Switch Next is coming out in Q1 2023—6 years after OG—there will be a 3 year cross-gen period (9-6=3).
Although a 3 year cross-gen support might seem long for the traditional consoles, it's the norm for mobile devices. Since iPhone 4 (2010) and iPad 2 (2011), the iOS support for previous models ranges from 3 years to 7 years, for example. That'd also give Nintendo/Nvidia up to 3 years of time to figure out how to shrink/cost-reduce the 12SM Drake for a cheaper hybrid and/or Lite model.

Edit: typo
 
Sure smells like “we can now say with certainty that the new hardware is a powered up revision that is fully back compat, and Switch is still the platform for the foreseeable future”
That's... smelling pretty deeply. Even if the next machine is called "Switch 2 No Bones About It This Is A True Separate Successor!" it would still be fully back compat, and it would still be preferable to sell to the massive earlier userbase where possible.
 
Shouldn't Nanite also scale with how many polygons the hardware can push out?

Does it really need 500 MB/s if it renders at 1080p/720p and not higher, like 1800p?
Don't think it's about resolution necessarily; it's about the fidelity that is shown, whether at 720p or 2160p.
 
Shouldn't Nanite also scale with how many polygons the hardware can push out?

Does it really need 500 MB/s if it renders at 1080p/720p and not higher, like 1800p?

There are over 2 million pixels in a 1080p frame. Nearly a million for a 720p frame. To take full advantage of Nanite, you need to consistently have one triangle per pixel, so you would need to be rendering that many triangles per frame. Keep in mind, these are just fractions of the actual polygon count of the assets. You're gonna need 500 MB/s in order to stream high-fidelity assets with that level of geometry per frame.
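Just to put rough numbers on that (the per-frame streaming figure below is an assumption for illustration, not measured Nanite data):

```python
# Pixel counts behind the 1-triangle-per-pixel target.
for name, (w, h) in {"720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}.items():
    print(f"{name}: {w * h:,} pixels -> ~{w * h:,} drawn triangles per frame")

# Sustained throughput depends on how much *source* asset data (geometry,
# textures, etc.) has to be pulled in per frame, which is far more than what
# ends up drawn. Purely as an assumed illustration:
assumed_mb_per_frame = 10   # new asset data streamed per frame (scene/camera dependent)
fps = 30
print(f"~{assumed_mb_per_frame * fps} MB/s sustained")  # ballpark of hundreds of MB/s
```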
 
There are over 2 million pixels in a 1080p frame. Nearly a million for a 720p frame. To take full advantage of Nanite, you need to consistently have one triangle per pixel, so you would need to be rendering that many triangles per frame. Keep in mind, these are just fractions of the actual polygon count of the assets. You're gonna need 500 MB/s in order to stream high-fidelity assets with that level of geometry per frame.
What about when using DLSS to output to a higher resolution? For example, internal render at 1080p, output to 4k. For attempting to maintain the 1 triangle:1 pixel ratio, is the target the # of pixels in the internal 1080p frame or the # of pixels in the output 4k frame? My guess is the internal rendering resolution, but I'm not all that confident in that.
 
What about when using DLSS to output to a higher resolution? For example, internal render at 1080p, output to 4k. For attempting to maintain the 1 triangle:1 pixel ratio, is the target the # of pixels in the internal 1080p frame or the # of pixels in the output 4k frame? My guess is the internal rendering resolution, but I'm not all that confident in that.

Yup, Nanite geometry would be rasterized at the base render resolution before DLSS upsampling.
 
The industry is moving to 4nm. The Qualcomm Snapdragon 8 Gen 1 is the first "4nm" chip we've seen, according to Qualcomm (see attached picture). But the "4nm" in the Snapdragon 8 Gen 1 may be the old definition of 4LPE.

What is the "old definition"? It is in the same process group as 7LPP: there is no change in transistor pitch, though it may have DTCO scaling tricks and/or performance benefits.
Since last year, Samsung Foundry's new roadmap has added a new 4LPE/4LPP, which has a pitch-scaling benefit and sits apart from 7LPP/5LPE/5LPP. The 4LPE/4LPP process generation is the last FinFET node before the GAA (Gate-All-Around) innovation.

My assumption is that the old-definition 4LPE is now 4LPX. Perhaps for political and historical reasons, Qualcomm wanted to keep the "4nm" badge on the Snapdragon 8 Gen 1, which was likely originally planned around the old 4LPE technology. It has no transistor-level scaling like the new 4LPE, but it may have a more mature process and some performance/DTCO density benefits.

TechInsights has completed a digital floorplan analysis and confirmed it has the same CPP, standard cell height, and SRAM cell size as 5LPE, and is working on a detailed process analysis to see if there are any differences from 5LPE.
Stay tuned.
 
We keep flip-flopping between optimism and pessimism, reading the tea leaves of media reports about Nvidia's product lines, sprinkled with the weekly troll posters derailing the thread.
At this stage I would prefer Nintendo just announce something, even if it's not exactly what we wanted, so we have something concrete to discuss.
 
We keep flip-flopping between optimism and pessimism, reading the tea leaves of media reports about Nvidia's product lines, sprinkled with the weekly troll posters derailing the thread.
At this stage I would prefer Nintendo just announce something, even if it's not exactly what we wanted, so we have something concrete to discuss.
Alternative approach: everybody can just chillax.
 
Please read this staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.

