• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

I think so too, but it is important to remember that these games do ultimately still need to scale down to portable mode. We tend to focus on the performance gap between the PS5/Series consoles and the docked performance of SNG, but ultimately these games also need to scale down to PS4 levels of performance. We are approaching the time when PS4/X1 support will be dropped for multiplatform games, but developers will still be tasked with supporting a very similar performance profile with SNG portable mode.
Portable mode is never going to have RAM, a CPU, or a feature set as far back as the PS4's. If the game works at all docked, getting it to work in portable mode is mostly a matter of visual degree.
We have mostly focused on DLSS in docked, but has anyone done the math for how long 1080p DLSS would take in portable mode given around 500 MHz?
However accurate it is, the DLSS calculator lets you set a speed and see what that would mean for 4K or 1080p.

But regardless of how accurate that is, at the same speed 1080p should take about 1/4 the amount of time as 4K. So if portable mode is half docked speed, it would take 1/2 the time of docked 4K. Or looked at another way, half-speed portable getting to 1080p would be similarly costly to full-speed docked getting to 1527p.
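To make that scaling argument concrete, here's a quick Python sketch. It uses the same assumption as the paragraph above: DLSS cost scales roughly linearly with output pixel count and inversely with GPU clock (an approximation, not a measured figure).

```python
# Back-of-the-envelope DLSS cost scaling. Assumes cost is
# proportional to output pixels and inversely proportional to clock.

def dlss_relative_cost(out_height, clock_ratio, ref_height=2160):
    """Cost relative to the reference resolution at full docked clock.

    out_height  -- vertical resolution of the DLSS output (16:9 assumed)
    clock_ratio -- GPU clock as a fraction of docked clock (e.g. 0.5)
    """
    pixel_ratio = (out_height / ref_height) ** 2  # 16:9 in both cases
    return pixel_ratio / clock_ratio

# 1080p at half the docked clock, versus 4K docked:
print(dlss_relative_cost(1080, 0.5))      # -> 0.5: half the docked-4K time

# Which docked resolution has the same cost? Solve pixel_ratio = 0.5:
print(round(2160 * 0.5 ** 0.5))           # -> 1527, i.e. "1527p"
```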
 
No, I agree with you, but there are a lot of people out there who don't. And I don't know about you, but I'd rather not go another 7 years listening to people complain about the lack of native voice chat or the lack of themes every system update because the RAM allotted to the OS is so low.
Switch lacking themes/folders and no media player/streaming service was a conscious decision by Nintendo; they wanted Switch to be solely for games, a pure gaming console, not an entertainment center as they did with Wii U.
 
If they do 16 instead of 12, how much more expensive would it be? If it makes it like 50 dollars more expensive, I don't think they will do it, but if it's cheap they might.
I'm not an expert but my two cents is....Even if it's $4 more expensive, they are going to sell 100m of these things. That's increasing your expenses by $400m, so is that expense worth the extra power for developers? Profit margins on hardware are slim to none at launch so the difference between 12GB of RAM and 16GB of RAM could be the difference between profitable day 1, or losing money day 1.
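The arithmetic in that post, spelled out (the $4 per-unit delta and 100m lifetime units are the post's assumptions, not known figures):

```python
# Rough lifetime-cost arithmetic for a RAM bump, using the post's numbers.
extra_cost_per_unit = 4           # dollars, assumed 12GB -> 16GB delta
lifetime_units = 100_000_000      # assumed lifetime hardware sales

total_extra = extra_cost_per_unit * lifetime_units
print(f"${total_extra / 1e6:.0f}M")   # -> $400M
```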
The consensus is nuts.
You're in the minority here.
As far as I know, 12 GB of RAM is still the consensus based on what necrolipe said.
Necrolipe is legit, so I accept it.
12GB RAM consumer and 16GB Devkit was what Necrolipe shared, yes. What people speculate is if Nintendo will bump the RAM amount for consumer Switch 2, not too dissimilar to what they did with 3DS.
Yeah, and I think people are mixing that up and drinking the Orin Kool-Aid. It's going to be 12. Even if the OS reserves 2GB, that still leaves 10GB of memory, which is 2GB more than the usable RAM on the Series S. More than enough for 1080p gaming with DLSS.
 
When it comes to the use of advanced lighting and AI techniques in anime, I actually would want this:
While considering what artstyle experimentations Nintendo could go with for the next Zelda, I began to think of a Studio Ghibli kind of look... but then my brain jumped to how awkward mimicking 2d animation with 3d models can get and then I had an epiphany!

You could use the tensor cores and AI to examine each frame and pop out an animated frame. Something like this only better because they will have more control over the input and the training of the AI that generates the output.



And another example:


Honestly, it's quite exciting what the possibilities are. Forget DLSS... we know Nintendo is already working on their own version of it; perhaps it's art-style driven?

more than this:
yea, it definitely can work. there's nothing about ray tracing that works against the effect of anime. while RT can simulate light, it can pretty much break all the rules as well

I'm trying to find the tech talk Unity hosted where they talked about getting ray traced shadows in an anime feature

until then, here's a hobbyist made example



FAKE EDIT: found it





the youtube channel I posted had a much more recent example of 3D anime with ray tracing (shadows and global illumination)


And I don't mean the generative AI being used in that first video, but rather the fact that it's reconstructing individual frames in a 2D animation, and that comes closer to the true problem anime has had since the 80's. The extremely fluid and engaging hand-drawn movement best exemplified in Studio Ghibli movies (though they are far from the only examples of it) is what most of the industry's long-time animators would like to do, if only it weren't such an extremely time-consuming process.

That last part is the problem. A common complaint of long-time anime viewers is that the older hand-drawn techniques from the 80's and 90's have died out, but that isn't entirely true. Even now, most 2D anime (and thankfully, most anime is still 2D) has at least its key frames manually drawn. It's all the other in-between frames that get drawn with a combination of CG and other techniques, because the alternative would be to hand-draw all those in-between frames, which again is absurdly time-consuming.

And yet somehow that's what a lot of anime did a few decades ago (and Studio Ghibli still does to this day). What anime really needs is specialized AI reconstruction tools, something very similar to DLSS but trained on specific forms of animation, so that it can draw in a series of unique frames between the key frames drawn by animators. Unfortunately, that's still not something I've seen a lot of, and it would be particularly bad if, instead of finding a solution to this problem, the industry just moves into fully rendered 3D models.
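For reference, machine in-betweening at its most basic is just interpolation between key poses. Here's a minimal linear-tween sketch (the pose names are made up for illustration); the point of the specialized AI tools described above would be to produce something better than this mechanical result:

```python
def tween(key_a, key_b, t):
    """Linear in-between of two keyframe poses.

    key_a, key_b -- dicts mapping control names to values
    t            -- 0.0 at key_a, 1.0 at key_b
    """
    return {name: (1 - t) * key_a[name] + t * key_b[name] for name in key_a}

# Two hand-drawn key poses, four machine-generated in-betweens:
pose_a = {"arm": 0.0, "head": 10.0}
pose_b = {"arm": 90.0, "head": -10.0}
inbetweens = [tween(pose_a, pose_b, i / 5) for i in range(1, 5)]
```

The common criticism is that this kind of even, linear motion is exactly what reads as lifeless next to hand-drawn in-betweens, which is why the problem isn't considered solved by simple tweening.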

This would also greatly benefit games with unique 2D artstyles like Cuphead, which notoriously took a very long time to make because of how hard it was to draw. If there were more AI development in this space, I could see Nintendo coming up with some games with very unique visuals. Or at least more 2D cutscenes like the ones used in the Switch remake of Link's Awakening (the game has too few of them):

 
Switch lacking themes/folders and no media player/streaming service was a conscious decision by Nintendo; they wanted Switch to be solely for games, a pure gaming console, not an entertainment center as they did with Wii U.
Streaming services came anyway; that isn't dictated by system updates. Folders arguably came with the introduction of Groups.
Most themes (beyond simple color swaps) and voice chat are things that require more RAM dedicated to the OS. Yes, it was a conscious choice by Nintendo to omit those things, because they wanted the OS to be fast and responsive with the limited amount of RAM they had to work with.
 
Maybe I'm totally off-base on this, but I think most people would rather see advancements in the coordination, and coordination tools, of animators to allow many more people to work concurrently, rather than just replacing tweening with a computer.
 
You expect eMMC and not UFS? Lol
what? no, I don't know how you got that out of my post

When it comes to the use of advanced lighting and AI techniques in anime, I actually would want this:

more than this:


And I don't mean the generative AI being used in that first video, but rather the fact that it's reconstructing individual frames in a 2D animation, and that comes closer to the true problem anime has had since the 80's. The extremely fluid and engaging hand-drawn movement best exemplified in Studio Ghibli movies (though they are far from the only examples of it) is what most of the industry's long-time animators would like to do, if only it weren't such an extremely time-consuming process.

That last part is the problem. A common complaint of long-time anime viewers is that the older hand-drawn techniques from the 80's and 90's have died out, but that isn't entirely true. Even now, most 2D anime (and thankfully, most anime is still 2D) has at least its key frames manually drawn. It's all the other in-between frames that get drawn with a combination of CG and other techniques, because the alternative would be to hand-draw all those in-between frames, which again is absurdly time-consuming.

And yet somehow that's what a lot of anime did a few decades ago (and Studio Ghibli still does to this day). What anime really needs is specialized AI reconstruction tools, something very similar to DLSS but trained on specific forms of animation, so that it can draw in a series of unique frames between the key frames drawn by animators. Unfortunately, that's still not something I've seen a lot of, and it would be particularly bad if, instead of finding a solution to this problem, the industry just moves into fully rendered 3D models.

This would also greatly benefit games with unique 2D artstyles like Cuphead, which notoriously took a very long time to make because of how hard it was to draw. If there were more AI development in this space, I could see Nintendo coming up with some games with very unique visuals. Or at least more 2D cutscenes like the ones used in the Switch remake of Link's Awakening (the game has too few of them):


it sounds like you're describing tweening
 
Maybe I'm totally off-base on this, but I think most people would rather see advancements in the coordination, and coordination tools, of animators to allow many more people to work concurrently, rather than just replacing tweening with a computer.
Hi, professional animator here

Yes! You are right.
Thanks!
 
I'm not convinced either of those ARE RAM issues, since the 3DS with far less RAM had themes, and the DS with EXTREMELY LITTLE RAM had VC in some games. To me, they're probably a matter of priority: prioritising the mobile app over built-in social features, and prioritising consistency and speed over themes.
You're comparing apples to oranges here.
 
When it comes to the use of advanced lighting and AI techniques in anime, I actually would want this:

more than this:


And I don't mean the generative AI being used in that first video, but rather the fact that it's reconstructing individual frames in a 2D animation, and that comes closer to the true problem anime has had since the 80's. The extremely fluid and engaging hand-drawn movement best exemplified in Studio Ghibli movies (though they are far from the only examples of it) is what most of the industry's long-time animators would like to do, if only it weren't such an extremely time-consuming process.

That last part is the problem. A common complaint of long-time anime viewers is that the older hand-drawn techniques from the 80's and 90's have died out, but that isn't entirely true. Even now, most 2D anime (and thankfully, most anime is still 2D) has at least its key frames manually drawn. It's all the other in-between frames that get drawn with a combination of CG and other techniques, because the alternative would be to hand-draw all those in-between frames, which again is absurdly time-consuming.

And yet somehow that's what a lot of anime did a few decades ago (and Studio Ghibli still does to this day). What anime really needs is specialized AI reconstruction tools, something very similar to DLSS but trained on specific forms of animation, so that it can draw in a series of unique frames between the key frames drawn by animators. Unfortunately, that's still not something I've seen a lot of, and it would be particularly bad if, instead of finding a solution to this problem, the industry just moves into fully rendered 3D models.

This would also greatly benefit games with unique 2D artstyles like Cuphead, which notoriously took a very long time to make because of how hard it was to draw. If there were more AI development in this space, I could see Nintendo coming up with some games with very unique visuals. Or at least more 2D cutscenes like the ones used in the Switch remake of Link's Awakening (the game has too few of them):


You know it could be something as simple as the "ink" layer... as in the outlines around things. They could render the 3d scene very flatly and then the AI generated outlines/inking gets layered over the top.

There are certainly some interesting, perhaps incredible, opportunities to arise in this medium! Very excited for the future.
 
Maybe I'm totally off-base on this, but I think most people would rather see advancements in the coordination, and coordination tools, of animators to allow many more people to work concurrently, rather than just replacing tweening with a computer.
You're not off-base here. That's definitely another thing the industry is in desperate need of. Also something AI could help with in terms of the coordination tools, ironically. I wish the investment side of things would view any kind of AI more as something that could stand side by side with human work instead of just outright replacing them, but that's also a whole can of worms that extends well beyond the scope of this thread.

it sounds like you're describing tweening
Yes, but I'm specifically saying that the current tool set in that regard is lacking.

You know it could be something as simple as the "ink" layer... as in the outlines around things. They could render the 3d scene very flatly and then the AI generated outlines/inking gets layered over the top.

There are certainly some interesting, perhaps incredible, opportunities to arise in this medium! Very excited for the future.
Agreed, that's another technique I really like, similar to what Arc System Works does with their games, to use a non-animation example.
 
You know it could be something as simple as the "ink" layer... as in the outlines around things. They could render the 3d scene very flatly and then the AI generated outlines/inking gets layered over the top.

There are certainly some interesting, perhaps incredible, opportunities to arise in this medium! Very excited for the future.
we have shaders that do that already

Yes, but I'm specifically saying that the current tool set in that regard is lacking.
all the work you could put into training an AI, could be put into designing a custom tweening tool
 
You're not off-base here. That's definitely another thing the industry is in desperate need of. Also something AI could help with in terms of the coordination tools, ironically. I wish the investment side of things would view any kind of AI more as something that could stand side by side with human work instead of just outright replacing them, but that's also a whole can of worms that extends well beyond the scope of this thread.


Yes, but I'm specifically saying that the current tool set in that regard is lacking.
Sorry bud but you really have no idea what you’re talking about here haha. I’ve been a professional animator for almost a decade and like… “current tool set in that regard is lacking?” when talking about tweening? lmao

Yeah, Maya already does this for us, with pretty comprehensive tools on our end to adjust the curves to be exactly what we need. The odd case of gimbal lock here and there notwithstanding (and we have a tool, the Euler filter, to fix that for us), tweening on computers is a solved problem.

What I think you're really wanting here are better rigs and better shaders. If something is animated frame-by-frame like Cuphead, it's because they're specifically going for that as a stylistic choice, and using AI to generate frames betrays that choice for the sake of convenience. If you want "smooth" animation, you use rigs and let the computer do the work. Modern TV animation, 2D or 3D, uses rigs like this. "Choppiness," such as animating on 2s or whatever, is always a creative choice.

And let me tell you: AI ain’t the cure-all a lot of folks seem to think it is lmao
 
all the work you could put into training an AI, could be put into designing a custom tweening tool
Sorry bud but you really have no idea what you’re talking about here haha. I’ve been a professional animator for almost a decade and like… “current tool set in that regard is lacking?” when talking about tweening? lmao

Yeah, Maya already does this for us, with pretty comprehensive tools on our end to adjust the curves to be exactly what we need. The odd case of gimbal lock here and there notwithstanding (and we have a tool, the Euler filter, to fix that for us), tweening on computers is a solved problem.

What I think you're really wanting here are better rigs and better shaders. If something is animated frame-by-frame like Cuphead, it's because they're specifically going for that as a stylistic choice, and using AI to generate frames betrays that choice for the sake of convenience. If you want "smooth" animation, you use rigs and let the computer do the work. Modern TV animation, 2D or 3D, uses rigs like this. "Choppiness," such as animating on 2s or whatever, is always a creative choice.

And let me tell you: AI ain’t the cure-all a lot of folks seem to think it is lmao
I'm only basing that first opinion on anecdotes from friends who worked in the industry, but what they told me is that tweening tools themselves are very time-consuming to create, and are very inflexible in terms of how many different styles of animation you can use them in, or even in terms of the scenes you can use them in within a specific animation.

My point was the time-consuming part of the equation, which is a similar problem to ray tracing vs. other lighting solutions. You can certainly do almost everything with other available lighting solutions, but designing around ray tracing allows developers more time to focus on the million other things they need to do, similar to what I think AI tools could do for animators.

Of course, it's not a cure-all. But no solution is. My only conclusion was that more development in the space would be helpful.
 
What type of speed are we to expect?

It kind of depends on the expansion storage and game card speed. You can't have a big difference between the three, so you're limited by the slowest one. Internal storage, we can be pretty sure, is UFS 3.1 or better, so around 1.2 GB/s. But if they're sticking with running games off standard SD cards, then they're limited to around 100 MB/s. The Switch game cart is 30 MB/s, I believe, and I'm expecting the new cart to be faster, but how much faster is hard to say.
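The "limited by the slowest one" logic, as a quick sketch (the speeds are the estimates from the post above, not confirmed figures):

```python
# Effective streaming budget is gated by the slowest medium a game
# may run from. All speeds are the post's rough estimates, in MB/s.
media_speeds_mb_s = {
    "internal_ufs_3_1": 1200,   # ~1.2 GB/s internal storage
    "sd_card": 100,             # standard SD card
    "game_card": 30,            # current Switch cart estimate
}

# A game that must run acceptably from any of the three can only assume:
budget = min(media_speeds_mb_s.values())
print(budget)  # -> 30
```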
 
I'm not convinced either of those ARE RAM issues, since the 3DS with far less RAM had themes, and the DS with EXTREMELY LITTLE RAM had VC in some games. To me, they're probably a matter of priority: prioritising the mobile app over built-in social features, and prioritising consistency and speed over themes.
I think in the case of voice chat it's more prioritizing the prevention of someone shouting racial slurs at a kindergartener.
 
personally, I'm thinking 400 MB/s to 500 MB/s. even eMMC can get into the 300 MB/s range
Is that a sizeable enough upgrade to shorten the gap between modern platforms a little bit?

Also is UFS 4.0 possible? Too small to search on fami :/
 
And I don't mean the generative AI being used in that first video, but rather the fact that it's reconstructing individual frames in a 2D animation, and that comes closer to the true problem anime has had since the 80's. The extremely fluid and engaging hand-drawn movement best exemplified in Studio Ghibli movies (though they are far from the only examples of it) is what most of the industry's long-time animators would like to do, if only it weren't such an extremely time-consuming process.

That last part is the problem. A common complaint of long-time anime viewers is that the older hand-drawn techniques from the 80's and 90's have died out, but that isn't entirely true. Even now, most 2D anime (and thankfully, most anime is still 2D) has at least its key frames manually drawn. It's all the other in-between frames that get drawn with a combination of CG and other techniques, because the alternative would be to hand-draw all those in-between frames, which again is absurdly time-consuming.
In-between frames are almost always drawn by human hands in anime. Look up any recent 2D anime on ANN and ctrl+f "in-between" to see all of the different companies the work gets outsourced to. When it's not, and is instead tweened with software like Adobe Animate, it's usually pretty easy to tell; Masaaki Yuasa's studio Science Saru has leaned heavily into the aesthetic, so it's a great reference point (look up Lu Over the Wall to see what I mean). In recent years an increasing number of animators use graphics tablets to draw digitally, but that's still hand drawn.

What makes classic Ghibli films (for example) look so good isn't just that they're hand-drawn; that's still common. It's the skill of the animators and directors, and the number of cels per second. Mostly the skill: if you go frame-by-frame, most of a movie like Porco Rosso or Totoro is drawn on 3's (8 new cels per second), with occasional cuts on 2's (12 new cels per second) when a movement requires special smoothness or detail. That's higher than an average television series scene, but not shockingly so.
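The cel-count arithmetic above, as a quick sketch (24 film frames per second is the standard assumption):

```python
# "On 2's" / "on 3's": each new drawing is held for 2 or 3 of the
# 24 film frames per second.
FILM_FPS = 24

def new_cels_per_second(held_frames):
    """How many unique drawings appear per second of film."""
    return FILM_FPS // held_frames

print(new_cels_per_second(3))  # -> 8  (on 3's)
print(new_cels_per_second(2))  # -> 12 (on 2's)
print(new_cels_per_second(1))  # -> 24 (on 1's, full animation)
```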

To bring my post back on topic: definitely interested in seeing whether any developers will train a deep-learning algorithm to run on the Tensor cores for the sake of making a cel-shaded game that looks more convincingly hand-drawn without requiring a massive investment in tweaking the model itself (like in Guilty Gear). I'm imagining something like having outlines drawn with varying thickness depending on various criteria, or having it work on just the shadows across the face. The former could be done on the CPU but would be somewhat intensive if you're applying it to too many lines at once (whether a complicated design or a number of objects), and the latter seems like a perfect use case for the technology. ML still seems like magic to me though, so perhaps my idea makes little or less sense.
 
I'm not an expert but my two cents is....Even if it's $4 more expensive, they are going to sell 100m of these things. That's increasing your expenses by $400m, so is that expense worth the extra power for developers? Profit margins on hardware are slim to none at launch so the difference between 12GB of RAM and 16GB of RAM could be the difference between profitable day 1, or losing money day 1.

You're in the minority here.

Necrolipe is legit, so I accept it.

Yeah, and I think people are mixing that up and drinking the Orin Kool-Aid. It's going to be 12. Even if the OS reserves 2GB, that still leaves 10GB of memory, which is 2GB more than the usable RAM on the Series S. More than enough for 1080p gaming with DLSS.
Wait man, how are they going to produce 100 million consoles in a short time?
And then the cost of RAM will decrease over the years.

Nintendo has two choices:
1) Save money: have enough RAM for launch, but be limited in the future.
2) Spend more: have so much RAM that it will still be enough in the future.

Obviously I'm not saying that 12GB is too little, but you have to consider that Nintendo wants a lot of third-party support, and surely this time they want the next console to be future-proof.
 
I'm not an expert but my two cents is....Even if it's $4 more expensive, they are going to sell 100m of these things. That's increasing your expenses by $400m, so is that expense worth the extra power for developers? Profit margins on hardware are slim to none at launch so the difference between 12GB of RAM and 16GB of RAM could be the difference between profitable day 1, or losing money day 1.
But you can also look at this from a different perspective: a slight increase in power, even if slightly costly, should be worth it for the console's longevity and for getting more demanding games running on the platform. Imagine if, due to less-than-adequate specs, they can't get something like the next Final Fantasy or Kingdom Hearts, games that would 100% sell well on Nintendo's platform and would generate millions in revenue. Nintendo's revenue comes mainly from software; hardware profit is just the icing on the cake/a cherry on top.
 
It might have been a genius move for Nintendo to distance themselves from the competition, but releasing their console much later than Xbox's and PlayStation's (4 years) guarantees that it isn't going to last the usual 7 years that make up a generation with third-party games, and doing a Pro later down the line removes the potential benefit one would have in porting to a Nintendo platform: install base, as that is going to be much smaller than the OG version's.
 
Is that a sizeable enough upgrade to shorten the gap between modern platforms a little bit?

Also is UFS 4.0 possible? Too small to search on fami :/
maybe? UFS 3.1 can get up to 2.1 GB/s sequential. the limiting factor will be how much power Nintendo will pump through it. the FDE could cut the required speed by a lot, at least

UFS 4.0 is most likely too new to be used, since it's only now getting into devices
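A rough sketch of why a hardware decompression engine (the FDE mentioned above) cuts the required raw read speed; the 2:1 compression ratio below is a placeholder assumption, not a known figure:

```python
# If data is decompressed in hardware as it streams in, the raw read
# speed needed drops by the compression ratio.

def required_read_speed(target_mb_s, compression_ratio):
    """Raw storage speed needed to deliver target_mb_s of decompressed data."""
    return target_mb_s / compression_ratio

# To feed 1000 MB/s of game data at a hypothetical 2:1 ratio:
print(required_read_speed(1000, 2.0))  # -> 500.0
```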
 
and doing a Pro later down the line removes the potential benefit one would have in porting to a Nintendo platform: install base, as that is going to be much smaller than the OG version's.
Nintendo already learned that lesson with the New 3DS. It was too substantial an upgrade, which meant that developers either designed for the original 3DS, leaving the extra power of the New 3DS on the table, or designed for the New 3DS, which meant a butchered experience on the original. Since the original had a much larger install base, it led to vanishingly few New 3DS exclusives.
 
You're comparing apples to oranges here.
I'm really not. Voice chat takes RAM, yes, but the human ear is terribly good at hearing human voices, no matter how compressed, how crummy, how screechy. You can shout a message down a copper phone line bitten by rats and lying in stagnant water and still have a chance that the person on the other end understands what you're saying. A low-RAM voice chat solution is possible - if NINTENDO finds the quality palatable.

The Nintendo 3DS had a relatively slow OS that loaded things in and out when you opened the Home Menu, but all of the 3DS' RAM combined is less than half the amount that just the Nintendo Switch's OS takes up.
 
I'm not an expert but my two cents is....Even if it's $4 more expensive, they are going to sell 100m of these things. That's increasing your expenses by $400m, so is that expense worth the extra power for developers? Profit margins on hardware are slim to none at launch so the difference between 12GB of RAM and 16GB of RAM could be the difference between profitable day 1, or losing money day 1.

You're in the minority here.

Necrolipe is legit, so I accept it.

Yeah, and I think people are mixing that up and drinking the Orin Kool-Aid. It's going to be 12. Even if the OS reserves 2GB, that still leaves 10GB of memory, which is 2GB more than the usable RAM on the Series S. More than enough for 1080p gaming with DLSS.
Good. More credit for me when it becomes a reality.
 
But does it really even need to match the texture quality of current gen? I personally don't think so, considering it's gonna be 3-4x weaker than the PS5/Xbox Series X in GPU alone, and we're gonna get down-ports of them. DLSS will help with resolution, but why run 4K-quality textures for what might be 1080p to 1440p DLSS'd in docked mode?

The more RAM the merrier, but even I think it would be bizarre if we matched them in RAM amount 😅. I just don't think we need to. RAM bandwidth is more important after 12GB.

Gonna be interesting how much meatier the OS will be. Lic's proposal of 1.5GB used for OS and video sounds about right, with 10.5 for games.
Yeah, that's why I'd call that insane. It's literally enough RAM to have 4K textures on essentially all their games and still leave some for a beefy OS. 12GB would be more than enough, but in the hypothetical case they did bring in all the memory from the devkits, we'd be looking at equal texture quality being resolved by DLSS more or less properly. Could be awesome to see, but also pointless in many ways.
 
then make your own? they're shaders, they do what you want them to do
My man, are you serious right now? 😂 my point is that so far I've not seen it done very convincingly. Honestly, even when 3D models are used in anime, they usually stick out like a sore thumb. Nothing matches an artist's touch on every (2s or 3s or whatever they're doing lol) frame. Integration of AI image generation into the rendering pipeline, to whatever extent, could lead to new interesting art styles, even if never matching an animated aesthetic.
 
What makes classic Ghibli films (for example) look so good isn't just that they're hand-drawn; that's still common. It's the skill of the animators and directors, and the number of cels per second. Mostly the skill: if you go frame-by-frame, most of a movie like Porco Rosso or Totoro is drawn on 3's (8 new cels per second), with occasional cuts on 2's (12 new cels per second) when a movement requires special smoothness or detail.
The quality of individual cels per second is a (far better-put) term for the problem I was trying to highlight. The skill of those top animators is in being able to add so much individual detail in such a small space, one that normally wouldn't receive such effort.

Or to say it in yet another way, the knowledge that a similar quality of animation will take a very long time, and a lot of effort, is what often keeps it from happening. There's a concept that is sometimes brought up in software design, called the principle of good enough, which states that consumers will use products that are just good enough for their requirements, even if there are more advanced products, with more advanced technology, available.

It's debatable as to how true the concept is, but it is one that, based on how many similar conversations exist about the current quality of art and entertainment media, seems to have made its way into many different fields, like movies, video games, and animation. In animation, what constitutes just "good enough" is also a matter of debate, but my sense is that most people consider pure 3D animation as closer to the good enough side of things instead of a more ideal style to strive towards, at least on the anime side of things. 3D animation in the West is often seen in a far better light, which of course implies that this is far more about the quality of what is being done with the 3D, as 3D anime often feels like the 3D was chosen for the purposes of saving money more than anything else (hence, good enough).

I only brought up AI because, on top of it being a continuation of previous conversations, I wonder if there will eventually be tools that can push the feasibility of more things that are closer to that Ghibli style. My underlying fear is that instead of that happening, things will just head more towards the good enough side.
 
Portable mode is never going to have RAM, a CPU, or a feature set as far back as the PS4's. If the game works at all docked, getting it to work in portable mode is mostly a matter of visual degree.
This is what I believe is going to happen with most current-gen ports this generation, even for some EPD-produced games that want to be technical marvels. They will be designed to run docked first and foremost, which would be the closest mode to the XSS, and then handheld mode will be neutered to oblivion resolution-wise. Thanks to DLSS and modern upscalers, most people probably aren't going to tell the difference anyway, and as long as only GPU clocks are affected between modes... it should be very doable.
 
My man, are you serious right now? 😂 My point is that so far I've not seen it done very convincingly. Honestly, even when 3D models are used in anime, they usually stick out like a sore thumb. Nothing matches an artist's touch on every frame (on 2s or 3s or whatever they're doing lol). Integration of AI image generation into the rendering pipeline, to whatever extent, could lead to interesting new art styles, even if never matching a hand-animated aesthetic.
I'm not sure why you think AI is the solution here. if you want that 2s and 3s look with 3D, you animate on 2s and 3s. 3D models in anime stick out like a sore thumb because they're two different mediums and little work is done to make them blend. if you want a more convergent style between the two, you put in the work to make them mesh. just slapping on a single-color diffuse and hard shadows isn't ever gonna make it look 2D on its own

and by definition, AI doesn't create new styles, because it's all copying existing styles. it can't even apply a style consistently, because it's generating a new frame every time. you actually get more consistency out of making your own shaders.
 
If you don’t care, then this is a pointless discussion. :p
I’m baffled at how you read that sentence when it has a clearly identified subject. I don’t care about AMD’s enterprise tech, for the reasons specified (that it’s made totally irrelevant to our interest in the subject by AMD’s total seeming disinterest in bringing that tech to the consumer space any time within the next 5-6 years at least).
what? no, I don't know how you got that out of my post
Probably the intensely modest speed increase you expect, when all real-world testing of UFS in actual devices shows even UFS2.1 can double sequential and random read speeds over eMMC 5.1 (or more) with less power consumption.
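For a rough sense of what doubling read throughput means in practice, here's a back-of-the-envelope sketch. The ~250 MB/s (eMMC 5.1-class) and ~500 MB/s (UFS 2.1-class) sequential figures and the 4 GB load are illustrative assumptions, not measurements from any real device:

```python
# Back-of-the-envelope load-time comparison at illustrative throughput figures.
# ~250 MB/s for eMMC 5.1 and ~500 MB/s for UFS 2.1 sequential reads are rough
# ballpark numbers, not specs of any particular part.

def load_time_s(asset_mb: float, throughput_mb_s: float) -> float:
    """Seconds to stream a contiguous asset of the given size."""
    return asset_mb / throughput_mb_s

ASSET_MB = 4000  # hypothetical 4 GB initial load

print(load_time_s(ASSET_MB, 250))  # eMMC-class: 16.0 s
print(load_time_s(ASSET_MB, 500))  # UFS-class:   8.0 s
```

Random-read latency matters at least as much for streaming open worlds, but the sequential case alone shows why "intensely modest" undersells the jump.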
 
how can he be so certain indies are working on the Switch successor right now? this could also indicate another console
I mean yeah but like which console?

Sony and Microsoft have had their machines out for a little less than three years now. PlayStation 5 is likely to get an enhanced "Pro" refresh soon but definitely not a full successor, and the next Xbox generation is also due around 2028, along with the PS6.

What could they possibly mean apart from Nintendo hardware?
 


What it do bruh?

I don't think the part of the profile that mentions "TBA for groundbreaking new hardware" has been updated for a long time. The games it lists as "current projects" were all released between 2018 and 2020 (except Once Upon a Coma, which is a Kickstarter game that may still not be out?), and the latest game listed under "released projects" came out in August 2018. I wouldn't be surprised if the "new hardware" they meant at the time they put that in there was the Switch itself.

The experience section with "unannounced UE5 project" is up to date, but that one doesn't mention anything about new hardware.
 
I don't think the part of the profile that mentions "TBA for groundbreaking new hardware" has been updated for a long time. The games it lists as "current projects" were all released between 2018 and 2020 (except Once Upon a Coma, which is a Kickstarter game that may still not be out?), and the latest game listed under "released projects" came out in August 2018. I wouldn't be surprised if the "new hardware" they meant at the time they put that in there was the Switch itself.

The experience section with "unannounced UE5 project" is up to date, but that one doesn't mention anything about new hardware.
Yeah unless we can prove that that section was updated recently, it’s probably referring to the Xbox Series/PS5/an existing platform as of 2023.
 
Yeah, that's why I'd call that insane. It's literally enough RAM to have 4K textures in all their games and still leave some for a beefy OS. 12 GB would be more than enough, but in the hypothetical case they did bring in all the memory from the devkits, we'd be looking at equal texture quality being resolved by DLSS more or less properly. Could be awesome to see, but also pointless in many ways.
The only thing DLSS has to do with texture quality is that developers sometimes screw up the settings, so it uses lower-res versions of the textures than it should.
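For anyone wondering what "the settings" means here: upscalers like DLSS render internally at a lower resolution, so the engine is expected to apply a negative texture LOD bias so mips are sampled as if rendering at the output resolution. A common rule of thumb is roughly log2(render width / display width); the exact extra offset varies by engine, so treat this as an approximation, not any vendor's exact formula:

```python
import math

def dlss_mip_bias(render_width: int, display_width: int) -> float:
    """Negative LOD bias so textures are sampled as if rendering at display res.
    Approximation only; engines typically add their own small offset on top."""
    return math.log2(render_width / display_width)

# 1080p internal render upscaled to 4K output:
print(dlss_mip_bias(1920, 3840))  # → -1.0 (one mip level sharper)
```

Skip this bias (or leave it at 0) and the upscaled image samples mips tuned for the internal resolution, which is exactly the blurry-texture screwup being described.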
 
Portable mode is never going to have RAM, CPU, feature set as far back as PS4.
They will be designed to run docked first and foremost which would be closest mode to XSS, and then handheld mode will be neutered to oblivion resolution-wise.
Portable is the stronger mode per pixel, though? I mean, targeting docked and then adjusting the resolution down is a sane strategy for that very reason, but I don't understand why folks think this way.

Handheld will have the same CPU and RAM as docked while targeting a lower resolution. Handheld might have lower memory bandwidth, but again, with lower-res assets, the bandwidth per pixel is likely to be disproportionately high. Yes, the GPU will be downclocked, but assuming a 2x gap, the power per pixel will be higher in handheld than in docked, and scaling artifacts will generally be more visible on the larger screen.

There will be games that prefer one over the other, and there is no way to eliminate that. But the odds are there will be more preferring handheld mode.
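A quick back-of-the-envelope illustration of the per-pixel point. The 1000 MHz docked and 500 MHz handheld clocks are hypothetical placeholders for a 2x gap, not confirmed specs:

```python
# Relative GPU work available per pixel per frame, docked vs. handheld.
# Clocks are hypothetical placeholders illustrating an assumed 2x gap.

def per_pixel_budget(gpu_clock_mhz: float, width: int, height: int) -> float:
    """Relative GPU throughput divided across the output pixels (arbitrary units)."""
    return gpu_clock_mhz / (width * height)

docked = per_pixel_budget(1000, 3840, 2160)   # full clock, 4K output
handheld = per_pixel_budget(500, 1920, 1080)  # half clock, a quarter of the pixels

# Half the clock but a quarter of the pixels => ~2x the per-pixel budget.
print(handheld / docked)  # → 2.0
```

Same CPU, same RAM, and twice the GPU time per pixel is why "handheld is the weak mode" only holds if you measure raw throughput instead of throughput per pixel drawn.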
 
so capcom went gigachad and is hosting a lot of development videos on RE Engine, including future updates. they are covering a lot and are getting into the nitty gritty

for instance, future Capcom games will support ray tracing and mesh shaders. so if you're on pre-Turing/pre-RDNA2 on PC, you're gonna have a bad time.

exciting times for engines thanks to new hardware

 
Nope. A single-cluster A78C configuration isn't used anywhere else.

Especially not one geared for gaming.
I know A78s have been in phones for a while. What do A78C cores do differently?
With phones going for 1 big core, 3 high-end cores, and 4 smaller cores, wouldn't that actually work better for gaming, where perhaps you want to run stuff on a big powerful core and then distribute other tasks to the smaller ones?
 
so capcom went gigachad and is hosting a lot of development videos on RE Engine, including future updates. they are covering a lot and are getting into the nitty gritty

for instance, future Capcom games will support ray tracing and mesh shaders. so if you're on pre-Turing/pre-RDNA2 on PC, you're gonna have a bad time.

exciting times for engines thanks to new hardware

Aaaaaaand subscribe. Lots of homework to do tonight.

After my actual homework, of course.
 
so capcom went gigachad and is hosting a lot of development videos on RE Engine, including future updates. they are covering a lot and are getting into the nitty gritty

for instance, future Capcom games will support ray tracing and mesh shaders. so if you're on pre-Turing/pre-RDNA2 on PC, you're gonna have a bad time.

exciting times for engines thanks to new hardware

They're calling this project the "REX Engine"
("RE ne-X-t ENGINE")
 
Please read this staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.