StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

Question regarding DLSS 4.0: didn't Nvidia specify that DLSS 4.0 was heavier to train but not to run? Is it heavy on both ends after all?
That’s exactly the point. They haven’t said yet, we don’t know yet.

I do believe the “4x” statement was in reference to training. But I also think I heard them say the finished model is 2x more parameterized, and a vision transformer accesses input pixels in a more greedy and less coherent pattern than a CNN model. That is going to create major problems for tensor cores trying to operate off slow memory or small cache sizes. Turing cards would probably just manage, but an Orin wouldn’t hang. But, you might be able to “fix” Orin with a large amount of fast memory, and a stupidly large L2 cache. Hmmm.
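To put rough numbers on that access-pattern point (figures below are purely illustrative, since the actual DLSS model dimensions aren't public): a convolution only reads a small local window per output pixel, while global self-attention in a vision transformer touches every token in the image.

```python
# Back-of-the-envelope bytes read per output element: 3x3 convolution vs.
# global self-attention. Channel counts, patch size and fp16 storage are
# assumptions for illustration, not DLSS's actual architecture.

def conv_bytes_per_output(kernel=3, in_channels=64, bytes_per_value=2):
    # Each conv output pixel reads a kernel x kernel window across all channels.
    return kernel * kernel * in_channels * bytes_per_value

def attention_bytes_per_output(width=1920, height=1080, patch=8, dim=64,
                               bytes_per_value=2):
    # Each attention output token reads the key and value of every other token.
    tokens = (width // patch) * (height // patch)
    return tokens * dim * 2 * bytes_per_value

print(f"3x3 conv, 64 ch, fp16:   {conv_bytes_per_output():,} bytes per output")
print(f"global attention, 1080p: {attention_bytes_per_output():,} bytes per output")
```

Real transformer upscalers use windowed attention and plenty of reuse, so this wildly overstates the actual traffic, but it shows why cache size and memory bandwidth suddenly matter a lot more than they did for the CNN model.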
 
480 MHz (1.47 TF) - Battery friendly, great for Indie titles
960 MHz (2.94 TF) - Battery Beater, ideal for AAA titles
1.35 GHz (4.14 TF) - Dock Mode, leaving power overhead for fast charging.
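Those TF figures line up with the 1536 CUDA cores from the T239 leak, for anyone wanting to sanity-check the math (FP32 = cores × 2 ops per clock × clock):

```python
# FP32 throughput = CUDA cores x 2 FMA ops per clock x clock speed.
# 1536 cores comes from the T239 leak; the clocks themselves are speculation
# (0.66 GHz being the figure from the NVN2 DLSS tests).
CORES = 1536

def tflops(clock_ghz, cores=CORES):
    return cores * 2 * clock_ghz / 1000.0

for clock_ghz in (0.48, 0.66, 0.96, 1.35):
    print(f"{clock_ghz * 1000:4.0f} MHz -> {tflops(clock_ghz):.2f} TFLOPS")
```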
Didn't necrolipe confirm he heard over 600 MHz for the GPU in handheld? Thought I saw 660 floating around.

I don't see the GPU going above 1100 MHz if it's 8nm.

The rest sounds plausible.
 
Didn't necrolipe confirm he heard over 600 MHz for the GPU in handheld? Thought I saw 660 floating around.

I don't see the GPU going above 1100 MHz if it's 8nm.

The rest sounds plausible.
Necrolipe did say 600 a while ago, but he wasn't sure of it.
Also, 8nm shouldn't really hurt expectations for docked mode too much, beyond docked not ending up that far above handheld. If battery life and power draw are the issue, docked mode doesn't have to deal with that nearly as much.
 
But, you might be able to “fix” Orin with a large amount of fast memory, and a stupidly large L2 cache.
Orin already has the largest amount of L2 cache per GPC amongst all consumer Ampere GPUs at 2 MB of L2 cache per GPC, for a total of 4 MB of L2 cache (for 2 GPCs).
 
That’s exactly the point. They haven’t said yet, we don’t know yet.

I do believe the “4x” statement was in reference to training. But I also think I heard them say the finished model is 2x more parameterized, and a vision transformer accesses input pixels in a more greedy and less coherent pattern than a CNN model. That is going to create major problems for tensor cores trying to operate off slow memory or small cache sizes. Turing cards would probably just manage, but an Orin wouldn’t hang. But, you might be able to “fix” Orin with a large amount of fast memory, and a stupidly large L2 cache. Hmmm.

So it's even more of a "wait and see" situation? Waiting to see what the parameters for the beta features will actually be (which are the ones I'd expect from DLSS 4 on Switch 2, if I'm honest) and the T239 cache situation, which, AFAIK, is also currently unknown?

I wonder if those features (AA and SS) are in beta because of the Switch 2, to make sure they are a viable or more viable advanced solution (compared to previous DLSS versions)...
 
Orin already has the largest amount of L2 cache per GPC amongst all consumer Ampere GPUs at 2 MB of L2 cache per GPC, for a total of 4 MB of L2 cache (for 2 GPCs).
Really? That’s interesting. I remembered seeing a number somewhere that was much smaller (like stupidly so, I remember thinking they would need way more than that for gaming applications). I probably should have questioned that number a little harder when I saw it.

I do wonder, though: since the DLSS workload likely ends up on the DLA instead of on the GPU, doesn't that mean it wouldn't lean on those caches anyway?
 
If you're talking about DLSS here, that's likely impossible on Switch 1 games without a patch, since it needs to be integrated into the renderer.
I’ve been thinking about that, and the challenge of Switch 1 games, plus DLSS maybe being too heavy for handheld (and not as good with lower resolutions anyway), plus the Nintendo upscaler patent.

The wording of that patent had two interesting features for me:
  • It was worded to cover engine-integrated upscalers like DLSS (which need motion vectors, etc.) as well as general-purpose image upscalers that work without hints
  • It talked about an upscaler in the engine being signaled to turn on/off when changing modes (docked vs undocked).
Putting two and two together here, I could imagine a simpler system level ML upscaler that can help with smaller resolution bumps, and separately DLSS available in engine to help with the big leap up to a large TV display. So Switch 1 games get the system level scaler, much like the PS5 Pro upscaler for unpatched PS5 games. Switch 2 games also get the system upscaler when undocked, but then get signaled to fire up DLSS when they dock.
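A minimal sketch of how that mode-change signalling could look from the engine's side, going purely off the patent's description; every name here is hypothetical, nothing from any actual SDK:

```python
# Hypothetical illustration of the patent's idea: the OS signals the engine on
# dock/undock, and the engine decides whether its integrated upscaler (DLSS)
# should take over from the system-level image upscaler. No real API names.

from enum import Enum, auto

class DisplayMode(Enum):
    HANDHELD = auto()
    DOCKED = auto()

class Renderer:
    def __init__(self):
        self.use_engine_upscaler = False  # DLSS-style, needs motion vectors etc.

    def on_display_mode_changed(self, mode):
        # Docked: big jump to a 4K TV, so fire up the engine-integrated upscaler.
        # Handheld: smaller bump, lean on the system-level upscaler instead.
        self.use_engine_upscaler = (mode is DisplayMode.DOCKED)

renderer = Renderer()
renderer.on_display_mode_changed(DisplayMode.DOCKED)
print("engine upscaler active:", renderer.use_engine_upscaler)
```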

Also, per some of my earlier posts, people in my immediate circle have neighbors with devkits, and there have been accidental slips. Actually one coworker has physically seen a devkit. One of the tidbits we’ve managed to scrape is that Switch 1 games both run and look better. The dev in question would have no reason to have access to patched Switch 1 games. Therefore I infer that games look better unpatched, which suggests a decently nice looking system level upscaler built in.
 
Is there hope for a decently sharp looking version of Rebirth on the Switch 2? I heard plenty about how blurry it could get in performance mode on PS5. But even a sharp 30fps mode on the Switch 2 would satisfy me, I think.

@ILikeFeet tagging you if you don't mind since you always have good answers to these types of questions lol
 
Is there hope for a decently sharp looking version of Rebirth on the Switch 2? I heard plenty about how blurry it could get in performance mode on PS5. But even a sharp 30fps mode on the Switch 2 would satisfy me, I think.
I keep hearing about the performance mode on Rebirth, even though there's a dynamic 2160p/30fps mode right there. Switch 2 could drop the internal render to 25% of those pixels (1080p), or lower, and use DLSS to upscale higher, with a 30 FPS target.

A 60 FPS target on NS2 is a different story.
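For context on the "25% of the pixels" bit, the pixel counts work out like this (the per-axis factors in the comment are DLSS's standard Quality/Performance presets; the rest is just arithmetic):

```python
# Pixel counts of common internal resolutions relative to a 2160p output.
# DLSS Quality renders at ~67% per axis (~44% of the pixels); Performance is
# 50% per axis (25% of the pixels, i.e. 1080p -> 2160p).
resolutions = {
    "2160p": (3840, 2160),
    "1440p": (2560, 1440),
    "1080p": (1920, 1080),
    "720p":  (1280, 720),
}

target_px = resolutions["2160p"][0] * resolutions["2160p"][1]
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h:>9,} px ({w * h / target_px:.0%} of 2160p)")
```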
 
Let me back up for a moment.

In the case of Wukong, a PS5 title that, as of right now, doesn't have an Xbox port, the issue according to the developers is a memory limitation/challenge on the Xbox Series S.

In a previous post, I also mentioned how in Baldur's Gate 3 the developers had challenges optimizing around the split GDDR6 memory pool for the Series S version. They ended up optimizing the game to the point where it no longer requires the much slower pool of memory, instead relying on the much faster 8GB pool.

Part of the reason I continue to bring this up is because while there are minor differences between Series S, and Series X CPUs, any challenges presented by developers for Series S ports have nothing to do with the CPU, or at least that is what we’re led to believe.

And someone can correct me if I'm wrong, but it was my understanding that if a system is bottlenecked by its memory, that can cause more spikes in CPU usage (and I think even GPU usage). That is what prompted part of this discussion: if the CPU and GPU are being fed data quickly enough, they're not stalling while waiting on it, and thus run more efficiently. Again, someone can correct me on this, but that is what I've understood. Your example with GTA IV and GTA V, especially on PS3/360, was likely more to do with the lack of memory than with the CPU, last I checked.

And as for PS4 and Xbox One? The AMD Jaguar CPUs were dogshit slow, almost to the point where even the humble Tegra X1's A57 cores were, core for core, about the same as Jaguar if I recall, and yet many titles on PS4/Xbox One could still run fine on Switch.

I'll thank @Dakhil for correcting me on the CPU situation for T239: it hasn't been confirmed to be A78C. Our best guess is A78C, though it could be the standard A78, or perhaps a different variant entirely. But it almost has to be an ARM Cortex CPU in order to maintain native compatibility with the Switch 1's CPU, so we do have a good idea of what it could be. We're not completely in the dark here.

My point is I don't see the CPU being the straw that breaks the camel's back here, and whatever version of the Cortex CPU is used, it'll be MUCH faster than the AMD Jaguar CPU in the PS4.
You’ve lost me now lol.

The sole reason I'm arguing there could be a problem for a Switch 2 port of GTA VI is because of the CPU performance difference between Switch 2 and the other three consoles GTA VI is launching on (Series S, X and PS5). Not RAM or anything else. I have no idea why you're talking about RAM, Wukong, Baldur's Gate and now Jaguar CPUs…
 
Now that we are expecting a reveal this week, a lot of questions will be answered but some aspects will still remain a mystery until after launch:

What I expect to be fully confirmed:
  • Screen size and screen technology
  • Functionality of the C button
  • Purpose of the new optical sensor
  • microSD Express support
  • Backwards compatibility details (specifically if there will be any improvements to older titles)
  • Battery life measurements
  • DLSS
Things that may or may not be explicitly mentioned:
  • Ray-tracing support
  • Purpose of the fan in the new dock
  • Price
Things that will almost certainly not be mentioned:
  • Node
  • Clockspeeds
 
Yes, I alluded in prior posts to it not being possible to determine screen tech from a reveal trailer.
With that said, compared to the other handheld footage in the reveal trailer, I think the BOTW footage looks worse even with its art style taken into account. This could be for a variety of reasons, though, and isn't necessarily an indication of LCD tech being used.
The footage in the trailer is for sure just normally recorded footage composited into the video material and distorted to match the perspective of the Switch screen. I think there is even an editing error at the point in the video where Karen picks up the Switch: the screen overlaps her clothes for a few frames, and if I remember correctly this generated a lot of discussion at the time.

It's common practice whenever screens appear in a video, regardless of the technology. It's done to avoid reflections or viewing-angle distortions and to keep the option open of deciding on the screen footage in post-production.

If the screen has a more washed-out look in the Switch trailer, it was intentionally color graded that way in editing.
 
The footage in the trailer is for sure just normally recorded footage composited into the video material and distorted to match the perspective of the Switch screen. I think there is even an editing error at the point in the video where Karen picks up the Switch: the screen overlaps her clothes for a few frames, and if I remember correctly this generated a lot of discussion at the time.

It's common practice whenever screens appear in a video, regardless of the technology. It's done to avoid reflections or viewing-angle distortions and to keep the option open of deciding on the screen footage in post-production.

If the screen has a more washed-out look in the Switch trailer, it was intentionally color graded that way in editing.

I don't know for sure, but yes, recorded footage composited onto the Switch's screen does make a lot of sense. They even included the screen's natural reflections. I still think they could have made the BOTW footage look better, though; even if it was intentionally colour graded, I don't think it looks the best it could have.
 
I don't know for sure, but yes, recorded footage composited onto the Switch's screen does make a lot of sense. They even included the screen's natural reflections. I still think they could have made the BOTW footage look better, though; even if it was intentionally colour graded, I don't think it looks the best it could have.
This could also come down to these videos/trailers being made by different production companies. I mean, later Switch ads (the Super Bowl spot, the first NoE ad) and the Switch Lite trailer had much higher production quality.

Maybe at the time they didn't have the time or focus to make the quality better; it could even be that the final footage for BOTW/Mario Odyssey came in last minute, as in a week before the reveal.
 
You’ve lost me now lol.

The sole reason I'm arguing there could be a problem for a Switch 2 port of GTA VI is because of the CPU performance difference between Switch 2 and the other three consoles GTA VI is launching on (Series S, X and PS5). Not RAM or anything else. I have no idea why you're talking about RAM, Wukong, Baldur's Gate and now Jaguar CPUs…
Their argument is that the CPU hasn’t been the main bottleneck for a lot of games this gen. It’s typically been the RAM or something else. And if GTA VI is coming to Series S, there’s a decent chance the CPU wouldn’t be the primary issue with a Switch 2 port. Who knows once all is said and done, obviously.
 
This could also come down to these videos/trailers being made by different production companies. I mean, later Switch ads (the Super Bowl spot, the first NoE ad) and the Switch Lite trailer had much higher production quality.

Maybe at the time they didn't have the time or focus to make the quality better; it could even be that the final footage for BOTW/Mario Odyssey came in last minute, as in a week before the reveal.

I mean, this originally was a reply to someone asking if we will be able to tell whether the Switch 2's screen is LCD or OLED based on the reveal trailer. The only thing about the original Switch's reveal trailer that would tell me it's not OLED is the Zelda footage's black levels; if I was a betting man I would say it ain't OLED based on that trailer, but it's a moot point because the game footage was likely composited on. All in all it's a very good trailer imo, but everything I've said is bearing in mind the context of the 'what screen?' question.
 
I mean, this originally was a reply to someone asking if we will be able to tell whether the Switch 2's screen is LCD or OLED based on the reveal trailer. The only thing about the original Switch's reveal trailer that would tell me it's not OLED is the Zelda footage's black levels; if I was a betting man I would say it ain't OLED based on that trailer, but it's a moot point because the game footage was likely composited on. All in all it's a very good trailer imo, but everything I've said is bearing in mind the context of the 'what screen?' question.
I understand :) Though it really is impossible to judge the screen, because it will almost always be the case that the footage is composited. You see this in both the reveal and the OLED trailer; it would be almost impossible to get the screen into so many different settings without any reflections. In the OLED trailer I believe the whole console is even just a 3D rendering at some points.

For the OLED trailer it seems kinda obvious they wanted to show the best possible quality of footage; that's the whole point of that model variation.
 
And if GTA VI is coming to Series S, there’s a decent chance the CPU wouldn’t be the primary issue with a Switch 2 port.
I'm not sure this tracks, since the Series S CPU has nearly identical performance to the bigger consoles, while the Switch 2 CPU likely won't. It's also entirely possible to have a game that doesn't over-tax a PS5 or Series X/S CPU but is too much for a significantly slower one: if it maxes out at ~80% utilization on the faster CPU, a CPU with 50% of the performance (just an example) simply wouldn't be able to keep up without a lot of extra work on the game.
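Rough numbers on that, reusing the figures from the example above (pure illustration, not measurements of any real game):

```python
# Scale the example: ~80% CPU utilization at 60fps on the big consoles, then the
# same per-frame CPU work on CPUs with a fraction of the performance.
FRAME_60FPS_MS = 1000 / 60   # ~16.7 ms
FRAME_30FPS_MS = 1000 / 30   # ~33.3 ms

cpu_work_ms = 0.80 * FRAME_60FPS_MS   # ~13.3 ms of CPU work per frame

for relative_perf in (1.0, 0.7, 0.5):
    t = cpu_work_ms / relative_perf
    print(f"CPU at {relative_perf:.0%}: {t:4.1f} ms/frame | "
          f"60fps: {'ok' if t <= FRAME_60FPS_MS else 'no'} | "
          f"30fps: {'ok' if t <= FRAME_30FPS_MS else 'no'}")
```

Which is roughly why 60fps falls out of reach long before 30fps does, assuming the CPU work itself can't be scaled down.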
 
I'm not sure this tracks, since the Series S CPU has nearly identical performance to the bigger consoles, while the Switch 2 CPU likely won't. It's also entirely possible to have a game that doesn't over-tax a PS5 or Series X/S CPU but is too much for a significantly slower one: if it maxes out at ~80% utilization on the faster CPU, a CPU with 50% of the performance (just an example) simply wouldn't be able to keep up without a lot of extra work on the game.
I know. I wasn't making the argument, just trying to explain why that other poster responded with talk about RAM and other components. That part isn't untrue; those have been bottlenecks for a lot of games. But whether the Switch 2 CPU can hold its own remains to be seen.
 
The Future Hardware thread is exclusively for hardware speculation—software speculation belongs in a different thread. -Biscuit, KilgoreWolfe, IsisStormDragon
Their argument is that the CPU hasn’t been the main bottleneck for a lot of games this gen. It’s typically been the RAM or something else. And if GTA VI is coming to Series S, there’s a decent chance the CPU wouldn’t be the primary issue with a Switch 2 port. Who knows once all is said and done, obviously.

Yeah, but GTA6 is neither "typical" nor "a lot of games". It's a massive open world with a shitload of NPCs and physics simulations and so on. If any game is going to push the home console CPUs hard enough that 60fps is impossible and leave the Switch 2 CPU unable to run it at a playable framerate, it's this one. I still have hope, though.
 
Their argument is that the CPU hasn’t been the main bottleneck for a lot of games this gen. It’s typically been the RAM or something else. And if GTA VI is coming to Series S, there’s a decent chance the CPU wouldn’t be the primary issue with a Switch 2 port. Who knows once all is said and done, obviously.
And my point is that GTA games are one of the few series that push physics systems and NPC counts in realistic modern cities, thus hammering the CPU. Going on history, they also always target 30fps and fail to hit it consistently, which would make a port to a platform with, say, only 70% of the Series S CPU's performance extremely challenging.

Most third party games won’t have this issue.
 
Does anybody know how DLSS would work in split-screen games? Like, does it upscale each individual screen, or does it upscale everything at once?
It depends on what the developer chooses to do (DLSS just upscales whatever pixel data you put into it), but I'm guessing most developers would use DLSS on each part of the screen separately. They're already rendered separately, and the extra information needed for DLSS like motion vectors and other buffers would likely be separate. Additionally, you wouldn't want DLSS to treat the border between the screens as part of the same image, as it might cause artifacts with content on one screen bleeding into the other.

DLSS supports specifying rectangular sub-areas of both the input to take, and the output to be placed, which would be well-suited to separating out split-screen rendering. The example of this they use in the documentation is about VR, but it's basically the same thing (VR scenes are rendered with two cameras, one for each eye, and split-screen is also just two or more cameras, one for each player).

The only reason not to do it separately would be if it becomes a performance issue, but that seems unlikely beyond the performance constraints the developer is already contending with for split-screen in general.
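A hypothetical sketch of what "upscale each viewport separately with sub-rects" could look like; all the names below are made up for illustration and are not the real NGX calls:

```python
# Hypothetical per-viewport upscaling loop for two-player split-screen.
# Function and parameter names are invented; the real DLSS integration exposes
# equivalent input/output sub-rect offsets for exactly this kind of use.

from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

def evaluate_upscaler(color, motion_vectors, depth, in_rect, out_rect):
    # Stand-in for the engine's DLSS evaluate call, limited to one sub-rect so
    # each player's view is treated as an independent image (no border bleed).
    print(f"upscale {in_rect.w}x{in_rect.h} -> {out_rect.w}x{out_rect.h} "
          f"at output offset ({out_rect.x}, {out_rect.y})")

# Horizontal split on a 1080p output, each half rendered internally at 50% per axis.
viewports = [
    (Rect(0, 0,   960, 270), Rect(0, 0,   1920, 540)),  # player 1 (top half)
    (Rect(0, 270, 960, 270), Rect(0, 540, 1920, 540)),  # player 2 (bottom half)
]

for in_rect, out_rect in viewports:
    evaluate_upscaler(color=None, motion_vectors=None, depth=None,
                      in_rect=in_rect, out_rect=out_rect)
```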
 
It depends on what the developer chooses to do (DLSS just upscales whatever pixel data you put into it), but I'm guessing most developers would use DLSS on each part of the screen separately. They're already rendered separately, and the extra information needed for DLSS like motion vectors and other buffers would likely be separate. Additionally, you wouldn't want DLSS to treat the border between the screens as part of the same image, as it might cause artifacts with content on one screen bleeding into the other.

DLSS supports specifying rectangular sub-areas of both the input to take, and the output to be placed, which would be well-suited to separating out split-screen rendering. The example of this they use in the documentation is about VR, but it's basically the same thing (VR scenes are rendered with two cameras, one for each eye, and split-screen is also just two or more cameras, one for each player).

The only reason not to do it separately would be if it becomes a performance issue, but that seems unlikely beyond the performance constraints the developer is already contending with for split-screen in general.
Thank you, I was worried things like split-screen would be difficult for devs to work with.
 
I do wonder, though: since the DLSS workload likely ends up on the DLA instead of on the GPU, doesn't that mean it wouldn't lean on those caches anyway?
The Nvidia Deep Learning Accelerator (NVDLA) is nowhere to be found on T239, unlike on T234 (T23x). So it's safe to say T239 doesn't have an NVDLA.

But hypothetically, there's more latency when using the NVDLA, due to it sitting outside the GPU and relying on RAM, which is comparatively far away.
 
The Nvidia Deep Learning Accelerator (NVDLA) is nowhere to be found on T239, unlike on T234 (T23x). So it's safe to say T239 doesn't have an NVDLA.

But hypothetically, there's more latency when using the NVDLA, due to it sitting outside the GPU and relying on RAM, which is comparatively far away.
Yeah, that's part of where I was going with that: loss of caching when running on the DLA. So the tensor cores are probably preferable for a DLSS kind of workload anyway, unless they redesigned the DLA to have its own cache system. I hadn't caught on from the leaked spec that it was missing; in my head it wasn't missing, just hacked down, but the latency from lacking any cache support probably also explains why they would just cut it out completely.

Which means the L2 GPU cache, which is in fact oversized even compared to RTX cards, is the relevant factor here. Even DF called out memory bandwidth as a potential limiting factor for DLSS on T239, but that was based a fair bit on their RTX 2050 experiment. If Drake has left in most or all of that big fat cache, DLSS 4 might be possible with some tweaking.
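Some rough arithmetic on why bandwidth and cache matter so much here; the buffer formats below are assumptions (fp16 colour, two-channel motion vectors), not known DLSS internals, and this ignores the network's weights and activations entirely:

```python
# Approximate per-frame buffer traffic for a 1080p -> 4K upscale pass, assuming
# fp16 RGBA colour (8 B/px) and fp16 two-channel motion vectors / depth (4 B/px).

def megabytes(width, height, bytes_per_px):
    return width * height * bytes_per_px / (1024 * 1024)

buffers = {
    "input colour (1080p)":   megabytes(1920, 1080, 8),
    "motion vectors (1080p)": megabytes(1920, 1080, 4),
    "depth (1080p)":          megabytes(1920, 1080, 4),
    "output colour (2160p)":  megabytes(3840, 2160, 8),
    "history frame (2160p)":  megabytes(3840, 2160, 8),
}

for name, size in buffers.items():
    print(f"{name:<24} ~{size:5.1f} MB")
print(f"total: ~{sum(buffers.values()):.0f} MB per pass vs. ~4 MB of GPU L2")
```

So most of that traffic still streams from main memory either way; the question is how much reuse of the network's intermediate data that big L2 can actually capture.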
 
I’ve been thinking about that, and the challenge of Switch 1 games, plus DLSS maybe being too heavy for handheld (and not as good with lower resolutions anyway), plus the Nintendo upscaler patent.
I don't think there's any conclusive evidence of DLSS not working in handheld mode other than assumptions. even DF's testing didn't do a hypothetical handheld clock since they couldn't get that low

Is there hope for a decently sharp looking version of Rebirth on the Switch 2? I heard plenty about how blurry it could get in performance mode on PS5. But even a sharp 30fps mode on the Switch 2 would satisfy me, I think.

@ILikeFeet tagging you if you don't mind since you always have good answers to these types of questions lol
easily. FF7R2 just has a really bad implementation of TAA. it honestly shouldn't even be as bad as it is
 
I mean, this originally was a reply to someone asking if we will be able to tell whether the Switch 2's screen is LCD or OLED based on the reveal trailer. The only thing about the original Switch's reveal trailer that would tell me it's not OLED is the Zelda footage's black levels; if I was a betting man I would say it ain't OLED based on that trailer, but it's a moot point because the game footage was likely composited on. All in all it's a very good trailer imo, but everything I've said is bearing in mind the context of the 'what screen?' question.
I'm not even sure anything in the Switch reveal trailer was actually there. I don't recall where I read it, but one of the actors said (or perhaps it was speculated?) that there was no gameplay on the units and it was all composited in later.
 
I'm not even sure anything in the Switch reveal trailer was actually there. I don't recall where I read it, but one of the actors said (or perhaps it was speculated?) that there was no gameplay on the units and it was all composited in later.
That’s standard practice. Screens generally show up very poorly on camera, unless you synchronize the refresh rate of the screen with the shutter of the camera. Sometimes they do that, and certainly LCD screens don’t do as badly as the old CRTs. But you still can catch bad timing and the lighting on sets can mess with the brightness/colors. Very hard to get a consistently good image. They almost always just superimpose the image over the screen in editing.
 
There is also an optimization that gets talked about a lot here, parallel DLSS. This would allow DLSS to run in the background for one frame while the next frame is being drawn. This totally hides the cost of DLSS and makes 4k basically free! However, I think implementations like this will be rare. Remember, "good" DLSS needs to draw post processing effects (and the UI!) after DLSS has run. I think it will be rare that games can afford to push DLSS to the very last stage of the pipeline.
Does it need to go to the very end to be useful? If something like Death Stranding had an 18ms "DLSS+post-processing" cost and 7ms could be "hidden" by concurrent DLSS processing, could one then just budget 11ms for post-processing? As I'm imagining it it still wouldn't make 60fps work in this instance (you'd still want DLSS+post-processing to fit within the time of a single frame), but would free up a fair bit of frame time for 30fps.

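Putting numbers on that budgeting question, using the 18 ms / 7 ms figures above (they're hypothetical, this is just the arithmetic):

```python
# Toy budget check for overlapping part of the DLSS cost with the next frame.
# 18 ms and 7 ms are the hypothetical figures from the post above.

FRAME_30FPS_MS = 1000 / 30            # ~33.3 ms
dlss_plus_post_ms = 18.0              # serial DLSS + post-processing cost
hidden_by_overlap_ms = 7.0            # DLSS portion run concurrently with frame N+1

critical_path_ms = dlss_plus_post_ms - hidden_by_overlap_ms   # 11 ms
remaining_for_scene_ms = FRAME_30FPS_MS - critical_path_ms

print(f"upscale + post on the critical path: {critical_path_ms:.1f} ms")
print(f"left for rendering the scene at 30fps: {remaining_for_scene_ms:.1f} ms")
```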
 
So...
Do you think they will end up calling the new feature "MouseCon" as so many (including myself) mention here? "JoyMouse"? "Mouse Mode"?

And do you think it will be tied with the new "C" button? Or will that be a separate feature altogether?
 
And my point is that GTA games are one of the few series that push physics systems and NPC counts in realistic modern cities, thus hammering the CPU. Going on history, they also always target 30fps and fail to hit it consistently, which would make a port to a platform with, say, only 70% of the Series S CPU's performance extremely challenging.

Most third party games won’t have this issue.

Does Rockstar target 30fps by choice, or are the consoles just not powerful enough to run their games at 60fps?

Given that more games nowadays have specific modes, typically for fidelity or performance, I can see a 60fps mode as an option on Xbox Series X and PlayStation 5. Series S might be limited to 30fps, though.

That leaves a hypothetical Switch 2 conversion targeting 30fps only in either mode, with more cutbacks in graphics and resolution.

That said, game engines are so scalable nowadays that the cutbacks may not have as drastic an effect as they used to. I would imagine GTA VI is using their RAGE engine, which already has Switch support and very likely has Switch 2 support, though GTA VI in particular is likely on a newer version.

Like you said, we don't know, and we won't know until it's either confirmed or Rockstar says it's not coming at all; then we'll know for certain.

I'm of the opinion that GTA VI is not only possible but is going to come to Switch 2, and I feel very confident in that. Any potential hardware challenges I think are just that: challenges. That doesn't make them impossible, though I'm not saying you said it was.

I'll even go so far as to say GTA VI isn't even going to be an example of an "impossible" port, just another multiplatform game that is coming to Switch 2. Who knows, I could be wrong though.
 
Does it need to go to the very end to be useful? If something like Death Stranding had an 18ms "DLSS+post-processing" cost and 7ms could be "hidden" by concurrent DLSS processing, could one then just budget 11ms for post-processing? As I'm imagining it it still wouldn't make 60fps work in this instance (you'd still want DLSS+post-processing to fit within the time of a single frame), but would free up a fair bit of frame time for 30fps.

hell, just don't do post processing after DLSS. or do a lower resolution post processing. there are ways around the issues that DF found. Alan Wake 2 has an option for post before DLSS for higher performance
 
I don't think definitive statements on what GTA6 will or won't do make any sense. I see a lot of "it's going to overtax the consoles", etc. We don't know what the game is going to do. One thing we do know is that things like NPC counts are scalable, as are AI functions and so on.

No one, unless they work at Rockstar itself on GTA6, can speak definitively one way or the other.
 
Thought I saw 660 floating around.
660MHz was from the DLSS regression testing in the NVN2 leak.
Does it need to go to the very end to be useful? If something like Death Stranding had an 18ms "DLSS+post-processing" cost and 7ms could be "hidden" by concurrent DLSS processing, could one then just budget 11ms for post-processing? As I'm imagining it it still wouldn't make 60fps work in this instance (you'd still want DLSS+post-processing to fit within the time of a single frame), but would free up a fair bit of frame time for 30fps.

There are ways to work around post-processing. We shouldn't take Death Stranding's 18 ms compute cost for DLSS 4K at face value, given we don't know how the post-processing operates within the rendering + supersampling pipeline or how it was set up in the testing.

On bespoke closed hardware like the Switch 2, developers can work around the post-processing + DLSS cost issue by doing the supersampling step after post-processing. That gives bigger performance gains (lower compute cost) in exchange for a less stable image.
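To make the two orderings being compared concrete (stage names are generic, not any particular engine's pipeline):

```python
# The trade-off in a nutshell: where post-processing runs relative to the upscale.
post_after_upscale = [
    ("render scene",    "internal resolution"),
    ("DLSS upscale",    "internal -> output resolution"),
    ("post-processing", "output resolution: more stable image, more GPU time"),
    ("UI / HUD",        "output resolution"),
]

post_before_upscale = [
    ("render scene",    "internal resolution"),
    ("post-processing", "internal resolution: cheaper, can shimmer after upscale"),
    ("DLSS upscale",    "internal -> output resolution"),
    ("UI / HUD",        "output resolution"),
]

for label, stages in (("post after upscale", post_after_upscale),
                      ("post before upscale", post_before_upscale)):
    print(label)
    for stage, note in stages:
        print(f"  {stage:<16} {note}")
```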
 
So...
Do you think they will end up calling the new feature "MouseCon" as so many (including myself) mention here? "JoyMouse"? "Mouse Mode"?

And do you think it will be tied with the new "C" button? Or will that be a separate feature altogether?
Oh geez, I really hope they don't go with the second idea there...
I think we conclusively ruled out C being for the cursor, since it's only on one Joy-Con; my bet is some sort of social feature along the lines of StreetPass. It could even be as simple as a button that turns a neo-StreetPass on and off, since I've learned it was a terrible battery hog when unwanted.
 
Please read this new, consolidated staff post before posting.