• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

What the hell happened? I was still caught up with the ransomware story, but what...
...... 12 SMs? More shaders than the PS4? Twice as many shaders as Durango/Xbone? Dogs and cats sleeping together? What is going on??!!!!

Okay, how 'massive' is this chip really going to be? What kind of ballpark size reduction per transistor... per SM... can we use, taking the old 20nm X1 as a base?

Right? That one had 2 SMs, so we can get the footprint of that. How would 12 of these Ampere SMs, shrunk to 8nm or 5nm, compare to that footprint? How much bigger?
Again, I have to stress this: you can't compare core counts and FLOPs across architectures.
The GTX 1050 has 640 cores and beats the PS4's 1152 cores quite handily.

And the RTX 3070 has 5888 cores and 20.31 TFLOPS, but the 2080 Ti, at 4352 cores and 13.45 TFLOPS, is right next to it in performance.

Core counts and FLOP numbers CANNOT be used to directly compare GPUs across different generations and vendors.

You have to convert them based on FLOP efficiency, more or less, and even then it's a very tricky thing that becomes a rabbit hole really quickly.
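For reference, the theoretical FP32 numbers being thrown around come from a simple formula: cores × 2 ops per clock (FMA) × clock speed. A quick sketch using approximate published boost/GPU clocks (the PS4 line assumes its 1152 shaders at 800 MHz):

```python
# Theoretical FP32 throughput = cores x 2 (FMA ops per clock) x clock.
# Clocks are approximate published boost/GPU clocks, not measurements.
def tflops(cores, clock_ghz):
    return cores * 2 * clock_ghz / 1000  # result in TFLOPS

gtx_1050   = tflops(640, 1.455)    # ~1.86 TFLOPS
ps4        = tflops(1152, 0.800)   # ~1.84 TFLOPS
rtx_3070   = tflops(5888, 1.725)   # ~20.31 TFLOPS
rtx_2080ti = tflops(4352, 1.545)   # ~13.45 TFLOPS
```

Note the 1050 and the PS4 land at almost the same theoretical TFLOPS yet perform very differently in practice, which is exactly the point about cross-architecture comparisons.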
 
I've also been shown some numbers for GA10B and GA10F floating-point efficiency, but I'm scared to post them.


Flop normalization is for the birds.

We'll have benches for days when we get our hands on these.
 
Sharing a block of the source on here is part of it, but the main reason I'm scared to post them is that I don't know what they mean, and I feel like it's going to turn into the warps-per-thread confusion again, but worse, lol. But sure, I'll DM Redd.
Thanks!
 
Assuming NVIDIA releases an NVIDIA Shield with Drake in it, the same chip as in the Switch 2/Pro/Plus/whatever.

Or if someone finds a way to crack into it, put Android on it, and run Geekbench (highly unlikely).
 

Wait, do we have reason to think the Shield line is done from here on out?
 
Do you guys think Nintendo is going to forego generations and just keep iterating on the Switch as time and tech progress? Also, personally, my dream scenario would be for them to announce the codename at the next investor meeting, then announce the console in July for a March 2023 release. My wallet is already going to be hurting from 2022 alone, lol.
 
I don't know what "FLCG" is exactly, but it is related to clock gating, and a comment states that GA10F is the only Ampere chip which supports FLCG. Could be an indication of downclocking for portable mode? Just a guess though.
 
Just to back up a little and expand a bit on the 'bigger chip = more $$$' point for the readers new to this...
IIRC, foundries make chips in the form of wafers, then cut the chips out of them. For example, here's a 300mm (diameter) wafer:
[image: a 300mm silicon wafer]

Typically, customers buy some number of wafers (unless the foundry is getting desperate...). We on the outside aren't going to know exact price tags, but going off a late 2020 thread, it's probably fair to say that for TSMC's recent nodes (except for N5), a wafer costs somewhere in the four digits USD. For the 16/12nm node that Mariko (2019's V2 and 2021's OLED) is made on, it's probably mid four digits.
I don't see anything for Samsung, but logically, 8nm is older and less desired than TSMC's N7, so it should be less than $9k a wafer.

Now, we're surprised, because all this time we've operated under the assumption of no more than 8 SMs for Dane/Drake. Why 8? Uh, I forget exactly. There are two routes in my head: one is extrapolating by comparing transistor densities of Samsung 8nm against TSMC's 20nm process; the other is to instead try extrapolating from the Ampere graphics cards. And we generally agreed on trying not to exceed the original TX1/Erista's size of ~120 mm^2 for cost reasons.
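To make the 'bigger chip = more $$$' point concrete, here's a rough sketch using the standard dies-per-wafer approximation. The $6,000 wafer price and the 200 mm^2 die size are placeholder assumptions for illustration only, and this ignores yield entirely:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Standard approximation: gross dies minus edge losses (ignores yield)."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

wafer_price_usd = 6000  # assumed placeholder, not a known figure
for area in (120, 200):  # ~TX1-sized die vs. a hypothetical larger Drake die
    n = dies_per_wafer(area)
    print(f"{area} mm^2: {n} dies/wafer, ~${wafer_price_usd / n:.2f} per die")
```

Fewer dies per wafer at the same wafer price means a higher cost per chip, and real-world yield widens the gap further, since a larger die is also more likely to catch a defect.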
 
We were assuming that the SoC would be using the same hierarchy of GPC/TPC/SMs as seen in Nvidia's other hardware. Namely 1 GPC contains 4 TPCs with 2 SMs each.

But apparently, the GA10F uses a different structure, 1 GPC containing 6 TPCs with 2 SMs each.
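Assuming GA10F keeps desktop Ampere's 128 FP32 cores per SM (an assumption on my part; the layout above only gives GPC/TPC/SM counts), the arithmetic works out like this:

```python
# GPC/TPC/SM layout reported for GA10F, with cores-per-SM assumed
# to match desktop Ampere (128 FP32 cores per SM).
gpcs = 1
tpcs_per_gpc = 6
sms_per_tpc = 2
cores_per_sm = 128  # assumption, not from the leak

sms = gpcs * tpcs_per_gpc * sms_per_tpc   # 12 SMs
cuda_cores = sms * cores_per_sm           # 1536 CUDA cores
```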
 
Interesting that the new features section of NVN2 appears to have started life labeled as Turing-related instead of Ampere-related. This doesn't mean there were ever plans for a Turing-based device, but it seems like the development of NVN2 at least began with Turing as a target.
 
That at least corroborates the talk about initial development on the Switch being Turing-based before being downgraded to Maxwell, and also the whole thing about being able to do something special with the audio...
 
Do you guys think Nintendo are going to forego generations and just keep iterating on the Switch as time and tech progresses?
I think they'll stick with the hybrid concept for a long while, because it has put them back on the map to such an extreme degree that ditching one mode or the other would alienate a huge chunk of the player base they've cultivated this gen. Even if at some point they decide to do a new-new system with its own name and identity (not "Switch 2" or whatever, but an actual new name), I expect it to still bridge the handheld-tv capabilities. Maybe instead of "switching" it'll be so damn powerful that it's a full-time dockless handheld that can simply beam a video feed to the tv? Any major leap forward they do should still have both tv and handheld capabilities. I would not be surprised at all if that is Nintendo's hallmark going forward.

So in a way, I think the Switch spirit will continue even beyond systems explicitly named "Switch." Whether you'd consider that "iterative" or not, I dunno.
 
Are we still expecting 8nm for the process with these new leaks?
I don't know about everyone else, but I still do until proven otherwise, especially since Samsung's 8N process node probably has the best yields for mobile SoCs compared to Samsung's more advanced nodes (7LPP and beyond). I haven't heard any reports or rumours about Samsung's 8N node having yield issues with mobile SoCs. (TSMC's process nodes, of course, are a completely different story.)
 
The rumour actually mentioned Pascal, not Turing. (And RedGamingTech isn't the most reliable source of information.)
Ah, thanks for the correction/sanity check.

In this case, I guess we are in uncharted territory.
I have a feeling that them toying with Miracast probably means they want to get rid of cables altogether.

You can bet something about having players "face each other" will come into play, maybe something AR and the like.

I doubt 4K will be the main selling point/gimmick of the console, as they really don't focus on graphical fidelity/prowess as the central selling point. I simply expect it to support 4K input standards, though I hope they consider HDR in some capacity. Doubtful they will go for VRR.
 
NVN2 mentions HDR

 
Some more numbers, apparently. L2 cache = 4 * 1024 * 1024 = 4 MB (same as GA10B). RT core count = 12 (GA10B = 16).

To the obvious follow-up question, no, I still have no context for these numbers.
 
Wait they halved the RT Core count for Orin in the dev papers, Orin should have 8
 
Well, besides the usual question of whether what we're looking at is up to date (although it's up to date enough to have GA10F, Hopper, etc.), this information appears to be coming from a "modeling" layer i.e. for simulating the chips, so it's not a datasheet per se. But yeah like I said, I have no context.
 
Perhaps we simply underestimated how large a die (and the corresponding cost) NVIDIA/Nintendo were willing to go with?
I've been saying this for over a month now. While we've been using the 10-15W power draw docked as a baseline metric, Nintendo is obviously not totally beholden to it. As long as they can get thermal performance where they want it to be, they could go higher. It will especially depend on whether they're still looking at something closer to 6-7W power draw in handheld mode, as better thermal and battery performance could mean it doesn't have to go so low.

Either way, when it comes to the hardware's power envelope, we've always just been working with reasonable assumptions instead of reality.
 
I'm hoping Nintendo doesn't look down on Nvidia for this leak the way it's been said they looked down on Netflix for leaking the Zelda talks. I'm sure Nintendo can't just pull the plug on the Nvidia hardware relationship the way they pulled the plug on Netflix.

Nah, this is under very different circumstances compared to the Netflix situation. In the Netflix situation, the leaks were 100% intentional by the involved parties rather than an outside party, and Nintendo did NOT take kindly to that at all. Cyberattacks, unfortunately, happen, and Nintendo is still dealing with the aftermath of one of their branches being subjected to one. So they wouldn't see this as the fault of their partners.
 
I was talking theoretically rather than practically. But yeah, I personally don't expect Nintendo to enable HDR support for the OLED model anytime soon.
If they are doing the work for HDR anyway for a game that gets a Drake patch, I could see them giving the OLED dock a firmware update enabling HDR.
 
The codename for Dane leaked early last year, so them going with a new one doesn't surprise me. It could also reflect a change to some element of the chip. GA10F is another name for the chip as we know it now; going by the alphabet, it is interesting they went all the way to F for this SoC's naming.

Purely guesswork: I'd say with every new "iteration" they likely go up a letter? So the current version "F" would be the sixth revision of the Switch 2/Pro SoC. Kinda like steppings on CPUs.
 
Also, just me guessing: maybe they'll use a new dock that allows the Switch main unit to be cooled better? Having a fan in the dock that blows a lot more cool air into the unit than the puny internal one, upping the specs by quite a margin when docked. They may shut down some SMs in handheld mode and switch to the internal fan? I'm not an engineer or anything, though, and have no idea how feasible this is...
 
I am also in the mindset that Nintendo may be going for a device with overall higher power draw than its predecessor. I think they could squeeze a few more Watts out of handheld mode with a more modern and bigger battery along with a slightly larger form factor. A larger form factor could also facilitate a bigger and more efficient cooling solution.

Goes hand in hand with it being a 12 SM GPU and thus a larger die. Having a larger chip clocked lower keeps temps down, as surely it's easier to dissipate heat generated across a larger surface area, with a larger radiator and maybe a larger but slower fan too?

Then in docked mode, crank the chip output to say 30W and kick that fan up to max speed. Still less power draw than the Wii U, and it makes sense if they are targeting 720p in handheld and 4K with DLSS in docked. Not pushing the chip as hard as, say, the PS5 pushes its APU; maybe up to 75%-80% of max clocks.

Thinking about cost to manufacture too. I know each chip will take up more space on the wafer, but if these chips will only ever run at the aforementioned percentage of max clocks surely the high yields and low bin rate could offset that cost?

If indeed Nintendo is still targeting 720p handheld the larger disparity between handheld and docked clocks may have been necessary to get that resolution up.

Either that or this new switch is actually a laptop. /s
 
Sad that this got leaked, but it's very interesting nonetheless! I guess 12 SM would work if Nintendo asked Nvidia to go with a smaller/newer node.

So it’s either:

7nm (TSMC) or 5nm (Samsung)
More SM (12SM)
Bigger cache
8 core A78 (?)
8-12GB ram

Or

8nm (Samsung)
8SM
Bigger cache
6 core A78 (?)
8GB ram

I'm guessing Nintendo wants to use ray tracing in both docked and undocked mode, which requires a beefier SoC to run DLSS in undocked mode (540p to 1080p, downscaled to 720p) and still achieve good ray-tracing results.

Or that SoC is for something entirely else and we are being obtuse and hoping that it’s what Nintendo will use
 
Drake/GA10F/T239 is the 12SM SoC in NVN2.

So it cannot be us being obtuse; the information we have from the hack literally says it as it is.
 
12 SMs just sounds wildly unrealistic from a size perspective. Wasn't 8 SMs supposedly cutting it close?

Kinda feel a little sketchy on that detail...
Or 12 SM is possible because 8nm is a cheap(er) node, and thus it's economically viable to go for a large SoC. I have always said that SoC size is not that important. Perf/W is king.
 
I think many have the idea that Nintendo took the TX1, settled the clocks for handheld mode, and doubled them for 1080p. However, Switch docked clocks are exactly the same as the Shield TV's under load, so the Switch docked is already clocked as high as it can reliably go. To me this is evidence that docked mode is not beholden to handheld mode as much as many of you assume.

Now, all rumors are consistent that Nintendo is aiming for 4K while docked, and for that you need the most powerful chip you can get (within reason). Handheld mode can then be achieved by downclocking as needed. Of course, all this within reasonable constraints, but I don't see why it's unreasonable for Nintendo to go for a (relatively) large chip whose full potential can only be used in docked mode, with the delta between handheld and docked considerably larger than ~2x this time around.

The TX1 was always a tablet chip (and not a very good one for that form factor), but Dane (Drake now?) is the first time Nintendo is getting a fully custom design that isn't beholden to the needs of other markets. I believe Nintendo has the opportunity to really push the docked mode concept. I see no reason why docked cannot be 30W this time around.
 
I think they did it the other way around: settled on clocks for docked mode, then halved them for handheld.

Also, this time around the handheld target is presumably still 720p while docked is 4K, and DLSS changes everything.
So there is no logic in docked running 2.5 times faster than handheld, as it was on the original.
 
I think they did it the other way around: settled on clocks for docked mode, then halved them for handheld.
Which would explain the last minute higher clocks for handheld mode after getting developer feedback.

Also, this time around the handheld target is presumably still 720p while docked is 4K, and DLSS changes everything.
So there is no logic in docked running 2.5 times faster than handheld, as it was on the original.
Even if they are aiming for 1080p handheld, 4K/1080p is 4x the pixels, while 1080p/720p is only 2.25x.
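Those ratios are just pixel-count arithmetic:

```python
# Pixel-count ratios behind the 4x vs. 2.25x comparison above.
def pixels(width, height):
    return width * height

ratio_4k_to_1080p = pixels(3840, 2160) / pixels(1920, 1080)    # 4.0
ratio_1080p_to_720p = pixels(1920, 1080) / pixels(1280, 720)   # 2.25
```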
 
The TX1 is/was excellent; it was the manufacturing node (TSMC 20nm) that was holding it back. EDIT: And that may be the case with Samsung 8nm as well.
 
Demand for Nvidia's current 8nm GPUs will go down when the new RTX 40 generation on 5nm comes out this year.
There's a wide open window there for Nintendo to use that 'withered technology'.
 
Or 12SM is possible because 8nm is a cheap(er) node, thus is economically viable to go for a large SoC. I have always said that SoC size not that important. Perf/W is king.
Die size and costs are always related. The bigger the die, the lower the yield, thus pulling the price up.
 
I think many have the idea that Nintendo took the TX1, settled the clocks for handheld mode, and doubled them for 1080p. However, Switch docked clocks are exactly the same as the Shield TV's under load, so the Switch docked is already clocked as high as it can reliably go. To me this is evidence that docked mode is not beholden to handheld mode as much as many of you assume.

Now, all rumors are consistent that Nintendo is aiming for 4K while docked, and for that you need the most powerful chip you can get (within reason). Handheld mode can then be achieved by downclocking as needed. Of course, all this within reasonable constraints, but I don't see why it's unreasonable for Nintendo to go for a (relatively) large chip whose full potential can only be used in docked mode, with the delta between handheld and docked considerably larger than ~2x this time around.

The TX1 was always a tablet chip (and not a very good one for that form factor), but Dane (Drake now?) is the first time Nintendo is getting a fully custom design that isn't beholden to the needs of other markets. I believe Nintendo has the opportunity to really push the docked mode concept. I see no reason why docked cannot be 30W this time around.
This is even more relevant if they stick with a 720p screen, which I believe they will, given the diminishing returns at that screen size.

One other factor to consider is whether they can fit the chip into the Switch's form factor and still adequately cool a larger chip. Given that the original Switch had better cooling than necessary, I can see this being a possibility.
 
That can make all the difference, on a device that sells 100 million.
Exactly. That is why I am pretty skeptical regarding that piece of information. Nintendo has been making money on their hardware since forever and they will probably want things to stay that way.

If a device used such a large die, there is no way it would be a portable. It might be a supplementary computing device or a stationary console (which would be sold at a premium), but not a portable in any case.
 
Contrary to popular belief, Nintendo doesn't always go cheap. The Wii U had a fairly complex MCM with a large amount of eDRAM, housing an IBM PowerPC CPU and an AMD GPU coming from different fab lines; an x86 AMD SoC with a similar power profile would have been considerably cheaper. The system was weak because their goals were full BC with the Wii and low power consumption, not power. I would not be surprised if the 3DS, which used an exotic GPU and a custom dual-core ARM11, was more expensive than just going for a Tegra or Qualcomm SoC. Furthermore, sticking with cartridges is the antithesis of going cheap...

They have design goals that, unlike Sony and MS, are not always the "most powerful hardware for target price", that much is true.
 
Oh my, 12 SMs? If true, then we are in for one hell of a treat. Furukawa's comment last(?) year about Nintendo pursuing cutting-edge tech is definitely sounding less and less like fluff. If this all pans out, this thing could be an absolute monster. Ports should definitely be much less of an issue this time, that's for sure.

I wonder where Nintendo might cut some corners to save a few bucks. I’m guessing now that it’ll be an LCD screen, 8 gigs of RAM and the metal parts of the shell we got on the OLED model will just be plastic again.
 
I think with the Switch, aka the hybrid system, they finally have the kind of system where the market would be willing to pay a premium for a more powerful version. They tried it with the DS to 3DS and Wii to Wii U transitions and fell flat on their face because the value proposition wasn't there for most of the mainstream or hardcore audience.

The fact that the Switch OLED is selling at 350 paints a very strong picture that they could charge at least 400 for the next Switch, and the market would be fine with it based on the increase in hardware power alone. The Series X and PS5 won't get price drops anytime soon, so they are still going to be in the same ballpark.

So when people bring up "too big" because of the potential price of the SoC, are we factoring in that Nintendo will be able to sell the device for at least 400 bucks, or at least quite a bit more than the OG Switch?

A 500 price point was considered high at some point in the past as well; no one would say the PS5 or Series X were overpriced at launch, though.

Yes, you might lose a couple of bucks per unit on hardware sales, but you might earn all of that back through higher software sales because of how much and what kind of software the device will be able to support in the long run.

Nintendo's software lineup is really ambitious for the next 12 months and their current earnings are really strong; they were never in a better spot for a more ambitious hardware launch. If not now, it's never gonna happen.
 
A larger SM count can be a solution to decrease heat, right? If the power-consumption/performance sweet spot is at a low clock speed, they could have increased the SM count to keep the system cool rather than for added compute.
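That intuition can be sketched with a toy model. Dynamic power scales roughly with active units × V² × f, and voltage tends to track clock near the operating point; all numbers below are illustrative assumptions, not leaked specs:

```python
# Toy "wide and slow" model: same throughput, lower power.
def throughput(units, clock):
    return units * clock  # work per unit time, arbitrary units

def rel_power(units, clock, voltage):
    # Dynamic power ~ switched capacitance * V^2 * f;
    # capacitance scales with the number of active units.
    return units * voltage**2 * clock

narrow_tp = throughput(8, 1.0)             # 8 SMs at full clock
wide_tp = throughput(12, 8 / 12)           # 12 SMs at 2/3 clock: same work
narrow_pwr = rel_power(8, 1.0, 1.0)        # baseline power
wide_pwr = rel_power(12, 8 / 12, 8 / 12)   # ~3.6 vs 8.0: much cooler
```

Under these (very rough) assumptions, the wider chip does the same work at well under half the dynamic power, which is the whole appeal of more SMs at lower clocks in a thermally constrained handheld.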
 
Please read this staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.