
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

But especially considering a modded UFS for the new Switch cards could be an option.
They are unlikely to go with UFS for Game Cards. XtraROM allows Nintendo to embed a DRM scheme in the cards themselves.
 
Because of XtraROM, not wanting to force installs, and the availability of UHS-I vs. faster SD cards, I remain in the camp that the new device will settle for 100MB/s read speeds.

What I'd like to see them do is embrace larger internal storage since they'll probably stick with eMMC, maybe 256GB would be nice.

My actual expectations remain really straightforward: 128GB to differentiate it, no difference in speed, and significantly faster loading thanks to the faster CPU and FDE. 100MB/s plus decompression, and likely smaller assets than home consoles to begin with, should be more than enough. Remember, most if not all games still run on spinning rust in a desktop or a microSD card in a laptop or Steam Deck.

I sincerely don't think this speed limit will impact the kind of gameplay possible on the device. Asset streaming was a thing back in a disc-based world, after all.
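To put a rough number on "100MB/s plus decompression", here's a back-of-envelope sketch. The 2:1 compression ratio is purely an illustrative assumption, not a known spec:

```python
# Effective read bandwidth after decompression (illustrative numbers only).
raw_read_mb_s = 100        # hypothetical sequential card read speed
compression_ratio = 2.0    # assumed average ratio for game assets

effective_mb_s = raw_read_mb_s * compression_ratio
print(f"{effective_mb_s:.0f} MB/s of decompressed assets")
# -> 200 MB/s, assuming the decompressor (CPU or FDE) keeps pace with the card.
```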
 
100MB/s of max speed would absolutely limit novel engine features designed to take advantage of faster storage. We have devs in this very thread saying as much.

I don't know how they would solve it, maybe select games will require installs.
 
100MB/s plus decompression would be a lot more than 100MB/s, and smaller assets on top of that.

This isn't a 5400RPM HDD, it's still solid state storage.
 
Do note that 100MB/s is the number for uninterrupted sequential transfers, not including random access delays. When a game loads, it isn't reading one continuous strand of data from one end to the other; it's jumping around all over the place to get each piece it needs, which can be as little as a handful of KB, tacking on a random access delay with each jump and dropping the overall transfer rate harshly. It's better than what PS4/XB1 had to deal with, as those used mechanical drives with moving parts, but still.

Here's the thing: if Nintendo recommended microSD cards with sequential read speeds between 60MB/s and 95MB/s for the Switch, where the CPU decompresses assets, then what does that mean for the goal of Drake's FDE paired with a 100MB/s microSD? That it simply takes the pressure off the CPU without really providing faster loading?
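For anyone who wants to see how hard random access can eat into a rated sequential speed, here's a toy model. The 1ms per-access latency is an assumed, microSD-ish figure, not a measurement:

```python
# Toy model: effective throughput once per-access latency is counted.
seq_mb_s = 100.0   # rated sequential read speed (illustrative)
access_ms = 1.0    # assumed setup latency per random access

def effective_rate(total_mb: float, num_accesses: int) -> float:
    """Effective MB/s after adding per-access latency to raw transfer time."""
    transfer_s = total_mb / seq_mb_s
    latency_s = num_accesses * access_ms / 1000.0
    return total_mb / (transfer_s + latency_s)

print(effective_rate(512, 1))                  # one long read:  ~100 MB/s
print(effective_rate(512, 512 * 1024 // 64))   # 64KB chunks:    ~38 MB/s
print(effective_rate(512, 512 * 1024 // 4))    # 4KB chunks:     ~3.8 MB/s
```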
 
An octa-core A78 can probably decompress 100MB/s without breaking a sweat. Additional hardware seems overkill for that.
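That's easy enough to sanity-check on a PC. A minimal single-core zlib timing sketch (the result obviously depends on your machine and on how compressible the data is, so treat it as a ballpark rather than an A78 measurement):

```python
import os
import time
import zlib

# Half random bytes, half zeroes, to get a middling compression ratio.
payload = os.urandom(16 * 1024 * 1024) + bytes(16 * 1024 * 1024)
blob = zlib.compress(payload, level=6)

start = time.perf_counter()
out = zlib.decompress(blob)
elapsed = time.perf_counter() - start

print(f"{len(out) / (1024 ** 2) / elapsed:.0f} MB/s decompressed on one core")
```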
 
The Switch has an extra mode where it turns the GPU way down and the CPU speed way up, explicitly designed for loading screens. You get just enough GPU power to run a little animation, and all the power goes to the CPU so decompression can be done as quickly as possible.

Of course, increasingly, open world games don't have loading screens. Meaning that those CPU cores are running your physics engine while streaming in data for the next chunk of the game world. Dedicated decompression hardware effectively doubles the amount of CPU power in these scenarios, without having to add CPU cores or run at battery draining clocks.

It's the same way almost all AAA games are running some kind of TAA/TU, so REDACTED has a win by farming that out to the tensor cores.
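As a loose software analogy for what an FDE buys you, here's a sketch of streaming decompression on a spare thread while the "game loop" keeps running. It leans on the fact that CPython's zlib releases the GIL during decompression; all names are illustrative:

```python
import threading
import zlib

def stream_in(compressed_chunks, out):
    # Decompress the next area's data; runs concurrently with the game loop.
    for chunk in compressed_chunks:
        out.append(zlib.decompress(chunk))

chunks = [zlib.compress(bytes(1024 * 1024)) for _ in range(64)]  # 64 x 1MB assets
loaded = []

loader = threading.Thread(target=stream_in, args=(chunks, loaded))
loader.start()
for frame in range(1000):
    pass  # stand-in for game_tick(): physics, AI, etc. keep running
loader.join()
print(f"{len(loaded)} MB streamed in the background")
```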
 
Gonna be really interesting to see how much faster loading games on Drake will be vs Switch.


Speaking of storage though... would storage on Drake be cheaper than what's being offered on current gen consoles, especially if it's UFS? Microsoft is making permanent price drops on its Seagate expansion storage cards for the current gen consoles: $90 for 512GB, $150 for 1TB, and $280 for 2TB.

Trying to gauge how much 128GB, 256GB, 512GB, and even 1TB would cost on Drake. 1TB seems unlikely imo. 128GB would be disgustingly low, but considering how Nintendo has barely moved the storage size since Wii U days, it's not impossible. 256GB and maybe 512GB seem the most likely, but the latter would cost a bundle. I wonder how much it would cost them in total 🤔. Can't find any prices on UFS storage.
 
For better or worse, I don't think it'll be UFS storage. SD card baybeee.
 
Having just checked BotW using the CPU boost mode, loading a save from a fresh start on the main menu showed only ONE core doing most of the work, hitting a spike close to 100% for a split second but otherwise staying around 45%. The other game cores would spike to around 20% but stuck at less than 5% most of the time. And this is with loading from my microSD, which benchmarked at ~92MB/s.

To me, this makes it seem like the bottleneck is now the microSD, mainly because even the one CPU core doing most of the work isn't being pushed to its max for the majority of the time. If the CPU was the bottleneck before the boost mode update, it doesn't seem to be now. Granted, this is just one game. I need to check others that use boost mode for loading, especially those that released AFTER boost mode, because maybe they utilize the CPU cores better. But if they also follow what BotW was doing....
 
Assuming that Nintendo won't adopt CF Express, eUFS, or whatever exotic memory standard, due to cost, lack of adoption, higher energy usage, or XtraROM cards not being able to match faster storage, I do hope they at least adopt UHS-II for microSD cards. That would even allow them to keep using eMMC, and they could re-use the SWOLED eMMC if they wanted to lower costs (or as a cheaper SKU with reduced storage), as it can achieve 330MB/s read and 200MB/s write. Honestly, it's a guessing game until they reveal it. Hopefully they settle on faster standards.
 
Would storage on Drake be cheaper than what's being offered on current gen consoles, especially if it's UFS?
Easily. It's still probably gonna be mSD cards.
 
I don't think loading times are that much of an issue on the Switch. We're not talking Lego City on Wii U levels of slow, not even Persona 3 FES on the PS2 levels of annoying.
Speaking of Lego City, I do have that digitally on Switch. It may not use boost mode, but something tells me it would not have benefitted from it: having just checked it now, it never exceeded 70% on a single core @ 1GHz while the rest did practically nothing, and most of the time that core sat around 40%. There were times during loading where that core was at 0% while another core was around 24%, before something was loaded onto it.
 
I have a mental schedule of the thread and the cycle it goes through, and I expect next week (edit: this week) to be utter chaos and doom by 2AM.


If you’re late that’s detention.


Following that, which ends at 12PM, we will have lunch out. A BBQ, vegan options are welcome as well.

Then at 2PM, we will discuss launch titles potentially.

That ends until the 10th at 5pm.


And then after that we have a special guest in @hologram to give us a show on why TOTK will be great, then back to the regularly scheduled program of Node discussion until Wednesday morning, 9AM.


Have you all written that down? The normal cycle of the thread?

We also have a “we need more leaks” hour and an “answering your questions” hour on Thursday and Friday, every hour on the hour, for the other visitors.
 
I wonder if Nintendo will simply increase the gap of storage speeds between mediums compared to how it is now (e.g. 500MB/s-1GB/s of UFS, 150-200MB/s SD UHS-I, and whatever they can get cartridges up to).
 
1Gb, as in Gigabit or 1GB, as in 1 GigaBYTE?

As for where it'll land. I don't think it'll hit 1GB, even after decompression, personally. There's too many hurdles and not enough to gain.
Big B, so Gigabyte.
Aside from mentioning developers' requests of Sony, Hermii was referencing what Brainchild has mentioned about his project. For that project, it was a case of streaming assets with/via Nanite due to... rapidly changing environments, was it? IIRC, for his project specifically, 300MB/s was the minimum read speed needed, with 1000MB/s being the worry-free ideal. 500MB/s (I brought that specific number up at the time because that's UFS Card 1.0) was in that middle 'works, but still need to be mindful of limitations' area.
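For context on what those figures mean frame to frame, a quick bit of arithmetic (assuming a 60fps target; the rates are the ones quoted above):

```python
# Fresh data available per 16.7ms frame at each streaming rate.
for mb_s in (100, 300, 500, 1000):
    print(f"{mb_s:>4} MB/s -> {mb_s / 60:5.2f} MB per frame at 60fps")
```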
 
Having just checked BotW using the CPU boost mode... But if they also follow what BotW was doing....
Good analysis, but not sure what your ellipsis is implying? (Genuinely; I hope that doesn't read as passive aggressive.)

DEFLATE decompression is really hard to make multi-threaded, so I wouldn't expect to see solid multi-core utilization. Texture decompression happens potentially multiple times per frame, so those operations might be split over multiple threads, but you would also need sub-frame sampling rates on your utilization tool to see the actual usage. Not sure what the sampling rate is on the various homebrew tools out there.

You're probably right that read speeds are the limiting factor here, but considering that you're getting a microSD up to speeds near its sequential read max, that implies Nintendo has made some special optimization for bulk loading operations, which makes sense. The use case for an FDE is likely to be more focused on the highly bursty and unpredictable IO that happens during gameplay, so it's a bit of an apples to oranges situation.
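To illustrate the single-stream limitation: a DEFLATE decoder's state depends on everything decoded before the current symbol, so one stream is inherently one thread. The generic workaround (not anything Nintendo is confirmed to do) is splitting data into independently compressed chunks, which can then be decompressed in parallel:

```python
from concurrent.futures import ThreadPoolExecutor
import zlib

data = bytes(64 * 1024 * 1024)           # stand-in asset data
chunk_size = 4 * 1024 * 1024
chunks = [zlib.compress(data[i:i + chunk_size])   # independent DEFLATE streams
          for i in range(0, len(data), chunk_size)]

with ThreadPoolExecutor() as pool:       # zlib releases the GIL, so threads scale
    parts = list(pool.map(zlib.decompress, chunks))
assert b"".join(parts) == data
```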
 
The fact it even RAN on Wii U is still amazing to me.
It and Xenoblade X both. Two of my favorite-looking games ever and both on "failed" hardware. WiiU was the little engine that could, I swear.
Absolutely. Breath of the Wild's renderer isn't special, exactly. I would say it's a very modern, but standard, PS4-era engine. That it ran on a machine that was easily outclassed by the PS3 is wild.

Then you consider Metroid Prime: Remastered, which is doing something similar* plus MSAA, and it's doing it twice as fast as Zelda, to hit 60fps.

I believe MP:R is using forward rendering instead of Zelda's deferred rendering.

Forward rendering uses all the same buffers as deferred rendering you see in the video, but where deferred rendering flattens the image before lighting it, forward rendering lights the image first, then flattens it.

Forward rendering would be a performance nightmare in an open-world game, or in a game with lots of moving lights (like the sun), but modernizing a GameCube game is an ideal use case, where forward lighting is simpler and faster. That's likely how MP:R manages to look so good and hit 60fps.
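A loop-structure sketch of that difference, with stubbed-out rasterize/shade helpers (all names here are made up for illustration): forward pays lighting per rasterized fragment, including overdrawn ones, while deferred flattens the scene first and then lights each surviving pixel exactly once:

```python
from collections import namedtuple

Frag = namedtuple("Frag", "xy depth attributes")

def rasterize(obj):
    # Stub: one fragment per object, just enough to show the loop structure.
    return [Frag(xy=obj["xy"], depth=obj["depth"], attributes=obj["albedo"])]

def shade(attrs, light):
    # Stub lighting term: albedo scaled by light intensity.
    return attrs * light

def forward(objects, lights):
    fb, zb = {}, {}
    for obj in objects:
        for f in rasterize(obj):
            if f.depth < zb.get(f.xy, float("inf")):
                zb[f.xy] = f.depth
                fb[f.xy] = sum(shade(f.attributes, L) for L in lights)  # light while drawing
    return fb

def deferred(objects, lights):
    gb, zb = {}, {}
    for obj in objects:                  # pass 1: flatten attributes into a G-buffer
        for f in rasterize(obj):
            if f.depth < zb.get(f.xy, float("inf")):
                zb[f.xy] = f.depth
                gb[f.xy] = f.attributes
    # pass 2: light each surviving pixel exactly once
    return {xy: sum(shade(a, L) for L in lights) for xy, a in gb.items()}

objs = [{"xy": (0, 0), "depth": 1.0, "albedo": 0.8},
        {"xy": (0, 0), "depth": 0.5, "albedo": 0.3}]
print(forward(objs, [1.0, 0.5]))   # same final image, but lit twice due to overdraw
print(deferred(objs, [1.0, 0.5]))  # lit once per pixel
```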
 
Have you considered the FY earnings report on Tuesday in your schedule?
 
I think 100MB/s is absurdly slow even with killer decompression.

If that's what we get, it's not because it's fine.
It's because Nintendo is also absurd 😂
 
Do we have any clues on whether Nintendo might be using something more akin to an XtraROM successor? Maybe a straight XtraROM2 with higher read speeds, if such a thing even exists or is being worked on for the succ?
 
There's fairly little public information to go on about XtraROM in general, and some of the wording on Macronix's site suggests that the form of it used by Nintendo is probably "ASIC XtraROM" (it's the one they say is used by "handheld gaming consoles"), meaning that it's probably semi-custom on top of that. If there are improvements available in the underlying technology (and you'd figure there probably would be after 6 years), I don't think we'd necessarily have a way of knowing before Nintendo started using it.
 
I think you misunderstood when I spoke about the microSD hitting 92MB/s. That was with the benchmark tool via Hekate I presented some time ago, for sequential reads. When random access was taken into account, that number dropped like a brick. Even SSDs aren't immune to it: the NVMe in my gaming laptop, for instance, can hit upwards of 7GB/s, but can drop all the way down to 85MB/s if it is pushing a lot of random access. I recently tested my laptop's drive with 3DMark's Storage benchmark, which does a synthetic test of 3 games loading into their main menus from launch, and these are the results.

Battlefield V - 1130.83MB/s
CoD: BO4 - 932.36MB/s
Overwatch - 465.69MB/s

I realized just now that I can choose which drive to test, so what I'm doing now is testing my microSD. I'll post those results later, as it's going to take a long time going from 7GB/s sequential to 92MB/s sequential.

My point is, games are not really designed with sequential reads in mind, because they are made up of thousands upon thousands of unique pieces of data scattered across the medium: some large (like FMVs), some small, and a lot that are really small. Even the really big files are likely just archives containing a lot of smaller pieces that must be traversed via random access.

In the end, though, they will deal the cards (no pun intended), and we simply have to use what we are given. It may be proprietary. It may be common microSDs. It could even be that REDACTED games must be installed to internal storage, and microSD is just a holding station, like external HDDs on PS5 and Series: you can play the games from older devices off them, but not the games for the newer device.
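For anyone who wants to reproduce the sequential-vs-random comparison on their own card, here's a minimal sketch. The path and block size are placeholders, and note that the OS page cache will flatter repeat runs unless the file is bigger than RAM:

```python
import os
import random
import time

def read_throughput(path, block=4096, sequential=True):
    """MB/s reading the whole file in block-sized pieces, in order or shuffled."""
    size = os.path.getsize(path)
    offsets = list(range(0, size - block, block))
    if not sequential:
        random.shuffle(offsets)          # same bytes, random order
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(block)
    return (len(offsets) * block) / (1024 ** 2) / (time.perf_counter() - start)

# print(read_throughput("/path/to/big/file", sequential=True))
# print(read_throughput("/path/to/big/file", sequential=False))
```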
 
Eh, in the home console space, games actually have been somewhat designed around doing as many sequential reads as possible until pretty recently. Both optical discs and HDDs have much higher penalties for random access than solid state storage, and games have been using various techniques to mitigate that for decades. It's only with the Switch, PS5, and Xbox Series going full solid state that we're starting to see some of these optimizations go away, such as notably smaller file sizes in some cases from no longer duplicating a lot of data so it can be read more sequentially. Obviously the level of optimization varies per game, but it's definitely been a thing.
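A toy illustration of that duplication trick, with made-up asset names: pack each level's assets contiguously, even if shared assets get stored more than once, and every level load becomes one long sequential read at the cost of a bigger image on disc:

```python
shared = {"rock.tex": 4, "tree.mdl": 8}       # sizes in MB, used by multiple levels
unique = {"hero.mdl": 6, "boss.mdl": 12}
levels = {"level1": ["rock.tex", "hero.mdl"],
          "level2": ["rock.tex", "tree.mdl", "boss.mdl"]}

# Solid-state layout: store each asset once, random-access it per level.
dedup_mb = sum(shared.values()) + sum(unique.values())

# HDD/optical-era layout: one sequential pack per level, duplicating shared assets.
packed_mb = sum(shared.get(a, unique.get(a, 0))
                for assets in levels.values() for a in assets)

print(dedup_mb, packed_mb)   # 30 vs 34 MB: bigger image, but each level is one long read
```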
 
Just finished the synthetic tests with my microSD, and they're a bit better than I expected.

Battlefield V - 43.41MB/s
CoD: BO4 - 44.26MB/s
Overwatch - 20.24MB/s

To note, my laptop uses an 11th Gen Intel Core i7-11800H in performance mode while plugged in during these tests, so it's highly unlikely the CPU is the bottleneck here. Using a hardware decompressor for games on microSD in these cases would mainly just relieve the CPU of handling the decompression; it would be unlikely to produce better results, simply because the card itself is the limiting factor.

edit:

I also got around to testing another Switch game that utilizes boost mode for loading: Xenoblade Chronicles 3, which released long after boost mode was implemented, so it wasn't simply tacked on via patch. It looks like it uses the CPU cores better than BotW, and it's a game on cart too. Simply put, while loading to the title screen, one core was pretty much maxed out while the others remained relatively low at around 5%. But when loading my last save (to the Interior City), that same core still hung near 100%, another ranged around 50%, and the third of the three cores went no higher than 22%.

I need to sleep.
 
Following that, which ends at 12PM, we will have lunch out. A BBQ, vegan options are welcome as well.
Whoever brings potato salad with raisins in it; we fighting!
 

I'm a little bit huffy that there's no part of "Bonejack coming in with dumb jokes" in that schedule, not gonna lie.

;_;
 
They're not going to Gamescom with literally nothing, Pikmin 4 having been released by then.

Whatever that guy is saying, we're getting announcements before Gamescom.
 
Perfect.
lmaoo


—————

There is no software announced for 2H. By now, we usually know what we'll get. If they haven't done a Direct, they'll do it soon. We won't go past Pikmin 4 without a Direct.
 


Quake's BVH is 6MB while Cyberpunk's is 400MB. I think there is an interesting point about optimizing for ray tracing here, and about why Epic chose to use SDF proxies for software RT. Memory was touted as the reason for the Series S missing out on RT, since the combination of render targets + BVH is too much for the 8GB of usable memory in the SS, despite it having the horsepower for RT.
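The gap makes sense from rough BVH arithmetic alone. A sketch where the node size and triangle counts are illustrative assumptions, not actual engine numbers:

```python
def bvh_mb(triangles, bytes_per_node=32):
    """Upper-bound memory for a binary BVH with one triangle per leaf."""
    nodes = 2 * triangles - 1
    return nodes * bytes_per_node / (1024 ** 2)

print(f"{bvh_mb(100_000):.0f} MB")     # Quake-scale geometry -> ~6 MB
print(f"{bvh_mb(7_000_000):.0f} MB")   # city-scale geometry -> ~430 MB
```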
 
I know the gaming community LOVES to overshoot the capabilities of PS hardware in general, especially while lowballing Nintendo hardware in the space of a single breath... but the Wii U was not "easily outclassed by the PS3". Not in any real-life timeline. Perhaps in a bizarro one, but under proper scrutiny, that doesn't hold up at all.
 
Tom Henderson saying he doesn't expect Nintendo to show anything in May/June literally means nothing.

Seriously, I expected a Direct if E3 was still going ahead, as that is something they used to do with Treehouse and everything, but that ceased when E3 ended completely for this year and every year to come (dead and buried, guys, dead and buried).

July is when I expect a Direct for this. This doesn't rule anything out in the slightest, as July is very much a possibility after Pikmin 4.

This only gives more credence to a Holiday 2023 release in my view. They are waiting until the last possible moment to do anything, letting their entire H1 catalogue carry the first half of the year and not whispering a word about H2.

Don't expect anything from anywhere else either; Nintendo will engage in nothing but wordplay with everyone, even shareholders, neither confirming nor deconfirming anything about a new console until the time is right. They won't show their hand to anybody until they decide to, unexpectedly.
 
Indeed, you could argue the PS3 had greater CPU capabilities, but the Wii U had a clearly more modern, more capable GPU, so neither machine easily outclassed the other.
 
I don't think anyone really pushed the Cell CPU to its theoretical numbers. It was too arcane an architecture for most to make use of it.
 