Why was the Wii U so weak anyway?
It came out a year before the PS4, and it wasn't much cheaper.
It used a CPU that Nintendo dragged forward from 1999: an outdated microarchitecture from the PowerPC 750 line, the same family as the GameCube's and Wii's CPUs. Although they modernized it with some features, it was still far too old.
Those cores also had a poor cache setup, with small, asymmetric caches split across them.
And there were only 3 cores. I think it had an extra ARM core solely for OS/security background tasks, but I could be mistaken. Otherwise, I think it was roughly 2 cores for games and 1 for the OS.
The GPU was based on an AMD Radeon architecture from 2008/9, and it only had 160 shaders or so (die-shot analyses argued over whether it was 160 or 320). It was also clocked fairly low, around 550 MHz.
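To put the GPU gap in perspective, here's a back-of-the-envelope FP32 throughput estimate (shaders × 2 ops per clock for a fused multiply-add × clock speed). The shader counts and clocks below are the commonly reported figures, not official spec sheets:

```python
# Rough peak FP32 throughput: shaders * 2 ops/clock (FMA) * clock in GHz = GFLOPS.
# Shader counts and clocks are commonly reported figures, not official specs.
def gflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz

print(f"Wii U (Latte): {gflops(160, 0.550):>6.0f} GFLOPS")   # ~176
print(f"Xbox One:      {gflops(768, 0.853):>6.0f} GFLOPS")   # ~1310
print(f"PS4:           {gflops(1152, 0.800):>6.0f} GFLOPS")  # ~1843
```

Even if the 320-shader reading of the die is the right one, the Wii U still lands at roughly a fifth of the PS4.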
Not only that, but it had 2GB of memory, split 1GB for games and 1GB reserved for the OS. That memory was also really slow: about 12.8GB/s from the DDR3.
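That 12.8GB/s number falls straight out of the usual peak-bandwidth formula (transfer rate × bus width), assuming the commonly reported DDR3-1600 on a 64-bit bus:

```python
# Peak DRAM bandwidth: transfers per second * bus width in bytes.
# Wii U figures assumed here: DDR3-1600 (1600 MT/s) on a 64-bit bus.
def bandwidth_gb_s(mt_per_s: float, bus_bits: int) -> float:
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

print(f"Wii U DDR3: {bandwidth_gb_s(1600, 64):.1f} GB/s")  # 12.8 GB/s
```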
The PS4 and Xbox One, by comparison, had more modern and much larger GPUs, much better CPUs, and more RAM at their disposal.
They eventually dedicated about 1.5 cores to the operating system and 6.5 cores to games (at launch it was a stricter 2/6 split, relaxed by later SDK updates). They also had 5GB of memory available to games out of the 8GB pool.
The Xbox One and PS4 went different routes on memory, though. MS went with a smaller GPU but added 32MB of fast on-die eSRAM, a scratchpad sized to be enough for a 1080p framebuffer. Microsoft quoted it at over 200GB/s (the exact figure differs by model and clock), but that's a bidirectional number: roughly 100GB/s in each direction, reads and writes added together.
Its larger pool of RAM was plain DDR3, which only nets about 68GB/s of memory bandwidth.
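Working those Xbox One numbers out, again with commonly cited (not official-spec-sheet) figures of DDR3-2133 on a 256-bit bus and roughly 109GB/s per direction for the eSRAM at the retail 853 MHz clock:

```python
# Peak bandwidth = transfers per second * bus width in bytes.
ddr3 = 2133e6 * (256 / 8) / 1e9               # DDR3-2133 on a 256-bit bus
print(f"XB1 DDR3:  {ddr3:.1f} GB/s")          # ~68.3 GB/s

# The ~200GB/s eSRAM headline adds simultaneous reads and writes together.
esram_one_way = 109.0                         # GB/s per direction, commonly cited
print(f"XB1 eSRAM: ~{2 * esram_one_way:.0f} GB/s peak (read + write combined)")
```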
And I also remember reading that, after some time, devs dropped using the 32MB of eSRAM on the One/One S because it made development more complex and deadlines needed to be met. Pretty much only exclusive software kept making real use of it.
Jumping from that, the PS4 also had 8GB, but of GDDR5, delivering a whopping ~176GB/s of memory bandwidth at the time. Higher latency, but it was worth it. Not only that, but the GPU was larger too: 1152 shaders.
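The same formula gives the PS4's headline figure, assuming the widely reported GDDR5 at 5500 MT/s effective on a 256-bit bus:

```python
# Same peak-bandwidth formula: transfers per second * bus width in bytes.
gddr5 = 5500e6 * (256 / 8) / 1e9       # GDDR5 at 5500 MT/s on a 256-bit bus
print(f"PS4 GDDR5: {gddr5:.0f} GB/s")  # 176 GB/s
```

So that's a single unified pool at 176GB/s versus the Wii U's 12.8GB/s main memory: more than a 13x gap before any on-die scratchpads even enter the picture.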
It was the leading platform, and it showed: the PS4 was the 900-1080p console while the XB1 was the 720-900p console.
Both of these also clocked their GPUs higher than the Wii U's, had more CPU cores, and used an easier-to-understand microarchitecture that simplified development. The familiar, PC-esque x86 environment let developers stretch their legs in a way the Wii U never did.
And finally, the PS4 and XB1 had much better developer tools than their previous-gen counterparts, making the process of extracting the most from the hardware relatively simple and straightforward.
Nintendo opted to keep the Wii's architecture for the Wii U to make backwards compatibility as close to perfect as possible, but that obsession with perfect BC came at the cost of hampering the platform and its potential.
Meanwhile, Sony and MS took the risk of jumping ship from the Power ISA (and whatever GPU would have been paired with it), since the cost was too high and the whole process would have been too complex.
Intel had good CPUs but poor GPUs that it didn't really care about. Nvidia had good GPUs but nothing on the x86 CPU side. AMD was the only one with both a competitive GPU and CPU at its disposal.
So it was a combination of factors that resulted in the Wii U being a lot weaker and the PS4/XB1 a lot stronger, with cost being the driving force behind it all.
Now that Nvidia provides a CPU+GPU in a single package, things are much simpler, even if the CPU cores aren't necessarily Nvidia's own design but ARM's, which Nvidia licenses. Nintendo benefits from that greatly.
Nintendo has avoided x86 for reasons I don't know.
However, if they do switch down the road, their best bet is honestly AMD, which could give them a good GPU and CPU deal while keeping it efficient and small.