Can I ask a technical question (excuse me for the lack of knowledge)? Which is more crucial: memory size or memory bandwidth? If the new console could have 8 GB with more bandwidth, or 12 GB with less (if this is possible ofc), which would be the more ideal scenario? (Sorry, but I cannot understand the actual usage of each of them.)
Oh hey, I got the time to put up a lengthier answer for the readers also wondering about this. Or at least, my interpretation of it.
Broadly speaking, memory size is 'amount of stuff I can hold in RAM'.
Memory bandwidth is 'how much data can move into/out of RAM in some standard unit of time (usually 'per second')'.
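To make that distinction concrete, a quick back-of-the-envelope in Python (the numbers are invented for illustration, not any particular console's specs):

```python
# Size vs bandwidth in one line each, with totally made-up numbers:
ram_size_gb = 16          # size: how much stuff you can hold at once
bandwidth_gb_s = 448      # bandwidth: how much you can move per second

# Best case, touching every byte in RAM exactly once takes:
print(f"{ram_size_gb / bandwidth_gb_s * 1000:.0f} ms")  # ~36 ms
```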
CPU and GPU cores need data and instructions to do work. These things (data and instructions) need to be held somewhere.
You may occasionally see reference to 'the memory hierarchy'; that more or less just describes the order in which you look for stuff. For this post, most of the details of this hierarchy don't matter. Just remember that you usually dig around in RAM right before internal storage, and generally speaking, you really want to find the stuff you're looking for before you throw your hands up in the air and sigh, "ugh, time to go through storage". It needs to be hammered in: storage is much slower than RAM (1-2 orders of magnitude in bandwidth, I'd say?), and significantly less responsive too (NAND flash-based storage is 2-3 orders of magnitude worse in latency than DRAM).
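If you want rough numbers for that gap, here's a quick sketch (these are ballpark figures from memory, not measurements):

```python
# Very rough, order-of-magnitude ballparks -- don't quote me on exact values:
dram_latency_ns = 100         # DRAM access time, ballpark
nvme_latency_ns = 100_000     # NVMe flash read, ballpark (~100 microseconds)

dram_bandwidth_gbs = 448      # GDDR-class RAM, ballpark
nvme_bandwidth_gbs = 5        # fast NVMe SSD, ballpark

print(f"latency gap:   ~{nvme_latency_ns / dram_latency_ns:.0f}x")        # ~1000x
print(f"bandwidth gap: ~{dram_bandwidth_gbs / nvme_bandwidth_gbs:.0f}x")  # ~90x
```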
Memory size is about addressing the question: "Do I have the capacity to hold what I think I'll be working with soon-ish?" Or: "Please gods, can I hold enough crap so that I can put off reading from storage until a far, far more convenient time, seriously."
Memory bandwidth: "What I want to grab is in RAM, hurray! Alright, now how quickly can I retrieve it?"
Again, this is something that only kicks in after you answer yes to the previous question, i.e. the thing I want is actually sitting in RAM.
-
So, digression:
My perspective on this aspect of computing is that you can divide a CPU/GPU core's time into two categories: the core is working, and the core is idling/not working. It turns out that a lot of the shit in hardware design is actually about reducing the latter. We want to minimize idle time as much as possible.
IIRC, the four main aspects of 'architecture' that get discussed are 'wider'/'deeper'/'smarter'/'memory sub-system'.
'Wider' - I think it's the only one of the four that deals with the 'working' category of time. It's the 'do more at once' thing.
'Deeper' - my understanding of this one is weak, admittedly. I think it's about pipelining, i.e. queuing up more stuff in flight?
'Smarter' - the selection of what data/instructions you want to pull (prediction, prefetching, that sort of thing)
Memory sub-system - alright, you've determined what to retrieve. Now go get that stuff.
...think of something like, say, Homer Simpson in hell having donuts shoved down his throat from a conveyor belt.
'Wider' - how much donut can he eat at once
'Deeper' - the conveyor belt
'Smarter' - your selection of donuts
Memory sub-system - how quickly can you go find and get those donuts, then toss them onto the conveyor belt
-
So, going back to memory bandwidth. Bandwidth is not work, directly. Bandwidth facilitates work.
So, this is my perspective:
I think that your bandwidth needs scale with the amount of work you're trying to do.
Sometimes you see us referring to bandwidth:compute ratio, right? X amount of GB/s per Y amount of TFLOPS.
FLOPS are floating point operations per second. A measurement of computation/work; makes sense, right? But keep in mind that the peak figure is calculated as basically X amount of hardware times Y clock frequency, ignoring minor details. The key here is that there's an implicit assumption baked in: that all of the clock cycles are spent working. That no cycles are wasted just sitting around twiddling your thumbs because you're waiting on data/instructions. So we know that we need adequate bandwidth to feed that hardware.
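If you want to see where those TFLOPS numbers come from, here's a rough sketch (all numbers made up for illustration; the variable names are just my stand-ins, not anyone's real spec sheet):

```python
# Back-of-the-envelope peak FLOPS, with made-up GPU-ish numbers
# (illustrative only, not any particular console's specs).

fp32_alus = 2304        # amount of hardware ("X")
ops_per_cycle = 2       # a fused multiply-add counts as 2 ops per ALU per cycle
clock_hz = 1.825e9      # clock frequency ("Y")

peak_flops = fp32_alus * ops_per_cycle * clock_hz
print(f"peak: {peak_flops / 1e12:.2f} TFLOPS")  # ~8.41 TFLOPS

# The bandwidth:compute ratio people quote is just GB/s per TFLOPS:
bandwidth_gbs = 448     # also made up
print(f"ratio: {bandwidth_gbs / (peak_flops / 1e12):.1f} GB/s per TFLOPS")  # ~53.3
```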
But conceptually, there is an upper limit, right? You cannot go above working 100% of the time. Additional bandwidth past that point doesn't do anything.
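That ceiling idea has a name, by the way - it's basically the 'roofline model', if you want to look it up. A tiny sketch, again with invented numbers:

```python
# Attainable throughput is capped by BOTH raw compute and how fast memory
# can feed the cores -- this is the gist of the 'roofline model'.

peak_tflops = 10.0      # compute ceiling (made-up number)
bandwidth_tbs = 0.448   # memory bandwidth in TB/s (made-up number)

def attainable_tflops(flops_per_byte: float) -> float:
    """flops_per_byte: how much work you squeeze out of each byte fetched."""
    return min(peak_tflops, bandwidth_tbs * flops_per_byte)

for intensity in (4, 16, 64):
    print(f"{intensity:>2} flops/byte -> {attainable_tflops(intensity):.1f} TFLOPS")
# 4 and 16 flops/byte come out bandwidth-bound; past ~22 flops/byte here you
# hit the compute ceiling, and extra bandwidth stops buying you anything.
```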
So, when I express, or smash 'Yeah!' for, posts that basically say 'more bandwidth plz!', I'm actually implying an expectation towards the upper end on the capability for work. That is, I think that CPU & GPU grunt will be on the upper end of our expectations, and thus, I think that memory bandwidth needs will scale accordingly.
But that also means that you can absolutely get away with less bandwidth if your raw working/computing potential is lower in the first place.
Oh, but I didn't actually answer the question, did I? I didn't say which one is more important.
That's project-dependent, isn't it?
The most prominent example floating around right now is Baldur's Gate III. Very specifically, Larian is having trouble getting the split-screen co-op mode to work as they'd like on the Series S, and RAM quantity is cited as the constraint, I believe.
But it makes sense, right? Local split-screen co-op effectively asks, "there's a lot of stuff we need to hold at a given moment". Without having played the game, I'm assuming that the local players aren't necessarily tethered to each other? Presumably, you can have player 1 in one area and player 2 in some completely separate area. So then, you would need to hold the stuff for what player 1 can see and muck around with, AND the same for player 2 as well.
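A toy back-of-the-envelope of that, with completely invented numbers (I have no idea what Larian's actual budgets look like):

```python
# Toy illustration of why split-screen roughly doubles the working set
# (all numbers are invented for the sake of the example):
per_player_view_gb = 4.5   # assets streamed in around one player's location
shared_gb = 1.0            # stuff both views share (game systems, UI, audio)

single_player = per_player_view_gb + shared_gb
split_screen = 2 * per_player_view_gb + shared_gb  # two separate areas resident
print(single_player, split_screen)  # 5.5 vs 10.0 GB -- a size problem, not bandwidth
```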
So yea, it all goes back to, 'so, what are you trying to do?'