I'm sure it's been calculated somewhere in the last 1570 pages, but do you know how fast (in ms) the Series S, X, and PS5 can do FSR 2? Would it be markedly slower than Drake can do DLSS?
I did some ballparking on it using
these numbers from AMD and similar numbers from Nvidia. On paper, FSR2 is slower than DLSS on machines of comparable performance. In practice, AMD has been really aggressive in supporting partners, even customizing FSR for individual games, so benchmarks might make FSR2 look a little better than the tech is "generically".
There is a decent amount of ambiguity in scaling the official FSR and DLSS numbers down to machines as small as the Series S and Drake, but I would say roughly the same? Optimistic Drake number: ~2 ms; pessimistic: ~6 ms. Optimistic Series S number: ~3 ms; pessimistic: ~8 ms.
Which is, to be clear, a big Drake win. Imagine if DLSS didn't exist, and Drake was forced to use FSR. It would almost certainly run slower than Series S.
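For what it's worth, the ballparking itself is just linear scaling of a published pass cost by relative GPU throughput. Here's a minimal sketch of that arithmetic; the 1 ms / 20 TFLOPS reference point and the 4 / 3 TFLOPS console figures are placeholder assumptions, not official numbers, and since DLSS mostly runs on the tensor cores, this naive FP32 scaling really only applies to the hypothetical "Drake runs FSR2" case:

```python
# Minimal sketch of the ballparking: assume an upscaler pass whose cost is
# dominated by compute scales inversely with FP32 throughput.
# All constants below are illustrative placeholders, not official figures.

def scale_pass_time(ref_ms: float, ref_tflops: float, target_tflops: float) -> float:
    """Scale a reference pass time to a slower GPU, assuming linear scaling."""
    return ref_ms * (ref_tflops / target_tflops)

REF_MS = 1.0       # assumed FSR2-class pass cost on the reference GPU
REF_TFLOPS = 20.0  # assumed FP32 throughput of that reference desktop GPU

for name, tflops in [("Series S", 4.0), ("Drake (speculated)", 3.0)]:
    est = scale_pass_time(REF_MS, REF_TFLOPS, tflops)
    print(f"{name}: ~{est:.1f} ms")  # Series S ~5.0 ms, Drake ~6.7 ms
```

Under those placeholder inputs, the hypothetical Drake-on-FSR2 figure lands above the Series S one, which is exactly the point: running DLSS on the tensor cores is what flips that comparison in Drake's favor.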
@oldpuck @ILikeFeet @Thraktor and others, how do you guys know so much about hardware stuff like SoCs, chiplets, etc.? Is it your job? I genuinely find the area so fascinating and am interested to hear how you guys got into it. Maybe PC gaming? Is one of you guys secretly Mr. Furukawa in disguise?
Honestly, a decent chunk of it is me becoming obsessed with gaming again after the Switch, and learning from those two, plus everyone else in this thread.
But the rest is somewhat job-related. I have, in the past, been a programmer and later a performance engineer in High Performance Computing. When I started hanging out in the thread I didn't know jack about consoles, which are their own weird world, but I had a little experience with AI and GPU compute through work. So when folks started speculating about the theoretical performance of an Orin-based console, I knew I had the background to at least read the docs and analyze the benchmarks.
If there is a technical skill I think I bring to the table, it's that last one*. My hardware knowledge mostly comes from trying to figure out why the software doesn't scale the way it looks like it should, then either finding the answer in the docs or getting someone here to explain it to me.
*Not that I haven't been wrong: by over-extrapolating, I convinced myself for a long time that 8nm was viable, power-wise.