I was thinking about this, and I'm actually not sure if Orin's system-level cache (which sits above both the CPU and GPU) is really necessary for Drake. Generally you would want an SLC like that if there's a lot of data going back and forth between the CPU and GPU, which would be the case for Orin, but on a games console SoC the really heavy bandwidth consumers are framebuffer objects and, to a lesser extent, textures, which are only touched by the GPU. So it may be both more effective and simpler to just increase the size of the GPU L2, rather than adding an SLC on top of it.
Nvidia's framebuffer partition sizes vary by memory type; the sizes for LPDDR5 differ from those for GDDR6 or HBM2E. But we know from Nvidia's specs that Orin has a 256-bit memory interface, and Drake has half of Orin's framebuffer partitions, so Drake should have a 128-bit memory interface.
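The inference here is just proportional scaling, which can be sketched as follows. The 256-bit Orin interface is from Nvidia's specs; the 2:1 partition ratio is the claim above, and the partition counts passed in are used purely as a ratio, not as confirmed figures:

```python
# Assumption: for a given memory type, bus width scales linearly with
# the number of framebuffer partitions.

def bus_width_bits(reference_width: int, reference_partitions: int,
                   partitions: int) -> int:
    """Scale a known bus width by the ratio of framebuffer partitions."""
    return reference_width * partitions // reference_partitions

# Drake with half of Orin's partitions (2:1 used purely as a ratio):
print(bus_width_bits(256, 2, 1))  # → 128
```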
The capacity of the memory shouldn't impact performance; what matters is the speed, width, and type of the interface. I calculated a little while back that LPDDR5 probably consumes around 4 picojoules per bit (pJ/b), based on manufacturers' claims about LPDDR4X consumption and the efficiency improvements of LPDDR5, but it's only a rough estimate and likely varies from manufacturer to manufacturer. If it is 4 pJ/b, though, then the power consumption at 102.4GB/s would be just under 3.3W. For LPDDR5X,
Samsung claims 20% lower power consumption than LPDDR5, so we could expect about 3.2 pJ/b, which would put the maximum 136GB/s at just under 3.5W. For comparison, I'd estimate the LPDDR4 in the original Switch consumed between 1.5W and 2W for its 25.6GB/s of bandwidth, so it's an increase over the original model either way. My guess is that the memory will be clocked quite a bit lower in portable mode to accommodate this.
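To make the arithmetic explicit: power is energy-per-bit times bits-per-second. The 4 pJ/b and 3.2 pJ/b figures are my own rough estimates as above, not vendor specs:

```python
# Rough DRAM interface power: energy per bit (pJ/b) times bandwidth.
# 1 GB/s = 8e9 bits/s; 1 pJ = 1e-12 J.

def dram_power_w(pj_per_bit: float, bandwidth_gb_s: float) -> float:
    """Estimated power in watts for a given energy/bit and bandwidth."""
    bits_per_second = bandwidth_gb_s * 1e9 * 8
    return pj_per_bit * 1e-12 * bits_per_second

print(dram_power_w(4.0, 102.4))  # LPDDR5 estimate: ~3.28 W
print(dram_power_w(3.2, 136.0))  # LPDDR5X estimate: ~3.48 W
```

Note this only covers the memory interface at full utilization; real-world draw depends on access patterns and how often the bus actually toggles.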
My default assumption is definitely LPDDR5; the only reason I'm considering LPDDR5X is that Nintendo used LPDDR4X on the Mariko Switch models when they could just as well have stuck with LPDDR4, so they presumably value the extra bit of power efficiency. I do wonder if there would even need to be any changes on the software side between LPDDR5 and 5X (or 4 and 4X), though, as it's largely just an increased clock, and things like channel size, bank grouping, etc. stay the same. The hardware memory controller would definitely have to be updated, but I suspect the change may be largely invisible on the software side, aside from the appearance of additional clock speeds.
Incidentally, T214 seems to have been more commonly referred to as T210B01 inside Nvidia, so you may have more luck searching for it with that code.