
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

This is so confusing. Now I don't know whether 18.2GB is the real file size for Tears of the Kingdom or 16GB is the actual file size of the game (16GB seems more likely).
My guess is that either one of the pages is currently inaccurate, or eShop versions from different regions come with different languages.
 
The Japanese Nintendo website says 16 GB. The US Nintendo website says 18.2 GB, though.

The Japanese version may include Japanese voices only.
Other versions may contain voices for multiple languages.

And I believe those sizes only reflect what you will be downloading if you buy the game from the eShop and may not reflect what's on the physical cart.
The cart may require a download so they can use a 16GB cart, but I don't think we have seen that mandatory download notice on the covers.
 
It becomes a waste if storage is faster than it, but becomes a bottleneck if storage is quite a bit slower.
I hate to contradict you but there isn’t a world where decompression becomes a bottleneck because of slow storage.

If your storage were replaced with physical people who walked in and out of the room with the data written on scrolls, it would be unimaginably slow. Now, would you prefer to read 100MB of uncompressed data that way, or 10MB of compressed data that you then have to spend a couple seconds decompressing?

This is not an extreme example; deep backup systems are built this way, with tape systems that have to be read in order and physically carried in and out of vaults. And the solution is to use truly ridiculous levels of compression, because that medium is so slow that even the slowest decompression is faster than it is.
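To put rough numbers on that analogy (every figure here is made up purely for illustration, not a measurement of any real system), the trade-off looks something like this:

Code:
# Hypothetical numbers for the scroll/tape analogy: reading less data and then
# decompressing it wins whenever the medium is slow enough.
medium_speed_mb_s = 1.0        # hypothetical "people carrying scrolls" speed
decompress_speed_mb_s = 50.0   # hypothetical, deliberately slow decompressor

uncompressed_mb = 100.0
compressed_mb = 10.0           # the 10:1 example from above

time_uncompressed = uncompressed_mb / medium_speed_mb_s
time_compressed = compressed_mb / medium_speed_mb_s + uncompressed_mb / decompress_speed_mb_s

print(f"Read 100MB uncompressed: {time_uncompressed:.0f} s")  # 100 s
print(f"Read 10MB + decompress:  {time_compressed:.0f} s")    # 12 s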

Slow storage can be a bottleneck itself, of course. But slow storage doesn't make decompression hardware into a bottleneck. Decompression hardware makes slow storage less of a bottleneck. This is why Microsoft went with a slower SSD and decompression hardware and Sony went with just a fast SSD.

The existence of the FDE in T239 is not an indication that Nintendo will be pushing the performance of the internal storage. If anything it indicates the opposite.

Compression makes sense with slow storage mediums, but now that we have massive games, compression has become critical beyond transfer speeds.

The microSD slot in the Switch is certainly able to handle 90MB/s+, but we don't see those results because of software decompression being the bottleneck in asset loading. If Drake's hardware decompression can't handle more than 100MB/s input, then microSD would be fine, though disappointing.
Oh, I think I see what you’re getting at.

The speed of decompression isn't fixed by the amount of data read; it's fixed by the complexity of the compression. You can use the exact same DEFLATE compression algorithm and tune it so that it compresses data really small, but takes longer to compress and decompress. And you can tune it so it compresses data less well, but compresses and decompresses faster.
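As an aside, Python's zlib module exposes exactly that trade-off as a DEFLATE compression level; a quick sketch (toy data, so the exact ratios and timings don't mean anything for real game assets):

Code:
import time
import zlib

# Highly repetitive toy data; real game assets compress very differently.
data = b"grass tile, grass tile, dirt tile, water tile, " * 20000

for level in (1, 6, 9):  # same DEFLATE algorithm, different speed/size trade-offs
    start = time.perf_counter()
    packed = zlib.compress(data, level)
    elapsed_ms = (time.perf_counter() - start) * 1000
    ratio = len(data) / len(packed)
    print(f"level {level}: {ratio:5.1f}x smaller, compressed in {elapsed_ms:.1f} ms")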

If the FDE can "only" read 100MB/s of input, but it handles 10x compression ratios (the theoretical max of DEFLATE) in real time, then that's 1GB/s of effective bandwidth. If the FDE can read 200MB/s of input, but only gets 4x compression (a pretty common ratio for text), then that's 800MB/s of effective bandwidth.
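The arithmetic behind those two examples is just input rate times compression ratio; a sketch, not a claim about the real FDE's throughput:

Code:
def effective_bandwidth_mb_s(input_mb_s: float, compression_ratio: float) -> float:
    # Decompressed output rate when the decompressor ingests input_mb_s of compressed data.
    return input_mb_s * compression_ratio

print(effective_bandwidth_mb_s(100, 10))  # 1000 MB/s = 1 GB/s effective
print(effective_bandwidth_mb_s(200, 4))   #  800 MB/s effective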

Because of this, hardware decompression will never have trouble being fed by storage.

I think the worry you're getting at is that because games have to compress, simply because of how large they are, decompression overall becomes a bottleneck, even in places where theoretically storage bandwidth is sufficient. While that certainly occurs, the FDE doesn't exacerbate that problem. The win of the FDE is that you only have so much silicon to work with, only so much space on a die. Because all games need decompression, dedicated hardware is basically like getting extra CPU cores, without having to spend CPU core space. The FDE could be slower than a CPU and still be a win in that case, but there is plenty of decompression hardware out there to indicate that it's likely to be an order of magnitude faster than the CPU.

The short version: games targeting ~2TFLOPS already exist, and are built on 80MB/s storage, well below the performance of GameCards, and because of that, I see no reason for Nintendo to require installation to faster eMMC/NVMe. The FDE doesn't indicate otherwise; if anything, I expect it to indicate the opposite, that Nintendo doesn't intend to move to NVMe.
 
the only option for that is TSMC's 3nm. that'd be a hell of a performance boost though
Nvidia using TSMC's 3 nm** process node is not guaranteed. kopite7kimi mentioned that Blackwell won't use TSMC's 3 nm** process node. (I assume that's because TSMC's N3E process node probably won't be available to non-Apple customers until 2024 at the earliest, and that Apple probably has temporary exclusivity to TSMC's N3E process node, assuming the rumour that Apple uses TSMC's N3E process node to fabricate the Apple A17 is true.)

** → a marketing nomenclature used by all foundry companies
 

I feel like people are arguing that "it's guaranteed to be fine, no worse than PS4 and probably better!" and that is... kind of comically missing the extremely simple point I am making?

I value PS5 tier load times a lot and would sacrifice a lot of graphical power for that and I'm not sure PS5 tier load times can be achieved with the extremely slow transfer speed of ~100 MB/s.
 
@Cuzizkool
Something worth keeping in mind is what the memory bandwidth situation looks like in the near future. Barring some neat developments in on-die memory or something, I'm currently expecting LPDDR to still be the way to go for the time being.
In the present, as we've gone over before, the best would be LPDDR5X-8533 MT/s by the end of the year.
SK Hynix has developed an enhancement that they've named LPDDR5T; that goes up to 9600 MT/s (so 1/8th more than full speed LPDDR5X). They've stated that they're trying to get this standardized, so hey, it's possible that in 2025, LPDDR5T is the best option.
Beyond that, the next major step should be LPDDR6. A slide from Samsung's presentation last October indicated 2026 as the target for that. I don't recall an official statement of the planned max speed, but based on the historical trend, I'd expect LPDDR6 to eventually reach 12,800 MT/s (double regular LPDDR5's 6,400 MT/s). But also remember that memory doesn't hit full speed in the first year, so I wouldn't expect 12,800 MT/s in 2026 just yet.
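For a rough sense of what those data rates mean in bandwidth terms, here's the standard conversion, assuming (purely for illustration, not as a spec claim) a 128-bit memory bus:

Code:
# Peak bandwidth = data rate (MT/s) x bus width (bytes), assuming a 128-bit bus.
BUS_WIDTH_BITS = 128  # assumption for illustration only

def peak_bandwidth_gb_s(mt_s: int, bus_width_bits: int = BUS_WIDTH_BITS) -> float:
    return mt_s * (bus_width_bits / 8) / 1000

for name, rate in [("LPDDR5", 6400), ("LPDDR5X", 8533), ("LPDDR5T", 9600), ("LPDDR6 (guess)", 12800)]:
    print(f"{name:15s} {rate:6d} MT/s -> {peak_bandwidth_gb_s(rate):6.1f} GB/s")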
 
It's worth remembering that storage rarely appears to be the actual bottleneck on Switch in terms of loading. A stronger CPU and the FDE should provide significant improvements to the experience, even with UHS-I SD cards. Something faster would obviously be better, but there are some significant challenges to delivering that in a portable form factor at a reasonable price right now and UHS-I is still a substantially better baseline than a 5400RPM HDD.
 
Yes, the loading situation is a good bit better than the PS4 most likely (assuming the Switch 2 has a decompressor)

It just feels like it would be pretty easy to make it nearly as good as the Xbox Series with just some less good components elsewhere and mandatory installs.
 
Mandatory installs would help nobody. Certainly not Nintendo.
 
We aren't going to get PS5 file sizes on Drake anyway, so the slower storage speeds will have less of an impact. You'll get your fast loading as long as it's appropriately designed for.
 
Mandatory installs is a solution to the physical medium being slow, which we don't really have any indication will even be a problem. Nintendo always updates their cart tech between generations, and there's no reason to think this time will be an exception. The bigger problem is the storage situation devolving fully into a Wii fridge situation, which PS5/XS are already teetering dangerously close to.

Besides, the big impressive sequential numbers that PS5/XS boast aren't even where a lot of the perceived benefit is coming from. Any half decent solid state storage is going to be a lot faster than an HDD for running a game off of.
 
Chip designer Arm is building its own semiconductor to showcase the capabilities of its products, as the SoftBank-owned group seeks to attract new customers and fuel growth ahead of a blockbuster IPO later this year.

Arm will team up with manufacturing partners to develop the new semiconductor, according to people briefed on the move who describe it as the most advanced chipmaking effort the Cambridge-headquartered group has ever embarked upon.

The effort comes just as SoftBank seeks to drive up Arm's profits and attract investors to a planned listing on New York’s Nasdaq exchange.

The company traditionally sells its blueprint designs to chip manufacturers, rather than getting involved directly in the development and production of semiconductors itself. The hope is that the prototype will allow it to demonstrate the power and capabilities of its designs to the wider market.

Arm has previously built some test chips with partners including Samsung and Taiwan Semiconductor Manufacturing Co, largely aimed at enabling software developers to gain familiarity with new products.
However, multiple industry executives told the FT that its newest chip — on which it started work in the past six months — is "more advanced" than ever before. Arm has also formed a bigger team that will execute the effort and is targeting the product at chip manufacturers more than software developers, they said.

The company has built a new "solutions engineering" team that will lead the development of these prototype chips for mobile devices, laptops and other electronics, according to people briefed on the move.

The solutions engineering arm is led by chip industry veteran Kevork Kechichian, who joined Arm's top executive team in February. He has held previous roles at chipmakers NXP Semiconductors and Qualcomm, overseeing the development of the San Diego-based company's flagship Snapdragon chip.

The team will also expand on Arm's existing efforts to enhance the performance and security of designs, as well as bolster developer access to its products.

Rumblings about Arm's chipmaking moves have stoked fears in the semiconductor industry that if it makes a good enough chip, it could seek to sell it in the future and thereby become a competitor to some of its biggest customers, such as MediaTek or Qualcomm.
People close to Arm insist there are no plans to sell or license the product and that it is only working on a prototype. Arm declined to comment.

Any move to build chips for wider commercial sale would undermine Arm's position as the "Switzerland" of the semiconductor industry, selling designs to almost all mobile device chipmakers while not directly competing with them.

Its neutral model has led to its products being found in more than 95 per cent of smartphones, with customers including Qualcomm, MediaTek and Apple.

"Working on intellectual property is one thing but really designing and working with production partners to turn those efforts into physical chips is a totally different arena. It's more capital intensive," a former Arm executive with knowledge of the effort told the FT. "At some point in the future [Arm] will definitely need returns to justify that massive investment."

SoftBank's push for growth has led Arm to seek out changes to its commercial practices. The chip designer has sought to increase prices and overhaul its business model by charging royalties to device-makers rather than some of its chipmaker customers, the FT reported last month.
Arm acknowledged in its annual report published last week that a principal risk to its business was the "significant concentration" in its customer base. Arm's top 20 customers accounted for 86 per cent of revenues last year, so "the loss of a small number of key customers could significantly impact the group's growth".

That warning came with Arm currently embroiled in a bitter legal dispute with Qualcomm, one of its largest customers, after it accused the chipmaker of using some of its designs without having procured the necessary licence.

There are also widespread concerns in the industry that in-house chips developed by Apple, Arm's largest customer, are outperforming those made by competitors such as Qualcomm and MediaTek.

"Google thought it could demonstrate the world's best Android OS so it built the Pixel phone. Microsoft thought it was the master of Windows so it built Surface laptops. So, naturally, Arm thinks it can build best-in-class Arm-based chips, better than chip developers out there," said Brady Wang, a semiconductor analyst with Counterpoint Research.

But making chips is even more challenging than building devices, Wang said. "It will need generation after generation of development efforts."
Additional reporting by Christian Davies in Seoul
 
I'm expecting both MP4 and the next 3D Mario on my Switch V1. With Metroid I'm 100% confident it will come, but about Mario I'm only 80% confident
 
We aren't going to get ps5 file sizes on Drake anyway, so the slower storage speeds would be less of an impact. You'll get you fast loading as long as it's appropriately designed for
What about Series S file sizes for comparison? One of the reasons PS5 and Series use faster storage solutions isn't simply to shorten loading-screen times; it's to reduce the frequency of loading screens altogether and basically hot-swap data in. Why don't we see that often enough after two years? Because devs are still developing with cross-gen in mind, but they are moving over as PS4/XB1 become more irrelevant. Series S uses the same storage speed as Series X at 2.4GB/s, but even if we dropped that to 1GB/s to be somewhat close to the drop in GPU power and RAM bandwidth, that's still a lot higher than the supposed 100MB/s folks seem to be content with for Drake. But Drake is definitely not going to be as low as 1/10 the power of Series S, even in portable mode (one might say Switch in docked mode is 1/10 the power of Series S).

Nintendo improves on their cart tech with each gen, but honestly, this is a point where they may not be able to to the same degree, imo. Not because the tech isn't available, but because of cost. Going from 15MB/s with Switch carts to something like 100MB/s for Drake is going to tack a price onto carts that's already pretty high for many devs, who already choose to downgrade on cart size and require mandatory downloading. And we're talking about 1/10 of the supposed 1GB/s that Series S may only need. That doesn't seem like enough for the kind of on-the-fly loading that these other platforms are capable of in their power ranges, and that is likely to push away third parties looking to port their games over. This is why I feel that Drake may require mandatory installation of its physical games: the cart speeds stay relatively low to keep costs down.
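Just to put my worry in concrete terms, here's how long it takes to stream a fixed chunk of asset data at various speeds (the speeds and the 8 GB figure are illustrative, not known Drake specs):

Code:
def fill_seconds(data_gb: float, speed_mb_s: float) -> float:
    return data_gb * 1000 / speed_mb_s

for label, speed_mb_s in [("Switch cart (~15 MB/s)", 15),
                          ("UHS-I microSD (~90 MB/s)", 90),
                          ("hypothetical ~100 MB/s cart", 100),
                          ("hypothetical 1 GB/s storage", 1000),
                          ("Series S/X raw SSD (2.4 GB/s)", 2400)]:
    print(f"{label:32s} 8 GB of assets in {fill_seconds(8, speed_mb_s):6.1f} s")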
 
Well… That's not sexy.

Maybe the possible CPU upgrade is sexy? Thor will be using an ARM Neoverse processor. This is a data centre design, massively multi-core and not really suitable for consoles. However, the consumer-device equivalents of Neoverse are the Cortex-X series from ARM. As well as big.LITTLE cores, they have an additional high-performance core. It'd probably kill the battery on a Switch-type device when mobile, but could provide a massive boost when docked.
 
I guess they managed to shrink something, or a convenient silly little 'day 1 patch' will sweep the nation
 
All From Software games also have this problem, just Elden Ring is so big that people sorta assume it's because the system's overloaded. This bug won't go away with more hardware power.

Not targeting you specifically with this comment, just using it as a jumping off point, should folks find it interesting

30 fps is 3x the power of 60 fps, and that's why 30 fps won't die

Lemme give you a simplified view of a game engine.

Code:
CPU operations          -> GPU Queues                     -> Driver                     -> Screen
*Read inputs               *Tesselate Geometry               *Keep video buffer            *Draw buffer to screen
*Hit detection             *Render textures
*Physics                   *Run shaders
*Progress animations       *Perform post-processing

So the CPU does all its operations, pushing work into the GPU queue. Then the GPU does its job and draws the final frame to a buffer, controlled by the driver. Then the physical screen reads that buffer and shows it to the player.

A typical screen does its job on a 60Hz timer. Every 16.6ms, it draws whatever is in that buffer, no matter what. Little complication, actually - it takes a little time to do that, that's going to matter in a second, but stick with me.

With a normal screen, you can't change that timer. So if you want each frame to be in front of the player's eye for the same amount of time (for the least juddery experience), you either need to run all that logic in 16.6ms, or in 33.3ms - 30fps.

But notice - the CPU part of the frame time is dictated not by how visually complex your scene is, but by the underlying game logic. So if you're running at 60fps, you might spend 8ms on CPU stuff, and another 8ms on GPU stuff. Go to 30fps and you still spend 8ms on CPU stuff, but 24ms on GPU. That's 3x the amount of GPU performance, a huge win. And as long as a significant number of gamers care about resolution/effects over frame rates, 30fps is here to stay.
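If you want to see that arithmetic laid out (using the same simplified serial CPU-then-GPU model and the 8ms figure from above, which are illustrative, not real profiling numbers):

Code:
cpu_ms = 8.0  # illustrative per-frame game logic cost; assumed constant

for fps in (60, 30):
    frame_budget_ms = 1000 / fps
    gpu_budget_ms = frame_budget_ms - cpu_ms   # whatever the CPU doesn't use, the GPU gets
    print(f"{fps}fps: {frame_budget_ms:.1f} ms per frame, ~{gpu_budget_ms:.1f} ms left for the GPU")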

What if you run at a frame rate between 30 and 60fps?

As frame rate goes up, latency goes down. The CPU is checking your controller inputs, and the faster it can get those results to your screen - and the sooner that it can get on to reading the next set of inputs - the better latency.

But because the screen does 60 updates a second, some of your frames appear on screen longer than other frames. This causes judder. It's not a frame drop, but it feels like it, where you see a frame for multiple ticks of the screen, then new frames every tick, then back to waiting a few ticks.

Because you have more frames, smoothness increases. Smoothness is a bit of a misnomer, to me, because you're getting more frames of data, but you're also getting judder. I find it very unsmooth, but some folks would rather have the extra frames over the judder.

You get screen tearing. If you're running at a frame rate that doesn't fit smoothly into the 60Hz tick-tock of the screen, eventually you will be writing to the buffer while the screen is reading it. That causes tearing, where the top of the screen is showing one frame and the bottom of the screen is showing another frame. This is very noticeable in side-to-side camera movement especially, and less of an issue when the camera is static.

Doesn't variable refresh rate fix this?
Sorta! Variable refresh rate basically says that the screen will hold off updating itself if it hasn't seen a buffer update, and then will update itself just in time when there is one. Usually there is a limit to how much flexibility in timing the screen has, but VRR will eliminate the screen tearing issue. It can't eliminate judder, however.

What about a frame rate cap?
The idea of a frame rate cap is that you set an artificial limit on how often you render frames, so that your experience runs at a locked rate, with no judder or tearing.

The basic way a frame rate limiter works is that it lets the game go as fast as it wants; then, once the game has completed a frame, the limiter will lie about the GPU still being busy until the last millisecond. The CPU part of the game waits until the GPU completes (which is being artificially slowed down by the frame limiter), the screen updates, and then the CPU goes off again.
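Here's a toy sketch of that basic idea (not how any real engine or Unity implements it, just the shape of the logic):

Code:
import time

TARGET_FRAME_S = 1 / 30  # lock to 30fps on a 60Hz display (every other screen tick)

def run_locked_30fps(simulate_frame, frames=60):
    # After the work for a frame finishes, idle until the next 33.3ms boundary,
    # so every frame sits on screen for the same amount of time.
    next_deadline = time.perf_counter() + TARGET_FRAME_S
    for _ in range(frames):
        simulate_frame()                      # the CPU + GPU work for this frame
        remaining = next_deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)             # "lie" that the GPU is still busy
        next_deadline += TARGET_FRAME_S       # keep a fixed cadence

run_locked_30fps(lambda: time.sleep(0.012))   # a frame that only needs ~12 ms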

In practice frame limiters are really tricky. I won't dig too much into why, but one reason is that as engines become more complex, the more they want to run at least some CPU operations while the GPU is going, which leads to some complex interactions between all the various systems. You don't want to get into a case where one part of game logic runs at an unlocked frame rate, and the others run at a locked frame rate.

So, what the hell is wrong with the Unity frame limiter?
Real quick, let's talk about the difference between dropped frames and bad frame pacing.

A dropped frame is when your game can't do all the things it needs to do in the allotted frame time, so the display doesn't get updated. Over the course of a second, you get 29 frames instead of 30.

Bad frame pacing is a subtler situation where you get 30 frames every second, but the frames are on screen for an inconsistent amount of time. Instead of getting a new frame every 33.3ms, you get one frame in 16.6ms, then a second frame in 50ms, and then another frame in 33.3ms. Think of it this way: a 60Hz screen is on a tick-tock timer. A 60fps game updates frames on both the "tick" and the "tock"; a 30fps game is supposed to just update on the "tick".
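Purely as an illustration of the difference, compare two ways of delivering "30 frames per second" (numbers invented to match the example above):

Code:
# Frame-to-frame delivery times in milliseconds. Both sequences average ~33.3 ms
# (30fps), but one of them holds some frames on screen much longer than others.
good_pacing = [33.3] * 6
bad_pacing  = [16.6, 50.0, 33.3, 16.6, 50.0, 33.3]

for label, frame_times in (("good pacing", good_pacing), ("bad pacing", bad_pacing)):
    avg = sum(frame_times) / len(frame_times)
    worst = max(frame_times)
    print(f"{label}: average {avg:.1f} ms, longest single frame {worst:.1f} ms")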

Bad frame pacing is when you update on the tick most of the time, then you miss a tick, update on the "tock" to catch up, and eventually swing back to the tick again. This is the Unity engine problem. Even for a game that has no problem hitting >30fps all of the time, Unity will sometimes fail to correctly apply CPU back pressure to slow down the game, or will fail to update the buffer in a timely fashion, or both.

Bad frame pacing doesn't cause screen tearing, fortunately, but it still causes judder just like an unlocked frame rate, but without any of the extra smoothness or latency reductions.

WayForward's sorta clever solution
According to DF, Advance Wars runs the frame rate limiter in cases where the player has control of the camera, and runs an unlocked frame rate when they don't.

When you're moving the camera, that's when you're going to notice screen tearing the worst, and because the map is not exactly rich in animation, there is little lost detail when running at a lower frame rate. So despite the judder, running the frame rate cap here makes sense.

In combat, when the camera is static, but animation detail increases, tearing isn't an issue, but the elaborate character animations that WF has provided can run with all the extra smoothness provided by the higher frame rate.

TL;DR
Devs will always want to have 30fps on the table as an option, no matter the strength of the hardware.

Frame rate limiters are necessary to get high quality 30fps options.

Unity has an especially bad frame limiter (and, historically, so does From Software, which is even worse).

In some cases, devs may choose to go with "unstable" frame rates as preferable to Unity's bad frame rate limiter, even when 30+fps is well within their grasp performance-wise
FE Engage exists, so the frame limiter bullshit does not seem to have merit.
 
What about the Series S? It's still stronger than Drake and has faster storage and so on.

You're really overblowing the loading issue. Devs never had a problem with slapping loading screens into cross-gen games on systems with worse loading than Drake. Nor do they have a problem with making pop-in more prevalent on those systems. Why does it become a problem now for Drake? That's just coming up with excuses not to support the system, but that isn't even new; just look at the Switch.
 
On my European BotW cartridge I can choose from Japanese, English, French (France), French (Canada), German, Spanish (Spain), Spanish (Latin America), Italian and Russian voice language options. It will be the same for TotK (except maybe for Russian due to sanctions, but I doubt it because it is spoken in other countries too).

If you only need Japanese voice audio for the Japanese game version, you can save some gigabytes in audio files, and 16 GB would be enough. Easy as that.
 
Are you talking about Prime 4? Because, no offense, I feel like that would be a terrible idea.
If the game can't run at least at 720p 30 fps on Switch docked, then maybe it is the only way. Of course, if Retro can achieve something as good as Guerrilla did with Horizon Forbidden West, which runs very well from base PS4 to PS5, then a native port is a possibility.
 
Here I'll speed it back up for ya:

Hey yall maybe T239 has been cancelled!!

*runs away*
Funny you say that. I'm in a discussion with a guy named Terry on YouTube and this is what he thinks about T239:

"Nintendo have probably abandoned the t239 and moved on, Nintendo usually keep their chipset secretive and clearly that was leaked before 2021. It’s not sure if they have a final decision on the chipset yet. I think they wasting too much of the chipset and it is not optimised well still due to Nvidia switching direction of including X1 compatibility, consumption of power and cost. The newest leak from t239 the spec have suggested the chip is adopted for something else(perhaps a Sony handheld project) as it is 3 teraflops while assuming the same cooling system. The dev kit are recycled for Sony and the nvn(Switch) mode are remains of early development and presumably depredated. Then, what chipset? I think they will place a custom order and heavily modify the tegra X2 to allow 1.5 tflops performance with achieving 1080p30/720p60 handheld and 1440p30/1080p60 docked. It’s compatible with the X1 although have to be heavily modified to work. Even still, it will have far lesser work rather than working with a Orin which have be EXTREMELY modified that it is impractical and lose compatibility."

And when I argued that Nintendo will utilize the T239 and that Sony works with AMD, this is what he answered:

"I don’t see they do 3 teraflops, too power hungry, incompatible and expensive. There is a reason why they abandoned the t239 after 2021. The chip will make the switch successor having a mere 1 hour-1 hour 30 minutes battery life and cost something more like PS or Xbox. I don’t see them abandoning hybrid, raising the price by significant margin and incompatible with the Switch. Apple is already investing in ARM and since PS/Xbox probably want something more inline with the M chip, it can be in one of the planned console and battery life would not be a concern as it’s a home console constantly plug into an outlet. In addition, the Switch successor would want to keep the popular hybrid from factor and switch compatibility, also having a brand new feature as the hook to the successor and Nintendo have always wanted to be innovative and accessible where is t239 is anti-Nintendo with the high price and killing hybrid idea and prohibit new ideas as well since it requires resource and the hybrid form factor too."

and he finished with this:

"Thinking more about it, despite people’s approximation, I see the chip having to use a better cooling system which potentially not fit with the switch form factor to reach better power, like 5 tflops. The switch successor will sucks if they use this chip. It’s a good decision Nintendo abandoned it and let the chip to have it’s full potential. I said it, modifying the X2 to be a custom chip is a better choice rather than sticking with modifying Orin which require heavier modifying. It’s very likely Nintendo abandoned it and the hype is too high for processing power that any reasonable reason will be invalid to the eyes of these people. Aiming for a 1.5-1.8 tflops will let Nintendo stick with the same price and help adoption of the successor. There is a lot of games running fine with base PS4, Xbox one and steam deck and obviously the switch successor will receive the most support ever. People are overhyped and modifying Orin to make it a custom chip that will only suit in a powerhouse is a bad decision and Nintendo abandoned the chip with reasons"

So, Terry is right!? Famiboards are all wrong!? Nintendo is doom!?
 
The existence of the FDE in T239 is not an indication that Nintendo will be pushing the performance of the internal storage. If anything it indicates the opposite.
I don't disagree with the point of your post in general, but I don't know if this is quite the case when considering the cost of the storage media itself.

Let's say that Nintendo wants their new hardware to have effective read speeds of 1GB/s. If they went the "screw compression, let's just use fast storage" approach, they would need (say) 256GB of 1GB/s eUFS internal storage, and for big game X they would need to manufacture and ship a 64GB 1GB/s game card for the physical edition.

Compare that to the case where they've got a 2:1 compression algorithm which they can decompress via a small hardware block on the SoC. Now they only need 128GB of 500MB/s internal storage to hold the same number of games, and big game X can ship on a 32GB 500MB/s game card. I think it's a safe bet that adding the FDE to the SoC is significantly cheaper than using larger and faster internal storage and shipping games on larger and faster game cards over the course of the entire generation. This should be true regardless of their target effective read speeds.
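To make that concrete with the same hypothetical numbers (these are the illustrative figures from above, not real hardware specs):

Code:
target_effective_read_mb_s = 1000   # the hypothetical 1GB/s target from above
compression_ratio = 2               # the hypothetical 2:1 algorithm
big_game_uncompressed_gb = 64

# Without compression: storage must hit the target speed and hold full-size games.
raw_storage_speed_mb_s = target_effective_read_mb_s
raw_card_size_gb = big_game_uncompressed_gb

# With a 2:1 FDE: storage can be half the speed, and games take half the space.
fde_storage_speed_mb_s = target_effective_read_mb_s / compression_ratio
fde_card_size_gb = big_game_uncompressed_gb / compression_ratio

print(f"No FDE:   {raw_storage_speed_mb_s:.0f} MB/s storage, {raw_card_size_gb:.0f} GB card")
print(f"With FDE: {fde_storage_speed_mb_s:.0f} MB/s storage, {fde_card_size_gb:.0f} GB card")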

I don't think the existence of the FDE tells us very much about the storage speed to expect, but it does tell us that Nintendo wants to take the decompression workload off the CPU. If all we were concerned about was faster loading screens with compressed data, having eight A78 (or similar) CPU cores, presumably running at a higher clock, would already give them that. Ditto with PS5 and XBSX; they would have already been able to achieve much faster loading of compressed data by virtue of CPU improvements alone.

It's not a coincidence that fast SSDs and hardware decompression blocks arrived in Sony's and MS's consoles in the same generation as a meagre 2x jump in RAM capacity over the previous gen (both PS3 and PS4 had 16x as much RAM as the previous gen). GDDR6 is expensive, but fast SSDs and hardware decompression blocks are (relatively) cheap, so rather than spending lots of money on 32GB or more of GDDR6, you go with less RAM, but rely on games streaming assets in and out of RAM at very high speed to make more efficient use of the RAM you have. If you expect developers to use compression, though (which as a platform holder you want, because it means you have to spend less money on storage), then using CPU decompression would be a problem, because suddenly several CPU cores are being used up during the game just to decompress all those assets you're streaming in. Hence the decompression block, as it allows developers to stream in compressed assets at high speed during gameplay while leaving the CPU free to do everything else it needs to do.
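A back-of-the-envelope version of that cost (both numbers are assumptions for illustration, not measurements of any real decoder or console):

Code:
stream_rate_mb_s = 800            # assumed compressed asset streaming rate during gameplay
per_core_decompress_mb_s = 300    # assumed single-core software decompression throughput

cores_busy = stream_rate_mb_s / per_core_decompress_mb_s
print(f"~{cores_busy:.1f} CPU cores spent on decompression alone")  # ~2.7 cores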

I expect Nintendo's reasoning for including the FDE is the same. If they just wanted faster loading screens, the CPU improvement would give them that. Most game engines (including Nintendo's) already make heavy use of asset streaming, and with Sony and MS's adoption of "not much RAM, but very fast SSD" architectures, game engines going forward will be built around even more aggressive asset streaming. Nintendo is obviously conscious of this, and wants to be able to handle these approaches without cutting into CPU performance.
 
If the game can't run at least at 720p 30 fps on switch docked, then, maybe it is the only way. Of course if Retro can archive something so good as Guerrilha did with horizon forbidden west, that run very well from base Ps4 to Ps5, then a native port can be a possibility.
Honestly, the idea of a cloud based Nintendo game is unrealistic. There's no reason for Prime 4 to have that many issues to where a cloud port is necessary.
 
christ I lost braincells reading that
 
You know you are wasting your time when you argue with someone who brings up the idea of a TX2.

I really can't wait to see all of those weird theories age poorly.
 
I think you got into a chat with a hallucinating AI chatbot. There's almost too much there to deconstruct and debunk. I'll just pick on one argument, about the Tegra X2. Using that would still introduce incompatibility, as the GPU goes from Maxwell to Pascal based. So Terry is talking poo.
And what is the 'M chip'?
 
At a risk to my sanity, I re-read Terry's comments. Their argument seems to be that T239 is too good for Nintendo, that Sony and M$ will be better placed to use it, and that Nintendo can make do with 7-year-old technology. Maybe Terry is a human after all, albeit a scared and insecure one.
Oh, and they've got T239's relationship with Orin all wrong - T239 would have been developed in parallel to Orin; it's not a modification of it.
 
People are really obsessed with Nintendo having super fast memory storage.

It sounds like folks want the next gen Switch to have a fast M2 SSD built in, but that would just be too power hungry and too hot for the Switch form factor right?

No matter what, Nintendo is going to do what they need to do to get the price right, power consumption right, and make the most of the hardware they got.
 
the weird thing is, AMD already makes mobile chips that would be good for Sony and MS. why would they even need to go to arm?
 
If the game is actually 16GB they'll still need a 32GB card since a 16GB card cannot hold a full 16GB of data.
Then it will be less than 16 GB. There's no way they'll use a 32 GB card if the game is just a few hundred MB over. They will find a way to use the cheapest alternative.
 
I wonder if the technical aspects being discussed in this thread right now, the ones that won't make it into the Switch successor, would be used for a revision.
 
I think the bigger issue is that the Odyssey team should be ready to reveal their next big Mario game. Miyamoto has already stated that we will see Mario in an upcoming Direct, and Drake not getting the BotW sequel means that Mario is almost certainly the launch title. It's also likely that a TotK patch via DLC will be there for Drake, probably within the first 12 months of TotK's release. Then the only insider to leak real information tying into Switch 2 is the Pokémon dev who said the Drake patch for Pokémon is coming this winter. I'd place the end of winter for Nintendo at March 1st 2024, which is a Friday, but the Pokémon patch can come AFTER Switch 2's launch, and doesn't have to be that far into "winter". It's also known that Drake is physically complete, and is publicly still being worked on via Linux patch notes, so we know it was never canceled.

The biggest current hangup that brings skepticism for a Redacted release in 2023, or even in this fiscal year at this point, is the total lack of leaks from developers. However, Nintendo basically launched the Switch with hardly any third-party support, and this didn't harm its early success; the Switch sold early on primarily for Zelda BotW. Knowing this, could the plan be to launch with a very limited lineup, essentially releasing with 3D Mario? This would explain the lack of leaks from developers; there may not be any third parties outside of Japan that have development kits. If Nintendo were to start shipping dev kits at the same time as the reveal, this would time up nicely for third-party ports to start flowing in the months following Redacted's release. PS4 ports are likely going to take less than a year, and if a publisher is willing to commit the resources, a port will likely take six months or less. Nintendo knows all of this and hence is comfortable withholding dev kits from third parties until they are ready to reveal. Let's face it: when Redacted is shown off for the first time with a new 3D Mario, Prime 4 and an upgraded Zelda TotK, third-party ports would be overshadowed anyway.
 
I mean the game has been listed at >18GB for months and just recently had a 16GB listing, it's pretty likely they already printed the game on 32GB cards over the past few months before they got it down to 16, which is seemingly just for the JP release.
 
We'll see who's right in less than 3 weeks.
 

