A demake is a reimagining of a modern game in the style and aesthetics of an earlier era: e.g. taking God of War and turning it into a 2D Shinobi-style platformer for the Sega Genesis, or turning Gran Turismo into a Mode 7-style racer on the SNES.
In this case, the creator wrote a custom 3D renderer and recreated the models/meshes to get as close of an approximation of the N64 experience onto the GBA.
I wouldn't call it a port necessarily ("recreation" seems more apt), but it's closer to that than a demake.
Interesting. I'm wondering if the GBA could handle a light version of a Minecraft-style game, but the N64 looks like it could be great at it too. I need to get me a SummerCart64 one of these days and experiment with my old N64.
Related to this is the Atari Falcon port of Minecraft using a sparse voxel octree; it might work for the GBA too, seeing as the Quake ports are similar performance-wise:
Affine texture mapping is kinda jarring to look at, especially in this GBA port, since there is no fixup and huge ground polygons drift around.
One of the listed features of the PS1 port in the OP article is tessellation to reduce the issues with the PS1's affine hardware texture mapper. On the GBA you pay a base cost for doing manual software texture mapping anyway, but that also opens up opportunities for some minor perspective correction to lessen the worst effects (such as doing perspective correction during the clipping process).
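To make the clip-time idea concrete, here is a minimal sketch (names invented, floats used for readability where a real GBA build would use fixed point): when a projected edge is clipped, interpolating u/w, v/w and 1/w instead of raw u and v gives the new vertex perspective-correct texture coordinates, so the affine spans drawn afterwards warp less.

```c
typedef struct {
    float x, y;   /* screen-space position */
    float w;      /* camera-space depth kept from the projection */
    float u, v;   /* texture coordinates */
} ScreenVertex;

/* t is the intersection parameter (0..1) along the clipped edge a->b. */
static ScreenVertex clip_lerp(const ScreenVertex *a, const ScreenVertex *b,
                              float t)
{
    ScreenVertex out;

    /* Screen positions interpolate linearly. */
    out.x = a->x + t * (b->x - a->x);
    out.y = a->y + t * (b->y - a->y);

    /* Attributes divided by w interpolate linearly in screen space, so
     * interpolate u/w, v/w and 1/w, then undo the divide once. */
    float ia = 1.0f / a->w, ib = 1.0f / b->w;
    float iw = ia + t * (ib - ia);
    float uw = a->u * ia + t * (b->u * ib - a->u * ia);
    float vw = a->v * ia + t * (b->v * ib - a->v * ia);

    out.w = 1.0f / iw;
    out.u = uw / iw;
    out.v = vw / iw;
    return out;
}
```

Only the vertices the clipper introduces get corrected this way; the rasterizer itself stays affine, which is why this lessens rather than fixes the warping.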
You’re misremembering. SM64 was fully textured, outside of specific models.
Also, flat shading (vs. say Gouraud shading) is orthogonal to the question of texture mapping; it concerns how lighting is calculated across the surface of the polygon. A polygon can be flat shaded and textured, flat shaded and untextured, smoothly shaded and textured, or smoothly shaded and untextured.
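A toy sketch of that independence, with hypothetical names: the lighting term and the texture term are computed separately and combined per pixel, so all four combinations fall out naturally.

```c
typedef enum { SHADE_FLAT, SHADE_GOURAUD } ShadeMode;
typedef enum { TEX_OFF, TEX_ON } TexMode;

/* face_light: one value for the whole polygon (flat shading).
 * vert_light: value interpolated across the polygon (Gouraud shading).
 * texel:      sampled texture color; base_color: untextured polygon color.
 * All values on a 0..255 scale. */
static unsigned char shade_pixel(ShadeMode s, TexMode t,
                                 unsigned char face_light,
                                 unsigned char vert_light,
                                 unsigned char texel,
                                 unsigned char base_color)
{
    unsigned char light = (s == SHADE_FLAT) ? face_light : vert_light;
    unsigned char color = (t == TEX_ON)     ? texel      : base_color;
    return (unsigned char)(((unsigned)light * color) >> 8); /* modulate */
}
```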
Amazing feat. I was a very happy owner of both consoles back in the day, and this port clearly shows how much the N64 brought that "SGI at home" feel in mid-1996; at least until Voodoo 1 / GLQuake, maybe even up to Unreal (Glide) or Sonic Adventure on the DC?
I still remember gasping when I first saw the basically unattainable (for me) Japanese-import N64 running Mario 64.
Such an interesting and varied gaming landscape back then; for example, the Wipeout experience on PSX was beyond the N64 port in that particular niche, for its own set of reasons.
> Tessellation (up to 2x) to reduce issues with large polygons
From the videos I've watched, there are still insane amounts of affine texture warping. Is that because it's not enabled, or because 2x is not enough?
I guess they will also need to redo all the level geometry to be more amenable to tessellation... which I guess is why many PS1 games had blocky-looking levels.
I see a lot of texture warp like you mentioned, but I'm not seeing the geometry popping (wobble?) that was a hallmark of PS1 games. I'm guessing they're using soft floating point for the geometry, and doing perspective-correct texture mapping would just be too expensive for a decent frame rate.
The PS1's GPU does not support perspective correction at all; it doesn't even receive homogeneous 3D vertex coordinates, instead operating entirely in 2D screen space and leaving both 3D transformations and Z-sorting to the CPU [1]. While it is possible to perform perspective correct rendering in software, doing so in practice is extremely slow and the few games that pull it off are only able to do so by optimizing for a special case (see for instance the PS1 version of Doom rendering perspective correct walls by abusing polygons as "textured lines" [2]).
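A sketch of why the software route is so slow, assuming a 64x64 texture and floats for readability (period-correct code would be fixed point): the affine inner loop is nothing but adds, while the perspective-correct one needs a divide on every pixel, a divide that costs dozens of cycles on the PS1's CPU (36, per the GTE discussion elsewhere in this thread).

```c
/* Affine span: texture coordinates stepped linearly, adds only.
 * Assumes u, v stay in range for the duration of the span. */
void span_affine(int len, float u, float v, float du, float dv,
                 unsigned char *dst, const unsigned char *tex)
{
    for (int i = 0; i < len; i++) {
        dst[i] = tex[((int)v & 63) * 64 + ((int)u & 63)];
        u += du; v += dv;
    }
}

/* Perspective-correct span: u/z, v/z and 1/z are linear in screen space,
 * but recovering u and v costs a divide per pixel. */
void span_perspective(int len, float uz, float vz, float iz,
                      float duz, float dvz, float diz,
                      unsigned char *dst, const unsigned char *tex)
{
    for (int i = 0; i < len; i++) {
        float z = 1.0f / iz;                   /* one divide per pixel */
        int u = (int)(uz * z), v = (int)(vz * z);
        dst[i] = tex[(v & 63) * 64 + (u & 63)];
        uz += duz; vz += dvz; iz += diz;
    }
}
```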
A little more than just a multiplication instruction (the 68000, used in, say, the Sega Mega Drive, had one of those too). Have a look at https://www.copetti.org/writings/consoles/playstation/, and in particular, read about the GTE - it offered quite a bit of hardware support for 3D math.
Also, even though it didn't handle truly 3D transformations, the rasterizer was built for pumping out texture mapped, Gouraud shaded triangles at an impressive clip for the time. That's not nothing for 3D, compared to an unaccelerated frame buffer or the sprite/tile approach of consoles past.
It's not just a multiplication instruction. The CPU is equipped with a fixed-point coprocessor to accelerate the most common computations in 3D games, the geometry transformation engine [1], capable of carrying them out much faster than the CPU alone could. For instance, the GTE can apply a transformation matrix to three vertices and project them in 23 cycles, while the CPU's own multiplier takes up to 13 cycles for a single multiplication and 36 (!) for a division. Combined with a few other "tricks" such as a DMA unit capable of parsing linked lists (which lets the CPU bucket sort polygons on the fly rather than having to emit them back-to-front in the first place), it allowed games to push a decent number of polygons (typically around 1-3k per frame) despite the somewhat subpar performance of the cache-less MIPS R3000 derivative Sony chose.
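For the unfamiliar, here is roughly what that linked-list bucket sort (the classic PS1 "ordering table") looks like; the types and names below are simplified stand-ins rather than the real SDK's.

```c
/* An ordering table: an array of linked-list heads indexed by quantized
 * depth. The CPU links each GPU packet into the bucket for its average Z;
 * the DMA unit later walks the chained lists from the far end, giving
 * back-to-front drawing without a full sort. The table would be reset to
 * empty at the start of every frame (omitted here). */
#define OT_SIZE 1024

typedef struct Packet {
    struct Packet *next;   /* on real hardware, a 24-bit address tag */
    /* ...GPU command words would follow... */
} Packet;

static Packet *ot[OT_SIZE];  /* bucket heads, index = quantized Z */

static void ot_insert(Packet *p, int z_avg)
{
    int bucket = z_avg >> 2;             /* quantize Z into OT_SIZE buckets */
    if (bucket < 0) bucket = 0;
    if (bucket >= OT_SIZE) bucket = OT_SIZE - 1;
    p->next = ot[bucket];                /* push onto the bucket's list */
    ot[bucket] = p;
}
```

Polygons that land in the same bucket simply keep insertion order, which is one reason PS1 games show occasional sorting artifacts.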
If you have some basic familiarity with C, you can see both the GTE and the Z bucket sorting of GPU commands in action in the cube example I linked in the parent comment.
The README mentions that it uses both (new) fixed point as well as soft floating point.
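For reference, "fixed point" here means plain integers carrying an implied fractional part, along the lines of this hypothetical 16.16 sketch (not the project's actual code):

```c
#include <stdint.h>

typedef int32_t fx32;              /* 16.16 fixed point */
#define FX_ONE (1 << 16)           /* 1.0 in 16.16 */

static inline fx32 fx_from_int(int v)     { return (fx32)v << 16; }
static inline fx32 fx_mul(fx32 a, fx32 b) { return (fx32)(((int64_t)a * b) >> 16); }
static inline fx32 fx_div(fx32 a, fx32 b) { return (fx32)(((int64_t)a << 16) / b); }
```

Soft floats emulate IEEE semantics in library calls and can be an order of magnitude slower, which is why mixing the two (fixed point on hot paths, soft float elsewhere) is a common compromise.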
Unless I'm mistaken, the PS1 just plain doesn't support perspective correction. All texture mapping is done in hardware using a very not-programmable GPU; there'd be no way to do perspective correction, decent frame rate or not, outside of software rendering the whole thing (which would be beyond intractable).
The common workaround for this was, as suggested, tessellation - smaller polygons are going to suffer less from affine textures. Of course that does up your poly count.
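A sketch of that workaround: split each triangle at its edge midpoints, so the affine error per polygon shrinks while the polygon count quadruples per subdivision level (the vertex type and emit callback are placeholders).

```c
typedef struct { float x, y, z, u, v; } Vtx;

static Vtx midpoint(Vtx a, Vtx b)
{
    Vtx m = {
        (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f,
        (a.u + b.u) * 0.5f, (a.v + b.v) * 0.5f
    };
    return m;
}

static void tessellate(Vtx a, Vtx b, Vtx c,
                       void (*emit)(Vtx, Vtx, Vtx))
{
    Vtx ab = midpoint(a, b), bc = midpoint(b, c), ca = midpoint(c, a);
    emit(a, ab, ca);     /* 1 triangle in -> 4 triangles out,     */
    emit(ab, b, bc);     /* quadrupling the poly count for each   */
    emit(ca, bc, c);     /* level of subdivision                  */
    emit(ab, bc, ca);
}
```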
it's not possible to have either subpixel vertex precision or perspective correct mapping with the PS1 GPU, as it only takes 2D whole-pixel coordinates for triangle vertices. (contrary to popular belief, N64 also uses exclusively fixed point for graphics btw, it just has subpixel units.) better tessellation can mitigate the perspective issues by a lot, but the vertex snapping is unsolvable, and it is indeed present here. look closer and you might see it.
I guess you could pretend to have sub-pixel precision on the PS1, if you did it manually? E.g. change the colours around "between pixels" or something like that?
But that would probably get very expensive very soon.
right now there is basically no preprocessing of level polygons and they are copied as is, but when it is implemented, the largest polygons will be split to solve this
this is also necessary to fix the occasional stretched textures, as texture coordinates are also limited to a smaller range per polygon on PS1
It notes in the Known Issues section that "Tessellation is not good enough to fix all large polygons".
Maybe it just needs more tessellation or something else is going on, because you're right - even as someone who grew up on the PS1 and is accustomed to early 3D jank, it looks painfully janky.
The distorted textures and weird triangle clipping issues are exactly what you'd expect from an unoptimized port to a platform that doesn't support perspective correct texturing or depth testing.
The PlayStation rendered with affine texturing, which made it impossible to get perspective-correct rendering without hacks. The porting team ultimately used a very interesting hack where they would use polygons to render 1-pixel-wide strips, effectively simulating how non-hardware-accelerated (that is, CPU-based/integer) rendering was done on the PC.
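The trick, roughly sketched below with an invented draw_quad() primitive: each vertical screen column of a wall becomes its own 1-pixel-wide quad sampling a single texture column. Within one column there is no perspective change, so affine interpolation along the strip is exact, just like a CPU-side Doom column renderer.

```c
/* Hypothetical rasterizer entry point: a screen-space quad textured with
 * one column of the wall texture. */
extern void draw_quad(int x0, int y0, int x1, int y1,
                      int x2, int y2, int x3, int y3,
                      int tex_column);

/* top[x]/bottom[x]: projected wall extents per screen column.
 * tex_col[x]: which texture column to sample for that screen column. */
void draw_wall_as_strips(int x0, int x1,
                         const int *top, const int *bottom,
                         const int *tex_col)
{
    for (int x = x0; x < x1; x++) {
        /* one hardware quad per screen column: constant u, v spanning
         * the texture column from top to bottom */
        draw_quad(x,     top[x],    x + 1, top[x],
                  x,     bottom[x], x + 1, bottom[x],
                  tex_col[x]);
    }
}
```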
I had the opposite reaction. As someone who was on team PSX, the wobbly jank is pleasingly nostalgic. Didn't someone say that the limitations and artifacts of the obsolete media of the past become the sought-after aesthetics of the future?
They are certainly sometimes a key part of the retro look that makes things nostalgic.
But even during the PSX era I found it distracting and annoying to look at, so I can't say I have any nostalgia for it even now in the way I do for N64-style low-poly 3D games or good pixel art.
This is all subjective, so I suppose I should add an IMO. Even back then, many games were preferable on the N64, like Mega Man Legends. What the PS1 offered that was superior was storage, which allowed for more music and FMVs, plus voice acting, which is probably why MGS is still talked about to this day. My guess is that the lack of detail helps immersion the same way reading a novel does. I imagine the PS1, with its storage, would have been the perfect vehicle for visual novels, but that genre still isn't popular anywhere but Japan.
Even with realism, ports to the Dreamcast were better overall, and considering that the latest port of Final Fantasy Tactics does not emulate any of its PS1 limitations, I don't think a lot of people strive for / like the aesthetic.
As someone who was team N64, I do agree the PSX has more of a "trademark look" compared to the N64, which is pretty much just a very limited version of a modern graphics rasterizer.
There was actually an unauthorized third-party CD-ROM drive for it, the Bung Doctor V64 [1]. It didn't actually expand the available ROM space beyond what was possible with cartridges, but it's still interesting in that it was allegedly used by licensed Nintendo devs as a lower-cost alternative to the devkits officially provided to them.
The RAMBUS speed is the main issue. The RDP can literally be stalled over 70% of the time waiting for memory. It's extremely flawed.
They could have used SDRAM and it would have performed so much better, and I believe the cost was around the same.
If you wanted to cut something, cut the antialiasing. While very cool, it is a bit wasted on CRTs. Worst of all, for some reason there is this blur filter which smears the picture horizontally. Luckily it can be deblurred by applying the inverse operation.
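The inverse trick, in a deliberately oversimplified form: if the blur were (hypothetically) a two-tap horizontal average, b[x] = (p[x] + p[x+1]) / 2, then knowing one original sample lets you unroll the rest. Real deblur filters for the N64's output are more involved, but the principle is the same.

```c
/* Recover original samples p[] from blurred samples b[], given the first
 * original sample, by running the assumed filter backwards:
 * p[x+1] = 2*b[x] - p[x]. Purely illustrative. */
void deblur_row(const int *b, int *p, int n, int first_sample)
{
    p[0] = first_sample;
    for (int x = 0; x + 1 < n; x++)
        p[x + 1] = 2 * b[x] - p[x];
}
```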
I think the main reason is that when they architected it, RDRAM seemed like the better choice based on price and bandwidth at that time, and they underestimated the performance issues it would cause (RDRAM has amazing bandwidth but atrocious latency).
By the time the N64 launched, SDRAM was better and cheaper, but they considered it too late to make the switch. Allegedly SGI wanted to make changes but Nintendo refused.
Basically they made the wrong bet and didn't want to change it closer to release.
OK, I also just read that Nintendo basically bet on RAM bandwidth but ignored latency.
A more general lesson: Nintendo bet on cutting edge, speculative technology with RDRAM, instead of concentrating on 'Lateral Thinking with Withered Technology'.
The whole thing about the texture cache being the worst design decision in the N64 just gets parroted so much, but nobody can cogently explain which corner should have been cut instead to fit the budget.
The N64's CPU, with pretty much every single game released on the platform, is just sitting there idling along at maybe 30% load tops, and usually less than that. It's a 64 bit CPU, but Nintendo's official SDK doesn't even support doubles or uint64!
Of course, Nintendo clearly cared about the CPU a lot for marketing purposes (it's in the console's name), but from a purely technological perspective, it is wasteful. Most of the actual compute is done on the RSP anyway. So, getting a much smaller CPU would have been a big corner to cut, that could have saved enough resources to increase the texture cache to a useful resolution like 128x128 or so.
It should be noted, though, that the N64 was designed with multitexturing capabilities, which would have helped with the mushy colors had games actually taken advantage of it (but they didn't, which here again, the Nintendo SDK is to blame for).
> So, getting a much smaller CPU would have been a big corner to cut, that could have saved enough resources to increase the texture cache to a useful resolution like 128x128 or so.
How? The texture RAM (TMEM) is in the RDP, not in the CPU.
Only really in the marketing material. It's a bit like calling a 386 with an arithmetic co-processor an 80 bit machine, when it was still clearly a 32 bit machine by all metrics that matter.
However, I agree in general that the N64 CPU sits idle a lot of the time. It's overspecced compared to the rest of the system.
You could have saved a lot of money by using CDs instead of cartridges.
If you sell games for roughly the same amount as before (or even a bit cheaper), you have extra surplus you can use to subsidise the cost of the console a bit.
Effectively, you'd be cutting a corner on worse load times, I guess?
Keep in mind that the above ignores questions of piracy. I don't know what the actual impact of a CD-based solution would have been, but I can say for sure that the officials at Nintendo thought it would make a difference when they made their decision.
IMHO, Nintendo had a hard enough time preventing piracy and unlicensed games on the NES and SNES, and saw the PS1 get modded within a year, even with the special black-coated discs to hide the tracks. There wasn't a lot of optical/compact disc copy-protection magic at the time, and CD-Rs and writers started getting popular quickly as well: PS1 in 1994, N64 in 1996, and afterwards Dreamcast GD-ROMs and the beginnings of larger discs and DVDs in '98.
> I agree that the PS1 had more piracy, but I'm not sure that actually diminished its success?
At least in my corner of the world (Spain), piracy improved its success. Everybody wanted the PSX due to how cheap it was, I think it outsold the N64 10:1.
It's incredible how completely unwatchable modern YouTube norms are, to me at least. I feel like YouTubers now aim almost exclusively for the 12-18 demographic. I mean, this person is doing some kind of character or affectation instead of using a normal voice. Everything is some kind of grift or character or PR or persona now, it seems. I understand they do this to get viewers, but it's just depressing how much more content I'd enjoy if the PR gimmicks and lowest-common-denominator tricks were dropped.
I just saw Linus Tech Tips' Linus interview Linus Torvalds, and the constant fanboying and bad jokes were just embarrassing and badly hurt the interview. I really wish people like this would turn it way, way down. I think we all love some levity and whimsy, but now those gimmicks are bigger and louder than the actual content.
Torvalds didn't hold back either, though, so I'm not sure what the complaint is... If you watch some of the WAN Show you'll see you're not getting some weird persona in that video, just the same guy with a bit of extra energy, which is just what you want for presentations / shows / whatever. It was a genuine experience.
To me this sounds like a computer-generated voice, for obvious privacy reasons for this kind of project. If it bothers you, then maybe work on better voice synthesis tech! I assume it doesn't sound latest-generation because it was rendered locally, but I could be wrong.
> I just saw Linus Tech Tips' Linus interview Linus Torvalds, and the constant fanboying and bad jokes were just embarrassing and badly hurt the interview.
If you've been watching LTT for any amount of time, it wouldn't be surprising that that's just LTT Linus' nervous, awkward style; he's just a person. The jokes can be cringe as hell, but I thought the video was great. I don't think most nerds would be any different in front of a camera.
This is emulated, as I'm sure the other videos are, but the PS1 back in the day had no way of running anything this crisp, so the emulator is `enhancing` it here. It's not an actual representation of what the game would have looked like.
It doesn't really work right on "normal" PS1s yet, at least as of when it was making the rounds a few weeks ago, so you need either an emulator or a modded/dev PS1 with more RAM to prevent crashes, and most people won't have the latter: https://www.reddit.com/r/psx/comments/1p45hrm/comment/nqjtdp.... Probably shared a few months too early.
But yeah, on a "real" PS1 it would be blockier due to lower res. The main rendering problems should be the same though.
nah, it's not even configured to use the extra RAM, though there is a compile option for that. seems like the freeze was some sort of bug in the tessellation code, but I'm rewriting that part, so the bug is gone now. it should be working fine on hardware after I publish the changes.
The 14" Nokia TV from my old bedroom disagreed a little :)
In the end, if you rescaled the emulator window down to 320x240 or 640x480 with a 25% scanline filter on LCDs, or 50% on a CRT, the result would be pretty close to what teenagers saw in the late '90s.
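Something like this, assuming 8-bit-per-channel RGB rows (percent = 25 on an LCD, 50 on a CRT, per the above):

```c
#include <stddef.h>

/* Darken every other row of the image by the given percentage to fake the
 * gaps between CRT scanlines. */
void apply_scanlines(unsigned char *rgb, int width, int height, int percent)
{
    for (int y = 1; y < height; y += 2) {           /* odd rows only */
        unsigned char *row = rgb + (size_t)y * width * 3;
        for (int i = 0; i < width * 3; i++)
            row[i] = (unsigned char)(row[i] * (100 - percent) / 100);
    }
}
```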
Obligatory mention of Kaze, who has spent the past several years optimizing Mario 64 using a variety of interesting methods. Worth a watch if your interests are at the intersection of vintage gaming and programming.
I was just about to post his video from August explaining how much excess RAM Mario 64 uses and where, which was the first serious mention I saw of a PS1 port being possible. He uses the PS1's smaller RAM size as a kind of benchmark.
There has been an explosion of decompilation projects spawning new ports, but was there something that enabled better decompilations? I see it across many retro games.
It has been enabled mainly by the advent of streamlined tooling to assist with 1:1 byte-by-byte matching decompilations (https://decomp.me/ comes to mind), which allows new projects to get off the ground right away without having to reinvent basic infrastructure for disassembling, recompiling and matching code against the original binary first. The growth of decompilation communities and the introduction of "porting layers" that mimic console SDK APIs but emulate the underlying hardware have also played a role, though porting decompiled code to a modern platform remains very far from trivial.
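At its core, the "matching" check such tooling automates is as simple as this sketch (paths and offsets are placeholders): recompile a function, then compare its bytes against the corresponding range of the original binary.

```c
#include <stdio.h>
#include <string.h>

/* Returns 1 if 'len' bytes at 'offset' in the original binary match the
 * freshly recompiled bytes, 0 otherwise. */
int bytes_match(const char *orig_path, long offset,
                const unsigned char *recompiled, size_t len)
{
    unsigned char buf[4096];
    FILE *f = fopen(orig_path, "rb");
    if (!f || len > sizeof(buf)) { if (f) fclose(f); return 0; }
    fseek(f, offset, SEEK_SET);
    size_t got = fread(buf, 1, len, f);
    fclose(f);
    return got == len && memcmp(buf, recompiled, len) == 0;
}
```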
That said, there is an argument to be made against matching decompilations: while their nature guarantees that they will replicate the exact behavior of the original code, getting them to match often involves fighting the entropy of a 20-to-30-year-old proprietary toolchain, hacks of the "add an empty asm() block exactly here" variety and in some cases fuzzing or even decompiling the compiler itself to better understand how e.g. the linking order is determined. This can be a huge amount of effort that in many cases would be better spent further cleaning up, optimizing and/or documenting the code, particularly if the end goal is to port the game to other platforms.
ClassiCube has a WIP GBA port, but according to the commits it only hits 2 FPS as of now, and it is not yet listed in the README.
On a related tangent, there's also Fromage, a separate Minecraft Classic clone written for the PS1 (https://chenthread.asie.pl/fromage/).
https://www.youtube.com/watch?v=nHsgdZFk22M
Still bravo! I know getting it working and complete is the real goal and it is commendable.
What were you expecting?
I think the resolution makes it particularly rough though.
[1]: https://github.com/spicyjpeg/ps1-bare-metal/blob/main/src/08... - bit of a shameless plug, but notice how the Z coordinates are never sent to the GPU in this example.
[2]: https://fabiensanglard.net/doom_psx/index.html
I guess the main thing the console brought to the table that made 3D (more) feasible was that the CPU had a multiplication instruction?
[1]: https://psx-spx.consoledev.net/geometrytransformationengineg...
https://www.youtube.com/watch?v=kscCFfXecTI
The first comment is pretty funny:
> Finally, Super Mario 32.
For others who run into the same problem, the file can be accessed via https://fabiensanglard.net/gebbdoom/index.html#:~:text=High%... . (I've highlighted the link to click.)
I guess you can pretend that JRPGs or Resident Evil are visual novels with some action gameplay (or turn-based combat) thrown in?
Huh, I generally see Mega Man Legends cited as an example where the PSX version looks better due to the crisper textures.
https://www.youtube.com/watch?v=J6lravGmPPQ
[1] https://en.wikipedia.org/wiki/Doctor_V64
> Nintendo had a hard enough time with preventing piracy and unlicensed games with the NES and SNES [...]
Yes, so I'm not sure that the cartridge drawbacks bought them that much in terms of piracy protection?
I agree that the PS1 had more piracy, but I'm not sure that actually diminished its success?
Not if you watch the video on your phone or iPad or laptop!
Actually, even most desktop PC monitors aren't bigger than people's TVs were back then.
(Of course, TVs now are bigger than TVs back then. And desktop PC monitors are bigger than desktop PC monitors back then.)
Though I suspect for interactive use, CRTs might have had better latency?
https://www.youtube.com/@KazeN64
I did not expect it to happen so soon.
https://www.youtube.com/watch?v=oZcbgNdWL7w - Mario 64 wastes SO MUCH MEMORY
I wonder what someone who has PS1 knowledge equivalent to Kaze's N64 knowledge could do on that console---perhaps using Mario 32 as the benchmark.
(Mario 32 = Mario 64 on PS1.)
https://github.com/CharlotteCross1998/awesome-game-decompila...
Edit: whoever did the gameplay video is really good at Mario 64. They were playing toward and reacting to stuff that had rendered very late, if at all.