Real-Time Shadows on N64 Hardware

While the Nintendo 64 is, in the minds of many, relegated to the era of “firmly obsolete graphics”, given that its graphics processor (GPU) lineage traces directly back to SGI’s offerings of the 1990s, it also supports a range of modern features, including dynamic shadows. In a clear demonstration, [lambertjamesd] shows how this feature can be used.

As can be seen in the demonstration video (linked after the break), the demo features a single dynamic light, which casts a shadow around the central object in the scene, with a monkey object floating around it casting its own shadow (rendered into an auxiliary frame buffer). This auxiliary buffer is then blended into the main frame buffer, as explained by [ItzWarty] over on /r/programming on Reddit.
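For readers curious how such a composite works in practice, here is a minimal, self-contained C sketch of the general projected-shadow idea: project the caster onto the ground plane from the light’s position, then darken the covered pixels when blending into the main frame buffer. This only illustrates the technique described above; it is not the demo’s actual N64/libultra code, and every name in it is made up for the example.

```c
/* Sketch: project a floating caster onto the ground plane y = 0 from a
 * point light, then darken ("blend") the covered pixels of a software
 * frame buffer. Illustrative only, not the demo's real code. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

typedef struct { float x, y, z; } Vec3;

/* Ray light + t*(p - light) hits the ground plane y = 0. */
static Vec3 project_to_ground(Vec3 light, Vec3 p)
{
    float t = light.y / (light.y - p.y);
    Vec3 g = { light.x + t * (p.x - light.x), 0.0f,
               light.z + t * (p.z - light.z) };
    return g;
}

#define FB_W 40
#define FB_H 20

int main(void)
{
    uint8_t fb[FB_H][FB_W];     /* stand-in for the main frame buffer */
    memset(fb, 255, sizeof fb); /* 255 = fully lit ground */

    Vec3 light = { 0.0f, 10.0f, 0.0f };
    Vec3 caster[4] = { /* corners of a small quad floating at y = 4 */
        { -1.0f, 4.0f, -1.0f }, { 1.0f, 4.0f, -1.0f },
        {  1.0f, 4.0f,  1.0f }, { -1.0f, 4.0f,  1.0f }
    };

    /* Project every caster vertex and take the 2D bounds on the ground:
     * this plays the role of the auxiliary shadow buffer's footprint. */
    float minx = 1e9f, maxx = -1e9f, minz = 1e9f, maxz = -1e9f;
    for (int i = 0; i < 4; i++) {
        Vec3 g = project_to_ground(light, caster[i]);
        if (g.x < minx) minx = g.x;
        if (g.x > maxx) maxx = g.x;
        if (g.z < minz) minz = g.z;
        if (g.z > maxz) maxz = g.z;
    }

    /* "Blend" the shadow into the main buffer: halve the brightness of
     * every ground pixel inside the projected bounds. World x and z in
     * [-5, 5] map onto the frame buffer. */
    for (int py = 0; py < FB_H; py++)
        for (int px = 0; px < FB_W; px++) {
            float wx = -5.0f + 10.0f * px / (FB_W - 1);
            float wz = -5.0f + 10.0f * py / (FB_H - 1);
            if (wx >= minx && wx <= maxx && wz >= minz && wz <= maxz)
                fb[py][px] /= 2;
        }

    /* Show the result: '#' marks shadowed ground, '.' lit ground. */
    for (int py = 0; py < FB_H; py++) {
        for (int px = 0; px < FB_W; px++)
            putchar(fb[py][px] == 255 ? '.' : '#');
        putchar('\n');
    }
    return 0;
}
```

Compiled and run, it prints an ASCII “ground” with a darkened rectangle slightly larger than the caster, since a shadow spreads as it projects away from the light.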

This means that the main scene uses a shadow volume, a technique also used extensively by Doom 3. The main limitations of shadow volumes on the N64 stem from the restrictions they impose on the shadow-casting objects in the scene, such as the need for casters to be convex; overlapping casters are likely to lead to artifacts and glitches.

Doom 3 would address this challenge with stencil shadow volumes, a refinement far beyond the N64’s basic dynamic lighting, ultimately leading to the sophisticated video game graphics we have today. And which will no doubt look dated in a decade, as usual.
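For the curious, here is a minimal, self-contained C sketch of the per-pixel counting a stencil shadow volume performs, in the simpler depth-pass form (Doom 3 actually shipped the depth-fail variant known as “Carmack’s reverse”; this illustrates the idea rather than anyone’s shipping code). Along the view ray, front-facing volume faces that pass the depth test increment a counter and back-facing ones decrement it; a nonzero result means the visible surface lies inside a shadow volume.

```c
/* Depth-pass stencil counting at a single pixel: +1 for front-facing
 * shadow-volume faces in front of the surface, -1 for back-facing. */
#include <stdio.h>

typedef struct {
    float depth;      /* distance along the view ray */
    int   front_face; /* 1 = front-facing volume face, 0 = back-facing */
} VolumeFace;

static int in_shadow(const VolumeFace *faces, int n, float surface_depth)
{
    int stencil = 0; /* plays the role of the stencil buffer value */
    for (int i = 0; i < n; i++)
        if (faces[i].depth < surface_depth) /* face passes the depth test */
            stencil += faces[i].front_face ? 1 : -1;
    return stencil != 0;
}

int main(void)
{
    /* One volume: front face at depth 2, back face at depth 8. */
    VolumeFace volume[] = { { 2.0f, 1 }, { 8.0f, 0 } };

    /* A surface at depth 5 sits between the faces: net +1, shadowed. */
    printf("surface at 5: %s\n",
           in_shadow(volume, 2, 5.0f) ? "in shadow" : "lit");

    /* A surface at depth 9 lies behind the volume: +1 - 1 = 0, lit. */
    printf("surface at 9: %s\n",
           in_shadow(volume, 2, 9.0f) ? "in shadow" : "lit");
    return 0;
}
```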

I think we’ve reached the point where, no matter how much more time passes, graphics are about as good as the human eye can perceive (for titles with really good graphics by today’s standards: a 4K monitor that pretty much fills your field of view has pixels that are almost imperceptible at that viewing distance). At least until something like a full 3D holodeck arrives, we’re stuck with 2D screens or some kind of 3D virtual reality, and as it stands, VR, if you have the hardware, isn’t far behind 2D screens in resolution and sharpness…

I think from now on it will be the quality of the physics interactions, animations, and HUD/UI that makes games feel dated, more than their graphics, at least in gameplay. I expect image quality to keep improving a bit, and there will be small, nice visual touches on the infrequent occasions when real-time ray tracing actually shows up to highlight them, but for the most part ray tracing doesn’t add much: good scene design and fake shadow techniques combine so that it’s rarely all that noticeable…

I’m waiting for that…

https://youtu.be/00gAbgBu8R4

Seriously? Unreal Engine 5’s Nanite technology already allows for almost unlimited polygons. It has been demonstrated in tech demos, including The Matrix Awakens, which is publicly playable. Euclideon is probably just vaporware at this point and looks weak compared to Nanite.

I don’t think we can improve the state of animation by cutting animators.

I get the impression that Nitpicker had some very specific examples in mind that would have given some context to the flippant “fire the animators” comment. They just didn’t list them for some reason.

I bet you’re talking about procedural animation.

There are a number of GDC talks on this that are worth a watch if anyone is interested: https://youtu.be/KLjTU0yKS00

That’s right, Nintendo has always focused on unique hardware. Keeping up with the latest technology is rarely as important to them as doing something new. Sometimes they get stuck with a dud device, but just as often they produce something compelling that nobody else would have thought of doing. They need their games to be playable only on their hardware. They’re a bit like the Willy Wonka of video games: strange, isolated, but fascinating.

I couldn’t agree more. Evidently games look to improve physics when it gives them an edge, but few games I know of go out of their way to make physics the focal point, since graphics have been the driving force behind sales.

Hopefully, with VR becoming more and more popular, physics that keeps the flow of immersion intact will drive gaming more than ever. Holodecks seem too far away, but I don’t mind dreaming about them.

Graphics in pre-rendered scenes, yes, but not even close in the actual game.

People have been saying this since the PS2. Today’s games will look like rubbish to people 30 years from now.

Barring anything interrupting the current rate of progress, they certainly will, since they probably won’t be played the same way at all: they’ll be displayed in some more holographic 3D form… All the physics and animations will be better and more realistic, and world maps will probably get a dose of AI clutter and scenery generation, so there are more than 8 posters/pictures reused on the walls and not every room has the same desks, etc. (the only way to get more variety without relying on more and more 3D modelers and artists to create objects for an increasingly varied world)…

But in purely graphical terms, at least on 2D screens, the best games out there are sharp, with stupidly high polygon counts and textures so detailed that, at typical viewing distances while gaming, you can’t see any flaws because they simply aren’t visible. Texture resolution, polygon counts, and so on are all so high that the eye can rarely pick out the rougher edges, especially in a constantly moving image (you can take screenshots, hunt for flaws, and actually find them, but from a gameplay perspective it’s completely irrelevant)…

(Heck, games from a while back don’t show their age the way games of the same vintage would have, say, 10 years earlier; go back to the N64/PS1 era and even the best games of the time showed their age within a few years. Of course, even now many games don’t push for AAA, this-year’s-Crysis graphics; plenty are quite happy not to bother, since they still look pretty good just by matching the best 360-era games, which takes much less work.)

Maybe LLVM to optimize?

The article alludes to this, but to be clear: only the cuboid “casts” a genuine shadow. The monkey, Suzanne, is simply rendered into an off-screen buffer and then blended onto the ground. If there were some other object underneath it, that object wouldn’t be in the monkey’s shadow. It’s a bit confusing to first read that “the object has to be convex” and then see the decidedly non-convex monkey “casting” a shadow, but it isn’t really casting one.

I don’t know what you mean by “as good as the human eye can perceive.” We’re a long way from that. I’ve been disappointed with this generation of consoles in terms of graphical fidelity. Even that Matrix tech demo was disappointing. I think there are real-time photorealistic rendering rigs out there today, but the amount of manpower required to fill in every detail of, say, a Matrix-scale cityscape is far more than any developer is willing to spend. That tech demo looks wonderful from about 10 feet away. It’s only when you get near the cars and look inside the windows that the illusion shatters.

The windows break the illusion because they don’t have an “inside.” The demo achieves its speed because the buildings have no interiors; the trick is in the window textures. In real time, those flat textures are shifted just enough to make the building look like it has an interior.
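To make that concrete, here is a tiny, self-contained C sketch of the parallax idea behind such fake interiors: the flat window texture’s UV lookup is offset along the tangent-space view direction by an assumed room depth, so what’s “behind the glass” slides as the camera moves. The full technique games ship (often called interior mapping) intersects the view ray with a virtual room box; this simplified version and all its names are just for illustration.

```c
/* Shift a window texture's UV by the view direction and a fake room
 * depth, so the flat texture reads as an interior behind the glass. */
#include <stdio.h>

typedef struct { float u, v; } UV;

/* view_x/view_y/view_z: tangent-space view direction (z points toward
 * the wall); depth: how far "inside" the fake room appears to be. */
static UV parallax_uv(UV uv, float view_x, float view_y, float view_z,
                      float depth)
{
    UV out = { uv.u + depth * view_x / view_z,
               uv.v + depth * view_y / view_z };
    return out;
}

int main(void)
{
    UV uv = { 0.5f, 0.5f };
    /* Head-on view: no shift. Oblique view: the "interior" slides. */
    UV head_on = parallax_uv(uv, 0.0f, 0.0f, 1.0f, 0.2f);
    UV oblique = parallax_uv(uv, 0.5f, 0.0f, 0.8f, 0.2f);
    printf("head-on UV: (%.2f, %.2f)\n", head_on.u, head_on.v);
    printf("oblique UV: (%.2f, %.2f)\n", oblique.u, oblique.v);
    return 0;
}
```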

The same trick is used in Half Life Alyx to make the bottles look like they have water (or beer or whatever) inside.

Will real games look this good? I hope so. Nanite (dynamic mesh resizing) combined with real-time retexturing is wonderful for things that are further away. All of this reduces the amount of work spent on things in the background, and that leaves more time for closer details.

What is this trick? Do you know the technique? I’ve been working on materials for my game lately, now that I have the lighting engine right; it’s just PBR at the moment. So, is it a displacement map or a traditional shader job?

As for dynamic mesh scaling and real-time (re)texturing, they honestly sound like a more game-friendly version of well-known existing LoD methods, e.g. for scattered trees (there’s a sketch of the classic approach below). Real-time retexturing isn’t something you couldn’t already do with manual MIP map management in any engine, though it can be a headache across different hardware (the Basis Universal texture format has made this a lot easier for developers lately).

…That said, I agree that the technology is at the point where it can produce photorealistic games; it’s the artists and designers who struggle to keep up. That’s why engines like UE5 try to make things easier.
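For comparison with the classic LoD methods mentioned above, here is a minimal, self-contained C sketch of distance-based level selection: one coarser mesh/MIP level per doubling of distance beyond a base distance. The policy and numbers are assumptions for illustration, not any particular engine’s rule.

```c
/* Distance-based LoD selection: one coarser level per doubling of
 * distance beyond base_dist, clamped to the levels available. */
#include <stdio.h>
#include <math.h>

static int lod_level(float distance, float base_dist, int max_level)
{
    int level = (int)floorf(log2f(distance / base_dist));
    if (level < 0) level = 0; /* closer than base: full detail */
    return level > max_level ? max_level : level;
}

int main(void)
{
    /* A tree mesh with 5 LoDs (0 = full detail), full detail
     * within 10 world units of the camera. */
    for (float d = 5.0f; d <= 320.0f; d *= 2.0f)
        printf("distance %6.1f -> LoD %d\n", d, lod_level(d, 10.0f, 4));
    return 0;
}
```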

Here’s a video of the technique: https://youtu.be/dUjNoIxQXAA

Oh, and what bothered me about the Matrix demo was the eyes. Human eyes are constantly in motion. Apparently everyone except the animators knows that eyes don’t sit perfectly still and fixed. When Mr. Anderson looked down the barrel of the game’s camera, it just didn’t look right.

I still say those white bars look exactly like the professional tennis players from Pong on the Atari 2600.
