

Any ‘advanced’ gamers here who can tell me what this stuff means?


severous84


Proprietary game engine, using all the latest 3D technology:

 

-DirectX® 8.1 support

-Dynamic LoD* (Level of Detail)

-Hardware accelerated Transform and Lighting

-Pixel Shaders and Vertex Shaders

-Hierarchical animation and skinning system

-DXTn compressed textures

-Dynamic Damage system

-Particle system able to handle thousands of particles, influenced by various forces, fields…

-Special effects: glows, lens flares, reflections, Dot3 Bump Mapping, per-pixel lighting


1) Good graphics

2) Dunno

3) Dunno

4) Dunno

5) Dunno

6) Dunno

7) Glows: dunno, or it's so obvious I can't get it.

Lens flares: Look at the sun with glasses on and u'll get them (not advised). U also see them in films sometimes when the camera is facing the sun.

Reflections: In some games if u enable reflections it means u like come up to some water and u see a reflection of urself.


Pixel Shaders create ambiance with materials and surfaces that mimic reality. An infinite number of material effects replace the artificial, computerized look with high-impact organic surfaces. Characters now have facial hair and blemishes, golf balls have dimples, a red chair gains a subtle leather look, and wood exhibits texture and grain. By altering the lighting and surface effects, artists are able to manipulate colors, textures, or shapes and to generate complex, realistic scenes.

 

This object is an animated, bumpy, reflective surface—something that would have been impossible to render in real time with previous 3D hardware. Now, with programmable Pixel Shaders, images such as this one can be achieved.

 

A Pixel Shader is a graphics function that calculates effects on a per-pixel basis. Depending on resolution, in excess of 2 million pixels may need to be rendered, lit, shaded, and colored for each frame, at 60 frames per second. That in turn creates a tremendous computational load. The GeForce3 can easily process this load through Pixel Shaders, and bring movie-style effects to your PC. This is an unprecedented level of hardware control for consumers. Per-pixel shading brings out an extraordinary level of surface detail—allowing you to see effects beyond the triangle level. Programmable Pixel Shaders then give artists and developers the ability to create per-pixel effects that mirror their creative vision. Furthermore, because the Pixel Shader capabilities in the NVIDIA nfiniteFX engine are fully programmable, rather than simply choosing from a preset palette of effects, developers can create their own. Thus, programmable Pixel Shaders provide developers with unprecedented control for determining the lighting, shading, and color of each individual pixel, allowing them to create a myriad of unique surface effects.
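The per-pixel lighting described above can be sketched in ordinary code. This is an illustrative Python sketch of a Lambert diffuse computation for a single pixel, not actual GeForce3 shader code; all names and values are invented for the example:

```python
# Illustrative sketch (not real shader code): per-pixel Lambert shading.
# For each pixel we take an interpolated surface normal and compute how much
# light it receives, instead of reusing a single per-vertex colour.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade_pixel(normal, light_dir, base_color):
    """Diffuse intensity = max(0, N . L), applied to the pixel's base colour."""
    n = normalize(normal)
    l = normalize(light_dir)
    intensity = max(0.0, dot(n, l))
    return tuple(c * intensity for c in base_color)

# A pixel whose normal faces the light is fully lit...
print(shade_pixel((0, 0, 1), (0, 0, 1), (1.0, 0.2, 0.2)))   # -> (1.0, 0.2, 0.2)
# ...one facing away from it is black.
print(shade_pixel((0, 0, -1), (0, 0, 1), (1.0, 0.2, 0.2)))  # -> (0.0, 0.0, 0.0)
```

Running this per pixel, millions of times per frame, is exactly the computational load the passage refers to.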

 

In addition to incredible material effects, the nfiniteFX engine achieves excellent performance and enables previously impossible pixel-level effects on consumer-level platforms, because of its ability to handle four textures in a single pass. Applying multiple textures in one pass almost always yields better performance than performing multiple passes. Multiple passes translate into multiple geometry transformations and multiple Z-buffer calculations, slowing the overall rendering process.

 

 

Before the advent of NVIDIA's nfiniteFX engine, realistic characters and environments were beyond the reach of graphics processors. Now programmable Vertex Shaders enable an unlimited palette of visual effects that can be rendered in real time. But what are programmable Vertex Shaders, and how do they work? To answer this we have to start with the basics. In order to understand Vertex Shaders, it's important to know what a vertex is in relation to a scene. Objects in a 3D scene are typically described using triangles, which in turn are defined by their vertices.

 

What is a vertex?

 

A vertex is the corner of the triangle where two edges meet, and thus every triangle is composed of three vertices.

 

 

A vertex shader is a graphics processing function used to add special effects to objects in a 3D environment by performing mathematical operations on the objects' vertex data. Each vertex can be defined by many different variables. For instance, a vertex is always defined by its location in a 3D environment using the x-, y-, and z-coordinates. Vertices may also be defined by colors, textures, and lighting characteristics. Vertex Shaders don't actually change the type of data; they simply change the values of the data, so that a vertex emerges with a different color, different textures, or a different position in space.
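That "change the values, not the type of data" behaviour can be pictured as a plain function. This Python sketch is illustrative only; the vertex layout (position, color, uv) is an assumption for the example, not any particular API:

```python
# Illustrative sketch: a "vertex shader" as a plain function. It never changes
# what fields a vertex has, only their values (here, the position).

def vertex_shader(vertex, offset):
    """Move a vertex by `offset`; colour and texture coords pass through."""
    x, y, z = vertex["position"]
    dx, dy, dz = offset
    return {
        "position": (x + dx, y + dy, z + dz),
        "color": vertex["color"],
        "uv": vertex["uv"],
    }

v = {"position": (1.0, 2.0, 3.0), "color": (1, 0, 0), "uv": (0.5, 0.5)}
out = vertex_shader(v, (0.0, 1.0, 0.0))
print(out["position"])  # -> (1.0, 3.0, 3.0)
```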

 

Before the introduction of the GeForce3, vertex shading effects were so computationally complex that they could only be processed offline using server farms. Now, developers can use Vertex Shaders to breathe life and personality into characters and environments, such as fog that dips into a valley and curls over a hill; or true-to-life facial animation such as dimples or wrinkles that appear when a character smiles.

 

Examples of vertex shading effects include: matrix palette skinning, which allows programmers to create realistic character animation with up to 32 "bones" per joint, allowing them to move and flex convincingly; deformation of surfaces, which gives developers the power to create realistic surfaces such as waves and water that ripples; and vertex morphing, which is used to morph triangle meshes from one shape to another, providing smooth skeletal animation. These are just a few of the virtually infinite number of effects developers can create using Vertex Shaders. By customizing skinning and motion, developers can create life-like personalities for characters and scenes, thereby intensifying the graphics experience.
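Of the effects listed, vertex morphing is the easiest to sketch: each vertex of one mesh is interpolated toward the matching vertex of another. A minimal Python illustration with made-up mesh data:

```python
# Illustrative sketch: vertex morphing blends each vertex of one mesh toward
# the matching vertex of another, giving smooth animation between the shapes.

def morph(mesh_a, mesh_b, t):
    """Linearly interpolate vertex positions; t=0 gives mesh_a, t=1 gives mesh_b."""
    return [
        tuple(a + (b - a) * t for a, b in zip(va, vb))
        for va, vb in zip(mesh_a, mesh_b)
    ]

flat  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
bumpy = [(0.0, 0.0, 1.0), (1.0, 0.0, 2.0), (0.0, 1.0, 1.0)]
print(morph(flat, bumpy, 0.5))  # halfway between the two shapes
```

Stepping `t` from 0 to 1 over successive frames gives the smooth transition the text describes.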


You could have just given him a link.....

 

 

Anyways, as for 6 and 7.

 

A particle system is a system in which pieces of an object can be "lost" by the object. An example would be if a lightsaber cut an arm off. Until recently, particles had to be programmed and drawn in manually. But new systems let programmers generate it all automatically, without the tedious mapping and programming previously required. Plus, they let the textures be cut off with the pieces, so those don't have to be programmed manually either. In addition, those pieces can be treated as individual objects that interact with the environment.
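The "thousands of particles, influenced by various forces" idea from the spec sheet can be sketched in a few lines. Illustrative Python, assuming simple Euler integration and a single gravity field (both assumptions for the example):

```python
# Illustrative sketch: each particle carries a position and a velocity; every
# update applies a force field (here, gravity) and then integrates motion.

GRAVITY = (0.0, -9.8)

def step(particles, dt):
    """Advance all particles one Euler time step under gravity."""
    out = []
    for (px, py), (vx, vy) in particles:
        vx += GRAVITY[0] * dt
        vy += GRAVITY[1] * dt
        out.append(((px + vx * dt, py + vy * dt), (vx, vy)))
    return out

# One particle launched straight up: after one step it has risen but slowed.
particles = [((0.0, 0.0), (0.0, 10.0))]
particles = step(particles, 0.1)
print(particles[0])
```

A real engine runs this same update for thousands of particles per frame, with additional forces such as wind or explosions added into the sum.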

 

The Special Effects are all just lighting issues.


About Transform and Lighting (T&L)

 

With the GeForce2 and Quadro2 families of GPUs, second generation transform and lighting engines offer blazing fast graphics processing speeds. Animated characters running on a GPU are lifelike and detailed, with complex facial expressions and smooth movements. Developers and designers can create worlds lush with organic life and architectural details that we take for granted in the real world. With accelerated transform and lighting capabilities, NVIDIA GPUs create unsurpassed 3D experiences for everyone from the web surfer to the multimedia enthusiast.

 

What is a GPU?

A Graphics Processing Unit (GPU) offloads all transform and lighting calculations from your CPU, freeing it up for other functions such as physics and artificial intelligence. Not only will your graphics look better and run more smoothly with an NVIDIA GPU, but your computer's performance will improve as well.

 

 

Why T&L?

Geometrically complex worlds require exceptional processing speed. And transform and lighting are two very mathematically intense processes. Combined, transform and lighting radically enhance photo-realism to create worlds that come alive on your screen. NVIDIA GPUs use separate transform and lighting engines so that each can run at maximum efficiency. The transform engine converts 3D data from one frame of reference to the next. Every object that is redisplayed, and even some that are not, must be transformed every time the scene is redrawn. Lighting effects then provide high visual impact by enhancing the realism of the scene.
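The transform step, converting vertices from one frame of reference to another, can be pictured with a single rotation. This is an illustrative Python sketch, not driver or hardware code:

```python
import math

# Illustrative sketch: a transform maps every vertex from one frame of
# reference to another; here, rotating an object about the z axis before draw.

def rotate_z(vertex, angle):
    """Rotate (x, y, z) by `angle` radians about the z axis."""
    x, y, z = vertex
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c, z)

# Every vertex of every visible object is transformed each frame:
triangle = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 0.0)]
rotated = [rotate_z(v, math.pi / 2) for v in triangle]
print(rotated)
```

Multiply this by every vertex in the scene, every frame, and the appeal of a dedicated transform engine becomes obvious.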

 

How does transform work?

Transform performance dictates how precisely software developers can "tessellate" the 3D objects they create, how many objects they can put in a scene and how sophisticated the 3D world itself can be. To tessellate an object means to divide it into smaller geometric objects, such as polygons. The images below are examples of a sphere tessellated by different degrees:

 

 

Each of the images above represents the same sphere, but the image on the far right is clearly the most realistic of the three. It has been carved up into five times as many polygons as the sphere on the far left, and therefore requires five times the transform performance as the sphere on the left. That may not seem very important for one sphere, but because hundreds to thousands of objects are often displayed in scenes, without a GPU those objects have to share the limited processing power of the CPU, forcing developers to budget processing tasks.
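One common way to tessellate more finely is midpoint subdivision, where each step splits every triangle into four. This Python sketch is illustrative; the exact scheme used for the spheres above is not specified:

```python
# Illustrative sketch: tessellating a mesh more finely by splitting each
# triangle into four via its edge midpoints, quadrupling the polygon count
# (and the transform work) at every step.

def midpoint(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def subdivide(triangles):
    """Replace every triangle (a, b, c) with four smaller ones."""
    out = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

mesh = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
for _ in range(3):
    mesh = subdivide(mesh)
print(len(mesh))  # -> 64 triangles after three subdivisions
```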

 

Now with an NVIDIA GPU transform calculations are offloaded from the CPU, allowing more detailed objects with higher polygon counts to be processed more quickly. With transformation a jungle scene can have lots of trees and bushes—rather than just a single tree—and each tree can consist of many leaves created by thousands of polygons. Since the GPU relieves the CPU of the burden of calculating the transforms, you will be able to view scenes rich with complex objects that look real and move like their real-life counterparts. Not only will the objects and characters be complex, but many more can exist.

 

With diffuse lighting, we can't see where the light is concentrated on the object.

By adding specular lighting, we can see that the light is shining directly on the object.

 

 

How does lighting work?

The human eye is more sensitive to changes in brightness than it is to changes in color—which means that an image with lighting effects communicates more information to a viewer more efficiently. The discrete lighting engine on an NVIDIA GPU calculates distance vectors from lights to objects and from objects to a viewer's eyes within 3D scenes. Lighting calculations are an effective way to add both subtle and not-so-subtle changes in brightness to 3D objects in a manner that mimics real-world lighting conditions.

 

Specular highlights move on the object if the viewer or the object moves relative to the light source. For this reason they cannot be pre-computed or static. Specular lighting is also important for representing different materials for objects in a 3D scene. A silk shirt looks different than a cotton shirt—even if they are the same color—and will reflect light differently. Specular lighting combined with texture mapping creates more realistic objects because they have the visual properties of real materials. Only a GPU with a dedicated hardware lighting engine can support specular lighting without a severe loss in performance.

 

Diffuse vs. Specular Lighting

Lighting is divided into two main categories: diffuse and specular. Diffuse lighting assumes the light hitting an object scatters in all directions equally, so the brightness of the reflected light does not depend at all on the position of the viewer. For example, when sunlight hits a playground, the light is everywhere. Specular lighting is different because it depends on the position of the viewer as well as the direction of light and orientation of the object being lit. For example, the beam from a flashlight will bounce off a quarter differently than off a blade of grass. Specular lighting captures the mirror-like properties of an object so effects such as reflection and glare are achievable.
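The distinction can be made concrete with the standard Lambert diffuse and Blinn-Phong specular terms. This Python sketch is illustrative (Blinn-Phong is an assumption; the text names no specific model), and all vectors are assumed to be unit length:

```python
# Illustrative sketch: the diffuse term ignores the viewer; the specular term
# does not, which is why highlights move when the camera moves.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse(normal, light_dir):
    """Lambert diffuse: depends only on surface orientation vs the light."""
    return max(0.0, dot(normal, light_dir))

def specular(normal, light_dir, view_dir, shininess=32):
    """Blinn-Phong specular: also depends on where the viewer is."""
    h = tuple((l + v) / 2 for l, v in zip(light_dir, view_dir))
    length = dot(h, h) ** 0.5
    h = tuple(c / length for c in h)  # normalized half-vector
    return max(0.0, dot(normal, h)) ** shininess

n = (0.0, 0.0, 1.0)
light = (0.0, 0.0, 1.0)
# Diffuse is the same from any viewpoint; specular is not:
print(diffuse(n, light))                    # -> 1.0
print(specular(n, light, (0.0, 0.0, 1.0)))  # viewer inside the highlight
print(specular(n, light, (1.0, 0.0, 0.0)))  # viewer off to the side: far dimmer
```

The high `shininess` exponent is what concentrates the highlight into the small, mirror-like spot the text describes.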


Dynamic LoD

 

High-speed computer networks will provide us with new broadband multimedia applications. This paper discusses new functions for the next-generation VRML (Virtual Reality Modeling Language) over high-speed computer networks. The LoD (Level of Detail) of 3D objects is the most important function for rendering scenes dynamically while managing the QoS (Quality of Service). New requirements for the next-generation VRML are discussed. We present Differential VRML (DVRML) in order to update scene graphs dynamically, and describe principles of the LoD function based on the DVRML.
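At its core, dynamic LoD is a distance-based selection: far objects get coarse meshes, near ones get fine meshes, keeping the rendering budget stable. Illustrative Python with assumed distance thresholds and polygon counts:

```python
# Illustrative sketch of Level of Detail selection. Thresholds and polygon
# counts below are assumed values, not taken from any particular engine.

# (maximum distance, polygon count of the mesh to use at that range)
LOD_LEVELS = [(10.0, 5000), (50.0, 1000), (float("inf"), 100)]

def select_lod(distance):
    """Return the polygon count to use for an object at `distance` from the camera."""
    for max_dist, polygons in LOD_LEVELS:
        if distance <= max_dist:
            return polygons

print(select_lod(3.0))    # close: full-detail mesh
print(select_lod(200.0))  # far: coarse mesh
```

Re-running this selection every frame as objects move is what makes the LoD "dynamic".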


Wow - you guys are good! I'm impressed. Thanks for the help. I saw the screens for Hegemonia on PC and they were so breathtaking I had to find out what the specs mean. I can post a link to those screens if you want. Do you think a game for console can have these specs? Wouldn't XBOX be able to handle this?


Archived

This topic is now archived and is closed to further replies.
