Graphics Enhancements & Effects

OpenGL Built-In Features

Anti-aliasing (Multisampling)

OpenGL has MSAA (multisampling) built in, which you can set as desired within the simulator. A common form of aliasing is the jagged edges of polygons; the higher this setting, the more those jagged edges are smoothed out. Incidentally, shimmering is another form of aliasing.

Some graphics cards allow you to set this feature (and other anti-aliasing techniques) regardless of whether the application has enabled it as an option.
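For context, here's a minimal sketch of how an OpenGL application typically requests a multisampled framebuffer, using GLFW and glad as examples (the simulator's actual windowing setup isn't shown in this article):

    #include <glad/glad.h>   // any OpenGL loader works; glad is just an example
    #include <GLFW/glfw3.h>

    int main() {
        glfwInit();
        glfwWindowHint(GLFW_SAMPLES, 4);  // request a 4x multisampled framebuffer (sample count is illustrative)
        GLFWwindow* window = glfwCreateWindow(1280, 720, "MSAA", nullptr, nullptr);
        glfwMakeContextCurrent(window);
        gladLoadGLLoader((GLADloadproc)glfwGetProcAddress);
        glEnable(GL_MULTISAMPLE);  // usually on by default, but being explicit is harmless
        // ... render loop ...
        glfwTerminate();
    }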

The simulator’s VR feature has an independent MSAA setting, applied to its own dedicated framebuffer. The two final images, one for each eye, are sent to the VR headset.

Some VR vendors offer their own anti-aliasing, such as supersampling, as a display option. From testing, the simulator’s OpenGL MSAA option works very well, and it hasn’t been necessary to use supersampling.

Anisotropic filtering

This is built into OpenGL and can be applied on a per-texture (image) basis, or to all textures. It helps images displayed on mesh faces stay well defined (sharp) when viewed at steep angles; the steeper the angle, the more beneficial anisotropic filtering becomes.

The simulator has a graphics setting which enables this feature.

Some graphics cards allow you to set anisotropic filtering regardless of whether the application has enabled it as an option.
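As a rough sketch, per-texture anisotropic filtering comes down to one texture parameter, GL_TEXTURE_MAX_ANISOTROPY_EXT from the long-standing EXT_texture_filter_anisotropic extension (promoted to GL_TEXTURE_MAX_ANISOTROPY in core OpenGL 4.6):

    // Apply the hardware's maximum supported anisotropy to an existing 2D texture.
    // (textureId is assumed to be a texture that has already been uploaded.)
    void enableAnisotropy(GLuint textureId) {
        GLfloat maxAniso = 1.0f;  // 1.0 means "off"; hardware commonly supports up to 16.0
        glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
        glBindTexture(GL_TEXTURE_2D, textureId);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);
    }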

Texture coordinates (considering these 1st)

3D model meshes can be rotated in any combination of X, Y and Z. Texture coordinates (via the UV map) ensure the correct image pixels are selected for each mesh face, whatever the orientation.
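As a tiny illustration of the idea (the layout is hypothetical, not the simulator's actual vertex format), the UVs are stored per vertex, so they stay attached to the mesh however it's rotated:

    // One mesh vertex: a model-space position plus its UV (texture) coordinates.
    // The position may be rotated in any combination of X, Y and Z; the UVs stay
    // fixed to the same point on the image, so the correct pixels are always sampled.
    struct Vertex {
        float x, y, z;  // position in model space
        float u, v;     // texture coordinates in the 0..1 range
    };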

Mipmaps (considering these next)

Mipmaps, a related aspect of improving image quality, work by making available multiple copies of an image scaled (equally in the X and Y directions) to different sizes. The appropriate copy is then selected for display on a given mesh face, depending on how much of the monitor’s screen area the model takes up, i.e. closer = bigger = more. This process helps both performance and image quality.
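In OpenGL, a full mipmap chain can be generated with a single call and is then selected automatically via a mipmapped minification filter; a minimal sketch:

    // Generate the chain of progressively half-sized images for an already-uploaded
    // texture, then enable trilinear filtering (blending the two nearest mipmap levels).
    void setupMipmaps(GLuint textureId) {
        glBindTexture(GL_TEXTURE_2D, textureId);
        glGenerateMipmap(GL_TEXTURE_2D);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);  // no mipmaps when magnifying
    }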

Anisotropic filtering - A simplistic description

When an object’s mesh faces are tilted in 3D space, which is typically most of the time, you’re still viewing the entire image applied to a face, but at an angle. One screen direction, X or Y, will therefore likely end up representing (containing) the image within a significantly smaller or larger distance than is ideal, if equally scaled images are used.

If the smaller of the X or Y distances is used to produce an equally scaled image, the image resolution will be too low to correspond well to the larger distance. Likewise, if the larger distance is used, the resolution will be too high for the smaller distance.

Anisotropic filtering addresses this by producing images scaled by different amounts horizontally vs vertically, which can then be selected to best fit the mesh face’s level of rotation (tilt) on screen.

A more detailed overall description: An improvement on isotropic MIP mapping (2nd section)

The following video shows two identical floors, each with the same texture image applied. The left plane (floor) has no anisotropic filtering; the right plane has anisotropic filtering enabled, which makes it look considerably clearer when viewed at various steep angles.

Transparency (blending vs discarding)

Both partial transparency (AKA translucency) and the ability to throw away fragments completely (so they’re totally invisible) are built into OpenGL.

Blending:

When objects are drawn on screen one in front of the other, blending mixes some of the colour of the object drawn first into the object drawn second (the one covering it up). The result is that you appear to see through the object that was drawn second.

This process is used for all the simulator’s objects which have varying levels of transparency. Meshes can have their triangles sorted into depth order, either just once within the modelling software prior to exporting, or on the fly within the application as it runs, though the latter is expensive.
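A minimal sketch of the standard OpenGL alpha-blending state (the simulator's exact blend setup isn't detailed in this article):

    // Typical transparent pass: blend each new fragment with what's already drawn.
    void beginTransparentPass() {
        glEnable(GL_BLEND);
        // result = src.rgb * src.a + dst.rgb * (1 - src.a)
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(GL_FALSE);  // still depth-test against opaque geometry, but don't write depth
    }

    void endTransparentPass() {
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
    }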

Discarding:

Based on an alpha value assigned to each vertex, fragments can be discarded completely.

This is the technique used to remove the unwanted image regions around the outside of leaves, i.e. the background in a photo of a leaf. The result is that a tree’s leaves look correct, instead of having a rectangular outline around them.

The process of discarding fragments is also useful for producing completely transparent regions within an object’s mesh faces, not just for trimming away the unwanted outside regions, of course.
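A sketch of a leaf-style fragment shader using discard; the 0.5 cut-off is an assumed threshold, not the simulator's actual value:

    const char* leafFragmentSrc = R"(
        #version 330 core
        in vec2 uv;
        out vec4 colour;
        uniform sampler2D leafTexture;
        void main() {
            vec4 texel = texture(leafTexture, uv);
            if (texel.a < 0.5)
                discard;       // throw the fragment away entirely - nothing is written
            colour = texel;
        }
    )";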


Additional Features

Shadows & Lighting

Although you cannot interact with the scenery visible within a photo world / skybox / cubemap, you can indeed produce nice-looking shadows which are displayed on the skybox floor.

The simulated “The Beast” model helicopter, for example, casts shadows from all its moving parts.

Lighting is essential for any 3D object to look realistic as it moves and rotates. The lighting currently used is the “Phong reflection model”.
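For reference, the Phong reflection model sums an ambient, a diffuse, and a specular term per light; a GLSL-style sketch (all names are illustrative, not the simulator's):

    const char* phongSnippet = R"(
        vec3 n = normalize(normal);               // surface normal
        vec3 l = normalize(lightPos - fragPos);   // direction towards the light
        vec3 v = normalize(cameraPos - fragPos);  // direction towards the viewer
        vec3 r = reflect(-l, n);                  // light direction mirrored about the normal

        vec3 colour = ka * lightColour                                         // ambient
                    + kd * max(dot(n, l), 0.0) * lightColour                   // diffuse
                    + ks * pow(max(dot(r, v), 0.0), shininess) * lightColour;  // specular
    )";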

Various settings will be added for adjusting the shadows and lighting.

Depth maps - Enhanced detail

Depth maps add detail to the appearance of images as displayed on mesh faces.

Without depth map:

The popular brick wall example: Imagine a mesh plane which has a photo of a brick wall applied to it. If you tilt the plane, you’ll still see the same amount of brick and mortar; the perspective of the photo will remain unchanged.

With depth map:

As you tilt the plane, you’ll see varying amounts of brick and mortar, approximating a real 3D model of a brick wall. Only 4 vertices are still required for the mesh plane, which means the mesh itself cannot be displaced to produce the depth detail.

Instead, the line of sight from the camera to the plane is used to calculate how far the texture (image) pixel positions are shifted, creating the illusion that a real 3D mesh is being viewed at an angle. The results can be very convincing, but all is revealed as you get close to viewing the plane edge-on.
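A sketch of the simplest form of this technique, often called parallax mapping (depthMap, heightScale and the tangent-space view direction are illustrative assumptions, not the simulator's actual code):

    const char* parallaxSnippet = R"(
        // Shift the UVs along the view direction - the deeper the depth map says the
        // surface is at this point, the further the shift.
        vec2 parallaxUV(vec2 uv, vec3 viewDirTangentSpace) {
            float depth = texture(depthMap, uv).r;
            vec2 offset = (viewDirTangentSpace.xy / viewDirTangentSpace.z) * depth * heightScale;
            return uv - offset;
        }
    )";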

Bumpy wooden floor demo: The following video demonstrates this effect applied to a scene floor which has a bumpy, wood-like texture.


Special Effects

Blades and skids bending

These graphics animations are mostly processed in the vertex shader.

Vertex positions are transformed in the vertex shader to represent varying levels of curvature, which can be adjusted in the control panel.

Blade bending:

The more aggressive the collective input, the more the blades bend. The sensitivity can be set from zero to ridiculously high.

There’s a blade-bend-start setting which affects how bending is distributed along the length of the blade. If set to zero, bending happens equally along the whole length. As the setting is increased, bending begins further away from the blade grips.
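A hypothetical vertex-shader sketch of the idea (span, bendStart, bendAmount and collectiveInput are illustrative names; the simulator's actual maths isn't published here):

    const char* bladeBendSnippet = R"(
        // span runs from 0.0 at the blade grip to 1.0 at the tip.
        // No bending before bendStart; beyond it, bending grows towards the tip.
        float t = max(span - bendStart, 0.0) / max(1.0 - bendStart, 0.0001);
        position.y += bendAmount * collectiveInput * t * t;
    )";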

Skid bending:

From a programming point of view this is more complicated, because the model’s height, position and rotation must correspond to the distortion applied to the skids; otherwise, as the skids bend, they would appear to leave the ground or sink below it.

Separate bending calculations are applied to the horizontal and angled sections, because they require different amounts of distortion. The angled section’s position also needs offsetting in accordance with how much the horizontal sections bend; otherwise the sections would separate where they meet at the corner.

Blades feathering

This graphics animation is processed in both the vertex shader and the fragment shader.

The vertex shader transforms vertex positions to form a disc. The amount by which vertices are transformed is approximately proportional to the RPM.

Transparency is applied in the fragment shader to the regions either side of each blade’s initial rotational vertex position, increasing in value according to how far each vertex has been transformed.
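A loose sketch of the overall idea, not the simulator's actual shaders (the rotor axis and all names here are assumptions):

    const char* featherSnippet = R"(
        // Vertex shader side: sweep each vertex around the rotor axis (assumed to be Y)
        // by an angle that grows with RPM, smearing the blades into a disc.
        float sweep = rpmFactor * vertexSweepWeight;  // how far this vertex is dragged round
        float c = cos(sweep);
        float s = sin(sweep);
        position.xz = mat2(c, -s, s, c) * position.xz;

        // Passed on to the fragment shader, where alpha = 1.0 - fade: the further a
        // vertex is swept from the blade's initial position, the fainter it becomes.
        fade = clamp(sweep * fadeScale, 0.0, 1.0);
    )";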

Controls for adjusting the sensitivity of blade feathering will be added to the control panel.

Belt text animation

It’s fair to say this is an over-complicated, hacky way of achieving belt animation. Even so, it works quite well.

The belt exists as several sections, each of which is animated by one tooth section, then moved back to its start position, and the animation repeated.

The transformations are translations for the straight sections and rotations for the pulley sections.

Where the straight sections meet the pulley sections, the vertices are transformed such that they approximately align, producing continuous motion.

The texture coordinates are transformed (once every complete tooth transformation/step) across a texture that contains the entire belt’s text.

Multiple UV-map faces (one for each belt section) travel from right to left and then jump back to the start, repeatedly, while also working their way from bottom to top, then starting again at the bottom.
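A sketch of the per-step UV update for one belt section (toothUVWidth, sectionUVWidth and rowUVHeight are hypothetical names):

    struct Vec2 { float x, y; };

    // Called once per complete tooth step: slide this section's UV window across the
    // texture holding the belt's text - right to left along a row, then up a row.
    void advanceBeltUVs(Vec2& uvOffset, float toothUVWidth, float sectionUVWidth, float rowUVHeight) {
        uvOffset.x -= toothUVWidth;
        if (uvOffset.x <= 0.0f) {                 // reached the left edge of this row
            uvOffset.x = 1.0f - sectionUVWidth;   // jump back to the right-hand side
            uvOffset.y += rowUVHeight;            // move up to the next row of text
            if (uvOffset.y >= 1.0f)
                uvOffset.y = 0.0f;                // wrap back to the bottom row
        }
    }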

That’s all there is to it.

Particle fountain

Instancing is used to draw thousands of copies of a simple mesh which consists of just a few vertices. The number of particles processed can be set from a few to tens of thousands.

A buffer object containing all the particle positions is updated each cycle.

Each particle’s position is fed to the vertex shader as a per-instance vertex attribute.

Logic designed to produce varying patterns of particle streaming determines each particle’s position during each physics cycle. There are various settings for changing the streaming patterns.

Transparency is applied incrementally to the particles and can be set to start at different points within the flow.
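A sketch of the instanced-draw side (the attribute index and names are illustrative): the per-particle positions live in a buffer whose attribute advances once per instance rather than once per vertex:

    // Upload this cycle's particle positions (three floats each, tightly packed).
    glBindBuffer(GL_ARRAY_BUFFER, particleBuffer);
    glBufferSubData(GL_ARRAY_BUFFER, 0, particleCount * 3 * sizeof(float), positions);

    // Attribute 1 = per-particle position; divisor 1 makes it advance once per instance.
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glVertexAttribDivisor(1, 1);

    // One draw call renders every particle as a copy of the small base mesh.
    glDrawArraysInstanced(GL_TRIANGLES, 0, baseMeshVertexCount, particleCount);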