Terrain Generation Collaborative Research Project – Introduction & First Update

Welcome to the first update on my collaborative research project for my third year of university. The project is run by a two-man team: myself and Toby Gilbert. Our goal is to research new techniques for procedurally generating terrain on the macro and meso levels while maintaining relatively real-time rendering of the geometry created. The macro side of generation covers the large-scale geometry such as mountains, hills and caverns; here we want to create terrain that is as physically accurate as possible to enhance realism. The meso side covers the smaller detail such as trees, boulders and grass, also a very demanding area, as we must draw large volumes of shrubbery while maintaining performance. To speed up our research we have split the two areas of generation between ourselves: Toby is in charge of the meso side of the generation, and I shall be looking at the macro.

Initial Research

Terrain generation is a vastly researched field in today's computer graphics, as it is used in everything from computer games to movies, so there is a large body of research that we can build from. A very basic technique is to simply use a height map: a previously generated texture in which every pixel value represents the height of the surface at the location of that pixel, so the height of the surface at (x, y) is f(x, y), where the function f returns the colour value of pixel (x, y) in the texture. This makes for very easy procedural generation, as you can use random noise functions such as Perlin noise to generate the initial texture to be sampled. This method is very quick to implement, and complexity can be added by developing more advanced methods of generating the texture. You will find a very detailed exploration of this technique in Realtime Procedural Terrain Generation by Jacob Olsen, which uses a combination of Voronoi diagrams and noise generated by mid-point displacement to create fractal terrain. It also explores simulating erosion, both thermal and hydraulic, to improve the physical accuracy of the terrain.

The limitation of the height-map method is that the terrain can never contain features such as caves or arches, because each (x, y) location has only a single height. To achieve these we must look into generating volumetric data, which is explored in Arches: a Framework for Modeling Complex Terrains by A. Peytavie, E. Galin, J. Grosjean and S. Merillou. In this paper they generate a three-dimensional data set consisting of different materials such as bedrock and sand, then simulate how these materials interact with each other and settle under gravity. They finally polygonise the volumetric data using a technique known as marching cubes. This is an algorithm in which you sample a three-dimensional field of voxels and determine what to draw based on how each voxel's corners intersect with the surface function. For example, in the image below, if one corner of our voxel lies inside the shape and the other seven lie outside, then the surface must pass between them, and therefore we draw a triangle separating that corner from the rest.

[Image: marching cubes voxel configurations]
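To make the classification step concrete, here is a minimal sketch in C++. It only shows how a voxel's eight corner samples are turned into a configuration index; the edge and triangle lookup tables that turn that index into actual triangles (which you can find in Paul Bourke's code) are omitted, and the names here are illustrative.

```cpp
// Minimal sketch of the marching cubes classification step.
// Assumes a scalar field sampled at the 8 corners of one voxel;
// the edge/triangle lookup tables are omitted here.
#include <array>

// Returns an 8-bit index where bit i is set if corner i lies
// below the iso-level (i.e. "inside" the terrain).
int cubeIndex(const std::array<float, 8>& corner, float isoLevel)
{
    int index = 0;
    for (int i = 0; i < 8; ++i)
    {
        if (corner[i] < isoLevel)
            index |= (1 << i);
    }
    // index == 0 or 255 means the voxel is entirely outside/inside,
    // so nothing is drawn. Any other value selects one of the
    // triangle configurations from the lookup table.
    return index;
}
```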

Finally we look into the meso side of generation. The challenges of this is that we need a lot of geometry to create a high amount of detail which in turn increases the compute power we will need to render our terrain in real time. Now you can achieve this by instancing geometry which means storing one set of geometry in memory and just drawing it many times. Alternatively in the paper Real-time Realistic Rendering and Lighting of Forests by Eric Bruneton, Fabrice Neyret solves this by rendering these objects as selection of textures different perspectives of an object and blending between them as you move around for a smooth transition between textures. This mean we can draw a large amount of objects very cheaply as textures are extremely optimised on the GPU.
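As an idea of how cheap instancing is on the API side, here is a hedged OpenGL sketch: one mesh in memory, one draw call for every copy. It assumes a VAO with the mesh already bound at attribute location 0, and the names instancePositions and vertexCount are hypothetical; this is plain hardware instancing, not the impostor technique from Bruneton and Neyret.

```cpp
// Minimal sketch of hardware instancing in OpenGL: one copy of the
// mesh in memory, drawn many times with a per-instance offset.
#include <vector>
#include <GL/glew.h>
#include <glm/glm.hpp>

void drawInstanced(const std::vector<glm::vec3>& instancePositions,
                   GLsizei vertexCount)
{
    // Upload one position per instance (tree, boulder, grass tuft...).
    GLuint instanceVBO;
    glGenBuffers(1, &instanceVBO);
    glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
    glBufferData(GL_ARRAY_BUFFER,
                 instancePositions.size() * sizeof(glm::vec3),
                 instancePositions.data(), GL_STATIC_DRAW);

    // Attribute 1 = per-instance offset; advance it once per
    // instance rather than once per vertex.
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE,
                          sizeof(glm::vec3), nullptr);
    glVertexAttribDivisor(1, 1);

    // A single draw call renders every copy of the mesh.
    glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount,
                          (GLsizei)instancePositions.size());
}
```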

Current Progress

The first step in this project was to implement the generation of the terrain through marching cubes. To achieve this I have created a modified version of the source code written by Paul Bourke here, such that we can generate terrain from height maps. Our first generation program simply uses Perlin noise to generate a height map.
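For illustration, here is a minimal sketch of the fractal (octave) noise idea behind that height map. It assumes a 2D Perlin function perlin(x, y) returning values in [-1, 1]; the octave count and scales are made up for the example, not the values from our program.

```cpp
// Minimal sketch of building a height map from octaves of noise.
#include <vector>

float perlin(float x, float y);  // assumed: any 2D gradient noise works

std::vector<float> makeHeightMap(int width, int height)
{
    std::vector<float> map(width * height);
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            float value = 0.0f;
            float amplitude = 1.0f;          // strength of this octave
            float frequency = 1.0f / 64.0f;  // size of largest features
            for (int octave = 0; octave < 5; ++octave)
            {
                value += amplitude * perlin(x * frequency, y * frequency);
                amplitude *= 0.5f;  // finer octaves contribute less height
                frequency *= 2.0f;  // ...but add smaller-scale detail
            }
            map[y * width + x] = value;  // height of the surface at (x, y)
        }
    }
    return map;
}
```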

[Image: terrain generated from the Perlin noise height map]

My next step was to make our terrain look a little more natural by rendering it in a slightly more creative manner. For this I have used the technique from A Rule-Based Approach to 3D Terrain Generation via Texture Splatting by Jonathan Ferraris and Christos Gatzidis. This implementation shades the geometry based on the heights and normals of its points. For example, lower points are mud and grass, shaded with brown and green, and as the height increases the terrain becomes rock and snow, shaded with grey and white. To improve this further we use the normals to identify sheer cliffs and shade them as rock instead of grass or snow. We can do this by approximating the slope angle: invert the y component of the normal and multiply it by 90, so a flat face gives 0 degrees and a vertical face gives 90. Then we simply set a user-defined threshold and shade any face above that angle as cliff. In our program I have implemented two versions of this: one with just block colours, and one that uses pre-created textures for our different types of terrain.

[Images: terrain shaded with smooth block colours; terrain shaded with splatted textures]
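A rough sketch of that rule in code, with illustrative height and slope thresholds rather than the values from the paper or our program (glm is assumed for the vector type):

```cpp
// Minimal sketch of rule-based terrain colouring: pick a terrain
// type from the vertex height and the slope derived from its normal.
#include <glm/glm.hpp>

glm::vec3 terrainColour(float height, const glm::vec3& normal)
{
    // Approximate slope angle in degrees: a flat surface has n.y = 1
    // (0 degrees), a vertical cliff has n.y = 0 (90 degrees).
    float slope = (1.0f - normal.y) * 90.0f;

    if (slope > 50.0f)                           // cliff threshold
        return glm::vec3(0.5f);                  // grey rock
    if (height < 0.2f)
        return glm::vec3(0.4f, 0.3f, 0.2f);      // brown mud
    if (height < 0.6f)
        return glm::vec3(0.2f, 0.6f, 0.2f);      // green grass
    if (height < 0.8f)
        return glm::vec3(0.5f);                  // grey rock
    return glm::vec3(1.0f);                      // white snow
}
```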


A brief introduction to ray and path tracing

Rendering is the process of generating an image from a 2D or 3D model, and it is possibly the most important component in any video game or CGI. There are a variety of techniques to choose from, the most common of which at the moment is rasterization, which is what you will find in all modern-day games. In this technique you take the vertices and normals of a model and interpolate them across pixel space to create an image. For example, if you have two points A and B and want to draw a line, you interpolate from A to B and fill in all the pixels along the way. This technique is decades old and very highly optimized, but as time moves on, visual effects demand higher-quality images, which require more complicated rendering techniques. This is where ray and path tracing come in. You will find them very commonly used in CGI, as they create very high-quality images, but they are in no way fast enough for games (yet!). In this post I will try to give you a basic idea of what they are and how you would implement them in computer graphics.
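As a tiny illustration of that A-to-B interpolation idea (not a production rasterizer; setPixel is a hypothetical framebuffer write):

```cpp
// Minimal sketch of the interpolation behind rasterizing a line:
// walk from pixel A to pixel B and fill every pixel along the way.
#include <algorithm>
#include <cmath>

void drawLine(float ax, float ay, float bx, float by,
              void (*setPixel)(int x, int y))
{
    int steps = (int)std::max(std::abs(bx - ax), std::abs(by - ay));
    for (int i = 0; i <= steps; ++i)
    {
        float t = (steps == 0) ? 0.0f : (float)i / steps; // 0 at A, 1 at B
        int x = (int)std::round(ax + t * (bx - ax));      // lerp x
        int y = (int)std::round(ay + t * (by - ay));      // lerp y
        setPixel(x, y);
    }
}
```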

In layman’s terms, ray tracing is the attempt to simulate photons of light through mathematical formulae so that we can create images on the screen. In reality, billions of photons come from a light source and bounce off objects in many different directions; the photons angled correctly hit our eyes, enabling us to see.

[Image: light bouncing from a source, off an object, into the eye]

This is exactly what we are trying to simulate with ray tracing, but with some cheats so that our computers can handle it. In our scene we have a light source, some objects and finally a camera, which represents our eye. In reality billions of photons are emitted from our light source in infinite directions, and only the small percentage of rays that land in our camera/eye would create our image. Sadly we can’t simulate this in computing, or at least if we did it would take years, due to the vast quantity of rays we would have to calculate that may never even land in our camera. To overcome this we use a method very imaginatively named backward tracing. In this method we trace rays backwards from our camera to an object and then to the light source. This saves us calculating all the billions of unnecessary rays and keeps just the ones that create our image. At this state we have one ray hitting something in our scene, creating a small dot of our image. Now, to create our full image, all we have to do is send more rays. Imagine, if you will, that you are painting a picture but can only use dots: if you paint enough dots, you will eventually be able to create a full image.

[Image: backward tracing rays from the camera through an image plane into the scene]

To convert this into terms of rendering, we effectively need a ray for every pixel we are trying to draw. So imagine we have a plane in front of our camera. We divide this plane into a grid and fire a ray from our camera through each cell (our pixels) of the grid. We calculate whether or not it intersects with something in our scene, and if it does, we use the colour of that object for that pixel. At a very basic level, this is how our ray tracer works.
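Putting that into a minimal, hedged sketch: one ray per pixel through an image plane, with a single hard-coded sphere standing in for the scene and flat colouring instead of real shading.

```cpp
// Minimal sketch of backward tracing: fire one ray per pixel and
// colour the pixel if it hits a sphere. Everything here (the sphere,
// the camera at the origin) is illustrative.
#include <vector>

struct Vec3 { float x, y, z; };

// Ray-sphere test: does the ray from `origin` along `dir` hit a
// sphere of radius r centred at c?
bool hitSphere(Vec3 origin, Vec3 dir, Vec3 c, float r)
{
    Vec3 oc = { origin.x - c.x, origin.y - c.y, origin.z - c.z };
    float a  = dir.x * dir.x + dir.y * dir.y + dir.z * dir.z;
    float b  = 2.0f * (oc.x * dir.x + oc.y * dir.y + oc.z * dir.z);
    float cc = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - r * r;
    return b * b - 4.0f * a * cc >= 0.0f;  // discriminant test
}

std::vector<Vec3> render(int width, int height)
{
    std::vector<Vec3> image(width * height);
    Vec3 sphere = { 0.0f, 0.0f, -3.0f };
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            // Map the pixel to a cell on an image plane at z = -1.
            float u = (x + 0.5f) / width * 2.0f - 1.0f;
            float v = 1.0f - (y + 0.5f) / height * 2.0f;
            Vec3 dir = { u, v, -1.0f };  // ray from camera at the origin
            bool hit = hitSphere({0, 0, 0}, dir, sphere, 1.0f);
            image[y * width + x] = hit ? Vec3{1, 0, 0}   // sphere colour
                                       : Vec3{0, 0, 0};  // background
        }
    }
    return image;
}
```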

Path tracing is almost an extension of ray tracing. It is a lot more physically accurate, creating even higher-quality images, but again sacrifices speed of calculation. Instead of a ray hitting an object and then being sent straight to the light source, it continues to bounce around the scene, accumulating colour values, until it eventually hits a light. Materials also behave differently: some have high reflectivity and others a level of refraction or transparency, which means our rays have to behave differently as well to colour them correctly. For example, if we have a shiny red sphere next to a blue sphere, the red sphere will reflect some of the blue from the other sphere. All of this means our rays must make more “bounces” before reaching the light source, which in turn means more calculations, which equals longer rendering times.
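Sketching that bounce loop (reusing Vec3 from the sketch above; intersectScene and randomHemisphereDir are assumed helpers, and a real path tracer would add proper sampling weights and smarter termination such as Russian roulette):

```cpp
// Minimal sketch of the path tracing recursion: keep bouncing and
// tinting by the surface colour until the path hits a light.
struct Hit { bool isLight; Vec3 emission; Vec3 albedo;
             Vec3 point; Vec3 normal; };

Hit  intersectScene(Vec3 origin, Vec3 dir);   // assumed scene query
Vec3 randomHemisphereDir(Vec3 normal);        // assumed sampler

Vec3 trace(Vec3 origin, Vec3 dir, int depth)
{
    if (depth > 5)                    // give up on very long paths
        return Vec3{0, 0, 0};

    Hit hit = intersectScene(origin, dir);
    if (hit.isLight)
        return hit.emission;          // path reached a light: done

    // Bounce in a random direction above the surface and tint whatever
    // comes back by this surface's colour -- this is how the red sphere
    // picks up blue from its neighbour.
    Vec3 bounce   = randomHemisphereDir(hit.normal);
    Vec3 incoming = trace(hit.point, bounce, depth + 1);
    return Vec3{ hit.albedo.x * incoming.x,
                 hit.albedo.y * incoming.y,
                 hit.albedo.z * incoming.z };
}
```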

So that’s a brief introduction to ray and path tracers. Soon I hope to explain in more depth the mathematics used in these techniques, such as shading formulae and how to calculate the reflections of the rays in the scene.

For more info:

A good explanation of simple ray tracers and how to implement them, with source code

Ray Tracers vs. Path Tracers