Compiling CUDA dynamic parallelism with Qt Creator

So recently I have been using dynamic parallelism in my CUDA fluid simulation. Qt Creator is my IDE of choice, so ideally I needed it to compile with that. As there is not a lot of documentation on it, I figured it would be a crime not to share it with the world. So here is the .pro file! Enjoy!

TARGET=FluidSim
OBJECTS_DIR=obj

# as I want to support 4.8 and 5 this will set a flag for some of the mac stuff
# mainly in the types.h file for the setMacVisual which is native in Qt5
isEqual(QT_MAJOR_VERSION, 5) {
 cache()
 DEFINES +=QT5BUILD
}
UI_HEADERS_DIR=ui
MOC_DIR=moc

CONFIG-=app_bundle
QT+=gui opengl core
SOURCES += # any sources you have


HEADERS += #any headers you have

FORMS += #whatever you want

OTHER_FILES += #whatever you want


INCLUDEPATH +=./include /opt/local/include $$(HOME)/NGL/include/
LIBS += -L/opt/local/lib -lGLEW
DESTDIR=./

CONFIG += console
CONFIG -= app_bundle

# This is some stuff for NGL, the NCCA graphics lib; remove it if you don't want it
#----------------------------------------------------------------
#------------------------ NGL setup -----------------------------
#----------------------------------------------------------------
# use this to suppress some warning from boost
QMAKE_CXXFLAGS_WARN_ON += "-Wno-unused-parameter"
QMAKE_CXXFLAGS+= -msse -msse2 -msse3
macx:QMAKE_CXXFLAGS+= -arch x86_64
macx:INCLUDEPATH+=/usr/local/include/
# define the NGL_DEBUG flag for the graphics lib
DEFINES +=NGL_DEBUG

unix:LIBS += -L/usr/local/lib
# add the ngl lib
unix:LIBS += -L/$(HOME)/NGL/lib -lNGL

# now if we are under unix and not on a Mac (i.e. linux) define GLEW
linux-*{
 linux-*:QMAKE_CXXFLAGS += -march=native
 linux-*:DEFINES+=GL42
 DEFINES += LINUX
}
DEPENDPATH+=include
# if we are on a mac define DARWIN
macx:DEFINES += DARWIN

#----------------------------------------------------------------
#-------------------------Cuda setup-----------------------------
#----------------------------------------------------------------

#set our CUDA sources
CUDA_SOURCES += cudaSrc/*.cu

# Path to cuda toolkit install
macx:CUDA_DIR = /Developer/NVIDIA/CUDA-6.5
linux:CUDA_DIR = /usr/local/cuda-6.5
# Path to cuda samples (SDK) install
macx:CUDA_SDK = /Developer/NVIDIA/CUDA-6.5/samples
linux:CUDA_SDK = /usr/local/cuda-6.5/samples

#Cuda include paths
INCLUDEPATH += $$CUDA_DIR/include
INCLUDEPATH += $$CUDA_SDK/common/inc/
INCLUDEPATH += $$CUDA_SDK/../shared/inc/


#cuda libs
macx:QMAKE_LIBDIR += $$CUDA_DIR/lib
linux:QMAKE_LIBDIR += $$CUDA_DIR/lib64
QMAKE_LIBDIR += $$CUDA_SDK/common/lib
#note for dynamic parallelism you need libcudadevrt
LIBS += -lcudart -lcudadevrt

# join the includes in a line
CUDA_INC = $$join(INCLUDEPATH,' -I','-I',' ')

# nvcc flags (ptxas option verbose is always useful)
NVCCFLAGS = --compiler-options -fno-strict-aliasing -use_fast_math --ptxas-options=-v


#prepare the intermediate CUDA compiler step
cudaIntr.input = CUDA_SOURCES
cudaIntr.output = ${OBJECTS_DIR}${QMAKE_FILE_BASE}.o

## Tweak arch according to your hw's compute capability
cudaIntr.commands = $$CUDA_DIR/bin/nvcc -m64 -g -G -gencode arch=compute_52,code=sm_52 -dc $$NVCCFLAGS $$CUDA_INC $$LIBS ${QMAKE_FILE_NAME} -o ${QMAKE_FILE_OUT}

#Set our variable out. These obj files need to be used to create the link obj file
#and used in our final gcc compilation
cudaIntr.variable_out = CUDA_OBJ
cudaIntr.variable_out += OBJECTS
cudaIntr.clean = cudaIntrObj/*.o

QMAKE_EXTRA_UNIX_COMPILERS += cudaIntr


# Prepare the linking compiler step
cuda.input = CUDA_OBJ
cuda.output = ${QMAKE_FILE_BASE}_link.o

# Tweak arch according to your hw's compute capability
cuda.commands = $$CUDA_DIR/bin/nvcc -m64 -g -G -gencode arch=compute_52,code=sm_52 -dlink ${QMAKE_FILE_NAME} -o ${QMAKE_FILE_OUT}
cuda.dependency_type = TYPE_C
cuda.depend_command = $$CUDA_DIR/bin/nvcc -g -G -M $$CUDA_INC $$NVCCFLAGS ${QMAKE_FILE_NAME}
# Tell Qt that we want to add more stuff to the Makefile
QMAKE_EXTRA_UNIX_COMPILERS += cuda
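For context, here is a minimal sketch of the kind of kernel this setup exists to compile. The kernel itself is made up and only shows the shape of a device-side launch: a parent kernel launching a child kernel on the GPU, which is the feature that needs the separate -dc/-dlink steps and libcudadevrt above.

// minimal dynamic parallelism sketch (illustrative names, sm_35 or newer required)

// child kernel: processes a chunk of the data
__global__ void childKernel(float *data, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) data[idx] *= 2.0f;
}

// parent kernel: launches the child from the device,
// which is only possible with dynamic parallelism
__global__ void parentKernel(float *data, int n)
{
    if (threadIdx.x == 0 && blockIdx.x == 0)
    {
        childKernel<<<(n + 255) / 256, 256>>>(data, n);
        cudaDeviceSynchronize(); // wait for the child grid to finish (device-side)
    }
}

With the .pro file above, the cudaIntr step compiles a file like this with -dc and the cuda step device-links the resulting objects against libcudadevrt.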


Terrain Generation Collaborative Research Project – Update 2: Thermal Erosion & Data Structures

When thinking about generating terrain we must first think about how we are going to store the data that represents it. This is a crucial decision, as the method you use to represent the data has a direct effect on what kind of terrain you can generate and how large your data structure will be. In an ideal world we want to be able to represent any kind of terrain using minimal memory.

The technique we have been using so far is two dimensional height maps. This is a very compact way of representing terrain data, in which we use a two dimensional array of elements that store height values. This gives a spatial requirement of n^2 bytes, n being the width of our square grid. The technique has its limitations in what type of terrain it can represent: storing only a height per location restricts us to a single surface layer, so it cannot represent natural phenomena such as horizontal caves.

On the other hand we can represent our terrain in voxel form. This could be stored in a three dimensional array, which allows us to represent a third dimension of data. Where this representation has its drawbacks is in the size of the data structure: unlike our height maps, a voxel grid takes n^3 bytes, turning terrain that would be megabytes as a height map into gigabytes of voxel data.

Therefore we must compromise and combine the two techniques with the data structure proposed in the paper Layered Data Representation for Visual Simulation of Terrain Erosion by B. Benes and R. Forsbach. This paper proposes a two dimensional array of elements, each of which contains information about the underlying material layers.

E.g.

typedef struct
{
    PropertiesT data[MAX_LEVEL]; // info about each underlying material layer
    float height;                // overall height at this location
} ElmT; // one element of the array

PropertiesT is a structure that contains information about a material layer, such as the height of the layer, the material type or even its density. Unlike the voxel representation, this clumps together layers of the same material and stores information about the whole block, which saves a large amount of data. The overall size of the data structure is now n^2 * sizeof(ElmT) bytes, which means that as long as sizeof(ElmT) is smaller than n, which it is highly likely to be, our data structure will be much smaller than the voxel based approach.
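As a rough sketch of what PropertiesT might contain (the exact fields are up to you, these are purely illustrative, and in a real header this would sit above ElmT):

// illustrative sketch only: one material layer within a column
#define MAX_LEVEL 8      // maximum number of layers per element (application defined)

typedef struct
{
    float height;        // thickness of this material layer
    int   material;      // e.g. bedrock, sand, soil
    float density;       // any other per-material property you need
} PropertiesT;

// with, say, 8 layers of 12 bytes each, sizeof(ElmT) is around 100 bytes,
// so a 1024x1024 grid is on the order of 100MB rather than the gigabyte or
// more a full 1024^3 voxel grid would need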

This data structure also gives us the freedom to easily implement erosion techniques. The technique we have used is known as thermal erosion and is taken from the same paper. The thermal erosion algorithm is an attempt to represent long term thermal weathering: the material is dissolved by changes in temperature, which causes the terrain to break up and fall down. The eroded part falls in the direction of greatest gradient. To achieve this we use the following equation,

Δh_i = Δs · h_i / Σ_k h_k, summing over the eight neighbours k

The result gives the amount of material to move to neighbouring location i. Δs is equal to half of the largest height difference between the element we wish to erode and its eight neighbours; the halving must be done to stop oscillations in the algorithm. h_i represents the height of the neighbour we wish to move our material to, and it is divided by the sum of all the neighbouring heights to give that neighbour's share.
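To make the algorithm concrete, here is a rough sketch of one erosion pass over a plain 2D height map rather than the layered structure. In this sketch I hand the moved material to the lower neighbours in proportion to the height differences, which is the variant I find easiest to picture; the names and the talus threshold are purely illustrative.

// rough sketch only: one thermal erosion pass over a simple 2D height map
// (the real implementation would work on the layered structure instead)
#include <algorithm>
#include <vector>

void thermalErosionStep(std::vector<float> &heights, int width, int height, float talus)
{
    std::vector<float> out = heights;
    for (int y = 1; y < height - 1; ++y)
    {
        for (int x = 1; x < width - 1; ++x)
        {
            float here = heights[y * width + x];
            float diff[8];
            int   nbrX[8], nbrY[8];
            int   count = 0;
            float dTotal = 0.0f, dMax = 0.0f;
            // look at the eight neighbours and remember the ones that are lower
            for (int j = -1; j <= 1; ++j)
            {
                for (int i = -1; i <= 1; ++i)
                {
                    if (i == 0 && j == 0) continue;
                    float d = here - heights[(y + j) * width + (x + i)];
                    if (d > talus) // only erode onto sufficiently lower neighbours
                    {
                        diff[count] = d;
                        nbrX[count] = x + i;
                        nbrY[count] = y + j;
                        dTotal += d;
                        dMax = std::max(dMax, d);
                        ++count;
                    }
                }
            }
            if (count == 0) continue;
            // move at most half of the largest difference to avoid oscillation
            float dS = 0.5f * dMax;
            for (int k = 0; k < count; ++k)
            {
                float move = dS * (diff[k] / dTotal); // share proportional to the drop
                out[y * width + x]             -= move;
                out[nbrY[k] * width + nbrX[k]] += move;
            }
        }
    }
    heights = out;
}

Run a number of these passes and the steep slopes gradually relax, which is the long term weathering effect we are after.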

Anyway, enough of the boring stuff, here is a pretty video of it all implemented!

Terrain Generation Collaborative Research Project – Introduction & First update

Welcome to the first update on my collaborative research project for my third year of university. The project is made up of a two man team, myself and Toby Gilbert. Our goal is to research new techniques for procedurally generating terrain on the macro and meso levels while maintaining relatively real time rendering of the geometry created. The macro side of generation is the large scale geometry such as mountains, hills and caverns; for this we want to create terrain that is as physically accurate as possible to enhance realism. The meso side is the smaller detail such as trees, boulders and grass, also a very demanding area as we need to draw large volumes of shrubbery while maintaining performance. To speed up the progress of our research we have split the two areas of generation between ourselves: Toby is in charge of the meso side of the generation and I shall be looking at the macro.

Initial Research

Terrain generation is a field that has been vastly researched in computer graphics, as it can be used in anything from computer games to movies, so there is a large amount of research that we can build from. Very basic techniques include simply using height maps: a previously generated texture in which every pixel value represents the height of the surface at the location of the pixel. So the height of the surface at (x,y) = f(x,y), with the function f returning the colour value of pixel (x,y) in a texture. This makes for very easy procedural generation by using random noise functions such as Perlin noise to generate the initial texture to be sampled. This method is very quick to implement, and complexity can be added by developing more advanced methods to generate your texture. You will find a very detailed exploration of this technique in Realtime Procedural Terrain Generation by Jacob Olsen, which uses a combination of Voronoi diagrams and noise generated by mid-point displacement to create fractal terrain. It also explores simulating erosion techniques such as thermal and hydraulic erosion to improve the physical accuracy of the terrain. The limitation of this method is that the terrain will never be able to contain features such as caves or arches in the surface.

To achieve this we must look into generating volumetric data, which is explored in Arches: a Framework for Modeling Complex Terrains by A. Peytavie, E. Galin, J. Grosjean and S. Merillou. In this paper they generate a three dimensional data set consisting of different materials such as bedrock and sand, then simulate how these materials interact with each other and settle under gravity. They finally generate geometry from the volumetric data using a technique known as marching cubes. This is an algorithm in which you sample a three dimensional field of voxels and determine what to draw based on how the voxels intersect with the surface function. For example, in the image below, if one corner of our voxel lies inside the shape and the other seven lie outside, then our surface must lie in between, and therefore we draw a triangle between those points.
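To make the height map idea concrete, here is a rough sketch (all names are illustrative) of a height map as a function f(x,y) and the inside/outside test that marching cubes evaluates at every voxel corner:

// rough sketch: a height map as a function f(x,y), plus the inside/outside
// test marching cubes evaluates at each voxel corner (names are illustrative)
#include <vector>

struct HeightMap
{
    int width, height;
    std::vector<float> values; // e.g. filled from Perlin noise or a texture

    float f(int x, int y) const { return values[y * width + x]; }
};

// a corner at (x, y, z) is "inside" the terrain if it lies below the surface height
bool insideTerrain(const HeightMap &hm, int x, int y, float z)
{
    return z < hm.f(x, y);
}

// marching cubes then looks at the 8 corners of each voxel: if some corners are
// inside and some are outside, the surface passes through that voxel and we
// emit triangles for it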

[Image: marching cubes illustration]

Finally we look into the meso side of generation. The challenge here is that we need a lot of geometry to create a high amount of detail, which in turn increases the compute power we need to render our terrain in real time. You can tackle this by instancing geometry, which means storing one set of geometry in memory and just drawing it many times. Alternatively, the paper Real-time Realistic Rendering and Lighting of Forests by Eric Bruneton and Fabrice Neyret solves this by rendering these objects as a selection of textures capturing different perspectives of an object, and blending between them as you move around for a smooth transition between textures. This means we can draw a large number of objects very cheaply, as textures are extremely well optimised on the GPU.

Current Progress

The first step in this project was to implement the generation of the terrain through marching cubes. To achieve this I have created a modified version of the source code written by Paul Bourke here, such that we can generate terrain from height maps. Our first generation program simply uses Perlin noise to generate a height map.

[Image: terrain generated from a Perlin noise height map]

My next step was to make our terrain look a little more natural by rendering it in a slightly more creative manner. For this I have used the technique in A rule-based approach to 3D terrain generation via texture splatting by Jonathan Ferraris and Christos Gatzidis. This implementation shades the geometry based on the heights and the normals of its points. For example, lower points are mud and grass shaded with brown and green, and as the height increases they become rock and snow shaded with grey and white. To improve this further we use the normals to identify sheer cliffs and shade them as rock instead of grass or snow. We can do this by calculating an angle from the normal, inverting its y component and multiplying it by 90. We then set a user defined threshold so that faces above a certain angle are shaded as cliffs. In our program I have implemented two versions of this, one with just block colours and one that uses pre-created textures of our different types of terrain.
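As a rough sketch of that rule (the threshold values here are made up; the real ones are user tweakable):

// rough sketch of the height/slope colour rule (threshold values are made up)
struct Vec3 { float x, y, z; };

Vec3 terrainColour(float height, Vec3 normal, float maxHeight)
{
    // slope in degrees, approximated from the y component of the (unit) normal
    float slope = (1.0f - normal.y) * 90.0f;

    if (slope > 40.0f)  return Vec3{0.5f, 0.5f, 0.5f}; // sheer cliff -> rock
    float t = height / maxHeight;
    if (t < 0.3f)       return Vec3{0.4f, 0.3f, 0.2f}; // low -> mud
    if (t < 0.6f)       return Vec3{0.2f, 0.6f, 0.2f}; // mid -> grass
    if (t < 0.85f)      return Vec3{0.5f, 0.5f, 0.5f}; // high -> rock
    return Vec3{1.0f, 1.0f, 1.0f};                     // peak -> snow
}

The textured version works the same way, but looks the colours up from the pre-created terrain textures instead of returning flat colours.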

[Images: terrain shaded with block colours, and with pre-created textures]

A brief introduction to ray and path tracing

Rendering is the process of generating an image from a 2D or 3D model, and it is possibly the most important component in any video game or piece of CGI. There are a variety of techniques to choose from, the most common of which at the moment is rasterization, which is what you will find in all modern day games. In this technique you take the vertices and normals of a model and interpolate them across pixel space to create an image. For example, if you have two points A and B and want to draw a line, you interpolate from A to B and fill in all the pixels along the way. This technique is decades old and very highly optimized, but as time moves on visual effects demand higher quality images, which requires more complicated rendering techniques. This is where ray and path tracing come in. You will find them very common in CGI as they create very high quality images, but they are in no way fast enough for games (yet!). In this post I will try to give you a basic idea of what they are and how you would implement them in computer graphics.

In layman's terms, ray tracing is an attempt to simulate photons of light through mathematical formulae so that we can create images on the screen. In reality, billions of photons come from a light source and bounce off objects in many different directions, and the photons angled correctly hit our eyes, enabling us to see. This is exactly what we are trying to simulate with ray tracing, but with some cheats so that our computers can handle it.

[Image: light bouncing from a source to the eye]

In our scene we will have a light source, some objects and finally a camera, which represents our eye. In reality billions of photons are emitted from our light source in countless directions, and the small percentage of rays that land in our camera/eye would create our image. Sadly we can't simulate this in computing, or at least if we did it would take years due to the vast quantity of rays we would have to calculate, most of which would never even land in our camera. To overcome this we use a method very imaginatively named backward tracing. In this method we trace rays backwards from our camera to an object and then to the light source. This saves us calculating all the billions of unnecessary rays and keeps just the ones that create our image.

So at our current state we have one ray hitting something in our scene, creating a small dot of our image. Now, to create our full image, all we have to do is send more rays. Imagine, if you will, that you are painting a picture but can only use dots; if you paint enough dots you will eventually be able to create a full image. To convert this into rendering terms, we effectively need a ray for every pixel we are trying to draw. So imagine we have a plane in front of our camera. We divide this plane into a grid and fire a ray from our camera through one of the cells (our pixels) of the grid. We calculate whether or not it intersects with something in our scene, and if it does we use the colour of that object for that pixel. At a very basic level this is how our ray tracer works.

[Image: ray tracing diagram]
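A minimal sketch of that backward tracing loop, assuming a toy scene of spheres and ignoring lighting entirely (every name here is illustrative): one ray per pixel, and the pixel takes the colour of the first sphere it hits.

// rough sketch of the backward tracing loop: one ray per pixel, colour the
// pixel with whatever it hits (toy scene, no shading, no closest-hit sorting)
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
Vec3  sub(Vec3 a, Vec3 b) { return Vec3{a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec3 centre; float radius; Vec3 colour; };

// does a ray from 'origin' along the unit direction 'dir' hit the sphere?
bool hitSphere(const Sphere &s, Vec3 origin, Vec3 dir)
{
    Vec3 oc = sub(origin, s.centre);
    float b = 2.0f * dot(oc, dir);
    float c = dot(oc, oc) - s.radius * s.radius;
    return b * b - 4.0f * c >= 0.0f; // discriminant of the quadratic
}

std::vector<Vec3> render(const std::vector<Sphere> &scene, int width, int height)
{
    std::vector<Vec3> image(width * height, Vec3{0, 0, 0});
    Vec3 camera{0, 0, 0};
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            // fire a ray through cell (x,y) of an image plane at z = -1
            float u = (x + 0.5f) / width  * 2.0f - 1.0f;
            float v = (y + 0.5f) / height * 2.0f - 1.0f;
            float len = std::sqrt(u * u + v * v + 1.0f);
            Vec3 dir{u / len, v / len, -1.0f / len};
            for (const Sphere &s : scene)
            {
                // take the first sphere hit (a real tracer would keep the closest)
                if (hitSphere(s, camera, dir)) { image[y * width + x] = s.colour; break; }
            }
        }
    }
    return image;
}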

Path tracing is almost an extension of ray tracing. It is a lot more physically accurate, creating even higher quality images, but again it sacrifices speed of calculation. Instead of a ray hitting an object and then being sent straight to the light source, it continues to bounce around the scene accumulating colour values until it eventually hits a light. Some materials behave differently: some may have a high reflectivity and others a level of refraction or transparency, which means our rays have to behave differently as well to colour them correctly. For example, if we have a shiny red sphere next to a blue sphere, the red sphere will reflect some of the blue from the other sphere. This means our rays must do more "bounces" before reaching our light source, which in turn means more calculations, which equals longer rendering times.
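As a toy illustration of that accumulation (all the numbers are made up): each bounce multiplies the colour carried by the ray by the colour of the surface it hit, so by the time the path reaches the light the red sphere's contribution is already tinted by the blue it bounced off.

// toy illustration of how colour accumulates along a path (numbers made up)
#include <cstdio>

int main()
{
    float colour[3] = {1.0f, 1.0f, 1.0f};  // the ray starts with full energy
    const float bounces[3][3] = {
        {0.9f, 0.2f, 0.2f},                // shiny red sphere
        {0.2f, 0.2f, 0.9f},                // nearby blue sphere
        {1.0f, 1.0f, 0.9f},                // finally, the light
    };
    for (int b = 0; b < 3; ++b)
        for (int c = 0; c < 3; ++c)
            colour[c] *= bounces[b][c];    // each bounce tints what the ray carries
    std::printf("final pixel colour: %.3f %.3f %.3f\n", colour[0], colour[1], colour[2]);
    return 0;
}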

So that's a brief introduction to ray and path tracers. Soon I hope to explain the mathematics used in these techniques in more depth, such as shading formulae and how to calculate the reflections of the rays in the scene.

For more info:

A good explanation of simple ray tracers and how to implement one, with source code

Ray tracers Vs Path Tracers

My student experiences and advice to those interested in getting into VFX/Games

Greetings!

Welcome, and behold my very first blog post ever! Exciting, isn't it. Well, now that you've had a moment to calm down and let those shivers down your spine settle, let me get on to the point of this blog post. I feel that currently in schools and sixth form education there is a lack of information about the VFX/games industry and how to get into it. I remember often talking to the careers teacher in my college, telling them that I wanted to make games, and the advice in response was generally the same: either "Oh, you should probably do IT then" or "I have no idea about that industry". Both pretty useless pieces of advice. I find this saddening, because it's one of the most creative, interesting and fast growing industries around to date. I mean, Grand Theft Auto V generated over $800 million in revenue worldwide... on its FIRST DAY! If that's not worth schools talking about then I don't know what is. It's time that people lose the stereotype that making games is just a dream, because it's more within your grasp than you may think. Please remember though, these are just my personal opinions, so don't take my word as law!

So before I tell you what I think it takes to achieve in this field, let me tell you a little about myself to give you some more context. My name is Declan Russell (that handsome devil in the picture above 😉 ). I'm currently in my third and final year studying BSc Software Development in Animation, Games and Effects at the NCCA, which resides in Bournemouth University. All, or at least most, of the work I have created here is on my portfolio here, so be sure to check that out *shameless self advertising*. In college (sixth form) I studied Maths, Further Maths and Computing A levels. I originally wanted to be an accountant, but after 2 years of having maths for 2/3rds of my week it got a bit stale. I honestly only took the course I'm doing now on a whim! I enjoyed my Computing A level and liked playing computer games. I had no real knowledge of the field at all, but I haven't regretted the decision since!

Now one of the first questions you may have, or at least what I always wanted to know, is: what qualifications at sixth form do I need? Overall I would highly recommend doing maths! I know lots of people don't get on with it, but it's everywhere in visual effects and you will get really far if you have a good understanding of it. I can't stress enough how useful maths is! Other than that it really depends on what you are doing. If you want to be some kind of artist, modeller or animator, you will need some kind of art qualification and a portfolio for most uni courses to consider you. If you're looking into programming or making games, I would consider doing computing. Don't get this confused with IT! Computing is programming and learning about how a PC works; IT is taking many a screenshot showing that you have achieved the incredibly advanced skill of renaming a file or using Word. Computing will give you a basic understanding of how a computer works and even give you some basic coding skills. On a side note, if you're looking into games I personally feel that you should steer clear of games development courses. As much fun as they sound in sixth form, they may give you some basic coding skills but really fall short on the maths side of things, and you will struggle later on.

Do I have to be able to program before I go for a VFX degree? No, universities will teach you the coding you need, but a bit of experience beforehand is only ever a bonus!

What applications do we use to make VFX? The first applications you are likely to encounter in VFX are from the Autodesk suite, the most common, and my favourite, of which is Maya. This is used for modelling, rigging, animating, rendering and so much more. Maya is a good application to start learning and it's free for students! It's got a pretty intuitive interface and there are loads of books on how to use it. A good read to get a lot of the basics is this.

What programming languages do you use? The most common programming language you will come across is C++, and for graphics nowadays you are more likely to learn OpenGL than DirectX, mainly due to its cross compatibility. If you want to learn these, some good books to read are Beginning C++ Through Game Programming, the OpenGL Programming Guide and the OpenGL 4.0 Shading Language Cookbook.

Finally, what unis should I look at? Bournemouth! (I may be biased, but I don't care, everyone should come here!)

Well, that concludes today's blog post, as it's now gone midnight and brain functionality is plummeting! I hope I have been of some help, and feel free to contact me with any further questions you have about this subject 🙂