Compiling OptiX with Qt Creator!

Alas, for my major project I have delved into the realm of path tracing. This is a savage beast to tame on computers, but tools such as NVIDIA's OptiX make life good again! Unfortunately OptiX is still fairly new, so not many people know much about it, and you essentially have to read pages and pages of documentation to understand it. Furthermore, documentation on using OptiX with Qt is essentially non-existent. Luckily, if you are like me and use the Qt IDE (and why wouldn't you!), I'm here to make your first few steps a little easier.

A little bit of background:

So when using the OptiX API, the first thing you will notice is that you have to write "programs": NVIDIA's GPU-specific functions. To use these programs with your OptiX engine they must be in the form of NVIDIA's PTX intermediate assembly language, which is then read in at runtime by your application. To generate this PTX code you will need the NVCC compiler, which comes with the CUDA toolkit. If you have done any research into this before, you will know that to translate into PTX code you pass the -ptx flag to NVCC. So in the simplest case you would call something like this,

nvcc -ptx someOptixProgram.cu

Simply type this into your console and voilà! You have a very basic PTX file. But we're lazy! We don't want to have to do this every time we write or edit a program, so let's get Qt to do it for us.
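(As a quick aside, once you have a .ptx file the host side reads it in at runtime. With the OptiX 3.x C++ wrapper that looks roughly like the sketch below; the file path and program name here are just placeholders for illustration, not something from this project.)

// Rough sketch of loading a generated PTX file with the OptiX C++ host API.
// "ptx/someOptixProgram.cu.ptx" and "someProgram" are placeholder names.
#include <optixu/optixpp_namespace.h>

int main()
{
    optix::Context context = optix::Context::create();
    context->setRayTypeCount(1);
    context->setEntryPointCount(1);

    // Read the PTX produced by nvcc and bind its entry point as our ray generation program
    optix::Program rayGen = context->createProgramFromPTXFile("ptx/someOptixProgram.cu.ptx", "someProgram");
    context->setRayGenerationProgram(0, rayGen);

    context->launch(0, 512, 512);   // launch one ray per pixel of a 512x512 image
    context->destroy();
    return 0;
}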

How to do it:

Essentially we want to add another compiler to our Qt .pro file that compiles any of our OptiX programs before we build the rest of our code. Unlike with CUDA, we don't want this code to be turned into object files and linked into our executable; we just want it translated to PTX and left alone. Be sure to read the comments and edit according to your own needs!

TARGET=Blank_scene
OBJECTS_DIR=obj

# as I want to support 4.8 and 5 this will set a flag for some of the mac stuff
# mainly in the types.h file for the setMacVisual which is native in Qt5
isEqual(QT_MAJOR_VERSION, 5) {
        cache()
        DEFINES +=QT5BUILD
}
UI_HEADERS_DIR=ui
MOC_DIR=moc

CONFIG-=app_bundle
QT+=gui opengl core
# Whatever sources you want in your program
SOURCES += \
    src/main.cpp \


# Whatever headers you want in your program
HEADERS += \
    include/something.h

INCLUDEPATH += ./include /opt/local/include
# Where to put the executable
DESTDIR = ./
CONFIG += console


macx:INCLUDEPATH+=/usr/local/include/
unix:LIBS += -L/usr/local/lib



# OptiX stuff: any OptiX program that we wish to turn into PTX code
CUDA_SOURCES += src/draw_color.cu \
                src/pinhole_camera.cu \
                src/constantbg.cu \
                src/box.cu \
                src/phong.cu

#This will change for you, just set it to wherever you have installed CUDA
# Path to the CUDA toolkit install
macx:CUDA_DIR = /Developer/NVIDIA/CUDA-6.5
linux:CUDA_DIR = /usr/local/cuda-6.5
# Path to the CUDA samples/SDK install
macx:CUDA_SDK = /Developer/NVIDIA/CUDA-6.5/samples
linux:CUDA_SDK = /usr/local/cuda-6.5/samples

# include paths, change this to wherever you have installed OptiX
macx:INCLUDEPATH += /Developer/OptiX/SDK/sutil
macx:INCLUDEPATH += /Developer/OptiX/SDK
linux:INCLUDEPATH += /usr/local/OptiX/SDK/sutil
linux:INCLUDEPATH += /usr/local/OptiX/SDK
INCLUDEPATH += $$CUDA_DIR/include
INCLUDEPATH += $$CUDA_DIR/common/inc/
INCLUDEPATH += $$CUDA_DIR/../shared/inc/
macx:INCLUDEPATH += /Developer/OptiX/include
linux:INCLUDEPATH += /usr/local/OptiX/include
# lib dirs
#QMAKE_LIBDIR += $$CUDA_DIR/lib64
macx:QMAKE_LIBDIR += $$CUDA_DIR/lib
linux:QMAKE_LIBDIR += $$CUDA_DIR/lib64
QMAKE_LIBDIR += $$CUDA_SDK/common/lib
macx:QMAKE_LIBDIR += /Developer/OptiX/lib64
linux:QMAKE_LIBDIR += /usr/local/OptiX/lib64
#Add our cuda and optix libraries
LIBS += -lcudart  -loptix

# nvcc flags (ptxas option verbose is always useful)
# add the PTX flags to compile optix files
NVCCFLAGS = --compiler-options -fno-strict-aliasing -use_fast_math --ptxas-options=-v -ptx

#set our ptx directory so that our ptx files are put somewhere else
PTX_DIR = ptx

# join the includes in a line
CUDA_INC = $$join(INCLUDEPATH,' -I','-I',' ')

# Prepare the extra compiler configuration (taken from the nvidia forum - i'm not an expert in this part)
optix.input = CUDA_SOURCES

#Change our output name to something suitable
optix.output = $$PTX_DIR/${QMAKE_FILE_BASE}.cu.ptx

# Tweak arch according to your GPU's compute capability
# Either run the deviceQuery sample in cuda/samples or look in section 6 here
# http://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/#axzz3OzHV3KTV
# For OptiX you can only have one architecture when using the -ptx flag,
# and you don't want the -c flag since we are not compiling to object files
optix.commands = $$CUDA_DIR/bin/nvcc -m64 -gencode arch=compute_52,code=sm_52 $$NVCCFLAGS $$CUDA_INC $$LIBS  ${QMAKE_FILE_NAME} -o ${QMAKE_FILE_OUT}
#use this line for debug code
#optix.commands = $$CUDA_DIR/bin/nvcc -m64 -g -G -gencode arch=compute_52,code=sm_52 $$NVCCFLAGS $$CUDA_INC $$LIBS  ${QMAKE_FILE_NAME} -o ${QMAKE_FILE_OUT}
#Declare that we want to do this before compiling the C++ code
optix.CONFIG = target_predeps
#now declare that we don't want to link these files with gcc, otherwise it will treat them as object files
optix.CONFIG += no_link
optix.dependency_type = TYPE_C
# Tell Qt that we want to add our OptiX compiler
QMAKE_EXTRA_UNIX_COMPILERS += optix
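For reference, here is roughly what one of the .cu files listed in CUDA_SOURCES looks like. This is a minimal ray generation program along the lines of the SDK's draw_color sample; treat it as an illustrative sketch rather than a drop-in file.

// Minimal OptiX program (sketch, modelled on the SDK's draw_color.cu):
// every launch index just writes a constant colour into the output buffer.
#include <optix.h>
#include <optixu/optixu_math_namespace.h>

using namespace optix;

rtDeclareVariable(uint2, launch_index, rtLaunchIndex, );
rtDeclareVariable(float3, draw_color, , );   // set from the host, e.g. context["draw_color"]->setFloat(...)
rtBuffer<float4, 2> output_buffer;

RT_PROGRAM void draw_solid_color()
{
    output_buffer[launch_index] = make_float4(draw_color, 1.f);
}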

Hurrah! Hopefully you now have a Qt project that compiles all your OptiX programs nicely for you. Now go off into the wild and make whatever fabulous ray-indulgent programs you desire!


A brief introduction to ray and path tracing

Rendering is the process of generating an image from a 2D or 3D model, and it is possibly the most important component of any video game or piece of CGI. There are a variety of techniques to choose from, the most common of which at the moment is rasterization, which is what you will find in all modern games. In this technique you take the vertices and normals of a model and interpolate them across pixel space to create an image. For example, if you have two points A and B and want to draw a line, you interpolate from A to B and fill in all the pixels along the way (see the small sketch below). This technique is decades old and very highly optimised, but as time moves on visual effects demand higher-quality images, which require more complicated rendering techniques. This is where ray and path tracing come in. You will find them very common in CGI as they create very high-quality images, but they are in no way fast enough for games (yet!). In this post I will try to give you a basic idea of what they are and how you would implement them in computer graphics.
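As a toy illustration of that "fill in the pixels between A and B" idea (nothing to do with OptiX), a minimal interpolation loop might look like the sketch below; setPixel is a hypothetical plotting call, assumed to exist elsewhere.

// Toy sketch of rasterizing a line by interpolating between two points.
// setPixel is a hypothetical plotting function, not a real API call.
#include <cmath>

struct Point { float x, y; };
void setPixel(int x, int y);   // assumed: writes one pixel into the framebuffer

void drawLine(Point a, Point b)
{
    // Take enough steps to cover the longer axis, then lerp along both axes
    int steps = (int)std::fmax(std::fabs(b.x - a.x), std::fabs(b.y - a.y));
    for (int i = 0; i <= steps; ++i)
    {
        float t = steps ? (float)i / steps : 0.f;
        float x = a.x + t * (b.x - a.x);
        float y = a.y + t * (b.y - a.y);
        setPixel((int)std::lround(x), (int)std::lround(y));
    }
}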

In layman's terms, ray tracing is an attempt to simulate photons of light through mathematical formulae so that we can create images on the screen. In reality, billions of photons come from a light source and bounce off objects in many different directions, and the photons angled correctly hit our eyes, enabling us to see. This is exactly what we are trying to simulate with ray tracing, but with some cheats so that our computers can handle it. In our scene we have a light source, some objects and finally a camera which represents our eye. In reality billions of photons are emitted from our light source in effectively infinite directions, and only the small percentage of rays that land in our camera/eye would create our image. Sadly we can't simulate this in computing, or at least if we did it would take years, due to the vast quantity of rays we would have to calculate which may never even land in our camera.

To overcome this we use a method very imaginatively named backward tracing. In this method we trace rays backwards from our camera to an object and then to the light source. This saves us calculating all the billions of unnecessary rays and keeps just the ones that create our image. So at our current state we have one ray hitting something in our scene, creating a small dot of our image. Now, to create our full image, all we have to do is send more rays. Imagine you are painting a picture but can only use dots: if you paint enough dots you will eventually be able to create a full image. To convert this into rendering terms, we effectively need a ray for every pixel we are trying to draw. So imagine we have a plane in front of our camera. We divide this plane into a grid and fire a ray from our camera through one of the cells (our pixels) of the grid. We calculate whether or not it intersects with something in our scene, and if it does we use the colour of that object for that pixel. At a very basic level, this is how our ray tracer works.
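Here is a rough device-side sketch of that "one ray per pixel" idea, loosely modelled on the SDK's pinhole_camera.cu (one of the files in CUDA_SOURCES above). The variable names (eye, U, V, W, top_object) follow the SDK samples, and the closest-hit and miss programs that actually fill in prd.result are omitted, so treat it as an illustration only.

// Sketch of an OptiX ray generation ("pinhole camera") program:
// one launch index == one pixel == one primary ray.
#include <optix.h>
#include <optixu/optixu_math_namespace.h>

using namespace optix;

struct PerRayData_radiance { float3 result; };

rtDeclareVariable(float3, eye, , );          // camera position
rtDeclareVariable(float3, U, , );            // camera basis vectors
rtDeclareVariable(float3, V, , );
rtDeclareVariable(float3, W, , );
rtDeclareVariable(rtObject, top_object, , ); // top of the scene graph
rtDeclareVariable(uint2, launch_index, rtLaunchIndex, );
rtDeclareVariable(uint2, launch_dim, rtLaunchDim, );
rtBuffer<float4, 2> output_buffer;

RT_PROGRAM void pinhole_camera()
{
    // Map this pixel into [-1,1]^2 on the image plane and build a ray through it
    float2 d = make_float2(launch_index) / make_float2(launch_dim) * 2.f - 1.f;
    float3 direction = normalize(d.x * U + d.y * V + W);
    Ray ray = make_Ray(eye, direction, 0, 0.01f, RT_DEFAULT_MAX);

    // Fire the ray into the scene; the hit/miss programs write the colour into prd
    PerRayData_radiance prd;
    prd.result = make_float3(0.f);
    rtTrace(top_object, ray, prd);

    output_buffer[launch_index] = make_float4(prd.result, 1.f);
}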

Path tracing is almost an extension of ray tracing. It is a lot more physically accurate, creating even higher-quality images, but again it sacrifices speed of calculation. Instead of a ray hitting an object and then being sent straight to the light source, it continues to bounce around the scene accumulating colour values until it eventually hits a light. Materials behave differently: some may have high reflectivity and others a level of refraction or transparency, which means our rays will have to behave differently as well to colour them correctly. For example, if we have a shiny red sphere next to a blue sphere, the red sphere will reflect some of the blue from the other sphere. This means our rays must do more "bounces" before reaching our light source, which in turn means more calculations, which equals longer rendering times. The loop below sketches that bounce-and-accumulate idea.
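A very rough sketch of that idea, with the scene query and the random bounce direction left as hypothetical helpers (intersectScene and randomDirection are not OptiX calls, just stand-ins for whatever your closest-hit logic and sampling code provide):

// Conceptual path tracing loop: keep bouncing, multiplying in each surface's
// colour, until the ray hits a light or escapes. intersectScene() and
// randomDirection() are hypothetical helpers, not part of the OptiX API.
#include <optixu/optixu_math_namespace.h>
using namespace optix;

struct HitInfo { bool valid, isLight; float3 position, normal, colour, emission; };
HitInfo intersectScene(const float3& origin, const float3& dir);   // assumed helper
float3  randomDirection(const float3& normal, unsigned int& seed); // assumed helper

float3 tracePath(float3 origin, float3 dir, unsigned int& seed)
{
    float3 throughput = make_float3(1.f);              // colour picked up so far
    for (int bounce = 0; bounce < 8; ++bounce)
    {
        HitInfo h = intersectScene(origin, dir);
        if (!h.valid)  return make_float3(0.f);        // ray escaped the scene
        if (h.isLight) return throughput * h.emission; // reached a light: done
        throughput *= h.colour;                        // pick up this surface's colour
        origin = h.position;
        dir    = randomDirection(h.normal, seed);      // bounce in a new direction
    }
    return make_float3(0.f);                           // gave up after too many bounces
}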

So that's a brief introduction to ray and path tracers. Soon I hope to explain more in depth the mathematics used in these techniques, such as shading formulae and how to calculate the reflections of the rays in the scene.

For more info:

A good explanation about simple ray tracers and how to implement. With source code

Ray tracers Vs Path Tracers