Updating your BxDF Plugins to RenderMan 21

With the exciting non-commercial release of the all new RenderMan 21, I am sure you are all as keen as I am to update all the RenderMan BxDF plugins you have made! Well, if this is true then you should be aware of a couple of changes in the way plugins are handled with the new release.

1. Librix and Examples

So as of RenderMan 21, librix and all of Pixar's helpful example files no longer download to your RMANTREE location by default. Instead they download to your system's default download folder as two separate tar files. I simply extracted these back into my RMANTREE location, but ultimately where you put them is of course up to you.

2. RixRNG.h

RixRNG.h is no longer included by RixBxdf.h, and therefore you must include it yourself within your BxDF plugin as demonstrated below. I have added a RENDERMAN21 define to my build so I can easily revert if I need to build for previous versions of RenderMan again.

#include "RixBxdf.h"
#ifdef RENDERMAN21
    #include "RixRNG.h"
#endif

3. RixBXEvaluateDomain

The RixBXEvaluateDomain enumerators have been renamed slightly:

k_RixBXFront is now k_RixBXReflect

k_RixBXBack is now k_RixBXTransmit
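
If you want a single code path for both versions, one option is to alias the new names for pre-21 builds. A minimal sketch, reusing the RENDERMAN21 define from above:

#ifndef RENDERMAN21
    // pre-21 builds: map the RenderMan 21 names onto the old enumerators
    #define k_RixBXReflect  k_RixBXFront
    #define k_RixBXTransmit k_RixBXBack
#endif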

4. Function Parameter Changes

As of this version the functions GenerateSample, EvaluateSample and EvaluateSamplesAtIndex have gained some extra parameters.

GenerateSample:

#ifdef RENDERMAN21
    virtual void GenerateSample(RixBXTransportTrait transportTrait,
                                RixBXLobeTraits const *lobesWanted,
                                RixRNG *rng,
                                RixBXLobeSampled *lobeSampled,
                                RtVector3   *Ln,
                                RixBXLobeWeights &W,
                                RtFloat *FPdf, RtFloat *RPdf,
                                RtColorRGB* compTrans)
#else
    virtual void GenerateSample(RixBXTransportTrait transportTrait,
                                RixBXLobeTraits const *lobesWanted,
                                RixRNG *rng,
                                RixBXLobeSampled *lobeSampled,
                                RtVector3   *Ln,
                                RixBXLobeWeights &W,
                                RtFloat *FPdf, RtFloat *RPdf)
#endif

EvaluateSample:

#ifdef RENDERMAN21
    virtual void EvaluateSample(RixBXTransportTrait transportTrait,
                                RixBXLobeTraits const *lobesWanted,
                                RixRNG *rng,
                                RixBXLobeTraits *lobesEvaluated,
                                RtVector3 const *Ln, RixBXLobeWeights &W,
                                RtFloat *FPdf, RtFloat *RPdf)
#else
    virtual void EvaluateSample(RixBXTransportTrait transportTrait,
                                RixBXLobeTraits const *lobesWanted,
                                RixBXLobeTraits *lobesEvaluated,
                                RtVector3 const *Ln, RixBXLobeWeights &W,
                                RtFloat *FPdf, RtFloat *RPdf)
#endif

EvaluateSamplesAtIndex:

#ifdef RENDERMAN21
    virtual void EvaluateSamplesAtIndex(RixBXTransportTrait transportTrait,
                                        RixBXLobeTraits const &lobesWanted,
                                        RixRNG *rng,
                                        RtInt index, RtInt nsamps,
                                        RixBXLobeTraits *lobesEvaluated,
                                        RtVector3 const *Ln,
                                        RixBXLobeWeights &W,
                                        RtFloat *FPdf, RtFloat *RPdf)
#else
    virtual void EvaluateSamplesAtIndex(RixBXTransportTrait transportTrait,
                                        RixBXLobeTraits const &lobesWanted,
                                        RtInt index, RtInt nsamps,
                                        RixBXLobeTraits *lobesEvaluated,
                                        RtVector3 const *Ln,
                                        RixBXLobeWeights &W,
                                        RtFloat *FPdf, RtFloat *RPdf)
#endif

5. Install location

As of RenderMan 21 you no longer install your BxDF plugins in %RMANTREE%/lib/RIS/bxdf/, as the RIS directory no longer exists. This is presumably because RenderMan 21 has removed REYES entirely in favour of the RIS framework, so the distinction in the directory layout is no longer necessary.

You will now install your BxDF .args files in %RMANTREE%/lib/plugins/Args and your .dll files in %RMANTREE%/lib/plugins.

 

Hope this post finds itself helpful to someone. You can see a demonstration of all these changes in my previously made BxDF plugins here: https://declanrussell.com/portfolio/microfacet-models-for-refraction-through-rough-surfaces-in-renderman/


Compiling CUDA dynamic parallelism with Qt Creator

So recently I have been using dynamic parallelism in my CUDA fluid simulation. Qt Creator is my IDE of choice, so ideally I needed it to compile with that. As there is not a lot of documentation on this, I figured it would be a crime not to share it with the world. So here is the .pro file! Enjoy!

TARGET=FluidSim
OBJECTS_DIR=obj

# as I want to support 4.8 and 5 this will set a flag for some of the mac stuff
# mainly in the types.h file for the setMacVisual which is native in Qt5
isEqual(QT_MAJOR_VERSION, 5) {
 cache()
 DEFINES +=QT5BUILD
}
UI_HEADERS_DIR=ui
MOC_DIR=moc

CONFIG-=app_bundle
QT+=gui opengl core
SOURCES += #any srcs you have


HEADERS += #any headers you have

FORMS += #whatever you want

OTHER_FILES += #whaterver you want


INCLUDEPATH +=./include /opt/local/include $$(HOME)/NGL/include/
LIBS += -L/opt/local/lib -lGLEW
DESTDIR=./

CONFIG += console
CONFIG -= app_bundle

#This is some stuff for NGL the ncca graphics lib remove if you dont want it
#----------------------------------------------------------------
#------------------------ NGL setup -----------------------------
#----------------------------------------------------------------
# use this to suppress some warning from boost
QMAKE_CXXFLAGS_WARN_ON += "-Wno-unused-parameter"
QMAKE_CXXFLAGS+= -msse -msse2 -msse3
macx:QMAKE_CXXFLAGS+= -arch x86_64
macx:INCLUDEPATH+=/usr/local/include/
# define the _DEBUG flag for the graphics lib
DEFINES +=NGL_DEBUG

unix:LIBS += -L/usr/local/lib
# add the ngl lib
unix:LIBS += -L/$(HOME)/NGL/lib -lNGL

# now if we are under unix and not on a Mac (i.e. linux) define GLEW
linux-*{
 linux-*:QMAKE_CXXFLAGS += -march=native
 linux-*:DEFINES+=GL42
 DEFINES += LINUX
}
DEPENDPATH+=include
# if we are on a mac define DARWIN
macx:DEFINES += DARWIN

#----------------------------------------------------------------
#-------------------------Cuda setup-----------------------------
#----------------------------------------------------------------

#set our cuda sources
CUDA_SOURCES += cudaSrc/*.cu

# Path to cuda SDK install
macx:CUDA_DIR = /Developer/NVIDIA/CUDA-6.5
linux:CUDA_DIR = /usr/local/cuda-6.5
# Path to cuda toolkit install
macx:CUDA_SDK = /Developer/NVIDIA/CUDA-6.5/samples
linux:CUDA_SDK = /usr/local/cuda-6.5/samples

#Cuda include paths
INCLUDEPATH += $$CUDA_DIR/include
INCLUDEPATH += $$CUDA_DIR/common/inc/
INCLUDEPATH += $$CUDA_DIR/../shared/inc/


#cuda libs
macx:QMAKE_LIBDIR += $$CUDA_DIR/lib
linux:QMAKE_LIBDIR += $$CUDA_DIR/lib64
QMAKE_LIBDIR += $$CUDA_SDK/common/lib
#note for dynamic parallelism you need libcudadevrt
LIBS += -lcudart -lcudadevrt

# join the includes in a line
CUDA_INC = $$join(INCLUDEPATH,' -I','-I',' ')

# nvcc flags (ptxas option verbose is always useful)
NVCCFLAGS = --compiler-options -fno-strict-aliasing -use_fast_math --ptxas-options=-v


#prepare the intermediate cuda compiler
cudaIntr.input = CUDA_SOURCES
cudaIntr.output = ${OBJECTS_DIR}${QMAKE_FILE_BASE}.o

## Tweak arch according to your hw's compute capability
cudaIntr.commands = $$CUDA_DIR/bin/nvcc -m64 -g -G -gencode arch=compute_52,code=sm_52 -dc $$NVCCFLAGS $$CUDA_INC $$LIBS ${QMAKE_FILE_NAME} -o ${QMAKE_FILE_OUT}

#Set our variable out. These obj files need to be used to create the link obj file
#and used in our final gcc compilation
cudaIntr.variable_out = CUDA_OBJ
cudaIntr.variable_out += OBJECTS
cudaIntr.clean = cudaIntrObj/*.o

QMAKE_EXTRA_UNIX_COMPILERS += cudaIntr


# Prepare the linking compiler step
cuda.input = CUDA_OBJ
cuda.output = ${QMAKE_FILE_BASE}_link.o

# Tweak arch according to your hw's compute capability
cuda.commands = $$CUDA_DIR/bin/nvcc -m64 -g -G -gencode arch=compute_52,code=sm_52 -dlink ${QMAKE_FILE_NAME} -o ${QMAKE_FILE_OUT}
cuda.dependency_type = TYPE_C
cuda.depend_command = $$CUDA_DIR/bin/nvcc -g -G -M $$CUDA_INC $$NVCCFLAGS ${QMAKE_FILE_NAME}
# Tell Qt that we want add more stuff to the Makefile
QMAKE_EXTRA_UNIX_COMPILERS += cuda
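
To sanity check the two-step compile and device link, it helps to have a kernel that actually uses dynamic parallelism. Here is a minimal sketch (a hypothetical cudaSrc/dpTest.cu, not part of my fluid sim) you could drop into CUDA_SOURCES; if the parent kernel can launch the child, your -dc/-dlink setup and libcudadevrt are working:

#include <cstdio>

// child kernel, launched from the GPU rather than the host
__global__ void childKernel(int parent)
{
    printf("child thread %d launched by parent thread %d\n",
           threadIdx.x, parent);
}

// parent kernel: a kernel launch inside a kernel is exactly what
// requires -dc, the -dlink step and libcudadevrt
__global__ void parentKernel()
{
    childKernel<<<1, 4>>>(threadIdx.x);
}

void runDynamicParallelismTest()
{
    parentKernel<<<1, 2>>>();
    cudaDeviceSynchronize();
}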

Compiling OptiX with Qt Creator!

Alas, for my major project I have delved into the realm of path tracing. This is a savage beast to tame on computers, but tools such as NVIDIA's OptiX make life good again! Unfortunately OptiX is still fairly new, so not too many people know much about it, and essentially you have to read pages and pages of documentation to understand it. Furthermore, documentation on using OptiX with Qt is essentially non-existent. Luckily, if you are like me and use the Qt IDE (and why wouldn't you!), I'm here to make your first few steps a little easier.

A little bit of background:

So when using the OptiX API, the first thing you will notice is that you have to write "programs", NVIDIA's GPU-specific functions. To use these programs with your OptiX engine they must be in the form of NVIDIA's PTX intermediate assembly language, which is then read in at runtime by your program. To generate this PTX code you will need the NVCC compiler, which you get when you install CUDA. If you have done any research into this before, you will know that to translate into PTX code you need to pass the -ptx flag to NVCC. So in the simplest case you would call something like this,

nvcc -ptx someOptixProgram.cu

Simply type this into your console and voilà! You have a very basic PTX file. But we're lazy! We don't want to have to do this every time we write or edit a program, so let's get Qt to do it for us.

How to do it:

Essentially we want to add another compiler to our Qt .pro file that compiles any of our OptiX programs before we build the rest of our code. Unlike with CUDA, we don't want this code to be turned into object files and linked into our executable; we just want it translated to PTX and left alone. Be sure to read the comments and edit according to your own needs!

TARGET=Blank_scene
OBJECTS_DIR=obj

# as I want to support 4.8 and 5 this will set a flag for some of the mac stuff
# mainly in the types.h file for the setMacVisual which is native in Qt5
isEqual(QT_MAJOR_VERSION, 5) {
        cache()
        DEFINES +=QT5BUILD
}
UI_HEADERS_DIR=ui
MOC_DIR=moc

CONFIG-=app_bundle
QT+=gui opengl core
# Whatever sources you want in your program
SOURCES += \
    src/main.cpp \


# Whatever headers you want in your program
HEADERS += \
    include/something.h

INCLUDEPATH +=./include /opt/local/include
#Whatever libs you want in your program
DESTDIR=./
CONFIG += console
CONFIG -= app_bundle


macx:INCLUDEPATH+=/usr/local/include/
unix:LIBS += -L/usr/local/lib



#Optix Stuff, so any optix program that we wish to turn into PTX code
CUDA_SOURCES += src/draw_color.cu \
                src/pinhole_camera.cu \
                src/constantbg.cu \
                src/box.cu \
                src/phong.cu

#This will change for you, just set it to wherever you have installed cuda
# Path to cuda SDK install
macx:CUDA_DIR = /Developer/NVIDIA/CUDA-6.5
linux:CUDA_DIR = /usr/local/cuda-6.5
# Path to cuda toolkit install
macx:CUDA_SDK = /Developer/NVIDIA/CUDA-6.5/samples
linux:CUDA_SDK = /usr/local/cuda-6.5/samples

# include paths, change this to wherever you have installed OptiX
macx:INCLUDEPATH += /Developer/OptiX/SDK/sutil
macx:INCLUDEPATH += /Developer/OptiX/SDK
linux:INCLUDEPATH += /usr/local/OptiX/SDK/sutil
linux:INCLUDEPATH += /usr/local/OptiX/SDK
INCLUDEPATH += $$CUDA_DIR/include
INCLUDEPATH += $$CUDA_DIR/common/inc/
INCLUDEPATH += $$CUDA_DIR/../shared/inc/
macx:INCLUDEPATH += /Developer/OptiX/include
linux:INCLUDEPATH += /usr/local/OptiX/include
# lib dirs
#QMAKE_LIBDIR += $$CUDA_DIR/lib64
macx:QMAKE_LIBDIR += $$CUDA_DIR/lib
linux:QMAKE_LIBDIR += $$CUDA_DIR/lib64
QMAKE_LIBDIR += $$CUDA_SDK/common/lib
macx:QMAKE_LIBDIR += /Developer/OptiX/lib64
linux:QMAKE_LIBDIR += /usr/local/OptiX/lib64
#Add our cuda and optix libraries
LIBS += -lcudart  -loptix

# nvcc flags (ptxas option verbose is always useful)
# add the PTX flags to compile optix files
NVCCFLAGS = --compiler-options -fno-strict-aliasing -use_fast_math --ptxas-options=-v -ptx

#set our ptx directory so that our ptx files are put somewhere else
PTX_DIR = ptx

# join the includes in a line
CUDA_INC = $$join(INCLUDEPATH,' -I','-I',' ')

# Prepare the extra compiler configuration (taken from the nvidia forum - i'm not an expert in this part)
optix.input = CUDA_SOURCES

#Change our output name to something suitable
optix.output = $$PTX_DIR/${QMAKE_FILE_BASE}.cu.ptx

# Tweak arch according to your GPU's compute capability
# Either run the device query in cuda/samples or look in section 6 here:
# http://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/#axzz3OzHV3KTV
# For OptiX you can only have one architecture when using the -ptx flag,
# and you don't want the -c flag for compiling
optix.commands = $$CUDA_DIR/bin/nvcc -m64 -gencode arch=compute_52,code=sm_52 $$NVCCFLAGS $$CUDA_INC $$LIBS  ${QMAKE_FILE_NAME} -o ${QMAKE_FILE_OUT}
#use this line for debug code
#optix.commands = $$CUDA_DIR/bin/nvcc -m64 -g -G -gencode arch=compute_52,code=sm_52 $$NVCCFLAGS $$CUDA_INC $$LIBS  ${QMAKE_FILE_NAME} -o ${QMAKE_FILE_OUT}
#Declare that we want to do this before compiling the C++ code
optix.CONFIG = target_predeps
#now declare that we don't want to link these files with gcc, otherwise it will treat them as object files
optix.CONFIG += no_link
optix.dependency_type = TYPE_C
# Tell Qt that we want add our optix compiler
QMAKE_EXTRA_UNIX_COMPILERS += optix
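
For completeness, here is roughly how the generated PTX gets picked up at runtime. A minimal sketch using the OptiX C++ wrapper; the file name comes from the CUDA_SOURCES list above, and "pinhole_camera" is assumed to be the name of an RT_PROGRAM inside that file:

#include <optixu/optixpp_namespace.h>

void createRayGen()
{
    // create a context and load a ray generation program from the
    // PTX that qmake generated into our PTX_DIR
    optix::Context context = optix::Context::create();
    context->setEntryPointCount(1);
    optix::Program rayGen = context->createProgramFromPTXFile(
        "ptx/pinhole_camera.cu.ptx", "pinhole_camera");
    context->setRayGenerationProgram(0, rayGen);
}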

Hurrah! Hopefully you now have a Qt project that compiles all your OptiX programs nicely for you. Now go off into the wild and make whatever fabulous ray-indulgent programs you desire!

Master Class: Laplacian Mesh Editing – Update 1

So to start, here is a bit of back story. Our university (Bournemouth) has partnered up with MPC to give us some R&D assignments for our master class, which they will judge and give feedback on at the end of the project. We had the choice between creating some kind of terrain generation or animation application or, if you're truly hardcore, implementing one of a selection of mesh deforming papers from SIGGRAPH. I (being the hardcore of hardcore) decided that my brain clearly isn't working hard enough at the moment and have chosen the latter. So here is a quick introduction to Laplacian mesh editing and what I understand at the moment. Also there's a demo!!! *screams*

So Laplacian mesh editing is a form of deformation in which the user paints weights onto a mesh of their choosing, and from the manipulation of these weights can magically deform the object before your eyes. But how does one do that, you might say? Good question! Well, firstly we have to represent our geometry in a different way. You're probably used to using absolute coordinates to represent points in a mesh, i.e. Vec3(1,4,5). Well, for Laplacian mesh editing we represent each point in relation to its neighbouring points. This is called the implicit representation and goes a little like this…

[Fig 1: the implicit representation of a point in terms of its neighbours]

So here we have a square made from 4 points (mind blowing!). To find the location of V0 from the other points we can use half the sum of V1 and V3 and then add an unknown vector δ0 (delta). We can calculate this unknown by simply rearranging the equation, just like in Fig 1. Now let's get a bit more crazy and represent all these points in implicit matrix form.
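
In symbols, reconstructing what Fig 1 showed from the description above:

v_0 = \frac{v_1 + v_3}{2} + \delta_0 \quad\Rightarrow\quad \delta_0 = v_0 - \frac{v_1 + v_3}{2}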

[Fig 2: the Laplace matrix L and the delta vector for the square]
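
Written out for the square, assuming each vertex's two neighbours are the adjacent corners, the matrix form is the following (a reconstruction of what Fig 2 showed, so treat the exact layout as my interpretation):

\begin{pmatrix}
1 & -\tfrac{1}{2} & 0 & -\tfrac{1}{2} \\
-\tfrac{1}{2} & 1 & -\tfrac{1}{2} & 0 \\
0 & -\tfrac{1}{2} & 1 & -\tfrac{1}{2} \\
-\tfrac{1}{2} & 0 & -\tfrac{1}{2} & 1
\end{pmatrix}
\begin{pmatrix} v_0 \\ v_1 \\ v_2 \\ v_3 \end{pmatrix}
=
\begin{pmatrix} \delta_0 \\ \delta_1 \\ \delta_2 \\ \delta_3 \end{pmatrix}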

From this form you are now ready to create some deformation! You do this by adding these things called "handles" or "anchors" to our matrix. Say we want V0 and V2 to be our handles; this is how it would look.

[Fig 3: the system Ax = b with handle rows appended for V0 and V2]
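
Appending one constraint row per handle makes the system over-determined. Again a reconstruction from the description, with v_0' and v_2' being the positions we want the handles to take:

\begin{pmatrix}
1 & -\tfrac{1}{2} & 0 & -\tfrac{1}{2} \\
-\tfrac{1}{2} & 1 & -\tfrac{1}{2} & 0 \\
0 & -\tfrac{1}{2} & 1 & -\tfrac{1}{2} \\
-\tfrac{1}{2} & 0 & -\tfrac{1}{2} & 1 \\
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0
\end{pmatrix}
x
=
\begin{pmatrix} \delta_0 \\ \delta_1 \\ \delta_2 \\ \delta_3 \\ v_0' \\ v_2' \end{pmatrix}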

Now, if we change the positions of V0 and V2 in our delta matrix (denoted b in Fig 3) and then solve for x, this will give us different points because deformation has occurred (Huzzah!). But you may be thinking to yourself, "How in the world do I solve this? These are not square matrices!". Another great question! Well, here is how…

[Fig 4: solving the over-determined system in the least-squares sense]
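
The standard route, and what I believe Fig 4 showed, is the least-squares solution via the normal equations, which turns the rectangular system into a square, solvable one:

x = \left(A^{T} A\right)^{-1} A^{T} b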

Now we have the basis of our Laplacian mesh editing! You can deform simple shapes to your heart's desire! But this is by no means complete. The way we create our matrix A, our "Laplace matrix", currently only works for very uniform geometry. For any complicated shapes we will have to calculate this another way. Sadly I don't know that yet 😦 but keep updated with my posts and one day… just maybe… I will!

Anyway here’s a pretty demo!

 

References:
O. Sorkine, D. Cohen-Or, Y. Lipman, M. Alexa, C. Rössl and H.-P. Seidel (2004). Laplacian Surface Editing. Eurographics Symposium on Geometry Processing. Eurographics.

Terrain Generation Collaborative Research Project – Update 2: Thermal Erosion & Data Structures

When thinking about generating terrain we must first think about how we are going to store the data that represents it. This is a crucial decision, as the method you use to represent this data has a direct effect on what kind of terrain you can generate and how large your data structure will be. In an ideal world we want to be able to represent any kind of terrain using minimal memory.

The technique we have been using so far is two dimensional height maps. This is a very compact representation in which a two dimensional array of elements stores height values, giving a spatial requirement on the order of n^2 bytes, n being the side length of our array. The technique has its limitations in what type of terrain it can represent: storing only a height per location restricts us to a single surface layer, so we cannot represent natural phenomena such as horizontal caves.

On the other hand we can represent our terrain in voxel form, for example as a three dimensional array, which lets us represent a third dimension of data. Where this representation has its drawbacks is in its size. Unlike our height maps, the voxels take n^3 bytes, turning terrain that would be megabytes as a height map into gigabytes of voxel data. Therefore we must compromise and combine the two techniques with the data structure proposed in the paper Layered Data Representation for Visual Simulation of Terrain Erosion by B. Benes and R. Forsbach. This paper proposes a two dimensional array of elements, each of which contains information about the underlying material layers.

E.g.

typedef struct{
    PropertiesT data[MAX_LEVEL]; // the stack of material layers at this cell
    float height;                // overall height at this cell
} ElmT; //one element of the array

PropertiesT is a structure that contains information about the material, possibly such as the height of the material layer, the material type or even its density. Unlike the voxel representation, this clumps together layers of the same material, providing information about the whole block, which saves large amounts of data. The overall size of the data structure is now more like n^2 * sizeof(ElmT) bytes, which means that as long as sizeof(ElmT) is smaller than n, which it is highly likely to be, our data structure will be much smaller than the voxel based approach.
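
The paper leaves the exact contents of PropertiesT open, but based on the description above it might look something like this (the field names here are my own, purely illustrative):

typedef struct{
    float height;    // thickness of this material layer
    int   material;  // material type, e.g. rock, sand, soil
    float density;   // how resistant the layer is to erosion
} PropertiesT; //one material layer within an element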

This data structure also gives us the freedom to easily implement erosion techniques. The technique we have used is known as thermal erosion and is cited from the same paper. The thermal erosion algorithm is an attempt to represent long term thermal weathering: material is dissolved by changes in temperature, which causes the terrain to break up and fall down. The eroded part falls in the direction of greatest gradient. To achieve this we use the following equation,

\Delta s_i = \Delta s \cdot \frac{h_i}{\sum_{k} h_k}

The result, Δs_i, gives the amount of material to move to neighbouring location i. Δs is equivalent to half the largest height difference between the element we wish to erode and its eight neighbours; the half must be included to stop oscillations in the algorithm. h_i represents the height difference to the neighbour we wish to move our material to, which is divided by the sum of the height differences over all of the element's neighbours.
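
To make that concrete, here is a minimal sketch of one thermal erosion step on a plain height map (ignoring the layered structure for clarity); I am interpreting h_i as the positive height difference to the lower neighbour i, and the talus angle threshold from the paper is omitted:

#include <algorithm>

// One thermal erosion step for the cell at (x, y) of an n*n height
// map h, following the equation above.
void erodeCell(float* h, int n, int x, int y)
{
    const int dx[8] = {-1, 0, 1,-1, 1,-1, 0, 1};
    const int dy[8] = {-1,-1,-1, 0, 0, 1, 1, 1};
    float d[8];
    float dMax = 0.0f, dSum = 0.0f;
    for (int i = 0; i < 8; ++i)
    {
        int nx = x + dx[i], ny = y + dy[i];
        if (nx < 0 || ny < 0 || nx >= n || ny >= n) { d[i] = 0.0f; continue; }
        // positive difference means the neighbour is lower than us
        d[i] = std::max(0.0f, h[y*n + x] - h[ny*n + nx]);
        dMax  = std::max(dMax, d[i]);
        dSum += d[i];
    }
    if (dSum <= 0.0f) return; // all neighbours are higher, nothing falls

    // half the largest difference, to stop oscillation
    float ds = 0.5f * dMax;
    for (int i = 0; i < 8; ++i)
    {
        if (d[i] <= 0.0f) continue;
        int nx = x + dx[i], ny = y + dy[i];
        float move = ds * d[i] / dSum; // share proportional to the drop
        h[ny*n + nx] += move;
        h[y*n + x]   -= move;
    }
}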

Anyway, enough of the boring stuff; here is a pretty video of it all implemented!