Updating your BxDF Plugins to Renderman 21

With the exciting non-commercial release of the all new Renderman 21, I am sure you are all as keen as I am to update the Renderman BxDF plugins you have made! If so, you should be aware of a couple of changes in the way plugins are handled with the new release.

1. Librix and Examples

As of Renderman 21, librix and all of Pixar's helpful example files are no longer downloaded to your RMANTREE location by default. Instead they download to your system's default download folder as two separate tar files. I simply extracted them back into my RMANTREE location, but ultimately where you put them is up to you.

2. RixRNG.h

RixRNG.h is no longer included by RixBxdf.h, so you must include it yourself within your BxDF plugin, as demonstrated below. I have added a RENDERMAN21 define to my build so I can easily revert if I need to build for previous Renderman versions again.

#include "RixBxdf.h"
#ifdef RENDERMAN21
    #include "RixRNG.h"
#endif

3. RixBXEvaluateDomain

The RixBXEvaluateDomain enumerators have been renamed slightly:

k_RixBXFront is now k_RixBXReflect

k_RixBXBack is now k_RixBXTransmit
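
If you still need to build against older releases, one option is to map the two sets of names behind the same RENDERMAN21 define used above. This is just a minimal sketch of my own; the MY_BXDF_FRONT and MY_BXDF_BACK names are placeholders, not part of the Renderman API.

#ifdef RENDERMAN21
    // Renderman 21 names
    #define MY_BXDF_FRONT k_RixBXReflect
    #define MY_BXDF_BACK  k_RixBXTransmit
#else
    // Pre-21 names
    #define MY_BXDF_FRONT k_RixBXFront
    #define MY_BXDF_BACK  k_RixBXBack
#endif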

4. Function Parameter Changes

As of this version the functions GenerateSample, EvaluateSample and EvaluateSamplesAtIndex have some extra parameters: GenerateSample gains a compTrans colour output, and EvaluateSample and EvaluateSamplesAtIndex now take the RixRNG pointer as well.

GenerateSample:

#ifdef RENDERMAN21
    virtual void GenerateSample(RixBXTransportTrait transportTrait,
                                RixBXLobeTraits const *lobesWanted,
                                RixRNG *rng,
                                RixBXLobeSampled *lobeSampled,
                                RtVector3   *Ln,
                                RixBXLobeWeights &W,
                                RtFloat *FPdf, RtFloat *RPdf,
                                RtColorRGB* compTrans)
#else
    virtual void GenerateSample(RixBXTransportTrait transportTrait,
                                RixBXLobeTraits const *lobesWanted,
                                RixRNG *rng,
                                RixBXLobeSampled *lobeSampled,
                                RtVector3   *Ln,
                                RixBXLobeWeights &W,
                                RtFloat *FPdf, RtFloat *RPdf)
#endif

EvaluateSample:

#ifdef RENDERMAN21
    virtual void EvaluateSample(RixBXTransportTrait transportTrait,
                                RixBXLobeTraits const *lobesWanted,
                                RixRNG *rng,
                                RixBXLobeTraits *lobesEvaluated,
                                RtVector3 const *Ln, RixBXLobeWeights &W,
                                RtFloat *FPdf, RtFloat *RPdf)
#else
    virtual void EvaluateSample(RixBXTransportTrait transportTrait,
                                RixBXLobeTraits const *lobesWanted,
                                RixBXLobeTraits *lobesEvaluated,
                                RtVector3 const *Ln, RixBXLobeWeights &W,
                                RtFloat *FPdf, RtFloat *RPdf)
#endif

EvaluateSamplesAtIndex:

#ifdef RENDERMAN21
    virtual void EvaluateSamplesAtIndex(RixBXTransportTrait transportTrait,
                                        RixBXLobeTraits const &lobesWanted,
                                        RixRNG *rng,
                                        RtInt index, RtInt nsamps,
                                        RixBXLobeTraits *lobesEvaluated,
                                        RtVector3 const *Ln,
                                        RixBXLobeWeights &W,
                                        RtFloat *FPdf, RtFloat *RPdf)
#else
    virtual void EvaluateSamplesAtIndex(RixBXTransportTrait transportTrait,
                                        RixBXLobeTraits const &lobesWanted,
                                        RtInt index, RtInt nsamps,
                                        RixBXLobeTraits *lobesEvaluated,
                                        RtVector3 const *Ln,
                                        RixBXLobeWeights &W,
                                        RtFloat *FPdf, RtFloat *RPdf)
#endif

5. Install location

As of Renderman 21 you no longer install your BxDF plugins in %RMANTREE%/lib/RIS/bxdf/ as the RIS folder no longer exists. This is presumably because Renderman 21 has removed REYES in favour of the RIS framework, so the distinction in the file layout is no longer necessary.

You will now install your BxDF .args files in %RMANTREE%/lib/plugins/Args and your .dll files in %RMANTREE%/lib/plugins.

 

Hope this post finds itself helpful to someone. You can see a demonstration of all these changes in my previously made BxDF plugins here https://declanrussell.com/portfolio/microfacet-models-for-refraction-through-rough-surfaces-in-renderman/


Simple Arnold Shader Maya Plugin

I have set up a git repo with an implementation of the simple Arnold shader demonstrated here https://support.solidangle.com/display/AFMUG/Creating+a+Shader
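
For reference, below is roughly what that shader looks like. This is an Arnold 4 era sketch of my own rather than a copy of the repo, so the node and parameter names are illustrative and may not match the repo exactly.

#include <ai.h>
#include <cstring>

AI_SHADER_NODE_EXPORT_METHODS(SimpleShaderMethods);

enum SimpleShaderParams { p_color };

node_parameters
{
    // Single colour parameter with a grey default.
    AiParameterRGB("constantColor", 0.7f, 0.7f, 0.7f);
}

node_initialize {}
node_update {}
node_finish {}

shader_evaluate
{
    // No lighting at all, just output the parameter value.
    sg->out.RGB = AiShaderEvalParamRGB(p_color);
}

node_loader
{
    if (i > 0) return false;
    node->methods     = SimpleShaderMethods;
    node->output_type = AI_TYPE_RGB;
    node->name        = "simpleShader";
    node->node_type   = AI_NODE_SHADER;
    strcpy(node->version, AI_VERSION);
    return true;
}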

Installation

  1. Download the git repo https://github.com/DeclanRussell/aiSimpleShader

2. Install the Arnold SDK from here https://www.solidangle.com/arnold/download/ to somewhere on your computer. I have it in,

C:\solidangle\releases\Arnold-X.X.X.X-platform.

3. Set a new environment variable ARNOLD_PATH to your chosen install path:

set ARNOLD_PATH="C:\solidangle\releases\Arnold-X.X.X.X-platform"

4. Install the MtoA (Arnold for Maya) plugin for Maya from here: https://www.solidangle.com/arnold/download/

5. Build the shader with Qt. If you're on Linux or Mac you can just run,

qmake
make clean
make

Or you can just open the .pro file in Qt and build it there.

6. You can test whether the shader has compiled correctly by running the testScript.bat file I have included. If you get a red sphere the shader works; pink means it has not compiled correctly.

7. Copy the simpleShader.dll you have just built and the simpleShader.mtd to %MTOA_PATH%\shaders\ where %MTOA_PATH% is wherever you installed the Arnold maya plugin.

8. Copy the mySimpleTemplate.py to %MTOA_PATH%\scripts\mtoa\ui\ae\

9. You can now test if this works in maya by running the test scene mayaTestScene.mb. You should get the following result.

[example render]

A brief introduction to ray and path tracing

Rendering is the process of generating an image from a 2D or 3D model, and it is possibly the most important component in any video game or piece of CGI. There are a variety of techniques to choose from, the most common of which at the moment is rasterization, which is what you will find in all modern day games. In this technique you take the vertices and normals of a model and interpolate them across pixel space to create an image. For example, if you have two points A and B and want to draw a line, you interpolate from A to B and fill in all the pixels along the way. This technique is decades old and very highly optimized, but as time moves on visual effects demand higher quality images, which requires more complicated rendering techniques. This is where ray and path tracing come in. You will find them very common in CGI as they create very high quality images, but they are in no way fast enough for games (yet!). In this post I will try to give you a basic idea of what they are and how you would implement them in computer graphics.
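
As a quick aside, here is a tiny, self-contained sketch of that "interpolate from A to B and fill in the pixels" idea. Everything in it is invented purely for illustration: it rasterises one line into a small character grid and prints it.

#include <cmath>
#include <cstdio>

int main()
{
    const int width = 20, height = 10;
    char grid[height][width];
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            grid[y][x] = '.';

    // Endpoints A and B of the line we want to draw.
    float ax = 1.f, ay = 1.f, bx = 18.f, by = 8.f;

    // Take enough steps that we never skip a pixel, then interpolate.
    int steps = (int)std::ceil(std::fmax(std::fabs(bx - ax), std::fabs(by - ay)));
    for (int i = 0; i <= steps; ++i)
    {
        float t = (float)i / steps;                   // 0 at A, 1 at B
        int x = (int)std::lround(ax + (bx - ax) * t);
        int y = (int)std::lround(ay + (by - ay) * t);
        grid[y][x] = '#';                             // fill in this pixel
    }

    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x) std::putchar(grid[y][x]);
        std::putchar('\n');
    }
    return 0;
}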

In layman's terms, ray tracing is an attempt to simulate photons of light through mathematical formulae so that we can create images on the screen. In reality, billions of photons come from a light source and bounce off objects in many different directions, and the photons angled correctly hit our eyes, enabling us to see. This is exactly what we are trying to simulate with ray tracing, but with some cheats so that our computers can handle it.

In our scene we will have a light source, some objects and finally a camera which represents our eye. In reality billions of photons are emitted from our light source in practically infinite directions, and only the small percentage of rays that land in our camera/eye would create our image. Sadly we can't simulate this in computing, or at least if we did it would take years due to the vast quantity of rays we would have to calculate, most of which may never even land in our camera. To overcome this we use a method very imaginatively named Backward Tracing. In this method we trace rays backwards from our camera to an object and then to the light source. This saves us calculating all the billions of unnecessary rays and keeps just the ones that create our image.

So at our current state we have one ray hitting something in our scene, creating a small dot of our image. Now to create our full image all we have to do is send more rays. Imagine you are painting a picture but can only use dots; if you paint enough dots you will eventually be able to create a full image. To convert this into rendering terms, we effectively need a ray for every pixel we are trying to draw. So imagine we have a plane in front of our camera. We divide this plane into a grid and fire a ray from our camera through each of the cells (our pixels) of the grid. We calculate whether or not it intersects with something in our scene, and if it does we use the colour of that object for that pixel. At a very basic level, this is how our ray tracer works.
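
To make that concrete, here is a minimal, self-contained sketch of that per-pixel loop. Everything in it (the types, the scene, the ASCII output) is invented purely for illustration: it fires one ray per pixel at a single sphere and prints a crude character "image".

#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3
{
    float x, y, z;
    Vec3 operator-(const Vec3 &o) const { return {x - o.x, y - o.y, z - o.z}; }
    float dot(const Vec3 &o) const { return x * o.x + y * o.y + z * o.z; }
};

struct Ray    { Vec3 origin, dir; };
struct Colour { float r, g, b; };

struct Sphere
{
    Vec3 centre;
    float radius;
    Colour colour;

    // Standard ray/sphere intersection: solve |o + t*d - c|^2 = r^2 for t
    // (assumes ray.dir is normalised).
    bool intersect(const Ray &ray, float &tHit) const
    {
        Vec3 oc = ray.origin - centre;
        float b = oc.dot(ray.dir);
        float c = oc.dot(oc) - radius * radius;
        float disc = b * b - c;
        if (disc < 0.f) return false;
        tHit = -b - std::sqrt(disc);
        return tHit > 0.f;
    }
};

// Back-trace a single ray: return the colour of the closest object it hits.
Colour trace(const Ray &ray, const std::vector<Sphere> &scene)
{
    float nearest = 1e30f;
    Colour pixel = {0.f, 0.f, 0.f}; // black background if we hit nothing
    for (const Sphere &s : scene)
    {
        float t;
        if (s.intersect(ray, t) && t < nearest)
        {
            nearest = t;
            pixel = s.colour; // basic ray caster: just take the object's colour
        }
    }
    return pixel;
}

int main()
{
    const int width = 32, height = 32;
    // One red sphere sitting in front of the camera.
    std::vector<Sphere> scene = { {{0.f, 0.f, -5.f}, 1.f, {1.f, 0.f, 0.f}} };

    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            // Map the pixel to a point on an image plane in front of the camera
            // and fire a normalised ray from the origin through it.
            float u = (x + 0.5f) / width  * 2.f - 1.f;
            float v = 1.f - (y + 0.5f) / height * 2.f;
            Vec3 dir = {u, v, -1.f};
            float len = std::sqrt(dir.dot(dir));
            Ray ray = {{0.f, 0.f, 0.f}, {dir.x / len, dir.y / len, dir.z / len}};
            Colour c = trace(ray, scene);
            std::putchar(c.r > 0.f ? '#' : '.'); // crude ASCII "image"
        }
        std::putchar('\n');
    }
    return 0;
}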

Path tracing is almost an extension of ray tracing. It is a lot more physically accurate, creating even higher quality images, but again it sacrifices calculation speed. Instead of a ray hitting an object and then being sent straight to the light source, it continues to bounce around the scene accumulating colour values until it eventually hits a light. Some materials behave differently: some may have a high reflectivity and others a level of refraction or transparency, which means our rays will have to behave differently as well to colour them correctly. For example, if we have a shiny red sphere next to a blue sphere, the red sphere will reflect some of the blue from the other sphere. This means our rays must do more "bounces" before reaching our light source, which in turn means more calculations, which equals longer rendering times.
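
As a rough sketch of that idea, the trace() function from the example above could be extended into a (very naive) path tracer along these lines. This reuses the Vec3/Ray/Colour/Sphere types from that sketch and assumes two things that are not in it: an extra float emission member on Sphere to mark lights, and the rand01() helper defined here.

#include <cstdlib>

// Assumed helper: uniform random float in [0,1).
static float rand01() { return std::rand() / (RAND_MAX + 1.0f); }

// Very naive path tracer: keep bouncing in random directions, multiplying the
// surface colours together, until we hit a light or give up after a few bounces.
Colour pathTrace(const Ray &ray, const std::vector<Sphere> &scene, int depth)
{
    if (depth > 4) return {0.f, 0.f, 0.f};                  // bounce limit

    // Find the closest sphere this ray hits, exactly as in trace() above.
    float nearest = 1e30f;
    const Sphere *hit = nullptr;
    for (const Sphere &s : scene)
    {
        float t;
        if (s.intersect(ray, t) && t < nearest) { nearest = t; hit = &s; }
    }
    if (!hit) return {0.f, 0.f, 0.f};                       // escaped the scene

    if (hit->emission > 0.f)                                // hit a light: stop
        return {hit->emission, hit->emission, hit->emission};

    // Hit point and surface normal of the sphere we landed on.
    Vec3 p = {ray.origin.x + ray.dir.x * nearest,
              ray.origin.y + ray.dir.y * nearest,
              ray.origin.z + ray.dir.z * nearest};
    Vec3 n = {(p.x - hit->centre.x) / hit->radius,
              (p.y - hit->centre.y) / hit->radius,
              (p.z - hit->centre.z) / hit->radius};

    // Pick a random bounce direction in the hemisphere around the normal
    // (crude rejection sampling, just to keep the sketch short).
    Vec3 d;
    do
    {
        d = {rand01() * 2.f - 1.f, rand01() * 2.f - 1.f, rand01() * 2.f - 1.f};
    } while (d.dot(d) > 1.f || d.dot(n) <= 0.f);
    float len = std::sqrt(d.dot(d));
    d = {d.x / len, d.y / len, d.z / len};

    // Start the next ray slightly off the surface so we don't re-hit ourselves,
    // then tint whatever light comes back by this surface's colour.
    Ray bounce = {{p.x + n.x * 1e-3f, p.y + n.y * 1e-3f, p.z + n.z * 1e-3f}, d};
    Colour in = pathTrace(bounce, scene, depth + 1);
    return {hit->colour.r * in.r, hit->colour.g * in.g, hit->colour.b * in.b};
}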

So that's a brief introduction to ray and path tracers. Soon I hope to explain in more depth the mathematics used in these techniques, such as shading formulae and how to calculate the reflections of rays in the scene.

For more info:

A good explanation of simple ray tracers and how to implement one, with source code

Ray tracers Vs Path Tracers