NVidia AI Denoiser

Over the holidays, to keep myself busy, I put together a simple command line implementation of NVidia's new AI denoiser. Here are some examples of what it can do.

Original image:

[Image: test]

Denoised image:

[Image: denoised_test]

The code can be found here:

https://github.com/DeclanRussell/NvidiaAIDenoiser

I have also created a Windows distribution for those who wish to try it out:

Denoiser Windows v2.0

Denoiser Windows v1.1

Denoiser Windows v1.0
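
Basic usage from the command line looks roughly like the following sketch (the Denoiser.exe name and the -i/-o input and output flags shown here are assumed; check the repository README for the exact options):

    Denoiser.exe -i noisy_render.png -o denoised_render.png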



6 thoughts on “NVidia AI Denoiser”

  1. Sharing some experience: I did some testing with v1.0 and it works pretty nicely on unidirectional PT renders (Cycles, LuxCoreRender), but it does nothing to images generated with BiDir engines (Maxwell, Indigo).

    BTW, can it be made to run through a sequence, i.e. a whole set of images in a specific folder? I want to test to see if and how much flickering occurs.


  2. Thanks for taking the time to test it. The results on bidirectional renders are very interesting indeed. The training was done only on unidirectional renders, so perhaps that is the reason.

    I’ll look into sequences; however, EXR support is definitely at the top of my list. In the meantime, I’ve previously created some videos for work that demonstrate how well it works with animation, if you’re interested? Of course, I’ll have to see if I can get permission to share them first.


  3. It was no problem.
    I like experimenting with and experiencing novelties in visual tech, so thank you for sharing and for getting back to me.

    On BiDir: I had also assumed so because of Iray. Although it’s well capable of BiDir rendering, I guess it isn’t economically feasible for the company to do it just yet.

    On to sequences: it was a wishful idea. As you have your priorities all set, with EXR being an integral part, it’s totally understandable and I respect that.

    But… if you happen to know how the batch script should look, or have a link to a site, it would mean a lot to me…

    I’m no coder, but still… the intrigue and passion for animation burns inside 🙂 so either way I need to grind on and dig further…

    Thanks again for your work, and keep it up!


    1. ADDENDUM
      I’ve done some more tests (now with v1.1) on interior scene renders, and it seems that this denoiser works efficiently only in well lit areas, similar to your Cornell box example.
      After further observation, it also leads me to the question: why is there more noise in the greens?

      I wish I knew more about the specifications and statistics, and what kind of data this denoiser was fed with…

      Oh, BTW, does this AI learn while working? Is such an option even possible, switching to a learn mode?


      1. No problem, if people are enjoying and making use of it then it makes it worthwhile 🙂

        > But… if you happen to know how the batch script should look, or have a link to a site, it would mean a lot to me…

        It would probably be pretty easy to set up with a batch script. I think I could whip one up pretty quickly as a short-term workaround. It would be useful to know how the numbers in your images are formatted, i.e. do they have padding like image.0001.jpg, image.0002.jpg etc.
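
        As a rough sketch, assuming the executable is called Denoiser.exe with -i/-o flags for input and output (adjust those to the actual options), a batch file could look something like this:

            @echo off
            rem Denoise every PNG in the "frames" folder and write the result to
            rem "denoised", keeping the original file name.
            if not exist denoised mkdir denoised
            for %%f in (frames\*.png) do (
                Denoiser.exe -i "%%f" -o "denoised\%%~nxf"
            )

        Since the loop simply matches every file in the folder, the numbering scheme would only matter if you wanted to process a specific sub-range of frames.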

        >why is there more noise in the greens?

        Are you supplying normal and albedo inputs with the “-n” and “-a” flags? This should help preserve the colour a lot better than giving it the beauty pass alone. If you are, then perhaps it’s just a limitation.
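
        For example, something along these lines (again, the -i/-o names are assumed here; -n and -a are as above):

            Denoiser.exe -i beauty.png -o denoised.png -a albedo.png -n normal.png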

        > I wish I knew more about the specifications and statistics, and what kind of data this denoiser was fed with…

        The training data this uses is the pre-trained data shipped by Nvidia, which was generated with Iray; I don’t have any information on the exact training set they used. OptiX actually ships with tools to create your own training data, which is really interesting and could improve results if it were trained with images from the same renderer you’re denoising with. So if you happen to have a couple of thousand image pairs lying around to train it with, I would be interested in the results 😉

        > Does this AI learn while working? Is such an option even possible, switching to a learn mode?

        That would be sweet, but sadly no, it’s pre-trained. I’m not sure how that would work anyway, as you essentially train the AI by giving it a big data set of before and after results. When you give it a new “before” image, it uses its training to produce the expected “after” result on its own. Is this making sense? For it to learn on its own, how would it know whether what it’s producing is the correct result?


  4. … 🙂 I was imagining a concept of a master artist teaching an AI to help on projects later. Something along the lines of a personal “AIssistant”, a long-term study/project. For starters, basically like a macro maker: recognising repeating patterns and then automating the actions. The same could be applied to rendering, since artists usually develop their own styles over the years. Just wishfully brainstorming 😉

    I’ve seen you’ve updated. I will try some testing over the weekend.

    TYVM for the explanations, and stay well.

