DT Exclusive: Maxwell's Head of Render Technology Explains GPU Prototype

Next Limit Technologies, developers of Maxwell Render, recently announced that they are working on a prototype of a GPU-driven version of their powerful rendering software. GPU rendering is currently a fast-growing trend in 3D technology because it is reported to give modelers and animators higher-quality viewport previews that more closely resemble final renders. Next Limit's foray into GPU rendering was on display in a video released at SIGGRAPH (below), which showed Juan Cañada, head of Maxwell Render technology, demonstrating the prototype running on a GPU device. Digital-Tutors recently reached out to Juan by email to discuss Next Limit's decision to develop a GPU rendering prototype, how the project is going, and his thoughts on the hype surrounding GPU rendering in general.

What motivated you to begin developing a GPU renderer?

We always invest time in exploring hardware architectures that can speed up Maxwell. During the past few years there has been a lot of hype about GPUs. We’ve been looking into them for a long time, but until now they haven’t been suitable for Maxwell in terms of memory, precision, stability, and even performance in complex scenes. We are very pleased with the evolution of graphics cards, and we consider the latest generation a good step forward.


When fully developed, will your new prototype replace the CPU renderer?

So far it is a prototype useful for previewing purposes, as an alternative to the Fire draft engine. Whether it will be useful for final shots is an open question. The way things are going right now, I’d say the answer is no. However, from what we know about future developments from different hardware manufacturers, the line between GPUs and CPUs could become more blurred in the future. In the end, what the customer wants is a render engine that fulfills his or her needs in terms of speed and quality at a competitive price.


In the video, you used a Quadro K5000. Are there other video cards you’ve been able to successfully run the prototype on?

The demo shown in the video uses CUDA 6.0. It is running on an NVIDIA Quadro K5000, but the prototype also runs on less expensive devices, such as cards from the NVIDIA GeForce family.



Many people think a GPU renderer is faster than a CPU renderer at everything. What are the advantages and disadvantages of both?

In general, GPUs are more limited than CPUs in a few important ways, mostly in terms of memory. Therefore it is very likely that big scenes with many polygons and textures will not fit in the GPU. There is a general perception that GPUs are faster than CPUs, and that is not true: GPUs are very fast at some things, while slower than CPUs at others, and the hype about GPUs hides this fact. Typically, SIMD architectures are very good when all the cores are doing more or less the same thing. In a ray tracing context, this means that GPU performance is good in simple scenes where rays follow similar paths, but the more complex the scene is in terms of both geometry and light transport (for example, a scene with only a few polygons but with glass, mirrors, and small holes can be very complex), the lower the performance of a GPU.
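Juan's point about SIMD architectures can be illustrated with a toy model (a hypothetical sketch for illustration only, not Maxwell code): when lanes in a SIMD group disagree on a branch, the hardware runs each taken path serially while masking off the other lanes, so the group's cost approaches the sum of the distinct path costs rather than the cost of a single path.

```python
# Toy model of SIMD branch divergence (hypothetical illustration, not Maxwell code).
# A "warp" of lanes executes in lockstep: when lanes take different branches,
# each distinct path is executed serially while the other lanes are masked off.

def warp_cost(path_costs):
    """Cost of executing one warp: the sum of the costs of every
    distinct path any lane takes (divergent paths serialize)."""
    return sum(set(path_costs))

# Coherent warp: every ray hits simple diffuse geometry (path cost 4).
coherent = warp_cost([4, 4, 4, 4])    # all lanes agree -> cost of one path

# Divergent warp: rays scatter into glass (9), mirror (6), and diffuse (4) paths.
divergent = warp_cost([9, 6, 4, 4])   # three distinct paths run back to back

print(coherent, divergent)  # 4 19
```

In the coherent case the warp pays for one path; in the divergent case it pays for all three, which is why scenes mixing glass, mirrors, and small holes can drag GPU throughput down even when the polygon count is low.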


What major advantages should Maxwell users expect after the GPU renderer finally comes on the market?

We are looking forward to developing a GPU engine that speeds up the interactive engine first, and perhaps the production engine later. In the end, the user should expect to get faster results with less effort.


Is the ability to use the GPU something you hope to eventually port to Maxwell inside of RealFlow?

Yes, this is something we will try to accomplish.



How do you think GPUs in general will affect production pipelines in the future?

It seems that a significant number of studios are moving (or have plans to move) to GPU-based pipelines for look development. There is a general trend toward giving artists a lot of horsepower locally so they don’t need to rely on render farms for everything, using the farms mainly for final shots.


During SIGGRAPH, we saw a teaser on your Facebook page about a network job manager for Maxwell. Can you tell us more about that?

While Maxwell can be easily connected to any third-party job manager, we also include our own job queue manager system with the Suite for people who cannot afford to acquire a specific tool for that task. This network manager has been rewritten from scratch to be more stable, efficient, user-friendly, expandable and scriptable. This is probably not as “sexy” as other features, but it will certainly boost the productivity of all Maxwell users. We plan to release it soon as a free upgrade for V3 customers. (Maxwell Facebook link)


Apart from what we’ve talked about so far, are there any other features you’d like to see integrated into Maxwell in the future?

We are working on several things at the same time, some can be revealed and some can’t yet.


As you can see, there’s some exciting new technology ahead for Maxwell Render. Next Limit’s recent announcement that licenses of Maxwell Render will be free for faculty and students makes it a great time to get started with Maxwell if you haven’t already done so. When you do, you can learn how to get started with Maxwell in the Beginner’s Guide to Maxwell Render in Maya or any of our other Maxwell Render tutorials.