CUDA rendering in SLI

  • HansWellens
    3Dfollower
    • Nov 2016
    • 21

    CUDA rendering in SLI

    Hi there,

    I was wondering how useful it is to place (NVIDIA) graphics cards in SLI to speed up the CUDA rendering process even more.
    How does the process scale? Isn't it CPU bound, in that adding a second or third high-end graphics card results in a CPU bottleneck?
    What CUDA cards are best (gaming or professional)? Is it just about teraflops/CUDA cores, or are other factors important as well?

    I couldn't find anything on the website about this. The CUDA acceleration is a very big boon to this software. It might be interesting to clarify this a bit.

    Best regards,
    Hans

  • Andrea Alessi
    3Dflow Staff
    • Oct 2013
    • 1335

    #2
    Hi Hans!

    Zephyr can work with multiple cards. You don't necessarily need SLI, and although we last tested an SLI configuration a while back, SLI itself would not give a substantial gain over two single cards.

    Think of it this way: CUDA is used whenever we can process things in parallel (keypoint extraction, depth map generation, etc.), so two cards can compute two depth maps independently, for example. I don't think there are bottlenecks that would make an SLI setup twice as fast at computing one depth map as two single cards each computing their own, but as I said, it has been a while since I tested this.
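
    Roughly, the multi-GPU pattern looks like this (a minimal CUDA sketch, not Zephyr's actual code; computeDepthMap is a hypothetical placeholder kernel): each device is selected in turn and handed its own independent job, with no SLI link involved.

    ```cuda
    // Minimal multi-GPU sketch: one independent job per device.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    // Hypothetical kernel standing in for one depth-map computation.
    __global__ void computeDepthMap(float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = 1.0f;  // placeholder work
    }

    int main() {
        int deviceCount = 0;
        cudaGetDeviceCount(&deviceCount);

        const int n = 1 << 20;
        std::vector<float *> buffers(deviceCount, nullptr);

        // Kernel launches are asynchronous, so this loop queues one
        // job per GPU and all devices then run concurrently.
        for (int dev = 0; dev < deviceCount; ++dev) {
            cudaSetDevice(dev);  // route subsequent calls to this GPU
            cudaMalloc((void **)&buffers[dev], n * sizeof(float));
            computeDepthMap<<<(n + 255) / 256, 256>>>(buffers[dev], n);
        }

        // Wait for every device to finish, then clean up.
        for (int dev = 0; dev < deviceCount; ++dev) {
            cudaSetDevice(dev);
            cudaDeviceSynchronize();
            cudaFree(buffers[dev]);
            printf("device %d finished its depth map\n", dev);
        }
        return 0;
    }
    ```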

    Strictly speaking about performance, Quadro cards are better as they tend to have more CUDA cores and memory. However, on a price/performance basis, I usually suggest gaming cards, as they tend to cost much less for very similar results in Zephyr computations. The memory amount is also important, because if there is not enough video memory, the computation might be switched to the CPU.
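
    As a rough sketch of that fallback decision (an assumed heuristic, not Zephyr's actual logic), the free video memory can be queried with cudaMemGetInfo before choosing where to run:

    ```cuda
    // Sketch of a free-VRAM check: if the job does not fit in video
    // memory, fall back to the CPU path instead.
    #include <cuda_runtime.h>
    #include <cstdio>

    bool fitsInVideoMemory(size_t requiredBytes) {
        size_t freeBytes = 0, totalBytes = 0;
        // cudaMemGetInfo reports free/total memory on the current device.
        if (cudaMemGetInfo(&freeBytes, &totalBytes) != cudaSuccess)
            return false;  // no usable CUDA device at all
        return freeBytes >= requiredBytes;
    }

    int main() {
        const size_t jobBytes = 2ull << 30;  // e.g. a 2 GiB working set
        if (fitsInVideoMemory(jobBytes))
            printf("running on the GPU\n");
        else
            printf("falling back to the CPU implementation\n");
        return 0;
    }
    ```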

    Andrea
