Workflow advice regarding re-coloring an imported mesh

  • Craig
    3Dflower
    • Oct 2022
    • 6

    Hi,

    I laser scanned a hotel room and have parsed the resulting point cloud into different objects (east wall, west wall, bed, toilet, etc.). Now I want to make those point clouds into textured meshes. The geometry has to be decent, but not perfect. I have been meshing the point clouds with Rhino 8 and am pleased with the results.

    I'd like to color the meshes in 3DFZ. I have equirectangular images taken with a Ricoh Theta X camera (see attached sample that is 50% size - full size exceeded upload limits). Of course, I also have the color captured via the laser scanner and can access the full resolution version in FARO's image "color overlay" as well. It's something like an equirectangular image as well. A bonus of using the FARO image overlay is that I know exactly where that camera was located, but I don't know the projection that is used. So I'll focus on the Ricoh unless someone has a better idea.

    Can anyone suggest a workflow to re-color the imported Rhino meshes using the Ricoh image(s)? I have the full version of 3DFZ.

    Thanks!

    Craig
  • Andrea Alessi
    3Dflow Staff
    • Oct 2013
    • 1335

    #2
    Hello Craig!

    In general I would suggest using actual photos rather than 360 photos if possible (or, if your 360 camera allows you to export the raw unstitched images, use those).

    Ideally you should be able to create a photogrammetry project (either from non-stitched photos or using the 360 images https://www.youtube.com/watch?v=_SDa2GfFCUg ) - then use the laser-scanned point cloud as a reference and bring the photogrammetry project into the laser scan's reference system ( see https://www.3dflow.net/technology/do...-registration/ or https://www.youtube.com/watch?v=oT0CK2Vcu_E ).
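    The rough registration step is done through Zephyr's own tools, but conceptually it reduces to solving for the rigid transform that best maps a set of control points in one reference system onto their counterparts in the other. A minimal numpy sketch of that solve (the Kabsch algorithm; `rigid_transform` and the toy points are illustrative, not part of Zephyr):

```python
import numpy as np

def rigid_transform(src, dst):
    """Best-fit rotation R and translation t such that dst ~= R @ src + t
    (Kabsch algorithm, least-squares over point correspondences)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# toy check: rotate four non-coplanar points 90 degrees about Z, shift
# them, then recover the transform from the correspondences alone
R_true = np.array([[0., -1., 0.],
                   [1.,  0., 0.],
                   [0.,  0., 1.]])
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 2.]])
dst = src @ R_true.T + np.array([2., 0., -1.])
R, t = rigid_transform(src, dst)
print(np.allclose(src @ R.T + t, dst))  # True
```

    In practice you pick a handful of well-spread control points (corners of the room work well) rather than the four toy points above.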

    Once you have done the rough alignment, do the fine alignment via ICP, then structure the laser scan cloud(s). If there are multiple clouds, consider merging them so you can get one single mesh. After that, you can extract the mesh and generate its texture.
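    The ICP fine alignment mentioned above can be pictured as alternating two steps: match each point to its nearest neighbour in the reference cloud, then solve for the rigid transform that best fits those matches. A bare-bones point-to-point sketch using scipy (illustrative only; Zephyr's implementation is more sophisticated):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=20):
    """Point-to-point ICP: repeatedly match each source point to its
    nearest destination point, then solve the best rigid fit (Kabsch)."""
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    tree = cKDTree(dst)
    for _ in range(iters):
        _, idx = tree.query(src)             # nearest-neighbour matches
        matched = dst[idx]
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        src = src @ R.T + (mu_d - R @ mu_s)  # apply the increment
    return src

# demo: perturb a cloud with a small rotation + shift, then re-align it
rng = np.random.default_rng(0)
cloud = rng.random((200, 3))
a = np.deg2rad(5)
Rz = np.array([[np.cos(a), -np.sin(a), 0.],
               [np.sin(a),  np.cos(a), 0.],
               [0., 0., 1.]])
moved = cloud @ Rz.T + np.array([0.05, -0.02, 0.03])
aligned = icp(moved, cloud)
print(np.abs(aligned - cloud).max() < 1e-2)
```

    This is why the rough alignment matters: ICP only converges when the initial pose is already close enough that most nearest-neighbour matches are correct.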

    I hope this helps! If you need help, you can contact us at support@3dflow.net and we'll be happy to process the data with you.

    • Craig
      3Dflower
      • Oct 2022
      • 6

      #3
      Andrea thanks for the quick reply!

      It's key that the final result is several meshes, rather than one. That way I can turn walls and objects on and off to get different views.

      Is it possible to tell Zephyr where the camera was located and then have it just "paint" an existing mesh with the colors that come from the known camera imagery?

      • Andrea Alessi
        3Dflow Staff
        • Oct 2013
        • 1335

        #4
        Happy to help!

        In that case you can certainly skip merging the point clouds, though keep in mind you're likely to get some "intersections" and/or "gaps" between two different meshes near the edges, depending on the point clouds themselves. If that turns out to be a problem, I would consider exporting, editing, and re-importing in Blender - since it's the interior of a room, that should be fairly easy, as we're discussing simple shapes.

        You'll need to re-generate the textured mesh whenever you add or remove cameras, as the texturing process merges and weights the different cameras for each pixel. But yes, you can do that. While you can add new cameras manually, I would recommend adding them via photogrammetry, as even a slight error such as 0.1° will result in bad texture reprojection.
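        Craig's "paint from a known camera" idea maps neatly onto the equirectangular projection: for each mesh vertex, take the direction from the camera position to the vertex, convert it to longitude/latitude, and sample that pixel. A minimal sketch, assuming a level camera with yaw zero along +X and Z up (the function name and conventions are illustrative, not Zephyr's API):

```python
import numpy as np

def equirect_lookup(points, cam_pos, img_w, img_h):
    """Map 3D points to (u, v) pixel coordinates in an equirectangular
    image shot from cam_pos. Assumes a level camera with yaw zero along
    +X and Z up - a real pose needs the camera's full orientation."""
    d = np.asarray(points, float) - np.asarray(cam_pos, float)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)    # unit view rays
    lon = np.arctan2(d[:, 1], d[:, 0])                # -pi .. pi
    lat = np.arcsin(np.clip(d[:, 2], -1.0, 1.0))      # -pi/2 .. pi/2
    u = (lon / (2 * np.pi) + 0.5) * (img_w - 1)       # left .. right
    v = (0.5 - lat / np.pi) * (img_h - 1)             # top .. bottom
    return u, v

# a vertex 1 m straight ahead of the camera lands at the image centre
u, v = equirect_lookup([[1.0, 0.0, 0.0]], cam_pos=[0.0, 0.0, 0.0],
                       img_w=11008, img_h=5504)
print(u[0], v[0])  # 5503.5 2751.5
```

        The sensitivity Andrea describes falls out of the same math: on a roughly 11K-pixel-wide Theta X frame, a 0.1° yaw error already shifts every lookup by about 3 pixels (11008 × 0.1/360), which is why recovering the pose photogrammetrically beats entering it by hand.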
