Reconstruction of objects created from 2D-stitched images

  • waynewaynehello
    3Dflower
    • Aug 2018
    • 4

    Reconstruction of objects created from 2D-stitched images

    Hello,

    I have an interesting use case and was hoping I could get some advice.

    I have a motorized scanner consisting of a USB microscope which moves in one axis (X), and a sample which moves in a perpendicular one (Y), and can rotate in discrete steps. I am first stitching together the 2D scans, which results in what looks like a normal image taken with a camera, and then trying to stitch these together as if I was just taking normal pictures of an object on a turntable (right now, ignoring the ability to Z Stack.)
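
    For reference, the 2D stitching step looks roughly like the sketch below. This is just a minimal Python/OpenCV illustration, not my actual pipeline: the tile filenames are placeholders, and OpenCV's scan-mode stitcher is only a stand-in for the stitching I actually do.

    import cv2

    # Hypothetical filenames: one microscope frame per scanner position for a single view.
    tiles = [cv2.imread(f"tile_{i:02d}.png") for i in range(25)]

    # SCANS mode uses an affine model, which suits flat, overlapping microscope tiles.
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, scan = stitcher.stitch(tiles)

    if status == 0:  # 0 == Stitcher_OK
        cv2.imwrite("stitched_scan.png", scan)
    else:
        print("Stitching failed with status", status)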

    I was encouraged by some success I had reconstructing from this set of images (https://drive.google.com/drive/folde...e_hwv?ogsrc=32 ; each of these was originally 25 stitched images), which, after masking and editing and such, resulted in this reconstruction: https://sketchfab.com/models/6653825...4a5034244a902b

    However, I haven't had any other success. I've tried four other insects. Two beetles are rather monotone black and brown (and glossy), so I can understand that they wouldn't work very well. A bee I dismissed because it was fuzzy, which I imagine would be difficult too. But I just tried another bug, which I was a bit more optimistic about, since it was nice and colorful and not that hairy: https://drive.google.com/file/d/1uH4...ew?usp=sharing (I was trying to use Z2000).

    I have tried masking the outside of the bug and the gaps between the legs, and every scan parameter I could find, but it can't align more than three pictures (which are adjacent to each other), no matter what I do, and about half the time the reconstruction fails outright. Is it because the image is off-axis (should it be straightened out, referenced against, say, the tip of the nose)? Should I be playing around with the camera focal length, trying to find the "equivalent" one as if it were a regular camera? Or am I confusing the heck out of the program by using stitched images in the first place? The way I see it, all the information needed to do the reconstruction should be there, right?

    I would love to see if anyone more experienced with the software can produce a model.

    Thanks,

    Wayne
  • Roberto
    3Dflow
    • Jun 2011
    • 559

    #2
    Hi Wayne,

    I'm not very familiar with this kind of setup, but I'll try to give you some suggestions based on traditional photography. I've downloaded the datasets, and I noticed some issues you should address to improve the results.
    • Resolution: the image resolution is very low, but I guess this is a limitation of the hardware itself. I also suspect that the sensor is small, which is a problem for pixel-level noise.
    • Depth of field: the depth of field is quite narrow. Usually you can stop down the aperture (and compensate with a slower shutter speed) to get a better depth of field.
    • Number of photos/overlap: I would say that 25 pictures per orbit should be enough. However, you can compensate for the low resolution/quality of the photos by taking more pictures. The major problem at the moment, if you run the dataset in Zephyr, is that there are not enough matched keypoints. A smaller change between one photo and the next can help.
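
    Just to put a rough number on "smaller change between photos" (back-of-the-envelope only, assuming evenly spaced shots on a single turntable orbit):

    # Evenly spaced shots on one orbit: the angular step is simply 360 / n.
    for shots in (25, 50, 100):
        print(f"{shots} shots per orbit -> {360 / shots:.1f} degrees between consecutive photos")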

    I think the points above are the major issues. However, here are some additional hints:
    • Camera calibration: you might want to use the calibration manager to save and reuse a successful camera calibration. This might help the orientation further.
    • Different orbits: it's not a problem if you can move the microscope on only one axis, but I would always take more than one orbit to help the camera orientation.
    • Lights: you can try to use diffusers to get a homogeneous lighting setup.
    • Masking: you probably still need to use masking. However, if you can play with diffusers and set up the lights properly, you can get a uniform white background that wouldn't need masking at all.
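
    If you do end up with a reasonably even white background, even a simple thresholding pass like the sketch below (plain Python/OpenCV; the brightness threshold of 235 is just a guess you would tune for your lighting) can give you a rough starting mask. 3DF Masquerade is still the right tool for anything more precise.

    import cv2
    import numpy as np

    def white_background_mask(path, thresh=235):
        """Rough subject mask for an image shot against an even white background.
        Pixels that are near-white in all three channels are treated as background."""
        img = cv2.imread(path)
        background = np.all(img >= thresh, axis=2)
        mask = np.where(background, 0, 255).astype(np.uint8)
        # Remove small speckles and close small holes so the mask follows the subject.
        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        return mask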

    Hope this helps! The best thing is to experiment and see what works best.

    • waynewaynehello
      3Dflower
      • Aug 2018
      • 4

      #3
      Hey Rob, thanks so much for your reply! The setup is homemade, and I will happily give you more details if you're interested. I have no experience in traditional photography, so I greatly appreciate any tips that might be relevant.

      Just to address these points in turn:

      The resolution is indeed a limitation of the hardware (the original pictures from the scope are 480x640, and the sensor itself is about 3x4 mm), but larger objects use more pictures and result in higher resolutions. The "small black and red beetle" was a bit on the small side. Most of the pictures are 3-5 MP or so.

      The depth of field is something I am trying to work on. I don't have access to aperture or shutter speed, but I have been trying to implement Z stacking with Picolay. For a typical scan I'll collect, say, 4 captures over 3 mm in height, and that clearly goes through the full range of focus most of the time. I'm trying it both before and after the 2D stitching. I am concerned about what the authors of this paper (https://www.researchgate.net/publica...i-view_imaging) say, which is that Z stacking tends to confuse structure-from-motion reconstruction, but I won't really know until I try.
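
      For what it's worth, the kind of focus stacking I'm experimenting with looks roughly like the sketch below. It's a minimal Python/OpenCV version of per-pixel sharpest-frame selection, not what Picolay actually does, and it assumes the Z-stack frames are already aligned.

      import cv2
      import numpy as np

      def focus_stack(paths):
          """Fuse an aligned Z-stack into one image by picking, per pixel,
          the frame with the strongest local Laplacian (sharpness) response."""
          frames = [cv2.imread(p) for p in paths]
          grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
          # Blur the absolute Laplacian a little so single noisy pixels don't win.
          sharpness = [cv2.GaussianBlur(np.abs(cv2.Laplacian(g, cv2.CV_64F)), (9, 9), 0)
                       for g in grays]
          best = np.argmax(np.stack(sharpness), axis=0)   # index of sharpest frame per pixel
          stack = np.stack(frames)                        # shape (n_frames, H, W, 3)
          rows, cols = np.indices(best.shape)
          return stack[best, rows, cols]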

      Number of pictures: definitely, that would be a good thing! I can take up to 160 per rotation, but am limited to 50 in the free version. I usually take about 40, but now that I look at it, some parts, like the legs, jump significantly from one photo to the next.

      Lighting: Been trying a few things including a softbox, but you just gave me an idea for an attachment to the microscope!

      I have a couple of questions:

      When you say take more orbits, do you mean rotating the sample more than once? I'm not sure I understand how that would be different from taking more pictures per rotation.

      I've attached a couple of pictures illustrating a common problem. Sometimes the generated images are offset significantly, with white space to one side. If the images are masked, would this confuse the software (should I be making an effort to keep the object centered in the image)?

      I'll continue to experiment. I really want this wasp scan to work: https://imgur.com/a/jv1AUFL

      So far I have gotten a max of 15 out of 40 cameras oriented, though.

      Thanks again for the help and encouragement!
      Last edited by waynewaynehello; 2018-08-07, 06:40 PM. Reason: Old link expired

      • Andrea Alessi
        3Dflow Staff
        • Oct 2013
        • 1335

        #4
        Hi Wayne,

        With more orbits, Roberto means that after you have completed a full orbit, you should change the camera position and rotation. You can see an example of this technique in our tutorial #1, where there are three orbits of the cherub statue at three different heights.

        For example, from your gif, I would then take another orbit with the head as the "center" of the image, and another one with the abdomen as the center of the image, so to speak.

        The object does not have to be at the center of the image. However, I see that the two images have two different resolutions, so Zephyr will treat them as two different physical cameras. Are you cropping the images, or is the microscope outputting them as we see them? Even if those are virtual cameras, do not crop the images if possible.

        • waynewaynehello
          3Dflower
          • Aug 2018
          • 4

          #5
          Hey, thanks for your response and sorry, I was asleep for a month.

          I can do scans centered around different parts of the object, but in my case it wouldn't be different from cropping out part of the image and adding whitespace --- the position of the camera and bug just repeats after a rotation. I could try poking the bug in a different spot, but this would introduce its own set of problems.

          After stitching, there is a bit of a jagged edge on the border due to imperfections in the motion, so the image is cropped to be rectangular. I could skip that step, but the images would still come out at slightly different resolutions. I now crop them all to the same resolution, though.

          I did manage to get one more 3D model in the last month, from 80 pictures (I started a Lite trial, which has since expired): https://skfb.ly/6ALAR and https://imgur.com/tbPjoZV . I think the reason Zephyr did such a good job is all the spots.

          I've found, with this success and a couple of other partial successes, that the only thing to really do is keep track of the various settings and just walk towards a better sparse reconstruction. It's still quite difficult, though I accept that a lot of that is probably down to imperfections on my side of things. Common problems are shiny, low-detail surfaces (such as the front) failing to align and ruining the rest of the reconstruction, and the object coming out "unfolded" even when the cameras are aligned. I would really love for an expert to try out any of my datasets and see how much of this is skill vs. fundamental problems with the images. Anyone is welcome to them, and I will be happy to provide them.

          Thanks again,

          Wayne
          Last edited by waynewaynehello; 2018-09-05, 01:04 AM.

          • Andrea Alessi
            3Dflow Staff
            • Oct 2013
            • 1335

            #6
            Hi Wayne,

            No problem!

            As I stated above, please avoid cropping when possible. If you need to remove parts of an image, use 3DF Masquerade instead (tutorial here: https://www.youtube.com/watch?v=dGRw8LbXknU ).

            If you'd like to share the photos with me, I'll happily have a look at them. I have to kindly ask you to share the un-cropped images, though.

            • waynewaynehello
              3Dflower
              • Aug 2018
              • 4

              #7
              Thanks for your understanding! I'll soon send you a Google Drive link by PM with a few samples (cropped, uncropped, and the component images), along with some reconstruction files.

              Sorry, I think I am just a bit confused as to how to avoid cropping (if the images need to be the same resolution --- and they don't start out that way). Most of the time the cropping isn't a dramatic change, more like a few dozen pixels.
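
              One thing I've been considering instead of cropping is padding every stitched scan onto a common canvas, so they all end up at the same resolution without throwing pixels away. A rough Python/OpenCV sketch is below (the white fill is just an example, and I'm not sure how padding interacts with the camera calibration, so treat it as an experiment):

              import cv2

              def pad_to_size(img, width, height, value=(255, 255, 255)):
                  """Centre an image on a width x height canvas by adding constant-colour
                  borders instead of cropping pixels away to reach a common resolution.
                  Assumes the image is no larger than the target canvas."""
                  top = (height - img.shape[0]) // 2
                  bottom = height - img.shape[0] - top
                  left = (width - img.shape[1]) // 2
                  right = width - img.shape[1] - left
                  return cv2.copyMakeBorder(img, top, bottom, left, right,
                                            cv2.BORDER_CONSTANT, value=value)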

              Also, how good do the masks in Masquerade need to be? I usually spend quite a bit of time trying to get each image perfect.

              Best,

              Wayne
              Last edited by waynewaynehello; 2018-09-08, 09:08 PM.

              • Andrea Alessi
                3Dflow Staff
                • Oct 2013
                • 1335

                #8
                Hi Wayne!

                Masks do not need to be perfect: if you have enough images, even if you miss some areas (usually along the border), there will be other images that see that same area. However, if you have a low number of images, it may be critical not to miss certain parts.

                Sure, feel free to send the images and I'll happily have a look at them!
