My first attempt was a great success, second was weird, and third has failed

  • Random_Tox
    3Dflower
    • Dec 2018
    • 5

    Admittedly I'm not very far into the tutorials, but I hope someone can offer some direction on my problems.

    Just trying out the free version still.

    All photos were from my phone, LG LM-G710ULM. Not quite certain about the sensor specs on this device.

    My first attempt, I thought, turned out great: some Toyota engine cam covers. There's an attached screenshot, and here are the source images:


    My second attempt was something more ambitious: my car. I wonder if the problem is more than just lighting and reflective surfaces. I've also attached a screenshot, and here are the sources:


    Is this just because of the reflective surfaces? Also, how do I turn off the blue lines, which I think are camera representations? It seemed like the software was confused about the camera positions.

    My third attempt I went back to something simple and small: a little robot head toy. 3DF would only utilize 3 of the photos, so I went no further than the sparse point cloud. I had initially shot on a different tabletop with no grid paper, and it wouldn't accept any photos at all. Here are the source photos:


    Thanks for any feedback you can offer!
    Attached Files
  • CG-Guy
    3Dflover
    • Apr 2018
    • 151

    #2
    Metallic and transparent surfaces don't reconstruct very well because their specular reflections change as you take the shots.

    Try fruit or something natural
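To illustrate why: a specular highlight is not attached to the surface - its brightness at a given point depends on where the camera is, so the same spot looks different in every photo and feature matching breaks down. A toy sketch of the Phong specular term (all vectors and the shininess value are made up for illustration) shows the highlight vanishing as the camera moves:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def specular_intensity(normal, light_dir, view_dir, shininess=32):
    """Phong specular term: bright only when the mirror reflection of
    the light direction lines up with the viewing direction."""
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    # reflect the light direction about the surface normal
    r = [2 * dot(n, l) * ni - li for ni, li in zip(n, l)]
    return max(dot(r, v), 0.0) ** shininess

normal = [0.0, 0.0, 1.0]   # surface facing straight up
light  = [0.0, 0.0, 1.0]   # light directly overhead
cam_a  = [0.0, 0.0, 1.0]   # camera directly above: full highlight
cam_b  = [1.0, 0.0, 1.0]   # camera moved 45 degrees: highlight nearly gone

print(specular_intensity(normal, light, cam_a))  # 1.0
print(specular_intensity(normal, light, cam_b))  # ~1.5e-05
```

The same physical point goes from saturated white to nearly black between two viewpoints, which is exactly what confuses the matcher.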

    Comment

    • CarlW
      3Dfollower
      • Dec 2018
      • 11

      #3
      The third try has issues with how you are shooting your photos. It is a macro object, and you need a camera capable of getting in much tighter to your subject, with much higher resolution.

      The graph paper is not helping the software make sense of the scene, because the lines are all very similar. Taking that same paper and drawing some random patterns or colors on it would help the software work better. The oil stains on the pavement in your successful try demonstrate this to some degree.

      The shiny round top surface may also give you problems, because the light and reflections change as you take shots from different angles. Altering the surface finish would help, but might not be practical. As CG-Guy says, more organic, non-reflective objects might be an easier place to start.
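The grid-paper problem can be shown with a toy 1-D matching sketch (made-up signals, plain Python): against a repeating pattern, a small template matches equally well at many positions, so the software can't tell the windows apart; against a random "newspaper-like" pattern there is exactly one best match.

```python
import random

def match_scores(signal, template):
    """Sum-of-squared-differences of the template against every
    window of the signal (lower = better match)."""
    w = len(template)
    return [sum((signal[i + j] - template[j]) ** 2 for j in range(w))
            for i in range(len(signal) - w + 1)]

def ambiguous_matches(scores, tol=1e-9):
    """How many window positions are (near-)perfect best matches."""
    best = min(scores)
    return sum(1 for s in scores if s - best <= tol)

# a repeating "grid line" pattern vs. a random "newspaper" pattern
grid = [0, 0, 1, 0] * 20                    # period-4 stripes
random.seed(1)
paper = [random.random() for _ in range(80)]

template_len = 8
print(ambiguous_matches(match_scores(grid, grid[:template_len])))    # 19 ties
print(ambiguous_matches(match_scores(paper, paper[:template_len])))  # 1
```

Real feature matchers work on 2-D image patches, but the ambiguity is the same: repetitive texture gives many equally plausible correspondences, random texture gives unique ones.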

      Comment

      • cam3d
        3Dflover
        • Sep 2017
        • 682

        #4
        Hi Random_Tox

        I've had a look over your data sets and I can give the following feedback:

        1. The floor is doing all the hard alignment work here, and it's doing a great job of it! You have tons of overlapping features, and this has led to the successful reconstruction.

        2. Cars need to be dirty if you're going to scan them well - this car is just too clean. As CG-Guy said, metallic and transparent things are very difficult to reconstruct and require a lot of surface prep to get good results. This also applies to translucent things, very dark subjects and shiny subjects.

        3. If you tried using newspaper instead of grid paper you'd see much, much better alignment - though with things that small you're going to be fighting against DOF (depth of field) issues.

        Note: a DSLR typically outclasses any phone camera in terms of image quality, so if you're serious about scanning I'd highly recommend getting your hands on one.
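On the DOF point: a standard close-up approximation is DOF ≈ 2·N·c·(m + 1) / m², where N is the f-number, c the circle of confusion and m the magnification. A quick sketch (f/2 and the circle-of-confusion value are hypothetical numbers, not measured for any particular phone) shows how fast the in-focus zone collapses as you get closer:

```python
def macro_dof_mm(f_number, coc_mm, magnification):
    """Approximate total depth of field for close-up work:
    DOF ~ 2 * N * c * (m + 1) / m**2
    (N = f-number, c = circle of confusion in mm, m = magnification)."""
    return 2 * f_number * coc_mm * (magnification + 1) / magnification ** 2

# hypothetical numbers: f/2 lens, 0.004 mm circle of confusion
for m in (0.1, 0.5, 1.0):
    print(f"magnification {m}: DOF ~ {macro_dof_mm(2.0, 0.004, m):.3f} mm")
```

At one-tenth magnification the sharp zone is a couple of millimetres; at life-size it is a few hundredths of a millimetre, which is why macro subjects are such a fight.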

        Comment

        • Random_Tox
          3Dflower
          • Dec 2018
          • 5

          #5
          Thanks for the feedback!

          I've been continuing to experiment, with some more varied results. Many more failures, but I'm enjoying learning. I think a big limitation is my camera: a fairly new LG phone with a good camera, but the sensor is only 1/3.1". I dug out my old Nikon P100 with a 1/2.3" sensor, but so far its results are worse when shooting the same subject and environment. I did some tests at different shooting resolutions, thinking I might get more from my sensors. I learned some basic mesh/cloud cropping thanks to the tutorial vids. I have also been experimenting with and without chalk dust to dull the subject's surface.
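For a rough sense of what those sensor "inch type" designations mean, a common approximation is that the 1-inch type has a ~16 mm diagonal and smaller types scale proportionally (these are nominal figures, not exact specs for either camera):

```python
def type_diagonal_mm(type_fraction):
    """Rough diagonal of an 'inch type' sensor: the 1-inch type has
    a ~16 mm diagonal, and a 1/x type scales it down by x."""
    return 16.0 / type_fraction

phone = type_diagonal_mm(3.1)   # 1/3.1" type (the LG phone)
nikon = type_diagonal_mm(2.3)   # 1/2.3" type (the Nikon P100)
print(f"phone diagonal ~ {phone:.2f} mm")
print(f"Nikon diagonal ~ {nikon:.2f} mm")
print(f"area ratio ~ {(nikon / phone) ** 2:.2f}x")  # ~1.8x more sensor area
```

So the Nikon's sensor gathers nearly twice the light per shot on paper; its worse real-world results likely come down to its older processing and optics rather than sensor size.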

          One experiment was an avocado on a little box on a pattern of blobs. I think the avocado surface and environment were too weird. I kept pushing and tried masking, and I noticed focus was not great on some shots, so I eliminated those. Using the "deep" preset, 3DF could only use 7 of 25 shots, and the point cloud was pretty terrible. (Source images here: https://photos.app.goo.gl/cixS3Uyw95VQp1bw7.) I think this was partly a lesson in how reflective an avocado actually is.
          [Attached image: Guac.png]
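Culling out-of-focus shots can also be automated; a common sharpness proxy is the variance of the image's Laplacian (low variance suggests a blurry frame). A minimal pure-Python sketch on tiny synthetic "images" - in real use you would run this on the actual photos, e.g. loaded via OpenCV:

```python
def laplacian_variance(img):
    """Variance of the 4-neighbour Laplacian response over the image
    interior - a common sharpness proxy (higher = sharper)."""
    h, w = len(img), len(img[0])
    vals = [4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
            - img[y][x - 1] - img[y][x + 1]
            for y in range(1, h - 1) for x in range(1, w - 1)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

sharp  = [[(x + y) % 2 * 255 for x in range(8)] for y in range(8)]  # checkerboard
blurry = [[128 for _ in range(8)] for _ in range(8)]                # featureless

print(laplacian_variance(sharp))   # large: lots of edge response
print(laplacian_variance(blurry))  # 0.0: nothing for the matcher to grab
```

Sorting a shoot's frames by this score and dropping the bottom few is a quick, repeatable version of the manual focus check.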

          I tried a couple more objects in the same environment. This tester was the best result. (Sources: https://photos.app.goo.gl/NTozeWZV6Ebgoicx7)

          [Attached image: Tester.png]

          Comment

          • Random_Tox
            3Dflower
            • Dec 2018
            • 5

            #6
            Today I went back to the robot head. I agree I need to optimize macro capability for this little fellow. I did a matrix of shots with my Nikon and phone, before and after chalk, and at different shooting resolutions. The best results were actually from my phone, so here are some results without and with masking. (Sources: https://photos.app.goo.gl/Ft836o2WsMyAyG157)

            Masked:

            [Attached image: masked.png]

            Not masked:

            [Attached image: Unmasked.png]

            The head itself came out a bit better when I didn't mask the photos and instead cropped the dense cloud before running mesh extraction. The masked attempt was missing the back of the bowl, and the shaded part of the head was more deformed.

            Next steps are to set up a cleaner studio and try to optimize macro photos.

            Comment

            • CarlW
              3Dfollower
              • Dec 2018
              • 11

              #7
               I believe your results would come out much better if all of your photos were closer to the last one in the series. Your camera is just too far away in most of the pictures to capture any real detail in a macro subject. The subject should nearly fill the frame in each shot - if anything, the shots where the subject nearly fills the frame should be the farthest you ever get from it. If your camera allows for it (and it probably will not), you should be capturing detail shots where there is nothing in the picture but one of the holes in the object. You should really have several shots of every detail you are trying to capture.

               To get this close you will likely need a true macro lens, or something comparable. I would suggest testing the minimum distance at which your equipment can focus. You can do this by taking a picture of a ruler. Subjects close to that size may be about the smallest your camera equipment can reproduce with accuracy.

               Your lighting could also be more even - the back side is very dark. You could improve this by adding a light source, or by using a large white surface to reflect light back into the scene. Good luck.
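The ruler test also gives you numbers to work with: if a known length of ruler spans a known number of pixels at the closest focusing distance, you get mm per pixel directly, and you can judge how much of the frame the subject will fill. A small sketch with hypothetical measurements (50 mm / 2000 px and a 30 mm subject are made-up values, not from the photos in this thread):

```python
def mm_per_pixel(ruler_mm, ruler_pixels):
    """Ground resolution from a test shot of a ruler: the smallest
    detail increment the setup can record at that distance."""
    return ruler_mm / ruler_pixels

def frame_fill(subject_mm, frame_mm):
    """Fraction of the frame the subject occupies along one axis."""
    return subject_mm / frame_mm

# hypothetical test shot: 50 mm of ruler spans 2000 pixels
scale = mm_per_pixel(50, 2000)
print(f"{scale} mm/pixel")                   # 0.025 mm/pixel

# a 30 mm robot head in a frame that covers 100 mm of the table
print(f"fill: {frame_fill(30, 100):.0%}")    # 30% - camera is too far away
```

If the fill fraction is well under 100%, most of your pixels are spent on the background rather than the subject, which matches the advice above.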

              Comment
