I am a researcher trying to create 3D photogrammetric models of specimens I am studying. I photograph each specimen on a turntable to increase the number of unique features and improve feature matching. To orient each specimen so that the maximum number of unique features is visible, I mount it temporarily in clay. I take one set of pictures around the turntable, then remount the specimen in a second position and take another set, with the intent of capturing a complete 360-degree view.
My first thought was to upload all of the photos into a single workspace and hope that the non-overlapping parts of the two sets (the turntable and clay) would be removed during processing, as happens in some other photogrammetry software. This worked reasonably well, but I noticed artifacts and anomalous black spots where the specimen was in shadow or touching the clay.
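One workaround I am considering is pre-masking the turntable and clay out of each photo before processing, since most packages will then ignore the masked pixels during matching. Here is a minimal sketch of how I might generate such masks, assuming a dark, roughly uniform background and the OpenCV Python bindings; the file names and threshold value are placeholders to tune per lighting setup:

```python
import cv2
import numpy as np

def make_mask(image_path, thresh=40):
    """Return a binary mask that keeps the specimen and drops the background."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Pixels brighter than `thresh` are treated as specimen (255), everything
    # else as background (0); this assumes the turntable is darker than the
    # specimen, which may not hold for every setup.
    _, mask = cv2.threshold(img, thresh, 255, cv2.THRESH_BINARY)
    # Close small holes (e.g., shadowed pits on the specimen) so they stay in
    # the mask instead of becoming black artifacts in the reconstruction.
    kernel = np.ones((7, 7), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

mask = make_mask("specimen_0001.jpg")      # placeholder file name
cv2.imwrite("specimen_0001_mask.png", mask)
```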
Alternatively, I tried following this tutorial to merge the two workspaces using control points. The idea was to restrict each workspace's region of interest to the part of the specimen not covered by clay in that mounting, and then merge the two using control points to create one complete model. However, after setting the control points and merging the two workspaces, I did not get two separate sparse point clouds (see picture); instead I got a single sparse point cloud with both clay blobs visible.
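As I understand it, merging via control points should just amount to estimating one similarity transform (scale, rotation, translation) from the shared points and applying it to one workspace. Below is a short numpy sketch of the standard Kabsch/Umeyama estimate to illustrate the math I expected to happen; the point coordinates are made up, and this is not any particular package's internal code:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate s, R, t so that s * R @ p + t maps src points onto dst points.

    src, dst: (N, 3) arrays of corresponding control-point coordinates from
    the two workspaces (N >= 3, not collinear).
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Kabsch/Umeyama: SVD of the cross-covariance gives the best rotation.
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.sign(np.linalg.det(U @ Vt))           # guard against reflections
    signs = np.array([1.0, 1.0, d])
    R = U @ np.diag(signs) @ Vt
    s = (S * signs).sum() / (src_c ** 2).sum()
    t = dst.mean(axis=0) - s * R @ src.mean(axis=0)
    return s, R, t

# Toy example: four shared control points, second workspace scaled and shifted.
pts_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
pts_b = pts_a * 2.0 + np.array([1.0, 2.0, 3.0])
s, R, t = similarity_transform(pts_a, pts_b)
print(s, t)   # -> 2.0, [1. 2. 3.]
```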
Does anyone know why this happens? Is this not the best approach in this case? What is the best way to merge two workspaces when the same object appears in two different views but the background differs?