Hi!
I have the following problem: I want to export the camera intrinsics and extrinsics together with the reconstructed mesh in order to perform view-dependent texture mapping.
Therefore I looked at the possible export options, and the xmp option looked like just what I was looking for (although internals.txt and externals.txt would provide basically the same information).
I now have some questions:
There is an export option for OpenGL. I used this one first and computed the projection matrix from the given field of view and aspect ratio; the modelview matrix is already provided. I thought this should work in conjunction with the undistorted images, since the camera distortion has already been corrected on those. The result was nearly right but slightly off, and I realized the projection matrix was the problem: I had to incorporate the principal point and the focal length into it to fix this. If you only have the fov and aspect ratio, you essentially compute a projection matrix for a principal point that sits exactly in the middle of the image. But why do I have to incorporate the focal length and principal point again when I am already using the undistorted images? Those parameters should already have been applied (together with the radial distortion) during undistortion, and looking at the images in the camera navigator or at the exported images confirms that they are undistorted. (A sketch of the matrix I ended up building is below.)
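For reference, this is roughly the projection matrix I build from the intrinsics now. It is only a minimal sketch: the pinhole parameters fx, fy, cx, cy are assumed to be in pixels, and the signs of the principal-point terms depend on the image/NDC axis conventions, so they may need flipping:

```cpp
#include <array>

// Column-major 4x4 OpenGL projection matrix from pinhole intrinsics.
// fx, fy: focal lengths in pixels; cx, cy: principal point in pixels.
std::array<float, 16> projectionFromIntrinsics(
    float fx, float fy, float cx, float cy,
    float width, float height, float zNear, float zFar)
{
    std::array<float, 16> P{};                     // zero-initialized, column-major
    P[0]  = 2.0f * fx / width;                     // x scale from focal length
    P[5]  = 2.0f * fy / height;                    // y scale from focal length
    P[8]  = 1.0f - 2.0f * cx / width;              // principal-point offset x (sign is an assumption)
    P[9]  = 2.0f * cy / height - 1.0f;             // principal-point offset y (image y flipped, also an assumption)
    P[10] = -(zFar + zNear) / (zFar - zNear);      // standard GL depth mapping
    P[11] = -1.0f;                                 // perspective divide by -z
    P[14] = -2.0f * zFar * zNear / (zFar - zNear);
    return P;
}
```

With cx = width/2 and cy = height/2 the offset terms vanish and this reduces to the usual fov/aspect matrix, which is exactly why the fov-only version only works for a perfectly centered principal point.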
But now comes the next odd part. In 3DF Zephyr all datasets work perfectly: the resulting mesh looks great, and if you select images in the camera navigator you can see that they are aligned properly. The first dataset also looked perfectly fine with view-dependent texture mapping; the result looked exactly the same as when you overlay the images in 3DF Zephyr by selecting them in the camera navigator. On other datasets, however, the view-dependent texture mapping didn't work at all. In some datasets every image was off along the y-axis; in others, some images fitted perfectly and others were off again. How can that be? The workflow is exactly the same in all cases.
All images were taken with an iPhone X.
I tried several things out:
- use the online precomputed camera calibration
- fix the intrinsics completely
- only allow radial distortion parameters to be adjusted
- autocalibrate them completely, including the precalibration step and internals parameter adjustment
None of them delivered reliable results: all of them looked good within 3DF Zephyr, but the exported data didn't fit. Is there a bug in the pipeline, or is nobody really using this feature?
How are you supposed to use the exported data?
- "Export Matrices for OpenGL" - should you use the undistorted images together with the exported .opengl files and the mesh?
- "Export Projection Matrices" - since it contains a 3x4 projection matrix and 5 distortion parameters should you use the normal input images with those? I know that this option would make more sense in case you want to do anything else with the images since the projection matrix is defined to give you pixel coordinates and not NDC coordinates.
I hope someone knows something about this!
Greetings,
Christoph