Camera intrinsics/extrinsics export problem

  • urpgor
    3Dflower
    • Jan 2019
    • 3

    Camera intrinsics/extrinsics export problem

    Hi!

    I have the following problem: I want to export the camera intrinsics and extrinsics together with the reconstructed mesh in order to perform view-dependent texture mapping.
    So I looked at the available export options, and the xmp option looked like just what I was looking for (although internals.txt and externals.txt would provide basically the same information).

    I now have some questions.

    There is an export option for OpenGL. First I used this one and computed the projection matrix from the given field of view and aspect ratio; the modelview matrix is given already. I used this option together with the undistorted images (since the lens distortion has already been corrected on those), so I expected it to work. The result was nearly right but slightly off, and I realized that the projection matrix was the problem: I had to incorporate the principal point and the focal length into the projection matrix to fix it. If you only have the fov and the aspect ratio, you essentially compute a projection matrix for a principal point that sits exactly in the middle of the image (see the sketch below). But why do I have to incorporate the focal length and principal point again if I am already using the undistorted images? These parameters (together with the radial distortion) should already be applied to the images, and looking at them in the camera navigator or at the exported files shows that they are indeed undistorted.
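    For illustration, this is the kind of off-center projection matrix I ended up building (just a sketch of my approach, with made-up names; the sign of the y entries depends on whether the image origin is top-left or bottom-left):

    #include <cstring>

    // Build a column-major 4x4 OpenGL projection matrix from pinhole
    // intrinsics: focal lengths fx, fy and principal point cx, cy in pixels,
    // image size w x h, near/far planes n and f. With only fov and aspect
    // you implicitly get cx = w/2, cy = h/2; the two entries in the third
    // column are what encode an off-center principal point.
    void projectionFromIntrinsics(double fx, double fy, double cx, double cy,
                                  double w, double h, double n, double f,
                                  double m[16]) // m[col*4 + row]
    {
        std::memset(m, 0, 16 * sizeof(double));
        m[0]  = 2.0 * fx / w;            // x scale
        m[5]  = 2.0 * fy / h;            // y scale
        m[8]  = 1.0 - 2.0 * cx / w;      // x shift from the principal point
        m[9]  = 2.0 * cy / h - 1.0;      // y shift (sign depends on y convention)
        m[10] = -(f + n) / (f - n);      // usual OpenGL depth mapping
        m[11] = -1.0;
        m[14] = -2.0 * f * n / (f - n);
    }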

    But now comes the next funny part. In 3DF Zephyr all datasets work perfectly: the resulting mesh looks great, and if you select images in the camera navigator you can see that they are aligned properly. The first dataset also looked perfectly fine with view-dependent texture mapping; the result looked exactly the same as when you overlay the images in 3DF Zephyr by selecting them in the camera navigator. On the other datasets, however, the view-dependent texture mapping didn't work at all. In some datasets every single image was off along the y-axis; in others, some images fit perfectly and others were off again. But how can that be? The workflow is always the same in all cases.
    All images were taken with an iPhone X.
    I tried several things out:
    - use the online precomputed camera calibration
    - fix the intrinsics completely
    - only allow radial distortion parameters to be adjusted
    - autocalibrate them completely, incl. the precalibration step and internal parameter adjustment
    None of these delivered reliable results: everything looked good within 3DF Zephyr, but the exported data didn't fit. Is there a bug in the pipeline, and is nobody really using this feature?

    How are you supposed to use the exported data?
    - "Export Matrices for OpenGL" - should you use the undistorted images together with the exported .opengl files and the mesh?
    - "Export Projection Matrices" - since it contains a 3x4 projection matrix and 5 distortion parameters should you use the normal input images with those? I know that this option would make more sense in case you want to do anything else with the images since the projection matrix is defined to give you pixel coordinates and not NDC coordinates.

    I hope someone knows something about this!

    Greetings,
    Christoph
  • Andrea Alessi
    3Dflow Staff
    • Oct 2013
    • 1335

    #2
    Hello Christoph,

    Not sure what you're doing wrong, but I double-checked and everything should work well.

    Have you tried exporting in FBX? It's probably easier, as you get the cameras as well.


    • urpgor
      3Dflower
      • Jan 2019
      • 3

      #3
      I tried the FBX export and made some more experiments. As I expected, the extrinsic parameters are not the problem, since my computed camera positions and rotations exactly match the ones defined in the FBX. Therefore the problem has to be somewhere in the projection.
      When exporting the camera parameters (extrinsic and intrinsic), is there any conversion happening inside 3DF Zephyr where a bug could hide?
      I'm using OpenGL for my rendering, so I have to compute a valid projection matrix. This is a very easy task, as can be seen here: http://ksimek.github.io/2013/06/03/c...ras_in_opengl/
      Proj = NDC * Persp
      where NDC is a simple orthographic projection matrix and Persp is constructed from the known camera intrinsics. So I highly doubt that the problem is there.
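      To make this concrete, here is roughly how I build it (my own sketch following the article, not Zephyr code; names and sign conventions are my assumptions):

      #include <array>

      using Mat4 = std::array<double, 16>; // row-major 4x4

      // C = A * B for row-major 4x4 matrices.
      Mat4 mul(const Mat4& A, const Mat4& B) {
          Mat4 C{};
          for (int r = 0; r < 4; ++r)
              for (int c = 0; c < 4; ++c)
                  for (int k = 0; k < 4; ++k)
                      C[r * 4 + c] += A[r * 4 + k] * B[k * 4 + c];
          return C;
      }

      // Proj = NDC * Persp as in the linked article. Persp carries the pinhole
      // intrinsics (fx, fy, cx, cy in pixels) into clip space while keeping a
      // usable depth; NDC is a glOrtho-style matrix over the pixel rectangle.
      // Camera looks down -z; n, f are the near/far planes.
      Mat4 projFromIntrinsics(double fx, double fy, double cx, double cy,
                              double w, double h, double n, double f) {
          Mat4 persp = { fx,  0.0, -cx,   0.0,
                         0.0, fy,  -cy,   0.0,
                         0.0, 0.0, n + f, n * f,
                         0.0, 0.0, -1.0,  0.0 };
          // ortho(0, w, 0, h, n, f); swap the signs of the two y entries
          // to flip y if the image origin is top-left instead of bottom-left.
          Mat4 ndc = { 2.0 / w, 0.0,      0.0,            -1.0,
                       0.0,     2.0 / h,  0.0,            -1.0,
                       0.0,     0.0,     -2.0 / (f - n),  -(f + n) / (f - n),
                       0.0,     0.0,      0.0,             1.0 };
          return mul(ndc, persp);
      }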

      How is this all handled internally within the 3DF Zephyr?

      Using the exported 3x4 matrix (ppm export), I manually checked the projected 2D position of a known 3D point, and that works. This would indicate that the problem is somewhere in my code, but as I said in my first post, I have one dataset that works perfectly fine. That's why I still somewhat suspect a problem in the 3DF Zephyr export.


      • Roberto
        3Dflow
        • Jun 2011
        • 559

        #4
        Hi Urpgor,

        Could you please post a snippet of the code you use for the import? Maybe we can start from there to help. Are you using the projection matrices directly from Zephyr, or some other export method?
        Please note that with the extrinsic convention in computer vision, the z axis is flipped with respect to the OpenGL convention.
        The intrinsics shouldn't be a problem to convert. Let's suppose you are using the undistorted images (no need for the radial distortion parameters) and keep the optical center at the image center. Then the fov and aspect ratio in OpenGL should be set as follows:
        double fovy = 2.0 * std::atan( 0.5 * height / fy ) * 180.0 / pi;  // vertical fov in degrees
        double aspect = ( width * fy ) / ( height * fx );
        where fx, fy are the focal lengths in pixels from Zephyr, width and height are the image size in pixels, and pi is π.
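        For example, with GLM it could look like this (just a sketch; the near/far planes are placeholders, and note that glm::perspective expects radians, so the 180/pi conversion is dropped):

        #include <cmath>
        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // fx, fy: focal lengths in pixels from Zephyr; width, height: image size.
        glm::mat4 projFromZephyr(double fx, double fy, double width, double height)
        {
            double fovy   = 2.0 * std::atan(0.5 * height / fy); // radians here
            double aspect = (width * fy) / (height * fx);
            // 0.1 and 100.0 are placeholder near/far planes; pick values
            // that bracket your mesh.
            return glm::perspective(static_cast<float>(fovy),
                                    static_cast<float>(aspect), 0.1f, 100.0f);
        }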

        Hope this helps.


        • urpgor
          3Dflower
          • Jan 2019
          • 3

          #5
          Hi Roberto!
          I'm not doing anything special during the export, and I'm reading the files as they are. I'm also fully aware of the z-flip in OpenGL (and the y-flip compared to image space).
          I know these formulas, and yes, if you compute fovy and aspect like this you get exactly the values that are in the .openGL file. But that is not the solution to the problem, so I investigated further.

          For the following tests I used the undistorted images, as they are the way to go. To back up my suspicion that the 3x4 ppm matrix is correct, I took the 3D position of an easily distinguishable point and projected it into two different images (one from the front and one from the side); the projected positions were perfect in both images.
          When I did the same with the information from the .openGL file, i.e. the view matrix + fovy + aspect ratio, the projected points were slightly off. But what did I do here? I computed a projection matrix with the principal point exactly in the middle of the image, which was obviously not right.
          So I tried again, this time incorporating the optical center into the computation. Now the x positions of the projected point were right in both images, but the y positions were off. I knew that the 3x4 ppm matrix was right, so I had to figure out how to reproduce it. That should actually be really simple:
          ppm_computed = proj_mat * view_mat
          ignoring the third row, since we don't care about depth at that point.
          Comparing the original 3x4 ppm matrix with the computed one, I saw that the first and last rows were exactly the same, but the second one didn't match (and therefore the projected y coordinates weren't right). What was going on here?
          Now comes the trick that I didn't think about in the first place: in order to incorporate the principal point correctly on the y axis, you have to know which coordinate system you are dealing with (i.e. whether y goes up or down). A simple flip can be achieved by negating the focal length in y, and by doing so you end up exactly at the given 3x4 ppm matrix (see the sketch below).
          Therefore I can now say for sure that the information given in the .openGL file alone is not enough to properly align the images on the mesh.
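          In code, the check that finally matched looks roughly like this (my own sketch using the equivalent K * [R|t] form; the matrix layouts are assumptions):

          #include <array>

          using Mat34 = std::array<double, 12>; // row-major 3x4

          // Rebuild the exported 3x4 ppm from intrinsics and the view matrix:
          // K * [R|t], with fy negated to flip between y-up and y-down image
          // coordinates. [R|t] is assumed in computer-vision convention
          // (camera looks down +z); coming from an OpenGL modelview, the z
          // axis would be flipped.
          Mat34 rebuildPPM(double fx, double fy, double cx, double cy,
                           const Mat34& view)
          {
              const double K[9] = { fx,  0.0, cx,
                                    0.0, -fy, cy,
                                    0.0, 0.0, 1.0 };
              Mat34 ppm{};
              for (int r = 0; r < 3; ++r)
                  for (int c = 0; c < 4; ++c)
                      for (int k = 0; k < 3; ++k)
                          ppm[r * 4 + c] += K[r * 3 + k] * view[k * 4 + c];
              return ppm; // compare entry by entry with the exported matrix
          }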

          My problem is solved now, and I know exactly how to interpret the data exported by 3DF Zephyr. But what I don't understand is why the .openGL files look the way they do, since they obviously lack information.

          Greetings,
          Christoph
