How to combine a high-res ear mesh with a low-res head mesh?

  • nadimhossain
    3Dflower
    • Sep 2017
    • 8


    Hi, I'm working on a research project that involves characterizing how different features of the outer ear, such as the pinna, the concha, and various cavities, contribute to how humans perceive sound.

    For this, I have set up a photogrammetry rig with 50 Canon 1300D DSLR cameras. The subject sits on a chair at the centre, and all 50 linked cameras fire at once, giving a reasonable model of the head and torso. However, the ears on that model are not detailed enough for me to run accurate numerical simulations.

    Hence, I'm using a standard mobile phone camera (a Samsung Galaxy S7 Edge, to be specific) to acquire higher-resolution models of just the ears themselves, as I can get very close to them. By higher resolution I actually mean more detailed, in the sense that particular features are identifiable, not raw megapixel count. I do this by executing two arc sweeps for each ear: one horizontal sweep spanning from the front of the face to the back of the head, and one vertical sweep spanning from the top of the ear looking down to the earlobe looking up. This gives me a sufficiently detailed model to run my numerical simulations.

    But to do so, I need to combine the high-resolution ear mesh (acquired with the phone) with the lower-resolution head and torso model (acquired with the photogrammetry rig). In other words, I need to remove the low-resolution ears and replace them with the high-resolution ones. How can I do this accurately? I have attached photos to give you an idea of what I'm working with.

    PLEASE HELP!

    Also, I would welcome suggestions to simplify and improve the acquisition methodology.
  • Andrea Alessi
    3Dflow Staff
    • Oct 2013
    • 1335

    #2
    Hi nadimhossain,

    Since you have a controlled environment, I would probably still do the acquisition as you do it. However, rather than processing two different workspaces, have you tried running all the photos (both body and phone photos) together? That would be my first test.

    If all goes well, Zephyr should be able to orient everything and generate a 3D model that is more detailed in the ear region. I see two possible problems with this approach:

    - the subject will most likely move, so if the background is visible in your ear pictures, there is a risk that Zephyr will use the background for orientation. In this case, consider masking the ear photos.

    - it is possible that the ear close-ups are too distant from the rig photos, so Zephyr may not be able to merge the two clusters. In this case, consider using control points or, better, take a few more shots moving from near one of the rig cameras towards the ear, to help Zephyr "follow" the path and merge the two sets.
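    Control points give you corresponding 3D positions in both clusters, which pins down a rigid transform between them. If you ever need to compute that transform yourself (e.g. to pre-align the ear mesh to the head mesh before stitching), the standard tool is the Kabsch algorithm. A minimal numpy sketch, where the landmark coordinates are made up for illustration (in practice you would pick matching points such as the tragus, earlobe tip, and helix top on both meshes):

    ```python
    import numpy as np

    def rigid_align(src, dst):
        """Kabsch algorithm: rotation R and translation t mapping src onto dst.

        src, dst: (N, 3) arrays of corresponding landmark coordinates.
        """
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        H = src_c.T @ dst_c
        U, S, Vt = np.linalg.svd(H)
        # Sign correction so R is a proper rotation (det = +1), not a reflection
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t

    # Hypothetical landmarks in the ear-mesh coordinate frame
    ear_pts = np.array([[0.0, 0.0, 0.0],
                        [1.0, 0.0, 0.0],
                        [0.0, 2.0, 0.0],
                        [0.0, 0.0, 1.5]])

    # Simulate the same landmarks in the head-mesh frame (rotated + translated)
    theta = np.deg2rad(30)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    head_pts = ear_pts @ R_true.T + np.array([10.0, -5.0, 2.0])

    R, t = rigid_align(ear_pts, head_pts)
    aligned = ear_pts @ R.T + t
    print(np.max(np.abs(aligned - head_pts)))  # tiny residual, near machine precision
    ```

    With three or more well-spread, non-collinear landmarks per ear, this recovers the pose uniquely; an ICP refinement on the full vertex sets can then tighten the fit.
    
    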

    If you can share a sample dataset with me, I'll be happy to give you more detailed advice!

    This is what I would do because otherwise, working at the mesh level would mean manually replacing the ears of the body mesh with the ear mesh in third-party software (e.g. Blender), and I think it's possible to avoid that.
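    If you do end up working at the mesh level, the first step is cutting the low-res ear out of the head mesh, typically by discarding all vertices within some radius of an ear landmark, then bringing in the aligned high-res ear. A toy numpy sketch of the selection step, where the vertex cloud, `ear_center` landmark, and `cut_radius` are all made-up stand-ins for the real scan (actual stitching of faces and hole filling would be done in Blender or MeshLab):

    ```python
    import numpy as np

    # Toy vertex cloud standing in for the low-res head mesh
    rng = np.random.default_rng(0)
    head_vertices = rng.uniform(-1.0, 1.0, size=(1000, 3))

    # Assumed landmark at the old ear, and an assumed cut radius
    ear_center = np.array([0.8, 0.0, 0.0])
    cut_radius = 0.5

    # Keep only vertices outside the cut sphere around the old ear
    dist = np.linalg.norm(head_vertices - ear_center, axis=1)
    trimmed = head_vertices[dist > cut_radius]

    # Toy stand-in for the high-res ear scan, already aligned to head coordinates
    ear_vertices = ear_center + 0.25 * rng.normal(size=(500, 3))
    combined = np.vstack([trimmed, ear_vertices])
    ```

    On a real mesh you would delete faces (not just vertices) touching the cut region and then bridge or remesh the boundary loop, which is exactly the fiddly manual work the in-Zephyr approach above avoids.
    
    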
