Individual Project: Photogrammetry Practice

Maike Prewett
creating immersive worlds
Nov 10, 2018

For my individual project, I wanted to practice the photogrammetry techniques we learned for our group project, One Last Look. I also wanted to test out Regard3D, a free, open-source alternative to Agisoft Photoscan, to compare functionality and results.

During our group project, I was impressed with the photogrammetry results, but there were several things I wanted to improve on: it was very obvious where we had missed coverage, or where objects had been moved during shooting (motorbikes driven out or parked, doors opened or closed, etc.). I wanted to test my knowledge in a smaller, more controlled environment, so I chose my bedroom.

I ended up having to do two separate shoots, through trial and error. In the first shoot, I missed several key areas, the camera's focal length slipped in several places, and because of my shooting angles, both the dressers and the headboard appeared distorted. When I uploaded the photos into Agisoft and generated a point cloud and mesh, I tried to separate the images into different chunks (as we did with the street photogrammetry), which only emphasized the distortion.

Two chunks in Agisoft Photoscan, similar to how we did it for our final group project
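For reference, here is a rough sketch of what that chunked setup looks like if you drive PhotoScan from its built-in Python console instead of the GUI. I did all of this through the interface, so the folder paths, chunk labels, and match settings below are placeholders rather than my actual settings, and the exact keyword arguments vary a little between PhotoScan versions:

```python
import glob
import PhotoScan  # PhotoScan's bundled Python module (called Metashape in newer releases)

doc = PhotoScan.app.document

# One chunk per area of the room; paths and labels are placeholders
photo_sets = {
    "chunk_1": glob.glob("/photos/bedroom/set1/*.JPG"),
    "chunk_2": glob.glob("/photos/bedroom/set2/*.JPG"),
}

for label, photos in photo_sets.items():
    chunk = doc.addChunk()
    chunk.label = label
    chunk.addPhotos(photos)
    chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)  # find tie points between photos
    chunk.alignCameras()                                # estimate camera poses / sparse cloud

doc.save("/photos/bedroom/bedroom.psx")
```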

To salvage the 3D models, I had to heavily doctor the chunks for three different areas, cutting out the distorted regions:

1) I had to cut the headboard to hide distortion and delete the closets; 2) this area was zoomed in, and the mirror appears cut off; 3) the chair is missing, and the objects in this section are blurry and out of focus; more detail was needed
Final results, a single chunk

I knew I was going to have to do a second photo shoot, so I changed my plan of attack slightly. The first priority was taking photographs at eye level, then at ground level, then from the height of a chair, and from each corner of the room, to manage the distortion of the objects. This time I was much more thorough, and instead of breaking the photographs into chunks, I aligned everything together and generated a single point cloud and textured mesh.
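The remaining steps for that single chunk look roughly like this when scripted; this is again only a sketch, the quality and texture-size settings are assumptions rather than the exact ones I used, and newer PhotoScan builds split the depth-map step out as shown here:

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk  # the single active chunk, photos already aligned

chunk.buildDepthMaps(quality=PhotoScan.HighQuality)   # per-image depth maps
chunk.buildDenseCloud()                                # dense point cloud
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 source=PhotoScan.DenseCloudData)      # triangulated mesh
chunk.buildUV(mapping=PhotoScan.GenericMapping)        # UV layout for the texture
chunk.buildTexture(blending=PhotoScan.MosaicBlending,
                   size=4096)                          # textured mesh

chunk.exportModel("/photos/bedroom/bedroom_mesh.fbx")  # e.g. for import into Unreal
```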

Additional details

I think both shooting from the different corners of the room and aligning the photographs as a single chunk made a big difference in quality.

Camera angles used
Another screenshot of the camera angles

Next, I uploaded all 378 photos to Regard3D. The first difference I noticed was that, during photo upload, the program displays the metadata for each photograph (camera type and even focal length), so you can go through and delete zoomed-in pictures before starting.
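If you want to run that same focal-length check yourself before handing photos to either program, the EXIF data is easy to read with a few lines of Python and the Pillow library; the folder path below is a placeholder:

```python
from pathlib import Path
from PIL import Image
from PIL.ExifTags import TAGS

photo_dir = Path("/photos/bedroom")  # placeholder path

for path in sorted(photo_dir.glob("*.JPG")):
    # _getexif() is Pillow's legacy helper that returns a flattened tag dictionary
    exif = Image.open(path)._getexif() or {}
    tags = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    print(path.name, tags.get("Model"), tags.get("FocalLength"))
```

From there you can set aside any shot whose focal length does not match the rest of the set before starting the reconstruction.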

Unfortunately, this process has taken much, much longer than expected: 20 hours into the initial upload, I still don't have results (compared to about four hours for Agisoft's process)! I will update later today.

I drew a quick floorplan of our apartment to see how many sections I would have to break my shoots into in order to capture the whole space: I could do my room and the other two bedrooms as single chunks, but I am worried about managing distortion in the kitchen and bathroom because of their cramped size and the inability to shoot from several different angles. The living/dining area would also have to be divided into separate chunks, and I am unsure how smooth they would look when combined in Unreal.

Christian: the files are saved on your computer, in the “Maike” folder next to “AM2”. The different versions are bed, desk, and corner, in addition to attempt2.
