3D Modeling and Orthomosaic Generation
About
The following was intended for a university class project, "2D and 3D composite image lab." For this project I got permission to map the Purdue University football stadium.
Description
For this flight, a Mavic 2 Pro was used to capture images at 198 ft above ground level with a 70 degree downward camera angle. Pix4D Capture was used for routing and image capture, and the flight lasted 42 minutes. Once completed, the data set consisted of 1011 images, which were processed in Pix4D to create the following images. These images are screen captures of the original model. The drone's primary means of image capture is a 1 inch CMOS sensor with 20 million pixels. The camera has a shutter speed range of 8 s to 1/8000 s, a still image size of 5472 x 3648, and outputs JPEG format.
Findings
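To put these numbers in perspective, the ground sample distance (GSD) can be estimated from the altitude and camera geometry. Below is a minimal sketch, assuming the commonly published Mavic 2 Pro sensor width (13.2 mm) and focal length (10.26 mm), which come from the camera's spec sheet rather than from this flight:

```python
# Rough ground sample distance (GSD) estimate for this flight.
# Sensor width and focal length are published Mavic 2 Pro specs
# (assumptions, not flight-log values). The formula assumes a nadir
# shot; the 70-degree gimbal angle means the true per-pixel footprint
# varies across each frame, so treat this as an approximation.

SENSOR_WIDTH_MM = 13.2      # 1-inch CMOS, assumed spec
FOCAL_LENGTH_MM = 10.26     # 28 mm full-frame equivalent, assumed spec
IMAGE_WIDTH_PX = 5472       # still image width from the camera specs above
ALTITUDE_M = 198 * 0.3048   # 198 ft AGL converted to meters

gsd_m = (SENSOR_WIDTH_MM * ALTITUDE_M) / (FOCAL_LENGTH_MM * IMAGE_WIDTH_PX)
print(f"GSD ~ {gsd_m * 100:.2f} cm/px")  # roughly 1.4 cm per pixel at nadir
```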
The following images are representative of the data set and show both the possibilities of 3D mapping and the level of detail it can capture.
Figure 1: 3D Image of the Purdue Football Field
Figure 1 shows the computer generated 3D model. The cars, field, and surrounding area are represented well. The detail allows us to identify the model and location of individual cars at the time of the flight. The field is well represented, with the goal posts and the scale of surrounding fixtures present. It is immediately noticeable that the press box to the left of the field is missing. This is due to the angle of the images. A corrective action would be to go back and capture photos of the sides, being sure to include plenty of overlap. Overlap in any scenario allows for a greater level of confidence in image placement.
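As a quick illustration of what "plenty of overlap" means for flight planning, the sketch below estimates how far apart shutter triggers would need to be for an 80% forward overlap, reusing the ~1.4 cm/px GSD estimated earlier. The 80% figure is a common photogrammetry rule of thumb, not a setting from this flight:

```python
# Hedged sketch: shutter trigger spacing for a target forward overlap,
# assuming nadir shots and the GSD estimate from the earlier snippet.

GSD_M = 0.014            # meters per pixel, from the earlier estimate
IMAGE_HEIGHT_PX = 3648   # still image height from the camera specs
TARGET_OVERLAP = 0.80    # common photogrammetry target, assumed here

footprint_m = GSD_M * IMAGE_HEIGHT_PX           # ground length covered per frame
spacing_m = footprint_m * (1 - TARGET_OVERLAP)  # distance between triggers
print(f"footprint ~{footprint_m:.1f} m, trigger every ~{spacing_m:.1f} m")
```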
Figure 2 allows us to see the extent of the void that can be created by either a lack of data or a lack of exposure in the captured images. Again, the camera angle, along with how close the flight path was to the subject itself, can be the cause of this issue. In the first step of data processing, images are compiled and tie points are created. Once it came time to create the point cloud and the triangle meshes, the lack of data caused the program to mesh the area together.
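To see why a gap in coverage gets meshed together rather than left open, consider a toy triangulation: the mesher connects whatever points exist, so an empty strip in the point cloud is simply spanned by a few oversized triangles. The sketch below uses SciPy's 2D Delaunay triangulation on random points purely as an analogy; Pix4D's actual 3D meshing is more sophisticated:

```python
# Toy demonstration of a void being "meshed over": remove a strip of
# points and watch the triangulation bridge the gap with large triangles.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(200, 2))
# carve out a "no data" strip, like the unexposed side of the press box
pts = pts[~((pts[:, 0] > 4) & (pts[:, 0] < 6))]

tri = Delaunay(pts)
x_extent = np.ptp(pts[tri.simplices, 0], axis=1)  # x-span of each triangle
print(f"{int((x_extent > 2).sum())} triangles bridge the empty strip")
```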
Tie points can be described as points of confidence. Here, 6 points were set as the parameter, so the processor will make sure it has identified at least 6 points shared between any two images before it will place that image. With multiple images, pieces are moved around until they all make the most sense. Adding GPS data increases the level of confidence in image placement and ensures that each image goes in one place, with the tie points correcting overlap placement.
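The matching step can be illustrated with ordinary feature matching. The sketch below uses OpenCV's ORB detector and a brute-force matcher to count shared points between two hypothetical frames, applying the same 6-point minimum used here. Pix4D's proprietary keypoint pipeline is not shown; this is just the general idea:

```python
# Minimal sketch of the tie-point idea using generic feature matching.
# ORB + brute-force matching stand in for Pix4D's proprietary pipeline.
import cv2

MIN_MATCHES = 6  # mirrors the 6-tie-point parameter set in processing

img1 = cv2.imread("frame_a.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical files
img2 = cv2.imread("frame_b.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

if len(matches) >= MIN_MATCHES:
    print(f"{len(matches)} shared points -- this pair can be placed")
else:
    print("too few shared points -- placement of this image would be skipped")
```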
Using 3D modeling and the appropriate camera angle, we can capture incredible detail and show the depth of terrain. In the foreground of Figure 3 we can see the handrails leading from the parking lot to the stadium. Looking to the background, we can see the rising concrete wall separating the hill from the descending road that wraps around the west side of the stadium.
Figure 4: 3D Image of the Stadium Seating
In Figure 4 we can see the level of detail obtained from this short flight. Each row of bleachers was modeled with some depth all the way around the field. A closer flight would allow for images that model the seats in greater detail.
The image in Figure 5 shows how well the process was able to compile the data for the subject of the map. We can see the goal posts, the depth of the buildings in the background, and the different awnings on the right side of the field, all clearly rendered with detail.
Processes like this allow for accurate tracking of stationary targets. Things like construction progress, city development, or erosion can all be modeled and tracked. This 3D model was loaded onto a HoloLens and then into an app, Augmented Reality Airplane Models, currently on the Google Play Store, allowing for 1-to-1 scale exploration of the infrastructure.
The image above (Figure 6) is the Pix4D generated orthomosaic from the data, opened in Adobe Photoshop. It has only a few problems, like gaps in the pixels, but it shows some things better than the 3D model.
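Those pixel gaps can often be patched after export. One generic approach, sketched below, is to mask the empty pixels and inpaint them with OpenCV. The file name is hypothetical, and this is not how Pix4D or Photoshop handles gaps internally:

```python
# Generic gap-filling sketch for an exported orthomosaic: mask the
# empty pixels and inpaint them. Assumes gaps export as pure black;
# adjust the mask test for however your export encodes missing data.
import cv2
import numpy as np

ortho = cv2.imread("stadium_orthomosaic.png")   # hypothetical file name
gray = cv2.cvtColor(ortho, cv2.COLOR_BGR2GRAY)
mask = np.uint8(gray == 0) * 255                # 255 where pixels are missing
filled = cv2.inpaint(ortho, mask, 3, cv2.INPAINT_TELEA)  # radius-3 inpaint
cv2.imwrite("stadium_orthomosaic_filled.png", filled)
```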
Figure 7 shows the press box that is not well represented in the 3D build. You can see some melting of the image, but it does not degrade the image to the point of not knowing what it is. The slight slope of the rooftop can be seen, and the shape of the press box, along with its height, can be inferred from this image.
The 2D map does not do well on either side of the stadium. Figures 8 and 9 show examples where the image was rotated up toward the viewer, so we can see walls that would not normally be visible from a top down image. This is not necessarily bad, since they are on the outside edges of the model where data becomes limited.
The last image, Figure 10, is admittedly blurry, but it shows a small point in the map where pixels are missing. A highly reflective surface would give off the same effect, but after review, the pieces of this small section are simply missing. This is another small fault in the 2D map that does not exist in the 3D model.
Regardless, both the 3D model and the 2D map came out very well overall.