Concordia International School Shanghai — A Case Study on the Math of 3D Modeling

By Lucas Jeong, Ellen Zhao, and Victoria Wong

Lucas Jeong
Beauty in Mathematics
12 min read · Jun 12, 2024

--

Objective

The routine life of a student: sleep, wake up, and go to school. One ordinary day, in an ordinary AP Precalculus class, Dr. Tong introduced the transformation art project: an open-ended final assignment that applied mathematics to art. We were given a variety of project options, but none intrigued us as much as the application of math to digital 3D art. Instead of modeling famous Shanghai architecture, as many students had chosen to do in previous years, we decided to model our own school.

Case Study: Concordia International School Shanghai aims to evaluate a variety of 3D modeling techniques available in the open-source software Blender, as well as image analysis methods such as photogrammetry using Meshroom. The final results of this project are:

1. A detailed photogrammetry scan of the HS building to be used in promotional and interactive materials.

2. A hand-modeled 3D model of the HS building and the Phoenix Center.

3. A breakdown and mathematical representation of transformations and translations via texture wrapping, UV mapping, and normal/glossiness mapping (closely related to parallax mapping).

For the final result, please visit this Behance link: Concordia International School Shanghai

Part 1: Photogrammetry

Overview

Photogrammetry is the science of obtaining 3D information from photographic images. It draws on optics, projective geometry, and related techniques to obtain precise measurements from photographs. The process involves several stages of digital image capture and processing to generate 2D or 3D models of the photographed object or scene. Key data inputs include the 3D coordinates defining object point locations, the image coordinates pinpointing where those points appear in the photographs, the exterior orientation specifying the camera's position and view direction, and the interior orientation describing the camera's geometric parameters such as focal length. Although there are many different types of photogrammetry, we will rely on the conversion of a dataset of 2D photos into 3D data, commonly known as stereophotogrammetry, to create our 3D model.

The process of stereophotogrammetry involves capturing overlapping images from different angles or positions, typically using specialized cameras or sensors (we will be using an iPhone 15 Pro). These images are then processed in Meshroom (our software of choice because it is free and feature-rich), which analyzes the geometric relationships between the images and the objects depicted. By triangulating the positions of corresponding points in the overlapping images, the software can calculate the three-dimensional coordinates of those points with high accuracy. This process is repeated for numerous points across the images, allowing the reconstruction of a highly detailed 3D model of the object or scene being studied.
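To make the triangulation step concrete, here is a minimal two-view linear (DLT) triangulation sketch in Python with NumPy. This is not Meshroom's actual pipeline; the camera matrices and the point are made-up assumptions for illustration:

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices; pt1, pt2: matching pixel coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    # Best solution = right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Hypothetical cameras: shared intrinsics, second camera shifted 1 unit along X.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both views, then recover it by triangulation.
X_true = np.array([0.5, 0.2, 4.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # ≈ [0.5 0.2 4. ]
```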

Image Acquisition and Processing

Ninety-nine images were taken of the back side of the HS building to recreate it using photogrammetry.

3D Reconstruction

After collecting photographic data and processing the dataset, we are left with a (rough) model and its textures. It looks quite similar to the real facade of the building, especially when the lighting setup in Blender is made to closely replicate the lighting of the dataset photos.

(The final result wasn’t the best, so let’s proceed with the Blender model)

The following video showcases the final processed model.

Part 2: Hand Modeling

Overview

We will be utilizing Blender 4.1 to model the HS and PC buildings. Blender has gained widespread adoption across industries like automotive, architecture, and media/entertainment for its robust 3D modeling, animation, and rendering capabilities. A major strength of Blender is its vast add-on ecosystem created by an active community of developers. These add-ons let users customize Blender and optimize workflows for specific tasks and pipelines. For our modeling project, we will follow these main steps:

First, we will construct the 3D geometry of each building through mesh editing using Blender's modeling toolset. This process requires orthographic drawings of the school, provided by the architectural firm behind the school's design. Elevation and layout plans were obtained through the operations department.

Once the 3D mesh is modeled, the next critical step is to unwrap its UV texture coordinates. This process flattens the 3D mesh onto a 2D plane in preparation for texture mapping. Careful UV unwrapping is essential for minimizing visible seams and distortion when applying textures. After the UVs are unwrapped, we can create textures and materials and map them onto the unwrapped model geometry, bringing our 3D model to life.
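The unwrap step can also be scripted. Below is a minimal sketch using Blender's Python API (bpy), assuming the active object is a mesh with seams already marked; we performed this step interactively rather than in code:

```python
import bpy

# Scripted equivalent of Blender's interactive unwrap.
# Assumes the active object is a mesh with seams marked where needed.
obj = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Angle-based unwrap tries to preserve face angles, minimizing texture distortion.
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.001)
bpy.ops.object.mode_set(mode='OBJECT')
```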

Modeling

Although Blender supports a variety of 3D modeling techniques, we will be utilizing box modeling and boolean modeling to create our 3D model.

The box modeling technique is a widely used approach in 3D polygonal modeling, where the artist starts with a basic geometric primitive like a cube, cylinder, or sphere and progressively shapes it until the desired form is achieved. We start with a low-resolution polygonal mesh of the basic shape and, through a combination of extruding, scaling, and other vertex transformations, roughly sculpt the mesh to approximate the intended appearance.
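As a scripted illustration of box modeling in bpy (we modeled interactively; the dimensions here are hypothetical), the following starts from a cube and extrudes its top face upward to block out a second storey:

```python
import bpy
import bmesh

# Start from a fresh cube primitive (the classic box-modeling starting point).
bpy.ops.mesh.primitive_cube_add(size=2)
obj = bpy.context.active_object

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_mode(type='FACE')
bpy.ops.mesh.select_all(action='DESELECT')

# Select only the top face (normal pointing up) so we can extrude it.
bm = bmesh.from_edit_mesh(obj.data)
for face in bm.faces:
    face.select = face.normal.z > 0.9
bmesh.update_edit_mesh(obj.data)

# Extrude the selected face and move it 2 units up along Z.
bpy.ops.mesh.extrude_region_move(TRANSFORM_OT_translate={"value": (0, 0, 2)})
bpy.ops.object.mode_set(mode='OBJECT')
```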

Boolean modeling is a 3D technique where new shapes are created by combining or subtracting geometry between two separate mesh objects. The resulting form emerges through a boolean operation applied to the objects' volumes. Boolean unions join and merge two object meshes into a single new shape encapsulating their combined volumes. Alternatively, boolean differences allow one object's geometry to be subtracted from the other, carving out negative space. We will be using boolean differences to create panel detail and to eliminate hidden geometry for render optimization.
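Here is a sketch of how a boolean difference is set up through bpy (the object names are hypothetical; we used the Boolean modifier interactively):

```python
import bpy

# Assumes two mesh objects exist: the building wall and a "cutter" block
# positioned where a panel recess should be carved out.
wall = bpy.data.objects["Wall"]      # hypothetical object name
cutter = bpy.data.objects["Cutter"]  # hypothetical object name

# Add a Boolean modifier that subtracts the cutter's volume from the wall.
mod = wall.modifiers.new(name="PanelCut", type='BOOLEAN')
mod.operation = 'DIFFERENCE'
mod.object = cutter

# Apply the modifier to bake the result into the wall's mesh.
bpy.context.view_layer.objects.active = wall
bpy.ops.object.modifier_apply(modifier=mod.name)

# Hide the cutter so it doesn't show up in renders.
cutter.hide_render = True
```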

Math Behind Vertex Transformations

In 3D modeling, vertices represent the points in space that define the shape and structure of a 3D object. The position of a vertex is typically defined by its x, y, and z coordinates. The formula for calculating the position of a vertex depends on the specific context and requirements of the 3D model.

Vertex Position = (x, y, z)

1. Vertex Translation:

Vertex translation involves moving a vertex from its original position to a new position in 3D space. It is achieved by adding a translation vector to the coordinates of the vertex. The formula for vertex translation is:

New Vertex Position = Original Vertex Position + Translation Vector

(X + tx, Y + ty, Z + tz), where (tx, ty, tz) is the translation vector
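For example, translating the vertex (1, 2, 3) by the vector (4, 0, -1) gives (5, 2, 2). In NumPy this is plain vector addition:

```python
import numpy as np

vertex = np.array([1.0, 2.0, 3.0])
translation = np.array([4.0, 0.0, -1.0])

new_vertex = vertex + translation  # component-wise addition
print(new_vertex)  # [5. 2. 2.]
```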

2. Vertex Rotation:

Vertex rotation involves rotating a vertex around a specified axis or point in 3D space. The pivot can be the global origin (0, 0, 0), the local center of the mesh as defined by the modeler, or any other chosen origin point. There are different rotation formulas depending on the axis of rotation (X, Y, or Z) and the angle of rotation.

Let’s say we have a vertex (X, Y) and want to rotate it by an angle θ around a center point (CX, CY).

Translate the vertex by subtracting the center point, so the vertex is now relative to the origin (0, 0):

X' = X - CX

Y' = Y - CY

The 2D rotation matrix follows this form:

[ cos(θ)  -sin(θ) ]
[ sin(θ)   cos(θ) ]

where θ is the angle of rotation in radians.

To rotate a 2D vector (x, y) by an angle θ, you multiply the vector by the rotation matrix:

[ x' ]   [ cos(θ)  -sin(θ) ] [ x ]
[ y' ] = [ sin(θ)   cos(θ) ] [ y ]

Applying the 2D rotation matrix to the translated coordinates gives the rotated coordinates (X'', Y''):

X'' = X' * cos(θ) - Y' * sin(θ)

Y'' = X' * sin(θ) + Y' * cos(θ)

Translate the rotated vertex back by adding the original center point:

NEW X = X'' + CX

NEW Y = Y'' + CY

So the final formulas are:

NEW X = CX + (X - CX) * cos(θ) - (Y - CY) * sin(θ)

NEW Y = CY + (X - CX) * sin(θ) + (Y - CY) * cos(θ)
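As a quick numeric check of these formulas: rotating the point (2, 1) by 90° counterclockwise around the center (1, 1) should land on (1, 2). A short Python sketch confirms this:

```python
import math

def rotate_2d(x, y, cx, cy, theta):
    """Rotate (x, y) by theta radians counterclockwise around the center (cx, cy)."""
    dx, dy = x - cx, y - cy  # translate so the center sits at the origin
    new_x = cx + dx * math.cos(theta) - dy * math.sin(theta)
    new_y = cy + dx * math.sin(theta) + dy * math.cos(theta)
    return new_x, new_y

print(rotate_2d(2, 1, 1, 1, math.pi / 2))  # ≈ (1.0, 2.0)
```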

We can now generalize this example and apply it to the different axes. You may notice that during the Z-axis rotation, only the X and Y values change. This is because the mesh is rotated about the Z-axis through its local origin (or, in this case, the global origin), so every point's Z coordinate remains the same.

Rotation around the X-axis:

New X = X

New Y = Y * cos(θ) - Z * sin(θ)

New Z = Y * sin(θ) + Z * cos(θ)

Rotation around the Y-axis:

New X = X * cos(θ) + Z * sin(θ)

New Y = Y

New Z = -X * sin(θ) + Z * cos(θ)

Rotation around the Z-axis:

New X = X * cos(θ) - Y * sin(θ)

New Y = X * sin(θ) + Y * cos(θ)

New Z = Z
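These three axis rotations are exactly the standard 3D rotation matrices. Here is a small NumPy sketch that builds each matrix and applies it to a vertex (the angle and vertex values are arbitrary examples):

```python
import numpy as np

def rotation_matrix(axis, theta):
    """Return the 3x3 rotation matrix for angle theta (radians) about 'x', 'y', or 'z'."""
    c, s = np.cos(theta), np.sin(theta)
    if axis == 'x':
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == 'y':
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    if axis == 'z':
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    raise ValueError(f"unknown axis: {axis}")

vertex = np.array([1.0, 0.0, 0.0])
# Rotating (1, 0, 0) by 90° around the Z-axis should give (0, 1, 0).
print(rotation_matrix('z', np.pi / 2) @ vertex)  # ≈ [0. 1. 0.]
```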

3. Vertex Scaling:

Vertex scaling involves resizing geometry by moving each vertex along each axis relative to a pivot. It is achieved by multiplying the coordinates of the vertex by a scaling factor for each axis. The formula for vertex scaling is:

New Vertex Position = (Original Vertex Position) * Scaling Factors

(X * Sx, Y * Sy, Z * Sz), where (Sx, Sy, Sz) are the scaling factors
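For example, scaling the vertex (1, 2, 3) by factors (2, 1, 0.5) gives (2, 2, 1.5). In practice, 3D software combines translation, rotation, and scaling into a single 4x4 matrix using homogeneous coordinates; a minimal sketch of that idea:

```python
import numpy as np

def transform_matrix(translate, theta_z, scale):
    """Build a 4x4 matrix that scales, then rotates about Z, then translates."""
    c, s = np.cos(theta_z), np.sin(theta_z)
    S = np.diag([scale[0], scale[1], scale[2], 1.0])
    R = np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    T = np.eye(4)
    T[:3, 3] = translate
    return T @ R @ S  # applied right to left: scale, rotate, translate

vertex = np.array([1.0, 2.0, 3.0, 1.0])  # homogeneous coordinates (w = 1)
M = transform_matrix(translate=(4, 0, -1), theta_z=0.0, scale=(2, 1, 0.5))
print((M @ vertex)[:3])  # [6.  2.  0.5]
```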

Math Behind UV and Texture Mapping

UV and texture mapping exist to ensure proper texture placement on non-rectangular and complex geometries. There are various types of texture projection techniques, such as flat, box, spherical, and cylindrical projection. Among these, flat projection is the most commonly used method for texturing 3D models. Since images are flat, a projection defines how the image's grid of coordinates wraps onto a non-rectangular mesh. Let's explore the different formulas and types of texture projection in detail. (Formulas are included because we found them interesting during our research, but they are not essential to understanding the concept of image projection.)

1. Flat Projection (Planar Mapping):

Flat projection, also known as planar mapping, is the simplest form of texture projection. It involves projecting the texture onto the 3D model as if it were a flat, rectangular plane. The texture coordinates (U, V) are linearly interpolated across the model’s surface. While flat projection works well for relatively flat surfaces, it can lead to distortions and stretching on curved or complex geometries.
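A minimal sketch of planar mapping: to project along the Z-axis, drop the Z coordinate and normalize X and Y into the [0, 1] UV range (the bounds below are hypothetical):

```python
import numpy as np

def planar_uv(vertices, min_xy, max_xy):
    """Project along Z: U and V come straight from X and Y, normalized to [0, 1]."""
    vertices = np.asarray(vertices, dtype=float)
    lo, hi = np.asarray(min_xy, dtype=float), np.asarray(max_xy, dtype=float)
    return (vertices[:, :2] - lo) / (hi - lo)

verts = [(0, 0, 5), (2, 1, 3), (4, 2, 0)]
print(planar_uv(verts, min_xy=(0, 0), max_xy=(4, 2)))
# [[0.  0. ]
#  [0.5 0.5]
#  [1.  1. ]]
```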

2. Box Projection (Cubic Mapping):

Box projection, or cubic mapping, is a technique used for mapping textures onto cubic or box-like objects. It involves projecting the texture onto the six faces of a cube, essentially creating a cube map. This method is particularly useful for creating environmental reflections and skyboxes in real-time graphics.
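A hedged sketch of the core idea: for each vertex, pick which cube face to project onto using the dominant axis of the surface normal, then do a planar projection on the remaining two axes. (Real cube mapping in engines also selects among six face textures; this simplified version just returns the chosen axis and 2D coordinates.)

```python
import numpy as np

def box_project(position, normal):
    """Choose a cube face by the dominant normal axis, then project onto the other two axes."""
    axis = int(np.argmax(np.abs(normal)))       # 0 = X face, 1 = Y face, 2 = Z face
    u_axis, v_axis = [(1, 2), (0, 2), (0, 1)][axis]
    return axis, (position[u_axis], position[v_axis])

# A vertex on a wall facing +X projects onto the YZ plane.
print(box_project(np.array([3.0, 1.0, 2.0]), np.array([1.0, 0.1, 0.0])))
# (0, (1.0, 2.0))
```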

3. Spherical Projection (Spherical Mapping):

Spherical projection, also known as spherical mapping or spherical environment mapping, is a technique used for mapping textures onto spherical or highly curved surfaces. It involves projecting the texture onto a sphere, resulting in a more natural and undistorted appearance on curved geometries. Spherical projection is commonly used for creating realistic reflections and highlights on spherical objects, such as balls or planets.

Here are the variables used in the formula for Spherical Projection:

λ (lambda): Longitude of the location (pixel) to receive projection.

φ (phi): Latitude of the location (pixel) to receive projection.

φ₁, φ₂: Standard parallels (north and south of the centerline of a sphere) where the scale of the projection is true.

φ₀: The latitude of the central point of receiving the projection.

λ₀: The longitude of the central point of receiving the projection.

X: Horizontal coordinate of the location being projected on the sphere.

Y: Vertical coordinate of the location being projected on the sphere.

R: Radius of the sphere (assuming a spherical model)

Spherical (Equirectangular) Projection Formulas:

Map coordinates to sphere coordinates:

λ = (X / (R * cos(φ₀))) + λ₀

φ = (Y / R) + φ₀

Reverse projection (sphere coordinates back to map coordinates):

X = R * (λ - λ₀) * cos(φ₀)

Y = R * (φ - φ₀)

The spherical projection formulas transform planar coordinates (horizontal and vertical coordinates on the map) into spherical coordinates (longitude and latitude) and back: the forward direction maps planar coordinates to spherical coordinates, while the reverse direction maps spherical coordinates back onto the grid.
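Here is a small Python sketch of these formulas with a round-trip check (the values of R, λ₀, and φ₀ are arbitrary assumptions):

```python
import math

R, LAM0, PHI0 = 1.0, 0.0, 0.0  # sphere radius and projection center (arbitrary)

def map_to_sphere(x, y):
    """Planar map coordinates -> (longitude, latitude) on the sphere."""
    lam = x / (R * math.cos(PHI0)) + LAM0
    phi = y / R + PHI0
    return lam, phi

def sphere_to_map(lam, phi):
    """(longitude, latitude) on the sphere -> planar map coordinates."""
    x = R * (lam - LAM0) * math.cos(PHI0)
    y = R * (phi - PHI0)
    return x, y

lam, phi = map_to_sphere(0.5, 0.25)
print(sphere_to_map(lam, phi))  # round-trips back to (0.5, 0.25)
```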

4. Cylindrical Projection (Cylindrical Mapping):

Cylindrical projection, or cylindrical mapping, is a technique used for mapping textures onto cylindrical or tube-like objects. It involves projecting the texture onto a cylinder, resulting in a seamless texture wrapping around the object’s surface. This method is particularly useful for texturing objects like pipes, cables, or tree trunks.

X: Horizontal coordinate of the projected location on the map.

Y: Vertical coordinate of the projected location on the map.

λ: The longitude of the point on the sphere.

λ₀: The longitude of the central point of the projection.

φ: The latitude of the point on the sphere.

φ₀: The latitude of the central point of the projection

Cylindrical Projection Formula:

Formula:

X = (λ - λ₀) * cos(φ₀)

Y = φ - φ₀

The formula for the cylindrical projection is used to convert coordinates from the spherical coordinate system (longitude and latitude) to the Cartesian coordinate system (X and Y). The cylindrical projection extends the lines from each point on a unit sphere until they intersect a cylinder tangent to the sphere at its equator. The resulting intersection points on the cylinder are then mapped to the x and y coordinates.
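In texturing terms, cylindrical mapping of a mesh vertex usually computes U from the angle around the cylinder's axis and V from the height along it. A minimal sketch, assuming the axis is Z and a hypothetical height range:

```python
import math

def cylindrical_uv(x, y, z, z_min=0.0, z_max=10.0):
    """Map a vertex to UV space on a cylinder whose axis is Z.

    U wraps around the axis (angle); V runs along the height.
    """
    u = math.atan2(y, x) / (2 * math.pi) + 0.5  # angle -> [0, 1]
    v = (z - z_min) / (z_max - z_min)           # height -> [0, 1]
    return u, v

# A vertex on the +X side of the cylinder, halfway up.
print(cylindrical_uv(1.0, 0.0, 5.0))  # (0.5, 0.5)
```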

Modeling Process

1. Import orthographic views of each part of the building.

2. Add starter cube

3. Vertex editing

4. Panel details (boolean difference)

5. UV mapping/unwrapping (flattening the mesh into clean, even squares for texturing)

oops..

Unwrapping!!! (correct version)

6. Texture capturing (color checker for color accuracy)

Color-corrected and lens-corrected image

7. Texture mapping (applying the needed transformations using the texture mapping node; a scripted sketch of this step follows below)
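A minimal bpy sketch of this step: wiring an image texture through a Mapping node so it can be translated, rotated, and scaled on the model (the image path and transform values are hypothetical):

```python
import bpy

# Build a material whose image texture runs through a Mapping node,
# so the texture can be translated / rotated / scaled in UV space.
mat = bpy.data.materials.new(name="FacadeMaterial")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

tex_coord = nodes.new('ShaderNodeTexCoord')
mapping = nodes.new('ShaderNodeMapping')
image = nodes.new('ShaderNodeTexImage')
image.image = bpy.data.images.load('/path/to/facade.png')  # hypothetical path

# UV -> Mapping -> Image Texture -> Principled BSDF base color.
links.new(tex_coord.outputs['UV'], mapping.inputs['Vector'])
links.new(mapping.outputs['Vector'], image.inputs['Vector'])
links.new(image.outputs['Color'], nodes['Principled BSDF'].inputs['Base Color'])

# The Mapping node exposes the same transformations discussed above.
mapping.inputs['Location'].default_value = (0.1, 0.0, 0.0)  # translate U by 0.1
mapping.inputs['Scale'].default_value = (2.0, 2.0, 1.0)     # tile the texture twice
```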

Usage

The 3D models can be used in official school marketing materials, on the school webpage, on souvenirs, and much more. As time goes on, the set of modeled buildings will expand to encompass the entire campus.

For the final result, please visit this Behance link: Concordia International School Shanghai

Final Word

Though the preparation process was intricate, we successfully constructed a model that can be used for a variety of purposes. In addition, we gained a deeper understanding of vertex transformations and texture mapping through this project. Modeling Concordia not only allowed us to observe the building we sit in for class every day from a different perspective, but also became a meaningful experience for us.

Special thanks to Jack Zhang and Tim Winterstein in Operations for school blueprints.

Another special thanks to the countless un-citable Reddit, 3D artist, and programming communities that functioned as a library for our research.

Sources:

Download Blender here: https://www.blender.org

Download Meshroom here: https://github.com/alicevision/Meshroom

ScienceDirect Topics. (n.d.). Photogrammetry: an overview. https://www.sciencedirect.com/topics/agricultural-and-biological-sciences/photogrammetry

Encyclopædia Britannica, inc. (n.d.). Cylindrical projection. Encyclopædia Britannica. https://www.britannica.com/science/cylindrical-projection

Topiwala, A. (2021, April 28). Spherical projection for point clouds. Medium. https://towardsdatascience.com/spherical-projection-for-point-clouds-56a2fc258e6c
