Visualization of a Vector Field

Vojtěch Tomas
Published in researchsummer · 10 min read · Oct 4, 2019

The development in the area of scientific simulations, observations, and measurements, combined with the growth of computational performance in recent decades, has allowed for the construction of large datasets representing, e.g., the motion of fluids, electromagnetic fields, or plasma flows. The amount of data often exceeds a comprehensible quantity; however, the most profound understanding is still acquired by seeing and observing. This has led to the development of several scientific visualization tools.

One of the results: a render of a 2D vertical slice of a vector field describing plasma flow near the solar surface

The purpose of visualization is to provide insight and allow users to create a mental image of the dataset. For the mental image to be correct, it is necessary to visualize a substantial part of the dataset and provide some sense of scale. The amount of data that needs to be visualized often poses serious performance challenges. Several additional attributes contribute to the overall quality of the visualization and thus should be balanced appropriately: physical accuracy, usability, and the choice of which data should be displayed and which should be filtered out. It is common to prioritize one or two of these attributes over the others; however, neglecting any one of them can render the entire visualization unusable.

The basis for this research was introduced in the author’s bachelor thesis. It concerns the visualization of a dataset representing plasma flow in the area near the solar surface, produced from observations and measurements. The goal of this research is to find the optimal balance between the critical attributes of the visualization and to create a simple, independent visualization toolkit focused on vector data visualization. Since a number of frameworks already provide similar functionality, the research aims to improve some of the existing visualization methods and to explore new ways of processing and visualizing large vector datasets.

Background

The entire visualization process is based on what is commonly known as the visualization pipeline [1]. The visualization pipeline divides the visualization process into four main stages and allows isolating the more computationally heavy tasks from the actual rendering of the final image. The four stages are importing, filtering and enriching, mapping, and rendering.

Visualization pipeline diagram

Import

As the title suggests, import involves transforming the dataset into an internal representation. The internal representation has to allow for fast interpolation inside the dataset. There are two factors which need to be taken into account: the spatial structure of the data and the character of the values.

Illustration of a three-dimensional rectilinear grid

Data can be organized into different types of grids. In practice, one of the most commonly used is the rectilinear grid. As the rectilinear grid is well-structured, it is possible to hold the entire dataset in one or several arrays and, based on the grid coordinates, use binary search to look up the local cells during the interpolation process. Depending on the grid dimensions, it is quite simple to implement polynomial interpolation of a given order. For example, linear interpolation provides results quickly and reliably; however, the physical accuracy of the results is not adequate [2]. It is therefore recommended to use higher-order polynomials or other interpolation methods.
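To make the lookup concrete, here is a minimal Python sketch (the names and array shapes are illustrative, not the toolkit’s actual interface) combining binary search over the grid coordinates with trilinear interpolation:

```python
import numpy as np

def trilinear_sample(xs, ys, zs, values, p):
    """Sample a field on a rectilinear grid at point p = (x, y, z).

    xs, ys, zs : 1D, strictly increasing coordinate arrays
    values     : array of shape (len(xs), len(ys), len(zs), 3)
    """
    # Binary search for the cell containing p along each axis,
    # clamped so boundary points still fall into a valid cell.
    i = np.clip(np.searchsorted(xs, p[0]) - 1, 0, len(xs) - 2)
    j = np.clip(np.searchsorted(ys, p[1]) - 1, 0, len(ys) - 2)
    k = np.clip(np.searchsorted(zs, p[2]) - 1, 0, len(zs) - 2)

    # Normalized local coordinates inside the cell, in [0, 1].
    tx = (p[0] - xs[i]) / (xs[i + 1] - xs[i])
    ty = (p[1] - ys[j]) / (ys[j + 1] - ys[j])
    tz = (p[2] - zs[k]) / (zs[k + 1] - zs[k])

    # Blend the eight corner values, collapsing one axis at a time.
    c = values[i:i + 2, j:j + 2, k:k + 2]
    c = c[0] * (1 - tx) + c[1] * tx
    c = c[0] * (1 - ty) + c[1] * ty
    return c[0] * (1 - tz) + c[1] * tz
```

A higher-order polynomial scheme would replace only the blending step; the cell lookup stays the same.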

A different approach takes into account the character of the visualized data. This alternative method uses Fourier-based interpolation and is suitable for datasets whose boundary conditions are roughly periodic in at least one domain (vertical or horizontal). This interpolation method is quite common in specific areas of physics.
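For intuition, here is a one-dimensional sketch of the idea, assuming uniform samples over a periodic domain (the function name and signature are illustrative): the samples are transformed with the FFT, and the resulting trigonometric series is evaluated directly at the query points.

```python
import numpy as np

def fourier_interp(samples, x, period=1.0):
    """Evaluate the trigonometric interpolant of periodic samples at x.

    samples : values on a uniform grid covering one period [0, period)
    x       : query position(s) within the period
    """
    n = len(samples)
    coeffs = np.fft.fft(samples) / n      # Fourier coefficients
    k = np.fft.fftfreq(n, d=1.0 / n)      # integer wavenumbers
    # Evaluate the Fourier series directly at the query points.
    phase = np.exp(2j * np.pi * np.outer(np.atleast_1d(x) / period, k))
    return (phase @ coeffs).real
```

At the grid points this reproduces the samples exactly; between them it converges rapidly when the data is genuinely periodic, which is exactly the case the method targets.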

Filtering and Enriching

The second step involves filtering and enriching. The goal is to keep only the significant parts of the dataset and reveal hidden inner structures. If the intent is to highlight areas with low or high values, techniques such as simple thresholding can be used. However, for the detection of areas with significant flow structures, it is desirable to use methods such as critical-point detection [3] or view-dependent rendering [4] in the final stage of the visualization pipeline.
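A thresholding filter of the kind mentioned above might look as follows (a sketch; the field layout is an assumption):

```python
import numpy as np

def threshold_mask(field, lo=None, hi=None):
    """Boolean mask selecting cells whose vector magnitude lies in [lo, hi].

    field : array of shape (..., 3) holding the vector components
    """
    mag = np.linalg.norm(field, axis=-1)
    mask = np.ones(mag.shape, dtype=bool)
    if lo is not None:
        mask &= mag >= lo
    if hi is not None:
        mask &= mag <= hi
    return mask
```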

An alternative approach is to compute the divergence and rotation of the vector field and highlight individual parts of the field based on the output of this calculation. The computation requires the partial derivatives of the vector field. A straightforward method of obtaining them, proposed in the preceding thesis, utilizes finite differences. Depending on the character of the data, it might be more suitable to use an alternative approach with derivatives computed using Fourier-based methods [5].
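With finite differences, the divergence and rotation can be approximated in a few lines using NumPy’s np.gradient, which applies central differences in the interior and one-sided differences at the boundaries (a sketch assuming a uniform grid; the thesis’ actual implementation may differ):

```python
import numpy as np

def divergence_and_curl(u, v, w, dx, dy, dz):
    """Divergence and curl of a 3D vector field sampled on a uniform grid.

    u, v, w    : component arrays of shape (nx, ny, nz)
    dx, dy, dz : grid spacings along each axis
    """
    du_dx, du_dy, du_dz = np.gradient(u, dx, dy, dz)
    dv_dx, dv_dy, dv_dz = np.gradient(v, dx, dy, dz)
    dw_dx, dw_dy, dw_dz = np.gradient(w, dx, dy, dz)

    div = du_dx + dv_dy + dw_dz
    curl = np.stack([dw_dy - dv_dz,
                     du_dz - dw_dx,
                     dv_dx - du_dy], axis=-1)
    return div, curl
```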

Mapping

After filtering and enriching, the pipeline transitions into its third stage, mapping. In this stage, one of the main focus areas of this research is tracking particles in the vector field and constructing streamlines and streaklines. The construction involves numerical integration, which can be performed using several techniques. One of the first choices was the use of Runge-Kutta integration schemes [6], namely adaptive fourth- and fifth-order methods [7]. One of the goals is to clarify whether the combination of the chosen integration and interpolation methods produces physically correct results.
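Since the prototype’s custom solver is described later as a direct alternative to SciPy’s implementation, a streamline tracer built on scipy.integrate.solve_ivp with the RK45 method can serve as a reference sketch. The interp callable is an assumption: anything mapping a position to a velocity works, e.g. the trilinear sampler above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def trace_streamline(interp, seed, t_max=10.0, max_step=0.1):
    """Trace a streamline from a seed point using SciPy's adaptive RK45.

    interp : callable mapping a position (3,) to a velocity vector (3,)
    seed   : starting point of the streamline
    """
    def rhs(t, p):
        # For a steady field, the streamline ODE is dp/dt = v(p).
        return interp(p)

    sol = solve_ivp(rhs, (0.0, t_max), np.asarray(seed, dtype=float),
                    method="RK45", max_step=max_step, dense_output=True)
    # Resample the dense solution uniformly for rendering.
    ts = np.linspace(0.0, sol.t[-1], 200)
    return sol.sol(ts).T   # polyline vertices, shape (200, 3)
```

For example, interp could be `lambda p: trilinear_sample(xs, ys, zs, values, p)`.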

Streamline visualization featuring examples with different density of seeding points and different color schemes

Additionally, there are other factors such as coverage, uniformity of the streamline distribution, and continuity, which all profoundly influence the quality of the output visualization [8].

Example of a vector field visualization using glyphs with different seeding points distribution

Besides streamlines and streaklines, the other considered visualization methods are glyphs and color-coded surfaces. The glyph visualization method needs to take into consideration the density and distribution of the individual visual elements in order to produce a natural and expressive image. A random distribution seems preferable to a regular one; however, there is still room for improvement for visualization in three dimensions. One of the problems in three dimensions is clutter caused by glyphs overlapping each other regardless of the initial distribution.
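One common compromise between regular and random placement is a jittered grid: seeds keep the coverage of a regular grid but lose its visible aliasing. A sketch (not the prototype’s actual seeding strategy):

```python
import numpy as np

def jittered_seeds(bounds, n_per_axis, rng=None):
    """Seed points on a regular grid with random jitter inside each cell.

    bounds : ((x0, x1), (y0, y1), (z0, z1)) extents of the seeded region
    """
    rng = np.random.default_rng() if rng is None else rng
    # Lower corners of a regular grid of cells.
    axes = [np.linspace(lo, hi, n_per_axis, endpoint=False)
            for lo, hi in bounds]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    # Displace each seed uniformly within its cell.
    cell = np.array([(hi - lo) / n_per_axis for lo, hi in bounds])
    return grid + rng.uniform(0.0, 1.0, grid.shape) * cell
```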

Rendering

Let us move beyond the third step of the visualization pipeline; the last step involves the rendering of the visualized scene. This step poses a technical challenge: each of the visualization methods might require rendering a vast number of visual elements in every frame. The thesis introduced a method for piecewise rendering of streamline geometry which utilizes instancing and per-instance transformations.
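The article does not spell out the scheme in detail, but as an illustration of per-instance transformations, one can precompute a model matrix per streamline segment and hand the array to an instanced draw call. A sketch of the data preparation, under that assumption:

```python
import numpy as np

def segment_transforms(points, up=np.array([0.0, 0.0, 1.0])):
    """Per-instance model matrices mapping a unit segment along +x
    onto each consecutive pair of streamline points.

    points : polyline vertices of shape (n, 3); returns (n - 1, 4, 4)
    """
    mats = []
    for a, b in zip(points[:-1], points[1:]):
        d = b - a
        length = np.linalg.norm(d)
        x = d / length                       # segment direction
        z = np.cross(x, up)
        if np.linalg.norm(z) < 1e-8:         # direction parallel to up
            z = np.cross(x, np.array([0.0, 1.0, 0.0]))
        z /= np.linalg.norm(z)
        y = np.cross(z, x)
        m = np.eye(4)
        m[:3, 0] = x * length                # scale along the segment
        m[:3, 1] = y
        m[:3, 2] = z
        m[:3, 3] = a                         # translate to segment start
        mats.append(m)
    return np.stack(mats)
```

Each matrix maps a unit template mesh onto one polyline segment, so the geometry is uploaded once and drawn once per instance.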

Related Work

Current state-of-the-art visualization tools are usually built using the VTK library [9] or computational software such as MATLAB or Mathematica. Although the visualization capabilities of a computational system might be limited, the use of such software guarantees access to state-of-the-art implementations of mathematical routines, and it is probably one of the most common approaches used by researchers. The software often supports distributed parallel computing.

Alternatively, using an open visualization toolkit such as VTK enables reuse of already implemented visualization algorithms. The toolkit supports a number of formats, and existing applications built on top of VTK allow for the implementation of custom plugins.

One of the downsides of using an existing framework or computational system is that it requires knowledge of the system. VTK is known for its steep learning curve, and most computational systems are proprietary, which precludes the development of a standalone application.

First Results

The first design of the custom visualization toolkit opted for a modular design, staying true to the general visualization pipeline. The toolkit presents the user with a node-based GUI which enables the construction of custom visualization pipelines.

Illustration of a custom pipeline scheme

The primary purpose of the GUI is to provide a clear overview and control over the entire visualization pipeline. The pipeline editor works with the atomic operations of the first three stages of the visualization pipeline. According to the principles of visual programming, the user can manipulate the individual operations represented as nodes, connect the inputs and outputs, and create a custom visualization pipeline.

Screenshot of the pipeline editor with a built-in terminal informing the user about the processing progress

The computational backend behind the pipeline editor is programmed in C and Python and does not utilize VTK or any other visualization framework. The decision not to use an existing visualization framework was made to avoid restrictions posed by the design of the underlying framework. The prototype includes a custom C implementation of an adaptive fourth- and fifth-order Runge-Kutta solver, which is a direct alternative to the SciPy implementation of the same method. The prototype utilizes both a SciPy-based and a custom C implementation of trilinear interpolation. In test benchmarks, in line with expectations, the C implementation outperforms the Python SciPy/NumPy alternatives while providing results of comparable accuracy. The representation of the pipeline is currently stored as a JSON file, but that is going to change very soon.

The prototype of the renderer was developed using WebGL. A discussion of which technology should be used was presented in the thesis; to sum it up, one of the main reasons for using WebGL is its portability. The entire prototype is designed as a server-client application. The client renderer also supports the SAGE2 environment [10] for displaying the visualization on large grids of displays. Despite the advantages of WebGL, its main drawback is limited performance. Additionally, one of the goals is to enable visualization on portable devices such as phones and tablets, which poses further restrictions on the available performance.

Screenshot of the prototype renderer with GUI

The renderer allows for interactive exploration of the dataset. It enables real-time adjustment of specific visualization parameters: modifying the color map, hiding and displaying specific segments of the streamlines, changing the sizes of visual elements, selecting which vector components should be represented, or animating the streamline flow.

The entire rendering process is performed in the browser on the client-side of the application. The surrounding images, together with the image in the introduction, illustrate the renderer output.

Vertical slice of a vector field describing plasma flow near the solar surface; the zoomed-out version contains the entire width of the dataset
Segment of a vector field describing plasma flow near the solar surface, top view, visualized using glyphs
Segment of a vector field describing plasma flow near the solar surface, top view, visualized using streamlines

As of today, the prototype offers three visualization methods: streamlines, glyphs, and color-coded surfaces. Most of the research has focused on streamlines so far, and the most visually pleasing images have been created using a combination of streamlines in the foreground and a color-coded surface as the background, isolating a single slice of the vector field. The surface cuts the streamlines, leaving in the picture only the segment of each streamline corresponding to flow above the level represented by the surface.

Additionally, the prototype supports depth rendering, which makes it possible to display the scene on selected monitors with native 3D support. This feature has been tested with the 3D monitor located in SAGELab at FIT CTU.

Example of a render with a depth map, allowing for 3D visualization on selected native 3D monitors

Next Steps

Further research will target the following four goals:

  • transform the prototype from a server-client application into a lightweight desktop application, which involves further optimization of the rendering,
  • create a uniform Python API which will allow storing the visualization pipeline in the form of a single Python script (see the sketch after this list),
  • introduce more robust numerical interpolation and integration methods and consider the use of Fourier-based methods,
  • as a long-term goal, develop a visualization of divergence and rotation utilizing approximation of derivatives based on [5],
  • and finally, perform user testing of the pipeline editor and renderer to further explore the possibilities of visual programming in the area of data visualization and visualization pipelines.
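To illustrate the intended single-script form of the Python API, here is a sketch in which everything, including the package and function names, is hypothetical at this stage:

```python
# Hypothetical API sketch -- names and signatures are illustrative only,
# not the toolkit's actual interface.
from vistoolkit import Pipeline   # hypothetical package name

pipe = Pipeline()
data = pipe.load("plasma_flow.npz")                  # import stage
flow = pipe.threshold(data, lo=0.1)                  # filtering stage
lines = pipe.streamlines(flow, seeds="jittered",     # mapping stage
                         integrator="rk45")
pipe.render(lines, colormap="viridis")               # rendering stage
pipe.save("pipeline.py")                             # single-script form
```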

The final version of the toolkit will be distributed via pip as a Python package including the routines of the computational backend and the code for the renderer. Additionally, a standalone version of a desktop application will be released, incorporating the computational backend, pipeline editor, and renderer. Depending on the complexity of the previous goals, additional support for Jupyter might be considered: since the renderer already uses WebGL, the Python API would make it possible to integrate the toolkit into Jupyter notebooks.

Resources

  1. Telea, A. C. Data visualization: principles and practice. Boca Raton: CRC Press, Taylor & Francis Group, second edition, 2015, ISBN 9781466585263.
  2. Yeung, P. K.; Pope, S. B. An algorithm for tracking fluid particles in numerical simulations of homogeneous turbulence. Journal of computational physics, 1988, 79.2: 373–416.
  3. Globus, Al; Levit, Creon; Lasinski, Tom. A tool for visualizing the topology of three-dimensional vector fields. In: Proceedings of the 2nd conference on Visualization’91. IEEE Computer Society Press, 1991. p. 33–40.
  4. Marchesin, S.; Chen, C.; et al. View-Dependent Streamlines for 3D Vector Fields. IEEE Transactions on Visualization and Computer Graphics, volume 16, no. 6, Nov 2010: pp. 1578–1586, ISSN 1077-2626, doi: 10.1109/TVCG.2010.212.
  5. Lele, Sanjiva K. Compact finite difference schemes with spectral-like resolution. Journal of computational physics, 1992, 103.1: 16–42.
  6. Atkinson, K.; Han, W.; et al. Numerical solution of ordinary differential equations. John Wiley & Sons, 2009, ISBN 978-0-470-04294-6.
  7. Press, W. H.; Teukolsky, S. A. Adaptive Stepsize Runge-Kutta Integration. Computers in Physics, volume 6, no. 2, 1992: p. 188, ISSN 0894-1866, doi: 10.1063/1.4823060.
  8. Verma, V.; Kao, D.; et al. A flow-guided streamline seeding strategy. In Proceedings of the conference on Visualization’00, IEEE Computer Society Press, 2000, pp. 163–170.
  9. Avila, L. S. (editor). The VTK User’s Guide. Clifton Park, NY: Kitware, 11th edition, 2010, ISBN 978-1-930934-23-8.
  10. Marrinan, T.; Aurisano, J.; et al. SAGE2: A new approach for data intensive collaboration using Scalable Resolution Shared Displays. In Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom), 2014 International Conference on, IEEE, 2014, pp. 177–186.
