## The Occlusion Spectrum

The Occlusion Spectrum is a technique for classifying volumetric datasets based on the ambient occlusion of voxels. The term ambient occlusion is borrowed from the rendering domain, where similar algorithms approximate global-illumination effects caused by objects occluding one another. In this context, however, it is used to classify volumetric datasets.

The occlusion spectrum is calculated using techniques similar to those used for classical ambient occlusion. In a preprocessing step, a weighted average over a spherical neighborhood is built for each voxel by sampling all voxels located within a given radius. The intensities of the surrounding voxels are weighted according to a visibility mapping and then accumulated. This accumulated value encodes the weighted average visibility of a point along all directions of a sphere.

During rendering, occlusion is used together with the original signal intensity to generate two-dimensional transfer functions.
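As a rough sketch of how such a two-dimensional transfer function can be applied, consider the snippet below. The function name `classify`, the bin count, and the toy table are illustrative assumptions, not the original implementation:

```python
# Sketch of a 2D transfer-function lookup: each voxel is classified by two
# coordinates, its original intensity and its precomputed occlusion value.
# The transfer function is a 2D table mapping (intensity, occlusion) bins
# to an RGBA tuple.

def classify(intensity, occlusion, tf_table, bins=8):
    """Map a voxel's (intensity, occlusion) pair, both in [0, 1], to RGBA."""
    i = min(int(intensity * bins), bins - 1)   # intensity bin
    j = min(int(occlusion * bins), bins - 1)   # occlusion bin
    return tf_table[j][i]

# A toy table that hides everything except highly occluded voxels:
bins = 8
hidden  = (0.0, 0.0, 0.0, 0.0)   # fully transparent
visible = (1.0, 0.5, 0.0, 1.0)   # opaque orange
tf = [[visible if j >= bins // 2 else hidden for i in range(bins)]
      for j in range(bins)]

print(classify(0.5, 0.9, tf))  # highly occluded voxel -> visible
print(classify(0.5, 0.1, tf))  # weakly occluded voxel -> hidden
```

Note that both lookups use the same intensity (0.5); only the occlusion coordinate distinguishes the voxels.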

In contrast to other classification methods such as gradient magnitude, the occlusion spectrum highlights structures inside the data instead of material boundaries. The original paper demonstrates this technique mostly on medical datasets such as MRI or CT data. The idea of this thesis is to adapt the technique to classify and visualize volumetric seismic datasets from the oil and gas domain. The goal of this adaptation is the better identification of structures such as salt domes or channel systems, which are an integral part of the interpretation of 3D surveys.

## Synthetic Example

The concept of occlusion-based classification can be explained
with the following example. The synthetic dataset in the
following figure consists of two spheres that share the same
intensity. One is surrounded by a cube with lower intensity,
the other by a cube with higher intensity. By mapping voxel
intensity to transparency, the spheres can be separated from
the cubes. However, it is not possible to further
differentiate between the two spheres, as they have exactly
the same intensity: they can only be hidden or displayed
together.

By additionally considering the direct neighborhood of the spheres, they can be separated based on the amount of occlusion. The sphere surrounded by the high-intensity cube lies in the upper occlusion range, whereas the neighborhood of the other sphere shows a lower occlusion distribution.
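The core of this observation can be reduced to a toy calculation (the specific intensity values below are assumptions chosen to mirror the figure): two voxels share the same intensity, but the average intensity of their neighborhoods, which stands in for occlusion under a linear visibility mapping, differs.

```python
# Two "sphere" voxels with identical intensity, embedded in different
# neighborhoods: one in a dark cube (0.1), one in a bright cube (0.9).

def neighborhood_occlusion(surroundings):
    """Average intensity of the surrounding voxels (linear visibility)."""
    return sum(surroundings) / len(surroundings)

sphere_a = 0.5
sphere_b = 0.5
occ_a = neighborhood_occlusion([0.1] * 26)  # 26-neighborhood in the dark cube
occ_b = neighborhood_occlusion([0.9] * 26)  # 26-neighborhood in the bright cube

assert sphere_a == sphere_b   # indistinguishable by intensity alone
assert occ_a < occ_b          # separable by occlusion
```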

The classification is based on the *distribution of
intensities around volumetric objects*.
The next figure shows a more advanced example, where an MRI
of a human head is classified in terms of occlusion.
Depending on the selection inside the occlusion spectrum,
different anatomical structures can be displayed separately
even though they share the same intensity.

## Visibility Mapping

In the above example, occlusion is generated in a linear
way: low intensities correspond to low occlusion values,
whereas high intensities correspond to high occlusion
values. This characteristic can be expressed graphically as
a simple linear function. How different intensities
contribute to the occlusion is controlled with a visibility
mapping. The next figure shows different possible mapping
functions.

The mapping function has a great impact on the final occlusion spectrum, because it can be used to emphasize or completely hide intensity ranges during occlusion generation. Consider the truncated mapping: during occlusion preprocessing, every voxel whose intensity falls into the truncated minimum or maximum range contributes nothing to the occlusion.
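Two of the mapping functions can be sketched as follows. The exact shapes and the cutoff values `lo` and `hi` are illustrative assumptions; the original work may use different curves:

```python
# Visibility-mapping functions: each maps a normalized intensity in [0, 1]
# to a visibility weight in [0, 1] used during occlusion accumulation.

def linear(x):
    """Low intensity -> low occlusion contribution, high -> high."""
    return x

def truncated(x, lo=0.2, hi=0.8):
    """Intensities below `lo` or above `hi` contribute nothing."""
    if x < lo or x > hi:
        return 0.0
    return (x - lo) / (hi - lo)

def weighted_occlusion(neighborhood, visibility):
    """Accumulate the visibility-weighted average of a neighborhood."""
    weights = [visibility(v) for v in neighborhood]
    return sum(weights) / len(weights)

samples = [0.05, 0.5, 0.95]
print(weighted_occlusion(samples, linear))     # all three samples contribute
print(weighted_occlusion(samples, truncated))  # only the middle sample does
```

Swapping the visibility function changes which intensity ranges drive the accumulated occlusion, which is exactly the lever described above.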

Choosing a suitable visibility function is therefore of great importance for the final quality of the classification. It can be used to vertically spread certain intensity ranges, which is important for maximizing the likelihood of structure separation. A good occlusion spectrum therefore contains a lot of variance in the vertical dimension.

## Accelerated Occlusion Computation

This section introduces the accelerated algorithm for computing occlusion. It was developed to overcome the major bottleneck of the first version of the pipeline: the generation of the occlusion volume. Fast generation is necessary in order to experiment quickly with different settings and parameters and then compare the results in a preview environment.

The new algorithm has two major advantages. First, it reduces the number of texture samples by building occlusion from the differences between adjacent voxels. Second, it is implemented in a parallel fashion and executed on modern parallel processors such as GPUs.

## Occlusion Computation from a Volumetric Dataset

*This section will be updated soon...*

### Brute Force Sampling

The original pipeline used a brute-force approach for precomputing the occlusion information: to process a given voxel, all of its neighbors located within the sampling area had to be accessed.

This approach results not only in a huge number of texture samples but also in a huge number of calculations: in addition to taking samples from the volume texture, the neighboring intensities must be summed and the result multiplied by a weighting factor.
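A minimal brute-force pass can be sketched as follows. This is an illustration of the sampling pattern, not the original implementation; the spherical neighborhood test and the boundary clipping are the assumptions made here:

```python
# Brute-force occlusion: for every voxel, visit all neighbors within the
# sampling radius, weight them with a visibility mapping, and average.

def brute_force_occlusion(volume, radius, visibility=lambda v: v):
    """volume: cubic nested list; returns an occlusion volume of equal size."""
    n = len(volume)
    occ = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for z in range(n):
        for y in range(n):
            for x in range(n):
                total, count = 0.0, 0
                for dz in range(-radius, radius + 1):
                    for dy in range(-radius, radius + 1):
                        for dx in range(-radius, radius + 1):
                            if dx * dx + dy * dy + dz * dz > radius * radius:
                                continue  # outside the spherical neighborhood
                            zz, yy, xx = z + dz, y + dy, x + dx
                            if 0 <= zz < n and 0 <= yy < n and 0 <= xx < n:
                                total += visibility(volume[zz][yy][xx])
                                count += 1
                occ[z][y][x] = total / count
    return occ
```

The six nested loops make the cost per voxel explicit: every voxel re-samples its entire neighborhood from scratch.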

This sampling pattern has to be performed per voxel. For a
relatively small cubic volume with dimensions of
*512³* and a sampling radius of *25*, the total
number of texture samples needed for computing the occlusion
sums up to *8,784,529,755,548* (about 8.8 × 10¹²).
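The figure above is consistent with approximating the per-voxel sample count by the volume of the sampling sphere. As a quick sanity check (an illustration, not taken from the original work):

```python
import math

n = 512   # volume dimension (n^3 voxels)
r = 25    # sampling radius

samples_per_voxel = (4.0 / 3.0) * math.pi * r**3   # volume of the sampling sphere
total_samples = n**3 * samples_per_voxel

print(round(total_samples))   # on the order of 8.78 * 10^12 texture samples
```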

Such a brute-force approach is very slow, especially with
increasing sampling areas or volume dimensions: for a volume
of *n³* voxels and a sampling radius of *r*, it requires
*O(n³ · r³)* texture samples.

### Cached Occlusion Computation

The new algorithm is based on the observation that the sampling areas of two adjacent grid points overlap and share many common voxels. That means once the sum for a specific voxel has been calculated, a large part of the adjacent sum is already available. If this value is cached and reused for adjacent voxels, the new sum can be determined from the difference between the two neighboring sampling areas: only the voxels located on the borders of the sampling area need to be considered.

To calculate the occlusion for the blue voxel in the above figure, texture samples are only necessary for border voxels. The new sum can be calculated by subtracting the orange and adding the green voxels. This figure also illustrates that this approach scales very nicely with increasing sampling areas.
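The caching idea is easiest to see in one dimension. The sketch below (function names are assumptions) compares the brute-force windowed sum with the cached version, which performs only one addition and one subtraction per step:

```python
# 1D sketch of the cached computation: instead of re-summing the full window
# for every position, update the previous sum by adding the sample entering
# the window and subtracting the sample leaving it.

def windowed_sums_brute(values, radius):
    """Re-sum the whole window [i - radius, i + radius] for every i."""
    n = len(values)
    return [sum(values[max(0, i - radius):min(n, i + radius + 1)])
            for i in range(n)]

def windowed_sums_cached(values, radius):
    """Build the first window once, then slide it with border updates only."""
    n = len(values)
    sums = [0] * n
    acc = sum(values[0:radius + 1])   # initial window centered on index 0
    sums[0] = acc
    for i in range(1, n):
        entering = i + radius         # "green" sample entering the window
        leaving = i - radius - 1      # "orange" sample leaving the window
        if entering < n:
            acc += values[entering]
        if leaving >= 0:
            acc -= values[leaving]
        sums[i] = acc
    return sums
```

Per step the cached version touches only the window borders, so its cost is independent of the window size, which is why the approach scales well with increasing sampling areas.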

### Sampling Mask Structure

*This section will be updated soon...*

## Parallel Implementation

*This section will be updated soon...*

### Application to Volumetric Data

As stated above, the sampling area is defined by two border lines: one samples the new voxels that are added, the other samples the voxels that are subtracted from the cached occlusion value. In order to use this concept for calculating occlusion for a three-dimensional volume, it is necessary to extend the sampling mask as shown below.

Instead of two border lines, the new extended sampling mask is defined by two border planes. Both planes have the same dimensions as one slice through the volume.

In order to correctly calculate the initial occlusion needed for the start slice, the sampling mask is positioned outside of the volume. Sampling is then performed by moving the sampling mask through the volume and caching and updating the results according to the phases described above. The cache in this case is a two-dimensional array and can be thought of as located between the active planes; each array element represents one voxel.
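A simplified version of this plane-sliding scheme can be sketched as follows. For clarity the window here is a plain z-window per voxel without visibility weighting; the two active planes and the per-voxel 2D cache are the point of the illustration:

```python
# Slide a z-window through the volume. The 2D `cache` holds one running sum
# per (x, y) position; each step adds the plane entering the window at
# z + radius and subtracts the plane leaving it at z - radius - 1.

def sliding_plane_sums(volume, radius):
    """volume indexed as volume[z][y][x]; returns per-voxel z-window sums."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    cache = [[0] * nx for _ in range(ny)]   # one cache entry per voxel (x, y)
    # Initialize the cache with the window centered on the start slice z = 0.
    for y in range(ny):
        for x in range(nx):
            cache[y][x] = sum(volume[z][y][x] for z in range(0, radius + 1))
    out = []
    for z in range(nz):
        if z > 0:
            entering, leaving = z + radius, z - radius - 1
            for y in range(ny):
                for x in range(nx):
                    if entering < nz:
                        cache[y][x] += volume[entering][y][x]  # add plane
                    if leaving >= 0:
                        cache[y][x] -= volume[leaving][y][x]   # subtract plane
        out.append([row[:] for row in cache])
    return out
```

Each iteration reads exactly two slices of the volume, which matches the two active border planes described above.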

Another observation that can be made from the above figure is that during each iteration there are two active slices.

In the current parallel implementation each of these slices is further subdivided into blocks.

Each block consists of a two-dimensional group containing *32²* (1024) threads. According to CUDA's thread organization hierarchy, this means that we create one grid per border plane.
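For a 512² slice, matching the earlier example's dimensions, this launch geometry works out as below. The numbers are an illustration of the arithmetic, not the documented kernel configuration:

```python
# Back-of-the-envelope launch geometry: one 2D grid of 32 x 32 thread
# blocks covers one border plane of a 512 x 512 slice.

plane_dim = 512   # side length of one border plane (one slice of the volume)
block_dim = 32    # 32 x 32 threads per block

blocks_per_axis = (plane_dim + block_dim - 1) // block_dim   # ceiling division
threads_per_block = block_dim * block_dim

print(blocks_per_axis)     # blocks per axis -> a 16 x 16 grid per border plane
print(threads_per_block)   # 1024 threads per block
```

A 32 × 32 block also hits CUDA's limit of 1024 threads per block exactly, which makes it a natural tile size for slice-shaped workloads.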

### Shared Memory Loading

*This section will be updated soon...*

### Algorithm and Data Flow Overview

*This section will be updated soon...*

## Credits

**The Occlusion Spectrum for Volume Classification and
Visualization**

*Carlos D. Correa and Kwan-Liu Ma, IEEE Transactions on
Visualization and Computer Graphics*