Building Ascent

This page provides details on several ways to build Ascent from source.

For the shortest path from zero to Ascent, see Quick Start.

To build third party dependencies, we recommend using uberenv, which leverages Spack, or using Spack directly. We also provide info about building for known HPC clusters using uberenv, and a Docker example that leverages Spack.


Ascent uses CMake for its build system. Building Ascent creates two separate libraries:

  • libascent : a version for execution on a single node
  • libascent_mpi : a version for distributed-memory parallelism

The CMake variable ENABLE_MPI (ON | OFF) controls building the parallel version of Ascent and the included proxy-apps.
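For example, a host-config or command-line fragment enabling the parallel build might look like this sketch (whether you want MPI depends on your use case):

```cmake
# Build both libascent and libascent_mpi (requires an MPI install CMake can locate)
set(ENABLE_MPI ON CACHE BOOL "")
```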

The build dependencies vary according to which pipelines and proxy-applications are desired. For a minimal build with no parallel components, the following are required:

  • Conduit
  • C++ compilers

We recognize that building on HPC systems can be difficult, so we provide two separate build strategies:

  • A Spack-based build
  • Manually compiling dependencies, using a CMake configuration file to keep compilers and libraries consistent

Most often, the Spack-based build should be attempted first. Spack will automatically download and build all the third party dependencies and create a CMake configuration file for Ascent. Should you encounter build issues that are not addressed here, please ask questions using our GitHub issue tracker.

Build Dependencies



  • Conduit
  • One or more runtimes

For Ascent, the Flow runtime is built in, but for visualization functionality (filters and rendering), the VTK-h runtime is needed.

Conduit (Required)

  • MPI
  • Python + NumPy (Optional)
  • HDF5 (Optional)
  • Fortran compiler (Optional)

VTK-h (Optional)

  • VTK-m:
    • OpenMP (Optional)
    • CUDA 7.5+ (Optional)
    • MPI (Optional)


When building VTK-m for use with VTK-h, VTK-m must be configured with rendering on, among other options. See the VTK-h spack package for details.
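If you configure VTK-m by hand rather than through Spack, rendering is a CMake option; a sketch (the option name below matches recent VTK-m releases and may differ for the version VTK-h requires):

```cmake
# Enable VTK-m's rendering support, needed by VTK-h
set(VTKm_ENABLE_RENDERING ON CACHE BOOL "")
```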

MFEM (Optional)

  • MPI
  • Metis
  • Hypre

Getting Started

Clone the Ascent repo:

  • From Github
git clone --recursive

--recursive is necessary because we use a git submodule to pull in BLT. If you cloned without --recursive, you can checkout this submodule using:

cd ascent
git submodule init
git submodule update

Configure a build: the configure helper script is a simple wrapper for the cmake call to configure Ascent. It creates a new out-of-source build directory build-debug and an install directory install-debug. It optionally takes a host-config.cmake file with detailed configuration options.

cd ascent

Build, test, and install Ascent:

cd build-debug
make -j 8
make test
make install

Build Options

Ascent’s build system supports the following CMake options:

  • BUILD_SHARED_LIBS - Controls if shared (ON) or static (OFF) libraries are built. (default = ON)
  • ENABLE_TESTS - Controls if unit tests are built. (default = ON)
  • ENABLE_DOCS - Controls if the Ascent documentation is built (when sphinx and doxygen are found). (default = ON)
  • ENABLE_FORTRAN - Controls if Fortran components of Ascent are built. This includes the Fortran language bindings and Cloverleaf3D. (default = ON)
  • ENABLE_PYTHON - Controls if the ascent python module and related tests are built. (default = OFF)
The Ascent python module will build for both Python 2 and Python 3. To select a specific Python, set the CMake variable PYTHON_EXECUTABLE to the path of the desired python binary. The Ascent python module requires the Conduit python module.
  • ENABLE_OPENMP - Controls if the proxy-apps are configured with OpenMP. (default = OFF)
  • ENABLE_MPI - Controls if parallel versions of proxy-apps and Ascent are built. (default = ON)
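A host-config fragment combining several of these options might look like the following sketch (the values and the Python path are illustrative, not recommendations):

```cmake
# Example Ascent build options for a host-config file
set(BUILD_SHARED_LIBS ON  CACHE BOOL "")
set(ENABLE_TESTS      ON  CACHE BOOL "")
set(ENABLE_FORTRAN    OFF CACHE BOOL "")
set(ENABLE_PYTHON     ON  CACHE BOOL "")
# Hypothetical path: selects a specific Python interpreter
set(PYTHON_EXECUTABLE "/usr/bin/python3" CACHE PATH "")
```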

We are using CMake’s standard FindMPI logic. To select a specific MPI, set the CMake variables MPI_C_COMPILER and MPI_CXX_COMPILER, or the other FindMPI options for MPI include paths and MPI libraries.

To run the MPI unit tests on LLNL’s LC platforms, you may also need to change the CMake variables MPIEXEC and MPIEXEC_NUMPROC_FLAG, so you can use srun and select a partition. (For an example, see: src/host-configs/chaos_5_x86_64.cmake)


Starting in CMake 3.10, the FindMPI MPIEXEC variable was changed to MPIEXEC_EXECUTABLE. FindMPI will still set MPIEXEC, but a cached value of MPIEXEC set before calling FindMPI will not survive, so you need to set MPIEXEC_EXECUTABLE instead.
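A host-config fragment selecting a specific MPI might look like this sketch (all paths and flags are placeholders for your system's values):

```cmake
# Select a specific MPI via CMake's FindMPI variables
set(MPI_C_COMPILER   "/usr/tce/bin/mpicc"  CACHE PATH "")
set(MPI_CXX_COMPILER "/usr/tce/bin/mpicxx" CACHE PATH "")
# Use srun for the unit tests (MPIEXEC_EXECUTABLE is the CMake >= 3.10 name)
set(MPIEXEC_EXECUTABLE   "/usr/bin/srun" CACHE PATH "")
set(MPIEXEC_NUMPROC_FLAG "-n"            CACHE STRING "")
```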

  • CONDUIT_DIR - Path to a Conduit install (required).
  • VTKM_DIR - Path to a VTK-m install (optional).
  • MFEM_DIR - Path to an MFEM install (optional).
  • ADIOS_DIR - Path to an ADIOS install (optional).
  • BLT_SOURCE_DIR - Path to BLT. (default = “blt”)
Defaults to “blt”, where we expect the blt submodule. The most compelling reason to override is to share a single instance of BLT across multiple projects.

Host Config Files

To handle build options, third party library paths, etc., we rely on CMake’s initial-cache file mechanism.

cmake -C config_file.cmake

We call these initial-cache files host-config files, since we typically create a file for each platform or specific hosts if necessary.

The configure helper script uses your machine’s hostname, the SYS_TYPE environment variable, and your platform name (via uname) to look for an existing host config file in the host-configs directory at the root of the ascent repo. If found, it passes the host config file to CMake via the -C command line option.

cmake {other options} -C host-configs/{config_file}.cmake ../

You can find example files in the host-configs directory.
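The lookup described above can be sketched roughly as follows (the candidate order here is an assumption for illustration, not the script's exact logic):

```shell
# Candidate host-config files a configure script might look for, derived
# from the machine name, SYS_TYPE, and platform (uname).
host_cfg="host-configs/$(uname -n).cmake"
sys_cfg="host-configs/${SYS_TYPE:-unknown}.cmake"
plat_cfg="host-configs/$(uname).cmake"
# A script would pass the first candidate that exists to cmake via -C.
printf '%s\n' "$host_cfg" "$sys_cfg" "$plat_cfg"
```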

These files use standard CMake commands. CMake set commands need to specify the root cache path as follows:
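For example (the path is a placeholder):

```cmake
# In an initial-cache file, set() must include CACHE, a type, and a doc string
set(CONDUIT_DIR "/path/to/conduit_install" CACHE PATH "")
```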


It is possible to create your own configure file, and a boilerplate example is provided in /host-configs/boilerplate.cmake


If compiling all of the dependencies yourself, it is important that you use the same compilers for all dependencies. For example, MPI libraries and Fortran modules built with different compilers (e.g., Intel and GCC) are not compatible with one another.

Building Ascent and Third Party Dependencies

We use Spack to help build Ascent’s third party dependencies on OSX and Linux.

Uberenv (scripts/uberenv/) automates fetching Spack, building and installing third party dependencies, and can optionally install Ascent as well. To automate the full install process, Uberenv uses the Ascent Spack package along with extra settings such as Spack compiler and external third party package details for common HPC platforms.

Uberenv Options for Building Third Party Dependencies

The uberenv script has a few options that allow you to control how dependencies are built:

Option              Description                                     Default
--prefix            Destination directory                           uberenv_libs
--spec              Spack spec                                      linux: %gcc, osx: %clang
--spack-config-dir  Folder with Spack settings files                linux: (empty), osx: scripts/uberenv_configs/spack_configs/darwin/
-k                  Ignore SSL errors                               False
--install           Fully install Ascent, not just dependencies     False
--run_tests         Invoke tests during build and against install   False

The -k option exists for sites where SSL certificate interception undermines fetching from GitHub and https-hosted source tarballs. When enabled, uberenv clones Spack using:

git -c http.sslVerify=false clone

And passes -k to any spack commands that may fetch via https.

Default invocation on Linux:

python scripts/uberenv/ --prefix uberenv_libs \
                                  --spec %gcc

Default invocation on OSX:

python scripts/uberenv/ --prefix uberenv_libs \
                                  --spec %clang \
                                  --spack-config-dir scripts/uberenv_configs/spack_configs/darwin/

The uberenv --install option fully installs Ascent (not just the development dependencies):

python scripts/uberenv/ --install

To run tests during the build process to validate the build and install, you can use the --run_tests option:

python scripts/uberenv/ --install \
                                  --run_tests
For details on Spack’s spec syntax, see the Spack Specs & dependencies documentation.

Compiler Settings for Third Party Dependencies

You can edit the yaml files under scripts/uberenv/spack_config/{platform}, or use the --spack-config-dir option to specify a directory with compiler and packages yaml files to use with Spack. See the Spack Compiler Configuration and Spack System Packages documentation for details.

For OSX, the defaults in spack_configs/darwin/compilers.yaml are X-Code’s clang and gfortran from


The bootstrapping process ignores ~/.spack/compilers.yaml to avoid conflicts and surprises from a user’s specific Spack settings on HPC platforms.

When run, uberenv checks out a specific version of Spack from GitHub as spack in the destination directory. It then uses Spack to build and install Ascent’s dependencies into spack/opt/spack/. Finally, it generates a host-config file {hostname}.cmake in the destination directory that specifies the compiler settings and paths to all of the dependencies.

Building with Uberenv on Known HPC Platforms

To support testing and installing on common platforms, we maintain sets of Spack compiler and package settings for a few known HPC platforms. Here are the commonly tested configurations:

System   OS           Tested Configurations (Spack Specs)
Linux    TOSS3        %gcc~shared
Linux    BlueOS       %clang@coral~python~fortran
Linux    SUSE / CNL   %gcc

See scripts/spack_build_tests/ for the exact invocations used to test on these platforms.

Building Third Party Dependencies for Development

You can use scripts/uberenv/ to help set up your development environment on OSX and Linux. The uberenv script leverages Spack to build the external third party libraries and tools used by Ascent. Fortran support is optional; the dependencies should build without Fortran. After building these libraries and tools, uberenv writes an initial host-config file and adds the Spack-built CMake binary to your PATH, so you can immediately call the configure helper script to configure an Ascent build.

#build third party libs using spack
python scripts/uberenv/

#copy the generated host-config file into the standard location
cp uberenv_libs/`hostname`*.cmake host-configs/

# run the configure helper script

# or you can run the configure helper script and give it the
# path to a host-config file
./ uberenv_libs/`hostname`*.cmake

Building with Spack

Currently, we maintain our own fork of Spack for stability. As part of the uberenv python script, we automatically clone our Spack fork.


Installing Ascent from the Spack master branch will most likely fail. We build and test spack installations with

To install Ascent with all options (and also build all of its dependencies as necessary) run:

spack install ascent

To build and install Ascent with CUDA support:

spack install ascent+cuda

The Ascent Spack package provides several variants that customize the options and dependencies used to build Ascent:

Variant  Description                        Default
shared   Build Ascent as shared libraries   ON (+shared)
cmake    Build CMake with Spack             ON (+cmake)
python   Enable Ascent Python support       ON (+python)
mpi      Enable Ascent MPI support          ON (+mpi)
vtkh     Enable Ascent VTK-h support        ON (+vtkh)
tbb      Enable VTK-h TBB support           ON (+tbb)
cuda     Enable VTK-h CUDA support          OFF (~cuda)
doc      Build Ascent’s Documentation       OFF (~doc)
mfem     Enable MFEM support                OFF (~mfem)

Variants are enabled using + and disabled using ~. For example, to build Ascent with CUDA support but without Python support and documentation, run:

spack install ascent+cuda~python~doc

See Spack’s Compiler Configuration documentation to customize compiler settings.

Using system installs of dependencies with Spack

Spack allows you to specify system installs of packages using a packages.yaml file.

Here is an example specifying system CUDA on MacOS:

packages:
  cuda:
    paths:
      # CUDA standard MacOS install
      cuda@9.0: /Developer/NVIDIA/CUDA-9.0
    buildable: False

Here is an example of specifying system MPI and CUDA on an LLNL Chaos 5 machine:

packages:
  cuda:
    modules:
      # LLNL toss3 CUDA
      cuda@9.1: cuda/9.1.85
    buildable: False
  mvapich2:
    paths:
      # LLNL toss3 mvapich2
      mvapich2@2.2%gcc@4.9.3: /usr/tce/packages/mvapich2/mvapich2-2.2-gcc-4.9.3
      mvapich2@2.2%intel@17.0.0: /usr/tce/packages/mvapich2/mvapich2-2.2-intel-17.0.0
      mvapich2@2.2%clang@4.0.0: /usr/tce/packages/mvapich2/mvapich2-2.2-clang-4.0.0
    buildable: False
Settings for LLNL TOSS 3 Systems:
  • compilers.yaml
  • packages.yaml

ParaView Support

Ascent ParaView support is in the src/examples/paraview-vis directory. This section describes how to configure, build, and run the example integrations provided with Ascent and visualize the results in situ using ParaView. ParaView pipelines are provided for all example integrations. We describe the ParaView pipeline for cloverleaf3d in detail in the ParaView Visualization section.

Setup spack

Install spack, modules and shell support.

  • Clone the spack repository:

    git clone
    cd spack
    source share/spack/
  • If the module command does not exist:

    • install environment-modules using the package manager for your system.
    • run add.modules to add the module command to your .bashrc file
    • Alternatively run spack bootstrap

Install ParaView and Ascent

  • For MomentInvariants (an optional ParaView module for pattern detection) visualization, patch ParaView to enable this module:
  • Install ParaView (any version >= 5.7.0). When running on Linux we prefer mpich, which can be specified by using ^mpich.
    • spack install paraview+python3+mpi+osmesa
    • for CUDA use: spack install paraview+python3+mpi+osmesa+cuda
  • Install Ascent
    • spack install ascent~vtkh+python
    • If you need Ascent built with vtkh you can use spack install ascent+python. Note that you need the specific versions of vtkh and vtkm that work with the version of Ascent built. Those versions can be read from scripts/uberenv/project.json (clone spack_url, branch spack_branch). paraview-package-momentinvariants.patch is already set up to patch vtkh and vtkm with the correct versions, but make sure it is not out of date.
  • Load required modules: spack load conduit; spack load python; spack load py-numpy; spack load py-mpi4py; spack load paraview

Setup and run example integrations

You can test Ascent with ParaView support by running the available integrations. Visualization images will be generated in the current directory. These images can be checked against the images in src/examples/paraview-vis/tests/baseline_images.

  • Test proxies/cloverleaf3d

    • Go to a directory where you intend to run the cloverleaf3d integration (for example, a member work directory such as cd $MEMBERWORK/csc340) so that the compute node can write there.

    • Create links to required files for cloverleaf3d:

      • ln -s $(spack location --install-dir ascent)/examples/ascent/paraview-vis/

      • ln -s $(spack location --install-dir ascent)/examples/ascent/paraview-vis/ for surface visualization.

      • Or ln -s $(spack location --install-dir ascent)/examples/ascent/paraview-vis/ for MomentInvariants visualization (Optional)

      • ln -s $(spack location --install-dir ascent)/examples/ascent/paraview-vis/ascent_actions.json
        ln -s $(spack location --install-dir ascent)/examples/ascent/paraview-vis/expandingVortex.vti
        ln -s $(spack location --install-dir ascent)/examples/ascent/proxies/cloverleaf3d/
    • Run the simulation

      $(spack location --install-dir mpi)/bin/mpiexec -n 2 $(spack location --install-dir ascent)/examples/ascent/proxies/cloverleaf3d/cloverleaf3d_par > output.txt 2>&1

    • Examine the generated images

  • Similarly, test proxies/kripke, proxies/laghos, proxies/lulesh, and synthetic/noise. After you create the appropriate links, as with cloverleaf3d, you can run these simulations with:

    • $(spack location --install-dir mpi)/bin/mpiexec -np 8 ./kripke_par --procs 2,2,2 --zones 32,32,32 --niter 5 --dir 1:2 --grp 1:1 --legendre 4 --quad 4:4 > output.txt 2>&1
    • $(spack location --install-dir mpi)/bin/mpiexec -n 8 ./laghos_mpi -p 1 -m data/cube01_hex.mesh -rs 2 -tf 0.6 -visit -pa > output.txt 2>&1
    • $(spack location --install-dir mpi)/bin/mpiexec -np 8 ./lulesh_par -i 10 -s 32 > output.txt 2>&1
    • $(spack location --install-dir mpi)/bin/mpiexec -np 8 ./noise_par  --dims=32,32,32 --time_steps=5 --time_delta=1 > output.txt 2>&1

Setup and run on Summit

  • Execute the Setup spack section

  • Configure spack

    • add a file ~/.spack/packages.yaml with the content detailed next. This ensures that we use spectrum-mpi as the MPI runtime.

      packages:
        spectrum-mpi:
          buildable: false
          externals:
          - modules:
            - spectrum-mpi/
            spec: spectrum-mpi@
        cuda:
          buildable: false
          externals:
          - modules:
            - cuda/10.1.168
            spec: cuda@10.1.168
    • Load the correct compiler:

      module load gcc/7.4.0
      spack compiler add
      spack compiler remove gcc@4.8.5
  • Compile spack packages on a compute node

    • On a busy Summit, you may get internal compiler errors when compiling llvm and paraview on the login node. To fix this, move the spack installation to $MEMBERWORK/csc340 and compile everything on a compute node.
      • First login to a compute node: bsub -W 2:00 -nnodes 1 -P CSC340 -Is /bin/bash
      • Install all spack packages as in Install ParaView and Ascent with -j80 option (there are 84 threads)
      • Disconnect from the compute node: exit.
  • Continue with Setup and run example integrations but run the integrations as described next:

    • Execute cloverleaf: bsub $(spack location --install-dir ascent)/examples/ascent/paraview-vis/summit-moment-invariants.lsf
    • To check if the integration finished use: bjobs -a

Nightly tests

We provide a docker file for Ubuntu 18.04 and a script that installs the latest ParaView and Ascent, runs the integrations provided with Ascent, runs visualizations using ParaView pipelines and checks the results. See tests/ for how to create the docker image, run the container and execute the test script.


  • Global extents are computed for uniform and rectilinear topologies but they are not yet computed for a structured topology (lulesh). This means that for lulesh and datasets that have a structured topology we cannot save a correct parallel file that represents the whole dataset.
  • For the laghos simulation accessed through Python extracts interface, only the higher order mesh is accessible at the moment, which is a uniform dataset. The documentation shows a non-uniform mesh but that is only available in the vtkm pipelines.

Using Ascent in Another Project

Under src/examples there are examples demonstrating how to use Ascent in a CMake-based build system (using-with-cmake) and via a Makefile (using-with-make). Under src/examples/proxies you can find example integrations using Ascent in the Lulesh, Kripke, and Cloverleaf3D proxy-applications. In src/examples/synthetic/noise you can find an example integration using our synthetic smooth noise application.

Building Ascent in a Docker Container

Under src/examples/docker/master/ubuntu there is an example Dockerfile which can be used to create an ubuntu-based docker image with a build of the Ascent github master branch. There is also a script that demonstrates how to build a Docker image from the Dockerfile and a script that runs this image in a Docker container. The Ascent repo is cloned into the image’s file system at /ascent, the build directory is /ascent/build-debug, and the install directory is /ascent/install-debug.

Building Ascent Dependencies Manually

In some environments, a Spack build of Ascent’s dependencies can fail, or a user may prefer to build the dependencies manually. This section describes how to build Ascent’s components. When building Ascent’s dependencies, it is highly recommended to fill out a host config file like the one located in /host-configs/boilerplate.cmake. This is the best way to avoid problems that can easily arise from C++ standard library conflicts, MPI library conflicts, and Fortran module conflicts, all of which are difficult to spot. Use the same CMake host-config file for each of Ascent’s dependencies; while this may bring in unused CMake variables and clutter the ccmake curses interface, it will help avoid problems. In the host config, you can specify options such as ENABLE_PYTHON=OFF, ENABLE_FORTRAN=OFF, and ENABLE_MPI=ON that will be respected by both Conduit and Ascent.
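A minimal sketch of such a shared host-config (the compiler paths are hypothetical; pin them once and reuse the same file for HDF5, Conduit, and Ascent):

```cmake
# One set of compilers and options shared across every dependency build
set(CMAKE_C_COMPILER   "/path/to/gcc/bin/gcc" CACHE PATH "")
set(CMAKE_CXX_COMPILER "/path/to/gcc/bin/g++" CACHE PATH "")
set(ENABLE_FORTRAN OFF CACHE BOOL "")
set(ENABLE_PYTHON  OFF CACHE BOOL "")
set(ENABLE_MPI     ON  CACHE BOOL "")
```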

HDF5 (Optional)

The HDF5 source tarball is available on the HDF5 Group’s website. While the source contains both an autotools configure and a CMake build system, use the CMake build system with your host config file. Once you have built and installed HDF5 into a local directory, add the location of that directory to the declaration of HDF5_DIR in the host config file.

curl > hdf5.tar.gz
tar -xzf hdf5.tar.gz
cd hdf5-1.8.16/
mkdir build
mkdir install
cd build
cmake -C path_to_host_config/myhost_config.cmake -DCMAKE_INSTALL_PREFIX=../install ..
make install

In the host config, add set(HDF5_DIR "/path/to/hdf5_install" CACHE PATH "").


Conduit (Required)

The version of Conduit we use is the develop branch. If HDF5_DIR is specified in the host config, then Conduit will build the relay io library. Likewise, if the config file has the entry ENABLE_MPI=ON, then Conduit will build parallel versions of the libraries. Once you have installed Conduit, add the path to the install directory to your host config file in the CMake variable CONDUIT_DIR.

git clone --recursive
cd conduit
mkdir build
mkdir install
cd build
cmake -C path_to_host_config/myhost_config.cmake -DCMAKE_INSTALL_PREFIX=../install ../src
make install

In the host config, add set(CONDUIT_DIR "/path/to/conduit_install" CACHE PATH "").


Now that we have all the dependencies built and a host config file for our environment, we can now build Ascent.

git clone --recursive
cd ascent
mkdir build
mkdir install
cd build
cmake -C path_to_host_config/myhost_config.cmake ../src -DCMAKE_INSTALL_PREFIX=path_to_install
make install

To run the unit tests to make sure everything works, do make test. If you install these dependencies in a public place in your environment, we encourage you to make your host config publicly available by submitting a pull request to the Ascent repo. This will allow others to easily build on that system by simply following the Ascent build instructions.

Asking Ascent how it's configured

Once built, Ascent has a number of unit tests. t_ascent_smoke, located in the tests/ascent directory, will print Ascent’s build configuration:

        "version": "0.4.0",
                "cpp": "/usr/tce/packages/gcc/gcc-4.9.3/bin/g++",
                "fortran": "/usr/tce/packages/gcc/gcc-4.9.3/bin/gfortran"
        "platform": "linux",
        "system": "Linux-3.10.0-862.6.3.1chaos.ch6.x86_64",
        "install_prefix": "/usr/local",
        "mpi": "disabled",
                        "status": "enabled",
                                "status": "enabled",
                                        "serial": "enabled",
                                        "openmp": "enabled",
                                        "cuda": "disabled"
                        "status": "enabled"
        "default_runtime": "ascent"

In this case, the non-MPI version of Ascent was used, so MPI reports as disabled.