
Tutorial: Rendering Athena Data in Blender

rabagoi edited this page Aug 7, 2024 · 4 revisions



This is a tutorial on rendering your Athena++ simulation data in Blender, a 3D modeling program. Blender renderings can be a striking addition to presentations, so I encourage you to look through this tutorial if you are interested!

This guide was written using Blender 4.0. The UI may change between versions.

Part 1: Converting the data to VDB files

On its own, Blender is unable to read the standard outputs produced by Athena++ (vtk and hdf5). Blender uses the OpenVDB format (.vdb) to represent sparse volumetric objects.

Conversion to the vdb format requires the pyopenvdb module. It can be tricky to install on its own, but Blender's bundled Python comes with it pre-installed, so it is possible to have Blender itself do the conversion.

However, Blender's copy of python does not come with the main modules used to open Athena++ files, such as h5py. To install these modules for Blender's local Python console, navigate to the Scripting tab and type the following commands into the Python console:

import sys, subprocess
subprocess.run([sys.executable, '-m', 'pip', 'install', '--user', 'h5py'])

or replace h5py with whichever additional modules are needed to load the files you need.

If modules cannot be loaded into Blender's local Python console easily, it is always possible to convert the Athena data into a NumPy .npz file as an intermediate step, then convert that to vdb.
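As a sketch of that route (the array shape is a stand-in, and the athena_read call is shown only as a comment since it runs in your system Python, not Blender's), the .npz round-trip looks like this:

```python
import numpy as np

# Stand-in for an Athena++ field; in practice, with your system Python:
#   data = athena_read.athdf('file.out1.00000.athdf'); rho = data['rho']
rho = np.random.rand(16, 16, 16).astype(np.float32)

# Step 1 (system Python, where h5py/athena_read are available):
# dump the needed arrays into a plain NumPy archive
np.savez('file.npz', rho=rho)

# Step 2 (Blender's Python): read them back with NumPy alone -- no h5py needed
with np.load('file.npz') as npz:
    rho_loaded = npz['rho']

assert np.array_equal(rho, rho_loaded)
```

Blender's Python already ships with NumPy, so only the first step needs your system's Python environment.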

Once you can access pyopenvdb and the Athena++ data in the same console, a Python script can be used to copy the data from the Athena++ files into a VDB object. An example script is listed below.

import pyopenvdb as vdb
import sys
sys.path.append('/path/to/athena/vis/python')
import athena_read

# Import the ATHENA++ data (hdf5 example)
data = athena_read.athdf('file.out1.00000.athdf')

# List of data fields that are to be added to the vdb file
fields = ['rho', 'press', 'vel1', 'vel2', 'vel3']


DataCube = []

# Iterate over each field
for field in fields:

    # Create a new vdb FloatGrid and populate the values
    grid = vdb.FloatGrid()
    grid.copyFromArray(data[field])
    # Sanity check: every value in the array should now be an active voxel
    assert grid.activeVoxelCount() == data[field].size
    grid.name = field
    grid.gridClass = vdb.GridClass.FOG_VOLUME
    
    # Add the new grid to the final datacube
    DataCube.append(grid)


# Create a new vdb file with the datacube values
vdb.write('file.vdb', grids=DataCube)

Some notes:

  • The above script does not scale the grid, so when imported into Blender, 1 grid cell = 1 Blender unit. This can result in objects that are many thousands of units long. To scale the voxels down to a size that matches, e.g., simulation units, add the following command:

    grid.transform = vdb.createLinearTransform(voxelSize=scalefactor)

  • During the file conversion, opening the Athena files will require enough computer memory to hold all of the data arrays that must be put into the vdb file. This may be difficult for large simulation files.

Other Coordinate Systems

At the time of writing, VDB files are written in Cartesian coordinates and support only linear transformations. Data calculated in other coordinate systems must first be interpolated onto a Cartesian grid.
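As an illustrative sketch of such an interpolation (not part of the original pipeline): SciPy's RegularGridInterpolator can resample a spherical-polar field onto a Cartesian box. The grid sizes and axis ordering below are assumptions; real Athena++ athdf arrays are indexed [k, j, i] = (x3, x2, x1), so transpose to match.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic spherical-polar grid, axes ordered (r, theta, phi) here
r = np.linspace(1.0, 2.0, 32)
theta = np.linspace(0.1, np.pi - 0.1, 32)
phi = np.linspace(0.0, 2.0 * np.pi, 32)
rho = r[:, None, None] * np.ones((1, 32, 32))   # density = r, easy to verify

interp = RegularGridInterpolator((r, theta, phi), rho,
                                 bounds_error=False, fill_value=0.0)

# Cartesian target grid covering the spherical shell
n = 48
x = np.linspace(-2.0, 2.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')

# Spherical coordinates of every Cartesian voxel
R = np.sqrt(X**2 + Y**2 + Z**2)
Theta = np.arccos(np.clip(Z / np.maximum(R, 1e-12), -1.0, 1.0))
Phi = np.mod(np.arctan2(Y, X), 2.0 * np.pi)

# Sample the spherical data at those coordinates; points outside the shell get 0
rho_cart = interp(np.stack([R, Theta, Phi], axis=-1))
print(rho_cart.shape)  # a dense (48, 48, 48) cube, ready for grid.copyFromArray
```

Linear interpolation is usually adequate for visualization; for very coarse grids a higher-order method may reduce artifacts.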


Part 2: Rendering the data in Blender

Now that the data is in the proper file format, it can be opened in Blender. Open Blender to get started.


From the main menu (3D viewport), navigate to Add > Volume > Import VDB and select the vdb file to import the Athena data into Blender.


The VDB object should now be loaded into Blender. It should be positioned so that the origin is at the first grid cell (i,j,k=0), and scaled to the units set during the file conversion (see above). This puts the object off to the side, so you can move it around, rotate, and rescale it to your liking now.

Some shortcuts for movement commands:

  • G: Translate Object

  • R: Rotate Object

  • S: Scale Object

  • N: Opens the View Sidebar at the top right, which gives additional context for an object's current properties.

While manipulating an object, you can also press X, Y, or Z to limit the transformation to the corresponding axes.

Note: If you are seeing strange triangular artifacts when viewing your data, it is likely due to View Clipping: the viewport clips out geometry that is too close or too far away. In the View Sidebar > View tab, you can set the Clip Start and End distances manually. It may also help to switch the viewer to an Orthographic view (Numpad 5, or via the View menu).


Part 3: Adding a Simple Shader

So far, our data has loaded properly, but nothing is showing up in the Viewport other than the box that holds the data. This is because we haven't told Blender how light should interact with our object - how it is absorbed and scattered, whether the object itself emits light, and so on. Without this information, light passes through without interacting, which makes our object invisible.

In Blender, the lighting properties of an object are known as its Materials or Shaders, and we will add a shader to our data here. But first, to make sure we can see the lighting effects properly, make sure that Blender is using the correct rendering engine.

In the Sidebar, go to Scene > Render Engine and make sure that the engine is set to Cycles. By default, Blender uses the EEVEE engine, which is faster and better for real-time graphics, but does not use actual raytracing and will not show volumetrics correctly.

IMPORTANT NOTE: The Cycles engine is very computationally intensive, and can easily use up all the resources of a smaller computer. If realtime editing becomes difficult, switch the render engine back to EEVEE while making changes, then switch back to Cycles only when you are ready to produce a final image.


Once the rendering engine is set, click on the Shading tab along the top menu to access the Shading layout.


The central two panels will be the most important to us. The top panel shows the 3D layout, but renders objects using their given shaders. Our data has no shader yet, so it is invisible. The bottom panel is the Node Editor, which is where we will be constructing our new shader. Click on the [+ New] button in the middle to add a new shader.

Some nodes will have been created. The first is the Principled Volume node, which gathers information about the object's volumetric light-scattering properties and passes it to the Material Output node, allowing the object to be rendered. For now it only uses the default values, so it will not look like much.

To use the data from the VDB object, click on Add > Input > Attribute to create a new Attribute Node. Set the Name field at the bottom to the name of the data field in the VDB object, i.e. rho for density. Finally, connect the Fac output of the Attribute Node to the Density input of the Principled Volume Node by clicking and dragging.


At this point, you should start to see something resembling the shape of your data. Depending on the lighting, it may be dim; you can adjust the positioning and strength of the light by selecting the Light object and transforming it. The size of the data units may also be an issue. If the input data needs to be scaled, you can perform calculations on it by adding Math Nodes (Add > Converters > Math), which can perform most basic operations before connecting to the Volume node.


Rendering An Image

To render a full image, press F12 or select Render > Render Image from the top menu. Blender (if using the Cycles engine) will start ray-casting from the camera to the objects in its view until it completes its image. The settings for rendering an image are in the Scenes > Render tab on the sidebar. Under the Rendering menu, I recommend setting a time limit for the render - a fully sampled render can take a very long time, and you may want to limit the render to a few minutes each for test images.

Image rendered from the example data.  Render time: 5 min.

The camera itself is an object that can be moved around in 3D space - you should see it as a pyramid-shaped object above the scene, or in the list of objects at the top right. You can view your object from the camera's viewpoint by pressing the Camera button on the right side of the viewport, or Numpad 0. If you want to line up the camera with your current viewing angle, select View > Align View > Align Active Camera to View from the drop-down menu. If you select the camera, you can change its properties by going to the Data tab in the sidebar (green camera icon).

Top: Moving the camera around in the scene.  Bottom:  Image rendered from the new camera view, with a perspective lens of 35mm.

From here, it is up to you how you want to display the data! Add colors, starry backgrounds, render it from whatever angle you want, there's lots you can do!

Further reading:

  • Blender Manual

  • Shader Nodes

The example VDB and Blender session shown above are also available for download below:

test.vdb

example.blend


Part 4: Adding Color

A simple shader as outlined above will only render data in a single color, or in shades of that color (grayscale by default). To add more color to the scene, we must enlist the use of the ColorRamp Node.

The ColorRamp Node takes an input from 0 to 1 and returns a color along a gradient. Just like creating colormaps for plots, the (normalized) density of your data can be used as the input to get a color for each point in the dataset, which can be routed to the Color input of the Principled Volume node.


It is recommended to use the Math nodes and the Map Range node to manipulate and normalize the data before feeding it into the ColorRamp node (e.g., normalizing to log(rho), (Sigma-Sigma_0)-1, etc.). If you wish to use colormaps from matplotlib and other plotting software, I recommend looking into the following addon, which imports colormaps as ColorRamp nodes: (https://github.com/TheJeran/Blender-Colormaps). Note that using this addon requires installing the related package (matplotlib, colorcet, etc.) into Blender's local Python.
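The normalization those nodes perform can be prototyped outside Blender first, which is a handy way to pick the From Min / From Max values for the Map Range node. A minimal NumPy sketch of the Math (Logarithm) + Map Range arithmetic, where the bounds lo and hi are placeholders you would tune to your data:

```python
import numpy as np

# Stand-in densities; in Blender this would be the 'rho' attribute
rho = np.array([1e-4, 1e-2, 1.0, 1e2])

# Same arithmetic as a Math (Logarithm) node followed by a Map Range node:
# map log10(rho) from [lo, hi] onto [0, 1], clamped at both ends
lo, hi = -4.0, 2.0                  # chosen to bracket the data's dynamic range
t = (np.log10(rho) - lo) / (hi - lo)
t = np.clip(t, 0.0, 1.0)            # this is what the ColorRamp receives
```

Here t spans 0 to 1 across the six decades of density, so each value picks a distinct color along the gradient.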


Time Series

For sequentially numbered outputs, Blender can automatically load an entire sequence at once by checking the Sequence box in the VDB Properties tab. This is useful if you have a sequence of outputs, and you are interested in rendering each of the output files as a time series.
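Blender identifies a sequence from a trailing frame number in the filename, so it helps to name the converted files accordingly. A small sketch of that bookkeeping (the Athena++ naming pattern is assumed, and the actual conversion call from Part 1 is elided):

```python
import re

# Example Athena++ output names; in practice use sorted(glob.glob('file.out1.*.athdf'))
athdf_files = ['file.out1.00000.athdf', 'file.out1.00001.athdf', 'file.out1.00002.athdf']

vdb_names = []
for path in athdf_files:
    # Pull the output number, e.g. 'file.out1.00042.athdf' -> 42
    frame = int(re.search(r'\.(\d+)\.athdf$', path).group(1))
    # A trailing frame number is what lets Blender detect the series
    vdb_names.append(f'volume_{frame:04d}.vdb')
    # ...convert `path` to vdb_names[-1] with the script from Part 1...

print(vdb_names)
```

With the files named this way, importing any one of them with the Sequence box checked picks up the whole series.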


Each VDB file will be loaded as a single frame in the Timeline, visible at the bottom of the screen. You can scroll through the Timeline to flip through the different frames and see how they are rendered. Information about the VDB object's size, shaders, and the like are preserved between frames, as Blender is only swapping out the data contained within the object.

To render a full animation, select Render > Render Animation from the top menu or Ctrl + F12. Make sure to use the Timeline to set the start and ending frames, as well as the output format in the Output Tab, before you start rendering!

Rendering of a 3D Kelvin-Helmholtz simulation.


Preparing Blender for Use in Clusters

To use Blender properly on clusters, any required packages should be installed into Blender directly before moving it to the cluster. Go to the Scripting tab and use the Python console, or invoke Blender's bundled Python directly (located in path/to/blender/4.0/python/bin), and use the modified installation command below to install packages directly into Blender's site-packages folder:

import sys, pip
# Adjust 'python3.10' to match the Python version bundled with your Blender
pip.main(['install', 'module_name', '--target', sys.exec_prefix + '/lib/python3.10/site-packages'])

If the installation is successful, listing the items in Blender's site-packages folder should show your required packages.

Running Blender From the Command Line

A typical use of Blender from the command line would look something like this:

blender -b --python example_script.py

The -b option runs Blender in the background, without opening the GUI. The -P (or --python) option runs the Python script that follows it.


This tutorial is in progress and more will be written in the future! Parts to be written:

  • Camera Flythroughs
  • Other data files?