Those who work closely with me know that I am part of a project entitled Neurodome (www.neurodome.org). The concept is simple. To better understand our motivations to explore the unknown (e.g. space), we must look within. To accomplish this, we are creating a planetarium show using real data: maps of the known universe, clinical imaging (fMRI, CT), and fluorescent imaging of brain slices, to name a few. From our web site:
Humans are inherently curious. We have journeyed into space and have traveled to the bottom of our deepest oceans. Yet no one has ever explained why man or woman “must explore.” What is it that sparks our curiosity? Are we hard-wired for exploration? Somewhere in the brain’s compact architecture, we make the decision to go forth and explore.
The NEURODOME project is a planetarium show that tries to answer these questions. Combining planetarium production technology with high-resolution brain imaging techniques, we will create dome-format animations that examine what it is about the brain that drives us to journey into the unknown. Seamlessly interspersed with space scenes, the NEURODOME planetarium show will zoom through the brain in the context of cutting-edge astronomical research. This project will present our most current portraits of the neurons, networks, and regions of the brain responsible for exploratory behavior.
To embark upon this journey, we are launching a Kickstarter campaign next week, which you will be able to find here. Two trailers and a pitch video showcase our techniques and our vision. For now, you can see our “theatrical” trailer, which combines some real data with CGI, below. Note that the other trailer I plan to embed in a later post will include nothing but real data.
I am both a software developer and curator of clinical data in this project. This involves acquisition of high-resolution fMRI and CT data, followed by rendering of these slices into three-dimensional objects that can be used for our dome-format presentation. How do we do this? I will begin by explaining how I reconstructed a human head from sagittal sections of CT data. In a later post, I will describe how we can take fMRI data of the brain and reconstruct three-dimensional models by a process known as segmentation.
How do we take a stack of images like this:
and convert it into three-dimensional objects like these:
These renders allow us to transition, in a large-scale animation, from imagery outside the brain to fMRI segmentation data and finally to high-resolution brain imaging. The objects are also convenient because they can be imported into most animation suites. To render stacks of images, I created a simple script in MATLAB. A stack of 131 sagittal sections, each at 512×512 resolution, was first imported. The script then defines a rectangular grid in 3D space, and the pixel data from each CT slice is interpolated and mapped onto this 3D mesh. For example, we can take a 512×512 two-dimensional slice and interpolate it so that the new resolution is 2048×2048. Note that this does not create new data; it simply creates a smoother gradient between adjacent points. If there is interest, I can expand upon the process of three-dimensional interpolation in a later post.
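Our pipeline uses MATLAB, but the interpolation step can be sketched in a few lines of Python with SciPy. The slice here is random placeholder data standing in for a real 512×512 CT section; the 4× zoom factor matches the 512×512 → 2048×2048 upsampling described above.

```python
import numpy as np
from scipy.ndimage import zoom

# Placeholder for one 512x512 sagittal CT slice (real data would be
# loaded from the scanner's image files).
slice_2d = np.random.rand(512, 512)

# Upsample 4x in each dimension using cubic (order-3) spline interpolation.
# This smooths the gradient between existing samples; it adds no new data.
slice_hi = zoom(slice_2d, 4, order=3)

print(slice_hi.shape)  # (2048, 2048)
```

The same idea extends to three dimensions by passing the whole volume and a per-axis zoom factor, e.g. `zoom(volume, (1, 4, 4))` to upsample within each slice only.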
I then take this high-resolution structure, mapped to the previously defined three-dimensional grid, and create an isosurface. MATLAB's isosurface function takes three-dimensional volume data and an isovalue, which in our case corresponds to a particular intensity in the CT data. The script searches the volume for points at that isovalue and connects the dots, producing a surface on which every point has the same intensity. The resulting vertices and faces are stored in a "structure" in our workspace, which the script finally converts to a three-dimensional "object" file (.obj). Such object files can be used in any animation suite, such as Maya or Blender; using Blender, I created the animations shown above. Different isovalues correspond to different parts of the image: a value of ~1000 corresponds to skin in our CT data, and a value of ~2400 corresponds to bone. Thus, we can take a stack of two-dimensional images and create beautiful structures for exploration in our planetarium show.
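For readers without MATLAB, the isosurface-to-.obj step can be sketched in Python using scikit-image's marching cubes, which plays the role of MATLAB's isosurface function. The volume below is a synthetic sphere of bone-like intensity rather than real CT data, and the isovalue of 1200 is chosen to sit between the "air" and "bone" intensities of this toy volume:

```python
import numpy as np
from skimage import measure

# Synthetic stand-in for the interpolated CT volume: a sphere of
# "bone-like" intensity (~2400) in an empty (zero-intensity) field.
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = 2400.0 * (np.sqrt(x**2 + y**2 + z**2) < 0.8)

# Extract the surface where intensity crosses the chosen isovalue.
# With real CT data, ~1000 would pick out skin and ~2400 bone.
verts, faces, normals, values = measure.marching_cubes(volume, level=1200.0)

# Write a minimal Wavefront .obj file: vertex lines, then faces,
# which index vertices starting from 1 rather than 0.
with open("head.obj", "w") as f:
    for vx, vy, vz in verts:
        f.write(f"v {vx} {vy} {vz}\n")
    for a, b, c in faces:
        f.write(f"f {a + 1} {b + 1} {c + 1}\n")
```

The resulting head.obj imports directly into Blender or Maya, just like the files produced by the MATLAB script.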
In summary, the process is as follows:
- A stack of sagittal CT images is imported into MATLAB.
- The script interpolates these images to increase the image (but not data) resolution.
- A volume is created from the stack of high-resolution images.
- The volume is “sliced” into a surface corresponding to just one intensity level.
- This surface is exported to animation suites for your viewing pleasure.
This series will continue in later posts. I plan to describe more details of the project, and I will delve into particulars of each post if there is interest. You can find more information on this project at http://www.neurodome.org.