All that scanning instruments can observe are projection images: views of the object from a single angle, which give only limited information about its 3D structure. To really see the object in 3D, you have to compute, from images taken at many angles, what the object looks like. How to do this has been known since the 1960s and 1970s, when the first CT scanners were developed. The classical algorithms are based on the so-called Radon inversion formula. They are computationally efficient, so they could already be used at that time, but their main drawback is that they need a lot of information: images from the full range of angles, all of very high quality. That can be a big problem, for example because of dose limitations: X-rays are harmful, and you do not want to use too much of them.
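The idea of a projection image can be made concrete with a small sketch. The code below is illustrative, not any scanner's actual pipeline: it simulates parallel-beam projections of a toy 2D object by rotating it and summing along one axis, and stacks projections over many angles into a sinogram, the input a reconstruction algorithm would work from.

```python
import numpy as np
from scipy.ndimage import rotate

def project(image, angle_deg):
    """Simulate one parallel-beam projection: rotate the object,
    then integrate (sum) the density along the vertical axis."""
    rotated = rotate(image, angle_deg, reshape=False, order=1)
    return rotated.sum(axis=0)

# A toy 2D "object": a 16x16 square of density 1 in an empty field.
obj = np.zeros((64, 64))
obj[24:40, 24:40] = 1.0

# A sinogram stacks projections taken over a range of angles.
sinogram = np.array([project(obj, a) for a in np.arange(0, 180, 10)])
print(sinogram.shape)  # 18 projections, 64 detector pixels each
```

Each row of the sinogram is all a detector sees from one angle; recovering the 2D object from these rows is exactly the inversion problem the Radon formula solves when enough high-quality angles are available.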
If you start taking fewer and fewer images to reduce the radiation, you no longer have enough information to compute an accurate 3D object purely from the data. The way to deal with this, and this is what researchers are doing at the moment, is to incorporate additional knowledge about the object you are imaging. In a medical case, for example, you know a patient is made up of soft tissue, bone and a few other densities, but you know it does not contain aluminium, for instance. By building this prior knowledge into the algorithm that computes the images, you can get high-quality images from just a few projections, but at the expense of a lot of computation time, because these new algorithms are far more computationally intensive than the classical ones.
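A minimal sketch of this idea, not the researchers' actual algorithm: an algebraic reconstruction loop (Kaczmarz sweeps) alternated with a prior-knowledge step that snaps each voxel to one of a few allowed density levels, in the spirit of discrete tomography. The toy system below is deliberately underdetermined, so the data alone does not pin down the object, but the prior does.

```python
import numpy as np

def art_with_prior(A, b, levels, iters=50, relax=0.5):
    """Kaczmarz sweeps over the measurement equations, alternated
    with a prior step: snap each voxel to the nearest allowed
    density level (e.g. air / soft tissue / bone)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for i in range(A.shape[0]):          # one Kaczmarz sweep
            a = A[i]
            x = x + relax * (b[i] - a @ x) / (a @ a) * a
        # Enforce the prior: only the known densities are allowed.
        x = levels[np.abs(x[:, None] - levels[None, :]).argmin(axis=1)]
    return x

# Toy 2x2 binary object, measured only by its row and column sums
# (4 unknowns, but the 4 equations are rank-deficient).
true_x = np.array([1.0, 1.0, 0.0, 1.0])
A = np.array([[1, 1, 0, 0],      # row sums
              [0, 0, 1, 1],
              [1, 0, 1, 0],      # column sums
              [0, 1, 0, 1]], dtype=float)
b = A @ true_x
x = art_with_prior(A, b, levels=np.array([0.0, 1.0]))
print(x)  # recovers [1, 1, 0, 1], the unique binary solution
```

The inner loop is cheap per step but must be repeated many times, which is why these data-efficient methods trade measurement dose for computation time.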
At the moment, Joost Batenburg and his colleagues make heavy use of GPU computing, that is, computing on the graphics processor. For medium-scale volumes of 500 × 500 × 500 voxels this is still sufficient: with a workstation containing one or two powerful GPUs, these images can be computed in a matter of minutes. For very large datasets, for example those coming out of big scanning institutes, the researchers have to resort to cluster computing with many GPUs to do the computation in a reasonable time.
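A back-of-the-envelope calculation shows why the workstation-versus-cluster boundary falls roughly where it does. Assuming single-precision storage (4 bytes per voxel) and, hypothetically, a 4000³ volume as the "very large" case:

```python
# Memory footprint of a cubic reconstruction volume of n^3 voxels,
# stored as single-precision floats (4 bytes each).
def volume_gib(n):
    return n ** 3 * 4 / 2 ** 30

print(f"{volume_gib(500):.2f} GiB")   # ~0.47 GiB: fits on one GPU
print(f"{volume_gib(4000):.2f} GiB")  # ~238 GiB: needs many GPUs
```

Because memory grows with the cube of the edge length, an eightfold increase in resolution multiplies the footprint by 512, pushing the problem from a single workstation onto a cluster.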
The model the researchers use at the moment is a form of post-processing: first acquire the images, then do the reconstruction afterwards. Joost Batenburg's vision for the future is that tomographic 3D scanning becomes an interactive science: while the object is being scanned, the 3D image is computed, visualized and analyzed on the fly, so that the user can interact with the scanning process itself. This is particularly important for in situ scanning applications, such as experiments on foam formation or bubble tracking inside the tomographic scanner: the object is evolving, and you want to constantly keep track of what is happening inside the scanner in full 3D.
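The shift from post-processing to interactive scanning can be sketched with an online update rule: instead of waiting for the full dataset, each measurement is folded into the current estimate the moment it arrives. This is an illustrative Kaczmarz-style step on a synthetic stream, not the researchers' actual pipeline.

```python
import numpy as np

def online_update(x, a, b_i, relax=1.0):
    """Fold one newly measured projection equation (row a, value b_i)
    into the current estimate x as soon as it arrives."""
    return x + relax * (b_i - a @ x) / (a @ a) * a

# Hypothetical stream: measurement rows arrive one at a time while
# the object sits in the scanner; x is always up to date.
rng = np.random.default_rng(0)
true_x = rng.random(8)          # the (unknown) object, 8 voxels
x = np.zeros(8)                 # the live estimate
for _ in range(2000):           # measurements streaming in
    a = rng.random(8)           # one new projection geometry
    x = online_update(x, a, a @ true_x)
```

Because the estimate is refined incrementally, it can be rendered and inspected at any point during acquisition, which is exactly what an interactive, in situ workflow requires.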
To make this possible, it is first of all necessary to develop a new generation of algorithms, because the current ones are not sufficient. Secondly, researchers have to stop treating the scanner as an instrument separate from the computing. The two need to be integrated: the scanner on one side, a high-performance cluster, perhaps consisting of a hundred nodes with many GPUs, on the other, fully connected by a high-speed link and all available in one facility.
At the moment, Joost Batenburg and his colleagues have a good grip on the algorithms needed for this. He expects that at least another five years of research will be needed to turn it into a practical proof of concept.