Seeing at molecular resolution with visible light: microscopy and imaging physics

Professor Bernd Rieger, from Delft University of Technology’s Department of Imaging Physics, explains why computational microscopy may come to have a dramatic impact on the quality of the achievable image.

For 100 years, a key rule in microscopic imaging was that the resolution of a microscope is determined by the diffraction limit, which states that the smallest separable detail is given by the wavelength of the light divided by twice the numerical aperture, where the latter is a measure of how much light the objective lens captures from the sample. For visible light, with a wavelength of ~500nm, and high-quality immersion objectives, this results in a diffraction limit of ~200nm.
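
As a concrete check (the numerical aperture of 1.25 is a representative value for an immersion objective, not one quoted in the text), the diffraction limit works out as:

```latex
d = \frac{\lambda}{2\,\mathrm{NA}} = \frac{500\,\mathrm{nm}}{2 \times 1.25} = 200\,\mathrm{nm}
```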

The field of optical microscopy research has developed rapidly since the turn of the century: firstly due to the invention of staining via fluorescent proteins that can be genetically incorporated into cells, and secondly thanks to the invention of a technique called super-resolution fluorescence microscopy. Both findings have been awarded the Nobel Prize in Chemistry, in 2008 and 2014 respectively. While the earlier finding made it possible to specifically label certain parts of a cell with (green) fluorescent protein (GFP), the resulting images were still diffraction-limited in resolution. The second finding now allows us to make images on smaller length scales. What are the limits, and how are we getting there? Here, we provide a short summary of the current state of the art and of how we in Delft are pushing the boundaries to see macro-molecular complexes with visible light at the one-nanometre scale.

Single molecule localisation microscopy

The most popular technique for imaging beyond the diffraction limit is called ‘single molecule localisation microscopy’. In hindsight it is a conceptually straightforward idea, and experimentally it is an undemanding technique, which allowed many laboratories around the world to adopt it soon after its introduction around 2006. Resolutions on the order of tens of nanometres have increasingly been reported in the literature. The basic idea is to localise single, isolated fluorescent emitters. This can be realised by photo-chemically altering the properties of the dyes or proteins such that, most of the time, only a very small fraction of all molecules emit light. Many cycles of recording and switching will result in imaging most of the single-molecule emitters present in the sample.

Because we know in advance that only a single emitter is seen, one can fit a model of the system’s response function to the data, taking the relevant noise sources into account, and localise its centre with an uncertainty that scales inversely with the square root of the number of detected photons. Typical values result in uncertainties in the tens of nanometres, as opposed to >200nm in a classical diffraction-limited imaging approach. In essence, one has traded time resolution for spatial resolution: to gain an increase in spatial resolution, the sample must be static and many, many time frames with different single-emitter blinking events must be recorded.
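
To make the procedure concrete, here is a minimal, self-contained sketch (not the authors’ code) that simulates one emitter on a camera with Poisson shot noise and localises it with a least-squares Gaussian fit, a simple stand-in for the maximum-likelihood fits used in practice:

```python
# Minimal sketch (not the authors' code): localise one emitter by fitting a
# 2D Gaussian model of the microscope's response to a noisy camera image.
import numpy as np
from scipy.optimize import curve_fit

def gaussian2d(coords, x0, y0, sigma, amplitude, background):
    x, y = coords
    return (amplitude * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))
            + background).ravel()

rng = np.random.default_rng(0)
size, sigma_psf, n_photons = 15, 1.3, 1000   # pixels, PSF width, photon budget
x, y = np.meshgrid(np.arange(size), np.arange(size))
true_x, true_y = 7.3, 6.8

# Expected counts per pixel, then Poisson shot noise as in a real detector
expected = gaussian2d((x, y), true_x, true_y, sigma_psf,
                      n_photons / (2 * np.pi * sigma_psf**2), 2.0).reshape(size, size)
image = rng.poisson(expected)

# Least-squares fit, a simple stand-in for maximum-likelihood estimation
popt, _ = curve_fit(gaussian2d, (x, y), image.ravel(),
                    p0=(size / 2, size / 2, 1.0, float(image.max()), 1.0))
print(f"fitted centre ({popt[0]:.2f}, {popt[1]:.2f}) vs true ({true_x}, {true_y})")
# The centre error shrinks roughly as sigma_psf / sqrt(n_photons).
```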

However, the overall resolution of an image is a combination of the localisation accuracy of single emitters and the density with which the structure is labelled with fluorescent emitters. These two problems must be tackled together to see with ‘unlimited’ resolution, or at least with a resolution on the length scale of macro-molecules inside the cell. Once we can see structures with details on the order of one nanometre in 3D, no further improvement in spatial resolution will be needed for light microscopy.

Localisation accuracy

The localisation precision for a single fluorescent emitter scales as 1/√n_ph, where n_ph is the number of detected photons. Typically a few hundred to a few thousand photons can be detected per on-state, depending on the type of fluorophore and the imaging conditions. For very specialised imaging conditions higher counts have been reported, but the highest localisation precisions can currently be achieved in cryogenic fluorescence imaging.
There, precisions are in the sub-nanometre range because the fluorescent emitters hardly photobleach, so millions of photons or more can be collected before photobleaching occurs. On the downside, localisation-based super-resolution is problematic under cryogenic conditions, as established protocols for achieving sparsity, such as photo-conversion, work only poorly in practice, and only for a handful of emitters.
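
To see why the photon budget matters so much, a back-of-the-envelope comparison (the PSF width of 100nm and the photon counts are illustrative assumptions, not measured values):

```python
# Illustrative numbers only: the idealised precision sigma_psf / sqrt(n_photons)
# ignores background and pixelation, which make real precisions several times worse.
sigma_psf = 100.0  # nm, rough standard deviation of a visible-light PSF (assumed)

for regime, n_photons in [("room temperature", 2_000), ("cryogenic", 1_000_000)]:
    precision = sigma_psf / n_photons**0.5
    print(f"{regime:>16}: {n_photons:>9,} photons -> ~{precision:.1f} nm")
# room temperature:     2,000 photons -> ~2.2 nm
#        cryogenic: 1,000,000 photons -> ~0.1 nm
```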

Very recently, we presented a first possible solution to this problem using polarisation control, and consequently in Delft we will pursue this route to achieve the highest localisation precision. The idea is presented in Figure 1: the sample is mounted on a cryogenically cooled stage in the middle of two opposing objective lenses. The cryogenic stage is currently being developed in collaboration with Delmic BV, a small microscope company in Delft, the Netherlands, which specialises in building correlative light and electron microscopes.

The design of the microscope, shown here in an artist’s impression, is very different from that of a standard microscope. It has two objective lenses instead of one, and the sample is imaged from two sides at the same time. This special design is required to obtain true ‘3D images’. In current practice, numerous tricks are employed to retrieve 3D information from 2D images by deliberately adding optical aberrations to the emission light path before the camera. However, these tricks all share the downside that the resolution in the third dimension is poor; only by imaging from two sides at once will the resolution in the third dimension be as good as in the other two.

In addition, using two lenses doubles the amount of light collected from the sample, as the emitters radiate isotropically in all directions; this has the further advantage of an increased signal-to-noise ratio.
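
Since the localisation precision scales as 1/√n_ph, the benefit of doubled photon collection can be quantified directly (a worked consequence of the scaling above, not a figure from the text):

```latex
\sigma_{\mathrm{loc}} \propto \frac{1}{\sqrt{n_{\mathrm{ph}}}}
\quad\Rightarrow\quad
\frac{\sigma_{\mathrm{loc}}(2\,n_{\mathrm{ph}})}{\sigma_{\mathrm{loc}}(n_{\mathrm{ph}})}
= \frac{1}{\sqrt{2}} \approx 0.71
```

i.e. roughly a 1.4-fold improvement in precision from the second lens alone.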

Density of labelling

Attaching fluorescent emitters to the structure of interest in a specific manner is the basis of fluorescence light microscopy. However, this process is stochastic in nature and never 100% efficient. As a result, the picture we deduce from the acquired microscope images contains ‘holes’, i.e. missing information at random locations. This ultimately results in a reduced sampling of the structure by localisations, and thus impairs our ability to learn something from the data.
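
A common rule of thumb (a Nyquist-type sampling argument we add here for illustration; the exact criterion is debated) is that labels must sit at most half the target resolution R apart, which in d dimensions demands a density of roughly:

```latex
\rho \gtrsim \left(\frac{2}{R}\right)^{d},
\qquad\text{e.g. }
\rho \gtrsim \left(\frac{2}{20\,\mathrm{nm}}\right)^{2}
= 10^{4}\ \text{labels}/\mu\mathrm{m}^{2}
\ \text{for } R = 20\,\mathrm{nm} \text{ in 2D}
```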

This difficult problem can be mitigated if different instances of the same biological structure are properly fused (combined). In fact, this is the only way to improve the effective labelling density. This idea is called ‘data fusion’, a technique we have pioneered in the field of super-resolution light microscopy and which has been applied to elucidate important structural-biology questions about the Nuclear Pore Complex and HIV.

The fusion of multiple acquisitions into one reconstruction can mitigate the limiting factor of labelling density in cases where many identical copies of the same structure (called ‘particles’) can be imaged. The final reconstruction effectively contains many more localisations than each individual image, which results in a better signal-to-noise ratio and a useful resolution improvement. This idea is conceptually similar to single-particle analysis (SPA) in cryo-electron microscopy (cryo-EM), where image reconstructions reach resolutions in the Ångström range even though individual particles are barely visible in the raw, noisy acquisitions. For the early developments in cryo-EM methodology, the 2017 Nobel Prize in Chemistry was awarded to Jacques Dubochet, Joachim Frank, and Richard Henderson. Whereas the idea behind both imaging modalities is the same, the actual implementations must be very different, as the image formation is different. Finding the relative rotation and translation parameters for hundreds to thousands of particles in 3D is a complex and time-consuming task, which requires efficient computation.
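
As a toy illustration of why fusion helps (our sketch, not the authors’ pipeline; real methods must also estimate each particle’s unknown pose), consider many sparsely labelled copies of the same ring-shaped particle:

```python
# Toy sketch of particle fusion: many sparsely labelled copies of one
# ring-shaped 'particle' are combined so that the fused reconstruction
# samples the structure far more densely than any single copy. For
# clarity the copies here are already aligned; in practice the unknown
# rotation and translation of each particle must be estimated first.
import numpy as np

rng = np.random.default_rng(1)
n_sites, radius = 32, 50.0            # binding sites on a ring of 50 nm radius
angles = np.linspace(0, 2 * np.pi, n_sites, endpoint=False)
sites = radius * np.column_stack([np.cos(angles), np.sin(angles)])

def image_particle(labelling_efficiency=0.3, loc_precision=5.0):
    """One copy: a random subset of sites, blurred by localisation noise."""
    keep = rng.random(n_sites) < labelling_efficiency
    return sites[keep] + rng.normal(0, loc_precision, (keep.sum(), 2))

single = image_particle()
fused = np.vstack([image_particle() for _ in range(200)])
print(f"one particle: {len(single)} localisations; fused: {len(fused)}")
```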

We are therefore spending a substantial amount of time implementing the algorithms for parallelisation on GPU cards. The graphics cards of computers are ‘misused’ as small (and cheap) super-computers when the task suits parallelisation on their chip architecture. These cards are highly suitable for our problem, as it can be split into small, independent sub-problems that all run the same algorithm. This reduces computation times from a few days on a small server to a few hours.
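
A minimal sketch of what such offloading can look like, assuming the CuPy library as the GPU back-end (our choice for illustration; any CUDA framework would serve). Each particle-to-particle alignment score is independent of the others, which is exactly the structure GPUs exploit:

```python
# Sketch only: score candidate alignments between particle images on the GPU
# with FFT-based cross-correlation, batched over all particles at once.
import cupy as cp

particles = cp.random.random((1000, 64, 64), dtype=cp.float32)  # stand-in images
spectra = cp.fft.rfft2(particles)        # one batched FFT over all 1000 particles

def best_shift_score(i, j):
    """Peak of the cross-correlation between particles i and j."""
    xcorr = cp.fft.irfft2(spectra[i] * cp.conj(spectra[j]))
    return float(xcorr.max())

print(best_shift_score(0, 1))
```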

Correlative light and electron microscopy

In Delft we have made exciting progress in cryogenic fluorescence imaging and have pioneered the fusion of information from multiple image acquisitions in recent years. Based on these results, I am convinced that computational microscopy, that is, the correct combination of microscopic imaging with dedicated computational methods, can have a dramatic impact on the achievable image: its resolution and its useful information content.
The possibility of doing structural biology with light, instead of exclusively with complex electron microscopy, is in itself already thrilling. Even more thrilling is the combination of both techniques, which offers the prospect of adding direct functional identification of protein complexes, via specific labelling in light microscopy, to the ‘grey’ images of electron microscopy.

Professor Bernd Rieger
Quantitative Imaging Group
Department of Imaging Physics
Faculty of Applied Sciences
Delft University of Technology
+31 (0)15 27 88574
B.Rieger@tudelft.nl
Tweet @ImPhys_TUDelft
www.homepage.tudelft.nl/z63s8/
