In single-molecule localization microscopy methods such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), samples are imaged over multiple rounds; in each round a random subset of fluorophores is activated and imaged at diffraction-limited resolution. The precise positions of these individual emitters are determined, and after multiple rounds a composite super-resolution image is generated from the localized fluorophores.
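
To make the acquisition principle concrete, the following toy simulation (a minimal sketch with assumed parameter values, not code from either paper) mimics the process: in each frame a sparse random subset of emitters is activated, each is localized with limited precision, and the accumulated localizations are rendered as a super-resolution histogram.

```python
# Toy simulation of the PALM/STORM principle; all parameter values
# here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_emitters = 2000
truth = rng.uniform(0, 10, size=(n_emitters, 2))  # ground-truth positions (um)

n_frames = 5000
p_active = 0.001    # sparse activation: on average ~2 emitters per frame
precision = 0.02    # localization precision (um), i.e., ~20 nm

localizations = []
for _ in range(n_frames):
    active = rng.random(n_emitters) < p_active     # random subset switches on
    noise = rng.normal(0.0, precision, size=(active.sum(), 2))
    localizations.append(truth[active] + noise)    # localize with finite precision
localizations = np.concatenate(localizations)

# Composite super-resolution image: a 2D histogram of all localizations
# (10 nm bins), built up over many rounds of sparse imaging.
image, _, _ = np.histogram2d(localizations[:, 0], localizations[:, 1],
                             bins=1000, range=[[0, 10], [0, 10]])
print(f"{len(localizations)} localizations from {n_frames} frames")
```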

A visual representation of the Deep-STORM network architecture. Reproduced with permission from Nehme et al. (2018), The Optical Society.

Localization microscopy tends to require thousands of rounds of imaging to generate a high-resolution image, because sparse emission is preferred in each round. Sparse emission minimizes the likelihood of simultaneous emission from closely positioned fluorophores, which would confound their precise localization. The end result is long acquisition times, which limit throughput and preclude most live-cell applications.

Many existing methods address these challenges, including some that allow accurate localization in images where fluorophore emission is dense. Although such methods work well in some cases, they can compromise image quality and resolution. For this reason, two groups independently developed approaches to improve the acquisition speed of PALM/STORM while maintaining image resolution. In both cases, the researchers used deep learning to generate super-resolution images from a relatively small number of frames of localization microscopy data. Deep learning is a type of machine learning that uses artificial neural networks to learn a mapping between input and output data; once trained, these models can predict outputs from new inputs.
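
As a minimal illustration of that input-to-output mapping (a generic toy sketch in PyTorch, not taken from either paper), a small network can be fit to example pairs and then used for prediction:

```python
import torch
import torch.nn as nn

# A tiny network that learns a mapping from inputs x to outputs y.
net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 8))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.randn(256, 8)   # example inputs
y = torch.sin(x)          # corresponding target outputs (an arbitrary mapping)

for step in range(500):   # standard supervised training loop
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    optimizer.step()

prediction = net(torch.randn(1, 8))  # once trained, predict from a new input
```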

One of the two methods, artificial neural network accelerated PALM (ANNA-PALM), was developed by Christophe Zimmer and his student Wei Ouyang at the Institut Pasteur. In ANNA-PALM, an artificial neural network is trained on localizations from a small number of frames paired with dense localization data obtained from long-duration acquisitions of the same structures. The trained network can then produce accurate super-resolution images from reconstructions built from only a small number of frames. “This strategy resembles how humans recognize objects in noisy or blurred images,” explains Zimmer. The researchers used their approach to generate high-quality images of microtubules, nuclear pores and mitochondria, and found that they were able to obtain super-resolution images of more than a thousand cells in around three hours, an astonishing feat for the field.
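
Schematically, the training setup described here is an image-to-image regression. The sketch below uses a stand-in convolutional network with assumed shapes and placeholder data; it is not the actual ANNA-PALM model:

```python
import torch
import torch.nn as nn

class SparseToDense(nn.Module):
    """Hypothetical minimal CNN mapping sparse to dense reconstructions."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.body(x)

net = SparseToDense()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

# Each training pair: input = reconstruction from the first few frames,
# target = reconstruction from the full long acquisition of the same field.
sparse_input = torch.rand(4, 1, 128, 128)   # placeholder sparse images
dense_target = torch.rand(4, 1, 128, 128)   # placeholder matched dense images

for step in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(net(sparse_input), dense_target)
    loss.backward()
    optimizer.step()
```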

Another method, Deep-STORM, was developed by Yoav Shechtman, Tomer Michaeli and their joint student Elias Nehme at the Technion – Israel Institute of Technology. In Deep-STORM, no a priori knowledge of the underlying object is used. Instead, the artificial neural network ‘learns’ to extract information directly from images of dense blinking emitters, after being trained on images with known emitter positions. The trained model can then infer emitter positions in images where emission is dense and rapidly output a super-resolution image of the structure. Because densely labeled samples can be imaged, fewer frames are needed, which reduces the total acquisition time. Using their approach, the researchers were able to outperform existing algorithms for image reconstruction from densely labeled frames, on both synthetic data and images of microtubules.
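
Because the training targets are known emitter positions, training data for this kind of network can be simulated. The sketch below (assumed parameter values, not the authors' code) renders a dense, noisy diffraction-limited frame as input and a sharp localization map on a finer grid as target:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_pair(n_emitters=30, size=64, upsample=8, psf_sigma=1.5):
    xy = rng.uniform(0, size, size=(n_emitters, 2))  # random emitter positions

    # Input: each emitter contributes a Gaussian approximation of the PSF;
    # dense activation means many overlapping spots, plus shot noise.
    yy, xx = np.mgrid[0:size, 0:size]
    frame = np.zeros((size, size))
    for x, y in xy:
        frame += np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * psf_sigma ** 2))
    frame = rng.poisson(frame * 100 + 10).astype(float)  # Poisson noise + background

    # Target: the same emitter positions marked on an 8x finer grid.
    target = np.zeros((size * upsample, size * upsample))
    idx = np.clip((xy * upsample).astype(int), 0, size * upsample - 1)
    target[idx[:, 1], idx[:, 0]] = 1.0
    return frame, target

frame, target = simulate_pair()  # one (dense frame, ground-truth map) pair
```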

Although the two approaches differ in their network architectures and training strategies, both can be used to generate super-resolution images that are appropriate for quantitative analysis. One important distinction between the output of ANNA-PALM or Deep-STORM and that of traditional PALM/STORM is that the neural networks produce super-resolution images directly, rather than as a compilation of localized emitter positions.

Approaches that generate complete images from relatively sparse input data can yield artifacts. Zimmer’s team addressed this issue by developing an algorithm that can identify and reduce artifacts by comparing the generated image with the wide-field image. This approach was inspired by NanoJ-SQUIRREL, developed by Ricardo Henriques’s lab.
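
A minimal version of such a consistency check (in the spirit of the approach described; the parameters and intensity scaling here are simplifying assumptions) blurs the generated super-resolution image down to diffraction-limited resolution and compares it with the measured wide-field image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def error_map(super_res, wide_field, psf_sigma=8.0):
    # Convolve the super-resolution image with an approximate PSF...
    blurred = gaussian_filter(super_res, sigma=psf_sigma)
    # ...downsample it to the wide-field pixel grid...
    factor = wide_field.shape[0] / blurred.shape[0]
    blurred = zoom(blurred, factor)
    # ...match intensity scales, then take the pixelwise difference.
    blurred *= wide_field.mean() / max(blurred.mean(), 1e-12)
    return np.abs(blurred - wide_field)  # large values flag likely artifacts

# Placeholder images: a 512x512 super-res image vs. a 64x64 wide-field image.
err = error_map(np.random.rand(512, 512), np.random.rand(64, 64))
```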

Zimmer notes that a major focus was identifying the best way to train the neural network. For this, the team developed a data-augmentation strategy that allowed them to effectively increase the number of training images without acquiring more experimental data. Still, he recalls, “it was somewhat surprising to see that ANNA-PALM only needs to be trained on a few super-resolution images—in some cases just one.” He explains that ANNA-PALM will improve over time as it is trained on more data. Shechtman was also surprised by how readily the neural network trained; small numbers of experimental images were sufficient, and he notes that “while training the net on experimental measurements produced the best results, training the net on simulated data, of which we could easily generate huge amounts, already yielded excellent images.”
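
The data-augmentation idea mentioned above can be sketched as follows (assumed transformations; the paper's exact strategy may differ): random crops, flips and 90-degree rotations derive many training patches from a single experimental image.

```python
import numpy as np

rng = np.random.default_rng(2)

def augment(image, crop=256):
    h, w = image.shape
    top = rng.integers(0, h - crop + 1)       # random crop location
    left = rng.integers(0, w - crop + 1)
    patch = image[top:top + crop, left:left + crop]
    if rng.random() < 0.5:                    # random horizontal flip
        patch = patch[:, ::-1]
    return np.rot90(patch, k=rng.integers(4))  # random 90-degree rotation

training_image = np.random.rand(1024, 1024)    # placeholder training image
patches = [augment(training_image) for _ in range(100)]  # many patches from one image
```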

Although these methods represent early days in the application of deep learning to super-resolution microscopy, they are poised to have an important impact and herald a bright future for the field. Shechtman says that user-friendly versions of these tools are an important future goal, and notes that his group is developing a stand-alone version of Deep-STORM. Zimmer says that his team is currently developing tools to facilitate training for use with ANNA-PALM.