Today's event: Hyperspectral imaging;
Presenters: Ross and Alex;
Presented: [1] Müller, Walter, et al. "Light sheet Raman micro-spectroscopy." Optica 3.4 (2016): 452-457; [2] Jahr, Wiebke, et al. "Hyperspectral light sheet microscopy." Nature Communications 6 (2015); [3] Puttonen, Eetu, et al. "Artificial target detection with a hyperspectral LiDAR over 26-h measurement." Optical Engineering 54.1 (2015): 013105.
Number of attendees: 15.
After realizing I hadn't taken any photos during the Journal Club, I decided to draw something to put on this post. I then couldn't stop drawing and ended up with a few drawings that should help me summarize what we talked about this time.
The topic of the day was hyperspectral imaging. The first thing that comes to my mind when I hear words that contain "spectrum" is a rainbow, and combined with "imaging" they make me think of a cube.
This little cube illustrates the idea of taking an image, in x and y, at many different wavelengths. Thinking about the articles presented today, I should have added a fourth dimension to it, but I didn't find that very easy to draw! These articles in fact describe techniques that make it possible to add a third spatial dimension to the reconstructed images. Let's see how they do it.
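To make the dimensions concrete, here is a minimal sketch (in Python with NumPy; the array sizes are purely illustrative, not from the papers) of the data structures involved:

```python
import numpy as np

# The "rainbow cube": one (y, x) image recorded at each of many wavelengths.
n_x, n_y, n_z, n_lambda = 512, 512, 64, 128   # illustrative sizes only
cube = np.zeros((n_lambda, n_y, n_x))

# The papers below add a third spatial dimension (z), giving a 4D data set.
hypercube = np.zeros((n_z, n_lambda, n_y, n_x))
```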
The first paper, presented by Alex, was "Light sheet Raman micro-spectroscopy" by Müller et al. 2016. The aim here is to reconstruct the image of an entire volume inside the microscopy sample (3 spatial dimensions), recording the Raman spectrum of each point in the reconstructed volume (1 spectral dimension). The scheme followed in this case can be summarized as:
- Take an image of a single plane inside the sample;
- Acquire a Raman spectrum for each point in that plane;
- Do the same for many planes.
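As a rough sketch of this acquisition loop (the two helper functions below are hypothetical placeholders for the hardware control and camera read-out, not the authors' code):

```python
import numpy as np

def move_lightsheet_to(z):
    pass  # stand-in for positioning the light sheet at depth z

def acquire_plane_spectra():
    # Stand-in for the per-plane spectral acquisition (in the paper this
    # step itself involves the interferometer scan described below).
    return np.random.rand(128, 256, 256)  # simulated (lambda, y, x) data

planes = []
for z in np.linspace(0.0, 50.0, 10):  # ten planes over 50 um (arbitrary)
    move_lightsheet_to(z)
    planes.append(acquire_plane_spectra())
volume = np.stack(planes)             # shape (z, lambda, y, x)
```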
In order to acquire, in one single shot, an entire 2D image inside the sample, they use SPIM (Selective Plane Illumination Microscopy). With SPIM, a thin sheet of light is used to illuminate the sample from the side. This makes it possible to excite fluorescence only in a single plane inside the sample, which can then be recorded with a single shot of the camera.
Each point excited by the light-sheet emits a whole Raman spectrum, and to obtain one image for each wavelength the authors make use of an interferometer in the imaging arm:
The light collected by the imaging objective is divided into two beams, which are sent into the two arms of the interferometer and later recombined to form the image. Moving one of the two arms changes the path length difference between the two interfering beams, and it is possible to find positions of the second arm for which the two beams interfere in such a way that only some wavelengths are let through (constructive interference) while others are blocked (destructive interference).
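To see why some wavelengths pass while others are blocked, here is a small numerical check (my own illustration, not from the paper): for a two-beam interferometer the transmitted fraction at optical path difference delta is proportional to 1 + cos(2*pi*delta/lambda).

```python
import numpy as np

# Two-beam interference: transmitted fraction vs. optical path difference.
def transmission(delta_um, lam_um):
    return 0.5 * (1 + np.cos(2 * np.pi * delta_um / lam_um))

delta = 1.0                         # path difference of 1 um
print(transmission(delta, 0.500))   # 1.0 -> 500 nm interferes constructively
print(transmission(delta, 0.667))   # ~0.0 -> 667 nm is almost fully blocked
```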
Keeping the light-sheet fixed on a plane in the sample, one image is taken for each different position of the second interferometer arm. I tried to represent this in figure 1 (see below), where the three images 1, 2 and 3 are taken respectively with positions 1, 2 and 3 of the second arm of the interferometer. As mentioned above, each arm position gives information about how much light is emitted by the illuminated plane within a particular set of wavelengths. Selecting the same pixel on each image (pixel A in figure 1), one can concentrate on a single point in the sample. To obtain the Raman spectrum emitted by this point, i.e. to see how much light is emitted at each single wavelength, one only has to Fourier transform the set of data acquired by moving the interferometer arm. Finally, repeating this procedure on many planes inside the sample makes it possible to reconstruct an entire 3D volume of Raman spectra.
Figure 1
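The Fourier step can be sketched in a few lines (a minimal sketch, assuming equally spaced arm positions; the variable names are mine, not the authors'):

```python
import numpy as np

# stack: N images taken at N equally spaced positions of the moving arm,
# shape (N, ny, nx); step_um: path difference increment between positions.
def spectra_from_interferograms(stack, step_um):
    stack = stack - stack.mean(axis=0)            # drop the constant offset
    spectra = np.abs(np.fft.rfft(stack, axis=0))  # Fourier transform the scan
    wavenumbers = np.fft.rfftfreq(stack.shape[0], d=step_um)  # cycles per um
    return wavenumbers, spectra  # spectra[k, j, i]: pixel (i, j), wavenumber k

# For pixel A at (row, col), the recovered spectrum is spectra[:, row, col].
```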
The second paper of the day, "Hyperspectral light sheet microscopy" by Jahr et al. 2015, also uses light-sheet microscopy, but in a different way. In this case the authors are not interested in the Raman spectrum of the sample, but instead use samples in which different molecules are labelled with different fluorophores, each emitting in a particular wavelength range. They simultaneously excite all the fluorophores and want to collect, at the same time, the light emitted from all of them.
Instead of illuminating one plane at a time, a focused beam is used to illuminate only a single line inside the sample. The emitted fluorescence is then diffracted onto the detector, which records, in one single image, the spectrum of the whole line. The illumination line is then scanned through an entire plane, and all the acquired spectra are combined in order to form many images of the same plane, one for each wavelength (see figure 2 below). By doing this on many planes, a 4D volume (x, y, z and lambda) can then be reconstructed.
Figure 2
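A sketch of how such a line scan fills the 4D data set (again with a hypothetical placeholder for the camera read-out; the sizes are illustrative):

```python
import numpy as np

def acquire_line_spectrum(iz, iy):
    # Placeholder for the camera read-out: the spectrum of one illuminated
    # line, recorded as a single (n_lambda, n_x) image.
    return np.random.rand(128, 512)

n_x, n_y, n_z, n_lambda = 512, 256, 64, 128      # illustrative sizes
hypercube = np.zeros((n_z, n_lambda, n_y, n_x))

for iz in range(n_z):         # step the light sheet through the sample
    for iy in range(n_y):     # scan the illumination line across a plane
        hypercube[iz, :, iy, :] = acquire_line_spectrum(iz, iy)

# hypercube[iz, k] is then the image of plane iz at wavelength index k.
```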
At this point, I went back to thinking of the small rainbow cube, and this is how I picture the way these first two papers deal with it:
In the first one, an image is taken in x and y, and spectral information about the same plane is acquired sequentially. In the second one, the spectrum of a line is acquired (an x-lambda image) and the line is then scanned in y. In both of them the same process is then applied at different z positions in the sample.
The third paper of the day, presented by Ross, was "Artificial target detection with a hyperspectral LiDAR over 26-h measurement" by Puttonen et al. 2015. In this case not a plane, not a line, but a point is scanned in order to reconstruct an image. Also, no more microscopy here, but light radar. In LiDAR, light is shone onto a target, which reflects it back; the reflected light is detected, and its time of arrival (relative to the time the light pulse was sent) gives the distance to the target. Each material has different reflective properties, which means that by analyzing the spectrum of the reflected light one can identify what kind of material the target could be made of.
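The distance follows directly from the round-trip time; a quick worked example:

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_range_m(return_time_s):
    # The pulse travels to the target and back, so divide the round trip by 2.
    return C * return_time_s / 2

print(lidar_range_m(66.7e-9))  # ~10 m for a 66.7 ns round trip
```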
By scanning the whole scene with the laser pulses and recording their "return times" it is possible to localize, in x, y and z, all the objects present. By analyzing the spectrum of the light each object reflects, the authors were also able to distinguish, for example, between man-made objects such as a chair and natural objects such as the leaves of a tree.
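As a toy illustration of the material-identification idea (the reflectance values below are invented, and this nearest-spectrum matching is my own simplification, not the authors' method):

```python
import numpy as np

# Invented reference reflectance spectra at a few wavelength channels.
references = {
    "leaf":    np.array([0.05, 0.08, 0.45, 0.50]),  # vegetation: bright in near-IR
    "plastic": np.array([0.30, 0.32, 0.33, 0.35]),  # man-made: flat spectrum
}

def classify(measured):
    # Pick the reference spectrum closest (Euclidean) to the measurement.
    return min(references, key=lambda k: np.linalg.norm(references[k] - measured))

print(classify(np.array([0.06, 0.10, 0.40, 0.48])))  # -> "leaf"
```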
I think this is all for now. I hope you all enjoyed the Journal Club, and if you couldn't make it, come along next time, on the 3rd of March!
Ciao a tutti,
Chiara.
PS: I leave you with the scones and chocolate we had this time :)