Tuesday, 23 May 2017
Friday 3.3.17 Beating the Diffraction Limit
And finally here they are: pictures from the last Journal Club!
There were 13 of us, with Miguel and James presenting two very cool papers. I'll upload the pictures for now, and will hopefully come back very soon with some more details about what happened on that day :)
Thursday, 2 March 2017
Updates for tomorrow, and a special guest!
Updated title: Beating the Diffraction Limit;
Presenters: Miguel and our special guest James Babington from Qioptiq;
Papers: Baumgartl, Jörg, et al. "Far field subwavelength focusing using optical eigenmodes." Applied Physics Letters 98.18 (2011): 181109; Paúr, Martin, et al. "Achieving the ultimate optical resolution." Optica 3.10 (2016): 1144-1147; Rogers, Edward T. F., and Nikolay I. Zheludev. "Optical super-oscillations: sub-wavelength light focusing and super-resolution imaging." Journal of Optics 15.9 (2013): 094008.
See you tomorrow,
Chiara :)
Saturday, 25 February 2017
Next event on Friday!
Title (for now): Computational imaging;
Where and when: Friday 3rd March 2017, 1pm-2pm, Room 255 (reading room), Kelvin Building, U.o.G.;
Speakers: Miguel and Laura;
Coming soon: more details on the topic and the presented papers.
Monday, 13 February 2017
Friday 3.2.17
Today's event: Hyperspectral imaging;
Presenters: Ross and Alex;
Presented: [1] Müller, Walter, et al. "Light sheet Raman micro-spectroscopy." Optica 3.4 (2016): 452-457; [2] Jahr, Wiebke, et al. "Hyperspectral light sheet microscopy." Nature Communications 6 (2015); [3] Puttonen, Eetu, et al. "Artificial target detection with a hyperspectral LiDAR over 26-h measurement." Optical Engineering 54.1 (2015): 013105;
Number of attendees: 15.
After realizing I hadn't taken any photos during the Journal Club, I decided to draw something to put on this post. I then couldn't stop drawing and ended up with a few drawings that should help me summarize what we talked about this time.
The topic of the day was hyperspectral imaging. The first thing that comes to my mind when I hear words that contain "spectrum" is a rainbow, and combined with "imaging" they make me think of a cube.
This little cube illustrates the idea of taking an image, in x and y, at many different wavelengths. Thinking about the articles presented today, I should have added a fourth dimension to it, but I didn't find that very easy to draw! These articles in fact describe techniques that add a third spatial dimension to the reconstructed images. Let's see how they do it.
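For anyone who prefers arrays to drawings, here is a minimal sketch of that rainbow cube in Python/NumPy (all sizes invented, just to fix the idea of a stack of images, one per wavelength):

import numpy as np

# A hyperspectral datacube: a stack of 2D images, one per wavelength.
# All sizes here are made up, purely for illustration.
n_x, n_y, n_lambda = 256, 256, 50
cube = np.zeros((n_x, n_y, n_lambda))

# One spatial image at a single wavelength (a slice of the cube):
image_at_band_10 = cube[:, :, 10]

# The full spectrum recorded at a single spatial point:
spectrum_at_pixel = cube[128, 128, :]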
The first paper, presented by Alex, was "Light sheet Raman micro-spectroscopy", by Müller et al. 2016. The aim here is to reconstruct the image of an entire volume inside the microscopy sample (3 spatial dimensions), recording the Raman spectrum of each point in the reconstructed volume (1 spectral dimension). The scheme followed in this case can be summarized as:
- Take an image of a single plane inside the sample;
- Acquire a Raman spectrum for each point in that plane;
- Do the same for many planes.
In order to acquire, in one single shot, an entire 2D image inside the sample, they use SPIM (Selective Plane Illumination Microscopy). With SPIM, a thin sheet of light is used to illuminate the sample from the side. This makes it possible to excite fluorescence only in a single plane inside the sample, which can then be recorded with a single shot of the camera.
Each point excited by the light-sheet emits a whole Raman spectrum, and to obtain one image for each wavelength the authors make use of an interferometer in the imaging arm:
The light collected by the imaging objective is divided into two beams, which are sent into the two arms of the interferometer and later recombined to form the image. Moving one of the two arms changes the path length difference between the two interfering beams, and it is possible to find the positions of the second arm that make the two beams interfere in such a way that only some wavelengths are let through (constructive interference) while others are blocked (destructive interference).
Keeping the light-sheet fixed on a plane in the sample, one image is taken for each different position of the second interferometer arm. I tried to represent this in figure 1 (see below), where the three images 1, 2 and 3 are taken respectively with positions 1, 2 and 3 of the second arm of the interferometer. As said above, each arm position gives info about how much light is emitted by the illuminated plane within a particular set of wavelengths. Selecting the same pixel on each image (pixel A in figure 1), one can concentrate on a single point in the sample. To obtain the Raman spectrum emitted by this point, i.e. to see how much light is emitted at each single wavelength, one only has to Fourier transform the set of data acquired by moving the interferometer arm. Finally, repeating this procedure on many planes inside the sample makes it possible to reconstruct an entire 3D volume of Raman spectra.
Figure 1
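To make that Fourier-transform step concrete, here is a little toy simulation (my own sketch in Python, not the authors' code, with invented numbers): the interferogram recorded at one pixel as a function of path difference is Fourier transformed, and peaks appear at the wavenumbers of the emitted lines.

import numpy as np

# Toy Fourier-transform spectroscopy for a single pixel: the intensity
# recorded vs. interferometer path difference (the interferogram) is
# Fourier transformed to recover the spectrum at that point.
delta = np.linspace(0, 100e-6, 4096)              # path differences [m]
wavelengths = np.array([500e-9, 532e-9, 600e-9])  # invented emission lines [m]
weights = np.array([1.0, 0.5, 0.8])               # relative line strengths

# Each wavelength contributes a cosine fringe in path difference:
interferogram = sum(w * (1 + np.cos(2 * np.pi * delta / lam))
                    for w, lam in zip(weights, wavelengths))

# Fourier transform: the fringe frequency in delta is the wavenumber 1/lambda.
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
wavenumbers = np.fft.rfftfreq(delta.size, d=delta[1] - delta[0])  # [1/m]

# 'spectrum' now shows three peaks, at roughly 1/500e-9, 1/532e-9 and 1/600e-9.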
The second paper of the day, "Hyperspectral light sheet microscopy", by Jahr et al. 2015, also uses light-sheet microscopy, but in a different way. The authors are in this case not interested in the Raman spectrum of the sample, but instead use samples in which different molecules are labelled with different fluorophores, each emitting in a particular wavelength range. They simultaneously excite all the fluorophores and want to collect, at the same time, light emitted from all of them.
Instead of illuminating one plane at a time, a focused beam is used to illuminate only a single line inside the sample. The emitted fluorescence is then diffracted onto the detector, which records, in one single image, the spectrum of the whole line. The illumination line is then scanned through an entire plane, and all the acquired spectra are combined to form many images of the same plane, one for each wavelength (see figure 2 below). By doing this on many planes, a 4D volume (x, y, z and lambda) can then be reconstructed.
Figure 2
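Here is how I imagine the bookkeeping for this scheme (a hypothetical sketch, not the authors' code; acquire_line_spectrum is a stand-in I made up for one camera exposure):

import numpy as np

n_x, n_y, n_z, n_lambda = 128, 128, 40, 64
volume = np.zeros((n_x, n_y, n_z, n_lambda))

def acquire_line_spectrum(y, z):
    """Stand-in for one camera exposure: an (x, lambda) image of one line."""
    return np.random.rand(n_x, n_lambda)   # placeholder data

for z in range(n_z):          # step the light sheet through the sample
    for y in range(n_y):      # scan the illumination line across the plane
        volume[:, y, z, :] = acquire_line_spectrum(y, z)

# A monochromatic image of one plane is then just a slice of the 4D cube:
plane_z5_band20 = volume[:, :, 5, 20]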
At this point, I went back to thinking of the small rainbow cube, and this is how I picture the way these first two papers deal with it:
In the first one an image is taken in x and y, and spectral info about the same plane is acquired sequentially. In the second one the spectrum of a line is acquired (x-lambda image) and the line is then scanned in y. In both of them the same process is then applied at different z positions in the sample.
The third paper of the day, presented by Ross, was "Artificial target detection with a hyperspectral LiDAR over 26-h measurement" by Puttonen et al. 2015. In this case not a plane, not a line, but a point is scanned in order to reconstruct an image. Also, no more microscopy here, but light-radar. In LiDAR, light is shone onto a target, which reflects it back; the reflected light is detected, and its time of arrival (relative to the time the light pulse had been sent) defines the distance of the target. Each material has different reflective properties, which means that by analyzing the spectrum of the reflected light one can identify what kind of material the target could be made of.
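The ranging part boils down to one line of arithmetic: the pulse travels to the target and back, so the distance is half the round trip. A quick sketch (numbers invented):

c = 299_792_458.0   # speed of light [m/s]

def range_from_return_time(t):
    """Distance to the target from the pulse round-trip time t [s]."""
    return c * t / 2.0

# Example: a return detected 66.7 ns after the pulse was sent
# corresponds to a target roughly 10 m away.
print(range_from_return_time(66.7e-9))   # ~10.0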
By scanning the whole scene with the laser pulses and recording their "return times" it is possible to localize, in x, y and z, all the objects present. By analyzing the spectrum of the light each object reflects, the authors were also able to distinguish, for example, between man-made objects such as a chair and natural objects such as the leaves of a tree.
I think this is all for now. Hope you all enjoyed the Journal Club, and if you couldn't make it, come along next time, on the 3rd of March!
Ciao a tutti,
Chiara.
PS: I leave you with the scones and chocolate we had this time :)
Thursday, 2 February 2017
Next event: Tomorrow!!
Title: Hyperspectral Imaging;
Where and when: Friday 3rd February 2017, 1pm-2pm, Andy's office (Room 246b, Kelvin Building, U.o.G);
Presenters: Alex and Ross;
Articles:
- Puttonen, Eetu, et al. "Artificial target detection with a hyperspectral LiDAR over 26-h measurement." Optical Engineering 54.1 (2015): 013105.
- Müller, Walter, et al. "Light sheet Raman micro-spectroscopy." Optica 3.4 (2016): 452-457.
Last Journal Club with us for Alex, who's leaving soon. For this there will be extra cakes and goodbye-Alex-nooooo-don't-leave-us!-food :)
See you tomorrow,
Chiara.
Tuesday, 13 December 2016
Friday 9.12.16
Today's event: Light-Field imaging;
Where and when: Friday 9th December 2016, 1pm-2pm, Andy's office (Room 246b, Physics and Astronomy, Kelvin Building, University of Glasgow);
Presenters: Laura and Guillem;
Presented: [1] Levoy, Marc, et al. "Light field Microscopy." ACM Transactions on Graphics (TOG) 25.3 (2006): 924-934; [2] Cohen, Noy, et al. "Enhancing the performance of the light field microscope using wavefront coding." Optics Express 22.20 (2014): 24817-24839; [3] Georgiev, Todor, and Andrew Lumsdaine. "Superresolution with plenoptic camera 2.0." Adobe Systems Incorporated, Tech. Rep. (2009); [4] Broxton, Michael, et al. "Wave optics theory and 3-D deconvolution for the light field microscope." Optics Express 21.21 (2013): 25418-25439.
Number of attendees: 22.
Second and last ICG Journal Club event of 2016, this time with many people from the Optics group too!
Laura was the first presenter of the day, introducing the concept of light-field imaging by discussing "Light field Microscopy", Levoy et al. 2006.
In a conventional image, each point contains information about the intensity of the light coming from one point of the imaged object. A light-field image, instead, contains, for each point of the imaged scene, information about the amount of light that reaches the imaging objective from different directions. This makes it possible to change the depth at which the image is focused or to create perspective views of the imaged scene (all AFTER actually recording the image), and even to reconstruct 3D volumes by combining different refocused versions of the same recorded image:
(image from "Light field Microscopy", Levoy et al. 2006)
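To give an idea of the simplest refocusing recipe (a shift-and-sum sketch under my own conventions, not the implementation in Levoy et al.): each sub-aperture view is shifted in proportion to its angular offset and the views are averaged; the parameter alpha below selects the synthetic focal plane.

import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-sum refocusing of a light field L[u, v, y, x].

    Each (u, v) sub-aperture view is shifted proportionally to its
    offset from the central view, then all views are averaged.
    alpha = 0 reproduces the original focal plane."""
    n_u, n_v, n_y, n_x = lightfield.shape
    out = np.zeros((n_y, n_x))
    for u in range(n_u):
        for v in range(n_v):
            du = int(round(alpha * (u - n_u // 2)))
            dv = int(round(alpha * (v - n_v // 2)))
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (n_u * n_v)

lf = np.random.rand(9, 9, 64, 64)    # placeholder light field, 9x9 views
near = refocus(lf, alpha=1.0)        # focus closer
far = refocus(lf, alpha=-1.0)        # focus farther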
In order to create these light-field images, an array of lenses is added to the imaging path of the microscope:
(image from "Light field Microscopy", Levoy et al. 2006)
Nothing comes for free though, and in light-field microscopy there is always a trade-off between angular and lateral resolution. In the microscope presented in this article, each small lens in the lens array produces many images of the same point of the object, each corresponding to a different incoming light direction. In this case having more (and smaller) lenses results in a better lateral resolution but a worse angular resolution.
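Just to put illustrative numbers on that trade-off (my own, not the paper's): the sensor's pixels are split between spatial and angular samples, so gaining one costs the other.

sensor_pixels = 4096         # sensor side length, in pixels (invented)
pixels_per_lenslet = 16      # pixels behind each microlens, per side

lateral_samples = sensor_pixels // pixels_per_lenslet   # 256 lenslets per side
angular_samples = pixels_per_lenslet                    # 16 views per axis

# Halving pixels_per_lenslet to 8 doubles the lateral samples to 512,
# but halves the angular samples per axis to 8: better lateral
# resolution, worse angular resolution.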
This first paper raised questions about how the different final images are actually extracted from the recorded light-field image, and Laura also suggested an interesting article that discusses this topic in more detail:
Prevedel, Robert, et al. "Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy." Nature Methods 11.7 (2014): 727-730.
From there:
"[...]
Light-field deconvolution.
The volume reconstruction itself can be formulated as a tomographic inverse problem [27], wherein multiple different perspectives of a 3D volume are observed and linear reconstruction methods—implemented via deconvolution—are employed for computational 3D volume reconstruction. The image formation in light-field microscopes involves diffraction from both the objective and microlenses. PSFs for the deconvolution can be computed from scalar diffraction theory [28]. [...]"
[27] Kak, A.C. & Slaney, M. Principles of Computerized Tomographic Imaging (Society of Industrial and Applied Mathematics, 2001). [28] Gu, M. Advanced Optical Imaging Theory (Springer, 1999).
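For anyone curious what "implemented via deconvolution" can look like in practice, here is a bare-bones Richardson-Lucy iteration (a generic 2D toy of my own, not the actual light-field 3D deconvolution, whose PSFs come from diffraction theory):

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Generic Richardson-Lucy deconvolution of 'image' by 'psf'.

    Both inputs are expected as float arrays; 'psf' should sum to 1."""
    estimate = np.full_like(image, image.mean())
    psf_mirrored = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / (blurred + 1e-12)   # guard against division by zero
        estimate = estimate * fftconvolve(ratio, psf_mirrored, mode='same')
    return estimate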
With Guillem and the three articles he presented, we went into more detail on how light-field imaging works, and we further discussed the deconvolution needed to reconstruct images focused at different scene depths.
Combining light-field imaging with wavefront coding, for example, it is possible to make the resolution of the reconstructed images degrade less when changing the focusing depth:
(image from "Enhancing the performance of the light field microscope using wavefront coding.", Cohen et al. 2014)
We have only had two Journal Clubs so far, but that was enough for me to notice that I don't fully understand all the discussions that go on during these events. I must admit I'm probably missing much of what was said and discussed, but I hope this brief summary gives you an idea of what happened, and maybe even makes you want to come along next time too!
Anyway, next time I'd better take some notes and also avoid waiting even just a few days before updating the blog!
We also had cakes and biscuits, as there always will be :)
Special thanks to:
- The two presenters Guillem and Laura, for whom I hadn't prepared a star but who at least got one in the pictures;
- Pavi for smiling happily at my phone and not taking part in the Journal Club attendees' new favorite sport of 'let's see who hides best' :P;
- Miguel for helping me make sure there was no cake left at the end;
- Everybody for coming along!!
Thursday, 8 December 2016
Articles for tomorrow.
Sorry for not posting these titles before; I blame my poor organization this week on... Christmas coming soon, definitely. Let's see what excuse I'll come up with in January!
Anyway, better late than never, here are the articles that will be discussed tomorrow:
Levoy, Marc, et al. "Light field Microscopy." ACM Transactions on Graphics (TOG) 25.3 (2006): 924-934.
Cai, Zewei, et al. "Structured light field 3D imaging." Optics Express 24.18 (2016): 20324-20334.
Cohen, Noy, et al. "Enhancing the performance of the light field microscope using wavefront coding." Optics express 22.20 (2014): 24817-24839.
Pégard, Nicolas C., et al. "Compressive light-field microscopy for 3D neural activity recording." Optica 3.5 (2016): 517-524.
See you tomorrow at 1pm, and in the meantime have a nice Thursday!
Chiara.