Saturday, 25 February 2017

Next event on Friday!

Title (for now): Computational imaging;
Where and when: Friday 3rd March 2017, 1pm-2pm, Room 255 (reading room), Kelvin Building, U.o.G.;
Speakers: Miguel and Laura;
Coming soon: more details on the topic and the presented papers.

Monday, 13 February 2017

Friday 3.2.17



Today's event: Hyperspectral imaging;
Presenters: Ross and Alex;
Presented: [1] Müller, Walter, et al. "Light sheet Raman micro-spectroscopy." Optica 3.4 (2016): 452-457; [2] Jahr, Wiebke, et al. "Hyperspectral light sheet microscopy." Nature Communications 6 (2015); [3] Puttonen, Eetu, et al. "Artificial target detection with a hyperspectral LiDAR over 26-h measurement." Optical Engineering 54.1 (2015): 013105.
Number of attendees: 15.


After realizing I hadn’t taken any photos during the Journal Club, I decided to draw something to put on this post. I then couldn’t stop drawing and ended up with a few drawings that should help me summarize what we talked about this time.

The topic of the day was hyperspectral imaging. The first thing that comes to my mind when I hear words that contain "spectrum" is a rainbow, and combined with "imaging" they make me think of a cube.
This little cube illustrates the idea of taking an image, in x and y, at many different wavelengths. Thinking about the articles presented today, I should have added a fourth dimension to it, but I didn’t find that very easy to draw! These articles describe techniques that add a third spatial dimension to the reconstructed images, on top of x, y and wavelength. Let’s see how they do it.

The first paper, presented by Alex, was "Light sheet Raman micro-spectroscopy", by Müller et al. 2016. The aim here is to reconstruct the image of an entire volume inside the microscopy sample (3 spatial dimensions), recording the Raman spectrum of each point in the reconstructed volume (1 spectral dimension). The scheme followed in this case can be summarized as:

-  Take an image of a single plane inside the sample;
-  Acquire Raman spectrum for each point in that plane;
-  Do the same for many planes.

In order to acquire an entire 2D image inside the sample in one single shot, they use SPIM (Selective Plane Illumination Microscopy). With SPIM, a thin sheet of light is used to illuminate the sample from the side. This excites fluorescence only in a single plane inside the sample, which can then be recorded with a single shot of the camera.

Each point excited by the light-sheet emits a whole Raman spectrum, and to obtain one image for each wavelength the authors make use of an interferometer in the imaging arm:
The light collected by the imaging objective is divided into two beams, which are sent into the two arms of the interferometer and later recombined to form the image. Moving one of the two arms changes the path-length difference between the two interfering beams. For certain positions of the second arm, the two beams interfere in such a way that only some wavelengths are let through (constructive interference) while others are blocked (destructive interference).

Keeping the light-sheet fixed on a plane in the sample, one image is taken for each position of the second interferometer arm. I tried to represent this in figure 1 (see below), where the three images 1, 2 and 3 are taken respectively with positions 1, 2 and 3 of the second arm of the interferometer. As said above, each arm position gives information about how much light is emitted by the illuminated plane within a particular set of wavelengths. Selecting the same pixel on each image (pixel A in figure 1), one can concentrate on a single point in the sample. To obtain the Raman spectrum emitted by this point, i.e. to see how much light is emitted at each single wavelength, one only has to Fourier transform the set of data acquired by moving the interferometer arm. Finally, repeating this procedure on many planes inside the sample makes it possible to reconstruct an entire 3D volume of Raman spectra.


Figure 1
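If you prefer code to drawings, here is a minimal sketch of that last step (my own illustration, not the authors' code; all array shapes and data below are made up): stack the frames taken at the different arm positions, then Fourier transform each pixel's interferogram to recover its spectrum.

```python
import numpy as np

# Hypothetical data: one camera frame per interferometer arm position.
rng = np.random.default_rng(0)
n_positions, ny, nx = 128, 4, 4
stack = rng.random((n_positions, ny, nx))  # placeholder interferograms

# Each pixel traces out an interferogram along axis 0; removing the mean
# (DC term) and Fourier transforming gives that point's emission spectrum.
spectra = np.abs(np.fft.rfft(stack - stack.mean(axis=0), axis=0))

# spectra[:, j, i] is the recovered spectrum of pixel (j, i),
# e.g. point A of figure 1:
spectrum_A = spectra[:, 2, 3]
```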



The second paper of the day, "Hyperspectral light sheet microscopy", by Jahr et al. 2015, also uses light-sheet microscopy, but in a different way.  The authors are in this case not interested in the Raman spectrum of the sample, but instead use samples in which different molecules are labelled with different fluorophores, each emitting in a particular wavelength range. They simultaneously excite all the fluorophores and want to collect, at the same time, light emitted from all of them.

Instead of illuminating one plane at a time, a focused beam is used to illuminate only a single line inside the sample. The emitted fluorescence is then diffracted onto the detector, which records, in one single image, the spectrum of the whole line. The illumination line is then scanned through an entire plane, and all the acquired spectra are combined in order to form many images, one for each wavelength, of the same plane (see figure 2 below). By doing this on many planes, a 4D volume (x, y, z and λ) can then be reconstructed.


Figure 2
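In code, the scanning scheme of figure 2 might look like this minimal sketch (again my own illustration with made-up names and sizes, not the paper's software):

```python
import numpy as np

nx, ny, nz, n_lambda = 64, 64, 16, 32
hypercube = np.zeros((nz, ny, nx, n_lambda))

def acquire_line_spectrum(z, y):
    """Placeholder for one camera exposure: an (nx, n_lambda) frame whose
    axes are position along the illuminated line and wavelength."""
    return np.random.random((nx, n_lambda))

for z in range(nz):       # step the light sheet through the sample
    for y in range(ny):   # scan the illumination line across the plane
        hypercube[z, y] = acquire_line_spectrum(z, y)

# hypercube[z, :, :, k] is the image of plane z at wavelength bin k.
```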
At this point, I went back to thinking of the small rainbow cube, and this is how I picture the way these first two papers deal with it:




In the first one an image is taken in x and y, and spectral information about the same plane is acquired sequentially. In the second one the spectrum of a line is acquired (an x-λ image) and the line is then scanned in y. In both of them the same process is then repeated at different z positions in the sample.

The third paper of the day, presented by Ross, was "Artificial target detection with a hyperspectral LiDAR over 26-h measurement", by Puttonen et al. 2015. In this case not a plane, not a line, but a point is scanned in order to reconstruct an image. Also, no more microscopy here, but light-radar. In LiDAR, light is shone onto a target, which reflects it back; the reflected light is detected, and its time of arrival (relative to the time the light pulse was sent) gives the distance of the target. Each material has different reflective properties, which means that by analyzing the spectrum of the reflected light one can identify what kind of material the target could be made of.
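The ranging principle fits in a few lines; here is a toy sketch (illustrative only, the instrument in the paper is of course far more involved):

```python
C = 299_792_458.0  # speed of light in m/s

def target_range(return_time_s: float) -> float:
    """Distance to the target from the pulse's round-trip time (out and back)."""
    return C * return_time_s / 2

# A pulse returning after 67 ns corresponds to a target about 10 m away:
print(f"{target_range(67e-9):.2f} m")  # -> 10.04 m
```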


Figure 4:





By scanning the whole scene with the laser pulses and recording their "return times" it is possible to localize, in x, y and z, all the objects present. By analyzing the spectrum of the light each object reflects, the authors were also able to distinguish, for example, between man-made objects such as a chair and natural objects such as the leaves of a tree.
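The material-identification idea can be caricatured as a nearest-neighbour lookup against reference spectra. In this hedged toy example the material names and reflectance values are invented for illustration:

```python
import numpy as np

# Made-up reference reflectance spectra (4 wavelength bands each).
references = {
    "leaf":    np.array([0.05, 0.08, 0.45, 0.50]),  # vegetation: high near-IR
    "plastic": np.array([0.30, 0.32, 0.31, 0.33]),  # man-made: flat spectrum
}

def classify(measured: np.ndarray) -> str:
    """Return the reference material whose spectrum is closest (L2 distance)."""
    return min(references, key=lambda m: np.linalg.norm(references[m] - measured))

print(classify(np.array([0.06, 0.09, 0.40, 0.48])))  # -> leaf
```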


I think this is all for now. Hope you all enjoyed the Journal Club, and if you couldn't make it, come along next time, on the 3rd of March!

Ciao a tutti,
Chiara.


PS: I leave you with the scones and chocolate we had this time :)







Thursday, 2 February 2017

Next event: Tomorrow!!



Title: Hyperspectral Imaging;
Where and when: Friday 3rd February 2017, 1pm-2pm, Andy's office (Room 246b, Kelvin Building, U.o.G);
Presenters: Alex and Ross;
Articles: [1] Müller, Walter, et al. "Light sheet Raman micro-spectroscopy." Optica 3.4 (2016): 452-457; [2] Jahr, Wiebke, et al. "Hyperspectral light sheet microscopy." Nature Communications 6 (2015); [3] Puttonen, Eetu, et al. "Artificial target detection with a hyperspectral LiDAR over 26-h measurement." Optical Engineering 54.1 (2015): 013105.

Last Journal Club with us for Alex, who's leaving soon. To mark the occasion there will be extra cakes and goodbye-Alex-nooooo-don't-leave-us!-food :)

See you tomorrow,
Chiara.

Tuesday, 13 December 2016

Friday 9.12.16


Today's event: Light-Field imaging;
Where and when: Friday 9th December 2016, 1pm-2pm, Andy's office (Room 246b, Physics and Astronomy, Kelvin Building, University of Glasgow);
Presenters: Laura and Guillem;
Presented: [1] Levoy, Marc, et al. "Light field Microscopy." ACM Transactions on Graphics (TOG) 25.3 (2006): 924-934; [2] Cohen, Noy, et al. "Enhancing the performance of the light field microscope using wavefront coding." Optics Express 22.20 (2014): 24817-24839; [3] Georgiev, Todor, and Andrew Lumsdaine. "Superresolution with plenoptic camera 2.0." Adobe Systems Incorporated, Tech. Rep. (2009); [4] Broxton, Michael, et al. "Wave optics theory and 3-D deconvolution for the light field microscope." Optics Express 21.21 (2013): 25418-25439.
Number of attendees: 22.

Second and last ICG Journal Club event of 2016, this time with many people from the Optics group too! 


Laura was the first presenter of the day, introducing the concept of light-field imaging by discussing "Light field Microscopy", Levoy et al. 2006.
In a conventional image, each point contains information about the intensity of the light coming from one point of the imaged object. A light-field image, instead, contains, for each point of the imaged scene, information about the amount of light that reaches the imaging objective from different directions. This makes it possible to change the depth at which the image is focused or create perspective views of the imaged scene (all AFTER actually recording the image), and even reconstruct 3D volumes by combining different refocused versions of the same recorded image:
(image from "Light field Microscopy", Levoy et al. 2006)
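Refocusing after capture sounds like magic, so here is a minimal shift-and-sum refocuser (a standard trick, not code from the paper; the array of sub-aperture views and all sizes are made up):

```python
import numpy as np

n_u = n_v = 5                 # angular samples (view directions)
ny, nx = 64, 64               # spatial samples
views = np.random.random((n_v, n_u, ny, nx))  # placeholder light field

def refocus(views, alpha):
    """Shift each directional view in proportion to its angle, then average.
    alpha picks the synthetic focal plane (0 = no shift, original focus)."""
    n_v, n_u = views.shape[:2]
    c_v, c_u = (n_v - 1) / 2, (n_u - 1) / 2
    out = np.zeros(views.shape[2:])
    for v in range(n_v):
        for u in range(n_u):
            dy, dx = round(alpha * (v - c_v)), round(alpha * (u - c_u))
            out += np.roll(views[v, u], (dy, dx), axis=(0, 1))
    return out / (n_u * n_v)

near_focus = refocus(views, alpha=1.0)   # refocus on a nearer plane
far_focus = refocus(views, alpha=-1.0)   # refocus on a farther plane
```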

In order to create these light-field images, an array of lenses is added to the imaging path of the microscope:
(image from "Light field Microscopy", Levoy et al. 2006)

Nothing comes for free though, and in light-field microscopy there is always a trade-off between angular and lateral resolution. In the microscope presented in this article, each small lens in the lens array produces many images of the same point of the object, each corresponding to a different incoming light direction. Having more (and smaller) lenses results in a better lateral resolution but a worse angular resolution; some toy numbers below make the trade-off concrete.
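A hedged back-of-envelope example (illustrative numbers, not taken from the paper):

```python
sensor_pixels = 4000       # pixels across one side of the sensor
pixels_per_lenslet = 20    # sensor pixels behind each microlens

lateral_samples = sensor_pixels // pixels_per_lenslet  # -> 200 image points
angular_samples = pixels_per_lenslet                   # -> 20 view directions

# Halving the lenslet size doubles the lateral resolution (400 points)
# but halves the angular sampling (10 directions): the trade-off above.
```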

This first paper raised questions about how the different final images are actually extracted from the recorded light-field image, and Laura also suggested an interesting article that discusses this topic in more detail:
Prevedel, Robert, et al. "Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy." Nature methods 11.7 (2014): 727-730. 

From there:
"[...]
Light-field deconvolution.
The volume reconstruction itself can be formulated as a tomographic inverse problem [27], wherein multiple different perspectives of a 3D volume are observed and linear reconstruction methods—implemented via deconvolution—are employed for computational 3D volume reconstruction. The image formation in light-field microscopes involves diffraction from both the objective and microlenses. PSFs for the deconvolution can be computed from scalar diffraction theory [28]. [...]"
[27] Kak, A.C. & Slaney, M. Principles of Computerized Tomographic Imaging (Society for Industrial and Applied Mathematics, 2001). [28] Gu, M. Advanced Optical Imaging Theory (Springer, 1999).
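To make "implemented via deconvolution" a bit more concrete, here is a hedged sketch of Richardson-Lucy deconvolution, one widely used iterative scheme for this kind of reconstruction (the papers compute the real light-field PSF from diffraction theory; the psf here is just a placeholder argument):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=25):
    """Iteratively estimate the object that, blurred by psf, explains image."""
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full(image.shape, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + 1e-12)   # avoid division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```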
With Guillem and the three articles he presented we went into more detail about how light-field imaging works, and we further discussed the deconvolution needed to reconstruct images focused at different scene depths.
Combining light-field imaging with wavefront coding, for example, it is possible to make the resolution of the reconstructed images degrade less when changing the focusing depth:

We have only had two Journal Clubs so far, but that was enough for me to notice that I don't fully understand all the discussions that go on during these events. I must admit I'm probably missing much of what was said and discussed, but I hope this brief summary gives you an idea of what happened, and maybe even makes you want to come along next time too!
Anyway, next time I'd better take some notes and also avoid waiting even just a few days before updating the blog!

We also had cakes and biscuits, as there always will be :)


Special thanks to: 
- The two presenters Guillem and Laura, for whom I hadn't prepared a star but who at least got one in the pictures;
- Pavi for smiling happily at my phone and not taking part in the Journal Club attendees' new favorite sport of 'let's see who hides best' :P;
- Miguel for helping me make sure there was no cake left at the end;
- Everybody for coming along!!

Thursday, 8 December 2016

Articles for tomorrow.

Sorry for not posting these titles before, I blame my poor organization this week on... Christmas coming soon, definitely. Let's see what excuse I'll come up with in January!
Anyway, better late than never, here are the articles that will be discussed tomorrow:

Levoy, Marc, et al. "Light field Microscopy." ACM Transactions on Graphics (TOG) 25.3 (2006): 924-934.


Cai, Zewei, et al. "Structured light field 3D imaging." Optics Express 24.18 (2016): 20324-20334.

Cohen, Noy, et al. "Enhancing the performance of the light field microscope using wavefront coding." Optics Express 22.20 (2014): 24817-24839.

Pégard, Nicolas C., et al. "Compressive light-field microscopy for 3D neural activity recording." Optica 3.5 (2016): 517-524.

See you tomorrow at 1pm, and in the meantime have a nice Thursday!
Chiara.



Wednesday, 30 November 2016

Next event!

Next event: Light-Field imaging;
Where and when: Friday 9th December 2016, 1pm-2pm, Andy's office (Room 246b, Physics and Astronomy, Kelvin Building, University of Glasgow);
Presenters (confirmed for now): Laura and Guillem;
Coming soon: details on the articles that will be discussed.

After the JC, everybody back in the big office for coffee, cakes and a free tour of our office Christmas decorations.


Sunday, 13 November 2016

Friday 11.11.16



Today's event: New approaches to optical design (a.k.a. weird optics);
Where and when: Friday 11th November 2016, 1pm-2pm, Andy's office (Room 246b, Physics and Astronomy, Kelvin Building, University of Glasgow);
Presenters: Ross, Stuart, Chiara;
Number of attendees: 11.

This first event of our journal club was dedicated to strange optical designs. We introduced the topic by discussing DARPA's program called "Extreme optics and imaging". The aim of this program is to develop, by 2020, some innovative optical components which should revolutionize the whole process of optical system design.

The way the problem is introduced is more or less the following:
At the moment we use optical components (like lenses) that follow certain simple physical laws (like the law of refraction), and with these we sometimes end up with very bulky and complicated optical systems (an example of this is the Mesolens we also talked about today). DARPA would like to break this paradigm and design new, specifically engineered, optical components. Light will interact with them in a much less straightforward way than it does with lenses, gratings, filters and all the components we are now used to, but the idea is that they will simplify the design of multi-component optical systems.

Curiosity: as an example of what kind of thing DARPA has in mind, they always refer, in this presentation, to a sugar cube. At the beginning some of us thought 'sugar cube' was just chosen to give people the idea of a cubic object of more or less that size. Miguel then told me that a sugar cube can be thought of as a lens with a very complex transmission matrix, so DARPA probably chose it to indicate not only a commonly known small cubic thing, but also an apparently simple object able to interact with light in a more intriguing way.

After I presented DARPA's program, Stuart introduced the Mesolens, an optical lens system specifically designed to allow 3D imaging of thick specimens (6 mm wide and 3 mm thick) using confocal microscopy. The Mesolens gives low magnification, high depth resolution and a long working distance, with the main drawback of being very big (image taken from [2]):



Ross concluded today's event with an article on flat lenses made of titanium dioxide nanopillars. Along the lines of DARPA's idea of overcoming big, impractical optical systems, in this article the authors present a new type of lens, made of small titanium dioxide pillars instead of glass. These lenses can be manufactured to give a high NA, values that normal lenses can achieve only by being bulky (and expensive).
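As a back-of-envelope check of what "high NA" means here (with illustrative numbers of roughly the right order of magnitude, not lifted from the paper):

```python
import math

def numerical_aperture(diameter_mm: float, focal_length_mm: float) -> float:
    """NA of a lens in air: sine of the half-angle of its light cone."""
    return math.sin(math.atan(diameter_mm / (2 * focal_length_mm)))

# A 2 mm wide flat lens with a 0.725 mm focal length would reach NA ~ 0.8,
# which a conventional glass lens only manages with steep, bulky surfaces.
print(f"NA = {numerical_aperture(2.0, 0.725):.2f}")  # -> NA = 0.81
```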


And for dessert...


That's all for now,
Chiara.

[1] DARPA's PowerPoint slides about the program, on DARPA's website.
[2] McConnell, Gail, Johanna Trägårdh, Rumelo Amor, John Dempster, Es Reid, and William Bradshaw Amos. "A novel optical microscope for imaging large embryos and tissue volumes with sub-cellular resolution throughout." eLife 5 (2016): e18659.
[3] Khorasaninejad, Mohammadreza, Wei Ting Chen, Robert C. Devlin, Jaewon Oh, Alexander Y. Zhu, and Federico Capasso. "Planar Lenses at Visible Wavelengths." arXiv preprint arXiv:1605.02248 (2016).