
Single-pixel imaging

Schematic of a single-pixel camera using a DMD. The transmitted light (white) from the sample (blue) is modulated by the DMD and collected by a single-pixel detector.[1]

Single-pixel imaging is a computational imaging technique for producing spatially-resolved images using a single detector instead of an array of detectors (as in conventional camera sensors).[2] A device that implements such an imaging scheme is called a single-pixel camera. Combined with compressed sensing, the single-pixel camera can recover images from fewer measurements than the number of reconstructed pixels.[3]

Single-pixel imaging differs from raster scanning in that multiple parts of the scene are imaged at the same time, in a wide-field fashion, by using a sequence of mask patterns either in the illumination or in the detection stage.[4] A spatial light modulator (such as a digital micromirror device) is often used for this purpose.

Single-pixel cameras were developed to be simpler, smaller, and cheaper alternatives to conventional, silicon-based digital cameras, with the added ability to image a broader spectral range.[3] Since then, the technique has been adapted and shown to be suitable for numerous applications in microscopy, tomography, holography, ultrafast imaging, FLIM and remote sensing.[4]

History


The origins of single-pixel imaging can be traced back to the development of dual photography[5] and compressed sensing in the mid-2000s.[6] The seminal 2008 paper by Duarte et al.[3] at Rice University established the foundations of the single-pixel imaging technique and presented a detailed comparison of the scanning and imaging modalities in existence at that time. These developments were also among the earliest applications of the digital micromirror device (DMD), developed by Texas Instruments for its DLP projection technology, to structured-light detection.

Soon, the technique was extended to computational ghost imaging, terahertz imaging, and 3D imaging. Systems based on structured detection were often termed single-pixel cameras, whereas those based on structured illumination were often referred to as computational ghost imaging. By using pulsed lasers as the light source, single-pixel imaging was applied to time-of-flight measurements for depth-mapping LiDAR applications. Apart from the DMD, other light-modulation schemes based on liquid crystals and LED arrays have also been explored.[4]

In the early 2010s, single-pixel imaging was applied to fluorescence microscopy for imaging biological samples.[7] Coupled with the technique of time-correlated single photon counting (TCSPC), the use of single-pixel imaging for compressive fluorescence lifetime imaging microscopy (FLIM) has also been explored.[8] Since the late 2010s, machine learning techniques, especially deep learning, have been increasingly used to optimise the illumination, detection, or reconstruction strategies of single-pixel imaging.[4]

Principles


Theory

Compressed sensing represented as sampling a signal $x$ in a basis $\Psi$. Here $s$ is the coefficient vector of $x$, which is sparse in $\Psi$ (shown as having only a few coloured dots). The inner product of a rank-deficient random matrix $\Phi$ (shown by the randomly-coloured dots) with $x$ gives the measurement vector $y$. Under certain conditions, the signal can be reconstructed (nearly) exactly.

In conventional sampling, digital data acquisition involves uniformly sampling $N$ discrete points of an analog signal at or above the Nyquist rate. For example, in a digital camera the sampling is done with a 2-D array of $N$ pixelated detectors on a CCD or CMOS sensor ($N$ is usually in the millions in consumer digital cameras). Such a sample can be represented by the vector $x$ with elements $x[n]$, $n = 1, 2, \ldots, N$. Any such vector can be expressed as the coefficients of an orthonormal basis expansion:

$x = \sum_{i=1}^{N} s_i \psi_i$

where $\psi_i$ are the $N \times 1$ basis vectors and $s_i$ the corresponding coefficients. Or, more compactly:

$x = \Psi s$

where $\Psi$ is the $N \times N$ basis matrix formed by stacking the vectors $\psi_i$ and $s$ is the coefficient vector. It is often possible to find a basis in which $s$ is $K$-sparse (only $K \ll N$ coefficients are non-zero) or $r$-compressible (the sorted coefficients decay as a power law). This is the principle behind compression standards like JPEG and JPEG-2000, which exploit the fact that natural images tend to be compressible in the DCT and wavelet bases.[3] Compressed sensing aims to bypass the conventional "sample-then-compress" framework by directly acquiring a condensed representation with $M < N$ linear measurements. This can be represented mathematically as:

$y = \Phi x = \Phi \Psi s$

where $y$ is an $M \times 1$ vector and $\Phi$ is the $M \times N$ measurement matrix of rank $M$. This under-determined measurement makes the inverse problem ill-posed and, in general, unsolvable. However, compressed sensing exploits the fact that, with a proper design of $\Phi$, the compressible signal can be exactly or approximately recovered using computational methods.[3] It has been shown[6] that incoherence between the bases $\Phi$ and $\Psi$ (along with the sparsity of $s$) is sufficient for such a scheme to work. Popular choices of $\Phi$ are random matrices or random subsets of basis vectors from the Fourier, Walsh-Hadamard or Noiselet bases.[3] It has also been shown that the $\ell_1$ optimisation

$\hat{s} = \arg\min_{s'} \| s' \|_1 \quad \text{subject to} \quad \Phi \Psi s' = y$

retrieves the signal from the $M$ random measurements better than other common methods such as least-squares minimisation.[3] An improvement to this optimisation, based on total-variation minimisation, is especially useful for reconstructing images directly in the pixel basis.[9]
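As a concrete illustration of the recovery step, the following Python sketch (not drawn from the cited references; the dimensions, random Gaussian $\Phi$ and regularisation weight are illustrative assumptions) recovers a sparse coefficient vector from $M < N$ random measurements using iterative soft-thresholding (ISTA), a simple solver for the $\ell_1$-regularised form of the problem above.

```python
# A minimal sketch of l1 recovery (not from the cited papers): a K-sparse vector
# s is measured through a random Gaussian matrix Phi (M < N rows), then recovered
# by minimising 0.5*||y - Phi @ s||_2**2 + lam*||s||_1 with iterative
# soft-thresholding (ISTA). Dimensions and lam are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8                       # signal length, measurements, sparsity

s_true = np.zeros(N)                       # K-sparse ground-truth coefficients
s_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix
y = Phi @ s_true                                 # the M "single-pixel" measurements

def ista(Phi, y, lam=0.01, n_iter=2000):
    """Sparse recovery from y = Phi @ s via l1-regularised least squares."""
    L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the gradient
    s = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ s - y)       # gradient of the data-fidelity term
        z = s - grad / L                   # gradient descent step
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return s

s_hat = ista(Phi, y)
print("relative error:", np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true))
```

In an imaging context, the same kind of solver is applied with the rows of $\Phi$ replaced by the displayed mask patterns and $\Psi$ by a sparsifying transform such as a wavelet basis.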

Single-pixel camera


The single-pixel camera is an optical computer[3] that implements the compressed sensing measurement architecture described above. It works by sequentially measuring the inner products $y_m = \langle x, \phi_m \rangle$ between the image $x$ and a set of 2-D test functions $\{\phi_m\}$, to compute the measurement vector $y$. In a typical setup, it consists of two main components: a spatial light modulator (SLM) and a single-pixel detector. The light from a wide-field source is collimated and projected onto the scene, and the reflected or transmitted light is focussed onto the detector with lenses. The SLM is used to realise the test functions $\phi_m$, often as binary pattern masks, and to introduce them either in the illumination or in the detection path. The detector integrates and converts the light signal into an output voltage, which is then digitised by an A/D converter and analysed by a computer.[3]
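In simulation, this measurement stage reduces to a sequence of inner products between the vectorised scene and the displayed masks. The short sketch below is a minimal illustration of that forward model; the scene, mask count and random 0/1 masks are assumptions made only for illustration, not part of the cited setup.

```python
# Minimal simulation of the single-pixel forward model: each measurement y_m is
# the inner product of the (vectorised) scene with one binary mask, plus a little
# detector noise. Scene content, mask count and random 0/1 masks are illustrative.
import numpy as np

rng = np.random.default_rng(1)
H = W = 32                                  # assumed scene resolution
scene = rng.random((H, W))                  # stand-in for the unknown image
x = scene.ravel()                           # vectorised scene, length N = H*W

M = 300                                     # number of sequentially displayed masks
patterns = rng.integers(0, 2, size=(M, H * W))   # binary (0/1) test functions

y = patterns @ x                            # y_m = <x, phi_m>, one value per mask
y = y + rng.normal(0.0, 1e-3, size=M)       # additive read-out noise on the detector
print(y.shape)                              # (M,): the input to the reconstruction
```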

Rows from a randomly permuted (for incoherence) Walsh-Hadamard matrix, reshaped into square patterns, are commonly used as binary test functions in single-pixel imaging. Since the SLM can only produce binary patterns with 0 (off) and 1 (on) states, both positive and negative values (±1 in this case) can be obtained by subtracting the mean light intensity from each measurement.[3] An alternative is to split the positive and negative elements into two pattern sets, measure both (with the negative set inverted, i.e., −1 replaced by +1), and subtract the two measurements at the end. Values between 0 and 1 can be obtained by dithering the DMD micromirrors during the detector's integration time.
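A minimal sketch of this split-and-subtract scheme follows, assuming a small synthetic scene and the complete (unpermuted) set of $N$ Hadamard patterns, so that the image can be recovered with a plain inverse Hadamard transform rather than a compressed-sensing solver.

```python
# Sketch of the split-and-subtract measurement with Hadamard patterns: each +/-1
# row is displayed as two 0/1 masks whose detector readings are subtracted. With
# the complete set of N patterns the image follows from the inverse Hadamard
# transform (H @ H.T = N * I). The 16x16 block scene is an illustrative assumption.
import numpy as np
from scipy.linalg import hadamard

n = 16                                      # image is n x n, so N = n*n pixels
N = n * n
H = hadamard(N)                             # N x N Hadamard matrix with +/-1 entries

scene = np.zeros((n, n))
scene[4:12, 6:10] = 1.0                     # simple block target
x = scene.ravel()

pos = (H + 1) // 2                          # 0/1 masks for the +1 entries
neg = (1 - H) // 2                          # 0/1 masks for the -1 entries
y = pos @ x - neg @ x                       # differential measurements, equal to H @ x

x_rec = (H.T @ y) / N                       # inverse transform recovers the image
print(np.allclose(x_rec, x))                # True: exact recovery with all N patterns
```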

Commonly used detectors include photomultiplier tubes, avalanche photodiodes, and hybrid photomultipliers (a sandwich of photon-amplification stages). For multispectral imaging, a spectrometer can be used together with an array of detectors, one per spectral channel. Another common addition is a time-correlated single photon counting (TCSPC) board to process the detector output, which, coupled with a pulsed laser, enables lifetime measurements useful in biomedical imaging.[10]

Advantages and drawbacks


The most important advantage of the single-pixel design is the reduced size, complexity, and cost of the photon detector, since only a single unit is needed. This enables the use of exotic detectors[3] capable of multi-spectral, time-of-flight, photon-counting, and other fast detection schemes, and has made single-pixel imaging suitable for fields ranging from microscopy to astronomy.[4]

The quantum efficiency of a photodiode is also higher than that of the pixel sensors in a typical CCD or CMOS array. Coupled with the fact that each single-pixel measurement receives about $N/2$ times more photons than an average pixel sensor, this significantly reduces image distortion from dark noise and read-out noise. Another important advantage is the fill factor of SLMs such as the DMD, which can reach around 90% (compared with only around 50% for a CCD/CMOS array). In addition, single-pixel imaging inherits the theoretical advantages that underpin the compressed sensing framework, such as its universality (the same measurement matrix $\Phi$ works for many sparsifying bases $\Psi$) and robustness (all measurements have equal priority, so the loss of a measurement does not corrupt the entire reconstruction).[3]

The main drawback of the single-pixel imaging technique is the tradeoff between acquisition speed and spatial resolution. Fast acquisition requires projecting fewer patterns (since each pattern is measured sequentially), which leads to a lower-resolution reconstructed image.[11] A method of "fusing" the low-resolution single-pixel image with a high-resolution CCD/CMOS image (dubbed "data fusion") has been proposed to mitigate this problem.[12] Deep-learning methods that learn the optimal set of patterns for imaging a particular category of samples are also being developed to improve the speed and reliability of the technique.[1]

Applications


Research fields that increasingly employ and develop single-pixel imaging include fluorescence microscopy and FLIM, computational ghost imaging, terahertz imaging, 3D imaging and LiDAR depth mapping, multispectral imaging, holography, ultrafast imaging, tomography, and remote sensing.[2]


References

  1. Higham, Catherine F.; Murray-Smith, Roderick; Padgett, Miles J.; Edgar, Matthew P. (2018-02-05). "Deep learning for real-time single-pixel video". Scientific Reports. 8 (1): 2369. Bibcode:2018NatSR...8.2369H. doi:10.1038/s41598-018-20521-y. ISSN 2045-2322. PMC 5799195. PMID 29403059.
  2. Edgar, Matthew P.; Gibson, Graham M.; Padgett, Miles J. (2019-01-01). "Principles and prospects for single-pixel imaging". Nature Photonics. 13 (1): 13–20. Bibcode:2019NaPho..13...13E. doi:10.1038/s41566-018-0300-7. ISSN 1749-4893.
  3. Duarte, Marco F.; Davenport, Mark A.; Takhar, Dharmpal; Laska, Jason N.; Sun, Ting; Kelly, Kevin F.; Baraniuk, Richard G. (2008-03-21). "Single-pixel imaging via compressive sampling". IEEE Signal Processing Magazine. 25 (2): 83–91. Bibcode:2008ISPM...25...83D. doi:10.1109/MSP.2007.914730. ISSN 1053-5888.
  4. Gibson, Graham M.; Johnson, Steven D.; Padgett, Miles J. (2020). "Single-pixel imaging 12 years on: a review". Optics Express. 28 (19): 28190–28208. Bibcode:2020OExpr..2828190G. doi:10.1364/oe.403195. PMID 32988095.
  5. Sen, Pradeep; Chen, Billy; Garg, Gaurav; Marschner, Stephen R.; Horowitz, Mark; Levoy, Marc; Lensch, Hendrik P. A. (2005-07-01). "Dual photography". ACM Transactions on Graphics. 24 (3): 745–755. doi:10.1145/1073204.1073257. ISSN 0730-0301.
  6. Candes, Emmanuel J.; Tao, Terence (2006-11-30). "Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?". IEEE Transactions on Information Theory. 52 (12): 5406–5425. arXiv:math/0410542. doi:10.1109/TIT.2006.885507. ISSN 0018-9448.
  7. Studer, Vincent; Bobin, Jérome; Chahid, Makhlad; Mousavi, Hamed Shams; Candes, Emmanuel; Dahan, Maxime (2012-06-26). "Compressive fluorescence microscopy for biological and hyperspectral imaging". Proceedings of the National Academy of Sciences. 109 (26): E1679–E1687. doi:10.1073/pnas.1119511109. ISSN 0027-8424. PMC 3387031. PMID 22689950.
  8. Rousset, Florian; Ducros, Nicolas; Peyrin, Françoise; Valentini, Gianluca; d'Andrea, Cosimo; Farina, Andrea (2018). "Time-resolved multispectral imaging based on an adaptive single-pixel camera". Optics Express. 26 (8): 10550–10558. Bibcode:2018OExpr..2610550R. doi:10.1364/oe.26.010550. hdl:11311/1087166. PMID 29715990.
  9. Candes, E. J.; Romberg, J.; Tao, T. (2006-01-23). "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information". IEEE Transactions on Information Theory. 52 (2): 489–509. arXiv:math/0409186. doi:10.1109/TIT.2005.862083. ISSN 0018-9448.
  10. Calisesi, Gianmaria; Ghezzi, Alberto; Ancora, Daniele; D'Andrea, Cosimo; Valentini, Gianluca; Farina, Andrea; Bassi, Andrea (2021-06-19). "Compressed sensing in fluorescence microscopy". Progress in Biophysics and Molecular Biology. 168: 66–80. doi:10.1016/j.pbiomolbio.2021.06.004. hdl:11311/1199828. ISSN 0079-6107. PMID 34153330.
  11. Stojek, Rafał; Pastuszczak, Anna; Wróbel, Piotr; Kotyński, Rafał (2022). "Single pixel imaging at high pixel resolutions". Optics Express. 30 (13): 22730–22745. arXiv:2206.02510. Bibcode:2022OExpr..3022730S. doi:10.1364/oe.460025. PMID 36224964.
  12. Zhang, Jixian (2010-02-17). "Multi-source remote sensing data fusion: status and trends". International Journal of Image and Data Fusion. 1 (1): 5–24. Bibcode:2010IJIDF...1....5Z. doi:10.1080/19479830903561035. ISSN 1947-9832.

Further reading

  • Eldar, Yonina C.; Kutyniok, Gitta, eds. (2012). Compressed sensing: theory and applications. Cambridge: Cambridge University Press. ISBN 978-1-107-00558-7.
  • Stern, Adrian (2017). Optical compressive imaging. Boca Raton: CRC Press, Taylor & Francis. ISBN 978-1-4987-0806-7.