Journal articles on the topic 'Near-infrared Image'

To see the other types of publications on this topic, follow the link: Near-infrared Image.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Near-infrared Image.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Yakno, Marlina, Junita Mohamad-Saleh, Mohd Zamri Ibrahim, and W. N. A. W. Samsudin. "Camera-projector calibration for near infrared imaging system." Bulletin of Electrical Engineering and Informatics 9, no. 1 (February 1, 2020): 160–70. http://dx.doi.org/10.11591/eei.v9i1.1697.

Abstract:
Advanced biomedical engineering technologies are continuously changing medical practice to improve patient care. Needle-insertion navigation during intravenous catheterization via near-infrared (NIR) imaging and a camera-projector system is one such solution. However, the central problem is that the image captured by the camera misaligns with the image projected back onto the object of interest, so the projected image is not overlaid perfectly on the real world. In this paper, a camera-projector calibration method is presented. A polynomial algorithm was used to remove barrel distortion from the captured images, and scaling and translation transformations were used to correct the geometric distortions introduced during image acquisition. Discrepancies between the captured and projected images were assessed; the alignment accuracy between them is 90.643%. This indicates the feasibility of the proposed approach for eliminating discrepancies between the projection and navigation images.
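
The barrel-distortion removal and scale/translation alignment described above map naturally onto the standard radial polynomial lens model. Below is a minimal sketch using OpenCV; the camera matrix, distortion coefficients, affine parameters, and file name are illustrative placeholders (in practice they come from a calibration routine), not values from the paper.

```python
import cv2
import numpy as np

# Illustrative intrinsics; real values come from camera calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
# Radial polynomial coefficients (k1, k2, p1, p2, k3); a negative k1
# models barrel distortion.
dist = np.array([-0.25, 0.05, 0.0, 0.0, 0.0])

img = cv2.imread("nir_capture.png")          # captured NIR frame (assumed name)
undistorted = cv2.undistort(img, K, dist)    # remove barrel distortion

# Scaling + translation to register the corrected image with the projector.
M = np.float32([[1.02, 0.0, 12.0],           # scale x, translate x
                [0.0, 1.02, -8.0]])          # scale y, translate y
h, w = undistorted.shape[:2]
aligned = cv2.warpAffine(undistorted, M, (w, h))
```
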
2

Yu, Jae Taeg, Sung Woong Ra, Sungmin Lee, and Seung-Won Jung. "Image Dehazing Algorithm Using Near-infrared Image Characteristics." Journal of the Institute of Electronics and Information Engineers 52, no. 11 (November 25, 2015): 115–23. http://dx.doi.org/10.5573/ieie.2015.52.11.115.

3

Kil, Taeho, and Nam Ik Cho. "Image Fusion using RGB and Near Infrared Image." Journal of Broadcast Engineering 21, no. 4 (July 30, 2016): 515–24. http://dx.doi.org/10.5909/jbe.2016.21.4.515.

4

Kang, You Sun, and Duk Shin. "Multiband Camera System Using Color and Near Infrared Images." Applied Mechanics and Materials 446-447 (November 2013): 922–26. http://dx.doi.org/10.4028/www.scientific.net/amm.446-447.922.

Abstract:
Various applications using camera systems have been developed and deployed commercially to improve our daily lives. The performance of a camera system depends mainly on image quality and illumination conditions, and multiband cameras have been developed to provide richer information at image acquisition. In this paper, we developed two applications, image segmentation and face detection, using a multiband camera that provides four bands: one near-infrared band and three color bands. We proposed a multiband camera system that utilizes two different images, i.e., a color image extracted through a Bayer filter and a near-infrared image. The experimental results showed the effectiveness of the proposed system.
5

Son, Dong-Min, Hyuk-Ju Kwon, and Sung-Hak Lee. "Visible and Near Infrared Image Fusion Using Base Tone Compression and Detail Transform Fusion." Chemosensors 10, no. 4 (March 25, 2022): 124. http://dx.doi.org/10.3390/chemosensors10040124.

Abstract:
This study aims to develop a spatial dual-sensor module for acquiring visible and near-infrared images of the same scene without time shifting, and to synthesize the captured images. The proposed method fuses visible and near-infrared images using the contourlet transform, principal component analysis, and iCAM06; the blending uses the color information of the visible image and the detail information of the infrared image. The contourlet transform can decompose an image into directional subimages, making it better at capturing detail than other decomposition algorithms. The global tone information is enhanced by iCAM06, which is used in high-dynamic-range imaging. The blended images show a clear appearance through both the compressed tone information of the visible image and the details of the infrared image.
6

Zhuge, Jing Chang, Zhi Jing Yu, and Jian Shu Gao. "Ice Detection Based on Near Infrared Image Analysis." Applied Mechanics and Materials 121-126 (October 2011): 3960–64. http://dx.doi.org/10.4028/www.scientific.net/amm.121-126.3960.

Abstract:
To detect ice on aircraft wings, a method based on near-infrared image processing is proposed. Because near-infrared reflectivity varies with wavelength, four images of the same object are acquired at different detection wavelengths, and water and ice can be distinguished by their different variation trends across these images. In this paper, 1.10 μm, 1.16 μm, 1.26 μm, and 1.28 μm are selected as the detection wavelengths. Images of carbon-fiber-composite aircraft wings partially covered by water or ice are acquired and analyzed. A parameter D reflects the variation trend of the relative near-infrared reflectivity and can therefore serve as the basis for discrimination. The experimental results show that the proposed method is effective for ice detection.
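
The abstract does not define parameter D, so as a hypothetical stand-in the sketch below discriminates water from ice by the per-pixel slope of reflectance across the four detection wavelengths; file names and the threshold are assumptions.

```python
import numpy as np

# Stack of four co-registered NIR reflectance images, one per wavelength
# (hypothetical file names).
wavelengths = np.array([1.10, 1.16, 1.26, 1.28])  # micrometers
stack = np.stack([np.load(f"band_{w:.2f}um.npy") for w in wavelengths])

# Least-squares slope of reflectance vs. wavelength at each pixel.
x = wavelengths - wavelengths.mean()
slope = np.tensordot(x, stack - stack.mean(axis=0), axes=1) / (x @ x)

# Illustrative threshold: ice and water trend differently across bands.
ice_mask = slope < -0.5
```
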
7

Nagata, Masaki, Toshiki Hirogaki, Eiichi Aoyama, Takahiro Iida, Yasuhiro Uenishi, Masami Matsubara, and Yoshitaka Usui. "Quality Control Method Based on Gear Tooth Contact Evaluation Using Near-Infrared Ray Imagery." Key Engineering Materials 447-448 (September 2010): 569–73. http://dx.doi.org/10.4028/www.scientific.net/kem.447-448.569.

Abstract:
Conventionally, tooth contact evaluation in gear manufacturing has been performed visually by machine operators when finishing a gear or during assembly. Under automation, the boundary of the contact area is unclear because of scattered light when visible light is used to image the tooth contact. We therefore focused on near-infrared light to suppress scattering. First, we confirmed that the tooth contact image obtained by binarization is hardly affected by the chosen threshold. Second, we propose a new method to extract the boundary of the tooth contact by differential calculation on the fine near-infrared image. These methods allow automatic division of near-infrared images into the contact area, the boundary, and the non-contact area. Finally, the obtained result is compared with the tooth contact calculated from the measured tooth surface. We demonstrated that the near-infrared image method is effective for automatic tooth contact evaluation.
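
As a rough illustration of the binarize-then-differentiate pipeline described above, the sketch below thresholds a NIR image with Otsu's method and extracts the contact boundary with a morphological gradient; the file name and kernel size are illustrative, not the paper's.

```python
import cv2
import numpy as np

nir = cv2.imread("gear_tooth_nir.png", cv2.IMREAD_GRAYSCALE)

# Binarize; Otsu picks the threshold, consistent with the observation
# that the result is insensitive to the exact threshold value.
_, contact = cv2.threshold(nir, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Differential step: a morphological gradient leaves only the boundary.
kernel = np.ones((3, 3), np.uint8)
boundary = cv2.morphologyEx(contact, cv2.MORPH_GRADIENT, kernel)

# Three-way labeling: 0 = non-contact, 1 = contact, 2 = boundary.
labels = (contact > 0).astype(np.uint8)
labels[boundary > 0] = 2
```
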
8

Kwon, Hyuk-Ju, and Sung-Hak Lee. "Visible and Near-Infrared Image Acquisition and Fusion for Night Surveillance." Chemosensors 9, no. 4 (April 8, 2021): 75. http://dx.doi.org/10.3390/chemosensors9040075.

Abstract:
Image fusion combines images with different information to create a single, information-rich image. The process may either involve synthesizing images using multiple exposures of the same scene, such as exposure fusion, or synthesizing images of different wavelength bands, such as visible and near-infrared (NIR) image fusion. NIR images are frequently used in surveillance systems because they are beyond the narrow perceptual range of human vision. In this paper, we propose an infrared image fusion method that combines high and low intensities for use in surveillance systems under low-light conditions. The proposed method utilizes a depth-weighted radiance map based on intensities and details to enhance local contrast and reduce noise and color distortion. The proposed method involves luminance blending, local tone mapping, and color scaling and correction. Each of these stages is processed in the LAB color space to preserve the color attributes of a visible image. The results confirm that the proposed method outperforms conventional methods.
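
The LAB-space blending stage can be illustrated compactly: blend the visible L channel with the registered NIR intensity and keep the visible a/b chroma. This is a minimal sketch of that one idea, not the paper's full radiance-map pipeline; the fixed blend weight is an assumption (the paper derives local weights instead).

```python
import cv2
import numpy as np

vis = cv2.imread("visible.png")                    # BGR, low light (assumed name)
nir = cv2.imread("nir.png", cv2.IMREAD_GRAYSCALE)  # registered NIR frame

lab = cv2.cvtColor(vis, cv2.COLOR_BGR2LAB)
L, a, b = cv2.split(lab)

# Luminance blending: lean on NIR where it is brighter and cleaner.
w = 0.6  # assumed constant weight
L_fused = np.clip((1 - w) * L.astype(np.float32) + w * nir.astype(np.float32),
                  0, 255).astype(np.uint8)

# Keep visible chroma (a, b) to preserve color attributes.
fused = cv2.cvtColor(cv2.merge([L_fused, a, b]), cv2.COLOR_LAB2BGR)
```
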
9

Choi, Janghoon, Jun-Geun Shin, Yoon-Oh Tak, Youngseok Seo, and Jonghyun Eom. "Single Camera-Based Dual-Channel Near-Infrared Fluorescence Imaging system." Sensors 22, no. 24 (December 13, 2022): 9758. http://dx.doi.org/10.3390/s22249758.

Abstract:
In this study, we propose a single-camera-based dual-channel near-infrared (NIR) fluorescence imaging system that produces color and dual-channel NIR fluorescence images in real time. To simultaneously acquire color and dual-channel NIR fluorescence images of two fluorescent agents, three cameras and additional optical parts are generally used; as a result, the volume of the image acquisition unit increases, interfering with movements during surgical procedures and raising production costs. In the proposed system, instead of three cameras, we use a single camera equipped with two image sensors that can simultaneously acquire color and single-channel NIR fluorescence images, thus reducing the volume of the image acquisition unit. The single-channel NIR fluorescence images are time-divided into two channels by synchronizing the camera and two excitation lasers, and the noise caused by the crosstalk effect between the two fluorescent agents is removed through image processing. To evaluate the performance of the system, experiments were conducted with the two fluorescent agents to measure the sensitivity, crosstalk effect, and signal-to-background ratio. The compactness of the resulting image acquisition unit alleviates the movement obstruction caused by previous devices during clinical and animal surgery and reduces the complexity and cost of manufacturing, which may facilitate the dissemination of this type of system.
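
Time-division multiplexing of the two fluorescence channels reduces to splitting the frame stream by parity once the lasers alternate in sync with the camera. A minimal sketch under that assumed frame ordering, with a simple scaled-subtraction stand-in for the paper's crosstalk removal:

```python
import numpy as np

def demux_dual_channel(frames: np.ndarray):
    """Split a time-division multiplexed NIR sequence (N, H, W) into two
    channels, assuming the excitation lasers alternate frame-by-frame."""
    ch1 = frames[0::2]   # frames lit by laser 1
    ch2 = frames[1::2]   # frames lit by laser 2
    return ch1, ch2

def remove_crosstalk(ch, other, alpha=0.1):
    """Illustrative crosstalk suppression: subtract a scaled estimate of the
    other agent's signal and clip negatives (alpha is an assumed factor)."""
    n = min(len(ch), len(other))
    return np.clip(ch[:n] - alpha * other[:n], 0, None)
```
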
10

Tang, Rongxin, Hualin Liu, and Jingbo Wei. "Visualizing Near Infrared Hyperspectral Images with Generative Adversarial Networks." Remote Sensing 12, no. 23 (November 24, 2020): 3848. http://dx.doi.org/10.3390/rs12233848.

Abstract:
Visualization of near-infrared hyperspectral images is valuable for quick viewing and information survey, whereas methods using band selection or dimension reduction fail to produce colors as natural as those of corresponding multispectral images. In this paper, an end-to-end neural network for hyperspectral visualization is proposed, based on convolutional neural networks, to transform a hyperspectral image with hundreds of near-infrared bands into a three-band image. Supervised learning is used to train the network, with multispectral images as targets so that it reconstructs natural-looking images; each pair of training images shares the same geographic location and a similar acquisition time. The generative adversarial framework is used, with an adversarial network improving the training of the generator. In the experiments, the proposed method is tested on the near-infrared bands of EO-1 Hyperion images with Landsat-8 images as the benchmark, and compared with five state-of-the-art visualization algorithms. The results show that the proposed method performs better in producing natural-looking details and colors for near-infrared hyperspectral images.
11

Osorio Quero, C., D. Durini, J. Rangel-Magdaleno, J. Martinez-Carranza, and R. Ramos-Garcia. "Single-Pixel Near-Infrared 3D Image Reconstruction in Outdoor Conditions." Micromachines 13, no. 5 (May 20, 2022): 795. http://dx.doi.org/10.3390/mi13050795.

Abstract:
Over the last decade, vision systems have improved their ability to capture 3D images in bad weather. Several techniques now exist for image acquisition in foggy or rainy scenes using infrared (IR) sensors; owing to the reduced light scattering at IR wavelengths, objects in a scene can be discriminated better than in images obtained in the visible spectrum. In this work, we therefore propose 3D image generation in foggy conditions using the single-pixel imaging (SPI) active-illumination approach combined with the time-of-flight (ToF) technique at a 1550 nm wavelength. To generate the 3D images, we use space-filling projection with compressed sensing (CS-SRCNN) and depth information based on ToF. To evaluate performance, the vision system includes a purpose-built test chamber that simulates different fog and background illumination environments, from which parameters related to image quality are calculated.
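
Single-pixel imaging itself is compact enough to sketch: project a sequence of structured patterns, record one detector value per pattern, and invert the measurement model. Below is a naive noiseless Hadamard reconstruction in NumPy; the paper's CS-SRCNN pipeline replaces this direct inversion with compressed sensing plus a CNN, and adds ToF depth.

```python
import numpy as np
from scipy.linalg import hadamard

n = 32                      # image is n x n; n*n must be a power of two here
H = hadamard(n * n)         # one +/-1 illumination pattern per row

scene = np.random.rand(n, n)          # stand-in for the (unknown) scene
y = H @ scene.ravel()                 # one photodiode reading per pattern

# Hadamard matrices satisfy H @ H.T = (n*n) * I, so inversion is a transpose.
recon = (H.T @ y) / (n * n)
assert np.allclose(recon.reshape(n, n), scene)
```
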
12

Dümbgen, Frederike, Majed El Helou, Natalija Gucevska, and Sabine Süsstrunk. "Near-Infrared Fusion for Photorealistic Image Dehazing." Electronic Imaging 2018, no. 16 (January 28, 2018): 321–1. http://dx.doi.org/10.2352/issn.2470-1173.2018.16.color-321.

13

Mäkelä, Mikko, Paul Geladi, Marja Rissanen, Lauri Rautkari, and Olli Dahl. "Hyperspectral near infrared image calibration and regression." Analytica Chimica Acta 1105 (April 2020): 56–63. http://dx.doi.org/10.1016/j.aca.2020.01.019.

14

Jang, Dong-Won, and Rae-Hong Park. "Colour image dehazing using near-infrared fusion." IET Image Processing 11, no. 8 (August 1, 2017): 587–94. http://dx.doi.org/10.1049/iet-ipr.2017.0192.

15

Rahmi, K. I. N., N. Febrianti, and I. Prasasti. "Forest and land fire smoke detection using GCOM-C data (case study: Pulang Pisau, Central Kalimantan)." IOP Conference Series: Earth and Environmental Science 893, no. 1 (November 1, 2021): 012068. http://dx.doi.org/10.1088/1755-1315/893/1/012068.

Abstract:
Forest and land fires produce heavy smoke over peatland areas in Indonesia, and the distribution of this smoke needs to be identified periodically. The new GCOM-C satellite was launched to monitor climate conditions and carries visible, near-infrared, and thermal-infrared bands. This study aims to identify fire smoke from GCOM-C data. GCOM-C covers wavelengths from 0.38 to 12 μm, spanning the visible, near-infrared, shortwave-infrared, and thermal-infrared, making it broadly comparable to the MODIS and Himawari-8 imagery previously used to identify forest/land fire smoke. The methodology is visual interpretation of forest/land fire smoke using the near-infrared band (VN08), a shortwave-infrared band (SW03), and thermal bands (T01 and T02). Hotspot data are overlaid on the GCOM-C image to mark the locations of fire events, and RGB composite images are used to detect the smoke. Composites of the VN08 band with the thermal bands could be used to detect fire smoke in Pulang Pisau, Central Kalimantan.
16

Saxena, Sanjeev, and Mausumi Pohit. "Near Infrared and Visible Image Registration Using Whale Optimization Algorithm." International Journal of Applied Metaheuristic Computing 13, no. 1 (January 2022): 1–14. http://dx.doi.org/10.4018/ijamc.2022010109.

Abstract:
This paper reports the use of a nature-inspired metaheuristic, the Whale Optimization Algorithm (WOA), for multimodal image registration. WOA is based on the hunting behaviour of humpback whales and provides good exploration and exploitation of the search space with a small chance of becoming trapped in local optima. Although WOA has been used in various optimization problems, no detailed study of its use in image registration is available. For this study, different sets of NIR and visible images are considered, and the registration results are compared with state-of-the-art image registration methods. The results show that WOA is a very competitive algorithm for NIR-visible image registration. With its better exploration of the search space and avoidance of local optima, the algorithm can be a suitable choice for multimodal image registration.
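
WOA's update rules are compact enough to sketch directly: each whale either encircles the current best solution, searches around a random whale, or spirals toward the best. The following is a generic minimal implementation of the standard algorithm (not the authors' code); the registration cost is a hypothetical placeholder.

```python
import numpy as np

def woa_minimize(cost, dim, lo, hi, n_whales=20, iters=200, seed=0):
    """Minimal Whale Optimization Algorithm (after Mirjalili & Lewis, 2016)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_whales, dim))
    best = min(X, key=cost).copy()
    for t in range(iters):
        a = 2.0 * (1 - t / iters)             # decreases linearly from 2 to 0
        for i in range(n_whales):
            A = 2 * a * rng.random() - a
            C = 2 * rng.random()
            if rng.random() < 0.5:
                if abs(A) < 1:                # exploit: encircle the best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                         # explore: move around a random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                             # spiral bubble-net maneuver
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
            if cost(X[i]) < cost(best):
                best = X[i].copy()
    return best

# Registration use: search a 2-D translation (tx, ty) minimizing, e.g.,
# negative mutual information between the NIR and shifted visible images
# (a hypothetical `neg_mutual_info` cost):
# best_shift = woa_minimize(neg_mutual_info, dim=2, lo=-20, hi=20)
```
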
17

Hughes, D. H., E. I. Robson, and M. J. Ward. "Optical & Near Infrared Imaging of NGC1275." Symposium - International Astronomical Union 134 (1989): 376–78. http://dx.doi.org/10.1017/s0074180900141373.

Abstract:
We are currently studying a selection of active galaxies using the new IR array camera IRCAM on UKIRT. Our aim is to separate the underlying stellar emission from that of the active galactic nucleus. Although the optical is the best wavelength region for discriminating between the different populations in the underlying spiral and elliptical galaxies, it is in the infrared that the contrast between the non-thermal central core and the surrounding galaxy is greatest. We present reduced data from infrared images taken at 1.25, 1.65, and 2.2 μm with an image scale of 0.6 arcsec/pixel, together with optical 0.44 and 0.55 μm CCD images of the Seyfert galaxy NGC1275.
18

Yuan, Yubin, Yu Shen, Jing Peng, Lin Wang, and Hongguo Zhang. "Defogging Technology Based on Dual-Channel Sensor Information Fusion of Near-Infrared and Visible Light." Journal of Sensors 2020 (November 15, 2020): 1–17. http://dx.doi.org/10.1155/2020/8818650.

Abstract:
Because removing fog from images is complicated and defogged images can suffer detail loss and color distortion, a defogging method based on near-infrared and visible image fusion is put forward in this paper. The algorithm uses the near-infrared image, with its rich details, as a new data source and adopts image fusion to obtain a defogged image with rich detail and high color fidelity. First, the color visible image is converted into HSI color space to obtain intensity, hue, and saturation channel images. The intensity channel image is fused with the near-infrared image and defogged, and then decomposed by the nonsubsampled shearlet transform. The resulting high-frequency coefficients are filtered with an edge-preserving double-exponential smoothing filter, while anti-sharpening masking is applied to the low-frequency coefficients. A new intensity channel image is then obtained through the fusion rule and the inverse transform. Next, for the color treatment of the visible image, a degradation model of the saturation image is established, whose parameters are estimated using the dark channel prior to obtain an estimated saturation image. Finally, the new intensity channel image, the estimated saturation image, and the original hue image are mapped back to RGB space to obtain the fused image, which is enhanced by color and sharpness correction. To demonstrate the effectiveness of the algorithm, dense-fog and thin-fog images are compared against popular single-image and multi-image defogging algorithms and a deep-learning-based visible/near-infrared fusion defogging algorithm. The experimental results show that the proposed algorithm improves edge contrast and visual sharpness better than existing high-efficiency defogging methods.
19

Park, Jin-Seok, Dai-Kyung Hyun, Jong-Uk Hou, Do-Guk Kim, and Heung-Kyu Lee. "Detecting digital image forgery in near-infrared image of CCTV." Multimedia Tools and Applications 76, no. 14 (September 10, 2016): 15817–38. http://dx.doi.org/10.1007/s11042-016-3871-7.

20

Clouet, Axel, Célia Viola, and Jérôme Vaillant. "Visible to near-infrared multispectral images dataset for image sensors design." Electronic Imaging 2020, no. 5 (January 26, 2020): 106–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.5.maap-082.

Abstract:
In this paper we present a set of multispectral images covering the visible and near-infrared spectral range (400 nm to 1050 nm). This dataset is intended to provide spectral reflectance images of daily-life objects, usable for silicon image sensor simulations. All images were taken with our acquisition bench, and particular attention was paid to processing in order to provide calibrated reflectance data. ReDFISh (Reflectance Dataset For Image sensor Simulation) is available at: http://dx.doi.org/10.18709/perscido.2020.01.ds289.
21

Everitt, James H., Russ D. Pettit, and Mario A. Alaniz. "Remote Sensing of Broom Snakeweed (Gutierrezia sarothrae) and Spiny Aster (Aster spinosus)." Weed Science 35, no. 2 (March 1987): 295–302. http://dx.doi.org/10.1017/s0043174500079224.

Abstract:
Field spectroradiometric plant canopy measurements showed that broom snakeweed [Gutierrezia sarothrae (Pursh.) Britt. and Rusby # GUESA] and spiny aster (Aster spinosus Benth. # ASTSN) had lower near-infrared (0.85-μm) reflectance than did other associated rangeland shrubs and herbaceous vegetation. The low near-infrared reflectances of both species were attributed to their erectophile (erect leaf/stem) canopy structures. These low near-infrared reflectance values caused broom snakeweed to have a dark-brown to black image on color-infrared aerial photos (0.50- to 0.90-μm), whereas spiny aster had a dark reddish-brown to black image. Other rangeland plant species had light-brown, red, or magenta images. Computer-based image analyses of color-infrared film positive transparencies showed that broom snakeweed and spiny aster infestations could be quantitatively differentiated from associated rangeland species. Computer analyses can permit "percent land area infested" estimates of broom snakeweed and spiny aster infestations on rangelands.
22

ZHENG, YING, STAN Z. LI, JIANGLONG CHANG, and ZENGFU WANG. "3D MODELING OF FACES FROM NEAR INFRARED IMAGES USING STATISTICAL LEARNING." International Journal of Pattern Recognition and Artificial Intelligence 24, no. 01 (February 2010): 55–71. http://dx.doi.org/10.1142/s0218001410007804.

Abstract:
This paper proposes a statistical-learning-based method for 3D modeling of faces directly from near-infrared (NIR) images. We use a specially designed camera system with active NIR illumination to capture NIR images of faces; images captured in this way are invariant to environmental lighting changes, which provides more suitable data for statistical learning. Using the NIR images and the depth images of some known faces, we can learn a mapping relation between the two image modalities, which can then be used to recover the depth data of an unknown face from its NIR image. To perform the learning, the images of different modalities taken from different persons are carefully aligned to establish pixel-to-pixel correspondences between images. Based on these aligned images, two face spaces corresponding to NIR and depth face images are constructed. We then use a PCA-based or kernel-based scheme to perform the learning between these high-dimensional spaces. Several regression algorithms with linear and nonlinear kernels are employed and evaluated to find the mapping that best describes the relation between the two face spaces. The experimental results show that the presented method is effective: it can reconstruct a 3D face model directly from the NIR image of a face with high accuracy and low computational cost.
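
The mapping-between-face-spaces idea can be sketched in a few lines: project both modalities onto PCA bases, then regress depth coefficients on NIR coefficients. A minimal scikit-learn version with linear ridge regression standing in for the kernel regressors the paper evaluates; file names, shapes, and component counts are assumptions (and require at least as many training faces as components).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

# Training data: flattened, pixel-aligned face images (one row per person).
nir_train = np.load("nir_faces.npy")      # (n_people, h*w), assumed layout
depth_train = np.load("depth_faces.npy")  # (n_people, h*w)

pca_nir = PCA(n_components=50).fit(nir_train)
pca_depth = PCA(n_components=50).fit(depth_train)

# Learn the mapping between the two coefficient spaces.
reg = Ridge(alpha=1.0).fit(pca_nir.transform(nir_train),
                           pca_depth.transform(depth_train))

def nir_to_depth(nir_face):
    """Recover a depth map from a single flattened NIR face image."""
    coeff = reg.predict(pca_nir.transform(nir_face[None, :]))
    return pca_depth.inverse_transform(coeff)[0]
```
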
23

TRONGTIRAKUL, THAWEESAK, WERAPON CHIRACHARIT, and SOS AGAIAN. "Color Restoration of Multispectral Images: Near-Infrared (NIR) filter-to-Color (RGB) image." Electronic Imaging 2020, no. 10 (January 26, 2020): 180–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.10.ipas-179.

Abstract:
Multispectral images captured by near-infrared (NIR) filtered devices have attractive potential for numerous applications in computer vision, robot vision, and artificial intelligence systems. Although NIR-filtered devices reveal otherwise unseen details, they produce false colors that corrupt the main color components, so the color restoration of NIR-filtered images is a challenge for human perception. To address this challenge, the proposed method has four main steps: i) color balance with chromatic adaptation; ii) color component switching; iii) hue-saturation-lightness adjustment; and iv) image fusion using a proportional-weighting metric. The proposed method: i) restores the RGB color components; ii) removes the color-overlap effect and generates structural details close to those of the ground-truth images; and iii) preserves overall brightness.
24

Lewis, E. Neil, and Ira W. Levin. "Vibrational Spectroscopic Microscopy: Raman, Near-Infrared and Mid-Infrared Imaging Techniques." Microscopy and Microanalysis 1, no. 1 (February 1995): 35–46. http://dx.doi.org/10.1017/s1431927695110351.

Abstract:
New instrumental approaches for performing vibrational Raman, near-infrared and mid-infrared spectroscopic imaging microscopy are described. The instruments integrate imaging quality filters such as acousto-optic tunable filters (AOTFs), with visible charge-coupled device (CCD) and infrared focal-plane array detectors. These systems are used in conjunction with infinity-corrected, refractive microscopes for operation in the visible and near-infrared spectral regions and with Cassegrainian reflective optics for operation in the mid-infrared spectral interval. Chemically specific images at moderate spectral resolution (2 nm) and high spatial resolution (1 μm) can be collected rapidly and noninvasively. Image data are presented containing 128 × 128 pixels, although significantly larger format images can be collected in approximately the same time. The instruments can be readily configured for both absorption and reflectance spectroscopies. We present Raman emission images of polystyrene microspheres and a lipid/amino acid mixture and near-infrared images of onion epidermis and a hydrated phospholipid dispersion. Images generated from mid-infrared spectral data are presented for a KBr disk containing nonhomogeneous domains of lipid and for 50-μm slices of monkey cerebellum. These are the first results illustrating the use of infrared focal-plane array detectors as chemically specific spectroscopic imaging devices and demonstrating their application in biomolecular areas. Extensions and future applications of the various vibrational spectroscopic imaging techniques are discussed.
25

Matsui, Sosuke, Takahiro Okabe, Mihoko Shimano, and Yoichi Sato. "Image Enhancement of Low-light Scenes with Near-infrared Flash Images." IPSJ Transactions on Computer Vision and Applications 2 (2010): 215–23. http://dx.doi.org/10.2197/ipsjtcva.2.215.

26

Tacconi-Garman, L. E., L. Weitzel, M. Cameron, S. Drapatz, R. Genzel, A. Krabbe, H. Kroker, and N. Thatte. "3D: The New Near-Infrared Field Imaging Spectrometer." Symposium - International Astronomical Union 167 (1995): 373. http://dx.doi.org/10.1017/s0074180900056862.

Abstract:
3D is a next-generation near-IR spectrometer developed at the MPE which offers, in a single integration, the opportunity to image an 8″ × 8″ field across almost the entire K-band at a simultaneous spatial resolution of 0.″5. The field is divided into 0.″5-wide strips, which are then aligned optically on top of each other to form a single long slit. This long slit is used as the input to a grating spectrometer, which images it onto a two-dimensional detector array; each detector row then represents the spectrum of one spatial element of the two-dimensional field of view. The central part of the optical system is the image slicer, which is made of two complex plane-mirror systems consisting of 16 segments each. The detector is a NICMOSIII HgCdTe array with 256 × 256 pixels. In the spectral domain the spectrometer provides a resolving power of R = 1000. Here we present not only the design of the instrument but also the first data obtained during instrument commissioning at the 3.5-m Calar Alto telescope in December 1993.
27

Liang, Wei, Derui Ding, and Guoliang Wei. "An improved DualGAN for near-infrared image colorization." Infrared Physics & Technology 116 (August 2021): 103764. http://dx.doi.org/10.1016/j.infrared.2021.103764.

28

Dilhan, L., J. Vaillant, A. Ostrovsky, L. Masarotto, C. Pichard, and R. Paquet. "Planar microlenses for near infrared CMOS image sensors." Electronic Imaging 2020, no. 7 (January 26, 2020): 144–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.7.iss-144.

Abstract:
In this paper we present planar microlenses designed to improve the sensitivity of SPAD pixels. We designed diffractive and metasurface planar microlens structures based on rigorous optical simulations. The current melted-microlens solution and the designed diffractive microlens were implemented on STMicroelectronics 40 nm CMOS testchips (32 × 32 SPAD array), and average sensitivity gains of 1.9 and 1.4, respectively, were measured relative to a SPAD without a microlens.
29

Kaufmann, R., G. Isella, A. Sanchez-Amores, S. Neukom, A. Neels, L. Neumann, A. Brenzikofer, A. Dommann, C. Urban, and H. von Känel. "Near infrared image sensor with integrated germanium photodiodes." Journal of Applied Physics 110, no. 2 (July 15, 2011): 023107. http://dx.doi.org/10.1063/1.3608245.

30

Lodder, Robert A., and Gary M. Hieftje. "Subsurface Image Reconstruction by Near-Infrared Reflectance Analysis." Applied Spectroscopy 42, no. 2 (February 1988): 309–12. http://dx.doi.org/10.1366/0003702884428194.

Abstract:
A method is described for reconstructing decorative images concealed by layers of overcoatings. The method employs a nonparametric multivariate algorithm and a cellular-automaton construct to convert the near-infrared spectrum of a surface into digital images of the surface layers. The technique is potentially applicable to the restoration of historic buildings where aging, previous restoration attempts, and even political considerations have resulted in the alteration of the original historic surfaces.
31

Vahrmeijer, Alexander L., Merlijn Hutteman, Joost R. van der Vorst, Cornelis J. H. van de Velde, and John V. Frangioni. "Image-guided cancer surgery using near-infrared fluorescence." Nature Reviews Clinical Oncology 10, no. 9 (July 23, 2013): 507–18. http://dx.doi.org/10.1038/nrclinonc.2013.123.

32

Landry, Markita P. "(Invited) Near Infrared Nanosensors to Image Brain Neurochemistry." ECS Meeting Abstracts MA2020-02, no. 67 (November 23, 2020): 3419. http://dx.doi.org/10.1149/ma2020-02673419mtgabs.

33

Aminoto, Toto, Purnomo Sidi Priambodo, and Harry Sudibyo. "Image Decomposition Technique Based on Near-Infrared Transmission." Journal of Imaging 8, no. 12 (December 3, 2022): 322. http://dx.doi.org/10.3390/jimaging8120322.

Abstract:
One way to diagnose a disease is to examine images of tissue thought to be affected by it. Near-infrared light is nonionizing, noninvasive, and nonradiative, and it also interacts selectively with the objects it passes through: the resulting attenuation coefficient differs depending on the material and the wavelength. By measuring the output and input intensity values together with the attenuation coefficient, the thickness of a material can be determined, and the thickness values can then be used to display a reconstructed image. In this study, the object examined was a phantom consisting of silicone rubber, margarine, and gelatin. The results showed that margarine could be decomposed from the other ingredients at a wavelength of 980 nm.
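
The thickness measurement follows directly from the Beer-Lambert law, I = I0 * exp(-mu * d), so d = ln(I0 / I) / mu. A minimal per-pixel sketch with an illustrative attenuation coefficient and hypothetical file names:

```python
import numpy as np

mu = 0.9   # attenuation coefficient at 980 nm, 1/mm (illustrative value)
I0 = np.load("input_intensity.npy")  # source intensity image
I = np.load("transmitted.npy")       # intensity after passing through the sample

# Beer-Lambert: I = I0 * exp(-mu * d)  =>  d = ln(I0 / I) / mu
thickness_mm = np.log(I0 / I) / mu   # per-pixel thickness map
```
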
34

Gatley, Ian, R. Joyce, A. Fowler, D. Depoy, and R. Probst. "A Large Near-Infrared Image of the Galactic Center." Symposium - International Astronomical Union 136 (1989): 361–64. http://dx.doi.org/10.1017/s0074180900186747.

Abstract:
We have obtained a K band image of the central 30 × 40 arcminutes of the Galaxy at a scale of 1.4″/pixel using a 256 × 256 Pt: Si Schottky barrier diode array detector provided by the Hughes Aircraft Company. The excellent cosmetic quality and large field of this device provide an unprecedented view of the inner Galaxy. Images of the central 10 arcminutes at a scale of 0.9″/pixel in the H (1.65μm) and K (2.2μm) bands produced with the same detector array have been combined to produce a color picture, which clearly shows the circumnuclear molecular ring in absorption; this picture demonstrates directly that the southwestern side of the ring lies in front of, and the northeastern side behind, the Galactic center.
35

Ma, Xiaoyu, Wei Huang, Rui Huang, and Xuefeng Liu. "Near-Infrared Image Colorization Using Asymmetric Codec and Pixel-Level Fusion." Applied Sciences 12, no. 19 (October 7, 2022): 10087. http://dx.doi.org/10.3390/app121910087.

Abstract:
This paper studies the colorization of near-infrared (NIR) images. Ordinary image colorization methods cannot be extended to NIR colorization, since the NIR band lies outside the visible spectral range and is often linearly independent of the luminance of the RGB image. Furthermore, a symmetric codec, which cannot guarantee the feature-extraction ability of the encoder, is often used as the main frame of the network in both CNN-based and CycleGAN-based colorization networks. To address this, we propose a novel NIR colorization method using an asymmetric codec (ACD) and pixel-level fusion. The ACD is designed to improve the feature-extraction ability of the encoder by allowing information to enter deeper into the model and by learning more non-redundant information. In addition, global and local feature fusion networks (GLFFNet) are embedded between the encoder and the decoder to improve the prediction of subtle color information. The ACD and GLFFNet together constitute the colorization network (ColorNet). Bilateral filtering and weighted least squares filtering (BFWLS) are then used to fuse pixel-level information from the input NIR image into the raw output of the ColorNet. Finally, an extensive comparison on common datasets verifies the method's superiority over existing approaches in qualitative and quantitative assessments.
36

Torres, Juan, Carmen Vega, Tomás Antelo, José Manuel Menéndez, Marian del Egido, Miriam Bueso, and Alberto Posse. "Formation of Hyperspectral Near-Infrared Images from Artworks." MRS Proceedings 1374 (2012): 3–15. http://dx.doi.org/10.1557/opl.2012.1374.

Abstract:
In this paper, a novel hyperspectral image acquisition system is described that obtains a set of narrowband images (~2.25 nm bandwidth) and the related composition of monochrome images in the near-infrared. The aim of this system is to discriminate materials by their optical spectral response in the range of 900-1700 nm. The system has been developed in the framework of a collaborative project that includes improving the automatic composition of reflectographic mosaics in order to study the underdrawing of large-format artworks in real time. The main features of this project are detailed in this paper, and a few enlightening results of the hyperspectral system and new lines of research are shown.
37

Prunet, Simon. "A Low-rank Approach to Image Defringing." Publications of the Astronomical Society of the Pacific 133, no. 1029 (November 1, 2021): 114502. http://dx.doi.org/10.1088/1538-3873/ac3408.

Abstract:
In this work, we revisit the problem of interference fringe patterns in CCD chips, which occur in near-infrared bands due to multiple light reflections within the chip. We briefly discuss the traditional approaches that were developed to remove these patterns from science images and mention their limitations. We then introduce a new method to globally estimate the fringe patterns in a collection of science images without additional external data, allowing for some variation of the patterns between images. We demonstrate this new method on near-infrared images taken by the CFHT wide-field imager Megacam.
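
The low-rank idea can be illustrated with a truncated SVD across the image stack: the leading singular modes shared by many frames approximate the common fringe patterns (with per-image amplitudes), which are then subtracted. This is a generic sketch under simplifying assumptions (sources and sky already removed), not the paper's exact estimator.

```python
import numpy as np

# stack: (n_images, H, W) of background-subtracted science frames (assumed file).
stack = np.load("science_frames.npy")
n, H, W = stack.shape
X = stack.reshape(n, H * W)

# Truncated SVD: keep the first k modes as the fringe subspace.
k = 3
U, s, Vt = np.linalg.svd(X, full_matrices=False)
fringe = (U[:, :k] * s[:k]) @ Vt[:k]      # low-rank fringe estimate per image

defringed = (X - fringe).reshape(n, H, W)
```
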
38

Park, Cheul-Woo, Hyuk-Ju Kwon, and Sung-Hak Lee. "Illuminant Adaptive Wideband Image Synthesis Using Separated Base-Detail Layer Fusion Maps." Applied Sciences 12, no. 19 (September 21, 2022): 9441. http://dx.doi.org/10.3390/app12199441.

Abstract:
In this study, we present a wideband image synthesis technique for day and night object identification. To synthesize the visible and near-infrared images, a base component and a detail component are first decomposed using a bilateral filter, and the detail component is synthesized using a local variance map. In addition, considering how near-infrared image characteristics differ between daytime and nighttime, the base components are synthesized using a luminance saturation region map and a depth-and-penetration map computed with a joint bilateral filter. The proposed method overcomes the partial over- or under-exposure caused by sunlight and infrared auxiliary light, which commonly occurs in wideband imaging, and improves the identification of objects in various indoor and outdoor images compared with existing methods by emphasizing detail components.
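
The base/detail split at the heart of this pipeline is easy to sketch with OpenCV: the bilateral-filtered image is the base, the residual is the detail, and a local-variance map chooses the stronger detail per pixel. Filter parameters and the plain averaged base are illustrative simplifications of the paper's weighted maps.

```python
import cv2
import numpy as np

def base_detail(img):
    """Bilateral base layer plus residual detail layer."""
    base = cv2.bilateralFilter(img, d=9, sigmaColor=40, sigmaSpace=7)
    return base.astype(np.float32), img.astype(np.float32) - base

def local_variance(img, ksize=7):
    mean = cv2.blur(img, (ksize, ksize))
    return cv2.blur(img * img, (ksize, ksize)) - mean * mean

vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
nir = cv2.imread("nir.png", cv2.IMREAD_GRAYSCALE)

b_v, d_v = base_detail(vis)
b_n, d_n = base_detail(nir)

# Detail fusion: keep whichever source has more local activity.
w = local_variance(d_v) / (local_variance(d_v) + local_variance(d_n) + 1e-6)
# Simple averaged base; the paper instead weights bases with its
# saturation-region and depth/penetration maps.
fused = 0.5 * (b_v + b_n) + w * d_v + (1 - w) * d_n
```
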
39

Browne, Andrew W., Ekaterina Deyneka, Francesco Ceccarelli, Josiah K. To, Siwei Chen, Jianing Tang, Anderson N. Vu, and Pierre F. Baldi. "Deep learning to enable color vision in the dark." PLOS ONE 17, no. 4 (April 6, 2022): e0265185. http://dx.doi.org/10.1371/journal.pone.0265185.

Abstract:
Humans perceive light in the visible spectrum (400-700 nm). Some night vision systems use infrared light that is not perceptible to humans and the images rendered are transposed to a digital display presenting a monochromatic image in the visible spectrum. We sought to develop an imaging algorithm powered by optimized deep learning architectures whereby infrared spectral illumination of a scene could be used to predict a visible spectrum rendering of the scene as if it were perceived by a human with visible spectrum light. This would make it possible to digitally render a visible spectrum scene to humans when they are otherwise in complete “darkness” and only illuminated with infrared light. To achieve this goal, we used a monochromatic camera sensitive to visible and near infrared light to acquire an image dataset of printed images of faces under multispectral illumination spanning standard visible red (604 nm), green (529 nm) and blue (447 nm) as well as infrared wavelengths (718, 777, and 807 nm). We then optimized a convolutional neural network with a U-Net-like architecture to predict visible spectrum images from only near-infrared images. This study serves as a first step towards predicting human visible spectrum scenes from imperceptible near-infrared illumination. Further work can profoundly contribute to a variety of applications including night vision and studies of biological samples sensitive to visible light.
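
A toy version of the NIR-to-visible prediction network: a small U-Net-style encoder-decoder in PyTorch that maps a 1-channel NIR image to 3 visible channels. Depth, channel widths, and the missing training loop are placeholders for the paper's optimized architecture.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())
        self.enc1, self.enc2 = block(1, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)               # 64 = 32 skip + 32 upsampled
        self.out = nn.Conv2d(32, 3, 1)         # predict R, G, B

    def forward(self, x):                      # x: (B, 1, H, W), H and W even
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.out(d))      # RGB in [0, 1]

net = TinyUNet()
rgb = net(torch.randn(1, 1, 128, 128))         # -> (1, 3, 128, 128)
```
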
40

Xu, Min, Yue Ma, and Shuai Chen. "Surface Defects Detection of Red Jujube Based on Near-Infrared Vision System." Applied Mechanics and Materials 303-306 (February 2013): 573–77. http://dx.doi.org/10.4028/www.scientific.net/amm.303-306.573.

Abstract:
Quality evaluation of agricultural and food products is important for processing, inventory control, and marketing. Surface defects are an important quality factor for the jujube industry, especially for high-quality jujubes such as the Xinjiang red jujube. This paper presents the development and test results of a machine vision system for automatic detection of jujube surface defects. Unlike near-infrared spectrometric approaches, the developed system uses reflective near-infrared images to evaluate jujube quality by analyzing two-dimensional images. The near-infrared imaging and vision algorithms are presented, along with operational details of the system, including the cameras, optics, illumination, and fruit carrier. The complete machine vision system has been built, and the experimental results show that it is feasible for detecting jujube defects.
41

Son, Dong-Min, Hyuk-Ju Kwon, and Sung-Hak Lee. "Visible and Near-Infrared Image Synthesis Using PCA Fusion of Multiscale Layers." Applied Sciences 10, no. 23 (December 4, 2020): 8702. http://dx.doi.org/10.3390/app10238702.

Abstract:
This study proposes a method of blending visible and near-infrared (NIR) images to enhance their edge details and local contrast, based on the Laplacian pyramid and principal component analysis (PCA). In the proposed method, both the Laplacian pyramid and PCA are used to generate a radiance map; with the PCA algorithm, a soft-mixing method and a mask-skipping filter are applied when the images are fused. The color compensation method uses the ratio between the fused radiance map and the luminance channel of the visible image to preserve the chrominance of the visible image. The results show that the proposed method improves edge details and local contrast effectively.
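
A hypothetical sketch of the core fusion loop: build Laplacian pyramids of the two inputs, weight each level by the first principal component of the two-band covariance, and collapse the fused pyramid into the radiance map. File names and the level count are assumptions; the paper's soft-mixing and mask-skipping refinements are omitted.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    return [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1])
            for i in range(levels)] + [gp[levels]]

def pca_weights(a, b):
    """Fusion weights from the first principal component of the two bands."""
    cov = np.cov(np.stack([a.ravel(), b.ravel()]))
    eigvals, eigvecs = np.linalg.eigh(cov)
    w = np.abs(eigvecs[:, -1])   # eigenvector of the largest eigenvalue
    return w / w.sum()

vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
nir = cv2.imread("nir.png", cv2.IMREAD_GRAYSCALE)

pyr_v, pyr_n = laplacian_pyramid(vis), laplacian_pyramid(nir)
fused_pyr = []
for lv, ln in zip(pyr_v, pyr_n):
    w1, w2 = pca_weights(lv, ln)
    fused_pyr.append(w1 * lv + w2 * ln)

# Collapse the fused pyramid back into the radiance map.
radiance = fused_pyr[-1]
for lap in reversed(fused_pyr[:-1]):
    radiance = cv2.pyrUp(radiance, dstsize=lap.shape[1::-1]) + lap
```
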
42

Zhang, Rui, Ya-Zhou Xue, and Xiao-Feng Yang. "Biomedical optical properties of color light and near-infrared fluorescence separated-merged imager." Journal of Innovative Optical Health Sciences 12, no. 06 (November 2019): 1940001. http://dx.doi.org/10.1142/s1793545819400017.

Abstract:
Objective: We study the biomedical optical properties of a color-light and near-infrared-fluorescence separated-merged imager. Materials and Methods: The imager illuminates the scene with visible light and near-infrared light of [Formula: see text] nm, receives the reflected light and the [Formula: see text] nm near-infrared fluorescence, and displays the color, fluorescence, and merged images. ICG solutions of different concentrations and standing times were prepared to determine the best imaging conditions in vitro; the depth of fluorescence penetration was studied with 5% agarose gel; the imaging characteristics of the imager were studied using SD rats; and sentinel lymph node (SLN) tracing was then performed in 4 cases of penile carcinoma. Results: When the concentration of ICG is 13.11 μmol/L, the fluorescence intensity and the merged image are best. The maximum depth of fluorescence imaging is 9 mm in 5% agarose gel, with bone having the greatest influence on it. The SLN tracing shows that the imager can locate the SLNs in vitro, achieving perioperative navigation during biopsy. Conclusion: Many factors affect the imaging, but the imager's performance meets visual requirements over a wide range, and it can effectively trace the SLNs in the perioperative period.
43

Lee, Seung Hyun, Yu Hua Quan, Min Sub Kim, Ki Hyeok Kwon, Byeong Hyeon Choi, Hyun Koo Kim, and Beop-Min Kim. "Design and Testing of Augmented Reality-Based Fluorescence Imaging Goggle for Intraoperative Imaging-Guided Surgery." Diagnostics 11, no. 6 (May 21, 2021): 927. http://dx.doi.org/10.3390/diagnostics11060927.

Abstract:
The different pathways between the position of a near-infrared camera and the user’s eye limit the use of existing near-infrared fluorescence imaging systems for tumor margin assessments. By utilizing an optical system that precisely matches the near-infrared fluorescence image and the optical path of visible light, we developed an augmented reality (AR)-based fluorescence imaging system that provides users with a fluorescence image that matches the real-field, without requiring any additional algorithms. Commercial smart glasses, dichroic beam splitters, mirrors, and custom near-infrared cameras were employed to develop the proposed system, and each mount was designed and utilized. After its performance was assessed in the laboratory, preclinical experiments involving tumor detection and lung lobectomy in mice and rabbits by using indocyanine green (ICG) were conducted. The results showed that the proposed system provided a stable image of fluorescence that matched the actual site. In addition, preclinical experiments confirmed that the proposed system could be used to detect tumors using ICG and evaluate lung lobectomies. The AR-based intraoperative smart goggle system could detect fluorescence images for tumor margin assessments in animal models, without disrupting the surgical workflow in an operating room. Additionally, it was confirmed that, even when the system itself was distorted when worn, the fluorescence image consistently matched the actual site.
44

Huang, Nan, Hongjun Liu, Zhaolu Wang, Jing Han, and Shuan Zhang. "Femtowatt incoherent image conversion from mid-infrared light to near-infrared light." Laser Physics 27, no. 3 (January 23, 2017): 035401. http://dx.doi.org/10.1088/1555-6611/aa57db.

45

Garcia, Missael, Christopher Edmiston, Timothy York, Radoslav Marinov, Suman Mondal, Nan Zhu, Gail P. Sudlow, et al. "Bio-inspired imager improves sensitivity in near-infrared fluorescence image-guided surgery." Optica 5, no. 4 (April 5, 2018): 413. http://dx.doi.org/10.1364/optica.5.000413.

46

Everitt, James H., James V. Richerson, Mario A. Alaniz, David E. Escobar, Ricardo Villarreal, and Michael R. Davis. "Light Reflectance Characteristics and Remote Sensing of Big Bend Loco (Astragalus mollissimus var. earlei) and Wooton Loco (Astragalus wootonii)." Weed Science 42, no. 1 (March 1994): 115–22. http://dx.doi.org/10.1017/s0043174500084265.

Abstract:
The high near-infrared reflectance (0.76 to 0.90 μm) of Big Bend loco and Wooton loco contributed significantly to their orange-red and red image tonal responses, respectively, on color-infrared aerial photographs, making them distinguishable from associated vegetation and soil. Big Bend loco could also be distinguished on color-infrared and near-infrared black-and-white video imagery, where it had distinct red and whitish tonal responses, respectively. Computer analyses of photographic and videographic images showed that Big Bend loco and Wooton loco populations could be differentiated from other landscape features. A global positioning system was integrated with the video imagery, permitting latitude-longitude coordinates to appear on each image. The latitude-longitude data were integrated with a geographic information system to map Big Bend loco populations.
47

Lee, John Y. K., John T. Pierce, Jayesh P. Thawani, Ryan Zeh, Shuming Nie, Maria Martinez-Lage, and Sunil Singhal. "Near-infrared fluorescent image-guided surgery for intracranial meningioma." Journal of Neurosurgery 128, no. 2 (February 2018): 380–90. http://dx.doi.org/10.3171/2016.10.jns161636.

Abstract:
OBJECTIVE: Meningiomas are the most common primary tumor of the central nervous system. Complete resection can be curative, but intraoperative identification of dural tails and tumor remnants poses a clinical challenge. Given data from preclinical studies and previous clinical trials, the authors propose a novel method of localizing tumor tissue and identifying residual disease at the margins via preoperative systemic injection of a near-infrared (NIR) fluorescent contrast dye. This technique, which the authors call "second-window indocyanine green" (ICG), relies on the visualization of ICG approximately 24 hours after intravenous injection.
METHODS: Eighteen patients were prospectively identified and received 5 mg/kg of second-window ICG the day prior to surgery. An NIR camera was used to localize the tumor prior to resection and to inspect the margins following standard resection. The signal-to-background ratio (SBR) of the tumor to the normal brain parenchyma was measured in triplicate. Gross tumor and margin specimens were qualitatively reported with respect to fluorescence. Neuropathological diagnosis served as the reference gold standard to calculate the sensitivity and specificity of the imaging technique.
RESULTS: The 18 patients harbored 15 WHO Grade I and 3 WHO Grade II meningiomas. Near-infrared visualization during surgery ranged from 18 to 28 hours (mean 23 hours) following second-window ICG infusion. Fourteen of the 18 tumors demonstrated a markedly elevated SBR of 5.6 ± 1.7 compared with adjacent brain parenchyma. Four of the 18 patients showed an inverse pattern of NIR signal, that is, stronger in the adjacent normal brain than in the tumor (SBR 0.31 ± 0.1). The best predictor of inversion was time from injection, as patients who were imaged earlier were more likely to demonstrate an appropriate SBR. The second-window ICG technique demonstrated a sensitivity of 96.4%, specificity of 38.9%, positive predictive value of 71.1%, and negative predictive value of 87.5% for tumor.
CONCLUSIONS: Systemic injection of NIR second-window ICG the day before surgery can be used to visualize meningiomas intraoperatively. Intraoperative NIR imaging provides higher sensitivity in identifying meningiomas than the unassisted eye. In this study, 14 of the 18 patients with meningioma demonstrated a strong SBR compared with adjacent brain. In the future, reducing the time interval from dye injection to intraoperative imaging may improve fluorescence at the margins, though this approach requires further investigation. Clinical trial registration no.: NCT02280954 (clinicaltrials.gov).
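
The signal-to-background ratio used throughout the study is simply the mean fluorescence over a tumor ROI divided by the mean over adjacent normal parenchyma. A minimal sketch, assuming boolean ROI masks over the NIR frame:

```python
import numpy as np

def signal_to_background(nir_frame, tumor_mask, background_mask):
    """SBR = mean fluorescence in tumor ROI / mean in adjacent-brain ROI."""
    return nir_frame[tumor_mask].mean() / nir_frame[background_mask].mean()

# Triplicate measurement as in the study: three ROI pairs, then average
# (hypothetical `roi_pairs` list of (tumor_mask, background_mask) tuples).
# sbr = np.mean([signal_to_background(frame, t, b) for t, b in roi_pairs])
```
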
48

Yang, Xiukun, Mingliang Zhong, Xiaojun Jing, and Xinqi Yue. "Near-Infrared Microscopic Image Segmentation Based on W2DPCA-FCM." Acta Optica Sinica 33, no. 8 (2013): 0811002. http://dx.doi.org/10.3788/aos201333.0811002.

49

Wu, Hong-yu, Ling-li Wang, Xing Zhong, Zhi-qiang Su, Guan-zhou Chen, and Yang Bai. "Near-infrared Image Simulation Based on Spectral Correlation Method." Acta Photonica Sinica 47, no. 4 (2018): 410001. http://dx.doi.org/10.3788/gzxb20184704.0410001.

50

Luker, Gary D. "Shining Near-Infrared Light on Image-guided Liver Surgery." Radiology: Imaging Cancer 2, no. 1 (January 1, 2020): e204003. http://dx.doi.org/10.1148/rycan.2020204003.

