Academic literature on the topic 'Near-infrared Image'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Near-infrared Image.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Near-infrared Image"

1

Yakno, Marlina, Junita Mohamad-Saleh, Mohd Zamri Ibrahim, and W. N. A. W. Samsudin. "Camera-projector calibration for near infrared imaging system." Bulletin of Electrical Engineering and Informatics 9, no. 1 (February 1, 2020): 160–70. http://dx.doi.org/10.11591/eei.v9i1.1697.

Full text
Abstract:
Advanced biomedical engineering technologies are continuously changing medical practice to improve patient care. Needle-insertion navigation during intravenous catheterization using near-infrared (NIR) imaging and a camera-projector is one such solution. However, the central problem is that the image captured by the camera is misaligned with the image projected back onto the object of interest, so the projected image does not overlay the real-world scene perfectly. In this paper, a camera-projector calibration method is presented. A polynomial algorithm is used to remove barrel distortion in the captured images, and scaling and translation transformations correct the geometric distortions introduced during image acquisition. Discrepancies between the captured and projected images are assessed. The accuracy between the captured image and the projected image is 90.643%. This indicates the feasibility of the proposed approach for eliminating discrepancies between the projection and navigation images.
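As a rough illustration of the two correction steps named in this abstract (polynomial removal of barrel distortion, then scaling and translation), here is a minimal NumPy sketch; the distortion coefficients, image centre, and point correspondences are illustrative assumptions, not values from the paper.

```python
import numpy as np

def undistort_points(pts, center, k1, k2):
    """Correct barrel distortion of pixel coordinates with a radial
    polynomial model: r_u = r_d * (1 + k1*r_d^2 + k2*r_d^4)."""
    d = pts - center
    r2 = np.sum(d**2, axis=1, keepdims=True)
    return center + d * (1.0 + k1 * r2 + k2 * r2**2)

def fit_scale_translation(src, dst):
    """Least-squares style fit of per-axis scaling and translation that maps
    captured-image points onto projector coordinates."""
    scale = dst.std(axis=0) / src.std(axis=0)
    shift = dst.mean(axis=0) - scale * src.mean(axis=0)
    return scale, shift

# Toy usage with made-up correspondences (replace with checkerboard detections).
captured = np.array([[100.0, 120.0], [400.0, 110.0], [390.0, 380.0], [110.0, 400.0]])
projected = np.array([[90.0, 100.0], [420.0, 95.0], [410.0, 405.0], [95.0, 415.0]])

undistorted = undistort_points(captured, center=np.array([256.0, 256.0]),
                               k1=-1e-7, k2=0.0)   # illustrative coefficients
scale, shift = fit_scale_translation(undistorted, projected)
aligned = undistorted * scale + shift
print("mean residual (px):", np.abs(aligned - projected).mean())
```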
APA, Harvard, Vancouver, ISO, and other styles
2

Yu, Jae Taeg, Sung Woong Ra, Sungmin Lee, and Seung-Won Jung. "Image Dehazing Algorithm Using Near-infrared Image Characteristics." Journal of the Institute of Electronics and Information Engineers 52, no. 11 (November 25, 2015): 115–23. http://dx.doi.org/10.5573/ieie.2015.52.11.115.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kil, Taeho, and Nam Ik Cho. "Image Fusion using RGB and Near Infrared Image." Journal of Broadcast Engineering 21, no. 4 (July 30, 2016): 515–24. http://dx.doi.org/10.5909/jbe.2016.21.4.515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kang, You Sun, and Duk Shin. "Multiband Camera System Using Color and Near Infrared Images." Applied Mechanics and Materials 446-447 (November 2013): 922–26. http://dx.doi.org/10.4028/www.scientific.net/amm.446-447.922.

Full text
Abstract:
Various camera-based applications have been developed and deployed commercially to improve our daily life. The performance of a camera system depends mainly on image quality and illumination conditions. Multiband cameras have been developed to provide a wealth of information for image acquisition. In this paper, we developed two applications, image segmentation and face detection, using a multiband camera that captures four bands: one near-infrared band and three color bands. We propose a multiband camera system that utilizes two different images, i.e., the color image extracted from the Bayer filter and the near-infrared image. The experimental results showed the effectiveness of the proposed system.
APA, Harvard, Vancouver, ISO, and other styles
5

Son, Dong-Min, Hyuk-Ju Kwon, and Sung-Hak Lee. "Visible and Near Infrared Image Fusion Using Base Tone Compression and Detail Transform Fusion." Chemosensors 10, no. 4 (March 25, 2022): 124. http://dx.doi.org/10.3390/chemosensors10040124.

Full text
Abstract:
This study aims to develop a spatial dual-sensor module for acquiring visible and near-infrared images in the same space without time shifting, and to synthesize the captured images. The proposed method synthesizes visible and near-infrared images using the contourlet transform, principal component analysis, and iCAM06; the blending step uses the color information of the visible image and the detail information of the infrared image. The contourlet transform can decompose an image into directional subimages, making it better at capturing detail than other decomposition algorithms. The global tone information is enhanced by iCAM06, which is used for high-dynamic-range imaging. The blended images show a clear appearance, combining the compressed tone information of the visible image with the details of the infrared image.
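A minimal sketch of the base/detail idea described above, assuming registered visible and NIR frames; a Gaussian low-pass split and a simple tone curve stand in for the paper's contourlet transform, PCA, and iCAM06 stages.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_vis_nir(vis_rgb, nir, sigma=5.0, detail_gain=1.0):
    """Base/detail fusion sketch: tone-compressed visible base layer plus
    an NIR detail layer, re-coloured with the visible chromaticity."""
    vis = vis_rgb.astype(np.float64) / 255.0
    nir = nir.astype(np.float64) / 255.0

    luma = vis.mean(axis=2)                          # crude luminance
    base = gaussian_filter(luma, sigma)              # low-frequency tone
    base = base / (1.0 + base)                       # simple tone compression
    nir_detail = nir - gaussian_filter(nir, sigma)   # high-frequency NIR detail

    fused_luma = np.clip(base + detail_gain * nir_detail, 0.0, 1.0)
    ratio = fused_luma / np.maximum(luma, 1e-6)      # preserve visible colour
    return np.clip(vis * ratio[..., None], 0.0, 1.0)

# Example with random arrays standing in for registered VIS/NIR frames.
vis = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
nir = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
print(fuse_vis_nir(vis, nir).shape)
```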
APA, Harvard, Vancouver, ISO, and other styles
6

Zhuge, Jing Chang, Zhi Jing Yu, and Jian Shu Gao. "Ice Detection Based on Near Infrared Image Analysis." Applied Mechanics and Materials 121-126 (October 2011): 3960–64. http://dx.doi.org/10.4028/www.scientific.net/amm.121-126.3960.

Full text
Abstract:
To detect ice on aircraft wings, a method based on near-infrared image processing is proposed. Exploiting the wavelength dependence of near-infrared reflectivity, four images of the same object are acquired at different detection wavelengths. Water and ice can be distinguished by the different variation trends of their near-infrared reflectivity across these images. In this paper, 1.10 μm, 1.16 μm, 1.26 μm and 1.28 μm are selected as the detection wavelengths. Images of carbon-fiber composite aircraft wings partially covered by water or ice are obtained and analyzed. Parameter D reflects the variation trend of the relative near-infrared reflectivity, so it can also serve as the basis for discrimination. The experimental results show that the proposed method is effective for ice detection.
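A hedged sketch of the trend-based discrimination described above; since the paper's "Parameter D" is not defined here, a per-pixel least-squares slope of reflectivity versus wavelength is used as an illustrative stand-in, and the threshold is arbitrary.

```python
import numpy as np

# Four co-registered NIR images at the detection wavelengths used in the paper.
wavelengths = np.array([1.10, 1.16, 1.26, 1.28])     # micrometres
stack = np.random.rand(4, 240, 320)                   # stand-in reflectance images

def reflectivity_trend(stack, wavelengths):
    """Per-pixel least-squares slope of relative reflectivity vs. wavelength,
    used here only as an illustrative trend measure."""
    x = wavelengths - wavelengths.mean()
    y = stack - stack.mean(axis=0, keepdims=True)
    return np.tensordot(x, y, axes=(0, 0)) / np.sum(x**2)

slope = reflectivity_trend(stack, wavelengths)
ice_mask = slope < -0.5          # illustrative threshold, not from the paper
print("candidate ice pixels:", int(ice_mask.sum()))
```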
APA, Harvard, Vancouver, ISO, and other styles
7

Nagata, Masaki, Toshiki Hirogaki, Eiichi Aoyama, Takahiro Iida, Yasuhiro Uenishi, Masami Matsubara, and Yoshitaka Usui. "Quality Control Method Based on Gear Tooth Contact Evaluation Using Near-Infrared Ray Imagery." Key Engineering Materials 447-448 (September 2010): 569–73. http://dx.doi.org/10.4028/www.scientific.net/kem.447-448.569.

Full text
Abstract:
Conventionally, tooth contact evaluation has been performed visually by machine operators in gear manufacturing when finishing a gear or during assembly. With automation, the boundary of the contact area is unclear when visible light is used to obtain an image for tooth contact evaluation, because of scattered light. We therefore focused on using near-infrared light to avoid this scattering. First, we confirmed that the tooth contact image obtained by binarization is hardly affected by the chosen threshold. Second, we propose a new method to extract the boundary of the tooth contact by differential calculation of the fine near-infrared image. These methods allow near-infrared images to be divided automatically into the contact area, the boundary, and the non-contact area. Finally, the result is compared with the tooth contact calculated from the measured tooth surface. We demonstrate that the near-infrared image method is effective for automatic tooth contact evaluation.
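A small sketch of the two steps named in this abstract, binarization for the contact area and a differential (gradient) operation for its boundary, assuming a registered NIR image of the tooth flank; thresholds are illustrative.

```python
import numpy as np
from scipy import ndimage

def contact_regions(nir_img, threshold=0.5, edge_thresh=0.1):
    """Binarize the NIR tooth image to get the contact area, then take the
    gradient magnitude (a differential calculation) to extract the boundary."""
    img = nir_img.astype(np.float64) / nir_img.max()
    contact = img > threshold                     # contact area by binarization
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    boundary = np.hypot(gx, gy) > edge_thresh     # boundary by differentiation
    non_contact = ~contact & ~boundary
    return contact, boundary, non_contact

nir = np.random.rand(200, 300)                    # stand-in NIR tooth-flank image
contact, boundary, non_contact = contact_regions(nir)
print(contact.sum(), boundary.sum(), non_contact.sum())
```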
APA, Harvard, Vancouver, ISO, and other styles
8

Kwon, Hyuk-Ju, and Sung-Hak Lee. "Visible and Near-Infrared Image Acquisition and Fusion for Night Surveillance." Chemosensors 9, no. 4 (April 8, 2021): 75. http://dx.doi.org/10.3390/chemosensors9040075.

Full text
Abstract:
Image fusion combines images with different information to create a single, information-rich image. The process may either involve synthesizing images using multiple exposures of the same scene, such as exposure fusion, or synthesizing images of different wavelength bands, such as visible and near-infrared (NIR) image fusion. NIR images are frequently used in surveillance systems because they are beyond the narrow perceptual range of human vision. In this paper, we propose an infrared image fusion method that combines high and low intensities for use in surveillance systems under low-light conditions. The proposed method utilizes a depth-weighted radiance map based on intensities and details to enhance local contrast and reduce noise and color distortion. The proposed method involves luminance blending, local tone mapping, and color scaling and correction. Each of these stages is processed in the LAB color space to preserve the color attributes of a visible image. The results confirm that the proposed method outperforms conventional methods.
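A minimal sketch of the LAB-space luminance blending described above, assuming registered visible and NIR frames and using OpenCV colour conversions; the paper's local tone mapping and colour-correction stages are omitted.

```python
import cv2
import numpy as np

def fuse_night_pair(vis_bgr, nir_gray, alpha=0.6):
    """Blend the visible L channel with the NIR intensity while keeping the
    a and b channels, preserving the colour attributes of the visible image."""
    lab = cv2.cvtColor(vis_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    blended = cv2.addWeighted(l, 1.0 - alpha, nir_gray, alpha, 0.0)
    fused = cv2.merge([blended, a, b])
    return cv2.cvtColor(fused, cv2.COLOR_LAB2BGR)

vis = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)   # low-light visible frame
nir = np.random.randint(0, 256, (480, 640), dtype=np.uint8)      # registered NIR frame
print(fuse_night_pair(vis, nir).shape)
```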
APA, Harvard, Vancouver, ISO, and other styles
9

Choi, Janghoon, Jun-Geun Shin, Yoon-Oh Tak, Youngseok Seo, and Jonghyun Eom. "Single Camera-Based Dual-Channel Near-Infrared Fluorescence Imaging System." Sensors 22, no. 24 (December 13, 2022): 9758. http://dx.doi.org/10.3390/s22249758.

Full text
Abstract:
In this study, we propose a single camera-based dual-channel near-infrared (NIR) fluorescence imaging system that produces color and dual-channel NIR fluorescence images in real time. To simultaneously acquire color and dual-channel NIR fluorescence images of two fluorescent agents, three cameras and additional optical parts are generally used. As a result, the volume of the image acquisition unit increases, interfering with movements during surgical procedures and increasing production costs. In the system herein proposed, instead of using three cameras, we set a single camera equipped with two image sensors that can simultaneously acquire color and single-channel NIR fluorescence images, thus reducing the volume of the image acquisition unit. The single-channel NIR fluorescence images were time-divided into two channels by synchronizing the camera and two excitation lasers, and the noise caused by the crosstalk effect between the two fluorescent agents was removed through image processing. To evaluate the performance of the system, experiments were conducted for the two fluorescent agents to measure the sensitivity, crosstalk effect, and signal-to-background ratio. The compactness of the resulting image acquisition unit alleviates the inconvenient movement obstruction of previous devices during clinical and animal surgery and reduces the complexity and costs of the manufacturing process, which may facilitate the dissemination of this type of system.
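A toy sketch of the time-division idea described above: with the camera synchronized to two alternately fired excitation lasers, the frame stream splits into two channels, and a simple linear crosstalk subtraction (an assumed, illustrative model) cleans each channel.

```python
import numpy as np

def demultiplex(frames, crosstalk=0.1):
    """Even frames hold channel 1, odd frames hold channel 2; a fixed
    crosstalk coefficient is an assumption, since the paper estimates and
    removes crosstalk by image processing."""
    ch1 = frames[0::2].astype(np.float64)
    ch2 = frames[1::2].astype(np.float64)
    n = min(len(ch1), len(ch2))
    ch1, ch2 = ch1[:n], ch2[:n]
    clean1 = np.clip(ch1 - crosstalk * ch2, 0, None)
    clean2 = np.clip(ch2 - crosstalk * ch1, 0, None)
    return clean1, clean2

frames = np.random.randint(0, 4096, (20, 256, 256))   # stand-in 12-bit NIR frame stream
c1, c2 = demultiplex(frames)
print(c1.shape, c2.shape)
```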
APA, Harvard, Vancouver, ISO, and other styles
10

Tang, Rongxin, Hualin Liu, and Jingbo Wei. "Visualizing Near Infrared Hyperspectral Images with Generative Adversarial Networks." Remote Sensing 12, no. 23 (November 24, 2020): 3848. http://dx.doi.org/10.3390/rs12233848.

Full text
Abstract:
Visualization of near-infrared hyperspectral images is valuable for quick viewing and information survey, whereas methods based on band selection or dimensionality reduction fail to produce colors as natural as those of corresponding multispectral images. In this paper, an end-to-end neural network for hyperspectral visualization, based on convolutional neural networks, is proposed to transform a hyperspectral image with hundreds of near-infrared bands into a three-band image. Supervised learning is used to train the network, with multispectral images as targets so that natural-looking images are reconstructed. Each pair of training images shares the same geographic location and a similar acquisition time. A generative adversarial framework with an adversarial network is used to improve the training of the generating network. In the experiments, the proposed method is tested on the near-infrared bands of EO-1 Hyperion images with LandSat-8 images as the benchmark and is compared with five state-of-the-art visualization algorithms. The results show that the proposed method produces more natural-looking details and colors for near-infrared hyperspectral images.
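A toy PyTorch skeleton of the band-reduction step described above, mapping a many-band NIR cube to three channels; the real method trains a deeper generator adversarially against registered multispectral targets, whereas this sketch uses random stand-in data and a plain L1 loss.

```python
import torch
import torch.nn as nn

class Visualizer(nn.Module):
    """Toy generator mapping `bands` NIR channels to a 3-band image with
    1x1 convolutions (a stand-in for the paper's generator network)."""
    def __init__(self, bands):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 64, kernel_size=1), nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

bands = 100
model = Visualizer(bands)
hyper = torch.rand(1, bands, 64, 64)        # stand-in Hyperion patch
target = torch.rand(1, 3, 64, 64)           # stand-in multispectral reference
loss = nn.functional.l1_loss(model(hyper), target)
loss.backward()
print(float(loss))
```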
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Near-infrared Image"

1

Font Aragonès, Xavier. "Visible, near infrared and thermal hand-based image biometric recognition." Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/117685.

Full text
Abstract:
Biometric recognition refers to the automatic identification of a person based on an anatomical characteristic or modality (e.g., fingerprint, palmprint, face) or a behavioural characteristic (e.g., signature). It is a key issue in any process concerned with security, shared resources, or network transactions, among many others: it arises as the fundamental problem widely known as recognition and becomes a mandatory step before permission is granted. It is meant to protect key resources by allowing them to be used only by users who have been granted the authority to use or access them. Biometric systems can operate in verification mode, where the question to be solved is "Am I who I claim I am?", or in identification mode, where the question is "Who am I?" The scientific community has increased its efforts to improve the performance of biometric systems. Depending on the application, many solutions work with several modalities or combine different classification methods. Since adding modalities increases user inconvenience, many of these approaches will never reach the market; for example, working with iris, face, and fingerprints requires some user effort to assist acquisition. This thesis addresses hand-based biometric systems in a thorough way. The main contributions are a new multi-spectral hand-image database and methods for performance improvement: A) The first multi-spectral hand-image database covering both faces of the hand, palmar and dorsal. Biometric databases are a precious commodity for research, especially when they offer something new, in this case visible (VIS), near-infrared (NIR), and thermographic (TIR) images at once. This database, with 100 users and 10 samples per user, constitutes a good starting point for testing algorithms and the suitability of the hand for recognition. B) To deal correctly with raw hand data, some image preprocessing steps are necessary. Three different segmentation pipelines are deployed for VIS, NIR, and TIR images respectively. Some of the difficult problems addressed are overexposed images, rings on fingers and cuffs, cold fingers, and image noise. Once the images are segmented, two approaches, called holistic and geometric, define the focus for extracting the feature vector. These feature vectors can be used alone or combined. Many questions can be posed: Which approach is better for recognition? Can fingers alone perform better than the whole hand? Is thermographic hand information suitable for recognition given its thermoregulation properties? A complete set of data ready to analyse, coming from the holistic and geometric approaches, has been designed and saved for testing, and an innovative geometric approach related to curvature is demonstrated. C) Finally, the Biometric Dispersion Matcher (BDM) is used to explore how it behaves under different fusion schemes as well as with different classification methods, and it is contrasted with methods such as Linear Discriminant Analysis (LDA), k-nearest neighbours, and logistic regression, together with normalization and fusion strategies. At this point some interesting questions are answered, for example whether taking advantage of finger segmentation (treating the five fingers as five different modalities) can outperform the whole-hand data; the results are promising and highlight the value of combining complementary information.
APA, Harvard, Vancouver, ISO, and other styles
2

Karlsson, Jonas. "FPGA-Accelerated Dehazing by Visible and Near-infrared Image Fusion." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-28322.

Full text
Abstract:
Fog and haze can have a dramatic impact on vision systems for land and sea vehicles. The impact of such conditions on infrared images is not as severe as on standard images. By fusing images from two cameras, one ordinary and one near-infrared, a complete dehazing system with colour preservation can be achieved. By applying several different algorithms to an image set and evaluating the results, the most suitable image fusion algorithm has been identified. Using an FPGA, a programmable integrated circuit, a crucial part of the algorithm has been implemented; it is capable of producing processed images 30 times faster than a laptop computer. This implementation lays the foundation of a real-time dehazing system and provides a significant part of the full solution. The results show that such a system can be accomplished with an FPGA.
APA, Harvard, Vancouver, ISO, and other styles
3

Clarke, Fiona Catherine. "Near-infrared microscopy and image analysis for pharmaceutical process control." Thesis, University College London (University of London), 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.432883.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Flowerdew, Roland John. "Atmospheric correction for the visible and near-infrared channels of ATSR-2." Thesis, Imperial College London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.283392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Touse, Michael P. "Demonstration of a near and mid-infrared detector using multiple step quantum wells." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03sep%5FTouse.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Carson, Kathryn Jane. "Contributions towards image reconstruction for functional imaging using time-resolved near-infrared measurements." Thesis, Keele University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wong, Gerald. "Snapshot hyperspectral imaging : near-infrared image replicating imaging spectrometer and achromatisation of Wollaston prisms." Thesis, Heriot-Watt University, 2012. http://hdl.handle.net/10399/2615.

Full text
Abstract:
Conventional hyperspectral imaging (HSI) techniques are time-sequential and rely on temporal scanning to capture hyperspectral images. This temporal constraint can limit the application of HSI to static scenes and platforms, where transient and dynamic events are not expected during data capture. The Near-Infrared Image Replicating Imaging Spectrometer (N-IRIS) sensor described in this thesis enables snapshot HSI in the short-wave infrared (SWIR), without the requirement for scanning and operates without rejection in polarised light. It operates in eight wavebands from 1.1μm to 1.7μm with a 2.0° diagonal field-of-view. N-IRIS produces spectral images directly, without the need for prior topographic or image reconstruction. Additional benefits include compactness, robustness, static operation, lower processing overheads, higher signal-to-noise ratio and higher optical throughput with respect to other HSI snapshot sensors generally. This thesis covers the IRIS design process from theoretical concepts to quantitative modelling, culminating in the N-IRIS prototype designed for SWIR imaging. This effort formed the logical step in advancing from peer efforts, which focussed upon the visible wavelengths. After acceptance testing to verify optical parameters, empirical laboratory trials were carried out. This testing focussed on discriminating between common materials within a controlled environment as proof-of-concept. Significance tests were used to provide an initial test of N-IRIS capability in distinguishing materials with respect to using a conventional SWIR broadband sensor. Motivated by the design and assembly of a cost-effective visible IRIS, an innovative solution was developed for the problem of chromatic variation in the splitting angle (CVSA) of Wollaston prisms. CVSA introduces spectral blurring of images. Analytical theory is presented and is illustrated with an example N-IRIS application where a sixfold reduction in dispersion is achieved for wavelengths in the region 400nm to 1.7μm, although the principle is applicable from ultraviolet to thermal-IR wavelengths. Experimental proof of concept is demonstrated and the spectral smearing of an achromatised N-IRIS is shown to be reduced by an order of magnitude. These achromatised prisms can provide benefits to areas beyond hyperspectral imaging, such as microscopy, laser pulse control and spectrometry.
APA, Harvard, Vancouver, ISO, and other styles
8

Teresi, Michael Bryan. "Multispectral Image Labeling for Unmanned Ground Vehicle Environments." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/53998.

Full text
Abstract:
Described is the development of a multispectral image labeling system with emphasis on Unmanned Ground Vehicles (UGVs). UGVs operating in unstructured environments face significant problems detecting viable paths when LIDAR is the sole source of perception. Promising advances in computer vision and machine learning have shown that multispectral imagery can be effective at detecting materials in unstructured environments [1][2][3][4][5][6]. This thesis seeks to extend previous work [6][7] by performing pixel-level classification with multispectral features and texture. First, the images are spatially registered to create a multispectral image cube. Visible, near-infrared, shortwave-infrared, and visible/near-infrared polarimetric data are considered. The aligned images are then used to extract features that are fed to machine learning algorithms. The class list includes common materials present in rural and urban scenes such as vehicles, standing water, various forms of vegetation, and concrete. Experiments are conducted to explore the amount of data required for a desired performance and the selection of a hyper-parameter for the textural features. A complete system is demonstrated, progressing from data collection and labeling to analysis of the classifier performance.
Master of Science
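A hedged sketch of the pixel-level labeling pipeline described above: stack the registered bands into a cube, treat each pixel's band vector as a feature row, and train a classifier. The random forest, class count, and data are stand-ins, and the thesis's texture features are omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-in registered cube: VIS(3) + NIR(1) + SWIR(1) + polarimetric(1) bands.
h, w, bands = 120, 160, 6
cube = np.random.rand(h, w, bands)
labels = np.random.randint(0, 5, (h, w))          # 5 illustrative material classes

# Pixel-level classification: each pixel's band vector is one feature row.
X = cube.reshape(-1, bands)
y = labels.reshape(-1)
train = np.random.rand(len(y)) < 0.1              # small labeled subset

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X[train], y[train])
pred_map = clf.predict(X).reshape(h, w)
print("labeled pixels used:", int(train.sum()), "output map:", pred_map.shape)
```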
APA, Harvard, Vancouver, ISO, and other styles
9

Khalaf, Reem. "Image reconstruction for optical tomography using photon density waves." Thesis, University of Hertfordshire, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Edjlali, Ehsan. "Fluorescence diffuse optical tomographic iterative image reconstruction for small animal molecular imaging with continuous-wave near infrared light." Thèse, Université de Sherbrooke, 2017. http://hdl.handle.net/11143/10673.

Full text
Abstract:
The simplified spherical harmonics (SPN) approximation to the radiative transfer equation has been proposed as a reliable model of light propagation in biological tissues. However, few analytical solutions have been found for this model. Such analytical solutions are of great value for validating numerical solutions of the SPN equations, which must be resorted to when dealing with media with complex curved geometries. In the first part of this thesis, analytical solutions for two curved geometries, the sphere and the cylinder, are presented for the first time. For both solutions, the general refractive-index-mismatch boundary conditions applicable in biomedical optics are used. These solutions are validated using mesh-based Monte Carlo simulations. So validated, they allow in turn the rapid validation of numerical code, based for example on finite differences or finite elements, without requiring lengthy Monte Carlo simulations. In the second part, iterative reconstruction for fluorescence diffuse optical tomography imaging is proposed based on an Lq-Lp framework for formulating an objective function and its regularization term. To solve the imaging inverse problem, the light propagation model is discretized using the finite difference method. The framework is used along with a multigrid mesh on a digital mouse model. The inverse problem is solved iteratively using an optimization method, which requires the gradient of the cost function with respect to the fluorescent agent's concentration map; this gradient is calculated using an adjoint method. Quantitative metrics used in medical imaging are employed to evaluate the performance of the framework under different conditions. The results support this new approach based on an Lq-Lp formulation of cost functions for solving the inverse fluorescence problem with high quantified performance.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Near-infrared Image"

1

SPIE (Society), Optical Society of America, and European Optical Society, eds. Diffuse optical imaging II: 14-17 June 2009, Munich, Germany. Bellingham, Wash: SPIE, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hielscher, Andreas H. Diffuse optical imaging III: 22-24 May 2011, Munich, Germany. Edited by SPIE (Society), Optical Society of America, Deutsche Gesellschaft für Lasermedizin, German Biophotonics Research Program, Photonics4Life (Group), and United States. Air Force. Office of Scientific Research. Bellingham, Wash: SPIE, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

The search for extended infrared emission near interacting and active galaxies. [Washington, D.C.?]: National Aeronautics and Space Administration, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Matthews, K., and United States National Aeronautics and Space Administration, eds. The first diffraction-limited images from the W.M. Keck Telescope. [Washington, D.C.]: National Aeronautics and Space Administration, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Near-infrared Image"

1

Taini, Matti, Guoying Zhao, and Matti Pietikäinen. "Weight-Based Facial Expression Recognition from Near-Infrared Video Sequences." In Image Analysis, 239–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02230-2_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kang, Jinwoo, David V. Anderson, and Monson H. Hayes. "Direct Image Alignment for Active Near Infrared Image Differencing." In Advanced Concepts for Intelligent Vision Systems, 334–44. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25903-1_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Larsen, Rasmus, Morten Arngren, Per Waaben Hansen, and Allan Aasbjerg Nielsen. "Kernel Based Subspace Projection of Near Infrared Hyperspectral Images of Maize Kernels." In Image Analysis, 560–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02230-2_57.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Drury, S. A. "Digital processing of images in the visible and near infrared." In Image Interpretation in Geology, 118–48. Dordrecht: Springer Netherlands, 1987. http://dx.doi.org/10.1007/978-94-010-9393-4_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gao, Renwu, Siting Zheng, Jia He, and Linlin Shen. "CycleGAN-Based Image Translation for Near-Infrared Camera-Trap Image Recognition." In Pattern Recognition and Artificial Intelligence, 453–64. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59830-3_39.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Matsui, Sosuke, Takahiro Okabe, Mihoko Shimano, and Yoichi Sato. "Image Enhancement of Low-Light Scenes with Near-Infrared Flash Images." In Computer Vision – ACCV 2009, 213–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12307-8_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gatley, Ian, R. Joyce, A. Fowler, D. DePoy, and R. Probst. "A Large Near-Infrared Image of the Galactic Center." In The Center of the Galaxy, 361–64. Dordrecht: Springer Netherlands, 1989. http://dx.doi.org/10.1007/978-94-009-2362-1_47.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Salamati, Neda, Diane Larlus, Gabriela Csurka, and Sabine Süsstrunk. "Semantic Image Segmentation Using Visible and Near-Infrared Channels." In Computer Vision – ECCV 2012. Workshops and Demonstrations, 461–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33868-7_46.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mao, Kai, Meng Yang, and Haijian Wang. "Infrared and Near-Infrared Image Generation via Content Consistency and Style Adversarial Learning." In Pattern Recognition and Computer Vision, 618–30. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-18907-4_48.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Xu, Zizhou, and Yueli Hu. "Image Enhancement Based on the Fusion of Visible and Near-Infrared Images." In Advances in Intelligent Automation and Soft Computing, 814–25. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81007-8_93.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Near-infrared Image"

1

Fertala, Remi A., and Georges Couderc. "Near-ultraviolet/near-infrared image mixing." In Aerospace Sensing, edited by Sankaran Gowrinathan and James F. Shanley. SPIE, 1992. http://dx.doi.org/10.1117/12.138082.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Suarez, Patricia L., Angel D. Sappa, Boris X. Vintimilla, and Riad I. Hammoud. "Near InfraRed Imagery Colorization." In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451413.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Huang, Dingliang, Bin Hu, Yinna Chen, Yu Chen, Liangchen Sui, Zhaoyang Wang, Yijun Jiang, et al. "Autonomous Robotic Subcutaneous Injection Under Near-Infrared Image Guidance." In ASME 2021 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/detc2021-69087.

Full text
Abstract:
Subcutaneous injections are administered into the region beneath the skin while avoiding puncture of blood vessels, and they are used to deliver many types of medication for various medical conditions. In this paper, a portable robotic system that performs autonomous cannulation for subcutaneous injection is proposed; it automatically locates a proper injection site by analyzing near-infrared (NIR) image sequences. The robot consists mainly of two functional modules: an image processing module and a motion control module. The former uses a full-search algorithm to process the images obtained by the NIR equipment; the puncture point is selected in an area with no blood vessels using a "range square" method. The motion control module employs pulse-width-modulated (PWM) signals to control the motors and manipulates the syringe to puncture at the selected point. The image processing algorithm was evaluated on real NIR images of volunteers' hands and forearms, and the image servo control of the robot was tested on a phantom. The experimental results were analyzed by a medical professional: the success rate of the image processing algorithm is 96.09%, and the puncture time satisfies the clinical demand for efficient puncture procedures.
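A rough sketch of the vessel-avoiding site selection described above, assuming a binary vessel mask from NIR segmentation; a sliding square window (standing in for the paper's "range square") is scored by the number of vessel pixels it contains.

```python
import numpy as np

def best_puncture_site(vessel_mask, window=40):
    """Full search over window positions: pick the window centre with the
    fewest vessel pixels, i.e. the safest area to puncture."""
    h, w = vessel_mask.shape
    # Integral image gives O(1) vessel counts per window.
    integral = np.pad(vessel_mask.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    best, best_count = None, None
    for r in range(0, h - window):
        for c in range(0, w - window):
            count = (integral[r + window, c + window] - integral[r, c + window]
                     - integral[r + window, c] + integral[r, c])
            if best_count is None or count < best_count:
                best_count, best = count, (r + window // 2, c + window // 2)
    return best, best_count

mask = (np.random.rand(200, 200) > 0.9).astype(np.int64)   # stand-in vessel segmentation
site, n_vessel = best_puncture_site(mask)
print("suggested site:", site, "vessel pixels in window:", int(n_vessel))
```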
APA, Harvard, Vancouver, ISO, and other styles
4

Kim, Hyeongyu, Jonghyun Kim, and Joongkyu Kim. "Image-to-Image Translation for Near-Infrared Image Colorization." In 2022 International Conference on Electronics, Information, and Communication (ICEIC). IEEE, 2022. http://dx.doi.org/10.1109/iceic54506.2022.9748773.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Feng, Chen, Shaojie Zhuo, Xiaopeng Zhang, Liang Shen, and Sabine Susstrunk. "Near-infrared guided color image dehazing." In 2013 20th IEEE International Conference on Image Processing (ICIP). IEEE, 2013. http://dx.doi.org/10.1109/icip.2013.6738487.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lindmayer, Joseph, and David McGuire. "Extended-range near-infrared image intensifier." In SC - DL tentative, edited by Illes P. Csorba. SPIE, 1990. http://dx.doi.org/10.1117/12.19470.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hong, Yuchen, Youwei Lyu, Si Li, and Boxin Shi. "Near-Infrared Image Guided Reflection Removal." In 2020 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2020. http://dx.doi.org/10.1109/icme46284.2020.9102937.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kudo, Yuhei, and Akira Kubota. "Image dehazing method by fusing weighted near-infrared image." In 2018 International Workshop on Advanced Image Technology (IWAIT). IEEE, 2018. http://dx.doi.org/10.1109/iwait.2018.8369744.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wu, Fangyu, Weihang You, Jeremy S. Smith, Wenjin Lu, and Bailing Zhang. "Image-Image Translation to Enhance Near Infrared Face Recognition." In 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. http://dx.doi.org/10.1109/icip.2019.8804414.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Simanovski, Dmitrii M., Daniel V. Palanker, Philip Huie, and Todd I. Smith. "Image formation in near-field infrared microscopy." In BiOS 2000 The International Symposium on Biomedical Optics, edited by Shuming Nie, Eiichi Tamiya, and Edward S. Yeung. SPIE, 2000. http://dx.doi.org/10.1117/12.383347.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Near-infrared Image"

1

Bhatt, Parth, Curtis Edson, and Ann MacLean. Image Processing in Dense Forest Areas using Unmanned Aerial System (UAS). Michigan Technological University, September 2022. http://dx.doi.org/10.37099/mtu.dc.michigantech-p/16366.

Full text
Abstract:
Imagery collected via Unmanned Aerial System (UAS) platforms has become popular in recent years due to improvements in digital single-lens reflex (DSLR) cameras (centimeter and sub-centimeter resolution), lower operating costs compared to piloted aircraft, and the ability to collect data over areas with limited ground access. Many different applications (e.g., forestry, agriculture, geology, archaeology) already exploit the advantages of UAS data. Although there are numerous UAS image-processing workflows, the approach can differ for each application. In this study, we developed a processing workflow for UAS imagery collected over a dense forest area (coniferous/deciduous forest and contiguous wetlands) that allows users to process large datasets with acceptable mosaicking and georeferencing errors. Imagery was acquired with near-infrared (NIR) and red-green-blue (RGB) cameras with no ground control points. The image quality of two different UAS collection platforms was assessed. Agisoft Metashape, a photogrammetric suite that uses Structure-from-Motion (SfM) techniques, was used to process the imagery. The results showed that a UAS with a consumer-grade Global Navigation Satellite System (GNSS) onboard achieved better image alignment than a UAS with a lower-quality GNSS.
APA, Harvard, Vancouver, ISO, and other styles
2

Becker, Sarah, Craig Daughtry, and Andrew Russ. Robust forest cover indices for multispectral images. Engineer Research and Development Center (U.S.), December 2021. http://dx.doi.org/10.21079/11681/42760.

Full text
Abstract:
Trees occur in many land cover classes and provide significant ecosystem services. Remotely sensed multispectral images are often used to create thematic maps of land cover, but accurately identifying trees in mixed land-use scenes is challenging. We developed two forest cover indices and protocols that reliably identified trees in WorldView-2 multispectral images. The study site in Maryland included coniferous and deciduous trees associated with agricultural fields and pastures, residential and commercial buildings, roads, parking lots, wetlands, and forests. The forest cover indices exploited the product of either the reflectance in red (630 to 690 nm) and red edge (705 to 745 nm) bands or the product of reflectance in red and near infrared (770 to 895 nm) bands. For two classes (trees versus other), overall classification accuracy was >77 percent for the four images that were acquired in each season of the year. Additional research is required to evaluate these indices for other scenes and sensors.
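A minimal sketch of the two indices described above (red x red-edge and red x NIR reflectance products), with made-up reflectance data and an arbitrary threshold; the report derives its own thresholds per image.

```python
import numpy as np

def forest_cover_indices(red, red_edge, nir):
    """Per-pixel products of red reflectance with the red-edge band and
    with the NIR band, as described in the abstract above."""
    fci_red_edge = red * red_edge
    fci_nir = red * nir
    return fci_red_edge, fci_nir

# Stand-in WorldView-2 reflectance bands scaled to 0-1.
red = np.random.rand(100, 100)
red_edge = np.random.rand(100, 100)
nir = np.random.rand(100, 100)

fci_re, fci_n = forest_cover_indices(red, red_edge, nir)
tree_mask = fci_re < 0.05        # illustrative threshold; calibrated per image in the report
print("tree fraction:", float(tree_mask.mean()))
```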
APA, Harvard, Vancouver, ISO, and other styles
3

Cohen, Yafit, Carl Rosen, Victor Alchanatis, David Mulla, Bruria Heuer, and Zion Dar. Fusion of Hyper-Spectral and Thermal Images for Evaluating Nitrogen and Water Status in Potato Fields for Variable Rate Application. United States Department of Agriculture, November 2013. http://dx.doi.org/10.32747/2013.7594385.bard.

Full text
Abstract:
Potato yield and quality are highly dependent on an adequate supply of nitrogen and water. Opportunities exist to use airborne hyperspectral (HS) remote sensing to detect spatial variation in the N status of the crop and so allow more targeted N applications. Thermal remote sensing has the potential to identify spatial variations in crop water status, allowing better irrigation management and eventually precision irrigation. The overall objective of this study was to examine the ability of HS imagery in the visible and near-infrared spectrum (VIS-NIR) and thermal imagery to distinguish between water and N status in potato fields. To lay the basis for achieving the research objectives, experiments with different irrigation and N-application amounts were conducted in potato in the US and in Israel. Thermal indices based solely on thermal images were found to be sensitive to water status in three potato varieties in both Israel and the US. Spectral indices based on HS images were suitable for detecting N stress accurately and reliably, while partial least squares (PLS) analysis of the spectral data was more sensitive to N levels. Initial fusion of HS and thermal images showed the potential to detect both N stress and water stress and even to differentiate between them. This study is one of the first attempts at fusing HS and thermal imagery to detect N and water stress and to estimate N and water levels. Future research is needed to refine these techniques for use in precision agriculture applications.
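A small sketch of the PLS analysis mentioned above, regressing canopy spectra against nitrogen level with scikit-learn; the data, band count, and number of latent components are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Stand-in data: one averaged VIS-NIR spectrum per plot and its measured leaf N.
n_plots, n_bands = 80, 200
spectra = np.random.rand(n_plots, n_bands)
leaf_n = np.random.rand(n_plots) * 5.0            # made-up percent-N values

# PLS regression of canopy spectra against nitrogen level.  The number of
# latent components would normally be chosen by cross-validation.
pls = PLSRegression(n_components=8)
pls.fit(spectra, leaf_n)
pred = pls.predict(spectra).ravel()
print("training R^2:", pls.score(spectra, leaf_n))
```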
APA, Harvard, Vancouver, ISO, and other styles
4

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Full text
Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one at Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective, utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, a color sensor, an electronic sniffer for odor detection, a refractometer, and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll from fluorescence was found to give the best estimation of ripeness stage, while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training size for supervised classification. A new criterion was established to choose a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments, and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models based on voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or a neural network was found to be superior in classification accuracy, with half the processing required by the numerical classifier or neural network alone. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects, and bruises were measured using a color image processing system. Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency, and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back-propagation neural network, and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification, and general color and color homogeneity. An unsupervised method was developed to extract the necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable with no need for specific modification and no a priori knowledge. Moreover, since there is no assumption as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruit's orientation. A new algorithm for on-line clustering of data was developed. The algorithm's adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data, and it preserves information even with memory constraints.
Large quantities of data (many images) of high dimensionality (due to multiple sensors), with new information arriving incrementally (a function of the temporal dynamics of any natural process), can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real time. The methodology developed was tested on determining the external quality of tomatoes from visual information. For color determination, an improved model for color sorting was developed that is stable and does not require recalibration for each season. Excellent classification results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using a drop-impact sensor and a vision sensor in order to predict the storability and marketability of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh-market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements, in a manner consistent with expert human grading and in accordance with market acceptance. This research constructed and used large databases of both commodities for comparative evaluation and optimization of expert system, statistical, and/or neural network models. The models developed in this research were successfully tested and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner, consistent with human graders and inspectors.
APA, Harvard, Vancouver, ISO, and other styles
