A selection of scholarly literature on the topic "VISIBLE IMAGE"


Consult the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "VISIBLE IMAGE".

Next to each work in the list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, where such information is available in the metadata.

Journal articles on the topic "VISIBLE IMAGE"

1

Uddin, Mohammad Shahab, Chiman Kwan, and Jiang Li. "MWIRGAN: Unsupervised Visible-to-MWIR Image Translation with Generative Adversarial Network." Electronics 12, no. 4 (February 20, 2023): 1039. http://dx.doi.org/10.3390/electronics12041039.

Full text
Abstract:
Unsupervised image-to-image translation techniques have been used in many applications, including visible-to-Long-Wave Infrared (visible-to-LWIR) image translation, but very few papers have explored visible-to-Mid-Wave Infrared (visible-to-MWIR) image translation. In this paper, we investigated unsupervised visible-to-MWIR image translation using generative adversarial networks (GANs). We proposed a new model named MWIRGAN for visible-to-MWIR image translation in a fully unsupervised manner. We utilized a perceptual loss to leverage shape identification and location changes of the objects in the translation. The experimental results showed that MWIRGAN was capable of visible-to-MWIR image translation while preserving the object’s shape with proper enhancement in the translated images and outperformed several competing state-of-the-art models. In addition, we customized the proposed model to convert game-engine-generated (a commercial software) images to MWIR images. The quantitative results showed that our proposed method could effectively generate MWIR images from game-engine-generated images, greatly benefiting MWIR data augmentation.
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Zhang, Yongxin, Deguang Li, and WenPeng Zhu. "Infrared and Visible Image Fusion with Hybrid Image Filtering." Mathematical Problems in Engineering 2020 (July 29, 2020): 1–17. http://dx.doi.org/10.1155/2020/1757214.

Full text
Abstract:
Image fusion is an important technique aiming to generate a composite image from multiple images of the same scene. Infrared and visible images can provide the same scene information from different aspects, which is useful for target recognition. But the existing fusion methods cannot well preserve the thermal radiation and appearance information simultaneously. Thus, we propose an infrared and visible image fusion method by hybrid image filtering. We represent the fusion problem with a divide and conquer strategy. A Gaussian filter is used to decompose the source images into base layers and detail layers. An improved co-occurrence filter fuses the detail layers for preserving the thermal radiation of the source images. A guided filter fuses the base layers for retaining the background appearance information of the source images. Superposition of the fused base layer and fused detail layer generates the final fusion image. Subjective visual and objective quantitative evaluations comparing with other fusion algorithms demonstrate the better performance of the proposed method.
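The divide-and-conquer pipeline this abstract describes (filter-based decomposition into base and detail layers, a separate fusion rule per layer, then superposition) can be sketched as follows. This is a minimal illustration under stated assumptions: a simple box blur and generic average/max-abs rules stand in for the paper's Gaussian, co-occurrence, and guided filters, which are not reproduced here.

```python
import numpy as np

def box_blur(img, k=5):
    """Crude smoothing filter standing in for the Gaussian decomposition step."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(ir, vis, k=5):
    """Two-scale fusion: the blur gives the base layer, the residual is the detail layer."""
    ir, vis = ir.astype(float), vis.astype(float)
    base_ir, base_vis = box_blur(ir, k), box_blur(vis, k)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    base_fused = 0.5 * (base_ir + base_vis)  # average rule for background appearance
    det_fused = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)  # max-abs rule for detail
    return base_fused + det_fused            # superposition of the fused layers

ir = np.zeros((32, 32)); ir[8:16, 8:16] = 200.0   # hot target in the "infrared" image
vis = np.tile(np.linspace(0, 100, 32), (32, 1))   # textured gradient "visible" image
fused = fuse(ir, vis)
```

Swapping the box blur for a true Gaussian filter and the per-layer rules for co-occurrence and guided filtering would recover the structure the abstract describes.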
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Dong, Yumin, Zhengquan Chen, Ziyi Li, and Feng Gao. "A Multi-Branch Multi-Scale Deep Learning Image Fusion Algorithm Based on DenseNet." Applied Sciences 12, no. 21 (October 30, 2022): 10989. http://dx.doi.org/10.3390/app122110989.

Full text
Abstract:
Infrared images resist environmental interference well and capture hot-target information effectively, but they lack rich, detailed texture information and have poor contrast. Visible images have clear, detailed texture information, but their imaging process depends heavily on the environment, whose quality determines the quality of the visible image. This paper presents an infrared and visible image fusion algorithm based on deep learning. Two identical feature extractors extract features from visible and infrared images at different scales; these features are fused through specific fusion methods, and a feature restorer maps the fused features back to an image, compensating for the deficiencies of each type of image. The method is tested on infrared-visible images, multi-focus images, and other data sets, and several traditional image fusion algorithms are compared with current advanced ones. The experimental results show that the proposed image fusion method keeps more feature information of the source images in the fused image and achieves excellent results on several image evaluation indexes.
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Son, Dong-Min, Hyuk-Ju Kwon, and Sung-Hak Lee. "Visible and Near Infrared Image Fusion Using Base Tone Compression and Detail Transform Fusion." Chemosensors 10, no. 4 (March 25, 2022): 124. http://dx.doi.org/10.3390/chemosensors10040124.

Full text
Abstract:
This study aims to develop a spatial dual-sensor module for acquiring visible and near-infrared images in the same space without time shifting and to synthesize the captured images. The proposed method synthesizes visible and near-infrared images using contourlet transform, principal component analysis, and iCAM06, while the blending method uses color information in a visible image and detailed information in an infrared image. The contourlet transform obtains detailed information and can decompose an image into directional images, making it better in obtaining detailed information than decomposition algorithms. The global tone information is enhanced by iCAM06, which is used for high-dynamic range imaging. The result of the blended images shows a clear appearance through both the compressed tone information of the visible image and the details of the infrared image.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Liu, Zheng, Su Mei Cui, He Yin, and Yu Chi Lin. "Comparative Analysis of Image Measurement Accuracy in High Temperature Based on Visible and Infrared Vision." Applied Mechanics and Materials 300-301 (February 2013): 1681–86. http://dx.doi.org/10.4028/www.scientific.net/amm.300-301.1681.

Full text
Abstract:
Image measurement is a common, non-contact dimensional measurement method. However, because of light deflection, visible-light imaging is strongly affected, which greatly reduces measurement accuracy. Various factors affecting visual measurement at high temperature are analyzed by applying Planck's theory. Then, by means of light-dispersion theory, the image measurement errors of visible and infrared images at high temperature caused by light deviation are comparatively analyzed. Imaging errors of visible and infrared images are quantified experimentally. Experimental results indicate that, at the same imaging resolution, the relative error of the visible-light image is 3.846 times larger than that of the infrared image at a high temperature of 900°C. Therefore, infrared image measurement has higher accuracy than visible-light image measurement in high-temperature environments.
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Zhang, Yugui, Bo Zhai, Gang Wang, and Jianchu Lin. "Pedestrian Detection Method Based on Two-Stage Fusion of Visible Light Image and Thermal Infrared Image." Electronics 12, no. 14 (July 21, 2023): 3171. http://dx.doi.org/10.3390/electronics12143171.

Full text
Abstract:
Pedestrian detection has important research value and practical significance. It has been used in intelligent monitoring, intelligent transportation, intelligent therapy, and automatic driving. However, in the pixel-level and feature-level fusion of visible light images and thermal infrared images under daytime shadows or low illumination at night in actual surveillance, missed and false pedestrian detections often occur. To solve this problem, an algorithm for pedestrian detection based on the two-stage fusion of visible light images and thermal infrared images is proposed. In view of the difference and complementarity of visible light images and thermal infrared images, the two types of images are subjected to pixel-level fusion and feature-level fusion according to the varying daytime conditions. In the pixel-level fusion stage, the thermal infrared image, after being brightness-enhanced, is fused with the visible image. The resulting pixel-level fusion image contains the information critical for accurate pedestrian detection. In the feature-level fusion stage, in the daytime, the pixel-level fusion image is fused with the visible light image; under low illumination at night, it is fused with the thermal infrared image. According to the experimental results, the proposed algorithm accurately detects pedestrians under daytime shadows and low illumination at night, thereby improving detection accuracy and reducing the miss rate and false-alarm rate in pedestrian detection.
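The two-stage scheme the abstract outlines can be sketched under simplifying assumptions. The gamma correction, the averaging weights, and the mean-brightness day/night test below are placeholders of my choosing, not the paper's actual enhancement method or feature-level fusion network:

```python
import numpy as np

def enhance_brightness(thermal, gamma=0.5):
    """Simple gamma correction standing in for the paper's brightness enhancement."""
    t = thermal.astype(float) / 255.0
    return (t ** gamma) * 255.0

def pixel_level_fuse(visible, thermal, w=0.5):
    """Stage 1: fuse the brightness-enhanced thermal image with the visible image."""
    return w * visible.astype(float) + (1 - w) * enhance_brightness(thermal)

def two_stage_fuse(visible, thermal, day_threshold=80.0):
    """Stage 2: pair the stage-1 result with the visible image by day,
    or with the thermal image under low illumination at night."""
    stage1 = pixel_level_fuse(visible, thermal)
    if visible.mean() >= day_threshold:                   # daytime: visible carries detail
        return 0.5 * (stage1 + visible.astype(float))
    return 0.5 * (stage1 + enhance_brightness(thermal))   # night: thermal carries detail

day_vis = np.full((8, 8), 150.0); night_vis = np.full((8, 8), 20.0)
thermal = np.full((8, 8), 100.0); thermal[2:4, 2:4] = 220.0  # warm pedestrian region
day_out = two_stage_fuse(day_vis, thermal)
night_out = two_stage_fuse(night_vis, thermal)
```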
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Huang, Hui, Linlu Dong, Zhishuang Xue, Xiaofang Liu, and Caijian Hua. "Fusion algorithm of visible and infrared image based on anisotropic diffusion and image enhancement." PLOS ONE 16, no. 2 (February 19, 2021): e0245563. http://dx.doi.org/10.1371/journal.pone.0245563.

Full text
Abstract:
Aiming at the situation that the existing visible and infrared images fusion algorithms only focus on highlighting infrared targets and neglect the performance of image details, and cannot take into account the characteristics of infrared and visible images, this paper proposes an image enhancement fusion algorithm combining Karhunen-Loeve transform and Laplacian pyramid fusion. The detail layer of the source image is obtained by anisotropic diffusion to get more abundant texture information. The infrared images adopt adaptive histogram partition and brightness correction enhancement algorithm to highlight thermal radiation targets. A novel power function enhancement algorithm that simulates illumination is proposed for visible images to improve the contrast of visible images and facilitate human observation. In order to improve the fusion quality of images, the source image and the enhanced images are transformed by Karhunen-Loeve to form new visible and infrared images. Laplacian pyramid fusion is performed on the new visible and infrared images, and superimposed with the detail layer images to obtain the fusion result. Experimental results show that the method in this paper is superior to several representative image fusion algorithms in subjective visual effects on public data sets. In terms of objective evaluation, the fusion result performed well on the 8 evaluation indicators, and its own quality was high.
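The Laplacian pyramid fusion step this abstract relies on can be sketched as below. The Karhunen-Loeve transform, anisotropic diffusion, and enhancement stages are omitted, and simple 2x2 average pooling stands in for the usual Gaussian blur-and-decimate; this is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def downsample(img):
    """2x2 average pooling as a stand-in for Gaussian blur plus decimation."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """Nearest-neighbour expansion back to the finer resolution."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels=3):
    """Each level stores the detail lost in one downsample/upsample round trip."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        small = downsample(cur)
        pyr.append(cur - upsample(small))  # band-pass (Laplacian) level
        cur = small
    pyr.append(cur)                        # low-pass residual
    return pyr

def fuse_pyramids(p1, p2):
    """Max-abs rule on detail levels, average on the low-pass residual."""
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(p1[:-1], p2[:-1])]
    fused.append(0.5 * (p1[-1] + p2[-1]))
    return fused

def reconstruct(pyr):
    cur = pyr[-1]
    for level in reversed(pyr[:-1]):
        cur = upsample(cur) + level
    return cur

ir = np.zeros((16, 16)); ir[4:8, 4:8] = 255.0       # hot target
vis = np.tile(np.linspace(0.0, 128.0, 16), (16, 1))  # textured background
fused = reconstruct(fuse_pyramids(laplacian_pyramid(ir), laplacian_pyramid(vis)))
```

With this construction the pyramid is exactly invertible: reconstructing an unfused pyramid returns the original image, so all information loss comes from the fusion rule itself.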
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Xu, Dongdong, Yongcheng Wang, Shuyan Xu, Kaiguang Zhu, Ning Zhang, and Xin Zhang. "Infrared and Visible Image Fusion with a Generative Adversarial Network and a Residual Network." Applied Sciences 10, no. 2 (January 11, 2020): 554. http://dx.doi.org/10.3390/app10020554.

Full text
Abstract:
Infrared and visible image fusion can obtain combined images with salient hidden objectives and abundant visible details simultaneously. In this paper, we propose a novel method for infrared and visible image fusion with a deep learning framework based on a generative adversarial network (GAN) and a residual network (ResNet). The fusion is accomplished with an adversarial game and directed by the unique loss functions. The generator with residual blocks and skip connections can extract deep features of source image pairs and generate an elementary fused image with infrared thermal radiation information and visible texture information, and more details in visible images are added to the final images through the discriminator. It is unnecessary to design the activity level measurements and fusion rules manually, which are now implemented automatically. Also, there are no complicated multi-scale transforms in this method, so the computational cost and complexity can be reduced. Experiment results demonstrate that the proposed method eventually gets desirable images, achieving better performance in objective assessment and visual quality compared with nine representative infrared and visible image fusion methods.
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Niu, Yifeng, Shengtao Xu, Lizhen Wu, and Weidong Hu. "Airborne Infrared and Visible Image Fusion for Target Perception Based on Target Region Segmentation and Discrete Wavelet Transform." Mathematical Problems in Engineering 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/275138.

Full text
Abstract:
Infrared and visible image fusion is an important precondition for realizing target perception for unmanned aerial vehicles (UAVs), after which a UAV can perform various given missions. Texture and color information in visible images is abundant, while target information in infrared images is more prominent. Conventional fusion methods are mostly based on region segmentation; as a result, a fused image suited to target recognition often cannot actually be obtained. In this paper, a novel fusion method for airborne infrared and visible images based on target region segmentation and the discrete wavelet transform (DWT) is proposed, which can gain more target information and preserve more background information. Fusion experiments are performed under three conditions: the target is stationary and observable in both visible and infrared images; the targets are moving and observable in both; and the target is observable only in the infrared image. Experimental results show that the proposed method generates better fused images for airborne target perception.
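A minimal single-level Haar transform illustrates the generic wavelet-domain fusion idea behind DWT-based methods like this one; the paper's target-region segmentation and its actual wavelet configuration are not reproduced, and the fusion rules here (average the approximation, keep the stronger detail coefficient) are common placeholders.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar DWT: approximation plus three detail subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    out = np.zeros((ll.shape[0] * 2, ll.shape[1] * 2))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def dwt_fuse(ir, vis):
    """Average the approximations, keep the larger-magnitude detail coefficient."""
    sub_ir, sub_vis = haar2d(ir.astype(float)), haar2d(vis.astype(float))
    fused = [0.5 * (sub_ir[0] + sub_vis[0])]
    for di, dv in zip(sub_ir[1:], sub_vis[1:]):
        fused.append(np.where(np.abs(di) >= np.abs(dv), di, dv))
    return ihaar2d(*fused)

ir = np.arange(64, dtype=float).reshape(8, 8)
vis = np.full((8, 8), 10.0)
fused = dwt_fuse(ir, vis)
```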
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Batchuluun, Ganbayar, Se Hyun Nam, and Kang Ryoung Park. "Deep Learning-Based Plant Classification Using Nonaligned Thermal and Visible Light Images." Mathematics 10, no. 21 (November 1, 2022): 4053. http://dx.doi.org/10.3390/math10214053.

Full text
Abstract:
There have been various studies conducted on plant images. Machine learning algorithms are usually used in visible light image-based studies, whereas, in thermal image-based studies, acquired thermal images tend to be analyzed with a naked eye visual examination. However, visible light cameras are sensitive to light, and cannot be used in environments with low illumination. Although thermal cameras are not susceptible to these drawbacks, they are sensitive to atmospheric temperature and humidity. Moreover, in previous thermal camera-based studies, time-consuming manual analyses were performed. Therefore, in this study, we conducted a novel study by simultaneously using thermal images and corresponding visible light images of plants to solve these problems. The proposed network extracted features from each thermal image and corresponding visible light image of plants through residual block-based branch networks, and combined the features to increase the accuracy of the multiclass classification. Additionally, a new database was built in this study by acquiring thermal images and corresponding visible light images of various plants.
Styles: APA, Harvard, Vancouver, ISO, etc.

Dissertations on the topic "VISIBLE IMAGE"

1

Salvador, Amaia. "Computer vision beyond the visible : image understanding through language." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/667162.

Full text
Abstract:
In the past decade, deep neural networks have revolutionized computer vision. High performing deep neural architectures trained for visual recognition tasks have pushed the field towards methods relying on learned image representations instead of hand-crafted ones, in the seek of designing end-to-end learning methods to solve challenging tasks, ranging from long-lasting ones such as image classification to newly emerging tasks like image captioning. As this thesis is framed in the context of the rapid evolution of computer vision, we present contributions that are aligned with three major changes in paradigm that the field has recently experienced, namely 1) the power of re-utilizing deep features from pre-trained neural networks for different tasks, 2) the advantage of formulating problems with end-to-end solutions given enough training data, and 3) the growing interest of describing visual data with natural language rather than pre-defined categorical label spaces, which can in turn enable visual understanding beyond scene recognition. The first part of the thesis is dedicated to the problem of visual instance search, where we particularly focus on obtaining meaningful and discriminative image representations which allow efficient and effective retrieval of similar images given a visual query. Contributions in this part of the thesis involve the construction of sparse Bag-of-Words image representations from convolutional features from a pre-trained image classification neural network, and an analysis of the advantages of fine-tuning a pre-trained object detection network using query images as training data. The second part of the thesis presents contributions to the problem of image-to-set prediction, understood as the task of predicting a variable-sized collection of unordered elements for an input image. 
We conduct a thorough analysis of current methods for multi-label image classification, which are able to solve the task in an end-to-end manner by simultaneously estimating both the label distribution and the set cardinality. Further, we extend the analysis of set prediction methods to semantic instance segmentation, and present an end-to-end recurrent model that is able to predict sets of objects (binary masks and categorical labels) in a sequential manner. Finally, the third part of the dissertation takes insights learned in the previous two parts in order to present deep learning solutions to connect images with natural language in the context of cooking recipes and food images. First, we propose a retrieval-based solution in which the written recipe and the image are encoded into compact representations that allow the retrieval of one given the other. Second, as an alternative to the retrieval approach, we propose a generative model to predict recipes directly from food images, which first predicts ingredients as sets and subsequently generates the rest of the recipe one word at a time by conditioning both on the image and the predicted ingredients.
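The sparse Bag-of-Words representation built from convolutional features, described in the first part of the thesis, can be illustrated with a minimal retrieval sketch. The toy 2-D "descriptors" and three-word vocabulary below are hypothetical stand-ins; the thesis's actual pipeline (pre-trained CNN feature maps, large vocabularies, inverted indices) is not reproduced here.

```python
import numpy as np

def bow_histogram(descriptors, centroids):
    """Quantise local (e.g. convolutional) descriptors to their nearest visual word
    and return an L2-normalised Bag-of-Words histogram."""
    d2 = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)  # hard assignment to visual words
    hist = np.bincount(words, minlength=len(centroids)).astype(float)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

# Hypothetical toy data: 2-D descriptors and a 3-word vocabulary.
centroids = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
query = bow_histogram(np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.9]]), centroids)
database = [bow_histogram(np.array([[0.0, 0.1], [1.1, 1.0], [0.9, 1.0]]), centroids),
            bow_histogram(np.array([[2.1, 0.0], [1.9, 0.1], [2.0, -0.1]]), centroids)]
scores = [float(query @ h) for h in database]  # cosine similarity on unit vectors
```

Ranking database images by these dot products is the retrieval step; with sparse histograms the same scores can be computed efficiently through an inverted index.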
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Font, Aragonès Xavier. "Visible, near infrared and thermal hand-based image biometric recognition." Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/117685.

Full text
Abstract:
Biometric recognition refers to the automatic identification of a person based on an anatomical characteristic or modality (i.e., fingerprint, palmprint, face) or a behavioural characteristic (i.e., signature). It is a key issue in any process concerned with security, shared resources, and network transactions, among many others. It arises as the fundamental problem widely known as recognition, a necessary step before permission is granted: it protects key resources by allowing them to be used only by users who have been granted authority to use or access them. Biometric systems can operate in verification mode, where the question to be solved is "Am I who I claim I am?", or in identification mode, where the question is "Who am I?". The scientific community has increased its efforts to improve the performance of biometric systems. Depending on the application, many solutions work with several modalities or combine different classification methods. Since additional modalities cause user inconvenience, many of these approaches will never reach the market; for example, working with iris, face, and fingerprints requires user effort to assist acquisition. This thesis addresses hand-based biometric systems in a thorough way. The main contributions are a new multi-spectral hand-based image database and methods for performance improvement: A) The first multi-spectral hand-based image database covering both hand faces, palmar and dorsal. Biometric databases are a precious commodity for research, especially when they offer something new, such as visible (VIS), near-infrared (NIR), and thermographic (TIR) images at once. This database, with 100 users and 10 samples per user, constitutes a good starting point for checking algorithms and the hand's suitability for recognition.
B) To deal correctly with raw hand data, some image preprocessing steps are necessary. Three different segmentation phases are deployed to handle VIS, NIR, and TIR images specifically. Among the difficult issues addressed are overexposed images, ring fingers and cuffs, cold fingers, and image noise. Once the images are segmented, two different approaches are prepared to deal with the segmented data. These two approaches, called holistic and geometric, define the main focus for extracting the feature vector. These feature vectors can be used alone or combined in some way. Many questions can be stated, e.g., which approach is better for recognition? Can fingers alone obtain better performance than the whole hand? Is thermographic hand information suitable for recognition, given its thermoregulation properties? A complete set of data ready for analysis, coming from the holistic and geometric approaches, has been designed and saved for testing. An innovative geometric approach related to curvature is demonstrated. C) Finally, the Biometric Dispersion Matcher (BDM) is used to explore how it works under different fusion schemes as well as with different classification methods. This research contrasts what happens when using other methods close to BDM, such as Linear Discriminant Analysis (LDA). At this point, some interesting questions are answered, e.g., whether taking advantage of finger segmentation (as five different modalities) can outperform what the whole-hand data can teach us.
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Yigit, Ahmet. "Thermal And Visible Band Image Fusion For Abandoned Object Detection." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611720/index.pdf.

Full text
Abstract:
Packages left unattended in public spaces are a security concern, and timely detection of these packages is important for the prevention of potential threats. Operators should always be alert to detect abandoned items in crowded environments; however, it is very difficult for operators to stay concentrated for extended periods. Therefore, it is important to aid operators with automatic detection of abandoned items. Most methods in the literature define abandoned items as items newly added to the scene that stay stationary for a predefined time. Hence, other stationary objects, such as people sitting on a bench, are also detected as suspicious, resulting in a high number of false alarms. These false alarms could be prevented by discriminating suspicious items as living/nonliving objects. In this thesis, visible band and thermal band cameras are used together to analyze the interactions between humans and other objects. Thermal images help classify objects using their heat signatures. This way, people and the objects they carry or leave behind can be detected separately. In particular, the aim is to detect abandoned items and discriminate between living and nonliving objects.
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Karlsson, Jonas. "FPGA-Accelerated Dehazing by Visible and Near-infrared Image Fusion." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-28322.

Full text of the source
Abstract:
Fog and haze can have a dramatic impact on vision systems for land and sea vehicles. The impact of such conditions on infrared images is not as severe as for standard images. By fusing images from two cameras, one ordinary and one near-infrared, a complete dehazing system with colour preservation can be achieved. By applying several different algorithms to an image set and evaluating the results, the most suitable image fusion algorithm has been identified. Using an FPGA, a programmable integrated circuit, a crucial part of the algorithm has been implemented. It is capable of producing processed images 30 times faster than a laptop computer. This implementation lays the foundation of a real-time dehazing system and provides a significant part of the full solution. The results show that such a system can be accomplished with an FPGA.
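One common visible/near-infrared fusion scheme behind such dehazing systems blends the haze-penetrating NIR signal into the luminance of the visible image while preserving its colour. The sketch below illustrates that general idea under stated assumptions (float images in [0, 1], a hypothetical blending weight alpha); it is not the thesis's FPGA algorithm:

```python
import numpy as np

def fuse_visible_nir(rgb, nir, alpha=0.6):
    """Blend NIR detail into the luminance of a visible image, keeping colour.

    rgb: float array in [0, 1], shape (H, W, 3); nir: (H, W) in [0, 1].
    alpha is a hypothetical blending weight for the NIR channel.
    """
    luma = rgb.mean(axis=2)                       # crude luminance estimate
    fused_luma = (1.0 - alpha) * luma + alpha * nir
    # Rescale each colour channel by the luminance ratio to preserve hue.
    ratio = fused_luma / np.maximum(luma, 1e-6)
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)
```

A hardware implementation would replace the per-pixel divisions with fixed-point approximations, which is part of what makes the FPGA port non-trivial.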
Styles: APA, Harvard, Vancouver, ISO, etc.
5

ALMEIDA, MARIA DA GLORIA DE SOUZA. "SEEING BEYOND THE VISIBLE: THE IMAGE OUT OF THE EYES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2017. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=30174@1.

Full text of the source
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE SUPORTE À PÓS-GRADUAÇÃO DE INSTS. DE ENSINO
The theme of this thesis raises the discussion of the real possibility of a blind person constructing mental images. It discusses the viability of constructing these images from the very conditions with which the blind individual is endowed. It seeks to demonstrate how the body, with its senses and its unlimited valences, becomes an instrument capable of activating sensory resources that give information, pass data, materialize sensations and perceptions, and formulate concepts. On the tripod of knowledge, culture and the arts, a study was carried out that, owing to the complexity and breadth of the subject, required the crossing of different disciplines and different lines of thought which, even while keeping their differences, could establish a dialogue leading to a clearer understanding of the heart of the proposal. Zubiri, Bachelard and Durand were used to compose the basis for the reading of three blind poets and the writer Marina Colasanti, pointing out the expressive force of the images in both cases.
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Nielsen, Casper Falkenberg. "A robust framework for medical image segmentation through adaptable class-specific representation." Thesis, Middlesex University, 2002. http://eprints.mdx.ac.uk/13507/.

Full text of the source
Abstract:
Medical image segmentation is an increasingly important component in virtual pathology, diagnostic imaging and computer-assisted surgery. Better hardware for image acquisition and a variety of advanced visualisation methods have paved the way for the development of computer-based tools for medical image analysis and interpretation. The routine use of medical imaging scans of multiple modalities has been growing over the last decades, and data sets such as the Visible Human Project have introduced a new modality in the form of colour cryo section data. These developments have given rise to an increasing need for better automatic and semi-automatic segmentation methods. The work presented in this thesis concerns the development of a new framework for robust semi-automatic segmentation of medical imaging data of multiple modalities. Following the specification of a set of conceptual and technical requirements, the framework known as ACSR (Adaptable Class-Specific Representation) is developed, in the first case, for 2D colour cryo section segmentation. This is achieved through the development of a novel algorithm for adaptable class-specific sampling of point neighbourhoods, known as the PGA (Path Growing Algorithm), combined with Learning Vector Quantization. The framework is extended to accommodate 3D volume segmentation of cryo section data and subsequently segmentation of single- and multi-channel greyscale MRI data. For the latter, the issues of inhomogeneity and noise are specifically addressed. Evaluation is based on comparison with previously published results on standard simulated and real data sets, using visual presentation, ground truth comparison and human observer experiments. ACSR provides the user with a simple and intuitive visual initialisation process followed by a fully automatic segmentation.
Results on both cryo section and MRI data compare favourably to existing methods, demonstrating robustness both to common artefacts and to multiple user initialisations. Further developments into specific clinical applications are discussed in the future work section.
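ACSR combines the PGA's class-specific sampling with Learning Vector Quantization. As a point of reference, the textbook LVQ1 prototype update that such classifiers build on can be sketched as follows (a generic formulation, not the thesis's implementation):

```python
import numpy as np

def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update: move the nearest prototype toward sample x if its
    label matches y, and away from x otherwise. Returns the winner index."""
    d = np.linalg.norm(prototypes - x, axis=1)
    k = int(np.argmin(d))
    sign = 1.0 if labels[k] == y else -1.0
    prototypes[k] += sign * lr * (x - prototypes[k])
    return k
```

Iterating this step over labelled samples pulls each class's prototypes toward its own feature clusters, which is what makes the subsequent nearest-prototype labelling class-specific.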
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Whitelegg, Andrew Jeremy. "The visible and the invisible : the production of image in Atlanta." Thesis, King's College London (University of London), 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265549.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

RAWAT, URVASHI. "INFRARED AND VISIBLE IMAGE FUSION USING HYBRID LWT AND PCA METHOD." Thesis, DELHI TECHNOLOGICAL UNIVERSITY, 2021. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18907.

Full text of the source
Abstract:
Image fusion is a method in which all the relevant information is collected from the input source images and included in a single output image. Image fusion techniques are divided into two broad categories: spatial domain and transform domain. Principal component analysis (PCA) is a spatial-domain technique which is computationally simpler and reduces redundant information but has the demerit of spectral degradation. Lifting wavelet transform (LWT) is a transform-domain technique which has an adaptive design and demands less memory. In this project, a novel hybrid fusion algorithm has been introduced which combines LWT and PCA in a parallel manner. These two fusion methods are applied to an infrared and visible image data set. Infrared and visible images contain complementary information, and their fusion gives an output image which is more informative than the individual source images. The hybrid method is also compared with conventional fusion techniques such as PCA, LWT and DWT. It has been shown that the proposed method outperforms the conventional methods. The results are analyzed using the performance parameters standard deviation, average value, average difference, and normalized cross-correlation.
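The evaluation metrics named above have standard formulations; two of them can be sketched as follows (exact definitions vary between fusion papers, so treat these as illustrative rather than the thesis's precise variants):

```python
import numpy as np

def average_difference(fused, reference):
    """Mean absolute difference between a fused image and a reference."""
    return float(np.mean(np.abs(fused.astype(np.float64) - reference)))

def normalized_cross_correlation(fused, reference):
    """Zero-mean normalized cross-correlation, in [-1, 1]."""
    f = fused.astype(np.float64) - fused.mean()
    r = reference.astype(np.float64) - reference.mean()
    return float((f * r).sum() / np.sqrt((f * f).sum() * (r * r).sum()))
```

A fused image that tracks the reference closely gives an average difference near zero and a cross-correlation near one.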
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Di Mercurio, Francine. "Les images scéniques de Romeo Castellucci : expérience d'un théâtre plastique." Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM3025.

Full text of the source
Abstract:
Some twenty-first-century theatrical works seem to place at the heart of their aesthetic project a questioning of the perceptive order, creating new ways of seeing, hearing and being affected. The drama, as action, finds itself reactualized in a form addressed to the spectator's perceptions through a visual, even perceptual, dramaturgy grounded in a factory of scenic images. Our study proposes to bring part of the research on the image into the field of theatre by putting into perspective the aesthetic, philosophical and even political questions that the notion of the scenic image raises. The work of the Italian director Romeo Castellucci is representative of the specificity of the visual artist who finds on the stage a new medium for creating images, one involving the body, space, shape and matter, and where the text, although largely mute, haunts the stage. This plastic theatre questions the foundations of traditional theatre (action, character, fiction), invents a properly figural drama and seeks to displace the spectator's gaze. In an oscillation between the appearance and disappearance of the visible, between representation and the ineffable, it offers the spectator an experiential process within a laboratory where image plays against image, unsettling the gaze and accepted perceptions of reality. Displacing the theory of the image into the field of theatre will allow us, through Romeo Castellucci's singular approach, to contribute to the analysis of the operating procedures of his aesthetic efficacy and of the critical and political stakes of contemporary theatre.
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Hu, Lequn. "Development and evaluation of image registration and segmentation algorithms for long wavelength infrared and visible wavelength images." Master's thesis, Mississippi State : Mississippi State University, 2009. http://library.msstate.edu/etd/show.asp?etd=etd-07082009-171221.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.

Books on the topic "VISIBLE IMAGE"

1

Jones, Philippe. Image verbale, image visible. Châtelineau, Belgique: Le Taillis pré, 2013.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Partager le visible: Repenser Foucault. Paris: L'Harmattan, 2013.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

La brûlure du visible: Photographie et écriture. Paris: L'Harmattan, 2012.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Ahijado, Jorge Quijano. Autour du visible, l'image et la fuite. Paris: Editions L'Harmattan, 2014.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

The image and the region: Making mega-city regions visible! Baden: Lars Müller Publishers, 2008.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

H, Roosen-Runge Peter, Roosen-Runge Anna P, and Canadian Heritage Information Network, eds. The virtual display case: Making museum image assets safety visible. 3rd ed. Ottawa, Ont: Canadian Heritage Information Network, 2003.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Olins, Wally. Corporate identity: Making business strategy visible through design. Boston, Mass: Harvard Business School Press, 1990.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Olins, Wally. Corporate identity: Making business strategy visible through design. London: Thames and Hudson, 1989.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

L'invention du visible: L'image à la lumière des arts. Paris: Hermann éditeurs, 2008.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Vauday, Patrick. L'invention du visible: L'image à la lumière des arts. Paris: Hermann éditeurs, 2008.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.

Book chapters on the topic "VISIBLE IMAGE"

1

Magnenat-Thalmann, Nadia, and Daniel Thalmann. "Visible surface algorithms." In Image Synthesis, 84–107. Tokyo: Springer Japan, 1987. http://dx.doi.org/10.1007/978-4-431-68060-4_5.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Durou, Jean-Denis, Laurent Mascarilla, and Didier Piau. "Non-visible deformations." In Image Analysis and Processing, 519–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63507-6_240.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Gorad, Ajinkya, Sakira Hassan, and Simo Särkkä. "Vessel Bearing Estimation Using Visible and Thermal Imaging." In Image Analysis, 373–81. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-31438-4_25.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Krishnan, Shoba, and Prathibha Sudhakaran. "Visible image and video watermarking." In Thinkquest~2010, 234–38. New Delhi: Springer India, 2011. http://dx.doi.org/10.1007/978-81-8489-989-4_43.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Sonka, Milan, Daniel R. Thedens, Boudewijn P. F. Lelieveldt, Steven C. Mitchell, Rob J. van der Geest, and Johan H. C. Reiber. "Cardiovascular MR Image Analysis." In Computer Vision Beyond the Visible Spectrum, 193–239. London: Springer London, 2005. http://dx.doi.org/10.1007/1-84628-065-6_7.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Khuwuthyakorn, Pattaraporn, Antonio Robles-Kelly, and Jun Zhou. "Affine Invariant Hyperspectral Image Descriptors Based upon Harmonic Analysis." In Machine Vision Beyond Visible Spectrum, 179–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-11568-4_8.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Zhang, Tong, Hui Xue, Shiqiang Wang, and Dongfang Zhang. "Fire Detection Method Based on Infrared Image and Visible Image." In Proceedings of International Conference on Image, Vision and Intelligent Systems 2022 (ICIVIS 2022), 505–18. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-0923-0_51.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Sengupta, Madhumita, and J. K. Mandal. "Color Image Authentication through Visible Patterns (CAV)." In ICT and Critical Infrastructure: Proceedings of the 48th Annual Convention of Computer Society of India- Vol II, 617–25. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-03095-1_67.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Clark, Pamela Elizabeth, and Michael Lee Rilee. "Visible and Circumvisible Regions and Image Interpretation." In Remote Sensing Tools for Exploration, 53–113. New York, NY: Springer New York, 2010. http://dx.doi.org/10.1007/978-1-4419-6830-2_3.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Drury, S. A. "Digital processing of images in the visible and near infrared." In Image Interpretation in Geology, 118–48. Dordrecht: Springer Netherlands, 1987. http://dx.doi.org/10.1007/978-94-010-9393-4_5.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.

Conference papers on the topic "VISIBLE IMAGE"

1

Zhao, Zixiang, Shuang Xu, Chunxia Zhang, Junmin Liu, Jiangshe Zhang, and Pengfei Li. "DIDFuse: Deep Image Decomposition for Infrared and Visible Image Fusion." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/135.

Full text of the source
Abstract:
Infrared and visible image fusion, a hot topic in the field of image processing, aims at obtaining fused images keeping the advantages of source images. This paper proposes a novel auto-encoder (AE) based fusion network. The core idea is that the encoder decomposes an image into background and detail feature maps with low- and high-frequency information, respectively, and that the decoder recovers the original image. To this end, the loss function makes the background/detail feature maps of source images similar/dissimilar. In the test phase, background and detail feature maps are respectively merged via a fusion module, and the fused image is recovered by the decoder. Qualitative and quantitative results illustrate that our method can generate fusion images containing highlighted targets and abundant detail texture information with strong reproducibility and meanwhile surpass state-of-the-art (SOTA) approaches.
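DIDFuse learns the background/detail decomposition with an auto-encoder; the underlying decompose-fuse-reconstruct pipeline it describes can be illustrated with a fixed low-/high-frequency split in place of the learned encoder (a sketch of the general idea, not the paper's network):

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box filter with edge padding (a stand-in low-pass)."""
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_two_scale(ir, vis):
    """Decompose each image into background + detail, average the
    backgrounds, keep the stronger detail, and reconstruct."""
    base_ir, base_vis = box_blur(ir), box_blur(vis)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    base = 0.5 * (base_ir + base_vis)
    detail = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
    return base + detail
```

In DIDFuse the blur/residual split is replaced by learned feature maps and the reconstruction by the trained decoder, but the merge-then-recover structure is the same.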
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Chuang, Shang-Chih, Chun-Hsiang Huang, and Ja-Ling Wu. "Unseen Visible Watermarking." In 2007 IEEE International Conference on Image Processing. IEEE, 2007. http://dx.doi.org/10.1109/icip.2007.4379296.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Wang, Di, Jinyuan Liu, Xin Fan, and Risheng Liu. "Unsupervised Misaligned Infrared and Visible Image Fusion via Cross-Modality Image Generation and Registration." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/487.

Full text of the source
Abstract:
Recent learning-based image fusion methods have marked numerous progress in pre-registered multi-modality data, but suffered serious ghosts dealing with misaligned multi-modality data, due to the spatial deformation and the difficulty narrowing cross-modality discrepancy. To overcome the obstacles, in this paper, we present a robust cross-modality generation-registration paradigm for unsupervised misaligned infrared and visible image fusion (IVIF). Specifically, we propose a Cross-modality Perceptual Style Transfer Network (CPSTN) to generate a pseudo infrared image taking a visible image as input. Benefiting from the favorable geometry preservation ability of the CPSTN, the generated pseudo infrared image embraces a sharp structure, which is more conducive to transforming cross-modality image alignment into mono-modality registration coupled with the structure-sensitive of the infrared image. In this case, we introduce a Multi-level Refinement Registration Network (MRRN) to predict the displacement vector field between distorted and pseudo infrared images and reconstruct registered infrared image under the mono-modality setting. Moreover, to better fuse the registered infrared images and visible images, we present a feature Interaction Fusion Module (IFM) to adaptively select more meaningful features for fusion in the Dual-path Interaction Fusion Network (DIFN). Extensive experimental results suggest that the proposed method performs superior capability on misaligned cross-modality image fusion.
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Carrión-Ruiz, Berta, Silvia Blanco-Pons, and Jose Luis Lerma. "DIGITAL IMAGE ANALYSIS OF THE VISIBLE REGION THROUGH SIMULATION OF ROCK ART PAINTINGS." In ARQUEOLÓGICA 2.0 - 8th International Congress on Archaeology, Computer Graphics, Cultural Heritage and Innovation. Valencia: Universitat Politècnica València, 2016. http://dx.doi.org/10.4995/arqueologica8.2016.3560.

Full text of the source
Abstract:
Non-destructive rock art recording techniques have received special attention in recent years, opening new research lines to improve the level of documentation and understanding of our rich legacy. This paper applies the principal component analysis (PCA) technique to images that include wavelengths between 400-700 nm (the visible range). Our approach is focused on determining the difference provided by processing the visible region through four spectral images versus one image that encompasses the entire visible spectrum. The images were taken by means of optical filters that pass specific wavelengths and exclude parts of the spectrum. A simulation of rock art was prepared in the laboratory: three different pigments were made simulating the material composition of rock art paintings. The advantages of studying the visible spectrum in separate images are analysed. In addition, PCA is applied to each of the images to reduce redundant data. Finally, PCA is applied to the image that contains the entire visible spectrum and is compared with the previous results. Through the results of the four visible spectral images, one can begin to draw conclusions about constituent painting materials without using decorrelation techniques.
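The PCA step applied to a set of registered spectral images can be sketched as follows (a generic formulation, assuming all bands share one pixel grid; the paper's processing chain may differ in detail):

```python
import numpy as np

def pca_components(bands):
    """Project an (n_bands, H, W) image stack onto its principal components.

    Returns the component images (ordered by decreasing explained variance)
    and the corresponding eigenvalues of the band covariance matrix.
    """
    n, h, w = bands.shape
    X = bands.reshape(n, -1).astype(np.float64)   # one row per band
    X -= X.mean(axis=1, keepdims=True)            # zero-mean each band
    cov = X @ X.T / (X.shape[1] - 1)              # n x n band covariance
    eigval, eigvec = np.linalg.eigh(cov)          # eigh returns ascending order
    order = np.argsort(eigval)[::-1]
    comps = eigvec[:, order].T @ X                # project pixels onto components
    return comps.reshape(n, h, w), eigval[order]
```

For highly correlated bands, nearly all of the variance concentrates in the first component image, which is what makes PCA useful for spotting redundant spectral information.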
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Da Silva, Teofilo Augusto, and Suzete Venturelli. "Computational Image: a review about the image as simulation." In IV Congreso Internacional de Investigación en Artes Visuales. ANIAV 2019. Imagen [N] Visible. Valencia: Universitat Politècnica de València, 2019. http://dx.doi.org/10.4995/aniav.2019.8953.

Full text of the source
Abstract:
This essay is a bibliographic review of the issue of the image in the environment of computational art creation. With technological breakthroughs, especially in computer graphics, computational images have entered the art field, prompting an examination of how they can interfere in matters of sign and meaning. With computational technology, images became machine-generated, and the impacts of this phenomenon are still under investigation. This work, based on the discussions brought by Edmond Couchot, Oliver Grau, Cláudia Gianetti, José Luis Brea, Pierre Lévy, Vilém Flusser, Suzete Venturelli and Cleomar Rocha, aims to show the consequences, in the artistic context, of the use of images in numerical environments.
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Yoo, Youngjin, Wonhee Choe, and SeongDeok Lee. "Wide-band image guided visible-band image enhancement." In 2011 18th IEEE International Conference on Image Processing (ICIP 2011). IEEE, 2011. http://dx.doi.org/10.1109/icip.2011.6116442.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Kuang Yan and Wu Yunfeng. "Research on image fusion for visible and infrared images." In Instruments (ICEMI). IEEE, 2011. http://dx.doi.org/10.1109/icemi.2011.6037859.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Braudaway, Gordon W., Karen A. Magerlein, and Frederick C. Mintzer. "Protecting publicly available images with a visible image watermark." In Electronic Imaging: Science & Technology, edited by Rudolf L. van Renesse. SPIE, 1996. http://dx.doi.org/10.1117/12.235469.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Chen, Yunjie, Jianwei Zhang, Ann Heng Pheng, and Deshen Xia. "Chinese Visible Human Brain Image Segmentation." In 2008 Congress on Image and Signal Processing. IEEE, 2008. http://dx.doi.org/10.1109/cisp.2008.4.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Nukui, Kazumitsu. "Supervisering system by infrared/visible image." In 15th International Conference on Infrared and Millimeter Waves. SPIE, 1990. http://dx.doi.org/10.1117/12.2301565.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.

Organizational reports on the topic "VISIBLE IMAGE"

1

Boopalan, Santhana. Aerial Wildlife Image Repository. Mississippi State University, 2023. http://dx.doi.org/10.54718/wvgf3020.

Full text of the source
Abstract:
The availability of an ever-improving repository of datasets allows machine learning algorithms to have a robust training set of images, which in turn allows for accurate detection and classification of wildlife. This repository (AWIR: Aerial Wildlife Image Repository) is a step toward creating a collaborative, rich dataset, both in terms of the taxa of animals and in terms of the sensors used to observe them (visible, infrared, lidar, etc.). Initially, priority is given to wildlife species hazardous to aircraft and to common wildlife damage-associated species. The AWIR dataset is accompanied by a classification benchmarking website showcasing examples of state-of-the-art algorithms recognizing the wildlife in the images.
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Topolski, John. The magnitude image from the 50% compressed PPH (R) shows an increased noise floor visible in the shadows. Office of Scientific and Technical Information (OSTI), May 2015. http://dx.doi.org/10.2172/1182685.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Huang, Haohang, Jiayi Luo, Kelin Ding, Erol Tutumluer, John Hart, and Issam Qamhia. I-RIPRAP 3D Image Analysis Software: User Manual. Illinois Center for Transportation, June 2023. http://dx.doi.org/10.36501/0197-9191/23-008.

Full text of the source
Abstract:
Riprap rock and aggregates are commonly used in various engineering applications such as structural, transportation, geotechnical, and hydraulic engineering. To ensure the quality of the aggregate materials selected for these applications, it is important to determine their morphological properties such as size and shape. There have been many imaging approaches developed to characterize the size and shape of individual aggregates, but obtaining 3D characterization of aggregates in stockpiles at production or construction sites can be a challenging task. This research study introduces a new approach based on deep learning techniques that combines three developed research components: field 3D reconstruction procedures, 3D stockpiles instance segmentation, and 3D shape completion. The approach is designed to reconstruct aggregate stockpiles from multiple images, segment the stockpile into individual instances, and predict the unseen sides of each instance (particle) based on the partially visible shapes. The approach was validated using ground-truth measurements and demonstrated satisfactory algorithm performance in capturing and predicting the unseen sides of aggregates. For better user experience, the integrated approach has been implemented into a software application named “I-RIPRAP 3D,” with a user-friendly graphical user interface (GUI). This stockpile aggregate analysis approach is envisioned to provide efficient field evaluation of aggregate stockpiles by offering convenient and reliable solutions for on-site quality assurance and quality control tasks of riprap rock and aggregate stockpiles. This document provides information for users of the I-RIPRAP 3D software to make the best use of the software’s capabilities.
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Burks, Thomas F., Victor Alchanatis, and Warren Dixon. Enhancement of Sensing Technologies for Selective Tree Fruit Identification and Targeting in Robotic Harvesting Systems. United States Department of Agriculture, October 2009. http://dx.doi.org/10.32747/2009.7591739.bard.

Full text of the source
Abstract:
The proposed project aims to enhance tree fruit identification and targeting for robotic harvesting through the selection of appropriate sensor technology, sensor fusion, and visual servo-control approaches. These technologies are applicable to apple, orange and grapefruit harvest, although specific sensor wavelengths may vary. The primary challenges are fruit occlusion, light variability, peel color variation with maturity, range to target, and the computational requirements of image processing algorithms. There were four major development tasks in the original three-year proposed study. First, spectral characteristics in the VIS/NIR (0.4-1.0 micron) would be used in conjunction with thermal data to provide accurate and robust detection of fruit in the tree canopy, with hyper-spectral image pairs combined to provide automatic stereo matching for accurate 3D position. Secondly, VIS/NIR/FIR (0.4-15.0 micron) spectral sensor technology would be evaluated for its potential for in-field, on-the-tree grading of surface defects, maturity and size for selective fruit harvest. Thirdly, new adaptive Lyapunov-based HBVS (homography-based visual servo) methods would be developed, simulated, and implemented on a camera testbed to compensate for camera uncertainty and distortion effects and to provide range to target from a single camera; HBVS methods coupled with image-space navigation would then provide robust target tracking. Finally, harvesting tests would be conducted on the developed technologies using the University of Florida harvesting manipulator test bed. During the course of the project it was determined that the second objective was overly ambitious for the project period, and effort was directed toward the other objectives. The results reflect the synergistic efforts of the three principals: the USA team focused on citrus-based approaches, while the Israeli counterpart focused on apples.
The USA team improved visual servo control through the use of a statistical range estimate and homography; the results have been promising as long as the target is visible. In addition, the USA team developed improved fruit detection algorithms that are robust under light variation and can localize fruit centers for partially occluded fruit. Algorithms were also developed to fuse thermal and visible spectrum images prior to segmentation, in order to evaluate the potential improvements in fruit detection. Lastly, the USA team developed a multispectral detection approach which demonstrated detection of more than 90% of non-occluded fruit. The Israel team focused on image registration and statistical fruit detection with post-segmentation fusion. The results of all programs have shown significant progress, with increased levels of fruit detection over prior art.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Huang, Haohang, Erol Tutumluer, Jiayi Luo, Kelin Ding, Issam Qamhia, and John Hart. 3D Image Analysis Using Deep Learning for Size and Shape Characterization of Stockpile Riprap Aggregates—Phase 2. Illinois Center for Transportation, September 2022. http://dx.doi.org/10.36501/0197-9191/22-017.

Full text of the source
Abstract:
Riprap rock and aggregates are extensively used in structural, transportation, geotechnical, and hydraulic engineering applications. Field determination of morphological properties of aggregates such as size and shape can greatly facilitate the quality assurance/quality control (QA/QC) process for proper aggregate material selection and engineering use. Many aggregate imaging approaches have been developed to characterize the size and morphology of individual aggregates by computer vision. However, 3D field characterization of aggregate particle morphology is challenging both during the quarry production process and at construction sites, particularly for aggregates in stockpile form. This research study presents a 3D reconstruction-segmentation-completion approach based on deep learning techniques by combining three developed research components: field 3D reconstruction procedures, 3D stockpile instance segmentation, and 3D shape completion. The approach was designed to reconstruct aggregate stockpiles from multi-view images, segment the stockpile into individual instances, and predict the unseen side of each instance (particle) based on the partial visible shapes. Based on the dataset constructed from individual aggregate models, a state-of-the-art 3D instance segmentation network and a 3D shape completion network were implemented and trained, respectively. The application of the integrated approach was demonstrated on re-engineered stockpiles and field stockpiles. The validation of results using ground-truth measurements showed satisfactory algorithm performance in capturing and predicting the unseen sides of aggregates. The algorithms are integrated into a software application with a user-friendly graphical user interface. 
Based on the findings of this study, this stockpile aggregate analysis approach is envisioned to provide efficient field evaluation of aggregate stockpiles by offering convenient and reliable solutions for on-site QA/QC tasks of riprap rock and aggregate stockpiles.
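The pipeline summarized above ends with per-particle 3D shapes from which size descriptors are derived. As an illustration of that final step only (not the report's actual algorithm), the sketch below estimates the three principal-axis extents of a particle point cloud via PCA, a common proxy for aggregate length, width, and thickness; the point-cloud input and box-shaped test data are assumptions.

```python
import numpy as np

def particle_dimensions(points: np.ndarray) -> np.ndarray:
    """Return sorted extents (length >= width >= thickness) of a
    particle point cloud measured along its PCA principal axes."""
    centered = points - points.mean(axis=0)
    # Columns of `vecs` are the principal axes (covariance eigenvectors)
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    projected = centered @ vecs
    extents = projected.max(axis=0) - projected.min(axis=0)
    return np.sort(extents)[::-1]

# Synthetic particle: a 4 x 2 x 1 box-shaped cloud
rng = np.random.default_rng(0)
pts = rng.uniform([0.0, 0.0, 0.0], [4.0, 2.0, 1.0], size=(5000, 3))
dims = particle_dimensions(pts)
```

Real stockpile instances would be meshes or point clouds produced by the segmentation and shape-completion networks rather than synthetic boxes.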
APA, Harvard, Vancouver, ISO, and other styles
6

Pautet, P. D., J. Stegmman, C. M. Wrasse, K. Nielsen, H. Takahashi, M. J. Taylor, K. W. Hoppel, and S. D. Eckermann. Analysis of Gravity Waves Structures Visible in Noctilucent Cloud Images. Fort Belvoir, VA: Defense Technical Information Center, January 2010. http://dx.doi.org/10.21236/ada523106.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

McDonald, T. E. Jr, D. M. Numkena, J. Payton, G. J. Yates, and P. Zagarino. Using optical parametric oscillators (OPO) for wavelength shifting IR images to visible spectrum. Office of Scientific and Technical Information (OSTI), December 1998. http://dx.doi.org/10.2172/334329.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Hartter, Joel, and Chris Colocousis. Environmental, economic, and social changes in rural America visible in survey data and satellite images. University of New Hampshire Libraries, 2011. http://dx.doi.org/10.34051/p/2020.130.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Cohen, Yafit, Carl Rosen, Victor Alchanatis, David Mulla, Bruria Heuer, and Zion Dar. Fusion of Hyper-Spectral and Thermal Images for Evaluating Nitrogen and Water Status in Potato Fields for Variable Rate Application. United States Department of Agriculture, November 2013. http://dx.doi.org/10.32747/2013.7594385.bard.

Full text of the source
Abstract:
Potato yield and quality are highly dependent on an adequate supply of nitrogen and water. Opportunities exist to use airborne hyperspectral (HS) remote sensing for the detection of spatial variation in N status of the crop to allow more targeted N applications. Thermal remote sensing has the potential to identify spatial variations in crop water status to allow better irrigation management and eventually precision irrigation. The overall objective of this study was to examine the ability of HS imagery in the visible and near infrared spectrum (VIS-NIR) and thermal imagery to distinguish between water and N status in potato fields. To lay the basis for achieving the research objectives, experiments with different irrigation and N-application amounts were conducted in potato in the US and in Israel. Thermal indices based solely on thermal images were found sensitive to water status in three potato varieties in both Israel and the US. Spectral indices based on HS images were found suitable to detect N stress accurately and reliably, while partial least squares (PLS) analysis of spectral data was more sensitive to N levels. Initial fusion of HS and thermal images showed the potential of detecting both N stress and water stress and even of differentiating between them. This study is one of the first attempts at fusing HS and thermal imagery to detect N and water stress and to estimate N and water levels. Future research is needed to refine these techniques for use in precision agriculture applications.
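The fusion approach described above combines a VIS-NIR spectral index for N status with a thermal index for water status. As a minimal illustration (the specific indices below, NDVI and a CWSI-style normalized canopy temperature, are common stand-ins and not necessarily the ones used in the study):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index from NIR and red
    reflectance; a common VIS-NIR indicator of crop vigour/N status."""
    return (nir - red) / (nir + red + 1e-9)

def thermal_index(canopy_t, t_wet, t_dry):
    """CWSI-style normalized canopy temperature:
    0 = well watered, 1 = fully water stressed."""
    return (canopy_t - t_wet) / (t_dry - t_wet)

nir = np.array([0.45, 0.60])
red = np.array([0.20, 0.08])
n_index = ndvi(nir, red)                      # higher -> better N status
water_index = thermal_index(np.array([28.0, 33.5]), t_wet=26.0, t_dry=36.0)
```

A simple fusion could then threshold both maps jointly, e.g., flagging pixels as N-stressed only where the spectral index is low while the thermal index indicates adequate water.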
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Z., S. E. Grasby, C. Deblonde, and X. Liu. AI-enabled remote sensing data interpretation for geothermal resource evaluation as applied to the Mount Meager geothermal prospective area. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/330008.

Full text of the source
Abstract:
The objective of this study is to search for features and indicators from the identified geothermal resource sweet spot in the south Mount Meager area that are applicable to other volcanic complexes in the Garibaldi Volcanic Belt. A Landsat 8 multi-spectral band dataset, totalling 57 images ranging from visible through infrared to thermal infrared frequency channels and covering different years and seasons, was selected. Specific features indicative of high geothermal heat flux, fractured permeable zones, and groundwater circulation, the three key elements in geothermal resource exploration, were extracted. The thermal infrared images from different seasons show the occurrence of high-temperature anomalies and their association with volcanic and intrusive bodies, and reveal the variation in location and intensity of the anomalies over the four seasons, allowing inference of specific heat transfer mechanisms. Linear features extracted automatically from various frequency bands, using AI/ML computer vision algorithms, form linear segment groups that are likely surface expressions of local volcanic activity, regional deformation, and slope failure. In conjunction with regional structural models and field observations, the anomalies and features from remotely sensed images were interpreted to provide new insights into the Mount Meager geothermal system and its characteristics. After validation, the methods developed and indicators identified in this study can be applied to other volcanic complexes in the Garibaldi or other volcanic belts for geothermal resource reconnaissance.
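Automated lineament extraction of the kind described above typically starts from an edge map of a single band. A minimal sketch of that first step, using a hand-rolled Sobel gradient magnitude on a synthetic band with one vertical step (the study's actual AI/ML extraction pipeline is more involved):

```python
import numpy as np

SOBEL_X = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])
SOBEL_Y = SOBEL_X.T

def gradient_magnitude(band: np.ndarray) -> np.ndarray:
    """Sobel gradient magnitude of a single-band image (valid region
    only); high values mark candidate edge/lineament pixels."""
    h, w = band.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros_like(gx)
    for i in range(3):
        for j in range(3):
            patch = band[i:i + h - 2, j:j + w - 2]
            gx += SOBEL_X[i, j] * patch
            gy += SOBEL_Y[i, j] * patch
    return np.hypot(gx, gy)

# Synthetic band: a vertical brightness step acting as a "lineament"
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = gradient_magnitude(img)   # peaks along the step, zero elsewhere
```

Candidate edge pixels would then be thresholded and grouped into linear segments (e.g., by a Hough-style vote) before any geological interpretation.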
APA, Harvard, Vancouver, ISO, and other styles