A selection of scholarly literature on the topic "Image"

Format your source in the APA, MLA, Chicago, Harvard, or another citation style

Select a source type:

Browse lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Image".

Next to each work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, when these are available in the metadata.

Journal articles on the topic "Image":

1

Silva, Adriano Alves da. "O TEMPO IMAGÉTICO: a condensação do tempo na imagem fotográfica e a eternidade cíclica na imagem fílmica." Revista Observatório 4, no. 1 (January 1, 2018): 879. http://dx.doi.org/10.20873/uft.2447-4266.2018v4n1p879.

Abstract:
Communication professionals seek to create and mediatize meaningful images. Knowledge of how images are perceived is an important tool, both for producing images and for interpreting them. This study sets out to discuss, problematize, and present an original artistic essay on the idea of the time-image, in both the still and the filmic image, in light of the concepts presented by Jacques Aumont in the book "The Image". KEYWORDS: Visual narratives; image-time; photography; cinema.
2

Legland, David, and Marie-Françoise Devaux. "ImageM: a user-friendly interface for the processing of multi-dimensional images with Matlab." F1000Research 10 (April 30, 2021): 333. http://dx.doi.org/10.12688/f1000research.51732.1.

Abstract:
Modern imaging devices provide a wealth of data, often organized as images with many dimensions, such as 2D/3D, time and channel. Matlab is an efficient software solution for image processing, but it lacks many features facilitating the interactive interpretation of image data, such as user-friendly image visualization or the management of image meta-data (e.g. spatial calibration), thus limiting its application to bio-image analysis. The ImageM application provides an integrated user interface that facilitates the processing and analysis of multi-dimensional images within the Matlab environment. It offers user-friendly visualization of multi-dimensional images, a collection of image processing algorithms and methods for image analysis, management of spatial calibration, and facilities for the analysis of multi-variate images. ImageM can also be run on Octave, the open source alternative to Matlab. ImageM is freely distributed on GitHub: https://github.com/mattools/ImageM.
3

Vijayalakshmi, A., and V. Girish. "Affordable image analysis using NIH Image/ImageJ." Indian Journal of Cancer 41, no. 1 (2004): 47. http://dx.doi.org/10.4103/0019-509x.12345.

4

Janagam, Raju, and K. Yakub Reddy. "Automatic Image Captions for Lightly Labelled Images." International Journal of Trend in Scientific Research and Development 2, no. 3 (April 30, 2018): 452–54. http://dx.doi.org/10.31142/ijtsrd10786.

5

Liang, Bo, Xi Chen, Lan Yu, Song Feng, Yangfan Guo, Wenda Cao, Wei Dai, Yunfei Yang, and Ding Yuan. "High-precision Multichannel Solar Image Registration Using Image Intensity." Astrophysical Journal Supplement Series 261, no. 2 (July 20, 2022): 10. http://dx.doi.org/10.3847/1538-4365/ac7232.

Abstract:
Solar images observed in different channels with different instruments are crucial to the study of solar activity. However, the images have different fields of view, causing them to be misaligned. It is essential to accurately register the images for studying solar activity from multiple perspectives. Image registration is described as an optimization problem mapping an image to be registered onto a reference image. In this paper, we propose a novel coarse-to-fine method to register multichannel solar images. In the coarse registration step, we use the regular step gradient descent algorithm as an optimizer to maximize the normalized cross-correlation metric. The fine registration step uses the Powell–Brent algorithm as an optimizer and minimizes the Mattes mutual information similarity metric. We selected five pairs of images with different resolutions, rotation angles, and shifts to compare our results to those obtained by scale-invariant feature transform and phase correlation. The images were observed by the 1.6 m Goode Solar Telescope at Big Bear Solar Observatory and the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory. Furthermore, we used mutual information and registration time as criteria to quantify the registration results. The results show that the proposed method not only reaches better registration precision but also has better robustness. We also highlight that the method works well for time-series solar image registration.
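The paper's pipeline (regular step gradient descent over normalized cross correlation, then Powell–Brent over Mattes mutual information) is typically built on a registration toolkit such as ITK. As a rough, self-contained illustration of just the coarse idea, the pure-NumPy sketch below exhaustively searches integer shifts for the one maximizing normalized cross correlation; the function names and toy data are my own, not from the paper.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two equally shaped arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def coarse_register(fixed, moving, max_shift=5):
    """Exhaustively search integer shifts, keeping the one that maximizes NCC."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = ncc(fixed, shifted)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

# Synthetic demo: a bright blob, with the "moving" image shifted by (-2, +3)
rng = np.random.default_rng(0)
fixed = rng.normal(size=(64, 64))
fixed[20:30, 20:30] += 5.0
moving = np.roll(np.roll(fixed, -2, axis=0), 3, axis=1)
print(coarse_register(fixed, moving))  # recovers the inverse shift (2, -3)
```

A real fine-registration step would then refine this integer estimate with a sub-pixel optimizer and a mutual-information metric.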
6

Budige, Usharani, and Srikar Goud Konda. "Text To Image Generation By Using Stable Diffusion Model With Variational Autoencoder Decoder." International Journal for Research in Applied Science and Engineering Technology 11, no. 10 (October 31, 2023): 514–19. http://dx.doi.org/10.22214/ijraset.2023.56024.

Abstract:
Abstract: Imagen is a text-to-image diffusion model with a profound comprehension of language and an unmatched level of photorealism. Imagen relies on the potency of diffusion models for creating high-fidelity images and draws on the strength of massive transformer language models for comprehending text. Our most important finding is that general large language models, like T5, pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: expanding the language model in Imagen improves sample fidelity and image to text alignment much more than expanding the image diffusion model.
7

Strauss, Lourens Jochemus, and William ID Rae. "Image quality dependence on image processing software in computed radiography." South African Journal of Radiology 16, no. 2 (June 12, 2012): 44–48. http://dx.doi.org/10.4102/sajr.v16i2.305.

Abstract:
Background. Image post-processing gives computed radiography (CR) a considerable advantage over film-screen systems. After digitisation of information from CR plates, data are routinely processed using manufacturer-specific software. Agfa CR readers use MUSICA software, and an upgrade with significantly different image appearance was recently released: MUSICA2. Aim. This study quantitatively compares the quality of images acquired without post-processing (flatfield) with images processed using these two software packages. Methods. Four aspects of image quality were evaluated. An aluminium step-wedge was imaged using constant mA at tube voltages varying from 40 to 117 kV. Signal-to-noise ratios (SNRs) and contrast-to-noise ratios (CNRs) were calculated for all steps. Contrast variation with object size was evaluated by visual assessment of images of a Perspex contrast-detail phantom, and an image quality figure (IQF) was calculated. Resolution was assessed using modulation transfer functions (MTFs). Results. SNRs for MUSICA2 were generally higher than for the other two methods. CNRs were comparable between the two software versions, although MUSICA2 had slightly higher values at lower kV. The flatfield CNR values were better than those for the processed images. All images showed a decrease in CNR with tube voltage. The contrast-detail measurements showed that both MUSICA programmes improved the contrast of smaller objects. MUSICA2 gave the lowest (best) IQF; MTF measurements confirmed this, with values at 3.5 lp/mm of 10% for MUSICA2, 8% for MUSICA and 5% for flatfield. Conclusion. Both MUSICA packages produced images with better contrast resolution than unprocessed images. MUSICA2 has slightly better image quality than MUSICA.
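SNR and CNR figures like those above can be computed directly from pixel statistics of regions of interest. Definitions vary between studies; the sketch below uses one common convention (mean over standard deviation for SNR, mean difference over pooled noise for CNR) on synthetic step-wedge data, not the authors' actual measurement code.

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a nominally uniform region: mean over std."""
    return roi.mean() / roi.std()

def cnr(roi, background):
    """Contrast-to-noise ratio between a wedge step and the background."""
    noise = np.sqrt((roi.var() + background.var()) / 2.0)
    return abs(roi.mean() - background.mean()) / noise

# Toy data standing in for two ROIs drawn on a step-wedge exposure.
rng = np.random.default_rng(1)
step = rng.normal(loc=120.0, scale=4.0, size=(50, 50))    # one wedge step
backgr = rng.normal(loc=100.0, scale=4.0, size=(50, 50))  # flat background
print(round(snr(step)), round(cnr(step, backgr)))
```

With these parameters the expected values are roughly SNR ≈ 120/4 = 30 and CNR ≈ 20/4 = 5.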
8

Niitsu, M., H. Hirohata, H. Yoshioka, I. Anno, N. G. Campeau, and Y. Itai. "Magnetization Transfer Contrast on Gradient Echo MR Imaging of the Temporomandibular Joint." Acta Radiologica 36, no. 3 (May 1995): 295–99. http://dx.doi.org/10.1177/028418519503600317.

Abstract:
Thirty-nine temporomandibular joints (TMJ) from 20 patients with suspected internal derangements were imaged on a 1.5 T MR imager. An on-resonance binomial magnetization transfer contrast (MTC) pulse was applied to gradient echo images with a dual receiver coil (9 s/section). With the use of an opening device, a series of sequential images was obtained at increments of mouth opening and closing. The tissue signal intensities with (Ms) and without (Mo) MTC were measured and subjective image analysis was performed. Compared with the standard images, the MTC technique provided selective signal suppression of the disks. The average Ms/Mo ratio of the disks (0.56) was lower than that of the retrodiskal pad (0.79) and of the effusion (0.89). With the MTC technique, fluid conspicuity was superior to that on standard images. However, no significant subjective superiority was found in disk definition.
9

Shahi, D. "Reversible Steganography for RGB Images Using Image Interpolation." Journal of Advanced Research in Dynamical and Control Systems 12, no. 3 (March 20, 2020): 41–49. http://dx.doi.org/10.5373/jardcs/v12i3/20201165.

10

Rani, V. Amala, and S. Lalithakumari. "Efficient Hybrid Multimodal Image Fusion for Brain Images." Journal of Advanced Research in Dynamical and Control Systems 12, no. 8 (August 19, 2020): 116–23. http://dx.doi.org/10.5373/jardcs/v12i8/20202453.


Dissertations on the topic "Image":

1

Suri, Sahil. "Automatic image to image registration for multimodal remote sensing images." kostenfrei, 2010. https://mediatum2.ub.tum.de/node?id=967187.

2

Mennborg, Alexander. "AI-Driven Image Manipulation : Image Outpainting Applied on Fashion Images." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-85148.

Abstract:
The e-commerce industry frequently has to display product images on a website where the images are provided by selling partners. The images in question can have drastically different aspect ratios and resolutions, which makes it harder to present them while maintaining a coherent user experience. Manipulating images by cropping can result in parts of the foreground (i.e. the product or person within the image) being cut off. Image outpainting is a technique that extends images past their boundaries and can be used to alter their aspect ratio. Combined with object detection for locating the foreground, it makes it possible to manipulate images without sacrificing parts of the foreground. For image outpainting, a deep learning model was trained on product images that can extend images by at least 25%. The model achieves an 8.29 FID score, 44.29 PSNR score and 39.95 BRISQUE score. To test this solution in practice, a simple image manipulation pipeline was created that uses image outpainting when needed, and it shows promising results. Images can be manipulated in under a second on a ZOTAC GeForce RTX 3060 (12GB) GPU and in a few seconds on an Intel Core i7-8700K (16GB) CPU. There is also a special case of images whose background has been digitally replaced with a solid colour; these can be outpainted even faster without deep learning.
3

Dalkvist, Mikael. "Image Completion Using Local Images." Thesis, Linköpings universitet, Informationskodning, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70940.

Abstract:
Image completion is the process of removing an area from a photograph and replacing it with suitable data. Earlier methods either search for this data within the image itself or extend the search to some form of additional data, usually a database. Methods that search within the image itself fail when no suitable data can be found there. Methods that extend the search have, in earlier work, used either a database of labelled images or a massive database of photos from the Internet. For labels in a database to be useful, they typically need to be entered manually, which is a very time-consuming process. Methods that use databases of millions of Internet images have issues with copyrighted images, storage of the photographs and computation time. This work shows that a small database of the user's own private or professional photos can be used to improve the quality of image completions. A photographer today typically takes many similar photographs of similar scenes during a photo session. Therefore fewer images are needed to find visually and structurally similar matches than when random images downloaded from the Internet are used. This approach thus gains most of the advantages of using additional data for image completion while minimizing the disadvantages: it is better able to find suitable data without having to process millions of irrelevant photos.
4

Schilling, Lennart. "Generating synthetic brain MR images using a hybrid combination of Noise-to-Image and Image-to-Image GANs." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166034.

Abstract:
Generative Adversarial Networks (GANs) have attracted much attention because of their ability to learn high-dimensional, realistic data distributions. In the field of medical imaging, they can be used to augment the often small image sets available. In this way, for example, the training of image classification or segmentation models can be improved to support clinical decision making. GANs can be distinguished according to their input. While Noise-to-Image GANs synthesize new images from a random noise vector, Image-to-Image GANs translate a given image into another domain. This study investigates whether the performance of a Noise-to-Image GAN, defined by its generated output quality and diversity, can be improved by using elements of a previously trained Image-to-Image GAN within its training. The data used consist of paired T1- and T2-weighted MR brain images. With the objective of generating additional T1-weighted images, a hybrid model (Hybrid GAN) is implemented that combines elements of a Deep Convolutional GAN (DCGAN) as the Noise-to-Image GAN and a Pix2Pix as the Image-to-Image GAN. Starting from the dependency on an input image, the model is gradually converted into a Noise-to-Image GAN. Performance is evaluated using an independent classifier that estimates the divergence between the generative output distribution and the real data distribution. When comparing the Hybrid GAN performance with the DCGAN baseline, no improvement in either the quality or the diversity of the generated images could be observed. Consequently, it could not be shown that the performance of a Noise-to-Image GAN is improved by using elements of a previously trained Image-to-Image GAN within its training.
5

Murphy, Brian P. "Image processing techniques for acoustic images." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26585.

Abstract:
Approved for public release; distribution is unlimited
The primary goal of this research is to test the effectiveness of various image processing techniques applied to acoustic images generated in MATLAB. The simulated acoustic images have the same characteristics as those generated by a computer model of a high-resolution imaging sonar. Edge detection and segmentation are the two image processing techniques discussed in this study. The two methods tested are a modified version of Kalman filtering and median filtering.
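Median filtering, one of the two methods tested, is straightforward to sketch. The thesis works in MATLAB; the following is an illustrative pure-NumPy 3x3 median filter applied to a toy image with a single speckle (typical of acoustic imagery), not the thesis code.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter with edge replication, used as a speckle suppressor."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Stack the nine shifted views of the padded image and take the
    # per-pixel median across them.
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)

noisy = np.zeros((8, 8))
noisy[4, 4] = 100.0          # an isolated speckle
clean = median_filter3(noisy)
print(clean[4, 4])  # 0.0 -- the speckle is removed
```

An isolated outlier occupies only one of the nine values in every window, so the median discards it, which is why median filtering suppresses speckle while preserving edges better than mean filtering.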
6

Zeng, Ziming. "Medical image segmentation on multimodality images." Thesis, Aberystwyth University, 2013. http://hdl.handle.net/2160/17cd13c2-067c-451b-8217-70947f89164e.

Abstract:
Segmentation is a central problem in medical image analysis, with a wide range of applications in medical research. A great many medical image segmentation algorithms have been proposed, and many good segmentation results have been obtained. However, due to noise, density inhomogeneity, partial volume effects, and density overlap between normal and abnormal tissues in medical images, the segmentation accuracy and robustness of some state-of-the-art methods still have room for improvement. This thesis aims to address these problems and improve segmentation accuracy. The project investigated medical image segmentation methods across a range of modalities and clinical applications, covering magnetic resonance imaging (MRI) for brain tissue segmentation, MRI-based multiple sclerosis (MS) lesion segmentation, histology-based cell nuclei segmentation, and positron emission tomography (PET) based tumour detection. For brain MRI tissue segmentation, a method based on mutual information was developed to estimate the number of brain tissue groups, and an unsupervised segmentation method was proposed to segment the brain tissues. For MS lesion segmentation, 2D/3D joint histogram modelling was proposed to model the grey level distribution of MS lesions in multimodality MRI. For PET segmentation of head and neck tumours, two hierarchical methods based on improved active contour/surface modelling were proposed to segment the tumours in PET volumes. For histology-based cell nuclei segmentation, a novel unsupervised method based on adaptive active contour modelling driven by morphological initialization was proposed; the segmentation results were then further processed for subtype classification.
Among these segmentation approaches, a number of techniques (such as modified bias field fuzzy c-means clustering, multi-image spatially joint histogram representation, and convex optimisation of deformable models) were developed to deal with key problems in medical image segmentation. Experiments show that the novel methods in this thesis have great potential for various image segmentation scenarios and obtain more accurate and robust segmentation results than some state-of-the-art methods.
7

Khan, Preoyati. "Cluster Based Image Processing for ImageJ." Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1492164847520322.

8

Tummala, Sai Virali, and Veerendra Marni. "Comparison of Image Compression and Enhancement Techniques for Image Quality in Medical Images." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15360.

9

Asplund, Raquel. "Evaluation of a cloud-based image analysis and image display system for medical images." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-105984.

10

Karlsson, Simon, and Per Welander. "Generative Adversarial Networks for Image-to-Image Translation on Street View and MR Images." Thesis, Linköpings universitet, Institutionen för medicinsk teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148475.

Abstract:
Generative Adversarial Networks (GANs) are a deep learning method developed for synthesizing data. One application is image-to-image translation, which could prove valuable when training deep neural networks for image classification tasks. Two areas where deep learning methods are used are automotive vision systems and medical imaging. Automotive vision systems are expected to handle a broad range of scenarios, which demands training data with high diversity. The scenarios in the medical field are fewer, but there it is difficult, time-consuming and expensive to collect training data. This thesis evaluates different GAN models by comparing synthetic MR images produced by the models against ground truth images. A perceptual study was also performed by an expert in the field. The study shows that the implemented GAN models can synthesize visually realistic MR images. It also shows that models producing more visually realistic synthetic images do not necessarily score better on quantitative error measurements when compared to ground truth data. Along with the investigations on medical images, the thesis explores the possibility of generating synthetic street view images of different resolutions, light and weather conditions. Different GAN models were compared, implemented with our own adjustments, and evaluated. The results show that it is possible to create visually realistic images for different translations and image resolutions.

Books on the topic "Image":

1

Mello, Vicente de. Áspera imagem =: Âpre image = Rough image. Edited by Saraiva Alberto 1967-, Monterosso Jean-Luc, Oi Futuro (Rio de Janeiro, Brazil), Maison européenne de la photographie (Paris, France), and São Paulo (Brazil : State). Pinacoteca do Estado. Rio de Janeiro: Aeroplano Editora, 2006.

2

Tschumi, Bernard, ed. Virtuael: Expo 2004 image = images. Paris: Bernard Tschumi Architectes, 2002.

3

Alma, Lorenz, ed. Imago Sloveniae =: Image of Slovenia. Ljubljana: Imago Sloveniae, 1999.

4

Lorenz, Alma. Imago Sloveniae: Image of Slovenia. Ljubljana: Imago Sloveniae, 1999.

5

Robin, Jean-François. Image par image. [Castelnau-le-Lez?]: Climats, 1996.

6

Lander, Daniel. Image par image. Paris: Cherche midi, 1989.

7

McDowell, Josh. His image my image. Amersham-on-the Hill: Scripture Press, 1992.

8

McDowell, Josh. His image, my image. Nashville: T. Nelson, 1993.

9

McDowall, John, Chris Taylor, and Wild Pansy Press, eds. Text / image =: Image / text. Leeds: Wild Pansy Press, University of Leeds, 2004.

10

Roberts-Jones, Philippe. Image donnée, image reçue. Bruxelles: Palais des Académies, 1989.


Book chapters on the topic "Image":

1

Wolff, Robert S., and Larry Yaeger. "Images and Image Processing." In Visualization of Natural Phenomena, 1–26. New York, NY: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4684-0646-7_1.

2

Corke, Peter. "Images and Image Processing." In Springer Tracts in Advanced Robotics, 359–411. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54413-7_12.

3

Elad, Michael. "Image Compression – Facial Images." In Sparse and Redundant Representations, 247–71. New York, NY: Springer New York, 2010. http://dx.doi.org/10.1007/978-1-4419-7011-4_13.

4

Corke, Peter. "Images and Image Processing." In Robotic Vision, 103–55. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-79175-9_4.

5

Corke, Peter, Witold Jachimczyk, and Remo Pillat. "Images and Image Processing." In Springer Tracts in Advanced Robotics, 435–91. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-07262-8_11.

6

Corke, Peter. "Images and Image Processing." In Springer Tracts in Advanced Robotics, 417–78. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-06469-2_11.

7

Koeva, Svetla. "Multilingual Image Corpus." In European Language Grid, 313–18. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-17258-8_22.

Abstract:
The ELG pilot project Multilingual Image Corpus (MIC 21) provides a large image dataset with annotated objects and multilingual descriptions in 25 languages. Our main contributions are: the provision of a large collection of high-quality, copyright-free images; the formulation of an ontology of visual objects based on WordNet noun hierarchies; precise manual correction of automatic image segmentation and annotation of object classes; and association of objects and images with extended multilingual descriptions. The dataset is designed for image classification, object detection and semantic segmentation. It can also be used for multilingual image caption generation, image-to-text alignment and automatic question answering for images and videos.
8

Cree, Michael J., and Herbert F. Jelinek. "Image Analysis of Retinal Images." In Medical Image Processing, 249–68. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-9779-1_11.

9

Donchyts, Gennadii. "Exploring Image Collections." In Cloud-Based Remote Sensing with Google Earth Engine, 255–65. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-26588-4_13.

Abstract:
This chapter teaches how to explore image collections, including their spatiotemporal extent, resolution, and the values stored in images and image properties. You will learn how to map and inspect image collections using maps, charts, and interactive tools, and how to compute statistics of the values stored in image collections using reducers.
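Real Earth Engine reducer calls (e.g. `imageCollection.reduce(ee.Reducer.mean())`) require an authenticated `ee` session, so as an offline illustration of the reducer concept, the NumPy sketch below collapses a stack of co-registered images into one statistic per pixel; the toy data are invented.

```python
import numpy as np

# A "collection" of three co-registered single-band images (toy data).
collection = np.stack([
    np.full((4, 4), 0.2),
    np.full((4, 4), 0.4),
    np.full((4, 4), 0.9),
])

# Reducing the collection collapses the time axis into one statistic per
# pixel, analogous to reduce(ee.Reducer.mean()) / .median() in Earth Engine.
mean_img = collection.mean(axis=0)
median_img = np.median(collection, axis=0)
print(round(float(mean_img[0, 0]), 2), median_img[0, 0])  # 0.5 0.4
```

The output image has the same spatial shape as each input; only the collection axis disappears, which is exactly what a reducer does in Earth Engine.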
10

Bader, Nadia, Sina Hartmann, Notburga Karl, and Raphael Spielmann. "»Kein Bock mehr auf Zoom!« oder: Vom Spiel mit den Regeln als Bildungschance." In Image, 75–98. Bielefeld, Germany: transcript Verlag, 2023. http://dx.doi.org/10.14361/9783839465493-006.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.

Conference papers on the topic "Image":

1

Hoffmann, Rolf. "A High Quality Image Stitching Process for Industrial Image Processing and Quality Assurance." In OCM 2021 - 5th International Conference on Optical Characterization of Materials. KIT Scientific Publishin, 2021. http://dx.doi.org/10.58895/ksp/1000128686-18.

Abstract:
The size of the recording area of a camera is limited, as is the resolution of a camera image. To capture larger areas, a wide-angle lens can be used, for example; however, the image resolution per unit area then decreases. The decreased resolution can be compensated by image sensors with a higher number of pixels, but the pixel count of real image sensors is limited by the current state of the art and availability. Furthermore, a wide-angle lens has the disadvantage of a stronger distortion of the image scene, and the viewing direction from a central location is usually problematic in the outer areas of a wide-angle lens. Instead of using a wide-angle lens, the large image scene can be captured with several images, either by moving the camera or by using several cameras positioned accordingly. With multiple captures, using only the required single image is a simple way to evaluate a limited area of a large image scene with image processing; for example, it can be determined whether a feature limited in size is present in the scene. However, this simple variant of a moving camera system, or the use of single images, makes some image processing options difficult or even impossible, such as determining the positions and dimensions of features that exceed a single image. With moving camera systems, the required mechanics add effort, are subject to wear, and introduce a time factor. Image stitching techniques can reduce many of these problems in large image scenes. Here, single images are captured (by one or more cameras) and stitched together to fit, merging the original smaller single images into a larger coherent image scene.
Difficulties that arise here, and are problematic for use in industrial image processing, include the exact positioning of the single images relative to each other and the actual joining of the images, if possible without creating disturbing artifacts. This publication is intended to make a contribution to this.
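Production stitchers estimate the overlap between single images by feature matching (e.g. OpenCV's `Stitcher`) before compositing. The sketch below skips that estimation and assumes a known overlap, showing only the seam-blending step on toy data; all names are illustrative, not from the paper.

```python
import numpy as np

def stitch_horizontal(left, right, overlap):
    """Join two images that share `overlap` columns, cross-fading the seam."""
    h = left.shape[0]
    out_w = left.shape[1] + right.shape[1] - overlap
    out = np.zeros((h, out_w))
    out[:, :left.shape[1]] = left
    out[:, left.shape[1]:] = right[:, overlap:]
    # Linear cross-fade across the shared columns to avoid a visible seam.
    alpha = np.linspace(1.0, 0.0, overlap)
    seam = left[:, -overlap:] * alpha + right[:, :overlap] * (1.0 - alpha)
    out[:, left.shape[1] - overlap:left.shape[1]] = seam
    return out

left = np.tile(np.arange(6.0), (4, 1))         # columns 0..5
right = np.tile(np.arange(4.0, 10.0), (4, 1))  # columns 4..9, overlap of 2
pano = stitch_horizontal(left, right, overlap=2)
print(pano.shape)  # (4, 10)
```

The cross-fade is the simplest blending choice; industrial pipelines often use multi-band blending instead, precisely to avoid the "disturbing artifacts" the abstract mentions.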
2

Cho, Soojin, and Byunghyun Kim. "Image-driven Bridge Inspection Framework using Deep Learning and Image Registration." In IABSE Conference, Seoul 2020: Risk Intelligence of Infrastructures. Zurich, Switzerland: International Association for Bridge and Structural Engineering (IABSE), 2020. http://dx.doi.org/10.2749/seoul.2020.269.

Abstract:
<p>This paper proposes an image-driven bridge inspection framework that combines automated damage detection based on deep learning with image registration. A state-of-the-art deep learning model, Cascade Mask R-CNN (Mask and Region-based Convolutional Neural Networks), is trained to detect cracks, a representative damage type of bridges, in images taken of a bridge. The model is trained with more than a thousand training images containing cracks as well as crack-like objects (hard negative samples). Images taken of a test bridge are fed to the trained model, and the detected damage is then mapped onto a large image of each bridge component registered using commercial registration software. The performance of the proposed framework is evaluated on piers of existing bridges, whose external appearance was imaged using a DSLR with a telescopic lens. The results are compared with conventional visual inspection to analyse the performance and applicability of the proposed framework.</p>
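The registration step in this abstract maps per-photo detections into the coordinate frame of a large component image. The paper uses commercial registration software; as a hedged illustration of only the mapping step, the sketch below assumes a planar 3×3 homography `H` is already known and applies it to detected crack pixel coordinates (function and variable names are hypothetical):

```python
import numpy as np

def map_to_component_image(points, H):
    """Map (x, y) pixel coordinates of detected cracks from a single
    inspection photo onto the registered large component image, given
    a 3x3 planar homography H (assumed to be known here)."""
    pts = np.asarray(points, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    mapped = homo @ H.T                              # apply homography
    return mapped[:, :2] / mapped[:, 2:3]            # back to Cartesian
```

With the identity matrix the points are unchanged; with a translation homography each detection is shifted into the component image's frame.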
3

Doan, Tien Tai, Guillaume Ghyselinck, and Blaise Hanczar. "Informative Multimodal Unsupervised Image-to-Image Translation." In 9th International Conference of Security, Privacy and Trust Management (SPTM 2021). AIRCC Publishing Corporation, 2021. http://dx.doi.org/10.5121/csit.2021.110503.

Abstract:
We propose a new method of multimodal image translation, called InfoMUNIT, which is an extension of the state-of-the-art method MUNIT. Our method allows controlling the style of the generated images and improves their quality and diversity. It learns to maximize the mutual information between a subset of the style code and the distribution of the output images. Experiments show that our model can not only translate one image from the source domain to multiple images in the target domain but also explore and manipulate features of the outputs without annotation. Furthermore, it achieves superior diversity and competitive image quality compared to state-of-the-art methods in multiple image translation tasks.
4

Yang, S., Y. Wang, and C. Shrivastava. "Sedimentary Analysis Via Automatic Image Segmentation and Clustering with the LWD OBM Resistivity Image: A Case Study from Gulf of Mexico." In SPE Annual Technical Conference and Exhibition. SPE, 2023. http://dx.doi.org/10.2118/214908-ms.

Abstract:
Abstract Microfacies analysis is the first step in depositional environment interpretation and sand body prediction. Textural details from borehole images are the building blocks of facies analysis, representing different paleo-sedimentation conditions. The associated workflows have traditionally been applied to high-resolution borehole images manually by geologists and log analysts; automation via machine learning provides an opportunity to improve working efficiency and accuracy, and such an approach has given satisfactory results with post-drilling wireline images. In this paper, the improved workflow for sedimentary analysis was applied and validated with a logging-while-drilling (LWD) resistivity imager in an oil-based mud (OBM) environment. The OBM LWD resistivity image provides 72 data points at a single depth from 4 different frequencies of electromagnetic measurements with a patented processing, and the gap-free resistivity image gives a more confident texture characterization. The continuous histogram and correlogram derived from the image data were used for image segmentation. In each image segment, multiple vector properties representing different texture features were extracted from the image data, including an adaptive variogram in the horizontal direction. Agglomerative clustering was selected for its stability and repeatability; the internally built dendrogram allows the number of clusters to be determined automatically by finding a stable distance between the branches of the cluster hierarchy. In addition to the features extracted from the image data, optional petrophysical logs with variable weights may be fed to the algorithm for a better classification. A case study from the Gulf of Mexico is used to demonstrate this workflow with a Hi-Res LWD image. More than 10 different sedimentary geometries were classified automatically from image and petrophysical logs, and the microfacies were then named manually from the sedimentary geometries using the related geological concepts.

The fluvial channel and delta sedimentary environment were finally interpreted from the microfacies association, and the interpretation results were compared and validated against a published dip-based solution. This is the first automatic borehole image segmentation with LWD OBM images. The workflow improved working efficiency considerably, while the machine learning solution safeguarded the accuracy of the microfacies interpretation.
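The clustering idea described in this abstract, bottom-up merging that stops automatically once the remaining clusters are separated by a stable distance, can be sketched in a few lines. This is a minimal numpy illustration of agglomerative clustering with average linkage and a distance threshold, not the paper's actual implementation; the feature vectors stand in for the per-segment texture statistics:

```python
import numpy as np

def agglomerative_cluster(features, distance_threshold):
    """Bottom-up clustering of per-segment feature vectors (e.g. texture
    statistics from a borehole image segment). Merging stops once the
    closest pair of clusters is farther apart than `distance_threshold`,
    so the number of clusters is determined automatically, mimicking
    the dendrogram cut described in the abstract."""
    X = np.asarray(features, dtype=float)
    clusters = [[i] for i in range(len(X))]

    def average_linkage(a, b):
        # mean pairwise Euclidean distance between the two clusters
        return np.mean([np.linalg.norm(X[i] - X[j]) for i in a for j in b])

    while len(clusters) > 1:
        # find the closest pair of clusters
        pairs = [(average_linkage(clusters[i], clusters[j]), i, j)
                 for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        d, i, j = min(pairs)
        if d > distance_threshold:   # hierarchy branches are now "stable"
            break
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

For two well-separated groups of feature vectors and a threshold between the within-group and between-group distances, the function returns exactly two clusters without the number of clusters being specified in advance.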
5

Zhao, Zixiang, Shuang Xu, Chunxia Zhang, Junmin Liu, Jiangshe Zhang, and Pengfei Li. "DIDFuse: Deep Image Decomposition for Infrared and Visible Image Fusion." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/135.

Abstract:
Infrared and visible image fusion, a hot topic in the field of image processing, aims at obtaining fused images keeping the advantages of source images. This paper proposes a novel auto-encoder (AE) based fusion network. The core idea is that the encoder decomposes an image into background and detail feature maps with low- and high-frequency information, respectively, and that the decoder recovers the original image. To this end, the loss function makes the background/detail feature maps of source images similar/dissimilar. In the test phase, background and detail feature maps are respectively merged via a fusion module, and the fused image is recovered by the decoder. Qualitative and quantitative results illustrate that our method can generate fusion images containing highlighted targets and abundant detail texture information with strong reproducibility and meanwhile surpass state-of-the-art (SOTA) approaches.
6

Varnasuthan, S., W. T. M. Fernando, D. S. Dahanayaka, A. B. N. Dassanayake, M. A. D. M. G. Wickrama, and I. M. T. N. Illankoon. "Image analysis approach to determine the porosity of rocks." In International Symposium on Earth Resources Management & Environment - ISERME 2023. Department of Earth Resources Engineering, 2023. http://dx.doi.org/10.31705/iserme.2023.7.

Abstract:
Accurate characterisation of rock porosity is essential for assessing rock strength and durability. This study explores both conventional and image analysis methods for determining the porosity of two types of rock: Bibai sandstone, a hard clastic rock, and limestone, a soft rock. Conventional methods for determining rock porosity involve physical measurements and laboratory analysis, while image analysis methods utilise advanced imaging techniques such as CT scans or SEM to assess porosity based on visual information extracted from rock images. While various image analysis approaches exist for determining rock porosity, questions arise as to which approach is applicable and whether the results are comparable to those of current conventional methods. Hence, this study focuses on comparing the accuracy of alternative image analysis approaches. Representative rock chips from each core sample were examined using SEM, and 2D porosity was evaluated through image processing with the ImageJ software. The Avizo visualisation software was employed to assess the porosity of the Bibai sandstone samples from CT images. The research offers insights into the pros and cons of each approach, contributing to improved accuracy and efficiency in rock porosity evaluation, particularly in geology, mining, and civil engineering applications.
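The 2D porosity evaluation mentioned above amounts to segmenting pore pixels and taking their area fraction. As a hedged sketch of the threshold-based measurement one would perform in ImageJ (not the study's exact procedure), assuming pores appear dark in the SEM image:

```python
import numpy as np

def porosity_2d(gray_image, pore_threshold):
    """Estimate 2D porosity as the area fraction of pore pixels.
    Pixels darker than `pore_threshold` are counted as pore space
    (pores typically appear dark in SEM images of rock chips)."""
    img = np.asarray(gray_image)
    pore_pixels = np.count_nonzero(img < pore_threshold)
    return pore_pixels / img.size
```

Real workflows add preprocessing (noise filtering, illumination correction) and often choose the threshold automatically, e.g. with Otsu's method; the area fraction itself is computed exactly as above.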
7

Varghese, Subin, Rebecca Wang, and Vedhus Hoskere. "Image to Image Translation of Structural Damage Using Generative Adversarial Networks." In Structural Health Monitoring 2021. Destech Publications, Inc., 2022. http://dx.doi.org/10.12783/shm2021/36307.

Abstract:
In the aftermath of earthquakes, structures can become unsafe and hazardous for humans to reside in. Automated methods that detect structural damage can be invaluable for rapid inspections and faster recovery times. Deep neural networks (DNNs) have proven to be an effective means of classifying damaged areas in images of structures, but they have limited generalizability due to the lack of large and diverse annotated datasets (e.g., covering variations in building properties such as size, shape, and color). Given a dataset of paired images of damaged and undamaged structures, supervised deep learning methods could be employed, but such paired correspondences are exceedingly difficult to acquire; obtaining a variety of undamaged images, together with a smaller set of damaged images, is more viable. We present a novel application of deep learning for unpaired image-to-image translation between undamaged and damaged structures as a means of data augmentation to combat the lack of diverse data. Unpaired image-to-image translation is achieved using Cycle-Consistent Adversarial Network (CCAN) architectures, which can translate images while retaining the geometric structure of an image. We explore the capability of the original CCAN architecture and propose a new architecture for unpaired image-to-image translation (termed Eigen Integrated Generative Adversarial Network, or EIGAN) that addresses shortcomings of the original architecture for our application. We create a new unpaired dataset for translating images between the domains of damaged and undamaged structures, consisting of damaged and undamaged buildings in Mexico City affected by the 2017 Puebla earthquake. Qualitative and quantitative results of the various architectures are presented to compare the quality of the translated images, and the performance of DNNs trained to classify damaged structures using generated images is also compared.

The results demonstrate that targeted image-to-image translation of undamaged to damaged structures is an effective means of data augmentation to improve network performance.
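The key constraint that lets cycle-consistent architectures translate images while retaining their geometric structure is the cycle-consistency loss: translating undamaged to damaged and back should reproduce the input. A minimal sketch of that term (framework-agnostic, with plain callables standing in for the two trained generators, which the abstract names G and F only implicitly):

```python
import numpy as np

def cycle_consistency_loss(G, F, x_undamaged, y_damaged):
    """L1 cycle-consistency term used by CycleGAN-style architectures:
    G translates undamaged -> damaged, F translates damaged -> undamaged.
    Reconstructing each input after a round trip through both generators
    is what preserves the building's geometry across translation."""
    forward = np.abs(F(G(x_undamaged)) - x_undamaged).mean()
    backward = np.abs(G(F(y_damaged)) - y_damaged).mean()
    return forward + backward
```

If the two generators are exact inverses, the loss vanishes; any deviation from a faithful round trip is penalized, which is what the full training objective combines with the adversarial terms.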
8

Cheng, Li-Chang, and Joerg Meyer. "Volumetric Image Alignment Utilizing Particle Swarm Optimization." In ASME 2009 4th Frontiers in Biomedical Devices Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/biomed2009-83084.

Abstract:
In some cases an object cannot be completely captured in a single image. These cases call for taking multiple images and stitching them together to form a complete view of the object. The images dealt with here are taken with a confocal laser-scanning microscope, which produces a stack of two-dimensional images representing the volume in the microscope's field of view. While the microscope automatically creates a z-stack, the positioning of the tiles in x and y is done manually; the slices within each stack are therefore assumed to be already aligned. Our technique takes multiple stacks of images and stitches them together to form a single volume, using a particle swarm optimization technique to calculate the proper transformations required to produce the final volume.
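The optimizer named in this abstract can be sketched independently of the imaging details. Below is a minimal particle swarm optimizer, not the authors' implementation: in their setting the objective would score a candidate (x, y) or (x, y, z) transformation by how well the translated image stacks match, whereas the test here uses a simple quadratic stand-in.

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0), seed=0):
    """Minimal particle swarm optimizer. Each particle's velocity is
    pulled toward its own best-seen position (pbest) and the swarm's
    best position (gbest); the swarm converges on a minimizer of
    `objective`, which here would measure stack misalignment."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest
```

PSO is attractive for this alignment problem because it needs no gradients of the similarity measure and copes with the many local optima that repetitive microscope textures produce.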
9

Kim, Do-Hyung, No-Cheol Park, Sungbin Jeon, and Young-Pil Park. "Novel Method of Crosstalk Analysis in Multiple Image Encryption and Image Quality Equalization Technology." In ASME 2014 Conference on Information Storage and Processing Systems. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/isps2014-6909.

Abstract:
A novel crosstalk analysis method is suggested for the optical encryption of multiple images, with the aim of optimizing the total capacity of the stored images. Using the suggested method, we analyze the effect of crosstalk noise on each individual image. The results verify the crosstalk robustness of individual images for various target images and effectively explain the imbalance in image quality among encrypted multiple images. In addition, a simple modulation method is adopted to equalize image quality, showing highly improved results compared to conventional methods.
10

Yoder, Michael F. "Images and image domains in Image Algebra Ada." In San Diego '90, 8-13 July, edited by Paul D. Gader. SPIE, 1990. http://dx.doi.org/10.1117/12.23595.


Organizational reports on the topic "Image":

1

Lingner, Stefan, Karl Heger, and Claas Faber. MAMS Image Broker. GEOMAR, December 2022. http://dx.doi.org/10.3289/sw_3_2022.

Abstract:
The GEOMAR Helmholtz Centre for Ocean Research Kiel operates a Media Asset Management System (MAMS) which manages large amounts of image and video data. Similar systems are operated by the Helmholtz Centre HEREON and by the Alfred Wegener Institute (AWI). Although the MAMS provides access to data and metadata through an API, it is not possible to request image data directly without prior knowledge of the internal MAMS data structure. The image broker is a web service that mediates between a client (e.g. a web browser) and the MAMS. It allows users to request images by metadata values (e.g. the image uuid). The broker uses the [IIIF](https://iiif.io/) standard, which allows users to request images in different formats, as scaled copies, or as specific parts of an image.
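The IIIF Image API that the broker exposes encodes the requested format, scaling, and region directly in the URL path, in the canonical order {identifier}/{region}/{size}/{rotation}/{quality}.{format}. A small helper showing how a client would construct such requests (the base URL and identifier below are hypothetical examples, not actual MAMS endpoints):

```python
def iiif_image_url(base, identifier, region="full", size="max",
                   rotation="0", quality="default", fmt="jpg"):
    """Build a IIIF Image API request URL. Examples of the parameters:
    region="0,0,1024,768" requests only that part of the image,
    size="500," requests a scaled copy 500 pixels wide,
    fmt="png" requests a different output format."""
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"
```

For instance, a full-size default request for an image identified by its uuid would be `{base}/{uuid}/full/max/0/default.jpg`, and changing `region` or `size` retrieves a crop or thumbnail without downloading the original.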
2

Wendelberger, James G. Image Analysis of Stringer Area: Three Images. Office of Scientific and Technical Information (OSTI), August 2018. http://dx.doi.org/10.2172/1467243.

3

Thayer, Colette, and Laura Skufca. Media Image Landscape: Age Representation in Online Images. AARP Research, September 2019. http://dx.doi.org/10.26419/res.00339.001.

4

Doerry, Armin, and Douglas Bickel. Synthetic Aperture Radar Image Geolocation Using Fiducial Images. Office of Scientific and Technical Information (OSTI), October 2022. http://dx.doi.org/10.2172/1890785.

5

Rosenfeld, Azriel. Parallel Image Processing and Image Understanding. Fort Belvoir, VA: Defense Technical Information Center, March 1986. http://dx.doi.org/10.21236/ada183223.

6

Rosenfeld, A. Parallel Image Processing and Image Understanding. Fort Belvoir, VA: Defense Technical Information Center, July 1985. http://dx.doi.org/10.21236/ada159029.

7

Wendelberger, James G., Kimberly Ann Kaufeld, Elizabeth J. Kelly, Juan Duque, and John M. Berg. Automated Image Analysis for Screening LCM Images – Examples from 09DE2 Images. Office of Scientific and Technical Information (OSTI), February 2018. http://dx.doi.org/10.2172/1422981.

8

Boopalan, Santhana. Aerial Wildlife Image Repository. Mississippi State University, 2023. http://dx.doi.org/10.54718/wvgf3020.

Abstract:
The availability of an ever-improving repository of datasets gives machine learning algorithms a robust training set of images, which in turn allows for accurate detection and classification of wildlife. This repository (AWIR: the Aerial Wildlife Image Repository) is intended as a step toward a collaboratively built, rich dataset, both in terms of animal taxa and in terms of the sensors used for observation (visible, infrared, lidar, etc.). Initially, priority is given to wildlife species hazardous to aircraft and to common wildlife damage-associated species. The AWIR dataset is accompanied by a classification benchmarking website showcasing examples of state-of-the-art algorithms recognizing the wildlife in the images.
9

Caldeira, J., W. L. K. Wu, and B. Nord. DeepCMB: Saliency for Image-to-Image Regression. Office of Scientific and Technical Information (OSTI), October 2019. http://dx.doi.org/10.2172/1637621.

10

Self, Herschel C. Optical Displays: A Tutorial on Images and Image Formation. Fort Belvoir, VA: Defense Technical Information Center, October 1992. http://dx.doi.org/10.21236/ada266230.

