Journal articles on the topic "Image"

To see the other types of publications on this topic, follow the link: Image.

Format your citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the top 50 journal articles for your research on the topic "Image".

Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever such details are available in the publication's metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Silva, Adriano Alves da. "O TEMPO IMAGÉTICO: a condensação do tempo na imagem fotográfica e a eternidade cíclica na imagem fílmica." Revista Observatório 4, no. 1 (January 1, 2018): 879. http://dx.doi.org/10.20873/uft.2447-4266.2018v4n1p879.

Abstract:
Communication professionals seek to create and mediatize meaningful images. An understanding of how images are perceived is therefore an important tool, both for producing images and for interpreting them. This study discusses, problematizes, and presents an authorial artistic essay on the idea of the time-image, in both the still photographic image and the filmic image, in light of the concepts presented by Jacques Aumont in the book "A Imagem" ("The Image"). KEYWORDS: Visual narratives; image-time; photography; cinema.
2

Legland, David, and Marie-Françoise Devaux. "ImageM: a user-friendly interface for the processing of multi-dimensional images with Matlab." F1000Research 10 (April 30, 2021): 333. http://dx.doi.org/10.12688/f1000research.51732.1.

Abstract:
Modern imaging devices provide a wealth of data often organized as images with many dimensions, such as 2D/3D, time and channel. Matlab is an efficient software solution for image processing, but it lacks many features facilitating the interactive interpretation of image data, such as a user-friendly image visualization, or the management of image meta-data (e.g. spatial calibration), thus limiting its application to bio-image analysis. The ImageM application proposes an integrated user interface that facilitates the processing and the analysis of multi-dimensional images within the Matlab environment. It provides a user-friendly visualization of multi-dimensional images, a collection of image processing algorithms and methods for analysis of images, the management of spatial calibration, and facilities for the analysis of multi-variate images. ImageM can also be run on the open source alternative software to Matlab, Octave. ImageM is freely distributed on GitHub: https://github.com/mattools/ImageM.
3

Vijayalakshmi, A., and V. Girish. "Affordable image analysis using NIH Image/ImageJ." Indian Journal of Cancer 41, no. 1 (2004): 47. http://dx.doi.org/10.4103/0019-509x.12345.

4

Janagam, Raju, and K. Yakub Reddy. "Automatic Image Captions for Lightly Labelled Images." International Journal of Trend in Scientific Research and Development Volume-2, Issue-3 (April 30, 2018): 452–54. http://dx.doi.org/10.31142/ijtsrd10786.

5

Liang, Bo, Xi Chen, Lan Yu, Song Feng, Yangfan Guo, Wenda Cao, Wei Dai, Yunfei Yang, and Ding Yuan. "High-precision Multichannel Solar Image Registration Using Image Intensity." Astrophysical Journal Supplement Series 261, no. 2 (July 20, 2022): 10. http://dx.doi.org/10.3847/1538-4365/ac7232.

Abstract:
Abstract Solar images observed in different channels with different instruments are crucial to the study of solar activity. However, the images have different fields of view, causing them to be misaligned. It is essential to accurately register the images for studying solar activity from multiple perspectives. Image registration is described as an optimizing problem from an image to be registered to a reference image. In this paper, we proposed a novel coarse-to-fine solar image registration method to register the multichannel solar images. In the coarse registration step, we used the regular step gradient descent algorithm as an optimizer to maximize the normalized cross correlation metric. The fine registration step uses the Powell–Brent algorithms as an optimizer and brings the Mattes mutual information similarity metric to the minimum. We selected five pairs of images with different resolutions, rotation angles, and shifts to compare and evaluate our results to those obtained by scale-invariant feature transform and phase correlation. The images are observed by the 1.6 m Goode Solar Telescope at Big Bear Solar Observatory and the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory. Furthermore, we used the mutual information and registration time criteria to quantify the registration results. The results prove that the proposed method not only reaches better registration precision but also has better robustness. Meanwhile, we want to highlight that the method can also work well for the time-series solar image registration.
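For readers who want to experiment with the coarse-to-fine idea described in this abstract (gradient descent on normalized cross-correlation, followed by Powell-style refinement of Mattes mutual information), a minimal sketch using the SimpleITK registration framework is given below. It is an independent illustration, not the authors' code; the file names and parameter values are placeholders.

```python
# Illustrative two-stage (coarse-to-fine) rigid registration sketch using SimpleITK.
# NOT the authors' implementation; file names and parameters are placeholders.
import SimpleITK as sitk

fixed = sitk.ReadImage("reference_channel.png", sitk.sitkFloat32)   # placeholder path
moving = sitk.ReadImage("moving_channel.png", sitk.sitkFloat32)     # placeholder path

initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler2DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

# Stage 1: coarse registration -- regular-step gradient descent on the correlation metric
# (SimpleITK minimizes the negated normalized cross-correlation).
coarse = sitk.ImageRegistrationMethod()
coarse.SetMetricAsCorrelation()
coarse.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
coarse.SetInterpolator(sitk.sitkLinear)
coarse.SetInitialTransform(initial, inPlace=False)
coarse_tx = coarse.Execute(fixed, moving)

# Stage 2: fine registration -- Powell-style optimizer on Mattes mutual information.
fine = sitk.ImageRegistrationMethod()
fine.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
fine.SetOptimizerAsPowell(numberOfIterations=100, stepLength=0.5)
fine.SetInterpolator(sitk.sitkLinear)
fine.SetInitialTransform(coarse_tx, inPlace=False)
final_tx = fine.Execute(fixed, moving)

registered = sitk.Resample(moving, fixed, final_tx, sitk.sitkLinear, 0.0)
```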
6

Budige, Usharani, and Srikar Goud Konda. "Text To Image Generation By Using Stable Diffusion Model With Variational Autoencoder Decoder." International Journal for Research in Applied Science and Engineering Technology 11, no. 10 (October 31, 2023): 514–19. http://dx.doi.org/10.22214/ijraset.2023.56024.

Abstract:
Abstract: Imagen is a text-to-image diffusion model with a profound comprehension of language and an unmatched level of photorealism. Imagen relies on the potency of diffusion models for creating high-fidelity images and draws on the strength of massive transformer language models for comprehending text. Our most important finding is that general large language models, like T5, pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: expanding the language model in Imagen improves sample fidelity and image to text alignment much more than expanding the image diffusion model.
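The text-to-image setup described here (a latent diffusion model whose VAE decoder maps denoised latents back to pixels) can be tried with the Hugging Face diffusers library. The sketch below is illustrative only and is not the authors' pipeline; the model identifier, prompt, and GPU availability are assumptions.

```python
# Minimal text-to-image sketch with Hugging Face diffusers (a latent diffusion model whose
# VAE decoder turns denoised latents into pixels). Not the authors' pipeline; the model ID
# and prompt are examples only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")                      # assumes a CUDA GPU is available

prompt = "a photorealistic image of a lighthouse at sunset"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("generated.png")
```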
7

Strauss, Lourens Jochemus, and William ID Rae. "Image quality dependence on image processing software in computed radiography." South African Journal of Radiology 16, no. 2 (June 12, 2012): 44–48. http://dx.doi.org/10.4102/sajr.v16i2.305.

Abstract:
Background. Image post-processing gives computed radiography (CR) a considerable advantage over film-screen systems. After digitisation of information from CR plates, data are routinely processed using manufacturer-specific software. Agfa CR readers use MUSICA software, and an upgrade with significantly different image appearance was recently released: MUSICA2. Aim. This study quantitatively compares the image quality of images acquired without post-processing (flatfield) with images processed using these two software packages. Methods. Four aspects of image quality were evaluated. An aluminium step-wedge was imaged using constant mA at tube voltages varying from 40 to 117 kV. Signal-to-noise ratios (SNRs) and contrast-to-noise ratios (CNRs) were calculated from all steps. Contrast variation with object size was evaluated with visual assessment of images of a Perspex contrast-detail phantom, and an image quality figure (IQF) was calculated. Resolution was assessed using modulation transfer functions (MTFs). Results. SNRs for MUSICA2 were generally higher than for the other two methods. The CNRs were comparable between the two software versions, although MUSICA2 had slightly higher values at lower kV. The flatfield CNR values were better than those for the processed images. All images showed a decrease in CNRs with tube voltage. The contrast-detail measurements showed that both MUSICA programmes improved the contrast of smaller objects. MUSICA2 was found to give the lowest (best) IQF; MTF measurements confirmed this, with values at 3.5 lp/mm of 10% for MUSICA2, 8% for MUSICA and 5% for flatfield. Conclusion. Both MUSICA software packages produced images with better contrast resolution than unprocessed images. MUSICA2 showed slightly better image quality than MUSICA.
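The SNR and CNR figures reported above come from region-of-interest statistics on the step-wedge images. A minimal numpy sketch of those two measurements is given below; the ROI coordinates are placeholders, not the study's data, and the definitions follow the usual mean/standard-deviation convention.

```python
# Illustrative SNR and CNR measurements from two regions of interest (ROIs) of a step-wedge
# radiograph held in a numpy array `img`. ROI coordinates are placeholders.
import numpy as np

def snr_cnr(img):
    step_roi = img[100:150, 200:250].astype(np.float64)        # pixels inside one wedge step
    background_roi = img[100:150, 400:450].astype(np.float64)  # adjacent background pixels
    snr = step_roi.mean() / step_roi.std(ddof=1)
    cnr = abs(step_roi.mean() - background_roi.mean()) / background_roi.std(ddof=1)
    return snr, cnr
```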
8

Niitsu, M., H. Hirohata, H. Yoshioka, I. Anno, N. G. Campeau, and Y. Itai. "Magnetization Transfer Contrast on Gradient Echo MR Imaging of the Temporomandibular Joint." Acta Radiologica 36, no. 3 (May 1995): 295–99. http://dx.doi.org/10.1177/028418519503600317.

Abstract:
Thirty-nine temporomandibular joints (TMJ) from 20 patients with suspected internal derangements were imaged by a 1.5 T MR imager. The on-resonance binomial magnetization transfer contrast (MTC) pulse was applied to gradient echo images with a dual receiver coil (9 s/section). With the use of an opening device, a series of sequential images were obtained at increments of mouth opening and closing. The tissue signal intensities with (Ms) and without (Mo) MTC were measured and subjective image analysis was performed. Compared with the standard images, MTC technique provided selective signal suppression of disks. The average of Ms/Mo ratio of the disks (0.56) was lower than that of the retrodiskal pad (0.79) and of the effusion (0.89). With MTC technique, fluid conspicuity was superior to standard image. However, no significant superiority was found in disk definition subjectively.
9

D, Shahi. "Reversible Steganography for RGB Images Using Image Interpolation." Journal of Advanced Research in Dynamical and Control Systems 12, no. 3 (March 20, 2020): 41–49. http://dx.doi.org/10.5373/jardcs/v12i3/20201165.

10

Rani, V. Amala, and Dr Lalithakumari S. "Efficient Hybrid Multimodal Image Fusion for Brain Images." Journal of Advanced Research in Dynamical and Control Systems 12, no. 8 (August 19, 2020): 116–23. http://dx.doi.org/10.5373/jardcs/v12i8/20202453.

11

Vivekalakshmi, R., B. Yaalini, and S. Karthik Raja Ms Sruthi Anand. "Image Re-Ranking Modernistic Structure for Web Images." International Journal of Trend in Scientific Research and Development Volume-2, Issue-1 (December 31, 2017): 535–38. http://dx.doi.org/10.31142/ijtsrd7005.

12

Nandeesh, M. D., and Dr M. Meenakshi. "Image Fusion Algorithms for Medical Images-A Comparison." Bonfring International Journal of Advances in Image Processing 5, no. 3 (July 31, 2015): 23–26. http://dx.doi.org/10.9756/bijaip.8051.

13

Asri, Isnindar Tandya, Chomsin Sulistya Widodo, and Yuyun Yueniwati Prabowowati Wadjib. "Comparison of Grayscale Value in T1-Weighted Pre- and Post-Contrast Brain MRI Images: with and without Fat Suppression Technique." Journal of Physics: Conference Series 2049, no. 1 (October 1, 2021): 012057. http://dx.doi.org/10.1088/1742-6596/2049/1/012057.

Abstract:
Abstract The MRI T1-weighted image can provide information in both pre- and post-contrast form. A post-contrast image is obtained after the administration of a GBCA. In some cases, not all post-contrast images show clear lesions, so an additional technique is required in the form of fat suppression (FS), which works by suppressing the fat signal in an image. T1-weighted images with and without FS have different signal intensities. Therefore, the purpose of this study is to compare the signal intensity of pre- and post-contrast T1-weighted images with and without the FS technique. The signal intensities are indicated by a grayscale value. There are seven T1-weighted images with FS and seven without, each with a pre- and a post-contrast acquisition. Image reading was done by a radiology specialist, areas were plotted on the abnormal tissue in each image, and each area was measured with the ImageJ software to obtain the mean grayscale value. The measurements of the post-contrast T1-weighted images showed an increase in the mean grayscale value both with and without the FS technique, showing that the administration of a GBCA can increase the signal intensity on T1-weighted images with or without FS.
14

Shapiro, Alan. "Images: Real and Virtual, Projected and Perceived, from Kepler to Dechales." Early Science and Medicine 13, no. 3 (2008): 270–312. http://dx.doi.org/10.1163/157338208x285044.

Abstract:
AbstractIn developing a new theory of vision in Ad Vitellionem paralipomena (1604) Kepler introduced a new optical concept, pictura, which is an image projected on to a screen by a camera obscura. He distinguished this pictura from an imago, the traditional image of medieval optics that existed only in the imagination. By the 1670s a new theory of optical imagery had been developed, and Kepler's pictura and imago became real and virtual images, two aspects of a unified concept of image. The new concept of image developed out of a synthesis of Kepler's determination of the geometrical location of a pictura as the limit, or focus, of refracted pencils of rays and the triangulation used by a single eye to determine the perceived location of an imago. The distinction between real and imaginary images was largely developed by Gilles Personne de Roberval and the Jesuits Francesco Eschinardi and Claude François Milliet Dechales.
15

Madhu, Shrija, and Mohammed Ali Hussain. "Securing Medical Images by Image Encryption using Key Image." International Journal of Computer Applications 104, no. 3 (October 18, 2014): 30–34. http://dx.doi.org/10.5120/18184-9079.

16

Han, Z., X. Tang, X. Gao, and F. Hu. "IMAGE FUSION AND IMAGE QUALITY ASSESSMENT OF FUSED IMAGES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-7/W1 (July 12, 2013): 33–36. http://dx.doi.org/10.5194/isprsarchives-xl-7-w1-33-2013.

17

Sanderson, H. "Image segmentation for compression of images and image sequences." IEE Proceedings - Vision, Image, and Signal Processing 142, no. 1 (1995): 15. http://dx.doi.org/10.1049/ip-vis:19951681.

18

Schneider, Caroline A., Wayne S. Rasband, and Kevin W. Eliceiri. "NIH Image to ImageJ: 25 years of image analysis." Nature Methods 9, no. 7 (June 28, 2012): 671–75. http://dx.doi.org/10.1038/nmeth.2089.

19

Tetard, Martin, Ross Marchant, Giuseppe Cortese, Yves Gally, Thibault de Garidel-Thoron, and Luc Beaufort. "Technical note: A new automated radiolarian image acquisition, stacking, processing, segmentation and identification workflow." Climate of the Past 16, no. 6 (December 2, 2020): 2415–29. http://dx.doi.org/10.5194/cp-16-2415-2020.

Abstract:
Abstract. Identification of microfossils is usually done by expert taxonomists and requires time and a significant amount of systematic knowledge developed over many years. These studies require manual identification of numerous specimens in many samples under a microscope, which is very tedious and time-consuming. Furthermore, identification may differ between operators, biasing reproducibility. Recent technological advances in image acquisition, processing and recognition now enable automated procedures for this process, from microscope image acquisition to taxonomic identification. A new workflow has been developed for automated radiolarian image acquisition, stacking, processing, segmentation and identification. The protocol includes a newly proposed methodology for preparing radiolarian microscopic slides. We mount eight samples per slide, using a recently developed 3D-printed decanter that enables the random and uniform settling of particles and minimizes the loss of material. Once ready, slides are automatically imaged using a transmitted light microscope. About 4000 specimens per slide (500 per sample) are captured in digital images that include stacking techniques to improve their focus and sharpness. Automated image processing and segmentation is then performed using a custom plug-in developed for the ImageJ software. Each individual radiolarian image is automatically classified by a convolutional neural network (CNN) trained on a Neogene to Quaternary radiolarian database (currently 21 746 images, corresponding to 132 classes) using the ParticleTrieur software. The trained CNN has an overall accuracy of about 90 %. The whole procedure, including the image acquisition, stacking, processing, segmentation and recognition, is entirely automated via a LabVIEW interface, and it takes approximately 1 h per sample. Census data count and classified radiolarian images are then automatically exported and saved. This new workflow paves the way for the analysis of long-term, radiolarian-based palaeoclimatic records from siliceous-remnant-bearing samples.
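The classification stage of this workflow is a CNN trained with the ParticleTrieur software on 132 radiolarian classes. The sketch below only illustrates what such an image classifier looks like in Keras; the directory layout, input size, and architecture are assumptions, not the published model.

```python
# Schematic CNN classifier in the spirit of the workflow above. The authors train their
# network via ParticleTrieur; this independent sketch uses tf.keras, and the directory name,
# image size and architecture are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

train_ds = tf.keras.utils.image_dataset_from_directory(
    "radiolarian_images/train", image_size=(128, 128), batch_size=32)  # one folder per class

num_classes = len(train_ds.class_names)   # the published database has 132 classes

model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```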
20

Yahanda, Alexander T., Timothy J. Goble, Peter T. Sylvester, Gretchen Lessman, Stanley Goddard, Bridget McCollough, Amar Shah, Trevor Andrews, Tammie L. S. Benzinger, and Michael R. Chicoine. "Impact of 3-Dimensional Versus 2-Dimensional Image Distortion Correction on Stereotactic Neurosurgical Navigation Image Fusion Reliability for Images Acquired With Intraoperative Magnetic Resonance Imaging." Operative Neurosurgery 19, no. 5 (June 10, 2020): 599–607. http://dx.doi.org/10.1093/ons/opaa152.

Abstract:
Abstract BACKGROUND Fusion of preoperative and intraoperative magnetic resonance imaging (iMRI) studies during stereotactic navigation may be very useful for procedures such as tumor resections but can be subject to error because of image distortion. OBJECTIVE To assess the impact of 3-dimensional (3D) vs 2-dimensional (2D) image distortion correction on the accuracy of auto-merge image fusion for stereotactic neurosurgical images acquired with iMRI using a head phantom in different surgical positions. METHODS T1-weighted intraoperative images of the head phantom were obtained using 1.5T iMRI. Images were postprocessed with 2D and 3D image distortion correction. These studies were fused to T1-weighted preoperative MRI studies performed on a 1.5T diagnostic MRI. The reliability of the auto-merge fusion of these images for 2D and 3D correction techniques was assessed both manually using the stereotactic navigation system and via image analysis software. RESULTS Eight surgical positions of the head phantom were imaged with iMRI. Greater image distortion occurred with increased distance from isocenter in all 3 axes, reducing accuracy of image fusion to preoperative images. Visually reliable image fusions were accomplished in 2/8 surgical positions using 2D distortion correction and 5/8 using 3D correction. Three-dimensional correction yielded superior image registration quality as defined by higher maximum mutual information values, with improvements ranging between 2.3% and 14.3% over 2D correction. CONCLUSION Using 3D distortion correction enhanced the reliability of surgical navigation auto-merge fusion of phantom images acquired with iMRI across a wider range of head positions and may improve the accuracy of stereotactic navigation using iMRI images.
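Registration quality is quantified here by maximum mutual information, which can be computed directly from the joint histogram of two aligned images. The small illustrative function below is not the navigation system's internal routine; the bin count is an arbitrary choice.

```python
# Mutual information of two equally sized grayscale images from their joint histogram.
# Illustrative only -- not the vendor software used in the study.
import numpy as np

def mutual_information(a, b, bins=64):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Example: MI is higher for a well-aligned pair than for a shifted copy.
rng = np.random.default_rng(0)
img = rng.random((256, 256))
print(mutual_information(img, img))                       # self-alignment: high MI
print(mutual_information(img, np.roll(img, 40, axis=1)))  # shifted copy: lower MI
```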
21

Calderón Noguera, Donald Freddy. "La imagen del indio en Ingermina o la hija de Calamar de Juan José Nieto: de la hija de Calamar a la hija de Velásquez." La Palabra, no. 17 (April 12, 2012): 17–28. http://dx.doi.org/10.19053/01218530.n17.2010.930.

Abstract:
This article constitutes a research advance of the interdisciplinary research project "Imago: the Construction of Latin American Image Repertoire through Literature and the Arts", whose purpose is to question the role of artistic texts in the image construction of our continental region. In this study, we present a theoretical reflection on image and, following a literal reading of the novel Ingermina, we reconstruct the image of the "Indian" through the refractions mirrored by the text. Key words: image, Indian, novel, peasant.
22

Kinosita, K., H. Itoh, S. Ishiwata, K. Hirano, T. Nishizaka, and T. Hayakawa. "Dual-view microscopy with a single camera: real-time imaging of molecular orientations and calcium." Journal of Cell Biology 115, no. 1 (October 1, 1991): 67–73. http://dx.doi.org/10.1083/jcb.115.1.67.

Abstract:
A new microscope technique, termed "W" (double view video) microscopy, enables simultaneous observation of two different images of an object through a single video camera or by eye. The image pair may, for example, be transmission and fluorescence, fluorescence at different wavelengths, or mutually perpendicular components of polarized fluorescence. Any video microscope can be converted into a dual imager by simple insertion of a small optical device. The continuous appearance of the dual image assures the best time resolution in existing and future video microscopes. As an application, orientations of actin protomers in individual, moving actin filaments have been imaged at the video rate. Asymmetric calcium influxes into a cell exposed to an intense electric pulse have also been visualized.
23

Badgainya, Shruti, Prof Pankaj Sahu, and Prof Vipul Awasthi. "Image Denoising by OWT for Gaussian Noise Corrupted Images." International Journal of Trend in Scientific Research and Development Volume-2, Issue-5 (August 31, 2018): 2477–84. http://dx.doi.org/10.31142/ijtsrd18337.

24

Sravani, L., N. Rama Venkat Sai, K. Noomika, M. Upendra Kumar, and K. V. Adarsh. "Image Enhancement of Underwater Images using Deep Learning Techniques." International Journal of Research Publication and Reviews 4, no. 4 (April 3, 2023): 81–86. http://dx.doi.org/10.55248/gengpi.2023.4.4.34620.

25

Ohnishi, Takashi, Yuka Nakamura, Toru Tanaka, Takuya Tanaka, Noriaki Hashimoto, Hideaki Haneishi, Tracy T. Batchelor, et al. "Deformable image registration between pathological images and MR image via an optical macro image." Pathology - Research and Practice 212, no. 10 (October 2016): 927–36. http://dx.doi.org/10.1016/j.prp.2016.07.018.

26

Sedlak, René, Andreas Welscher, Patrick Hannawald, Sabine Wüst, Rainer Lienhart, and Michael Bittner. "Analysis of 2D airglow imager data with respect to dynamics using machine learning." Atmospheric Measurement Techniques 16, no. 12 (June 26, 2023): 3141–53. http://dx.doi.org/10.5194/amt-16-3141-2023.

Abstract:
Abstract. We demonstrate how machine learning can be easily applied to support the analysis of large quantities of excited hydroxyl (OH*) airglow imager data. We use a TCN (temporal convolutional network) classification algorithm to automatically pre-sort images into the three categories “dynamic” (images where small-scale motions like turbulence are likely to be found), “calm” (clear-sky images with weak airglow variations) and “cloudy” (cloudy images where no airglow analyses can be performed). The proposed approach is demonstrated using image data of FAIM 3 (Fast Airglow IMager), acquired at Oberpfaffenhofen, Germany, between 11 June 2019 and 25 February 2020, achieving a mean average precision of 0.82 in image classification. The attached video sequence demonstrates the classification abilities of the learned TCN. Within the dynamic category, we find a subset of 13 episodes of image series showing turbulence. As FAIM 3 exhibits a high spatial (23 m per pixel) and temporal (2.8 s per image) resolution, turbulence parameters can be derived to estimate the energy diffusion rate. Similarly to the results the authors found for another FAIM station (Sedlak et al., 2021), the values of the energy dissipation rate range from 0.03 to 3.18 W kg−1.
27

Mueller, M. S., T. Sattler, M. Pollefeys, and B. Jutzi. "IMAGE-TO-IMAGE TRANSLATION FOR ENHANCED FEATURE MATCHING, IMAGE RETRIEVAL AND VISUAL LOCALIZATION." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W7 (September 16, 2019): 111–19. http://dx.doi.org/10.5194/isprs-annals-iv-2-w7-111-2019.

Abstract:
Abstract. The performance of machine learning and deep learning algorithms for image analysis depends significantly on the quantity and quality of the training data. The generation of annotated training data is often costly, time-consuming and laborious. Data augmentation is a powerful option to overcome these drawbacks. Therefore, we augment training data by rendering images with arbitrary poses from 3D models to increase the quantity of training images. These training images usually show artifacts and are of limited use for advanced image analysis. Therefore, we propose to use image-to-image translation to transform images from a rendered domain to a captured domain. We show that translated images in the captured domain are of higher quality than the rendered images. Moreover, we demonstrate that image-to-image translation based on rendered 3D models enhances the performance of common computer vision tasks, namely feature matching, image retrieval and visual localization. The experimental results clearly show the enhancement on translated images over rendered images for all investigated tasks. In addition to this, we present the advantages of utilizing translated images over exclusively captured images for visual localization.
28

Otiede, David, and Ke Jian Wu. "The Effect of Image Resolution on the Geometry and Topological Characteristics of 3-D Reconstructed Images of Reservoir Rock Samples." International Journal of Engineering Research in Africa 6 (November 2011): 37–44. http://dx.doi.org/10.4028/www.scientific.net/jera.6.37.

Abstract:
The effect of image resolution on the measured geometry and topological characteristics of network models extracted from 3-D micro-computer tomography images has been investigated. The study was conducted by extracting geologically realistic networks from images of two rock samples, imaged at different resolutions. The rock samples involved were a Castlegate Sandstone and a Carbonate-28 reservoir rock. Two-dimensional images of these rocks were obtained at a magnification of ×50. The carbonate sample was studied at two different resolutions of 0.133 microns and 1.33 microns, while the sandstone was studied at 5.60 microns. Three-dimensional images of these 2-D images were obtained via image reconstruction, to generate the pore architecture models (PAMs) from which networks models of the imaged rocks were extracted with the aid of Pore Analysis software Tools (PATs). The measured geometry and topology (GT) properties included Coordination Number, Pore Shape Factor, Pore Size Distribution, and Pore Connectivity. The results showed that the measured geometry-topology (GT) characteristics of a network model depend greatly on the image resolution used for the model. Depending on the micro-structure of the reservoir rock, a minimum image resolution is necessary to properly define the geometrical and topological characteristics of the given porous medium.
29

O'Mara, Aidan R., Jessica M. Collins, Anna E. King, James C. Vickers, and Matthew T. K. Kirkcaldie. "Accurate and Unbiased Quantitation of Amyloid-β Fluorescence Images Using ImageSURF." Current Alzheimer Research 16, no. 2 (February 4, 2019): 102–8. http://dx.doi.org/10.2174/1567205016666181212152622.

Abstract:
Background: Images of amyloid-β pathology characteristic of Alzheimer’s disease are difficult to consistently and accurately segment, due to diffuse deposit boundaries and imaging variations. Methods: We evaluated the performance of ImageSURF, our open-source ImageJ plugin, which considers a range of image derivatives to train image classifiers. We compared ImageSURF to standard image thresholding to assess its reproducibility, accuracy and generalizability when used on fluorescence images of amyloid pathology. Results: ImageSURF segments amyloid-β images significantly more faithfully, and with significantly greater generalizability, than optimized thresholding. Conclusion: In addition to its superior performance in capturing human evaluations of pathology images, ImageSURF is able to segment image sets of any size in a consistent and unbiased manner, without requiring additional blinding, and can be retrospectively applied to existing images. The training process yields a classifier file which can be shared as supplemental data, allowing fully open methods and data, and enabling more direct comparisons between different studies.
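ImageSURF trains pixel classifiers on a range of image derivatives. The general recipe, computing a per-pixel feature stack and fitting a random forest on annotated pixels, can be sketched with scikit-image and scikit-learn. The code below is an independent illustration, not the ImageJ plugin itself, and the feature choices and label encoding are examples.

```python
# Pixel-level segmentation by a random forest trained on simple image-derivative features,
# in the spirit of ImageSURF (an ImageJ plugin; this sketch is independent of it).
import numpy as np
from skimage import filters
from sklearn.ensemble import RandomForestClassifier

def feature_stack(img):
    """Per-pixel features: raw intensity plus Gaussian blurs and gradient magnitudes."""
    feats = [img]
    for sigma in (1, 2, 4, 8):
        smoothed = filters.gaussian(img, sigma=sigma)
        feats.append(smoothed)
        feats.append(filters.sobel(smoothed))
    return np.stack(feats, axis=-1)           # shape (H, W, n_features)

def train_pixel_classifier(img, labels):
    """labels: integer mask, 0 = unlabeled, 1 = background, 2 = plaque (example encoding)."""
    X = feature_stack(img)
    mask = labels > 0
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    clf.fit(X[mask], labels[mask])
    return clf

def segment(clf, img):
    X = feature_stack(img)
    return clf.predict(X.reshape(-1, X.shape[-1])).reshape(img.shape)
```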
30

Podkowinski, Dominika, Ehsan Sharian Varnousfaderani, Christian Simader, Hrvoje Bogunovic, Ana-Maria Philip, Bianca S. Gerendas, Ursula Schmidt-Erfurth, and Sebastian M. Waldstein. "Impact of B-Scan Averaging on Spectralis Optical Coherence Tomography Image Quality before and after Cataract Surgery." Journal of Ophthalmology 2017 (2017): 1–8. http://dx.doi.org/10.1155/2017/8148047.

Abstract:
Background and Objective. To determine optimal image averaging settings for Spectralis optical coherence tomography (OCT) in patients with and without cataract. Study Design/Material and Methods. In a prospective study, the eyes were imaged before and after cataract surgery using seven different image averaging settings. Image quality was quantitatively evaluated using signal-to-noise ratio, distinction between retinal layer image intensity distributions, and retinal layer segmentation performance. Measures were compared pre- and postoperatively across different degrees of averaging. Results. 13 eyes of 13 patients were included and 1092 layer boundaries analyzed. Preoperatively, increasing image averaging led to a logarithmic growth in all image quality measures up to 96 frames. Postoperatively, increasing averaging beyond 16 images resulted in a plateau without further benefits to image quality. Averaging 16 frames postoperatively provided comparable image quality to 96 frames preoperatively. Conclusion. In patients with clear media, averaging 16 images provided optimal signal quality. A further increase in averaging was only beneficial in the eyes with senile cataract. However, prolonged acquisition time and possible loss of details have to be taken into account.
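The benefit of B-scan averaging comes from uncorrelated noise cancelling out, so the signal-to-noise ratio of an N-frame average grows roughly as the square root of N until other factors dominate. The toy numpy demonstration below uses synthetic data, not Spectralis OCT scans.

```python
# Toy demonstration: averaging N noisy frames of the same scene improves SNR roughly as sqrt(N).
# Synthetic data only -- not Spectralis OCT scans.
import numpy as np

rng = np.random.default_rng(1)
truth = np.tile(np.linspace(0, 1, 256), (256, 1))   # a simple synthetic "scene"

def snr_of_average(n_frames, noise_sigma=0.3):
    frames = truth + rng.normal(0, noise_sigma, size=(n_frames, *truth.shape))
    avg = frames.mean(axis=0)
    noise = avg - truth
    return truth.std() / noise.std()

for n in (1, 4, 16, 96):
    print(f"{n:3d} frames averaged -> SNR ~ {snr_of_average(n):.1f}")
```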
31

Valevski, Dani, Matan Kalman, Eyal Molad, Eyal Segalis, Yossi Matias, and Yaniv Leviathan. "UniTune: Text-Driven Image Editing by Fine Tuning a Diffusion Model on a Single Image." ACM Transactions on Graphics 42, no. 4 (July 26, 2023): 1–10. http://dx.doi.org/10.1145/3592451.

Abstract:
Text-driven image generation methods have shown impressive results recently, allowing casual users to generate high quality images by providing textual descriptions. However, similar capabilities for editing existing images are still out of reach. Text-driven image editing methods usually need edit masks, struggle with edits that require significant visual changes and cannot easily keep specific details of the edited portion. In this paper we make the observation that image-generation models can be converted to image-editing models simply by fine-tuning them on a single image. We also show that initializing the stochastic sampler with a noised version of the base image before the sampling and interpolating relevant details from the base image after sampling further increase the quality of the edit operation. Combining these observations, we propose UniTune, a novel image editing method. UniTune gets as input an arbitrary image and a textual edit description, and carries out the edit while maintaining high fidelity to the input image. UniTune does not require additional inputs, like masks or sketches, and can perform multiple edits on the same image without retraining. We test our method using the Imagen model in a range of different use cases. We demonstrate that it is broadly applicable and can perform a surprisingly wide range of expressive editing operations, including those requiring significant visual changes that were previously impossible.
32

Destyningtias, Budiani, Andi Kurniawan Nugroho, and Sri Heranurweni. "Analisa Citra Medis Pada Pasien Stroke dengan Metoda Peregangan Kontras Berbasis ImageJ." eLEKTRIKA 10, no. 1 (June 19, 2019): 15. http://dx.doi.org/10.26623/elektrika.v10i1.1105.

Abstract:
This study aims to develop medical image processing technology, especially medical images of CT scans of stroke patients. Doctors in determining the severity of stroke patients usually use medical images of CT scans and have difficulty interpreting the extent of bleeding. Solutions are used with contrast stretching which will distinguish cell tissue, skull bone and type of bleeding. This study uses contrast stretching from the results of CT Scan images produced by first turning the DICOM Image into a JPEG image using the help of the ImageJ program. The results showed that the histogram equalization method and statistical texture analysis could be used to distinguish normal MRI and abnormal MRI detected by stroke.

Keywords: Stroke, MRI, Dicom, JPEG, ImageJ, Contrast Stretching
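The contrast stretching applied in such a study maps the observed intensity range of the exported 8-bit slices onto the full display range. A minimal numpy sketch of a linear stretch is shown below; it is independent of the ImageJ implementation, and the percentile clipping is an added convenience rather than part of the paper's method.

```python
# Linear contrast stretching: map the image's intensity range [lo, hi] onto [0, 255].
# Percentile clipping makes the stretch robust to a few extreme pixels.
import numpy as np

def contrast_stretch(img, low_pct=1, high_pct=99):
    lo, hi = np.percentile(img, (low_pct, high_pct))
    stretched = (img.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return (np.clip(stretched, 0, 1) * 255).astype(np.uint8)
```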
33

Avinash, Gopal B. "Image compression and data integrity in confocal microscopy." Proceedings, annual meeting, Electron Microscopy Society of America 51 (August 1, 1993): 206–7. http://dx.doi.org/10.1017/s0424820100146874.

Abstract:
In confocal microscopy, one method of managing large data is to store the data in a compressed form using image compression algorithms. These algorithms can be either lossless or lossy. Lossless algorithms compress images without losing any information with modest compression ratios (memory for the original / memory for the compressed) which are usually between 1 and 2 for typical confocal 2-D images. However, lossy algorithms can provide higher compression ratios (3 to 8) at the expense of information content in the images. The main purpose of this study is to empirically demonstrate the use of lossy compression techniques to images obtained from a confocal microscope while retaining the qualitative and quantitative image integrity under certain criteria.A fluorescent pollen specimen was imaged using ODYSSEY, a real-time laser scanning confocal microscope from NORAN Instruments, Inc. The images (128 by 128) consisted of a single frame (scanned in 33ms), a 4-frame average, a 64-frame average and an edge-preserving smoothed image of the single frame.
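The compression ratio used above is simply the size of the raw image data divided by the size of the compressed file. The short Pillow sketch below reports that ratio for a few JPEG quality settings; the input file name is a placeholder for any grayscale image, and the quality values are arbitrary examples.

```python
# Report JPEG compression ratios (raw bytes / compressed bytes) at several quality settings.
# Illustrative only; "confocal_frame.png" is a placeholder for any grayscale image.
import io
import numpy as np
from PIL import Image

img = Image.open("confocal_frame.png").convert("L")
raw_bytes = np.asarray(img).nbytes

for quality in (95, 75, 50, 25):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    print(f"quality={quality:2d}  compression ratio = {raw_bytes / buf.tell():.1f}")
```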
34

Hata, Yutaka. "Image Understanding on Medical Images : Toward Medical Image Understanding Systems." Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 17, no. 1 (2005): 11–18. http://dx.doi.org/10.3156/jsoft.17.11.

35

ChandraSekhar, P., K. Srinivasa Rao, and P. Srinivasa Rao. "Image Segmentation Algorithm for Images having Asymmetrically Distributed Image Regions." International Journal of Computer Applications 96, no. 21 (June 18, 2014): 64–73. http://dx.doi.org/10.5120/16922-7076.

36

Lili, N. A., M. B. NorMasaina, K. Fatimah, and Y. Razali. "Image Noise Removal on Grayscale Images for Better Image Restoration." Advanced Science Letters 19, no. 8 (August 1, 2013): 2398–403. http://dx.doi.org/10.1166/asl.2013.4941.

37

Calbó, Josep, and Jeff Sabburg. "Feature Extraction from Whole-Sky Ground-Based Images for Cloud-Type Recognition." Journal of Atmospheric and Oceanic Technology 25, no. 1 (January 1, 2008): 3–14. http://dx.doi.org/10.1175/2007jtecha959.1.

Abstract:
Abstract Several features that can be extracted from digital images of the sky and that can be useful for cloud-type classification of such images are presented. Some features are statistical measurements of image texture, some are based on the Fourier transform of the image and, finally, others are computed from the image where cloudy pixels are distinguished from clear-sky pixels. The use of the most suitable features in an automatic classification algorithm is also shown and discussed. Both the features and the classifier are developed over images taken by two different camera devices, namely, a total sky imager (TSI) and a whole sky imager (WSC), which are placed in two different areas of the world (Toowoomba, Australia; and Girona, Spain, respectively). The performance of the classifier is assessed by comparing its image classification with an a priori classification carried out by visual inspection of more than 200 images from each camera. The index of agreement is 76% when five different sky conditions are considered: clear, low cumuliform clouds, stratiform clouds (overcast), cirriform clouds, and mottled clouds (altocumulus, cirrocumulus). Discussion on the future directions of this research is also presented, regarding both the use of other features and the use of other classification techniques.
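The three families of features described in this abstract (image-texture statistics, Fourier-based measures, and statistics of the cloudy/clear pixel mask) can be combined into a simple per-image feature vector. The sketch below is illustrative only; the specific statistics, the red/blue cloud heuristic, and its threshold are assumptions, not the authors' exact choices.

```python
# Example feature vector for a whole-sky image: texture statistics, a Fourier-based measure,
# and the fraction of "cloudy" pixels. Feature choices and thresholds are illustrative.
import numpy as np

def sky_features(gray, red=None, blue=None):
    feats = {
        "mean": gray.mean(),
        "std": gray.std(),
        "smoothness": 1 - 1 / (1 + gray.var()),
    }
    # Fourier-based feature: share of spectral energy away from the lowest frequencies.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    cy, cx = np.array(spec.shape) // 2
    low = spec[cy - 8:cy + 8, cx - 8:cx + 8].sum()
    feats["high_freq_energy_ratio"] = 1 - low / spec.sum()
    # Cloud fraction from a red/blue ratio threshold (a common sky-imager heuristic).
    if red is not None and blue is not None:
        cloudy = (red / np.maximum(blue, 1e-6)) > 0.6
        feats["cloud_fraction"] = cloudy.mean()
    return feats
```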
38

Shin, Chang Jong, Tae Bok Lee, and Yong Seok Heo. "Dual Image Deblurring Using Deep Image Prior." Electronics 10, no. 17 (August 24, 2021): 2045. http://dx.doi.org/10.3390/electronics10172045.

Abstract:
Blind image deblurring, one of the main problems in image restoration, is a challenging, ill-posed problem. Hence, it is important to design a prior to solve it. Recently, deep image prior (DIP) has shown that convolutional neural networks (CNNs) can be a powerful prior for a single natural image. Previous DIP-based deblurring methods exploited CNNs as a prior when solving the blind deblurring problem and performed remarkably well. However, these methods do not completely utilize the given multiple blurry images, and have limitations of performance for severely blurred images. This is because their architectures are strictly designed to utilize a single image. In this paper, we propose a method called DualDeblur, which uses dual blurry images to generate a single sharp image. DualDeblur jointly utilizes the complementary information of multiple blurry images to capture image statistics for a single sharp image. Additionally, we propose an adaptive L2_SSIM loss that enhances both pixel accuracy and structural properties. Extensive experiments show the superior performance of our method to previous methods in both qualitative and quantitative evaluations.
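The L2_SSIM idea mentioned above combines a pixel-wise error term with a structural-similarity term. The snippet below illustrates that combination at evaluation time using scikit-image; it is not the authors' adaptive, differentiable training loss, and the fixed weighting is an arbitrary example.

```python
# Combine a pixel-wise L2 term with a structural term as an evaluation-time illustration of
# an "L2 + SSIM" objective. Not the authors' (adaptive, differentiable) training loss.
from skimage.metrics import mean_squared_error, structural_similarity

def l2_ssim_score(restored, reference, alpha=0.5):
    mse = mean_squared_error(reference, restored)
    ssim = structural_similarity(
        reference, restored, data_range=reference.max() - reference.min())
    return alpha * mse + (1 - alpha) * (1 - ssim)   # lower is better
```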
39

Pham, Nam, Jong-Weon Lee, Goo-Rak Kwon, and Chun-Su Park. "Hybrid Image-Retrieval Method for Image-Splicing Validation." Symmetry 11, no. 1 (January 14, 2019): 83. http://dx.doi.org/10.3390/sym11010083.

Abstract:
Recently, the task of validating the authenticity of images and the localization of tampered regions has been actively studied. In this paper, we go one step further by providing solid evidence for image manipulation. If a certain image is proved to be the spliced image, we try to retrieve the original authentic images that were used to generate the spliced image. Especially for the image retrieval of spliced images, we propose a hybrid image-retrieval method exploiting Zernike moment and Scale Invariant Feature Transform (SIFT) features. Due to the symmetry and antisymmetry properties of the Zernike moment, the scaling invariant property of SIFT and their common rotation invariant property, the proposed hybrid image-retrieval method is efficient in matching regions with different manipulation operations. Our simulation shows that the proposed method significantly increases the retrieval accuracy of the spliced images.
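The hybrid scheme described here pairs Zernike moments (rotation invariant) with SIFT descriptors (scale invariant). Extracting both kinds of features can be sketched with the mahotas and OpenCV libraries, as below; this is an independent illustration, not the authors' matching pipeline, and the radius and degree values are examples.

```python
# Extract the two kinds of descriptors used by such hybrid schemes: Zernike moments of a
# region and SIFT keypoint descriptors. Independent illustration, not the authors' pipeline.
import cv2
import mahotas

def zernike_descriptor(gray_region, radius=64, degree=8):
    # Rotation-invariant Zernike moments of a (roughly circular) grayscale region.
    return mahotas.features.zernike_moments(gray_region, radius, degree=degree)

def sift_descriptors(gray_image):
    # gray_image: 8-bit grayscale array; descriptors have shape (n_keypoints, 128).
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray_image, None)
    return keypoints, descriptors

# Candidate matches can then be ranked, e.g. by the distance between Zernike vectors and by
# the number of good SIFT matches (cv2.BFMatcher with a ratio test).
```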
40

Deserno, T. M., H. P. Meinzer, T. Tolxdorff, and H. Handels. "Image Analysis and Modeling in Medical Image Computing." Methods of Information in Medicine 51, no. 05 (2012): 395–97. http://dx.doi.org/10.1055/s-0038-1627047.

Abstract:
Summary Background: Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the grade of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. Objectives: In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Methods: Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Results: Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. Conclusions: The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body. Hence, model-based image computing methods are important tools to improve medical diagnostics and patient treatment in future.
41

Naseera, Shaik, G. k. Rajini, and Saravanan M. "STATISTICAL DETECTION OF BREAST CANCER BY MAMMOGRAM IMAGE." Asian Journal of Pharmaceutical and Clinical Research 10, no. 1 (January 1, 2016): 227. http://dx.doi.org/10.22159/ajpcr.2017.v10i1.15003.

Abstract:
Objective: To create awareness about breast cancer, which has become one of the most common diseases among women and leads to death if not recognized at an early stage. Methods: The technique of acquiring a breast image is called mammography, a diagnostic and screening tool to detect cancer. A cascade algorithm based on statistical parameters is implemented on mammogram images to segregate normal, benign, and malignant cases. Results: Statistical features - such as mean, median, standard deviation, perimeter, and skewness - were extracted from the mammogram images to describe their intensity and nature of distribution using ImageJ. Conclusion: A noninvasive technique which uses statistical features to determine and classify normal, benign, and malignant images is identified. Keywords: Breast cancer, Benign, Malignant, Mammogram image, ImageJ.
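The statistical descriptors listed in the Results (mean, median, standard deviation, skewness) were obtained with ImageJ. An equivalent computation for a pre-segmented region is easy to express in Python; the sketch below is illustrative only, and the perimeter feature mentioned in the abstract would additionally require the region's contour.

```python
# Statistical descriptors of a (pre-segmented) mammogram region: mean, median, standard
# deviation and skewness. Equivalent to ImageJ-style measurements; illustrative only.
import numpy as np
from scipy.stats import skew

def region_statistics(region):
    pixels = np.asarray(region, dtype=np.float64).ravel()
    return {
        "mean": pixels.mean(),
        "median": np.median(pixels),
        "std": pixels.std(ddof=1),
        "skewness": skew(pixels),
    }
```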
42

Vachkov, I. V., and M. A. Sukhoruchenkov. "Special Aspects of Sensual Images During Imago Therapy Process." Клиническая и специальная психология 6, no. 4 (2017): 125–47. http://dx.doi.org/10.17759/cpse.2017060409.

Abstract:
The article presents the results of the study performed on 27 adults who have completed five imago therapeutic sessions. The subjects were split in two groups: problem-solving group and problem-analysis group. The peculiarities of sensory images that arise during the sessions by Glezer grounded theory method were studied. It turned out that the subject sensual tissue of the psychosemiological tetrahedron (the term of F. Vasiljuk) of the imago therapeutic image is represented mainly by sensory imaginary events and less frequently by sensual objects and recalled events. The pole and sensory tissue of meaning are least expressed in the imago therapeutic image. The pole and sensory tissue of personal meaning are significantly expressed by the imago therapeutic image. Not only elements relating to the imagination are woven into the imago therapeutic image but also elements reflecting the process of imaginative psychotherapy as a whole, a separate session, the process of imagination. These elements are closely related to the elements of imaginary and remembered events, their feelings and comprehension.
43

Zuvirie Hernández, Rosa Margarita, and María Dolores Rodríguez Ortiz. "Psychophysiological reaction to exposure of thin women images in college students / Reacción psicofisiológica a la exposición de imágenes de mujeres delgadas en universitarias." Revista Mexicana de Trastornos Alimentarios/Mexican Journal of Eating Disorders 2, no. 1 (June 30, 2011): 33–41. http://dx.doi.org/10.22201/fesi.20071523e.2011.1.167.

Abstract:
The social standards of the pro-thinness beauty model are leading some young women to worry, especially those who are more susceptible to these models. The internalization of the thin ideal is a risk factor in the development of body image concern. It was therefore important to conduct a study that considered thoughts about body image together with the psychophysiological reaction to images of thin women. A psychophysiological assessment was conducted with 40 female college students between 19 and 25 years of age. The sample was divided into two groups: a group with negative thoughts toward body image and a group with positive thoughts. We used an exploratory design. The statistical analysis found no statistically significant differences in the psychophysiological reaction to images of thin women between the groups with positive and negative thoughts toward body image. These results indicate that exposure to these images does not cause variations in the psychophysiological reaction of the women, because they have no significant body dissatisfaction, which indicates the need to employ better methods to assess body image. Key Words: Body Image, Thoughts, Thinness, Psychophysiological Assessment.
44

Wen, Cathlyn Y., and Robert J. Beaton. "Subjective Image Quality Evaluation of Image Compression Techniques." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 40, no. 23 (October 1996): 1188–92. http://dx.doi.org/10.1177/154193129604002309.

Abstract:
Image compression reduces the amount of data in digital images and, therefore, allows efficient storage, processing, and transmission of pictorial information. However, compression algorithms can degrade image quality by introducing artifacts, which may be unacceptable for users' tasks. This work examined the subjective effects of JPEG and wavelet compression algorithms on a series of medical images. Six digitized chest images were processed by each algorithm at various compression levels. Twelve radiologists rated the perceived image quality of the compressed images relative to the corresponding uncompressed images, as well as rated the acceptability of the compressed images for diagnostic purposes. The results indicate that subjective image quality and acceptability decreased with increasing compression levels; however, all images remained acceptable for diagnostic purposes. At high compression ratios, JPEG compressed images were judged less acceptable for diagnostic purposes than the wavelet compressed images. These results contribute to emerging system design guidelines for digital imaging workstations.
45

Lee, Hosang. "Successive Low-Light Image Enhancement Using an Image-Adaptive Mask." Symmetry 14, no. 6 (June 6, 2022): 1165. http://dx.doi.org/10.3390/sym14061165.

Abstract:
Low-light images are obtained in dark environments or in environments where there is insufficient light. Because of this, low-light images have low intensity values and dimmed features, making it difficult to directly apply computer vision or image recognition software to them. Therefore, to use computer vision processing on low-light images, an image improvement procedure is needed. There have been many studies on how to enhance low-light images. However, some of the existing methods create artifact and distortion effects in the resulting images. To improve low-light images, their contrast should be stretched naturally according to their features. This paper proposes the use of a low-light image enhancement method utilizing an image-adaptive mask that is composed of an image-adaptive ellipse. As a result, the low-light regions of the image are stretched and the bright regions are enhanced in a way that appears natural by an image-adaptive mask. Moreover, images that have been enhanced using the proposed method are color balanced, as this method has a color compensation effect due to the use of an image-adaptive mask. As a result, the improved image can better reflect the image’s subject, such as a sunset, and appears natural. However, when low-light images are stretched, the noise elements are also enhanced, causing part of the enhanced image to look dim and hazy. To tackle this issue, this paper proposes the use of guided image filtering based on using triple terms for the image-adaptive value. Images enhanced by the proposed method look natural and are objectively superior to those enhanced via other state-of-the-art methods.
46

Lai, Chang, Wei Li, Jiyao Xu, Xiao Liu, Wei Yuan, Jia Yue, and Qinzeng Li. "Extraction of Quasi-Monochromatic Gravity Waves from an Airglow Imager Network." Atmosphere 11, no. 6 (June 10, 2020): 615. http://dx.doi.org/10.3390/atmos11060615.

Abstract:
An algorithm has been developed to isolate the gravity waves (GWs) of different scales from airglow images. Based on the discrete wavelet transform, the images are decomposed and then reconstructed in a series of mutually orthogonal spaces, each of which takes a Daubechies (db) wavelet of a certain scale as a basis vector. The GWs in the original airglow image are stripped to the peeled image reconstructed in each space, and the scale of wave patterns in a peeled image corresponds to the scale of the db wavelet as a basis vector. In each reconstructed image, the extracted GW is quasi-monochromatic. An adaptive band-pass filter is applied to enhance the GW structures. From an ensembled airglow image with a coverage of 2100 km × 1200 km using an all-sky airglow imager (ASAI) network, the quasi-monochromatic wave patterns are extracted using this algorithm. GWs range from ripples with short wavelength of 20 km to medium-scale GWs with a wavelength of 590 km. The images are denoised, and the propagating characteristics of GWs with different wavelengths are derived separately.
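The scale separation described above, decomposing an image with Daubechies wavelets and reconstructing one detail scale at a time, maps onto a standard 2-D discrete wavelet transform. The PyWavelets sketch below is illustrative and omits the adaptive band-pass filtering step; the wavelet name and number of levels are assumptions, not the authors' settings.

```python
# Separate an image into per-scale components with a 2-D Daubechies wavelet transform and
# reconstruct each detail scale on its own. Illustrative sketch, not the authors' full
# algorithm (which adds an adaptive band-pass filter on top of the reconstruction).
import numpy as np
import pywt

def per_scale_images(image, wavelet="db4", levels=5):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    scales = []
    for i in range(1, len(coeffs)):
        kept = [np.zeros_like(coeffs[0])] + [
            tuple(np.zeros_like(d) for d in c) for c in coeffs[1:]]
        kept[i] = coeffs[i]            # keep the detail coefficients of one scale only
        scales.append(pywt.waverec2(kept, wavelet))
    return scales                      # ordered from the coarsest to the finest detail scale
```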
47

Xu, Chen, Yu Han, George Baciu, and Min Li. "Fabric image recolorization based on intrinsic image decomposition." Textile Research Journal 89, no. 17 (December 10, 2018): 3617–31. http://dx.doi.org/10.1177/0040517518817051.

Abstract:
Fabric image recolorization is widely used in assisting designers to generate new color design proposals for fabric. In this paper, a new image recolorization method is proposed. Different from classical image recolorization methods, which need some complicated interactive operations from users, our proposed method can achieve automatic recolorization of images. The proposed method contains three sequential phases: a phase of extracting representative colors from fabric images; an image segmentation phase; and an image reconstruction phase by using given color themes. Integrated with intrinsic image decomposition, a new image segmentation model is designed in a variational framework, and an algorithm is given to solve the model. Our image recolorization results are images that are reconstructed by the composition of the image segmentation results and the given color themes. Numerical results demonstrate that our newly proposed intrinsic image decomposition-based image recolorization method can generate better results than the classical cartoon-and-texture decomposition-based method.
48

Pardhasaradhi, P., B. T PMadhav, G. Lakshmi Sindhuja, K. Sai Sreeram, M. Parvathi, and B. Lokesh. "Image enhancement with contrast coefficients using wavelet based image fusion." International Journal of Engineering & Technology 7, no. 2.8 (March 19, 2018): 432. http://dx.doi.org/10.14419/ijet.v7i2.8.10476.

Abstract:
Work in this area is mainly focused on image brightness and on the storage capacity an image requires. Sharp images provide better information than blurred images; to overcome blurriness, image enhancement techniques are used, and image fusion is used to overcome information loss. This paper presents image enhancement and fusion based on the wavelet transform, which is chosen mainly for its inherent properties of redundancy and shift invariance. The transform decomposes the image into different scales, and the degree of enhancement is decided based on the levels of transformation. Low contrast results from poor resolution, lack of dynamic range, wrong sensor-lens settings during acquisition, and poor quality of cameras and sensors. To avoid information loss, an interesting solution is to capture several pictures of the same scene, each focused on a different region; using image fusion, all captured images are then combined into a single image that contains the properties of both source images. Image entropy is computed to determine the quality of the result. The paper demonstrates the fusion method both for multi-resolution images and for images captured at different temperatures.
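A common wavelet-domain fusion rule for differently focused pictures of the same scene is to average the approximation coefficients and keep the detail coefficients with the larger magnitude before inverting the transform. The PyWavelets sketch below implements that textbook rule; it is not necessarily the exact fusion rule of this paper, and it assumes the two inputs are co-registered grayscale images of equal size.

```python
# Fuse two co-registered grayscale images in the wavelet domain: average the approximation
# coefficients, keep the larger-magnitude detail coefficients, then invert the transform.
# A common textbook fusion rule, not necessarily the paper's exact rule.
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", levels=3):
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)
    fused = [(ca[0] + cb[0]) / 2.0]                      # approximation: average
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)
```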
49

Tosi, Sébastien, Lídia Bardia, Maria Jose Filgueira, Alexandre Calon, and Julien Colombelli. "LOBSTER: an environment to design bioimage analysis workflows for large and complex fluorescence microscopy data." Bioinformatics 36, no. 8 (December 20, 2019): 2634–35. http://dx.doi.org/10.1093/bioinformatics/btz945.

Abstract:
Abstract Summary Open source software such as ImageJ and CellProfiler greatly simplified the quantitative analysis of microscopy images but their applicability is limited by the size, dimensionality and complexity of the images under study. In contrast, software optimized for the needs of specific research projects can overcome these limitations, but they may be harder to find, set up and customize to different needs. Overall, the analysis of large, complex, microscopy images is hence still a critical bottleneck for many Life Scientists. We introduce LOBSTER (Little Objects Segmentation and Tracking Environment), an environment designed to help scientists design and customize image analysis workflows to accurately characterize biological objects from a broad range of fluorescence microscopy images, including large images exceeding workstation main memory. LOBSTER comes with a starting set of over 75 sample image analysis workflows and associated images stemming from state-of-the-art image-based research projects. Availability and implementation LOBSTER requires MATLAB (version ≥ 2015a), MATLAB Image processing toolbox, and MATLAB statistics and machine learning toolbox. Code source, online tutorials, video demonstrations, documentation and sample images are freely available from: https://sebastients.github.io. Supplementary information Supplementary data are available at Bioinformatics online.
50

Hack, Lilian, and Édio Raniere Da Silva. "Escrever sob o fascínio da imagem – ressonâncias entre o pensamento de Maurice Blanchot e Georges Didi-Huberman." Visualidades 15, no. 2 (December 19, 2017): 69. http://dx.doi.org/10.5216/vis.v15i2.48066.

Abstract:
The article problematizes the concept of image in the thinking of Maurice Blanchot and Georges Didi-Huberman. Exploring especially the concepts of the duplicity of the image in Blanchot and the double distance of the gaze in Didi-Huberman, it seeks to identify the points at which these authors converge on a way of operating thought and writing about art in its relation to the image. In this game, subject and object are thrown into an instability, a movement in which the subject undoes himself through the image, undoes himself in the encounter with a work: the moment when the image asks for the word and when language becomes the place from which we can approach it.
