Selection of scientific literature on the topic "Source image"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Source image".

Next to each work in the bibliography you will find an "Add to bibliography" option. Use it, and the citation of the chosen work is formatted automatically in the required style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Source image"

1

Wang, Guanjie, Zehua Ma, Chang Liu, Xi Yang, Han Fang, Weiming Zhang, and Nenghai Yu. "MuST: Robust Image Watermarking for Multi-Source Tracing". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 5364–71. http://dx.doi.org/10.1609/aaai.v38i6.28344.

Abstract:
In recent years, with the popularity of social media applications, massive numbers of digital images have become available online, which brings great convenience to image recreation. However, the use of unauthorized image materials in multi-source composite images is still inadequately regulated, which may cause significant loss and discouragement to the copyright owners of the source image materials. Ideally, deep watermarking techniques could provide a solution for protecting these copyrights based on their encoder-noise-decoder training strategy. Yet existing image watermarking schemes, which are mostly designed for single images, cannot well address the copyright protection requirements in this scenario, since the multi-source image composing process commonly includes distortions that are not well investigated in previous methods, e.g., extreme downsizing. To meet such demands, we propose MuST, a multi-source tracing robust watermarking scheme, whose architecture includes a multi-source image detector and a minimum external rectangle operation for multiple watermark resynchronization and extraction. Furthermore, we constructed an image material dataset covering common image categories and designed a simulation model of the multi-source image composing process as the noise layer. Experiments demonstrate the excellent performance of MuST in tracing the sources of image materials from composite images compared with SOTA watermarking methods: it maintains an extraction accuracy above 98% when tracing the sources of at least 3 different image materials, while keeping the average PSNR of watermarked image materials above 42.51 dB. We released our code at https://github.com/MrCrims/MuST.
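The resynchronization step described in the abstract, detecting each source material inside the composite and cropping it to its minimum external rectangle before watermark extraction, can be sketched in a few lines. This is a toy NumPy illustration of the axis-aligned case only; the function name is mine, not from the paper's code.

```python
import numpy as np

def min_bounding_box(mask):
    """Return (top, left, bottom, right) of the smallest axis-aligned
    rectangle containing all nonzero pixels of a binary detection mask."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return int(top), int(left), int(bottom) + 1, int(right) + 1

# toy composite: a detected source-material region inside an 8x8 canvas
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 3:7] = True
print(min_bounding_box(mask))  # (2, 3, 5, 7)
```

The watermark decoder would then be run on the cropped patch `image[top:bottom, left:right]` rather than on the full composite.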
2

Wink, Alexandra Elisabeth, Amanda N. Telfer, and Michael A. Pascoe. "Google Images Search Results as a Resource in the Anatomy Laboratory: Rating of Educational Value". JMIR Medical Education 8, no. 4 (October 21, 2022): e37730. http://dx.doi.org/10.2196/37730.

Abstract:
Background: Preclinical medical learners are embedded in technology-rich environments, allowing them rapid access to a large volume of information. The anatomy laboratory is an environment in which faculty can assess the development of professional skills such as information literacy in preclinical medical learners. In the anatomy laboratory, many students use Google Images searches in addition to or in place of other course materials as a resource to locate and identify anatomical structures. However, the most frequent sources as well as the educational quality of these images are unknown.
Objective: This study was designed to assess the sources and educational value of Google Images search results for commonly searched anatomical structures.
Methods: The top 10 Google Images search results were collected for 39 anatomical structures. Image source websites were recorded and categorized based on the purpose and target audience of the site publishing the image. Educational value was determined through assessment of relevance (is the searched structure depicted in the image?), accuracy (does the image contain errors?), and usefulness (will the image assist a learner in locating the structure on an anatomical donor?). A reliable scoring rubric was developed to assess an image's usefulness.
Results: A total of 390 images were analyzed. Most often, images were sourced from websites targeting health care professionals and health care professions students (38% of images), while Wikipedia was the most frequent single source of image results (62/390 results). Of the 390 total images, 363 (93.1%) depicted the searched structure and were therefore considered relevant. However, only 43.0% (156/363) of relevant images met the threshold to be deemed useful in identifying the searched structure in an anatomical donor. The usefulness of images did not significantly differ across source categories.
Conclusions: Anatomy faculty may use these results to develop interventions for gaps in information literacy in preclinical medical learners in the context of image searches in the anatomy laboratory.
3

Haldorsen, Jakob B. U., W. Scott Leaney, Richard T. Coates, Steen A. Petersen, Helge Ivar Rutledal, and Kjetil A. Festervoll. "Imaging above an extended-reach horizontal well using converted shear waves and a rig source". GEOPHYSICS 78, no. 2 (March 1, 2013): S93–S103. http://dx.doi.org/10.1190/geo2012-0154.1.

Abstract:
We evaluated a method for using 3C vertical seismic profile data to image acoustic interfaces located between the surface source and a downhole receiver array. The approach was based on simple concepts adapted from whole-earth seismology, in which observed compressional and shear wavefields are traced back to a common origin. However, unlike whole-earth and passive seismology, in which physical sources are imaged, we used the observed compressional and shear wavefields to image secondary sources (scatterers) situated between the surface source and the downhole receiver array. The algorithm consisted of the following steps: first, estimating the receiver compressional wavefield; second, using polarization to estimate the shear wavefield; third, deconvolving the shear wavefield using estimates of the source wavelet obtained from the direct compressional wave; fourth, back-projecting the compressional and shear wavefields into the volume between the source and receivers; and finally, applying an imaging condition. When applied to rig-source VSP data acquired in an extended-reach horizontal well, this process was demonstrated to give images of formation features in the overburden, consistent with surface-seismic images obtained from the same area.
4

Vokes, Martha S., and Anne E. Carpenter. "CellProfiler: Open-Source Software to Automatically Quantify Images". Microscopy Today 16, no. 5 (September 2008): 38–39. http://dx.doi.org/10.1017/s1551929500061757.

Abstract:
Researchers often examine samples by eye under the microscope, qualitatively scoring each sample for a particular feature of interest. This approach, while suitable for many experiments, sacrifices quantitative results and a permanent record of the experiment. By contrast, if digital images are collected of each sample, software can be used to quantify features of interest. For small experiments, quantitative analysis is often done manually using interactive programs like Adobe Photoshop. For the large number of images that can be easily collected with automated microscopes, this approach is tedious and time-consuming. NIH Image/ImageJ (http://rsb.info.nih.gov/ij) allows users comfortable writing in a macro language to automate quantitative image analysis. We have developed CellProfiler, a free, open-source software package, designed to enable scientists without prior programming experience to quantify relevant features of samples in large numbers of images automatically, in a modular system suitable for processing hundreds of thousands of images.
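The kind of per-image quantification such software automates can be illustrated with a minimal object counter over a thresholded image. This is a toy NumPy sketch of 4-connected blob counting, not CellProfiler code:

```python
import numpy as np

def count_objects(binary):
    """Count 4-connected foreground blobs in a binary image,
    a toy version of the per-image object counting an automated
    pipeline performs."""
    seen = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not seen[y, x]:
                count += 1          # found a new blob; flood-fill it
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and binary[cy, cx] and not seen[cy, cx]:
                        seen[cy, cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return count

img = np.zeros((10, 10), dtype=bool)
img[1:3, 1:3] = True   # blob 1
img[6:9, 5:8] = True   # blob 2
print(count_objects(img))  # 2
```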
5

Fu, Shiyuan, Lu Wang, Yaodong Cheng, and Gang Chen. "Intelligent compression for synchrotron radiation source image". EPJ Web of Conferences 251 (2021): 03073. http://dx.doi.org/10.1051/epjconf/202125103073.

Abstract:
Synchrotron radiation sources (SRS) produce a huge amount of image data. This scientific data, which needs to be stored and transferred losslessly, puts great pressure on storage and bandwidth. SRS images are characterized by high frame rates and high resolution, and traditional lossless image compression methods can save only up to 30% in size. To address this problem, we propose a lossless compression method for SRS images based on deep learning. First, we use a difference algorithm to reduce the linear correlation within the image sequence. Then we propose a reversible truncated mapping method to reduce the range of the pixel value distribution. Thirdly, we train a deep learning model to learn the nonlinear relationships within the image sequence. Finally, we use the probability distribution predicted by the deep learning model, combined with arithmetic coding, to perform lossless compression. Test results on SRS images show that our method can reduce data size by a further 20% compared to PNG, JPEG2000, and FLIF.
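The first stage of such a pipeline, a difference transform that removes linear correlation between consecutive frames while staying exactly reversible, might look like this. An illustrative NumPy sketch only: the mod-256 wraparound that keeps uint8 differencing lossless is my choice, not necessarily the paper's.

```python
import numpy as np

def frame_differences(frames):
    """Keep the first frame; store successive frame differences.
    Arithmetic mod 256 keeps the transform exactly invertible for uint8."""
    frames = np.asarray(frames, dtype=np.uint8)
    diffs = np.empty_like(frames)
    diffs[0] = frames[0]
    diffs[1:] = (frames[1:].astype(np.int16) - frames[:-1].astype(np.int16)) % 256
    return diffs

def reconstruct(diffs):
    """Invert frame_differences by cumulative summation mod 256."""
    frames = np.empty_like(diffs)
    frames[0] = diffs[0]
    for i in range(1, len(diffs)):
        frames[i] = (frames[i - 1].astype(np.int16) + diffs[i]) % 256
    return frames
```

On correlated sequences the difference frames concentrate near zero, which is exactly what a learned probability model plus arithmetic coding can exploit.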
6

Legland, David, and Marie-Françoise Devaux. "ImageM: a user-friendly interface for the processing of multi-dimensional images with Matlab". F1000Research 10 (April 30, 2021): 333. http://dx.doi.org/10.12688/f1000research.51732.1.

Abstract:
Modern imaging devices provide a wealth of data, often organized as images with many dimensions, such as 2D/3D, time, and channel. Matlab is an efficient software solution for image processing, but it lacks many features facilitating the interactive interpretation of image data, such as user-friendly image visualization or the management of image meta-data (e.g. spatial calibration), thus limiting its application to bio-image analysis. The ImageM application proposes an integrated user interface that facilitates the processing and analysis of multi-dimensional images within the Matlab environment. It provides user-friendly visualization of multi-dimensional images, a collection of image processing algorithms and methods for image analysis, the management of spatial calibration, and facilities for the analysis of multi-variate images. ImageM can also run on Octave, the open-source alternative to Matlab. ImageM is freely distributed on GitHub: https://github.com/mattools/ImageM.
7

Yao, Xin-Wei, Xinge Zhang, Yuchen Zhang, Weiwei Xing, and Xing Zhang. "Nighttime Image Dehazing Based on Point Light Sources". Applied Sciences 12, no. 20 (October 11, 2022): 10222. http://dx.doi.org/10.3390/app122010222.

Abstract:
Images routinely suffer from quality degradation in fog, mist, and other harsh weather conditions; consequently, image dehazing is an essential pre-processing step in computer vision tasks. Nighttime image dehazing is particularly valuable for unmanned driving and nighttime surveillance, yet the vast majority of past dehazing algorithms are applicable only to daytime conditions. Observation of a large number of nighttime images shows that artificial light sources take over the role the sun plays in daytime images, and that the impact of a light source on pixels varies with distance. This paper proposes a novel nighttime dehazing method based on a light source influence matrix. A luminosity map expresses the photometric differences produced by the picture's light sources. The light source influence matrix is then calculated to divide the image into a near-light-source region and a non-near-light-source region. Using these two regions, the two initial transmittances obtained by the dark channel prior are fused by edge-preserving filtering. For the atmospheric light term, the initial atmospheric light value is corrected by the light source influence matrix. Finally, the result is obtained by substituting into the atmospheric light model. Theoretical analysis and comparative experiments verify the performance of the proposed image dehazing method: in terms of PSNR, SSIM, and UQI, it improves by 9.4%, 11.2%, and 3.3% over the existing nighttime defogging method OSPF. In future work, we will extend the work from static image dehazing to real-time video stream dehazing for potential detection applications.
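The atmospheric light model the method substitutes into is the standard scattering equation I(x) = J(x)·t(x) + A·(1 − t(x)); recovering the scene radiance J is a per-pixel inversion once the transmission t and atmospheric light A are estimated. A generic sketch of that inversion (not the paper's full pipeline; the transmission floor t_min is a common heuristic, not a value from the paper):

```python
import numpy as np

def dehaze(I, t, A, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)."""
    t = np.maximum(t, t_min)  # floor t to avoid amplifying noise in dense haze
    return (I - A) / t + A

# round-trip check on synthetic data: hazing then dehazing recovers J
J = np.random.rand(4, 4)          # clear scene radiance
t = np.full((4, 4), 0.7)          # transmission map
A = 0.9                           # atmospheric light
I = J * t + A * (1 - t)           # synthesized hazy image
assert np.allclose(dehaze(I, t, A), J)
```

In the cited method, t comes from the fused dark-channel estimates and A is corrected by the light source influence matrix before this substitution.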
8

Botti, Filippo, Tomaso Fontanini, Massimo Bertozzi, and Andrea Prati. "Masked Style Transfer for Source-Coherent Image-to-Image Translation". Applied Sciences 14, no. 17 (September 4, 2024): 7876. http://dx.doi.org/10.3390/app14177876.

Abstract:
The goal of image-to-image translation (I2I) is to translate images from one domain to another while maintaining the content representations. A popular method for I2I translation involves the use of a reference image to guide the transformation process. However, most architectures fail to maintain the input’s main characteristics and produce images that are too similar to the reference during style transfer. In order to avoid this problem, we propose a novel architecture that is able to perform source-coherent translation between multiple domains. Our goal is to preserve the input details during I2I translation by weighting the style code obtained from the reference images before applying it to the source image. Therefore, we choose to mask the reference images in an unsupervised way before extracting the style from them. By doing so, the input characteristics are better maintained while performing the style transfer. As a result, we also increase the diversity in the generated images by extracting the style from the same reference. Additionally, adaptive normalization layers, which are commonly used to inject styles into a model, are substituted with an attention mechanism for the purpose of increasing the quality of the generated images. Several experiments are performed on the CelebA-HQ and AFHQ datasets in order to prove the efficacy of the proposed system. Quantitative results measured using the LPIPS and FID metrics demonstrate the superiority of the proposed architecture compared to the state-of-the-art methods.
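The core idea, extracting style statistics only from the unmasked part of the reference so the style code is not polluted by irrelevant regions, can be sketched on raw pixel statistics. This is a deliberately simplified stand-in: the actual architecture works on deep features and replaces adaptive normalization with attention, whereas the sketch below uses plain AdaIN-style renormalization.

```python
import numpy as np

def masked_style_stats(feature, mask):
    """Mean/std computed over unmasked positions only, so masked-out
    regions of the reference do not contribute to the style."""
    sel = feature[mask]
    return sel.mean(), sel.std()

def restyle(content, style_mean, style_std, eps=1e-5):
    """Shift the content statistics toward the (masked) style statistics."""
    normalized = (content - content.mean()) / (content.std() + eps)
    return normalized * style_std + style_mean
```

With a foreground mask on the reference, the transferred statistics describe only the subject, which is the property the paper exploits to keep the source's own details intact.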
9

Ferrer-Rosell, Berta, and Estela Marine-Roig. "Projected Versus Perceived Destination Image". Tourism Analysis 25, no. 2 (July 8, 2020): 227–37. http://dx.doi.org/10.3727/108354220x15758301241747.

Abstract:
Due to the spectacular growth of traveler-generated content (TGC), researchers are using TGC as a source of data to analyze the image of destinations as perceived by tourists. In order to analyze a destination's projected image, researchers typically look to websites from destination marketing or management organizations (DMOs). The objective of this study is to calculate the gap between the projected and perceived images of Barcelona, Catalonia, in 2017, using Gartner's classification and applying compositional analysis. The official online press dossier is used as an induced source, the Lonely Planet guidebook as an autonomous source, and a collection of more than 70,000 online travel reviews hosted on TripAdvisor as an organic source. In addition to quantitative content analysis, this study undertakes two thematic analyses: the masterworks of architect Gaudi recognized as UNESCO WHS as part of the cognitive image component and feeling-related keywords as part of the affective image component. The results reveal strong differences between the induced and organic sources, but much smaller differences between the autonomous and organic sources. These results can be useful for DMOs to optimize promotion and supply.
10

Maqsood, Sarmad, Umer Javed, Muhammad Mohsin Riaz, Muhammad Muzammil, Fazal Muhammad, and Sunghwan Kim. "Multiscale Image Matting Based Multi-Focus Image Fusion Technique". Electronics 9, no. 3 (March 12, 2020): 472. http://dx.doi.org/10.3390/electronics9030472.

Abstract:
Multi-focus image fusion is an essential method for obtaining an all-in-focus image from multiple source images. The fused image eliminates the out-of-focus regions, and the resultant image contains only sharp, focused regions. A novel multiscale image fusion system based on contrast enhancement, spatial gradient information, and multiscale image matting is proposed to extract the focused-region information from multiple source images. In the proposed image fusion approach, the multi-focus source images are first refined with an image enhancement algorithm so that the intensity distribution is enhanced for superior visualization. An edge detection method based on the spatial gradient is employed to obtain edge information from the contrast-stretched images. This improved edge information is further utilized by a multiscale window technique to produce local and global activity maps. Furthermore, a trimap and decision maps are obtained based upon the information provided by these near- and far-focus activity maps. Finally, the fused image is achieved by using the enhanced decision maps and a fusion rule. The proposed multiscale image matting (MSIM) makes full use of the spatial consistency and the correlation among source images and therefore obtains superior performance at object boundaries compared to region-based methods. The performance of the proposed method is compared with some of the latest techniques through qualitative and quantitative evaluation.
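The decision-map idea, keeping per pixel whichever source shows higher focus activity as measured by the spatial gradient, reduces to a few lines. This is a naive per-pixel sketch without the matting, trimap, or consistency refinements the paper adds:

```python
import numpy as np

def focus_activity(img):
    """Spatial-gradient activity map via simple finite differences."""
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    return gx + gy

def fuse(a, b):
    """Per-pixel decision map: keep the pixel from the sharper source."""
    decision = focus_activity(a) >= focus_activity(b)
    return np.where(decision, a, b)

# a carries a sharp edge, b is flat (defocused-looking): the edge survives
a = np.zeros((8, 8)); a[:, 4:] = 1.0
b = np.full((8, 8), 0.5)
fused = fuse(a, b)
```

The matting stage in the paper exists precisely because this naive rule is noisy at object boundaries; it smooths the decision map while respecting spatial consistency.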

Dissertations on the topic "Source image"

1

Feideropoulou, Georgia. "Codage Conjoint Source-Canal des Sources Vidéo". PhD thesis, Télécom ParisTech, 2005. http://pastel.archives-ouvertes.fr/pastel-00001294.

Abstract:
This thesis proposes a joint source-channel coding scheme for video sequences transmitted over wireless channels. The joint source-channel coding system is based on a structured vector quantizer and a linear index assignment that simultaneously minimize the channel distortion and the source distortion. The vector quantizer, built from lattice constellations that satisfy the maximal-diversity property, minimizes the source distortion of a Gaussian source; the channel distortion is likewise minimized by the linear labeling. We studied the dependencies between wavelet coefficients from a t+2D decomposition, with and without motion estimation, in order to extend the joint source-channel coding scheme, developed for Gaussian sources, to the video domain, where the coefficient distribution is far from Gaussian. We propose a doubly stochastic model to capture these dependencies and apply it to error protection, predicting lost coefficients and thereby improving video quality. For a Gaussian channel, we develop two systems, one with uncoded linear labeling and one with linear labeling coded with Reed-Muller codes. We compare these two coding schemes with an unstructured scheme whose labeling is adapted to the channel and with a scalable video coder. For a non-selective Rayleigh channel with independent fades, the scheme becomes robust when a rotation matrix is applied before transmission over the channel.
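Why the index assignment (labeling) matters for channel distortion can be shown with a toy computation: the same 3-bit quantizer suffers a different mean-squared index error under single bit flips depending on how indices are mapped to bit patterns. An illustrative scalar sketch only; the thesis uses a linear labeling over lattice codebooks, not this example.

```python
def bitflip_cost(labeling, bits=3):
    """Mean squared index error caused by one flipped channel bit,
    averaged over all indices and bit positions."""
    inv = {code: idx for idx, code in enumerate(labeling)}
    n = len(labeling)
    total = 0
    for idx, code in enumerate(labeling):
        for b in range(bits):
            total += (inv[code ^ (1 << b)] - idx) ** 2
    return total / (n * bits)

natural = list(range(8))                 # index i -> binary code of i
gray = [i ^ (i >> 1) for i in range(8)]  # index i -> Gray code of i

print(bitflip_cost(natural))  # 7.0
print(bitflip_cost(gray))     # 9.0
```

Under squared-error index distortion the natural binary labeling beats the Gray labeling here, which is exactly why a labeling matched to the channel metric, like the linear assignment above, is designed jointly with the quantizer.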
2

van der Gracht, Joseph. "Partially coherent image enhancement by source modification". Diss., Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/13379.

3

Livadas, Gerassimos Michail. "Composite source models in image signal interpolation". Thesis, Imperial College London, 1988. http://hdl.handle.net/10044/1/47156.

4

Solano Solano, David. "Image quality analysis in dual-source CT". [S.l.: s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:16-opus-89891.

5

Subbalakshmi, K. P. "Joint source-channel decoding of variable-length encoded sources with applications to image transmission". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0013/NQ61684.pdf.

6

Shajahan, Sunoj. "Agricultural Field Applications of Digital Image Processing Using an Open Source ImageJ Platform". Diss., North Dakota State University, 2019. https://hdl.handle.net/10365/29711.

Abstract:
Digital image processing is one of the potential technologies used in precision agriculture to gather information, such as seed emergence, plant health, and phenology, from digital images. Despite its potential, the rate of adoption is slow due to limited accessibility, unsuitability to specific issues, unaffordability, and the high technical knowledge required of the clientele. Therefore, the development of open-source image processing applications that are task-specific, easy to use, require fewer inputs, and are rich in features will encourage adoption by users/farmers. The Fiji software, a free and open-source image processing platform based on ImageJ, was used in this application development study. A collection of four different agricultural field applications was selected to address existing issues and develop image processing tools by applying novel approaches and simple mathematical principles. First, an automated application, using a digital image and a "pixel-march" method, performed multiple radial measurements of sunflower floral components. At least 32 measurements for ray florets and eight for the disc were required statistically for accurate dimensions. Second, the color calibration of digital images addressed light intensity variations using a standard calibration chart and a color calibration matrix derived from selected color patches. Calibration using just three color patches (red, green, and blue) was sufficient to obtain images of uniform intensity. Third, plant stand counts and their spatial distribution were determined from UAS images with an accuracy of ≈96%, through a pixel-profile identification method and plant cluster segmentation. Fourth, soybean phenological stages from PhenoCam time-lapse imagery were analyzed and matched the manual visual observations; the green leaf index produced the minimum variations from its smoothed curve. The time of image capture and the PhenoCam distance had significant effects on the vegetation indices analyzed. A simplified approach using a kymograph was developed, which was quick and efficient for phenological observations. Based on the study, these tools can be equally applied to other scenarios, or new user-coded, user-friendly image processing tools can be developed to address specific requirements. In conclusion, these successful results demonstrate the suitability of developing task-specific, open-source, digital image processing tools for agricultural field applications.
United States. Agricultural Research Service
National Institute of Food and Agriculture (U.S.)
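A "pixel-march" radial measurement, stepping outward from a centre pixel until the object boundary is crossed, can be sketched as follows. A toy NumPy illustration of the marching idea only; function and parameter names are mine, not from the dissertation's code.

```python
import numpy as np

def pixel_march(mask, center, angle, max_steps=1000):
    """March outward from the centre one pixel at a time along a ray
    and return the radius (in pixels) at which the object mask is left."""
    cy, cx = center
    dy, dx = np.sin(angle), np.cos(angle)
    r = 0
    while r < max_steps:
        y = int(round(cy + r * dy))
        x = int(round(cx + r * dx))
        inside = 0 <= y < mask.shape[0] and 0 <= x < mask.shape[1] and mask[y, x]
        if not inside:
            return r
        r += 1
    return r

# a disc of radius 10: every radial measurement should come out near 10
mask = np.fromfunction(lambda y, x: (y - 32) ** 2 + (x - 32) ** 2 <= 100, (64, 64))
radii = [pixel_march(mask, (32, 32), a)
         for a in np.linspace(0, 2 * np.pi, 32, endpoint=False)]
```

Repeating the march at 32 evenly spaced angles mirrors the study's finding that about 32 radial measurements are needed for stable ray-floret dimensions.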
7

Soobhany, Ahmad Ryad. "Image source identification and characterisation for forensic analysis". Thesis, Keele University, 2013. http://eprints.keele.ac.uk/2301/.

Abstract:
Digital imaging devices, such as digital cameras or mobile phones, are prevalent in society. The images created by these devices can be used in the commission of crime. Source device identification is an emerging research area and involves the identification of artefacts that are left behind in an image by the camera pipeline. These artefacts can be used as digital signatures to identify the source device forensically. The type of digital signature considered in this thesis is the Sensor Pattern Noise (SPN), which consists mainly of the PRNU (Photo Response Non-Uniformity) of the imaging device. The PRNU is unique to each individual sensor, which can be extracted traditionally with a wavelet denoising filter and enhanced to attenuate unwanted artefacts. This thesis proposes a novel method to extract the PRNU of a digital image by using Singular Value Decomposition (SVD) to extract the digital signature. The extraction of the PRNU is performed using the homomorphic filtering technique, where the inherently nonlinear PRNU is transformed into an additive noise. The range of the energy of the PRNU is estimated, which makes it easier to separate from other polluting components to obtain a cleaner signature, as compared to extracting all the high frequency signals from an image. The image is decomposed by using SVD, which separates the image into ranks of descending order of energies. The estimated energy range of the PRNU is used to obtain the interesting ranks that are utilised to form part of the digital signature. A case study of an existing image analyser platform was performed by investigating its identification and classification results. The SVD based extraction method was tested by extracting image signatures from camera phones. The results of the experiments show that it is possible to determine the source device of digital images.
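The rank-band idea, decomposing the image with SVD and keeping only the ranks whose energies fall in the estimated PRNU range, can be sketched as below. Illustrative NumPy only: the band indices here are arbitrary, whereas the thesis estimates the energy range from the homomorphically filtered image.

```python
import numpy as np

def rank_band(img, lo, hi):
    """Rebuild an image from singular-value ranks lo..hi-1 only.
    SVD orders ranks by descending energy, so a band of middle/low
    ranks isolates a weak, signature-like residue."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    keep = np.zeros_like(s)
    keep[lo:hi] = s[lo:hi]
    return (U * keep) @ Vt  # equivalent to U @ diag(keep) @ Vt

img = np.random.rand(16, 16)
# the bands partition the image: low-rank part + remaining band = original
approx = rank_band(img, 0, 4) + rank_band(img, 4, 16)
```

Because reconstruction is linear in the singular values, the chosen band can be subtracted or correlated against reference signatures without affecting the other ranks.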
8

Aitken, David M. "The fallacy of single source fire support". Thesis, Monterey, Calif.: Springfield, Va.: Naval Postgraduate School; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FAitken.pdf.

9

Andersson, Tomas. "On error-robust source coding with image coding applications". Licentiate thesis, Stockholm: Department of Signals, Sensors and Systems, Royal Institute of Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4046.

10

Biggar, M. J. "Source coding of segmented digital image and video signals". Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/38235.


Books on the topic "Source image"

1

Thompson, Sean. Interactive image-source techniques for virtual acoustics. Ottawa: National Library of Canada, 2002.

2

Image Bank (Agency). The source book for art directors. New York: Image Bank, 1988.

3

Darshan Singh, 1950-, ed. Western image of the Sikh religion: A source book. New Delhi: National Book Organisation, 1999.

4

Goelker, Klaus. Gimp 2.6 for photographers: Image editing with open source software. Santa Barbara, CA: Rocky Nook, 2011.

5

Goelker, Klaus. GIMP 2 for photographers: Image editing with open source software. Santa Barbara, CA: Rocky Nook, 2008.

6

Dodgson, Terence Edwin. Source coded image data in the presence of channel errors. Birmingham: Aston University, Department of Electrical and Electronic Engineering, 1986.

7

Sayood, Khalid. Design of source coders and joint source/channel coders for noisy channels: Semi-annual status report ... May 15, 1987 - November 15, 1987. Greenbelt, Md: Instrument Division, Engineering Directorate, Goddard Space Flight Center, 1987.

8

Goddard Space Flight Center, Instrument Division, ed. Design of source coders and joint source/channel coders for noisy channels: Semi-annual status report ... May 15, 1987 - November 15, 1987. Greenbelt, Md: Instrument Division, Engineering Directorate, Goddard Space Flight Center, 1987.

9

Rafajłowicz, Ewaryst, Wojciech Rafajłowicz, and Andrzej Rusiecki. Algorytmy przetwarzania obrazów i wstęp do pracy z biblioteką OpenCV. Wrocław: Oficyna Wydawnicza Politechniki Wrocławskiej, 2009.

10

Cachia, Nicholas. The image of the good shepherd as a source for the spirituality of the ministerial priesthood. Roma: Pontificia Università gregoriana, 1997.


Book chapters on the topic "Source image"

1

Roy, Aniket, Rahul Dixit, Ruchira Naskar, and Rajat Subhra Chakraborty. "Camera Source Identification". In Digital Image Forensics, 11–26. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-10-7644-2_2.

2

Magnenat-Thalmann, Nadia, and Daniel Thalmann. "Complex light-source and illumination models". In Image Synthesis, 123–40. Tokyo: Springer Japan, 1987. http://dx.doi.org/10.1007/978-4-431-68060-4_7.

3

Atkinson, Gary A. "Two-Source Surface Reconstruction Using Polarisation". In Image Analysis, 123–35. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59129-2_11.

4

Kandaswamy, Chetak, Luís M. Silva, and Jaime S. Cardoso. "Source-Target-Source Classification Using Stacked Denoising Autoencoders". In Pattern Recognition and Image Analysis, 39–47. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-19390-8_5.

5

McInerney, Daniel, and Pieter Kempeneers. "Image Overviews, Tiling and Pyramids". In Open Source Geospatial Tools, 85–97. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-01824-9_7.

6

McInerney, Daniel, and Pieter Kempeneers. "Image (Re-)projections and Merging". In Open Source Geospatial Tools, 99–127. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-01824-9_8.

7

Kot, Alex C., und Hong Cao. „Image and Video Source Class Identification“. In Digital Image Forensics, 157–78. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-0757-7_5.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
8. Mandelli, Sara, Nicolò Bonettini and Paolo Bestagini. "Source Camera Model Identification". In Multimedia Forensics, 133–73. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_7.
Abstract: Every camera model acquires images in a slightly different way. This may be due to differences in lenses and sensors. Alternatively, it may be due to the way each vendor applies characteristic image processing operations, from white balancing to compression.
9. Marwick, Arthur. "Class: a Source-based Approach". In Class: Image and Reality, 1–18. London: Palgrave Macmillan UK, 1990. http://dx.doi.org/10.1007/978-1-349-20954-5_1.

10. Liu, Yuying, Yonggang Huang, Jun Zhang, Xu Liu and Hualei Shen. "Noisy Smoothing Image Source Identification". In Cyberspace Safety and Security, 135–47. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69471-9_10.

Conference papers on the topic "Source image"

1. Caldelli, Roberto, Irene Amerini, Francesco Picchioni and Matteo Innocenti. "Fast image clustering of unknown source images". In 2010 IEEE International Workshop on Information Forensics and Security (WIFS). IEEE, 2010. http://dx.doi.org/10.1109/wifs.2010.5711454.

2. Yu, Han, and Liang-Jian Deng. "Image Editing Via Searching Source Image". In The 2015 International Conference on Applied Mechanics, Mechatronics and Intelligent Systems (AMMIS2015). WORLD SCIENTIFIC, 2015. http://dx.doi.org/10.1142/9789814733878_0053.

3. Bobin, J., J. Starck, J. Rapin and A. Larue. "Sparse blind source separation for partially correlated sources". In 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 2014. http://dx.doi.org/10.1109/icip.2014.7026215.
4. Van Der Gracht, Joseph. "Partially coherent image enhancement by source modification". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/oam.1989.thgg1.
Abstract: We introduce a partially coherent optical image enhancement system capable of eliminating unwanted sinusoidal signals that are additive in object amplitude transmittance. The spatial distribution of the pupil mask in a Koehler illumination imaging system remains fixed while the source distribution is changed to block sinusoids of different spatial frequencies. The operating principle is a variation of dark field imaging. In dark field imaging, the pupil is chosen to block all bright regions of the source so that no undiffracted light is passed by the pupil. A similar principle can be employed to block sinusoids of different spatial frequencies. For a single sinusoid object, two spatially shifted images of the source are incident on the pupil. The source is chosen so that these shifted versions are completely blocked by the pupil to prevent that particular sinusoid from reaching the output image plane. The choice of a pseudorandom pupil distribution combined with the appropriate source distribution leads to good rejection of unwanted sinusoids while preserving image detail.
5. Zhao, Pu, Parikshit Ram, Songtao Lu, Yuguang Yao, Djallel Bouneffouf, Xue Lin and Sijia Liu. "Learning to Generate Image Source-Agnostic Universal Adversarial Perturbations". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/239.
Abstract: Adversarial perturbations are critical for certifying the robustness of deep learning models. A "universal adversarial perturbation" (UAP) can simultaneously attack multiple images, and thus offers a more unified threat model, obviating an image-wise attack algorithm. However, the existing UAP generator is underdeveloped when images are drawn from different image sources (e.g., with different image resolutions). Towards an authentic universality across image sources, we take a novel view of UAP generation as a customized instance of "few-shot learning", which leverages bilevel optimization and learning-to-optimize (L2O) techniques for UAP generation with improved attack success rate (ASR). We begin by considering the popular model agnostic meta-learning (MAML) framework to meta-learn a UAP generator. However, we see that the MAML framework does not directly offer the universal attack across image sources, requiring us to integrate it with another meta-learning framework of L2O. The resulting scheme for meta-learning a UAP generator (i) has better performance (50% higher ASR) than baselines such as Projected Gradient Descent, (ii) has better performance (37% faster) than the vanilla L2O and MAML frameworks (when applicable), and (iii) is able to simultaneously handle UAP generation for different victim models and data sources.
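For orientation, the Projected Gradient Descent baseline that the abstract above compares against can be sketched in a few lines: one shared perturbation is updated by gradient ascent on the loss over a whole batch of images and projected back onto an L-infinity ball after every step. The sketch below uses a toy linear softmax classifier in NumPy; all names (`uap_pgd`, the random weights `W`) are illustrative and not from the paper.

```python
import numpy as np

def uap_pgd(images, labels, W, eps=0.1, step=0.02, iters=50):
    """Toy universal adversarial perturbation via projected gradient
    ascent on a linear softmax classifier (logits = x @ W). A single
    shared perturbation is learned for the whole batch and projected
    onto the L-infinity ball of radius eps after each step."""
    delta = np.zeros(images.shape[1])
    for _ in range(iters):
        x = images + delta                          # apply shared perturbation
        logits = x @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)           # softmax probabilities
        onehot = np.eye(W.shape[1])[labels]
        grad = ((p - onehot) @ W.T).mean(axis=0)    # d(cross-entropy)/d(input)
        delta += step * np.sign(grad)               # ascend the loss
        delta = np.clip(delta, -eps, eps)           # project onto the eps-ball
    return delta

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))
images = rng.normal(size=(16, 8))
labels = (images @ W).argmax(axis=1)                # start from correct labels
delta = uap_pgd(images, labels, W)
fooled = ((images + delta) @ W).argmax(axis=1) != labels
print("attack success rate:", fooled.mean())
```

The paper's contribution is meta-learning the generator of such perturbations across image sources; this sketch only shows the single-source baseline it improves on.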
6. Damavandi, Hamidreza Ghasemi, Ananya Sen Gupta, Robert Nelson and Christopher Reddy. "Compressed Forensic Source Image Using Source Pattern Map". In 2016 Data Compression Conference (DCC). IEEE, 2016. http://dx.doi.org/10.1109/dcc.2016.108.

7. Mancini, Massimiliano, Samuel Rota Bulo, Barbara Caputo and Elisa Ricci. "Best Sources Forward: Domain Generalization through Source-Specific Nets". In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451318.
8. Bagchi, Swarnadeep, and Ruairí de Fréin. "Acoustic Source Localization Using Straight Line Approximations". In 24th Irish Machine Vision and Image Processing Conference. Irish Pattern Recognition and Classification Society, 2022. http://dx.doi.org/10.56541/ljrb7078.
Abstract: This short paper extends an acoustic signal delay-estimation method to the general anechoic scenario using image-processing techniques. The technique localizes acoustic speech sources by creating a matrix of phase-versus-frequency histograms, where equal phases are stacked in the appropriate bins. With larger delays and multiple sources coexisting in the same matrix, it becomes cluttered with activated bins. This results in high-intensity spots on the spectrogram, making source discrimination difficult. In this paper, we employ morphological filtering, chain-coding and straight-line approximations to suppress noise and enhance the target signal features. Lastly, the Hough transform is used for source localization. The resulting estimates are accurate, invariant to the sampling rate, and have applications in acoustic source separation.
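The final stage the abstract describes (a Hough transform over a cleaned binary map of activated bins) can be sketched as follows. This is a generic Hough line detector in NumPy, not the authors' implementation; the phase-histogram construction and morphological filtering are assumed to have happened upstream, and the function name is illustrative.

```python
import numpy as np

def hough_peak(binary, n_theta=180):
    """Return (theta, rho) of the strongest straight line in a binary
    map using a standard Hough accumulator: every foreground pixel
    votes for all lines rho = x*cos(theta) + y*sin(theta) through it,
    and the most-voted (theta, rho) cell wins."""
    ys, xs = np.nonzero(binary)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*binary.shape)))
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    for t, th in enumerate(thetas):
        # shift rho by diag so the accumulator index is non-negative
        rhos = np.round(xs * np.cos(th) + ys * np.sin(th)).astype(int) + diag
        np.add.at(acc, (t, rhos), 1)                # unbuffered vote counting
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], r - diag

# A vertical line of activated bins at x = 5 is recovered as
# theta = 0, rho = 5.
img = np.zeros((20, 20), dtype=bool)
img[:, 5] = True
theta, rho = hough_peak(img)
print(theta, rho)
```

In the paper's setting the "image" would be the denoised phase-versus-frequency histogram matrix, and the recovered line parameters encode the inter-channel delay.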
9. Lehn, Waldemar H., and R. Edgar Wallace. "Continuous-Tone Mirage Images computed from Digitised Source Photographs". In Meteorological Optics. Washington, D.C.: Optica Publishing Group, 1986. http://dx.doi.org/10.1364/mo.1986.thb4.
Abstract: The properties of mirage images corresponding to a given atmospheric model can be graphically summarised in two distinct ways: the image space, and the transfer characteristic. The former shows the apparent heights of surfaces of constant elevation, and is useful for estimating images of objects that possess considerable longitudinal extent, i.e. varying distance from the observer. The transfer characteristic is more convenient for objects concentrated near a single plane at a fixed distance from the observer. Detailed image construction, however, is laborious with both representations: typically, the points on a line drawing of the object are individually mapped into new apparent positions and joined by straight lines to create a line drawing of the image. Entry of object data is slow; either individual x,y coordinates are entered by hand, or the object can be traced on a digitising tablet. Because the computed image is still only a line drawing, it is not entirely satisfactory. Certainly it is not convincingly realistic when compared with a direct photograph of a mirage.
10. Cheng, Yih-Shyang. "Lau effect with cross gratings". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1986. http://dx.doi.org/10.1364/oam.1986.mff3.
Abstract: A grating, when illuminated by a quasi-monochromatic point source, can form self-images downstream. With a periodic, spatially incoherent source illumination, when the self-images due to all the source points coincide, a high contrast gratinglike structure is formed. In this paper, the Fresnel approximation is used to calculate the diffraction pattern of the object. The resulting intensity pattern is essentially the correlation between the intensity distribution of the spatially incoherent source and the scaled intensity distribution of the Fresnel diffraction of the object. By suitably choosing the plane of observation and the source structure to object structure ratio, a high contrast cross-gratinglike structure can be obtained. Although the image pattern is the correlation between the source structure and the scaled object structure, the magnification ratio between the self-image and the source structure is the same as that in conventional imaging. We can then consider the self-image to be the image of the source structure, imaged by the object grating. When the illuminating source is imaged to the right of the object grating, the relationship between the self-image and the new source is found to be the same as before.

Organizational reports on the topic "Source image"

1. Phoha, Shashi, and Mendel Schmiedekamp. Semantic Source Coding for Flexible Lossy Image Compression. Fort Belvoir, VA: Defense Technical Information Center, March 2007. http://dx.doi.org/10.21236/ada464658.

2. Gunther, John E., and Ronald G. Hegg. Liquid Crystal Matrix Image Source for Helmet Mounted Displays (HMDs). Fort Belvoir, VA: Defense Technical Information Center, April 1988. http://dx.doi.org/10.21236/ada326374.
3. Conery, Ian, Brittany Bruder, Connor Geis, Jessamin Straub, Nicholas Spore and Katherine Brodie. Applicability of CoastSnap, a crowd-sourced coastal monitoring approach for US Army Corps of Engineers district use. Engineer Research and Development Center (U.S.), September 2023. http://dx.doi.org/10.21079/11681/47568.
Abstract: This US Army Engineer Research and Development Center, Coastal and Hydraulics Laboratory, technical report details the pilot deployment, accuracy evaluation, and best practices of the citizen-science, coastal-image monitoring program CoastSnap. Despite the need for regular observational data, many coastlines are monitored infrequently due to cost and personnel, and this cell-phone-image-based approach represents a new potential data source for districts in addition to providing an outreach opportunity for the public. Requiring minimal hardware and signage, the system is simple to install but requires user-image processing. Analysis shows the CoastSnap-derived shorelines compare well to real-time kinematic and lidar-derived shorelines during low-to-moderate wave conditions (root mean square errors [RMSEs] <10 m). During high-wave conditions, errors are higher (RMSE up to 18 m) but are improved when incorporating wave run-up. Beyond shoreline quantification, images provide other qualitative information such as storm-impact characteristics and the timing of beach-scarp formation. Ultimately, the citizen-science tool is a viable low-cost option for districts monitoring shorelines and tracking the evolution of coastal projects such as beach nourishments.
4. Miles, Richard B. Development of Pulse-Burst Laser Source and Digital Image Processing for Measurements of High-Speed, Time-Evolving Flow. Fort Belvoir, VA: Defense Technical Information Center, August 2000. http://dx.doi.org/10.21236/ada381328.

5. Miles, Richard B. AASERT: Development of Pulse-Burst Laser Source and Digital Image Processing for Measurements of High-Speed, Time-Evolving Flow. Fort Belvoir, VA: Defense Technical Information Center, August 2000. http://dx.doi.org/10.21236/ada383154.

6. Main, Robert G., John F. Long and Wendi A. Beane. The Effect of Video Image Size and Screen Refresher Rate on Content Mastery and Source Credibility in Distance Learning Systems. Fort Belvoir, VA: Defense Technical Information Center, May 1998. http://dx.doi.org/10.21236/ada370562.

7. Barrera, C., and M. Moran. Experimental Component Characterization, Monte-Carlo-Based Image Generation and Source Reconstruction for the Neutron Imaging System of the National Ignition Facility. Office of Scientific and Technical Information (OSTI), August 2007. http://dx.doi.org/10.2172/924968.
8. Lasko, Kristofer, and Sean Griffin. Monitoring Ecological Restoration with Imagery Tools (MERIT): Python-based decision support tools integrated into ArcGIS for satellite and UAS image processing, analysis, and classification. Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40262.
Abstract: Monitoring the impacts of ecosystem restoration strategies requires both short-term and long-term land surface monitoring. The combined use of unmanned aerial systems (UAS) and satellite imagery enable effective landscape and natural resource management. However, processing, analyzing, and creating derivative imagery products can be time consuming, manually intensive, and cost prohibitive. In order to provide fast, accurate, and standardized UAS and satellite imagery processing, we have developed a suite of easy-to-use tools integrated into the graphical user interface (GUI) of ArcMap and ArcGIS Pro as well as open-source solutions using NodeOpenDroneMap. We built the Monitoring Ecological Restoration with Imagery Tools (MERIT) using Python and leveraging third-party libraries and open-source software capabilities typically unavailable within ArcGIS. MERIT will save US Army Corps of Engineers (USACE) districts significant time in data acquisition, processing, and analysis by allowing a user to move from image acquisition and preprocessing to a final output for decision-making with one application. Although we designed MERIT for use in wetlands research, many tools have regional or global relevancy for a variety of environmental monitoring initiatives.
9. Gantzer, Clark J., Shmuel Assouline and Stephen H. Anderson. Synchrotron CMT-measured soil physical properties influenced by soil compaction. United States Department of Agriculture, February 2006. http://dx.doi.org/10.32747/2006.7587242.bard.
Abstract: Methods were developed to quantify pore connectivity, tortuosity, and pore size as altered by soil compaction. Air-dry soil cores were scanned for x-ray computed microtomography at the GeoSoilEnviroCARS sector of the Advanced Photon Source (APS) at the Argonne facility; data were collected on the APS bending magnet Sector 13. Soil sample cores of 5 by 5 mm were studied, and skeletonization algorithms in the 3DMA-Rock software of Lindquist et al. were used to extract pore structure. We numerically investigated the spatial distributions of six geometrical characteristics of the pore structure of repacked Hamra soil from three-dimensional synchrotron computed microtomography (CMT) images. We analyzed images representing core volumes of 58.3 mm³ with average porosities of 0.44, 0.35, and 0.33. Cores were packed with < 2 mm and < 0.5 mm sieved soil and imaged at 9.61 µm resolution. Spatial distributions for pore path length, coordination number, pore-throat size, and nodal pore volume were obtained; these were computed using a three-dimensional medial-axis analysis of the void space in the image. We used a newly developed aggressive throat computation to find the throat and pore partitioning needed for higher-porosity media such as soil. Results show that the coordination number distribution measured from the medial axis is reasonably fit by the exponential relation P(C) = 10^(−C/C₀). Data for the characteristic area were also reasonably well fit by the relation P(A) = 10^(−A/A₀). Results indicate that compression preferentially affects the largest pores, reducing them in size. When compaction reduced porosity from 44% to 33%, the average pore volume fell by 30% and the average pore-throat area by 26%. Compaction increased the shortest-path interface tortuosity by about 2%.
Quantitative morphology of the compaction-induced soil structure alterations shows that the resolution is sufficient to discriminate between soil cores. This study shows that CMT analysis can provide information to assist in assessing soil management practices intended to ameliorate soil compaction.
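The exponential relations reported in the abstract above, P(C) = 10^(−C/C₀), plot as straight lines through the origin on a log₁₀ scale, so the decay constant C₀ can be recovered with a one-parameter least-squares fit of log₁₀ P against C. A minimal sketch on synthetic data; the function name and the value C₀ = 4 are illustrative, not from the study.

```python
import numpy as np

def fit_decay_constant(c, p):
    """Fit P(C) = 10**(-C / C0) by least squares on a log10 scale:
    log10 P = -(1/C0) * C is a line through the origin, so the fitted
    slope gives C0 directly."""
    slope = np.sum(c * np.log10(p)) / np.sum(c * c)
    return -1.0 / slope

# Synthetic coordination-number data drawn from the model with C0 = 4.
c = np.arange(1, 11, dtype=float)
p = 10.0 ** (-c / 4.0)
print(fit_decay_constant(c, p))  # ≈ 4.0 on this noise-free data
```

With measured histograms, the same fit would be applied to the empirical bin frequencies; forcing the line through the origin matches the normalization P(0) = 1 implied by the model.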
10. Brown, Shannon, Robert Fischer, Nicholas Spore, Ian Conery, Jessamin Straub, Annika O’Dea, Brittany Bruder and Katherine Brodie. Evaluating topographic reconstruction accuracy of Planet Lab’s stereo satellite imagery. Engineer Research and Development Center (U.S.), September 2024. http://dx.doi.org/10.21079/11681/49213.
Abstract: The goal of this Coastal and Hydraulics Engineering Technical Note (CHETN) is to document initial results to derive topography on the beachface in the northern Outer Banks, North Carolina, utilizing Planet Labs’ SkySat stereo panchromatic imagery processed in Agisoft Metashape. This technical note will provide an initial evaluation into whether the SkySat imagery is a suitable image source for satellite Structure from Motion (SfM) algorithms as well as whether these data should be explored as a federal beach project monitoring tool. Depending on required accuracy, these data have the potential to aid coastal scientists, managers, and US Army Corps of Engineers (USACE) engineers in understanding the current state of their coastlines and employing cost-effective adaptive management techniques.