A selection of scholarly literature on the topic "Transformation of image"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose the type of source:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Transformation of image".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read the online annotation of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Transformation of image"

1

Kim, J., T. Kim, D. Shin and S. H. Kim. "ROBUST MOSAICKING OF UAV IMAGES WITH NARROW OVERLAPS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (06.06.2016): 879–83. http://dx.doi.org/10.5194/isprsarchives-xli-b1-879-2016.

Abstract:
This paper considers fast and robust mosaicking of UAV images under a circumstance that each UAV images have very narrow overlaps in-between. Image transformation for image mosaicking consists of two estimations: relative transformations and global transformations. For estimating relative transformations between adjacent images, projective transformation is widely considered. For estimating global transformations, panoramic constraint is widely used. While perspective transformation is a general transformation model in 2D-2D transformation, this may not be optimal with weak stereo geometry such as images with narrow overlaps. While panoramic constraint works for reliable conversion of global transformation for panoramic image generation, this constraint is not applicable to UAV images in linear motions. For these reasons, a robust approach is investigated to generate a high quality mosaicked image from narrowly overlapped UAV images. For relative transformations, several transformation models were considered to ensure robust estimation of relative transformation relationship. Among them were perspective transformation, affine transformation, coplanar relative orientation, and relative orientation with reduced adjustment parameters. Performance evaluation for each transformation model was carried out. The experiment results showed that affine transformation and adjusted coplanar relative orientation were superior to others in terms of stability and accuracy. For global transformation, we set initial approximation by converting each relative transformation to a common transformation with respect to a reference image. In future work, we will investigate constrained relative orientation for enhancing geometric accuracy of image mosaicking and bundle adjustments of each relative transformation model for optimal global transformation.
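The two-step estimation described above (pairwise transformations between adjacent images, then global transformations chained to a common reference image) can be sketched in a few lines. This is not the authors' implementation: the use of SIFT features, Lowe's ratio test, RANSAC, and an affine pairwise model is an assumption, chosen here only because the experiments above favour affine models under narrow overlaps.

```python
# Minimal sketch of pairwise + global transformation estimation for mosaicking.
# Illustrative only; not the method published in the paper above.
import cv2
import numpy as np

def pairwise_affine(img_a, img_b):
    """Estimate a 3x3 (affine) transform mapping img_b into img_a's frame."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher().knnMatch(des_b, des_a, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
    src = np.float32([kp_b[m.queryIdx].pt for m in good])
    dst = np.float32([kp_a[m.trainIdx].pt for m in good])
    affine, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                     ransacReprojThreshold=3.0)
    return np.vstack([affine, [0.0, 0.0, 1.0]])  # promote to 3x3 so transforms chain

def global_transforms(images, ref_index=0):
    """Chain pairwise transforms so every image maps into the reference frame."""
    T = [np.eye(3) for _ in images]
    for i in range(ref_index + 1, len(images)):      # images after the reference
        T[i] = T[i - 1] @ pairwise_affine(images[i - 1], images[i])
    for i in range(ref_index - 1, -1, -1):           # images before the reference
        T[i] = T[i + 1] @ pairwise_affine(images[i + 1], images[i])
    return T
```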
2

Kim, J., T. Kim, D. Shin and S. H. Kim. "ROBUST MOSAICKING OF UAV IMAGES WITH NARROW OVERLAPS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (06.06.2016): 879–83. http://dx.doi.org/10.5194/isprs-archives-xli-b1-879-2016.

Abstract:
This paper considers fast and robust mosaicking of UAV images under a circumstance that each UAV images have very narrow overlaps in-between. Image transformation for image mosaicking consists of two estimations: relative transformations and global transformations. For estimating relative transformations between adjacent images, projective transformation is widely considered. For estimating global transformations, panoramic constraint is widely used. While perspective transformation is a general transformation model in 2D-2D transformation, this may not be optimal with weak stereo geometry such as images with narrow overlaps. While panoramic constraint works for reliable conversion of global transformation for panoramic image generation, this constraint is not applicable to UAV images in linear motions. For these reasons, a robust approach is investigated to generate a high quality mosaicked image from narrowly overlapped UAV images. For relative transformations, several transformation models were considered to ensure robust estimation of relative transformation relationship. Among them were perspective transformation, affine transformation, coplanar relative orientation, and relative orientation with reduced adjustment parameters. Performance evaluation for each transformation model was carried out. The experiment results showed that affine transformation and adjusted coplanar relative orientation were superior to others in terms of stability and accuracy. For global transformation, we set initial approximation by converting each relative transformation to a common transformation with respect to a reference image. In future work, we will investigate constrained relative orientation for enhancing geometric accuracy of image mosaicking and bundle adjustments of each relative transformation model for optimal global transformation.
3

Sempio, J. N. H., R. K. D. Aranas, B. P. Lim, B. J. Magallon, M. E. A. Tupas and I. A. Ventura. "ASSESSMENT OF DIFFERENT IMAGE TRANSFORMATION METHODS ON DIWATA-1 SMI IMAGES USING STRUCTURAL SIMILARITY MEASURE". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W19 (23.12.2019): 393–400. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w19-393-2019.

Abstract:
Abstract. This paper aims to provide a qualitative assessment of different image transformation parameters as applied on images taken by the spaceborne multispectral imager (SMI) sensor installed in Diwata-1, the Philippines’ first Earth observation microsatellite, with the aim of determining the order of transformation that is sufficient for operationalization purposes. Images of the Palawan area were subjected to different image transformations by manual georeferencing using QGIS 3, and cloud masks generated and applied to remove the effects of clouds. The resulting images were then subjected to structural similarity (SSIM) tests using resampled and cloud masked Landsat 8 images of the same area to generate SSIM indices, which are then used as a quantitative means to assess the best performing transformation. The results of this study point to all transformed images having good SSIM ratings with their Landsat 8 counterparts, indicating that features shown in a Diwata-1 SMI image are structurally similar to the same features in a resampled Landsat 8 data. This implies that for Diwata-1 data processing operationalization purposes, higher order transformations, with the necessary effort to implement them, offer little advantage to lower order counterparts.
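The assessment loop above (compare each transformed image with a resampled reference and rank the transformations by SSIM) can be illustrated with scikit-image; the file paths, resampling step, and grayscale conversion below are placeholders rather than the paper's actual pipeline.

```python
# Sketch: SSIM between a transformed image and a resampled reference image.
import numpy as np
from skimage import io
from skimage.transform import resize
from skimage.metrics import structural_similarity as ssim

def ssim_against_reference(candidate_path, reference_path):
    cand = io.imread(candidate_path, as_gray=True).astype(np.float64)
    ref = io.imread(reference_path, as_gray=True).astype(np.float64)
    ref = resize(ref, cand.shape, anti_aliasing=True)  # resample to a common grid
    return ssim(cand, ref, data_range=cand.max() - cand.min())

# Hypothetical usage: rank transformation orders by their SSIM index.
# scores = {order: ssim_against_reference(f"smi_order{order}.tif", "landsat8.tif")
#           for order in (1, 2, 3)}
```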
4

Kim, Jae-In, Hyun-cheol Kim and Taejung Kim. "Robust Mosaicking of Lightweight UAV Images Using Hybrid Image Transformation Modeling". Remote Sensing 12, no. 6 (20.03.2020): 1002. http://dx.doi.org/10.3390/rs12061002.

Abstract:
This paper proposes a robust feature-based mosaicking method that can handle images obtained by lightweight unmanned aerial vehicles (UAVs). The imaging geometry of small UAVs can be characterized by unstable flight attitudes and low flight altitudes. These can reduce mosaicking performance by causing insufficient overlaps, tilted images, and biased tiepoint distributions. To solve these problems in the mosaicking process, we introduce the tiepoint area ratio (TAR) as a geometric stability indicator and orthogonality as an image deformation indicator. The proposed method estimates pairwise transformations with optimal transformation models derived by geometric stability analysis between adjacent images. It then estimates global transformations from optimal pairwise transformations that maximize geometric stability between adjacent images and minimize mosaic deformation. The valid criterion for the TAR in selecting an optimal transformation model was found to be about 0.3 from experiments with two independent image datasets. The results of a performance evaluation showed that the problems caused by the imaging geometry characteristics of small UAVs could actually occur in image datasets and showed that the proposed method could reliably produce image mosaics for image datasets obtained in both general and extreme imaging environments.
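The abstract does not define the tiepoint area ratio (TAR) precisely; the sketch below assumes it is the ratio of the tiepoints' convex-hull area to the image area, and the choice of model on either side of the reported 0.3 threshold is likewise only an illustration.

```python
# Sketch of a TAR-style geometric stability indicator (definition assumed, see above).
import numpy as np
from scipy.spatial import ConvexHull

def tiepoint_area_ratio(tiepoints_xy, image_width, image_height):
    """tiepoints_xy: (N, 2) array of tiepoint coordinates in one image."""
    points = np.asarray(tiepoints_xy, dtype=float)
    if len(points) < 3:
        return 0.0                                     # no area can be spanned
    hull = ConvexHull(points)
    return hull.volume / (image_width * image_height)  # for 2-D points, .volume is the area

def choose_pairwise_model(tar, threshold=0.3):
    """Use a richer model only when the tiepoint geometry looks stable enough."""
    return "projective" if tar >= threshold else "affine"
```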
5

Sarid, Orly, and Ephrat Huss. "Image formation and image transformation". Arts in Psychotherapy 38, no. 4 (September 2011): 252–55. http://dx.doi.org/10.1016/j.aip.2011.07.001.
6

Liu, Hongbing, Gengyi Liu, Xuewen Ma and Daohua Liu. "Training dictionary by granular computing with L∞-norm for patch granule–based image denoising". Journal of Algorithms & Computational Technology 12, no. 2 (02.03.2018): 136–46. http://dx.doi.org/10.1177/1748301818761131.

Abstract:
Considering objects at different granularities reflects how people commonly perform recognition, and granular computing embodies the transformation between different granularity spaces. We present an image denoising algorithm that uses a dictionary trained by granular computing with the L∞-norm and realizes three transformations: (1) the transformation from image space to patch granule space, (2) the transformation between granule spaces with different granularities, and (3) the transformation from patch granule space back to image space. On eight natural images, granular computing with the L∞-norm achieved a peak signal-to-noise ratio (PSNR) comparable to BM3D and patch-group-prior-based denoising.
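Transformations (1) and (3) named in this abstract, from image space to patch-granule space and back, can be sketched as plain patch extraction and overlap-averaged reconstruction. The dictionary training with the L∞-norm itself is not reproduced, and the patch size and stride below are arbitrary.

```python
# Sketch: image <-> patch-granule transformations (the dictionary step is omitted).
import numpy as np

def image_to_patches(img, size=8, stride=4):
    """Slide a window over a 2-D image and stack the patches as rows."""
    h, w = img.shape
    patches, coords = [], []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(img[y:y + size, x:x + size].ravel())
            coords.append((y, x))
    return np.array(patches), coords

def patches_to_image(patches, coords, shape, size=8):
    """Average overlapping (e.g. denoised) patches back into an image."""
    acc = np.zeros(shape, dtype=np.float64)
    cnt = np.zeros(shape, dtype=np.float64)
    for patch, (y, x) in zip(patches, coords):
        acc[y:y + size, x:x + size] += patch.reshape(size, size)
        cnt[y:y + size, x:x + size] += 1.0
    return acc / np.maximum(cnt, 1.0)
```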
7

Wang, Nannan, Jie Li, Dacheng Tao, Xuelong Li and Xinbo Gao. "Heterogeneous image transformation". Pattern Recognition Letters 34, no. 1 (January 2013): 77–84. http://dx.doi.org/10.1016/j.patrec.2012.04.005.
8

Hou, Dongdong, Weiming Zhang and Nenghai Yu. "Image camouflage by reversible image transformation". Journal of Visual Communication and Image Representation 40 (October 2016): 225–36. http://dx.doi.org/10.1016/j.jvcir.2016.06.018.
9

Mat Jizat, Jessnor Arif, Ahmad Fakhri Ab. Nasir, Anwar P. P. Abdul Majeed and Edmund Yuen. "Effect of Image Compression using Fast Fourier Transformation and Discrete Wavelet Transformation on Transfer Learning Wafer Defect Image Classification". MEKATRONIKA 2, no. 1 (05.06.2020): 16–22. http://dx.doi.org/10.15282/mekatronika.v2i1.6704.

Abstract:
Automated inspection machines for wafer defects usually capture thousands of large images to preserve the detail of defect features. However, most transfer learning architectures require smaller input images, so proper compression is needed to preserve the defect features while maintaining acceptable classification accuracy. This paper reports on the effect of image compression using the Fast Fourier Transformation and the Discrete Wavelet Transformation on transfer learning wafer defect image classification. A total of 500 images in 5 classes (4 defect classes and 1 non-defect class) were split in a 60:20:20 ratio for training, validation and testing with InceptionV3 and a Logistic Regression classifier. The input images were compressed using the Fast Fourier Transformation and the Discrete Wavelet Transformation with 4-level decomposition and the Daubechies 4 wavelet family, at compression rates of 50%, 75%, 90%, 95%, and 99%. The Fast Fourier Transformation compression showed an increase from 89% to 94% in classification accuracy up to 95% compression, while the Discrete Wavelet Transformation showed consistent classification accuracy throughout, albeit with diminishing image quality. From the experiment, it can be concluded that FFT and DWT image compression are reliable methods of image compression for grayscale image classification: memory space dropped by 56.1% while classification accuracy increased by 5.6% with 95% FFT compression, and memory space dropped by 55.6% while classification accuracy increased by 2.2% with 50% DWT compression.
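A minimal sketch of the two compression schemes compared above: keep only the largest coefficients of either an FFT or a 4-level db4 wavelet decomposition of a grayscale image. Thresholding by coefficient magnitude is an assumption about how the compression rates were realized, not the paper's stated procedure.

```python
# Sketch: FFT and DWT compression of a grayscale image by keeping the largest coefficients.
import numpy as np
import pywt

def fft_compress(img, keep=0.05):
    """keep=0.05 retains the top 5% of coefficients, i.e. 95% compression."""
    coeffs = np.fft.fft2(np.asarray(img, dtype=np.float64))
    magnitudes = np.sort(np.abs(coeffs).ravel())
    threshold = magnitudes[int((1.0 - keep) * (magnitudes.size - 1))]
    return np.real(np.fft.ifft2(np.where(np.abs(coeffs) >= threshold, coeffs, 0)))

def dwt_compress(img, keep=0.5, wavelet="db4", level=4):
    """4-level Daubechies-4 decomposition, keeping the largest detail coefficients."""
    coeffs = pywt.wavedec2(np.asarray(img, dtype=np.float64), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    magnitudes = np.sort(np.abs(arr).ravel())
    threshold = magnitudes[int((1.0 - keep) * (magnitudes.size - 1))]
    arr = np.where(np.abs(arr) >= threshold, arr, 0)
    return pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
```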
10

G., Sindhu Madhuri, and Indra Gandhi M. P. "New Image Registration Techniques: Development and Comparative Analysis". International Journal of Emerging Research in Management and Technology 6, no. 7 (29.06.2018): 146. http://dx.doi.org/10.23956/ijermt.v6i7.204.

Abstract:
This research work attempts the design and development of new image registration techniques using complex mathematical transformation functions, motivated by the need to measure the performance of image registration complexity. The new techniques are built on the Radon and slant transformation functions, chosen for their importance, and the geometric functions of rotation and translation are considered to gain better insight into the complex image registration process. The newly developed techniques are evaluated and analyzed on the openly available Lena, Cameraman and VegCrop images, and their accuracy is measured with the widely used metrics of RMSE, PSNR and entropy. The results obtained after successful image registration are compared and presented. They show that the new image registration techniques using Radon and slant transformation functions with rotation and translation are superior and useful for this purpose in the digital image processing domain. Finally, an effort is made to develop image registration techniques that can extract intelligence embedded in images via complex transformation functions, and to measure their performance.
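The abstract does not detail how the Radon and slant transformation functions are applied; as one hedged illustration of Radon-based registration, the sketch below recovers a relative rotation by correlating the angular energy profiles of two sinograms. The slant-transform variant and the translation estimate are not shown, and the recovered angle is only defined up to the 180° periodicity of the sinogram.

```python
# Sketch: estimate the relative rotation between two images from their Radon transforms.
import numpy as np
from skimage.transform import radon

def estimate_rotation(reference, rotated, angles=np.arange(0.0, 180.0, 0.5)):
    sin_ref = radon(reference, theta=angles, circle=False)
    sin_rot = radon(rotated, theta=angles, circle=False)
    profile_ref = sin_ref.var(axis=0)       # projection variance as a function of angle
    profile_rot = sin_rot.var(axis=0)
    # A rotation of the image circularly shifts the angular profile.
    scores = [np.dot(profile_ref, np.roll(profile_rot, s)) for s in range(len(angles))]
    return angles[int(np.argmax(scores))]   # best circular shift, in degrees
```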

Dissertations on the topic "Transformation of image"

1

DeMeola, Christina. "Tattoo: Image and Transformation". Thesis, Pacifica Graduate Institute, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10784168.

Abstract:

This thesis uses heuristic and alchemical hermeneutic methodologies and a depth psychological perspective to examine the metaphor and experience of tattoo. The history of tattoos and ideas around healing are explored, as well as the author’s own healing and transformation through multiple tattoo experiences. The author’s analysis illustrates how a tattoo may be not only representative of a snapshot of the psyche in a moment in time, but might also move the psyche toward healing through the exploration of the archetypal energy in the image. In addition, the author explores how the modification of the body has the capacity to change the emotional and psychological relationship to one’s body.

2

LEME, ROBERTO BETIM PAES. "CONSUMPTION AND TRANSFORMATION OF SURFING IMAGE". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=32609@1.

Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE SUPORTE À PÓS-GRADUAÇÃO DE INSTS. DE ENSINO
The study proposes a possible relationship between the publishing and the advertising discourses in the idealization of an image of the practice of surfing present in everyday life. More specifically, the work presents a reflection on the construction of this practice from those who live and consume products tied to it. Therefore, a research method with an anthropological approach was developed. After observing surfing in advertising, specialized publications were analyzed in order to understand what, and to whom, the editors of these media were speaking. We observed that consumers' lifestyle and habits can give important clues to understanding the design adopted in the studied graphic material. We believe that the practice of surfing, which is referred to throughout the text, carries important elements in the characterization of this individual.
3

Bourque, Eric. "Image-based procedural texture matching and transformation". Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=100327.

Abstract:
In this thesis, we present an approach to finding a procedural representation of a texture to replicate a given texture image which we call image-based procedural texture matching. Procedural representations are frequently used for many aspects of computer generated imagery, however, the ability to use procedural textures is limited by the difficulty inherent in finding a suitable procedural representation to match a desired texture. More importantly, the process of determining an appropriate set of parameters necessary to approximate the sample texture is a difficult task for a graphic artist.
The textural characteristics of many real world objects change over time, so we are therefore interested in how textured objects in a graphical animation could also be made to change automatically. We would like this automatic texture transformation to be based on different texture samples in a time-dependant manner. This notion, which is a natural extension of procedural texture matching, involves the creation of a smoothly varying sequence of texture images, while allowing the graphic artist to control various characteristics of the texture sequence.
Given a library of procedural textures, our approach uses a perceptually motivated texture similarity measure to identify which procedural textures in the library may produce a suitable match. Our work assumes that at least one procedural texture in the library is capable of approximating the desired texture. Because exhaustive search of all of the parameter combinations for each procedural texture is not computationally feasible, we perform a two-stage search on the candidate procedural textures. First, a global search is performed over pre-computed samples from the given procedural texture to locate promising parameter settings. Secondly, these parameter settings are optimised using a local search method to refine the match to the desired texture.
The characteristics of a procedural texture generally do not vary uniformly for uniform parameter changes. That is, in some areas of the parameter domain of a procedural texture (the set of all valid parameter settings for the given procedural texture) small changes may produce large variations in the resulting texture, while in other areas the same changes may produce no variation at all. In this thesis, we present an adaptive random sampling algorithm which captures the texture range (the set of all images a procedural texture can produce) of a procedural texture by maintaining a sampling density which is consistent with the amount of change occurring in that region of the parameter domain.
Texture transformations may not always be contained to a single procedural texture, and we therefore describe an approach to finding transitional points from one procedural texture to another. We present an algorithm for finding a path through the texture space formed from combining the texture range of the relevant procedural textures and their transitional points.
Several examples of image-based texture matching, and texture transformations are shown. Finally, potential limitations of this work as well as future directions are discussed.
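The two-stage search described above, a coarse global pass over pre-computed parameter samples followed by local refinement, can be sketched generically. The procedural texture renderer, its parameter samples, and the perceptual distance function are placeholders, and Nelder-Mead is only one possible local optimizer.

```python
# Sketch: two-stage procedural texture parameter matching (global sample scan + local refinement).
import numpy as np
from scipy.optimize import minimize

def match_texture(render, distance, target, presampled_params):
    """render(params) -> texture image; distance(a, b) -> perceptual dissimilarity."""
    # Stage 1: global search over pre-computed parameter samples.
    best = min(presampled_params, key=lambda p: distance(render(p), target))
    # Stage 2: local refinement around the most promising sample.
    result = minimize(lambda p: distance(render(p), target),
                      x0=np.asarray(best, dtype=float), method="Nelder-Mead")
    return result.x, result.fun
```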
4

Sivaramakrishna, Radhika. "Breast image registration using a textural transformation". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq23666.pdf.
5

Cook, Anthony John. "Digital image processing using colour space transformation". Thesis, University of Hertfordshire, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323433.

Abstract:
The purpose of the work is to explore the feasibility of devising a computer system that implements the desirable effects of a photographic filter and provides an environment for colour filter design for image processing. Using conversion from RGB to the CIELUV colour space a new method for the implementation of photographic filter as a digital filter is described. A filter is implemented by converting image pixel rgb values into CIELUV (u', v') and L* values and operates using the visual wavelength values provided by the (u', v') chromaticity diagram. However, the (u', v') diagram cannot provide wavelength values for pixels that correspond to (u', v') points in the `purple line' sector of the diagram. These pixels are allocated wavelengths by means of a new wavelengths scale that makes it possible for the filter to process any pixel in a digital image. Filter transmittance data for visual spectrum wavelengths is obtained from published tables. The transmittance data for purple sector pixels is provided by a colour model of the (u', v') chromaticity diagram. The system is evaluated by means of the Macbeth ColorChecker chart and the use of physical measurements. The extension of the CIELUV diagram with an equivalent wavelength scale provides a new environment for the enhancement and manipulation of digital colour images.
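The working space described above, CIELUV L* together with the CIE 1976 (u', v') chromaticity diagram, can be reached from RGB pixel values with standard CIE formulas via an sRGB-to-XYZ step, as sketched below. The thesis' filter transmittance lookup and its extended wavelength scale for the purple sector are not reproduced.

```python
# Sketch: map RGB pixels to CIE 1976 (u', v') chromaticity coordinates and L*.
import numpy as np
from skimage.color import rgb2xyz

def rgb_to_uv_lightness(rgb):
    """rgb: (..., 3) array with values in [0, 1]. Returns (u', v', L*)."""
    xyz = rgb2xyz(rgb)
    X, Y, Z = xyz[..., 0], xyz[..., 1], xyz[..., 2]
    denom = X + 15.0 * Y + 3.0 * Z + 1e-12
    u_prime = 4.0 * X / denom
    v_prime = 9.0 * Y / denom
    # L* with the white point normalised so that Y = 1 for reference white.
    L_star = np.where(Y > (6.0 / 29.0) ** 3,
                      116.0 * np.cbrt(Y) - 16.0,
                      (29.0 / 3.0) ** 3 * Y)
    return u_prime, v_prime, L_star
```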
6

Hirasawa, Tetsu. "Organizational identity formation and transformation". Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.607893.
7

Shi, Bibo. "Regularity-Guaranteed Transformation Estimation in Medical Image Registration". Ohio University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1312842132.
8

Sonogashira, Motoharu. "Variational Bayesian Image Restoration with Transformation Parameter Estimation". Kyoto University, 2018. http://hdl.handle.net/2433/232409.
9

Aparnnaa. "Image Denoising and Noise Estimation by Wavelet Transformation". Kent State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=kent1555929391906805.
10

Sun, Lu. "Geometric transformation and image singularity with wavelet analysis". HKBU Institutional Repository, 2006. http://repository.hkbu.edu.hk/etd_ra/656.

Books on the topic "Transformation of image"

1

Mironovskiĭ, L. A. (Leonid Alekseevich), ed. Strip-method for image and signal transformation. Berlin: De Gruyter, 2011.
2

Mironovskiĭ, L. A. Strip-method for image and signal transformation. Berlin: De Gruyter, 2011.

3

Roberts, David, 1937-, ed. Elias Canetti's counter-image of society: Crowds, power, transformation. Rochester, N.Y.: Camden House, 2004.
4

Bakken, Kenneth L. Healing and transformation: Into the image & likeness of God. Minneapolis, Minn: Bethany Fellowship, 1998.

5

Wolberg, George. Digital image warping. Los Alamitos, Calif: IEEE Computer Society Press, 1992.

6

The printed image and the transformation of popular culture, 1790-1860. Oxford: Clarendon Press, 1991.

7

The printed image and the transformation of popular culture, 1790-1860. Oxford: Clarendon Press, 1994.
8

Transformation of the God-image: An elucidation of Jung's Answer to Job. Toronto: Inner City Books, 1992.

9

In Turkey's image: The transformation of occupied Cyprus into a Turkish province. New Rochelle, N.Y: A.D. Caratzas, 1991.

10

Taming the diet dragon: Using language & imagery for weight control and body transformation. 2nd ed. St. Paul, Minn.: Llewellyn Publications, 1994.

Book chapters on the topic "Transformation of image"

1

Kamusoko, Courage. "Image Transformation". In Springer Geography, 67–79. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-8012-9_3.
2

Goshtasby, A. Ardeshir. "Transformation Functions". In Image Registration, 343–400. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-2458-0_9.
3

Chityala, Ravishankar, and Sridevi Pudipeddi. "Affine Transformation". In Image Processing and Acquisition using Python, 2nd ed., 123–35. Chapman & Hall/CRC The Python Series. Boca Raton: Chapman and Hall/CRC, 2020. http://dx.doi.org/10.1201/9780429243370-6.
4

Wagner, Björn, Andreas Dinges, Paul Müller and Gundolf Haase. "Parallel Volume Image Segmentation with Watershed Transformation". In Image Analysis, 420–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02230-2_43.
5

Derungs, Isabelle My Hanh. "The Image of Leadership". In Trans-Cultural Leadership for Transformation, 169–85. London: Palgrave Macmillan UK, 2010. http://dx.doi.org/10.1057/9780230304185_7.
6

Chaki, Jyotismita, and Nilanjan Dey. "Geometric Transformation Techniques". In A Beginner's Guide to Image Preprocessing Techniques, 25–38. Intelligent Signal Processing and Data Analysis. Boca Raton: CRC Press, Taylor & Francis Group, 2019. http://dx.doi.org/10.1201/9780429441134-3.
7

Richards, John A. "Fourier Transformation of Image Data". In Remote Sensing Digital Image Analysis, 155–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/978-3-642-88087-2_7.
8

Richards, John A., and Xiuping Jia. "Fourier Transformation of Image Data". In Remote Sensing Digital Image Analysis, 155–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/978-3-662-03978-6_7.
9

Richards, John A. "Fourier Transformation of Image Data". In Remote Sensing Digital Image Analysis, 148–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 1986. http://dx.doi.org/10.1007/978-3-662-02462-1_7.
10

Roy, Swalpa Kumar, Nilavra Bhattacharya, Bhabatosh Chanda, Bidyut B. Chaudhuri and Soumitro Banerjee. "Image Denoising Using Fractal Hierarchical Classification". In Social Transformation – Digital Way, 631–45. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1343-1_52.

Conference papers on the topic "Transformation of image"

1

Takano, Shuichi, Kiyoshi Tanaka and Tatsuo Sugimura. "Steganograpic image transformation". In Electronic Imaging '99, edited by Ping W. Wong and Edward J. Delp III. SPIE, 1999. http://dx.doi.org/10.1117/12.344686.
2

Huang, J., J. Kopf, N. Ahuja and S. B. Kang. "Transformation guided image completion". In 2013 IEEE International Conference on Computational Photography (ICCP). IEEE, 2013. http://dx.doi.org/10.1109/iccphot.2013.6528313.
3

Eldon, John A. "Image Transformation And Resampling". In Medical Imaging II, edited by Roger H. Schneider and Samuel J. Dwyer III. SPIE, 1988. http://dx.doi.org/10.1117/12.968688.
4

Singh, Virendra, and Manini Singh. "Analysis and transformation of image using spectral transformation technique". In 2010 International Conference on Signal and Image Processing (ICSIP). IEEE, 2010. http://dx.doi.org/10.1109/icsip.2010.5697451.
5

Lutz, Adam, Kendrick Grace, Neal Messer, Soundararajan Ezekiel, Erik Blasch, Mark Alford, Adnan Bubalo and Maria Cornacchia. "Bandelet transformation based image registration". In 2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). IEEE, 2015. http://dx.doi.org/10.1109/aipr.2015.7444530.
6

Paulin, Mattis, Jerome Revaud, Zaid Harchaoui, Florent Perronnin and Cordelia Schmid. "Transformation Pursuit for Image Classification". In 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2014. http://dx.doi.org/10.1109/cvpr.2014.466.
7

Liu, Ying. "Image compression using DGIC transformation". In 9th Computing in Aerospace Conference. Reston, Virginia: American Institute of Aeronautics and Astronautics, 1993. http://dx.doi.org/10.2514/6.1993-4617.
8

Zhao, Mingsheng, and Congxiao Bao. "Image thresholding by histogram transformation". In SPIE's International Symposium on Optical Engineering and Photonics in Aerospace Sensing, edited by David P. Casasent and Andrew G. Tescher. SPIE, 1994. http://dx.doi.org/10.1117/12.177723.
9

Zhang, Lixia, Hongzhi Song, Zhaoming Ou, Huakun Liang and Lei Xiao. "Fisheye transformation for image retargeting". In 2010 International Conference on Information and Automation (ICIA). IEEE, 2010. http://dx.doi.org/10.1109/icinfa.2010.5512242.
10

Jabbar, Muhammad Usama, Waqar Ahmad, Ali Waqar, Muhammad Jamshed Abbas and Sunil Pervaiz. "Transformation based image de-noising". In 2020 3rd International Conference on Computing, Mathematics and Engineering Technologies (iCoMET). IEEE, 2020. http://dx.doi.org/10.1109/icomet48670.2020.9074064.

Reports of organizations on the topic "Transformation of image"

1

LaGrange, T. First single-shot image of the alpha->beta Phase Transformation in Pure Nanocrystalline Ti with Nanosecond Resolution. Office of Scientific and Technical Information (OSTI), February 2007. http://dx.doi.org/10.2172/1129147.
2

Kelekci, Osman. Transformations with Palindromic Images. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, June 2020. http://dx.doi.org/10.7546/crabs.2020.06.01.
3

Tang, H., Ed X. Wu, D. Gallagher and S. B. Heymsfield. Monochrome Image Presentation and Segmentation Based on the Pseudo-Color and PCT Transformations. Fort Belvoir, VA: Defense Technical Information Center, October 2001. http://dx.doi.org/10.21236/ada412412.
4

Wang, Shiliang. Application of Hough transformation to detect ovulatory patterns in cervical mucus images. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.5873.
5

Baluk, Nadia, Natalia Basij, Larysa Buk and Olha Vovchanska. VR/AR-TECHNOLOGIES – NEW CONTENT OF THE NEW MEDIA. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11074.

Abstract:
The article analyzes the peculiarities of the media content shaping and transformation in the convergent dimension of cross-media, taking into account the possibilities of augmented reality. With the help of the principles of objectivity, complexity and reliability in scientific research, a number of general scientific and special methods are used: method of analysis, synthesis, generalization, method of monitoring, observation, problem-thematic, typological and discursive methods. According to the form of information presentation, such types of media content as visual, audio, verbal and combined are defined and characterized. The most important in journalism is verbal content, it is the one that carries the main information load. The dynamic development of converged media leads to the dominance of image and video content; the likelihood of increasing the secondary content of the text increases. Given the market situation, the effective information product is a combined content that combines text with images, spreadsheets with video, animation with infographics, etc. Increasing number of new media are using applications and website platforms to interact with recipients. To proceed, the peculiarities of the new content of new media with the involvement of augmented reality are determined. Examples of successful interactive communication between recipients, the leading news agencies and commercial structures are provided. The conditions for effective use of VR / AR-technologies in the media content of new media, the involvement of viewers in changing stories with augmented reality are determined. The so-called immersive effect with the use of VR / AR-technologies involves complete immersion, immersion of the interested audience in the essence of the event being relayed. This interaction can be achieved through different types of VR video interactivity. One of the most important results of using VR content is the spatio-temporal and emotional immersion of viewers in the plot. The recipient turns from an external observer into an internal one; but his constant participation requires that the user preferences are taken into account. Factors such as satisfaction, positive reinforcement, empathy, and value influence the choice of VR / AR content by viewers.