Academic literature on the topic 'Image representation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Image representation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Image representation"

1

Li, Hong, Jin Ping Zhang, Fen Xia Wu, and Cong E. Tan. "Image Fusion with Sparse Representation." Advanced Materials Research 798-799 (September 2013): 737–40. http://dx.doi.org/10.4028/www.scientific.net/amr.798-799.737.

Full text
Abstract:
Sparse representation is a new image representation theory that can accurately represent image information. In this paper, a novel fusion scheme using sparse representation is proposed. The sparse representation is conducted on overlapping patches: each source image is divided into patches, and all the patches are transformed into vectors. The vectors are decomposed into their sparse representations using orthogonal matching pursuit, and the sparse coefficients are fused with the maximum-absolute-value rule. The simulation results show that the proposed method can provide high-quality fused images.
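The fusion rule this abstract describes, patch vectors sparse-coded with orthogonal matching pursuit (OMP) and combined by keeping the coefficient of larger absolute value, can be sketched in a few lines of numpy. This is an illustrative toy, not the paper's implementation: the dictionary is random rather than learned, and the patch extraction and tiling steps are omitted.

```python
import numpy as np

def omp(D, x, k):
    """Greedy OMP: approximate x with at most k atoms of dictionary D."""
    residual, support = x.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    s = np.zeros(D.shape[1])
    s[support] = coef
    return s

def fuse_patches(pa, pb, D, k=4):
    """Fuse two vectorised patches: keep, per atom, the sparse
    coefficient with the larger absolute value, then reconstruct."""
    sa, sb = omp(D, pa, k), omp(D, pb, k)
    fused = np.where(np.abs(sa) >= np.abs(sb), sa, sb)
    return D @ fused

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms (e.g. 8x8 patches)
a, b = rng.normal(size=64), rng.normal(size=64)
print(fuse_patches(a, b, D).shape)    # (64,)
```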
2

HU, CHAO, LI LIU, BO SUN, and MAX Q. H. MENG. "COMPACT REPRESENTATION AND PANORAMIC REPRESENTATION FOR CAPSULE ENDOSCOPE IMAGES." International Journal of Information Acquisition 06, no. 04 (December 2009): 257–68. http://dx.doi.org/10.1142/s0219878909001989.

Full text
Abstract:
A capsule endoscope robot is a miniature medical instrument for inspection of the gastrointestinal tract. In this paper, we present compact representation and preliminary panoramic representation methods for capsule endoscope images. First, the characteristics of capsule endoscopic images are investigated and different coordinate representations of the circular image are discussed. Second, effective compact representation methods, including special DPCM and wavelet compression techniques, are applied to the endoscopic images to obtain a high compression ratio and signal-to-noise ratio. Then, a preliminary approach to panoramic representation of endoscopic images is presented.
3

Gavaler, Chris. "Refining the Comics Form." European Comic Art 10, no. 2 (September 1, 2017): 1–23. http://dx.doi.org/10.3167/eca.2017.100202.

Full text
Abstract:
Setting aside historical factors and focusing exclusively on a formal definition of comics as juxtaposed images, comics may be further refined by analysing the divisions, orders and relationships of those images. The images may also have both representational and abstract levels that together produce narrative’s intrinsic patterns and its extrinsic feeling of story. Although narrative comics and abstract comics sound like opposites, a representative narrative may be understood non-representationally because it is composed of abstract marks, and a sequence of abstract images can still create the experience of story through implied conflict and transformation. Analysed according to image representation, image relation, and image order, comics divide into six formally distinct categories: representational and abstract narratives; representational and abstract arrangements; and representational and abstract non sequiturs.
4

Song, Lijuan. "Image Segmentation Based on Supervised Discriminative Learning." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 10 (June 20, 2018): 1854027. http://dx.doi.org/10.1142/s0218001418540277.

Full text
Abstract:
In view of the complex backgrounds of images and the difficulty of segmentation, sparse representation and supervised discriminative learning were applied to image segmentation. A sparse and over-complete representation can represent images in a compact and efficient manner. Most atom coefficients are zero and only a few are large, and the nonzero coefficients can reveal the intrinsic structures and essential properties of images. Therefore, sparse representations are beneficial to subsequent image processing applications. We first describe the sparse representation theory. This study mainly revolves around three aspects, namely a trained dictionary, greedy algorithms, and the application of the sparse representation model in image segmentation based on supervised discriminative learning. Finally, we performed image segmentation experiments on standard and natural image datasets. The main focus of this study was supervised discriminative learning, and the experimental results showed that the proposed algorithm was optimal, sparse, and efficient.
5

Tian, Chunwei, Qi Zhang, Jian Zhang, Guanglu Sun, and Yuan Sun. "2D-PCA Representation and Sparse Representation for Image Recognition." Journal of Computational and Theoretical Nanoscience 14, no. 1 (January 1, 2017): 829–34. http://dx.doi.org/10.1166/jctn.2017.6281.

Full text
Abstract:
The two-dimensional principal component analysis (2D-PCA) method has been widely applied in image classification, computer vision, signal processing and pattern recognition. The 2D-PCA algorithm performs well in both theoretical research and real-world applications: it retains the main information of the original face images while reducing their dimension. In this paper, we integrate 2D-PCA and sparse representation classification (SRC) to distinguish face images, which gives great performance in face recognition. The novel representation of the original face image obtained using 2D-PCA is complementary to the original face image, so fusing them can clearly improve the accuracy of face recognition. This is also attributed to the fact that the features obtained using 2D-PCA are usually more robust than the original face image matrices. Face recognition experiments demonstrate that combining the original face images with their new representations is more effective than using the original images alone. In particular, the simultaneous use of the 2D-PCA method and sparse representation can greatly improve accuracy in image classification. The adaptive weighted fusion scheme presented here obtains optimal weights automatically and requires no parameters. The proposed method is simple and easy to implement, and it obtains high accuracy in face recognition.
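The 2D-PCA stage this abstract builds on can be sketched with numpy. This is a minimal illustration of the standard formulation (project each image matrix onto the leading eigenvectors of the image scatter matrix); the SRC classifier and the adaptive weighted fusion are omitted, and random matrices stand in for face images.

```python
import numpy as np

def two_d_pca(images, d):
    """2D-PCA: project each m x n image matrix onto the top-d
    eigenvectors of the n x n image scatter matrix."""
    mean = images.mean(axis=0)
    G = sum((A - mean).T @ (A - mean) for A in images) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)   # eigenvalues in ascending order
    X = eigvecs[:, ::-1][:, :d]            # top-d projection axes
    return np.stack([A @ X for A in images]), X

rng = np.random.default_rng(1)
faces = rng.normal(size=(10, 32, 32))      # 10 toy 32x32 "faces"
feats, X = two_d_pca(faces, d=5)
print(feats.shape)  # (10, 32, 5): each image shrinks from 32x32 to 32x5
```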
6

Younas, Junaid, Shoaib Ahmed Siddiqui, Mohsin Munir, Muhammad Imran Malik, Faisal Shafait, Paul Lukowicz, and Sheraz Ahmed. "Fi-Fo Detector: Figure and Formula Detection Using Deformable Networks." Applied Sciences 10, no. 18 (September 16, 2020): 6460. http://dx.doi.org/10.3390/app10186460.

Full text
Abstract:
We propose a novel hybrid approach that fuses traditional computer vision techniques with deep learning models to detect figures and formulas in document images. The proposed approach first fuses different computer-vision-based image representations, i.e., color transform, connected component analysis, and distance transform, termed the Fi-Fo image representation. The Fi-Fo image representation is then fed to deep models for further refined representation learning for detecting figures and formulas in document images. The proposed approach is evaluated on the publicly available ICDAR-2017 Page Object Detection (POD) dataset and its corrected version. It produces state-of-the-art results for formula and figure detection in document images, with f1-scores of 0.954 and 0.922, respectively. Ablation results reveal that the Fi-Fo image representation helps achieve superior performance in comparison to a raw image representation. The results also establish that the hybrid approach helps deep models learn more discriminating and refined features.
7

RIZO-RODRÍGUEZ, DAYRON, HEYDI MÉNDEZ-VAZQUEZ, and EDEL GARCÍA-REYES. "ILLUMINATION INVARIANT FACE RECOGNITION IN QUATERNION DOMAIN." International Journal of Pattern Recognition and Artificial Intelligence 27, no. 03 (May 2013): 1360004. http://dx.doi.org/10.1142/s0218001413600045.

Full text
Abstract:
The performance of face recognition systems tends to decrease when images are affected by illumination. Feature extraction is one of the main steps of a face recognition process, where it is possible to alleviate the effects of illumination on face images. In order to increase the accuracy of recognition tasks, different methods for obtaining illumination-invariant features have been developed. The aim of this work is to compare two different ways of representing face image descriptions in terms of their illumination-invariant properties for face recognition. The first representation is constructed following the structure of complex numbers and the second is based on quaternion numbers. Both representations are constructed using four different face description approaches, transformed into the frequency domain and expressed in polar coordinates. The most illumination-invariant component of each frequency-domain representation is determined and used as the representative information of the face image. Verification and identification experiments are then performed in order to compare the discriminative power of the selected components. The representative component of the quaternion representation outperformed that of the complex one.
8

Sun, Bo, Abdullah M. Iliyasu, Fei Yan, Fangyan Dong, and Kaoru Hirota. "An RGB Multi-Channel Representation for Images on Quantum Computers." Journal of Advanced Computational Intelligence and Intelligent Informatics 17, no. 3 (May 20, 2013): 404–17. http://dx.doi.org/10.20965/jaciii.2013.p0404.

Full text
Abstract:
An RGB multi-channel representation is proposed for images on quantum computers (MCQI) that captures information about colors (RGB channels) and their corresponding positions in an image in a normalized quantum state. The proposed representation makes it possible to store the RGB information of an image simultaneously by using 2n + 3 qubits to encode a 2^n x 2^n pixel image, whereas pixel-wise processing is necessary in many other quantum image representations, e.g., qubit lattice, grid qubit, and quantum lattice. Simulation of storage and retrieval of MCQI images using human facial images demonstrated that 15 qubits are required for encoding 64 x 64 colored images, and the encoded information is retrieved by measurement. Perspectives on designing quantum image operators based on the MCQI representation are also discussed, e.g., channel of interest, channel swapping, and a restricted version of color transformation.
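The qubit count in this abstract is simple arithmetic worth making explicit: 2n qubits index the 2^n x 2^n pixel positions and 3 extra qubits carry the colour-channel information, so a 64 x 64 image (n = 6) needs 2*6 + 3 = 15 qubits, matching the simulation reported above.

```python
def mcqi_qubits(side):
    """Qubits used by the MCQI representation for a side x side image:
    2n position qubits plus 3 colour/channel qubits, where side = 2**n."""
    n = side.bit_length() - 1
    if side != 2 ** n:
        raise ValueError("image side must be a power of two")
    return 2 * n + 3

print(mcqi_qubits(64))   # 15, the paper's 64 x 64 example
```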
9

Mihálik, A., and R. Ďurikovič. "Image-based BRDF Representation." Journal of Applied Mathematics, Statistics and Informatics 11, no. 2 (December 1, 2015): 47–56. http://dx.doi.org/10.1515/jamsi-2015-0011.

Full text
Abstract:
To achieve a certain level of photorealism in computer graphics, it is necessary to analyze how materials scatter the incident light. In this work, we propose a method for direct rendering of the isotropic bidirectional reflectance distribution function (BRDF) from a small set of images. Image-based rendering aims to synthesize, as accurately as possible, scenes composed of natural and artificial objects. Realistic image synthesis from BRDF data requires evaluating radiance over multiple directions of incident and scattered light from the surface. In our approach the images depict only the material reflectance; the shape is represented by the object geometry. We store the BRDF representation, acquired from the sample material, in a number of two-dimensional textures that contain images of spheres lit from multiple directions. To render a particular material, we interpolate between textures in a similar way to image morphing. Our method allows real-time rendering of tabulated BRDF data on low-memory devices such as mobile phones.
10

Zheng, Shijun, Yongjun Zhang, Wenjie Liu, and Yongjie Zou. "Improved image representation and sparse representation for image classification." Applied Intelligence 50, no. 6 (February 10, 2020): 1687–98. http://dx.doi.org/10.1007/s10489-019-01612-3.

Full text

Dissertations / Theses on the topic "Image representation"

1

Engel, Claude. "Image et representation." Université Marc Bloch (Strasbourg) (1971-2008), 1989. http://www.theses.fr/1989STR20027.

Full text
2

Chintala, Venkatram Reddy. "Digital image data representation." Ohio : Ohio University, 1986. http://www.ohiolink.edu/etd/view.cgi?ohiou1183128563.

Full text
3

Moltisanti, Marco. "Image Representation using Consensus Vocabulary and Food Images Classification." Doctoral thesis, Università di Catania, 2016. http://hdl.handle.net/10761/3968.

Full text
Abstract:
Digital images are the result of many physical factors, such as illumination, point of view and thermal noise of the sensor. These elements may be irrelevant for a specific Computer Vision task; for instance, in object detection, the viewpoint and the color of the object should not be relevant to answering the question "Is the object present in the image?". Nevertheless, an image depends crucially on all such parameters and it is simply not possible to ignore them in analysis. Hence, finding a representation that, given a specific task, keeps the significant features of the image and discards the less useful ones is the first step in building a robust Computer Vision system. One of the most popular models for representing images is the Bag-of-Visual-Words (BoW) model. Derived from text analysis, this model is based on the generation of a codebook (also called a vocabulary) which is subsequently used to provide the actual image representation. Given a set of images, the typical pipeline consists of: 1. Select a subset of images to be the training set for the model; 2. Extract the desired features from all the images; 3. Run a clustering algorithm on the features extracted from the training set: each cluster is a codeword, and the set of all clusters is the codebook; 4. For each feature point, find the closest codeword according to a distance function or metric; 5. Build a normalized histogram of the occurrences of each word. The choices made in the design phase strongly influence the final outcome of the representation. In this work we discuss how to aggregate different kinds of features to obtain more powerful representations, presenting some state-of-the-art methods from the Computer Vision community. We focus on Clustering Ensemble techniques, presenting the theoretical framework and a new approach (Section 2.5).
Understanding food in everyday life (e.g., recognizing dishes and their ingredients, estimating quantities, etc.) is a problem that has been considered in different research areas due to its important impact on medical, social and anthropological aspects. For instance, an unhealthy diet can cause problems for a person's general health. Since health is strictly linked to diet, advanced Computer Vision tools to recognize food images (e.g., acquired with mobile/wearable cameras), as well as their properties (e.g., calories, volume), can help diet monitoring by providing useful information to experts (e.g., nutritionists) to assess the food intake of patients (e.g., to combat obesity). On the other hand, the great diffusion of low-cost image acquisition devices embedded in smartphones allows people to take pictures of food and share them on the Internet (e.g., on social media); the automatic analysis of the posted images could provide information on the relationship between people and their meals and can be exploited by food retailers to better understand a person's preferences for further recommendations of food and related products. Image representation plays a key role in inferring information about food items depicted in an image. We propose a deep review of the state of the art and two different novel representation techniques.
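The five-step BoW pipeline listed in this abstract can be sketched with numpy. Toy random vectors stand in for real local descriptors (e.g., SIFT), and a plain k-means builds the vocabulary.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means; each final centroid becomes one visual word."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            ((points[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids

def bow_histogram(features, vocabulary):
    """Assign each local feature to its nearest word, normalise counts."""
    words = np.argmin(
        ((features[:, None] - vocabulary[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(2)
train_feats = rng.normal(size=(500, 16))   # step 2: descriptors, training set
vocab = kmeans(train_feats, k=10)          # step 3: the codebook
hist = bow_histogram(rng.normal(size=(40, 16)), vocab)  # steps 4-5
print(hist.shape)  # (10,): one bin per visual word, summing to 1
```

Real systems use far larger vocabularies and descriptors extracted from actual images; the structure of the pipeline is the same.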
4

Bowley, James. "Sparse image representation with encryption." Thesis, Aston University, 2013. http://publications.aston.ac.uk/20914/.

Full text
Abstract:
In this thesis we present an overview of sparse approximations of grey-level images. The sparse representations are realized by classic, Matching Pursuit (MP) based, greedy selection strategies. One such technique, termed Orthogonal Matching Pursuit (OMP), is shown to be suitable for producing sparse approximations of images if they are processed in small blocks. When the blocks are enlarged, the proposed Self Projected Matching Pursuit (SPMP) algorithm successfully renders results equivalent to OMP. A simple coding algorithm is then proposed to store these sparse approximations; under certain conditions, this is shown to be competitive with the JPEG2000 image compression standard. An application termed image folding, which partially secures the approximated images, is then proposed. This is extended to produce a self-contained folded image containing all the information required to perform image recovery. Finally, a modified OMP selection technique is applied to produce sparse approximations of Red Green Blue (RGB) images. These RGB approximations are then folded with the self-contained approach.
5

Le, Huu Ton. "Improving image representation using image saliency and information gain." Thesis, Poitiers, 2015. http://www.theses.fr/2015POIT2287/document.

Full text
Abstract:
Nowadays, along with the development of multimedia technology, content based image retrieval (CBIR) has become an interesting and active research topic with an increasing number of application domains: image indexing and retrieval, face recognition, event detection, handwriting scanning, object detection and tracking, image classification, landmark detection... One of the most popular models in CBIR is Bag of Visual Words (BoVW), which is inspired by the Bag of Words model from the Information Retrieval field. In the BoVW model, images are represented by histograms of visual words from a visual vocabulary. By comparing image signatures, we can tell the difference between images. Image representation plays an important role in a CBIR system as it determines the precision of the retrieval results. In this thesis, the image representation problem is addressed. Our first contribution is to propose a new framework for visual vocabulary construction using information gain (IG) values. The IG values are computed by a weighting scheme combined with a visual attention model. Secondly, we propose to use a visual attention model to improve the performance of the proposed BoVW model. This contribution addresses the importance of salient key-points in images through a study of the saliency of local feature detectors. Inspired by the results of this study, we use saliency as a weighting or an additional histogram for image representation. The last contribution of this thesis to CBIR shows how our framework enhances the BoVP model. Finally, a query expansion technique is employed to increase the retrieval scores of both BoVW and BoVP models.
6

Elliott, Desmond. "Structured representation of images for language generation and image retrieval." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10524.

Full text
Abstract:
A photograph typically depicts an aspect of the real world, such as an outdoor landscape, a portrait, or an event. The task of creating abstract digital representations of images has received a great deal of attention in the computer vision literature because it is rarely useful to work directly with the raw pixel data. The challenge of working with raw pixel data is that small changes in lighting can result in different digital images, which is not typically useful for downstream tasks such as object detection. One approach to representing an image is automatically extracting and quantising visual features to create a bag-of-terms vector. The bag-of-terms vector helps overcome the problems with raw pixel data but this unstructured representation discards potentially useful information about the spatial and semantic relationships between the parts of the image. The central argument of this thesis is that capturing and encoding the relationships between parts of an image will improve the performance of extrinsic tasks, such as image description or search. We explore this claim in the restricted domain of images representing events, such as riding a bicycle or using a computer. The first major contribution of this thesis is the Visual Dependency Representation: a novel structured representation that captures the prominent region–region relationships in an image. The key idea is that images depicting the same events are likely to have similar spatial relationships between the regions contributing to the event. This representation is inspired by dependency syntax for natural language, which directly captures the relationships between the words in a sentence. We also contribute a data set of images annotated with multiple human-written descriptions, labelled image regions, and gold-standard Visual Dependency Representations, and explain how the gold-standard representations can be constructed by trained human annotators. 
The second major contribution of this thesis is an approach to automatically predicting Visual Dependency Representations using a graph-based statistical dependency parser. A dependency parser is typically used in Natural Language Processing to automatically predict the dependency structure of a sentence. In this thesis we use a dependency parser to predict the Visual Dependency Representation of an image because we are working with a discrete image representation – that of image regions. Our approach can exploit features from the region annotations and the description to predict the relationships between objects in an image. In a series of experiments using gold-standard region annotations, we report significant improvements in labelled and unlabelled directed attachment accuracy over a baseline that assumes there are no relationships between objects in an image. Finally, we find significant improvements in two extrinsic tasks when we represent images as Visual Dependency Representations predicted from gold-standard region annotations. In an image description task, we show significant improvements in automatic evaluation measures and human judgements compared to state-of-the-art models that use either external text corpora or region proximity to guide the generation process. In the query-by-example image retrieval task, we show a significant improvement in Mean Average Precision and the precision of the top 10 images compared to a bag-of-terms approach. We also perform a correlation analysis of human judgements against automatic evaluation measures for the image description task. The automatic measures are standard measures adopted from the machine translation and summarization literature. The main finding of the analysis is that unigram BLEU is less correlated with human judgements than Smoothed BLEU, Meteor, or skip-bigram ROUGE.
7

Li, Xin. "Abstractive Representation Modeling for Image Classification." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623250959448677.

Full text
8

Mutelo, Risco Mulwani. "Biometric face image representation and recognition." Thesis, University of Newcastle upon Tyne, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.548004.

Full text
9

Wang, Hua. "Colour image representation by scalar variables." Thesis, Loughborough University, 1992. https://dspace.lboro.ac.uk/2134/10477.

Full text
Abstract:
A number of studies have shown that it is possible to use a colour codebook with a limited number of colours (typically 100-200) to replace the colour gamut and obtain a good-quality reconstructed colour image. Thus colour images can be displayed on less expensive devices while retaining high quality, and can be stored in less space. However, a colour codebook is normally randomly arranged and the coded image, referred to as the index image, has no structure. This prevents the use of this kind of colour image representation in any further image processing. The objective of the research described in this thesis is to explore the possibility of making the index image meaningful, that is, making the index image retain the structure existing in the original full-colour image, such as correlation and edges. In this way, a three-band colour image represented by colour vectors can be transformed into a one-band index image represented by scalar variables. To achieve the scalar representation of colour images, the colour codebook must be ordered to satisfy the following two conditions: (1) codewords representing similar colours must be close together in the codebook, and (2) close codewords in the codebook must represent similar colours. Some effective methods are proposed for ordering the colour codebook. First, several grouping strategies are suggested for grouping codewords representing similar colours together. Second, an ordering function is designed, which gives a quantitative measurement of how well an ordered codebook satisfies the two conditions. The codebook ordering is then iteratively refined using the ordering function. Finally, techniques such as artificial codeword insertion are developed to refine the codebook ordering further. A number of algorithms for colour codebook ordering have been tried to retain as much structure in the index image as possible. 
The efficiency of the algorithms for ordering a colour codebook has been tested by applying some image processing techniques to the index image. A VQ/DCT colour image coding scheme has been developed to test the possibility of compressing and decompressing the index image. Edge detection is applied to the index image to test how well the edges existing in the original colour image can be retained in the index image. Experiments demonstrate that the index image can retain much of the structure existing in the original colour image if the codebook is ordered by an appropriate ordering algorithm, such as the PNN-based/ordering-function method together with artificial codeword insertion. Further image processing techniques, such as image compression and edge detection, can then be applied to the index image. In this way, colour image processing can be realized by index image processing in the same way as monochrome image processing. In this sense, a three-band colour image represented by colour vectors is transformed into a single-band index image represented by scalar variables.
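The scalar representation at the heart of this thesis is nearest-codeword quantisation of each pixel. A minimal numpy sketch follows, using a random (hence unordered) codebook; the thesis's contribution is precisely ordering the codebook so that nearby indices mean similar colours.

```python
import numpy as np

def index_image(rgb, codebook):
    """Quantise an H x W x 3 colour image to a scalar index image by
    mapping each pixel to its nearest codeword (squared-error distance)."""
    pixels = rgb.reshape(-1, 3).astype(float)
    d = ((pixels[:, None] - codebook[None]) ** 2).sum(-1)
    return np.argmin(d, axis=1).reshape(rgb.shape[:2])

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(8, 8, 3))            # toy RGB image
codebook = rng.integers(0, 256, size=(128, 3)).astype(float)
idx = index_image(img, codebook)
print(idx.shape)  # (8, 8): three colour bands reduced to one scalar band
```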
10

Chang, William. "Representation Theoretical Methods in Image Processing." Scholarship @ Claremont, 2004. https://scholarship.claremont.edu/hmc_theses/160.

Full text
Abstract:
Image processing refers to the various operations performed on pictures that are digitally stored as an aggregate of pixels. One can enhance or degrade the quality of an image, artistically transform the image, or even find or recognize objects within the image. This paper is concerned with image processing from a very mathematical perspective, involving representation theory. The approach traces back to Cooley and Tukey's seminal paper on the Fast Fourier Transform (FFT) algorithm (1965). Recently, there has been a resurgence in investigating algebraic generalizations of this original algorithm with respect to different symmetry groups. My approach in the following chapters is as follows. First, I give the necessary tools from representation theory to explain how to generalize the Discrete Fourier Transform (DFT). Second, I introduce wreath products and their application to images. Third, I show some results from applying elementary filters and compression methods to the spectra of images. Fourth, I attempt to generalize my method to non-cyclic wreath product transforms and apply it to images and three-dimensional geometries.

Books on the topic "Image representation"

1

Lacey, Nick. Image and Representation. London: Macmillan Education UK, 1998. http://dx.doi.org/10.1007/978-1-349-26712-5.

Full text
2

Lacey, Nick. Image and Representation. London: Macmillan Education UK, 2009. http://dx.doi.org/10.1007/978-1-137-28800-4.

Full text
3

Alexandrov, V. V., and N. D. Gorsky. Image Representation and Processing. Dordrecht: Springer Netherlands, 1993. http://dx.doi.org/10.1007/978-94-011-1747-0.

Full text
4

Image et cognition. Paris: Presses universitaires de France, 1989.

Find full text
5

Conference on Representation: Relationship between Language and Image (1991 Viterbo, Italy). Representation: Relationship between language and image. Singapore: World Scientific, 1994.

Find full text
6

Netravali, Arun N. Digital pictures: Representation and compression. New York: Plenum Press, 1988.

7

Image and cognition. New York: Harvester Wheatsheaf, 1991.

8

Haskell, Barry G., ed. Digital pictures: Representation, compression, and standards. 2nd ed. New York: Plenum Press, 1995.

9

Huang, Yongzhen, and Tieniu Tan. Feature Coding for Image Representation and Recognition. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45000-0.

10

Aleksandrov, V. V. Image Representation and Processing: A Recursive Approach. Dordrecht: Springer Netherlands, 1993.


Book chapters on the topic "Image representation"

1

Lacey, Nick. "Representation." In Image and Representation, 146–89. London: Macmillan Education UK, 2009. http://dx.doi.org/10.1007/978-1-137-28800-4_6.

2

Lacey, Nick. "Representation." In Image and Representation, 131–88. London: Macmillan Education UK, 1998. http://dx.doi.org/10.1007/978-1-349-26712-5_6.

3

Gouet-Brunet, Valerie. "Image Representation." In Encyclopedia of Database Systems, 1–7. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4899-7993-3_1438-2.

4

Vidal, René, Yi Ma, and S. Shankar Sastry. "Image Representation." In Interdisciplinary Applied Mathematics, 349–76. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-0-387-87811-9_9.

5

Gouet-Brunet, Valerie. "Image Representation." In Encyclopedia of Database Systems, 1374–79. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-39940-9_1438.

6

Jähne, Bernd. "Image Representation." In Digital Image Processing, 29–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/978-3-662-04781-1_2.

7

Jähne, Bernd. "Image Representation." In Digital Image Processing, 27–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/978-3-662-03477-4_2.

8

Park, Chiwoo, and Yu Ding. "Image Representation." In Data Science for Nano Image Analysis, 15–33. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72822-9_2.

9

Gouet-Brunet, Valerie. "Image Representation." In Encyclopedia of Database Systems, 1783–89. New York, NY: Springer New York, 2018. http://dx.doi.org/10.1007/978-1-4614-8265-9_1438.

10

Jähne, Bernd. "Multiscale Representation." In Digital Image Processing, 125–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/978-3-662-04781-1_5.


Conference papers on the topic "Image representation"

1

Xie, Ruobing, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. "Image-embodied Knowledge Representation Learning." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/438.

Abstract:
Entity images could provide significant visual information for knowledge representation learning. Most conventional methods learn knowledge representations merely from structured triples, ignoring rich visual information extracted from entity images. In this paper, we propose a novel Image-embodied Knowledge Representation Learning model (IKRL), where knowledge representations are learned with both triple facts and images. More specifically, we first construct representations for all images of an entity with a neural image encoder. These image representations are then integrated into an aggregated image-based representation via an attention-based method. We evaluate our IKRL models on knowledge graph completion and triple classification. Experimental results demonstrate that our models outperform all baselines on both tasks, which indicates the significance of visual information for knowledge representations and the capability of our models in learning knowledge representations with images.
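The attention-based aggregation the abstract describes, combining several image encodings of one entity into a single image-based representation, can be sketched as follows. The vectors and the dot-product scoring here are illustrative assumptions; the actual IKRL model uses learned neural encoders within a translation-based knowledge-embedding framework:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical encodings: three images of one entity, each a 4-d vector,
# plus a structure-based entity embedding (all values made up).
image_reps = np.array([[0.9, 0.1, 0.0, 0.2],
                       [0.8, 0.2, 0.1, 0.1],
                       [0.1, 0.9, 0.7, 0.0]])
entity_rep = np.array([1.0, 0.0, 0.0, 0.5])

# Attention scores: compatibility of each image with the entity embedding.
weights = softmax(image_reps @ entity_rep)

# Aggregated image-based representation: attention-weighted sum of images.
aggregated = weights @ image_reps
```

Images that agree more with the entity's structural embedding receive larger weights, so noisy or unrepresentative images contribute less to the aggregate.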
2

Arsenault, H. H., V. François, G. April, and U. Laval. "New invariant image representation." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/oam.1989.thm2.

Abstract:
A new 2-D image representation is introduced, and its use for image compression and classification is considered. The image is entirely represented by its invariant coefficients a_nk. The representation is a generalization of the circular harmonic decomposition, with the advantage that any image is represented by a set of complex constants a_nk. The normalized coefficients are invariant under changes of scale or orientation of the represented object. Experiments on grey-scale imagery of aircraft show that images may be represented by a relatively small number of coefficients.
3

Wang, Junqian, and Yirui Liu. "Multiple Representations and Sparse Representation for Color Image Classification." In the 2018 International Conference. New York, New York, USA: ACM Press, 2018. http://dx.doi.org/10.1145/3232829.3232847.

4

Li, Shidong. "Image representation and compression via adaptive multi-Gabor representations." In SPIE's International Symposium on Optical Science, Engineering, and Instrumentation, edited by Mark S. Schmalz. SPIE, 1998. http://dx.doi.org/10.1117/12.330374.

5

Wachinger, Christian, and Nassir Navab. "Structural image representation for image registration." In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops). IEEE, 2010. http://dx.doi.org/10.1109/cvprw.2010.5543432.

6

Aghajani, Khadijeh, Mohsen Shirpour, and M. T. Manzuri. "Structural image representation for image registration." In 2015 International Symposium on Artificial Intelligence and Signal Processing (AISP). IEEE, 2015. http://dx.doi.org/10.1109/aisp.2015.7123534.

7

Khandelwal, Ankit, M. Girish Chandra, and Sayantan Pramanik. "On Classifying Images using Quantum Image Representation." In 2022 IEEE/ACM 7th Symposium on Edge Computing (SEC). IEEE, 2022. http://dx.doi.org/10.1109/sec54971.2022.00067.

8

Du, Shiqiang, Weilan Wang, and Yide Ma. "Graph regularized compact self-representative decomposition for image representation." In 2016 Chinese Control and Decision Conference (CCDC). IEEE, 2016. http://dx.doi.org/10.1109/ccdc.2016.7531675.

9

Bai, Yang, Min Cao, Daming Gao, Ziqiang Cao, Chen Chen, Zhenfeng Fan, Liqiang Nie, and Min Zhang. "RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/62.

Abstract:
Text-based person search aims to retrieve the specified person images given a textual description. The key to tackling such a challenging task is to learn powerful multi-modal representations. Towards this, we propose a Relation and Sensitivity aware representation learning method (RaSa), including two novel tasks: Relation-Aware learning (RA) and Sensitivity-Aware learning (SA). For one thing, existing methods cluster representations of all positive pairs without distinction and overlook the noise problem caused by the weak positive pairs where the text and the paired image have noise correspondences, thus leading to overfitting learning. RA offsets the overfitting risk by introducing a novel positive relation detection task (i.e., learning to distinguish strong and weak positive pairs). For another thing, learning invariant representation under data augmentation (i.e., being insensitive to some transformations) is a general practice for improving representation's robustness in existing methods. Beyond that, we encourage the representation to perceive the sensitive transformation by SA (i.e., learning to detect the replaced words), thus promoting the representation's robustness. Experiments demonstrate that RaSa outperforms existing state-of-the-art methods by 6.94%, 4.45% and 15.35% in terms of Rank@1 on CUHK-PEDES, ICFG-PEDES and RSTPReid datasets, respectively. Code is available at: https://github.com/Flame-Chasers/RaSa.
10

Cherkashyn, Valeriy, Roumen Kountchev, Dong-Chen He, and Roumiana Kountcheva. "Adaptive Image Pyramidal Representation." In 2008 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). IEEE, 2008. http://dx.doi.org/10.1109/isspit.2008.4775650.


Reports on the topic "Image representation"

1

Thayer, Colette, and Laura Skufca. Media Image Landscape: Age Representation in Online Images. AARP Research, September 2019. http://dx.doi.org/10.26419/res.00339.001.

2

Mairal, Julien, Michael Elad, and Guillermo Sapiro. Sparse Representation for Color Image Restoration (PREPRINT). Fort Belvoir, VA: Defense Technical Information Center, October 2006. http://dx.doi.org/10.21236/ada478437.

3

Balas, Benjamin J., and Pawan Sinha. Dissociated Dipoles: Image Representation via Non-local Comparisons. Fort Belvoir, VA: Defense Technical Information Center, August 2003. http://dx.doi.org/10.21236/ada459820.

4

Baker, H. H. Building and Using Scene Representation in Image Understanding. Fort Belvoir, VA: Defense Technical Information Center, September 1993. http://dx.doi.org/10.21236/ada461044.

5

Schuster, J. Image Document Representation CRADA No. TSB-1557-98. Office of Scientific and Technical Information (OSTI), March 2001. http://dx.doi.org/10.2172/790105.

6

Schuster, John, and Vern Hanzlik. Image Document Representation: Project Accomplishments Summary CRADA No. TSB-1557-98. Office of Scientific and Technical Information (OSTI), March 2001. http://dx.doi.org/10.2172/1410065.

7

Doerry, Armin W. SAR Image Complex Pixel Representations. Office of Scientific and Technical Information (OSTI), March 2015. http://dx.doi.org/10.2172/1177594.

8

Бережна, Маргарита Василівна. The Traitor Psycholinguistic Archetype. Premier Publishing, 2022. http://dx.doi.org/10.31812/123456789/6051.

Abstract:
Film studies have recently begun to employ Jung's concept of archetypes: prototypical characters which serve as blueprints for constructing clear-cut characters. New typologies of archetypal characters continue to appear, reflecting changes in the constantly developing worlds of literature, theater, film, comics, and other forms of entertainment. Among them is the classification of forty-five master characters by V. Schmidt, which is the basis for defining the character's archetype in the present article. The aim of the research is to identify the elements of the psycholinguistic image of Justin Hammer in the superhero film Iron Man 2, based on the Marvel Comics and directed by Jon Favreau (2010). The task consists of three stages, namely identification of the psychological characteristics of the character, subsequent determination of Hammer's archetype, and definition of speech elements that reveal the character's psychological image. This paper explores 92 of Hammer's turns of dialogue in the film. According to V. Schmidt's classification, Hammer belongs to the Traitor archetype, which is a villainous representation of the Businessman archetype.
9

Cohen, Leon. Signal and Image Processing in Different Representations. Fort Belvoir, VA: Defense Technical Information Center, January 2008. http://dx.doi.org/10.21236/ada477452.

10

Lee, Chung-Nim, and Azriel Rosenfeld. Continuous Representations of Digital Images. Fort Belvoir, VA: Defense Technical Information Center, October 1985. http://dx.doi.org/10.21236/ada164189.

