A selection of scholarly literature on the topic "Model of the color vision"

Browse lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Model of the color vision".

Next to each work in the list you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these are available in the source metadata.

Journal articles on the topic "Model of the color vision"

1

Yoder, Lane. "Relative absorption model of color vision." Color Research & Application 30, no. 4 (2005): 252–64. http://dx.doi.org/10.1002/col.20121.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

김하나, 이지영, and 이지호. "Color Model Development of Color Conversion Technology for Color Vision Defectives." Journal of Korea Society of Color Studies 28, no. 2 (May 2014): 49–58. http://dx.doi.org/10.17289/jkscs.28.2.201405.49.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Massof, Robert W. "Color-vision theory and linear models of color vision." Color Research & Application 10, no. 3 (1985): 133–46. http://dx.doi.org/10.1002/col.5080100302.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Bonnardel, Valérie. "Color naming and categorization in inherited color vision deficiencies." Visual Neuroscience 23, no. 3-4 (May 2006): 637–43. http://dx.doi.org/10.1017/s0952523806233558.

Full text of the source
Abstract:
Dichromatic subjects can name colors accurately, even though they cannot discriminate among red-green hues (Jameson & Hurvich, 1978). This result is attributed to a normative language system that dichromatic observers developed by learning subtle visual cues to compensate for their impoverished color system. The present study used multidimensional scaling techniques to compare color categorization spaces of color-vision deficient (CVD) subjects to those of normal trichromat (NT) subjects, and consensus analysis estimated the normative effect of language on categorization. Subjects sorted 140 Munsell color samples in three different ways: a free sorting task (unlimited number of categories), a constrained sorting task (number of categories limited to eight), and a constrained naming task (limited to eight basic color terms). CVD color categories were comparable to those of NT subjects. For both CVD and NT subjects, a common color categorization space derived from the three tasks was well described by a three-dimensional model, with the first two dimensions corresponding to reddish-greenish and yellowish-bluish axes. However, the third axis, which was associated with an achromatic dimension in NTs, was not identified in the CVD model. Individual differences multidimensional scaling failed to reveal group differences in the sorting tasks. In contrast, the personal color naming spaces of CVD subjects exhibited a relative compression of the yellowish-bluish dimension that is inconsistent with the typical deutan-type color spaces derived from more direct measures of perceptual color judgments. As expected, the highest consensus among CVDs (77%) and NTs (82%) occurred in the naming task. The categorization behaviors studied in this experiment seemed to rely more on learning factors, and may reveal little about CVD perceptual representation of colors.
APA, Harvard, Vancouver, ISO, and other styles
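
The analysis pipeline described in the abstract above (sorting data reduced by multidimensional scaling to a three-dimensional color space) can be sketched compactly. A minimal illustration, assuming scikit-learn is available; the synthetic sorting data, subject count, and variable names are stand-ins, not Bonnardel's materials:

```python
import numpy as np
from sklearn.manifold import MDS

# sorts: one row per subject, one column per Munsell sample; entries are
# category labels assigned in a sorting task (illustrative random data).
rng = np.random.default_rng(0)
sorts = rng.integers(0, 8, size=(20, 140))

# Dissimilarity of two samples = fraction of subjects who put them
# into different categories.
n = sorts.shape[1]
diss = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        diss[i, j] = np.mean(sorts[:, i] != sorts[:, j])

# Embed in 3 dimensions, as in the three-dimensional model reported above.
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(diss)
print(coords.shape)  # (140, 3): candidate red-green, yellow-blue, achromatic axes
```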
5

Jetsu, Tuija, Yasser Essiarab, Ville Heikkinen, Timo Jaaskelainen, and Jussi Parkkinen. "Color classification using color vision models." Color Research & Application 36, no. 4 (November 8, 2010): 266–71. http://dx.doi.org/10.1002/col.20632.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Chittka, Lars. "Bee color vision is optimal for coding flower color, but flower colors are not optimal for being coded—why?" Israel Journal of Plant Sciences 45, no. 2-3 (May 13, 1997): 115–27. http://dx.doi.org/10.1080/07929978.1997.10676678.

Full text of the source
Abstract:
Model calculations are used to determine an optimal color coding system for identifying flower colors, and to see whether flower colors are well suited for being encoded. It is shown that the trichromatic color vision of bees comprises UV, blue, and green receptors whose wavelength positions are optimal for identifying flower colors. But did flower colors actually drive the evolution of bee color vision? A phylogenetic analysis reveals that UV, blue, and green receptors were probably present in the ancestors of crustaceans and insects 570 million years ago, and thus predate the evolution of flower color by at least 400 million years. In what ways did flower colors adapt to insect color vision? The variability of flower color is subject to constraint. Flowers are clustered in the bee color space (probably because of biochemical constraints), and different plant families differ strongly in their variation of color (which points to phylogenetic constraint). However, flower colors occupy areas of color space that are significantly different from those occupied by common background materials, such as green foliage. Finally, models are developed to test whether the colors of flowers of sympatric and simultaneously blooming species diverge or converge to a higher degree than expected by chance. Such effects are indeed found in some habitats.
APA, Harvard, Vancouver, ISO, and other styles
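
The "model calculations" this abstract refers to start from receptor quantum catches: integrating a receptor's spectral sensitivity against the light reflected by a flower. A toy sketch of that computation; the Gaussian sensitivity curves and peak wavelengths are rough placeholders for measured bee data (honeybee receptors peak in the UV, blue, and green), not values from the paper:

```python
import numpy as np

wl = np.arange(300.0, 701.0, 5.0)  # wavelength grid in nm
dw = 5.0                           # grid spacing for the integral

def sensitivity(peak, width=40.0):
    """Illustrative Gaussian receptor sensitivity (not measured bee data)."""
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

# UV, blue, and green receptors; peak positions are rough assumptions.
receptors = [sensitivity(p) for p in (350.0, 440.0, 540.0)]

def quantum_catches(reflectance, illuminant):
    """Approximate the integral of sensitivity * reflected light."""
    stimulus = reflectance * illuminant
    return np.array([np.dot(s, stimulus) * dw for s in receptors])

# Example: flat illuminant, a flower reflecting mostly 400-500 nm light.
illuminant = np.ones_like(wl)
reflectance = np.where((wl > 400) & (wl < 500), 0.8, 0.1)
print(quantum_catches(reflectance, illuminant))
```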
7

Valberg, Arne, and Thorstein Seim. "Neurophysiological correlates of color vision: A model." Psychology & Neuroscience 6, no. 2 (2013): 213–18. http://dx.doi.org/10.3922/j.psns.2013.2.09.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Guth, S. Lee. "Model for color vision and light adaptation." Journal of the Optical Society of America A 8, no. 6 (June 1, 1991): 976. http://dx.doi.org/10.1364/josaa.8.000976.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Fry, Glenn A. "Color vision model of MacLeod and Boynton." Color Research & Application 14, no. 3 (June 1989): 152–56. http://dx.doi.org/10.1002/col.5080140309.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Ohkoba, Minoru, Tomoharu Ishikawa, Shoko Hira, Sakuichi Ohtsuka, and Miyoshi Ayama. "Analysis of Hue Circle Perception of Congenital Red-green Color Deficiencies Based on Color Vision Model." Color and Imaging Conference 2020, no. 28 (November 4, 2020): 105–8. http://dx.doi.org/10.2352/issn.2169-2629.2020.28.15.

Full text of the source
Abstract:
To investigate precisely the individual properties of the internal color representation of congenital red-green color-deficient observers (CDOs) and color-normal observers (CNOs), a difference scaling experiment using pairs of primary colors was carried out for protans, deutans, and normal trichromats, and the results were analyzed using multidimensional scaling (MDS). The MDS configuration of CNOs showed a circular shape similar to the hue circle, whereas that of CDOs showed large individual differences, ranging from circular to U-shaped. A distortion index, DI, is proposed to express the shape variation of the MDS configuration. All color chips were plotted in the color vision space (L, r/g, y/b), and MDS using a nonlinear conversion from distance in the color vision space to perceptual difference scaling succeeded in producing a U-shaped configuration that reflects the internal color representation of CDOs.
APA, Harvard, Vancouver, ISO, and other styles
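
The color vision space (L, r/g, y/b) and the nonlinear distance-to-difference conversion mentioned above can be illustrated schematically. The opponent weights and the compressive exponent below are generic textbook choices, not the model parameters from the paper:

```python
import numpy as np

def opponent_coords(lms):
    """Map cone responses (L, M, S) to an opponent space (lum, r/g, y/b).
    The weights are generic opponent combinations, chosen for illustration."""
    L, M, S = lms
    return np.array([L + M,          # luminance
                     L - M,          # red-green
                     (L + M) - S])   # yellow-blue

def perceptual_difference(lms_a, lms_b, alpha=0.6):
    """Compressive mapping from opponent-space distance to a difference
    scaling, analogous in spirit to the paper's nonlinear conversion."""
    d = np.linalg.norm(opponent_coords(lms_a) - opponent_coords(lms_b))
    return d ** alpha

print(perceptual_difference((0.7, 0.5, 0.2), (0.4, 0.6, 0.9)))
```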

Dissertations and theses on the topic "Model of the color vision"

1

Liu, Yan. "Negative feedback control of the visual system and systematic colors vision model." Online version of thesis, 1991. http://hdl.handle.net/1850/11211.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Skaff, Sandra. "Spectral models for color vision." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66750.

Full text of the source
Abstract:
This thesis introduces a maximum entropy approach to model surface reflectance spectra. A reflectance spectrum is the amount of light, relative to the incident light, reflected from a surface at each wavelength. While the color of a surface can be in 3D vector form such as RGB, CMY, or YIQ, this thesis takes the surface reflectance spectrum to be the color of a surface. A reflectance spectrum is a physical property of a surface and does not vary with the different interactions a surface may undergo with its environment. Therefore, models of reflectance spectra can be used to fuse camera sensor responses from different images of the same surface or multiple surfaces of the same scene. This fusion improves the spectral estimates that can be obtained, and thus leads to better estimates of surface colors. The motivation for using a maximum entropy approach stems from the fact that surfaces observed in our everyday life surroundings typically have broad and therefore high entropy spectra. The maximum entropy approach, in addition, imposes the fewest constraints as it estimates surface reflectance spectra given only camera sensor responses. This is a major advantage over the widely used linear basis function spectral representations, which require a prespecified set of basis functions. Experimental results show that surface spectra of Munsell and construction paper patches can be successfully estimated using the maximum entropy approach in the case of three different surface interactions with the environment. First, in the case of changes in illumination, the thesis shows that the spectral models estimated are comparable to those obtained from the best approach which computes spectral models in the literature. Second, in the case of changes in the positions of surfaces with respect to each other, interreflections between the surfaces arise. Results show that the fusion of sensor responses from interreflection
APA, Harvard, Vancouver, ISO, and other styles
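
The core idea of the thesis, picking the maximum-entropy reflectance spectrum consistent with observed camera responses, can be phrased as a small constrained optimization. A sketch assuming SciPy; the camera sensitivities, wavelength grid, and test spectrum are synthetic stand-ins, not calibrated data:

```python
import numpy as np
from scipy.optimize import minimize

# Camera sensitivities C (3 sensors x N wavelengths) and responses r,
# both synthetic stand-ins for calibrated measurements.
N = 31  # 400..700 nm in 10 nm steps
wl = np.linspace(400, 700, N)
C = np.stack([np.exp(-0.5 * ((wl - p) / 40.0) ** 2) for p in (450, 550, 620)])
true_s = 0.2 + 0.6 * np.exp(-0.5 * ((wl - 580) / 60.0) ** 2)
r = C @ true_s

def neg_entropy(s):
    """Negative entropy of the normalized spectrum; minimizing this
    maximizes entropy, favoring broad, smooth spectra."""
    p = s / s.sum()
    return np.sum(p * np.log(p + 1e-12))

# Constraint: the estimated spectrum must reproduce the sensor responses.
cons = {"type": "eq", "fun": lambda s: C @ s - r}
res = minimize(neg_entropy, x0=np.full(N, 0.5), bounds=[(1e-6, 1.0)] * N,
               constraints=cons, method="SLSQP")
print(np.abs(C @ res.x - r).max())  # residual of the response constraint
```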
3

Shayeghpour, Omid. "Improving information perception from digital images for users with dichromatic color vision." Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-101984.

Full text of the source
Abstract:
Color vision deficiency (CVD) is the inability or limited ability to recognize colors and discriminate between them. A person with this condition perceives a narrower range of colors than a person with normal color vision. A growing number of researchers are striving to improve the quality of life for people with CVD. Finding a cure, building correction equipment, providing simulation tools and applying color transformation methods are among the efforts being made by researchers in this field. In this study we concentrate on recoloring digital images in such a way that users with CVD, especially dichromats, perceive more details from the recolored images than from the originals. The main focus is to give the CVD user a chance to find information within the picture which they could not perceive before, even though the transformed image might look strange or unnatural to users with normal color vision. During this color transformation process, the goal is to keep the overall contrast of the image constant while adjusting the colors that might cause confusion for the CVD user. First, each pixel in the RGB image is converted to HSV color space in order to control hue, saturation and intensity per pixel; then safe and problematic hue ranges must be found. The method for recognizing these ranges was inspired by a condition called "unilateral dichromacy", in which the patient has normal color vision in one eye and dichromacy in the other. A special grid-like color card is designed, with constant saturation and intensity over the entire image, while the hue changes smoothly from one block to another to cover the entire hue range. The next step is to simulate the way this color card is perceived by a dichromatic user, and finally to find the colors that are perceived identically in the two images and the ones that differ too much. This part makes our method highly customizable: we can apply it to other types of CVD, and even personalize it for the color vision of a specific observer. The resulting problematic colors are then dealt with by shifting the hue or saturation according to pre-defined rules. The results of the method have been evaluated both objectively and subjectively. First, we simulated a set of images as they would be perceived by a dichromat and compared them with the simulated view of our transformed images. The results clearly show that our recolored images eliminate much of the confusion and convey more details. Moreover, an online questionnaire was created, and 39 users with CVD confirmed that the transformed images allow them to perceive more information than the original images.
APA, Harvard, Vancouver, ISO, and other styles
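
The recoloring rule at the heart of this method (leave "safe" hues alone, shift hues from a problematic range) reduces to a small per-pixel function. A sketch using Python's standard colorsys module; the problematic range and shift amount are invented here, whereas the thesis derives them from the simulated color-card comparison:

```python
import colorsys

def recolor_pixel(r, g, b, bad_range=(80 / 360, 160 / 360), shift=0.25):
    """Shift hues falling in a problematic range; illustrative rule only.
    In the thesis the range and shift come from comparing a hue color
    card with its simulated dichromatic appearance."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)  # all channels in [0, 1]
    lo, hi = bad_range
    if lo <= h <= hi:
        h = (h + shift) % 1.0  # move the hue out of the confusion zone
    return colorsys.hsv_to_rgb(h, s, v)

print(recolor_pixel(0.2, 0.8, 0.3))  # a green a deutan viewer may confuse
```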
4

Lau, Hoi Ying. "Neural inspired color constancy model based on double opponent neurons." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?ECED%202008%20LAU.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Machado, Gustavo Mello. "A model for simulation of color vision deficiency and a color contrast enhancement technique for dichromats." Biblioteca Digital de Teses e Dissertações da UFRGS, 2010. http://hdl.handle.net/10183/26950.

Full text of the source
Abstract:
Color vision deficiency (CVD) affects approximately 200 million people worldwide, compromising the ability of these individuals to effectively perform color and visualizationrelated tasks. This has a significant impact on their private and professional lives. This thesis presents a physiologically-based model for simulating color perception. Besides modeling normal color vision, it also accounts for the hereditary and most prevalent cases of color vision deficiency (i.e., protanopia, deuteranopia, protanomaly, and deuteranomaly), which together account for approximately 99.96% of all CVD cases. This model is based on the stage theory of human color vision and is derived from data reported in electrophysiological studies. It is the first model to consistently handle normal color vision, anomalous trichromacy, and dichromacy in a unified way. The proposed model was validated through an experimental evaluation involving groups of color vision deficient individuals and normal color vision ones. This model can provide insights and feedback on how to improve visualization experiences for individuals with CVD. It also provides a framework for testing hypotheses about some aspects of the retinal photoreceptors in color vision deficient individuals. This thesis also presents an automatic image-recoloring technique for enhancing color contrast for dichromats whose computational cost varies linearly with the number of input pixels. This approach can be efficiently implemented on GPUs, and for typical image sizes it is up to two orders of magnitude faster than the current state-of-the-art technique. Unlike previous approaches, the proposed technique preserves temporal coherence and, therefore, is suitable for video recoloring. This thesis demonstrates the effectiveness of the proposed technique by integrating it into a visualization system and showing, for the first time, real-time high-quality recolored visualizations for dichromats.
APA, Harvard, Vancouver, ISO, and other styles
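
Simulations in this family are typically distributed as 3x3 matrices applied in linear RGB. A sketch using the severity-1.0 protanopia matrix as it is commonly reproduced from the authors' published tables; treat the coefficients as quoted values to verify against the original paper:

```python
import numpy as np

# Severity-1.0 protanopia matrix as commonly quoted from Machado et al.
# (2009); check the coefficients against the original tables before use.
PROTANOPIA = np.array([
    [ 0.152286,  1.052583, -0.204868],
    [ 0.114503,  0.786281,  0.099216],
    [-0.003882, -0.048116,  1.051998],
])

def simulate_protanopia(rgb_linear):
    """Apply the simulation in *linear* RGB; sRGB pixels must be
    linearized first and re-encoded afterwards."""
    out = rgb_linear @ PROTANOPIA.T
    return np.clip(out, 0.0, 1.0)

print(simulate_protanopia(np.array([1.0, 0.0, 0.0])))  # pure red collapses
```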
6

Kim, Taek Gyu. "Comparing color appearance models using pictorial images." Online version of thesis, 1994. http://hdl.handle.net/1850/11756.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Spencer, Lisa. "Real-Time Monocular Vision-Based Tracking for Interactive Augmented Reality." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4289.

Full text of the source
Abstract:
The need for real-time video analysis is rapidly increasing in today's world. The decreasing cost of powerful processors and the proliferation of affordable cameras, combined with needs for security, methods for searching the growing collection of video data, and an appetite for high-tech entertainment, have produced an environment where video processing is utilized for a wide variety of applications. Tracking is an element in many of these applications, for purposes like detecting anomalous behavior, classifying video clips, and measuring athletic performance. In this dissertation we focus on augmented reality, but the methods and conclusions are applicable to a wide variety of other areas. In particular, our work deals with achieving real-time performance while tracking with augmented reality systems using a minimum set of commercial hardware. We have built prototypes that use both existing technologies and new algorithms we have developed. While performance improvements would be possible with additional hardware, such as multiple cameras or parallel processors, we have concentrated on getting the most performance with the least equipment. Tracking is a broad research area, but an essential component of an augmented reality system. Tracking of some sort is needed to determine the location of scene augmentation. First, we investigated the effects of illumination on the pixel values recorded by a color video camera. We used the results to track a simple solid-colored object in our first augmented reality application. Our second augmented reality application tracks complex non-rigid objects, namely human faces. In the color experiment, we studied the effects of illumination on the color values recorded by a real camera. Human perception is important for many applications, but our focus is on the RGB values available to tracking algorithms. Since the lighting in most environments where video monitoring is done is close to white (e.g., fluorescent lights in an office, incandescent lights in a home, or direct and indirect sunlight outside), we looked at the response to "white" light sources as the intensity varied. The red, green, and blue values recorded by the camera can be converted to a number of other color spaces which have been shown to be invariant to various lighting conditions, including view angle, light angle, light intensity, or light color, using models of the physical properties of reflection. Our experiments show how well these derived quantities actually remained constant with real materials, real lights, and real cameras, while still retaining the ability to discriminate between different colors. This color experiment enabled us to find color spaces that were more invariant to changes in illumination intensity than the ones traditionally used. The first augmented reality application tracks a solid colored rectangle and replaces the rectangle with an image, so it appears that the subject is holding a picture instead. Tracking this simple shape is both easy and hard; easy because of the single color and the shape that can be represented by four points or four lines, and hard because there are fewer features available and the color is affected by illumination changes. Many algorithms for tracking fixed shapes do not run in real time or require rich feature sets. We have created a tracking method for simple solid colored objects that uses color and edge information and is fast enough for real-time operation.
We also demonstrate a fast deinterlacing method to avoid "tearing" of fast moving edges when recorded by an interlaced camera, and optimization techniques that usually achieved a speedup of about 10 from an implementation that already used optimized image processing library routines. Human faces are complex objects that differ between individuals and undergo non-rigid transformations. Our second augmented reality application detects faces, determines their initial pose, and then tracks changes in real time. The results are displayed as virtual objects overlaid on the real video image. We used existing algorithms for motion detection and face detection. We present a novel method for determining the initial face pose in real time using symmetry. Our face tracking uses existing point tracking methods as well as extensions to Active Appearance Models (AAMs). We also give a new method for integrating detection and tracking data and leveraging the temporal coherence in video data to mitigate the false positive detections. While many face tracking applications assume exactly one face is in the image, our techniques can handle any number of faces. The color experiment along with the two augmented reality applications provide improvements in understanding the effects of illumination intensity changes on recorded colors, as well as better real-time methods for detection and tracking of solid shapes and human faces for augmented reality. These techniques can be applied to other real-time video analysis tasks, such as surveillance and video analysis.
Ph.D., School of Computer Science, Engineering and Computer Science, Computer Science
APA, Harvard, Vancouver, ISO, and other styles
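
One family of illumination-intensity invariants evaluated in studies like this is normalized chromaticity, which cancels a common scale factor on the RGB channels. A minimal sketch; the function name and test values are illustrative:

```python
import numpy as np

def normalized_rgb(img):
    """Normalized chromaticity r = R/(R+G+B), g = G/(R+G+B): a classic
    representation that cancels a shared intensity scale factor."""
    img = img.astype(np.float64)
    total = img.sum(axis=-1, keepdims=True) + 1e-12
    return (img / total)[..., :2]  # keep r and g; b = 1 - r - g is redundant

pixel = np.array([[120, 60, 30]])
print(normalized_rgb(pixel))       # same chromaticity...
print(normalized_rgb(pixel * 2))   # ...after doubling the intensity
```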
8

Jeong, Kideog. "Object Matching in Disjoint Cameras Using a Color Transfer Approach." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/434.

Full text of the source
Abstract:
Object appearance models are a consequence of illumination, viewing direction, camera intrinsics, and other conditions that are specific to a particular camera. As a result, a model acquired in one view is often inappropriate for use in other viewpoints. In this work we treat this appearance model distortion between two non-overlapping cameras as one in which some unknown color transfer function warps a known appearance model from one view to another. We demonstrate how to recover this function in the case where the distortion function is approximated as general affine and object appearance is represented as a mixture of Gaussians. Appearance models are brought into correspondence by searching for a bijection function that best minimizes an entropic metric for model dissimilarity. These correspondences lead to a solution for the transfer function that brings the parameters of the models into alignment in the UV chromaticity plane. Finally, a set of these transfer functions acquired from a collection of object pairs are generalized to a single camera-pair-specific transfer function via robust fitting. We demonstrate the method in the context of a video surveillance network and show that recognition of subjects in disjoint views can be significantly improved using the new color transfer approach.
APA, Harvard, Vancouver, ISO, and other styles
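
The pipeline in this abstract (fit Gaussian mixtures to each view's chromaticities, match components, then solve for an affine transfer function) can be sketched end to end on synthetic data. Assumes scikit-learn; the component matching below is a crude lexicographic sort standing in for the entropic bijection search the thesis actually uses:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in: an object with three color modes seen by camera A,
# and the same modes warped by an unknown affine map in camera B.
rng = np.random.default_rng(1)
centers = np.array([[0.2, 0.3], [0.35, 0.5], [0.5, 0.25]])
uv_a = np.vstack([rng.normal(c, 0.01, size=(300, 2)) for c in centers])
A_true = np.array([[1.1, 0.05], [-0.02, 0.9]])
t_true = np.array([0.02, -0.03])
uv_b = uv_a @ A_true.T + t_true

gm_a = GaussianMixture(n_components=3, random_state=0).fit(uv_a)
gm_b = GaussianMixture(n_components=3, random_state=0).fit(uv_b)

# Crude correspondence: sort component means lexicographically. The thesis
# instead searches for the bijection minimizing an entropic dissimilarity.
ma = gm_a.means_[np.lexsort(gm_a.means_.T[::-1])]
mb = gm_b.means_[np.lexsort(gm_b.means_.T[::-1])]

# Least-squares affine map in homogeneous coordinates: [A | t].
H = np.hstack([ma, np.ones((3, 1))])
affine, *_ = np.linalg.lstsq(H, mb, rcond=None)
print(affine.T)  # compare the recovered map with A_true and t_true
```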
9

Pirrotta, Elizabeth. "Testing chromatic adaptation models using object colors." Online version of thesis, 1994. http://hdl.handle.net/1850/11674.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Zapata, Iván R. "Detecting humans in video sequences using statistical color and shape models." [Gainesville, Fla.]: University of Florida, 2001. http://etd.fcla.edu/etd/uf/2001/anp1058/ivan%5Fthesis2.pdf.

Full text of the source
Abstract:
Thesis (M.S.)--University of Florida, 2001.
Title from first page of PDF file. Document formatted into pages; contains viii, 49 p.; also contains graphics. Vita. Includes bibliographical references (p. 47-48).
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Model of the color vision"

1

CIE Technical Committee TC-8-01. A colour appearance model for colour management systems: CIECAM02. Vienna, Austria: CIE Central Bureau, 2004.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Color appearance models. Reading, Mass: Addison-Wesley, 1998.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Fairchild, Mark D. Color Appearance Models. New York: John Wiley & Sons, Ltd., 2005.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Xing, Jing. Reexamination of color vision standards. Washington, D.C: Federal Aviation Administration, Office of Aerospace Medicine, 2006.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Untersuchungen zur Farbästhetik im späten Schulkind- und Jugendalter: Ein Modell zur ästhetischen Wahrnehmung von Farbe und zur Gestaltung von Farbwirkungen durch ästhetische Organisation gewählter Farben. Frankfurt am Main: Lang, 1990.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Color vision. New York: AMPHOTO, 1989.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Boynton, Robert M., ed. Human color vision. 2nd ed. Washington, DC: Optical Society of America, 1996.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Human color vision. [Washington, DC]: Optical Society of America, 1992.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Valberg, Arne. Light Vision Color. New York: John Wiley & Sons, Ltd., 2006.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Kremers, Jan, Rigmor C. Baraas, and N. Justin Marshall, eds. Human Color Vision. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-44978-4.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Model of the color vision"

1

Tominaga, Shoji. "Color Model." In Computer Vision, 1–6. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_449-1.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Tominaga, Shoji. "Color Model." In Computer Vision, 116–20. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_449.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Tominaga, Shoji. "Color Model." In Computer Vision, 176–81. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63416-2_449.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Bai, Xue, Jue Wang, and Guillermo Sapiro. "Dynamic Color Flow: A Motion-Adaptive Color Model for Object Segmentation in Video." In Computer Vision – ECCV 2010, 617–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15555-0_45.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Syeda-Mahmood, Tanveer Fathima. "Data and model-driven selection using color regions." In Computer Vision — ECCV'92, 115–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/3-540-55426-2_14.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Bäuml, Karl-Heinz, Xuemei Zhang, and Brian Wandell. "Color Spaces and Color Metrics." In Vision Models and Applications to Image and Video Processing, 99–122. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3411-9_6.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Fry, Glenn A. "König Models of Color Vision." In Colour Vision Deficiencies IX, 117–24. Dordrecht: Springer Netherlands, 1989. http://dx.doi.org/10.1007/978-94-009-2695-0_14.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Iwaki, Ryuichi, and Michinari Shimoda. "Electronic Circuit Model of Color Sensitive Retinal Cell Network." In Biologically Motivated Computer Vision, 482–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-45482-9_49.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Lu, Chenguang. "Explaining Color Evolution, Color Blindness, and Color Recognition by the Decoding Model of Color Vision." In IFIP Advances in Information and Communication Technology, 287–98. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-46931-3_27.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Alarcon, Teresa, and Oscar Dalmau. "Color Categorization Models for Color Image Segmentation." In Lecture Notes in Computational Vision and Biomechanics, 303–27. Dordrecht: Springer Netherlands, 2013. http://dx.doi.org/10.1007/978-94-007-7584-8_10.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Model of the color vision"

1

Moorhead, Ian R. "Computational color vision model." In Photonics West '98 Electronic Imaging, edited by Bernice E. Rogowitz and Thrasyvoulos N. Pappas. SPIE, 1998. http://dx.doi.org/10.1117/12.320155.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Haihui. "A model of color vision with a robot system." In ICO20: Illumination, Radiation, and Color Technologies, edited by Dazun Zhao, M. R. Luo, and Hirohisa Yaguchi. SPIE, 2006. http://dx.doi.org/10.1117/12.668080.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Guth, S. Lee. "ATD model for color vision II: applications." In IS&T/SPIE 1994 International Symposium on Electronic Imaging: Science and Technology, edited by Eric Walowit. SPIE, 1994. http://dx.doi.org/10.1117/12.173844.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Nonaka, Takako, Morimasa Matsuda, and Tomohiro Hase. "Color Mixture Model Based on Spatial Frequency Response of Color Vision." In 2006 IEEE International Conference on Systems, Man and Cybernetics. IEEE, 2006. http://dx.doi.org/10.1109/icsmc.2006.384395.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Shengdong, Yue Wu, Yuanjie Zhao, Zuomin Cheng, and Wenqi Ren. "Color-Constrained Dehazing Model." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2020. http://dx.doi.org/10.1109/cvprw50498.2020.00443.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Yuxin Peng, Yuxin Jin, Kezhong He, Fuchun Sun, Huaping Liu, and Linmi Tao. "Color Model based real-time Face Detection with AdaBoost in color image." In 2007 International Conference on Machine Vision. IEEE, 2007. http://dx.doi.org/10.1109/icmv.2007.4469270.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Chakrabarti, Ayan, Daniel Scharstein, and Todd Zickler. "An Empirical Camera Model for Internet Color Vision." In British Machine Vision Conference 2009. British Machine Vision Association, 2009. http://dx.doi.org/10.5244/c.23.51.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

De Valois, Russell L. "Standard model of color vision: problems and an alternative." In Computational Vision Based on Neurobiology, edited by Teri B. Lawton. SPIE, 1994. http://dx.doi.org/10.1117/12.171148.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Guth, S. Lee. "Further applications of the ATD model for color vision." In IS&T/SPIE's Symposium on Electronic Imaging: Science & Technology, edited by Eric Walowit. SPIE, 1995. http://dx.doi.org/10.1117/12.206546.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Rao, Xiuqin, and Yibin Ying. "Color model for fruit quality inspection with machine vision." In Optics East 2005, edited by Yud-Ren Chen, George E. Meyer, and Shu-I. Tu. SPIE, 2005. http://dx.doi.org/10.1117/12.630504.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Reports of organizations on the topic "Model of the color vision"

1

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Full text of the source
Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one at Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective, utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, color sensor, electronic sniffer for odor detection, refractometer and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll from fluorescence was found to give the best estimation for ripeness stage while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training size for supervised classification. A new criterion was established to choose a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models by voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or neural network was found to be superior in classification accuracy, with half the processing required by the numerical classifier or neural network alone. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects and bruises were measured using a color image processing system. Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back propagation neural network and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification and general color and color homogeneity. An unsupervised method was developed to extract necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable with no need for specific modification and no a-priori knowledge. Moreover, since there is no assumption as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruits' orientation. A new algorithm for on-line clustering of data was developed. The algorithm's adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data and preserves information even with memory constraints. Large quantities of data (many images) of high dimensionality (due to multiple sensors) and new information arriving incrementally (a function of the temporal dynamics of any natural process) can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real-time. The methodology developed was tested to determine external quality of tomatoes based on visual information. An improved model for color sorting, which is stable and does not require recalibration for each season, was developed for color determination. Excellent classification results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using a drop-impact and a vision sensor in order to predict the storability and marketing of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements and in a manner consistent with expert human grading and in accordance with market acceptance. This research constructed and used large databases of both commodities for comparative evaluation and optimization of expert system, statistical and/or neural network models. The models developed in this research were successfully tested, and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner, consistent with human graders and inspectors.
APA, Harvard, Vancouver, ISO, and other styles
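
A modern minimal analogue of the back-propagation classifiers described in this report is a small multilayer perceptron fusing multi-sensor features. A sketch with scikit-learn on synthetic data; the feature count and the toy grading rule are invented for illustration:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Illustrative stand-in for the multi-sensor fruit data: each row fuses
# features such as color, firmness, mass, and sugar content readings.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))                      # 8 fused sensor features
grade = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # toy quality rule

X_tr, X_te, y_tr, y_te = train_test_split(X, grade, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)  # trained by back-propagation, as in the report
print(f"hold-out accuracy: {clf.score(X_te, y_te):.2f}")
```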
2

Krauskopf, John. Higher Order Mechanisms of Color Vision. Fort Belvoir, VA: Defense Technical Information Center, May 1989. http://dx.doi.org/10.21236/ada214616.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Krauskopf, John. Higher Order Mechanisms of Color Vision. Fort Belvoir, VA: Defense Technical Information Center, June 1988. http://dx.doi.org/10.21236/ada198093.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Laxar, Kevin V. U.S. Navy Color Vision Standards Revisited. Fort Belvoir, VA: Defense Technical Information Center, April 1998. http://dx.doi.org/10.21236/ada347110.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Krauskopf, John. Higher Order Mechanisms of Color Vision. Fort Belvoir, VA: Defense Technical Information Center, November 1991. http://dx.doi.org/10.21236/ada244720.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Jacob J. Jacobson, Robert F. Jeffers, Gretchen E. Matthern, Steven J. Piet, Benjamin A. Baker, and Joseph Grimm. VISION User Guide - VISION (Verifiable Fuel Cycle Simulation) Model. Office of Scientific and Technical Information (OSTI), August 2009. http://dx.doi.org/10.2172/968564.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Lindquist, Goerge H., J. Richard Freeling, and Allyn W. Dunstan. Computational Vision Model (CVM) Research and Development. Fort Belvoir, VA: Defense Technical Information Center, March 1998. http://dx.doi.org/10.21236/ada361237.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Syeda-Mahmood, Tanveer F. Data and Model-Driven Selection Using Color Regions. Fort Belvoir, VA: Defense Technical Information Center, February 1992. http://dx.doi.org/10.21236/ada260101.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Chang, Huey, Katsushi Ikeuchi, and Takeo Kanade. Model-Based Vision System by Object-Oriented Programming. Fort Belvoir, VA: Defense Technical Information Center, February 1988. http://dx.doi.org/10.21236/ada195819.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Krnjaic, Gordan Zdenko. Dark Matter and Color Octets Beyond the Standard Model. Office of Scientific and Technical Information (OSTI), July 2012. http://dx.doi.org/10.2172/1127922.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles