Academic literature on the topic 'Vehicle color recognition'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Vehicle color recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Vehicle color recognition"

1

Yaba, Hawar Hussein, and Hemin Omer Latif. "Plate Number Recognition based on Hybrid Techniques." UHD Journal of Science and Technology 6, no. 2 (September 1, 2022): 39–48. http://dx.doi.org/10.21928/uhdjst.v6n2y2022.pp39-48.

Full text
Abstract:
Globally and locally, the number of vehicles is on the rise, and it is becoming more and more challenging for authorities to track down specific vehicles. Automatic License Plate Recognition extends transportation-system automation by extracting the vehicle license plate without human intervention. Identifying a vehicle precisely by its license plate number in moving images is among the crucial activities for plate discovery systems, and artificial intelligence is bridging the gap between the physical and digital worlds of automatic license plate detection. The proposed research uses machine learning to recognize Arabic license plate numbers. An image of the vehicle number plate is captured; detection is performed by image processing, and character segmentation locates the Arabic numeric characters on the plate. The system recognizes the license plate area and extracts it from the vehicle image. The background color of the number plate identifies the vehicle type: (1) white for private vehicles; (2) red for buses and taxis; (3) blue for governmental vehicles; (4) yellow for trucks, tractors, and cranes; (5) black for temporary licenses; and (6) green for army vehicles. The Arabic numbers on the plates are recognized by two methods: (1) Google Tesseract OCR and (2) machine-learning-based training and testing of Arabic number characters with k-nearest neighbors (kNN). The system has been tested on 90 images downloaded from the internet and captured from CCTV. Empirical outcomes show that the proposed system finds plate numbers and recognizes background colors and Arabic number characters successfully. The overall success rate of plate localization and background color detection is 97.78%, and Arabic number detection reaches 45.56% with OCR and 92.22% with kNN.
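The kNN classification stage described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tiny 3x3 "digit" bitmaps and the `knn_classify` helper are invented for the example.

```python
import numpy as np

def knn_classify(train_feats, train_labels, query, k=3):
    """Label a query vector by majority vote among its k nearest training vectors."""
    dists = np.linalg.norm(train_feats - query, axis=1)  # Euclidean distance to every sample
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy training set: flattened binary bitmaps standing in for segmented digit images.
train = np.array([
    [1,1,1, 1,0,1, 1,1,1],   # a "0"-like shape
    [0,1,0, 0,1,0, 0,1,0],   # a "1"-like shape
    [1,1,1, 1,0,1, 1,1,1],
    [0,1,0, 0,1,0, 0,1,0],
], dtype=float)
labels = ["0", "1", "0", "1"]

query = np.array([0,1,0, 0,1,0, 0,1,1], dtype=float)  # a noisy "1"
print(knn_classify(train, labels, query))  # -> 1
```

In the paper the feature vectors would come from the segmented plate characters rather than hand-written bitmaps; the voting logic is the same.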
APA, Harvard, Vancouver, ISO, and other styles
2

Gmiterko, Alexander. "LINE RECOGNITION SENSORS." TECHNICAL SCIENCES AND TECHNOLOGIES, no. 4 (14) (2018): 194–200. http://dx.doi.org/10.25140/2411-5363-2018-4(14)-194-200.

Full text
Abstract:
Urgency of the research. Industrial practice needs methods for line-following navigation of automated guided vehicles (AGVs) performing logistic tasks in factories without operators. Target setting. Various types of navigation methods are used for vehicles. Actual scientific researches and issues analysis. Such an automated guided vehicle can be navigated by a colored line on the ground or by an inductively sensed cable located underground. Magnetic guidance is also used, as are various types of optical markers. This type of autonomous robot application is growing because of industrial demand. Uninvestigated parts of general matters defining. The next generation of automated guided vehicles is navigated using laser scanners and is called LGV (Laser Guided Vehicle); this type is not covered in this paper. The research objective. The main aim of the paper is to design a sensing system for color line sensing. These sensors present several problems. The manufacturer states that a daylight filter is fitted, but first experiments show sensitivity to daylight; this problem can occur when the vehicle enters a tunnel. Another problem arises when the vehicle moves uphill and downhill on a bridge. The statement of basic materials. The line color is sensed with a reflective optocoupler working in the infrared range. The optocoupler includes an infrared LED transmitter and an infrared phototransistor that senses the reflected light. Optocouplers are placed on the bottom of the vehicle. The navigation line is black and the surrounding ground is white, so an optocoupler located over the black navigation line receives no infrared reflection. Conclusions. The selected sensor system has been adapted for the line-detection application, the ramp problems have been solved, and the sensors have been successfully installed on a line-follower vehicle. Results show a visible difference between the voltage levels for the black line and the white ground. Future plans are to add a camera vision system for automatic recognition of the line ahead of the vehicle and continuous path planning; vision systems are also frequently used for obstacle detection and environment mapping, and consequently for path planning.
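The thresholding logic implied by the abstract can be sketched in a few lines. The wiring assumption (high collector voltage over the non-reflecting black line, low voltage over white ground) and the `THRESHOLD_V` value are illustrative, not taken from the paper.

```python
# Hypothetical sketch: each reflective optocoupler yields a voltage; over the
# black line there is little IR reflection (high voltage in an assumed pull-up
# wiring), over white ground the phototransistor conducts (low voltage).
THRESHOLD_V = 2.5  # assumed split between "black" and "white" readings

def line_position(sensor_volts):
    """Return indices of sensors that see the black line (voltage above threshold)."""
    return [i for i, v in enumerate(sensor_volts) if v > THRESHOLD_V]

readings = [0.4, 0.5, 4.1, 4.3, 0.6]   # sensors 2 and 3 are over the line
print(line_position(readings))          # -> [2, 3]
```

A steering controller would then center the vehicle on the reported indices.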
3

Hu, Mingdi, Yi Wu, Jiulun Fan, and Bingyi Jing. "Joint Semantic Intelligent Detection of Vehicle Color under Rainy Conditions." Mathematics 10, no. 19 (September 26, 2022): 3512. http://dx.doi.org/10.3390/math10193512.

Full text
Abstract:
Color is an important feature of vehicles, and it plays a key role in intelligent traffic management and criminal investigation. Existing algorithms for vehicle color recognition are typically trained on data under good weather conditions and have poor robustness for outdoor visual tasks. Fine vehicle color recognition under rainy conditions is still a challenging problem. In this paper, an algorithm for jointly deraining and recognizing vehicle color (JADAR) is proposed, where three layers of UNet are embedded into RetinaNet-50 to obtain joint semantic fusion information. More precisely, the UNet subnet is used for deraining, and the feature maps of the recovered clean image and the extracted feature maps of the input image are cascaded into the Feature Pyramid Net (FPN) module to achieve joint semantic learning. The joint feature maps are then fed into the class and box subnets to classify and locate objects. The RainVehicleColor-24 dataset is used to train the JADAR for vehicle color recognition under rainy conditions, and extensive experiments are conducted. Since the deraining and detecting modules share the feature extraction layers, our algorithm maintains the test time of RetinaNet-50 while improving its robustness. Testing on self-built and public real datasets, the mean average precision (mAP) of vehicle color recognition reaches 72.07%, which beats both state-of-the-art algorithms for vehicle color recognition and popular target detection algorithms.
4

Park, Sun-Mi, and Ku-Jin Kim. "PCA-SVM Based Vehicle Color Recognition." KIPS Transactions: Part B 15B, no. 4 (August 29, 2008): 285–92. http://dx.doi.org/10.3745/kipstb.2008.15-b.4.285.

Full text
5

Hou, Dong Liang, and Xiao Lin Feng. "A New Quick Recognition Method Based on RGB Color Space." Advanced Materials Research 1049-1050 (October 2014): 1581–85. http://dx.doi.org/10.4028/www.scientific.net/amr.1049-1050.1581.

Full text
Abstract:
An automated guided vehicle (AGV) can free drivers from boring work. Previously, AGVs were guided by a black conduction band; since that color is single, it can be used to navigate only one kind of vehicle, and when several kinds of vehicles need to be guided, the black conduction band becomes powerless. Colored conduction bands overcome this problem. Usually the RGB image has to be converted to another color space before the bands are identified, but this costs a lot of time. This paper proposes a quick recognition method that works directly in RGB color space. It effectively improves recognition speed and achieves real-time identification.
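Working directly in RGB means classifying a band pixel from channel values without any HSV/Lab conversion. A minimal sketch of that idea follows; the dominance rules and the `margin` threshold are assumptions for illustration, not the paper's actual decision criteria.

```python
def classify_rgb(r, g, b, margin=40):
    """Guess a conduction-band color straight from RGB channel dominance,
    skipping any color-space conversion (thresholds are illustrative)."""
    if r < 60 and g < 60 and b < 60:
        return "black"
    if r > g + margin and r > b + margin:
        return "red"
    if g > r + margin and g > b + margin:
        return "green"
    if b > r + margin and b > g + margin:
        return "blue"
    return "unknown"

print(classify_rgb(200, 40, 30))   # -> red
print(classify_rgb(20, 30, 25))    # -> black
```

Because each pixel needs only a few integer comparisons, such a rule runs much faster than converting every frame to another color space first, which is the speed argument the abstract makes.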
6

WU, JUI-CHEN, JUN-WEI HSIEH, SIN-YU CHEN, CHENG-MIN TU, and YUNG-SHENG CHEN. "VEHICLE ORIENTATION ANALYSIS USING EIGEN COLOR, EDGE MAP, AND NORMALIZED CUT CLUSTERING." International Journal of Pattern Recognition and Artificial Intelligence 24, no. 05 (August 2010): 823–46. http://dx.doi.org/10.1142/s0218001410008111.

Full text
Abstract:
This paper proposes a novel approach for estimating vehicles' orientations from still images using "eigen color" and edge map through a clustering framework. To extract the eigen color, a novel color transform model is used for roughly segmenting a vehicle from its background. The model is invariant to various situations like contrast changes, background, and lighting. It does not need to be re-estimated for any new vehicles. In this eigen color space, different vehicle regions can be easily identified. However, since the problem of object segmentation is still ill-posed, only with this model, the shape of a vehicle cannot be well extracted from its background and thus affects the accuracy of orientation estimation. In order to solve this problem, the distributions of vehicle edges and colors are then integrated together to form a powerful but high-dimensional feature space. Since the feature dimension is high, the normalized cut spectral clustering (Ncut) is then used for feature reduction and orientation clustering. The criterion in Ncut tries to minimize the ratio of the total dissimilarity between groups to the total similarity within the groups. Then, the vehicle orientation can be analyzed using the eigenvectors derived from the Ncut result. The proposed framework needs only one still image and is thus very different to traditional methods which need motion features to determine vehicle orientations. Experimental results reveal the superior performances in vehicle orientation analysis.
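The normalized-cut step used above can be illustrated on a toy graph. This is a generic spectral bipartition sketch (thresholding the second eigenvector of the normalized Laplacian), not the authors' pipeline; the affinity matrix below is invented.

```python
import numpy as np

def ncut_bipartition(W):
    """Split a weighted graph in two by thresholding the second-smallest
    eigenvector of the normalized Laplacian (the relaxed normalized cut)."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt  # normalized Laplacian
    vals, vecs = np.linalg.eigh(L_sym)        # eigenvalues in ascending order
    fiedler = vecs[:, 1]                      # eigenvector of 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)          # its sign pattern gives the two groups

# Two tight clusters of feature vectors joined by weak cross-edges.
W = np.array([
    [0.0, 5.0, 5.0, 0.1, 0.1, 0.1],
    [5.0, 0.0, 5.0, 0.1, 0.1, 0.1],
    [5.0, 5.0, 0.0, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.0, 5.0, 5.0],
    [0.1, 0.1, 0.1, 5.0, 0.0, 5.0],
    [0.1, 0.1, 0.1, 5.0, 5.0, 0.0],
])
labels = ncut_bipartition(W)
print(labels)  # nodes 0-2 land in one group, nodes 3-5 in the other
```

In the paper the graph nodes would carry the high-dimensional joint color/edge features, and the resulting eigenvectors drive the orientation analysis.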
7

Panetta, Karen, Landry Kezebou, Victor Oludare, James Intriligator, and Sos Agaian. "Artificial Intelligence for Text-Based Vehicle Search, Recognition, and Continuous Localization in Traffic Videos." AI 2, no. 4 (December 6, 2021): 684–704. http://dx.doi.org/10.3390/ai2040041.

Full text
Abstract:
The concept of searching and localizing vehicles from live traffic videos based on descriptive textual input has yet to be explored in the scholarly literature. Endowing Intelligent Transportation Systems (ITS) with such a capability could help solve crimes on roadways. One major impediment to the advancement of fine-grain vehicle recognition models is the lack of video testbench datasets with annotated ground truth data. Additionally, to the best of our knowledge, no metrics currently exist for evaluating the robustness and performance efficiency of a vehicle recognition model on live videos, and even less so for vehicle search and localization models. In this paper, we address these challenges by proposing V-Localize, a novel artificial intelligence framework for vehicle search and continuous localization in live traffic videos based on input textual descriptions. An efficient hashgraph algorithm is introduced to compute valid target information from textual input. This work further introduces two novel datasets to advance AI research in these challenging areas. These datasets include (a) the most diverse and large-scale Vehicle Color Recognition (VCoR) dataset, with 15 color classes (twice as many as in the largest existing such dataset) to facilitate finer-grain recognition with color information; and (b) a Vehicle Recognition in Video (VRiV) dataset, a first-of-its-kind video testbench dataset for evaluating the performance of vehicle recognition models on live videos rather than still image data. The VRiV dataset will open new avenues for AI researchers to investigate innovative approaches that were previously intractable due to the lack of annotated traffic vehicle recognition video testbench datasets. Finally, to address the gap in the field, five novel metrics are introduced in this paper for adequately assessing the performance of vehicle recognition models on live videos.
Ultimately, the proposed metrics could also prove intuitively effective for quantitative model evaluation in other video recognition applications. One major advantage of the proposed vehicle search and continuous localization framework is that it can be integrated into ITS software solutions to aid law enforcement, especially in critical cases such as AMBER Alerts or hit-and-run incidents.
8

Che, Sheng Bing, and Jin Kai Luo. "Vehicle License Plate Recognition with Intelligent Materials Based on Color Division." Advanced Materials Research 485 (February 2012): 592–95. http://dx.doi.org/10.4028/www.scientific.net/amr.485.592.

Full text
Abstract:
With the development and progress of economy and technology, transportation has become more and more important in everyday life, and intelligent transportation systems can be used in many areas. Images in RGB color space are very sensitive to ambient light intensity, and pollution may appear on the plate, which makes it difficult to locate the plate area. In this paper, we propose some new algorithms. In the first stage, car image preprocessing, a new algorithm named color division is proposed; experiments show that it maps plate colors to standard colors despite environmental influences and inevitable color differences, so the performance of the whole system is improved. For license plate location, a new algorithm based on license color pairs is proposed, and experiments show it performs well.
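The "color pair" idea (plate background color adjacent to character color) can be sketched as a row scan. The label image, color names, and `min_pairs` threshold below are invented for illustration; the paper's actual pairing and voting rules are not specified in the abstract.

```python
# Illustrative sketch: count adjacent (background, character) color transitions
# per row; rows belonging to the plate show many such pairs.
def plate_rows(label_image, bg="blue", fg="white", min_pairs=3):
    """Return indices of rows with at least `min_pairs` bg->fg transitions."""
    rows = []
    for y, row in enumerate(label_image):
        pairs = sum(1 for a, b in zip(row, row[1:]) if a == bg and b == fg)
        if pairs >= min_pairs:
            rows.append(y)
    return rows

img = [
    ["gray"] * 8,
    ["blue", "white", "blue", "white", "blue", "white", "blue", "blue"],
    ["blue", "white", "blue", "white", "blue", "white", "blue", "white"],
    ["gray"] * 8,
]
print(plate_rows(img))  # -> [1, 2]
```

The contiguous band of qualifying rows would then bound the candidate plate region vertically.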
9

Hu, Mingdi, Chenrui Wang, Jingbing Yang, Yi Wu, Jiulun Fan, and Bingyi Jing. "Rain Rendering and Construction of Rain Vehicle Color-24 Dataset." Mathematics 10, no. 17 (September 5, 2022): 3210. http://dx.doi.org/10.3390/math10173210.

Full text
Abstract:
The fine identification of vehicle color can assist in criminal investigation or intelligent traffic management law enforcement. Since almost all vehicle-color datasets that are used to train models are collected in good weather, the existing vehicle-color recognition algorithms typically show poor performance for outdoor visual tasks. In this paper we construct a new RainVehicleColor-24 dataset by rain-image rendering using PS technology and a SyRaGAN algorithm based on the VehicleColor-24 dataset. The dataset contains a total of 40,300 rain images with 125 different rain patterns, which can be used to train deep neural networks for specific vehicle-color recognition tasks. Experiments show that the vehicle-color recognition algorithms trained on the new dataset RainVehicleColor-24 improve accuracy to around 72% and 90% on rainy and sunny days, respectively. The code is available at humingdi2005@github.com.
10

Hou, Dong Liang, and Xiao Lin Feng. "A Kind of Design of Automatic Guided Vehicle Based on RGB Color Space." Applied Mechanics and Materials 687-691 (November 2014): 3844–48. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.3844.

Full text
Abstract:
An automated guided vehicle (AGV) can free drivers from boring work. Previously, AGVs were guided by a black conduction band; since that color is single, it can be used to navigate only one kind of vehicle, and when several kinds of vehicles need to be guided, the black conduction band becomes powerless. Colored conduction bands overcome this problem. Usually the RGB image has to be converted to another color space before the bands are identified, but this costs a lot of time. This paper presents the design of an AGV, introduces its hardware, and proposes a quick recognition method that works directly in RGB color space. It effectively improves recognition speed and achieves real-time identification.

Dissertations / Theses on the topic "Vehicle color recognition"

1

Gopinath, Sudhir. "Using Color and Shape Analysis for Boundary Line Extraction in Autonomous Vehicle Applications." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/35015.

Full text
Abstract:
Autonomous vehicles are the subject of intense research because they are a safe and convenient alternative to present-day vehicles. Human drivers base their navigational decisions primarily on visual information, and researchers have been attempting to use computers to do the same. The current challenge in using computer vision lies not in the collection or transmission of visual data, but in the perception of visual data to extract useful information from it. The focus of this thesis is on the use of computer vision to navigate an autonomous vehicle that will participate in the Intelligent Ground Vehicle Competition (IGVC). This document starts with a description of the IGVC and the software design of an autonomous vehicle. The thesis then focuses on the weakest link in the system: the computer vision module. Vehicles at the IGVC are expected to autonomously navigate an obstacle course; competing vehicles need to recognize and stay between lines painted on grass or pavement. The research presented in this document describes two methods used for boundary line extraction: color-based object extraction, and shape analysis for line recognition. This is the first time a combination of these methods has been applied to the problem of line recognition in the context of the IGVC. The most significant contribution of this work is a method for extracting lines in a binary image even when the line is attached to a shape that is not a line. Novel methods have been used to simplify camera calibration and to correct the perspective of the image. The results give promise of vastly improved autonomous vehicle performance.
Master of Science
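One common way to decide whether a color-extracted blob is line-like, in the spirit of the shape analysis described above, is to compare the principal axes of its pixel coordinates. This is a generic sketch, not the thesis's actual method; the point sets and the ratio test are invented.

```python
import numpy as np

def elongation(points):
    """Ratio of the principal axes of a 2-D point set: large values mean line-like."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)                      # 2x2 covariance of pixel coordinates
    vals = np.sort(np.linalg.eigvalsh(cov))  # ascending eigenvalues
    return vals[1] / max(vals[0], 1e-9)      # guard against a zero minor axis

line_px = [(x, 0.05 * x) for x in range(50)]              # thin, line-like blob
blob_px = [(x, y) for x in range(10) for y in range(10)]  # square blob
print(elongation(line_px) > 100)   # strongly elongated -> True
print(elongation(blob_px) < 2)     # roughly isotropic  -> True
```

A blob that fails the elongation test could then be probed further, e.g. for a line segment attached to a larger non-line shape.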
2

Fraz, Muhammad. "Video content analysis for intelligent forensics." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/18065.

Full text
Abstract:
The networks of surveillance cameras installed in public places and private territories continuously record video data with the aim of detecting and preventing unlawful activities. This enhances the importance of video content analysis applications, either for real time (i.e. analytic) or post-event (i.e. forensic) analysis. In this thesis, the primary focus is on four key aspects of video content analysis, namely; 1. Moving object detection and recognition, 2. Correction of colours in the video frames and recognition of colours of moving objects, 3. Make and model recognition of vehicles and identification of their type, 4. Detection and recognition of text information in outdoor scenes. To address the first issue, a framework is presented in the first part of the thesis that efficiently detects and recognizes moving objects in videos. The framework targets the problem of object detection in the presence of complex background. The object detection part of the framework relies on background modelling technique and a novel post processing step where the contours of the foreground regions (i.e. moving object) are refined by the classification of edge segments as belonging either to the background or to the foreground region. Further, a novel feature descriptor is devised for the classification of moving objects into humans, vehicles and background. The proposed feature descriptor captures the texture information present in the silhouette of foreground objects. To address the second issue, a framework for the correction and recognition of true colours of objects in videos is presented with novel noise reduction, colour enhancement and colour recognition stages. The colour recognition stage makes use of temporal information to reliably recognize the true colours of moving objects in multiple frames. 
The proposed framework is specifically designed to perform robustly on videos that have poor quality because of surrounding illumination, camera sensor imperfection, and artefacts due to high compression. In the third part of the thesis, a framework for vehicle make and model recognition and type identification is presented. As a part of this work, a novel feature representation technique for distinctive representation of vehicle images has emerged. The feature representation technique uses dense feature description and a mid-level feature encoding scheme to capture the texture in the frontal view of the vehicles. The proposed method is insensitive to minor in-plane rotation and skew within the image. The capability of the proposed framework can be extended to any number of vehicle classes without re-training. Another important contribution of this work is the publication of a comprehensive, up-to-date dataset of vehicle images to support future research in this domain. The problem of text detection and recognition in images is addressed in the last part of the thesis. A novel technique is proposed that exploits the colour information in the image for the identification of text regions. Apart from detection, the colour information is also used to segment characters from the words. The recognition of identified characters is performed using shape features and supervised learning. Finally, a lexicon-based alignment procedure is adopted to finalize the recognition of strings present in word images. Extensive experiments have been conducted on benchmark datasets to analyse the performance of the proposed algorithms. The results show that the proposed moving object detection and recognition technique surpasses well-known baseline techniques. The proposed framework for the correction and recognition of object colours in video frames achieved all the aforementioned goals.
The performance analysis of the vehicle make and model recognition framework on multiple datasets has shown the strength and reliability of the technique when used within various scenarios. Finally, the experimental results for the text detection and recognition framework on benchmark datasets have revealed the potential of the proposed scheme for accurate detection and recognition of text in the wild.
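The temporal stage of the colour-recognition framework (using multiple frames to stabilize a colour decision) can be sketched as a majority vote with a support threshold. The function name, the `min_support` parameter, and the vote rule are illustrative assumptions, not the thesis's exact fusion scheme.

```python
from collections import Counter

def temporal_color(per_frame_colors, min_support=0.5):
    """Fuse per-frame colour guesses for one tracked object: accept the majority
    colour only if it appears in at least `min_support` of the frames."""
    color, count = Counter(per_frame_colors).most_common(1)[0]
    return color if count / len(per_frame_colors) >= min_support else "uncertain"

track = ["red", "red", "brown", "red", "red", "gray", "red"]
print(temporal_color(track))  # -> red
```

Frames corrupted by compression artefacts or lighting then act as outvoted noise rather than flipping the final label.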
3

Yuan, Chao Hong, and 袁兆宏. "Vehicle Color Recognition in Infrared Band." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/b73upv.

Full text
Abstract:
Master's thesis
National Formosa University
Graduate Institute of Aeronautical and Electronic Technology
ROC academic year 99 (2010)
Most vehicle color identification methods are based on visible-spectrum color spaces, but how to identify vehicle color outside the visible spectrum is still a problem that needs to be solved. In this thesis, we develop a novel algorithm to recognize vehicle color based on the reflectance of the vehicle surface under infrared light. Our experimental results show that the reflectance of different vehicle colors in the infrared spectrum has some particular properties. We therefore use the distances between reflectance values, measured under 750 nm, 900 nm, and 950 nm infrared light, as features for recognition, and classify the colors with the k-nearest neighbor algorithm. Furthermore, to increase the identification rate, we add three parameters, the slopes of the 720–760 nm, 770–810 nm, and 890–930 nm bands, to the k-nearest neighbor features. In our experiments, the identification rate is about 77%, except for red vehicles: because the reflectance of red vehicles is not consistent, they still cannot be recognized correctly.
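The feature design described above (reflectance at three IR wavelengths plus three band slopes, fed to a nearest-neighbor classifier) can be sketched as follows. All numeric reflectance values in the reference table are made up for illustration; only the feature layout follows the abstract.

```python
import math

def nearest_color(samples, query):
    """1-NN over reflectance feature vectors: (R750, R900, R950, slope720-760,
    slope770-810, slope890-930). The reference values here are invented."""
    best_label, best_d = None, float("inf")
    for feats, label in samples:
        d = math.dist(feats, query)  # Euclidean distance in feature space
        if d < best_d:
            best_label, best_d = label, d
    return best_label

reference = [
    ((0.82, 0.85, 0.86, 0.010, 0.008, 0.002), "white"),
    ((0.08, 0.10, 0.11, 0.002, 0.001, 0.001), "black"),
    ((0.45, 0.60, 0.62, 0.030, 0.020, 0.004), "silver"),
]
print(nearest_color(reference, (0.80, 0.84, 0.85, 0.011, 0.007, 0.002)))  # -> white
```

The thesis uses k > 1 neighbors; with k = 1 the sketch reduces to picking the closest reference spectrum.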
4

Cheng, Chih-te, and 鄭志德. "Vehicle Color Recognition Technology Based on the Relational Analysis." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/37237787503697565749.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
ROC academic year 98 (2009)
In this thesis, we propose a method that detects the color of a vehicle based on statistical theory. The method analyzes the relationship between moving vehicles and the background, classifies the vehicles according to their moving directions, and then recognizes the color of the vehicle within each class. Because the color of a vehicle cannot be hidden, color recognition increases the degrees of freedom available when setting up a surveillance system. The main purpose of this thesis is therefore to provide vehicle-color information that enriches the database of a traffic video surveillance system, saving manpower and improving the reliability of the recognition system. To capture traffic information, we use pictures captured by a CCD camera and set different thresholds in the HSV (hue, saturation, value) color space to classify all the pixels in images of vehicles moving in different directions. A test image passes through multilayer filters to determine which class it belongs to; based on the concentration ratio and proportion of each class, the color of the vehicle in the test image is then recognized. In addition, the method uses the concentration ratio between objects and background to learn their correlation, so it can also compensate for light-source effects in outdoor environments. Besides that, we set three different parameters to improve the recognition rates for the three vehicle moving directions. Experimental results show that the proposed method is very robust and efficient for vehicles seen from the front, the side, and overhead, and the average color recognition rate is up to 90.2%.
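Per-pixel HSV thresholding of the kind described above can be sketched with the standard library. The threshold values below are illustrative placeholders, not the ones tuned in the thesis.

```python
import colorsys

def hsv_vehicle_color(r, g, b):
    """Classify one pixel into a coarse colour class via HSV thresholds.

    The threshold values are illustrative assumptions, not the thesis's.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if v < 0.2:                       # very dark pixels regardless of hue
        return "black"
    if s < 0.15:                      # desaturated pixels: white or gray
        return "white" if v > 0.7 else "gray"
    hue = h * 360                     # colorsys returns hue in [0, 1)
    if hue < 20 or hue >= 330:
        return "red"
    if 20 <= hue < 70:
        return "yellow"
    if 70 <= hue < 170:
        return "green"
    if 170 <= hue < 270:
        return "blue"
    return "other"

print(hsv_vehicle_color(210, 30, 30))    # -> red
print(hsv_vehicle_color(240, 240, 245))  # -> white
```

The thesis then aggregates such per-pixel labels per vehicle region (its "concentration ratio") rather than trusting any single pixel.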
5

Hu, Qichang. "Dynamic Scene Understanding with Applications to Traffic Monitoring." Thesis, 2017. http://hdl.handle.net/2440/119678.

Full text
Abstract:
Many breakthroughs have been witnessed in the computer vision community in recent years, largely due to deep Convolutional Neural Networks (CNNs) and large-scale datasets. This thesis aims to investigate dynamic scene understanding from images. The problem of dynamic scene understanding involves simultaneously solving several sub-tasks, including object detection, object recognition, and segmentation. Successfully completing these tasks will enable us to interpret the objects of interest within a scene. Vision-based traffic monitoring is one of many fast-emerging areas in intelligent transportation systems (ITS). In the thesis, we focus on the following problems in traffic scene understanding: 1) How to detect and recognize all the objects of interest in street view images? 2) How to employ CNN features and semantic pixel labelling to boost the performance of pedestrian detection? 3) How to enhance the discriminative power of CNN representations to improve the performance of fine-grained car recognition? 4) How to learn an adaptive color space to represent vehicle images for vehicle color recognition? For the first task, we propose a single learning-based detection framework to detect three important classes of objects (traffic signs, cars, and cyclists). The proposed framework consists of a dense feature extractor and detectors for these three classes. The advantage of using one common framework is that detection is much faster, since all dense features need to be evaluated only once and are then shared by all detectors. The proposed framework introduces spatially pooled features as a part of aggregated channel features to enhance robustness to noise and image deformations. We also propose an object subcategorization scheme as a means of capturing the intra-class variation of objects.
To address the second problem, we show that by re-using the convolutional feature maps (CFMs) of a deep CNN model as visual features to train an ensemble of boosted decision forests, we are able to remarkably improve the performance of pedestrian detection without using specially designed learning algorithms. We also show that semantic pixel labelling can simply be combined with a pedestrian detector to further boost the detection performance. Fine-grained details of objects usually contain very discriminative information, which is crucial for fine-grained object recognition. Conventional pooling strategies (e.g. max-pooling, average-pooling) may discard these fine-grained details and hurt the recognition performance. To remedy this problem, we propose a spatially weighted pooling (swp) strategy which considerably improves the discriminative power of CNN representations. The swp pools the CNN features with the guidance of its learnt masks, which measure the importance of the spatial units in terms of discriminative power. In image color recognition, visual features are extracted from image pixels represented in one color space. The choice of the color space may influence the quality of the extracted features and impact the recognition performance. We propose a color transformation method that converts image pixels from the RGB space to a learnt space to improve recognition performance. Moreover, we propose ColorNet, which optimizes the architecture of AlexNet and embeds a mini-CNN of color transformation for vehicle color recognition.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2017
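The spatially weighted pooling idea above can be sketched in NumPy: instead of averaging a feature map uniformly, weight each spatial cell by an importance mask before summing. In the thesis the mask is learnt; here it is a fixed toy mask, and all array shapes and values are invented for the example.

```python
import numpy as np

def spatially_weighted_pool(feature_map, mask):
    """Pool a C x H x W feature map with an H x W importance mask
    (learnt in the thesis; a fixed example mask here)."""
    w = mask / mask.sum()                                   # normalize spatial weights
    return (feature_map * w[None, :, :]).sum(axis=(1, 2))   # one value per channel

C, H, W = 2, 4, 4
fmap = np.zeros((C, H, W))
fmap[:, 1:3, 1:3] = 1.0                         # activation concentrated mid-grid
mask = np.zeros((H, W)); mask[1:3, 1:3] = 1.0   # mask focused on the same region
print(spatially_weighted_pool(fmap, mask))      # -> [1. 1.]  (plain mean would give 0.25)
```

The contrast with uniform average pooling (0.25 here) shows how the mask preserves a localized discriminative response that averaging would dilute.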

Books on the topic "Vehicle color recognition"

1

Kidz, Creative Scholar. Trucks, Planes and Cars Color by Number for Kids 4-8: Fun & Educational Vehicle Coloring Activity Book for Kids to Practice Counting, Number Recognition and Improve Motor Skills with Things That Go. Independently Published, 2019.

Find full text

Book chapters on the topic "Vehicle color recognition"

1

Tang, Zhiwei, Yong Chen, Bin Li, and Liangyi Li. "Vehicle Color Recognition Based on CUDA Acceleration." In Lecture Notes in Electrical Engineering, 1167–72. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-0539-8_118.

Full text
2

Wu, Xifang, Songlin Sun, Na Chen, Meixia Fu, and Xiaoying Hou. "Real-Time Vehicle Color Recognition Based on YOLO9000." In Lecture Notes in Electrical Engineering, 82–89. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-6504-1_11.

Full text
3

Hobson, Emily K. "Talk About Loving in the War Years." In Lavender and Red. University of California Press, 2016. http://dx.doi.org/10.1525/california/9780520279056.003.0006.

Full text
Abstract:
Between 1984 and 1990, lesbian and gay activists in the Bay Area and in Nicaragua built transnational ties that reshaped the gay and lesbian left. Two solidarity brigades composed largely of lesbians of color, Somos Hermanas and the Victoria Mercado Brigade, traveled from San Francisco to Nicaragua and made Nicaraguan solidarity a vehicle for multiracial, transnational, and women of color feminism. Meanwhile, the Nicaraguan lesbian and gay movement sought recognition in the Sandinista Revolution. They resisted repression by Sandinista security forces but kept that repression unknown in the United States to ensure ongoing support from solidarity activists. By managing solidarity efforts, Nicaraguan activists pursued their own goals and won Sandinista support for AIDS prevention and lesbian and gay activism.
4

Tsinas, Lampros. "MORE INTELLIGENCE BY KNOWLEDGE-BASED COLOUR-EVALUATION; SIGNAL LIGHT RECOGNITION." In Intelligent Autonomous Vehicles 1995, 7–12. Elsevier, 1995. http://dx.doi.org/10.1016/b978-0-08-042366-1.50006-3.

Full text

Conference papers on the topic "Vehicle color recognition"

1

Dong, Yanmei, Mingtao Pei, and Xiameng Qin. "Vehicle Color Recognition Based on License Plate Color." In 2014 Tenth International Conference on Computational Intelligence and Security (CIS). IEEE, 2014. http://dx.doi.org/10.1109/cis.2014.63.

Full text
2

Yang, Mengjie, Guang Han, Xiaofei Li, Xiuchang Zhu, and Liang Li. "Vehicle color recognition using monocular camera." In Signal Processing (WCSP 2011). IEEE, 2011. http://dx.doi.org/10.1109/wcsp.2011.6096902.

Full text
3

Tilakaratna, Damitha S. B., Ukrit Watchareeruetai, Supakorn Siddhichai, and Nattachai Natcharapinchai. "Image analysis algorithms for vehicle color recognition." In 2017 International Electrical Engineering Congress (iEECON). IEEE, 2017. http://dx.doi.org/10.1109/ieecon.2017.8075881.

Full text
4

Lin, Qiuli, Feng Liu, Qiang Zhao, and Ran Xu. "Vehicle color recognition based on superpixel features." In Eleventh International Conference on Digital Image Processing, edited by Xudong Jiang and Jenq-Neng Hwang. SPIE, 2019. http://dx.doi.org/10.1117/12.2539809.

Full text
5

Kim, Ku-Jin, Sun-Mi Park, and Yoo-Joo Choi. "Deciding the Number of Color Histogram Bins for Vehicle Color Recognition." In 2008 IEEE Asia-Pacific Services Computing Conference (APSCC). IEEE, 2008. http://dx.doi.org/10.1109/apscc.2008.207.

Full text
6

Li, Xiuzhi, Guangming Zhang, Jing Fang, Jian Wu, and Zhiming Cui. "Vehicle Color Recognition Using Vector Matching of Template." In 2010 Third International Symposiums on Electronic Commerce and Security (ISECS). IEEE, 2010. http://dx.doi.org/10.1109/isecs.2010.50.

Full text
7

Zhang, Mingyang, Pengli Wang, and Xiaoman Zhang. "Vehicle Color Recognition Using Deep Convolutional Neural Networks." In AICS 2019: 2019 International Conference on Artificial Intelligence and Computer Science. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3349341.3349408.

Full text
8

Kim, Kwang-Ju, Pyong-Kun Kim, Kil-Taek Lim, Yun-Su Chung, Yoon-Jeong Song, Soo In Lee, and Doo-Hyun Choi. "Vehicle Color Recognition via Representative Color Region Extraction and Convolutional Neural Network." In 2018 Tenth International Conference on Ubiquitous and Future Networks (ICUFN). IEEE, 2018. http://dx.doi.org/10.1109/icufn.2018.8436710.

Full text
9

Wang, Tiantian, Chunbo Xiu, and Yi Cheng. "Vehicle recognition based on saliency detection and color histogram." In 2015 27th Chinese Control and Decision Conference (CCDC). IEEE, 2015. http://dx.doi.org/10.1109/ccdc.2015.7162347.

Full text
10

Aarathi, K. S., and Anish Abraham. "Vehicle color recognition using deep learning for hazy images." In 2017 International Conference on Inventive Communication and Computational Technologies (ICICCT). IEEE, 2017. http://dx.doi.org/10.1109/icicct.2017.7975215.

Full text