Academic literature on the topic "Detection and recognition"

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic "Detection and recognition".

Next to every entry in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication in .pdf format and read its abstract online, when these are available in the metadata.

Journal articles on the topic "Detection and recognition":

1

Sugiura, Hiroki, Shinichi Demura, Yoshinori Nagasawa, Shunsuke Yamaji, Tamotsu Kitabayashi, Shigeki Matsuda, Takayoshi Yamada, and Ning Xu. "Relationship between Extent of Coffee Intake and Recognition of Its Effects and Ingredients." Detection 01, no. 01 (2013): 1–6. http://dx.doi.org/10.4236/detection.2013.11001.

2

Shah, Dipti M., and Parul D. Sindha. "Color detection in real time traffic sign detection and recognition system." Indian Journal of Applied Research 3, no. 7 (October 1, 2011): 152–53. http://dx.doi.org/10.15373/2249555x/july2013/43.

3

Srilatha, J., T. S. Subashini, and K. Vaidehi. "Solid Waste Detection and Recognition using Faster RCNN." Indian Journal Of Science And Technology 16, no. 42 (November 13, 2023): 3778–85. http://dx.doi.org/10.17485/ijst/v16i42.2005.

4

Shevtekar, Sumit, and Shrinidhi Kulkarni. "Traffic-sign Recognition and Detection using Yolo-v8." International Journal of Research Publication and Reviews 5, no. 5 (May 2, 2024): 1619–31. http://dx.doi.org/10.55248/gengpi.5.0524.1141.

5

Yamini, Maidam. "Number Plate Detection in an Image." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 07, no. 09 (September 1, 2023): 1–11. http://dx.doi.org/10.55041/ijsrem25883.

Abstract:
Automatic vehicle license plate detection and recognition is a key technique in most traffic-related applications and an active research topic in the image processing domain. Many methods, techniques, and algorithms have been developed for license plate detection and recognition. Because license plates vary in numbering system, colour, style, and size, treating detection and recognition as two separate tasks introduces a large number of factors and makes identification difficult, so further research is still needed in this area. We propose a unified convolutional neural network (CNN), evaluated with the F1 score, that can both localize license plates and recognize their characters. The system first segments the characters in the detected license plate and then recognizes each segmented character using Optical Character Recognition (OCR) techniques. Extensive experiments show the effectiveness and efficiency of the proposed approach.
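The character segmentation step this abstract mentions is commonly done with a vertical projection profile. The following is a generic illustration only, not the authors' code: a pure-Python sketch that splits a binary image (rows of 0/1) into character column spans.

```python
def segment_columns(binary_image):
    """Split a binary image (list of rows of 0/1 pixels) into
    character spans using its vertical projection profile.

    Returns a list of (start_col, end_col) pairs, end exclusive,
    one per run of columns containing at least one foreground pixel.
    """
    if not binary_image:
        return []
    width = len(binary_image[0])
    # Vertical projection: count of foreground pixels per column.
    profile = [sum(row[c] for row in binary_image) for c in range(width)]
    spans, start = [], None
    for c, count in enumerate(profile):
        if count > 0 and start is None:
            start = c                    # a character run begins
        elif count == 0 and start is not None:
            spans.append((start, c))     # the run just ended
            start = None
    if start is not None:
        spans.append((start, width))
    return spans


# Two "characters" separated by an empty column.
img = [
    [1, 0, 1],
    [1, 0, 1],
    [1, 0, 1],
]
print(segment_columns(img))  # → [(0, 1), (2, 3)]
```

Each span would then be cropped and passed to the OCR stage.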
6

M C, Sohan, Akanksh A M, Anala M R, and Hemavathy R. "Banknote Denomination Recognition on Mobile Devices." ECS Transactions 107, no. 1 (April 24, 2022): 11781–90. http://dx.doi.org/10.1149/10701.11781ecst.

Abstract:
Several mobile applications have been developed to facilitate denomination detection for blind users. However, none of the existing applications can detect multiple notes in a single frame and relay the total denomination, nor is there a dataset of the new Indian currency notes annotated for object detection training. We describe the development of a detection application that improves on existing solutions by enabling multi-note detection, continuous audio feedback, automatic torch usage, and minimal user-application interaction. YOLOv4 allowed us to train a lightweight, fast, and accurate object detection model on a custom dataset of post-demonetization Indian currency notes, deployed on a mobile device.
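The multi-note totaling described here amounts to a post-processing step over detector output. The sketch below is hypothetical (the class labels and confidence threshold are illustrative, not the app's actual configuration):

```python
def total_denomination(detections, min_confidence=0.5):
    """Sum the value of every banknote detected in one frame.

    `detections` is a list of (label, confidence) pairs as a
    YOLO-style detector might emit; the label-to-value mapping
    below is a hypothetical set of Indian note classes.
    """
    values = {"10": 10, "20": 20, "50": 50,
              "100": 100, "200": 200, "500": 500}
    return sum(values[label]
               for label, conf in detections
               if conf >= min_confidence and label in values)


# One 100-rupee and one 50-rupee note pass the threshold;
# the low-confidence detection is discarded.
print(total_denomination([("100", 0.92), ("50", 0.81), ("50", 0.30)]))  # → 150
```

The returned total would then be relayed to the user as audio feedback.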
7

G., Nirmala Priya. "Comparison of Partially Occluded Face Detection and Recognition Methods." Journal of Advanced Research in Dynamical and Control Systems 12, SP7 (July 25, 2020): 201–11. http://dx.doi.org/10.5373/jardcs/v12sp7/20202099.

8

C P, Anju, Andria Joy, Haritha Ashok, Joseph Ronald Pious, and Livya George. "Traffic Sign Detection and Recognition." International Journal of Innovative Science and Research Technology 5, no. 7 (August 10, 2020): 1143–46. http://dx.doi.org/10.38124/ijisrt20jul787.

Abstract:
As the placement of traffic sign boards does not follow any international standard, it may be difficult for non-local residents to recognize and interpret the signs. This project therefore demonstrates a system that helps overcome this inconvenience by interpreting the traffic sign as a voice note in the user's preferred language. The whole process involves detecting the traffic sign, detecting any textual data with the help of available datasets, and then processing it into audio output for the user in his or her preferred language. The proposed system not only tackles the above-mentioned problem but also, to an extent, ensures safer driving by conveying traffic signs properly and thereby reducing accidents. The techniques used to implement the system include digital image processing, natural language processing, and machine learning. The implementation comprises three major steps: detection of the traffic sign from a captured traffic scene, classification of the traffic sign, and conversion of the classified traffic sign to an audio message.
9

Katkar, Aniruddha. "EYE DISEASE RECOGNITION SYSTEM." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (April 28, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem32078.

Abstract:
This paper presents an innovative system for detecting eye diseases utilizing advanced machine learning techniques. Given the increasing prevalence of eye disorders, early detection and intervention are of utmost importance. The proposed system integrates a diverse dataset comprising medical images and patient information. Deep learning algorithms are employed to extract intricate features from the dataset. These features are then input into a predictive model, facilitating accurate identification of potential eye diseases. Rigorous testing and validation demonstrate the system's performance and its ability to provide reliable predictions. The early diagnosis enabled by this system has the potential to significantly impact patient outcomes and contribute to the advancement of ophthalmic healthcare. The Eye Disease Detection System serves as a valuable tool for the early detection and management of various eye conditions. Through the integration of advanced technologies such as machine learning and medical imaging, this system enhances the accuracy and efficiency of the diagnostic process. Index Terms: Vision disorders, Glaucoma, Macular degeneration, Eye diseases, Ophthalmology, Corneal diseases.
10

Yu, Myoungseok, Narae Kim, Yunho Jung, and Seongjoo Lee. "A Frame Detection Method for Real-Time Hand Gesture Recognition Systems Using CW-Radar." Sensors 20, no. 8 (April 18, 2020): 2321. http://dx.doi.org/10.3390/s20082321.

Abstract:
This paper describes a method for detecting the frames that can be used as hand gesture data when building a real-time hand gesture recognition system with continuous wave (CW) radar. Detecting valid frames raises gesture recognition accuracy, so it is essential in a real-time CW-radar hand gesture recognition system; previous research on such systems has not addressed valid-frame detection. We took R-wave detection on the electrocardiogram (ECG) as the conventional method; its detection probability was 85.04%, too low for use in a hand gesture recognition system. The proposed method consists of two stages to improve accuracy. We measured the performance of each hand gesture detection method in terms of detection probability and recognition probability, and by comparing these we arrived at an optimal detection method. The proposed method detects valid frames with an accuracy of 96.88%, 11.84% higher than the conventional method, while its recognition probability of 94.21% is only 3.71% lower than that of the ideal method.
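The idea of keeping only the frames that actually contain gesture motion can be illustrated with a simple per-frame energy threshold. This is a generic sketch under that assumption; the paper's two-stage method is not reproduced here:

```python
def detect_valid_frames(frames, threshold):
    """Return the indices of frames whose mean squared amplitude
    exceeds `threshold`, i.e. frames likely to contain a gesture.

    `frames` is a list of frames, each a list of radar samples.
    """
    valid = []
    for i, samples in enumerate(frames):
        energy = sum(s * s for s in samples) / len(samples)
        if energy > threshold:
            valid.append(i)
    return valid


# Only the middle frame carries significant energy.
frames = [[0.0, 0.1], [0.9, 1.0], [0.05, 0.0]]
print(detect_valid_frames(frames, threshold=0.1))  # → [1]
```

Only the frames returned here would be passed on to the gesture classifier.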

Dissertations on the topic "Detection and recognition":

1

O'Shea, Kieran. "Roadsign detection & recognition." Leeds: University of Leeds, School of Computer Studies, 2008. http://www.comp.leeds.ac.uk/fyproj/reports/0708/OShea.pdf.

2

Bashir, Sulaimon A. "Change detection for activity recognition." Thesis, Robert Gordon University, 2017. http://hdl.handle.net/10059/3104.

Abstract:
Activity recognition is concerned with identifying the physical state of a user at a particular point in time. The activity recognition task requires training a classification algorithm on processed sensor data from a representative population of users. The accuracy of the resulting model often degrades when classifying new instances, due to non-stationary sensor data and variations in user characteristics, so the classification model must be adapted to new users. However, existing approaches to model adaptation in activity recognition are blind: they continuously adapt the classification model at a regular interval, without specifically and precisely detecting any indicator of degrading model performance. This can waste the system resources dedicated to continuous adaptation. This thesis addresses the problem of detecting changes in the accuracy of an activity recognition model. The thesis develops a classifier for activity recognition that uses three statistical summaries, computable from any dataset, for similarity-based classification of new samples. A weighted ensemble combination of the classification decisions from the three statistical summaries performs better than three benchmark classification algorithms. The thesis also presents change detection approaches that can detect changes in the accuracy of the underlying recognition model without access to the ground-truth label of each activity being recognised. The first approach, 'UDetect', computes change statistics from a window of classified data and employs a statistical process control method to detect variations between the classified data and the reference data of a class. Evaluation of the approach indicates consistent detection that correlates with the error rate of the model.
The second approach is a distance-based change detection technique that relies on the same statistical summaries to compare newly classified samples and detect any drift from the original class of the activity. It uses a distance function and a threshold parameter to detect accuracy changes in the classifier as it classifies new instances, and achieves above 90% detection accuracy in evaluation. Finally, a layered framework for activity recognition is proposed that uses the techniques developed in this thesis to make model adaptation informed.
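Statistical-process-control change detection of the kind described here can be illustrated with a Shewhart-style control rule over windowed accuracy estimates. This is a generic sketch, not the thesis's exact statistics:

```python
import statistics


def detect_change(reference, window, k=3.0):
    """Flag a change when the mean of the monitoring window drifts
    more than k reference standard deviations away from the
    reference mean (a Shewhart-style control rule)."""
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    return abs(statistics.mean(window) - mu) > k * sigma


ref = [0.90, 0.92, 0.91, 0.89, 0.90]   # accuracy while the model fits the user
print(detect_change(ref, [0.70, 0.72, 0.68]))  # → True  (accuracy has drifted)
print(detect_change(ref, [0.90, 0.91, 0.89]))  # → False (still in control)
```

In an informed-adaptation setting, model retraining would be triggered only when this check fires, rather than at every fixed interval.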
3

Sandström, Marie. "Liveness Detection in Fingerprint Recognition Systems." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2397.

Abstract:

Biometrics deals with identifying individuals with the help of their biological data. Fingerprint scanning is the most common of the biometric methods available today. The security of fingerprint scanners has, however, been questioned, and previous studies have shown that fingerprint scanners can be fooled with artificial fingerprints, i.e. copies of real fingerprints. Fingerprint recognition systems are evolving, and this study discusses the situation today.

Two approaches have been used to find out how good fingerprint recognition systems are in distinguishing between live fingers and artificial clones. The first approach is a literature study, while the second consists of experiments.

A literature study of liveness detection in fingerprint recognition systems has been performed. A description of different liveness detection methods is presented and discussed. Methods requiring extra hardware use temperature, pulse, blood pressure, electric resistance, etc., and methods using already existent information in the system use skin deformation, pores, perspiration, etc.

The experiments focus on making artificial fingerprints in gelatin from a latent fingerprint. Nine different systems were tested at the CeBIT trade fair in Germany and all were deceived. Three other systems were put up against more extensive tests with three different subjects. All systems were circumvented with all subjects' artificial fingerprints, but with varying results. The results are analyzed and discussed, partly with the help of the A/R value defined in this report.

4

Khan, Muhammad. "Hand Gesture Detection & Recognition System." Thesis, Högskolan Dalarna, Datateknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:du-6496.

Abstract:
The project introduces an application using computer vision for hand gesture recognition. A camera records a live video stream, from which a snapshot is taken with the help of an interface. The system is trained at least once for each count hand gesture (one, two, three, four, and five). After that, a test gesture is given to it and the system tries to recognize it. Research was carried out on a number of algorithms that could best differentiate hand gestures, and the diagonal sum algorithm was found to give the highest accuracy rate. In the preprocessing phase, a self-developed algorithm removes the background of each training gesture. The image is then converted into a binary image and the sums of all diagonal elements of the picture are taken; this sum helps in differentiating and classifying different hand gestures. Previous systems have used data gloves or markers for input; this system has no such constraints, and the user can give hand gestures in view of the camera naturally. A completely robust hand gesture recognition system is still under heavy research and development; the implemented system serves as an extendible foundation for future work.
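One plausible reading of the diagonal sum feature is summing the pixels along the two main diagonals of the binarized gesture image. The sketch below is a hypothetical illustration of that reading, not the thesis's implementation:

```python
def diagonal_sums(image):
    """Sum the pixels along the main diagonal and the anti-diagonal
    of a square binary image (one reading of the thesis's
    diagonal sum feature for telling count gestures apart)."""
    n = len(image)
    main = sum(image[i][i] for i in range(n))
    anti = sum(image[i][n - 1 - i] for i in range(n))
    return main, anti


img = [[1, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
print(diagonal_sums(img))  # → (2, 1)
```

The pair of sums acts as a crude shape signature: different finger counts produce different foreground distributions along the diagonals.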
5

Zakir, Usman. "Automatic road sign detection and recognition." Thesis, Loughborough University, 2011. https://dspace.lboro.ac.uk/2134/9733.

Abstract:
Road Sign Detection and Recognition (RSDR) systems provide an additional level of driver assistance, leading to improved safety for passengers, road users and vehicles. As part of Advanced Driving Assistance Systems (ADAS), RSDR can benefit drivers (especially those with driving disabilities) by alerting them to the presence of road signs, reducing risk in situations of driving distraction, fatigue, poor sight and adverse weather conditions. Although a number of RSDR systems have been proposed in the literature, the design of a robust algorithm remains an open research problem. This thesis aims to resolve some of the outstanding research challenges in RSDR while considering variations in colour illumination, scale, rotation, translation, occlusion, computational complexity and functional limitations. The RSDR pipeline is divided into three parts: Colour Segmentation, Shape Classification and Content Recognition. This thesis presents each part as a separate chapter, except that Colour Segmentation introduces two distinct approaches to road sign region of interest (ROI) selection. The first approach presents a detailed investigation of computer-based colour spaces, i.e. YCbCr, YIQ, RGB, CIElab, CYMK and HSV, whereas the second develops and utilises an illumination-invariant Combined Colour Model (CCM) on gamma-corrected images containing road signs under varying illumination conditions. Shape Classification, the second part of the RSDR pipeline, consists of shape feature extraction and shape feature classification stages. Shape features of road signs are extracted by applying Contourlet Transforms at decomposition level 3 with Haar filters to generate the Laplacian Pyramid (LP) and Directional Filter Bank (DFB).
The third part of the RSDR system presented in this thesis is Content Recognition, carried out by extracting LESH (Local Energy based Shape Histogram) features from the normalized road sign contents. The extracted shape and content features are used to train Support Vector Machine (SVM) polynomial kernels, which later classify the input candidate road sign shapes and contents respectively. The thesis further highlights possible extensions and improvements to the proposed approaches to RSDR.
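The gamma correction applied before the Combined Colour Model is a standard power-law transform. A minimal sketch for intensities normalized to [0, 1] (the thesis's CCM itself is not reproduced here):

```python
def gamma_correct(pixels, gamma=2.2):
    """Apply power-law (gamma) correction to intensities in [0, 1].

    With gamma > 1 the inverse exponent brightens dark regions,
    which helps colour segmentation under poor illumination.
    """
    inv = 1.0 / gamma
    return [p ** inv for p in pixels]


print(gamma_correct([0.0, 0.25, 1.0], gamma=2.0))  # → [0.0, 0.5, 1.0]
```

After this normalization, pixel colours are more stable across lighting conditions, which is the property the illumination-invariant colour model relies on.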
6

Park, Chi-youn 1981. "Consonant landmark detection for speech recognition." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44905.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 191-197).
This thesis focuses on the detection of abrupt acoustic discontinuities in the speech signal, which constitute landmarks for consonant sounds. Because a large amount of phonetic information is concentrated near acoustic discontinuities, more focused speech analysis and recognition can be performed based on the landmarks. Three types of consonant landmarks are defined according to their characteristics -- glottal vibration, turbulence noise, and sonorant consonant -- so that the appropriate analysis method for each landmark point can be determined. A probabilistic knowledge-based algorithm is developed in three steps. First, landmark candidates are detected and their landmark types are classified based on changes in spectral amplitude. Next, a bigram model describing the physiologically feasible sequences of consonant landmarks is proposed, so that the most likely landmark sequence among the candidates can be found. Finally, it has been observed that certain landmarks are ambiguous in certain sets of phonetic and prosodic contexts, while they can be reliably detected in other contexts. A method to represent the regions where the landmarks are reliably detected versus where they are ambiguous is presented. On the TIMIT test set, 91% of all consonant landmarks and 95% of obstruent landmarks are located as landmark candidates. The bigram-based process for determining the most likely landmark sequences yields 12% deletion and substitution rates and a 15% insertion rate. An alternative representation that distinguishes reliable and ambiguous regions detects 92% of the landmarks, and 40% of the landmarks are judged to be reliable. The deletion rate within reliable regions is as low as 5%.
The resulting landmark sequences form a basis for a knowledge-based speech recognition system, since the landmarks imply broad phonetic classes of the speech signal and indicate the points of focus for estimating detailed phonetic information. In addition, because the reliable regions generally correspond to lexical stresses and word boundaries, it is expected that the landmarks can guide the focus of attention not only at the phoneme level but at the phrase level as well.
by Chiyoun Park.
Ph.D.
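The bigram step above, choosing the most probable physiologically feasible landmark sequence among the candidates, can be sketched with brute-force scoring. The landmark types and probabilities below are hypothetical; the thesis trains its own model:

```python
import itertools


def best_landmark_sequence(candidates, bigram):
    """Pick the candidate landmark sequence with the highest bigram
    probability.

    `candidates` lists the alternative type labels at each landmark
    point; `bigram` maps (prev, cur) pairs to probabilities, with 0
    (or absence) for physiologically infeasible transitions.
    """
    best, best_p = None, -1.0
    for seq in itertools.product(*candidates):
        p, prev = 1.0, "#"              # "#" marks the sequence start
        for cur in seq:
            p *= bigram.get((prev, cur), 0.0)
            prev = cur
        if p > best_p:
            best, best_p = seq, p
    return list(best)


# Hypothetical types: g = glottal vibration, t = turbulence noise.
bigram = {("#", "g"): 0.9, ("#", "t"): 0.1,
          ("g", "t"): 0.8, ("g", "g"): 0.2,
          ("t", "g"): 0.7, ("t", "t"): 0.3}
print(best_landmark_sequence([["g", "t"], ["g", "t"]], bigram))  # → ['g', 't']
```

Exhaustive enumeration is exponential in sequence length; a dynamic-programming (Viterbi) search gives the same result efficiently for realistic utterances.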
7

Ning, Guanghan. "Vehicle license plate detection and recognition." Thesis, University of Missouri - Columbia, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10157318.

Abstract:

In this work, we develop a license plate detection method using an SVM (Support Vector Machine) classifier with HOG (Histogram of Oriented Gradients) features. The system performs window searching at different scales, analyzes the HOG features with the SVM, and locates bounding boxes using a Mean Shift method. Edge information is used to accelerate the time-consuming scanning process.

Our license plate detection results show that this method is relatively insensitive to variations in illumination, license plate patterns, camera perspective and background. We tested the method on 200 real-life images captured on Chinese highways under different weather and lighting conditions and achieved a detection rate of 100%.

After detecting license plates, alignment is performed on the plate candidates. Conceptually, this alignment method searches the neighborhood of the detected bounding box and finds the optimum edge position, where the outside regions differ most from the inside regions of the license plate in RGB color space. This method accurately aligns the bounding box to the edges of the plate so that the subsequent license plate segmentation and recognition can be performed accurately and reliably.

The system performs license plate segmentation using global alignment on the binary license plate. A global model based on the layout of license plates is proposed to segment the plates; it searches for the optimum positions where all characters are segmented without being chopped into pieces. Finally, the characters are recognized by another SVM classifier with a feature size of 576, including raw features and vertical and horizontal scanning features.

Our character recognition results show that 99% of digits are successfully recognized, while letters achieve a recognition rate of 95%.

The license plate recognition system was then incorporated into an embedded system for parallel computing. Several TS7250 boards and an auxiliary board are used to simulate the process of vehicle retrieval.
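The multi-scale window search described in this abstract amounts to enumerating candidate positions for the classifier to score at each scale. A generic sketch (window size and stride are illustrative, and the HOG+SVM scoring itself is omitted):

```python
def sliding_windows(width, height, win_w, win_h, stride):
    """Enumerate the (x, y) top-left corners a window-search
    detector would score at one scale, scanning row by row."""
    return [(x, y)
            for y in range(0, height - win_h + 1, stride)
            for x in range(0, width - win_w + 1, stride)]


# An 8x4 image scanned with a 4x4 window and stride 2
# yields three candidate positions on a single row.
print(sliding_windows(8, 4, 4, 4, 2))  # → [(0, 0), (2, 0), (4, 0)]
```

Each window would be described by its HOG features and scored by the SVM; repeating the scan over a scale pyramid handles plates of different sizes, and edge density can be used to skip windows cheaply, as the abstract notes.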

8

Liu, Chang. "Human motion detection and action recognition." HKBU Institutional Repository, 2010. http://repository.hkbu.edu.hk/etd_ra/1108.

9

Anwer, Rao Muhammad. "Color for Object Detection and Action Recognition." Doctoral thesis, Universitat Autònoma de Barcelona, 2013. http://hdl.handle.net/10803/120224.

Abstract:
Recognizing object categories in real-world images is a challenging problem in computer vision. The deformable part-based framework is currently the most successful approach for object detection, and HOG features are generally used for image representation within it. For action recognition, the bag-of-words framework has been shown to provide promising results; within it, local image patches are described by the SIFT descriptor. In contrast to object detection and action recognition, combining color and shape has been shown to provide the best performance for object and scene recognition. In the first part of this thesis, we analyze the problem of person detection in still images. Standard person detection approaches rely on intensity-based features for image representation while ignoring color. Channel-based descriptors are among the most commonly used approaches in object recognition, which inspires us to evaluate incorporating color information through channel-based fusion for the task of person detection. In the second part of the thesis, we investigate the problem of object detection in still images. Due to high dimensionality, channel-based fusion increases the computational cost, and it has been found to give inferior results for object categories where one of the visual cues varies significantly. Late fusion, on the other hand, is known to provide improved results for a wide range of object categories, but it requires a pure color descriptor. We therefore propose color attributes as an explicit color representation for object detection. Color attributes are compact and computationally efficient, and combining them with traditional shape features provides excellent results for the object detection task. Finally, we focus on the problem of action detection and classification in still images.
We investigate the potential of color for action classification and detection in still images and evaluate different fusion approaches for combining color and shape information for action recognition. Additionally, an analysis is performed to validate the contribution of color to action recognition. Our results clearly demonstrate that combining color and shape information significantly improves the performance of both action classification and detection in still images.
10

Wang, Ge. "Verilogo proactive phishing detection via logo recognition /." Diss., [La Jolla] : University of California, San Diego, 2010. http://wwwlib.umi.com/cr/fullcit?p1477945.

Abstract:
Thesis (M.S.)--University of California, San Diego, 2010.
Title from first page of PDF file (viewed July 16, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (leaves 38-40).

Books on the topic "Detection and recognition":

1

Cipolla, Roberto, Sebastiano Battiato, and Giovanni Maria Farinella. Computer vision: Detection, recognition and reconstruction. Berlin: Springer, 2010.

2

Cyganek, Bogusław. Object Detection and Recognition in Digital Images. Oxford, UK: John Wiley & Sons Ltd, 2013. http://dx.doi.org/10.1002/9781118618387.

3

Jiang, Xiaoyue, Abdenour Hadid, Yanwei Pang, Eric Granger, and Xiaoyi Feng, eds. Deep Learning in Object Detection and Recognition. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-10-5152-4.

4

Miller, Gary J. Drugs and the law: Detection, recognition & investigation. Charlottesville, VA: LexisNexis, 2014.

5

Miller, Gary J. Drugs and the law: Detection, recognition & investigation. [Altamonte Springs, FL]: Gould Publications, 1992.

6

Wosnitza, Matthias Werner. High precision 1024-point FFT processor for 2D object detection. Konstanz: Hartung-Gorre, 1999.

7

Zourob, Mohammed, Souna Elwary, and Anthony Turner, eds. Principles of Bacterial Detection: Biosensors, Recognition Receptors and Microsystems. New York, NY: Springer New York, 2008. http://dx.doi.org/10.1007/978-0-387-75113-9.

8

Rajalingam, Mallikka. Text Segmentation and Recognition for Enhanced Image Spam Detection. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-53047-1.

9

Yang, Ming-Hsuan, and Narendra Ahuja. Face Detection and Gesture Recognition for Human-Computer Interaction. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4615-1423-7.

10

Chen, Datong. Text detection and recognition in images and video sequences. Lausanne: EPFL, 2003.


Book chapters on the topic "Detection and recognition":

1

Colmenarez, Antonio J., and Thomas S. Huang. "Face Detection and Recognition." In Face Recognition, 174–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/978-3-642-72201-1_9.

2

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Character Segmentation and Recognition." In Video Text Detection, 145–68. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_6.

3

Li, Stan Z., and Jianxin Wu. "Face Detection." In Handbook of Face Recognition, 277–303. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-932-1_11.

4

Yu, Shiqi, Yuantao Feng, Hanyang Peng, Yan-ran Li, and Jianguo Zhang. "Face Detection." In Handbook of Face Recognition, 103–35. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-43567-6_4.

5

Amit, Yali, Donald Geman, and Bruno Jedynak. "Efficient Focusing and Face Detection." In Face Recognition, 157–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/978-3-642-72201-1_8.

6

Shao, Li, Ronghang Zhu, and Qijun Zhao. "Glasses Detection Using Convolutional Neural Networks." In Biometric Recognition, 711–19. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46654-5_78.

7

Escalera, Sergio, Xavier Baró, Oriol Pujol, Jordi Vitrià, and Petia Radeva. "Traffic Sign Detection." In Traffic-Sign Recognition Systems, 15–52. London: Springer London, 2011. http://dx.doi.org/10.1007/978-1-4471-2245-6_3.

8

Pan, Jiaxing, and Dong Liang. "Holistic Crowd Interaction Modelling for Anomaly Detection." In Biometric Recognition, 642–49. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69923-3_69.

9

Pei, Yuhang, Liming Xu, and Bochuan Zheng. "Improved YOLOv5 for Dense Wildlife Object Detection." In Biometric Recognition, 569–78. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20233-9_58.

10

Liu, Yangfan, Yanan Guo, Kangning Du, and Lin Cao. "Enhanced Memory Adversarial Network for Anomaly Detection." In Biometric Recognition, 417–26. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8565-4_39.


Conference papers on the topic "Detection and recognition":

1

Pacaldo, Joren Mundane, Chi Wee Tan, Wah Pheng Lee, Dustin Gerard Ancog, and Haroun Al Raschid Christopher Macalisang. "Utilizing Synthetically-Generated License Plate Automatic Detection and Recognition of Motor Vehicle Plates in Philippines." In International Conference on Digital Transformation and Applications (ICDXA 2021). Tunku Abdul Rahman University College, 2021. http://dx.doi.org/10.56453/icdxa.2021.1022.

Abstract:
We investigated the potential use of synthetic data for automatic license plate detection and recognition by detecting and clustering each of the characters on the license plates. We used 36 cascading classifiers (26 letters + 10 numbers), one per character, to detect synthetically generated license plates, training each cascade classifier with a Local Binary Pattern (LBP) visual descriptor. After detecting all characters individually, we investigated clustering algorithms for grouping these characters into valid license plates, considering two candidates: hierarchical and k-means. Results showed that hierarchical clustering groups the detected characters better than k-means. Inaccuracy in the actual detection and recognition of license plates is largely attributed to false detections by some of the 36 classifiers used in the study. To improve precision in plate-number detection, we recommend a strong classifier for each character together with a good clustering algorithm. The proponents concluded that detecting and clustering each character individually was not an effective approach; however, the use of synthetic data to train the classifiers shows promising results.
Keywords: Cascading Classifiers, Synthetic Data, Local Binary Pattern, License Plate Recognition
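The character-grouping step the abstract describes can be sketched as single-linkage agglomerative clustering: two detections are linked when their centroids are closer than a threshold, and connected components become candidate plates. This is an illustrative reconstruction, not the authors' code; the box format (x, y, w, h) and the distance threshold are assumptions.

```python
# Group detected character boxes into candidate plates via single-linkage
# clustering, implemented as union-find over a centroid-distance graph.

def centroid(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def cluster_characters(boxes, max_dist=40.0):
    """Connected components of detections whose centroids lie within max_dist."""
    n = len(boxes)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    cents = [centroid(b) for b in boxes]
    for i in range(n):
        for j in range(i + 1, n):
            dx = cents[i][0] - cents[j][0]
            dy = cents[i][1] - cents[j][1]
            if (dx * dx + dy * dy) ** 0.5 < max_dist:
                union(i, j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(boxes[i])
    # Sort each group left-to-right, the order characters appear on a plate.
    return [sorted(g, key=lambda b: b[0]) for g in groups.values()]

# Synthetic example: three adjacent character boxes plus one far-away detection.
detections = [(0, 0, 20, 30), (25, 0, 20, 30), (50, 0, 20, 30), (300, 0, 20, 30)]
plates = cluster_characters(detections)
```

Single-linkage matches the abstract's finding that hierarchical clustering handled character grouping better than k-means, which would require fixing the number of plates in advance.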
2

Wu, Liyang, and Xiaofang Zhang. "An underwater polarimetric image descattering and material identification method based on unpaired multi-scale polarization fusion adversarial generative network." In Imaging Detection and Target Recognition, edited by Jiangtao Xu and Chao Zuo. SPIE, 2024. http://dx.doi.org/10.1117/12.3018076.

3

Wang, Qixiang, Yannan Yang, and Wende Dong. "Image dehazing based on Uformer modified WGAN." In Imaging Detection and Target Recognition, edited by Jiangtao Xu and Chao Zuo. SPIE, 2024. http://dx.doi.org/10.1117/12.3016206.

4

Ma, Ning, Yunan Wu, Wancheng Liu, Yining Yang, Jinjin Wang, and Xin Liu. "A fusion adaptive recognition network based on intensity and polarization imaging." In Imaging Detection and Target Recognition, edited by Jiangtao Xu and Chao Zuo. SPIE, 2024. http://dx.doi.org/10.1117/12.3025945.

5

Fan, Bozhao, Jing Wang, Yuan Ma, Bida Su, Teng Sun, Yue Peng, and Hong Chen. "Research on feature extraction method of space targets image based on Hu extension moment." In Imaging Detection and Target Recognition, edited by Jiangtao Xu and Chao Zuo. SPIE, 2024. http://dx.doi.org/10.1117/12.3013302.

6

Xu, Yuan, Feng Li, Kaimin Shi, and Peikun Li. "Underwater image enhancement based on unsupervised adaptive uncertainty distribution." In Imaging Detection and Target Recognition, edited by Jiangtao Xu and Chao Zuo. SPIE, 2024. http://dx.doi.org/10.1117/12.3014202.

7

Yu, Long, Xiangchun Shi, Jia Yu, Huiping Liu, Bin Guo, and Yao Fu. "Research on fringe projection profilometry for 3D reconstruction of target in turbid water." In Imaging Detection and Target Recognition, edited by Jiangtao Xu and Chao Zuo. SPIE, 2024. http://dx.doi.org/10.1117/12.3018075.

8

Yao, XinYu, Fengtao He, and Binghui Wang. "Deep learning-based recurrent neural network for underwater image enhancement." In Imaging Detection and Target Recognition, edited by Jiangtao Xu and Chao Zuo. SPIE, 2024. http://dx.doi.org/10.1117/12.3018273.

9

Guo, Ju Guang, Da yong Wang, Yun Xin Wang, Guang ping Wang, Wei Wei Jiang, and Zhi hui Yang. "Experimental study on anti-interference based on infrared radiation characteristics of jamming target." In Imaging Detection and Target Recognition, edited by Jiangtao Xu and Chao Zuo. SPIE, 2024. http://dx.doi.org/10.1117/12.3015538.

10

Fang, Qipeng, Yongmo Lv, Tao Tan, Zhanjun Yan, Jianjun Chen, Xiuhui Sun, Chao Hu, and Shaoyun Yin. "Diffraction efficiency control of liquid crystal polymer polarizing grating film layer through grating layer thinning." In Imaging Detection and Target Recognition, edited by Jiangtao Xu and Chao Zuo. SPIE, 2024. http://dx.doi.org/10.1117/12.3023676.


Reports of organizations on the topic "Detection and recognition":

1

Mouroulis, P. Visual target detection and recognition. Office of Scientific and Technical Information (OSTI), January 1990. http://dx.doi.org/10.2172/5087944.

2

Grenander, Ulf. Foundations of Object Detection and Recognition,. Fort Belvoir, VA: Defense Technical Information Center, August 1998. http://dx.doi.org/10.21236/ada352287.

3

Dittmar, George. Object Detection and Recognition in Natural Settings. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.926.

4

Chun, Cornell S., and Firooz A. Sadjadi. Polarimetric Imaging System for Automatic Target Detection and Recognition. Fort Belvoir, VA: Defense Technical Information Center, March 2000. http://dx.doi.org/10.21236/ada395219.

5

Devaney, A. J., R. Raghavan, H. Lev-Ari, E. Manolakos, and M. Kokar. Automatic Target Detection And Recognition: A Wavelet Based Approach. Fort Belvoir, VA: Defense Technical Information Center, January 1997. http://dx.doi.org/10.21236/ada329696.

6

Hupp, N. A. Detection of Prosodics by Using a Speech Recognition System. Fort Belvoir, VA: Defense Technical Information Center, July 1991. http://dx.doi.org/10.21236/ada242432.

7

Bragdon, Sophia, Vuong Truong, and Jay Clausen. Environmentally informed buried object recognition. Engineer Research and Development Center (U.S.), November 2022. http://dx.doi.org/10.21079/11681/45902.

Abstract:
The ability to detect and classify buried objects using thermal infrared imaging is affected by the environmental conditions at the time of imaging, which leads to an inconsistent probability of detection. For example, periods of dense overcast or recent precipitation events result in the suppression of the soil temperature difference between the buried object and soil, thus preventing detection. This work introduces an environmentally informed framework to reduce the false alarm rate in the classification of regions of interest (ROIs) in thermal IR images containing buried objects. Using a dataset that consists of thermal images containing buried objects paired with the corresponding environmental and meteorological conditions, we employ a machine learning approach to determine which environmental conditions are the most impactful on the visibility of the buried objects. We find the key environmental conditions include incoming shortwave solar radiation, soil volumetric water content, and average air temperature. For each image, ROIs are computed using a computer vision approach and these ROIs are coupled with the most important environmental conditions to form the input for the classification algorithm. The environmentally informed classification algorithm produces a decision on whether the ROI contains a buried object by simultaneously learning on the ROIs with a classification neural network and on the environmental data using a tabular neural network. On a given set of ROIs, we have shown that the environmentally informed classification approach improves the detection of buried objects within the ROIs.
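The fusion described above — a classification network on the image ROIs trained jointly with a tabular network on the environmental measurements — can be sketched as a two-branch model whose hidden representations are concatenated before a final decision layer. This is a toy illustration, not the report's model; the feature sizes, weights, and normalization are invented for the demo.

```python
# Toy two-branch classifier: an "image" branch on ROI features and a
# "tabular" branch on environmental features, fused for a buried/not-buried score.
import math

def linear(x, W, b):
    # Dense layer: y_i = sum_j W[i][j] * x[j] + b[i]
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def relu(v):
    return [max(0.0, x) for x in v]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(roi_feats, env_feats):
    # Image branch (4 -> 2): stands in for a CNN's pooled ROI embedding.
    h_img = relu(linear(roi_feats,
                        [[0.5, -0.2, 0.1, 0.3], [0.2, 0.4, -0.1, 0.2]],
                        [0.0, 0.1]))
    # Tabular branch (3 -> 2): shortwave radiation, soil moisture, air temp.
    h_env = relu(linear(env_feats,
                        [[0.3, -0.5, 0.2], [0.1, 0.2, -0.3]],
                        [0.05, 0.0]))
    # Fusion head: concatenate both embeddings and score.
    fused = h_img + h_env
    (logit,) = linear(fused, [[0.6, -0.4, 0.7, 0.2]], [-0.1])
    return sigmoid(logit)

# ROI features plus the three key environmental drivers named in the abstract,
# assumed already normalized to comparable scales.
p = predict([0.8, 0.1, 0.5, 0.3], [0.9, 0.2, 0.6])
```

In the report's setting the two branches would be trained jointly, so the environmental inputs can modulate how much weight the classifier places on thermal contrast in the ROI.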
8

Sherlock, Barry G. Wavelet Based Feature Extraction for Target Recognition and Minefield Detection. Fort Belvoir, VA: Defense Technical Information Center, May 2002. http://dx.doi.org/10.21236/ada401966.

9

Rangwala, Huzefa, and George Karypis. Building Multiclass Classifiers for Remote Homology Detection and Fold Recognition. Fort Belvoir, VA: Defense Technical Information Center, April 2006. http://dx.doi.org/10.21236/ada446086.

10

Sherlock, Barry G. Wavelet Based Feature Extraction for Target Recognition and Minefield Detection. Fort Belvoir, VA: Defense Technical Information Center, November 1999. http://dx.doi.org/10.21236/ada371103.

