A selection of scientific literature on the topic "Object recognition from optical images"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Browse lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Object recognition from optical images."

Next to each work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if the corresponding parameters are present in the metadata.

Journal articles on the topic "Object recognition from optical images"

1

Loo, Chu Kiong. "Simulated Quantum-Optical Object Recognition from High-Resolution Images." Optics and Spectroscopy 99, no. 2 (2005): 218. http://dx.doi.org/10.1134/1.2034607.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Rasheed, Nada Abdullah, and Wessam Lahmod Nados. "Object Segmentation from Background of 2D Image." JOURNAL OF UNIVERSITY OF BABYLON for Pure and Applied Sciences 26, no. 5 (March 12, 2018): 204–15. http://dx.doi.org/10.29196/jub.v26i5.913.

Full text of the source
Abstract:
One of the difficult tasks in the image processing field, and one still not fully solved, is accurately segmenting an object from the background of a 2D image. Therefore, a new method has been proposed for segmenting the object from its background, in order to enhance images and obtain characteristics of the object without the rest of the image region. This process is important for providing optimal classification in pattern recognition. The proposed method includes several tasks: after loading the six image files, it applies a segmentation algorithm that depends on the border and the color of the object. Finally, a 2D median filtering algorithm was employed to remove noisy objects of various shapes and sizes. The algorithm was tested on a variety of images, and the results show high precision. In other words, the proposed method is able to segment objects from the background with promising results.
APA, Harvard, Vancouver, ISO, and other styles
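The 2D median filtering step this abstract describes can be sketched in a few lines. The 3x3 window and the pure-Python implementation below are illustrative assumptions, not the authors' code.

```python
def median_filter_2d(image, k=3):
    """Apply a k x k median filter to a 2D list of pixel values.

    Border pixels are handled by clamping the window to the image,
    so the output has the same dimensions as the input.
    """
    h, w = len(image), len(image[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the neighborhood, clamped to the image bounds.
            window = [
                image[yy][xx]
                for yy in range(max(0, y - r), min(h, y + r + 1))
                for xx in range(max(0, x - r), min(w, x + r + 1))
            ]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out

# A lone bright pixel (salt noise) on a dark background is removed:
noisy = [[0, 0, 0],
         [0, 255, 0],
         [0, 0, 0]]
print(median_filter_2d(noisy))  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```

This is why median filtering suppresses small noisy specks of arbitrary shape after segmentation: an isolated outlier never reaches the middle of the sorted window.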
3

Morozov, A. N., A. L. Nazolin, and I. L. Fufurin. "Optical and Spectral Methods for Detection and Recognition of Unmanned Aerial Vehicles." Radio Engineering, no. 2 (May 17, 2020): 39–50. http://dx.doi.org/10.36027/rdeng.0220.0000167.

Full text of the source
Abstract:
The paper considers the problem of detecting and identifying unmanned aerial vehicles (UAVs) against animate and inanimate objects, and of identifying their load, by optical and spectral optical methods. The state-of-the-art analysis has shown that, when using radar methods to detect small UAVs, there is a dead zone at distances of 250-700 m, so it is important to use optical methods for detecting UAVs. The application possibilities and improvements of the optical scheme for detecting UAVs at long distances of about 1-2 km are considered. Location is performed by the intrinsic infrared (IR) radiation of an object using IR cameras and thermal imagers, as well as a laser rangefinder (LIDAR). The paper gives examples of successful dynamic detection and recognition of objects from video images by methods of graph theory and neural networks using the Faster R-CNN, YOLO, and SSD network models, including from a single received frame. The possibility of using available spectral optical methods to analyze the chemical composition of materials, which can be employed for remote identification of UAV coating materials as well as for detecting trace amounts of matter on the UAV surface, has been studied. The advantages and disadvantages of luminescent spectroscopy with UV illumination, Raman spectroscopy, differential absorption spectroscopy based on a tunable UV laser, spectral imaging methods (hyper-/multispectral images), and diffuse reflectance laser spectroscopy using infrared tunable quantum cascade lasers (QCL) have been shown. To assess the limiting distances for detecting and identifying UAVs, as well as for identifying the chemical composition of an object by optical and spectral optical methods, the described experimental setup (a hybrid lidar UAV identification complex) is expected to be useful. The experimental setup structure and its performance are described.
Such studies are aimed at developing the scientific basis for remote detection, identification, tracking, and determination of UAV parameters and of UAV affiliation with different groups by optical location and spectroscopy methods, as well as for automatic optical UAV recognition in various environments against a background of moving wildlife. The proposed solution is to combine optical location and spectral analysis methods with methods of statistics, graph theory, deep learning, neural networks, and automatic control, which is an interdisciplinary fundamental scientific task.
APA, Harvard, Vancouver, ISO, and other styles
4

FINLAYSON, GRAHAM D., and GUI YUN TIAN. "COLOR NORMALIZATION FOR COLOR OBJECT RECOGNITION." International Journal of Pattern Recognition and Artificial Intelligence 13, no. 08 (December 1999): 1271–85. http://dx.doi.org/10.1142/s0218001499000720.

Full text of the source
Abstract:
Color images depend on the color of the capture illuminant and on object reflectance. As such, image colors are not stable features for object recognition; stability is necessary, however, since perceived colors (the colors we see) are illuminant independent and do correlate with object identity. Before the colors in images can be compared, they must first be preprocessed to remove the effect of illumination. Two types of preprocessing have been proposed: first, run a color constancy algorithm, or second, apply an invariant normalization. In color constancy preprocessing the illuminant color is estimated and then, at a second stage, the image colors are corrected to remove the color bias due to illumination. In color invariant normalization, image RGBs are redescribed, in an illuminant-independent way, relative to the context in which they are seen (e.g. RGBs might be divided by a local RGB average). In theory the color constancy approach is superior since it works scene independently: a color invariant normalization can be calculated post-color constancy, but the converse is not true. However, in practice color invariant normalization usually supports better indexing. In this paper we ask whether color constancy algorithms will ever deliver better indexing than color normalization. The main result of this paper is to demonstrate an equivalence between color constancy and color invariant computation. The equivalence is empirically derived from color object recognition experiments: colorful objects are imaged under several different colors of light. To remove dependency due to illumination, these images are preprocessed using either a perfect color constancy algorithm or the comprehensive color image normalization. In the perfect color constancy algorithm the illuminant is measured rather than estimated.
The import of this is that the perfect color constancy algorithm can determine the actual illuminant without error and so bounds the performance of all existing and future algorithms. After color constancy or color normalization processing, the color content is used as a cue for object recognition. Counterintuitively, perfect color constancy does not support perfect recognition. In comparison, the color invariant normalization does deliver near-perfect recognition. That the color constancy approach fails implies that the effective scene illuminant differs from the measured illuminant. This explanation has merit, since it is well known that color constancy is more difficult in the presence of physical processes such as fluorescence and mutual illumination. Thus, in a second experiment, image colors are corrected based on a scene-dependent "effective illuminant". Here, color constancy preprocessing facilitates near-perfect recognition. Of course, if the effective light is scene dependent then optimal color constancy processing is also scene dependent and so is equally a color invariant normalization.
APA, Harvard, Vancouver, ISO, and other styles
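The invariant-normalization idea in this abstract (redescribing RGBs relative to their context so the illuminant color cancels) can be illustrated with a small sketch in the spirit of comprehensive color image normalization: alternately normalize each pixel by its brightness and each channel by its image-wide mean. The iteration count is an assumption, and this is not the paper's implementation.

```python
def comprehensive_normalize(pixels, iters=20):
    """pixels: list of (r, g, b) floats. Returns normalized pixels."""
    px = [list(p) for p in pixels]
    n = len(px)
    for _ in range(iters):
        # Step 1: remove intensity -- make each pixel sum to 1.
        for p in px:
            s = sum(p) or 1.0
            for c in range(3):
                p[c] /= s
        # Step 2: remove illuminant color -- make each channel average 1/3.
        for c in range(3):
            mean = sum(p[c] for p in px) / n or 1.0
            for p in px:
                p[c] *= (1.0 / 3.0) / mean
    return [tuple(p) for p in px]

# The same two-pixel "scene" under white light and under reddish light
# (red channel doubled) converges to the same invariant representation:
scene = [(10.0, 20.0, 30.0), (40.0, 10.0, 5.0)]
reddish = [(2 * r, g, b) for r, g, b in scene]
a = comprehensive_normalize(scene)
b = comprehensive_normalize(reddish)
print(all(abs(x - y) < 1e-6 for p, q in zip(a, b) for x, y in zip(p, q)))  # True
```

The fixed point of this alternation is invariant to any per-channel scaling, which is exactly why such a normalization removes a diagonal illuminant change without ever estimating the illuminant.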
5

Slyusar, Vadym, Mykhailo Protsenko, Anton Chernukha, Pavlo Kovalov, Pavlo Borodych, Serhii Shevchenko, Oleksandr Chernikov, Serhii Vazhynskyi, Oleg Bogatov, and Kirill Khrustalev. "Improvement of the model of object recognition in aero photographs using deep convolutional neural networks." Eastern-European Journal of Enterprise Technologies 5, no. 2 (113) (October 31, 2021): 6–21. http://dx.doi.org/10.15587/1729-4061.2021.243094.

Full text of the source
Abstract:
Detection and recognition of objects in images is the main problem to be solved by computer vision systems. As part of solving this problem, the model of object recognition in aerial photographs taken from unmanned aerial vehicles has been improved. A study of object recognition in aerial photographs using deep convolutional neural networks was carried out. Analysis of possible implementations showed that the AlexNet 2012 model (Canada), trained on the ImageNet image set (China), is most suitable for solving this problem, so it was used as the base model. The object recognition error for this model on the ImageNet test set amounted to 15 %. To improve the effectiveness of object recognition in aerial photographs for 10 classes of images, the final fully connected layer was modified by reducing it from 1,000 to 10 neurons, followed by additional two-stage training of the resulting model. Additional training was carried out with a set of images prepared from aerial photographs at stage 1 and with the VisDrone 2021 (China) image set at stage 2. Optimal training parameters were selected: learning rate (step) 0.0001 and 100 epochs. As a result, a new model, named AlexVisDrone, was obtained. The effectiveness of the proposed model was checked with a test set of 100 images for each of the 10 classes. Accuracy and sensitivity were chosen as the main indicators of model effectiveness. An increase in recognition accuracy from 7 % (for images from aerial photographs) to 9 % (for the VisDrone 2021 set) was obtained, indicating that the choice of neural network architecture and training parameters was correct. The use of the proposed model makes it possible to automate the process of object recognition in aerial photographs.
In the future, it is advisable to use this model at ground stations of unmanned aerial vehicle control complexes when processing aerial photographs taken from unmanned aerial vehicles, in robotic systems, in video surveillance complexes, and when designing unmanned vehicle systems.
APA, Harvard, Vancouver, ISO, and other styles
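The head-replacement step described in this abstract (swapping a 1,000-way final fully connected layer for a 10-way one on top of a pretrained feature extractor) can be sketched with NumPy. The feature width of 4096 matches AlexNet's penultimate layer; the random weights and input are toy stand-ins, not the AlexVisDrone model.

```python
import numpy as np

rng = np.random.default_rng(0)

feature_dim = 4096                                  # width of AlexNet's penultimate layer
W_old = rng.normal(size=(1000, feature_dim))        # original 1,000-way head (discarded)
W_new = rng.normal(size=(10, feature_dim)) * 0.01   # freshly initialized 10-way head

def head(features, W):
    """Final fully connected layer followed by a softmax."""
    logits = W @ features
    e = np.exp(logits - logits.max())               # numerically stable softmax
    return e / e.sum()

features = rng.normal(size=feature_dim)             # stand-in for frozen-backbone output
probs = head(features, W_new)
print(probs.shape)  # (10,)
```

In the two-stage fine-tuning the paper describes, only this new head (and optionally the later convolutional layers) would then be trained on the aerial-photograph and VisDrone 2021 sets.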
6

Kupinski, Matthew A., Eric Clarkson, John W. Hoppin, Liying Chen, and Harrison H. Barrett. "Experimental determination of object statistics from noisy images." Journal of the Optical Society of America A 20, no. 3 (March 1, 2003): 421. http://dx.doi.org/10.1364/josaa.20.000421.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Kholifah, Desiana Nur, Hendri Mahmud Nawawi, and Indra Jiwana Thira. "IMAGE BACKGROUND PROCESSING FOR COMPARING ACCURACY VALUES OF OCR PERFORMANCE." Jurnal Pilar Nusa Mandiri 16, no. 1 (March 15, 2020): 33–38. http://dx.doi.org/10.33480/pilar.v16i1.1076.

Full text of the source
Abstract:
Optical Character Recognition (OCR) is an application used to convert digital text images into text. Many documents have a background image; in the visual context, the background image increases document security by attesting to authenticity, but it also degrades OCR performance because it makes it difficult for OCR to recognize characters overwritten by the background image. Removing the background image can maximize OCR performance compared with document images that retain it. A thresholding method is used to eliminate the background image, and recall, precision, and character recognition rate are measured to determine the performance of the OCR engines studied. After eliminating the background image with thresholding, an increase in performance was observed for all three types of OCR used as objects of the research.
APA, Harvard, Vancouver, ISO, and other styles
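The thresholding step this abstract relies on can be sketched directly: pixels darker than a cutoff are kept as text, everything else becomes clean white background, suppressing a faint watermark-like background image before OCR. The fixed threshold value is an assumption; the paper's exact method may differ (e.g. an adaptive threshold).

```python
def threshold(image, t=128):
    """Binarize a grayscale image (list of rows of 0..255 values).

    Pixels below t are mapped to 0 (text), the rest to 255 (background).
    """
    return [[0 if p < t else 255 for p in row] for row in image]

# Dark text strokes (30) on a faint background pattern (200):
page = [[200, 30, 200],
        [30, 30, 30],
        [200, 30, 200]]
print(threshold(page))
# [[255, 0, 255], [0, 0, 0], [255, 0, 255]]
```

After this step the faint background pattern is gone entirely, which is what lets the recall, precision, and recognition-rate figures improve for all three OCR engines.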
8

Lysenko, A. V., M. S. Oznobikhin, E. A. Kireev, K. S. Dubrova, and S. S. Vorobyeva. "Identification of Baikal phytoplankton inferred from computer vision methods and machine learning." Limnology and Freshwater Biology, no. 3 (2021): 1143–46. http://dx.doi.org/10.31951/2658-3518-2021-a-3-1143.

Full text of the source
Abstract:
Abstract. This study discusses the problem of phytoplankton classification using computer vision methods and convolutional neural networks. We created a system for automatic object recognition consisting of two parts: analysis and primary processing of phytoplankton images, and development of a neural network based on the obtained information about the images. We developed software that can detect particular objects in images from a light microscope. We trained a convolutional neural network by transfer learning and determined the optimal parameters of this neural network and the optimal size of the dataset used. To increase accuracy for certain groups of classes, we created three neural networks with the same structure. The accuracy obtained in the classification of Baikal phytoplankton by these neural networks was up to 80%.
APA, Harvard, Vancouver, ISO, and other styles
9

Jung, Jae-Hyun, Tian Pu, and Eli Peli. "Comparing object recognition from binary and bipolar edge images for visual prostheses." Journal of Electronic Imaging 25, no. 6 (December 22, 2016): 061619. http://dx.doi.org/10.1117/1.jei.25.6.061619.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

R. Sanjuna, K., and K. Dinakaran. "A Multi-Object Feature Selection Based Text Detection and Extraction Using Skeletonized Region Optical Character Recognition in-Text Images." International Journal of Engineering & Technology 7, no. 3.6 (July 4, 2018): 386. http://dx.doi.org/10.14419/ijet.v7i3.6.16009.

Full text of the source
Abstract:
Information or content extraction from images is a crucial task for obtaining text in natural scene images. The problem arises from variation in images containing different object properties such as background filling, saturation, and color; text projected in different styles varies the essential information, which leads to misinterpretation when detecting characters. Detection of text regions therefore needs greater accuracy to identify the exact object. To address this problem, a multi-objective feature for text detection and localization is proposed, based on a skeletonized text bounding-box region with a text confidence score. This contributes intra-edge detection and segmentation along the skeleton of the reflected object. The purpose of the multi-objective region selection model (MSOR) is to recognize the exact character of matching styles using bounding-box region analysis, which identifies the object portion to accomplish the candidate extraction model. To enclose the text region, localization of text in low-resolution and hazy images is handled with edge-smoothing quick guided filter methods. Further, the regions are skeletonized to morph the segmented region of inter-segmentation and extract the text.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations on the topic "Object recognition from optical images"

1

Illing, Diane Patricia. "Orientation and recognition of both noisy and partially occluded 3-D objects from single 2-D images." Thesis, University of South Wales, 1990. https://pure.southwales.ac.uk/en/studentthesis/orientation-and-recognition-of-both-noisy-and-partially-occluded-3d-objects-from-single-2d-images(c849d6e3-24e4-4462-9afb-c608120a4019).html.

Full text of the source
Abstract:
This work is concerned with the problem of 3-D object recognition and orientation determination from single 2-D image frames in which objects may be noisy, partially occluded or both. Global descriptors of shape such as moments and Fourier descriptors rely on the whole shape being present. If part of a shape is missing then all of the descriptors will be affected. Consequently, such approaches are not suitable when objects are partially occluded, as results presented here show. Local methods of describing shape, where distortion of part of the object affects only the descriptors associated with that particular region, and nowhere else, are more likely to provide a successful solution to the problem. One such method is to locate points of maximum curvature on object boundaries. These are commonly believed to be the most perceptually significant points on digital curves. However, results presented in this thesis will show that estimators of point curvature become highly unreliable in the presence of noise. Rather than attempting to locate such high curvature points directly, an approach is presented which searches for boundary segments which exhibit significant linearity; curvature discontinuities are then assigned to the junctions between boundary segments. The resulting object descriptions are more stable in the presence of noise. Object orientation and recognition is achieved through a directed search and comparison to a database of similar 2-D model descriptions stored at various object orientations. Each comparison of sensed and model data is realised through a 2-D pose-clustering procedure, solving for the coordinate transformation which maps model features onto image features. Object features are used both to control the amount of computation and to direct the search of the database. In conditions of noise and occlusion objects can be recognised and their orientation determined to within less than 7 degrees of arc, on average.
APA, Harvard, Vancouver, ISO, and other styles
2

Izciler, Fatih. "3d Object Recognition From Range Images." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614915/index.pdf.

Full text of the source
Abstract:
Recognizing generic objects from single or multi-view range images is a popular contemporary problem in the 3D object recognition area, driven by the developing technology of scanning devices such as laser range scanners. This problem is vital to current and future vision systems performing shape-based matching and classification of objects in an arbitrary scene. Despite improvements in scanners, there are still imperfections in range scans, such as holes or unconnected parts in the images. This study aims at proposing and comparing algorithms that match a range image to complete 3D models in a target database. The study started with a baseline algorithm which uses a statistical representation of 3D shapes based on 4D geometric features, namely SURFLET-pair relations. This feature describes the geometrical relation of a surface-point pair and reflects local and global characteristics of the object. Building on this, another algorithm that interprets SURFLET-pairs as in the baseline algorithm, using histograms of the features, is considered. Moreover, two other methods are proposed by applying 2D space-filling curves to range images and 4D space-filling curves to histograms of SURFLET-pairs; wavelet transforms are used for filtering purposes in these algorithms. These methods are designed to be compact, robust, independent of a global coordinate frame, and descriptive enough to distinguish queries' categories. The baseline and proposed algorithms are evaluated on a database in which range scans of real objects with imperfections serve as queries, while generic 3D objects from various categories form the target dataset.
APA, Harvard, Vancouver, ISO, and other styles
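The SURFLET-pair relation mentioned in this abstract is a 4D feature computed from two oriented surface points: the distance between them plus three quantities describing how the two normals relate to the connecting line. The sketch below is an illustrative reconstruction using plain direction cosines rather than the full signed Darboux-frame angles, and is not the thesis code.

```python
import math

def surflet_pair(p, n, q, m):
    """4D feature for an oriented surface-point pair.

    p, q: 3D points; n, m: unit normals at those points.
    Returns (distance, n.u, m.u, n.m) where u is the unit vector
    from p to q -- a simplified surflet-pair relation.
    """
    d = [qi - pi for pi, qi in zip(p, q)]
    dist = math.sqrt(sum(c * c for c in d))
    u = [c / dist for c in d]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dist, dot(n, u), dot(m, u), dot(n, m))

# Two points on a flat patch: parallel normals, both orthogonal
# to the connecting line.
f = surflet_pair((0, 0, 0), (0, 0, 1), (1, 0, 0), (0, 0, 1))
print(f)  # (1.0, 0.0, 0.0, 1.0)
```

Histogramming this feature over many sampled point pairs yields the kind of statistical shape representation the baseline algorithm uses, with no dependence on a global coordinate frame.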
3

Hong, Tao. "Object recognition with features from complex wavelets." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610239.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Gadsby, David. "Object recognition for threat detection from 2D X-ray images." Thesis, Manchester Metropolitan University, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.493851.

Full text of the source
Abstract:
This thesis examines methods to identify threat objects inside airport hand-held passenger baggage. The work presents techniques for the enhancement and classification of objects from 2-dimensional x-ray images. It has been conducted with the collaboration of Manchester Aviation Services and uses test images from real x-ray baggage machines. The research attempts to overcome the key problem of object occlusion, which impedes the performance of x-ray baggage operators in identifying threat objects such as guns and knives in x-ray images. Object occlusions can hide key information about the appearance of an object and potentially allow a threat item to enter an aircraft.
APA, Harvard, Vancouver, ISO, and other styles
5

Villalobos, Leda. "Three dimensional primitive CAD-based object recognition from range images." Case Western Reserve University School of Graduate Studies / OhioLINK, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=case1057759966.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Малишевська, Катерина Миколаївна. "Інтелектуальна система для розпізнавання об'єктів на оптичних зображеннях з використанням каскадних нейронних мереж". Doctoral thesis, Київ, 2015. https://ela.kpi.ua/handle/123456789/14391.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Schlecht, Joseph. "Learning 3-D Models of Object Structure from Images." Diss., The University of Arizona, 2010. http://hdl.handle.net/10150/194661.

Full text of the source
Abstract:
Recognizing objects in images is an effortless task for most people. Automating this task with computers, however, presents a difficult challenge attributable to large variations in object appearance, shape, and pose. The problem is further compounded by ambiguity from projecting 3-D objects into a 2-D image. In this thesis we present an approach to resolve these issues by modeling object structure with a collection of connected 3-D geometric primitives and a separate model for the camera. From sets of images we simultaneously learn a generative, statistical model for the object representation and parameters of the imaging system. By learning 3-D structure models we are going beyond recognition towards quantifying object shape and understanding its variation. We explore our approach in the context of microscopic images of biological structure and single view images of man-made objects composed of block-like parts, such as furniture. We express detected features from both domains as statistically generated by an image likelihood conditioned on models for the object structure and imaging system. Our representation of biological structure focuses on Alternaria, a genus of fungus comprising ellipsoid and cylinder shaped substructures. In the case of man-made furniture objects, we represent structure with spatially contiguous assemblages of blocks arbitrarily constructed according to a small set of design constraints. We learn the models with Bayesian statistical inference over structure and camera parameters per image, and for man-made objects, across categories, such as chairs. We develop a reversible-jump MCMC sampling algorithm to explore topology hypotheses, and a hybrid of Metropolis-Hastings and stochastic dynamics to search within topologies. Our results demonstrate that we can infer both 3-D object and camera parameters simultaneously from images, and that doing so improves understanding of structure in images. We further show how 3-D structure models can be inferred from single view images, and that learned category parameters capture structure variation that is useful for recognition.
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Shujun. "Model-based 3D object perception from single monochromatic images of unknown environments." Thesis, University of Reading, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.315501.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Ziqing, Li S. "Towards 3D vision from range images : an optimisation framework and parallel distributed networks." Thesis, University of Surrey, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.291880.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Kanaparthi, Pradeep Kumar. "Detection and Recognition of U.S. Speed Signs from Grayscale Images for Intelligent Vehicles." University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1352934398.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Object recognition from optical images"

1

Bhandarkar, Suchendra M., ed. Three-dimensional object recognition from range images. Tokyo: Springer-Verlag, 1992.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Suk, Minsoo. Three-Dimensional Object Recognition from Range Images. Tokyo: Springer Japan, 1992.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Suk, Minsoo, and Suchendra M. Bhandarkar. Three-Dimensional Object Recognition from Range Images. Tokyo: Springer Japan, 1992. http://dx.doi.org/10.1007/978-4-431-68213-4.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Lowe, David G. Three-dimensional object recognition from single two-dimensional images. New York: Courant Institute of Mathematical Sciences, New York University, 1986.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Rubtsov, Nickolai, Mikhail Alymov, Alexander Kalinin, Alexey Vinogradov, Alexey Rodionov, and Kirill Troshin. Remote studies of combustion and explosion processes based on optoelectronic methods. au: AUS PUBLISHERS, 2022. http://dx.doi.org/10.26526/monography_62876066a124d8.04785158.

Full text of the source
Abstract:
The main objective of this book is to acquaint the reader with the main modern problems of multisensor data analysis and the possibilities of hyperspectral imaging, carried out in a wide range of wavelengths from ultraviolet to infrared, for visualization of fast combustion processes of flame propagation and flame acceleration and of the limit phenomena at flame ignition and propagation. The book can be useful to students of advanced courses and to scientists dealing with problems of optical spectroscopy, visualization, digital image recognition, and gaseous combustion.
APA, Harvard, Vancouver, ISO, and other styles
6

Geological Survey (U.S.), ed. MAKESHARE.COM: A VMS utility for creating shareable images from object module libraries. Menlo Park, Calif: U.S. Dept. of the Interior, U.S. Geological Survey, 1991.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Geological Survey (U.S.), ed. MAKESHARE.COM: A VMS utility for creating shareable images from object module libraries. Menlo Park, Calif: U.S. Dept. of the Interior, U.S. Geological Survey, 1991.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Geological Survey (U.S.), ed. MAKESHARE.COM: A VMS utility for creating shareable images from object module libraries. Menlo Park, Calif: U.S. Dept. of the Interior, U.S. Geological Survey, 1991.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Geological Survey (U.S.), ed. MAKESHARE.COM: A VMS utility for creating shareable images from object module libraries. Menlo Park, Calif: U.S. Dept. of the Interior, U.S. Geological Survey, 1991.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

MAKESHARE.COM: A VMS utility for creating shareable images from object module libraries. Menlo Park, Calif: U.S. Dept. of the Interior, U.S. Geological Survey, 1991.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Object recognition from optical images"

1

Suk, Minsoo, and Suchendra M. Bhandarkar. "Polyhedral Object Recognition." In Three-Dimensional Object Recognition from Range Images, 145–81. Tokyo: Springer Japan, 1992. http://dx.doi.org/10.1007/978-4-431-68213-4_6.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Hänsch, Ronny, and Olaf Hellwich. "Object Recognition from Polarimetric SAR Images." In Radar Remote Sensing of Urban Areas, 109–31. Dordrecht: Springer Netherlands, 2010. http://dx.doi.org/10.1007/978-90-481-3751-0_5.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Fan, Xingxing, Zhi Liu, and Linwei Ye. "Salient Object Segmentation from Stereoscopic Images." In Graph-Based Representations in Pattern Recognition, 272–81. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-18224-7_27.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Suk, Minsoo, and Suchendra M. Bhandarkar. "Recognition and Localization Techniques." In Three-Dimensional Object Recognition from Range Images, 103–42. Tokyo: Springer Japan, 1992. http://dx.doi.org/10.1007/978-4-431-68213-4_5.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Suk, Minsoo, and Suchendra M. Bhandarkar. "Recognition of Curved Objects." In Three-Dimensional Object Recognition from Range Images, 183–220. Tokyo: Springer Japan, 1992. http://dx.doi.org/10.1007/978-4-431-68213-4_7.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Suk, Minsoo, and Suchendra M. Bhandarkar. "Parallel Implementations of Recognition Techniques." In Three-Dimensional Object Recognition from Range Images, 257–77. Tokyo: Springer Japan, 1992. http://dx.doi.org/10.1007/978-4-431-68213-4_9.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Suk, Minsoo, and Suchendra M. Bhandarkar. "Introduction." In Three-Dimensional Object Recognition from Range Images, 1–13. Tokyo: Springer Japan, 1992. http://dx.doi.org/10.1007/978-4-431-68213-4_1.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Suk, Minsoo, and Suchendra M. Bhandarkar. "Range Image Sensors and Sensing Techniques." In Three-Dimensional Object Recognition from Range Images, 17–37. Tokyo: Springer Japan, 1992. http://dx.doi.org/10.1007/978-4-431-68213-4_2.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Suk, Minsoo, and Suchendra M. Bhandarkar. "Range Image Segmentation." In Three-Dimensional Object Recognition from Range Images, 39–75. Tokyo: Springer Japan, 1992. http://dx.doi.org/10.1007/978-4-431-68213-4_3.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Suk, Minsoo, and Suchendra M. Bhandarkar. "Representation." In Three-Dimensional Object Recognition from Range Images, 77–101. Tokyo: Springer Japan, 1992. http://dx.doi.org/10.1007/978-4-431-68213-4_4.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Object recognition from optical images"

1

Yu, Xian, Xiangrui Xing, Han Zheng, Xueyang Fu, Yue Huang, and Xinghao Ding. "Man-Made Object Recognition from Underwater Optical Images Using Deep Learning and Transfer Learning." In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8461549.

2

Park, Seok-Chan, Seung-Cheol Kim, and Eun-Soo Kim. "Recognition of 3-D objects from computationally reconstructed integral images using 3-D reference image." In SPIE Optical Engineering + Applications, edited by Khan M. Iftekharuddin and Abdul A. S. Awwal. SPIE, 2009. http://dx.doi.org/10.1117/12.825849.

3

Huang, Zhaohui, and Fernand S. Cohen. "Affine-invariant moments and B-splines for object recognition from image curves." In Optical Engineering and Photonics in Aerospace Sensing, edited by Kim L. Boyer and Louise Stark. SPIE, 1993. http://dx.doi.org/10.1117/12.141768.

4

Nosato, Hirokazu, Hidenori Sakanashi, Eiichi Takahashi, and Masahiro Murakawa. "Method of retrieving multi-scale objects from optical colonoscopy images based on image-recognition techniques." In 2015 IEEE Biomedical Circuits and Systems Conference (BioCAS). IEEE, 2015. http://dx.doi.org/10.1109/biocas.2015.7348442.

5

Pardo, Alejandro, Mengmeng Xu, Ali Thabet, Pablo Arbelaez, and Bernard Ghanem. "BAOD: Budget-Aware Object Detection." In LatinX in AI at Computer Vision and Pattern Recognition Conference 2021. Journal of LatinX in AI Research, 2021. http://dx.doi.org/10.52591/lxai202106254.

Abstract:
We study the problem of object detection from a novel perspective in which annotation budget constraints are taken into consideration, appropriately coined Budget Aware Object Detection (BAOD). When provided with a fixed budget, we propose a strategy for building a diverse and informative dataset that can be used to optimally train a robust detector. We investigate both optimization and learning-based methods to sample which images to annotate and what type of annotation (strongly or weakly supervised) to annotate them with. We adopt a hybrid supervised learning framework to train the object detector from both these types of annotation. We conduct a comprehensive empirical study showing that a handcrafted optimization method outperforms other selection techniques including random sampling, uncertainty sampling and active learning. By combining an optimal image/annotation selection scheme with hybrid supervised learning to solve the BAOD problem, we show that one can achieve the performance of a strongly supervised detector on PASCAL-VOC 2007 while saving 12.8% of its original annotation budget. Furthermore, when 100% of the budget is used, it surpasses this performance by 2.0 mAP percentage points.
6

Shaikh, Soharab Hossain, Saikat Roy, and Nabendu Chaki. "Recognition of object orientation from images." In 2012 International Conference on Emerging Trends in Science, Engineering and Technology (INCOSET). IEEE, 2012. http://dx.doi.org/10.1109/incoset.2012.6513915.

7

Taud, Hind, Juan Carlos Herrera-Lozada, Jesus Antonio Alvarez-Cedillo, Magdalena Marciano-Melchor, Ramon Silva-Ortigoza, and Mauricio Olguin-Carbajal. "Circular object recognition from satellite images." In IGARSS 2012 - 2012 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2012. http://dx.doi.org/10.1109/igarss.2012.6351029.

8

Mustafa, A. A. Y., L. G. Shapiro, and M. A. Ganter. "3D object recognition from color intensity images." In Proceedings of 13th International Conference on Pattern Recognition. IEEE, 1996. http://dx.doi.org/10.1109/icpr.1996.546100.

9

Hu, Xiangyun, Zuxun Zhang, and Jianqing Zhang. "Object-space-based interactive extraction of manmade object from aerial images." In Multispectral Image Processing and Pattern Recognition, edited by Jun Shen, Sharatchandra Pankanti, and Runsheng Wang. SPIE, 2001. http://dx.doi.org/10.1117/12.441661.

10

Hafi, Lotfi El, Ming Ding, Jun Takamatsu, and Tsukasa Ogasawara. "Gaze Tracking and Object Recognition from Eye Images." In 2017 First IEEE International Conference on Robotic Computing (IRC). IEEE, 2017. http://dx.doi.org/10.1109/irc.2017.44.


Organization reports on the topic "Object recognition from optical images"

1

Yan, Yujie, and Jerome F. Hajjar. Automated Damage Assessment and Structural Modeling of Bridges with Visual Sensing Technology. Northeastern University, May 2021. http://dx.doi.org/10.17760/d20410114.

Abstract:
Recent advances in visual sensing technology have gained much attention in the field of bridge inspection and management. Coupled with advanced robotic systems, state-of-the-art visual sensors can be used to obtain accurate documentation of bridges without the need for any special equipment or traffic closure. The captured visual sensor data can be post-processed to gather meaningful information for the bridge structures and hence to support bridge inspection and management. However, state-of-the-practice data postprocessing approaches require substantial manual operations, which can be time-consuming and expensive. The main objective of this study is to develop methods and algorithms to automate the post-processing of the visual sensor data towards the extraction of three main categories of information: 1) object information such as object identity, shapes, and spatial relationships - a novel heuristic-based method is proposed to automate the detection and recognition of main structural elements of steel girder bridges in both terrestrial and unmanned aerial vehicle (UAV)-based laser scanning data. Domain knowledge on the geometric and topological constraints of the structural elements is modeled and utilized as heuristics to guide the search as well as to reject erroneous detection results. 2) structural damage information, such as damage locations and quantities - to support the assessment of damage associated with small deformations, an advanced crack assessment method is proposed to enable automated detection and quantification of concrete cracks in critical structural elements based on UAV-based visual sensor data. In terms of damage associated with large deformations, based on the surface normal-based method proposed in Guldur et al. (2014), a new algorithm is developed to enhance the robustness of damage assessment for structural elements with curved surfaces. 3) three-dimensional volumetric models - the object information extracted from the laser scanning data is exploited to create a complete geometric representation for each structural element. In addition, mesh generation algorithms are developed to automatically convert the geometric representations into conformal all-hexahedron finite element meshes, which can be finally assembled to create a finite element model of the entire bridge. To validate the effectiveness of the developed methods and algorithms, several field data collections have been conducted to collect both the visual sensor data and the physical measurements from experimental specimens and in-service bridges. The data were collected using both terrestrial laser scanners combined with images, and laser scanners and cameras mounted to unmanned aerial vehicles.
2

Tao, Yang, Amos Mizrach, Victor Alchanatis, Nachshon Shamir, and Tom Porter. Automated imaging broiler chicksexing for gender-specific and efficient production. United States Department of Agriculture, December 2014. http://dx.doi.org/10.32747/2014.7594391.bard.

Abstract:
Extending the previous two years of research results (Mizrach et al., 2012; Tao, 2011, 2012), the third year's efforts in both Maryland and Israel were directed towards the engineering of the system. The activities included the development of a robust chick-handling conveyor system, optical system improvement, online dynamic motion imaging of chicks, optimal feather extraction and detection from multi-image sequences, and pattern recognition. Mechanical System Engineering: The third model of the mechanical chick-handling system with a high-speed imaging system was built as shown in Fig. 1. This system has improved chick-holding cups and motion mechanisms that enable chicks to open their wings through the view section. The mechanical system has achieved a speed of 4 chicks per second, which exceeds the design spec of 3 chicks per second. In the center of the conveyor, a high-speed camera with a UV-sensitive optical system (Fig. 2) was installed that captures multiple images per chick (45 frames, system-selectable) as the chick passes through the view area. Through intensive discussions and efforts, the PIs of Maryland and ARO created a joint hardware and software protocol that uses sequential images of the chick in its fall to capture the opening wings and extract the optimal opening positions. This approach enables reliable feather feature extraction and pattern recognition in dynamic motion. Improving Chick Wing Deployment: The mechanical system for chick conveying, and especially the section that causes chicks to deploy their wings wide open under the fast video camera and the UV light, was investigated throughout the third study year. As a natural behavior, chicks tend to deploy their wings as a means of balancing their body when a sudden change in vertical movement is applied. In the last two years, this was achieved by causing the chicks to move in free fall, under earth gravity (g), along a short vertical distance.
The chicks always tended to deploy their wings, but not always in a wide, horizontally open position. Such a position is required in order to obtain a successful image under the video camera. In addition, the cells with chicks bumped suddenly at the end of the free-fall path, which caused the chicks' legs to collapse inside the cells and the wing images to blur. To improve the movement and prevent the chicks' legs from collapsing, a slowing-down mechanism was designed and tested. This was done by installing a plastic block, printed with a predesigned variable slope (Fig. 3), at the end of the path of the falling cells (Fig. 4). The cells move down with variable velocity according to the block slope and reach zero velocity at the end of the path. The slope was designed so that the deceleration becomes 0.8g instead of the free-fall gravity (g) experienced without the block. The tests showed better deployment and wider wing opening, as well as better balance along the movement. The design of additional block slope sizes is under investigation; slopes that produce decelerations of 0.7g, 0.9g, and variable decelerations are being designed to improve the movement path and images.