Selection of scientific literature on the topic "Object detection in images"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Object detection in images".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your citation of the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read its online abstract, whenever the relevant parameters are provided in the metadata.

Journal articles on the topic "Object detection in images"

1

Shin, Su-Jin, Seyeob Kim, Youngjung Kim, and Sungho Kim. "Hierarchical Multi-Label Object Detection Framework for Remote Sensing Images." Remote Sensing 12, no. 17 (August 24, 2020): 2734. http://dx.doi.org/10.3390/rs12172734.

Abstract:
Detecting objects such as aircraft and ships is a fundamental research area in remote sensing analytics. Owing to the prosperity and development of CNNs, many methodologies have previously been proposed for object detection in remote sensing images. Despite these advances, existing detection models remain limited when applied to object detection datasets with a more complex structure, i.e., datasets with hierarchically multi-labeled objects. Especially in remote sensing images, since objects are captured from a bird's-eye view, they exhibit restricted visual features and are not always guaranteed to be labeled down to fine categories. We propose a hierarchical multi-label object detection framework applicable to hierarchically partially annotated datasets. In the framework, an object detection pipeline called Decoupled Hierarchical Classification Refinement (DHCR) fuses the results of two networks: (1) an object detection network with multiple classifiers, and (2) a hierarchical sibling classification network supporting hierarchical multi-label classification. Our framework additionally introduces a region proposal method, called the clustering-guided cropping strategy, for efficient detection over vast areas of remote sensing images. Thorough experiments validate the effectiveness of our framework on our own object detection datasets constructed from remote sensing images from the WorldView-3 and SkySat satellites. Under the proposed framework, DHCR-based detection significantly improves the performance of the respective baseline models, and we achieve state-of-the-art results on these datasets.
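The sibling-classifier fusion described above can be illustrated with a toy sketch. The two-level hierarchy, the class names, and the multiplicative score fusion are all assumptions chosen for illustration; they are not the authors' actual DHCR pipeline:

```python
# Hypothetical sketch: refine fine-grained class scores with coarse-level
# (parent) scores, in the spirit of hierarchical multi-label classification.
# The hierarchy and the score-fusion rule here are illustrative assumptions.

HIERARCHY = {
    "aircraft": ["fighter", "airliner"],
    "ship": ["cargo", "tanker"],
}

def refine_scores(coarse_scores, fine_scores):
    """Gate each fine-class score by its parent's coarse score."""
    refined = {}
    for parent, children in HIERARCHY.items():
        for child in children:
            refined[child] = coarse_scores[parent] * fine_scores.get(child, 0.0)
    return refined

coarse = {"aircraft": 0.9, "ship": 0.1}
fine = {"fighter": 0.8, "airliner": 0.2, "cargo": 0.7, "tanker": 0.3}
scores = refine_scores(coarse, fine)
best = max(scores, key=scores.get)  # "fighter": 0.9 * 0.8 = 0.72
```

A partial annotation ("aircraft" without a fine label) fits naturally here: the coarse score is still usable even when no fine-level label was provided.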
2

Jung, Sejung, Won Hee Lee, and Youkyung Han. "Change Detection of Building Objects in High-Resolution Single-Sensor and Multi-Sensor Imagery Considering the Sun and Sensor's Elevation and Azimuth Angles." Remote Sensing 13, no. 18 (September 13, 2021): 3660. http://dx.doi.org/10.3390/rs13183660.

Abstract:
Building change detection is a critical field for monitoring artificial structures using high-resolution multitemporal images. However, relief displacement depending on the azimuth and elevation angles of the sensor causes numerous false alarms and misdetections of building changes. Therefore, this study proposes an effective object-based building change detection method that considers azimuth and elevation angles of sensors in high-resolution images. To this end, segmentation images were generated using a multiresolution technique from high-resolution images after which object-based building detection was performed. For detecting building candidates, we calculated feature information that could describe building objects, such as rectangular fit, gray-level co-occurrence matrix (GLCM) homogeneity, and area. Final building detection was then performed considering the location relationship between building objects and their shadows using the Sun’s azimuth angle. Subsequently, building change detection of final building objects was performed based on three methods considering the relationship of the building object properties between the images. First, only overlaying objects between images were considered to detect changes. Second, the size difference between objects according to the sensor’s elevation angle was considered to detect the building changes. Third, the direction between objects according to the sensor’s azimuth angle was analyzed to identify the building changes. To confirm the effectiveness of the proposed object-based building change detection performance, two building density areas were selected as study sites. Site 1 was constructed using a single sensor of KOMPSAT-3 bitemporal images, whereas Site 2 consisted of multi-sensor images of KOMPSAT-3 and unmanned aerial vehicle (UAV). The results from both sites revealed that considering additional shadow information showed more accurate building detection than using feature information only. 
Furthermore, the results of the three object-based change detection methods were compared and analyzed according to the characteristics of the study area and the sensors. The proposed object-based change detection achieved higher accuracy than the existing building detection methods.
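The shadow-direction cue described above can be sketched as a simple geometric check: shadows are cast away from the Sun, so a building candidate can be validated by testing whether a dark region lies roughly opposite the Sun's azimuth. The angle conventions and the tolerance below are assumptions for illustration:

```python
import math

def expected_shadow_bearing(sun_azimuth_deg):
    """Shadow direction is opposite the Sun's azimuth (degrees, 0 = north, clockwise)."""
    return (sun_azimuth_deg + 180.0) % 360.0

def bearing(from_xy, to_xy):
    """Image-plane bearing from one point to another, 0 = north, clockwise."""
    dx = to_xy[0] - from_xy[0]
    dy = from_xy[1] - to_xy[1]  # image y grows downward
    return math.degrees(math.atan2(dx, dy)) % 360.0

def shadow_is_consistent(building_xy, shadow_xy, sun_azimuth_deg, tol_deg=30.0):
    """Accept the building-shadow pair if the shadow lies in the expected direction."""
    diff = abs(bearing(building_xy, shadow_xy) - expected_shadow_bearing(sun_azimuth_deg))
    return min(diff, 360.0 - diff) <= tol_deg

# Sun in the south-east (azimuth 135 deg): shadows point north-west (315 deg).
consistent = shadow_is_consistent((100, 100), (90, 90), 135.0)  # -> True
```

The same bearing function can serve the paper's third criterion, analyzing the direction between bitemporal objects under the sensor's azimuth instead of the Sun's.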
3

Vajda, Peter, Ivan Ivanov, Lutz Goldmann, Jong-Seok Lee, and Touradj Ebrahimi. "Robust Duplicate Detection of 2D and 3D Objects." International Journal of Multimedia Data Engineering and Management 1, no. 3 (July 2010): 19–40. http://dx.doi.org/10.4018/jmdem.2010070102.

Abstract:
In this paper, the authors analyze their graph-based approach for 2D and 3D object duplicate detection in still images. A graph model is used to represent the 3D spatial information of the object, based on features extracted from training images, to avoid explicit and complex 3D object modeling. Therefore, improved performance can be achieved in comparison to existing methods in terms of both robustness and computational complexity. Different limitations of this approach are analyzed by evaluating performance with respect to the number of training images and the calculation of optimal parameters in a number of applications. Furthermore, the effectiveness of the object duplicate detection algorithm is measured over different object classes. The authors' method is shown to be robust in detecting the same objects even when the images are taken from different viewpoints or distances.
4

Sejr, Jonas Herskind, Peter Schneider-Kamp, and Naeem Ayoub. "Surrogate Object Detection Explainer (SODEx) with YOLOv4 and LIME." Machine Learning and Knowledge Extraction 3, no. 3 (August 6, 2021): 662–71. http://dx.doi.org/10.3390/make3030033.

Abstract:
Due to their impressive performance, deep neural networks have become a prevalent choice for object detection in images. Given the complexity of the neural network models used, users of these algorithms are typically given no hint as to how the objects were found. It remains, for example, unclear whether an object is detected based on what it looks like or based on the context in which it is located. We have developed an algorithm, Surrogate Object Detection Explainer (SODEx), that can explain any object detection algorithm using any classification explainer. We evaluate SODEx qualitatively and quantitatively by detecting objects in the COCO dataset with YOLOv4 and explaining these detections with LIME. This empirical evaluation not only demonstrates the value of explainable object detection, it also provides valuable insights into how YOLOv4 detects objects.
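The surrogate idea can be sketched in miniature: treat the detector's confidence as a classification output, perturb image regions on and off, and fit a linear surrogate whose weights rank each region's importance. The toy "detector" and the quadrant superpixels below are stand-ins (assumptions), not YOLOv4 or the actual LIME library:

```python
import numpy as np

rng = np.random.default_rng(0)
image = np.zeros((8, 8))
image[1:4, 1:4] = 1.0  # the "object" sits inside the top-left quadrant

def detector_confidence(img):
    # Stand-in for a real detector: confidence ~ fraction of object pixels present.
    return img[1:4, 1:4].sum() / 9.0

# Four quadrant "superpixels" play the role of LIME's segments.
QUADS = [(slice(0, 4), slice(0, 4)), (slice(0, 4), slice(4, 8)),
         (slice(4, 8), slice(0, 4)), (slice(4, 8), slice(4, 8))]

def apply_mask(img, mask):
    """Zero out every quadrant whose mask entry is 0."""
    out = img.copy()
    for keep, (rs, cs) in zip(mask, QUADS):
        if not keep:
            out[rs, cs] = 0.0
    return out

# Random on/off perturbations, detector responses, and a linear surrogate fit.
masks = rng.integers(0, 2, size=(64, 4))
y = np.array([detector_confidence(apply_mask(image, m)) for m in masks])
X = np.column_stack([masks, np.ones(len(masks))])
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

important = int(np.argmax(weights[:4]))  # quadrant 0 carries the object
```

The surrogate weight for the quadrant containing the object dominates, which is exactly the kind of region-importance explanation SODEx extracts from a detection.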
5

Karimanzira, Divas, Helge Renkewitz, David Shea, and Jan Albiez. "Object Detection in Sonar Images." Electronics 9, no. 7 (July 21, 2020): 1180. http://dx.doi.org/10.3390/electronics9071180.

Abstract:
The scope of the project described in this paper is the development of a generalized underwater object detection solution based on Automated Machine Learning (AutoML) principles. Multiple scales, dual priorities, speed, limited data, and class imbalance make object detection a very challenging task. In underwater object detection, further complications come into play due to acoustic image problems such as non-homogeneous resolution, non-uniform intensity, speckle noise, acoustic shadowing, acoustic reverberation, and multipath problems. Therefore, we focus on finding solutions to the problems along the underwater object detection pipeline. A pipeline for realizing a robust generic object detector is described and demonstrated on a case study of detecting an underwater docking station in sonar images. The system shows an overall detection and classification performance average precision (AP) score of 0.98392 on a test set of 5000 underwater sonar frames.
6

Yan, Longbin, Min Zhao, Xiuheng Wang, Yuge Zhang, and Jie Chen. "Object Detection in Hyperspectral Images." IEEE Signal Processing Letters 28 (2021): 508–12. http://dx.doi.org/10.1109/lsp.2021.3059204.

7

Wu, Jingqian, and Shibiao Xu. "From Point to Region: Accurate and Efficient Hierarchical Small Object Detection in Low-Resolution Remote Sensing Images." Remote Sensing 13, no. 13 (July 3, 2021): 2620. http://dx.doi.org/10.3390/rs13132620.

Abstract:
Accurate object detection is important in computer vision. However, detecting small objects in low-resolution images remains a challenging and elusive problem, primarily because these objects carry less visual information and cannot easily be distinguished from similar background regions. To resolve this problem, we propose a Hierarchical Small Object Detection Network for low-resolution remote sensing images, named HSOD-Net. We develop a point-to-region detection paradigm by first performing key-point prediction to obtain position hypotheses, and only later super-resolving the image and detecting the objects around those candidate positions. By postponing object prediction until after the resolution has been increased, the obtained key-points are more stable than their traditional counterparts based on early object detection with less visual information. This hierarchical approach saves significant run-time, which makes HSOD-Net more suitable for practical applications such as search and rescue and drone navigation. In comparison with state-of-the-art models, HSOD-Net achieves remarkable precision in detecting small objects in low-resolution remote sensing images.
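A minimal sketch of the point-to-region idea: pick local maxima of a key-point heatmap as position hypotheses, then cut crops around them for a later (higher-resolution) detection stage. The threshold, crop size, and synthetic heatmap are assumptions for illustration:

```python
import numpy as np

def top_hypotheses(heatmap, threshold=0.5):
    """Return (row, col) positions whose score exceeds the threshold, best first."""
    rows, cols = np.nonzero(heatmap > threshold)
    order = np.argsort(-heatmap[rows, cols])
    return list(zip(rows[order], cols[order]))

def crop_around(image, center, size=4):
    """Cut a small window around a position hypothesis (clamped to the image)."""
    r, c = center
    h, w = image.shape
    r0, c0 = max(0, r - size // 2), max(0, c - size // 2)
    return image[r0:min(h, r0 + size), c0:min(w, c0 + size)]

heat = np.zeros((16, 16))
heat[4, 5] = 0.9    # strong hypothesis
heat[10, 12] = 0.6  # weaker hypothesis
points = top_hypotheses(heat)
crops = [crop_around(heat, p) for p in points]
```

In the paper's pipeline, each crop would be super-resolved before the final detection step; here the crops simply mark where that expensive work would be spent.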
8

Lorencs, Aivars, Ints Mednieks, and Juris Siņica-Siņavskis. "Fast object detection in digital grayscale images." Proceedings of the Latvian Academy of Sciences. Section B. Natural, Exact, and Applied Sciences 63, no. 3 (January 1, 2009): 116–24. http://dx.doi.org/10.2478/v10046-009-0026-5.

Abstract:
The problem of detecting specific objects in digital grayscale images is considered under the following conditions: relatively small image fragments can be analysed (a priori information about the size of objects is available); images contain a varying, undefined background (clutter) of larger objects; processing time should be minimised and must be independent of the image contents; and the proposed methods should allow efficient implementation in application-specific electronic circuits. The last two conditions reflect the aim of proposing approaches suitable for real-time systems where known sophisticated methods would be inapplicable. The research is motivated by potential applications in the food industry (detection of contaminants in products from their X-ray images), medicine (detection of anomalies in fragments of computed tomography images), etc. Possible objects to be detected include compact small objects, curved lines in different directions, and small regions of pixels with brightness different from the background. The paper describes the proposed image processing approaches to detecting such objects and the results obtained from processing sample food images.
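A content-independent detector for compact bright objects can be sketched with an integral image, so the work per pixel is fixed regardless of image content, which is the property that makes such schemes amenable to dedicated circuits. The window sizes and threshold below are assumed values:

```python
import numpy as np

def integral(img):
    """Integral image: ii[r, c] = sum of img[:r+1, :c+1]."""
    return img.cumsum(0).cumsum(1)

def box_sum(ii, r, c, size):
    """Sum of img[r:r+size, c:c+size] in O(1) via the integral image."""
    total = ii[r + size - 1, c + size - 1]
    if r > 0:
        total -= ii[r - 1, c + size - 1]
    if c > 0:
        total -= ii[r + size - 1, c - 1]
    if r > 0 and c > 0:
        total += ii[r - 1, c - 1]
    return total

def detect_bright_spots(img, inner=3, outer=9, thresh=50.0):
    """Flag positions where a small window is much brighter than its background ring."""
    ii = integral(img.astype(float))
    pad = (outer - inner) // 2
    hits = []
    for r in range(img.shape[0] - outer + 1):
        for c in range(img.shape[1] - outer + 1):
            inner_sum = box_sum(ii, r + pad, c + pad, inner)
            ring_sum = box_sum(ii, r, c, outer) - inner_sum
            inner_mean = inner_sum / inner**2
            ring_mean = ring_sum / (outer**2 - inner**2)
            if inner_mean - ring_mean > thresh:
                hits.append((r + outer // 2, c + outer // 2))
    return hits

img = np.full((20, 20), 10.0)
img[8:11, 8:11] = 200.0  # a compact bright contaminant
spots = detect_bright_spots(img)
```

Every pixel costs the same four integral-image lookups, so the run-time does not depend on the scene, matching the fixed-processing-time condition stated in the abstract.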
9

Shen, Jie, Zhenxin Xu, Zhe Chen, Huibin Wang, and Xiaotao Shi. "Optical Prior-Based Underwater Object Detection with Active Imaging." Complexity 2021 (April 27, 2021): 1–12. http://dx.doi.org/10.1155/2021/6656166.

Abstract:
Underwater object detection plays an important role in research and practice, as it provides condensed and informative content that represents underwater objects. However, detecting objects in underwater images is challenging because underwater environments significantly degrade image quality and distort the contrast between object and background. To address this problem, this paper proposes an optical prior-based underwater object detection approach that takes advantage of optical principles to identify optical collimation in underwater images, providing valuable guidance for extracting object features. Unlike data-driven knowledge, the prior in our method is independent of training samples. The fundamental novelty of our approach lies in the integration of an image prior and the object detection task. This novelty is fundamental to the satisfying performance of our approach in underwater environments, which is demonstrated through comparisons with state-of-the-art object detection methods.
10

Liu, Wei, Dayu Cheng, Pengcheng Yin, Mengyuan Yang, Erzhu Li, Meng Xie, and Lianpeng Zhang. "Small Manhole Cover Detection in Remote Sensing Imagery with Deep Convolutional Neural Networks." ISPRS International Journal of Geo-Information 8, no. 1 (January 19, 2019): 49. http://dx.doi.org/10.3390/ijgi8010049.

Abstract:
With the development of remote sensing technology and the advent of high-resolution images, obtaining data has become increasingly convenient. However, the acquisition of small manhole cover information still has shortcomings including low efficiency of manual surveying and high leakage rate. Recently, deep learning models, especially deep convolutional neural networks (DCNNs), have proven to be effective at object detection. However, several challenges limit the applications of DCNN in manhole cover object detection using remote sensing imagery: (1) Manhole cover objects often appear at different scales in remotely sensed images and DCNNs’ fixed receptive field cannot match the scale variability of such objects; (2) Manhole cover objects in large-scale remotely-sensed images are relatively small in size and densely packed, while DCNNs have poor localization performance when applied to such objects. To address these problems, we propose an effective method for detecting manhole cover objects in remotely-sensed images. First, we redesign the feature extractor by adopting the visual geometry group (VGG), which can increase the variety of receptive field size. Then, detection is performed using two sub-networks: a multi-scale output network (MON) for manhole cover object-like edge generation from several intermediate layers whose receptive fields match different object scales and a multi-level convolution matching network (M-CMN) for object detection based on fused feature maps, which combines several feature maps that enable small and densely packed manhole cover objects to produce a stronger response. The results show that our method is more accurate than existing methods at detecting manhole covers in remotely-sensed images.

Dissertations on the topic "Object detection in images"

1

Kok, R. "An object detection approach for cluttered images." Thesis, Stellenbosch: Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53281.

Abstract:
Thesis (MScEng)--Stellenbosch University, 2003.
We investigate object detection against cluttered backgrounds, based on the MINACE (Minimum Noise and Correlation Energy) filter. Application of the filter is followed by a suitable segmentation algorithm, and the standard techniques of global and local thresholding are compared to watershed-based segmentation. The aim of this approach is to provide a custom region-based object detection algorithm with a concise set of regions of interest. Two industrial case studies are examined: diamond detection in X-ray images, and the reading of a dynamic, ink-stamped 2D barcode on packaging clutter. We demonstrate the robustness of our approach on these two diverse applications, and develop a complete algorithmic prototype for an automatic stamped-code reader.
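The correlation-filter family that MINACE belongs to can be illustrated with a plain matched-filter sketch: correlate the image with a template in the frequency domain and take the correlation peak as the object position. The template and synthetic scene below are assumptions; a real MINACE filter additionally minimizes noise and correlation-plane energy:

```python
import numpy as np

def correlate(image, template):
    """Circular cross-correlation via FFT (template zero-padded to image size)."""
    H, W = image.shape
    padded = np.zeros((H, W))
    padded[: template.shape[0], : template.shape[1]] = template
    spec = np.fft.fft2(image) * np.conj(np.fft.fft2(padded))
    return np.real(np.fft.ifft2(spec))

scene = np.zeros((32, 32))
template = np.ones((4, 4))    # a small square standing in for a "diamond"
scene[12:16, 20:24] = 1.0     # the object, placed at row 12, column 20
plane = correlate(scene, template)
peak = np.unravel_index(np.argmax(plane), plane.shape)  # -> (12, 20)
```

The correlation plane peaks where the template aligns with the object; thresholding that plane yields the concise set of regions of interest the thesis passes to segmentation.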
2

Mohan, Anuj, 1976. "Robust object detection in images by components." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80554.

3

Grahn, Fredrik, and Kristian Nilsson. "Object Detection in Domain Specific Stereo-Analysed Satellite Images." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159917.

Abstract:
Given satellite images with accompanying pixel classifications and elevation data, we propose different solutions to object detection. The first method uses hierarchical clustering for segmentation and then employs different methods of classification: one classification method uses domain knowledge to classify objects, while the other uses Support Vector Machines. Additionally, a combination of three Support Vector Machines was used in a hierarchical structure, which outperformed the regular Support Vector Machine method on most of the evaluation metrics. The second approach is more conventional, with different types of Convolutional Neural Networks: a segmentation network was used, as well as a few detection networks and different fusions between these. The Convolutional Neural Network approach proved to be the better of the two in terms of precision and recall, but the clustering approach was not far behind. This work was done using a relatively small amount of data, which could have negatively impacted the results of the Machine Learning models.
4

Papageorgiou, Constantine P. "A Trainable System for Object Detection in Images and Video Sequences." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/5566.

Abstract:
This thesis presents a general, trainable system for object detection in static images and video sequences. The core system finds a certain class of objects in static images of completely unconstrained, cluttered scenes without using motion, tracking, or handcrafted models and without making any assumptions on the scene structure or the number of objects in the scene. The system uses a set of training data of positive and negative example images as input, transforms the pixel images to a Haar wavelet representation, and uses a support vector machine classifier to learn the difference between in-class and out-of-class patterns. To detect objects in out-of-sample images, we do a brute force search over all the subwindows in the image. This system is applied to face, people, and car detection with excellent results. For our extensions to video sequences, we augment the core static detection system in several ways -- 1) extending the representation to five frames, 2) implementing an approximation to a Kalman filter, and 3) modeling detections in an image as a density and propagating this density through time according to measured features. In addition, we present a real-time version of the system that is currently running in a DaimlerChrysler experimental vehicle. As part of this thesis, we also present a system that, instead of detecting full patterns, uses a component-based approach. We find it to be more robust to occlusions, rotations in depth, and severe lighting conditions for people detection than the full body version. We also experiment with various other representations including pixels and principal components and show results that quantify how the number of features, color, and gray-level affect performance.
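The brute-force subwindow search described above can be sketched as follows: slide a fixed-size window over the image, compute a simple Haar-like (left-minus-right) feature, and score it with a linear decision rule. The single hand-set feature weight and bias are assumptions standing in for a trained SVM over a full Haar wavelet representation:

```python
import numpy as np

def haar_left_right(window):
    """Two-rectangle Haar-like feature: left half minus right half."""
    h, w = window.shape
    return window[:, : w // 2].sum() - window[:, w // 2 :].sum()

def detect(image, win=6, weight=1.0, bias=-5.0):
    """Score every subwindow; return the best positive detection, if any."""
    hits = []
    for r in range(image.shape[0] - win + 1):
        for c in range(image.shape[1] - win + 1):
            score = weight * haar_left_right(image[r : r + win, c : c + win]) + bias
            if score > 0:
                hits.append((r, c, score))
    return max(hits, key=lambda t: t[2]) if hits else None

img = np.zeros((16, 16))
img[5:11, 4:7] = 1.0  # a bright left-half pattern at rows 5-10, cols 4-6
best = detect(img)    # best-scoring window starts at (5, 4)
```

A real system would evaluate many such features per window and search over several scales; the exhaustive row-column loop is the "brute force search over all the subwindows" the abstract refers to.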
5

Gonzalez-Garcia, Abel. "Image context for object detection, object context for part detection." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28842.

Abstract:
Objects and parts are crucial elements for achieving automatic image understanding. The goal of the object detection task is to recognize and localize all the objects in an image. Similarly, semantic part detection attempts to recognize and localize the object parts. This thesis proposes four contributions. The first two make object detection more efficient by using active search strategies guided by image context. The last two involve parts. One of them explores the emergence of parts in neural networks trained for object detection, whereas the other improves on part detection by adding object context. First, we present an active search strategy for efficient object class detection. Modern object detectors evaluate a large set of windows using a window classifier. Instead, our search sequentially chooses what window to evaluate next based on all the information gathered before. This results in a significant reduction on the number of necessary window evaluations to detect the objects in the image. We guide our search strategy using image context and the score of the classifier. In our second contribution, we extend this active search to jointly detect pairs of object classes that appear close in the image, exploiting the valuable information that one class can provide about the location of the other. This leads to an even further reduction on the number of necessary evaluations for the smaller, more challenging classes. In the third contribution of this thesis, we study whether semantic parts emerge in Convolutional Neural Networks trained for different visual recognition tasks, especially object detection. We perform two quantitative analyses that provide a deeper understanding of their internal representation by investigating the responses of the network filters. Moreover, we explore several connections between discriminative power and semantics, which provides further insights on the role of semantic parts in the network. 
Finally, the last contribution is a part detection approach that exploits object context. We complement part appearance with the object appearance, its class, and the expected relative location of the parts inside it. We significantly outperform approaches that use part appearance alone in this challenging task.
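The active-search idea (evaluate the most promising windows first, guided by context, instead of scoring all of them) can be sketched with a priority queue. The prior map and the trivial "classifier" below are stand-ins chosen for illustration:

```python
import heapq
import numpy as np

def classifier(image, win, r, c):
    """Stand-in for the expensive window classifier."""
    return image[r : r + win, c : c + win].mean()

def active_search(image, prior, win=4, stop=0.9):
    """Best-first window evaluation: pop windows by prior score, stop when confident."""
    heap = [(-prior[r, c], r, c)
            for r in range(image.shape[0] - win + 1)
            for c in range(image.shape[1] - win + 1)]
    heapq.heapify(heap)
    evaluations = 0
    while heap:
        _, r, c = heapq.heappop(heap)
        evaluations += 1
        if classifier(image, win, r, c) >= stop:
            return (r, c), evaluations
    return None, evaluations

img = np.zeros((12, 12))
img[2:6, 2:6] = 1.0  # the object
prior = np.zeros((12, 12))
prior[2, 2] = 1.0    # image context points near the object
hit, n_evals = active_search(img, prior)  # 1 evaluation instead of 81
```

In the thesis the priority is updated from all previously observed scores rather than fixed in advance, but the saving has the same source: confident detections are reached after a fraction of the exhaustive window evaluations.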
6

Gadsby, David. "Object recognition for threat detection from 2D X-ray images." Thesis, Manchester Metropolitan University, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.493851.

Abstract:
This thesis examines methods to identify threat objects inside airport hand-held passenger baggage. The work presents techniques for the enhancement and classification of objects from 2-dimensional X-ray images. It was conducted in collaboration with Manchester Aviation Services and uses test images from real X-ray baggage machines. The research attempts to overcome the key problem of object occlusion, which impedes the performance of X-ray baggage operators identifying threat objects such as guns and knives in X-ray images. Object occlusions can hide key information about the appearance of an object and potentially lead to a threat item entering an aircraft.
7

Vi, Margareta. "Object Detection Using Convolutional Neural Network Trained on Synthetic Images." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153224.

Abstract:
Training data is the bottleneck for training Convolutional Neural Networks: a larger dataset gives better accuracy but also needs longer training time. It is shown that fine-tuning neural networks on synthetic rendered images increases the mean average precision. This method was applied to two different datasets with five distinctive objects in each. The first dataset consisted of random objects with different geometric shapes; the second contained objects used to assemble IKEA furniture. The best-performing neural network, trained on 5400 images, achieved a mean average precision of 0.81 on a test set sampled from a video sequence. The impact of dataset size, batch size, number of training epochs, and different network architectures was analyzed. Using synthetic images to train CNNs is a promising path for object detection where access to large amounts of annotated image data is hard to come by.
8

Rickert, Thomas D. (Thomas Dale), 1975. "Texture-based statistical models for object detection in natural images." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80570.

Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
Includes bibliographical references (p. 63-65).
by Thomas D. Rickert.
S.B. and M.Eng.
9

Jangblad, Markus. "Object Detection in Infrared Images using Deep Convolutional Neural Networks." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-355221.

Abstract:
This master's thesis applies deep convolutional neural networks (DCNNs) to object detection (OD) in infrared (IR) images. The goal is to use both long-wave infrared (LWIR) and short-wave infrared (SWIR) images taken from an airplane to train a DCNN to detect runways, Precision Approach Path Indicator (PAPI) lights, and approach lights. The motivation for detecting these objects in IR images is that IR light transmits better than visible light under certain weather conditions, for example fog, so such a system could help the pilot detect the runway in bad weather. The RetinaNet model architecture was used and modified in different ways to find the best-performing model. The models contain parameters that are found during the training process, but some parameters, called hyperparameters, need to be determined in advance. A way to automatically find good values for these hyperparameters was also tested: in hyperparameter optimization, the Bayesian optimization method proved to create a model with performance equal to the best achieved by the author using manual hyperparameter tuning. The OD system was implemented using Keras with a TensorFlow backend and achieved high performance (mAP = 0.9245) on the test data. The system manages to detect the wanted objects in the images but is expected to perform worse in a general situation, since the training data and test data are very similar. To further develop this system and improve performance under general conditions, more data is needed from other airfields and under different weather conditions.
10

Melcherson, Tim. "Image Augmentation to Create Lower Quality Images for Training a YOLOv4 Object Detection Model." Thesis, Uppsala universitet, Signaler och system, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-429146.

Abstract:
Research in the Arctic is of ever-growing importance, and modern technology is used in new ways to map and understand this very complex region and how it is affected by climate change. Here, animals and vegetation are tightly coupled with their environment in a fragile ecosystem, and when the environment undergoes rapid changes, these ecosystems risk severe damage. Understanding what kinds of data have the potential to be used in artificial intelligence can be important, as many research stations have data archives from decades of work in the Arctic. In this thesis, a YOLOv4 object detection model has been trained on two classes of images to investigate the performance impact of disturbances in the training data set. An expanded data set was created by augmenting the initial data to contain various disturbances. A model was successfully trained on the augmented data set, and a correlation between worse performance and the presence of noise was detected, but changes in saturation and altered colour levels seemed to have less impact than expected. Reducing noise in gathered data is seemingly of greater importance than enhancing images with lacking colour levels. Further investigations with a larger and more thoroughly processed data set are required to gain a clearer picture of the impact of the various disturbances.
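The kinds of augmentations studied above can be sketched as follows: add Gaussian noise and pull pixels toward gray to reduce saturation, synthesizing "lower quality" training images. The parameter values are illustrative, and a real pipeline would also re-encode and vary the parameters per image:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_noise(img, sigma=10.0):
    """Additive Gaussian noise, clipped back to the valid 8-bit range."""
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def scale_saturation(img, factor=0.5):
    """Crude saturation change: pull each pixel toward its gray value."""
    gray = img.mean(axis=2, keepdims=True)
    out = gray + factor * (img.astype(float) - gray)
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((8, 8, 3), 128, dtype=np.uint8)
img[..., 0] = 200  # a reddish patch
desat = scale_saturation(img, 0.5)
noisy = add_noise(img)
```

Applying such transforms to copies of an annotated data set expands it without new labeling work, which is exactly how the thesis builds its augmented training set.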

Books on the topic "Object detection in images"

1

Cyganek, Bogusław. Object Detection and Recognition in Digital Images. Oxford, UK: John Wiley & Sons Ltd, 2013. http://dx.doi.org/10.1002/9781118618387.

2

Lee, Chin-Hwa. Similarity counting architecture for object detection. Monterey, California: Naval Postgraduate School, 1986.

3

Geometric constraints for object detection and delineation. Boston: Kluwer Academic Publishers, 2000.

4

Wosnitza, Matthias Werner. High precision 1024-point FFT processor for 2D object detection. Konstanz: Hartung-Gorre, 1999.

5

Shaikh, Soharab Hossain, Khalid Saeed, and Nabendu Chaki. Moving Object Detection Using Background Subtraction. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07386-6.
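The core technique named in this title can be sketched as a running-average background model: maintain an estimate of the static background and flag pixels that deviate from it by more than a threshold. The learning rate and threshold below are assumed values, not taken from the book:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Exponential running average of the background."""
    return (1 - alpha) * background + alpha * frame

def foreground_mask(background, frame, thresh=30.0):
    """Pixels that differ strongly from the background model are foreground."""
    return np.abs(frame.astype(float) - background) > thresh

background = np.full((10, 10), 50.0)
frame = np.full((10, 10), 50.0)
frame[3:6, 3:6] = 200.0  # a moving object enters the scene
mask = foreground_mask(background, frame)
background = update_background(background, frame)
```

The slow background update lets gradual illumination changes be absorbed while genuinely moving objects keep triggering the mask.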

6

Goulermas, John. Hough transform techniques for circular object detection. Manchester: UMIST, 1996.

7

Jiang, Xiaoyue, Abdenour Hadid, Yanwei Pang, Eric Granger and Xiaoyi Feng, eds. Deep Learning in Object Detection and Recognition. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-10-5152-4.

8

Shufelt, Jefferey. Geometric Constraints for Object Detection and Delineation. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4615-5273-4.

9

Ntalias, A. Automated flaw detection in textile images. Manchester: UMIST, 1995.

10

Suk, Minsoo. Three-Dimensional Object Recognition from Range Images. Tokyo: Springer Japan, 1992.


Book chapters on the topic "Object detection in images"

1

Topkar, V., B. Kjell and A. Sood. "Object detection in noisy images". In Active Perception and Robot Vision, 651–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-77225-2_34.

2

Yavari, Abulfazl, and H. R. Pourreza. "Object Detection in Foveated Images". In Technological Developments in Networking, Education and Automation, 281–85. Dordrecht: Springer Netherlands, 2010. http://dx.doi.org/10.1007/978-90-481-9151-2_49.

3

Ziran, Zahra, and Simone Marinai. "Object Detection in Floor Plan Images". In Artificial Neural Networks in Pattern Recognition, 383–94. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99978-4_30.

4

Kumar, Nitin, Maheep Singh, M. C. Govil, E. S. Pilli and Ajay Jaiswal. "Salient Object Detection in Noisy Images". In Advances in Artificial Intelligence, 109–14. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-34111-8_15.

5

Schneiderman, Henry. "Learning Statistical Structure for Object Detection". In Computer Analysis of Images and Patterns, 434–41. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45179-2_54.

6

Kelm, André Peter, Vijesh Soorya Rao and Udo Zölzer. "Object Contour and Edge Detection with RefineContourNet". In Computer Analysis of Images and Patterns, 246–58. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29888-3_20.

7

Sharma, Raghav, Rohit Pandey and Aditya Nigam. "Real Time Object Detection on Aerial Imagery". In Computer Analysis of Images and Patterns, 481–91. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29888-3_39.

8

Lecron, Fabian, Mohammed Benjelloun and Saïd Mahmoudi. "Descriptive Image Feature for Object Detection in Medical Images". In Lecture Notes in Computer Science, 331–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31298-4_39.

9

Cai, Qiang, Liwei Wei, Haisheng Li and Jian Cao. "Salient Object Detection Based on RGBD Images". In Proceedings of 2016 Chinese Intelligent Systems Conference, 437–44. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-2335-4_40.

10

Kollmitzer, Christian. "Object Detection and Measurement Using Stereo Images". In Communications in Computer and Information Science, 159–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-30721-8_16.


Conference papers on the topic "Object detection in images"

1

Zhang, Pingping, Wei Liu, Huchuan Lu and Chunhua Shen. "Salient Object Detection by Lossless Feature Reflection". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/160.

Abstract:
Salient object detection, which aims to identify and locate the most salient pixels or regions in images, has been attracting more and more interest due to its various real-world applications. However, this vision task is quite challenging, especially under complex image scenes. Inspired by the intrinsic reflection of natural images, in this paper we propose a novel feature learning framework for large-scale salient object detection. Specifically, we design a symmetrical fully convolutional network (SFCN) to learn complementary saliency features under the guidance of lossless feature reflection. The location, contextual, and semantic information of salient objects is jointly utilized to supervise the proposed network for more accurate saliency predictions. In addition, to overcome the blurry boundary problem, we propose a new structural loss function to learn clear object boundaries and spatially consistent saliency. The coarse prediction results are effectively refined with this structural information for performance improvements. Extensive experiments on seven saliency detection datasets demonstrate that our approach achieves consistently superior performance and outperforms the very recent state-of-the-art methods.
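The idea of a structural loss (penalizing blurry boundaries on top of pixelwise error) can be illustrated with a toy stand-in. The gradient-mismatch term below is a hypothetical simplification for illustration, not the paper's actual loss:

```python
import math

def structural_saliency_loss(pred, gt, w_edge=1.0):
    """Toy structure-aware saliency loss: pixelwise binary cross-entropy
    plus a penalty on mismatched horizontal gradients, a crude proxy for
    blurry object boundaries. `pred` holds probabilities in (0, 1);
    `gt` holds 0/1 labels; both are lists of rows."""
    eps = 1e-7
    bce = edge = n = 0.0
    for pr, gr in zip(pred, gt):
        for p, g in zip(pr, gr):
            p = min(1 - eps, max(eps, p))
            bce += -(g * math.log(p) + (1 - g) * math.log(1 - p))
            n += 1
        # horizontal-gradient mismatch along the row
        for (p0, p1), (g0, g1) in zip(zip(pr, pr[1:]), zip(gr, gr[1:])):
            edge += abs((p1 - p0) - (g1 - g0))
    return bce / n + w_edge * edge / n
```

A prediction whose boundaries line up with the ground truth incurs a small edge term; a smeared or inverted boundary is penalized even where the average pixelwise error is similar.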
2

Felix, Heitor, Francisco Simões, Kelvin Cunha and Veronica Teichrieb. "Image Processing Techniques to Improve Deep 6DoF Detection in RGB Images". In XXI Symposium on Virtual and Augmented Reality. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/svr_estendido.2019.8457.

Abstract:
Six degrees of freedom (6DoF) object detection has great relevance in computer vision due to its use in applications in several areas, such as augmented reality and robotics. Even with the improved results provided by deep learning techniques, detection of textured and non-textured objects is still a challenge. The objective of this work was to seek improvements in the six-degrees-of-freedom detection of non-textured objects using a Convolutional Neural Network (CNN) approach, through preprocessing of the images used for training the network. A state-of-the-art survey was carried out of techniques that use CNNs to detect objects in six degrees of freedom. We also searched for filters that could enhance detection. Finally, a CNN-based detection technique was selected and adapted to use single-channel (grayscale) images as input, instead of the three-channel (RGB) images of the original proposition, aiming to increase its robustness while reducing the complexity of the input images. The technique was also tested with two different preprocessing filters applied to enhance the objects' contours in the single-channel images, one being the "pencil effect" and the other based on local binary patterns (LBP). With this study, it was possible to evaluate the impact of both filters on the CNN's detection performance. The proposed technique with single-channel images and the filters still could not surpass the results of the technique with three-channel (RGB) images, although it indicated paths for improvement. The pencil filter also proved to be more robust than the LBP filter, as expected.
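Of the two filters compared, local binary patterns have a compact definition. A minimal pure-Python sketch of the basic 8-neighbour LBP (illustrative only, not the paper's implementation):

```python
def lbp_code(image, y, x):
    """8-neighbour local binary pattern code for pixel (y, x) of a
    grayscale image given as a list of rows; neighbours are read
    clockwise starting from the top-left."""
    c = image[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if image[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

def lbp_image(image):
    """LBP map for interior pixels (borders are skipped)."""
    h, w = len(image), len(image[0])
    return [[lbp_code(image, y, x) for x in range(1, w - 1)]
            for y in range(1, h - 1)]
```

Each pixel's 8-bit code records which neighbours are at least as bright as the centre, which emphasizes contours and texture while discarding absolute intensity.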
3

Ayush, Kumar, Burak Uzkent, Marshall Burke, David Lobell and Stefano Ermon. "Generating Interpretable Poverty Maps using Object Detection in Satellite Images". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/608.

Abstract:
Accurate local-level poverty measurement is an essential task for governments and humanitarian organizations to track the progress towards improving livelihoods and distribute scarce resources. Recent computer vision advances in using satellite imagery to predict poverty have shown increasing accuracy, but they do not generate features that are interpretable to policymakers, inhibiting adoption by practitioners. Here we demonstrate an interpretable computational framework to accurately predict poverty at a local level by applying object detectors to high resolution (30cm) satellite images. Using the weighted counts of objects as features, we achieve 0.539 Pearson's r^2 in predicting village-level poverty in Uganda, a 31% improvement over existing (and less interpretable) benchmarks. Feature importance and ablation analysis reveal intuitive relationships between object counts and poverty predictions. Our results suggest that interpretability does not have to come at the cost of performance, at least in this important domain.
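The core feature construction, turning detector outputs into interpretable per-class features for a poverty regression, can be sketched as follows. The class names and the one-feature least-squares fit are illustrative assumptions, not the authors' pipeline:

```python
def confidence_weighted_counts(detections, classes):
    """Sum detection confidences per class to form a feature vector.
    `detections` is a list of (class_name, confidence) pairs."""
    totals = {c: 0.0 for c in classes}
    for cls, conf in detections:
        if cls in totals:
            totals[cls] += conf
    return [totals[c] for c in classes]

def fit_simple_ols(xs, ys):
    """Least-squares slope and intercept for y ~ a*x + b
    (assumes xs are not all identical)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx
```

The appeal of such features is that a policymaker can inspect the fitted weights: a coefficient on, say, a "truck" count is far easier to interpret than an opaque CNN embedding.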
4

Saha, Ranajit, Ajoy Mondal and C. V. Jawahar. "Graphical Object Detection in Document Images". In 2019 International Conference on Document Analysis and Recognition (ICDAR). IEEE, 2019. http://dx.doi.org/10.1109/icdar.2019.00018.

5

Li, Tingtian, and Daniel P. K. Lun. "Salient object detection using array images". In 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2017. http://dx.doi.org/10.1109/apsipa.2017.8282039.

6

Yang, Fan, Heng Fan, Peng Chu, Erik Blasch and Haibin Ling. "Clustered Object Detection in Aerial Images". In 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019. http://dx.doi.org/10.1109/iccv.2019.00840.

7

Medvedeva, Elena. "Moving Object Detection in Noisy Images". In 2019 8th Mediterranean Conference on Embedded Computing (MECO). IEEE, 2019. http://dx.doi.org/10.1109/meco.2019.8760066.

8

Kwan, Chiman, Bryan Chou, David Gribben, Leif Hagen, Jerry Yang, Bulent Ayhan and Krzysztof Koperski. "Ground object detection in worldview images". In Signal Processing, Sensor/Information Fusion, and Target Recognition XXVIII, edited by Lynne L. Grewe, Erik P. Blasch and Ivan Kadar. SPIE, 2019. http://dx.doi.org/10.1117/12.2518529.

9

Orellana, Sonny, Lei Zhao, Helen Boussalis, Charles Liu, Khosrow Rad and Jane Dong. "Automated object detection for astronomical images". In Optics East 2005, edited by Anthony Vetro, Chang Wen Chen, C. C. J. Kuo, Tong Zhang, Qi Tian and John R. Smith. SPIE, 2005. http://dx.doi.org/10.1117/12.631033.

10

Wang, Jinwang, Wen Yang, Haowen Guo, Ruixiang Zhang and Gui-Song Xia. "Tiny Object Detection in Aerial Images". In 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. http://dx.doi.org/10.1109/icpr48806.2021.9413340.


Reports by organizations on the topic "Object detection in images"

1

Repperger, Daniel W., Alan R. Pinkus, Julie A. Skipper and Christina D. Schrider. Stochastic Resonance Investigation of Object Detection in Images. Fort Belvoir, VA: Defense Technical Information Center, December 2006. http://dx.doi.org/10.21236/ada472478.

2

Heisele, Bernd, Thomas Serre, Sayan Mukherjee and Tomaso Poggio. Feature Reduction and Hierarchy of Classifiers for Fast Object Detection in Video Images. Fort Belvoir, VA: Defense Technical Information Center, January 2001. http://dx.doi.org/10.21236/ada458821.

3

Gastelum, Zoe, and Timothy Shead. How Low Can You Go? Using Synthetic 3D Imagery to Drastically Reduce Real-World Training Data for Object Detection. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1670874.

4

Clausen, Jay, Susan Frankenstein, Jason Dorvee, Austin Workman, Blaine Morriss, Keran Claffey, Terrance Sobecki et al. Spatial and temporal variance of soil and meteorological properties affecting sensor performance—Phase 2. Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/41780.

Abstract:
An approach to increasing sensor performance and detection reliability for buried objects is to better understand which physical processes are dominant under certain environmental conditions. The present effort (Phase 2) builds on our previously published prior effort (Phase 1), which examined methods of determining the probability of detection and false alarm rates using thermal infrared for buried-object detection. The study utilized a 3.05 × 3.05 m test plot in Hanover, New Hampshire. Unlike Phase 1, the current effort involved removing the soil from the test plot area, homogenizing the material, then reapplying it into eight discrete layers along with buried sensors and objects representing targets of interest. Each layer was compacted to a uniform density consistent with the background undisturbed density. Homogenization greatly reduced the microscale soil temperature variability, simplifying data analysis. The Phase 2 study spanned May–November 2018. Simultaneous measurements of soil temperature and moisture (as well as air temperature and humidity, cloud cover, and incoming solar radiation) were obtained daily and recorded at 15-minute intervals and coupled with thermal infrared and electro-optical image collection at 5-minute intervals.
5

Workman, Austin, and Jay Clausen. Meteorological property and temporal variable effect on spatial semivariance of infrared thermography of soil surfaces for detection of foreign objects. Engineer Research and Development Center (U.S.), June 2021. http://dx.doi.org/10.21079/11681/41024.

Abstract:
The environmental phenomenological properties responsible for the thermal variability evident in the use of thermal infrared (IR) sensor systems are not well understood. The research objective of this work is to understand the environmental and climatological properties contributing to the temporal and spatial thermal variance of soils. We recorded thermal images of the surface temperature of soil, as well as several meteorological properties such as weather condition and solar irradiance, for loamy soil located at the Cold Regions Research and Engineering Lab (CRREL) facility. We assessed sensor performance by analyzing how the recorded meteorological properties affected the spatial structure, observing statistical differences in spatial autocorrelation and dependence parameter estimates.
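The spatial-dependence statistic underlying this analysis, the empirical semivariance, has a short definition. A sketch for a 1-D transect of surface-temperature readings at integer lags (an illustrative simplification of the 2-D semivariogram such studies use):

```python
def empirical_semivariance(values, lag):
    """Empirical semivariance gamma(h) of a 1-D transect at integer
    lag h: gamma(h) = 1 / (2 * N(h)) * sum_i (z_i - z_{i+h})^2,
    where N(h) is the number of pairs separated by h."""
    pairs = [(values[i], values[i + lag])
             for i in range(len(values) - lag)]
    return sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))
```

Plotting gamma(h) against the lag h yields the semivariogram; changes in its sill and range between weather conditions are one way to quantify how meteorology alters the spatial structure that a thermal IR sensor sees.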
6

Shah, Jayant. Object Oriented Segmentation of Images. Fort Belvoir, VA: Defense Technical Information Center, December 1994. http://dx.doi.org/10.21236/ada290792.

7

Yan, Yujie, and Jerome F. Hajjar. Automated Damage Assessment and Structural Modeling of Bridges with Visual Sensing Technology. Northeastern University, May 2021. http://dx.doi.org/10.17760/d20410114.

Abstract:
Recent advances in visual sensing technology have gained much attention in the field of bridge inspection and management. Coupled with advanced robotic systems, state-of-the-art visual sensors can be used to obtain accurate documentation of bridges without the need for any special equipment or traffic closure. The captured visual sensor data can be post-processed to gather meaningful information for the bridge structures and hence to support bridge inspection and management. However, state-of-the-practice data post-processing approaches require substantial manual operations, which can be time-consuming and expensive. The main objective of this study is to develop methods and algorithms to automate the post-processing of the visual sensor data towards the extraction of three main categories of information:
1) Object information, such as object identity, shapes, and spatial relationships: a novel heuristic-based method is proposed to automate the detection and recognition of main structural elements of steel girder bridges in both terrestrial and unmanned aerial vehicle (UAV)-based laser scanning data. Domain knowledge on the geometric and topological constraints of the structural elements is modeled and utilized as heuristics to guide the search as well as to reject erroneous detection results.
2) Structural damage information, such as damage locations and quantities: to support the assessment of damage associated with small deformations, an advanced crack assessment method is proposed to enable automated detection and quantification of concrete cracks in critical structural elements based on UAV-based visual sensor data. In terms of damage associated with large deformations, based on the surface-normal-based method proposed in Guldur et al. (2014), a new algorithm is developed to enhance the robustness of damage assessment for structural elements with curved surfaces.
3) Three-dimensional volumetric models: the object information extracted from the laser scanning data is exploited to create a complete geometric representation for each structural element. In addition, mesh generation algorithms are developed to automatically convert the geometric representations into conformal all-hexahedron finite element meshes, which can be finally assembled to create a finite element model of the entire bridge.
To validate the effectiveness of the developed methods and algorithms, several field data collections have been conducted to collect both the visual sensor data and the physical measurements from experimental specimens and in-service bridges. The data were collected using both terrestrial laser scanners combined with images, and laser scanners and cameras mounted to unmanned aerial vehicles.
8

Jain, Ramesh. Object Recognition in Range Images Using CAD Databases. Fort Belvoir, VA: Defense Technical Information Center, July 1991. http://dx.doi.org/10.21236/ada239326.

9

Owens, Jason. Object Detection using the Kinect. Fort Belvoir, VA: Defense Technical Information Center, March 2012. http://dx.doi.org/10.21236/ada564736.

10

Aufderheide, M., A. Barty, S. Lehman, B. Kozioziemski and D. Schneberk. Phase Effects on Mesoscale Object X-ray Absorption Images. Office of Scientific and Technical Information (OSTI), September 2004. http://dx.doi.org/10.2172/15014410.
