To view the other types of publications on this topic, follow this link: Object detection in images.

Journal articles on the topic "Object detection in images"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Familiarize yourself with the top 50 journal articles for research on the topic "Object detection in images".

Next to every entry in the bibliography you will find an "Add to bibliography" option. Click it, and a bibliographic reference for the selected work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of a publication as a PDF and read its online abstract whenever the relevant parameters are available in the metadata.

Browse journal articles from many different subject areas and compile your bibliography correctly.

1

Shin, Su-Jin, Seyeob Kim, Youngjung Kim, and Sungho Kim. "Hierarchical Multi-Label Object Detection Framework for Remote Sensing Images". Remote Sensing 12, no. 17 (August 24, 2020): 2734. http://dx.doi.org/10.3390/rs12172734.

Full text of the source
Abstract:
Detecting objects such as aircraft and ships is a fundamental research area in remote sensing analytics. Owing to the prosperity and development of CNNs, many previous methodologies have been proposed for object detection within remote sensing images. Despite this advance, support for object detection datasets with a more complex structure, i.e., datasets with hierarchically multi-labeled objects, remains limited in existing detection models. Especially in remote sensing images, since objects are captured from a bird's-eye view, they exhibit restricted visual features and are not always guaranteed to be labeled down to fine categories. We propose a hierarchical multi-label object detection framework applicable to hierarchically partially annotated datasets. In the framework, an object detection pipeline called Decoupled Hierarchical Classification Refinement (DHCR) fuses the results of two networks: (1) an object detection network with multiple classifiers, and (2) a hierarchical sibling classification network for supporting hierarchical multi-label classification. Our framework additionally introduces a region proposal method for efficient detection over vast areas of remote sensing images, called the clustering-guided cropping strategy. Thorough experiments validate the effectiveness of our framework on our own object detection datasets constructed with remote sensing images from WorldView-3 and SkySat satellites. Under our proposed framework, DHCR-based detections significantly improve the performance of the respective baseline models, and we achieve state-of-the-art results on the datasets.
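The clustering-guided cropping idea can be sketched generically (a hypothetical simplification, not the authors' implementation): group coarse object locations into clusters and emit one crop window per cluster, so the detector skips empty regions. The cell and crop sizes below are illustrative assumptions.

```python
import numpy as np

def cluster_guided_crops(centers, cell=512, crop=800):
    """Group coarse object centers into grid cells (a crude stand-in for
    clustering) and emit one crop window (x0, y0, x1, y1) per occupied
    cell, centered on that cluster's mean position."""
    centers = np.asarray(centers, dtype=float)
    cells = np.floor(centers / cell).astype(int)  # coarse spatial clustering
    crops = []
    for key in sorted({tuple(c) for c in map(tuple, cells)}):
        members = centers[(cells == key).all(axis=1)]
        cx, cy = members.mean(axis=0)             # cluster centroid
        crops.append((cx - crop / 2, cy - crop / 2,
                      cx + crop / 2, cy + crop / 2))
    return crops
```

Two nearby detections share one crop, while a distant one gets its own, which is the source of the efficiency gain on sparse scenes.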
APA, Harvard, Vancouver, ISO, and other citation styles
2

Jung, Sejung, Won Hee Lee, and Youkyung Han. "Change Detection of Building Objects in High-Resolution Single-Sensor and Multi-Sensor Imagery Considering the Sun and Sensor's Elevation and Azimuth Angles". Remote Sensing 13, no. 18 (September 13, 2021): 3660. http://dx.doi.org/10.3390/rs13183660.

Abstract:
Building change detection is a critical field for monitoring artificial structures using high-resolution multitemporal images. However, relief displacement depending on the azimuth and elevation angles of the sensor causes numerous false alarms and misdetections of building changes. Therefore, this study proposes an effective object-based building change detection method that considers azimuth and elevation angles of sensors in high-resolution images. To this end, segmentation images were generated using a multiresolution technique from high-resolution images after which object-based building detection was performed. For detecting building candidates, we calculated feature information that could describe building objects, such as rectangular fit, gray-level co-occurrence matrix (GLCM) homogeneity, and area. Final building detection was then performed considering the location relationship between building objects and their shadows using the Sun’s azimuth angle. Subsequently, building change detection of final building objects was performed based on three methods considering the relationship of the building object properties between the images. First, only overlaying objects between images were considered to detect changes. Second, the size difference between objects according to the sensor’s elevation angle was considered to detect the building changes. Third, the direction between objects according to the sensor’s azimuth angle was analyzed to identify the building changes. To confirm the effectiveness of the proposed object-based building change detection performance, two building density areas were selected as study sites. Site 1 was constructed using a single sensor of KOMPSAT-3 bitemporal images, whereas Site 2 consisted of multi-sensor images of KOMPSAT-3 and unmanned aerial vehicle (UAV). The results from both sites revealed that considering additional shadow information showed more accurate building detection than using feature information only. 
Furthermore, the results of the three object-based change detection methods were compared and analyzed with respect to the characteristics of the study area and the sensors. The proposed object-based change detection achieved higher accuracy than existing building detection methods.
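The geometric relation between a structure and its shadow used above can be sketched with elementary trigonometry. This is a minimal illustration under assumed conventions (azimuth measured clockwise from north, displacement returned as an (east, north) pair), not the paper's formulation.

```python
import math

def shadow_vector(sun_azimuth_deg, sun_elevation_deg, height):
    """Direction and length of the shadow cast by a vertical structure.
    The shadow points away from the Sun, and its length shrinks as the
    Sun's elevation angle increases (length = height / tan(elevation))."""
    az = math.radians(sun_azimuth_deg)
    length = height / math.tan(math.radians(sun_elevation_deg))
    return (-math.sin(az) * length, -math.cos(az) * length)
```

For a sun in the east at 45° elevation, a 10 m building casts a 10 m shadow toward the west, which is the kind of location constraint used to confirm building candidates.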
3

Vajda, Peter, Ivan Ivanov, Lutz Goldmann, Jong-Seok Lee, and Touradj Ebrahimi. "Robust Duplicate Detection of 2D and 3D Objects". International Journal of Multimedia Data Engineering and Management 1, no. 3 (July 2010): 19–40. http://dx.doi.org/10.4018/jmdem.2010070102.

Abstract:
In this paper, the authors analyze their graph-based approach for 2D and 3D object duplicate detection in still images. A graph model is used to represent the 3D spatial information of the object based on the features extracted from training images to avoid explicit and complex 3D object modeling. Therefore, improved performance can be achieved in comparison to existing methods in terms of both robustness and computational complexity. Different limitations of this approach are analyzed by evaluating performance with respect to the number of training images and calculation of optimal parameters in a number of applications. Furthermore, effectiveness of object duplicate detection algorithm is measured over different object classes. The authors’ method is shown to be robust in detecting the same objects even when images with objects are taken from different viewpoints or distances.
4

Sejr, Jonas Herskind, Peter Schneider-Kamp, and Naeem Ayoub. "Surrogate Object Detection Explainer (SODEx) with YOLOv4 and LIME". Machine Learning and Knowledge Extraction 3, no. 3 (August 6, 2021): 662–71. http://dx.doi.org/10.3390/make3030033.

Abstract:
Due to impressive performance, deep neural networks for object detection in images have become a prevalent choice. Given the complexity of the neural network models used, users of these algorithms are typically given no hint as to how the objects were found. It remains, for example, unclear whether an object is detected based on what it looks like or based on the context in which it is located. We have developed an algorithm, Surrogate Object Detection Explainer (SODEx), that can explain any object detection algorithm using any classification explainer. We evaluate SODEx qualitatively and quantitatively by detecting objects in the COCO dataset with YOLOv4 and explaining these detections with LIME. This empirical evaluation does not only demonstrate the value of explainable object detection, it also provides valuable insights into how YOLOv4 detects objects.
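The surrogate idea behind this kind of explainer can be sketched generically (a toy stand-in, not the SODEx code): switch interpretable segments on and off, query the black-box detector's confidence for one detection, and fit a linear surrogate whose weights rank segment importance. The `detector_score` function below is a fabricated placeholder for the real detector.

```python
import numpy as np

rng = np.random.default_rng(0)
n_segments = 8

def detector_score(mask):
    """Toy black-box detector: confidence driven mainly by segment 2."""
    return 0.9 * mask[2] + 0.05 * mask[5] + 0.02

# Sample binary perturbations (segments kept or blanked) and query scores.
masks = rng.integers(0, 2, size=(200, n_segments)).astype(float)
scores = np.array([detector_score(m) for m in masks])

# Fit a linear surrogate; its weights explain the detection locally.
X = np.hstack([masks, np.ones((len(masks), 1))])  # add an intercept column
weights, *_ = np.linalg.lstsq(X, scores, rcond=None)
top_segment = int(np.argmax(np.abs(weights[:n_segments])))
```

The surrogate correctly attributes the detection to segment 2, mirroring how a LIME-style explainer distinguishes "the object itself" from its context.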
5

Karimanzira, Divas, Helge Renkewitz, David Shea, and Jan Albiez. "Object Detection in Sonar Images". Electronics 9, no. 7 (July 21, 2020): 1180. http://dx.doi.org/10.3390/electronics9071180.

Abstract:
The scope of the project described in this paper is the development of a generalized underwater object detection solution based on Automated Machine Learning (AutoML) principles. Multiple scales, dual priorities, speed, limited data, and class imbalance make object detection a very challenging task. In underwater object detection, further complications come into play due to acoustic image problems such as non-homogeneous resolution, non-uniform intensity, speckle noise, acoustic shadowing, acoustic reverberation, and multipath problems. Therefore, we focus on finding solutions to the problems along the underwater object detection pipeline. A pipeline for realizing a robust generic object detector is described and demonstrated in a case study of detecting an underwater docking station in sonar images. The system achieves an overall detection and classification average precision (AP) of 0.98392 on a test set of 5000 underwater sonar frames.
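The AP score reported above is the standard area under the precision-recall curve. A generic implementation (not the authors' evaluation code) might look like this:

```python
import numpy as np

def average_precision(scores, labels, n_pos):
    """All-point-interpolated AP: sort detections by confidence, build the
    precision-recall curve, take its monotone envelope, and integrate."""
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(labels, dtype=float)[order]   # 1 = true positive
    tp_cum = np.cumsum(tp)
    fp_cum = np.cumsum(1.0 - tp)
    recall = tp_cum / n_pos
    precision = tp_cum / (tp_cum + fp_cum)
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(mpre) - 2, -1, -1):        # precision envelope
        mpre[i] = max(mpre[i], mpre[i + 1])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))
```

With two ground-truth objects and detections labeled [hit, miss, hit] in confidence order, the AP works out to 5/6.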
6

Yan, Longbin, Min Zhao, Xiuheng Wang, Yuge Zhang, and Jie Chen. "Object Detection in Hyperspectral Images". IEEE Signal Processing Letters 28 (2021): 508–12. http://dx.doi.org/10.1109/lsp.2021.3059204.

7

Wu, Jingqian, and Shibiao Xu. "From Point to Region: Accurate and Efficient Hierarchical Small Object Detection in Low-Resolution Remote Sensing Images". Remote Sensing 13, no. 13 (July 3, 2021): 2620. http://dx.doi.org/10.3390/rs13132620.

Abstract:
Accurate object detection is important in computer vision. However, detecting small objects in low-resolution images remains a challenging and elusive problem, primarily because such objects carry less visual information and cannot be easily distinguished from similar background regions. To resolve this problem, we propose a Hierarchical Small Object Detection Network for low-resolution remote sensing images, named HSOD-Net. We develop a point-to-region detection paradigm by first performing key-point prediction to obtain position hypotheses, and only later super-resolving the image and detecting the objects around those candidate positions. By postponing object prediction until after increasing the resolution, the obtained key-points are more stable than their traditional counterparts based on early object detection with less visual information. This hierarchical approach saves significant run-time, which makes HSOD-Net more suitable for practical applications such as search and rescue and drone navigation. In comparison with state-of-the-art models, HSOD-Net achieves remarkable precision in detecting small objects in low-resolution remote sensing images.
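The point-to-region step can be sketched as follows (an illustrative simplification: nearest-neighbour upscaling stands in for the learned super-resolution model, and window size and scale factor are assumed values):

```python
import numpy as np

def crop_and_upscale(image, keypoints, half=8, factor=4):
    """For each predicted key-point (row, col), crop a local window and
    upscale it, so the detector only refines small candidate regions
    instead of super-resolving the whole low-resolution image."""
    h, w = image.shape[:2]
    patches = []
    for (y, x) in keypoints:
        y0, y1 = max(0, y - half), min(h, y + half)
        x0, x1 = max(0, x - half), min(w, x + half)
        patch = image[y0:y1, x0:x1]
        # nearest-neighbour upscaling as a stand-in for super-resolution
        patches.append(np.repeat(np.repeat(patch, factor, axis=0),
                                 factor, axis=1))
    return patches
```

Only the windows around candidate positions are enlarged, which is where the run-time saving of the hierarchical scheme comes from.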
8

Lorencs, Aivars, Ints Mednieks, and Juris Siņica-Siņavskis. "Fast object detection in digital grayscale images". Proceedings of the Latvian Academy of Sciences. Section B. Natural, Exact, and Applied Sciences. 63, no. 3 (January 1, 2009): 116–24. http://dx.doi.org/10.2478/v10046-009-0026-5.

Abstract:
The problem of specific object detection in digital grayscale images is considered under the following conditions: relatively small image fragments can be analysed (a priori information about the size of objects is available); images contain a varying undefined background (clutter) of larger objects; processing time should be minimised and must be independent from the image contents; proposed methods should provide for efficient implementation in application-specific electronic circuits. The last two conditions reflect the aim to propose approaches suitable for application in real time systems where known sophisticated methods would be inapplicable. The research is motivated by potential applications in the food industry (detection of contaminants in products from their X-ray images), medicine (detection of anomalies in fragments of computer tomography images etc.). Possible objects to be detected may include compact small objects, curved lines in different directions, and small regions of pixels with brightness different from the background. The paper describes proposed image processing approaches to detection of such objects and the results obtained from processing of sample food images.
9

Shen, Jie, Zhenxin Xu, Zhe Chen, Huibin Wang, and Xiaotao Shi. "Optical Prior-Based Underwater Object Detection with Active Imaging". Complexity 2021 (April 27, 2021): 1–12. http://dx.doi.org/10.1155/2021/6656166.

Abstract:
Underwater object detection plays an important role in research and practice, as it provides condensed and informative content that represents underwater objects. However, detecting objects from underwater images is challenging because underwater environments significantly degenerate image quality and distort the contrast between the object and background. To address this problem, this paper proposes an optical prior-based underwater object detection approach that takes advantage of optical principles to identify optical collimation over underwater images, providing valuable guidance for extracting object features. Unlike data-driven knowledge, the prior in our method is independent of training samples. The fundamental novelty of our approach lies in the integration of an image prior and the object detection task. This novelty is fundamental to the satisfying performance of our approach in underwater environments, which is demonstrated through comparisons with state-of-the-art object detection methods.
10

Liu, Wei, Dayu Cheng, Pengcheng Yin, Mengyuan Yang, Erzhu Li, Meng Xie, and Lianpeng Zhang. "Small Manhole Cover Detection in Remote Sensing Imagery with Deep Convolutional Neural Networks". ISPRS International Journal of Geo-Information 8, no. 1 (January 19, 2019): 49. http://dx.doi.org/10.3390/ijgi8010049.

Abstract:
With the development of remote sensing technology and the advent of high-resolution images, obtaining data has become increasingly convenient. However, the acquisition of small manhole cover information still has shortcomings including low efficiency of manual surveying and high leakage rate. Recently, deep learning models, especially deep convolutional neural networks (DCNNs), have proven to be effective at object detection. However, several challenges limit the applications of DCNN in manhole cover object detection using remote sensing imagery: (1) Manhole cover objects often appear at different scales in remotely sensed images and DCNNs’ fixed receptive field cannot match the scale variability of such objects; (2) Manhole cover objects in large-scale remotely-sensed images are relatively small in size and densely packed, while DCNNs have poor localization performance when applied to such objects. To address these problems, we propose an effective method for detecting manhole cover objects in remotely-sensed images. First, we redesign the feature extractor by adopting the visual geometry group (VGG), which can increase the variety of receptive field size. Then, detection is performed using two sub-networks: a multi-scale output network (MON) for manhole cover object-like edge generation from several intermediate layers whose receptive fields match different object scales and a multi-level convolution matching network (M-CMN) for object detection based on fused feature maps, which combines several feature maps that enable small and densely packed manhole cover objects to produce a stronger response. The results show that our method is more accurate than existing methods at detecting manhole covers in remotely-sensed images.
11

Chen, Y., L. Pang, H. Liu, and X. Xu. "WAVELET FUSION FOR CONCEALED OBJECT DETECTION USING PASSIVE MILLIMETER WAVE SEQUENCE IMAGES". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3 (April 30, 2018): 193–98. http://dx.doi.org/10.5194/isprs-archives-xlii-3-193-2018.

Abstract:
PMMW imaging systems can create interpretable imagery of objects concealed under clothing, which gives them a great advantage in security check systems. This paper applies wavelet fusion to detect concealed objects in passive millimeter wave (PMMW) sequence images. Based on the image characteristics and storage methods of a real-time PMMW imager, we first use the sum of squared differences (SSD) as an image-correlation parameter to screen the sequence images. Second, the selected images are fused using a wavelet fusion algorithm. Finally, the concealed objects are detected by mean filtering, threshold segmentation, and edge detection. The experimental results show that this method improves the detection of concealed objects by selecting the most correlated images from the PMMW sequence and using wavelet fusion to enhance the information of the concealed objects. The method can be effectively applied to detecting objects concealed on the human body in millimeter wave video.
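The SSD screening step described above can be sketched in a few lines (a generic illustration, not the authors' code): compute the SSD between consecutive frames and keep the most correlated pair.

```python
import numpy as np

def select_most_similar_pair(frames):
    """Screen a PMMW sequence: return the index i of the pair (i, i+1)
    with the smallest sum of squared differences (SSD), i.e. the most
    correlated consecutive frames, as candidates for wavelet fusion."""
    frames = np.asarray(frames, dtype=float)
    ssd = ((frames[1:] - frames[:-1]) ** 2).sum(axis=(1, 2))
    return int(np.argmin(ssd))
```

A low SSD indicates that two frames show nearly the same scene, so fusing them reinforces the concealed object's signature rather than motion blur.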
12

Li, Xiao Chun, Chun Yang Jia, and Wei Hua Li. "A Novel Change Detection Method Using Independent Component Analysis and Oriented-Object Method". Applied Mechanics and Materials 548-549 (April 2014): 633–36. http://dx.doi.org/10.4028/www.scientific.net/amm.548-549.633.

Abstract:
Through analyzing the problems faced by change detection methods for high-resolution remote sensing images, a novel change detection algorithm is proposed. First, feature images of the image's objects, extracted using an object-oriented method, serve as the input vector for estimating the subspace for Independent Component Analysis (ICA), which improves noise suppression. At the same time, a new self-adapting weighting algorithm is proposed to extract the image's objects, further optimizing the object-oriented processing; a new partitioning scheme using the undecimated discrete wavelet transform (UDWT) effectively overcomes the problem that shrinking the input vector leads to imprecise estimation of the ICA subspace. Compared with typical algorithms such as ICA and UDWT, simulation results show that the new algorithm greatly improves the robustness and accuracy of change detection for high-resolution images.
13

AL-Alimi, Dalal, Yuxiang Shao, Ahamed Alalimi, and Ahmed Abdu. "Mask R-CNN for Geospatial Object Detection". International Journal of Information Technology and Computer Science 12, no. 5 (October 8, 2020): 63–72. http://dx.doi.org/10.5815/ijitcs.2020.05.05.

Abstract:
Geospatial imaging techniques have opened a door for researchers to implement multiple beneficial applications in many fields, including military investigation, disaster relief, and urban traffic control. As the resolution of geospatial images has increased in recent years, the detection of geospatial objects has attracted many researchers. Mask R-CNN was designed to identify object outlines at the pixel level (instance segmentation) and to detect objects in natural images. This study describes the Mask R-CNN model and uses it to detect objects in geospatial images. For this experiment, an existing dataset was adapted for object segmentation, and the results show that Mask R-CNN can also be used for geospatial object detection, producing good results on the ten-class Seg-VHR-10 dataset.
14

Crawford, Eric, and Joelle Pineau. "Spatially Invariant Unsupervised Object Detection with Convolutional Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3412–20. http://dx.doi.org/10.1609/aaai.v33i01.33013412.

Abstract:
There are many reasons to expect an ability to reason in terms of objects to be a crucial skill for any generally intelligent agent. Indeed, recent machine learning literature is replete with examples of the benefits of object-like representations: generalization, transfer to new tasks, and interpretability, among others. However, in order to reason in terms of objects, agents need a way of discovering and detecting objects in the visual world - a task which we call unsupervised object detection. This task has received significantly less attention in the literature than its supervised counterpart, especially in the case of large images containing many objects. In the current work, we develop a neural network architecture that effectively addresses this large-image, many-object setting. In particular, we combine ideas from Attend, Infer, Repeat (AIR), which performs unsupervised object detection but does not scale well, with recent developments in supervised object detection. We replace AIR’s core recurrent network with a convolutional (and thus spatially invariant) network, and make use of an object-specification scheme that describes the location of objects with respect to local grid cells rather than the image as a whole. Through a series of experiments, we demonstrate a number of features of our architecture: that, unlike AIR, it is able to discover and detect objects in large, many-object scenes; that it has a significant ability to generalize to images that are larger and contain more objects than images encountered during training; and that it is able to discover and detect objects with enough accuracy to facilitate non-trivial downstream processing.
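The grid-cell-relative object specification resembles the decoding used in one-stage detectors; a minimal sketch under assumed conventions (offsets in [0, 1] within the cell, not the paper's exact parameterisation):

```python
def decode_cell_offsets(cell_ij, offsets, cell_size):
    """Convert a (row, col) grid cell plus (dy, dx) fractional offsets
    into the absolute image coordinates of an object centre. Predicting
    relative to local cells, rather than the whole image, is what makes
    the network spatially invariant and able to scale to larger images."""
    i, j = cell_ij
    dy, dx = offsets
    return ((i + dy) * cell_size, (j + dx) * cell_size)
```

Because every cell uses the same offset scheme, the same convolutional weights can describe objects anywhere in an image of any size.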
15

Li, Yong, Guofeng Tong, Huashuai Gao, Yuebin Wang, Liqiang Zhang, and Huairong Chen. "Pano-RSOD: A Dataset and Benchmark for Panoramic Road Scene Object Detection". Electronics 8, no. 3 (March 18, 2019): 329. http://dx.doi.org/10.3390/electronics8030329.

Abstract:
Panoramic images have a wide range of applications in many fields thanks to their ability to capture all-round information. Object detection based on panoramic images has certain advantages for environment perception due to the characteristics of panoramic images, e.g., a larger field of view. In recent years, deep learning methods have achieved remarkable results in image classification and object detection. Their performance depends on large amounts of training data, so a good training dataset is a prerequisite for these methods to achieve better recognition results. We therefore construct a benchmark named Pano-RSOD for panoramic road scene object detection. Pano-RSOD contains vehicles, pedestrians, traffic signs and guiding arrows. The objects of Pano-RSOD are labelled by bounding boxes in the images. Different from traditional object detection datasets, Pano-RSOD contains more objects per panoramic image, and its high-resolution images offer 360-degree environmental perception, more annotations, more small objects and diverse road scenes. State-of-the-art deep learning algorithms were trained on Pano-RSOD for object detection, which demonstrates that Pano-RSOD is a useful benchmark and provides a better panoramic-image training dataset for object detection tasks, especially for small and deformed objects.
16

Volkov, Vladimir. "Adaptive multi-threshold object selection in remote sensing images". Information and Control Systems, no. 3 (June 15, 2020): 12–24. http://dx.doi.org/10.31799/1684-8853-2020-3-12-24.

Abstract:
Introduction: Detection, selection and analysis of objects of interest in digital images is a major problem for remote sensing and technical vision systems. Known methods for threshold-based detection and selection of objects do not feed the processing results back, and therefore neither provide a low probability of false alarms nor preserve the shape of the selected objects well enough. There are only a few published results quantifying the quality of such algorithms on either model or real images. Purpose: Studying the effectiveness of algorithms for detecting, selecting, and localizing objects of interest using their geometric characteristics, when the object properties and background are a priori uncertain, and the shape of the selected objects must be kept unchanged. Results: We have obtained and studied the characteristics of algorithms for detecting and selecting objects of interest on test models of monochrome images. These software-implemented algorithms use multi-threshold processing, producing a set of binary slices. This makes it possible to perform morphological processing of objects on each slice in order to analyze their geometric characteristics and then select them according to geometric criteria, taking into account the percolation effect, which causes changes in the area and fragmentation of the objects. As a result of analyzing these changes, an adaptive detection threshold is set for each of the selected objects. The selection significantly reduces the number of false positives during detection and allows lower thresholds to be used, increasing the probability of correct detection. We present the detection characteristics and the results of test model processing, as well as the results of object selection on real television and radar images, confirming the effectiveness of the considered algorithms.
Practical relevance: The proposed algorithms can more effectively select objects on images of various nature obtained in remote sensing, material research or medical diagnostics systems. Their microprocessor implementation is much simpler than the implementation of universal trainable neural network algorithms.
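The multi-threshold slicing with geometric selection can be sketched as follows (an illustrative simplification of the approach, not the author's implementation; the area criterion stands in for the full set of geometric criteria):

```python
import numpy as np
from collections import deque

def label_components(binary):
    """4-connected component labelling of a binary image via BFS."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(binary)):
        if labels[start]:
            continue
        current += 1
        labels[start] = current
        queue = deque([start])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

def multi_threshold_select(image, thresholds, min_area):
    """Slice the image at several thresholds and keep, per binary slice,
    only the components whose area passes the geometric criterion."""
    kept = []
    for t in thresholds:
        labels, n = label_components(image > t)
        for k in range(1, n + 1):
            area = int((labels == k).sum())
            if area >= min_area:
                kept.append((t, k, area))
    return kept
```

Small percolation fragments fail the area criterion and are dropped, which is what suppresses false alarms at the lower thresholds.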
17

Xiao, Zhifeng, Linjun Qian, Weiping Shao, Xiaowei Tan, and Kai Wang. "Axis Learning for Orientated Objects Detection in Aerial Images". Remote Sensing 12, no. 6 (March 12, 2020): 908. http://dx.doi.org/10.3390/rs12060908.

Abstract:
Orientated object detection in aerial images is still a challenging task due to the bird’s eye view and the various scales and arbitrary angles of objects in aerial images. Most current methods for orientated object detection are anchor-based, which require considerable pre-defined anchors and are time consuming. In this article, we propose a new one-stage anchor-free method to detect orientated objects in per-pixel prediction fashion with less computational complexity. Arbitrary orientated objects are detected by predicting the axis of the object, which is the line connecting the head and tail of the object, and the width of the object is vertical to the axis. By predicting objects at the pixel level of feature maps directly, the method avoids setting a number of hyperparameters related to anchor and is computationally efficient. Besides, a new aspect-ratio-aware orientation centerness method is proposed to better weigh positive pixel points, in order to guide the network to learn discriminative features from a complex background, which brings improvements for large aspect ratio object detection. The method is tested on two common aerial image datasets, achieving better performance compared with most one-stage orientated methods and many two-stage anchor-based methods with a simpler procedure and lower computational complexity.
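The axis-plus-width parameterisation described above determines an oriented box completely; a small geometric sketch (generic reconstruction, not the paper's code):

```python
import numpy as np

def axis_to_corners(head, tail, width):
    """Recover the four corners of an oriented box from the predicted
    axis (the head-to-tail line) and the width perpendicular to it."""
    head, tail = np.asarray(head, float), np.asarray(tail, float)
    u = (tail - head) / np.linalg.norm(tail - head)  # unit axis direction
    n = np.array([-u[1], u[0]])                      # unit normal to the axis
    h = 0.5 * width * n
    return np.stack([head + h, tail + h, tail - h, head - h])
```

Because only two endpoints and a scalar width are regressed, no anchor boxes or angle hyperparameters are needed, which is the source of the reduced computational complexity.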
18

Zhou, Junxiu, Yangyang Tao, and Xian Liu. "Tensor Decomposition for Salient Object Detection in Images". Big Data and Cognitive Computing 3, no. 2 (June 19, 2019): 33. http://dx.doi.org/10.3390/bdcc3020033.

Abstract:
The fundamental challenge of salient object detection is to find the decision boundary that separates the salient object from the background. Low-rank recovery models address this challenge by decomposing an image or image feature-based matrix into a low-rank matrix representing the image background and a sparse matrix representing salient objects. This method is simple and efficient in finding salient objects. However, it needs to convert high-dimensional feature space into a two-dimensional matrix. Therefore, it does not take full advantage of image features in discovering the salient object. In this article, we propose a tensor decomposition method which considers spatial consistency and tries to make full use of image feature information in detecting salient objects. First, we use high-dimensional image features in tensor to preserve spatial information about image features. Following this, we use a tensor low-rank and sparse model to decompose the image feature tensor into a low-rank tensor and a sparse tensor, where the low-rank tensor represents the background and the sparse tensor is used to identify the salient object. To solve the tensor low-rank and sparse model, we employed a heuristic strategy by relaxing the definition of tensor trace norm and tensor l1-norm. Experimental results on three saliency benchmarks demonstrate the effectiveness of the proposed tensor decomposition method.
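The core primitives in low-rank-plus-sparse decompositions of this kind are the two proximal operators: singular value thresholding for the low-rank (background) part and entrywise soft-thresholding for the sparse (salient) part. The matrix versions are sketched below; the paper's tensor model additionally applies them along tensor unfoldings.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear
    norm. Shrinks each singular value by tau, pushing M toward low rank
    (the background component)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft-thresholding: proximal operator of the l1 norm,
    which sparsifies M (the salient component)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)
```

Alternating these two operators on the residual is the standard way such low-rank and sparse models are solved.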
19

Li, Yiran, Han Xie, and Hyunchul Shin. "3D Object Detection Using Frustums and Attention Modules for Images and Point Clouds". Signals 2, no. 1 (February 12, 2021): 98–107. http://dx.doi.org/10.3390/signals2010009.

Abstract:
Three-dimensional (3D) object detection is essential in autonomous driving. Three-dimensional (3D) Lidar sensor can capture three-dimensional objects, such as vehicles, cycles, pedestrians, and other objects on the road. Although Lidar can generate point clouds in 3D space, it still lacks the fine resolution of 2D information. Therefore, Lidar and camera fusion has gradually become a practical method for 3D object detection. Previous strategies focused on the extraction of voxel points and the fusion of feature maps. However, the biggest challenge is in extracting enough edge information to detect small objects. To solve this problem, we found that attention modules are beneficial in detecting small objects. In this work, we developed Frustum ConvNet and attention modules for the fusion of images from a camera and point clouds from a Lidar. Multilayer Perceptron (MLP) and tanh activation functions were used in the attention modules. Furthermore, the attention modules were designed on PointNet to perform multilayer edge detection for 3D object detection. Compared with a previous well-known method, Frustum ConvNet, our method achieved competitive results, with an improvement of 0.27%, 0.43%, and 0.36% in Average Precision (AP) for 3D object detection in easy, moderate, and hard cases, respectively, and an improvement of 0.21%, 0.27%, and 0.01% in AP for Bird’s Eye View (BEV) object detection in easy, moderate, and hard cases, respectively, on the KITTI detection benchmarks. Our method also obtained the best results in four cases in AP on the indoor SUN-RGBD dataset for 3D object detection.
20

Brunnström, Kjell. "Object detection in cluttered infrared images". Optical Engineering 42, no. 2 (February 1, 2003): 388. http://dx.doi.org/10.1117/1.1531637.

21

Jiang, Longyu, Tao Cai, Qixiang Ma, Fanjin Xu, and Shijie Wang. "Active Object Detection in Sonar Images". IEEE Access 8 (2020): 102540–53. http://dx.doi.org/10.1109/access.2020.2999341.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Guan, Yurong, Muhammad Aamir, Zhihua Hu, Waheed Ahmed Abro, Ziaur Rahman, Zaheer Ahmed Dayo and Shakeel Akram. "A Region-Based Efficient Network for Accurate Object Detection". Traitement du Signal 38, No. 2 (30.04.2021): 481–94. http://dx.doi.org/10.18280/ts.380228.

The full text of the source
Annotation:
Object detection in images is an important task in image processing and computer vision. Many approaches are available for object detection. For example, there are numerous algorithms for object positioning and classification in images. However, the current methods perform poorly and lack experimental verification. Thus, it is a fascinating and challenging issue to position and classify image objects. Drawing on the recent advances in image object detection, this paper develops a region-based efficient network for accurate object detection in images. To improve the overall detection performance, image object detection was treated as a twofold problem, involving object proposal generation and object classification. First, a framework was designed to generate high-quality, class-independent, accurate proposals. Then, these proposals, together with their input images, were imported to our network to learn convolutional features. To boost detection efficiency, the number of proposals was reduced by a network refinement module, leaving only a few eligible candidate proposals. After that, the refined candidate proposals were loaded into the detection module to classify the objects. The proposed model was tested on the test set of the famous PASCAL Visual Object Classes Challenge 2007 (VOC2007). The results clearly demonstrate that our model achieved robust overall detection efficiency over existing approaches using fewer or more proposals, in terms of recall, mean average best overlap (MABO), and mean average precision (mAP).
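The abstract reports recall, mean average best overlap (MABO), and mAP. The overlap measure underlying all three is intersection-over-union; a minimal plain-Python sketch of IoU and of the average best overlap of proposals against ground truth (illustrative, not the paper's evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def average_best_overlap(ground_truths, proposals):
    """For each ground-truth box, take the best IoU any proposal achieves,
    then average; MABO averages this quantity over classes."""
    return sum(max(iou(gt, p) for p in proposals)
               for gt in ground_truths) / len(ground_truths)
```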
APA, Harvard, Vancouver, ISO and other citation styles
23

Shin, Sujin, Youngjung Kim, Insu Hwang, Junhee Kim and Sungho Kim. "Coupling Denoising to Detection for SAR Imagery". Applied Sciences 11, No. 12 (16.06.2021): 5569. http://dx.doi.org/10.3390/app11125569.

The full text of the source
Annotation:
Detecting objects in synthetic aperture radar (SAR) imagery has received much attention in recent years since SAR can operate in all-weather and day-and-night conditions. Due to the prosperity and development of convolutional neural networks (CNNs), many previous methodologies have been proposed for SAR object detection. In spite of the advance, existing detection networks still have limitations in boosting detection performance because of the inherently noisy characteristics of SAR imagery; hence, a separate preprocessing step such as denoising (despeckling) is required before utilizing the SAR images for deep learning. However, inappropriate denoising techniques might cause loss of detailed information, and even proper denoising methods do not always guarantee performance improvement. In this paper, we therefore propose a novel object detection framework that combines an unsupervised denoising network with a traditional two-stage detection network and leverages a strategy for fusing region proposals extracted from both the raw SAR image and a synthetically denoised SAR image. Extensive experiments validate the effectiveness of our framework on our own object detection datasets constructed with remote sensing images from TerraSAR-X and COSMO-SkyMed satellites. The proposed framework shows better performance than models using only noisy SAR images or only denoised SAR images after despeckling, under multiple backbone networks.
APA, Harvard, Vancouver, ISO and other citation styles
24

Murthy, Chinthakindi Balaram, Mohammad Farukh Hashmi, Neeraj Dhanraj Bokde and Zong Woo Geem. "Investigations of Object Detection in Images/Videos Using Various Deep Learning Techniques and Embedded Platforms—A Comprehensive Review". Applied Sciences 10, No. 9 (08.05.2020): 3280. http://dx.doi.org/10.3390/app10093280.

The full text of the source
Annotation:
In recent years there has been remarkable progress in one computer vision application area: object detection. One of the most challenging and fundamental problems in object detection is locating a specific object from the multiple objects present in a scene. Earlier, traditional detection methods were used for detecting objects; with the introduction of convolutional neural networks from 2012 onward, deep learning-based techniques were used for feature extraction, and that led to remarkable breakthroughs in this area. This paper presents a detailed survey on recent advancements and achievements in object detection using various deep learning techniques. Several topics have been included, such as Viola–Jones (VJ), histogram of oriented gradient (HOG), one-shot and two-shot detectors, benchmark datasets, evaluation metrics, speed-up techniques, and current state-of-the-art object detectors. Detailed discussions on some important applications in object detection areas, including pedestrian detection, crowd detection, and real-time object detection on GPU-based embedded systems, have been presented. Finally, we conclude by identifying promising future directions.
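Among the evaluation metrics such surveys cover, average precision (AP) is the most common. A sketch of the raw area-under-the-precision-recall-curve computation (no interpolation, so it is not any specific benchmark's exact protocol):

```python
def average_precision(scores, is_tp, num_gt):
    """Sort detections by confidence, sweep down the ranked list, and
    accumulate precision times the recall increment at each step."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for i in order:
        tp, fp = (tp + 1, fp) if is_tp[i] else (tp, fp + 1)
        recall, precision = tp / num_gt, tp / (tp + fp)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```

A perfect ranking (all true positives, all ground truth found) yields AP = 1.0; a false positive ranked between two true positives lowers the precision at the second recall step.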
APA, Harvard, Vancouver, ISO and other citation styles
25

Xiong, Shengzhou, Yihua Tan, Yansheng Li, Cai Wen and Pei Yan. "Subtask Attention Based Object Detection in Remote Sensing Images". Remote Sensing 13, No. 10 (14.05.2021): 1925. http://dx.doi.org/10.3390/rs13101925.

The full text of the source
Annotation:
Object detection in remote sensing images (RSIs) is one of the basic tasks in the field of remote sensing image automatic interpretation. In recent years, the deep object detection frameworks of natural scene images (NSIs) have been introduced into object detection on RSIs, and the detection performance has improved significantly because of the powerful feature representation. However, there are still many challenges concerning the particularities of remote sensing objects. One of the main challenges is the missed detection of small objects which have less than five percent of the pixels of the big objects. Generally, the existing algorithms choose to deal with this problem by multi-scale feature fusion based on a feature pyramid. However, the benefits of this strategy are limited, considering that the location of small objects in the feature map will disappear when the detection task is processed at the end of the network. In this study, we propose a subtask attention network (StAN), which handles the detection task directly on the shallow layer of the network. First, StAN contains one shared feature branch and two subtask attention branches of a semantic auxiliary subtask and a detection subtask based on the multi-task attention network (MTAN). Second, the detection branch uses only low-level features considering small objects. Third, the attention map guidance mechanism is put forward to optimize the network for keeping the identification ability. Fourth, the multi-dimensional sampling module (MdS), global multi-view channel weights (GMulW) and target-guided pixel attention (TPA) are designed for further improvement of the detection accuracy in complex scenes. The experimental results on the NWPU VHR-10 dataset and DOTA dataset demonstrated that the proposed algorithm achieved the SOTA performance, and the missed detection of small objects decreased. On the other hand, ablation experiments also proved the effects of MdS, GMulW and TPA.
APA, Harvard, Vancouver, ISO and other citation styles
26

Prof. Vasudha Bahl and Prof. Nidhi Sengar, Akash Kumar, Dr Amita Goel. "Real-Time Object Detection Model". International Journal for Modern Trends in Science and Technology 6, No. 12 (18.12.2020): 360–64. http://dx.doi.org/10.46501/ijmtst061267.

The full text of the source
Annotation:
Object Detection is a study in the field of computer vision. An object detection model recognizes objects of the real world present either in a captured image or in real-time video, where the object can belong to any class, namely humans, animals, and other objects. This project is an implementation of an object detection algorithm called You Only Look Once (YOLOv3). The architecture of the YOLO model is extremely fast compared to all previous methods. The YOLOv3 model applies a single neural network to the given image and then divides the image into predetermined bounding boxes. These boxes are weighted by the predicted probabilities. After non-max suppression, it returns the recognized objects together with their bounding boxes. YOLO trains on and runs object detection directly on full images.
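The non-max suppression step mentioned in this abstract can be sketched in a few lines of plain Python; the overlap threshold below is an illustrative default, not a value from the project:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-max suppression: repeatedly keep the highest-scoring
    box and drop every remaining box that overlaps it above `thresh`."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep
```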
APA, Harvard, Vancouver, ISO and other citation styles
27

Freitas, Vander Luis de Souza, Barbara Maximino da Fonseca Reis and Antonio Maria Garcia Tommaselli. "AUTOMATIC SHADOW DETECTION IN AERIAL AND TERRESTRIAL IMAGES". Boletim de Ciências Geodésicas 23, No. 4 (December 2017): 578–90. http://dx.doi.org/10.1590/s1982-21702017000400038.

The full text of the source
Annotation:
Abstract: Shadows exist in almost all aerial and outdoor images, and they can be useful for estimating the Sun's position or measuring object size. On the other hand, they represent a problem in processes such as object detection/recognition, image matching, etc., because they may be confused with dark objects and change the image's radiometric properties. We address this problem for aerial and outdoor color images in this work. As a first step, we use a filter to find low intensities. For outdoor color images, we analyze spectrum ratio properties to refine the detection, and the results are assessed with a dataset containing ground truth. For the aerial case, we validate the detections depending on the hue component of pixels. This stage takes into account that, in deep shadows, most pixels have blue or violet wavelengths because of an atmospheric scattering effect.
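The two-stage idea in this abstract, first filter low intensities and then validate by hue, can be sketched per pixel as follows; the threshold and hue range are invented for illustration and are not the paper's values:

```python
import colorsys

def shadow_mask(pixels, intensity_thresh=0.3, hue_range=(0.55, 0.85)):
    """Flag pixels that are both dark (low lightness) and blue/violet in
    hue, mimicking a filter-then-validate shadow detector."""
    mask = []
    for r, g, b in pixels:                     # RGB values in [0, 1]
        h, l, s = colorsys.rgb_to_hls(r, g, b)
        mask.append(l < intensity_thresh and hue_range[0] <= h <= hue_range[1])
    return mask
```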
APA, Harvard, Vancouver, ISO and other citation styles
28

Wang, Ting, Changqing Cao, Xiaodong Zeng, Zhejun Feng, Jingshi Shen, Weiming Li, Bo Wang, Yuedong Zhou and Xu Yan. "An Aircraft Object Detection Algorithm Based on Small Samples in Optical Remote Sensing Image". Applied Sciences 10, No. 17 (20.08.2020): 5778. http://dx.doi.org/10.3390/app10175778.

The full text of the source
Annotation:
In recent years, remote sensing technology has developed rapidly, and the ground resolution of spaceborne optical remote sensing images has reached the sub-meter range, providing a new technical means for aircraft object detection. Research on aircraft object detection based on optical remote sensing images is of great significance for military object detection and recognition. However, spaceborne optical remote sensing images are difficult to obtain and costly. Therefore, this paper proposes an aircraft detection algorithm that can detect aircraft objects from small samples. Firstly, this paper establishes an aircraft object dataset containing weak and small aircraft objects. Secondly, a detection algorithm is proposed to detect weak and small aircraft objects. Thirdly, the aircraft detection algorithm is extended to detect multiple aircraft objects of varying sizes. There are 13,324 aircraft in the test set. With the method proposed in this paper, the F1 score reaches 90.44%. Therefore, aircraft objects can be detected simply and efficiently by using the proposed method. It can effectively detect aircraft objects and improve early warning capabilities.
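The F1 score the abstract reports is the harmonic mean of precision and recall; as a quick reference (the counts in the test below are illustrative, not the paper's):

```python
def f1_score(tp, fp, fn):
    """F1 from raw detection counts: true positives, false positives,
    and false negatives (missed objects)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```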
APA, Harvard, Vancouver, ISO and other citation styles
29

Jiang, Jingchao, Cheng-Zhi Qin, Juan Yu, Changxiu Cheng, Junzhi Liu and Jingzhou Huang. "Obtaining Urban Waterlogging Depths from Video Images Using Synthetic Image Data". Remote Sensing 12, No. 6 (22.03.2020): 1014. http://dx.doi.org/10.3390/rs12061014.

The full text of the source
Annotation:
Reference objects in video images can be used to indicate urban waterlogging depths. The detection of reference objects is the key step to obtain waterlogging depths from video images. Object detection models with convolutional neural networks (CNNs) have been utilized to detect reference objects. These models require a large number of labeled images as the training data to ensure the applicability at a city scale. However, it is hard to collect a sufficient number of urban flooding images containing valuable reference objects, and manually labeling images is time-consuming and expensive. To solve the problem, we present a method to synthesize image data as the training data. Firstly, original images containing reference objects and original images with water surfaces are collected from open data sources, and reference objects and water surfaces are cropped from these original images. Secondly, the reference objects and water surfaces are further enriched via data augmentation techniques to ensure the diversity. Finally, the enriched reference objects and water surfaces are combined to generate a synthetic image dataset with annotations. The synthetic image dataset is further used for training an object detection model with CNN. The waterlogging depths are calculated based on the reference objects detected by the trained model. A real video dataset and an artificial image dataset are used to evaluate the effectiveness of the proposed method. The results show that the detection model trained using the synthetic image dataset can effectively detect reference objects from images, and it can achieve acceptable accuracies of waterlogging depths based on the detected reference objects. The proposed method has the potential to monitor waterlogging depths at a city scale.
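The compositing step described above, pasting cropped reference objects onto water-surface backgrounds and recording an annotation, can be sketched as follows; images are nested lists of pixel values here purely for illustration:

```python
import random

def synthesize(background, obj, rng=random.Random(0)):
    """Paste a cropped object onto a background at a random position and
    return the composite plus its bounding-box annotation (x1, y1, x2, y2)."""
    bh, bw = len(background), len(background[0])
    oh, ow = len(obj), len(obj[0])
    y0, x0 = rng.randrange(bh - oh + 1), rng.randrange(bw - ow + 1)
    out = [row[:] for row in background]       # copy, leave the input intact
    for dy in range(oh):
        for dx in range(ow):
            out[y0 + dy][x0 + dx] = obj[dy][dx]
    return out, (x0, y0, x0 + ow, y0 + oh)
```

Repeating this with augmented crops (flips, rescales, color shifts) yields an arbitrarily large annotated training set without manual labeling, which is the point the abstract makes.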
APA, Harvard, Vancouver, ISO and other citation styles
30

Bashir, Syed Muhammad Arsalan, and Yi Wang. "Small Object Detection in Remote Sensing Images with Residual Feature Aggregation-Based Super-Resolution and Object Detector Network". Remote Sensing 13, No. 9 (10.05.2021): 1854. http://dx.doi.org/10.3390/rs13091854.

The full text of the source
Annotation:
This paper deals with detecting small objects in remote sensing images from satellites or any aerial vehicle by utilizing the concept of image super-resolution for image resolution enhancement using a deep-learning-based detection method. This paper provides a rationale for image super-resolution for small objects by improving the current super-resolution (SR) framework by incorporating a cyclic generative adversarial network (GAN) and residual feature aggregation (RFA) to improve detection performance. The novelty of the method is threefold: first, a framework is proposed, independent of the final object detector used in research, i.e., YOLOv3 could be replaced with Faster R-CNN or any object detector to perform object detection; second, a residual feature aggregation network was used in the generator, which significantly improved the detection performance as the RFA network detected complex features; and third, the whole network was transformed into a cyclic GAN. The image super-resolution cyclic GAN with RFA and YOLO as the detection network is termed as SRCGAN-RFA-YOLO, which is compared with the detection accuracies of other methods. Rigorous experiments on both satellite images and aerial images (ISPRS Potsdam, VAID, and Draper Satellite Image Chronology datasets) were performed, and the results showed that the detection performance increased by using super-resolution methods for spatial resolution enhancement; for an IoU of 0.10, AP of 0.7867 was achieved for a scale factor of 16.
APA, Harvard, Vancouver, ISO and other citation styles
31

Huyan, Lang, Yunpeng Bai, Ying Li, Dongmei Jiang, Yanning Zhang, Quan Zhou, Jiayuan Wei, Juanni Liu, Yi Zhang and Tao Cui. "A Lightweight Object Detection Framework for Remote Sensing Images". Remote Sensing 13, No. 4 (13.02.2021): 683. http://dx.doi.org/10.3390/rs13040683.

The full text of the source
Annotation:
Onboard real-time object detection in remote sensing images is a crucial but challenging task in this computation-constrained scenario. This task not only requires the algorithm to yield excellent performance but also requests limited time and space complexity of the algorithm. However, previous convolutional neural network (CNN) based object detectors for remote sensing images suffer from heavy computational cost, which hinders them from being deployed on satellites. Moreover, an onboard detector is desired to detect objects at vastly different scales. To address these issues, we proposed a lightweight one-stage multi-scale feature fusion detector called MSF-SNET for onboard real-time object detection in remote sensing images. Using lightweight SNET as the backbone network reduces the number of parameters and computational complexity. To strengthen the detection performance of small objects, three low-level features are extracted from the three stages of SNET respectively. In the detection part, another three convolutional layers are designed to further extract deep features with rich semantic information for large-scale object detection. To improve detection accuracy, the deep features and low-level features are fused to enhance the feature representation. Extensive experiments and comprehensive evaluations on the openly available NWPU VHR-10 dataset and DIOR dataset are conducted to evaluate the proposed method. Compared with other state-of-the-art detectors, the proposed detection framework has fewer parameters and calculations, while maintaining consistent accuracy.
APA, Harvard, Vancouver, ISO and other citation styles
32

Ma, Yuchi, John Anderson, Stephen Crouch and Jie Shan. "Moving Object Detection and Tracking with Doppler LiDAR". Remote Sensing 11, No. 10 (14.05.2019): 1154. http://dx.doi.org/10.3390/rs11101154.

The full text of the source
Annotation:
In this paper, we present a model-free detection-based tracking approach for detecting and tracking moving objects in street scenes from point clouds obtained via a Doppler LiDAR that can not only collect spatial information (e.g., point clouds) but also Doppler images by using Doppler-shifted frequencies. Using our approach, Doppler images are used to detect moving points and determine the number of moving objects followed by complete segmentations via a region growing technique. The tracking approach is based on Multiple Hypothesis Tracking (MHT) with two extensions. One is that a point cloud descriptor, Oriented Ensemble of Shape Function (OESF), is proposed to evaluate the structure similarity when doing object-to-track association. Another is to use Doppler images to improve the estimation of dynamic state of moving objects. The quantitative evaluation of detection and tracking results on different datasets shows the advantages of Doppler LiDAR and the effectiveness of our approach.
APA, Harvard, Vancouver, ISO and other citation styles
33

Zhang, Xueyang, Junhua Xiang and Yulin Zhang. "Space Object Detection in Video Satellite Images Using Motion Information". International Journal of Aerospace Engineering 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/1024529.

The full text of the source
Annotation:
Compared to ground-based observation, space-based observation is an effective approach to catalog and monitor increasing space objects. In this paper, space object detection in a video satellite image with star image background is studied. A new detection algorithm using motion information is proposed, which includes not only the known satellite attitude motion information but also the unknown object motion information. The effect of satellite attitude motion on an image is analyzed quantitatively, which can be decomposed into translation and rotation. Considering the continuity of object motion and brightness change, variable thresholding based on local image properties and detection of the previous frame is used to segment a single-frame image. Then, the algorithm uses the correlation of object motion in multiframe and satellite attitude motion information to detect the object. Experimental results with a video image from the Tiantuo-2 satellite show that this algorithm provides a good way for space object detection.
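The variable thresholding based on local image properties that this abstract describes can be sketched as a mean-of-neighbourhood test; the window size and offset below are illustrative defaults, not the paper's parameters:

```python
def local_threshold(image, win=1, offset=0.0):
    """Mark a pixel as foreground when it exceeds the mean of its
    (2*win+1)-sized neighbourhood by more than `offset`, so the
    threshold adapts to local image properties."""
    h, w = len(image), len(image[0])
    out = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[yy][xx]
                    for yy in range(max(0, y - win), min(h, y + win + 1))
                    for xx in range(max(0, x - win), min(w, x + win + 1))]
            out[y][x] = image[y][x] > sum(vals) / len(vals) + offset
    return out
```

Against a star-field background this keeps an isolated bright pixel while suppressing uniform regions, which is why a locally adaptive threshold suits point-like space objects better than a single global one.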
APA, Harvard, Vancouver, ISO and other citation styles
34

Körez, Atakan, Necaattin Barışçı, Aydın Çetin and Uçman Ergün. "Weighted Ensemble Object Detection with Optimized Coefficients for Remote Sensing Images". ISPRS International Journal of Geo-Information 9, No. 6 (04.06.2020): 370. http://dx.doi.org/10.3390/ijgi9060370.

The full text of the source
Annotation:
The detection of objects in very high-resolution (VHR) remote sensing images has become increasingly popular with the enhancement of remote sensing technologies. High-resolution images from aircrafts or satellites contain highly detailed and mixed backgrounds that decrease the success of object detection in remote sensing images. In this study, a model that performs weighted ensemble object detection using optimized coefficients is proposed. This model uses the outputs of three different object detection models trained on the same dataset. The model’s structure takes two or more object detection methods as its input and provides an output with an optimized coefficient-weighted ensemble. The Northwestern Polytechnical University Very High Resolution 10 (NWPU-VHR10) and Remote Sensing Object Detection (RSOD) datasets were used to measure the object detection success of the proposed model. Our experiments reveal that the proposed model improved the Mean Average Precision (mAP) performance by 0.78%–16.5% compared to stand-alone models and presents better mean average precision than other state-of-the-art methods (3.55% higher on the NWPU-VHR-10 dataset and 1.49% higher when using the RSOD dataset).
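The weighted-ensemble fusion described above can be sketched as a coefficient-weighted average of per-box confidence scores from several detectors; the coefficients here are fixed by hand, whereas the paper optimizes them:

```python
def weighted_ensemble(score_sets, coeffs):
    """Fuse aligned per-box confidence scores from several detectors
    with a convex combination of per-detector coefficients."""
    assert abs(sum(coeffs) - 1.0) < 1e-9, "coefficients should sum to 1"
    n = len(score_sets[0])
    return [sum(c * s[i] for c, s in zip(coeffs, score_sets))
            for i in range(n)]

# Two detectors scoring the same two candidate boxes, weighted 3:1.
fused = weighted_ensemble([[0.9, 0.2], [0.5, 0.6]], [0.75, 0.25])
```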
APA, Harvard, Vancouver, ISO and other citation styles
35

Wang, Jian, Le Yang and Fan Li. "Predicting Arbitrary-Oriented Objects as Points in Remote Sensing Images". Remote Sensing 13, No. 18 (17.09.2021): 3731. http://dx.doi.org/10.3390/rs13183731.

The full text of the source
Annotation:
To detect rotated objects in remote sensing images, researchers have proposed a series of arbitrary-oriented object detection methods, which place multiple anchors with different angles, scales, and aspect ratios on the images. However, a major difference between remote sensing images and natural images is the small probability of overlap between objects in the same category, so the anchor-based design can introduce much redundancy during the detection process. In this paper, we convert the detection problem to a center point prediction problem, where the pre-defined anchors can be discarded. By directly predicting the center point, orientation, and corresponding height and width of the object, our methods can simplify the design of the model and reduce the computations related to anchors. In order to further fuse the multi-level features and get accurate object centers, a deformable feature pyramid network is proposed, to detect objects under complex backgrounds and various orientations of rotated objects. Experiments and analysis on two remote sensing datasets, DOTA and HRSC2016, demonstrate the effectiveness of our approach. Our best model, equipped with Deformable-FPN, achieved 74.75% mAP on DOTA and 96.59% on HRSC2016 with a single-stage model, single-scale training, and testing. By detecting arbitrarily oriented objects from their centers, the proposed model performs competitively against oriented anchor-based methods.
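Predicting the center point, orientation, width, and height, as described above, leaves a small decoding step to recover the rotated box corners. A generic sketch of that decoding (angle in radians, counter-clockwise; not the paper's code):

```python
import math

def decode_center(cx, cy, w, h, angle):
    """Turn a (center, size, orientation) prediction into the four
    corner points of the corresponding rotated rectangle."""
    corners = []
    for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)):
        # Rotate the corner offset about the center, then translate.
        x = cx + dx * math.cos(angle) - dy * math.sin(angle)
        y = cy + dx * math.sin(angle) + dy * math.cos(angle)
        corners.append((x, y))
    return corners
```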
APA, Harvard, Vancouver, ISO and other citation styles
36

Li, Yangyang, Heting Mao, Ruijiao Liu, Xuan Pei, Licheng Jiao and Ronghua Shang. "A Lightweight Keypoint-Based Oriented Object Detection of Remote Sensing Images". Remote Sensing 13, No. 13 (24.06.2021): 2459. http://dx.doi.org/10.3390/rs13132459.

The full text of the source
Annotation:
Object detection in remote sensing images has been widely used in military and civilian fields and is a challenging task due to the complex background, large-scale variation, and dense arrangement in arbitrary orientations of objects. In addition, existing object detection methods rely on increasingly deeper networks, which add considerable computational overhead and parameters and are unfavorable to deployment on edge devices. In this paper, we propose a lightweight keypoint-based oriented object detector for remote sensing images. First, we propose a semantic transfer block (STB) for merging shallow and deep features, which reduces noise and restores semantic information. Then, the proposed adaptive Gaussian kernel (AGK) adapts to objects of different scales and further improves detection performance. Finally, we propose a distillation loss associated with object detection to obtain a lightweight student network. Experiments on the HRSC2016 and UCAS-AOD datasets show that the proposed method adapts to objects of different scales, obtains accurate bounding boxes, and reduces the influence of complex backgrounds. The comparison with mainstream methods proves that our method achieves comparable performance while remaining lightweight.
APA, Harvard, Vancouver, ISO and other citation styles
37

Perenleilkhundev, Gantuya, Mungunshagai Batdemberel, Batnyam Battulga and Suvdaa Batsuuri. "Object Detection from Mongolian Nomadic Environmental Images". Journal of Multimedia Information System 6, No. 4 (31.12.2019): 173–78. http://dx.doi.org/10.33851/jmis.2019.6.4.173.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
38

Kim, Gi-Tae, and Hyun-Soo Kang. "Suspectible Object Detection Method for Radiographic Images". Journal of the Korea Institute of Information and Communication Engineering 18, No. 3 (31.03.2014): 670–78. http://dx.doi.org/10.6109/jkiice.2014.18.3.670.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
39

Hanton, Katherine, Marcus Butavicius, Ray Johnson and Jadranka Sunde. "Improving Infrared Images for Standoff Object Detection". Journal of Computing and Information Technology 18, No. 2 (2010): 151. http://dx.doi.org/10.2498/cit.1001817.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
40

Hoguro, Masahiro, Yuki Inoue, Taizo Umezaki and Takefumi Setta. "Moving Object Detection Using Strip Frame Images". IEEJ Transactions on Electronics, Information and Systems 128, No. 8 (2008): 1277–85. http://dx.doi.org/10.1541/ieejeiss.128.1277.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
41

Vizilter, Y. V., V. S. Gorbatsevich, B. V. Vishnyakov and S. V. Sidyakin. "Object detection in images using morphlet descriptions". Computer Optics 41, No. 3 (01.01.2017): 406–11. http://dx.doi.org/10.18287/2412-6179-2017-41-3-406-411.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
42

Volkov, Vladimir. "Object detection quality in remote sensing images". Procedia Computer Science 176 (2020): 3245–54. http://dx.doi.org/10.1016/j.procs.2020.09.124.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
43

Bergboer, N. H., E. O. Postma and H. J. van den Herik. "Context-based object detection in still images". Image and Vision Computing 24, No. 9 (September 2006): 987–1000. http://dx.doi.org/10.1016/j.imavis.2006.02.024.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
44

Milyaev, S., and I. Laptev. "Towards reliable object detection in noisy images". Pattern Recognition and Image Analysis 27, No. 4 (October 2017): 713–22. http://dx.doi.org/10.1134/s1054661817040149.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
45

Mu, Qi, Zhiqiang He, Yankui Liu and Yu Sun. "Object Detection on Underground Low-quality Images". Journal of Physics: Conference Series 1187, No. 4 (April 2019): 042048. http://dx.doi.org/10.1088/1742-6596/1187/4/042048.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
46

Li, Zhuoling, Minghui Dong, Shiping Wen, Xiang Hu, Pan Zhou and Zhigang Zeng. "CLU-CNNs: Object detection for medical images". Neurocomputing 350 (July 2019): 53–59. http://dx.doi.org/10.1016/j.neucom.2019.04.028.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
47

Kamath, M., S. Chaudhuri and U. B. Desai. "Direct parametric object detection in tomographic images". Image and Vision Computing 16, No. 9-10 (July 1998): 669–76. http://dx.doi.org/10.1016/s0262-8856(98)00082-1.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
48

Hoguro, Masahiro, Yuki Inoue, Taizo Umezaki and Takefumi Setta. "Moving object detection using strip frame images". Electrical Engineering in Japan 172, No. 4 (11.06.2010): 38–47. http://dx.doi.org/10.1002/eej.21095.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
49

Li, Hongliang, Fanman Meng and King Ngi Ngan. "Co-Salient Object Detection From Multiple Images". IEEE Transactions on Multimedia 15, No. 8 (December 2013): 1896–909. http://dx.doi.org/10.1109/tmm.2013.2271476.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
50

Liu, Zhengyi, Qian Xiang, Jiting Tang, Yuan Wang and Peng Zhao. "Robust salient object detection for RGB images". Visual Computer 36, No. 9 (05.12.2019): 1823–35. http://dx.doi.org/10.1007/s00371-019-01778-4.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles