Journal articles on the topic "Unambiguous dynamic real time single object detection"

Click this link to see other types of publications on this topic: Unambiguous dynamic real time single object detection.

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

Consult the top 25 journal articles for your research on the topic "Unambiguous dynamic real time single object detection".

Next to every work in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf and read its abstract online, whenever such details are provided in the metadata.

Browse journal articles from a wide range of disciplines and organize your bibliography correctly.

1

Hashimoto, Naoki, Ryo Koizumi, and Daisuke Kobayashi. "Dynamic Projection Mapping with a Single IR Camera". International Journal of Computer Games Technology 2017 (2017): 1–10. http://dx.doi.org/10.1155/2017/4936285.

Full text source
Abstract:
We propose a dynamic projection mapping system with effective machine-learning and high-speed edge-based object tracking using a single IR camera. The machine-learning techniques are used for precise 3D initial posture estimation from 2D IR images, as a detection process. After the detection, we apply an edge-based tracking process for real-time image projection. In this paper, we implement our proposal and actually achieve dynamic projection mapping. In addition, we evaluate the performance of our proposal through the comparison with a Kinect-based tracking system.
APA, Harvard, Vancouver, ISO, and other styles
2

Gunawan, Chichi Rizka, Nurdin Nurdin, and Fajriana Fajriana. "Design of A Real-Time Object Detection Prototype System with YOLOv3 (You Only Look Once)". International Journal of Engineering, Science and Information Technology 2, no. 3 (October 17, 2022): 96–99. http://dx.doi.org/10.52088/ijesty.v2i3.309.

Full text source
Abstract:
Object detection is an activity that aims to gain an understanding of the classification, concept estimation, and location of objects in an image. As one of the fundamental computer vision problems, object detection can provide valuable information for the semantic understanding of images and videos and is associated with many applications, including image classification. Object detection has recently become one of the most exciting fields in computer vision. This system detects objects using YOLOv3. The You Only Look Once (YOLO) method is one of the fastest and most accurate methods for object detection, even capable of exceeding the capabilities of other algorithms by a factor of two. YOLO is very fast because a single neural network predicts bounding boxes and class probabilities directly from the whole image in one evaluation. In this study, the objects under study are objects around the researcher (random things). The system design uses Unified Modeling Language (UML) diagrams, including use case diagrams, activity diagrams, and class diagrams. The system is built in Python, a high-level programming language that executes instructions directly (interpretively), supports object-oriented programming, and uses dynamic semantics for readable syntax; it can be learned easily because it comes with automatic memory management. Here it is run through the Anaconda prompt and then in a Jupyter Notebook. The purpose of this study was to determine the accuracy and performance of detecting random objects with YOLOv3. The detection result displays the object name and a bounding box with the percentage of accuracy. In this study, the system is also able to recognize objects whether the object is stationary or moving.
APA, Harvard, Vancouver, ISO, and other styles
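As background to the abstract above: single-shot detectors such as YOLOv3 emit many candidate boxes per image, which are then filtered by confidence and by intersection-over-union (IoU) overlap. A minimal pure-Python sketch of that post-processing step (illustrative only; the threshold values are assumptions, not the authors' settings):

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); IoU = intersection area / union area.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, conf_thresh=0.5, iou_thresh=0.45):
    # detections: list of (box, confidence); keep highest-confidence boxes,
    # dropping any box that overlaps an already-kept box too strongly.
    kept = []
    for box, conf in sorted(detections, key=lambda d: -d[1]):
        if conf < conf_thresh:
            continue
        if all(iou(box, k) < iou_thresh for k, _ in kept):
            kept.append((box, conf))
    return kept
```

In practice, frameworks do this suppression internally; OpenCV's DNN module, for instance, exposes it as `cv2.dnn.NMSBoxes`.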
3

Khatab, Esraa, Ahmed Onsy, and Ahmed Abouelfarag. "Evaluation of 3D Vulnerable Objects’ Detection Using a Multi-Sensors System for Autonomous Vehicles". Sensors 22, no. 4 (February 21, 2022): 1663. http://dx.doi.org/10.3390/s22041663.

Full text source
Abstract:
One of the primary tasks undertaken by autonomous vehicles (AVs) is object detection, which comes ahead of object tracking, trajectory estimation, and collision avoidance. Vulnerable road objects (e.g., pedestrians, cyclists, etc.) pose a greater challenge to the reliability of object detection operations due to their continuously changing behavior. The majority of commercially available AVs, and research into them, depends on employing expensive sensors. However, this hinders the development of further research on the operations of AVs. In this paper, therefore, we focus on the use of a lower-cost single-beam LiDAR in addition to a monocular camera to achieve multiple 3D vulnerable object detection in real driving scenarios, all the while maintaining real-time performance. This research also addresses the problems faced during object detection, such as the complex interaction between objects where occlusion and truncation occur, and the dynamic changes in the perspective and scale of bounding boxes. The video-processing module works upon a deep-learning detector (YOLOv3), while the LiDAR measurements are pre-processed and grouped into clusters. The output of the proposed system is objects classification and localization by having bounding boxes accompanied by a third depth dimension acquired by the LiDAR. Real-time tests show that the system can efficiently detect the 3D location of vulnerable objects in real-time scenarios.
APA, Harvard, Vancouver, ISO, and other styles
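The LiDAR pre-processing step mentioned above, grouping measurements into clusters, is often done with a simple gap rule over an angle-ordered scan: consecutive returns closer than a distance threshold belong to the same object. A generic sketch (the 0.5 m gap value is an assumption, not taken from the paper):

```python
def dist(a, b):
    # Euclidean distance between two 2-D points.
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def cluster_scan(points, gap=0.5):
    # points: list of (x, y) LiDAR returns, assumed ordered by scan angle.
    # Start a new cluster whenever the jump to the next point exceeds `gap`.
    clusters = []
    for p in points:
        if clusters and dist(clusters[-1][-1], p) <= gap:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    return clusters
```

Each cluster can then be paired with a camera bounding box to supply the depth dimension the abstract describes.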
4

Kwan, Chiman, David Gribben, Bryan Chou, Bence Budavari, Jude Larkin, Akshay Rangamani, Trac Tran, Jack Zhang, and Ralph Etienne-Cummings. "Real-Time and Deep Learning Based Vehicle Detection and Classification Using Pixel-Wise Code Exposure Measurements". Electronics 9, no. 6 (June 18, 2020): 1014. http://dx.doi.org/10.3390/electronics9061014.

Full text source
Abstract:
One key advantage of compressive sensing is that only a small amount of the raw video data is transmitted or saved. This is extremely important in bandwidth constrained applications. Moreover, in some scenarios, the local processing device may not have enough processing power to handle object detection and classification and hence the heavy duty processing tasks need to be done at a remote location. Conventional compressive sensing schemes require the compressed data to be reconstructed first before any subsequent processing can begin. This is not only time consuming but also may lose important information in the process. In this paper, we present a real-time framework for processing compressive measurements directly without any image reconstruction. A special type of compressive measurement known as pixel-wise coded exposure (PCE) is adopted in our framework. PCE condenses multiple frames into a single frame. Individual pixels can also have different exposure times to allow high dynamic ranges. A deep learning tool known as You Only Look Once (YOLO) has been used in our real-time system for object detection and classification. Extensive experiments showed that the proposed real-time framework is feasible and can achieve decent detection and classification performance.
APA, Harvard, Vancouver, ISO, and other styles
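The statement that "PCE condenses multiple frames into a single frame" with per-pixel exposure times can be sketched as a toy model: each pixel integrates light only during its own exposure window, so a stack of frames collapses into one coded frame. This is an illustrative simplification, not the paper's actual sensing scheme:

```python
def pce_condense(frames, start, length):
    # frames: list of 2-D grids (lists of lists) of equal size.
    # start[r][c]: first frame index at which pixel (r, c) is exposed;
    # length: number of consecutive frames each pixel integrates.
    rows, cols = len(frames[0]), len(frames[0][0])
    coded = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            s = start[r][c]
            coded[r][c] = sum(f[r][c] for f in frames[s:s + length])
    return coded
```

Varying `length` per pixel is what gives the high dynamic range mentioned in the abstract; the paper's contribution is running detection on `coded` directly, without reconstructing `frames`.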
5

Wan, Jixiang, Ming Xia, Zunkai Huang, Li Tian, Xiaoying Zheng, Victor Chang, Yongxin Zhu, and Hui Wang. "Event-Based Pedestrian Detection Using Dynamic Vision Sensors". Electronics 10, no. 8 (April 8, 2021): 888. http://dx.doi.org/10.3390/electronics10080888.

Full text source
Abstract:
Pedestrian detection has attracted great research attention in video surveillance, traffic statistics, and especially in autonomous driving. To date, almost all pedestrian detection solutions are derived from conventional frame-based image sensors with limited reaction speed and high data redundancy. The dynamic vision sensor (DVS), which is inspired by biological retinas, efficiently captures visual information with sparse, asynchronous events rather than dense, synchronous frames. It can eliminate redundant data transmission and avoid motion blur or data leakage in high-speed imaging applications. However, it is usually impractical to feed the event streams directly to conventional object detection algorithms. To address this issue, we first propose a novel event-to-frame conversion method that integrates the inherent characteristics of events more efficiently. Moreover, we design an improved feature extraction network that can reuse intermediate features to further reduce the computational effort. We evaluate the performance of our proposed method on a custom dataset containing multiple real-world pedestrian scenes. The results indicate that our proposed method improves pedestrian detection accuracy by about 5.6–10.8%, and its detection speed is nearly 20% faster than previously reported methods. Furthermore, it can achieve a processing speed of about 26 FPS and an AP of 87.43% when implemented on a single CPU, so it fully meets the requirement of real-time detection.
APA, Harvard, Vancouver, ISO, and other styles
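The event-to-frame conversion described above can be illustrated with a minimal accumulator that turns asynchronous DVS events into a dense frame a conventional detector can consume. This is a generic accumulation scheme, not the authors' specific method:

```python
def events_to_frame(events, width, height, t0, t1):
    # events: iterable of (x, y, t, polarity) with polarity in {+1, -1}.
    # Accumulate only events whose timestamp falls in the slice [t0, t1).
    frame = [[0] * width for _ in range(height)]
    for x, y, t, p in events:
        if t0 <= t < t1:
            frame[y][x] += p
    return frame
```

Sliding the `[t0, t1)` window over the stream yields a sequence of pseudo-frames at whatever rate the downstream detector needs.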
6

Chen, Yen-Chiu, Kun-Ming Yu, Tzu-Hsiang Kao, and Hao-Lun Hsieh. "Deep learning based real-time tourist spots detection and recognition mechanism". Science Progress 104, no. 3_suppl (July 2021): 003685042110442. http://dx.doi.org/10.1177/00368504211044228.

Full text source
Abstract:
More and more information on tourist spots is being represented as pictures rather than text. Consequently, tourists who are interested in a specific attraction shown in pictures may have no idea how to perform a text search to get more information about the interesting tourist spots. In view of this problem, and to enhance the competitiveness of the tourism market, this research proposes an innovative tourist spot identification mechanism, based on deep learning object detection technology, for real-time detection and identification of tourist spots by taking pictures on location or retrieving images from the Internet. This research establishes a tourist spot recognition system, a You Only Look Once version 3 model built in the TensorFlow AI framework, used to identify tourist attractions by taking pictures with a smartphone's camera. To verify the possibility, a set of tourist spots in Hsinchu City, Taiwan is taken as an example. Currently, the tourist spot recognition system can identify 28 tourist spots in Hsinchu. In addition to the attraction recognition feature, tourists can further use this system to obtain more information about 77 tourist spots from the Hsinchu City Government Information Open Data Platform, and then make dynamic travel itinerary plans with Google Maps navigation. Compared with other deep learning models using Faster region-convolutional neural networks or Single-Shot Multibox Detector algorithms on the same data set, the recognition times of the models using You Only Look Once version 3, Faster region-convolutional neural networks, and Single-Shot Multibox Detector are 4.5, 5, and 9 s, respectively, and the mean average precision at IoU = 0.6 is 88.63%, 85%, and 43.19%, respectively. The experimental results show the model using the You Only Look Once version 3 algorithm is more efficient and precise than the models using the Faster region-convolutional neural networks or Single-Shot Multibox Detector algorithms, where You Only Look Once version 3 and Single-Shot Multibox Detector are one-stage learning architectures characterized by efficiency, and Faster region-convolutional neural networks is a two-stage architecture characterized by precision.
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Hanzi, Vinu R. V., Hongliang Ren, Xingpeng Du, Ziyang Chen, and Jixiong Pu. "Single-Shot On-Axis Fizeau Polarization Phase-Shifting Digital Holography for Complex-Valued Dynamic Object Imaging". Photonics 9, no. 3 (February 23, 2022): 126. http://dx.doi.org/10.3390/photonics9030126.

Full text source
Abstract:
Digital holography assisted with inline phase-shifting methods has the benefit of a large field of view and a high resolution, but it is limited in dynamic imaging due to sequential detection of multiple holograms. Here we propose and experimentally demonstrate a single-shot phase-shifting digital holography system based on a highly stable on-axis Fizeau-type polarization interferometry. The compact on-axis design of the system with the capability of instantaneous recording of multiple phase-shifted holograms and with robust stability features makes the technique a novel tool for the imaging of complex-valued dynamic objects. The efficacy of the approach is demonstrated experimentally by complex field imaging of various kinds of reflecting-type static and dynamic objects. Moreover, a quantitative analysis on the robust phase stability and sensitivity of the technique is evaluated by comparing the approach with conventional phase-shifting methods. The high phase stability and dynamic imaging potential of the technique are expected to make the system an ideal tool for quantitative phase imaging and real-time imaging of dynamic samples.
APA, Harvard, Vancouver, ISO, and other styles
8

Aslam, Asra, and Edward Curry. "Investigating response time and accuracy in online classifier learning for multimedia publish-subscribe systems". Multimedia Tools and Applications 80, no. 9 (January 9, 2021): 13021–57. http://dx.doi.org/10.1007/s11042-020-10277-x.

Full text source
Abstract:
The enormous growth of multimedia content in the field of the Internet of Things (IoT) leads to the challenge of processing multimedia streams in real time. Event-based systems are constructed to process event streams, but they cannot natively consume the multimedia event types produced by the Internet of Multimedia Things (IoMT) to answer multimedia-based user subscriptions. Machine learning-based techniques have enabled rapid progress in solving real-world problems and need to be optimised for the low response time of the multimedia event processing paradigm. In this paper, we describe a classifier construction approach for the training of online classifiers that can handle dynamic subscriptions with low response time and provide reasonable accuracy for multimedia event processing. We find that current object detection methods can be configured dynamically for the construction of classifiers in real time, by tuning hyperparameters even when training from scratch. Our experiments demonstrate that deep neural network-based object detection models, with hyperparameter tuning, can improve performance within less training time for the answering of previously unknown user subscriptions. The results from this study show that the proposed online classifier training based model can achieve an accuracy of 79.00% with 15 minutes of training and 84.28% with 1 hour of training from scratch on a single GPU for the processing of multimedia events.
APA, Harvard, Vancouver, ISO, and other styles
9

Golenkov, A. G., A. V. Shevchik-Shekera, M. Yu Kovbasa, I. O. Lysiuk, M. V. Vuichyk, S. V. Korinets, S. G. Bunchuk, S. E. Dukhnin, V. P. Reva, and F. F. Sizov. "THz linear array scanner in application to the real-time imaging and convolutional neural network recognition". Semiconductor Physics, Quantum Electronics and Optoelectronics 24, no. 1 (March 9, 2021): 90–99. http://dx.doi.org/10.15407/spqeo24.01.090.

Full text source
Abstract:
Room-temperature linear arrays (up to 160 detectors in an array) of silicon metal-oxide-semiconductor field-effect transistors (Si-MOSFETs) have been designed for a sub-THz (radiation frequency 140 GHz), close-to-real-time, direct-detection scanner to be used for detection and recognition of hidden objects. For this scanner, an optical system with aspherical lenses has been designed and manufactured. To estimate the quality of the optical system and its resolution, the system modulation transfer function was applied. The scanner can perform real-time imaging with a spatial resolution better than 5 mm at a radiation frequency of 140 GHz and a contrast of 0.5, for moving object speeds up to 200 mm/s and a depth of field of 20 mm. The average dynamic range of the real-time imaging system with the 160-detector linear array is close to 35 dB when sources with an output radiation power of 23 mW (IMPATT diodes) are used (scan speed 200 mm/s). For the system with a 32-detector array, the dynamic range was about 48 dB, and for the single-detector system with raster scanning, about 80 dB with a lock-in amplifier. However, in the latter case, obtaining an image of size 20×40 mm with a step of 1 mm requires an average scanning time close to 15 min. A convolutional neural network was exploited for automatic detection and recognition of hidden items.
APA, Harvard, Vancouver, ISO, and other styles
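The quoted dynamic-range figures (35, 48, and 80 dB) compare the strongest to the weakest resolvable signal on a logarithmic scale. Assuming the usual 20·log10 amplitude convention (the paper may instead quote power ratios), the conversion is:

```python
import math

def dynamic_range_db(strongest, weakest):
    # Amplitude dynamic range in decibels: 20 * log10(signal ratio).
    return 20 * math.log10(strongest / weakest)
```

Under this convention, the 35 dB figure corresponds to a signal ratio of roughly 56:1, and 80 dB to 10,000:1, which is why the slower lock-in, raster-scanned single-detector setup is so much more sensitive.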
10

Wei, A. Hui, and B. Yang Chen. "Robotic object recognition and grasping with a natural background". International Journal of Advanced Robotic Systems 17, no. 2 (March 1, 2020): 172988142092110. http://dx.doi.org/10.1177/1729881420921102.

Full text source
Abstract:
In this article, a novel, efficient grasp synthesis method is introduced that can be used for closed-loop robotic grasping. Using only a single monocular camera, the proposed approach can detect contour information from an image in real time and then determine the precise position of an object to be grasped by matching its contour with a given template. This approach is much lighter than the currently prevailing methods, especially vision-based deep-learning techniques, in that it requires no prior training. With the use of the state-of-the-art techniques of edge detection, superpixel segmentation, and shape matching, our visual servoing method does not rely on accurate camera calibration or position control and is able to adapt to dynamic environments. Experiments show that the approach provides high levels of compliance, performance, and robustness under diverse experiment environments.
APA, Harvard, Vancouver, ISO, and other styles
11

Julham, Meryatul Husna, and Arif Ridho Lubis. "OpenCV Using on a Single Board Computer for Incorrect Facemask-Wearing Detection and Capturing". JOURNAL OF INFORMATICS AND TELECOMMUNICATION ENGINEERING 5, no. 2 (January 26, 2022): 315–23. http://dx.doi.org/10.31289/jite.v5i2.6118.

Full text source
Abstract:
OpenCV (Open Source Computer Vision Library) is a software library intended for real-time dynamic image processing, created by Intel. In this study, the library is used to detect the face, nose, and mouth. The system is equipped with the rule that if the mouth, the nose, or either of them is detected, the face is not wearing the mask correctly, and the system records the face. The system is supported by an image capture device in the form of a camcorder, a processing device in the form of a single-board computer, such as a Raspberry Pi, and a display device in the form of a monitor. The result is a system able to decide whether a face is wearing a mask correctly or not, labeling the face with red angular lines when the mask is not worn properly. The success rate is 88.4% using the detector parameters scale factor = 1.1 for all face, nose, and mouth object libraries.
APA, Harvard, Vancouver, ISO, and other styles
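The decision rule in this abstract reduces to simple boolean logic over the three detections. A sketch, where the boolean inputs stand in for the results of OpenCV cascade detectors:

```python
def mask_worn_correctly(face_found, nose_found, mouth_found):
    # Per the rule described: a visible nose or mouth means the mask
    # does not cover the face properly; with no face, there is nothing
    # to judge.
    if not face_found:
        return None
    return not (nose_found or mouth_found)
```

A `False` result is what would trigger the red angular lines and the capture of the face image.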
12

Winiwarter, L., K. Anders, D. Schröder, and B. Höfle. "VIRTUAL LASER SCANNING OF DYNAMIC SCENES CREATED FROM REAL 4D TOPOGRAPHIC POINT CLOUD DATA". ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2022 (May 17, 2022): 79–86. http://dx.doi.org/10.5194/isprs-annals-v-2-2022-79-2022.

Full text source
Abstract:
Abstract. Virtual laser scanning (VLS) allows the generation of realistic point cloud data at a fraction of the costs required for real acquisitions. It also allows carrying out experiments that would not be feasible or even impossible in the real world, e.g., due to time constraints or when hardware does not exist. A critical part of a simulation is an adequate substitution of reality. In the case of VLS, this concerns the scanner, the laser-object interaction, and the scene. In this contribution, we present a method to recreate a realistic dynamic scene, where the surface changes over time. We first apply change detection and quantification on a real dataset of an erosion-affected high-mountain slope in Tyrol, Austria, acquired with permanent terrestrial laser scanning (TLS). Then, we model and extract the time series of a single change form, and transfer it to a virtual model scene. The benefit of such a transfer is that no physical modelling of the change processes is required. In our example, we use a Kalman filter with subsequent clustering to extract a set of erosion rills from a time series of high-resolution TLS data. The change magnitudes quantified at the locations of these rills are then transferred to a triangular mesh, representing the virtual scene. Subsequently, we apply VLS to investigate the detectability of such erosion rills from airborne laser scanning at multiple subsequent points in time. This enables us to test if, e.g., a certain flying altitude is appropriate in a disaster response setting for the detection of areas exposed to immediate danger. To ensure a successful transfer, the spatial resolution and the accuracy of the input dataset are much higher than the accuracy and resolution that are being simulated. Furthermore, the investigated change form is detected as significant in the input data. We, therefore, conclude the model of the dynamic scene derived from real TLS data to be an appropriate substitution for reality.
APA, Harvard, Vancouver, ISO, and other styles
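The Kalman-filter step used above to extract change time series from TLS data can be illustrated with a minimal one-dimensional filter over a single location's height measurements. The noise variances here are illustrative placeholders, not the study's values:

```python
def kalman_smooth(measurements, q=1e-3, r=0.1):
    # 1-D random-walk Kalman filter: the state is the surface height at
    # one location; q = process noise variance, r = measurement noise
    # variance.
    x, p = measurements[0], 1.0   # initial state estimate and variance
    estimates = [x]
    for z in measurements[1:]:
        p += q                    # predict: uncertainty grows over time
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update toward the new measurement
        p *= (1 - k)
        estimates.append(x)
    return estimates
```

Running such a filter per location, then clustering locations with similar change magnitudes, is the rill-extraction idea the abstract describes.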
13

Hale, Nicholas, Maria Valero, Jinghua Tang, David Moser, and Liudi Jiang. "A preliminary study on characterisation of finger interface kinetics using a pressure and shear sensor system". Prosthetics and Orthotics International 42, no. 1 (August 31, 2017): 60–65. http://dx.doi.org/10.1177/0309364617728121.

Full text source
Abstract:
Background: Our hands constantly handle objects throughout our lives, where a crucial component of this interaction is the detection of grasping (pressure) and slipping (shear) of the object. While there have been a large amount of studies using pressure sensors for grasping detection, synchronised pressure and shear detection at the finger/object interface is still needed. Objectives: This study aims to assess the feasibility of a sensor system designed to detect both pressure and shear at the fingertip/object interface via a single subject test. Study design: Descriptive study, proof of concept. Methods: One healthy subject participated in the study and was asked to perform a single finger test protocol and a simple hand test protocol. The corresponding multidirectional loads at the fingertip/object interface were measured in real time using a pressure and shear sensor system. Results: Results from the finger test protocol show peak values of up to approximately 50 kPa (5 N) and 30 kPa (3 N) of pressure for each test, respectively. Results from the hand test protocol show a pressure and shear profile that shows a large increase in grip force during the initial grasping of the object, with a peak pressure of approximately 50 kPa (5 N). The pressure and shear profile demonstrates that the load is not evenly distributed across all digits. Conclusion: This study provides evidence that the reported sensor system has sufficient resolution, dynamic response and load capability to capture biomechanical information during basic protocols and hand-grasping tasks. Clinical relevance The presented sensor system could be potentially used as a tool for measuring and evaluating hand function and could be incorporated into a prosthetic hand as a tactile feedback system.
APA, Harvard, Vancouver, ISO, and other styles
14

Akhavizadegan, Alireza, Mohd Yamani Idna Idris, Zaidi Bin Razak, Mazlina Binti Mazlan, Norhamizan Binti Hamzah, Ainuddin Wahid Abdul Wahab, and Mehdi Hussain. "REAL-TIME AUTOMATED CONTOUR BASED MOTION TRACKING USING A SINGLE-CAMERA FOR UPPER LIMB ANGULAR MOTION MEASUREMENT". Malaysian Journal of Computer Science 33, no. 1 (January 31, 2020): 52–77. http://dx.doi.org/10.22452/mjcs.vol33no1.4.

Full text source
Abstract:
Upper limb angular motion measurement is particularly a subject of interest in the field of rehabilitation. It is commonly measured manually by a physiotherapist via a goniometer to monitor a stroke patient’s progress. However, manual measurement has many inherent drawbacks, such as the need for both patient and physiotherapist to be in the same place. In this paper, an automated real-time single-camera upper limb angular motion measuring system is proposed for home-based rehabilitation to aid physiotherapists in monitoring a patient’s progress. The measurements of concern are the angles of elbow extension, elbow flexion, wrist flexion, and wrist extension. To do this, we propose a method that utilizes predefined coordinate points extracted from the contours of the object, named contour based motion tracking. The proposed method overcomes problems of prior target tracking techniques such as the Kalman filter, optical flow, and Cam-shift. It includes skin region segmentation and arm modelling for motion tracking. Prior skin segmentation techniques suffer from a fixed threshold value set by the user; we therefore introduce a dynamic threshold based on the lower and upper threshold boundaries of skin regions isolated from the background. To ensure the reliability of our skin segmentation method, we compare it with four related algorithms. The result shows that our skin segmentation method outperformed the prior methods with 93% detection accuracy. Following the segmentation, we model arm motion tracking by formulating mathematical equations of various points and motion velocities to track the arm. We then model the wrist and elbow positions to estimate arm angular motion. The method is put together and tested with a real human subject and other test settings. The result shows that our proposed method is able to produce accurate and reliable readings, with an average error of ±1.25 from the actual physiotherapist reading.
APA, Harvard, Vancouver, ISO, and other styles
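The dynamic-threshold idea described above, deriving segmentation bounds from the observed skin regions rather than a fixed user-chosen value, might be sketched as follows (both the margin parameter and the intensity-only model are hypothetical simplifications of the paper's method):

```python
def dynamic_threshold(skin_samples, margin=5):
    # Derive lower/upper bounds from intensities sampled inside candidate
    # skin regions, instead of a fixed threshold set by the user.
    lo, hi = min(skin_samples), max(skin_samples)
    return lo - margin, hi + margin

def segment(pixels, bounds):
    # Binary mask: 1 where the pixel intensity falls inside the bounds.
    lo, hi = bounds
    return [[1 if lo <= v <= hi else 0 for v in row] for row in pixels]
```

Re-estimating the bounds every frame is what lets such a scheme adapt to lighting changes that defeat a fixed threshold.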
15

Baranova, V. S., V. A. Saetchnikov, and A. A. Spiridonov. "Autonomous Streaming Space Objects Detection Based on a Remote Optical System". Devices and Methods of Measurements 12, no. 4 (December 22, 2021): 272–79. http://dx.doi.org/10.21122/2220-9506-2021-12-4-272-279.

Full text source
Abstract:
Traditional image processing techniques provide sustainable efficiency in the astrometry of deep-space objects and in applied problems of determining the parameters of artificial satellite orbits. But the speed of computing architectures and the capabilities of small optical systems are developing rapidly, which encourages the use of a dynamic video stream for detecting and initializing space objects. The purpose of this paper is to automate the processing of optical measurement data when detecting space objects and to support numerical methods for initial orbit determination. This article presents the implementation of a low-cost autonomous optical system for detecting space objects with remote control elements. The basic algorithm model was developed and tested within the framework of remote control of a simplified optical system based on a Raspberry Pi 4 single-board computer with a modular camera. Under laboratory conditions, a satellite trajectory was simulated for an initial assessment of the assembled algorithmic modules of the OpenCV computer vision library. Based on the simulation results, dynamic real-time detection of the International Space Station was performed from the observation site at longitude 25°41′49″ East, latitude 53°52′36″ North, in the interval 00:54:00–00:54:30 on July 17, 2021 (UTC+03:00). The video-processing result of the pass is presented as centroid coordinates of the International Space Station in the image plane, with a timestamp interval of 0.2 s. This approach provides autonomous raw-data extraction of a space object for numerical methods of initial orbit determination.
APA, Harvard, Vancouver, ISO, and other styles
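The system's output, centroid coordinates of the tracked object in the image plane, reduces each detection to one point per timestamp. A generic sketch of centroid extraction from a binary detection mask (illustrative, not the authors' OpenCV pipeline):

```python
def centroid(mask):
    # mask: 2-D list of 0/1 pixel values; returns the (row, col) centroid
    # of the nonzero pixels, or None if the mask is empty.
    pts = [(r, c) for r, row in enumerate(mask)
                  for c, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)
```

A time-stamped sequence of such centroids is exactly the raw input an initial orbit determination routine needs.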
16

Wang, Maolin, Hongyu Wang, Zhi Wang, and Yumeng Li. "A UAV Visual Relocalization Method Using Semantic Object Features Based on Internet of Things". Wireless Communications and Mobile Computing 2022 (February 11, 2022): 1–11. http://dx.doi.org/10.1155/2022/7299309.

Full text source
Abstract:
Unmanned Air Vehicles (UAVs) have the advantages of high autonomy and strong dynamic deployment capabilities. At the same time, with the rapid development of Internet of Things (IoT) technology, building the IoT on UAVs can break away from the traditional single-line communication mode between UAVs and control terminals, which makes UAVs more intelligent and flexible when performing tasks. When using UAVs to perform IoT tasks, it is necessary to track the UAV's position and pose at all times; when position and pose tracking fails, relocalization is required to restore them. Therefore, how to perform UAV relocalization accurately using visual information has attracted much attention. However, the complex changes in light conditions in the real world pose huge challenges for the visual relocalization of UAVs. Traditional visual relocalization algorithms mostly rely on artificially designed low-level geometric features, which are sensitive to light conditions. In this paper, oriented to the UAV-based IoT, a UAV visual relocalization method using semantic object features is proposed. Specifically, the method uses YOLOv3 as the object detection framework to extract semantic information from the picture and uses it to construct a topological map as a sparse description of the environment. With prior knowledge of the map, a random walk algorithm is used on the association graphs to match the semantic features and the scenes. Finally, the EPnP algorithm is used to solve the position and pose of the UAV, which are returned to the IoT platform. Simulation results show that the proposed method can achieve robust real-time UAV relocalization when scene lighting conditions change dynamically and provides a guarantee for UAVs performing IoT tasks.
APA, Harvard, Vancouver, ISO, and other styles
17

Tsukanova, Elena Yu. "TO THE QUESTION OF LEGAL CONDITIONS PLACE IN THE SYSTEM OF LEGAL FACTS". RUDN Journal of Law 24, no. 1 (December 15, 2020): 25–45. http://dx.doi.org/10.22363/2313-2337-2020-24-1-25-45.

Full text source
Abstract:
The article is devoted to the problems of the formation and positioning of the legal status category in legal science. The relevance of this phenomenon in law is due to the lack of an unambiguous perception of it, which does not allow its place and purpose in the theory of legal facts to be fully determined. The purpose of this article is to determine the philosophical and dialectical basis for the inclusion of this concept in the scientific categorical apparatus of jurisprudence. This will make it possible to identify with sufficient certainty its main characteristics, its place in the classification of legal facts, and its functional relationships with other elements of the legal-factual system. The methodological basis of the article is formed by modern achievements in the theory of knowledge. In the research process, theoretical, general philosophical (dialectics, analysis, synthesis, deduction, the systemic method) and traditional legal methods (formal-logical, normative-dogmatic, and others) were used. In the course of the research, based on the relationship between the dialectical categories of motion and rest, the conclusion was formulated that physical reality is a series of static and dynamic situations. Static circumstances characterizing the stability and sustainability of a phenomenon or object are states. The variability of social relations is due to dynamic circumstances, which serve as the basis for a change of state. This approach allowed us to conclude that states are natural elements of physical being. They can be qualified as real life circumstances and, provided that the rule of law associates a certain legal consequence with them, they should be recognized as legal facts. An analysis of the place of the state in the system of legal facts allowed us to conclude that length of time cannot be considered its qualifying feature. States are characterized by length in time, and it is precisely the “fluidity” of the phenomenon that matters for a specific situation. When a certain process takes a long time but, as applied to the social situation, matters as a single whole, it should be considered an instantaneous fact.
APA, Harvard, Vancouver, ISO, etc. styles
18

Bołoz, Łukasz. "Digital Prototyping on the Example of Selected Self-Propelled Mining Machines". Multidisciplinary Aspects of Production Engineering 3, no. 1 (1.09.2020): 172–83. http://dx.doi.org/10.2478/mape-2020-0015.

Full text source
Abstract:
The changing requirements and needs of users, as well as the demand for new mining machinery solutions, pose a challenge for designers. The time spent on the design phase of a new machine is often reduced to a minimum. Small-lot and sometimes single-piece production entails the need to frequently design and manufacture commercial copies, which are also prototypes. The only effective solution is to apply available CAD and CAE programs enabling digital prototyping. Their use allows a complete machine to be built in a virtual environment. The virtual model is subjected to tests aimed at eliminating collisions, optimizing the construction, and obtaining extensive information concerning load, kinematics, dynamics, stability and power demand. Digital prototyping makes it possible to avoid the majority of errors whose detection and elimination in a real object is time-consuming and expensive. The article presents examples of the application of broadly understood computer-aided design of self-propelled mining machines produced by Mine Master. The effects of applying modelling, FEM strength analysis, static and dynamic simulations, and modelling of drive systems are demonstrated. The methods used to validate computer models and verify the parameters of finished machines are also discussed.
APA, Harvard, Vancouver, ISO, etc. styles
19

Pohlkamp, Christian, Niroshan Nadarajah, Inseok Heo, Dimitros Tziotis, Sven Maschek, Rudolf Drescher, Siegfried Hänselmann et al. "A Completely Digital Workflow for Differentials in Bone Marrow Cytomorphology Supported By Machine Learning Provides Promising Results in Object Detection". Blood 138, Supplement 1 (5.11.2021): 4922. http://dx.doi.org/10.1182/blood-2021-152851.

Full text source
Abstract:
Background: Cytomorphology is an essential method for assessing disease phenotypes. Recently, promising results of automation, digitalization and machine learning (ML) for this gold standard have been demonstrated. We reported on successful integration of such workflows into our lab routine, including automated scanning of peripheral blood smears and ML-based classification of blood cell images (ASH 2020). Following this pilot project, we are focusing on an equivalent approach for bone marrow. Aim: To establish a multistep approach including scanning of bone marrow smears and detection/classification of all kinds of bone marrow cell types in healthy individuals and leukemia patients. Methods: The method includes a pre-scan at 10x magnification for detecting suitable "areas of interest" (AOI) for cytomorphological analysis, a high-resolution capture of a predefinable number of AOI at 40x magnification (always using oil), and automated object detection and classification. For all scanning tasks, a Metafer Scanning System (Zeiss Axio Imager.Z2 microscope, automatic slide feeder SFx80 and automated oil disperser) from MetaSystems (Altlussheim, GER) was used. To generate training data for AOI detection, 37 bone marrow smears were scanned at 10x magnification. Six quality classes of regions (based on the number and distribution of cells) were annotated by hematology experts using polygons. In total, 185,000 grid images were extracted from the annotated regions and used to train a deep neural network (DNN) to distinguish the six quality classes and to generate a position list for a high-resolution scan (40x magnification). In addition, we scanned the labeled AOI of 68 smears at 40x magnification, acquiring colour images (2048x1496 pixels) of bone marrow cell layers. Each single cell was labeled by human investigators using rectangular bounding boxes (in total: 47,118 cells in 511 images). We set up a supervised ML model, using the labeled 40x images as input.
We fine-tuned a COCO-pretrained YOLOv5 model with our dataset and evaluated it using 5-fold cross-validation. To reduce overfitting, image augmentation algorithms were applied. Results: Our first DNN was able to detect (10x magnification) and capture (40x magnification) AOI in bone marrow smears, sorted by quality and within acceptable time spans. The average time for the 10x pre-scan was 6 min. From the resulting position list, the 50 positions with the highest quality values were acquired in an average of 1:30 min. Our second, independent DNN was able to detect nucleated cells at 94% sensitivity and 75% precision in unlabeled bone marrow images (40x magnification). In this model, we overweighted recall over precision (5:1) to avoid missing any objects of interest, assuming that false positive labels could be corrected by human investigators when reviewing digital images. For the classification of single cells, a third independent DNN will be necessary. Currently, different approaches are being tested, including our existing blood cell classifier and a former collaborative bone marrow classification model based on a training set of 100,000 annotated bone marrow cells. Depending on these results, new training data for the generation of a completely new model could be assessed. The two existing models already enable a fully automated digital workflow, including scanning of bone marrow smears and delivery of single-cell image galleries for human classification. Conclusion: We present solutions for multiple-DNN-based tools for bone marrow cytomorphology. They allow working digitally and remotely in routine diagnostics. Final solutions will offer single-cell classifications and galleries for human review and include real-time training of the respective classifier models with dynamic datasets. Figure 1. Disclosures: Haferlach: MLL Munich Leukemia Laboratory: Other: Part ownership. Kern: MLL Munich Leukemia Laboratory: Other: Part ownership.
Haferlach: MLL Munich Leukemia Laboratory: Other: Part ownership.
APA, Harvard, Vancouver, ISO, etc. styles
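The 5:1 recall-to-precision weighting mentioned in the abstract above is not specified further. Assuming it corresponds to an F-beta score with beta = 5 (a common way to encode such a preference, and an assumption here rather than the authors' stated metric), a minimal sketch using the reported 94% sensitivity and 75% precision:

```python
def f_beta(precision: float, recall: float, beta: float) -> float:
    """F-beta score: recall is treated as beta times as important as precision."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Figures reported in the abstract: 94% sensitivity (recall), 75% precision.
score = f_beta(precision=0.75, recall=0.94, beta=5.0)
print(round(score, 3))  # 0.931
```

With these inputs the score is dominated by recall, matching the stated goal of not missing any objects of interest and leaving false positives to human review.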
20

Arakelyan, S. M., V. L. Evstigneev, M. A. Kazaryan, M. N. Gerke, A. F. Galkin, S. V. Zhirnova, A. V. Osipov, G. A. Evstyunin and E. L. Shamanskaya. "DETECTION OF DYNAMIC PROCESSES FOR LASER THERMOSTRENGTHENING OF THE MATERIAL SURFACE IN A REAL-TIME SCALE UNDER ILLUMINATION FROM LASER PLUME IN THE TRANSMISSION CHANNEL OF OPTICAL IMAGES VIA OPTICAL FIBER HARNESS USING A LASER BRIGHTNESS AMPLIFIER". Alternative Energy and Ecology (ISJAEE), no. 31-36 (6.01.2019): 71–85. http://dx.doi.org/10.15518/isjaee.2018.31-36.071-085.

Full text source
Abstract:
The article deals with model/test experiments on laser thermal hardening, with registration of the dynamics of surface modification of the material in real time using a laser projection microscope (monitor) in a pump-probe geometry. Hardening of materials by laser radiation, as well as dimensional processing, is much more environmentally friendly than traditional methods because it happens very quickly and almost without emission of harmful substances. The possibility of observing the surface during the hardening process can improve the quality of processed products. With the help of a laser projection microscope, it is possible to detect the moment of appearance of the transition region arising from the interaction of laser radiation with matter, to monitor the dynamics of its expansion, and to register the appearance of the thermal front, the melting front and oxide fronts, which is relevant in heat treatment processes. A large number of publications are devoted to such methods and their modifications, which confirms the importance and effectiveness of diagnostic methods using a laser projection microscope to study various dynamic processes in the interaction of laser radiation with matter. In this work, the laser projection microscope is upgraded by including an optical fiber harness in the probing channel. The basic physical principles of the resulting system and the existing problems, as well as the prospects of overcoming them under the conditions of specific laser thermal hardening processes, including the use of computer simulation to find optimal optical circuits and modes, are described. Depth-of-field problems for the image obtained through an optical fiber/optical bundle when recording such dynamic processes are discussed, along with ways to overcome them by choosing an appropriate optical scheme. The analysis is also carried out on the basis of computer modeling.
These issues are important for implementing various thermal hardening regimes in experiments with single- and multi-beam radiation of a power laser affecting the object, given an appropriate setting of the laser monitor in the probing channel.
APA, Harvard, Vancouver, ISO, etc. styles
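The depth-of-field issue raised in the abstract above can be made concrete with the standard thin-lens relations (textbook approximations, not taken from the article itself). For focal length \(f\), object distance \(u\), image distance \(v\), f-number \(N\) and acceptable circle of confusion \(c\):

```latex
\frac{1}{f} = \frac{1}{u} + \frac{1}{v},
\qquad
\mathrm{DOF} \approx \frac{2\,N\,c\,u^{2}}{f^{2}}
\quad (f \ll u \ll \text{hyperfocal distance})
```

Stopping down (larger \(N\)) or using a shorter focal length increases the usable depth of field at the cost of light gathering and magnification, which is exactly the trade-off faced when imaging a fast process through a fiber bundle.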
21

Sivokobylenko, Vitaliy F., Andrey P. Nikiforov and Ivan V. Zhuravlov. "Detecting development scenarios of dynamic events in electric power network smart-grids. Part two “Selective Protection”". Applied Aspects of Information Technology 4, no. 4 (21.12.2021): 311–28. http://dx.doi.org/10.15276/aait.04.2021.2.

Full text source
Abstract:
When implementing development concepts in the electric power industry (such as "Smart Grid", "Digital substation" and "Outsourcing of services"), the task of ensuring stable operation of relay protection and automation devices is urgent. The problem is solved with the developed structural-information (SI) method. A method for the selective search of the optimal amount of structured information for automatic decision-making is proposed. The article discusses an algorithm for recognising scenarios of the development of semantic events, which is included in the SI-method. The algorithm is applied uniformly at all hierarchical levels of recognition, based on the decision-making goals of the senior level. Control of the sequence of information events is performed in the dynamics of the passage of events along one path from all relationships according to the structural-information model. Part 1 shows a joint structural-information model consisting of a shaping tree in a dynamic object and a recognition tree in devices. A theoretical description of the algorithm is given using the amplitude and time (Ξ,Η) selectivity windows in the general structural scheme of S-detection. The application of the method at different hierarchical levels of recognition is shown. The decision-making results are presented in two forms: by means of a single semantic signal indicating a group of results, and by filling in the table of the sequence of occurrence of the recognised elementary information components. Part 2 shows the application of the SI-method at different hierarchical levels of recognition for the synthesis of a selective relay, which implements an algorithm for finding a damaged network section with single-phase ground faults in 6-35 kV distribution networks with a Petersen coil. The reasons for the unstable operation of the algorithms of known selective relays are indicated, based on the concepts of scenario recognition.
The improvement of the structure of a selective relay operating on the basis of the criterion of monitoring the coincidence of the first half-waves of the mid-frequency components in the signals of transient processes is considered. Examples of the synthesis of elementary detectors of absolute, relative and cumulative action in relation to a selective relay are given, which make it possible to fill the amount of information for general S-detection. The operation of the synthesised S-detector is simulated on signals from real emergency records of the natural development of damage to the insulation of a network phase, and on artificial event scenarios simulated in the mathematical SI-model.
APA, Harvard, Vancouver, ISO, etc. styles
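As a rough illustration of the coincidence criterion described above (not the authors' actual relay algorithm: band-pass filtering of the mid-frequency components, thresholds and timing windows are all omitted), a sketch that compares the polarity of the first half-waves of two pre-filtered transient signals:

```python
def first_half_wave(signal, eps=1e-9):
    """Return the samples of the first half-wave: from the first
    non-zero sample up to (not including) the first sign change."""
    i = 0
    while i < len(signal) and abs(signal[i]) <= eps:  # skip leading zeros
        i += 1
    if i == len(signal):
        return []
    sign = 1 if signal[i] > 0 else -1
    j = i
    while j < len(signal) and sign * signal[j] > eps:
        j += 1
    return signal[i:j]

def half_waves_coincide(u0, i0):
    """Criterion sketch: a feeder is flagged when the first half-waves
    of the two (pre-filtered) transient signals share one polarity."""
    hu, hi = first_half_wave(u0), first_half_wave(i0)
    if not hu or not hi:
        return False
    return (hu[0] > 0) == (hi[0] > 0)

# toy pre-filtered transients: same initial polarity -> coincidence
u0 = [0.0, 0.4, 0.9, 0.6, -0.2, -0.7]
i0 = [0.1, 0.5, 0.3, -0.1, -0.4, 0.2]
print(half_waves_coincide(u0, i0))  # True
```

A real relay would apply this comparison to the zero-sequence voltage and each feeder's zero-sequence current, within an amplitude/time selectivity window.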
22

Xu, Lijuan, Lihong Zhang and Zhenhua Du. "Coastal Ecological Environment Monitoring and Protection System Based on Multisource Information Fusion Decision". Journal of Sensors 2021 (28.10.2021): 1–15. http://dx.doi.org/10.1155/2021/5194700.

Full text source
Abstract:
As nuclear leakage becomes a concern for more and more industries, research on coastal ecological environment monitoring has become increasingly important. It is therefore necessary to study the currently unsystematic coastal ecological environment monitoring and protection system. Addressing the accuracy of feature fusion and representation of single short environmental information, this paper compares the classification effects of three fusion methods on four classifiers (logistic regression, SVM, random forest, and naive Bayes) to verify the effectiveness of LDA and DS model fusion and to determine a consistent vector representation method for short environmental information data. The paper collects and analyzes coastal data from recent years using multisource information fusion decision-making. The DS (Dempster-Shafer) evidence algorithm is used to collect data on the degree of coastal salinization and relative air humidity, and the DS feature-matching model is then introduced to fuse the whole index system. The method completes the standardized processing of monitoring data (digital conversion, quality control, and data classification), forms interrelated four-dimensional spatiotemporal data, and establishes a distributed, object-oriented, Internet-oriented dynamic management system with real-time and delayed databases. Finally, the paper applies decision-tree processing to the coastal ecological environment monitoring data obtained from multisource information fusion, in order to extract and intuitively analyze special data, and puts forward targeted protection strategies for the coastal ecological environment according to the results of the DS algorithm. The research shows that the multisource information fusion uses 16 indicators and a total of 3251 data records, of which 2866 carry meaningful information and 1869 cover the ecological cycle.
These data are the results of multisource data collection. Based on the multilevel nature of the existing three-dimensional marine environment monitoring system, the study established a comprehensive resource-guarantee framework and divided it into four levels according to the level of the marine monitoring system: country, sea area, locality, and data access point. The guarantee resources involved in each level are introduced in the specific analysis. On the basis of an in-depth analysis of the operational requirements of the three-dimensional marine environment monitoring system and its guarantee resource structure, the comprehensive operational guarantee system is described in terms of its internal structure and external connections. The DS algorithm extracts the status information of the various three-dimensional marine environment monitoring systems and, through the interaction of the subsystems, realizes the operation and maintenance of the monitoring system, providing technical support such as system evaluation and failure analysis. After multisource information fusion and decision-making, the index equilibrium module of the DS algorithm is 0.52, the sensitivity is 0.68, and the independence is 0.42; of these, sensitivity has the largest range. In the simulation results, the eco-economic coefficient can be increased from 12% to 36%. Therefore, using multisource information fusion for quantitative index analysis can provide data support for coastal ecological environment detection and help establish a more complete protection system.
APA, Harvard, Vancouver, ISO, etc. styles
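The DS (Dempster-Shafer) fusion step described above can be illustrated with Dempster's classical rule of combination. The sensor names and mass values below are hypothetical, chosen only to mirror the salinity/humidity example; they are not from the paper:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Masses are dicts mapping frozenset hypotheses to belief mass.
    Masses of intersecting focal elements are multiplied; mass falling
    on the empty intersection (the conflict K) is renormalized away.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    norm = 1.0 - conflict
    return {h: w / norm for h, w in combined.items()}

# two hypothetical evidence sources rating a coastal site:
# N = "normal", P = "polluted", N | P = "cannot tell"
N, P = frozenset("N"), frozenset("P")
salinity = {N: 0.6, P: 0.3, N | P: 0.1}   # salinity-based evidence
humidity = {N: 0.5, P: 0.4, N | P: 0.1}   # humidity-based evidence
fused = dempster_combine(salinity, humidity)
```

Because both sources lean toward "normal", the fused mass on N exceeds either individual mass, which is the basic behaviour the paper relies on when fusing its 16 indicators.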
23

Rawat, Seema, Praveen Kumar, Ishita Singh, Shourya Banerjee, Shabana Urooj and Fadwa Alrowais. "Dynamic Gesture Controlled User Interface Expert HCI System using Adaptative Background Masking: An Aid to Prevent Cross Infections". JOURNAL OF CLINICAL AND DIAGNOSTIC RESEARCH, 2020. http://dx.doi.org/10.7860/jcdr/2020/45065.13961.

Full text source
Abstract:
Human-Computer Interaction (HCI) interfaces need unambiguous instructions in the form of mouse clicks or keyboard taps from the user and thus become complex. To simplify this monotonous task, a real-time hand gesture recognition method using computer vision, image, and video processing techniques has been proposed. Controlling infections has become a major concern in the healthcare environment. Input devices such as keyboards, mice, and touch screens can be a breeding ground for various pathogens and bacteria. Direct use of the hands as an input device is an innovative method for providing natural HCI, ensuring minimal physical contact with devices, i.e., less transmission of bacteria, and can thus prevent cross infections. A Convolutional Neural Network (CNN) is used for object detection and classification. A CNN architecture for 3D object recognition is proposed which consists of two models: 1) a detector, a CNN architecture for the detection of gestures; and 2) a classifier, a CNN for the classification of the detected gestures. By using dynamic hand gesture recognition to interact with the system, interaction possibilities can be expanded through the multidimensional use of hand gestures compared to other input methods. The dynamic hand gesture recognition method aims to replace the mouse for interaction with virtual objects. This work focuses on implementing a method that employs computer vision algorithms and gesture recognition techniques to develop a low-cost interface device for interacting with objects in a virtual environment, such as screens, using hand gestures.
APA, Harvard, Vancouver, ISO, etc. styles
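The abstract above does not detail its adaptive background-masking algorithm. A common technique it may resemble is running-average background subtraction, sketched here on toy one-dimensional grayscale "frames" (the function names and parameters are illustrative assumptions):

```python
def update_background(background, frame, alpha=0.1):
    """Exponential running average: bg <- (1 - alpha) * bg + alpha * frame.
    This lets the model adapt slowly to lighting changes."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def foreground_mask(background, frame, threshold=30):
    """1 where a pixel differs strongly from the background model, else 0."""
    return [1 if abs(f - b) > threshold else 0 for b, f in zip(background, frame)]

# toy 1-D frames of grayscale pixels: a hand enters at the last two pixels
background = [10.0, 10.0, 10.0, 10.0]
frame = [12.0, 9.0, 200.0, 180.0]
mask = foreground_mask(background, frame)           # [0, 0, 1, 1]
background = update_background(background, frame)   # model slowly absorbs change
```

The mask isolates the moving hand from the static scene, after which the detector and classifier CNNs operate only on the masked region.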
24

Guoqiang, Chen, Yi Huailong and Mao Zhuangzhuang. "Vehicle and Pedestrian Detection Based on Multi-level Feature Fusion in Autonomous Driving". Recent Advances in Computer Science and Communications 13 (4.03.2020). http://dx.doi.org/10.2174/2666255813666200304123323.

Full text source
Abstract:
Aims: Factors including light, weather, dynamic objects, seasonal effects and structures pose great challenges for autonomous driving algorithms in the real world. Autonomous vehicles must detect different object obstacles in complex scenes to ensure safe driving.
Background: The ability to detect vehicles and pedestrians is critical to the safe driving of autonomous vehicles. Automated vehicle vision systems must handle extremely wide and challenging scenarios.
Objective: The goal of the work is to design a robust detector for vehicles and pedestrians. The main contribution is the design of the Multi-level Feature Fusion Block (MFFB) and the Detector Cascade Block (DCB). Multi-level feature fusion and multi-step prediction are used, which greatly improve object detection precision.
Methods: The paper proposes a vehicle and pedestrian detector in the form of an end-to-end deep convolutional neural network. The key contributions are the design of the MFFB and the DCB. The former combines inherent multi-level features, joining contextual information with useful multi-level features by fusing high-resolution but semantically weak features with low-resolution but semantically strong ones. The latter uses multi-step prediction, cascades a series of detectors, and combines predictions from multiple feature maps to handle objects of different sizes.
Results: Experiments on the RobotCar and KITTI datasets show that the algorithm achieves high-precision results with real-time detection. It achieves 84.61% mAP on the RobotCar dataset and 81.54% mAP on the well-known KITTI benchmark. In particular, the detection accuracy for the single-category vehicle class reaches 90.02%.
Conclusion: The experimental results show that the proposed algorithm offers a good trade-off between detection accuracy and detection speed, surpassing the current state-of-the-art RefineDet algorithm. The proposed 2D object detector can solve the problem of vehicle and pedestrian detection and improve accuracy, robustness and generalization ability in autonomous driving.
APA, Harvard, Vancouver, ISO, etc. styles
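A single fusion step of the kind the MFFB performs, combining a high-resolution map with an upsampled low-resolution (semantically stronger) map, can be sketched as follows. The nearest-neighbour upsampling and element-wise sum are illustrative simplifications, not the paper's exact block:

```python
def upsample_nearest(feat, factor):
    """Nearest-neighbour upsampling of a 2-D feature map by an integer factor."""
    out = []
    for row in feat:
        wide = [v for v in row for _ in range(factor)]
        out.extend([wide] * factor)   # repeated rows are read-only, never mutated
    return out

def fuse(high_res, low_res):
    """Element-wise sum of a high-resolution map with an upsampled
    low-resolution map: one multi-level feature-fusion step."""
    factor = len(high_res) // len(low_res)
    up = upsample_nearest(low_res, factor)
    return [[h + u for h, u in zip(hr, ur)] for hr, ur in zip(high_res, up)]

high = [[1, 1, 2, 2],
        [1, 1, 2, 2],
        [3, 3, 4, 4],
        [3, 3, 4, 4]]           # high resolution, low semantics
low = [[10, 20],
       [30, 40]]                # low resolution, high semantics
fused = fuse(high, low)         # e.g. fused[0][0] == 11, fused[3][3] == 44
```

Real detection networks perform this fusion on multi-channel tensors with learned lateral convolutions, but the resolution-alignment idea is the same.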
25

Almanzor, Elijah, Nzebo Richard Anvo, Thomas George Thuruthel and Fumiya Iida. "Autonomous detection and sorting of litter using deep learning and soft robotic grippers". Frontiers in Robotics and AI 9 (1.12.2022). http://dx.doi.org/10.3389/frobt.2022.1064853.

Full text source
Abstract:
Road infrastructure is one of the most vital assets of any country. Keeping road infrastructure clean and unpolluted is important for ensuring road safety and reducing environmental risk. However, roadside litter picking is an extremely laborious, expensive, monotonous and hazardous task. Automating the process would save taxpayers' money and reduce the risk for road users and the maintenance crew. This work presents LitterBot, an autonomous robotic system capable of detecting, localizing and classifying common roadside litter. We use a learning-based object detection and segmentation algorithm trained on the TACO dataset for identifying and classifying garbage. We develop a robust modular manipulation framework using soft robotic grippers and a real-time visual-servoing strategy, which enables the manipulator to pick up objects of variable sizes and shapes even in dynamic environments. The robot achieves greater than 80% classified picking and binning success rates in all experiments, validated on a wide variety of test litter objects in static single and cluttered configurations and with dynamically moving test objects. Our results showcase how a deep model trained on an online dataset can be deployed in real-world applications with high accuracy through the appropriate design of a control framework around it.
APA, Harvard, Vancouver, ISO, etc. styles
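The real-time visual-servoing strategy is described above only at a high level. A minimal proportional image-space servo loop (a standard simplification that ignores the soft gripper, camera calibration and 3D kinematics; the pixel coordinates below are hypothetical) might look like:

```python
def servo_step(current, target, gain=0.5):
    """One proportional visual-servoing update: move the end effector a
    fraction of the image-space error toward the target each control tick."""
    return tuple(c + gain * (t - c) for c, t in zip(current, target))

def track(current, target, gain=0.5, tol=1.0, max_steps=100):
    """Iterate servo steps until the detected object is centred (toy 2-D case)."""
    for _ in range(max_steps):
        if max(abs(t - c) for c, t in zip(current, target)) <= tol:
            break
        current = servo_step(current, target, gain)
    return current

# object detected at pixel (120, 80); we want it at the image centre (64, 64)
final = track((120.0, 80.0), (64.0, 64.0))  # converges to within tol of centre
```

Because the target position is re-detected every frame, the same loop keeps converging even when the litter object is moving, which is what makes the grasping robust in dynamic scenes.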