Journal articles on the topic "Embedded Systems, Computer Vision, Object Classification"

To see the other types of publications on this topic, follow the link: Embedded Systems, Computer Vision, Object Classification.

Create a reference in APA, MLA, Chicago, Harvard, and other styles.


Consult the top 50 journal articles for your research on the topic "Embedded Systems, Computer Vision, Object Classification."

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf and read the abstract of the work online, if the corresponding data are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Medina, Adán, Juana Isabel Méndez, Pedro Ponce, Therese Peffer, and Arturo Molina. "Embedded Real-Time Clothing Classifier Using One-Stage Methods for Saving Energy in Thermostats." Energies 15, no. 17 (August 23, 2022): 6117. http://dx.doi.org/10.3390/en15176117.

Abstract:
Energy-saving is a mandatory research topic since the growing population demands additional energy yearly. Moreover, climate change requires more attention to reduce the impact of generating more CO2. As a result, some new research areas need to be explored to create innovative energy-saving alternatives in electrical devices that have high energy consumption. One research area of interest is computer visual classification for reducing energy consumption and keeping thermal comfort in thermostats. Usually, connected thermostats obtain information from sensors for detecting persons and scheduling autonomous operations to save energy. However, there is a lack of knowledge of how computer vision can be deployed in embedded digital systems to analyze clothing insulation in connected thermostats to reduce energy consumption and keep thermal comfort. The clothing classification algorithm embedded in a digital system for saving energy could be a companion device in connected thermostats to obtain the clothing insulation. Currently, there is no connected thermostat on the market using complementary computer visual classification systems to analyze the clothing insulation factor. Hence, this proposal aims to develop and evaluate an embedded real-time clothing classifier that could help to improve the efficiency of heating and ventilation air conditioning systems in homes or buildings. This paper compares six different one-stage object detection and classification algorithms trained with a small custom dataset in two embedded systems and a personal computer to compare the models. In addition, the paper describes how the classifier could interact with the thermostat to tune the temperature set point to save energy and keep thermal comfort. The results confirm that the proposed real-time clothing classifier could be implemented as a companion device in connected thermostats to provide additional information to end-users about making decisions on saving energy.
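As a rough illustration of the set-point tuning the abstract describes, a minimal sketch follows. The clothing class names, clo values, base set point, and sensitivity constant are all hypothetical placeholders for illustration, not values from the paper.

```python
# Illustrative sketch: map a detected clothing class to an insulation (clo)
# estimate and nudge the thermostat set point accordingly. All constants
# below are hypothetical, not the paper's dataset labels or comfort model.

CLO_BY_CLASS = {
    "t_shirt": 0.4,      # light summer clothing
    "long_sleeve": 0.7,
    "sweater": 1.0,      # heavier insulation
    "jacket": 1.3,
}

BASE_SETPOINT_C = 22.0   # comfort set point assumed for a ~0.7 clo ensemble
DEGREES_PER_CLO = 3.0    # assumed sensitivity of comfort temperature to clo

def adjusted_setpoint(detected_class: str) -> float:
    """Lower the heating set point when occupants wear warmer clothing."""
    clo = CLO_BY_CLASS.get(detected_class, 0.7)
    return BASE_SETPOINT_C - DEGREES_PER_CLO * (clo - 0.7)

print(adjusted_setpoint("sweater"))  # 21.1: warmer clothes, lower set point
```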
2

Sengan, Sudhakar, Ketan Kotecha, Indragandhi Vairavasundaram, Priya Velayutham, Vijayakumar Varadarajan, Logesh Ravi, and Subramaniyaswamy Vairavasundaram. "Real-Time Automatic Investigation of Indian Roadway Animals by 3D Reconstruction Detection Using Deep Learning for R-3D-YOLOv3 Image Classification and Filtering." Electronics 10, no. 24 (December 10, 2021): 3079. http://dx.doi.org/10.3390/electronics10243079.

Abstract:
Statistical reports say that, from 2011 to 2021, more than 11,915 stray animals, such as cats, dogs, goats, cows, etc., and wild animals were wounded in road accidents. Most of the accidents occurred due to negligence and drowsiness of drivers. These issues can be handled using stray and wild animal-vehicle interaction and pedestrians' awareness. This paper presents a detailed discussion of GPU-based embedded systems and real-time object detection and tracking (ODT) applications. ML trains machines to recognize images more accurately than humans. This provides a unique and real-time solution using deep-learning real 3D motion-based YOLOv3 (DL-R-3D-YOLOv3) ODT of images on mobility. Besides, it discovers methods for multiple views of flexible objects using 3D reconstruction, especially for stray and wild animals. Computer vision-based IoT devices are also served by this DL-R-3D-YOLOv3 model. It seeks solutions by forecasting image filters to find object properties and semantics for object recognition methods, leading to closed-loop ODT.
3

Osipov, Aleksey, Ekaterina Pleshakova, Sergey Gataullin, Sergey Korchagin, Mikhail Ivanov, Anton Finogeev, and Vibhash Yadav. "Deep Learning Method for Recognition and Classification of Images from Video Recorders in Difficult Weather Conditions." Sustainability 14, no. 4 (February 20, 2022): 2420. http://dx.doi.org/10.3390/su14042420.

Abstract:
The sustainable functioning of the transport system requires solving the problems of identifying and classifying road users in order to predict the likelihood of accidents and prevent abnormal or emergency situations. The emergence of unmanned vehicles on urban highways significantly increases the risks of such events. To improve road safety, intelligent transport systems, embedded computer vision systems, video surveillance systems, and photo radar systems are used. The main problem is the recognition and classification of objects and critical events in difficult weather conditions. For example, water drops, snow, dust, and dirt on camera lenses make images less accurate in object identification, license plate recognition, vehicle trajectory detection, etc. Part of the image is overlapped, distorted, or blurred. The article proposes a way to improve the accuracy of object identification by using the Canny operator to exclude the damaged areas of the image from consideration by capturing the clear parts of objects and ignoring the blurry ones. Only those parts of the image where this operator has detected the boundaries of the objects are subjected to further processing. To classify images by the remaining whole parts, we propose using a combined approach that includes the histogram of oriented gradients (HOG) method, a bag-of-visual-words (BoVW), and a backpropagation neural network (BPNN). For the binary classification of the images of the damaged objects, this method showed a significant advantage over the classical method of convolutional neural networks (CNNs) (79 and 65% accuracies, respectively). The article also presents the results of a multiclass classification of the recognition objects on the basis of the damaged images, with an accuracy spread of 71 to 86%.
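To make the exclusion idea concrete, here is a minimal sketch assuming an OpenCV environment: Canny edges gate which image blocks are considered intact, and only those blocks contribute HOG features. The thresholds and block geometry are illustrative choices, not the authors' settings.

```python
# Sketch: run Canny, keep only blocks where object boundaries were found
# (blurred or dirt-covered regions produce no edges), and describe the
# surviving blocks with HOG. Thresholds and sizes are illustrative.
import cv2
import numpy as np

def hog_on_clear_blocks(gray: np.ndarray, block: int = 64) -> np.ndarray:
    edges = cv2.Canny(gray, 100, 200)
    hog = cv2.HOGDescriptor((block, block), (16, 16), (8, 8), (8, 8), 9)
    feats = []
    h, w = gray.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            # skip blocks where Canny found no boundaries (damaged areas)
            if edges[y:y + block, x:x + block].sum() == 0:
                continue
            feats.append(hog.compute(gray[y:y + block, x:x + block]).ravel())
    return np.array(feats)
```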
4

Zhang, Dong, Alok Desai, and Dah-Jye Lee. "Using synthetic basis feature descriptor for motion estimation." International Journal of Advanced Robotic Systems 15, no. 5 (September 1, 2018): 1729881418803839. http://dx.doi.org/10.1177/1729881418803839.

Abstract:
Development of advanced driver assistance systems has become an important focus for the automotive industry in recent years. Within this field, many computer vision–related functions require motion estimation. This article discusses the implementation of a newly developed SYnthetic BAsis (SYBA) feature descriptor for matching feature points to generate a sparse motion field for analysis. Two motion estimation examples using this sparse motion field are presented. One uses motion classification for monitoring vehicle motion to detect abrupt movement and to provide a rough estimate of the depth of the scene in front of the vehicle. The other detects moving objects for vehicle surrounding monitoring to detect vehicles with movements that could potentially cause collisions. This algorithm detects vehicles that are speeding up from behind, slowing down in front, changing lane, or passing. Four videos are used to evaluate these algorithms. Experimental results verify SYBA's performance and the feasibility of using the resulting sparse motion field in embedded vision sensors for motion-based driver assistance systems.
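A minimal sketch of the sparse-motion-field construction follows. Since SYBA is not available in standard libraries, OpenCV's ORB descriptor stands in for it here; the match-then-difference logic that produces the motion vectors is the transferable part.

```python
# Sketch: build a sparse motion field from matched feature points between two
# grayscale frames. ORB is a stand-in for the SYBA descriptor, which is not
# part of OpenCV; per-match motion vectors are the analysis input.
import cv2
import numpy as np

def sparse_motion_field(frame0: np.ndarray, frame1: np.ndarray):
    orb = cv2.ORB_create(nfeatures=500)
    kp0, des0 = orb.detectAndCompute(frame0, None)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    if des0 is None or des1 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des0, des1)
    # each match yields one motion vector (dx, dy) anchored at point (x, y)
    return [(kp0[m.queryIdx].pt,
             np.subtract(kp1[m.trainIdx].pt, kp0[m.queryIdx].pt))
            for m in matches]
```

Classifying the lengths and directions of these vectors over time is one way to flag abrupt ego-motion or approaching vehicles, as the abstract describes.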
5

Mohan, Navya, and James Kurian. "Design and implementation of shape-based feature extraction engine for vision systems using Zynq SoC." International Journal of Electrical and Computer Engineering Systems 13, no. 2 (February 28, 2022): 109–17. http://dx.doi.org/10.32985/ijeces.13.2.3.

Abstract:
With the great impact of vision and Artificial Intelligence (AI) technology in the fields of quality control, robotic assembly and robot navigation, the hardware implementation of object detection and classification algorithms on embedded platforms has received ever-increasing attention these days. The real-time performance with optimum resource utilization of the implementation, its reliability, and the robustness of the underlying algorithm are the overarching challenges in this field. In this work, an approach employing a fast and accurate vision-based shape-detection algorithm has been proposed and its implementation in a heterogeneous System on Chip (SoC) is discussed. The proposed system determines the centroid distance and its Fourier Transform for the object feature vector extraction and is realized on the Zybo Z7 development board. The ARM processor is responsible for communication with external systems as well as for writing data to the Block RAM (BRAM); the control signals for efficient execution of the memory operations are designed and implemented using a Finite State Machine (FSM) in the Programmable Logic (PL) fabric. Shape feature vector determination has been accelerated using custom modules developed in Verilog, taking full advantage of the possible parallelization and pipeline stages. Meanwhile, industry-standard Advanced eXtensible Interface (AXI) buses are adopted for encapsulating standardized IP cores and building high-speed data exchange bridges between units within the Zynq-7000. The developed system processes images of size 32 × 64 in real time and can generate feature descriptors at a clock rate of 62 MHz. Moreover, the method yields a shape feature vector that is computationally light, scalable and rotation invariant. The hardware design is validated using MATLAB for comparative studies.
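The descriptor itself is easy to prototype in software before committing it to hardware. Below is a minimal Python sketch, assuming OpenCV and a binary input mask; the boundary sample count and normalization choices are illustrative, not the paper's exact design.

```python
# Sketch of the centroid-distance shape signature and its Fourier descriptor.
# Input: a binary (0/255 uint8) mask containing the object.
import cv2
import numpy as np

def fourier_shape_descriptor(binary: np.ndarray, n_samples: int = 64):
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=cv2.contourArea).squeeze(1).astype(float)
    # resample the boundary to a fixed number of points
    idx = np.linspace(0, len(pts) - 1, n_samples).astype(int)
    pts = pts[idx]
    centroid = pts.mean(axis=0)
    signature = np.linalg.norm(pts - centroid, axis=1)  # centroid distance
    spectrum = np.abs(np.fft.fft(signature))
    # dropping phase makes the descriptor rotation/start-point invariant;
    # dividing by the DC term makes it scale invariant
    return spectrum[1:n_samples // 2] / spectrum[0]
```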
6

Kalms, Lester, Pedram Amini Rad, Muhammad Ali, Arsany Iskander, and Diana Göhringer. "A Parametrizable High-Level Synthesis Library for Accelerating Neural Networks on FPGAs." Journal of Signal Processing Systems 93, no. 5 (March 15, 2021): 513–29. http://dx.doi.org/10.1007/s11265-021-01651-5.

Abstract:
In recent years, Convolutional Neural Networks (CNNs) have been incorporated in a large number of applications, including multimedia retrieval and image classification. However, CNN-based algorithms are computationally and resource intensive and therefore difficult to use in embedded systems. FPGA-based accelerators are becoming more and more popular in research and industry due to their flexibility and energy efficiency. However, the available resources and the size of the on-chip memory can limit the performance of an FPGA accelerator for CNNs. This work proposes a High-Level Synthesis (HLS) library for CNN algorithms. It contains seven different streaming-capable CNN (plus two conversion) functions for creating large neural networks with deep pipelines. The different functions have many parameter settings (e.g. for resolution, feature maps, data types, kernel size, parallelization, accuracy, etc.), which also enable compile-time optimizations. Our functions are integrated into the HiFlipVX library, which is an open-source HLS FPGA library for image processing and object detection. This offers the possibility to implement different types of computer vision applications with one library. Due to the various configuration and parallelization possibilities of the library functions, it is possible to implement a high-performance, scalable and resource-efficient system, as our evaluation of the MobileNets algorithm shows.
7

Parise, Cesare V., and Marc O. Ernst. "Multisensory mechanisms for perceptual disambiguation. A classification image study on the stream–bounce illusion." Multisensory Research 26 (2013): 96–97. http://dx.doi.org/10.1163/22134808-000s0068.

Abstract:
Sensory information is inherently ambiguous, and a given signal can in principle correspond to infinite states of the world. A primary task for the observer is therefore to disambiguate sensory information and accurately infer the actual state of the world. Here, we take the stream–bounce illusion as a tool to investigate perceptual disambiguation from a cue-integration perspective, and explore how humans gather and combine sensory information to resolve ambiguity. In a classification task, we presented two bars moving in opposite directions along the same trajectory meeting at the centre. We asked observers to classify such ambiguous displays as streaming or bouncing. Stimuli were embedded in dynamic audiovisual noise, so that through a reverse correlation analysis, we could estimate the perceptual templates used for the classification. Such templates, the classification images, describe the spatiotemporal statistical properties of the noise, which are selectively associated to either percept. Our results demonstrate that the features of both visual and auditory noise, and interactions thereof, strongly biased the final percept towards streaming or bouncing. Computationally, participants' performance is explained by a model involving a matching stage, where the perceptual systems cross-correlate the sensory signals with the internal templates; and an integration stage, where matching estimates are linearly combined to determine the final percept. These results demonstrate that observers use analogous MLE-like integration principles for categorical stimulus properties (stream/bounce decisions) as they do for continuous estimates (object size, position, etc.). Finally, the time-course of the classification images reveals that most of the decisional weight for disambiguation is assigned to information gathered before the physical crossing of the stimuli, thus highlighting a predictive nature of perceptual disambiguation.
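For readers unfamiliar with the reverse-correlation technique, a classification image can be computed as the difference between the average noise fields associated with each response. A toy sketch with arbitrary shapes and simulated responses, purely for illustration:

```python
# Toy reverse-correlation sketch: the classification image is the mean noise
# field on "bounce" trials minus the mean on "stream" trials. Trial counts,
# field sizes, and the random responses are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(size=(1000, 20, 20))      # per-trial noise fields
responses = rng.integers(0, 2, size=1000)    # 1 = "bounce", 0 = "stream"

classification_image = (noise[responses == 1].mean(axis=0)
                        - noise[responses == 0].mean(axis=0))
```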
8

Li, Junfeng, Dehai Zhang, Yu Ma, and Qing Liu. "Lane Image Detection Based on Convolution Neural Network Multi-Task Learning." Electronics 10, no. 19 (September 27, 2021): 2356. http://dx.doi.org/10.3390/electronics10192356.

Abstract:
Based on deep neural network multi-task learning technology, lane image detection is studied to improve the application level of driverless technology, improve assisted driving technology and reduce traffic accidents. The lane line database published by Caltech and the Tucson company is used to extract the ROI (Region of Interest), scale, and inverse perspective transformation as well as to preprocess the image, so as to enrich the data set and improve the efficiency of the algorithm. In this study, ZFNet is used to replace the basic network of VPGNet, and their structures are changed to improve the detection efficiency. Multi-label classification, grid box regression and object mask are used as three task modules to build a multi-task learning network named ZF-VPGNet. Considering that neural networks will be combined with embedded systems in the future, the network was compressed to CZF-VPGNet without excessively affecting the accuracy. Experimental results show that the vision system of driverless technology in this study achieved good test results. In cases of fuzzy lane lines or missing lane-line marks, the improved algorithm can still detect and obtain the correct results, and achieves high accuracy and robustness. CZF-VPGNet achieves high real-time performance (26 FPS), and a single forward pass takes about 36 ms or less.
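One concrete piece of this preprocessing pipeline is the inverse perspective transformation. A minimal sketch, assuming OpenCV; the four source points are hypothetical and would need calibration for a real camera:

```python
# Sketch of inverse perspective mapping to a bird's-eye view of the road
# before lane detection. The source trapezoid below is an illustrative
# placeholder; a real system calibrates these points per camera.
import cv2
import numpy as np

def birds_eye(frame: np.ndarray) -> np.ndarray:
    h, w = frame.shape[:2]
    src = np.float32([[w * 0.45, h * 0.65], [w * 0.55, h * 0.65],
                      [w * 0.95, h * 0.95], [w * 0.05, h * 0.95]])
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, M, (w, h))
```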
9

Nayyar, Anand, Pijush Kanti Dutta Pramanik, and Rajni Mohana. "Introduction to the Special Issue on Evolving IoT and Cyber-Physical Systems: Advancements, Applications, and Solutions." Scalable Computing: Practice and Experience 21, no. 3 (August 1, 2020): 347–48. http://dx.doi.org/10.12694/scpe.v21i3.1568.

Abstract:
Internet of Things (IoT) is regarded as a next-generation wave of Information Technology (IT) after the widespread emergence of the Internet and mobile communication technologies. IoT supports information exchange and networked interaction of appliances, vehicles and other objects, making sensing and actuation possible in a low-cost and smart manner. On the other hand, cyber-physical systems (CPS) are described as the engineered systems which are built upon the tight integration of the cyber entities (e.g., computation, communication, and control) and the physical things (natural and man-made systems governed by the laws of physics). The IoT and CPS are not isolated technologies. Rather, it can be said that IoT is the base or enabling technology for CPS, and CPS is considered the grown-up development of IoT, completing the IoT notion and vision. Both are merged into a closed loop, providing mechanisms for conceptualizing and realizing all aspects of the networked composed systems that are monitored and controlled by computing algorithms and are tightly coupled among users and the Internet. That is, the hardware and the software entities are intertwined, and they typically function on different time and location-based scales. In fact, the linking between the cyber and the physical world is enabled by IoT (through sensors and actuators). CPS that includes traditional embedded and control systems is supposed to be transformed by the evolving and innovative methodologies and engineering of IoT. Several application areas of IoT and CPS are smart building, smart transport, automated vehicles, smart cities, smart grid, smart manufacturing, smart agriculture, smart healthcare, smart supply chain and logistics, etc. Though CPS and IoT have significant overlaps, they differ in terms of engineering aspects. Engineering IoT systems revolves around the uniquely identifiable and internet-connected devices and embedded systems, whereas engineering CPS requires a strong emphasis on the relationship between computation aspects (complex software) and the physical entities (hardware). Engineering CPS is challenging because there is no defined and fixed boundary and relationship between the cyber and physical worlds. In CPS, diverse constituent parts are composed and collaborate together to create unified systems with global behaviour. These systems need to be ensured in terms of dependability, safety, security, efficiency, and adherence to real-time constraints. Hence, designing CPS requires knowledge of multidisciplinary areas such as sensing technologies, distributed systems, pervasive and ubiquitous computing, real-time computing, computer networking, control theory, signal processing, embedded systems, etc. CPS, along with the continuously evolving IoT, has posed several challenges. For example, the enormous amount of data collected from the physical things makes it difficult for Big Data management and analytics, which includes data normalization, data aggregation, data mining, pattern extraction and information visualization. Similarly, the future IoT and CPS need standardized abstraction and architecture that will allow modular designing and engineering of IoT and CPS in global and synergetic applications. Another challenging concern of IoT and CPS is the security and reliability of the components and systems.
Although IoT and CPS have attracted the attention of the research communities and several ideas and solutions are proposed, there are still huge possibilities for innovative propositions to make the IoT and CPS vision successful. The major challenges and research scopes include system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, designing efficient algorithms and protocols, availability and reliability, security and privacy, energy-efficiency and sustainability, etc. It is our great privilege to present Volume 21, Issue 3 of Scalable Computing: Practice and Experience. We received 30 research papers, out of which 14 papers were selected for publication. The objective of this special issue is to explore and report recent advances and disseminate state-of-the-art research related to IoT, CPS and the enabling and associated technologies. The special issue will present new dimensions of research to researchers and industry professionals with regard to IoT and CPS. Vivek Kumar Prasad and Madhuri D Bhavsar in the paper titled "Monitoring and Prediction of SLA for IoT based Cloud" described the mechanisms for monitoring by using the concept of reinforcement learning and prediction of the cloud resources, which form the critical parts of cloud expertise in support of controlling and evolution of the IT resources, implemented using LSTM. The proper utilization of the resources will generate revenues to the provider and also increase the trust factor of the provider of cloud services. For experimental analysis, four parameters have been used, i.e. CPU utilization, disk read/write throughput and memory utilization. Kasture et al. in the paper titled "Comparative Study of Speaker Recognition Techniques in IoT Devices for Text Independent Negative Recognition" compared the performance of features which are used in state-of-the-art speaker recognition models and analysed variants of Mel frequency cepstrum coefficients (MFCC) predominantly used in feature extraction, which can be further incorporated and used in various smart devices. Mahesh Kumar Singh and Om Prakash Rishi in the paper titled "Event Driven Recommendation System for E-Commerce using Knowledge based Collaborative Filtering Technique" proposed a novel system that uses a knowledge base generated from a knowledge graph to identify the domain knowledge of users, items, and relationships among these; a knowledge graph is a labelled multidimensional directed graph that represents the relationship among the users and the items. The proposed approach uses about 100 percent of users' participation in the form of activities during navigation of the web site. Thus, the system operates on the users' interests, which is beneficial for both seller and buyer. The proposed system is compared with baseline methods in the area of recommendation systems using three parameters, precision, recall and NDCG, through online and offline evaluation studies with user data, and it is observed that the proposed system is better compared to other baseline systems. Benbrahim et al.
in the paper titled "Deep Convolutional Neural Network with TensorFlow and Keras to Classify Skin Cancer" proposed a novel classification model to classify skin tumours in images using Deep Learning methodology; the proposed system was tested on the HAM10000 dataset comprising 10,015 dermatoscopic images, and the results show that the proposed system is accurate in the order of 94.06% on the validation set and 93.93% on the test set. Devi B et al. in the paper titled "Deadlock Free Resource Management Technique for IoT-Based Post Disaster Recovery Systems" proposed a new class of techniques that do not perform stringent testing before allocating the resources but still ensure that the system is deadlock-free and the overhead is also minimal. The proposed technique suggests reserving a portion of the resources to ensure no deadlock would occur. The correctness of the technique is proved in the form of theorems. The average turnaround time is approximately 18% lower for the proposed technique over Banker's algorithm, with an optimal overhead of O(m). Deep et al. in the paper titled "Access Management of User and Cyber-Physical Device in DBAAS According to Indian IT Laws Using Blockchain" proposed a novel blockchain solution to track the activities of employees managing the cloud. Employee authentication and authorization are managed through the blockchain server. User authentication related data is stored in the blockchain. The proposed work assists cloud companies to have better control over their employees' activities, thus helping prevent insider attacks on Users and Cyber-Physical Devices. Sumit Kumar and Jaspreet Singh in the paper titled "Internet of Vehicles (IoV) over VANETS: Smart and Secure Communication using IoT" highlighted a detailed description of the Internet of Vehicles (IoV) with current applications, architectures, communication technologies, routing protocols and different issues. The researchers also elaborated research challenges and the trade-off between security and privacy in the area of IoV. Deore et al. in the paper titled "A New Approach for Navigation and Traffic Signs Indication Using Map Integrated Augmented Reality for Self-Driving Cars" proposed a new approach to supplement the technology used in self-driving cars for perception. The proposed approach uses Augmented Reality to create and augment artificial objects of navigational signs and traffic signals based on the vehicle's location. This approach helps navigate the vehicle even if the road infrastructure does not have very good sign indications and markings. The approach was tested locally by creating a local navigational system and a smartphone-based augmented reality app. The approach performed better than the conventional method as the objects were clearer in the frame, which made it easier for the object detection to detect them. Bhardwaj et al. in the paper titled "A Framework to Systematically Analyse the Trustworthiness of Nodes for Securing IoV Interactions" reviewed the literature on IoV and Trust and proposed a Hybrid Trust model that separates the malicious and trusted nodes to secure the interactions of vehicles in IoV. To test the model, simulation was conducted on varied threshold values, and the results show that the PDR of trusted nodes is 0.63, which is higher than the PDR of malicious nodes at 0.15. On the basis of PDR, number of available hops and trust dynamics, the malicious nodes are identified and discarded.
Saniya Zahoor and Roohie Naaz Mir in the paper titled "A Parallelization Based Data Management Framework for Pervasive IoT Applications" highlighted the recent studies and related information in data management for pervasive IoT applications having limited resources. The paper also proposes a parallelization-based data management framework for resource-constrained pervasive applications of IoT. The comparison of the proposed framework is done with the sequential approach through simulations and empirical data analysis. The results show an improvement in energy, processing, and storage requirements for the processing of data on the IoT device in the proposed framework as compared to the sequential approach. Patel et al. in the paper titled "Performance Analysis of Video ON-Demand and Live Video Streaming Using Cloud Based Services" presented a review of video analysis over the LVS & VoDS video application. The researchers compared different messaging brokers, which help to deliver each frame in a distributed pipeline, to analyze the impact of two message brokers for video analysis to achieve LVS & VoDS using AWS elemental services. In addition, the researchers also analysed the Kafka configuration parameters for reliability in full-service mode. Saniya Zahoor and Roohie Naaz Mir in the paper titled "Design and Modeling of Resource-Constrained IoT Based Body Area Networks" presented the design and modeling of a resource-constrained BAN system and also discussed various scenarios of BAN in the context of resource constraints. The researchers also proposed an Advanced Edge Clustering (AEC) approach to manage the resources such as energy, storage, and processing of BAN devices while performing real-time data capture of critical health parameters and detection of abnormal patterns. The comparison of the AEC approach is done with the Stable Election Protocol (SEP) through simulations and empirical data analysis. The results show an improvement in energy, processing time and storage requirements for the processing of data on BAN devices in AEC as compared to SEP. Neelam Saleem Khan and Mohammad Ahsan Chishti in the paper titled "Security Challenges in Fog and IoT, Blockchain Technology and Cell Tree Solutions: A Review" outlined major authentication issues in IoT, mapped their existing solutions and further tabulated Fog and IoT security loopholes. Furthermore, this paper presents Blockchain, a decentralized distributed technology, as one of the solutions for authentication issues in IoT. In addition, the researchers discussed the strength of Blockchain technology, work done in this field, its adoption in the COVID-19 fight, and tabulated various challenges in Blockchain technology. The researchers also proposed the Cell Tree architecture as another solution to address some of the security issues in IoT, outlined its advantages over Blockchain technology and tabulated some future course to stir some attempts in this area. Bhadwal et al. in the paper titled "A Machine Translation System from Hindi to Sanskrit Language Using Rule Based Approach" proposed a rule-based machine translation system to bridge the language barrier between Hindi and Sanskrit by converting any text in Hindi to Sanskrit. The results are produced in the form of two confusion matrices wherein a total of 50 random sentences and 100 tokens (Hindi words or phrases) were taken for system evaluation.
The semantic evaluation of 100 tokens produced an accuracy of 94%, while the pragmatic analysis of 50 sentences produced an accuracy of around 86%. Hence, the proposed system can be used to understand the whole translation process and can further be employed as a tool for learning as well as teaching. Further, this application can be embedded in local communication based assisting Internet of Things (IoT) devices like Alexa or Google Assistant. Anshu Kumar Dwivedi and A.K. Sharma in the paper titled "NEEF: A Novel Energy Efficient Fuzzy Logic Based Clustering Protocol for Wireless Sensor Network" proposed a deterministic novel energy-efficient fuzzy logic-based clustering protocol (NEEF) which considers primary and secondary factors in the fuzzy logic system while selecting cluster heads. After selection of cluster heads, non-cluster head nodes use fuzzy logic for prudent selection of their cluster head for cluster formation. NEEF is simulated and compared with two recent state-of-the-art protocols, namely SCHFTL and DFCR, under two scenarios. Simulation results unveil better performance by balancing the load and improvement in terms of stability period, packets forwarded to the base station, improved average energy and extended lifetime.
10

Kim, Iuliia, João Pedro Matos-Carvalho, Ilya Viksnin, Tiago Simas, and Sérgio Duarte Correia. "Particle Swarm Optimization Embedded in UAV as a Method of Territory-Monitoring Efficiency Improvement." Symmetry 14, no. 6 (May 24, 2022): 1080. http://dx.doi.org/10.3390/sym14061080.

Abstract:
Unmanned aerial vehicles hold great promise for territory monitoring. To integrate them into this sphere, it is necessary to improve their functionality and safety. Computer vision is one of the vital monitoring aspects. In this paper, we developed and validated a methodology for terrain classification. The overall classification procedure consists of the following steps: (1) pre-processing, (2) feature extraction, and (3) classification. For the pre-processing stage, a clustering method based on particle swarm optimization was elaborated, which helps to extract object patterns from the image. Feature extraction is conducted via Gray-Level Co-occurrence Matrix calculation, and the output of the matrix is turned into the input for a feed-forward neural network classification stage. The developed computer vision system showed 88.7% accuracy on the selected test set. These results can provide high-quality territory monitoring; prospectively, we plan to establish a self-positioning system based on computer vision.
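A minimal sketch of the texture-feature stage follows, assuming scikit-image (version 0.19 or later for these function spellings); the distance/angle choices and the four summary properties are illustrative, not necessarily the paper's configuration.

```python
# Sketch: compute a Gray-Level Co-occurrence Matrix per image patch and turn
# its summary statistics into the classifier's input vector.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch: np.ndarray) -> np.ndarray:
    # patch: 2-D uint8 array; one distance/angle pair keeps the sketch small
    glcm = graycomatrix(patch, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])
```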
11

Draper, Bruce A., Allen R. Hanson, and Edward M. Riseman. "Learning Blackboard-Based Scheduling Algorithms for Computer Vision." International Journal of Pattern Recognition and Artificial Intelligence 07, no. 02 (April 1993): 309–28. http://dx.doi.org/10.1142/s0218001493000169.

Abstract:
The goal of image understanding by computer is to identify objects in visual images and (if necessary) to determine their location and orientation. Objects are identified by comparing data extracted from images to an a priori description of the object or object class in memory. It is a generally accepted premise that, in many domains, the timely and appropriate use of knowledge can substantially reduce the complexity of matching image data to object descriptions. Because of the variety and scope of knowledge relevant to different object classes, contexts and viewing conditions, blackboard architectures are well suited to the task of selecting and applying the relevant knowledge to each situation as it is encountered. This paper reviews ten years of work on the UMass VISIONS system and its blackboard-based high-level component, the schema system. The schema system could interpret complex natural scenes when given carefully crafted knowledge bases describing the domain, but its application in practice was limited by the problem of model (knowledge base) acquisition. Experience with the schema system convinced us that learning techniques must be embedded in vision systems of the future to reduce or eliminate the knowledge engineering aspects of system construction. The Schema Learning System (SLS) is a supervised learning system for acquiring knowledge-directed object recognition (control) strategies from training images. The recognition strategies are precompiled reactive sequences of knowledge source invocations that replace the dynamic scheduler found in most blackboard systems. Each strategy is specialized to recognize instances of a specific object class within a specific context. Since the strategies are learned automatically, the knowledge base contains only general-purpose knowledge sources rather than problem-specific control heuristics or sequencing information.
12

Hirabayashi, Manato, Yukihiro Saito, Kosuke Murakami, Akihito Ohsato, Shinpei Kato, and Masato Edahiro. "Vision-Based Sensing Systems for Autonomous Driving: Centralized or Decentralized?" Journal of Robotics and Mechatronics 33, no. 3 (June 20, 2021): 686–97. http://dx.doi.org/10.20965/jrm.2021.p0686.

Abstract:
The perception of the surrounding circumstances is an essential task for fully autonomous driving systems, but its high computational and network loads typically impede a single host machine from taking charge of the systems. Decentralized processing is a candidate to decrease such loads; however, it has not been clear that this approach fulfills the requirements of onboard systems, including low latency and low power consumption. Embedded oriented graphics processing units (GPUs) are attracting great interest because they provide massively parallel computation capacity with lower power consumption compared to traditional GPUs. This study explored the effects of decentralized processing on autonomous driving using embedded oriented GPUs as decentralized units. We implemented a prototype system that off-loaded image-based object detection tasks onto embedded oriented GPUs to clarify the effects of decentralized processing. The results of experimental evaluation demonstrated that decentralized processing and network quantization achieved approximately 27 ms delay between the feeding of an image and the arrival of detection results to the host as well as approximately 7 W power consumption on each GPU and network load degradation in orders of magnitude. Judging from these results, we concluded that decentralized processing could be a promising approach to decrease processing latency, network load, and power consumption toward the deployment of autonomous driving systems.
13

Jasim, Mohammed Saaduldeen, and Mohammed Chachan Younis. "Object-based Classification of Natural Scenes Using Machine Learning Methods." Technium: Romanian Journal of Applied Sciences and Technology 6 (February 8, 2023): 1–22. http://dx.doi.org/10.47577/technium.v6i.8286.

Abstract:
The replication of human intellectual processes by machines, particularly computer systems, is known as artificial intelligence (AI). AI is an intelligent tool that is utilized across sectors to improve decision making, increase productivity, and eliminate repetitive tasks. Machine learning (ML) is a key component of AI, since it includes understanding and developing methods that can learn or improve performance on tasks. For the last decade, ML has been applied in computer vision (CV) applications. In computer vision, systems and computers extract meaningful data from digital videos, photos, and other visual sources and use that information to conduct actions or make suggestions. In this work, we have solved the image segmentation problem for natural images to segment out water, land, and sky. Instead of applying image segmentation directly to the images, the images are pre-processed, and statistical and textural features are then passed through a neural network for pixel-wise semantic segmentation of the images. We chose the 5×5 window over the pixel-by-pixel technique since it requires fewer resources and less time for training and testing.
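To illustrate the windowed feature extraction, here is a minimal sketch; the mean and standard deviation stand in for the paper's statistical and textural features, which are not enumerated in the abstract.

```python
# Sketch: build one feature vector per pixel from its 5x5 neighbourhood,
# producing the per-pixel inputs for a semantic-segmentation classifier.
# The two features used here are illustrative stand-ins.
import numpy as np

def window_features(gray: np.ndarray, win: int = 5) -> np.ndarray:
    pad = win // 2
    padded = np.pad(gray.astype(float), pad, mode="reflect")
    h, w = gray.shape
    feats = np.empty((h, w, 2))
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + win, x:x + win]
            feats[y, x] = (patch.mean(), patch.std())
    return feats.reshape(-1, 2)  # rows align with flattened pixel labels
```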
14

Niewiadomski, Artur, and Kornel Domeradzki. "Object classification with artificial neural networks: A comparative analysis." Studia Informatica, no. 23 (December 22, 2020): 43–56. http://dx.doi.org/10.34739/si.2019.23.03.

Abstract:
Object classification is a problem which has attracted a lot of research attention in recent years. The traditional approach to this problem is built on a shallow trainable architecture meant to detect handcrafted features. That approach works poorly and introduces many complications in situations where one has to work with more than a couple of object types in a high-resolution image. That is why, in the past few years, convolutional and residual neural networks have experienced a tremendous rise in popularity. In this paper, we provide a review of topics related to artificial neural networks and a brief overview of our research. Our review begins with a short introduction to the topic of computer vision. Afterwards, we briefly cover the concepts of neural networks, convolutional and residual neural networks, and their commonly used models. Then we provide a comparative performance analysis of the previously mentioned models in a binary and a multi-label classification problem. Finally, multiple conclusions are drawn, which are to serve as guidelines for future computer vision system implementations.
15

Gurwicz, Yaniv, Raanan Yehezkel, and Boaz Lachover. "Multiclass object classification for real-time video surveillance systems." Pattern Recognition Letters 32, no. 6 (April 2011): 805–15. http://dx.doi.org/10.1016/j.patrec.2011.01.005.

16

Kumar, Lalit, and Dushyant Kumar Singh. "Hardware Response and Performance Analysis of Multicore Computing Systems for Deep Learning Algorithms." Cybernetics and Information Technologies 22, no. 3 (September 1, 2022): 68–81. http://dx.doi.org/10.2478/cait-2022-0028.

Abstract:
With the advancement in the technological world, technologies like Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) are gaining more popularity in many applications of computer vision, such as object classification, object detection, human detection, etc. ML and DL approaches are highly compute-intensive and require advanced computational resources for implementation. Multicore CPUs and GPUs with a large number of dedicated processor cores are typically the more prevailing and effective solutions for the high computational need. In this manuscript, we have come up with an analysis of how these multicore hardware technologies respond to DL algorithms. A Convolutional Neural Network (CNN) model has been trained for three different classification problems using three different datasets. All these experiments have been performed on three different computational resources, i.e., a Raspberry Pi, an Nvidia Jetson Nano board, and a desktop computer. Results are derived for performance analysis in terms of classification accuracy and hardware response for each hardware configuration.
17

Ghani, Arfan, Rawad Hodeify, Chan H. See, Simeon Keates, Dah-Jye Lee, and Ahmed Bouridane. "Computer Vision-Based Kidney’s (HK-2) Damaged Cells Classification with Reconfigurable Hardware Accelerator (FPGA)." Electronics 11, no. 24 (December 19, 2022): 4234. http://dx.doi.org/10.3390/electronics11244234.

Abstract:
In medical and health sciences, the detection of cell injury plays an important role in diagnosis, personal treatment and disease prevention. Despite recent advancements in tools and methods for image classification, it is challenging to classify cell images with higher precision and accuracy. Cell classification based on computer vision offers significant benefits in biomedicine and healthcare. There have been studies reported where cell classification techniques have been complemented by Artificial Intelligence-based classifiers such as Convolutional Neural Networks. These classifiers suffer from the drawback of the scale of computational resources required for training and hence do not offer real-time classification capabilities for an embedded system platform. Field Programmable Gate Arrays (FPGAs) offer the flexibility of hardware reconfiguration and have emerged as a viable platform for algorithm acceleration. Given that the logic resources and on-chip memory available on a single device are still limited, hardware/software co-design is proposed where image pre-processing and network training were performed in software, and trained architectures were mapped onto an FPGA device (Nexys4DDR) for real-time cell classification. This paper demonstrates that the embedded hardware-based cell classifier performs with almost 100% accuracy in detecting different types of damaged kidney cells.
18

Baldoni, Matteo, Cristina Baroglio, and Davide Cavagnino. "Use of IFS Codes for Learning 2D Isolated-Object Classification Systems." Computer Vision and Image Understanding 77, no. 3 (March 2000): 371–87. http://dx.doi.org/10.1006/cviu.1999.0823.

19

Omkar, S. N., Nikhil Asogekar, and Sudarshan Rathi. "Detection, Tracking and Classification of Rogue Drones Using Computer Vision." International Journal of Engineering Applied Sciences and Technology 7, no. 3 (July 1, 2022): 11–19. http://dx.doi.org/10.33564/ijeast.2022.v07i03.003.

Abstract:
The increase in the volume of UAVs has been rapid in the past few years. The utilization of drones has increased considerably in military and commercial setups, with UAVs of all sizes, shapes, and types being used for various applications, from recreational flying to purpose-driven missions. This development has come with challenges and has been identified as a potential source of operational disruptions leading to various security complications, including threats to Critical Infrastructures (CI). Thus, the need for developing fully autonomous anti-UAV Defense Systems (AUDS) has never been more pressing than today. To attenuate and nullify the threat posed by UAVs, whether deliberate or otherwise, this paper presents the holistic design and operational prototype of a drone detection technology based on visual detection, using Digital Image Processing (DIP) and Machine Learning (ML) to detect, track and classify drones accurately. The proposed system uses a background-subtracted frame-difference technique for detecting moving objects, partnered with a Raspberry Pi-powered pan-tilt tracking system to track the moving object. The identification of moving objects is made by a Convolutional Neural Network (CNN) system, the YOLO v4-tiny ML algorithm. The novelty of the proposed system lies in its accuracy, effectiveness with low-cost sensing equipment, and better performance compared to other alternatives. Along with ease of operations, combining the system with other systems like RADAR could be a real game-changer in detection technology. The experimental validation of the proposed technology was carried out in various tests in an uncontrolled outdoor environment (in the presence of clouds, birds, trees, rain, etc.), proving equally effective in all situations and yielding high-quality results.
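A minimal sketch of the frame-difference detection stage, assuming OpenCV; the threshold, dilation count, and minimum blob area are illustrative, not the authors' tuned values.

```python
# Sketch: background-subtracted frame differencing to propose moving-object
# boxes, which a CNN classifier would then label (e.g., drone vs. bird).
import cv2

def moving_object_boxes(prev_gray, curr_gray, min_area=200):
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)   # close small gaps
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```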
20

Ivanov, Y. S., S. V. Zhiganov, and N. N. Liubushkina. "Comparative Analysis of Deep Neural Networks Architectures for Visual Recognition in the Autonomous Transport Systems." Journal of Physics: Conference Series 2096, no. 1 (November 1, 2021): 012101. http://dx.doi.org/10.1088/1742-6596/2096/1/012101.

Abstract:
This paper analyses and presents an experimental investigation of the efficiency of modern models for object recognition in computer vision systems of robotic complexes. In this article, the applicability of transformers for experimental classification problems has been investigated. The comparison results are presented taking into account various limitations specific to robotics. Based on the results of the undertaken studies, recommendations on the use of models in the marine vessel classification problem are proposed.
21

Rehman, Amjad, Tanzila Saba, Muhammad Zeeshan Khan, Robertas Damaševičius, and Saeed Ali Bahaj. "Internet-of-Things-Based Suspicious Activity Recognition Using Multimodalities of Computer Vision for Smart City Security." Security and Communication Networks 2022 (October 5, 2022): 1–12. http://dx.doi.org/10.1155/2022/8383461.

Abstract:
Automatic human activity recognition is one of the milestones of smart city surveillance projects. Human activity detection and recognition aim to identify the activities based on the observations that are being performed by the subject. Hence, vision-based human activity recognition systems have a wide scope in video surveillance, health care systems, and human-computer interaction. Currently, the world is moving towards a smart and safe city concept. Automatic human activity recognition is the major challenge of smart city surveillance. The proposed research work employed fine-tuned YOLO-v4 for activity detection, whereas for classification purposes, 3D-CNN has been implemented. Besides the classification, the presented research model also leverages human-object interaction with the help of intersection over union (IOU). An Internet of Things (IoT) based architecture is implemented to take efficient and real-time decisions. The dataset of exploit classes has been taken from the UCF-Crime dataset for activity recognition. At the same time, the dataset extracted from MS-COCO for suspicious object detection is involved in human-object interaction. This research is also applied to human activity detection and recognition in the university premises for real-time suspicious activity detection and automatic alerts. The experiments have exhibited that the proposed multimodal approach achieves remarkable activity detection and recognition accuracy.
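The human-object interaction step hinges on intersection over union. A minimal sketch of that measure, with boxes given as (x1, y1, x2, y2) corners (an assumed convention, not necessarily the paper's):

```python
# Sketch: intersection over union between two axis-aligned boxes,
# used to link detected persons with suspicious objects.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143
```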
22

Gaonkar, Needhi U. "Road Traffic Analysis Using Computer Vision." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 2002–6. http://dx.doi.org/10.22214/ijraset.2021.37630.

Abstract:
Traffic analysis plays an important role in a transportation system for traffic management. This paper proposes video-based vehicle detection and counting systems based on computer vision. In most transportation systems, cameras are installed in fixed locations. Vehicle detection is the most important requirement in traffic analysis. Vehicle detection, tracking, classification and counting are very useful to people and government for traffic flow, highway monitoring and traffic planning. Vehicle analysis supplies information about traffic flow and traffic peak times on the road. The goal of visual object detection is to locate the vehicle position; tracking in successive frames then detects and connects target vehicles across frames. Recognizing vehicles, and what kind of vehicle, in an ongoing video is useful for traffic analysis; this system can classify vehicles into bicycle, bus, truck, car and motorcycle. The system applies a video-based vehicle counting method to highway traffic video captured by a CCTV camera. The project presents an analysis of the tracking-by-detection approach, which includes detection by YOLO (You Only Look Once) and tracking by the SORT (Simple Online and Realtime Tracking) algorithm. Keywords: vehicle detection, vehicle tracking, vehicle counting, YOLO, SORT, analysis, Kalman filter, Hungarian algorithm.
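The SORT side of this pipeline reduces to matching detections to existing tracks with the Hungarian algorithm over an IoU cost. A minimal sketch of that association step, assuming SciPy and an iou function like the one sketched for the entry above; the minimum-IoU threshold is an illustrative value:

```python
# Sketch of SORT's core association step: assign detections to tracks by
# maximizing total IoU (minimizing 1 - IoU) with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, iou_fn, min_iou=0.3):
    """tracks, detections: lists of (x1, y1, x2, y2) boxes."""
    if not tracks or not detections:
        return []
    cost = np.array([[1.0 - iou_fn(t, d) for d in detections]
                     for t in tracks])
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
    # keep only pairs that overlap enough to count as the same vehicle
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]
```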
23

Teixeira, Eduardo, Beatriz Araujo, Victor Costa, Samuel Mafra, and Felipe Figueiredo. "Literature Review on Ship Localization, Classification, and Detection Methods Based on Optical Sensors and Neural Networks." Sensors 22, no. 18 (September 12, 2022): 6879. http://dx.doi.org/10.3390/s22186879.

Abstract:
Object detection is a common application within the computer vision area. Its tasks include the classic challenges of object localization and classification. As a consequence, object detection is a challenging task. Furthermore, this technique is crucial for maritime applications since situational awareness can bring various benefits to surveillance systems. The literature presents various models to improve automatic target recognition and tracking capabilities that can be applied to and leverage maritime surveillance systems. Therefore, this paper reviews the available models focused on localization, classification, and detection. Moreover, it analyzes several works that apply the discussed models to the maritime surveillance scenario. Finally, it highlights the main opportunities and challenges, encouraging new research in this area.
24

Wu, Yirui, Zhouyu M. Meng, Shivakumara Palaiahnakote, and Tong Lu. "Compressive Sensing Based Convolutional Neural Network for Object Detection." Malaysian Journal of Computer Science 33, no. 1 (January 31, 2020): 78–89. http://dx.doi.org/10.22452/mjcs.vol33no1.5.

Abstract:
Deep neural networks (DNNs) have shown significant performance in several domains, including computer vision and machine learning. Convolutional Neural Networks (CNNs), a particular type of DNN, have shown promising potential in discovering vision-based patterns from quantities of labeled images. Many CNN-based algorithms have thus been proposed to solve the problems of object detection and object recognition. However, CNN-based systems are hard to deploy on embedded systems because they are computationally and storage intensive. In this paper, we propose a method to compress a convolutional neural network to decrease its computation and storage cost by exploiting the inherent redundancy of parameters in different kinds of layers of the CNN architecture. During the compression, we first construct parameter matrices from different kinds of layers and convert the parameter matrices to the frequency domain through the discrete cosine transform (DCT). Due to the smoothness of parameters when processing images, the resulting frequency matrices are dominated by low-frequency components. We thus prune the high-frequency part to emphasize the dominating part of the frequency matrix and make the frequency matrix sparse. Then, the sparse frequency matrices are sampled with a distributed random Gaussian matrix under the guidance of compressive sensing. Finally, we retrain the network with the sampling matrices to fine-tune the remaining parameters. We evaluate the proposed method on several typical convolutional neural networks and show that it outperforms a recent compression approach.
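The transform-and-prune step is easy to sketch in isolation. A minimal illustration, assuming SciPy and a 2-D weight matrix; the kept fraction is an arbitrary choice, and the paper's subsequent Gaussian sampling and retraining stages are omitted:

```python
# Sketch: move a layer's weight matrix to the frequency domain with a 2-D
# DCT, zero the high-frequency coefficients, and reconstruct. This shows
# only the pruning idea, not the full compressive-sensing pipeline.
import numpy as np
from scipy.fft import dctn, idctn

def prune_high_freq(weights: np.ndarray, keep: float = 0.25) -> np.ndarray:
    coeffs = dctn(weights, norm="ortho")
    h, w = coeffs.shape
    mask = np.zeros_like(coeffs)
    mask[: int(h * keep), : int(w * keep)] = 1.0  # keep low-frequency corner
    return idctn(coeffs * mask, norm="ortho")
```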
25

Wang, Jinyeong, and Sanghwan Lee. "Data Augmentation Methods Applying Grayscale Images for Convolutional Neural Networks in Machine Vision." Applied Sciences 11, no. 15 (July 22, 2021): 6721. http://dx.doi.org/10.3390/app11156721.

Abstract:
As automated surface inspection increases manufacturing productivity in smart factories, the demand for machine vision is rising. Recently, convolutional neural networks (CNNs) have demonstrated outstanding performance and solved many problems in the field of computer vision. With that, many machine vision systems adopt CNNs for surface defect inspection. In this study, we developed an effective data augmentation method for grayscale images in CNN-based machine vision with mono cameras. Our method can be applied to grayscale industrial images, and we demonstrated outstanding performance in the image classification and the object detection tasks. The main contributions of this study are as follows: (1) We propose a data augmentation method that can be performed when training CNNs with industrial images taken with mono cameras. (2) We demonstrate that image classification or object detection performance is better when training with the industrial image data augmented by the proposed method. Through the proposed method, many machine-vision-related problems using mono cameras can be effectively solved by using CNNs.
26

Hasan, Mokhtar M., Noor A. Ibraheem, and Noor M. Abdulhadi. "2D Geometric Object Shapes Detection and Classification." Webology 19, no. 1 (January 20, 2022): 1689–702. http://dx.doi.org/10.14704/web/v19i1/web19113.

Abstract:
In computer vision, object detection is a basic process for advanced procedures such as object analysis and tracking; for these further processes, feature extraction plays a vital part in identifying objects accurately. Most existing frameworks may not be able to distinguish objects appropriately when several different objects belong to a single frame. In this work, an automatic detection and recognition system for two-dimensional geometric shapes is proposed. After a proper segmentation pre-processing step, a Genetic Algorithm (GA) is applied to fill each shape. The proposed framework is able to identify numerous objects in the input image, determine the type of each identified object, and label the recognized objects. A statistical method is applied to each shape to extract the objects' corner points by calculating the largest boundary to form the feature vector. Ultimately, the identified objects are classified as geometric shapes such as squares, rectangles, triangles, or circles. The proposed method achieved high accuracy of around 98.3% and an average computational time of 0.521 s, making it an effective classification technique.
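The final labeling rule can be prototyped compactly with contour approximation. A minimal sketch, assuming OpenCV; the epsilon factor and aspect-ratio band are illustrative, and this polygon-count rule is a common stand-in rather than the paper's exact statistical corner method:

```python
# Sketch: approximate a contour by a polygon and label the shape by its
# vertex count; square vs. rectangle is decided by aspect ratio.
import cv2

def classify_contour(contour):
    peri = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.04 * peri, True)
    v = len(approx)
    if v == 3:
        return "triangle"
    if v == 4:
        x, y, w, h = cv2.boundingRect(approx)
        return "square" if 0.95 <= w / h <= 1.05 else "rectangle"
    return "circle" if v > 6 else "polygon"
```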
27

Zedan, Mohammad J. M., Ali I. Abduljabbar, Fahad Layth Malallah, and Mustafa Ghanem Saeed. "Controlling Embedded Systems Remotely via Internet-of-Things Based on Emotional Recognition." Advances in Human-Computer Interaction 2020 (December 5, 2020): 1–10. http://dx.doi.org/10.1155/2020/8895176.

Abstract:
Nowadays, much research attention is focused on human–computer interaction (HCI), specifically in terms of biosignals, which have recently been used for remote control to offer benefits, especially for disabled people or for protection against contagions such as coronavirus. In this paper, a biosignal type, namely the facial emotional signal, is proposed to control electronic devices remotely via emotional vision recognition. The objective is to convert only two facial emotions, a smiling or non-smiling vision signal captured by the camera, into a remote control signal. The methodology combines machine learning (for smiling recognition) and embedded systems (for remote control over IoT). For smiling recognition, the GENKI-4K database is exploited to train a model built in the following sequence of steps: real-time video, snapshot image, preprocessing, face detection, feature extraction using HOG, and finally SVM for classification. The achieved recognition rate is up to 89% for training and testing with 10-fold validation of the SVM. On the IoT side, Arduino and MCU (Tx and Rx) nodes transfer the resulting biosignal remotely as server and client via the HTTP protocol. Promising experimental results were achieved in experiments on 40 individuals who used their emotional biosignals to control several devices, such as closing and opening a door and turning an alarm on or off, over Wi-Fi. The system implementing this research is developed in Matlab; it connects a webcam to an Arduino and an MCU node as an embedded system.
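The HOG-to-SVM stage can be sketched as follows (stand-in arrays replace the GENKI-4K face crops; face detection and the Arduino/IoT link are omitted):

import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(gray_image):
    # Standard HOG descriptor over an aligned grayscale face crop
    return hog(gray_image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

rng = np.random.default_rng(0)
faces = rng.random((20, 64, 64))        # stand-in 64x64 face crops
labels = rng.integers(0, 2, size=20)    # 1 = smiling, 0 = not smiling
X = np.array([hog_features(f) for f in faces])
clf = SVC(kernel='rbf').fit(X, labels)
print(clf.predict(X[:3]))               # would drive the remote-control signal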
28

C, Abhishek. "Development of Hexapod Robot with Computer Vision." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 1796–805. http://dx.doi.org/10.22214/ijraset.2021.37455.

Abstract:
Nowadays many robotic systems are developed with considerable innovation, seeking the flexibility and efficiency of biological systems. The hexapod robot is a good example: a six-legged robot whose walking movements imitate those of insects. It walks on two alternating sets of three legs, which provides the stability, flexibility and mobility to travel on irregular surfaces. With these attributes, hexapod robots can be used to explore irregular surfaces, inhospitable places, or places that are difficult for humans to access. This paper covers the development of a hexapod robot with digital image processing implemented on a Raspberry Pi, as a study of robotic systems with legged locomotion and robotic vision. The work integrates a robotic system with an embedded digital image processing system, programmed in a high-level language using Python. The robot is equipped with a camera to capture real-time video and uses a distance sensor that allows it to detect obstacles. It is self-stabilizing and can detect corners. With 3 degrees of freedom in each of its six legs, the robot has 18 DOF in total. The multiple degrees of freedom at the leg joints allow legged robots to change their movement direction without slippage. Additionally, it is possible to change the height from the ground, introducing damping and decoupling between terrain irregularities and the robot's body and servo motors. Keywords: Hexapod, Raspberry Pi, Computer vision, Object detection, Yolo, Servo Motor, OpenCV.
29

Liu, Yu, and King Ngi Ngan. "Embedded wavelet packet object-based image coding based on context classification and quadtree ordering." Signal Processing: Image Communication 21, no. 2 (February 2006): 143–55. http://dx.doi.org/10.1016/j.image.2005.09.001.

30

Lopes, Jessica Fernandes, Leniza Ludwig, Douglas Fernandes Barbin, Maria Victória Eiras Grossmann, and Sylvio Barbon. "Computer Vision Classification of Barley Flour Based on Spatial Pyramid Partition Ensemble." Sensors 19, no. 13 (July 4, 2019): 2953. http://dx.doi.org/10.3390/s19132953.

Abstract:
Imaging sensors are largely employed in the food processing industry for quality control. Flour from malting barley varieties is a valuable ingredient in the food industry, but its use is restricted due to quality aspects such as color variations and the presence of husk fragments. On the other hand, naked varieties present superior quality, with better visual appearance and nutritional composition for human consumption. Computer Vision Systems (CVS) can provide an automatic and precise classification of samples, but identification of grain and flour characteristics requires more specialized methods. In this paper, we propose a CVS combined with the Spatial Pyramid Partition ensemble (SPPe) technique to distinguish between naked and malting types of twenty-two flour varieties using image features and machine learning. SPPe leverages the analysis of patterns from different spatial regions, providing more reliable classification. Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), J48 decision tree, and Random Forest (RF) classifiers were compared for sample classification. Machine learning algorithms embedded in the CVS were induced based on 55 image features. The results ranged from 75.00% (k-NN) to 100.00% (J48) accuracy, showing that sample assessment by CVS with SPPe is highly accurate, representing a potential technique for automatic barley flour classification.
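Spatial pyramid partitioning of an image can be sketched in a few lines; the per-cell statistics below are generic stand-ins for the paper's 55 image features:

import numpy as np

def pyramid_features(img, levels=(1, 2, 4)):
    # Partition the image into progressively finer grids and pool
    # simple statistics per cell, concatenating everything.
    feats = []
    h, w = img.shape[:2]
    for n in levels:
        for i in range(n):
            for j in range(n):
                cell = img[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n]
                feats += [cell.mean(), cell.std()]
    return np.array(feats)

rng = np.random.default_rng(1)
flour = rng.random((120, 120))          # stand-in flour sample image
print(pyramid_features(flour).shape)    # (1 + 4 + 16) cells x 2 stats = (42,)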
31

Raiser, Stefan, Edwin Lughofer, Christian Eitzinger, and James Edward Smith. "Impact of object extraction methods on classification performance in surface inspection systems." Machine Vision and Applications 21, no. 5 (August 27, 2009): 627–41. http://dx.doi.org/10.1007/s00138-009-0205-z.

32

Gururaj, Vaishnavi, Shriya Varada Ramesh, Sanjana Satheesh, Ashwini Kodipalli, and Kusuma Thimmaraju. "Analysis of deep learning frameworks for object detection in motion." International Journal of Knowledge-based and Intelligent Engineering Systems 26, no. 1 (June 8, 2022): 7–16. http://dx.doi.org/10.3233/kes-220002.

Abstract:
Object detection and recognition is a computer vision technology and is considered one of the most challenging tasks in the field. Many approaches to detection have been proposed in the past. AIM: This paper discusses existing detection and classification techniques based on deep Convolutional Neural Networks (CNNs), with emphasis on the training and accuracy of the different CNN models. METHODS: In the proposed work, Faster R-CNN, YOLO and SSD are used to detect helmets. OUTCOME: The survey shows that MobileNet has higher accuracy than VGG16, VGG19 and Inception V3 and was therefore chosen for use with SSD. The impact of differences in the amount of training of each algorithm is highlighted, which helps in understanding the advantages and disadvantages of each algorithm and in deducing the most suitable one.
33

Moroz, Mykola, Denys Berestov, and Oleg Kurchenko. "Analysis of visual object tracking algorithms for real-time systems." Advanced Information Technology, no. 1 (1) (2021): 59–65. http://dx.doi.org/10.17721/ait.2021.1.08.

Abstract:
The article analyzes the latest achievements and solutions in visual tracking of a target object in the field of computer vision, considers approaches to choosing an algorithm for visual tracking of objects in video sequences, and highlights the main visual features on which object tracking can be based. The criteria that influence the choice of a real-time target-tracking algorithm are defined: for real-time tracking with limited computing resources, the choice of an appropriate algorithm is crucial, and the choice is also influenced by the requirements and limitations on the monitored objects and by prior knowledge or assumptions about them. As a result of the analysis, the Staple tracking algorithm was preferred according to the criterion of speed, which is a crucial indicator in the design and development of software and hardware for automated real-time visual tracking of objects in a video stream for various surveillance and security systems, traffic monitoring, activity recognition and other embedded systems.
34

Hassan, Adel, and Muath Sabha. "Feature Extraction for Image Analysis and Detection using Machine Learning Techniques." International Journal of Advanced Networking and Applications 14, no. 04 (2023): 5499–508. http://dx.doi.org/10.35444/ijana.2023.14401.

Abstract:
Feature extraction is the most vital step in image classification for producing high-quality, content-rich images for further analysis, image detection, segmentation, and object recognition. Among machine learning algorithms, deep learning models such as the convolutional neural network (CNN) have become necessary to train, classify, and recognize images and objects as humans do. Combining feature extraction with machine learning classification to locate and identify objects in images can then serve as the input to automatic target recognition (ATR) systems such as CCTV surveillance, enhancing these systems and reducing the time and effort needed for object detection and recognition. This is based on digital image processing techniques, especially image segmentation, which differ from the pure computer vision approach. This article uses machine learning and deep learning algorithms to facilitate and achieve the study's objectives.
35

Soudy, Mohamed, Yasmine M. Afify, and Nagwa Badr. "GenericConv: A Generic Model for Image Scene Classification Using Few-Shot Learning." Information 13, no. 7 (June 28, 2022): 315. http://dx.doi.org/10.3390/info13070315.

Abstract:
Scene classification is one of the most complex tasks in computer vision. The accuracy of scene classification depends on other subtasks such as object detection and object classification. Accurate results may be accomplished by employing object detection in scene classification, since prior information about objects in the image leads to an easier interpretation of the image content. Machine and transfer learning are widely employed in scene classification, achieving optimal performance. Despite the promising performance of existing models in scene classification, major issues remain. First, the training phase for the models necessitates a large amount of data, which is difficult and time-consuming to gather. Furthermore, most models rely on data previously seen in the training set, resulting in ineffective models that can only identify samples similar to the training set. As a result, few-shot learning has been introduced. Although few attempts have been reported applying few-shot learning to scene classification, they resulted in perfect accuracy. Motivated by these findings, in this paper we implement a novel few-shot learning model, GenericConv, for scene classification, evaluated using the benchmark MiniSun, MiniPlaces, and MIT-Indoor 67 datasets. The experimental results show that the proposed GenericConv model outperforms the other benchmark models on the three datasets, achieving accuracies of 52.16 ± 0.015, 35.86 ± 0.014, and 37.26 ± 0.014 for five-shots on MiniSun, MiniPlaces, and MIT-Indoor 67, respectively.
36

Gao, Hongbo, Bo Cheng, Jianqiang Wang, Keqiang Li, Jianhui Zhao, and Deyi Li. "Object Classification Using CNN-Based Fusion of Vision and LIDAR in Autonomous Vehicle Environment." IEEE Transactions on Industrial Informatics 14, no. 9 (September 2018): 4224–31. http://dx.doi.org/10.1109/tii.2018.2822828.

37

Ming, John, and Bir Bhanu. "ORACLE: An Integrated Learning Approach for Object Recognition." International Journal of Pattern Recognition and Artificial Intelligence 11, no. 06 (September 1997): 961–90. http://dx.doi.org/10.1142/s0218001497000445.

Abstract:
Model-based object recognition has become a popular paradigm in computer vision research. In most current model-based vision systems, the object models used for recognition are generally given a priori (e.g., obtained from a CAD model). For many object recognition applications, it is not realistic to utilize a fixed object model database with static model features. Rather, it is desirable to have a recognition system capable of performing automated object model acquisition and refinement. In order to achieve these capabilities, we have developed a system called ORACLE: Object Recognition Accomplished through Consolidated Learning Expertise. It uses two machine learning techniques known as Explanation-Based Learning (EBL) and Structured Conceptual Clustering (SCC), combined in a synergistic manner. Compared to systems which learn from numerous positive and negative examples, EBL allows the generalization of object model descriptions from a single example. Using these generalized descriptions, SCC constructs an efficient classification tree which is incrementally built and modified over time. Learning from experience is used to dynamically update the specific feature values of each object. These capabilities provide a dynamic object model database which allows the system to exhibit improved performance over time. We provide an overview of the ORACLE system and present experimental results using a database of thirty aircraft models.
38

Li, Guoming, Yanbo Huang, Zhiqian Chen, Gary D. Chesser, Joseph L. Purswell, John Linhoss, and Yang Zhao. "Practices and Applications of Convolutional Neural Network-Based Computer Vision Systems in Animal Farming: A Review." Sensors 21, no. 4 (February 21, 2021): 1492. http://dx.doi.org/10.3390/s21041492.

Abstract:
Convolutional neural network (CNN)-based computer vision systems have been increasingly applied in animal farming to improve animal management, but current knowledge, practices, limitations, and solutions of the applications remain to be expanded and explored. The objective of this study is to systematically review applications of CNN-based computer vision systems on animal farming in terms of the five deep learning computer vision tasks: image classification, object detection, semantic/instance segmentation, pose estimation, and tracking. Cattle, sheep/goats, pigs, and poultry were the major farm animal species of concern. In this research, preparations for system development, including camera settings, inclusion of variations for data recordings, choices of graphics processing units, image preprocessing, and data labeling were summarized. CNN architectures were reviewed based on the computer vision tasks in animal farming. Strategies of algorithm development included distribution of development data, data augmentation, hyperparameter tuning, and selection of evaluation metrics. Judgment of model performance and performance based on architectures were discussed. Besides practices in optimizing CNN-based computer vision systems, system applications were also organized based on year, country, animal species, and purposes. Finally, recommendations on future research were provided to develop and improve CNN-based computer vision systems for improved welfare, environment, engineering, genetics, and management of farm animals.
39

Barba-Guaman, Luis, José Eugenio Naranjo, and Anthony Ortiz. "Deep Learning Framework for Vehicle and Pedestrian Detection in Rural Roads on an Embedded GPU." Electronics 9, no. 4 (March 31, 2020): 589. http://dx.doi.org/10.3390/electronics9040589.

Abstract:
Object detection is one of the most fundamental and challenging problems in computer vision. Nowadays, dedicated embedded systems have emerged as a powerful strategy for delivering high processing capabilities, including the NVIDIA Jetson family. The aim of the present work is the recognition of objects in complex rural areas through an embedded system, as well as the verification of accuracy and processing time. For this purpose, a low-power embedded graphics processing unit (the Jetson Nano) was selected, which allows multiple neural networks to run simultaneously and a computer vision algorithm to be applied for image recognition. The performance of deep learning neural networks such as ssd-mobilenet v1 and v2, pednet, multiped and ssd-inception v2 was tested. It was found that accuracy and processing time improved in some cases when all the models suggested in the research were applied. The pednet network model provides high performance in pedestrian recognition; however, the ssd-mobilenet v2 and ssd-inception v2 models are better at detecting other objects, such as vehicles, in complex scenarios.
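On the Jetson side, such a pipeline is typically built on NVIDIA's jetson-inference Python bindings. A hedged sketch, assuming that library is installed on the device and an on-board CSI camera is used (model name as in the paper; threshold and video endpoints are assumptions):

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")       # on-board CSI camera
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)                   # pedestrians, vehicles, ...
    display.Render(img)
    display.SetStatus("detection | {:.0f} FPS".format(net.GetNetworkFPS()))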
40

Urban, David, and Alice Caplier. "Time- and Resource-Efficient Time-to-Collision Forecasting for Indoor Pedestrian Obstacles Avoidance." Journal of Imaging 7, no. 4 (March 25, 2021): 61. http://dx.doi.org/10.3390/jimaging7040061.

Abstract:
As difficult vision-based tasks like object detection and monocular depth estimation make their way into real-time applications, and as more lightweight solutions for autonomous vehicle navigation systems emerge, obstacle detection and collision prediction remain two very challenging tasks for small embedded devices like drones. We propose a novel lightweight and time-efficient vision-based solution to predict Time-to-Collision from a monocular video camera embedded in a smartglasses device, as a module of a navigation system for visually impaired pedestrians. It consists of two modules: a static data extractor, made of a convolutional neural network, that predicts the obstacle position and distance, and a dynamic data extractor that stacks the obstacle data from multiple frames and predicts the Time-to-Collision with a simple fully connected neural network. This paper focuses on the Time-to-Collision network's ability to adapt to new sceneries with different types of obstacles through supervised learning.
41

Alzahrani, Ali, and Md Al-Amin Bhuiyan. "Feature selection for urban land cover classification employing genetic algorithm." Bulletin of Electrical Engineering and Informatics 11, no. 2 (April 1, 2022): 793–802. http://dx.doi.org/10.11591/eei.v11i2.3399.

Abstract:
Feature selection has attracted substantial research interest in image processing, computer vision, pattern recognition and related fields because of the tremendous dimensionality reduction it brings to image analysis. This research addresses a genetic algorithm-based feature selection strategy for urban land cover classification, whose principal purpose is to monitor land cover alterations in satellite imagery for urban planning. The method is based on object-based classification, detecting the object area of a given image using knowledge of the object's visual information from remote sensing images. The classification system is organized as a multilayer perceptron with a genetic algorithm (MLPGA). Experimental results explicitly indicate that this MLPGA-based hybrid feature selection procedure performs classification with 94% sensitivity, 90% specificity and 89% precision, attaining better performance than the counterpart methods in terms of classification accuracy.
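A compact sketch of GA-driven feature selection of this general kind, with binary-mask chromosomes scored by cross-validated classifier accuracy (the data, MLP size, and GA hyperparameters are illustrative stand-ins, and crossover is omitted for brevity):

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((150, 20))               # stand-in per-object image features
y = rng.integers(0, 3, size=150)        # stand-in land cover classes

def fitness(mask):
    """Cross-validated MLP accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(10, X.shape[1]))     # binary-mask chromosomes
for _ in range(5):                                  # a few generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-4:]]          # truncation selection
    children = parents[rng.integers(0, 4, size=10)].copy()  # clones, no crossover
    flip = rng.random(children.shape) < 0.05        # bit-flip mutation
    children[flip] ^= 1
    pop = children
best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))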
42

Upadhyay, Jatin, Abhishek Rawat, Dipankar Deb, Vlad Muresan, and Mihaela-Ligia Unguresan. "An RSSI-Based Localization, Path Planning and Computer Vision-Based Decision Making Robotic System." Electronics 9, no. 8 (August 17, 2020): 1326. http://dx.doi.org/10.3390/electronics9081326.

Abstract:
A robotic navigation system operates flawlessly under an adequate GPS signal range, whereas indoor navigation systems use simultaneous localization and mapping or other vision-based localization systems. The sensors used in such indoor navigation systems are not suitable for low-power, small-scale robotic systems. Wireless area network transmitting devices have fixed transmission power, and receivers obtain different signal strength values depending on their surrounding environments. In the proposed method, the received signal strength indicator (RSSI) values of three fixed transmitter units are measured every 1.6 m in a mesh format and analyzed by classifiers, so that the robot's position can be mapped in the indoor area. After navigation, the robot analyzes objects and detects and recognizes human faces with the help of object recognition and facial recognition-based classification methods, respectively. The robot thus detects an intruder, together with its current position, in an indoor environment.
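The RSSI fingerprinting step can be sketched as a nearest-neighbour lookup over per-cell signal fingerprints (the dBm values and cell labels below are illustrative, not measurements from the paper):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Fingerprint database: RSSI (dBm) from 3 fixed transmitters per 1.6 m grid cell.
fingerprints = np.array([[-40, -62, -70],
                         [-55, -48, -66],
                         [-68, -59, -45],
                         [-50, -52, -58]])
cells = np.array(["A1", "A2", "B1", "B2"])      # mesh cell labels

locator = KNeighborsClassifier(n_neighbors=1).fit(fingerprints, cells)
print(locator.predict([[-54, -50, -60]]))       # nearest fingerprint, here "B2"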
43

Basheer Ahmed, Mohammed Imran, Rim Zaghdoud, Mohammed Salih Ahmed, Razan Sendi, Sarah Alsharif, Jomana Alabdulkarim, Bashayr Adnan Albin Saad, Reema Alsabt, Atta Rahman, and Gomathi Krishnasamy. "A Real-Time Computer Vision Based Approach to Detection and Classification of Traffic Incidents." Big Data and Cognitive Computing 7, no. 1 (January 28, 2023): 22. http://dx.doi.org/10.3390/bdcc7010022.

Abstract:
To constructively ameliorate and enhance traffic safety measures in Saudi Arabia, a prolific number of AI (Artificial Intelligence) traffic surveillance technologies have emerged, including Saher, throughout the past years. However, rapidly detecting a vehicle incident can play a cardinal role in ameliorating the response speed of incident management, which in turn minimizes road injuries that have been induced by the accident’s occurrence. To attain a permeating effect in increasing the entailed demand for road traffic security and safety, this paper presents a real-time traffic incident detection and alert system that is based on a computer vision approach. The proposed framework consists of three models, each of which is integrated within a prototype interface to fully visualize the system’s overall architecture. To begin, the vehicle detection and tracking model utilized the YOLOv5 object detector with the DeepSORT tracker to detect and track the vehicles’ movements by allocating a unique identification number (ID) to each vehicle. This model attained a mean average precision (mAP) of 99.2%. Second, a traffic accident and severity classification model attained a mAP of 83.3% while utilizing the YOLOv5 algorithm to accurately detect and classify an accident’s severity level, sending an immediate alert message to the nearest hospital if a severe accident has taken place. Finally, the ResNet152 algorithm was utilized to detect the ignition of a fire following the accident’s occurrence; this model achieved an accuracy rate of 98.9%, with an automated alert being sent to the fire station if this perilous event occurred. This study employed an innovative parallel computing technique for reducing the overall complexity and inference time of the AI-based system to run the proposed system in a concurrent and parallel manner.
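The detection stage of such a pipeline can be sketched with YOLOv5 loaded from torch.hub (this assumes network access, the public ultralytics/yolov5 hub package, and a placeholder image file; DeepSORT tracking, severity classification, and alerting are not shown):

import torch

# Load a small YOLOv5 model from the public hub (downloads weights on first use).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
results = model('traffic_scene.jpg')              # image path is a placeholder
for *box, conf, cls in results.xyxy[0].tolist():  # [x1, y1, x2, y2, conf, class]
    print(model.names[int(cls)], round(conf, 2), box)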
44

Fang, Hai-Feng, Jin Cao, and Zhi-Yuan Li. "A Small Network MicronNet-BF of Traffic Sign Classification." Computational Intelligence and Neuroscience 2022 (March 18, 2022): 1–10. http://dx.doi.org/10.1155/2022/3995209.

Abstract:
Traffic sign recognition is a very significant computer vision task in many real-world applications. With the development of deep neural networks, state-of-the-art traffic sign recognition performance has been achieved in the past five years, and very high accuracy in object classification is no longer out of reach. However, one of the key challenges has become making the deep neural network suitable for an embedded system. As a result, a small neural network with as few parameters as possible and high accuracy needs to be explored. In this paper, MicronNet, a small but powerful convolutional neural network, is improved by batch normalization and factorization, and the proposed MicronNet-BN-Factorization (MicronNet-BF) network takes advantage of both reduced parameters and improved accuracy. The effect of image brightness on feature recognition is reduced by normalizing the mean and variance of each input layer in MicronNet via BN. A lower number of parameters is realized by replacing convolutional layers in MicronNet, which is the inspiration for the factorization. In addition, data augmentation has also been changed to obtain higher accuracy. Most importantly, experiments show that the accuracy of MicronNet-BF is 99.383% on the German Traffic Sign Recognition Benchmark (GTSRB), much higher than the original MicronNet (98.9%), and an orthogonal experiment confirmed that batch normalization is the most influential factor. Furthermore, the training efficiency and generality of MicronNet-BF indicate wide applicability in embedded scenarios.
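The two ingredients named in the abstract, factorization and batch normalization, can be sketched in PyTorch: a k x k convolution is replaced by 1 x k and k x 1 convolutions, each followed by BN. Layer sizes here are illustrative, not MicronNet-BF's exact configuration:

import torch
import torch.nn as nn

class FactorizedConvBN(nn.Module):
    def __init__(self, c_in, c_out, k=5):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, (1, k), padding=(0, k // 2)),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, (k, 1), padding=(k // 2, 0)),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

x = torch.randn(1, 3, 48, 48)            # GTSRB-sized input
print(FactorizedConvBN(3, 29)(x).shape)  # torch.Size([1, 29, 48, 48])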
45

Singh, Prince Kumar. "A Comprehensive Review on Advance Surveillance System." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 3975–79. http://dx.doi.org/10.22214/ijraset.2022.44825.

Abstract:
Advanced surveillance systems have received growing attention due to the increasing demand for security and safety. Such a system is capable of automatically analysing images, video, audio or other types of surveillance data with little or no human intervention. Recent developments in sensor devices, computer vision, and machine learning have an important role in enabling such intelligent systems. This paper aims to provide an overview of smart surveillance systems and live CCTV detection. It also discusses the main processing steps in an advanced surveillance system: object tracking, background–foreground segmentation, object detection and classification, and behavioural analysis.
46

Murthy, Chinthakindi Balaram, Mohammad Farukh Hashmi, Neeraj Dhanraj Bokde, and Zong Woo Geem. "Investigations of Object Detection in Images/Videos Using Various Deep Learning Techniques and Embedded Platforms—A Comprehensive Review." Applied Sciences 10, no. 9 (May 8, 2020): 3280. http://dx.doi.org/10.3390/app10093280.

Abstract:
In recent years there has been remarkable progress in one computer vision application area: object detection. One of the most challenging and fundamental problems in object detection is locating a specific object among the multiple objects present in a scene. Traditional detection methods were used earlier; from 2012 onward, with the introduction of convolutional neural networks, deep learning-based techniques were used for feature extraction, which led to remarkable breakthroughs in this area. This paper presents a detailed survey of recent advancements and achievements in object detection using various deep learning techniques. Several topics are included, such as Viola–Jones (VJ), histogram of oriented gradients (HOG), one-shot and two-shot detectors, benchmark datasets, evaluation metrics, speed-up techniques, and current state-of-the-art object detectors. Detailed discussions of some important applications in object detection, including pedestrian detection, crowd detection, and real-time object detection on GPU-based embedded systems, are presented. Finally, we conclude by identifying promising future directions.
47

Vinagreiro, Michel Andre L., Edson C. Kitani, Armando Antonio M. Lagana, and Leopoldo R. Yoshioka. "An Acceleration Method based on Deep Learning and Multilinear Feature Space." International Journal of Artificial Intelligence & Applications 12, no. 5 (September 30, 2021): 9–26. http://dx.doi.org/10.5121/ijaia.2021.12502.

Abstract:
Computer vision plays a crucial role in Advanced Assistance Systems. Most computer vision systems are based on deep convolutional neural network (deep CNN) architectures; however, the computational resources required to run a CNN algorithm are demanding, so methods to speed up computation have become a relevant research issue. Several works on architecture reduction found in the literature have not yet achieved satisfactory results for embedded real-time system applications. This paper presents an alternative approach based on the Multilinear Feature Space (MFS) method, resorting to transfer learning from large CNN architectures. The proposed method uses CNNs to generate feature maps, although it does not work as a complexity-reduction approach. After the training process, the generated feature maps are used to create a vector feature space, and we use this new vector space to make projections of any new sample in order to classify it. Our method, named AMFC, uses transfer learning from a pre-trained CNN to reduce the classification time of a new sample image, with minimal accuracy loss. The method uses the VGG-16 model as the base CNN architecture for experiments, but it works with any similar CNN model. Using the well-known Vehicle Image Database and the German Traffic Sign Recognition Benchmark, we compared the classification time of the original VGG-16 model with that of the AMFC method, and our method is, on average, 17 times faster. This fast classification time reduces the computational and memory demands of embedded applications requiring a large CNN architecture.
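The general idea of projecting frozen CNN feature maps into a vector space and classifying new samples by proximity can be sketched as follows. This is a nearest-class-mean stand-in under assumed data; AMFC's actual multilinear projection is more involved:

import numpy as np
import torch
from torchvision import models

# weights=None keeps the sketch offline; in practice load ImageNet weights
# (e.g., weights="IMAGENET1K_V1") for genuine transfer learning.
vgg = models.vgg16(weights=None).features.eval()

def embed(batch):                        # batch: (N, 3, 224, 224)
    with torch.no_grad():
        return vgg(batch).flatten(1).numpy()

train_x = torch.randn(8, 3, 224, 224)    # stand-in training images
train_y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
feats = embed(train_x)
means = np.stack([feats[train_y == c].mean(0) for c in (0, 1)])

query = embed(torch.randn(1, 3, 224, 224))
print(np.linalg.norm(means - query, axis=1).argmin())   # predicted class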
48

Vanusha, D., and B. Amutha. "Classification of Diabetic Retinopathy using Capsules." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 29, no. 06 (December 2021): 835–54. http://dx.doi.org/10.1142/s0218488521500379.

Abstract:
Deep learning models have performed exceptionally well in the detection of diabetic retinopathy. Most existing works use either Multi-Layered Perceptron (MLP) or Convolutional Neural Network (CNN) based models. A significant drawback of these models is their inability to retain spatial dependencies as we go deeper into the network; because of this issue, such models focus only on the extraction of feature maps that help in classification or detection. In recent years, transformers have shown enormous promise in both natural language processing and computer vision, leading to significant work on transformer-based techniques for image classification and object detection. One important model used in this research is the Capsule Network, which uses Set Transformers to perform vision tasks with high accuracy and can maintain spatial dependencies throughout the process. In this work, we bring the power of capsules to the diabetic retinopathy classification and detection problem, training capsules to classify retinal fundus images into five different classes depending on the severity of retinopathy. Further, we propose a sliding-window-based detector that can pinpoint the exact position of a blood burst in the retina, which eases the job of ophthalmologists studying the retinal fundus. Our experiments show that capsules provide better results than existing convolutional neural network and multi-layered perceptron based approaches on standard retinopathy datasets.
49

Yang, Judy X., Lily D. Li, and Mohammad G. Rasul. "Warehouse Management Models Using Artificial Intelligence Technology with Application at Receiving Stage – A Review." International Journal of Machine Learning and Computing 11, no. 3 (May 2021): 242–49. http://dx.doi.org/10.18178/ijmlc.2021.11.3.1042.

Abstract:
This paper reviews recent literature on inventory management technologies and Artificial Intelligence (AI) applications. Classical Artificial Neural Network (ANN) models and computer vision technology applications for object classification are reviewed in particular, and the challenges of AI technologies in industrial warehouse management, particularly of ANNs for solving object classification and counting, are discussed. Some researchers have reported the use of face recognition and moving vehicle classification and counting, which deal with objects that are easy to recognize on the floor or the ground; other researchers have explored object counting technologies used to identify visible objects on the ground or in images. Although several studies have focused on industrial component identification and counting problems, a study on the warehouse receiving stage remains a blank canvas. This paper reviews and analyses current industrial warehouse management developments around AI applications in this field, which may provide future researchers and end-users with a reference for the best modelling approach to this specific problem at the warehouse receiving stage.
50

Bakhshande, Fateme, Daniel Adofo Ameyaw, Neelu Madan, and Dirk Söffker. "New Metric for Evaluation of Deep Neural Network Applied in Vision-Based Systems." Applied Sciences 12, no. 7 (March 23, 2022): 3251. http://dx.doi.org/10.3390/app12073251.

Abstract:
Vision-based object detection plays a crucial role in the complete functionality of many engineering systems. Typically, detectors or classifiers are used to detect objects or to distinguish different targets. This contribution presents a new evaluation of CNN classifiers in image detection using a modified Probability of Detection (POD) reliability measure. The proposed method allows the evaluation of further image parameters affecting the classification results. The evaluation method is implemented on images, and comparisons are made on the parameters with the best detection capability. A typical certification standard (90/95), denoting a 90% probability of detection at a 95% reliability level, is adapted and successfully applied, and comparisons are made between different image parameters using this standard. A noise analysis procedure is introduced, permitting a trade-off between detection rate, false alarms, and process parameters. The advantage of the novel approach is experimentally evaluated on vision-based CNN classification results considering different image parameters. With this new POD evaluation, classifiers can become a trustworthy part of vision systems.
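A hit/miss POD curve of the kind underlying the 90/95 standard can be sketched with a logistic fit (synthetic data; the 95% confidence bound that turns a90 into a90/95 is omitted here):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
contrast = rng.uniform(0, 1, 400)               # image parameter of interest
true_pod = 1 / (1 + np.exp(-10 * (contrast - 0.4)))
detected = (rng.random(400) < true_pod).astype(int)   # simulated hit/miss data

model = LogisticRegression().fit(contrast.reshape(-1, 1), detected)
b0, b1 = model.intercept_[0], model.coef_[0, 0]
a90 = (np.log(0.9 / 0.1) - b0) / b1             # parameter value where POD = 0.9
print(f"a90 = {a90:.3f}")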