Journal articles on the topic 'Computer vision in data analytics and signal processing'

Consult the top 50 journal articles for your research on the topic 'Computer vision in data analytics and signal processing.'

1

Höferlin, Benjamin, Markus Höferlin, Gunther Heidemann, and Daniel Weiskopf. "Scalable video visual analytics." Information Visualization 14, no. 1 (June 5, 2013): 10–26. http://dx.doi.org/10.1177/1473871613488571.

Abstract:
Video visual analytics is the research field that addresses scalable and reliable analysis of video data. The vast amount of video data in typical analysis tasks renders manual analysis by watching the video data impractical. However, automatic evaluation of video material is not reliable enough, especially when it comes to semantic abstraction from the video signal. In this article, we describe the video visual analytics method that combines the complementary strengths of human recognition and machine processing. After inspecting the challenges of scalable video analysis, we derive the main components of visual analytics for video data. Based on these components, we present our video visual analytics system that has its origins in our IEEE VAST Challenge 2009 participation.
2

Chadebecq, François, Francisco Vasconcelos, Evangelos Mazomenos, and Danail Stoyanov. "Computer Vision in the Surgical Operating Room." Visceral Medicine 36, no. 6 (2020): 456–62. http://dx.doi.org/10.1159/000511934.

Abstract:
Background: Multiple types of surgical cameras are used in modern surgical practice and provide a rich visual signal that surgeons use to visualize the clinical site and make clinical decisions. This signal can also be used by artificial intelligence (AI) methods to provide support in identifying instruments, structures, or activities, both in real time during procedures and postoperatively for analytics and understanding of surgical processes. Summary: In this paper, we provide a succinct perspective on the use of AI, and especially computer vision, to power solutions for the surgical operating room (OR). The synergy between data availability and technical advances in computational power and AI methodology has led to rapid developments in the field and promising advances. Key Messages: With the increasing availability of surgical video sources and the convergence of technologies around video storage, processing, and understanding, we believe clinical solutions and products leveraging vision are going to become an important component of modern surgical capabilities. However, both technical and clinical challenges remain to be overcome before vision-based approaches can be used efficiently in the clinic.
3

Lemenkova, Polina, Raphaël De Plaen, Thomas Lecocq, and Olivier Debeir. "Computer Vision Algorithms of DigitSeis for Building a Vectorised Dataset of Historical Seismograms from the Archive of Royal Observatory of Belgium." Sensors 23, no. 1 (December 21, 2022): 56. http://dx.doi.org/10.3390/s23010056.

Abstract:
Archived seismograms recorded in the 20th century present a valuable source of information for monitoring earthquake activity. However, these old data, which are only available as scanned paper-based images, should be digitised and converted from raster to vector format prior to reuse for geophysical modelling. Seismograms have special characteristics and specific features recorded by a seismometer and encoded in the images: signal trace lines, minute time gaps, timing, and wave amplitudes. This information should be recognised and interpreted automatically when processing archives of seismograms containing large collections of data. The objective was to automatically digitise historical seismograms obtained from the archives of the Royal Observatory of Belgium (ROB). The images were originally recorded by the Galitzine seismometer in 1954 at the Uccle seismic station, Belgium. The dataset included 145 TIFF images, which required an automatic approach to data processing. Software for digitising seismograms is limited and often has disadvantages. We applied DigitSeis for machine-based vectorisation and report here a full workflow of data processing. This included pattern recognition, classification, digitising, corrections, and converting TIFFs to a digital vector format. The generated signal contours were presented as time series and converted into digital format (MAT files) containing the ground-motion signals encoded in the analog seismograms. We performed quality control of the digitised traces in Python to evaluate the discriminating functionality of seismic signals by DigitSeis. We have shown that DigitSeis is a robust and powerful toolset for processing analog seismic signals. The graphical visualisation of signal traces and analysis of the vectorisation results show that the data-processing algorithms performed accurately and can be recommended for similar applications of seismic signal processing in future geophysical research.
4

Sarada, B., M. Vinayaka Murthy, and V. Udaya Rani. "Combined secure approach based on whale optimization to improve the data classification for data analytics." Pattern Recognition Letters 152 (December 2021): 327–32. http://dx.doi.org/10.1016/j.patrec.2021.10.018.

5

Gotz, David, and Harry Stavropoulos. "DecisionFlow: Visual Analytics for High-Dimensional Temporal Event Sequence Data." IEEE Transactions on Visualization and Computer Graphics 20, no. 12 (December 31, 2014): 1783–92. http://dx.doi.org/10.1109/tvcg.2014.2346682.

6

Yuan, Xiaoru, He Xiao, Hanqi Guo, Peihong Guo, W. Kendall, Jian Huang, and Yongxian Zhang. "Scalable Multi-variate Analytics of Seismic and Satellite-based Observational Data." IEEE Transactions on Visualization and Computer Graphics 16, no. 6 (November 2010): 1413–20. http://dx.doi.org/10.1109/tvcg.2010.192.

7

Kurzhals, Kuno, and Daniel Weiskopf. "Space-Time Visual Analytics of Eye-Tracking Data for Dynamic Stimuli." IEEE Transactions on Visualization and Computer Graphics 19, no. 12 (December 2013): 2129–38. http://dx.doi.org/10.1109/tvcg.2013.194.

8

He, Jialuan, Zirui Xing, Tianqi Xiang, Xin Zhang, Yinghai Zhou, Chuanyu Xi, and Hai Lu. "Wireless Signal Propagation Prediction Based on Computer Vision Sensing Technology for Forestry Security Monitoring." Sensors 21, no. 17 (August 24, 2021): 5688. http://dx.doi.org/10.3390/s21175688.

Abstract:
In this paper, Computer Vision (CV) sensing technology based on a Convolutional Neural Network (CNN) is introduced to process topographic maps for predicting wireless signal propagation models, which are applied in the field of forestry security monitoring. In this way, terrain-related radio propagation characteristics, including diffraction loss and shadow-fading correlation distance, can be predicted or extracted accurately and efficiently. Two data sets were generated for the two prediction tasks, respectively, and were used to train the CNN. To enhance the efficiency of diffraction-loss prediction, multiple output values for different locations on the map are obtained in parallel by the CNN, greatly boosting calculation speed. The proposed scheme achieved good performance in terms of prediction accuracy and efficiency. For the diffraction-loss prediction task, 50% of the normalized prediction errors were less than 0.518%, and 95% were less than 8.238%. For the correlation-distance extraction task, 50% of the normalized prediction errors were less than 1.747%, and 95% were less than 6.423%. Moreover, diffraction losses at 100 positions were predicted simultaneously in one run of the CNN under the settings in this paper, for which the processing time of one map is about 6.28 ms and the average processing time per location point can be as low as 62.8 μs. This paper shows that the proposed CV sensing technology is efficient in processing geographic information in the target area: by tightly coupling the prediction model with geographic information through a convolutional neural network, it improves both the efficiency and the accuracy of prediction.
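The percentile-style error metrics quoted above (50% and 95% of normalized prediction errors below a threshold) can be reproduced on any prediction set with a few lines of NumPy. The loss values below are synthetic stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth diffraction losses (dB) and CNN predictions;
# in the paper these would come from the trained model and test maps.
true_loss = rng.uniform(5.0, 40.0, size=1000)
pred_loss = true_loss + rng.normal(0.0, 0.5, size=1000)

# Normalized prediction error, as a percentage of the true value.
norm_err = np.abs(pred_loss - true_loss) / np.abs(true_loss) * 100.0

# Report the 50th and 95th percentiles, mirroring the paper's metrics.
p50, p95 = np.percentile(norm_err, [50, 95])
print(f"50% of errors below {p50:.3f}%, 95% below {p95:.3f}%")
```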
9

Xu, Bao Shu, and Ze Lin Shi. "Performance Bound of Position Estimation in Image Matching." Key Engineering Materials 500 (January 2012): 766–72. http://dx.doi.org/10.4028/www.scientific.net/kem.500.766.

Abstract:
Position estimation in image matching is a fundamental step in computer vision and image processing. To deal with the problem of performance prediction, we formulate it as a statistical parameter estimation problem. The lower bound on position-estimation variance is obtained from Cramer-Rao lower bound (CRLB) theory. This paper analyses the impact of noise on 1-D signal matching, derives the lower bound of the variance, and then extends it to 2-D image matching. Furthermore, we derive a numerical expression that can be computed from observed data. Finally, we use the Monte Carlo simulation method to verify the derived analytical expressions. Experimental results show that the derived CRLB is tight with respect to the variance estimated by simulation. The CRLB can therefore characterize the performance bound of position estimation in image matching.
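The CRLB for 1-D signal matching can be illustrated numerically. The sketch below, under standard assumptions (known template, additive white Gaussian noise, small shift), computes the bound σ²/Σ(s′)² and checks it against a Monte Carlo run of a linearized least-squares shift estimator; it is an illustration of the concept, not the authors' exact derivation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Known 1-D template: a Gaussian pulse sampled on a fine grid.
t = np.linspace(-5, 5, 1001)
dt = t[1] - t[0]
s = np.exp(-t**2)
ds = np.gradient(s, dt)           # derivative of the template

sigma = 0.05                      # noise standard deviation
crlb = sigma**2 / np.sum(ds**2)   # CRLB on the shift-estimate variance

# Monte Carlo: tiny true shift, linearized least-squares shift estimator.
tau_true = 0.001
shifted = np.exp(-(t - tau_true)**2)
est = []
for _ in range(2000):
    y = shifted + rng.normal(0.0, sigma, size=t.size)
    # y - s ≈ -tau * ds + noise  ->  least-squares estimate of tau
    tau_hat = -np.dot(ds, y - s) / np.dot(ds, ds)
    est.append(tau_hat)
mc_var = np.var(est)
print(f"CRLB = {crlb:.3e}, Monte Carlo variance = {mc_var:.3e}")
```

For this linear-in-noise estimator the Monte Carlo variance matches the bound closely, which is the "tightness" the abstract reports.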
10

Wagner, Jorge, Wolfgang Stuerzlinger, and Luciana Nedel. "Comparing and Combining Virtual Hand and Virtual Ray Pointer Interactions for Data Manipulation in Immersive Analytics." IEEE Transactions on Visualization and Computer Graphics 27, no. 5 (May 2021): 2513–23. http://dx.doi.org/10.1109/tvcg.2021.3067759.

11

Chernov, Andrey V., Ilias K. Savvas, Alexander A. Alexandrov, Oleg O. Kartashov, Dmitry S. Polyanichenko, Maria A. Butakova, and Alexander V. Soldatov. "Integrated Video and Acoustic Emission Data Fusion for Intelligent Decision Making in Material Surface Inspection System." Sensors 22, no. 21 (November 6, 2022): 8554. http://dx.doi.org/10.3390/s22218554.

Abstract:
In the field of intelligent surface inspection systems, particular attention is paid to decision-making problems based on data from different sensors; combining such data helps in making an intelligent decision. In this research, an approach to intelligent decision making is used that is based on a data-integration strategy to raise awareness of a controlled object. This approach is considered in the context of making reasonable decisions when detecting defects on the surface of welds that arise after metal-pipe welding processes. The main data types were RGB images, RGB-D images, and acoustic emission signals. The fusion of such multimodal data, which mimics the eyes and ears of an experienced inspector through computer vision and digital signal processing, provides more concrete and meaningful information for intelligent decision making. The main results of this study include an overview of the system architecture with a detailed description of its parts, methods for acquiring data from various sensors, pseudocode for the data processing algorithms, and an approach to data fusion meant to improve the efficiency of decision making in detecting defects on the surface of various materials.
12

Abu-Ein, Ashraf A., Obaida M. Al-Hazaimeh, Alaa M. Dawood, and Andraws I. Swidan. "Analysis of the current state of deepfake techniques-creation and detection methods." Indonesian Journal of Electrical Engineering and Computer Science 28, no. 3 (October 7, 2022): 1659. http://dx.doi.org/10.11591/ijeecs.v28.i3.pp1659-1667.

Abstract:
Deep learning has effectively solved complicated challenges ranging from large-scale data analytics to human-level control and computer vision. However, it has also been used to produce software that threatens privacy, democracy, and national security. Deepfake is one of these new applications backed by deep learning. Fake images and videos created by deepfake algorithms can be difficult for people to tell apart from real ones. This necessitates the development of tools that can automatically detect and evaluate the quality of digital visual media. This paper provides an overview of the algorithms and datasets used to build deepfakes, as well as the approaches presented to detect deepfakes to date. By reviewing the background of deepfake methods, this paper provides a complete overview of deepfake approaches and promotes the creation of new and more robust strategies to deal with increasingly complex deepfakes.
13

Arandjelović, Ognjen. "Targeted Adaptable Sample for Accurate and Efficient Quantile Estimation in Non-Stationary Data Streams." Machine Learning and Knowledge Extraction 1, no. 3 (July 27, 2019): 848–70. http://dx.doi.org/10.3390/make1030049.

Abstract:
The need to detect outliers or otherwise unusual data, which can be formalized as the estimation of a particular quantile of a distribution, is an important problem that frequently arises in a variety of applications of pattern recognition, computer vision and signal processing. For example, our work was most proximally motivated by the practical limitations and requirements of many semi-automatic surveillance analytics systems that detect abnormalities in closed-circuit television (CCTV) footage using statistical models of low-level motion features. In this paper, we specifically address the problem of estimating the running quantile of a data stream with non-stationary stochasticity when the absolute (rather than asymptotic) memory for storing observations is severely limited. We make several major contributions: (i) we derive an important theoretical result that shows that the change in the quantile of a stream is constrained regardless of the stochastic properties of data; (ii) we describe a set of high-level design goals for an effective estimation algorithm that emerge as a consequence of our theoretical findings; (iii) we introduce a novel algorithm that implements the aforementioned design goals by retaining a sample of data values in a manner adaptive to changes in the distribution of data and progressively narrowing down its focus in the periods of quasi-stationary stochasticity; and (iv) we present a comprehensive evaluation of the proposed algorithm and compare it with the existing methods in the literature on both synthetic datasets and three large “real-world” streams acquired in the course of operation of an existing commercial surveillance system. Our results and their detailed analysis convincingly and comprehensively demonstrate that the proposed method is highly successful and vastly outperforms the existing alternatives, especially when the target quantile is high-valued and the available buffer capacity is severely limited.
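As a point of contrast with the adaptive sample the paper proposes, a plain reservoir sample is the simplest bounded-memory running-quantile estimator. The sketch below is that naive baseline, not the paper's algorithm, and all parameters are illustrative:

```python
import random

class ReservoirQuantile:
    """Bounded-memory running quantile via reservoir sampling.

    A simplified baseline: the paper's method adapts its sample to
    distribution changes, whereas a plain reservoir is uniform over
    the whole stream and reacts slowly to non-stationarity.
    """
    def __init__(self, q, capacity, seed=0):
        self.q = q
        self.capacity = capacity
        self.buf = []
        self.n = 0
        self.rng = random.Random(seed)

    def update(self, x):
        self.n += 1
        if len(self.buf) < self.capacity:
            self.buf.append(x)
        else:
            j = self.rng.randrange(self.n)
            if j < self.capacity:
                self.buf[j] = x   # replace a random slot

    def estimate(self):
        s = sorted(self.buf)
        k = min(int(self.q * len(s)), len(s) - 1)
        return s[k]

est = ReservoirQuantile(q=0.95, capacity=100, seed=0)
rng = random.Random(1)
for _ in range(10000):
    est.update(rng.random())
print("estimated 0.95-quantile:", est.estimate())
```

On a uniform stream the estimate lands near 0.95; on a drifting stream this baseline lags, which is exactly the weakness the paper's adaptive sample addresses.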
14

Hsu, Ting-Yu, and Xiang-Ju Kuo. "A Stand-Alone Smart Camera System for Online Post-Earthquake Building Safety Assessment." Sensors 20, no. 12 (June 15, 2020): 3374. http://dx.doi.org/10.3390/s20123374.

Abstract:
Computer vision-based approaches are very useful for dynamic displacement measurement, damage detection, and structural health monitoring. However, for the application using a large number of existing cameras in buildings, the computational cost of videos from dozens of cameras using a centralized computer becomes a huge burden. Moreover, when a manual process is required for processing the videos, prompt safety assessment of tens of thousands of buildings after a catastrophic earthquake striking a megacity becomes very challenging. Therefore, a decentralized and fully automatic computer vision-based approach for prompt building safety assessment and decision-making is desired for practical applications. In this study, a prototype of a novel stand-alone smart camera system for measuring interstory drifts was developed. The proposed system is composed of a single camera, a single-board computer, and two accelerometers with a microcontroller unit. The system is capable of compensating for rotational effects of the camera during earthquake excitations. Furthermore, by fusing the camera-based interstory drifts with the accelerometer-based ones, the interstory drifts can be measured accurately even when residual interstory drifts exist. Algorithms used to compensate for the camera’s rotational effects, algorithms used to track the movement of three targets within three regions of interest, artificial neural networks used to convert the interstory drifts to engineering units, and some necessary signal processing algorithms, including interpolation, cross-correlation, and filtering algorithms, were embedded in the smart camera system. As a result, online processing of the video data and acceleration data using decentralized computational resources is achieved in each individual smart camera system to obtain interstory drifts. Using the maximum interstory drifts measured during an earthquake, the safety of a building can be assessed right after the earthquake excitation. 
We validated the feasibility of the prototype of the proposed smart camera system through the use of large-scale shaking table tests of a steel building. The results show that the proposed smart camera system had very promising results in terms of assessing the safety of steel building specimens after earthquake excitations.
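The fusion of camera-based and accelerometer-based interstory drifts can be imitated with a one-pole complementary filter: low-pass the camera signal (which carries the residual drift) and add the high-frequency accelerometer-derived part. The signals and the filter coefficient below are hypothetical, not from the paper:

```python
import numpy as np

fs = 100.0
t = np.arange(0, 10, 1 / fs)

# Synthetic interstory drift: a 2 Hz oscillation plus a residual (DC)
# offset appearing halfway through, as after inelastic deformation.
residual = np.where(t > 5, 0.01, 0.0)
drift_true = 0.005 * np.sin(2 * np.pi * 2 * t) + residual

# Camera-based drift: captures the residual but with measurement noise.
rng = np.random.default_rng(2)
drift_cam = drift_true + rng.normal(0, 0.002, t.size)

# Accelerometer-based drift: good at high frequency, no DC information.
drift_acc = 0.005 * np.sin(2 * np.pi * 2 * t)

# Complementary fusion: low-pass the camera, add the accel-derived part.
alpha = 0.02   # one-pole low-pass coefficient (hypothetical tuning)
low = np.zeros_like(t)
for i in range(1, t.size):
    low[i] = (1 - alpha) * low[i - 1] + alpha * drift_cam[i]
fused = low + drift_acc

rms_err = np.sqrt(np.mean((fused[int(7 * fs):] - drift_true[int(7 * fs):])**2))
print(f"RMS fusion error in steady state: {rms_err:.4f}")
```

The fused trace keeps the residual drift from the camera while suppressing its noise, mirroring the motivation for combining the two sensors.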
15

Narechania, Arpit, Adam Coscia, Emily Wall, and Alex Endert. "Lumos: Increasing Awareness of Analytic Behavior during Visual Data Analysis." IEEE Transactions on Visualization and Computer Graphics 28, no. 1 (January 2022): 1009–18. http://dx.doi.org/10.1109/tvcg.2021.3114827.

16

Marchesin, S., and G. C. de Verdiere. "High-Quality, Semi-Analytical Volume Rendering for AMR Data." IEEE Transactions on Visualization and Computer Graphics 15, no. 6 (November 2009): 1611–18. http://dx.doi.org/10.1109/tvcg.2009.149.

17

Kang, Hyunmo, L. Getoor, B. Shneiderman, M. Bilgic, and L. Licamele. "Interactive Entity Resolution in Relational Data: A Visual Analytic Tool and Its Evaluation." IEEE Transactions on Visualization and Computer Graphics 14, no. 5 (September 2008): 999–1014. http://dx.doi.org/10.1109/tvcg.2008.55.

18

Liu, Ching-Feng, Wei-Siang Ciou, Peng-Ting Chen, and Yi-Chun Du. "A Real-Time Speech Separation Method Based on Camera and Microphone Array Sensors Fusion Approach." Sensors 20, no. 12 (June 22, 2020): 3527. http://dx.doi.org/10.3390/s20123527.

Abstract:
In the context of human assistance, identifying and enhancing non-stationary target speech in various noise environments, such as a cocktail party, is an important issue for real-time speech separation. Previous studies mostly used microphone signal processing to perform target speech separation and analysis, such as feature recognition through a large amount of training data and supervised machine learning. These methods were suitable for stationary noise suppression but relatively limited for non-stationary noise, and they struggled to meet real-time processing requirements. In this study, we propose a real-time speech separation method based on an approach that combines an optical camera and a microphone array. The method is divided into two stages. Stage 1 uses computer vision technology with the camera to detect and identify targets of interest and to estimate source angles and distances. Stage 2 uses beamforming technology with the microphone array to enhance and separate the target speech. An asynchronous update function integrates the beamforming control and speech processing to reduce the effect of processing delay. The experimental results show noise reductions of 6.1 dB and 5.2 dB in various stationary and non-stationary noise environments, respectively. The response time of speech processing was less than 10 ms, which meets the requirements of a real-time system. The proposed method has high potential for application in assisted-listening systems or machine language processing such as intelligent personal assistants.
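Stage 2 can be sketched as a delay-and-sum beamformer on a uniform linear array, with the target angle assumed to come from the camera stage; the geometry and signals below are made up for illustration:

```python
import numpy as np

c = 343.0        # speed of sound, m/s
fs = 16000.0     # sample rate, Hz
d = 0.05         # microphone spacing, m
n_mics = 4

# Target angle, assumed here to be supplied by the camera stage (Stage 1).
theta = np.deg2rad(30.0)
delays = np.arange(n_mics) * d * np.sin(theta) / c   # per-mic delays, s

t = np.arange(0, 0.1, 1 / fs)
source = np.sin(2 * np.pi * 440 * t)    # stand-in for the target speech

# Simulate a far-field plane wave plus independent noise at each mic.
rng = np.random.default_rng(3)
mics = np.stack([
    np.interp(t - tau, t, source, left=0.0) + rng.normal(0, 0.3, t.size)
    for tau in delays
])

# Delay-and-sum: advance each channel by its known delay, then average.
aligned = np.stack([
    np.interp(t + tau, t, ch, right=0.0)
    for tau, ch in zip(delays, mics)
])
beam = aligned.mean(axis=0)

# Averaging n_mics channels cuts the noise power roughly by n_mics.
noise_out = beam - source
print(f"beam noise power: {np.mean(noise_out**2):.3f} (per-mic: 0.090)")
```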
19

Fazlollahtabar, Hamed, and Minoo Talebi Ashoori. "Business Analytics using Dynamic Pricing based on Customer Entry-Exit Rates Tradeoff." Statistics, Optimization & Information Computing 8, no. 1 (February 18, 2020): 272–80. http://dx.doi.org/10.19139/soic-2310-5070-551.

Abstract:
This paper concerns an integrated business process applied as decision support for market analysis and decision making. The proposed business intelligence and analytics system makes use of an extract, transform, and load mechanism for data collection and purification. As a mathematical decision optimization, dynamic pricing is formulated based on customer entry-exit rates in a history-based pricing model. The optimal product prices are obtained so that aggregated profit is maximized. A case study is reported to show the effectiveness of the approach. Analytical investigations of the impacts of the sensitive parameters of the pricing model are also given.
20

Cobos-Torres, Juan-Carlos, Mohamed Abderrahim, and José Martínez-Orgado. "Non-Contact, Simple Neonatal Monitoring by Photoplethysmography." Sensors 18, no. 12 (December 10, 2018): 4362. http://dx.doi.org/10.3390/s18124362.

Abstract:
This paper presents non-contact vital-sign monitoring in neonates based on image processing, where a standard color camera captures the plethysmographic signal and the heart and breathing rates are processed and estimated online. It is important that the measurements are taken in a non-invasive manner that is imperceptible to the patient. Many methods have been proposed for non-contact measurement; however, to the best of the authors' knowledge, none combines low computational cost with high tolerance to artifacts. With the aim of improving contactless measurement results, the proposed computer vision-based method is designed to overcome these drawbacks. The camera is attached to an incubator in the Neonatal Intensive Care Unit and a single area on the neonate's diaphragm is monitored. Several factors are considered in the image acquisition stage, as well as in plethysmographic signal formation, pre-filtering, and filtering. The pre-filter step uses numerical analysis techniques to reduce the signal offset. The proposed method decouples the breath rate from the frequency of sinus arrhythmia; this separation makes it possible to analyze cardiac and respiratory dysrhythmias independently. Nine newborns were monitored with the proposed method. A Bland-Altman analysis of the data shows a close correlation of the rates measured with the two approaches (correlation coefficients of 0.94 for heart rate (HR) and 0.86 for breath rate (BR)) with an uncertainty of 4.2 bpm for HR and 4.9 for BR (k = 1). Compared with a standard non-contact method, independent component analysis (ICA), our method used 75% less central processing unit (CPU) time.
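The core of camera-based photoplethysmography, recovering a heart rate from the spectral peak of a detrended intensity trace, can be sketched with NumPy. The frame rate, band limits, and synthetic trace below are assumptions for illustration, not the paper's pipeline:

```python
import numpy as np

fs = 30.0   # camera frame rate, frames per second
t = np.arange(0, 20, 1 / fs)

# Synthetic plethysmographic trace: mean intensity of the monitored
# region, with a cardiac component at 2.2 Hz (132 bpm, a plausible
# neonatal heart rate) plus baseline wander and noise.
rng = np.random.default_rng(4)
signal = (0.5 * np.sin(2 * np.pi * 2.2 * t)
          + 2.0 * np.sin(2 * np.pi * 0.1 * t)      # slow baseline drift
          + rng.normal(0, 0.2, t.size))

# Detrend (a crude stand-in for the paper's pre-filtering step), then
# locate the spectral peak inside a plausible cardiac band.
detrended = signal - np.convolve(signal, np.ones(31) / 31, mode="same")
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(detrended.size, 1 / fs)
band = (freqs > 1.5) & (freqs < 4.0)     # 90-240 bpm search band
hr_hz = freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {hr_hz * 60:.0f} bpm")
```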
21

Yang, Nachuan, Yongjun Zhao, and Jinyang Chen. "Real-Time Φ-OTDR Vibration Event Recognition Based on Image Target Detection." Sensors 22, no. 3 (February 2, 2022): 1127. http://dx.doi.org/10.3390/s22031127.

Abstract:
Accurate and fast identification of vibration signals detected by a phase-sensitive optical time-domain reflectometer (Φ-OTDR) is crucial for reducing the false-alarm rate of long-distance distributed vibration warning systems. This study proposes a computer vision-based Φ-OTDR multi-vibration event detection method that works in real time, effectively detecting perimeter intrusion events and reducing personnel patrol costs. Pulse accumulation, pulse cancellers, median filtering, and pseudo-color processing are employed for vibration-signal feature enhancement to generate vibration spatio-temporal images and form a customized dataset. This dataset is used to train and evaluate an improved YOLO-A30 based on the YOLO target detection meta-architecture. Experiments on 8069 vibration data images, generated from 5 abnormal vibration activities for two fiber-optic laying scenarios (buried underground or hung on razor barbed wire at the perimeter of a high-speed rail line), show that the system achieves an mAP@.5 of 99.5% at 555 frames per second (FPS) and can cover a theoretical maximum distance of 135.1 km per second. It can quickly and effectively identify abnormal vibration activities, reduce the system's false-alarm rate for long-distance multi-vibration events along high-speed rail lines, and significantly reduce computational cost while maintaining accuracy.
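Two of the enhancement steps named above, the pulse canceller and the median filter, can be sketched on simulated Φ-OTDR traces. The fiber model below is a toy, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(5)
n_pulses, n_pos = 200, 500

# Simulated Φ-OTDR traces: rows are successive probe pulses, columns are
# positions along the fiber; the backscatter profile is static plus noise.
static = rng.uniform(0.5, 1.0, n_pos)
traces = np.tile(static, (n_pulses, 1)) + rng.normal(0, 0.02, (n_pulses, n_pos))

# Inject a vibration spanning positions 299-301: pulse-to-pulse modulation.
vib = 0.2 * np.sin(2 * np.pi * 0.1 * np.arange(n_pulses))
traces[:, 299:302] += vib[:, None]

# Pulse canceller: differencing successive traces removes the static
# profile and keeps only pulse-to-pulse changes.
diff = np.abs(np.diff(traces, axis=0))

# 3-tap median filter along the position axis to suppress impulsive noise.
med = np.median(np.stack([diff[:, :-2], diff[:, 1:-1], diff[:, 2:]]), axis=0)

# Accumulate change energy per position; the peak marks the vibration.
energy = med.sum(axis=0)
peak_pos = 1 + int(np.argmax(energy))   # +1 re-centers after the 3-tap crop
print("strongest vibration near position", peak_pos)
```

Stacking such per-position change maps over time yields the spatio-temporal images that the paper's detector consumes.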
22

Manganelli Conforti, Pietro, Mario D’Acunto, and Paolo Russo. "Deep Learning for Chondrogenic Tumor Classification through Wavelet Transform of Raman Spectra." Sensors 22, no. 19 (October 3, 2022): 7492. http://dx.doi.org/10.3390/s22197492.

Abstract:
The grading of cancer tissues is still one of the main challenges for pathologists. The development of enhanced analysis strategies hence becomes crucial to accurately identify and further deal with each individual case. Raman spectroscopy (RS) is a promising tool for the classification of tumor tissues, as it allows us to obtain the biochemical maps of the tissues under analysis and to observe their evolution in terms of biomolecules, proteins, lipid structures, DNA, vitamins, and so on. However, its potential could be further improved by a classification system able to recognize the tumor category of a sample from the raw Raman spectroscopy signal; this could provide more reliable responses on shorter time scales and could reduce or eliminate false-positive or false-negative diagnoses. Deep Learning techniques have become ubiquitous in recent years, with models able to perform classification with high accuracy in most diverse fields of research, e.g., natural language processing, computer vision, and medical imaging. However, deep models often rely on huge labeled datasets to achieve reasonable accuracy, and otherwise run into overfitting when the training data are insufficient. In this paper, we propose CLARA (chondrogenic tumor CLAssification through wavelet transform of RAman spectra), which classifies Raman spectra obtained from bone tissues with high accuracy. CLARA recognizes and grades the tumors in the evaluated dataset with 97% accuracy by exploiting a classification pipeline that divides the original task into two binary classification steps: the first is performed on the original RS signals, while the second uses a hybrid temporal-frequency 2D transform.
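A "hybrid temporal-frequency 2D transform" is in the spirit of a wavelet scalogram. The sketch below implements a naive Morlet continuous wavelet transform directly in NumPy (no wavelet library assumed), turning a 1-D spectrum into the kind of 2-D image a CNN could classify; the input is a synthetic stand-in for a Raman spectrum:

```python
import numpy as np

def morlet_cwt(x, scales, w0=5.0):
    """Naive continuous wavelet transform with a Morlet mother wavelet.

    Returns a (len(scales), len(x)) scalogram: a 2-D time-frequency
    representation suitable as CNN input.
    """
    out = np.empty((len(scales), x.size), dtype=complex)
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = (np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
                   / np.sqrt(s))
        # Correlation via convolution with the reversed conjugate wavelet.
        out[i] = np.convolve(x, np.conj(wavelet)[::-1], mode="same")
    return out

# Stand-in for a Raman spectrum: two localized oscillatory "peaks".
n = 512
x = np.zeros(n)
x[100:140] += np.sin(2 * np.pi * 0.2 * np.arange(40))
x[300:380] += np.sin(2 * np.pi * 0.05 * np.arange(80))

scales = np.arange(2, 32)
scalogram = np.abs(morlet_cwt(x, scales))
print("scalogram shape:", scalogram.shape)
```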
23

Amin, Farhan, Omar M. Barukab, and Gyu Sang Choi. "Big Data Analytics Using Graph Signal Processing." Computers, Materials & Continua 74, no. 1 (2023): 489–502. http://dx.doi.org/10.32604/cmc.2023.030615.

24

Carpenter, Chris. "Computer Vision Analytics Enables Determination of Rig State." Journal of Petroleum Technology 74, no. 01 (January 1, 2022): 96–98. http://dx.doi.org/10.2118/0122-0096-jpt.

Abstract:
This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 204086, "Determining Rig State From Computer Vision Analytics," by Crispin Chatar, SPE, and Suhas Suresha, Schlumberger, and Laetitia Shao, Stanford University, et al. The paper has not been peer reviewed. While companies cannot agree on a standard definition of "rig state," they can agree that, as further use is made of remote operations and automation, rig-state calculation is mandatory in some form. A machine-learning model that relies exclusively on videos collected on the rig floor to infer rig states can overcome the limitations of existing methods as the industry moves toward rigs featuring advanced technologies. Introduction: The complete paper presents a machine-learning pipeline implemented to determine rig state from videos captured on the floor of an operating rig. The pipeline is composed of two parts. First, the annotation pipeline matches each frame of the video data set to a rig state; a convolutional neural network (CNN) is used to match the time of the video with the corresponding sensor data. Second, additional CNNs are trained, capturing both spatial and temporal information, to extract an estimate of rig state from the videos. The models are trained on a data set of 3 million frames on a cloud platform using graphics processing units. The models used include a pretrained visual geometry group (VGG) network, a convolutional 3D (C3D) model, and a two-stream model that uses optical flow to capture temporal information.
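The first part of the pipeline, matching each video frame to a rig state from time-stamped sensor data, reduces in its simplest form to a nearest-previous-timestamp join. The states and timestamps below are invented for illustration and are not from the paper:

```python
import numpy as np

# Hypothetical rig-state sensor log: timestamps (s) and state labels.
sensor_t = np.array([0.0, 5.0, 12.0, 20.0])
sensor_state = np.array(["in_slips", "drilling", "tripping", "drilling"])

# Video frame timestamps at 25 fps.
frame_t = np.arange(0.0, 24.0, 1 / 25)

# Annotate each frame with the most recent sensor state at or before it,
# a simple stand-in for the paper's CNN-based time alignment.
idx = np.searchsorted(sensor_t, frame_t, side="right") - 1
frame_labels = sensor_state[idx]
print("frame at t=13.0 s labelled:", frame_labels[int(13.0 * 25)])
```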
25

Sumathi, J. K. "Dynamic Image Forensics and Forgery Analytics using Open Computer Vision Framework." Wasit Journal of Computer and Mathematics Science 1, no. 1 (March 17, 2021): 1–8. http://dx.doi.org/10.31185/wjcm.vol1.iss1.3.

Abstract:
The key advances in computer vision and optical image processing are emerging technologies in diverse fields, including facial recognition, biometric verification, the Internet of Things (IoT), criminal investigation, and signature identification in banking, among several others. These applications use image and live-video processing to facilitate analysis and forecasting. Computer vision is used in many activities, such as monitoring, face recognition, motion recognition, and object detection. The development of social networking platforms such as Facebook and Instagram led to an increase in the volume of image data being generated. The use of image- and video-editing software is a major concern for platforms such as Facebook, because some of the photos and videos that people post are doctored. Such images are frequently cited as fake and used in malevolent ways, such as motivating violence. Questionable images need to be authenticated before action is taken, yet ensuring photo authenticity is very hard given the power of modern photo manipulation. Image forensic techniques can determine how an image was formed; the technique of image duplication, for instance, is used to conceal missing areas.
APA, Harvard, Vancouver, ISO, and other styles
26

Andrienko, Natalia, and Gennady Andrienko. "Visual analytics of movement: An overview of methods, tools and procedures." Information Visualization 12, no. 1 (September 5, 2012): 3–24. http://dx.doi.org/10.1177/1473871612457601.

Full text
Abstract:
Analysis of movement is currently a hot research topic in visual analytics. A wide variety of methods and tools for the analysis of movement data have been developed in recent years. They allow analysts to look at the data from different perspectives and fulfil diverse analytical tasks. Visual displays and interactive techniques are often combined with computational processing, which, in particular, enables analysis of larger amounts of data than would be possible with purely visual methods. Visual analytics leverages methods and tools developed in other areas related to data analytics, particularly statistics, machine learning and geographic information science. We present an illustrated structured survey of the state of the art in visual analytics concerning the analysis of movement data. Besides reviewing the existing works, we demonstrate, using examples, how different visual analytics techniques can support our understanding of various aspects of movement.
APA, Harvard, Vancouver, ISO, and other styles
27

Ramasamy, Prabha, and Mohan Kabadi. "An autonomous navigational system using GPS and computer vision for futuristic road traffic." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 1 (February 1, 2022): 179. http://dx.doi.org/10.11591/ijece.v12i1.pp179-188.

Full text
Abstract:
Navigational service is one of the most essential dependencies of any transport system, and at present there are various revolutionary approaches that have contributed towards its improvement. This paper reviews global positioning system (GPS) and computer vision based navigational systems and finds that there is a large gap between the actual demand for navigation and what currently exists. Therefore, the proposed study discusses a novel framework for an autonomous navigation system that uses GPS as well as computer vision, considering the case study of a futuristic road traffic system. An analytical model is built in which the geo-referenced data from GPS are integrated with the signals captured from the visual sensors to implement this concept. The simulated outcome of the study shows that the proposed approach offers enhanced accuracy as well as faster processing in contrast to existing approaches.
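The integration of geo-referenced GPS data with visual-sensor estimates can be illustrated with inverse-variance weighting of two independent position measurements, the simplest minimum-variance fusion rule. This is a generic one-dimensional sketch, not the paper's analytical model; the function name and numbers are illustrative.

```python
def fuse(gps_pos, gps_var, vis_pos, vis_var):
    """Fuse two independent 1-D position estimates by inverse-variance
    weighting: the more certain sensor gets the larger weight, and the
    fused variance is smaller than either input variance."""
    w_gps = 1.0 / gps_var
    w_vis = 1.0 / vis_var
    pos = (w_gps * gps_pos + w_vis * vis_pos) / (w_gps + w_vis)
    var = 1.0 / (w_gps + w_vis)
    return pos, var

# GPS says 10.0 m (variance 4.0); vision says 12.0 m (variance 1.0)
pos, var = fuse(10.0, 4.0, 12.0, 1.0)
print(pos, var)  # → 11.6 0.8
```

The fused estimate lands closer to the lower-variance vision measurement, and the fused variance (0.8) beats both sensors alone; a Kalman filter applies this same rule recursively over time.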
APA, Harvard, Vancouver, ISO, and other styles
28

Babar, Muhammad, Mohammad Dahman Alshehri, Muhammad Usman Tariq, Fasee Ullah, Atif Khan, M. Irfan Uddin, and Ahmed S. Almasoud. "IoT-Enabled Big Data Analytics Architecture for Multimedia Data Communications." Wireless Communications and Mobile Computing 2021 (December 17, 2021): 1–9. http://dx.doi.org/10.1155/2021/5283309.

Full text
Abstract:
The present spread of the Internet of Things (IoT) has resulted in millions of IoT devices connected to the Internet. With the increase in connected devices, the gigantic multimedia big data (MMBD) vision is also gaining prominence and has been broadly acknowledged. MMBD management offers computation, exploration, storage, and control to resolve the QoS issues for multimedia data communications. However, it becomes challenging for multimedia systems to tackle the diverse multimedia-enabled IoT settings, including healthcare, traffic videos, automation, society parking images, and surveillance, that produce a massive amount of big multimedia data to be processed and analyzed efficiently. There are several challenges in the existing structural design of IoT-enabled data management systems in handling MMBD, including high-volume storage and processing of data, data heterogeneity due to various multimedia sources, and intelligent decision-making. In this article, an architecture is proposed to process and store MMBD efficiently in an IoT-enabled environment. The proposed architecture is a layered architecture integrated with a parallel and distributed module to accomplish big data analytics for multimedia data. A preprocessing module is also integrated with the proposed architecture to prepare the MMBD and speed up the processing mechanism. The proposed system is realized and experimentally tested using real-time multimedia big data sets from authentic sources, which demonstrates the effectiveness of the proposed architecture.
APA, Harvard, Vancouver, ISO, and other styles
29

Shekan, Raid Abd Alreda, Ahmed Mahdi Abdulkadium, and Hiba Ameer Jabir. "Data Mining and Knowledge Discovery for Big Data in Cloud Environment." Webology 18, Special Issue 04 (September 30, 2021): 1118–31. http://dx.doi.org/10.14704/web/v18si04/web18186.

Full text
Abstract:
In the past few decades, big data has evolved as a modern framework that offers huge amounts of data and possibilities for applying and/or promoting analysis and decision-making technologies with unparalleled importance for digital processes in organizations, engineering and science. Because of the new methods in these domains, the paper discusses the history of big data mining in the cloud computing environment. In addition to the pursuit of knowledge discovery, the big data revolution gives companies many exciting possibilities (in relation to new vision, decision making and business growth strategies). The prospect of developing large-scale data processing, data analytics, and evaluation through a cloud computing model has been explored. The key component of this paper is the technical description of how to use cloud computing and of the uses of data mining techniques and analytics methods in predictive and decision support systems.
APA, Harvard, Vancouver, ISO, and other styles
30

Blascheck, Tanja, Markus John, Kuno Kurzhals, Steffen Koch, and Thomas Ertl. "VA2: A Visual Analytics Approach for Evaluating Visual Analytics Applications." IEEE Transactions on Visualization and Computer Graphics 22, no. 1 (January 31, 2016): 61–70. http://dx.doi.org/10.1109/tvcg.2015.2467871.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Cavicchioli, Roberto, Riccardo Martoglia, and Micaela Verucchi. "A Novel Real-Time Edge-Cloud Big Data Management and Analytics Framework for Smart Cities." JUCS - Journal of Universal Computer Science 28, no. 1 (January 28, 2022): 3–26. http://dx.doi.org/10.3897/jucs.71645.

Full text
Abstract:
Exposing city information to dynamic, distributed, powerful, scalable, and user-friendly big data systems is expected to enable the implementation of a wide range of new opportunities; however, the size, heterogeneity and geographical dispersion of data often make it difficult to combine, analyze and consume them in a single system. In the context of the H2020 CLASS project, we describe an innovative framework aiming to facilitate the design of advanced big-data analytics workflows. The proposal covers the whole compute continuum, from edge to cloud, and relies on a well-organized distributed infrastructure exploiting: a) edge solutions with advanced computer vision technologies enabling the real-time generation of "rich" data from a vast array of sensor types; b) cloud data management techniques offering efficient storage, real-time querying and updating of the high-frequency incoming data at different granularity levels. We specifically focus on obstacle detection and tracking for edge processing, and consider a traffic density monitoring application, with hierarchical data aggregation features for cloud processing; the discussed techniques will constitute the groundwork enabling many further services. The tests are performed on the real use case of the Modena Automotive Smart Area (MASA).
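The hierarchical data aggregation idea for cloud processing can be illustrated by rolling per-sensor vehicle counts up to zone level over coarser time buckets. This is a generic sketch with hypothetical sensor and zone names, not the CLASS framework's API.

```python
from collections import defaultdict

def aggregate(readings, zone_of, bucket=60):
    """Roll up per-sensor counts (timestamp_s, sensor_id, count) into
    per-zone totals over time buckets of `bucket` seconds."""
    zones = defaultdict(int)
    for t, sensor, count in readings:
        zones[(int(t // bucket) * bucket, zone_of[sensor])] += count
    return dict(zones)

# Hypothetical cameras mapped to city zones; counts over ~70 seconds
zone_of = {"cam1": "north", "cam2": "north", "cam3": "south"}
readings = [(5, "cam1", 3), (42, "cam2", 2), (61, "cam1", 4), (70, "cam3", 1)]
print(aggregate(readings, zone_of))
# → {(0, 'north'): 5, (60, 'north'): 4, (60, 'south'): 1}
```

The same roll-up, applied again with a larger bucket and a zone-to-city map, gives the next granularity level of the hierarchy.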
APA, Harvard, Vancouver, ISO, and other styles
32

Nayyar, Anand, Pijush Kanti Dutta Pramankit, and Rajni Mohana. "Introduction to the Special Issue on Evolving IoT and Cyber-Physical Systems: Advancements, Applications, and Solutions." Scalable Computing: Practice and Experience 21, no. 3 (August 1, 2020): 347–48. http://dx.doi.org/10.12694/scpe.v21i3.1568.

Full text
Abstract:
Internet of Things (IoT) is regarded as a next-generation wave of Information Technology (IT) after the widespread emergence of the Internet and mobile communication technologies. IoT supports information exchange and networked interaction of appliances, vehicles and other objects, making sensing and actuation possible in a low-cost and smart manner. On the other hand, cyber-physical systems (CPS) are described as engineered systems built upon the tight integration of cyber entities (e.g., computation, communication, and control) and physical things (natural and man-made systems governed by the laws of physics). IoT and CPS are not isolated technologies. Rather, it can be said that IoT is the base or enabling technology for CPS, and CPS is considered the grown-up development of IoT, completing the IoT notion and vision. Both are merged into a closed loop, providing mechanisms for conceptualizing and realizing all aspects of the networked composed systems that are monitored and controlled by computing algorithms and are tightly coupled between users and the Internet. That is, the hardware and software entities are intertwined, and they typically function on different time and location-based scales. In fact, the linking between the cyber and the physical world is enabled by IoT (through sensors and actuators). CPS, which includes traditional embedded and control systems, is expected to be transformed by the evolving and innovative methodologies and engineering of IoT. Several application areas of IoT and CPS are smart building, smart transport, automated vehicles, smart cities, smart grid, smart manufacturing, smart agriculture, smart healthcare, smart supply chain and logistics, etc. Though CPS and IoT have significant overlaps, they differ in terms of engineering aspects.
Engineering IoT systems revolves around uniquely identifiable and internet-connected devices and embedded systems, whereas engineering CPS requires a strong emphasis on the relationship between computation aspects (complex software) and physical entities (hardware). Engineering CPS is challenging because there is no defined and fixed boundary and relationship between the cyber and physical worlds. In CPS, diverse constituent parts are composed and collaborate to create unified systems with global behaviour. These systems need to be ensured in terms of dependability, safety, security, efficiency, and adherence to real-time constraints. Hence, designing CPS requires knowledge of multidisciplinary areas such as sensing technologies, distributed systems, pervasive and ubiquitous computing, real-time computing, computer networking, control theory, signal processing, embedded systems, etc. CPS, along with the continuously evolving IoT, has posed several challenges. For example, the enormous amount of data collected from physical things makes Big Data management and analytics difficult, including data normalization, data aggregation, data mining, pattern extraction and information visualization. Similarly, future IoT and CPS need standardized abstraction and architecture that will allow modular designing and engineering of IoT and CPS in global and synergetic applications. Another challenging concern of IoT and CPS is the security and reliability of the components and systems. Although IoT and CPS have attracted the attention of the research communities and several ideas and solutions have been proposed, there are still huge possibilities for innovative propositions to make the IoT and CPS vision successful.
The major challenges and research scopes include system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, designing efficient algorithms and protocols, availability and reliability, security and privacy, energy-efficiency and sustainability, etc. It is our great privilege to present Volume 21, Issue 3 of Scalable Computing: Practice and Experience. We received 30 research papers, of which 14 were selected for publication. The objective of this special issue is to explore and report recent advances and disseminate state-of-the-art research related to IoT, CPS and the enabling and associated technologies. The special issue will present new dimensions of research to researchers and industry professionals with regard to IoT and CPS. Vivek Kumar Prasad and Madhuri D Bhavsar, in the paper titled "Monitoring and Prediction of SLA for IoT based Cloud", described mechanisms for monitoring, using the concept of reinforcement learning, and for prediction of cloud resources, which form critical parts of cloud expertise in support of controlling and evolution of IT resources, implemented using LSTM. Proper utilization of the resources will generate revenue for the provider and also increase the trust factor of the provider of cloud services. For experimental analysis, four parameters were used, i.e., CPU utilization, disk read/write throughput and memory utilization. Kasture et al., in the paper titled "Comparative Study of Speaker Recognition Techniques in IoT Devices for Text Independent Negative Recognition", compared the performance of features used in state-of-the-art speaker recognition models and analysed variants of Mel frequency cepstrum coefficients (MFCC), predominantly used in feature extraction, which can be further incorporated and used in various smart devices.
Mahesh Kumar Singh and Om Prakash Rishi, in the paper titled "Event Driven Recommendation System for E-Commerce using Knowledge based Collaborative Filtering Technique", proposed a novel system that uses a knowledge base generated from a knowledge graph to identify the domain knowledge of users, items, and the relationships among these; a knowledge graph is a labelled multidimensional directed graph that represents the relationships among the users and the items. The proposed approach uses nearly 100 percent of users' participation in the form of activities during navigation of the web site. Thus, the system captures the users' interests, which is beneficial for both seller and buyer. The proposed system is compared with baseline methods in the area of recommendation systems using three parameters, precision, recall and NDCG, through online and offline evaluation studies with user data, and it is observed that the proposed system is better than the other baseline systems. Benbrahim et al., in the paper titled "Deep Convolutional Neural Network with TensorFlow and Keras to Classify Skin Cancer", proposed a novel classification model to classify skin tumours in images using deep learning methodology; the proposed system was tested on the HAM10000 dataset comprising 10,015 dermatoscopic images, and the results show an accuracy of 94.06% on the validation set and 93.93% on the test set. Devi B et al., in the paper titled "Deadlock Free Resource Management Technique for IoT-Based Post Disaster Recovery Systems", proposed a new class of techniques that do not perform stringent testing before allocating the resources but still ensure that the system is deadlock-free and the overhead is minimal. The proposed technique suggests reserving a portion of the resources to ensure no deadlock will occur. The correctness of the technique is proved in the form of theorems.
The average turnaround time is approximately 18% lower for the proposed technique than for Banker's algorithm, with an optimal overhead of O(m). Deep et al., in the paper titled "Access Management of User and Cyber-Physical Device in DBAAS According to Indian IT Laws Using Blockchain", proposed a novel blockchain solution to track the activities of employees managing the cloud. Employee authentication and authorization are managed through the blockchain server, and user-authentication-related data is stored in the blockchain. The proposed work assists cloud companies in having better control over their employees' activities, thus helping to prevent insider attacks on users and cyber-physical devices. Sumit Kumar and Jaspreet Singh, in the paper titled "Internet of Vehicles (IoV) over VANETS: Smart and Secure Communication using IoT", provided a detailed description of the Internet of Vehicles (IoV) with current applications, architectures, communication technologies, routing protocols and different issues. The researchers also elaborated on research challenges and the trade-off between security and privacy in the area of IoV. Deore et al., in the paper titled "A New Approach for Navigation and Traffic Signs Indication Using Map Integrated Augmented Reality for Self-Driving Cars", proposed a new approach to supplement the perception technology used in self-driving cars. The proposed approach uses augmented reality to create and augment artificial objects for navigational signs and traffic signals based on the vehicle's location. This approach helps navigate the vehicle even if the road infrastructure does not have good sign indications and markings. The approach was tested locally by creating a local navigational system and a smartphone-based augmented reality app, and it performed better than the conventional method, as the objects were clearer in the frame, which made it easier for the object detector to detect them. Bhardwaj et al.
in the paper titled "A Framework to Systematically Analyse the Trustworthiness of Nodes for Securing IoV Interactions", reviewed the literature on IoV and trust and proposed a hybrid trust model that separates malicious and trusted nodes to secure the interactions of vehicles in IoV. To test the model, simulations were conducted with varied threshold values, and the results show that the PDR of a trusted node is 0.63, which is higher than the PDR of a malicious node at 0.15. On the basis of PDR, the number of available hops and trust dynamics, the malicious nodes are identified and discarded. Saniya Zahoor and Roohie Naaz Mir, in the paper titled "A Parallelization Based Data Management Framework for Pervasive IoT Applications", highlighted recent studies and related information on data management for pervasive IoT applications with limited resources. The paper also proposes a parallelization-based data management framework for resource-constrained pervasive IoT applications. The proposed framework is compared with the sequential approach through simulations and empirical data analysis, and the results show an improvement in energy, processing, and storage requirements for the processing of data on the IoT device. Patel et al., in the paper titled "Performance Analysis of Video ON-Demand and Live Video Streaming Using Cloud Based Services", presented a review of video analysis over live video streaming (LVS) and video-on-demand (VoD) applications. The researchers compared different messaging brokers that help deliver each frame in a distributed pipeline, analysing the impact of two message brokers on video analysis to achieve LVS and VoD using AWS Elemental services. In addition, the researchers analysed the Kafka configuration parameters for reliability in full-service mode.
Saniya Zahoor and Roohie Naaz Mir, in the paper titled "Design and Modeling of Resource-Constrained IoT Based Body Area Networks", presented the design and modeling of a resource-constrained BAN system and discussed various scenarios of BAN in the context of resource constraints. The researchers also proposed an Advanced Edge Clustering (AEC) approach to manage the resources, such as energy, storage, and processing, of BAN devices while performing real-time data capture of critical health parameters and detection of abnormal patterns. The AEC approach is compared with the Stable Election Protocol (SEP) through simulations and empirical data analysis, and the results show an improvement in energy, processing time and storage requirements for the processing of data on BAN devices. Neelam Saleem Khan and Mohammad Ahsan Chishti, in the paper titled "Security Challenges in Fog and IoT, Blockchain Technology and Cell Tree Solutions: A Review", outlined major authentication issues in IoT, mapped their existing solutions and tabulated Fog and IoT security loopholes. Furthermore, the paper presents Blockchain, a decentralized distributed technology, as one of the solutions for authentication issues in IoT. In addition, the researchers discussed the strengths of Blockchain technology, the work done in this field and its adoption in the fight against COVID-19, and tabulated various challenges in Blockchain technology. The researchers also proposed the Cell Tree architecture as another solution to address some of the security issues in IoT, outlined its advantages over Blockchain technology and suggested some future directions to stimulate attempts in this area. Bhadwal et al., in the paper titled "A Machine Translation System from Hindi to Sanskrit Language Using Rule Based Approach", proposed a rule-based machine translation system to bridge the language barrier between Hindi and Sanskrit by converting any text in Hindi to Sanskrit.
The results are produced in the form of two confusion matrices, wherein a total of 50 random sentences and 100 tokens (Hindi words or phrases) were taken for system evaluation. The semantic evaluation of 100 tokens produces an accuracy of 94%, while the pragmatic analysis of 50 sentences produces an accuracy of around 86%. Hence, the proposed system can be used to understand the whole translation process and can further be employed as a tool for learning as well as teaching. Further, this application can be embedded in local-communication-based assisting Internet of Things (IoT) devices like Alexa or Google Assistant. Anshu Kumar Dwivedi and A.K. Sharma, in the paper titled "NEEF: A Novel Energy Efficient Fuzzy Logic Based Clustering Protocol for Wireless Sensor Network", proposed a deterministic novel energy-efficient fuzzy-logic-based clustering protocol (NEEF) that considers primary and secondary factors in the fuzzy logic system while selecting cluster heads. After selection of cluster heads, non-cluster-head nodes use fuzzy logic for prudent selection of their cluster head for cluster formation. NEEF is simulated and compared with two recent state-of-the-art protocols, namely SCHFTL and DFCR, under two scenarios. Simulation results unveil better performance in balancing the load and improvements in terms of stability period, packets forwarded to the base station, average energy and extended lifetime.
APA, Harvard, Vancouver, ISO, and other styles
33

Smuc, Michael. "Just the other side of the coin? From error to insight analysis." Information Visualization 15, no. 4 (July 26, 2016): 312–24. http://dx.doi.org/10.1177/1473871615598641.

Full text
Abstract:
To shed more light on data explorers dealing with complex information visualizations in real-world scenarios, new methodologies and models are needed which overcome existing explanatory gaps. Therefore, a novel model to analyze users’ errors and insights is outlined that is derived from Rasmussen’s model on different levels of cognitive processing, and integrates explorers’ skills, schemes, and knowledge (skill–rule–knowledge model). After locating this model in the landscape of theories for visual analytics, the main building blocks of the model, where three cognitive processing levels are interlinked, are described in detail. A case study illustrates how the cognitive processing efforts can be identified from triangulated eye-tracking and think-aloud data. Finally, the model’s applicability, challenges in measurement, and future research options are discussed.
APA, Harvard, Vancouver, ISO, and other styles
34

Buhmeida, Abdelbaset, Yrjo Collan, Kari Syrjanen, and Seppo Pyrhonen. "DNA IMAGE CYTOMETRY IN PROGNOSTICATION OF COLORECTAL CANCER: PRACTICAL CONSIDERATIONS OF THE TECHNIQUE AND INTERPRETATION OF THE HISTOGRAMS." Image Analysis & Stereology 25, no. 1 (May 3, 2011): 1. http://dx.doi.org/10.5566/ias.v25.p1-12.

Full text
Abstract:
The role of DNA content as a prognostic factor in colorectal cancer (CRC) is highly controversial. Some of these controversies are due to purely technical reasons, e.g., variable practices in interpreting the DNA histograms, which is problematic particularly in advanced cases. In this report, we give a detailed account of various options for how these histograms could be optimally interpreted, with the idea of establishing the potential value of DNA image cytometry in prognosis and in the selection of proper treatment. The material consists of nuclei isolated from 50 µm paraffin sections from 160 patients with stage II, III or IV CRC diagnosed, treated and followed up in our clinic. The nuclei were stained with the Feulgen stain. Nuclear DNA was measured using computer-assisted image cytometry. We applied four different approaches to analyse the DNA histograms: 1) appearance of the histogram (ABCDE approach), 2) range of DNA values, 3) peak evaluation, and 4) events present at high DNA values. Intra-observer reproducibility of these four histogram interpretation approaches was 89%, 95%, 96%, and 100%, respectively. We depict selected histograms to illustrate the four analytical approaches in cases with different stages of CRC and variable disease outcome. In our analysis, the range of DNA values was the best prognosticator, i.e., the tumours with the widest histograms had the most ominous prognosis. These data indicate that DNA cytometry based on isolated nuclei is valuable in predicting the prognosis of CRC. The different interpretation techniques differed in their reproducibility, but the method showing the best prognostic value also had high reproducibility in our analysis.
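The "range of DNA values" criterion, reported above as the best prognosticator, can be sketched as a trimmed width of the DNA histogram: the wider the spread of per-nucleus DNA content, the more ominous the prognosis. The percentile cut-offs and values below are hypothetical, not those used in the paper.

```python
def histogram_range(dna_values, low_pct=2.5, high_pct=97.5):
    """Width of a DNA-content histogram, trimmed of extreme outliers:
    the spread between the low and high percentiles of the values
    (linear interpolation between sorted samples)."""
    vals = sorted(dna_values)

    def pct(p):
        k = (len(vals) - 1) * p / 100.0
        lo, hi = int(k), min(int(k) + 1, len(vals) - 1)
        return vals[lo] + (vals[hi] - vals[lo]) * (k - lo)

    return pct(high_pct) - pct(low_pct)

# Toy per-nucleus DNA content (in c units): a tight near-diploid tumour
# versus a widely spread aneuploid one
narrow = [2.0, 2.1, 1.9, 2.0, 2.05, 1.95]
wide = [2.0, 2.2, 3.5, 5.0, 6.5, 8.0]
print(histogram_range(narrow) < histogram_range(wide))  # → True
```

Under this criterion the "wide" sample would flag the poorer prognosis; the trimming step guards the width statistic against single aberrant measurements.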
APA, Harvard, Vancouver, ISO, and other styles
35

Pak Chung Wong, H. Foote, G. Chin, P. Mackey, and K. Perrine. "Graph Signatures for Visual Analytics." IEEE Transactions on Visualization and Computer Graphics 12, no. 6 (November 2006): 1399–413. http://dx.doi.org/10.1109/tvcg.2006.92.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Xie, Danfeng, Lei Zhang, and Li Bai. "Deep Learning in Visual Computing and Signal Processing." Applied Computational Intelligence and Soft Computing 2017 (2017): 1–13. http://dx.doi.org/10.1155/2017/1320780.

Full text
Abstract:
Deep learning is a subfield of machine learning which aims to learn a hierarchy of features from input data. Nowadays, researchers have intensively investigated deep learning algorithms for solving challenging problems in many areas such as image classification, speech recognition, signal processing, and natural language processing. In this study, we not only review typical deep learning algorithms in computer vision and signal processing but also provide detailed information on how to apply deep learning to specific areas such as road crack detection, fault diagnosis, and human activity detection. In addition, this study discusses the challenges of designing and training deep neural networks.
APA, Harvard, Vancouver, ISO, and other styles
37

Zhou, Jianlong, and Fang Chen. "DecisionMind: revealing human cognition states in data analytics-driven decision making with a multimodal interface." Journal on Multimodal User Interfaces 12, no. 2 (October 3, 2017): 67–76. http://dx.doi.org/10.1007/s12193-017-0249-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Melnikov, Boris, and Yulia Terentyeva. "An approach for obtaining estimation of stability of large communication network taking into account its dependent paths." Cybernetics and Physics, Volume 11, 2022, Number 3 (November 17, 2022): 145–50. http://dx.doi.org/10.35470/2226-4116-2022-11-3-145-150.

Full text
Abstract:
The physical layer transmits bits over physical communication channels, such as coaxial cable or twisted pair; that is, it is this layer that directly transmits data, and at this level the characteristics of the electrical signals that carry discrete information are determined. On top of this, it is necessary to consider the control of the communication network and its various algorithms. When designing a communication network, a prerequisite is to calculate its stability, and in the case of large-scale communication networks this is a big problem. The most common deterministic, and also fairly fast approximate, method currently implemented estimates the stability of a communication direction by analyzing independent paths only. The main disadvantage of this method is that it yields an understated estimate of stability, because dependent routes of communication directions are not accounted for, and this leads to inefficient use of resources. Our proposed methodology takes into account not only independent paths but also dependent ones, which is the basis for obtaining a significantly more correct estimate. It is based on an algorithm for checking the presence of a certain path and, building on it, an algorithm for obtaining an exact assessment of stability. The paper also provides analytical and statistical analysis of the considered algorithms. In particular, a special parameter was introduced that characterizes the probability of a communication-line failure event in which the number of failed communication lines lies in a certain specified range; a study of the function describing this parameter was then carried out.
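The gap between independent-path estimates and exact stability can be reproduced on the classic five-link "bridge" network: enumerating all edge states gives the exact two-terminal reliability, while multiplying over disjoint paths only gives the understated bound the abstract describes. This is a generic illustration of the effect, not the authors' algorithm, and state enumeration is only feasible for small networks.

```python
from itertools import product

def reliability(edges, s, t, p):
    """Exact probability that s and t remain connected when each
    undirected edge is up independently with probability p
    (brute-force enumeration over all 2^|edges| states)."""
    total = 0.0
    for states in product([True, False], repeat=len(edges)):
        prob = 1.0
        for up in states:
            prob *= p if up else (1.0 - p)
        # graph search over the surviving edges
        alive = [e for e, up in zip(edges, states) if up]
        seen, frontier = {s}, [s]
        while frontier:
            u = frontier.pop()
            for a, b in alive:
                v = b if a == u else a if b == u else None
                if v is not None and v not in seen:
                    seen.add(v)
                    frontier.append(v)
        if t in seen:
            total += prob
    return total

# Bridge network: two node-disjoint s-t paths plus a cross link a-b
edges = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t"), ("a", "b")]
p = 0.9
exact = reliability(edges, "s", "t", p)
# Independent-path estimate uses only the disjoint paths s-a-t and s-b-t,
# ignoring the dependent routes through the a-b cross link
independent = 1 - (1 - p**2) ** 2
print(round(exact, 5), round(independent, 4))  # → 0.97848 0.9639
```

The exact value exceeds the independent-path bound because the cross link adds the dependent routes s-a-b-t and s-b-a-t; sizing resources to the understated bound would over-provision the network.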
APA, Harvard, Vancouver, ISO, and other styles
39

Liu, Peng, and Yan Song. "A Hybrid Vision Processing Unit with a Pipelined Workflow for Convolutional Neural Network Accelerating and Image Signal Processing." Electronics 10, no. 23 (December 1, 2021): 2989. http://dx.doi.org/10.3390/electronics10232989.

Full text
Abstract:
Vision processing chips have been widely used in image processing and recognition tasks. They are conventionally designed based on image signal processing (ISP) units directly connected to the sensors. In recent years, convolutional neural networks (CNNs) have become the dominant tools for many state-of-the-art vision processing tasks. However, CNNs cannot be processed by a conventional vision processing unit (VPU) at high speed. On the other hand, CNN processing units cannot process the RAW images from the sensors directly, so an ISP unit is required. This makes a vision system inefficient, with a lot of data transmission and redundant hardware resources. Additionally, many CNN processing units suffer from low flexibility for various CNN operations. To solve this problem, this paper proposes an efficient vision processing unit based on a hybrid processing-element array for both CNN acceleration and ISP. Resources are highly shared in this VPU, and a pipelined workflow is introduced to accelerate the vision tasks. We implement the proposed VPU on a Field-Programmable Gate Array (FPGA) platform, and various vision tasks are tested on it. The results show that this VPU achieves high efficiency for both CNN processing and ISP and a significant reduction in energy consumption for vision tasks consisting of CNNs and ISP. For various CNN tasks, it maintains an average multiply-accumulator utilization of over 94% and achieves a performance of 163.2 GOPS at a frequency of 200 MHz.
APA, Harvard, Vancouver, ISO, and other styles
40

Martin, Jon, David Cantero, Maite González, Andrea Cabrera, Mikel Larrañaga, Evangelos Maltezos, Panagiotis Lioupis, et al. "Embedded Vision Intelligence for the Safety of Smart Cities." Journal of Imaging 8, no. 12 (December 14, 2022): 326. http://dx.doi.org/10.3390/jimaging8120326.

Full text
Abstract:
Advances in artificial intelligence (AI) and embedded systems have resulted in a recent increase in the use of image processing applications for smart cities' safety. This enables a cost-adequate scale of automated video surveillance, increasing the data available and reducing the need for human intervention. At the same time, although deep learning is a very intensive task in terms of computing resources, hardware and software improvements have emerged, allowing embedded systems to implement sophisticated machine learning algorithms at the edge. Additionally, new lightweight open-source middleware for constrained-resource devices, such as EdgeX Foundry, has appeared to facilitate the collection and processing of data at the sensor level, with communication capabilities to exchange data with a cloud enterprise application. The objective of this work is to show and describe the development of two edge smart camera systems for the safety of smart cities within the S4AllCities H2020 project. Hence, the work presents hardware and software modules developed within the project, including a custom hardware platform specifically developed for the deployment of deep learning models based on the i.MX 8M Plus from NXP, which considerably reduces processing and inference times; a custom Video Analytics Edge Computing (VAEC) system deployed on a commercial NVIDIA Jetson TX2 platform, which provides high-level results on person detection processes; and an edge computing framework for the management of those two edge devices, namely the Distributed Edge Computing framework, DECIoT. To verify the utility and functionality of the systems, extensive experiments were performed. The results highlight their potential to provide enhanced situational awareness and demonstrate their suitability for edge machine-vision applications for safety in smart cities.
APA, Harvard, Vancouver, ISO, and other styles
41

Huang, Xiaohua, Abhinav Dhall, Guoying Zhao, Wenming Zheng, and Matti Pietikäinen. "Editorial for the special issue of IMAVIS on automatic face analytics for human behavior understanding." Image and Vision Computing 110 (June 2021): 104185. http://dx.doi.org/10.1016/j.imavis.2021.104185.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

El rai, Marwa Chendeb, Muna Darweesh, and Mina Al-Saad. "Semi-Supervised Segmentation of Echocardiography Videos Using Graph Signal Processing." Electronics 11, no. 21 (October 26, 2022): 3462. http://dx.doi.org/10.3390/electronics11213462.

Full text
Abstract:
Machine learning and computer vision algorithms can provide a precise and automated interpretation of medical videos. The segmentation of the left ventricle in echocardiography videos plays an essential role in cardiology for carrying out clinical cardiac diagnosis and monitoring the patient’s condition. Most of the deep learning algorithms developed for video segmentation require an enormous amount of labeled data to generate accurate results. Thus, there is a need to develop new semi-supervised segmentation methods, given the scarcity and cost of labeled data. In recent research, semi-supervised learning approaches based on graph signal processing have emerged in computer vision due to their ability to exploit the geometrical structure of the data. Video object segmentation can then be considered as a node classification problem. In this paper, we propose a new approach called GraphECV, based on graph signal processing for semi-supervised video object segmentation, applied to the segmentation of the left ventricle in echocardiography videos. GraphECV includes instance segmentation; extraction of temporal, texture, and statistical features to represent the nodes; construction of a graph using K-nearest neighbors; graph sampling to embed the graph with a small number of labeled nodes, or graph signals; and, finally, a semi-supervised learning approach based on minimization of the Sobolev norm of graph signals. The new algorithm is evaluated on two publicly available echocardiography video datasets, EchoNet-Dynamic and CAMUS. The proposed approach outperforms other state-of-the-art methods under challenging background conditions.
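The pipeline in this abstract (KNN graph over node features, a few labeled nodes, then Sobolev-energy minimization) can be sketched in a few lines. This is a minimal sketch, not the authors' GraphECV implementation: the features, `k`, and the regularization `eps` are illustrative, and the Sobolev term is taken at order 1, i.e. the energy x^T (L + eps*I) x.

```python
import numpy as np


def knn_graph(X, k=3):
    """Symmetrised k-nearest-neighbour adjacency matrix (Euclidean distance)."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a node is not its own neighbour
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d[i])[:k]:
            W[i, j] = W[j, i] = 1.0
    return W


def sobolev_ssl(W, labels, eps=0.1):
    """Propagate the few known labels by minimising the order-1 Sobolev
    energy x^T (L + eps*I) x subject to x agreeing with the labelled nodes."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W  # combinatorial graph Laplacian
    M = L + eps * np.eye(n)
    known = np.array([v is not None for v in labels])
    x = np.zeros(n)
    x[known] = [v for v in labels if v is not None]
    # Stationarity of the energy on the unlabelled block: M_uu x_u = -M_uk x_k
    uu = M[np.ix_(~known, ~known)]
    uk = M[np.ix_(~known, known)]
    x[~known] = np.linalg.solve(uu, -uk @ x[known])
    return x
```

Thresholding the propagated signal at 0.5 then assigns each unlabelled node (here, a candidate segment) to the foreground or background class.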
APA, Harvard, Vancouver, ISO, and other styles
43

Maciejewski, R., R. Hafen, S. Rudolph, S. G. Larew, M. A. Mitchell, W. S. Cleveland, and D. S. Ebert. "Forecasting Hotspots—A Predictive Analytics Approach." IEEE Transactions on Visualization and Computer Graphics 17, no. 4 (April 2011): 440–53. http://dx.doi.org/10.1109/tvcg.2010.82.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Sivaramaraju, Vetukuri, Nilambar Sethi, and Renugunta Rajender. "Heuristics for Winner Prediction in International Cricket Matches." Statistics, Optimization & Information Computing 8, no. 2 (May 28, 2020): 602–9. http://dx.doi.org/10.19139/soic-2310-5070-648.

Full text
Abstract:
Cricket is popularly known as the game of gentlemen, and was introduced to the world by England. Since its introduction, it has become the second most popular game. In this context, a few data mining and analytical techniques have been proposed for match outcome prediction. In this work, two different scenarios have been considered for the prediction of the winning team based on several parameters. These scenarios correspond to two standard formats of the game, namely one-day international (ODI) cricket and twenty-twenty (T-20) cricket. The prediction approaches differ from each other in the types of parameters considered and the corresponding functional strategies. The strategies proposed here adopt two different approaches: one predicts the winner of one-day matches, and the other predicts the winner of a T-20 match. The approaches have been proposed separately for the two versions of the game owing to the intra-variability in the strategies adopted by teams and individual players in each. The proposed strategies for the two scenarios have been individually evaluated against existing benchmark works, and in each case the two approaches outperformed the rest in terms of prediction accuracy. The novel heuristics proposed herein are thus efficient and accurate with respect to the prediction of cricket data.
APA, Harvard, Vancouver, ISO, and other styles
45

Bodlák, Karel, Arun Balasundarun M. Gokhale, and Viktor Beneš. "CHARACTERIZATION OF BIVARIATE SIZE-ORIENTATION DISTRIBUTION OF CIRCULAR PLATE PARTICLES." Image Analysis & Stereology 21, no. 3 (May 3, 2011): 175. http://dx.doi.org/10.5566/ias.v21.p175-181.

Full text
Abstract:
The paper is devoted to the stereological unfolding problem for the bivariate size-orientation distribution of plate-like particles in metallography. Gokhale (1996) derived an integral equation which relates this bivariate distribution in three-dimensional (3D) space to the corresponding size-orientation distribution on planar sections of the specimen. The present paper provides a numerical algorithm that transforms a bivariate histogram of observed quantities into the histogram of the 3D characteristics. The use of the method is demonstrated in examples with simulated data, where a simple analytical solution is available and can be compared with the estimation results. The spectrum of unfolding problems solved numerically (cf. Ohser and Muecklich, 2000) is thus extended.
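Numerical unfolding of this kind amounts to inverting a discretized version of the integral equation: a kernel matrix K maps the (flattened) 3D histogram to the observed section histogram, and the 3D histogram is recovered by solving K·h3D ≈ h2D under non-negativity. The sketch below uses a generic multiplicative (Richardson–Lucy-style) iteration; the kernel here is an illustrative placeholder, not Gokhale's actual integral kernel, and `unfold` is a hypothetical name.

```python
import numpy as np


def unfold(K, h2d, n_iter=1000):
    """Multiplicative (Richardson-Lucy-style) iteration for K @ h3d ~ h2d
    with h3d >= 0 enforced by construction. K maps the discretised 3D
    histogram to the observed planar-section histogram."""
    h3d = np.full(K.shape[1], h2d.sum() / K.shape[1])  # flat initial guess
    colsum = K.sum(axis=0)
    for _ in range(n_iter):
        pred = K @ h3d
        # ratio of observed to predicted counts; 1 where prediction is zero
        ratio = np.divide(h2d, pred, out=np.ones_like(pred), where=pred > 0)
        h3d *= (K.T @ ratio) / colsum
    return h3d
```

The multiplicative update keeps every bin non-negative at each step, which is the natural constraint for histogram unfolding; with consistent data and a full-rank kernel, the iteration converges toward the exact 3D histogram.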
APA, Harvard, Vancouver, ISO, and other styles
46

Brofferio, S., G. Mastronardi, and V. Rampa. "A migrating data-driven architecture for multidimensional signal processing." Signal Processing: Image Communication 3, no. 2-3 (June 1991): 249–57. http://dx.doi.org/10.1016/0923-5965(91)90013-r.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Jiao, Yutao, Ping Wang, Shaohan Feng, and Dusit Niyato. "Profit Maximization Mechanism and Data Management for Data Analytics Services." IEEE Internet of Things Journal 5, no. 3 (June 2018): 2001–14. http://dx.doi.org/10.1109/jiot.2018.2819706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Califano, Andrea, Rick Kjeldsen, and Ruud M. Bolle. "Data- and Model-Driven Multiresolution Processing." Computer Vision and Image Understanding 63, no. 1 (January 1996): 27–49. http://dx.doi.org/10.1006/cviu.1996.0003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Turdukulov, Ulanbek D., Connie A. Blok, Barend Köbben, and Javier Morales. "Challenges in data integration and interoperability in geovisual analytics." Journal of Location Based Services 4, no. 3-4 (September 2010): 166–82. http://dx.doi.org/10.1080/17489725.2010.532815.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Turdukulov, Ulanbek D., Connie A. Blok, Barend Köbben, Javier Morales, Nguyen Duc Ha, Nguyen Hoang Long, and Lizda Iswari. "Challenges in data integration and interoperability in geovisual analytics." Journal of Location Based Services 5, no. 1 (February 8, 2011): 58. http://dx.doi.org/10.1080/17489725.2011.554641.

Full text
APA, Harvard, Vancouver, ISO, and other styles
