Academic literature on the topic 'Computer vision in data analytics and signal processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computer vision in data analytics and signal processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Computer vision in data analytics and signal processing"

1

Höferlin, Benjamin, Markus Höferlin, Gunther Heidemann, and Daniel Weiskopf. "Scalable video visual analytics." Information Visualization 14, no. 1 (June 5, 2013): 10–26. http://dx.doi.org/10.1177/1473871613488571.

Full text
Abstract:
Video visual analytics is the research field that addresses scalable and reliable analysis of video data. The vast amount of video data in typical analysis tasks renders manual analysis by watching the video data impractical. However, automatic evaluation of video material is not reliable enough, especially when it comes to semantic abstraction from the video signal. In this article, we describe the video visual analytics method that combines the complementary strengths of human recognition and machine processing. After inspecting the challenges of scalable video analysis, we derive the main components of visual analytics for video data. Based on these components, we present our video visual analytics system that has its origins in our IEEE VAST Challenge 2009 participation.
APA, Harvard, Vancouver, ISO, and other styles
2

Chadebecq, François, Francisco Vasconcelos, Evangelos Mazomenos, and Danail Stoyanov. "Computer Vision in the Surgical Operating Room." Visceral Medicine 36, no. 6 (2020): 456–62. http://dx.doi.org/10.1159/000511934.

Full text
Abstract:
Background: Multiple types of surgical cameras are used in modern surgical practice and provide a rich visual signal that is used by surgeons to visualize the clinical site and make clinical decisions. This signal can also be used by artificial intelligence (AI) methods to provide support in identifying instruments, structures, or activities both in real-time during procedures and postoperatively for analytics and understanding of surgical processes. Summary: In this paper, we provide a succinct perspective on the use of AI and especially computer vision to power solutions for the surgical operating room (OR). The synergy between data availability and technical advances in computational power and AI methodology has led to rapid developments in the field and promising advances. Key Messages: With the increasing availability of surgical video sources and the convergence of technologies around video storage, processing, and understanding, we believe clinical solutions and products leveraging vision are going to become an important component of modern surgical capabilities. However, both technical and clinical challenges remain to be overcome to efficiently make use of vision-based approaches in the clinic.
APA, Harvard, Vancouver, ISO, and other styles
3

Lemenkova, Polina, Raphaël De Plaen, Thomas Lecocq, and Olivier Debeir. "Computer Vision Algorithms of DigitSeis for Building a Vectorised Dataset of Historical Seismograms from the Archive of Royal Observatory of Belgium." Sensors 23, no. 1 (December 21, 2022): 56. http://dx.doi.org/10.3390/s23010056.

Full text
Abstract:
Archived seismograms recorded in the 20th century present a valuable source of information for monitoring earthquake activity. However, old data, which are only available as scanned paper-based images, should be digitised and converted from raster to vector format prior to reuse for geophysical modelling. Seismograms have special characteristics and specific features recorded by a seismometer and encoded in the images: signal trace lines, minute time gaps, timing and wave amplitudes. This information should be recognised and interpreted automatically when processing archives of seismograms containing large collections of data. The objective was to automatically digitise historical seismograms obtained from the archives of the Royal Observatory of Belgium (ROB). The images were originally recorded by the Galitzine seismometer in 1954 at the Uccle seismic station, Belgium. The dataset included 145 TIFF images, which required an automatic approach to data processing. Software packages for digitising seismograms are limited and many have disadvantages. We applied DigitSeis for machine-based vectorisation and report here a full workflow of data processing. This included pattern recognition, classification, digitising, corrections and converting TIFFs to digital vector format. The generated contours of signals were presented as time series and converted into digital format (MAT files) containing the ground-motion signal information held in the analog seismograms. We performed quality control of the digitised traces in Python to evaluate the discriminating functionality of seismic signals by DigitSeis. We show a robust approach to DigitSeis as a powerful toolset for processing analog seismic signals. The graphical visualisation of signal traces and the analysis of the vectorisation results show that the data-processing algorithms performed accurately and can be recommended for similar applications of seismic signal processing in future geophysical research.
APA, Harvard, Vancouver, ISO, and other styles
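For readers interested in the raster-to-vector step described in the entry above, the sketch below shows one minimal way to pull a single trace line out of a binarised seismogram scan (column-wise centroid of dark pixels). It is an illustrative approximation, not the DigitSeis algorithm; the file path and thresholding choices are assumptions.

```python
import cv2
import numpy as np

def extract_trace(image_path):
    """Illustrative raster-to-vector step: binarise a scanned seismogram
    strip and take the column-wise centroid of dark pixels as the trace."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Dark ink on light paper: invert so the trace has high values.
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    rows = np.arange(mask.shape[0], dtype=np.float64)
    trace = []
    for x in range(mask.shape[1]):
        col = mask[:, x] > 0
        if col.any():
            trace.append(rows[col].mean())   # vertical centroid = signal amplitude
        else:
            trace.append(np.nan)             # minute gap or missing ink
    return np.asarray(trace)                 # one sample per pixel column
```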
4

Sarada, B., M. Vinayaka Murthy, and V. Udaya Rani. "Combined secure approach based on whale optimization to improve the data classification for data analytics." Pattern Recognition Letters 152 (December 2021): 327–32. http://dx.doi.org/10.1016/j.patrec.2021.10.018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gotz, David, and Harry Stavropoulos. "DecisionFlow: Visual Analytics for High-Dimensional Temporal Event Sequence Data." IEEE Transactions on Visualization and Computer Graphics 20, no. 12 (December 31, 2014): 1783–92. http://dx.doi.org/10.1109/tvcg.2014.2346682.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yuan, Xiaoru, He Xiao, Hanqi Guo, Peihong Guo, W. Kendall, Jian Huang, and Yongxian Zhang. "Scalable Multi-variate Analytics of Seismic and Satellite-based Observational Data." IEEE Transactions on Visualization and Computer Graphics 16, no. 6 (November 2010): 1413–20. http://dx.doi.org/10.1109/tvcg.2010.192.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kurzhals, Kuno, and Daniel Weiskopf. "Space-Time Visual Analytics of Eye-Tracking Data for Dynamic Stimuli." IEEE Transactions on Visualization and Computer Graphics 19, no. 12 (December 2013): 2129–38. http://dx.doi.org/10.1109/tvcg.2013.194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

He, Jialuan, Zirui Xing, Tianqi Xiang, Xin Zhang, Yinghai Zhou, Chuanyu Xi, and Hai Lu. "Wireless Signal Propagation Prediction Based on Computer Vision Sensing Technology for Forestry Security Monitoring." Sensors 21, no. 17 (August 24, 2021): 5688. http://dx.doi.org/10.3390/s21175688.

Full text
Abstract:
In this paper, Computer Vision (CV) sensing technology based on a Convolutional Neural Network (CNN) is introduced to process topographic maps for predicting wireless signal propagation models, which are applied in the field of forestry security monitoring. In this way, terrain-related radio propagation characteristics, including diffraction loss and shadow-fading correlation distance, can be predicted or extracted accurately and efficiently. Two data sets are generated for the two prediction tasks, respectively, and are used to train the CNN. To enhance the efficiency of the CNN in predicting diffraction losses, multiple output values for different locations on the map are obtained in parallel by the CNN to greatly boost the calculation speed. The proposed scheme achieved good performance in terms of prediction accuracy and efficiency. For the diffraction loss prediction task, 50% of the normalized prediction errors were less than 0.518%, and 95% were less than 8.238%. For the correlation distance extraction task, 50% of the normalized prediction errors were less than 1.747%, and 95% were less than 6.423%. Moreover, diffraction losses at 100 positions were predicted simultaneously in one run of the CNN under the settings in this paper, for which the processing time of one map is about 6.28 ms, and the average processing time of one location point can be as low as 62.8 µs. This paper shows that the proposed CV sensing technology is more efficient in processing the geographic information of the target area. By combining a convolutional neural network to closely couple the prediction model with the geographic information, it improves the efficiency and accuracy of prediction.
APA, Harvard, Vancouver, ISO, and other styles
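The entry above describes a CNN that regresses many diffraction-loss values from a terrain map in one forward pass. A minimal sketch of such a multi-output regressor is given below; the patch size, number of positions and layer sizes are assumptions for illustration, not the paper's architecture.

```python
import tensorflow as tf

# Hypothetical shapes: 64x64 terrain height patches, 100 receiver positions per map.
PATCH_SHAPE = (64, 64, 1)
N_POSITIONS = 100

def build_loss_predictor():
    """Minimal CNN regressor: terrain patch in, one diffraction-loss value
    per candidate receiver position out (predicted in parallel)."""
    inputs = tf.keras.Input(shape=PATCH_SHAPE)
    x = tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(128, activation="relu")(x)
    outputs = tf.keras.layers.Dense(N_POSITIONS)(x)   # one loss value (dB) per position
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model
```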
9

Xu, Bao Shu, and Ze Lin Shi. "Performance Bound of Position Estimation in Image Matching." Key Engineering Materials 500 (January 2012): 766–72. http://dx.doi.org/10.4028/www.scientific.net/kem.500.766.

Full text
Abstract:
Position estimation in image matching is a fundamental step in computer vision and image processing. To deal with the problem of performance prediction, we formulate it from a statistical parameter estimation perspective. The lower bound of the position estimation variance is obtained based on Cramer-Rao lower bound (CRLB) theory. This paper analyses the impact of noise on 1-D signal matching, derives the lower bound of the variance, and then extends it to 2-D image matching. Furthermore, we derive a numerical expression that can be computed from observed data. Finally, we use the Monte Carlo simulation method to verify the derived analytical expressions. Experimental results show that the derived CRLB is tight to the simulation-estimated variance. The CRLB can characterize the performance bound of position estimation in image matching.
APA, Harvard, Vancouver, ISO, and other styles
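For readers unfamiliar with the bound referenced in the entry above, the standard 1-D form of the CRLB for estimating the shift of a known template s(t) observed in white Gaussian noise of variance σ² is reproduced below. This is the textbook form, not necessarily the paper's exact final expression.

```latex
\operatorname{var}(\hat{\tau}) \;\ge\; \frac{1}{I(\tau)}
  \;=\; \frac{\sigma^{2}}{\sum_{n}\bigl[s'(t_{n})\bigr]^{2}}
```

Intuitively, the larger the derivative energy of the template (sharper features) and the lower the noise, the more precisely the position can be estimated.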
10

Wagner, Jorge, Wolfgang Stuerzlinger, and Luciana Nedel. "Comparing and Combining Virtual Hand and Virtual Ray Pointer Interactions for Data Manipulation in Immersive Analytics." IEEE Transactions on Visualization and Computer Graphics 27, no. 5 (May 2021): 2513–23. http://dx.doi.org/10.1109/tvcg.2021.3067759.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Computer vision in data analytics and signal processing"

1

Javadi, Mohammad Saleh. "Computer Vision Algorithms for Intelligent Transportation Systems Applications." Licentiate thesis, Blekinge Tekniska Högskola, Institutionen för matematik och naturvetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17166.

Full text
Abstract:
In recent years, Intelligent Transportation Systems (ITS) have emerged as an efficient way of enhancing traffic flow, safety and management. These goals are realized by combining various technologies and analyzing the acquired data from vehicles and roadways. Among all ITS technologies, computer vision solutions have the advantages of high flexibility, easy maintenance and a high price-performance ratio, which make them very popular for transportation surveillance systems. However, computer vision solutions are demanding and challenging due to computational complexity, reliability, efficiency and accuracy, among other aspects. In this thesis, three transportation surveillance systems based on computer vision are presented. These systems are able to interpret the image data and extract the information about the presence, speed and class of vehicles, respectively. The image data in these proposed systems are acquired using an Unmanned Aerial Vehicle (UAV) as a non-stationary source and a roadside camera as a stationary source. The goal of these works is to enhance the general accuracy and robustness of the systems under varying illumination and traffic conditions. This is a compilation thesis in systems engineering consisting of three parts. The red thread through each part is a transportation surveillance system. The first part presents a change detection system using aerial images of a cargo port. The extracted information shows how the space is utilized at various times, aiming for further management and development of the port. The proposed solution can be used at different viewpoints and illumination levels, e.g. at sunset. The method is able to transform the images taken from different viewpoints and match them together. Thereafter, it detects discrepancies between the images using a proposed adaptive local threshold. In the second part, a video-based vehicle speed estimation system is presented. The measured speeds are essential information for law enforcement and they also provide an estimation of traffic flow at certain points on the road. The system employs several intrusion lines to extract the movement pattern of each vehicle (non-equidistant sampling) as an input feature to the proposed analytical model. In addition, other parameters such as the camera sampling rate and the distances between intrusion lines are also taken into account to address the uncertainty in the measurements and to obtain the probability density function of the vehicle's speed. In the third part, a vehicle classification system is provided to categorize vehicles into "private car", "light trailer", "lorry or bus" and "heavy trailer". This information can be used by authorities for surveillance and development of the roads. The proposed system consists of multiple fuzzy c-means clusterings using input features of length, width and speed of each vehicle. The system has been constructed by using prior knowledge of traffic regulations regarding each class of vehicle in order to enhance the classification performance.
APA, Harvard, Vancouver, ISO, and other styles
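The second part of the thesis above estimates speed from the frames at which a vehicle crosses intrusion lines. A minimal sketch of that idea (without the thesis's probabilistic model of measurement uncertainty) is shown below; the numbers in the usage comment are made up for illustration.

```python
def estimate_speed_kmh(frame_indices, line_positions_m, fps):
    """Illustrative only (not the thesis model): average speed from the frames
    at which a vehicle crosses successive intrusion lines in the image.

    frame_indices   : frame number at each line crossing, e.g. [120, 132, 141]
    line_positions_m: real-world distance of each line from the first, in metres
    fps             : camera sampling rate in frames per second
    """
    speeds = []
    for i in range(1, len(frame_indices)):
        dt = (frame_indices[i] - frame_indices[i - 1]) / fps    # seconds between crossings
        dx = line_positions_m[i] - line_positions_m[i - 1]      # metres between lines
        speeds.append(dx / dt)                                  # m/s for this segment
    return 3.6 * sum(speeds) / len(speeds)                      # average, m/s -> km/h

# Example: three lines 5 m apart, crossings at frames 120, 132, 141 at 25 fps
# estimate_speed_kmh([120, 132, 141], [0.0, 5.0, 10.0], 25)  ->  approximately 44 km/h
```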
2

Nilsson, Lovisa. "Data-Driven Methods for Sonar Imaging." Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176249.

Full text
Abstract:
Reconstruction of sonar images is an inverse problem, which is normally solved with model-based methods. These methods may introduce undesired artifacts called angular and range leakage into the reconstruction. In this thesis, a method called Learned Primal-Dual Reconstruction, which combines a data-driven and a model-based approach, is used to investigate the use of data-driven methods for reconstruction within sonar imaging. The method uses primal and dual variables inspired by classical optimization methods where parts are replaced by convolutional neural networks to iteratively find a solution to the reconstruction problem. The network is trained and validated with synthetic data on eight models with different architectures and training parameters. The models are evaluated on measurement data and the results are compared with those from a purely model-based method. Reconstructions performed on synthetic data, where a ground truth image is available, show that it is possible to achieve reconstructions with the data-driven method that have less leakage than reconstructions from the model-based method. For reconstructions performed on measurement data where no ground truth is available, some variants of the learned model achieve a good result with less leakage.
APA, Harvard, Vancouver, ISO, and other styles
3

Simioni, Maicon Cezar. "Monitoramento da frequência cardíaca via método de magnificação de vídeo e Euleriana em tempo real." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1373.

Full text
Abstract:
Monitoring vital signs in patients aims to quickly obtain data relevant to medical decisions. However, such measurement is inefficient and difficult, if not impossible, in certain cases, such as burn victims, because electrodes cannot be placed directly on the skin, or newborns, because of the fragility of their skin. This study describes the development of a system for continuous acquisition of photoplethysmographic (PPG) signals for real-time heart-rate telemetry on a low-cost platform, using the OpenCV library and the method developed by MIT called Eulerian Video Magnification, which reveals variations that are imperceptible to the naked eye. The system was built on the Raspberry Pi Model B hardware platform with a 700 MHz ARM11 processor and 512 MB of RAM. The heart-rate data collected in the experiments were compared with data collected by a More Fitness MF-425 finger oximeter, chosen because it uses the same photoplethysmographic principle to perform the measurement. After data collection, a confidence interval was estimated to assess the accuracy of the system, which corresponded to 96.5% relative to the oximeter used. The method of measuring heart rate via real-time Eulerian video magnification proved to be a low-cost technology (approximately R$ 300.00) compared with the multiparameter monitors used for monitoring critical patients, whose cost ranges from R$ 8,000.00 to R$ 34,000.00. It therefore also contributes to reducing the cost of treating patients who require constant monitoring; the savings generated by acquiring and deploying this technology make greater investment possible in other areas of hospitals.
APA, Harvard, Vancouver, ISO, and other styles
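A greatly simplified sketch of extracting a heart rate from video on such a platform is shown below. It uses only the mean green-channel signal of a skin region and an FFT peak in the cardiac band, i.e. a basic remote-PPG approach rather than the full Eulerian Video Magnification pipeline used in the thesis; the ROI and frame rate are assumptions.

```python
import cv2
import numpy as np

def heart_rate_bpm(video_path, roi, fps):
    """Simplified remote-PPG sketch (not the full Eulerian Video Magnification
    pipeline): track the mean green-channel value of a skin ROI over time and
    take the dominant frequency in the cardiac band as the heart rate."""
    x, y, w, h = roi
    cap = cv2.VideoCapture(video_path)
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        samples.append(frame[y:y + h, x:x + w, 1].mean())   # mean green channel in ROI
    cap.release()
    sig = np.asarray(samples) - np.mean(samples)             # remove DC component
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(sig))
    band = (freqs >= 0.75) & (freqs <= 3.0)                   # 45-180 beats per minute
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```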
4

Sandberg, David. "Model-Based Video Coding Using a Colour and Depth Camera." Thesis, Linköpings universitet, Datorseende, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-68737.

Full text
Abstract:
In this master thesis, a model-based video coding algorithm has been developed that uses input from a colour and depth camera, such as the Microsoft Kinect. Using a model-based representation of a video has several advantages over the commonly used block-based approach, used by the H.264 standard. For example, videos can be rendered in 3D, be viewed from alternative views, and have objects inserted into them for augmented reality and user interaction. This master thesis demonstrates a very efficient way of encoding the geometry of a scene. The results of the proposed algorithm show that it can reach very low bitrates with comparable results to the H.264 standard.
APA, Harvard, Vancouver, ISO, and other styles
5

Skepetzis, Vasilios, and Pontus Hedman. "The Effect of Beautification Filters on Image Recognition : "Are filtered social media images viable Open Source Intelligence?"." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-44799.

Full text
Abstract:
In light of the emergence of social media, and its abundance of facial imagery, facial recognition finds itself useful from an Open Source Intelligence standpoint. Images uploaded to social media are likely to be filtered, which can destroy or modify biometric features. This study looks at the recognition effort of identifying individuals based on their facial image after filters have been applied to the image. The social media image filters studied occlude parts of the nose and eyes, with a particular interest in filters occluding the eye region. Our proposed method uses a Residual Neural Network model to extract features from images, with recognition of individuals based on distance measures computed from the extracted features. Classification of individuals is also further done by the use of a Linear Support Vector Machine and an XGBoost classifier. In attempts to increase the recognition performance for images completely occluded in the eye region, we present a method to reconstruct this information by using a variation of a U-Net, and from the classification perspective, we also train the classifier on filtered images to increase the performance of recognition. Our experimental results showed good recognition of individuals when filters were not occluding important landmarks, especially around the eye region. Our proposed solution shows an ability to mitigate the occlusion caused by filters through either reconstruction or training on manipulated images, in some cases with an increase in the classifier's accuracy of approximately 17 percentage points with reconstruction only, 16 percentage points when the classifier was trained on filtered data, and 24 percentage points when both were used at the same time. When training on filtered images, we observe an average increase in performance, across all datasets, of 9.7 percentage points.
APA, Harvard, Vancouver, ISO, and other styles
6

Beyou, Sébastien. "Estimation de la vitesse des courants marins à partir de séquences d'images satellitaires." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00870722.

Full text
Abstract:
This thesis studies particle-filter data assimilation methods for estimating fluid flows observed through image sequences. We build on a specific particle filter whose proposal distribution is given by an ensemble Kalman filter, called the weighted ensemble Kalman filter. Two variations of it are introduced and studied. The first uses a dynamic noise (modelling the model uncertainty and separating the particles from one another) whose spatial form follows a power law, consistent with the phenomenological theory of turbulence. The second variation relies on a multi-scale assimilation scheme introducing a mechanism of successive refinements from observations at smaller and smaller scales. Both methods were tested on synthetic and experimental sequences of 2D incompressible flows. The results show a significant gain in mean squared error. They were then tested on real satellite image sequences. On the real images, good temporal coherence is observed, as well as good tracking of vortex structures. The multi-scale assimilation shows a visible gain in the number of reconstructed scales. A few additional variations are also presented and tested in order to overcome important problems encountered in a real satellite context, notably the handling of missing data in sea surface temperature images. Finally, an experiment with a weighted ensemble Kalman filter and a full ocean model is presented for assimilating surface current fields in the Iroise Sea, at the mouth of the English Channel. A few other avenues for improvement are also sketched and tested.
APA, Harvard, Vancouver, ISO, and other styles
7

Malmgren, Henrik. "Revision of an artificial neural network enabling industrial sorting." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-392690.

Full text
Abstract:
Convolutional artificial neural networks can be applied to image-based object classification to inform automated actions, such as handling of objects on a production line. The present thesis describes the theoretical background for creating a classifier and explores the effects of introducing a set of relatively recent techniques to an existing ensemble of classifiers in use for an industrial sorting system. The findings indicate that it is important to use spatial-variety dropout regularization for high-resolution image inputs, and to use an optimizer configuration with good convergence properties. The findings also demonstrate examples of ensemble classifiers being effectively consolidated into unified models using the distillation technique. An analogous arrangement with optimization against multiple output targets, incorporating additional information, showed accuracy gains comparable to ensembling. For use of the classifier on test data with statistics different from those of the dataset, the results indicate that augmentation of the input data during classifier creation helps performance, but would, in the current case, likely need to be guided by information about the distribution shift to have a sufficiently positive impact to enable a practical application. I suggest, for future development, updated architectures, automated hyperparameter search and leveraging the bountiful unlabeled data potentially available from production lines.
APA, Harvard, Vancouver, ISO, and other styles
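The distillation technique mentioned above can be summarised by the standard knowledge-distillation objective (Hinton et al.): train a single student model against softened ensemble outputs plus the usual hard labels. A minimal PyTorch sketch follows; the temperature and weighting are common defaults, not the thesis's settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard knowledge-distillation objective, one way to consolidate an
    ensemble (the teacher) into a single model: soft targets from the teacher
    plus the usual hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                   # rescale gradients for the soft term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```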
8

Dale, Ashley S. "3D OBJECT DETECTION USING VIRTUAL ENVIRONMENT ASSISTED DEEP NETWORK TRAINING." Thesis, 2021.

Find full text
Abstract:

An RGBZ synthetic dataset consisting of five object classes in a variety of virtual environments and orientations was combined with a small sample of real-world image data and used to train the Mask R-CNN (MR-CNN) architecture in a variety of configurations. When the MR-CNN architecture was initialized with MS COCO weights and the heads were trained with a mix of synthetic data and real-world data, F1 scores improved in four of the five classes: the average maximum F1-score of all classes and all epochs for the networks trained with synthetic data is F1* = 0.91, compared to F1 = 0.89 for the networks trained exclusively with real data, and the standard deviation of the maximum mean F1-score for synthetically trained networks is σ*F1 = 0.015, compared to σF1 = 0.020 for the networks trained exclusively with real data. Various backgrounds in synthetic data were shown to have negligible impact on F1 scores, opening the door to abstract backgrounds and minimizing the need for intensive synthetic data fabrication. When the MR-CNN architecture was initialized with MS COCO weights and depth data was included in the training data, the network was shown to rely heavily on the initial convolutional input to feed features into the network, the image depth channel was shown to influence mask generation, and the image color channels were shown to influence object classification. A set of latent variables for a subset of the synthetic dataset was generated with a Variational Autoencoder, then analyzed using Principal Component Analysis and Uniform Manifold Approximation and Projection (UMAP). The UMAP analysis showed no meaningful distinction between real-world and synthetic data, and a small bias towards clustering based on image background.

APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Computer vision in data analytics and signal processing"

1

Aggarwal, J. K., North Atlantic Treaty Organization Scientific Affairs Division, and NATO Advanced Research Workshop on Multisensor Fusion for Computer Vision (1989: Grenoble, France), eds. Multisensor fusion for computer vision. Berlin: Springer-Verlag, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Katsaggelos, Aggelos K. Signal Recovery Techniques for Image and Video Compression and Transmission. Boston, MA: Springer US, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chen, Li M. Digital Functions and Data Reconstruction: Digital-Discrete Methods. New York, NY: Springer New York, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Soille, Pierre. Mathematical Morphology and Its Applications to Image and Signal Processing: 10th International Symposium, ISMM 2011, Verbania-Intra, Italy, July 6-8, 2011. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hendriks, Cris L. Luengo. Mathematical Morphology and Its Applications to Signal and Image Processing: 11th International Symposium, ISMM 2013, Uppsala, Sweden, May 27-29, 2013. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chaudhuri, Subhasis. Hyperspectral Image Fusion. New York, NY: Springer New York, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Arcelli, Carlo, Luigi P. Cordella, and Gabriella Sanniti di Baja, eds. Advances in visual form analysis: Proceedings of the Third International Workshop on Visual Form, Capri, Italy, May 28-30, 1997. Singapore: World Scientific, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Perner, Petra, Ovidio Salvetti, and Jörg H. Siekmann, eds. Advances in Mass Data Analysis of Images and Signals in Medicine, Biotechnology, Chemistry and Food Industry: Third International Conference, MDA 2008, Leipzig, Germany, July 14, 2008, Proceedings. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Salvetti, Ovidio, ed. Advances in Mass Data Analysis of Signals and Images in Medicine, Biotechnology and Chemistry: International Conferences MDA 2006/2007, Leipzig, Germany, July 18, 2007, Selected Papers. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sencar, Husrev T. Digital Image Forensics: There is More to a Picture than Meets the Eye. New York, NY: Springer New York, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Computer vision in data analytics and signal processing"

1

Rot, Peter, Peter Peer, and Vitomir Štruc. "Detecting Soft-Biometric Privacy Enhancement." In Handbook of Digital Face Manipulation and Detection, 391–411. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_18.

Full text
Abstract:
With the proliferation of facial analytics and automatic recognition technology that can automatically extract a broad range of attributes from facial images, so-called soft-biometric privacy-enhancing techniques have recently seen increased interest from the computer vision community. Such techniques aim to suppress information on certain soft-biometric attributes (e.g., age, gender, ethnicity) in facial images and make unsolicited processing of the facial data infeasible. However, because the level of privacy protection ensured by these methods depends to a significant extent on the fact that privacy-enhanced images are processed in the same way as non-tampered images (and not treated differently), it is critical to understand whether privacy-enhancing manipulations can be detected automatically. To explore this issue, we design a novel approach for the detection of privacy-enhanced images in this chapter and study its performance with facial images processed by three recent privacy models. The proposed detection approach is based on a dedicated attribute recovery procedure that first tries to restore suppressed soft-biometric information and, based on the result of the restoration procedure, then infers whether a given probe image is privacy enhanced or not. It exploits the fact that a selected attribute classifier generates different attribute predictions when applied to the privacy-enhanced and attribute-recovered facial images. This prediction mismatch (PREM) is, therefore, used as a measure of privacy enhancement. In extensive experiments with three popular face datasets, we show that the proposed PREM model is able to accurately detect privacy enhancement in facial images despite the fact that the technique requires no supervision, i.e., no examples of privacy-enhanced images are needed for training.
APA, Harvard, Vancouver, ISO, and other styles
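The prediction-mismatch idea described above can be sketched in a few lines: run the same attribute classifier on the probe image and on its attribute-recovered version, and score the disagreement. The sketch below is only a schematic of that idea, not the chapter's model; both callables are placeholders.

```python
import numpy as np

def prem_score(probe_image, attribute_classifier, attribute_recovery):
    """Schematic prediction-mismatch (PREM) score: the more the soft-biometric
    predictions change after attribute recovery, the more likely the probe
    image was privacy enhanced. Both callables are hypothetical placeholders
    for the attribute classifier and the recovery model described in the chapter."""
    p_probe = np.asarray(attribute_classifier(probe_image))              # e.g. softmax over attributes
    p_recovered = np.asarray(attribute_classifier(attribute_recovery(probe_image)))
    return float(np.sum(np.abs(p_probe - p_recovered)))                  # L1 prediction mismatch
```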
2

Rani, Jyotsna, Ram Kumar, Abahan Sarkar, and Fazal A. Talukdar. "A Study on Various Image Processing Techniques and Hardware Implementation Using Xilinx System Generator." In Computer Vision, 930–45. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5204-8.ch038.

Full text
Abstract:
This article reviews various image processing techniques in MATLAB as well as their hardware implementation on an FPGA using the Xilinx System Generator. Image processing can be described as the processing of images using mathematical operations drawn from various forms of signal processing. The main aim of image processing is to extract important features from image data, process them in a desired manner, and visually enhance or statistically evaluate the desired aspect of the image. This article provides an insight into the various approaches to digital image processing in MATLAB. It also provides an introduction to FPGAs and a step-by-step tutorial on handling the Xilinx System Generator. The Xilinx System Generator tool is a new application in image processing and offers a friendly design environment for it. The tool supports software simulation but, most importantly, can synthesize designs to FPGA hardware; the parallelism, robustness and speed this provides are essential in image processing. Implementing these algorithms on an FPGA has the advantage of large memory and embedded multipliers. Advances in FPGA technology, together with the development of sophisticated and efficient tools for modelling, simulation and synthesis, have made the FPGA a highly useful platform.
APA, Harvard, Vancouver, ISO, and other styles
3

Baharadwaj, Nitin, Sheena Wadhwa, Pragya Goel, Isha Sethi, Chanpreet Singh Arora, Aviral Goel, Sonika Bhatnagar, and Harish Parthasarathy. "De-Noising, Clustering, Classification, and Representation of Microarray Data for Disease Diagnostics." In Research Developments in Computer Vision and Image Processing, 149–74. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-4558-5.ch009.

Full text
Abstract:
A microarray works by exploiting the ability of a given mRNA molecule to bind specifically to the DNA template from which it originated under specific high-stringency conditions. After this, the amount of mRNA bound to each DNA site on the array is determined, which represents the expression level of each gene. Quantification of the mRNA (probe) bound to each DNA spot (target) can help us to determine which genes are active or responsible for the current state of the cell. The probe-target hybridization is usually detected and quantified using dyes, fluorophores or chemiluminescence labels. The microarray data give a single snapshot of the gene activity profile of a cell at any given time. Microarray data help to elucidate the various genes involved in a disease and may also be used for diagnosis and prognosis. In spite of its huge potential, microarray data interpretation and use is limited by its error-prone nature, the sheer size of the data and the subjectivity of the analysis. Initially, we describe the use of several techniques to develop a pre-processing methodology for denoising microarray data using signal processing techniques. The noise-free data thus obtained are more suitable for classification as well as for mining useful information. The Discrete Fourier Transform (DFT) and autocorrelation were explored for denoising the data. We also developed the use of microarray data as a diagnostic tool in cancer using a one-dimensional Fourier transform followed by simple Euclidean distance calculations and two-dimensional MUltiple SIgnal Classification (MUSIC). To improve the accuracy of the diagnostic tool, Volterra series were used to model the nonlinear behavior of the data. Thus, our efforts at denoising, representation, and classification of microarray data with signal processing techniques show that appreciable results can be attained even with the most basic techniques. To develop a method to search for a gene signature, we used a combination of PCA and density-based clustering for inferring the gene signature of Parkinson's disease. Using this technique in conjunction with gene ontology data, it was possible to obtain a signature comprising 21 genes, which were then validated by their involvement in known Parkinson's disease pathways. The methodology described can be further developed to yield future biomarkers for early Parkinson's disease diagnosis, as well as for drug development.
APA, Harvard, Vancouver, ISO, and other styles
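As a pointer to the kind of DFT-based denoising mentioned above, the sketch below keeps only the lowest-frequency coefficients of an expression profile and reconstructs the signal. The cutoff fraction is an assumption for illustration, not the chapter's value.

```python
import numpy as np

def dft_lowpass(expression_profile, keep_fraction=0.2):
    """Minimal DFT-based denoising sketch: keep only the lowest-frequency
    coefficients of a gene-expression profile and reconstruct the signal."""
    spectrum = np.fft.rfft(expression_profile)
    cutoff = max(1, int(keep_fraction * len(spectrum)))
    spectrum[cutoff:] = 0.0                      # discard high-frequency components as noise
    return np.fft.irfft(spectrum, n=len(expression_profile))
```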
4

Purushothaman, Geethanjali. "Bio-Inspired Techniques in Rehabilitation Engineering for Control of Assistive Devices." In Computer Vision, 2065–82. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5204-8.ch090.

Full text
Abstract:
The intelligent control of assistive devices is possible from bio-signals or gestures that reveal the user's intention. The goal of the user intention recognition system is to develop computational methods for decoding the acquired bio-signal data. One way of accomplishing this objective is to use a pattern recognition system. The study of higher-level control of assistive devices using various data processing techniques together with bio-inspired techniques is in progress. Knowledge of bio-inspired computation is essential for newcomers to develop algorithms for identifying intention from bioelectric signals. Most of the literature demonstrates applications using signals, and few definitive studies describe the various bio-inspired computations involved in developing real-time control of assistive devices. Therefore, this chapter presents a brief survey of the various bio-inspired techniques used in interfacing devices for identifying what the user intends.
APA, Harvard, Vancouver, ISO, and other styles
5

Granero, Marco Aurélio, Marco Antônio Gutierrez, and Eduardo Tavares Costa. "Rebuilding IVUS images from raw data of the RF signal exported by IVUS equipment." In Emerging Trends in Image Processing, Computer Vision and Pattern Recognition, 87–97. Elsevier, 2015. http://dx.doi.org/10.1016/b978-0-12-802045-6.00006-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mohanchandra, Kusuma, and Snehanshu Saha. "Machine Learning Methods as a Test Bed for EEG Analysis in BCI Paradigms." In Cognitive Analytics, 1577–97. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-2460-2.ch081.

Full text
Abstract:
Machine learning is a crucial tool for building analytical models in EEG data analysis. These models are an excellent choice for analyzing the high variability in EEG signals. The advancement of EEG-based Brain-Computer Interfaces (BCI) demands advanced processing tools and algorithms for the exploration of EEG signals. In the context of EEG-based BCI for speech communication, a few classification and clustering techniques are presented in this book chapter. A broad perspective on the techniques and implementation of the weighted k-Nearest Neighbor (k-NN), Support Vector Machine (SVM), Decision Tree (DT) and Random Forest (RF) is given, and their usage in EEG signal analysis is described. We suggest that these machine learning techniques provide not only a potentially valuable control mechanism for BCI but also a deeper understanding of the neuropathological mechanisms underlying the brain in ways that are not possible by conventional linear analysis.
APA, Harvard, Vancouver, ISO, and other styles
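A minimal end-to-end sketch of the weighted k-NN and SVM classifiers discussed above, using scikit-learn; the random matrix stands in for real EEG features (e.g. band powers per channel) and the labels for imagined-speech classes.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder data: (n_trials, n_features) EEG feature matrix and class labels.
X = np.random.randn(200, 64)
y = np.random.randint(0, 4, size=200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5, weights="distance")   # distance-weighted k-NN
svm = SVC(kernel="rbf", C=1.0)                                   # SVM with RBF kernel

for name, clf in [("weighted k-NN", knn), ("SVM", svm)]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```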
7

Hota, Rudra Narayan, Kishore Jonna, and P. Radha Krishna. "Video Stream Mining for On-Road Traffic Density Analytics." In Pattern Discovery Using Sequence Data Mining, 182–94. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-61350-056-9.ch011.

Full text
Abstract:
The traffic congestion problem is growing day by day due to the increasing number of small to heavy-weight vehicles on the road, poorly designed infrastructure, and ineffective control systems. This chapter addresses the problem of estimating computer vision based traffic density using video stream mining. We present an efficient approach for traffic density estimation using texture analysis along with a Support Vector Machine (SVM) classifier, and describe analyzing traffic density for on-road traffic congestion control with better flow management. This approach provides an integrated environment for users to derive traffic status by mining the available video streams from multiple cameras. It also facilitates processing video frames received from video cameras installed at traffic posts and classifies the frames according to the traffic content at any particular instance. Time series information available from the various input streams is combined with the traffic video classification results to discover traffic trends.
APA, Harvard, Vancouver, ISO, and other styles
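The texture-analysis-plus-SVM pipeline above can be sketched with grey-level co-occurrence matrix (GLCM) features; the distances, angles and properties below are common defaults that only illustrate the idea, not the chapter's exact feature set.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def texture_features(gray_frame):
    """GLCM texture descriptors of an 8-bit grayscale road-region frame; dense
    traffic tends to produce more textured frames than an empty road."""
    glcm = graycomatrix(gray_frame, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# frames: list of 8-bit grayscale road ROIs; labels: e.g. 0=low, 1=medium, 2=high density
# X = np.vstack([texture_features(f) for f in frames])
# clf = SVC(kernel="rbf").fit(X, labels)
```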
8

Singh, Shatakshi, Kanika Gautam, Prachi Singhal, Sunil Kumar Jangir, and Manish Kumar. "A Survey on Intelligence Tools for Data Analytics." In Advances in Data Mining and Database Management, 73–95. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3053-5.ch005.

Full text
Abstract:
The recent development in artificial intelligence has been quite astounding in this decade. Machine learning, in particular, is one of the core subareas of AI. The ML field is growing incessantly, evolving and rising in demand and importance. It has transformed the way data is extracted, analyzed, and interpreted. Computers are trained in a self-training mode so that when new data is fed they can learn, grow, change, and develop themselves without explicit programming. This helps to make useful predictions that can guide better decisions in real-life situations without human interference. Selection of an ML tool is always a challenging task, since choosing an appropriate tool can end up saving time as well as making it faster and easier to provide a solution. This chapter provides a classification of various machine learning tools according to the following aspects: tools for non-programmers, for model deployment, for computer vision, natural language processing, and audio, for reinforcement learning, and for data mining.
APA, Harvard, Vancouver, ISO, and other styles
9

Abdul Karim, Samsul Ariffin, Nur Atiqah Binti Zulkifli, A'fza Binti Shafie, Muhammad Sarfraz, Abdul Ghaffar, and Kottakkaran Sooppy Nisar. "Medical Image Zooming by Using Rational Bicubic Ball Function." In Advancements in Computer Vision Applications in Intelligent Systems and Multimedia Technologies, 146–61. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-4444-0.ch008.

Full text
Abstract:
This chapter deals with image processing in the specific area of image zooming via interpolation. The authors employ a bivariate rational cubic Ball function defined on rectangular meshes. These bivariate splines have six free parameters that can be used to alter the shape of the surface without the need to change the data. They can also be used to refine the resolution of the image. To cater for image zooming, the authors propose an efficient algorithm that includes image downscaling and upscaling procedures. To measure the effectiveness of the proposed scheme, they compare performance based on the peak signal-to-noise ratio (PSNR) and root mean square error (RMSE). Comparisons with existing schemes such as nearest neighbour (NN), bilinear (BL), bicubic (BC), bicubic Hermite (BH), and the existing Karim and Saaban (KS) scheme have been made in detail. In all numerical results, the proposed scheme gave higher PSNR values and smaller RMSE values for all tested images.
APA, Harvard, Vancouver, ISO, and other styles
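The two quality measures used above are straightforward to reproduce; a short sketch is given below (the peak value of 255 assumes 8-bit images). In a typical zooming experiment, the test image would be the scheme's upscaling of a downscaled copy of the reference.

```python
import numpy as np

def rmse(reference, test):
    """Root mean square error between a reference image and a rescaled image."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher is better, unlike RMSE."""
    e = rmse(reference, test)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)
```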
10

Vocaturo, Eugenio. "Image Classification Techniques." In Handbook of Research on Disease Prediction Through Data Analytics and Machine Learning, 22–49. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-2742-9.ch003.

Full text
Abstract:
The image processing task, aimed at interpreting and classifying the contents of images, has attracted the attention of researchers since the early days of computers. With the advancement of computing system technology, image categorization has found increasingly broad applications, covering new-generation disciplines such as image analysis, object recognition, and computer vision, with quite general applications in both scientific and humanistic fields. The automatic recognition, description, and classification of the structures contained in images are of fundamental importance in a vast set of scientific and engineering fields that require the acquisition, processing, and transmission of information in visual form. Classification tasks also include those related to the categorization of images, such as the construction of a recognition system, the representation of patterns, the selection and extraction of features, and the definition of automatic recognition methods. Image analysis is of collective interest and is a hot topic of current research.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Computer vision in data analytics and signal processing"

1

Katarya, Rahul, and Sajal Jain. "Exploration of Big Data Analytics in Healthcare Analytics." In 2020 4th International Conference on Computer, Communication and Signal Processing (ICCCSP). IEEE, 2020. http://dx.doi.org/10.1109/icccsp49186.2020.9315192.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

"Computer Vision Techniques for Target Detection in Ground Penetrating Radar Data." In Signal and Image Processing. Calgary,AB,Canada: ACTAPRESS, 2012. http://dx.doi.org/10.2316/p.2012.786-034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Nylanden, Teemu, Heikki Kultala, Ilkka Hautala, Jani Boutellier, Jari Hannuksela, and Olli Silven. "Programmable data parallel accelerator for mobile computer vision." In 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, 2015. http://dx.doi.org/10.1109/globalsip.2015.7418271.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Baluja, Shumeet, and Michele Covell. "Audio Fingerprinting: Combining Computer Vision & Data Stream Processing." In 2007 IEEE International Conference on Acoustics, Speech and Signal Processing - ICASSP '07. IEEE, 2007. http://dx.doi.org/10.1109/icassp.2007.366210.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wu, Mark, Chen Heng, Haibin Zhu, and Haoyang Cai. "COVID-19 detection based on Computer Vision and Big Data." In 2022 7th International Conference on Intelligent Computing and Signal Processing (ICSP). IEEE, 2022. http://dx.doi.org/10.1109/icsp54964.2022.9778429.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Angulo, Carlos A., Christian D. Hernandez, Gabriel Rincon, Carlos A. Boada, Javier Castillo, and Carlos A. Fajardo. "Accelerating huffman decoding of seismic data on GPUs." In 2015 20th Symposium on Signal Processing, Images and Computer Vision (STSIVA). IEEE, 2015. http://dx.doi.org/10.1109/stsiva.2015.7330430.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chatar, Crispin, Suhas Suresha, Laetitia Shao, Soumya Gupta, and Indranil Roychoudhury. "Determining Rig State from Computer Vision Analytics." In SPE/IADC International Drilling Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/204086-ms.

Full text
Abstract:
For years, many companies involved with drilling have searched for the ideal method to calculate the state of a drilling rig. While companies cannot agree on a standard definition of "rig state," they can agree that, as we move forward in drilling optimization and with further use of remote operations and automation, rig state calculation is mandatory in one form or another. Internally in the service company, many methods exist for calculating rig state, but one new technology area holds promise to deliver a more efficient and cost-effective option with higher accuracy. This technology involves vision analytics. Currently, detection algorithms rely heavily on data collected by sensors installed on the rig. However, relying exclusively on sensor data is problematic because sensors are prone to failure and are expensive to maintain and install. By proposing a machine learning model that relies exclusively on videos collected on the rig floor to infer rig states, it is possible to move away from the existing methods as the industry moves to a future of high-tech rigs. Videos, in contrast to sensor data, are relatively easy to collect from small, inexpensive cameras installed at strategic locations. Consequently, this paper presents a machine learning pipeline implemented to perform rig state determination from videos captured on the rig floor of an operating rig. The pipeline can be described in two parts. First, the annotation pipeline matches each frame of the video dataset to a rig state; a convolutional neural network (CNN) is used to match the time of the video with the corresponding sensor data. Second, additional CNNs are trained, capturing both spatial and temporal information, to extract an estimate of rig state from videos. The models are trained on a dataset of 3 million frames on a cloud platform using graphics processing units (GPUs). The models used include a pretrained Visual Geometry Group (VGG) network, a convolutional three-dimensional (C3D) model that uses three-dimensional (3D) convolutions, and a two-stream model that uses optical flow to capture temporal information. The initial results demonstrate this pipeline to be effective in detecting rig states using computer vision analytics.
APA, Harvard, Vancouver, ISO, and other styles
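For the two-stream model mentioned above, the temporal stream is typically fed with stacked optical-flow fields. The sketch below prepares such an input with OpenCV's Farneback flow; the number of frame pairs and flow parameters are illustrative, not the paper's configuration.

```python
import cv2
import numpy as np

def flow_stack(frames, n_pairs=5):
    """Illustrative temporal-stream input for a two-stream model: dense optical
    flow (Farneback) between consecutive rig-floor frames, stacked channel-wise."""
    stack = []
    for prev, nxt in zip(frames[:n_pairs], frames[1:n_pairs + 1]):
        g0 = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        g1 = cv2.cvtColor(nxt, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        stack.extend([flow[..., 0], flow[..., 1]])          # x and y flow components
    return np.stack(stack, axis=-1)                          # H x W x (2 * n_pairs)
```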
8

Contreras Contreras, Ghiordy Ferney, Byron Medina Delgado, Dinael Guevara Ibarra, Cristiano Leite de Castro, and Brayan Rene Acevedo Jaimes. "Cluster CV2: a Computer Vision Approach to Spatial Identification of Data Clusters." In 2019 XXII Symposium on Image, Signal Processing and Artificial Vision (STSIVA). IEEE, 2019. http://dx.doi.org/10.1109/stsiva.2019.8730239.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Salazar-Castro, J. A., Y. C. Rosas-Narvaez, A. D. Pantoja, Juan C. Alvarado-Perez, and Diego H. Peluffo-Ordonez. "Interactive interface for efficient data visualization via a geometric approach." In 2015 20th Symposium on Signal Processing, Images and Computer Vision (STSIVA). IEEE, 2015. http://dx.doi.org/10.1109/stsiva.2015.7330397.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Castelar, Jairo A., Carlos A. Angulo, and Carlos A. Fajardo. "Parallel decompression of seismic data on GPU using a lifting wavelet algorithm." In 2015 20th Symposium on Signal Processing, Images and Computer Vision (STSIVA). IEEE, 2015. http://dx.doi.org/10.1109/stsiva.2015.7330432.

Full text
APA, Harvard, Vancouver, ISO, and other styles