Academic literature on the topic 'Video image analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Video image analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Video image analysis"

1

Alothman, Raya Basil, Imad Ibraheem Saada, and Basma Salim Bazel Al-Brge. "A Performance-Based Comparative Encryption and Decryption Technique for Image and Video for Mobile Computing." Journal of Cases on Information Technology 24, no. 2 (April 2022): 1–18. http://dx.doi.org/10.4018/jcit.20220101.oa1.

Full text
Abstract:
As data exchange increasingly moves through electronic systems, information security has become a necessity. Protecting images and videos is important in today's visual communication systems, and confidential image/video data must be shielded from unauthorized use. Detecting and identifying unauthorized users is a challenging task. Various researchers have suggested different techniques for securing the transfer of images. This research comparatively studies these current technologies and also addresses the types of images/videos and the different image/video-processing techniques, along with the steps used to process an image or video. It classifies encryption algorithms into two types, symmetric and asymmetric, and provides a comparative analysis of representatives such as AES, MAES, RSA, DES, 3DES, and Blowfish.
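The symmetric/asymmetric distinction surveyed above can be illustrated with a deliberately toy symmetric cipher (not one of the algorithms the paper compares): a repeating-key XOR stream, where the same key both encrypts and decrypts the image bytes. This is a sketch for illustration only; real systems use AES or the other ciphers named in the abstract.

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric stream cipher: XOR each byte with a repeating key.
    The same call both encrypts and decrypts, since XOR is its own inverse."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Simulated 8x8 grayscale image as raw bytes.
image = bytes(range(64))
key = os.urandom(16)

ciphertext = xor_cipher(image, key)
recovered = xor_cipher(ciphertext, key)
assert recovered == image  # decryption with the same key restores the image
```

An asymmetric scheme such as RSA differs in that the encryption and decryption keys are distinct, which is why the paper treats the two families separately.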
APA, Harvard, Vancouver, ISO, and other styles
2

Deng, Zhaopeng, Maoyong Cao, Yushui Geng, and Laxmisha Rai. "Generating a Cylindrical Panorama from a Forward-Looking Borehole Video for Borehole Condition Analysis." Applied Sciences 9, no. 16 (August 20, 2019): 3437. http://dx.doi.org/10.3390/app9163437.

Full text
Abstract:
Geological exploration plays a fundamental and crucial role in geological engineering. The most frequently used method is to obtain borehole videos using an axial view borehole camera system (AVBCS) in a pre-drilled borehole. This approach to surveying the internal structure of a borehole is based on video playback and video screenshot analysis. One of the drawbacks of AVBCS is that it provides only a qualitative description of borehole information with a forward-looking borehole video; quantitative analysis of borehole data, such as the width and dip angle of a fracture, is unavailable. In this paper, we propose a new approach to create a whole borehole-wall cylindrical panorama from the borehole video acquired by AVBCS, which opens the possibility of further analysis of borehole information. First, based on the Otsu and region-labeling algorithms, a borehole center location algorithm is proposed to extract the borehole center of each video image automatically. Afterwards, based on coordinate mapping (CM), a virtual coordinate graph (VCG) is designed for the unwrapping of the front-view borehole-wall image sequence, generating the corresponding unfolded image sequence while reducing the computational cost. Subsequently, based on the sum of absolute differences (SAD), a projection transformation SAD (PTSAD), which considers the gray-level similarity of candidate images, is proposed to match the unfolded image sequence. Finally, an image filtering module is introduced to filter invalid frames, and the remaining frames are stitched into a complete cylindrical panorama. Experiments on two real-world borehole videos demonstrate that the proposed method can generate panoramic borehole-wall unfolded images with a visual quality satisfactory for follow-up geological condition analysis. From the resulting image, borehole information, including rock mechanical properties, the distribution and width of fractures, fault distribution, and seam thickness, can be further obtained and analyzed.
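The borehole-center step rests on Otsu's method, a standard histogram-based choice of threshold that maximizes the between-class variance. A minimal pure-Python sketch of that criterion (not the authors' implementation) looks like:

```python
def otsu_threshold(pixels):
    """Return the gray level that maximizes between-class variance (Otsu)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]          # background weight: pixels <= t
        if w_bg == 0:
            continue
        w_fg = total - w_bg      # foreground weight: pixels > t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy image: dark borehole center (~30) vs. bright wall (~200).
pixels = [30] * 500 + [35] * 300 + [200] * 400 + [210] * 300
t = otsu_threshold(pixels)
assert 35 <= t < 200  # the threshold separates the two modes
```

In the paper this binarization feeds region labeling, which then isolates the dark central region of each frame.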
3

Livingston, Merlin L. M., and Agnel L. G. X. Livingston. "Processing of Images and Videos for Extracting Text Information from Clustered Features Using Graph Wavelet Transform." Journal of Computational and Theoretical Nanoscience 16, no. 2 (February 1, 2019): 557–61. http://dx.doi.org/10.1166/jctn.2019.7768.

Full text
Abstract:
Image processing is an interesting domain for extracting knowledge from real-time video and images for the surveillance, automation, robotics, medical, and entertainment industries. The data obtained from videos and images are continuous and play a primary role in semantic video analysis, retrieval, and indexing. When images and videos come from natural and random sources, they need to be processed to identify text, track it, binarize it, and recognize meaningful information for subsequent actions. This proposal defines a solution, assisted by the Spectral Graph Wavelet Transform (SGWT), for localizing and extracting text information from images and videos. A K-means clustering step precedes the SGWT process, grouping image features obtained from a quantifying hill-climbing algorithm. Precision, sensitivity, specificity, and accuracy are the four parameters that characterize the efficiency of the proposed technique. Experiments are conducted on training sets from ICDAR and, for videos, YVT.
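The K-means step that precedes the SGWT can be sketched in one dimension on gray-level features; this is an illustrative toy, not the paper's feature pipeline:

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Minimal 1-D k-means: returns the final cluster centers, sorted."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # Assign each value to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Recompute centers; keep the old one if a cluster emptied.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Hypothetical gray levels: text strokes (~40) vs. background (~220).
features = [38, 40, 42, 44, 215, 218, 222, 225]
centers = kmeans_1d(features, k=2)
assert abs(centers[0] - 41) < 5 and abs(centers[1] - 220) < 5
```

Grouping features this way before the graph wavelet transform lets the transform operate on candidate text regions rather than on the whole frame.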
4

Guo, Jianbang, Peng Sun, and Sang-Bing Tsai. "A Study on the Optimization Simulation of Big Data Video Image Keyframes in Motion Models." Wireless Communications and Mobile Computing 2022 (March 16, 2022): 1–12. http://dx.doi.org/10.1155/2022/2508174.

Full text
Abstract:
In this paper, the signals of athletics video image frames are processed and studied using big data technology. Sports video image multiprocessing enables interference-free study and analysis of sports technique and can meet the multiple visual needs of technique analysis and evaluation through key capabilities such as split-screen synchronous comparison, superimposed synchronous comparison, and video trajectory tracking. Sports video image-processing technology enables rapid extraction of key technical parameters of the sports scene, panoramic mapping of sports video images, split-lane calibration, and the development of special video image analysis software that is innovative in the field of athletics research. An image-blending approach is proposed to alleviate the imbalance between simple- and complex-background data while enhancing the generalization ability of networks trained on small-scale datasets. Local detail features of the target are introduced into the online-tracking process by an efficient block-filter network. Moreover, online hard-sample learning is used to keep similar objects from distracting the tracker, improving overall tracking performance. For feature extraction from blurred videos, this paper proposes a blur-kernel extraction scheme based on low-rank theory. The scheme fuses multiple blur kernels of keyframe images by low-rank decomposition and then deblurs the video. Next, a double-detection mechanism is used to detect tampering points in the blurred video frames. Finally, the video-tampering points are located, and the specific mode of tampering is determined. Experiments on two public video databases and on self-recorded videos show that the method is robust in blurred-video forgery detection, and detection efficiency is improved compared with traditional video forgery detection methods.
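The image-blending idea for balancing simple and complex backgrounds can be sketched as plain pixel-wise alpha blending; this is an assumption about the general technique, not the paper's exact scheme:

```python
def blend(img_a, img_b, alpha=0.5):
    """Pixel-wise alpha blend of two equally sized grayscale images
    (flattened to lists of pixel values)."""
    assert len(img_a) == len(img_b)
    return [round(alpha * a + (1 - alpha) * b) for a, b in zip(img_a, img_b)]

simple_bg = [200] * 4             # plain, over-represented background sample
complex_bg = [40, 180, 90, 220]   # cluttered, under-represented background
mixed = blend(simple_bg, complex_bg, alpha=0.3)
assert mixed == [round(0.3 * 200 + 0.7 * b) for b in complex_bg]
```

Mixing samples this way synthesizes intermediate backgrounds, which is one common route to enlarging a small training set.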
5

Aparna, R. R. "Swarm Intelligence for Automatic Video Image Contrast Adjustment." International Journal of Rough Sets and Data Analysis 3, no. 3 (July 2016): 21–37. http://dx.doi.org/10.4018/ijrsda.2016070102.

Full text
Abstract:
Video surveillance has become an integral part of daily life: we are surrounded by video cameras in public places and organizations. Much useful information, such as face detection, traffic analysis, object classification, and crime analysis, can be extracted from recorded videos. Image enhancement plays a vital role in extracting this information, and enhancing the video frames is a major step, as it supports the further analysis of video sequences. This paper discusses automatic contrast adjustment in video frames. A new hybrid algorithm is developed using a spatial-domain method and the Artificial Bee Colony (ABC) algorithm, a swarm-intelligence-based technique for image enhancement. The proposed algorithm was tested on traffic surveillance images and produced good results and better-quality pictures across varied levels of poor-quality video frames.
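The spatial-domain half of such a hybrid can be sketched as a plain linear contrast stretch; the ABC swarm, which in the paper searches for the enhancement parameters, is omitted here, so this is only a simplified stand-in:

```python
def stretch_contrast(frame, lo=0, hi=255):
    """Linear contrast stretch of a grayscale frame (list of pixel values):
    remap the observed [min, max] range onto [lo, hi]."""
    mn, mx = min(frame), max(frame)
    if mx == mn:
        return list(frame)  # flat frame: nothing to stretch
    scale = (hi - lo) / (mx - mn)
    return [round(lo + (p - mn) * scale) for p in frame]

# Low-contrast traffic frame squeezed into the narrow band [90, 140].
frame = [90, 100, 110, 120, 130, 140]
out = stretch_contrast(frame)
assert min(out) == 0 and max(out) == 255  # full dynamic range restored
```

A swarm optimizer like ABC would instead tune a parametric mapping (e.g., a gamma curve) per frame by maximizing an image-quality fitness function.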
6

Kim, Jie-Hyun, Sang-Il Oh, So-Young Han, Ji-Soo Keum, Kyung-Nam Kim, Jae-Young Chun, Young-Hoon Youn, and Hyojin Park. "An Optimal Artificial Intelligence System for Real-Time Endoscopic Prediction of Invasion Depth in Early Gastric Cancer." Cancers 14, no. 23 (December 5, 2022): 6000. http://dx.doi.org/10.3390/cancers14236000.

Full text
Abstract:
We previously constructed a VGG-16-based artificial intelligence (AI) model (image classifier [IC]) to predict invasion depth in early gastric cancer (EGC) using static endoscopic images. However, static images cannot capture the spatio-temporal information available during real-time endoscopy, so the AI trained on them could not estimate invasion depth accurately and reliably. Thus, we constructed a video classifier [VC] using videos for real-time depth prediction in EGC. We built the VC by attaching sequential layers to the last convolutional layer of IC v2, using video clips. We computed the standard deviation (SD) of the output probabilities for a video clip, and the sensitivities at the frame level, to assess consistency. The sensitivity, specificity, and accuracy of IC v2 for static images were 82.5%, 82.9%, and 82.7%, respectively. However, for video clips, the sensitivity, specificity, and accuracy of IC v2 were 33.6%, 85.5%, and 56.6%, respectively. The VC analyzed the videos better, with a sensitivity of 82.3%, a specificity of 85.8%, and an accuracy of 83.7%. Furthermore, the mean SD was lower for the VC than for IC v2 (0.096 vs. 0.289). The AI model developed using videos can predict invasion depth in EGC more precisely and consistently than image-trained models and is more appropriate for real-world situations.
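The consistency measure described, the standard deviation of per-frame output probabilities over one clip, can be computed directly; the probability values below are hypothetical, chosen only to show why a lower SD indicates a steadier classifier:

```python
from statistics import mean, pstdev

def clip_consistency(frame_probs):
    """Mean and SD of per-frame output probabilities for one video clip.
    A lower SD indicates a temporally more consistent classifier."""
    return mean(frame_probs), pstdev(frame_probs)

# Hypothetical per-frame probabilities for the same clip.
image_classifier = [0.9, 0.2, 0.8, 0.1, 0.7]     # flickers frame to frame
video_classifier = [0.82, 0.85, 0.80, 0.83, 0.84]

_, sd_ic = clip_consistency(image_classifier)
_, sd_vc = clip_consistency(video_classifier)
assert sd_vc < sd_ic  # the video classifier is more consistent
```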
7

Xu, Longcheng, Deokhwan Choi, and Zeyun Yang. "Deep Neural Network-Based Sports Marketing Video Detection Research." Scientific Programming 2022 (March 19, 2022): 1–7. http://dx.doi.org/10.1155/2022/8148972.

Full text
Abstract:
With the rapid development of short video, sports marketing modes have diversified, and accurately detecting marketing videos has become more difficult. Identifying certain key images in a video is the focus of detection; analyzing them can then effectively detect sports marketing videos. Video key-image detection based on a deep neural network is proposed to solve the problem of unclear, unrecognizable key-image boundaries in multi-scene recognition. First, a key-image detection model with a feedback network is proposed, and ablation experiments are conducted on a simple test set of DAVSOD. The experimental results show that the proposed model achieves better performance in both quantitative evaluation and visual effects and can accurately capture the overall shape of salient objects. A hybrid loss function is also introduced to identify key-image boundaries, and the experimental results show that the proposed model outperforms, or is comparable to, current state-of-the-art video salient-object detection models in quantitative evaluation and visual effects.
8

Guangyu, Han. "Analysis of Sports Video Intelligent Classification Technology Based on Neural Network Algorithm and Transfer Learning." Computational Intelligence and Neuroscience 2022 (March 24, 2022): 1–10. http://dx.doi.org/10.1155/2022/7474581.

Full text
Abstract:
With the rapid development of information technology, digital content is growing explosively, and sports video classification is of great significance for archiving digital content on servers. Accurate classification of sports video categories is therefore realized using a deep neural network (DNN), a convolutional neural network (CNN), and transfer learning. Block brightness comparison coding (BICC) and a block color histogram are proposed, which reflect the brightness relationships between different regions of a video and the color information within each region. The maximum mean discrepancy (MMD) algorithm is adopted to achieve transfer learning. On the basis of the extracted sports video image features, a sports video image classification method based on a deep learning coding model is adopted to realize sports video classification. The results show that, for different types of sports videos, the overall classification effect of this method is clearly better than that of other current sports video classification methods, greatly improving the classification of sports videos.
9

Lokkondra, Chaitra Yuvaraj, Dinesh Ramegowda, Gopalakrishna Madigondanahalli Thimmaiah, Ajay Prakash Bassappa Vijaya, and Manjula Hebbaka Shivananjappa. "ETDR: An Exploratory View of Text Detection and Recognition in Images and Videos." Revue d'Intelligence Artificielle 35, no. 5 (October 31, 2021): 383–93. http://dx.doi.org/10.18280/ria.350504.

Full text
Abstract:
Images and videos with text content are a direct source of information, and today there is a strong need for image and video data that can be intelligently analyzed. A growing number of researchers are focusing on text identification, making it a hot topic in machine vision research. Several real-time applications, such as text detection, localization, and tracking, have consequently become more prevalent in text analysis systems. This survey examines how text information may be extracted. The study first presents a trustworthy dataset for text identification in images and videos. The second part of the article details the numerous text formats found in both images and video. Third, the process flow for extracting information from text, along with the existing machine learning and deep learning techniques used to train the models, is described. Fourth, the assessment measures used to validate the models are explained. Finally, the survey brings together the uses and difficulties of text extraction across a wide range of fields. The difficulties cover the most frequent real-world challenges, such as capture techniques, lighting, and environmental conditions. Images and videos have evolved into valuable sources of data; the text inside them provides a massive quantity of facts and statistics, but such data is not easy to access. This exploratory view provides easier and more accurate mathematical modeling and evaluation techniques to retrieve the text in images and video in an accessible form.
10

Wang, Yao-Dong, Idaku Ishii, Takeshi Takaki, and Kenji Tajima. "An Intelligent High-Frame-Rate Video Logging System for Abnormal Behavior Analysis." Journal of Robotics and Mechatronics 23, no. 1 (February 20, 2011): 53–65. http://dx.doi.org/10.20965/jrm.2011.p0053.

Full text
Abstract:
This paper introduces a high-speed vision system called IDP Express, which can execute real-time image processing and high-frame-rate (HFR) video recording simultaneously. In IDP Express, 512×512-pixel images from two camera heads, together with the processed results from a dedicated FPGA (field-programmable gate array) board, are transferred to standard PC memory at a rate of 1000 fps or more. Owing to this simultaneous HFR video processing and recording, IDP Express can be used as an intelligent video-logging system for long-term analysis of high-speed phenomena. In this paper, a real-time abnormal-behavior detection algorithm was implemented on IDP Express to capture HFR videos of the crucial moments of unpredictable abnormal behaviors in high-speed periodic motions. Several experiments were performed on a high-speed slider machine operating repetitively at a frequency of 15 Hz, and videos of the abnormal behaviors were automatically recorded, verifying the effectiveness of our intelligent HFR video-logging system.
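A minimal stand-in for such an abnormal-behavior trigger is a mean absolute inter-frame difference, with a spike flagging a deviation from the machine's periodic motion. The paper does not spell out its FPGA algorithm, so this is only an illustrative sketch on hypothetical pixel rows:

```python
def abnormality_score(prev_frame, frame):
    """Mean absolute inter-frame difference between two equally sized
    grayscale frames (flattened to lists of pixel values)."""
    assert len(prev_frame) == len(frame)
    return sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)

normal_a = [10, 10, 50, 50, 10, 10]
normal_b = [10, 12, 48, 52, 10, 10]   # small periodic variation
abnormal = [10, 90, 90, 10, 80, 10]   # sudden large change

assert abnormality_score(normal_a, normal_b) < 5    # quiet: keep recording
assert abnormality_score(normal_a, abnormal) > 20   # spike: log this clip
```

In a logging system, frames whose score crosses a threshold would mark the clip to be kept, while routine periodic motion is discarded.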

Dissertations / Theses on the topic "Video image analysis"

1

GIACHELLO, SILVIA. "Identità e memoria visuale: comunità, eventi, documentazione." Doctoral thesis, Politecnico di Torino, 2012. http://hdl.handle.net/11583/2540089.

Full text
Abstract:
The project investigates the potential of images, with particular reference to photography, to describe, document, and interpret cultural events, along with their particular, functional language; the consequences of the proliferating spread of visual culture (grassroots media, social networks, etc.) for the interpretation of reality and the construction of memory; and the value attributed by the public and by qualified experts to the concept of a "cultural event," through the survey, monitoring, and documentation of cultural consumption, as well as the identity-forming function of that concept. The research was structured by developing visual sociology techniques in an intercultural perspective, through an ad hoc combined methodology: photo elicitation based on images I produced to document a case study (the Ganesh Festival of Pune, Maharashtra, India) and analysis of Visual Memos (visual self-documentation products). In short, the research has two aims: identifying frames in the definition of the cultural event and determining its capacity to condition personal and collective identity; and analyzing the identity-forming and mnemonic function of the visual products that mediate and/or document the event, and their role in determining the persistence of the event itself over time. The field research was carried out in Pune, a city in Maharashtra (India), second only to Mumbai in size and in economic and cultural importance within the state. This opportunity was used to compare different cultural perspectives on both research objectives, as a cognitive premise for designing communication campaigns and promoting cultural events in an intercultural (glocal culture) perspective.
2

Dye, Brigham R. "Reliability of pre-service teachers' coding of teaching videos using a video-analysis tool." Diss., 2007. http://contentdm.lib.byu.edu/ETD/image/etd2020.pdf.

Full text
3

Kim, Tae-Kyun. "Discriminant analysis of patterns in images, image ensembles, and videos." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612084.

Full text
4

Sdiri, Bilel. "2D/3D Endoscopic image enhancement and analysis for video guided surgery." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCD030.

Full text
Abstract:
Minimally invasive surgery has made remarkable progress in recent decades and has become a very popular diagnosis and treatment tool, especially with the rapid medical and technological advances leading to innovative new tools such as robotic surgical systems and wireless capsule endoscopy. Due to the intrinsic characteristics of the endoscopic environment, including dynamic illumination conditions and moist tissues with high reflectance, endoscopic images often suffer from several degradations, such as large dark regions with low contrast and sharpness, and many artifacts, such as specular reflections and blur. These challenges, together with the introduction of three-dimensional (3D) imaging surgical systems, have raised the question of endoscopic image quality, which needs to be enhanced. The enhancement process aims either to provide surgeons/doctors with better visual feedback or to improve the outcomes of subsequent tasks such as feature extraction for 3D organ reconstruction and registration. This thesis addresses the problem of endoscopic image quality by proposing novel enhancement techniques for both two-dimensional (2D) and stereo (i.e., 3D) endoscopic images. In the context of automatic tissue abnormality detection and classification for gastrointestinal tract disease diagnosis, we propose a pre-processing enhancement method for 2D endoscopic images and wireless capsule endoscopy that improves both local and global contrast. The proposed method exposes subtle inner structures and tissue details, which improves the feature detection process and the automatic classification rate of neoplastic, non-neoplastic, and inflammatory tissues. Inspired by the binocular vision attention features of the human visual system, we propose in another work an adaptive enhancement technique for stereo endoscopic images combining depth and edge information. The adaptability of the proposed method consists in adjusting the enhancement to both local image activity and the depth level within the scene, while controlling the inter-view difference using a binocular perception model. A subjective experiment was conducted to evaluate the performance of the proposed algorithm in terms of visual quality, with both expert and non-expert observers, whose scores demonstrated the efficiency of our 3D contrast enhancement technique. In the same scope, in another recent stereo endoscopic image enhancement work, we resort to the wavelet domain to target the enhancement toward specific image components, using the multiscale representation and its efficient space-frequency localization property. The proposed joint enhancement methods rely on cross-view processing and depth information, for both the wavelet decomposition and the enhancement steps, to exploit inter-view redundancies together with perceptual human visual system properties related to contrast sensitivity and binocular combination and rivalry. The visual quality of the processed images and objective assessment metrics demonstrate the efficiency of our joint stereo enhancement in adjusting image illumination in both dark and saturated regions and emphasizing local image details, such as fine veins and micro vessels, compared with other endoscopic enhancement techniques for 2D and 3D images.
5

Li, Dong. "Thermal image analysis using calibrated video imaging." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4455.

Full text
Abstract:
Thesis (Ph.D.)--University of Missouri-Columbia, 2006.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf file); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on April 23, 2009). Includes bibliographical references.
6

Eastwood, Brian S. Taylor Russell M. "Multiple layer image analysis for video microscopy." Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2009. http://dc.lib.unc.edu/u?/etd,2813.

Full text
Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2009.
Title from electronic title page (viewed Mar. 10, 2010). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science." Discipline: Computer Science; Department/School: Computer Science.
7

Sheikh, Faridul Hasan. "Analysis of 3D color matches for the creation and consumption of video content." Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4001.

Full text
Abstract:
The objective of this thesis is to propose a solution to the problem of color consistency between images originating from the same scene, irrespective of acquisition conditions. We therefore present a new color-mapping framework that can compensate for color differences and achieve color consistency between views of the same scene. Our proposed framework works in two phases. In the first phase, we propose a new method that robustly collects color correspondences from the neighborhoods of sparse feature correspondences, despite the limited accuracy of those feature correspondences. In the second phase, from these color correspondences, we introduce a new two-step robust estimation of the color-mapping model: first, a nonlinear channel-wise estimation; second, a linear cross-channel estimation. For experimental assessment, we propose two new image datasets: one with ground truth for quantitative assessment, and another, without ground truth, for qualitative assessment. We have conducted a series of experiments to investigate the robustness of our proposed framework and to compare it with the state of the art. We also provide a brief overview, sample results, and future perspectives on various applications of color mapping. Our experiments demonstrate that, unlike many state-of-the-art methods, the proposed color mapping is robust to changes in illumination spectrum, illumination intensity, imaging devices (sensor, optics), imaging device settings (exposure, white balance), and viewing conditions (viewing angle, viewing distance).
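The channel-wise estimation phase builds on fitting a mapping to color correspondences per channel. As an illustrative simplification of the thesis's polynomial/cross-channel pipeline, a least-squares gain/offset fit for a single channel can be sketched as:

```python
def fit_channel(src, dst):
    """Least-squares gain/offset (a, b) with dst ~ a * src + b for one
    color channel, given paired correspondence values from two views."""
    n = len(src)
    mean_x, mean_y = sum(src) / n, sum(dst) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(src, dst))
    var = sum((x - mean_x) ** 2 for x in src)
    a = cov / var
    return a, mean_y - a * mean_x

# Hypothetical red-channel correspondences between two views,
# constructed so that dst = 1.5 * src + 5 exactly.
src = [20, 60, 100, 140, 180]
dst = [35, 95, 155, 215, 275]
a, b = fit_channel(src, dst)
assert abs(a - 1.5) < 1e-9 and abs(b - 5) < 1e-9
```

The thesis then refines such per-channel fits with a RANSAC-style model estimation and a cross-channel step to capture inter-channel correlations.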
8

Lee, Sangkeun. "Video analysis and abstraction in the compressed domain." Diss., Georgia Institute of Technology, 2003. Available online: http://etd.gatech.edu/theses/available/etd-04072004-180041/unrestricted/lee%5fsangkeun%5f200312%5fphd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Guo, Y. (Yimo). "Image and video analysis by local descriptors and deformable image registration." Doctoral thesis, Oulun yliopisto, 2013. http://urn.fi/urn:isbn:9789526201412.

Full text
Abstract:
Abstract Image description plays an important role in representing the inherent properties of entities and scenes in static images. Within the last few decades, it has become a fundamental issue in many practical vision tasks, such as texture classification, face recognition, material categorization, and medical image processing. The study of static image analysis can also be extended to video analysis, such as dynamic texture recognition, classification and synthesis. This thesis contributes to the research and development of image and video analysis from two aspects. In the first part of this work, two image description methods are presented to provide discriminative representations for image classification. They are designed in an unsupervised (i.e., class labels of texture images are not available) and a supervised (i.e., class labels of texture images are available) manner, respectively. First, a supervised model is developed to learn discriminative local patterns, which formulates the image description as an integrated three-layered model that estimates an optimal pattern subset of interest by simultaneously considering the robustness, discriminative power and representation capability of features. Second, for the case that class labels of training images are unavailable, a linear configuration model is presented to describe microscopic image structures in an unsupervised manner, which is subsequently combined with a local descriptor: the local binary pattern (LBP). This description is theoretically verified to be rotation invariant and provides a discriminative complement to the conventional LBPs. In the second part of the thesis, based on static image description and deformable image registration, video analysis is studied for the applications of dynamic texture description, synthesis and recognition. 
First, a dynamic texture synthesis model is proposed to create a continuous and infinitely varying stream of images given a finite input video; it stitches video clips in the time domain by selecting proper matching frames and organizing them into a logical order. Second, a method for facial expression recognition is proposed, which formulates the dynamic facial expression recognition problem as the construction of longitudinal atlases and a groupwise image registration problem
Tiivistelmä (Finnish abstract) Image description plays an important role in depicting the inherent entities and scenes appearing in static images. In recent decades it has become a fundamental problem in many practical machine vision tasks, such as texture classification, face recognition, material classification, and medical image analysis. The research field of static image analysis can also be extended to video analysis, such as dynamic texture recognition, classification, and synthesis. This doctoral research contributes to the research and development of image and video analysis from two perspectives. In the first part of the work, two image description methods are presented for creating discriminative representations for image classification. They are designed to be unsupervised (i.e., class labels of texture images are not available) or supervised (i.e., class labels are available). First, a supervised model is developed to learn discriminative local patterns, formulating the image description method as an integrated three-layer model, with the goal of estimating an optimal subset of patterns of interest by simultaneously taking into account the robustness, discriminative power, and representation capacity of the features. Next, for cases where class labels are not available, a linear configuration model is presented for describing microscopic image structures in an unsupervised manner. This is then used together with a local descriptor, the local binary pattern (LBP) operator. A theoretical analysis shows that the developed descriptor is rotation invariant and can produce discriminative, complementary information for the conventional LBP method. In the second part of the work, video analysis is studied based on static image description and deformable image registration, with dynamic texture description, synthesis, and recognition as application areas. 
First, a model for dynamic texture synthesis is proposed that creates a continuous and infinite stream of images from a given finite-length video. The method stitches video clips together in the time domain by selecting mutually compatible frames from the video and arranging them in a logical order. Next, a new method for facial expression recognition is presented that formulates the dynamic facial expression recognition problem as a problem of constructing longitudinal atlases and groupwise image registration
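The descriptor this thesis builds on can be sketched minimally. This is the generic LBP of Ojala et al. with its rotation-invariant mapping, not the thesis's learned patterns or linear configuration model; the `>=` thresholding convention is one common choice.

```python
def lbp_code(patch):
    """8-neighbor local binary pattern of the center pixel of a 3x3 patch.

    Each neighbor is thresholded against the center (bit = 1 if
    neighbor >= center) and the 8 bits form the LBP code.
    """
    center = patch[1][1]
    # Neighbors in circular order: top-left, then clockwise.
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(coords):
        if patch[r][c] >= center:
            code |= 1 << bit
    return code

def lbp_rotation_invariant(code):
    """Rotation-invariant LBP: the minimum value over all 8 circular
    bit rotations of the code (the LBP-ri mapping)."""
    return min(((code >> k) | (code << (8 - k))) & 0xFF for k in range(8))
```

Histogramming these codes over an image yields the texture representation that the thesis's learned variants aim to improve on.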
APA, Harvard, Vancouver, ISO, and other styles
10

Stobaugh, John David. "Novel use of video and image analysis in a video compression system." Thesis, University of Iowa, 2015. https://ir.uiowa.edu/etd/1766.

Full text
Abstract:
As consumer demand for higher-quality video at lower bit rates increases, so does the need for more sophisticated methods of compressing videos into manageable file sizes. This research attempts to address these concerns while still maintaining reasonable encoding times. Modern segmentation and grouping analysis are used with code-vectorization techniques and other optimization paradigms to improve quality and performance within the next-generation coding standard, High Efficiency Video Coding. This research saw, on average, a 50% decrease in encoder run-time with marginal decreases in perceived quality.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Video image analysis"

1

Tokan-Lawal, Folashade. Quantitative image analysis of video images of the larynx. Manchester: UMIST, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tian, Jing, and Li Chen. Intelligent image and video interpretation: Algorithms and applications. Hershey, PA: Information Science Reference, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

C, Kotropoulos, and Pitas I, eds. Nonlinear model-based image/video processing and analysis. New York: Wiley, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Image and video processing in the compressed domain. Boca Raton: CRC Press, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Subhasis, Chaudhuri, and SpringerLink (Online service), eds. Video Analysis and Repackaging for Distance Education. New York, NY: Springer New York, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Camastra, Francesco, and Alessandro Vinciarelli. Machine Learning for Audio, Image and Video Analysis. London: Springer London, 2015. http://dx.doi.org/10.1007/978-1-4471-6735-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Camastra, Francesco, and Alessandro Vinciarelli. Machine Learning for Audio, Image and Video Analysis. London: Springer London, 2008. http://dx.doi.org/10.1007/978-1-84800-007-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Content-based analysis of digital video. Boston, MA: Kluwer Academic Publishers, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kwaśnicka, Halina, and Lakhmi C. Jain, eds. Bridging the Semantic Gap in Image and Video Analysis. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73891-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Randall, Reed Todd, ed. Digital image sequence processing, compression, and analysis. Boca Raton: CRC Press, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Video image analysis"

1

Söderström, Ulrik, and Haibo Li. "High Definition Wearable Video Communication." In Image Analysis, 500–512. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02230-2_51.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Douze, Matthijs, and Vincent Charvillat. "Real-Time Tracking of Video Sequences in a Panoramic View for Object-Based Video Coding." In Image Analysis, 1022–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-45103-x_134.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Uegaki, Naoki, Masao Izumi, and Kunio Fukunaga. "Multimodal Automatic Indexing for Broadcast Soccer Video." In Image Analysis, 802–9. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11499145_81.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Luzardo, Marcos, Matti Karppa, Jorma Laaksonen, and Tommi Jantunen. "Head Pose Estimation for Sign Language Video." In Image Analysis, 349–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38886-6_34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Walter, Robert J., and Michael W. Berns. "Digital Image Processing and Analysis." In Video Microscopy, 327–92. Boston, MA: Springer US, 1986. http://dx.doi.org/10.1007/978-1-4757-6925-8_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Paalanen, Pekka, Joni-Kristian Kämäräinen, and Heikki Kälviäinen. "Image Based Quantitative Mosaic Evaluation with Artificial Video." In Image Analysis, 470–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02230-2_48.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Koskela, Markus, Mats Sjöberg, and Jorma Laaksonen. "Improving Automatic Video Retrieval with Semantic Concept Detection." In Image Analysis, 480–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02230-2_49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Slot, Kristine, René Truelsen, and Jon Sporring. "Content-Aware Video Editing in the Temporal Domain." In Image Analysis, 490–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02230-2_50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lundmark, Astrid, and Leif Haglund. "Adaptive Spatial and Temporal Prefiltering for Video Compression." In Image Analysis, 953–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-45103-x_125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bräuer-Burchardt, Christian. "Detection of Strong Shadows in Monochromatic Video Streams." In Image Analysis, 646–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-45103-x_86.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Video image analysis"

1

Fullerton, Anne M., Thomas C. Fu, David A. Drazen, and Don C. Walker. "Analysis Methods for Vessel Generated Spray." In ASME 2010 3rd Joint US-European Fluids Engineering Summer Meeting collocated with 8th International Conference on Nanochannels, Microchannels, and Minichannels. ASMEDC, 2010. http://dx.doi.org/10.1115/fedsm-icnmm2010-31313.

Full text
Abstract:
The droplet sizes and velocities contained in vessel-generated spray are difficult to quantify. This paper describes three different methods to quantify velocity and size distributions from high-speed video of spray from a planing boat. These methods include feature tracking, displacement tracking and video inversion. For the feature tracking method, the images were preprocessed using contrast-limited adaptive histogram equalization, and then converted to binary images with a specific intensity cutoff level. Image statistics were then generated from this image, including droplet area and effective diameter. These images were processed using commercial PIV software to obtain velocities. For the displacement tracking method, the images were also converted to binary images with a specific intensity cutoff level. Image statistics were again compiled from this binary image. A droplet filter was then applied using a binary erosion image processing technique, where large droplets were removed because the entire droplet may not be in frame, and small droplets were removed because they might not overlap between frames. Droplets were then tracked by comparing the bounding boxes of two droplets between time frames. The video inversion method consisted of manipulating the original high-speed videos from spatial x-y frames in time space into time-y frames in x-space, where the x-axis is longitudinal along the ship and the y-axis is vertical to the ship. From this orientation, the speed of the general spray mass could be determined by summing the pixels in time columns for each x frame. Comparisons of droplet size distribution between the feature and displacement tracking methods yield qualitatively similar results, with some disagreement likely due to the different threshold levels. The trend of the distribution curve suggests that both methods are unable to resolve the smallest droplet sizes, due to the processing filters applied as well as the field of view of the camera. 
The three analysis methods compare well in their spray velocity computation, and are also similar to spray speed predictions found in the literature for a given geometry and vessel speed.
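The droplet-sizing preprocessing described above (binarize with an intensity cutoff, then measure per-droplet area and effective diameter) can be sketched as follows. This is a simplified stand-in, not the paper's pipeline: it omits the histogram equalization and size filters, and the 4-connectivity labeling is one common convention.

```python
import math
from collections import deque

def droplet_stats(image, cutoff):
    """Binarize a grayscale image with an intensity cutoff, label
    connected droplets (4-connectivity flood fill), and report each
    droplet's area and effective diameter (circle of equal area)."""
    h, w = len(image), len(image[0])
    binary = [[1 if image[r][c] >= cutoff else 0 for c in range(w)]
              for r in range(h)]
    seen = [[False] * w for _ in range(h)]
    stats = []
    for r in range(h):
        for c in range(w):
            if binary[r][c] and not seen[r][c]:
                # Flood-fill one droplet and count its pixels.
                area, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                stats.append({"area": area,
                              "diameter": 2.0 * math.sqrt(area / math.pi)})
    return stats
```

The "effective diameter" here is the diameter of a circle with the droplet's pixel area, one standard way to reduce irregular blobs to a single size number.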
APA, Harvard, Vancouver, ISO, and other styles
2

Звездакова, Анастасия, Anastasia Zvezdakova, Дмитрий Куликов, Dmitriy Kulikov, Денис Кондранин, Denis Kondranin, Дмитрий Ватолин, and Dmitriy Vatolin. "Barriers Towards No-reference Metrics Application to Compressed Video Quality Analysis: on the Example of No-reference Metric NIQE." In 29th International Conference on Computer Graphics, Image Processing and Computer Vision, Visualization Systems and the Virtual Environment GraphiCon'2019. Bryansk State Technical University, 2019. http://dx.doi.org/10.30987/graphicon-2019-2-22-27.

Full text
Abstract:
This paper analyses the application of the no-reference metric NIQE to the task of video-codec comparison. A number of issues in the metric's behavior on videos were detected and described. The metric produces outlying scores on black and solid-colored frames. The proposed averaging technique for metric quality scores helped to improve the results in some cases. Also, NIQE gives low quality scores to videos with detailed textures and higher scores to videos at lower bit rates, due to the blurring of these textures after compression. Although NIQE showed natural results for many tested videos, it is not universal and currently can’t be used for video-codec comparisons.
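One way to realize the averaging idea above is to skip near-solid frames before aggregating per-frame scores. This is a hedged sketch, not the paper's exact technique: the variance threshold is invented, and a real NIQE implementation is assumed to be supplied by the caller rather than provided here.

```python
def robust_video_score(frames, metric, min_variance=1.0):
    """Average a per-frame no-reference metric over a video, skipping
    near-solid frames (black frames, fades) whose near-zero variance
    makes metrics such as NIQE produce outlying scores.

    `metric` is any callable mapping a frame (2-D list of pixel
    values) to a float score.
    """
    scores = []
    for frame in frames:
        pixels = [p for row in frame for p in row]
        mean = sum(pixels) / len(pixels)
        variance = sum((p - mean) ** 2 for p in pixels) / len(pixels)
        if variance >= min_variance:  # keep only informative frames
            scores.append(metric(frame))
    if not scores:
        raise ValueError("no informative frames to score")
    return sum(scores) / len(scores)
```

Filtering before averaging keeps a handful of black frames from dominating the codec comparison, which is the failure mode the paper reports.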
APA, Harvard, Vancouver, ISO, and other styles
3

Lima, Karim Ferreira, Rodrigo Marques de Figueiredo, Eduardo Augusto Martins, and Jean Schmith. "Virtual lines for offside situations analysis in football." In Anais Estendidos da Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/sibgrapi.est.2021.20037.

Full text
Abstract:
Offside is one of the situations analyzed by the Video Assistant Referee (VAR). However, it has caused some controversy due to the delay in the analysis and definition of the irregularity. This work proposes a method that helps in the analysis of offside situations and also makes it available for non-professional matches. Here, image processing algorithms were used to determine offside situations in football matches from TV videos, of course in accordance with the game regulations. The method includes image vanishing-point identification, camera calibration and the drawing of a virtual offside line. The method presented good results on 10 videos selected for analysis, five from the right side of the field and five from the left side. Among the videos, one was chosen as the basis for explaining the development of the method and for demonstrating a situation with a virtual line drawn automatically, thereby determining an offside situation. As a result, the virtual line is identified by the color red when the manually selected player is offside and green when he is not.
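The virtual-line geometry behind this method can be sketched numerically. This is an illustration under stated assumptions, not the paper's implementation: field lines parallel in 3-D meet at a vanishing point in the image, so each player's offside line must pass through it; the coordinates, reference row, and attack direction below are invented.

```python
def offside_x(vanishing_point, player_foot, reference_row):
    """X-coordinate where the virtual offside line through a player's
    foot point and the image vanishing point crosses a reference row."""
    vx, vy = vanishing_point
    px, py = player_foot
    t = (reference_row - vy) / (py - vy)
    return vx + (px - vx) * t

def is_offside(vanishing_point, defender_foot, attacker_foot,
               reference_row=800.0, attack_toward_smaller_x=True):
    """Compare the two virtual lines at the reference row. The paper's
    colors (red = offside, green = onside) follow this boolean."""
    xd = offside_x(vanishing_point, defender_foot, reference_row)
    xa = offside_x(vanishing_point, attacker_foot, reference_row)
    return xa < xd if attack_toward_smaller_x else xa > xd
```

Comparing the two line crossings at a common row, rather than raw pixel positions, is what corrects for the camera's perspective.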
APA, Harvard, Vancouver, ISO, and other styles
4

Dimitriu, Anda. "TEACHING ENGLISH IN A DIGITAL WORLD: THE ADVANTAGES AND DISADVANTAGES OF INTRODUCING VIDEOS IN ENGLISH LANGUAGE COURSES." In eLSE 2017. Carol I National Defence University Publishing House, 2017. http://dx.doi.org/10.12753/2066-026x-17-214.

Full text
Abstract:
As the international language of our times, English has benefited in recent years from a myriad of pedagogical approaches, case-based studies and philosophical musings. This phenomenon has also been doubled by the proliferation of technology in the twenty-first century and the shift in mentality on the part of the newer generations, who now view short video sequences as a commonplace aspect of their lives. As a natural consequence, specialists in the field of pedagogy and didactics have drawn attention to the opportunities this new technology might bring to the classroom, and have even tried and succeeded in incorporating readily available videos into the teaching-learning process they were a part of. Still, implementing a change in the curriculum inherently brings obstacles as well; disappointed that the change was not revolutionary in nature or all-encompassing in effect, many have given up and returned to the well-trodden path of dividing English learning into the aspects they were familiar with, leaving videos or films aside in favour of written texts or grammatical exercises. But modest as they may be, and despite the drawbacks which come with this change, I believe the advantages of video clips in the teaching of English are important from a twofold point of view: firstly, videos may prove a very useful tool for placing specialised vocabulary or grammatical structures in a clearer context for generations who live surrounded by images and video snaps, and secondly, they could provide students with a chance to be immersed in the culture and frame of mind of native speakers, if only for a short while. 
Therefore, this paper will offer an overview of the advantages and disadvantages of introducing videos and movie clips into the curriculum of English learners, by analysing the theoretical foundation of such a change as well as by giving concrete examples of how a simple video could improve the learning experience for students with various interests and specialisations.
APA, Harvard, Vancouver, ISO, and other styles
5

MacHuchon, Keith R., Wehan J. Wessels, Chin H. Wu, and Paul C. Liu. "The Use of Streamed Digital Video Data and Binocular Stereoscopic Image System (BiSIS) Processing Methods to Analyze Ocean Wave Field Kinematics." In ASME 2009 28th International Conference on Ocean, Offshore and Arctic Engineering. ASMEDC, 2009. http://dx.doi.org/10.1115/omae2009-79853.

Full text
Abstract:
The kinematics of short-crested steep and breaking waves in the ocean is a subject that is best studied spatially, in the time domain, to obtain a good understanding of the multi-directional spreading of energy, which is dependent on strongly non-linear wave interactions in the system. The paper will cover the collection, recording and processing of streamed sea-surface image data, obtained simultaneously from multiple digital video cameras, for analysis using stereoscopic image processing methods to provide information on the kinematics of ocean wave fields. The data streaming architecture, which will be reviewed, incorporates an advanced laptop computer and two to three stand-alone digital video cameras, all linked through a gigabit Ethernet network connection with sufficient bandwidth to simultaneously transfer the image data from the cameras to hard-drive storage. The modifications to the laptop computer comprise increased processing capacity to enable it to accept and process large IP frames simultaneously. The system has the capacity to continuously record images, at a rate of up to 60 frames per second, for periods of up to one hour. It includes an external triggering mechanism, synchronised to a microsecond, to ensure that stereo pairs of images are captured simultaneously. Calibration of the cameras, and their stereoscopic configuration, is a critical part of the overall process, and we will discuss how ill-conditioned and singular matrices, which can prevent the determination of the required intrinsic and extrinsic parameters, can be avoided. The paper will include examples of wave-field image data collected using streamed digital video data and Binocular Stereoscopic Image System (BiSIS) processing methods. It will also give examples of digital video images and dimensional wave-field data collected and processed using the Automated Trinocular Stereoscopic Imaging Systems (ATSIS) methods. 
Both of these systems provide a valuable means of analysing irregular, non-linear, short-crested waves, which leads to an improved understanding of ocean wave kinematics.
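Once a stereo rig is calibrated and rectified, the core measurement reduces to a simple relation. This is a deliberately simplified sketch of what systems such as BiSIS/ATSIS recover, under the assumption of a rectified pair with known focal length and baseline; the paper's full calibration (intrinsics, extrinsics, avoiding the ill-conditioned matrices mentioned above) is not shown.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a matched point from a rectified stereo pair.

    Standard relation Z = f * B / d, with focal length f in pixels,
    camera baseline B in meters, and disparity d in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

Because depth varies as 1/d, small disparity errors dominate at long range, which is one reason camera synchronization and calibration quality matter so much for wave-surface reconstruction.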
APA, Harvard, Vancouver, ISO, and other styles
6

Guo, Shenghan, Dali Wang, Jian Chen, Zhili Feng, and Weihong “Grace” Guo. "Predicting Nugget Size of Resistance Spot Welds Using Infrared Thermal Videos With Image Segmentation and Convolutional Neural Network." In ASME 2021 16th International Manufacturing Science and Engineering Conference. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/msec2021-61775.

Full text
Abstract:
Abstract Resistance spot welding (RSW) is a widely adopted joining technique in the automotive industry. Recent advancements in sensing technology make it possible to collect thermal videos of the weld nugget during RSW using an infrared camera. The effective and timely analysis of such thermal videos has the potential of enabling in-situ nondestructive evaluation (NDE) of the weld nugget by predicting nugget thickness and diameter. Deep learning (DL) has been demonstrated to be effective in analyzing imaging data in many applications. However, the thermal videos in RSW present unique data-level challenges that compromise the effectiveness of most pre-trained DL models. We propose a novel image segmentation method for handling RSW thermal videos to improve the prediction performance of DL models in RSW. The proposed method transforms raw thermal videos into spatial-temporal instances in four steps: video-wise normalization, removal of uninformative images, watershed segmentation, and spatial-temporal instance construction. The extracted spatial-temporal instances serve as the input data for training a DL-based NDE model. The proposed method is able to extract high-quality data with spatial-temporal correlations in the thermal videos, while being robust to the impact of unknown surface emissivity. Our case studies demonstrate that the proposed method achieves better prediction of nugget thickness and diameter than predicting without the transformation.
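The first two of the four steps named above can be sketched as follows. This is an illustrative reading with an invented threshold, not the paper's procedure: video-wise normalization uses one min/max over the whole video (so relative temperatures stay comparable across frames), and frames with too little normalized dynamic range are treated as uninformative. Watershed segmentation and instance construction are omitted.

```python
def preprocess_thermal_video(frames, min_range=0.05):
    """Video-wise normalization plus removal of uninformative frames.

    `frames` is a list of 2-D lists of raw thermal readings. All frames
    are scaled with a single video-wide min/max, then frames whose
    normalized dynamic range falls below `min_range` are dropped.
    """
    lo = min(p for f in frames for row in f for p in row)
    hi = max(p for f in frames for row in f for p in row)
    span = (hi - lo) or 1.0  # guard against a constant video
    normalized = [[[(p - lo) / span for p in row] for row in f]
                  for f in frames]
    kept = []
    for f in normalized:
        pixels = [p for row in f for p in row]
        if max(pixels) - min(pixels) >= min_range:
            kept.append(f)
    return kept
```

Normalizing per video rather than per frame matters here: a frame-wise min/max would stretch cold, uninformative frames to full range and hide exactly the contrast the nugget prediction depends on.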
APA, Harvard, Vancouver, ISO, and other styles
7

Shou, Yu-Wen. "Intelligent Judgment System for Vehicle-Overtaking by Motion Detection in Subsequent Images." In ASME 2009 International Mechanical Engineering Congress and Exposition. ASMEDC, 2009. http://dx.doi.org/10.1115/imece2009-10922.

Full text
Abstract:
In this paper, we propose an intelligent judgment system that determines the exact timing for overtaking another car in simulated dynamic circumstances, using motion detection and feature analysis in subsequent images based on digital image processing. The strategic methodology of detecting motion vectors extracted from the source video file can effectively evaluate and predict the behavior of surrounding vehicles and at the same time give an appropriate suggestion as to whether the driver could overtake another car. This not only differs from traditional methods of computational feature-only analysis of a specific static image but also provides an innovative idea among the related applications of intelligent transportation systems (ITS). Our system makes use of video files recorded from the rear-view mirror of the vehicle, obtained from digital image-capturing devices. It also constantly reevaluates the information of motion vectors in the surrounding environment to update the useful information between the driver’s vehicle and the background. In order to tackle the problem of real-time processing, this paper simplifies the processes of feature selection and analysis for video processing in particular. The crucial features used to give dynamic information, motion vectors, can be obtained from defined consecutive images. We define a variable number of images according to the extent of motion variation in different real-time situations. Our dynamic features are composed of geometrical and statistical characteristics from each processed image in the defined duration. Our scheme can identify the difference between the background and the object of interest, which also reveals the dynamic information of the determined number of images extracted from the whole video file. 
Our experimental results show that the proposed features can give useful information about a given traffic condition, such as the locations of surrounding vehicles and the way the vehicles are moving. The real-time problems of ITS are taken into consideration in this paper, and the developed feature series are flexible to changes of occasion. More useful features in dynamic environments, as well as our feature series, will be applied in our systematic mechanisms, and improvement on real-time problems by motion vectors should be progressively made in the near future.
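The motion vectors these features rest on can be sketched with textbook block matching. This is a generic illustration, not the paper's own extraction procedure (which is not specified at this level of detail): one block's displacement between two frames is the shift minimizing the sum of absolute differences (SAD) within a search radius.

```python
def motion_vector(prev, curr, top, left, size=4, radius=3):
    """Exhaustive block matching for one block.

    Returns the (dy, dx) within `radius` that minimizes the SAD between
    the block of `prev` at (top, left) and the shifted block in `curr`.
    """
    h, w = len(curr), len(curr[0])
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ty, tx = top + dy, left + dx
            if ty < 0 or tx < 0 or ty + size > h or tx + size > w:
                continue  # candidate block falls outside the frame
            sad = sum(abs(prev[top + r][left + c] - curr[ty + r][tx + c])
                      for r in range(size) for c in range(size))
            if best is None or sad < best[0]:
                best = (sad, dy, dx)
    return best[1], best[2]
```

Aggregating such per-block vectors over the rear-view sequence gives exactly the kind of dynamic feature field the abstract describes for judging surrounding vehicles' behavior.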
APA, Harvard, Vancouver, ISO, and other styles
8

Imai, Seira, Yasuharu Nakajima, and Motohiko Murai. "Experimental Study on Bubble Size Measurement for Development of Seafloor Massive Sulfides." In ASME 2019 38th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/omae2019-95186.

Full text
Abstract:
Abstract Seafloor Massive Sulfides have been expected to become future mineral resources. To promote the development of Seafloor Massive Sulfides, Seafloor Mineral Processing, a method of extracting valuable minerals from the ores on the deep seafloor using flotation to reduce the cost of lifting ores from the seafloor to the sea surface, was proposed. To apply flotation to seafloor mineral processing, a measurement method of bubble size applicable to deep-sea conditions has been desired, because it is necessary to generate fine air bubbles suitable for flotation under pressure conditions on the deep seafloor. The authors have therefore studied bubble size measurement by image analysis, which is expected to be applicable to deep-sea conditions. In the first phase of this study, photographic conditions suitable for image analysis of air bubbles were identified. Air bubbles were generated by using a porous nozzle at several air flow rates in a bubble column with a rectangular cross-section. Video images of air bubbles were taken by using both a high-speed camera and a video camera with a low frame rate. Bubble size was measured by binarizing the video images of bubbles. Under optimal photographic conditions, bubble size was obtained from not only the high-speed camera but also the video camera, and the two sets of size data agreed relatively well, which implies that bubble size measurement by image analysis would be applicable to deep-sea conditions. In the second phase, experiments were carried out under high-pressure conditions up to 2.4 MPa. Single bubble generation by using a capillary nozzle in a small pressure chamber with a sight glass was observed by using a digital microscope. Bubble size measurement by image analysis was carried out by the procedure established in the first phase. While the process of bubble generation at the high pressures was similar to that at atmospheric pressure, the bubble size decreased as the pressure rose. 
The result implies there is a strong correlation between the pressure and the bubble size.
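A first-order model is consistent with the observed trend of smaller bubbles at higher pressure. The isothermal ideal-gas scaling below is an assumption for illustration only, not the paper's measured relation (which also involves nozzle dynamics): volume varies as 1/p, so equivalent diameter varies as p to the power -1/3.

```python
def bubble_diameter_at_pressure(d0_mm, p0_mpa, p_mpa):
    """First-order estimate of an air bubble's equivalent diameter at
    ambient pressure p_mpa, given diameter d0_mm at reference pressure
    p0_mpa, assuming isothermal ideal-gas behavior (V ~ 1/p, so
    d ~ p**(-1/3)). Illustrative only."""
    return d0_mm * (p0_mpa / p_mpa) ** (1.0 / 3.0)
```

For example, a bubble of 1 mm at 0.1 MPa (roughly atmospheric) would shrink to about a third of that diameter at the paper's maximum test pressure of 2.4 MPa under this idealization.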
APA, Harvard, Vancouver, ISO, and other styles
9

Ivanov, Oleg, Alexey Danilovich, Vyacheslav Stepanov, Sergey Smirnov, and Victor Potapov. "Remote Measurements of Radioactivity Distribution With BROKK Robotic System." In ASME 2009 12th International Conference on Environmental Remediation and Radioactive Waste Management. ASMEDC, 2009. http://dx.doi.org/10.1115/icem2009-16147.

Full text
Abstract:
A robotic system for the remote measurement of radioactivity in reactor areas was developed. The BROKK robotic system replaces hand-held radiation measuring tools. The system consists of a collimated gamma detector, a standard gamma detector, a color CCD video camera and searchlights, all mounted on a robotic platform (BROKK). The signals from the detectors are coupled with the video signals and are transferred to an operator's console via a radio channel or a cable. The operator works from a safe position. The video image of the object, with the exposure dose rate from the detectors superimposed, is displayed on the monitor screen, and the images are recorded for subsequent analysis. Preliminary work has started on the decommissioning of a research reactor at the RRC «Kurchatov Institute». Results of remote radioactivity measurements with the new system during the radiation inspection of this reactor's waste storage are presented and discussed.
APA, Harvard, Vancouver, ISO, and other styles
10

Hsu, Fu Kuo, Eddy C. Tam, Taiwei Lu, Francis T. S. Yu, Eiichiro Nishihara, and Takashi Nishikawa. "Implementation and analysis of optical-disk-based joint-transform correlators." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1990. http://dx.doi.org/10.1364/oam.1990.thii2.

Full text
Abstract:
We have previously proposed a joint-transform correlator incorporating an optical disk as a reference library, in which the reference image is read out in parallel and directly used to obtain a joint transform with the real-time input image. The architecture combines the features of the conventional joint-transform correlator and the compact-disk optical head. In this paper we propose two joint-transform correlator schemes that use optical disks, namely the equal-image-dimension (EID) and the direct-joint-transform (DJT) schemes. In both schemes a real-time input image is obtained through a video camera and is displayed on an electrically addressable spatial light modulator (SLM). To alleviate the dimensional mismatch of the input images, the reference image is first magnified and then displayed on an SLM by means of the EID scheme. For the DJT scheme, the Fourier transform of the input image is magnified before being combined with the Fourier spectrum of the reference image. Systems analysis and design of both architectures are presented, and experimental demonstrations of these two techniques are also provided.
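The principle underlying both schemes can be demonstrated numerically in one dimension. This is a sketch of the generic joint-transform correlation idea, not a model of either optical architecture: the reference and input are placed side by side, a square-law detector records the joint power spectrum, and its inverse transform shows correlation peaks at lags equal to their separation.

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT, adequate for this small demonstration."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def jtc_correlation(joint_signal):
    """1-D joint-transform correlation: transform the joint input, take
    the joint power spectrum (what a square-law detector records in the
    Fourier plane), and inverse-transform it. Matching patterns produce
    correlation peaks at lags equal to the reference/input separation."""
    spectrum = dft(joint_signal)
    power = [abs(v) ** 2 for v in spectrum]
    return [abs(v) for v in idft(power)]
```

The central peak at lag 0 is the total signal energy; the off-center peaks reveal whether, and where, the input matches the reference.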
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Video image analysis"

1

Bandat, N. E. Video image analysis using the Selective Video Processor development platform. Office of Scientific and Technical Information (OSTI), August 1989. http://dx.doi.org/10.2172/6161006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Liang, Yiqing. Video Retrieval Based on Language and Image Analysis. Fort Belvoir, VA: Defense Technical Information Center, May 1999. http://dx.doi.org/10.21236/ada364129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bovik, Alan C. AM-FM Analysis of Images and Video. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada387139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Brumby, Steven P. Video Analysis & Search Technology (VAST): Automated content-based labeling and searching for video and images. Office of Scientific and Technical Information (OSTI), May 2014. http://dx.doi.org/10.2172/1133765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rigotti, Christophe, and Mohand-Saïd Hacid. Representing and Reasoning on Conceptual Queries Over Image Databases. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.89.

Full text
Abstract:
The problem of content management of multimedia data types (e.g., image, video, graphics) is becoming increasingly important with the development of advanced multimedia applications. Traditional database management systems are inadequate for the handling of such data types. They require new techniques for query formulation, retrieval, evaluation, and navigation. In this paper we develop a knowledge-based framework for modeling and retrieving image data by content. To represent the various aspects of an image object's characteristics, we propose a model which consists of three layers: (1) Feature and Content Layer, intended to contain image visual features such as contours, shapes, etc.; (2) Object Layer, which provides the (conceptual) content dimension of images; and (3) Schema Layer, which contains the structured abstractions of images, i.e., a general schema about the classes of objects represented in the object layer. We propose two abstract languages on the basis of description logics: one for describing knowledge of the object and schema layers, and the other, more expressive, for making queries. Queries can refer to the form dimension (i.e., information of the Feature and Content Layer) or to the content dimension (i.e., information of the Object Layer). These languages employ a variable-free notation, and they are well suited for the design, verification and complexity analysis of algorithms. As the amount of information contained in the previous layers may be huge and operations performed at the Feature and Content Layer are time-consuming, resorting to the use of materialized views to process and optimize queries may be extremely useful. For that, we propose a formal framework for testing containment of a query in a view expressed in our query language. The algorithm we propose is sound and complete and relatively efficient.
APA, Harvard, Vancouver, ISO, and other styles
7

Sapiro, Guillermo. Structured and Collaborative Signal Models: Theory and Applications in Image, Video, and Audio Analysis. Fort Belvoir, VA: Defense Technical Information Center, January 2013. http://dx.doi.org/10.21236/ada586672.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jerosch, K., A. Luedtke, P. Pledge, O. Paitich, and V. E. Kostylev. Automatic image analysis of sediment types: mapping from georeferenced video footage on the Labrador Shelf. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 2011. http://dx.doi.org/10.4095/288055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Baluk, Nadia, Natalia Basij, Larysa Buk, and Olha Vovchanska. VR/AR-TECHNOLOGIES – NEW CONTENT OF THE NEW MEDIA. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11074.

Full text
Abstract:
The article analyzes the peculiarities of media content shaping and transformation in the convergent dimension of cross-media, taking into account the possibilities of augmented reality. Guided by the principles of objectivity, complexity and reliability in scientific research, a number of general scientific and special methods are used: analysis, synthesis, generalization, monitoring, observation, and problem-thematic, typological and discursive methods. According to the form of information presentation, such types of media content as visual, audio, verbal and combined are defined and characterized. The most important in journalism is verbal content, as it carries the main information load. The dynamic development of converged media leads to the dominance of image and video content; the likelihood of increasing the secondary content of the text increases. Given the market situation, an effective information product is combined content that pairs text with images, spreadsheets with video, animation with infographics, etc. An increasing number of new media are using applications and website platforms to interact with recipients. Further, the peculiarities of the new content of new media involving augmented reality are determined. Examples of successful interactive communication between recipients, leading news agencies and commercial structures are provided. The conditions for effective use of VR/AR technologies in the media content of new media, and for the involvement of viewers in changing stories with augmented reality, are determined. The so-called immersive effect achieved with VR/AR technologies involves the complete immersion of the interested audience in the essence of the event being relayed. This interaction can be achieved through different types of VR video interactivity.
One of the most important results of using VR content is the spatio-temporal and emotional immersion of viewers in the plot. The recipient turns from an external observer into an internal one, but this constant participation requires that user preferences be taken into account. Factors such as satisfaction, positive reinforcement, empathy, and value influence viewers' choice of VR/AR content.
APA, Harvard, Vancouver, ISO, and other styles
10

Pikilnyak, Andrey V., Nadia M. Stetsenko, Volodymyr P. Stetsenko, Tetiana V. Bondarenko, and Halyna V. Tkachuk. Comparative analysis of online dictionaries in the context of the digital transformation of education. [б. в.], June 2021. http://dx.doi.org/10.31812/123456789/4431.

Full text
Abstract:
The article is devoted to a comparative analysis of popular online dictionaries and an overview of the main tools these resources offer for language study. The use of dictionaries in learning a foreign language is an important step toward understanding the language. The effectiveness of this process increases with the use of online dictionaries, which provide many tools for improving the educational process. Based on the Alexa Internet resource, the most popular online dictionaries were identified: Cambridge Dictionary, Wordreference, Merriam–Webster, Wiktionary, TheFreeDictionary, Dictionary.com, Glosbe, Collins Dictionary, Longman Dictionary, Oxford Dictionary. A deep analysis of these online dictionaries showed that they share standard functions such as word explanations, transcription, audio pronunciation, semantic connections, and examples of use. In the analyzed dictionaries, we also found additional tools for learning foreign languages (mostly English) that can be effective. In general, we described sixteen functions of the online learning platforms that can be useful in learning a foreign language. We compiled a comparison table based on the following functions: machine translation, multilingualism, video of pronunciation, image of a word, discussion, collaborative editing, rank of words, hints, learning tools, thesaurus, paid services, sharing content, hyperlinks in a definition, registration, lists of words, mobile version, etc. Based on the additional tools of the online dictionaries, we created a diagram that shows the functionality of the analyzed platforms.
APA, Harvard, Vancouver, ISO, and other styles