Journal articles on the topic 'Video image analysis'

To see the other types of publications on this topic, follow the link: Video image analysis.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Video image analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Alothman, Raya Basil, Imad Ibraheem Saada, and Basma Salim Bazel Al-Brge. "A Performance-Based Comparative Encryption and Decryption Technique for Image and Video for Mobile Computing." Journal of Cases on Information Technology 24, no. 2 (April 2022): 1–18. http://dx.doi.org/10.4018/jcit.20220101.oa1.

Abstract:
As data exchange increasingly moves through electronic systems, information security has become a necessity. Protecting images and videos is important in today's visual communication systems, and confidential image/video data must be shielded from unauthorized use. Detecting and identifying unauthorized users is a challenging task. Various researchers have suggested different techniques for securing the transfer of images. This research comparatively surveys these current technologies, addressing the types of images/videos and the different image/video processing techniques together with the steps used to process them. It classifies encryption algorithms into two types, symmetric and asymmetric, and provides a comparative analysis of representative ciphers such as AES, MAES, RSA, DES, 3DES, and Blowfish.
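The symmetric/asymmetric distinction the abstract draws can be illustrated with a minimal sketch. The block below is not one of the ciphers the paper benchmarks (AES, RSA, etc. would come from a real crypto library); it is a toy hash-based stream cipher, insecure and for illustration only, showing the defining symmetric property that one shared key both encrypts and decrypts the image bytes:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key||nonce||counter blocks.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_cipher(data: bytes, key: bytes, nonce: bytes) -> bytes:
    # Symmetric: the very same operation both encrypts and decrypts.
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

pixels = bytes(range(256)) * 4          # stand-in for raw image/video bytes
ct = xor_cipher(pixels, b"secret-key", b"nonce-01")
pt = xor_cipher(ct, b"secret-key", b"nonce-01")
assert pt == pixels and ct != pixels
```

An asymmetric scheme such as RSA would instead encrypt with a public key and decrypt with a distinct private key, which is the basis of the symmetric-vs-asymmetric comparison in the paper.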
2

Deng, Zhaopeng, Maoyong Cao, Yushui Geng, and Laxmisha Rai. "Generating a Cylindrical Panorama from a Forward-Looking Borehole Video for Borehole Condition Analysis." Applied Sciences 9, no. 16 (August 20, 2019): 3437. http://dx.doi.org/10.3390/app9163437.

Abstract:
Geological exploration plays a fundamental and crucial role in geological engineering. The most frequently used method is to obtain borehole videos using an axial view borehole camera system (AVBCS) in a pre-drilled borehole. This approach to surveying the internal structure of a borehole is based on video playback and video screenshot analysis. One drawback of AVBCS is that it provides only a qualitative description of borehole information through a forward-looking borehole video; quantitative analysis of borehole data, such as the width and dip angle of a fracture, is unavailable. In this paper, we propose a new approach to create a whole borehole-wall cylindrical panorama from the borehole video acquired by AVBCS, which makes further analysis of borehole information possible. Firstly, based on the Otsu and region labeling algorithms, a borehole center location algorithm is proposed to extract the borehole center of each video image automatically. Afterwards, based on coordinate mapping (CM), a virtual coordinate graph (VCG) is designed for the unwrapping of the front-view borehole-wall image sequence, generating the corresponding unfolded image sequence while reducing the computational cost. Subsequently, based on the sum of absolute differences (SAD), a projection transformation SAD (PTSAD), which considers the gray-level similarity of candidate images, is proposed to match the unfolded image sequence. Finally, an image filtering module is introduced to filter the invalid frames, and the remaining frames are stitched into a complete cylindrical panorama. Experiments on two real-world borehole videos demonstrate that the proposed method can generate panoramic borehole-wall unfolded images from videos with satisfying visual quality for follow-up geological condition analysis. From the resulting image, borehole information, including rock mechanical properties, the distribution and width of fractures, fault distribution, and seam thickness, can be further obtained and analyzed.
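The abstract's borehole-center step relies on Otsu's method to separate the dark borehole opening from the brighter wall. A minimal, library-free sketch of Otsu's criterion (choosing the threshold that maximizes between-class variance over a 256-bin histogram) might look like this; the flat-list image and the toy intensities are illustrative assumptions:

```python
def otsu_threshold(gray):
    # gray: flat list of 8-bit intensities; returns the threshold that
    # maximizes between-class variance (Otsu's criterion).
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b = 0.0
    w_b = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]                  # background pixel count so far
        if w_b == 0:
            continue
        w_f = total - w_b               # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mu_b = sum_b / w_b
        mu_f = (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (mu_b - mu_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy image: dark borehole opening vs. bright borehole wall.
img = [20] * 500 + [200] * 500
t = otsu_threshold(img)
assert 20 <= t < 200
```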
3

Livingston, Merlin L. M., and Agnel L. G. X. Livingston. "Processing of Images and Videos for Extracting Text Information from Clustered Features Using Graph Wavelet Transform." Journal of Computational and Theoretical Nanoscience 16, no. 2 (February 1, 2019): 557–61. http://dx.doi.org/10.1166/jctn.2019.7768.

Abstract:
Image processing is an interesting domain for extracting knowledge from real-time video and images for the surveillance, automation, robotics, medical, and entertainment industries. The data obtained from videos and images are continuous and play a primary role in semantic video analysis, retrieval, and indexing. When images and videos are obtained from natural and random sources, they need to be processed for text identification, tracking, binarization, and recognition of meaningful information for subsequent actions. This proposal defines a solution based on the Spectral Graph Wavelet Transform (SGWT) for localizing and extracting text information from images and videos. A K-means clustering step precedes the SGWT process to group features in an image quantified by a hill-climbing algorithm. Precision, sensitivity, specificity, and accuracy are the four parameters that establish the efficiency of the proposed technique. Experiments are conducted on training sets from ICDAR and, for videos, from YVT.
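The K-means step that precedes the SGWT can be sketched for scalar features. This is a generic 1-D K-means, not the paper's exact feature pipeline; the feature values and the uniform center initialization are assumptions for illustration:

```python
def kmeans_1d(values, k, iters=100):
    # Tiny K-means for scalar features (e.g., per-region intensity
    # statistics); returns the converged cluster centers.
    lo, hi = min(values), max(values)
    centers = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for v in values:                     # assign to nearest center
            i = min(range(k), key=lambda c: abs(v - centers[c]))
            buckets[i].append(v)
        new = [sum(b) / len(b) if b else centers[i]
               for i, b in enumerate(buckets)]   # recompute means
        if new == centers:                   # converged
            break
        centers = new
    return centers

feats = [1.0, 1.2, 0.9, 5.0, 5.1, 4.9]       # two obvious groups
c = sorted(kmeans_1d(feats, 2))
assert abs(c[0] - 1.0333) < 0.01 and abs(c[1] - 5.0) < 0.01
```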
4

Guo, Jianbang, Peng Sun, and Sang-Bing Tsai. "A Study on the Optimization Simulation of Big Data Video Image Keyframes in Motion Models." Wireless Communications and Mobile Computing 2022 (March 16, 2022): 1–12. http://dx.doi.org/10.1155/2022/2508174.

Abstract:
In this paper, the signals of athletic sports video image frames are processed and studied using big data technology. Sports video image multiprocessing enables interference-free study and analysis of sports technique and can meet the multiple visual needs of sports technique analysis and evaluation through key capabilities such as split-screen synchronous comparison, superimposed synchronous comparison, and video trajectory tracking. Sports video image processing enables rapid extraction of the key technical parameters of the sports scene, panoramic mapping of sports video images, split-lane calibration, and the development of special video image analysis software that is innovative in athletics research. An image-blending approach is proposed to alleviate the imbalance between simple and complex background data while enhancing the generalization ability of networks trained on small-scale datasets. Local detail features of the target are introduced into the online tracking process by an efficient block-filter network. Moreover, online hard-sample learning is used to avoid interference from similar objects with the tracker, improving overall tracking performance. For feature extraction from blurred videos, this paper proposes a blur-kernel extraction scheme based on low-rank theory: the scheme fuses multiple blur kernels of keyframe images by low-rank decomposition and then deblurs the video. Next, a double-detection mechanism is used to detect tampering points in the blurred video frames. Finally, the video tampering points are located, and the specific mode of video tampering is determined. Experiments on two public video databases and on self-recorded videos show that the method is robust in blurred-video forgery detection and improves detection efficiency compared with traditional video forgery detection methods.
5

Aparna, RR. "Swarm Intelligence for Automatic Video Image Contrast Adjustment." International Journal of Rough Sets and Data Analysis 3, no. 3 (July 2016): 21–37. http://dx.doi.org/10.4018/ijrsda.2016070102.

Abstract:
Video surveillance has become an integral part of today's life. We are surrounded by video cameras in public places and organizations. Much useful information, such as face detection, traffic analysis, object classification, and crime analysis, can be obtained from the recorded videos. Image enhancement plays a vital role in extracting any useful information from images, and enhancing the video frames is a major step because it supports the further analysis of video sequences. This paper discusses automatic contrast adjustment in video frames. A new hybrid algorithm was developed using a spatial-domain method and the Artificial Bee Colony (ABC) algorithm, a swarm-intelligence-based technique, for image enhancement. The proposed algorithm was tested on traffic surveillance images and produced good results and better-quality pictures for varied levels of poor-quality video frames.
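As a baseline for the spatial-domain enhancement that the hybrid algorithm builds on, a plain linear contrast stretch maps a low-contrast frame's intensity range onto the full 0..255 range. In the paper, ABC would search for better enhancement parameters; the fixed stretch below is only the spatial-domain starting point, with toy pixel values:

```python
def contrast_stretch(gray, lo=0, hi=255):
    # Linear spatial-domain stretch: maps the frame's min..max onto lo..hi.
    gmin, gmax = min(gray), max(gray)
    if gmax == gmin:                 # flat frame: nothing to stretch
        return list(gray)
    scale = (hi - lo) / (gmax - gmin)
    return [round(lo + (v - gmin) * scale) for v in gray]

frame = [90, 100, 110, 120]          # low-contrast traffic frame, toy values
out = contrast_stretch(frame)
assert out == [0, 85, 170, 255]      # full dynamic range after stretching
```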
6

Kim, Jie-Hyun, Sang-Il Oh, So-Young Han, Ji-Soo Keum, Kyung-Nam Kim, Jae-Young Chun, Young-Hoon Youn, and Hyojin Park. "An Optimal Artificial Intelligence System for Real-Time Endoscopic Prediction of Invasion Depth in Early Gastric Cancer." Cancers 14, no. 23 (December 5, 2022): 6000. http://dx.doi.org/10.3390/cancers14236000.

Abstract:
We previously constructed a VGG-16-based artificial intelligence (AI) model (image classifier, IC) to predict invasion depth in early gastric cancer (EGC) using static endoscopic images. However, images cannot capture the spatio-temporal information available during real-time endoscopy, so the AI trained on static images could not estimate invasion depth accurately and reliably. Thus, we constructed a video classifier (VC) using videos for real-time depth prediction in EGC. We built the VC by attaching sequential layers to the last convolutional layer of IC v2, using video clips. We computed the standard deviation (SD) of output probabilities for each video clip, and the sensitivities at the frame level, to assess consistency. The sensitivity, specificity, and accuracy of IC v2 for static images were 82.5%, 82.9%, and 82.7%, respectively. However, for video clips, the sensitivity, specificity, and accuracy of IC v2 were 33.6%, 85.5%, and 56.6%, respectively. The VC analyzed the videos better, with a sensitivity of 82.3%, a specificity of 85.8%, and an accuracy of 83.7%. Furthermore, the mean SD was lower for the VC than for IC v2 (0.096 vs. 0.289). The AI model developed using videos can predict invasion depth in EGC more precisely and consistently than image-trained models and is more appropriate for real-world situations.
7

Xu, Longcheng, Deokhwan Choi, and Zeyun Yang. "Deep Neural Network-Based Sports Marketing Video Detection Research." Scientific Programming 2022 (March 19, 2022): 1–7. http://dx.doi.org/10.1155/2022/8148972.

Abstract:
With the rapid development of short video, sports marketing has diversified, and accurately detecting marketing videos has become more difficult. Identifying certain key images in a video is the focus of detection; analyzing them can then effectively detect sports marketing videos. Video key-image detection based on a deep neural network is proposed to solve the problem of unclear, unrecognizable key-image boundaries in multi-scene recognition. First, a key-image detection model with a feedback network is proposed, and ablation experiments are conducted on a simple test set of DAVSOD. The experimental results show that the proposed model achieves better performance in both quantitative evaluation and visual effects and can accurately capture the overall shape of salient objects. A hybrid loss function is also introduced to identify key-image boundaries, and the experimental results show that the proposed model outperforms or is comparable to current state-of-the-art video salient-object detection models in terms of quantitative evaluation and visual effects.
8

Guangyu, Han. "Analysis of Sports Video Intelligent Classification Technology Based on Neural Network Algorithm and Transfer Learning." Computational Intelligence and Neuroscience 2022 (March 24, 2022): 1–10. http://dx.doi.org/10.1155/2022/7474581.

Abstract:
With the rapid development of information technology, digital content shows explosive growth. Sports video classification is of great significance for archiving digital content on servers. Therefore, accurate classification of sports video categories is realized using deep neural networks (DNN), convolutional neural networks (CNN), and transfer learning. Block brightness comparison coding (BICC) and a block color histogram are proposed, reflecting the brightness relationship between different regions of a video and the color information within each region. The maximum mean discrepancy (MMD) algorithm is adopted to achieve transfer learning. After obtaining the features of sports video images, a sports video image classification method based on a deep learning coding model is adopted to realize sports video classification. The results show that, for different types of sports videos, the overall classification effect of this method is clearly better than that of other current sports video classification methods, greatly improving the classification of sports videos.
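The idea behind block brightness comparison coding can be sketched as comparing mean brightness between adjacent blocks and keeping only the comparison bits. The strip layout and bit convention below are illustrative assumptions, not the paper's exact BICC definition:

```python
def brightness_code(gray, w, blocks=4):
    # Split a flat grayscale frame (row-major, width w) into `blocks`
    # vertical strips, then encode pairwise mean-brightness comparisons
    # as bits: 1 if strip i is brighter than strip i+1, else 0.
    h = len(gray) // w
    strip_w = w // blocks
    means = []
    for b in range(blocks):
        vals = [gray[r * w + c]
                for r in range(h)
                for c in range(b * strip_w, (b + 1) * strip_w)]
        means.append(sum(vals) / len(vals))
    return [1 if means[i] > means[i + 1] else 0 for i in range(blocks - 1)]

# 4x4 toy frame whose strips get darker from left to right.
frame = [200, 150, 100, 50] * 4
assert brightness_code(frame, w=4) == [1, 1, 1]
```

The resulting bit string is invariant to global brightness shifts, which is the appeal of comparison-based codes for video features.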
9

Lokkondra, Chaitra Yuvaraj, Dinesh Ramegowda, Gopalakrishna Madigondanahalli Thimmaiah, Ajay Prakash Bassappa Vijaya, and Manjula Hebbaka Shivananjappa. "ETDR: An Exploratory View of Text Detection and Recognition in Images and Videos." Revue d'Intelligence Artificielle 35, no. 5 (October 31, 2021): 383–93. http://dx.doi.org/10.18280/ria.350504.

Abstract:
Images and videos with text content are a direct source of information. Today, there is a high need for image and video data that can be intelligently analyzed. A growing number of researchers are focusing on text identification, making it a hot issue in machine vision research. Accordingly, several real-time applications such as text detection, localization, and tracking have become prevalent in text analysis systems. This survey examines how text information may be extracted. It first presents a trustworthy dataset for text identification in images and videos. The second part details the numerous text formats found in both images and video. Third, it describes the process flow for extracting information from text and the existing machine learning and deep learning techniques used to train the models. Fourth, it explains the assessment measures used to validate the models. Finally, it surveys the uses and difficulties of text extraction across a wide range of fields. The difficulties center on the most frequent real-world challenges, such as capture techniques, lighting, and environmental conditions. Images and videos have evolved into valuable sources of data; the text inside them provides a massive quantity of facts and statistics, but such data is not easy to access. This exploratory view provides easier and more accurate mathematical modeling and evaluation techniques for retrieving the text in images and video into an accessible form.
10

Wang, Yao-Dong, Idaku Ishii, Takeshi Takaki, and Kenji Tajima. "An Intelligent High-Frame-Rate Video Logging System for Abnormal Behavior Analysis." Journal of Robotics and Mechatronics 23, no. 1 (February 20, 2011): 53–65. http://dx.doi.org/10.20965/jrm.2011.p0053.

Abstract:
This paper introduces a high-speed vision system called IDP Express, which can execute real-time image processing and High-Frame-Rate (HFR) video recording simultaneously. In IDP Express, 512×512-pixel images from two camera heads and the processed results on a dedicated FPGA (Field Programmable Gate Array) board are transferred to standard PC memory at a rate of 1000 fps or more. Owing to the simultaneous HFR video processing and recording, IDP Express can be used as an intelligent video logging system for long-term analysis of high-speed phenomena. In this paper, a real-time abnormal behavior detection algorithm was implemented on IDP Express to capture HFR videos of the crucial moments of unpredictable abnormal behaviors in high-speed periodic motions. Several experiments were performed on a high-speed slider machine with repetitive operation at a frequency of 15 Hz, and videos of the abnormal behaviors were automatically recorded to verify the effectiveness of our intelligent HFR video logging system.
11

Pipek, P., J. Jeleníková, and L. Sarnovský. "The use of video image analysis for fat content estimation." Czech Journal of Animal Science 49, no. 3 (December 12, 2011): 115–20. http://dx.doi.org/10.17221/4288-cjas.

Abstract:
The composition of selected cuts of cattle carcasses was determined in connection with the search for new methods of carcass classification. The content of adipose tissue and intramuscular fat in the cross-section of beef loin was estimated. A total of 79 samples was taken for investigation across a broad range of cattle categories. The classical extraction method in a Soxhlet extractor was compared with video image analysis, which measured the ratio of muscle to fat areas. The size and shape of the musculus longissimus lumborum et thoracis (MLLT) and its ratio in the loin cross-section were also estimated. A good correlation (r = 0.99, P < 0.05) between the two methods was found for the estimation of intramuscular fat in the MLLT. The correlation for the whole cross-section was influenced by connective tissue, which also appears as white areas, similar to adipose tissue, although its fat content is different.
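Once a threshold separating bright adipose pixels from darker muscle has been chosen, the muscle-to-fat area measurement reduces to a pixel-counting ratio. A minimal sketch, with an assumed threshold and toy cross-section values:

```python
def fat_area_ratio(gray, threshold):
    # Pixels above the threshold count as (white) adipose tissue, the
    # rest as (darker) muscle; returns the fat fraction of the section.
    fat = sum(1 for v in gray if v > threshold)
    return fat / len(gray)

# Toy loin cross-section: 70% dark muscle pixels, 30% bright fat pixels.
section = [30] * 70 + [220] * 30
assert abs(fat_area_ratio(section, 128) - 0.30) < 1e-9
```

As the abstract notes, connective tissue also appears white, so in practice such a ratio overestimates fat unless those regions are excluded first.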
12

Wang, Wei Hua. "The Design and Implementation of a Video Image Acquisition System Based on VFW." Applied Mechanics and Materials 380-384 (August 2013): 3787–90. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.3787.

Abstract:
With the development of computer, electronic, and communication technologies, digital image acquisition and processing are used in more and more computer and portable systems, such as videophones, digital cameras, digital television, video monitoring, camera phones, and video conferencing. Digitized images can be transmitted with high quality, which facilitates image retrieval, analysis, processing, and storage. In applications such as video conferencing, capturing video is a crucial prerequisite. In this paper, we therefore introduce video capture technology that exploits the VFW (Video for Windows) services library developed by Microsoft. Software based on VFW can directly capture digital videos, or digitize traditional analog videos and then clip them.
13

Yin, Xiao-lei, Dong-xue Liang, Lu Wang, Jing Qiu, Zhi-yun Yang, Jian-zeng Dong, and Zhao-yuan Ma. "Analysis of Coronary Angiography Video Interpolation Methods to Reduce X-ray Exposure Frequency Based on Deep Learning." Cardiovascular Innovations and Applications 6, no. 1 (September 1, 2021): 17–24. http://dx.doi.org/10.15212/cvia.2021.0011.

Abstract:
Cardiac coronary angiography is a major technique that assists physicians during interventional heart surgery. Under X-ray irradiation, the physician injects a contrast agent through a catheter and determines the coronary arteries’ state in real time. However, to obtain a more accurate state of the coronary arteries, physicians need to increase the frequency and intensity of X-ray exposure, which will inevitably increase the potential for harm to both the patient and the surgeon. In the work reported here, we use advanced deep learning algorithms to find a method of frame interpolation for coronary angiography videos that reduces the frequency of X-ray exposure by reducing the frame rate of the coronary angiography video, thereby reducing X-ray-induced damage to physicians. We established a new coronary angiography image group dataset containing 95,039 groups of images extracted from 31 videos. Each group includes three consecutive images, which are used to train the video interpolation network model. We apply six popular frame interpolation methods to this dataset to confirm that the video frame interpolation technology can reduce the video frame rate and reduce exposure of physicians to X-rays.
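For intuition on frame interpolation, the crudest possible interpolator is a per-pixel linear blend of the two neighboring frames. The deep models compared in the paper instead warp pixels along estimated motion; the blend below only illustrates how a synthesized middle frame can halve the required acquisition frame rate, with toy pixel values:

```python
def interpolate_frame(frame_a, frame_b, t=0.5):
    # Naive per-pixel linear blend between two flat grayscale frames.
    # Real interpolation networks warp along estimated motion instead.
    return [round((1 - t) * a + t * b) for a, b in zip(frame_a, frame_b)]

f0 = [10, 20, 30]                    # acquired frame at time 0
f2 = [30, 40, 50]                    # acquired frame at time 2
f1 = interpolate_frame(f0, f2)       # synthesized middle frame at time 1
assert f1 == [20, 30, 40]
```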
14

Li, Yuehong. "Research on Sports Video Image Analysis Based on the Fuzzy Clustering Algorithm." Wireless Communications and Mobile Computing 2021 (January 27, 2021): 1–8. http://dx.doi.org/10.1155/2021/6630130.

Abstract:
Aiming at the shortcomings of current sports video image segmentation methods, such as rough segmentation results and a high spatial distortion rate, a sports video image segmentation method based on a fuzzy clustering algorithm is proposed. A second-order fuzzy attribute with a normal distribution and gravity value is established using the time-domain difference image, and the membership function of the fuzzy attribute is given; the time-domain difference image is then fuzzy clustered, and the motion video image segmentation result is obtained by edge detection. Experimental results show that this method has high spatial accuracy, good noise-iteration performance, and a low spatial distortion rate, and can accurately segment complex moving video images to obtain high-definition images. Applying this video image analysis method will help reveal the rules of sports technique and the characteristics of healthy people's sports skills, and will help improve physical education, national fitness levels, and competitive sports.
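The core of fuzzy clustering, the soft-membership computation that distinguishes it from hard clustering, can be sketched in a few lines. This is the standard fuzzy c-means membership formula for scalar data with fuzzifier m, not the paper's full second-order-attribute pipeline:

```python
def fcm_memberships(x, centers, m=2.0):
    # Fuzzy c-means membership of sample x in each cluster:
    #   u_i = 1 / sum_j (d_i / d_j)^(2/(m-1)),  memberships sum to 1.
    d = [abs(x - c) for c in centers]
    if 0.0 in d:                        # sample coincides with a center
        return [1.0 if di == 0 else 0.0 for di in d]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / d[j]) ** p for j in range(len(centers)))
            for i in range(len(centers))]

# A pixel value three times closer to the first center gets 0.9 membership.
u = fcm_memberships(1.0, [0.0, 4.0])
assert abs(sum(u) - 1.0) < 1e-9 and abs(u[0] - 0.9) < 1e-9
```

Unlike hard K-means, every sample belongs partially to every cluster, which is what lets fuzzy segmentation handle ambiguous boundary pixels gracefully.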
15

Li, Xiao Ling. "Forest Fire Monitoring System Based on Image Analysis." Advanced Materials Research 267 (June 2011): 155–59. http://dx.doi.org/10.4028/www.scientific.net/amr.267.155.

Abstract:
Large forest fires are very hazardous and often cause great losses. However, owing to the vast coverage of forests and their environmental complexity, monitoring that relies on direct-contact sensor detection can hardly fulfill the monitoring function. In this paper, the author proposes first acquiring long-range images covering a larger dynamic range of the observed region, and then forming an embedded video monitoring system to realize dynamic monitoring. With dynamic image analysis and processing techniques, dynamic image alarms and control signals can be obtained, and the videos are transmitted over the Internet in real time. Experiments show that this system responds to fires quickly, in less than 20 ms, and that detection remains highly accurate even when the dynamic image area is less than 1% of the background area.
16

Smékal, O., P. Pipek, M. Miyahara, and J. Jeleníková. "Use of video image analysis for the evaluation of beef carcasses." Czech Journal of Food Sciences 23, no. 6 (November 15, 2011): 240–45. http://dx.doi.org/10.17221/3397-cjfs.

Abstract:
Video image analysis was used for the objective evaluation of beef carcass parts as an additional parameter of carcass classification. Pictures of cross-sections of the dorsal parts of beef carcasses, between the 8th and 9th vertebrae, were taken under industrial conditions and evaluated using LUCIA software. The areas of muscle and adipose tissue were thresholded in several ways (the whole loin area, the areas of individual muscles, and those of different loin sections), and the correlations between these areas were examined. The best correlations were found between large areas. Video image analysis proved to be a suitable method for the evaluation of selected carcass parts.
17

Ge, Shishun, and Chunhong Zhu. "Application of Motion Video Analysis System Based on SISA Neural Network in Sports Training." Advances in Multimedia 2022 (July 7, 2022): 1–11. http://dx.doi.org/10.1155/2022/3692902.

Abstract:
This paper studies motion video image classification and recognition: it extracts motion-target image features, designs the image classification process, and establishes a neural network image classification model to complete image recognition. To handle different viewing angles of the same element, motion video image classification and recognition under the neural network are completed using the error back-propagation algorithm. The performance of the proposed method is verified by simulation experiments. Experimental results show that the proposed method has a high recognition rate for moving video images, with an accuracy above 98%, and comprehensive image recognition and classification. The proposed method can classify the elements in a motion video image, overcoming traditional methods' inability to identify unclear images and their low recognition accuracy.
18

Yoo, Yeongsik, and Woo Sik Yoo. "Turning Image Sensors into Position and Time Sensitive Quantitative Colorimetric Data Sources with the Aid of Novel Image Processing/Analysis Software." Sensors 20, no. 22 (November 10, 2020): 6418. http://dx.doi.org/10.3390/s20226418.

Abstract:
Still images and video images acquired from image sensors are very valuable sources of information. From still images, position-sensitive, quantitative intensity, or colorimetric information can be obtained. Video images made of a time series of still images can provide time-dependent, quantitative intensity, or colorimetric information in addition to the position-sensitive information from a single still image. With the aid of novel image processing/analysis software, extraction of position- and time-sensitive quantitative colorimetric information was demonstrated from still image and video images of litmus test strips for pH tests of solutions. Visual inspection of the color change in the litmus test strips after chemical reaction with chemical solutions is typically exercised. Visual comparison of the color of the test solution with a standard color chart provides an approximate pH value to the nearest whole number. Accurate colorimetric quantification and dynamic analysis of chemical properties from captured still images and recorded video images of test solutions using novel image processing/analysis software are proposed with experimental feasibility study results towards value-added image sensor applications. Position- and time-sensitive quantitative colorimetric measurements and analysis examples are demonstrated.
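For a single region of interest, the colorimetric extraction step reduces to averaging the RGB values over the strip's pixels and comparing the result to a calibration chart. A minimal sketch with assumed toy pixel values:

```python
def mean_rgb(pixels):
    # pixels: list of (r, g, b) tuples from a region of interest on the
    # test strip; the mean color is then matched against a calibration
    # chart to obtain a quantitative pH estimate.
    n = len(pixels)
    return tuple(round(sum(p[i] for p in pixels) / n) for i in range(3))

# Three toy samples from a reddish litmus-strip region.
strip_roi = [(200, 40, 60), (198, 42, 58), (202, 38, 62)]
assert mean_rgb(strip_roi) == (200, 40, 60)
```

Applying the same measurement to every frame of a video turns the sensor into the time-resolved colorimetric data source the abstract describes.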
19

Stutte, G. W. "Analysis of Video Images Using an Interactive Image Capture and Analysis System." HortScience 25, no. 6 (June 1990): 695–97. http://dx.doi.org/10.21273/hortsci.25.6.695.

Abstract:
Interactive Image Capture and Analysis System (ICAS) provides for real-time capture of video images using an imaging board and software in a personal computer. Through the use of selective filters on the video input source, images of specific reflective wavelengths are obtained and then analyzed for intensity distribution using interactive software designed for scientific agriculture. Conversion of video cameras into two-dimensional near real-time visual and near infrared (NIR) spectral sensors through the use of filters provides information on the physiological status of the tissue following ICAS analysis. However, caution must be observed to minimize equipment-induced artifacts during image acquisition and analysis.
20

Huang, Shize, Liangliang Yu, Fan Zhang, Wei Zhu, and Qiyi Guo. "Cluster Analysis Based Arc Detection in Pantograph-Catenary System." Journal of Advanced Transportation 2018 (2018): 1–12. http://dx.doi.org/10.1155/2018/1329265.

Abstract:
The pantograph-catenary system, which ensures the transmission of electrical energy, is a critical component of a high-speed electric multiple unit (EMU) train. The pantograph-catenary arc directly affects the power supply quality. The Chinese Railway High-speed (CRH) is equipped with a 6C system to obtain pantograph videos. However, it is difficult to automatically identify the arc image information from the vast amount of videos. This paper proposes an effective approach with which pantograph video can be separated into continuous frame-by-frame images. Because of the interference from the complex operating environment, it is unreasonable to directly use the arc parameters to detect the arc. An environmental segmentation algorithm is developed to eliminate the interference. Time series in the same environment is analyzed via cluster analysis technique (CAT) to find the abnormal points and simplified arc model to find arc events accurately. The proposed approach is tested with real pantograph video and performs well.
21

Stidham, R., H. Yao, S. Bishu, M. Rice, J. Gryak, H. J. Wilkins, and K. Najarian. "P595 Feasibility and performance of a fully automated endoscopic disease severity grading tool for ulcerative colitis using unaltered multisite videos." Journal of Crohn's and Colitis 14, Supplement_1 (January 2020): S495—S496. http://dx.doi.org/10.1093/ecco-jcc/jjz203.723.

Abstract:
Background Endoscopic assessment is a core component of disease severity in ulcerative colitis (UC), but subjectivity threatens accuracy and reproducibility. We aimed to develop and test a fully automated video analysis system for endoscopic disease severity in UC. Methods A developmental dataset of local high-resolution UC colonoscopy videos was generated, with Mayo endoscopic scores (MES) provided by experienced local reviewers. Videos were converted into still image stacks and annotated for both sufficient image quality for scoring (informativeness) and MES grade (e.g. Mayo 0, 1, 2, 3). Convolutional neural networks (CNNs) were used to train models to predict still-image informativeness and disease severity grading with 5-fold cross-validation. Whole-video MES models were developed by matching reviewer MES scores with the proportion of predicted still-image scores within each video using a template-matching grid search. The automated whole-video MES workflow was tested on a separate endoscopic video set from an international multicenter UC clinical trial (LYC-30937-EC). Cohen's kappa coefficient with quadratic weighting was used to assess agreement. Results The developmental set included 51 high-resolution videos (Mayo 2,3 41.2%), and the multicenter clinical trial contributed 264 videos (Mayo 2,3 83.7%, p < .0001) from 157 subjects. In 34,810 frames, the still-image informativeness classifier had excellent performance, with an AUC of 0.961, sensitivity of 0.902, and specificity of 0.870. In high-resolution videos, agreement between reviewers and the fully automated MES was very good, with correct prediction of the exact MES in 78% (40/51, κ = 0.84, 95% CI 0.75–0.92) of videos (Figure 1). In external clinical trial videos where dual central review was performed, reviewers agreed on the exact MES in 82.8% (140/169) of videos (κ = 0.78, 95% CI 0.71–0.86). Automated MES grading of the clinical trial videos (often low resolution) correctly distinguished Mayo 0,1 vs. 2,3 in 83.7% (221/264) of videos. Agreement between the automated system and central reviewers on the exact MES occurred in 57.1% of videos (κ = 0.59, 95% CI 0.46–0.71), but improved to 69.5% when accounting for human reviewer disagreement. Automated MES was within one level of central scores in 93.5% (247/264) of videos. The automated process showed ordinal characteristics, predicting progressively increasing disease severity. Conclusion Though premature for immediate deployment, these early results support the feasibility of artificial intelligence approaching expert-level endoscopic disease grading in UC.
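The quadratic-weighted Cohen's kappa used for agreement in this study can be computed directly from two raters' ordinal labels. A minimal Python/NumPy sketch (the function name, toy scores, and class count are illustrative, not taken from the paper):

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_classes):
    """Cohen's kappa with quadratic weights for ordinal labels 0..n_classes-1."""
    observed = np.zeros((n_classes, n_classes))
    for a, b in zip(rater_a, rater_b):
        observed[a, b] += 1
    # Quadratic disagreement weights: 0 on the diagonal, growing with distance.
    idx = np.arange(n_classes)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    # Expected matrix under rater independence, scaled to the same total count.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Perfect agreement yields kappa = 1; systematic disagreement is negative.
scores_a = [0, 1, 2, 3, 0, 1]
print(quadratic_weighted_kappa(scores_a, scores_a, 4))  # → 1.0
```

Unlike unweighted kappa, the quadratic weighting penalizes a Mayo 0 vs. 3 disagreement far more than a 2 vs. 3 disagreement, which suits ordinal severity grades.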
APA, Harvard, Vancouver, ISO, and other styles
22

ZHAO, YUTAO, MING ZHANG, and YUNCAI LIU. "FLOW VIDEO SYNTHESIS FROM AN IMAGE." International Journal of Pattern Recognition and Artificial Intelligence 24, no. 03 (May 2010): 421–31. http://dx.doi.org/10.1142/s0218001410007981.

Full text
Abstract:
A new method of image-based flow analysis and synthesis from a still image is presented. We analyze the flowing parts of a still image to produce continuous video via image matting, image inpainting, projection of a flow field onto the image, and modulation. We construct 3D flow models and project them onto still images, and we propose 2D modulation methods that are more suitable and practical for our synthesis. Our technique can edit still images that contain flowing parts. Experiments show the method is effective.
APA, Harvard, Vancouver, ISO, and other styles
23

Imamverdiyev, Yadigar, and Firangiz Musayeva. "Analysis of generative adversarial networks." Problems of Information Technology 13, no. 1 (January 24, 2022): 20–27. http://dx.doi.org/10.25045/jpit.v13.i1.03.

Full text
Abstract:
Recently, much research has been done on the use of generative models in computer vision and image classification, and effective work has been achieved with generative adversarial networks in tasks such as video generation, music generation, image synthesis, and text-to-image conversion. Generative adversarial networks are artificial intelligence algorithms designed to solve the problems of generative modeling; the purpose of a generative model is to study a set of training patterns and learn their probability distribution. The article discusses generative adversarial networks, their types, problems, and advantages, and gives general information on applications such as classification and regression, medical image segmentation, music generation, image description, text-to-image conversion, and video generation. In addition, the analyzed generative adversarial network algorithms are compared on several criteria.
APA, Harvard, Vancouver, ISO, and other styles
24

Shapiro, Robert, Chris Blow, and Greg Rash. "Video Digitizing Analysis System." International Journal of Sport Biomechanics 3, no. 1 (February 1987): 80–86. http://dx.doi.org/10.1123/ijsb.3.1.80.

Full text
Abstract:
The use of video images in biomechanical analyses has become more realistic since the introduction of the shuttered video camera. Although recording rates are still limited to 60 Hz, exposure times can be reduced to prevent blurring in most situations. This paper presents a system for manually digitizing video images, a system that utilizes a video overlay board to place a set of cross hairs directly on a previously recorded or live video image. A cursor is used to move the cross hairs over required points. A BASIC program was written for an IBM PC-AT computer to accomplish this task. Video images of a known set of points were digitized, and calculated distances between points were compared to real distances. The mean of the observed errors was 0.79%. It was concluded that this digitizing system, within the limitations of video resolution, yielded digitizing errors similar in magnitude to those observed in cinematographic analyses.
APA, Harvard, Vancouver, ISO, and other styles
25

Ma, Jun. "Research on Sports Video Image Based on Clustering Extraction." Mathematical Problems in Engineering 2021 (June 12, 2021): 1–9. http://dx.doi.org/10.1155/2021/9996782.

Full text
Abstract:
Today, with sports events held continuously, major competitions are loved by large audiences, so analysis of game video data has high research and application value. This paper takes videos of volleyball, tennis, baseball, and water polo as the research background and analyses the video images of these four sports events. First, image graying, image denoising, and image binarization are used to preprocess the images of the four sports events. Second, feature points are used for detection: given the characteristics of these four sports, the SIFT algorithm is adopted because of the good performance of SIFT feature points in feature matching. Simulation experiments show that the SIFT algorithm can effectively detect football and has good anti-interference properties. For sports recognition, the paper adopts a frame cross-sectional cumulative algorithm. Simulation experiments show that the clustering algorithm can achieve a recognition rate of more than 80% for sporting events, so the recognition algorithm is suitable for recognizing sports event videos.
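The graying and binarization preprocessing steps described in this abstract can be sketched in a few lines of Python/NumPy. The paper does not state which thresholding rule it uses, so Otsu's method below is an assumption for illustration:

```python
import numpy as np

def to_gray(rgb):
    """Luma-weighted grayscale conversion (ITU-R BT.601 coefficients)."""
    return (rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114).astype(np.uint8)

def otsu_threshold(gray):
    """Pick the threshold maximizing between-class variance of the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))     # cumulative mean intensity
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.inf             # avoid division by zero at the ends
    sigma_b = (mu[-1] * omega - mu) ** 2 / denom
    return int(np.argmax(sigma_b))

# Binarize a synthetic two-level image: bright pixels come out True.
gray = np.array([[10, 10, 200], [200, 10, 200]], dtype=np.uint8)
t = otsu_threshold(gray)
binary = gray > t
```

Denoising (e.g. a median filter) would normally sit between the grayscale and threshold steps; it is omitted here to keep the sketch short.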
APA, Harvard, Vancouver, ISO, and other styles
26

Tosi, Sébastien, Lídia Bardia, Maria Jose Filgueira, Alexandre Calon, and Julien Colombelli. "LOBSTER: an environment to design bioimage analysis workflows for large and complex fluorescence microscopy data." Bioinformatics 36, no. 8 (December 20, 2019): 2634–35. http://dx.doi.org/10.1093/bioinformatics/btz945.

Full text
Abstract:
Summary Open source software such as ImageJ and CellProfiler greatly simplified the quantitative analysis of microscopy images but their applicability is limited by the size, dimensionality and complexity of the images under study. In contrast, software optimized for the needs of specific research projects can overcome these limitations, but they may be harder to find, set up and customize to different needs. Overall, the analysis of large, complex, microscopy images is hence still a critical bottleneck for many Life Scientists. We introduce LOBSTER (Little Objects Segmentation and Tracking Environment), an environment designed to help scientists design and customize image analysis workflows to accurately characterize biological objects from a broad range of fluorescence microscopy images, including large images exceeding workstation main memory. LOBSTER comes with a starting set of over 75 sample image analysis workflows and associated images stemming from state-of-the-art image-based research projects. Availability and implementation LOBSTER requires MATLAB (version ≥ 2015a), MATLAB Image processing toolbox, and MATLAB statistics and machine learning toolbox. Code source, online tutorials, video demonstrations, documentation and sample images are freely available from: https://sebastients.github.io. Supplementary information Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
27

Kiy, K. I., and D. A. Anokhin. "A NEW TECHNIQUE FOR OBJECT DETECTION AND TRACKING AND ITS APPLICATION TO ANALYSIS OF ROAD SCENE." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIV-2/W1-2021 (April 15, 2021): 119–24. http://dx.doi.org/10.5194/isprs-archives-xliv-2-w1-2021-119-2021.

Full text
Abstract:
In this paper, a new technique for real-time object detection and tracking is presented. This technique is based on the geometrized histograms method (GHM) for segmenting and describing color images (frames of video sequences) and on the facilities for global image analysis provided by this method. Basic elements of the technique that make it possible to solve image understanding problems almost without using the pixel arrays of images are introduced and discussed. A real-time parallel software implementation of the developed technique is briefly discussed. The technique is applied to problems of road scene analysis, including finding small contrast objects in images, such as traffic signals and the signal zones of vehicles, as well as detecting other vehicles in the frame. The results of processing different frames of road scene videos are presented and discussed.
APA, Harvard, Vancouver, ISO, and other styles
28

Li, Yixuan. "A Study on the Success of Short Videos Themed on Beijing Cuisines and Its Influence on the City’s Image." SHS Web of Conferences 155 (2023): 02010. http://dx.doi.org/10.1051/shsconf/202315502010.

Full text
Abstract:
With the boom of the short-video industry in recent years, cities with tremendous internet influence have witnessed rapid growth attributable to the rise of short-video platforms, where cuisine-themed short videos have become a new channel for communicating a city's image. On one of these platforms, TikTok, there is a huge gap in communication performance among cuisine-themed short videos of different kinds. Using content analysis, this paper quantitatively analyzes the content characteristics, food types, city image, video length, and language characteristics of viral short videos themed on Beijing cuisines, based on coded information extracted from the 30 most-liked related short videos published on TikTok, so as to investigate their commonalities and shared features. An in-depth analysis of specific cases is provided to offer suggestions for the production and communication of similar videos and to provide references for constructing and communicating the image of corresponding cities.
APA, Harvard, Vancouver, ISO, and other styles
29

Sunil Kumar, V., Vedashree C.R, and Sowmyashree S. "IMAGE SENTIMENTAL ANALYSIS: AN OVERVIEW." International Journal of Advanced Research 10, no. 03 (March 31, 2022): 361–70. http://dx.doi.org/10.21474/ijar01/14398.

Full text
Abstract:
Visual content, such as photographs and video, contains not only objects, locations, and events, but also emotional and sentimental cues. On social networking sites, images are the simplest way for people to communicate their emotions, and images and videos are increasingly being used by social media users to express their ideas and share their experiences. Sentiment analysis of such large-scale visual content can help extract user sentiment toward events or themes, such as those in image tweets, so that sentiment prediction from visual content can be used in conjunction with sentiment analysis of written content. Although this topic is relatively new, a wide range of strategies for various data sources and challenges have been developed, resulting in a substantial body of research. This paper introduces the area of image sentiment analysis and examines the issues it raises. A description of new obstacles is also included, along with an assessment of progress toward more sophisticated systems and related practical applications, and a summary of the study's findings.
APA, Harvard, Vancouver, ISO, and other styles
30

Shin, Bumshick, and KyuHan Kim. "Analysis of Wave-Induced Current Using Digital Image Correlation Techniques." Journal of Sensors 2018 (2018): 1–6. http://dx.doi.org/10.1155/2018/1784507.

Full text
Abstract:
Recently, advances in digital image techniques and communications technology have enabled the application of existing imagery for scientific purposes. Furthermore, both quantitative and qualitative analyses of images have become possible through image processing such as transmission/storage of digital image data and image rectification. In this study, a coast with characteristics representative of the east coast of Korea, where erosion in winter and sedimentation in summer occur repeatedly, was selected. A three-dimensional hydraulic model test was conducted, and its outcomes were analyzed with a digital image correlation technique in order to understand the wave-induced current affecting sediment transport. Images filmed by a high-sensitivity, high-resolution video camera were converted into still images at regular intervals, and those converted images were used to resolve flow and velocity in digital coordinates. The outcomes can serve as a very useful analysis method for understanding the generation mechanism and movement routes of longshore currents and rip currents.
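Displacement between two such still frames can be estimated by a correlation technique. The paper does not specify its exact DIC algorithm, so the following Python/NumPy sketch uses FFT-based phase correlation (integer-pixel shifts only) purely for illustration:

```python
import numpy as np

def phase_correlation(frame_b, frame_a):
    """Estimate the integer (row, col) shift that takes frame_a onto frame_b."""
    fa = np.fft.fft2(frame_a)
    fb = np.fft.fft2(frame_b)
    cross = fb * np.conj(fa)
    cross /= np.abs(cross) + 1e-12          # keep only the phase difference
    corr = np.fft.ifft2(cross).real         # impulse at the shift location
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map large indices to negative shifts (FFT wrap-around convention).
    if dy > frame_a.shape[0] // 2:
        dy -= frame_a.shape[0]
    if dx > frame_a.shape[1] // 2:
        dx -= frame_a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, (3, 5), axis=(0, 1))         # frame displaced by 3 rows, 5 cols
print(phase_correlation(b, a))              # → (3, 5)
```

Real DIC implementations refine this to sub-pixel accuracy (e.g. by interpolating around the correlation peak) and operate on local interrogation windows rather than whole frames.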
APA, Harvard, Vancouver, ISO, and other styles
31

Nääs, Irenilza de A., Marcus Laganá, Mario Mollo Neto, Simone Canuto, and Danilo F. Pereira. "Image analysis for assessing broiler breeder behavior response to thermal environment." Engenharia Agrícola 32, no. 4 (August 2012): 624–32. http://dx.doi.org/10.1590/s0100-69162012000400001.

Full text
Abstract:
The research proposes a methodology for assessing broiler breeder response to changes in the rearing thermal environment. Continuous video recording of the analyzed flock may offer compelling evidence of thermal comfort, as well as other indications of welfare. An algorithm for classifying specific broiler breeder behavior was developed. Videos were recorded over three boxes where 30 breeders were reared. The boxes were mounted inside an environmental chamber where ambient temperature varied from cold to hot. Digital images were processed based on the number of pixels, according to their light intensity variation and binary contrast, allowing a sequence of behaviors related to welfare. The system used the default x, y coordinates, where x represents the horizontal distance from the top left of the work area to the point P, and y is the vertical distance. The video images were observed, and a grid was developed for identifying the area the birds stayed in and the time they spent at that place. The sequence was analyzed frame by frame, confronting the data with specific adopted thermoneutral rearing standards. The grid mask overlapped the real bird image. The resulting image allows the visualization of clusters, as birds in a flock behave in certain patterns. An algorithm indicating the breeder response to the thermal environment was developed.
APA, Harvard, Vancouver, ISO, and other styles
32

Fan, Kai, and Xiaoye Gu. "Image Quality Evaluation of Sanda Sports Video Based on BP Neural Network Perception." Computational Intelligence and Neuroscience 2021 (October 27, 2021): 1–8. http://dx.doi.org/10.1155/2021/5904400.

Full text
Abstract:
Specialized sports cameras produce shots composed of multiple frames, and a single frame cut from a shot may be unclear; the definition of video screenshots depends on the quality of the video, so clear screenshots require clear video. The purpose of this paper is to analyze and evaluate the quality of sports video images. Through semantic analysis and programmatic processing of the video, the video images are matched with the data model constructed in this research, forming real-time analysis of sports video images and thereby achieving real-time analysis of sports techniques and tactics. In view of the defects of rough image segmentation and high spatial distortion rates in current sports video image evaluation methods, this paper proposes a sports video image evaluation method based on BP neural network perception. The results show that the optimized algorithm overcomes the slow weight convergence of the traditional algorithm and the oscillation in error convergence of the variable-step-size algorithm. The optimized algorithm significantly reduces the learning error of the neural network and the overall error of network quality classification, and greatly improves evaluation accuracy. The Sanda sports video image quality evaluation method based on back-propagation (BP) neural network perception has high spatial accuracy, good noise iteration performance, and a low spatial distortion rate, so it can accurately evaluate Sanda sports video image quality.
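The back-propagation training loop underlying such a BP network can be sketched compactly. The tiny NumPy example below shows the weight-update mechanics; the architecture, learning rate, and the XOR toy task are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR toy target

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)    # 2 inputs -> 4 hidden units
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)    # 4 hidden -> 1 output

losses = []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                      # forward pass
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: propagate the MSE error through the sigmoid layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)
```

The "slow weight convergence" and "oscillation" the abstract mentions are exactly the failure modes of this plain gradient step; common remedies include momentum terms or adaptive learning rates.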
APA, Harvard, Vancouver, ISO, and other styles
33

Chen, Yingju, and Jeongkyu Lee. "A Review of Machine-Vision-Based Analysis of Wireless Capsule Endoscopy Video." Diagnostic and Therapeutic Endoscopy 2012 (November 13, 2012): 1–9. http://dx.doi.org/10.1155/2012/418037.

Full text
Abstract:
Wireless capsule endoscopy (WCE) enables a physician to diagnose a patient's digestive system without surgical procedures. However, it takes 1-2 hours for a gastroenterologist to examine the video. To speed up the review process, a number of analysis techniques based on machine vision have been proposed by computer science researchers. In order to train a machine to understand the semantics of an image, the image contents need to be translated into numerical form first. The numerical form of the image is known as image abstraction. The process of selecting relevant image features is often determined by the modality of medical images and the nature of the diagnoses. For example, there are radiographic projection-based images (e.g., X-rays and PET scans), tomography-based images (e.g., MRT and CT scans), and photography-based images (e.g., endoscopy, dermatology, and microscopic histology). Each modality imposes unique image-dependent restrictions for automatic and medically meaningful image abstraction processes. In this paper, we review the current development of machine-vision-based analysis of WCE video, focusing on the research that identifies specific gastrointestinal (GI) pathology and methods of shot boundary detection.
APA, Harvard, Vancouver, ISO, and other styles
34

Rastogi, Rohit, Abhinav Tyagi, Himanshu Upadhyay, and Devendra Singh. "Algorithmic Analysis of Automatic Attendance System Using Facial Recognition." International Journal of Decision Support System Technology 14, no. 1 (January 2022): 1–19. http://dx.doi.org/10.4018/ijdsst.286688.

Full text
Abstract:
Attendance management can become a tedious task for teachers if performed manually. This problem can be solved with the help of an automatic attendance management system, but validation is one of the system's main issues. Generally, biometrics are used in smart automatic attendance systems; managing attendance with the help of face recognition is one of the biometric methods with better efficiency compared to others. Smart attendance via instant face recognition is a real-life solution that helps in handling daily activities and maintaining a student attendance system. A face recognition-based attendance system uses face biometrics, based on high-resolution video and other technologies, to recognize students' faces. In this project, the system finds and recognizes human faces quickly and accurately in images or videos captured through a surveillance camera, converting the video frames into images so that the system can easily search for each image in the attendance database.
APA, Harvard, Vancouver, ISO, and other styles
35

A, Dela Fransiska, and Ike Janita Dewi. "PENGARUH KARAKTERISTIK PSIKOGRAFIS KONSUMEN, SIKAP TERHADAP VIDEO TUTORIAL MAKE-UP, DAN CITRA MERK PADA MINAT BELI." EXERO : Journal of Research in Business and Economics 1, no. 1 (November 30, 2018): 22–43. http://dx.doi.org/10.24071/exero.v1i1.1660.

Full text
Abstract:
This research aims to examine the influence of consumer psychographic characteristics (purchase experience, consumer innovativeness, vanity seeking, and variety-seeking behavior) on attitudes towards makeup tutorial videos, the influence of attitudes towards makeup tutorial videos on brand image, and the influence of brand image on purchase interest. Data were collected through a survey of 300 female respondents aged 15-45 in the Yogyakarta Special Region and Central Java, and were analyzed using linear regression analysis. The results showed that purchase experience, vanity-seeking behavior, and variety-seeking behavior positively affect attitudes towards makeup tutorial videos, while consumer innovativeness does not. Attitudes towards makeup tutorial videos positively affect brand image, and brand image positively affects purchase intention.
APA, Harvard, Vancouver, ISO, and other styles
36

Peng, Bo, Hanbo Zhang, Ni Yang, and Jiming Xie. "Vehicle Recognition from Unmanned Aerial Vehicle Videos Based on Fusion of Target Pre-Detection and Deep Learning." Sustainability 14, no. 13 (June 29, 2022): 7912. http://dx.doi.org/10.3390/su14137912.

Full text
Abstract:
For accurate and effective automatic vehicle identification, morphological detection and deep convolutional networks were combined to propose a method for locating and identifying vehicle models from unmanned aerial vehicle (UAV) videos. First, the region of interest of the video frame image was sketched and grey-scale processing was performed; sub-pixel-level skeleton images were generated based on the Canny edge detection results of the region of interest; then, the image skeletons were decomposed and reconstructed. Second, a combination of morphological operations and connected domain morphological features were applied for vehicle target recognition, and a deep learning image benchmark library containing 244,520 UAV video vehicle samples was constructed. Third, we improved the AlexNet model by adding convolutional layers, pooling layers, and adjusting network parameters, which we named AlexNet*. Finally, a vehicle recognition method was established based on a candidate target extraction algorithm with AlexNet*. The validation analysis revealed that AlexNet* achieved a mean F1 of 85.51% for image classification, outperforming AlexNet (82.54%), LeNet (63.88%), CaffeNet (46.64%), VGG16 (16.67%), and GoogLeNet (14.38%). The mean values of Pcor, Pre, and Pmiss for cars and buses reached 94.63%, 6.87%, and 4.40%, respectively, proving that this method can effectively identify UAV video targets.
APA, Harvard, Vancouver, ISO, and other styles
37

Et. al., Ria Ambrocio Sagum, MCS. "Incorporating Deblurring Techniques in Multiple Recognition of License Plates from Video Sequences." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (April 10, 2021): 5447–52. http://dx.doi.org/10.17762/turcomat.v12i3.2194.

Full text
Abstract:
License plate recognition is the process wherein photographic video or images of license plates are captured and then processed by an application implementing a series of algorithms that produce an alphanumeric conversion of the captured data. In this study, the researchers developed a license plate recognition system that incorporates image deblurring to accommodate multiple recognitions from video sequences. The approach uses background subtraction and connected-component analysis to detect license plates, image deblurring to enhance the image and reduce recognition difficulties, and an LBP cascade classifier to recognize characters. Since multiple detection of license plates introduces difficulties such as motion blur and camera viewing angle, the approach attempts to minimize the effects of these problems while remaining applicable to multiple detection. Thirty videos of actual traffic, three minutes each, were recorded at the footbridge of UP Ayala Technohub, Commonwealth Ave., Quezon City, Philippines, and 10 of these videos were used as input for testing the system. Plate-detection accuracy, computed with the F-measure, was 87.32% both with and without image deblurring, while character-recognition accuracy was 62.66% with image deblurring and 48.25% without. The results show a significant difference in license plate recognition accuracy between the system with image deblurring and the one without.
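The background-subtraction stage can be illustrated with a median-background model. A small NumPy sketch follows; the median model and the threshold value are assumptions for illustration, since the paper does not state which background model it uses:

```python
import numpy as np

def foreground_mask(frames, current, thresh=30):
    """Median background over a frame stack, then threshold the difference."""
    background = np.median(frames, axis=0)
    return np.abs(current.astype(float) - background) > thresh

# Synthetic example: static dark background, one bright "plate" region appears.
frames = np.zeros((10, 20, 20), dtype=np.uint8)     # history of empty frames
current = frames[0].copy()
current[5:9, 4:14] = 220                            # bright moving rectangle
mask = foreground_mask(frames, current)
```

A connected-component pass (e.g. labeling the `True` pixels of `mask` and filtering by area and aspect ratio) would then isolate plate-shaped candidate regions, as the abstract describes.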
APA, Harvard, Vancouver, ISO, and other styles
38

Junaid, Muhammad, Luqman Shah, Ali Imran Jehangiri, Fahad Ali Khan, Yousaf Saeed, Mehmood Ahmed, and Muhammad Naeem. "Recognition of Images Formed in Pho on the Eyes of different Subjects." Revista Gestão Inovação e Tecnologias 11, no. 4 (July 22, 2021): 3023–29. http://dx.doi.org/10.47059/revistageintec.v11i4.2335.

Full text
Abstract:
With each passing day, the resolutions of still-image and video cameras are rising. This improvement in resolution has the potential to extract useful information about the view opposite the photographed subjects from their reflective parts; especially significant is the idea of capturing images formed on the eyes of photographed people and animals. The motivation behind this research is to explore the forensic importance of such images and videos, in particular analyzing reflections of the scene behind the camera. This analysis may include extraction, detection, and recognition of objects in front of the subjects but behind the camera. In the national context, such videos and photographs are not rare; specifically, an abductee's video footage at good resolution may give important clues to the identity of the kidnapper. Our aim is to extract visual information formed in human eyes from still images as well as video clips, and then to recognize the extracted information. Initially, our experiments are limited to character extraction and recognition, including computerized characters of different styles and font sizes as well as handwriting. Although a variety of Optical Character Recognition (OCR) tools are available for character extraction and recognition, they only provide results for clear (zoomed) images.
APA, Harvard, Vancouver, ISO, and other styles
39

Thinh, Bui Van, Tran Anh Tuan, Ngo Quoc Viet, and Pham The Bao. "Content based video retrieval system using principal object analysis." Tạp chí Khoa học 14, no. 9 (September 20, 2019): 24. http://dx.doi.org/10.54607/hcmue.js.14.9.291(2017).

Full text
Abstract:
Video retrieval is the problem of searching videos or clips based on content related to an input image or video. It remains challenging due to the diversity of video types, frame transitions, and camera positions; selecting an appropriate similarity measure for the problem is also an open question. We propose a content-based video retrieval system whose main steps yield good performance. From a source video, we extract keyframes and principal objects using the Segmentation of Aggregating Superpixels (SAS) algorithm. After that, Speeded-Up Robust Features (SURF) are extracted from those principal objects. Then the bag-of-words model, together with SVM classification, is applied to obtain the retrieval result. Our system is evaluated on over 300 diverse videos spanning music, history, movies, sports, natural scenes, and TV shows.
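The bag-of-words step amounts to quantizing local descriptors against a visual codebook and counting. A NumPy sketch follows; the codebook and descriptors are synthetic toy values, whereas a real system would cluster SURF descriptors (e.g. with k-means) to build the codebook:

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each descriptor to its nearest visual word; return normalized counts."""
    # Pairwise squared distances, shape (n_descriptors, n_words).
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [10.0, 10.0]])     # two toy visual words
descs = np.array([[0.1, -0.2], [9.8, 10.1], [10.2, 9.9]])
print(bow_histogram(descs, codebook))               # → [0.33333333 0.66666667]
```

The resulting fixed-length histograms are what the SVM classifier in the pipeline would consume, one per keyframe or principal object.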
APA, Harvard, Vancouver, ISO, and other styles
40

Umar, Rusydi, Abdu Fadlil, and Alfiansyah Imanda Putra. "Analisis Forensics Untuk Mendeteksi Pemalsuan Video." J-SAKTI (Jurnal Sains Komputer dan Informatika) 3, no. 2 (September 13, 2019): 193. http://dx.doi.org/10.30645/j-sakti.v3i2.140.

Full text
Abstract:
Current technology shows how easily crimes can be committed using computer science in the field of video editing: more and more video editing software appears over time and becomes increasingly easy to use, and this technology is widely misused by video creators to manipulate hoax videos that cause disputes, so many untrustworthy videos circulate among the public. Counterfeiting is an act of modifying documents, products, images, or videos, among other media. Forensic video analysis is a scientific method that aims to obtain evidence and facts in determining the authenticity of a video, and it forms the basis of this research on detecting video falsification. This study uses analysis with two forensic tools, Forevid and VideoCleaner. The result of this study is the detection of differences in metadata, hash, and contrast between original and manipulated videos.
APA, Harvard, Vancouver, ISO, and other styles
41

Perera, Asanka G., Fatema-Tuz-Zohra Khanam, Ali Al-Naji, and Javaan Chahl. "Detection and Localisation of Life Signs from the Air Using Image Registration and Spatio-Temporal Filtering." Remote Sensing 12, no. 3 (February 9, 2020): 577. http://dx.doi.org/10.3390/rs12030577.

Full text
Abstract:
In search and rescue operations, it is crucial to rapidly identify those people who are alive from those who are not. If this information is known, emergency teams can prioritize their operations to save more lives. However, in some natural disasters the people may be lying on the ground covered with dust, debris, or ashes making them difficult to detect by video analysis that is tuned to human shapes. We present a novel method to estimate the locations of people from aerial video using image and signal processing designed to detect breathing movements. We have shown that this method can successfully detect clearly visible people and people who are fully occluded by debris. First, the aerial videos were stabilized using the key points of adjacent image frames. Next, the stabilized video was decomposed into tile videos and the temporal frequency bands of interest were motion magnified while the other frequencies were suppressed. Image differencing and temporal filtering were performed on each tile video to detect potential breathing signals. Finally, the detected frequencies were remapped to the image frame creating a life signs map that indicates possible human locations. The proposed method was validated with both aerial and ground recorded videos in a controlled environment. Based on the dataset, the results showed good reliability for aerial videos and no errors for ground recorded videos where the average precision measures for aerial videos and ground recorded videos were 0.913 and 1 respectively.
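The temporal filtering stage described above reduces to band-limiting each tile's intensity time series and looking for a dominant frequency. A NumPy sketch follows; the sampling rate, band limits, and synthetic signal are illustrative assumptions rather than the paper's actual parameters:

```python
import numpy as np

def dominant_frequency(signal, fs, f_lo=0.1, f_hi=0.8):
    """Return the strongest spectral peak within [f_lo, f_hi] Hz (breathing band)."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)      # suppress out-of-band energy
    if not band.any():
        return None
    return float(freqs[band][np.argmax(spectrum[band])])

# Synthetic tile signal: 0.3 Hz "breathing" motion sampled at 10 fps for 60 s,
# plus a faster out-of-band disturbance that the band mask rejects.
fs = 10.0
t = np.arange(0, 60, 1 / fs)
signal = 0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.sin(2 * np.pi * 3.0 * t)
print(dominant_frequency(signal, fs))             # ≈ 0.3
```

Tiles whose dominant in-band peak exceeds a significance threshold would then be remapped to image coordinates to build the life-signs map the abstract describes.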
APA, Harvard, Vancouver, ISO, and other styles
42

Li, Zhi, and Huijie Zhu. "Online Image and Self-Presentation: A Study on Chinese Rural Female Vloggers." Online Media and Global Communication 1, no. 2 (June 1, 2022): 387–409. http://dx.doi.org/10.1515/omgc-2022-0015.

Full text
Abstract:
Purpose The purpose of this empirical study is to examine how Chinese rural female vloggers present and express themselves online as they are increasingly exposed to the Internet, to analyze the motivations of their self-expression, and to ask whether their media images on the Internet correctly reflect the real life of rural women in China. Design/methodology/approach We conducted content analysis and case studies on 30 rural female vloggers and 2,580 videos selected from Xi Gua Video, one of the most popular video platforms in China. The English name of Xi Gua Video is Watermelon Video; it belongs to the same company as Douyin (the Chinese version of TikTok) and has a large number of creators. There is a Rural category in Xi Gua Video, which shows the topic's importance to the platform. Findings Married women are the mainstay of rural female vloggers, and they tend to use distinctive username settings. Among avatars, a confident smile is the mainstream. The themes of the videos mainly center on rural family life. Based on the analysis of the video themes, we speculate that whatever techniques rural women use for video shooting, their image formation is still subject to the male gaze and the patriarchal social order. In general, they are still disciplined, though their self-awareness has already begun to awaken. Research implications The research unveils the gendered frame behind rural-related short videos in contemporary China and can help rural female vloggers identify themselves. Practical implications Internet policymakers can use this research to guide the dissemination of rural women's short videos, thereby improving the status and lives of rural women. Originality/value This is an empirical study examining rural Chinese female vloggers' attitudes and cognitive competence based on feminist theory.
APA, Harvard, Vancouver, ISO, and other styles
43

Hao, Luoying, Yan Hu, Risa Higashita, James J. Q. Yu, Ce Zheng, and Jiang Liu. "Multiview Volume and Temporal Difference Network for Angle-Closure Glaucoma Screening from AS-OCT Videos." Journal of Healthcare Engineering 2022 (April 7, 2022): 1–9. http://dx.doi.org/10.1155/2022/2722608.

Full text
Abstract:
Background. Precise and comprehensive characterizations from anterior segment optical coherence tomography (AS-OCT) are of great importance in facilitating the diagnosis of angle-closure glaucoma. Existing automated analysis methods focus on structural properties identified from a single AS-OCT image, which is insufficient for comprehensively representing the status of the anterior chamber angle (ACA). Dynamic iris change is an evidenced risk factor in primary angle-closure glaucoma. Method. In this work, we focus on detecting ACA status from AS-OCT videos, which are captured in a dark-bright-dark changing environment. We first propose a multiview volume and temporal difference network (MT-net). Our method integrates spatial structural information from multiple views of AS-OCT videos and simultaneously utilizes the temporal dynamics of iris regions based on image differences. Moreover, to reduce the video jitter caused by eye movement, we employ preprocessing to align the corneal part between video frames. The regions of interest (ROIs) in appearance and dynamics are also automatically detected to intensify the related informative features. Results. We employ two AS-OCT video datasets captured by two different devices, comprising a total of 342 AS-OCT videos, to evaluate performance. For the Casia dataset, the classification accuracy of our MT-net is 0.866 with a sensitivity of 0.857 and a specificity of 0.875, clearly outperforming algorithms based on single AS-OCT images. For the Zeiss AS-OCT video dataset, our method also outperforms image-based methods, with a classification accuracy of 0.833, a sensitivity of 0.860, and a specificity of 0.800. Conclusions. AS-OCT videos captured under changing illumination can be a comprehensive means for angle-closure classification. The effectiveness of the proposed MT-net is demonstrated on two datasets from different manufacturers.
APA, Harvard, Vancouver, ISO, and other styles
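The image-difference cue that MT-net builds on can be illustrated with a minimal sketch (plain NumPy; the function names are illustrative, not from the paper):

```python
import numpy as np

def temporal_differences(frames):
    """Absolute inter-frame differences: a simple stand-in for the
    temporal-dynamics feature extracted from iris regions."""
    return [np.abs(cur.astype(np.int16) - prev.astype(np.int16)).astype(np.uint8)
            for prev, cur in zip(frames, frames[1:])]

def motion_energy(diffs):
    """Mean difference magnitude per frame pair; peaks indicate rapid
    change, e.g. the iris reacting to a dark-bright-dark transition."""
    return [float(d.mean()) for d in diffs]
```

In the paper's setting, frame pairs with high motion energy would mark the illumination transitions where the dynamic iris response is most informative.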
44

Mufford, Justin T., David J. Hill, Nancy J. Flood, and John S. Church. "Use of unmanned aerial vehicles (UAVs) and photogrammetric image analysis to quantify spatial proximity in beef cattle." Journal of Unmanned Vehicle Systems 7, no. 3 (September 1, 2019): 194–206. http://dx.doi.org/10.1139/juvs-2018-0025.

Full text
Abstract:
Spatial proximity is an important metric in cattle behaviour, used to study social structure, dyadic relationships, and grazing and maternal behaviours. We developed an efficient, novel, non-invasive method to quantify the spatial proximity of beef cattle using UAV-based image acquisition and photogrammetric analysis. Orthomosaics constructed from UAV-acquired images were used to measure, with an accuracy of ±1.96 m (95% likelihood), the inter-individual distances between cows and calves. Aerial videos of the calves and their dams, held in a 5 ha pasture, were made over four days using UAVs. We used two UAVs to video-capture the following: (i) the location of all individuals (UAV flown at 100 m) and (ii) the identity of cow–calf pairs (UAV flown at 15–30 m). Still images extracted from the UAV-acquired videos were used to produce orthomosaics. The orthomosaics captured all the cows and calves in a single image, from which we measured the distance between related and non-related cow–calf pairs. This UAV-based orthomosaic method clearly showed that members of related pairs were closer than non-related ones, and that the distance was greater in the evening, demonstrating the utility of UAVs for accurately measuring cattle spatial proximity.
APA, Harvard, Vancouver, ISO, and other styles
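The core distance computation behind such an orthomosaic method can be sketched as follows (assuming a uniform ground sampling distance across the mosaic; function and variable names are my own, not from the paper):

```python
import math

def pixel_to_ground(px, py, gsd_m_per_px):
    """Convert orthomosaic pixel coordinates to ground coordinates in
    metres, assuming a uniform ground sampling distance (GSD)."""
    return px * gsd_m_per_px, py * gsd_m_per_px

def inter_individual_distance(p1, p2, gsd_m_per_px):
    """Euclidean ground distance between two animals located at pixel
    positions p1 and p2 on the same orthomosaic."""
    x1, y1 = pixel_to_ground(*p1, gsd_m_per_px)
    x2, y2 = pixel_to_ground(*p2, gsd_m_per_px)
    return math.hypot(x2 - x1, y2 - y1)
```

For example, two animals 500 px apart on a mosaic with a 5 cm/px GSD would be 25 m apart on the ground, well within the ±1.96 m accuracy the authors report.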
45

Rawlins, D. J. "Video Microscopy." Microscopy Today 3, no. 1 (February 1995): 8–9. http://dx.doi.org/10.1017/s1551929500062180.

Full text
Abstract:
There has been a tremendous expansion in the use of video in microscopy in recent years. This is probably not due to the most obvious advantage of video, that of real-time imaging of moving events, but to its use in image analysis systems. The cheapness of PCs and the associated hardware and software to store and manipulate images, coupled with the availability of low-cost CCD cameras, has made the analysis of microscope images commonplace. This article briefly outlines the equipment used and the sorts of analysis that can be performed. The standard text on video is that of Inoue; a more recent review is by Weiss et al., while the major video equipment manufacturers will also provide literature and advice on the more up-to-date equipment.
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Wanlian, Zhang Chen, and Wenjie Guo. "Visual Representation of Tourism Image in Short Video: Comparison between Agency-Generated Video and User-Generated Video." Frontiers in Business, Economics and Management 3, no. 1 (February 25, 2022): 31–39. http://dx.doi.org/10.54097/fbem.v3i1.237.

Full text
Abstract:
Short video has become one of the important tourism marketing media in the era of mobile internet. Taking Xixinan village, Huangshan City, Anhui Province as an example, this study selects one agency-generated video (AGV) and one user-generated video (UGV) as research materials and adopts the content analysis method to examine differences in content theme and structure between the two videos' representation of Xixinan's tourism image. Differences were found in the content structure of the visual representation of Xixinan's tourism image between the two short videos. The AGV focuses on a comprehensive representation of the natural, cultural, folk, and other attractive elements of Xixinan, while the UGV emphasizes detailed elements such as the tourist experience. When producing short videos, destination marketing organizations need to take into account the representation of tourist-experience elements and encourage AGVs and UGVs to play their respective positive roles in tourism image promotion.
APA, Harvard, Vancouver, ISO, and other styles
47

Guo, Rui Liang, Bing Liu, and Hong Yuan Huang. "The Study of Female Fashion Model’s Basic Walking Posture." Advanced Materials Research 332-334 (September 2011): 1272–75. http://dx.doi.org/10.4028/www.scientific.net/amr.332-334.1272.

Full text
Abstract:
A perfect walking posture is of enormous significance for a fashion model. Models with good walking postures can convey the fashion design to the audience clearly and further attract them. This paper studied and analyzed the basic walking postures of famous female fashion models using video and image analysis methods. Through analysis of measurements from videos and images, some basic principles of walking posture were derived. Finally, this paper constructed a virtual model with a set of standard walking postures over a gait cycle using 3D software.
APA, Harvard, Vancouver, ISO, and other styles
48

Feng, Yafeng, and Xianguo Liu. "Application of Video Processing Technology Based on Diffusion Equation Model in Basketball Analysis." Advances in Mathematical Physics 2021 (December 26, 2021): 1–12. http://dx.doi.org/10.1155/2021/7522973.

Full text
Abstract:
Video event detection and annotation is an important part of video analysis and the basis of video content retrieval. Basketball is one of the most popular sports, and event detection and labeling of basketball videos can help viewers quickly locate events of interest and meet retrieval needs. This paper studies the application of anisotropic diffusion to video image smoothing, denoising, and enhancement, and analyzes an improved form of anisotropic diffusion suitable for video image enhancement. It studies the anisotropic diffusion method for coherent speckle noise removal and proposes a video image denoising method that combines anisotropic diffusion with the stationary wavelet transform. It also proposes an anisotropic diffusion method based on visual characteristics, which adds a video image detail factor while smoothing and improves the visual effect of diffusion. The article then discusses how to apply anisotropic diffusion methods and ideas to video image segmentation. We introduce the classic watershed segmentation algorithm and use forward-backward diffusion to preprocess video images to reduce oversegmentation, introduce the active contour model and its improved GVF Snake, and analyze how anisotropic diffusion can be used to improve the GVF Snake into a new GGVF Snake model. For basketball segmentation in close-up shots, we propose an improved Hough transform method based on a variable-direction filter, which can effectively extract the center and radius of the basketball; the algorithm is robust to partial occlusion of the basketball and to motion blur. For basketball segmentation in perspective shots, the commonly used object segmentation method based on change-area detection is very sensitive to noise and requires that the object not move too fast. To correct the basketball segmentation deviation caused by video noise and fast basketball movement, we make corrections based on the peak characteristics of the edge gradient. At the same time, the internal and external energy calculations of the traditional active contour model are improved, and criteria for the regional optimal solution and segmentation validity are established. For basketball tracking, an improved block-matching method is proposed. On the one hand, to overcome the influence of the basketball's own rotation, we establish a matching criterion that is independent of the region's location. On the other hand, we improve the diamond motion search path based on the basketball's motion correlation and center-offset characteristics, reducing the number of searches and improving tracking speed.
APA, Harvard, Vancouver, ISO, and other styles
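Classic Perona-Malik anisotropic diffusion, the building block this paper adapts, can be sketched as follows (a textbook version in plain NumPy, not the paper's improved variant; parameter values are illustrative):

```python
import numpy as np

def anisotropic_diffusion(img, n_iters=10, kappa=30.0, lam=0.2):
    """Perona-Malik diffusion: smooth flat regions while preserving edges.
    The edge-stopping function g(d) = exp(-(d/kappa)^2) suppresses
    diffusion across strong gradients; lam <= 0.25 keeps the explicit
    scheme stable for 4-neighbour updates."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iters):
        # Nearest-neighbour differences with a replicated border
        p = np.pad(u, 1, mode="edge")
        dn = p[:-2, 1:-1] - u   # north
        ds = p[2:, 1:-1] - u    # south
        de = p[1:-1, 2:] - u    # east
        dw = p[1:-1, :-2] - u   # west
        # Edge-stopping conductances per direction
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

Applied frame by frame, this reduces noise variance in flat regions while leaving sharp boundaries (such as the ball's edge) largely intact, which is what makes it useful as a preprocessing step before segmentation.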
49

Jarvis, L. R. "Microcomputer video image analysis." Journal of Microscopy 150, no. 2 (May 1988): 83–97. http://dx.doi.org/10.1111/j.1365-2818.1988.tb04601.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Lo, Shi Wei. "Video Matching by One-Dimensional PSNR Profile." Applied Mechanics and Materials 479-480 (December 2013): 174–78. http://dx.doi.org/10.4028/www.scientific.net/amm.479-480.174.

Full text
Abstract:
This paper presents a compact framework for matching video sequences through a PSNR-based profile. This simplified video profile is suitable for the matching process when applied to disordered undersea videos. Instead of using color and motion features across the video sequence, we use the image quality of successive frames as a video feature. We employ the PSNR quality measure as a video profile rather than complex content-based analysis. The experimental results show that the proposed approach permits accurate video matching, with satisfactory performance in identifying the correct video from an undersea dataset.
APA, Harvard, Vancouver, ISO, and other styles
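A minimal sketch of such a PSNR profile and a sliding-window matcher (my own simplification of the idea, not the author's code):

```python
import numpy as np

def psnr(frame_a, frame_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized frames."""
    mse = np.mean((frame_a.astype(np.float64) - frame_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

def psnr_profile(frames):
    """One-dimensional profile: PSNR of each frame against its successor."""
    return [psnr(a, b) for a, b in zip(frames, frames[1:])]

def match_profiles(query, reference):
    """Slide the query profile along the reference profile and return the
    offset with the smallest mean absolute difference."""
    q = np.asarray(query)
    best_offset, best_score = 0, float("inf")
    for off in range(len(reference) - len(q) + 1):
        score = float(np.mean(np.abs(q - np.asarray(reference[off:off + len(q)]))))
        if score < best_score:
            best_offset, best_score = off, score
    return best_offset
```

The appeal of the approach is that each video collapses to a short 1-D signal, so matching is cheap compared with full content-based analysis of color or motion.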