Journal articles on the topic "SIMULINK MODELS FOR VIDEO PROCESSING"

To see other types of publications on this topic, follow the link: SIMULINK MODELS FOR VIDEO PROCESSING.

Consult the top 50 journal articles for your research on the topic "SIMULINK MODELS FOR VIDEO PROCESSING".

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

DOBLANDER, ANDREAS, DIETMAR GÖSSERINGER, BERNHARD RINNER, and HELMUT SCHWABACH. "AN EVALUATION OF MODEL-BASED SOFTWARE SYNTHESIS FROM SIMULINK MODELS FOR EMBEDDED VIDEO APPLICATIONS." International Journal of Software Engineering and Knowledge Engineering 15, no. 02 (April 2005): 343–48. http://dx.doi.org/10.1142/s0218194005002038.

Full text of the source
Abstract:
In next generation video surveillance systems there is a trend towards embedded solutions. Digital signal processors (DSP) are often used to provide the necessary computing power. The limited resources impose significant challenges for software development. Resource constraints must be met while facing increasing application complexity and pressing time-to-market demands. Recent advances in synthesis tools for Simulink suggest a high-level approach to algorithm implementation for embedded DSP systems. The model-based visual development process of Simulink facilitates simulation as well as synthesis of target specific code. In this work the modeling and code generation capabilities of Simulink are evaluated with respect to video analysis algorithms. Different models of a motion detection algorithm are used to synthesize code. The generated code targeted at a Texas Instruments TMS320C6416 DSP is compared to a hand-optimized reference. Experiments show that an ad hoc approach to synthesize complex image processing algorithms hardly yields optimal code for DSPs. However, several optimizations can be applied to improve performance.
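
As background for readers unfamiliar with the class of algorithms being synthesized here, the Python/OpenCV sketch below shows a minimal frame-differencing motion detector, a typical representative of such video-analysis kernels. It is only an illustrative stand-in under assumed file names; the paper's own Simulink motion detection model and its generated DSP code are not reproduced.

    import cv2

    # Minimal frame-differencing motion detector (illustrative stand-in only).
    cap = cv2.VideoCapture("surveillance.avi")   # hypothetical input video
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)                       # per-pixel change
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        motion = cv2.countNonZero(mask) > 0.01 * mask.size   # crude motion flag
        prev = gray
    cap.release()
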
2

Naso, David, Olha Pohudina, Andrii Pohudin, Sergiy Yashin, and Rossella Bartolo. "Autonomous flight insurance method of unmanned aerial vehicles Parot Mambo using semantic segmentation data." Radioelectronic and Computer Systems, no. 1 (March 7, 2023): 147–54. http://dx.doi.org/10.32620/reks.2023.1.12.

Full text of the source
Abstract:
Autonomous navigation of unmanned aerial vehicles (UAVs) has become an extremely attractive topic over the past decade, also due to the increasing availability of affordable equipment and open-source control and processing software environments. This demand has also raised a strong interest in developing accessible experimental platforms to train engineering students in the rapidly evolving area of autonomous navigation. In this paper, we describe a platform based on low-cost off-the-shelf hardware that takes advantage of the Matlab/Simulink programming environment to tackle most of the problems related to UAV autonomous navigation. More specifically, the subject of this paper is the autonomous control of the flight of a small UAV, which must explore and patrol an unknown indoor environment. Objectives: to analyse the existing hardware platforms for autonomous indoor flight, to choose a flight exploration scenario for unknown premises, to formalize the procedure for obtaining a knowledge model for semantic classification of premises, and to formalize obtaining the distance to obstacles from horizontally oriented camera data and building a barrier map from it. Namely, we use the method of image segmentation based on the brightness threshold, a method of training the semantic segmentation network, and computer algorithms in probabilistic robotics for mobile robots. We consider both the case of navigation guided by structural visual information placed in the environment, e.g., contrast markers for flight (such as a path marked by red tape), and the case of navigation based on unstructured information such as recognizable objects or human gestures. Based on preliminary tests, the most suitable method for autonomous indoor navigation is object classification and segmentation, so that the UAV gradually analyses the surrounding objects in the room and makes decisions on path planning. The result of our investigation is a method that is suitable to allow the autonomous flight of a UAV with a frontal video camera. Conclusions. The scientific novelty of the obtained results is as follows: we have improved the method of autonomous flight of small UAVs by using the semantic network model and determining the purpose of flight only at a given altitude to minimize the computational costs of limited autopilot capabilities for low-cost small UAV models. The results of our study can be further extended by means of a campaign of experiments in different environments.
3

Coman, Mircea, and Balan Radu. "Video Camera Measuring Application Using Matlab." Solid State Phenomena 166-167 (September 2010): 139–44. http://dx.doi.org/10.4028/www.scientific.net/ssp.166-167.139.

Full text of the source
Abstract:
This paper presents the implementation in the Matlab/Simulink environment of an application for measuring distances using a video camera. Some of the advantages of using image processing as a measurement method, and of Matlab for designing the application, are discussed. The principles that were used to obtain the calculated distance are presented, and the steps of the application's implementation in Simulink are described.
4

Bobyr, Maxim, Alexander Arkhipov, and Aleksey Yakushev. "Shade recognition of the color label based on the fuzzy clustering." Informatics and Automation 20, no. 2 (March 30, 2021): 407–34. http://dx.doi.org/10.15622/ia.2021.20.2.6.

Full text of the source
Abstract:
In this article the task of determining the current position of pneumatic actuators is considered. The solution to the given task is achieved by using a technical vision system that allows to apply the fuzzy clustering method to determine in real time the center coordinates and the displacement position of a color label located on the mechatronic complex actuators. The objective of this work is to improve the accuracy of the moving actuator’s of mechatronic complex by improving the accuracy of the color label recognition. The intellectualization of process of the color shade recognition is based on fuzzy clustering. First, a fuzzy model is built, that allows depending on the input parameters of the color intensity for each of the RGB channels and the color tone component, to select a certain color in the image. After that, the color image is binarized and noise is suppressed. The authors used two defuzzification models during simulation a fuzzy system: one is based on the center of gravity method (CoG) and the other is based on the method of area ratio (MAR). The model is implemented based on the method of area ratio and allows to remove the dead zones that are present in the center of gravity model. The method of area ratio determines the location of the color label in the image frame. Subsequently, when the actuator is moved longitudinally, the vision system determines the location of the color label in the new frame. The color label position offset between the source and target images allows to determine the moved distance of the color label. In order to study how noise affects recognition accuracy, the following digital filters were used: median, Gaussian, matrix and binomial. Analysis of the accuracy of these filters showed that the best result was obtained when using a Gaussian filter. The estimation was based on the signal-to-noise coefficient. The mathematical models of fuzzy clustering of color label recognition were simulated in the Matlab/Simulink environment. Experimental studies of technical vision system performance with the proposed fuzzy clustering model were carried out on a pneumatic mechatronic complex that performs processing, moving and storing of details. During the experiments, a color label was placed on the cylinder, after which the cylinder moved along the guides in the longitudinal direction. During the movement, video recording and image recognition were performed. To determine the accuracy of color label recognition, the PSNR and RMSE coefficients were calculated which were equal 38.21 and 3.14, respectively. The accuracy of determining the displacement based on the developed model for recognizing color labels was equal 99.7%. The defuzzifier speed has increased to 590 ns.
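
The displacement measurement described above can be pictured without the fuzzy machinery: in the Python/OpenCV sketch below, crisp HSV thresholding stands in for the fuzzy clustering, and the centroid of the colour label is compared between two frames. The HSV bounds and file names are placeholder assumptions rather than values from the paper.

    import cv2
    import numpy as np

    def label_center(frame_bgr, lo=(0, 120, 70), hi=(10, 255, 255)):
        # Centroid of pixels inside an HSV range (placeholder bounds for a red label).
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        mask = cv2.medianBlur(mask, 5)          # noise suppression (median filter)
        ys, xs = np.nonzero(mask)
        return None if xs.size == 0 else (xs.mean(), ys.mean())

    # Displacement of the label between two frames, in pixels; a calibration
    # factor would convert it to millimetres of actuator travel.
    c0 = label_center(cv2.imread("frame_before.png"))
    c1 = label_center(cv2.imread("frame_after.png"))
    if c0 and c1:
        print(((c1[0] - c0[0]) ** 2 + (c1[1] - c0[1]) ** 2) ** 0.5)
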
5

Xie, Xiao Peng, and Yun Yi Li. "Computer Simulation Study Based on Matlab." Applied Mechanics and Materials 513-517 (February 2014): 3049–52. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.3049.

Full text of the source
Abstract:
This paper describes simulation methods based on Simulink, the dynamic simulation tool of the Matlab toolbox family, and elaborates in depth on improving simulation speed and analysing the simulation results. It also describes the use of Simulink simulation tools for the modeling, analysis and design of automatic control systems, as well as the idea of visual, modular modeling with the Simulink-based video and image processing module sets.
6

Xin Li. "Video Processing Via Implicit and Mixture Motion Models." IEEE Transactions on Circuits and Systems for Video Technology 17, no. 8 (August 2007): 953–63. http://dx.doi.org/10.1109/tcsvt.2007.896656.

Full text of the source
7

Zhong, Zhaoqian, and Masato Edahiro. "Model-Based Parallelization for Simulink Models on Multicore CPUs and GPUs." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 20 (January 4, 2020): 1–13. http://dx.doi.org/10.24297/ijct.v20i.8533.

Full text of the source
Abstract:
In this paper we propose a model-based approach to parallelize Simulink models of image processing algorithms on homogeneous multicore CPUs and NVIDIA GPUs at the block level and generate CUDA C codes for parallel execution on the target hardware. In the proposed approach, the Simulink models are converted to directed acyclic graphs (DAGs) based on their block diagrams, wherein the nodes represent tasks of grouped blocks or subsystems in the model and the edges represent the communication behaviors between blocks. Next, a path analysis is conducted on the DAGs to extract all execution paths and calculate their respective lengths, which comprises the execution times of tasks and the communication times of edges on the path. Then, an integer linear programming (ILP) formulation is used to minimize the length of the critical path of the DAG, which represents the execution time of the Simulink model. The ILP formulation also balances workloads on each CPU core for optimized hardware utilization. We parallelized image processing models on a platform of two homogeneous CPU cores and two GPUs with our approach and observed a speedup performance between 8.78x and 15.71x.
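
The critical-path idea in this abstract can be illustrated on a toy graph: node weights play the role of task execution times, edge weights the communication times, and the longest path bounds the parallel execution time that the ILP then minimizes. The Python/networkx sketch below uses a made-up four-task DAG, not one of the evaluated image-processing models.

    import networkx as nx

    # Toy DAG standing in for a partitioned Simulink model.
    g = nx.DiGraph()
    g.add_nodes_from([("read", {"t": 2}), ("filter", {"t": 8}),
                      ("edge", {"t": 6}), ("merge", {"t": 3})])
    g.add_weighted_edges_from([("read", "filter", 1), ("read", "edge", 1),
                               ("filter", "merge", 2), ("edge", "merge", 2)])

    def path_length(path):
        comm = sum(g[u][v]["weight"] for u, v in zip(path, path[1:]))
        return sum(g.nodes[n]["t"] for n in path) + comm

    # The critical (longest) path bounds how fast the model can run in parallel.
    critical = max(nx.all_simple_paths(g, "read", "merge"), key=path_length)
    print(critical, path_length(critical))
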
8

Arsinte, Radu, and Eugen Lupu. "Prototyping Industrial Vision Applications and Implementations on Multimedia Processors." Applied Mechanics and Materials 808 (November 2015): 321–26. http://dx.doi.org/10.4028/www.scientific.net/amm.808.321.

Full text of the source
Abstract:
The article presents a development system for industrial vision applications based on multimedia processors. For simulation of the algorithm and prototyping the application we can use either standard Matlab / Simulink environment or specialized environments such as Open eVision. The system components are presented with examples of application prototyping and implementation on standard platform Digital Video Development Platform DVDP6437 from Spectrum Digital, equipped with Texas Instruments TMS320DM6437. In the presented system we did experiments regarding the Rapid Prototyping concept in industrial vision application, starting from Embedded Target Library components from Matlab / Simulink. The experiments revealed that multimedia processors are usable both in video and still image processing and are possible to be considered an option in standard industrial vision applications: positioning, visual inspection. The paper contains also a brief study of the necessary components of a proposed embedded architecture, intended to be realized for evaluation of the technology in industrial environments.
9

Hanh, Le Thi My, Nguyen Thanh Binh, and Khuat Thanh Tung. "Parallel Mutant Execution Techniques in Mutation Testing Process for Simulink Models." Journal of Telecommunications and Information Technology 4 (December 20, 2017): 90–100. http://dx.doi.org/10.26636/jtit.2017.113617.

Full text of the source
Abstract:
Mutation testing – a fault-based technique for software testing – is a computationally expensive approach. One of the powerful methods to improve the performance of mutation without reducing effectiveness is to employ parallel processing, where mutants and tests are executed in parallel. This approach reduces the total time needed to accomplish the mutation analysis. This paper proposes three strategies for parallel execution of mutants on multicore machines using the Parallel Computing Toolbox (PCT) with the Matlab Distributed Computing Server. It aims to demonstrate that the computationally intensive software testing schemes, such as mutation, can be facilitated by using parallel processing. The experiments were carried out on eight different Simulink models. The results represented the efficiency of the proposed approaches in terms of execution time during the testing process.
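
The parallelization pattern itself carries over to any environment with a worker pool. The Python sketch below distributes mutant-versus-test runs over processes; run_simulation and the job counts are hypothetical placeholders, and the paper's actual strategies are built on the Matlab Parallel Computing Toolbox rather than Python.

    from multiprocessing import Pool

    def run_simulation(job):
        # Placeholder: execute one mutated Simulink model against one test case
        # and report whether the mutant was killed.
        mutant, test = job
        return mutant, test, True

    jobs = [(m, t) for m in range(100) for t in range(20)]   # 100 mutants x 20 tests

    if __name__ == "__main__":
        with Pool(processes=4) as pool:                      # one worker per core
            results = pool.map(run_simulation, jobs)
        killed = {m for m, _, dead in results if dead}
        print("mutation score:", len(killed) / 100)
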
10

Maqsood, Azka, Imran Touqir, Adil Masood Siddiqui, and Maham Haider. "Wavelet Based Video Denoising using Probabilistic Models." January 2019 38, no. 1 (January 1, 2019): 17–30. http://dx.doi.org/10.22581/muet1982.1901.02.

Full text of the source
Abstract:
Wavelet based image processing techniques do not strictly follow the conventional probabilistic models that are unrealistic for real world images. However, the key features of joint probability distributions of wavelet coefficients are well captured by the HMT (Hidden Markov Tree) model. This paper presents the HMT model based technique consisting of Wavelet based Multiresolution analysis to enhance the results in image processing applications such as compression, classification and denoising. The proposed technique is applied to colored video sequences by implementing the algorithm on each video frame independently. A 2D (Two Dimensional) DWT (Discrete Wavelet Transform) is used which is implemented on the popular HMT model used in the framework of the Expectation-Maximization algorithm. The proposed technique can properly exploit the temporal dependencies of wavelet coefficients and their non-Gaussian performance as opposed to existing wavelet based denoising techniques which consider the wavelet coefficients to be jointly Gaussian or independent. Denoised frames are obtained by processing the wavelet coefficients inversely. Comparison of the proposed method with the existing techniques based on CPSNR (Coloured Peak Signal to Noise Ratio), PCC (Pearson’s Correlation Coefficient) and MSSIM (Mean Structural Similarity Index) has been carried out in detail. The proposed denoising method reveals improved results in terms of quantitative and qualitative analysis for both additive and multiplicative noise and retains nearly all the structural contents of a video frame.
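
For orientation, a far simpler per-frame wavelet denoiser is sketched below with PyWavelets: it soft-thresholds the detail coefficients of a 2-D DWT using the universal threshold. The paper replaces this naive thresholding with a Hidden Markov Tree model fitted by Expectation-Maximization, so the sketch is only a baseline under assumed parameter choices.

    import numpy as np
    import pywt

    def denoise_frame(frame, wavelet="db4", level=2):
        # 2-D DWT of one (grayscale) frame.
        coeffs = pywt.wavedec2(frame.astype(float), wavelet, level=level)
        # Noise estimate from the finest diagonal detail band, then the
        # universal threshold applied softly to every detail coefficient.
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(frame.size))
        new_coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(new_coeffs, wavelet)
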
11

Ermakov, S. G., A. V. Zabrodin, A. V. Krasnovidov, and A. D. Homonenko. "INTERACTION SIMULINK-MODELS OF COMPLEX AND INTELLIGENT SYSTEMS WITH PROGRAMS IN HIGH-LEVEL LANGUAGES." T-Comm 16, no. 12 (2022): 23–31. http://dx.doi.org/10.36724/2072-8735-2022-16-12-23-31.

Full text of the source
Abstract:
The MatLab package and its extension in the form of the Simulink block modeling system form a visual and effective tool for modeling complex and intelligent systems. This bundle provides one of the most effective ways to reduce the time to determine optimal parameters of control actions in the simulation. A further increase in the efficiency of modeling complex and intelligent systems can be achieved through the use of high-level programming languages. The purpose of the study is to consider the methods of interaction of MatLab and Simulink with programs in the high-level languages C and C++ in order to improve the efficiency of the modeling process. Methods and means. Interaction between the MatLab and Simulink packages is implemented using the following methods: executing a file from the S-model window, as well as by launching the S-model from the MatLab command line or from an m-file, followed by processing the simulation results using MatLab software and the C or C++ programming languages. Results. A practical implementation of the interaction of these tools (Matlab + Simulink + high-level programming languages C or C++) has been completed. Practical significance. Software implementations for creating S-functions of levels 1 and 2 are presented with a demonstration of the results, and the implementation of S-functions in a high-level language is considered. The proposed organization of the interaction between MatLab, Simulink and the C or C++ languages makes it possible to increase the system simulation efficiency.
12

Chądzyńska, Dominika, and Dariusz Gotlib. "Spatial data processing for the purpose of video games." Polish Cartographical Review 48, no. 1 (March 1, 2016): 41–50. http://dx.doi.org/10.1515/pcr-2016-0001.

Full text of the source
Abstract:
Abstract Advanced terrain models are currently commonly used in many video/computers games. Professional GIS technologies, existing spatial datasets and cartographic methodology are more widely used in their development. This allows for achieving a realistic model of the world. On the other hand, the so-called game engines have very high capability of spatial data visualization. Preparing terrain models for the purpose of video games requires knowledge and experience of GIS specialists and cartographers, although it is also accessible for non-professionals. The authors point out commonness and variety of use of terrain models in video games and the existence of a series of ready, advanced tools and procedures of terrain model creating. Finally the authors describe the experiment of performing the process of data modeling for “Condor Soar Simulator”.
13

Saidani, T., and R. Ghodhbani. "Hardware Acceleration of Video Edge Detection with Hight Level Synthesis on the Xilinx Zynq Platform." Engineering, Technology & Applied Science Research 12, no. 1 (February 12, 2022): 8007–12. http://dx.doi.org/10.48084/etasr.4615.

Full text of the source
Abstract:
The study conducted in the current paper consists of validating an original design flow for the rapid prototyping of real-time image and video processing applications on FPGAs. A video application for edge detection with Simulink HDL coder and Vivado High-Level Synthesis (HLS) has been designed as if the code was going to be executed on a conventional processor. The developed tools will automatically translate the code into VHDL hardware language using an advanced compilation technique. This amounts to embedding processors on Xilinx Zynq-7000 System on-Chip (SoC) device in an optimal manner. This automated hardware design flow reduces the time to create a prototype since only the high-level description is required. The design of the video edge detection system is implemented on Xilinx Zynq-7000 platform. The result of the implementation gave effective resource utilization and a good frame rate (95 FPS) under 170MHz frequency.
14

Bilal, Muhammad, Wail Harasani, and Liang Yang. "Rapid Prototyping of Image Contrast Enhancement Hardware Accelerator on FPGAs Using High-Level Synthesis Tools." Jordan Journal of Electrical Engineering 9, no. 3 (2023): 322. http://dx.doi.org/10.5455/jjee.204-1673105856.

Full text of the source
Abstract:
Rapid prototyping tools have become essential in the race to market. In this work, we have explored employing rapid prototyping approach to develop an intellectual property core for real-time contrast enhancement which is a commonly employed image processing task. Specifically, the task involves real-time contrast enhancement of video frames, which is used to repair washed out (overexposed) or darkened (underexposed) appearance. Such scenario is frequently encountered in video footage captured underwater. Since the imaging conditions are not known a priori, the lower and upper limits of the dynamic range of acquired luminance values need to be adaptively determined and mapped to the full range permitted by the allocated bitwidth so that the processed image has a high-contrast appearance. This paper describes a hardware implementation of this operation using contrast stretching algorithm with the help of Simulink high-level synthesis tool using rapid prototyping paradigm. The developed model can be directly used as a drop-in module in larger computer vision systems to enhance Simulink computer vision toolbox capabilities, which does not support this operation for direct FPGA implementation yet. The synthesized core consumes less than 1% of total FPGA slice logic resources while dissipating only 7 mW dynamic power. To this end, look-up table has been employed to implement the division operator which otherwise requires exorbitantly large number of logic resources. Moreover, an online algorithm has been proposed which avoids multiple memory accesses. The hardware module has been tested in a real-time video processing scenario at 100 MHz clock rate and depicts functional accuracy at par with the software while consuming lower logic resources than competitive designs. These results demonstrate that the appropriate use of modern rapid prototyping tools can be highly effective in reducing the development time without compromising the functional accuracy and resource utilization.
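
The contrast-stretching operation itself is compact. The Python sketch below is a software reference model written under assumed percentile settings, not the synthesized hardware: it estimates the effective dynamic range of an 8-bit frame and remaps it to the full 0..255 range through a lookup table, the same device the core uses to avoid a per-pixel divider.

    import numpy as np

    def stretch_contrast(frame_u8, low_pct=1, high_pct=99):
        # Effective dynamic range of the frame (percentiles reject outliers).
        lo, hi = np.percentile(frame_u8, (low_pct, high_pct))
        # 256-entry lookup table mapping [lo, hi] onto the full 0..255 range.
        lut = np.clip((np.arange(256) - lo) * 255.0 / max(hi - lo, 1), 0, 255)
        return lut.astype(np.uint8)[frame_u8]
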
15

Yazdi, Mehran, Mohammad A. Bagherzadeh, Mehdi Jokar, and Mohammad A. Abasi. "Block-Wise Background Subtraction Based on Gaussian Mixture Models." Applied Mechanics and Materials 490-491 (January 2014): 1221–27. http://dx.doi.org/10.4028/www.scientific.net/amm.490-491.1221.

Full text of the source
Abstract:
The background subtraction of an image enables us to distinguish a moving object in a video sequence and enter higher levels of video processing. Background processing is an essential strategy for many video processing applications, and its most basic method (determining the difference of sequential frames) is very rapid and easy, but not appropriate for complicated scenes. In this article we introduce a method to remove the false distinction of the foreground. In our proposed method, whose updating is automatic, a mixture of Gaussians has been used. In addition, this method depends on the passage of time. The previously introduced methods are often based on per-pixel processing and ignore the neighboring pixels when improving the background. Our method does not make do with one pixel but rather benefits from a block of pixels in order to include all the pixels contained in the block. Experimental findings indicate considerable improvements in the proposed method, which can quickly, and without morphological filtering, model the background from surveillance cameras placed in both indoor and outdoor locations under lighting changes, repetitive motion from clutter and scene changes. Subsequently, the moving object in the scene can be passed to real-time tracking and recognition applications.
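
As a point of reference, the pixel-wise Gaussian-mixture baseline that the authors improve on is available off the shelf; the Python/OpenCV sketch below applies it frame by frame (the input file name is a placeholder). The paper's block-wise update is not part of OpenCV and is not reproduced here.

    import cv2

    # Pixel-wise Gaussian mixture background model (the classical baseline).
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

    cap = cv2.VideoCapture("scene.avi")        # hypothetical input sequence
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = subtractor.apply(frame)           # 0 = background, 255 = moving object
    cap.release()
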
16

Aksenova, Olesya, Evgenia Nikolaeva, and Ibodat Haldybaeva. "Modelling the Process of Processing Ash and Slag Wastes of TPP by Means of Consistent Application of CAE- and CAD- systems." E3S Web of Conferences 174 (2020): 01019. http://dx.doi.org/10.1051/e3sconf/202017401019.

Full text of the source
Abstract:
The article considers the possibility of joint application of mathematical processing and computer 3D modeling of the technological process line for processing ash and slag waste from thermal power plants (TPP). The authors suggest considering an approach to the design of the ash and slag waste processing site by mathematical processing and 3D computer modeling. The mathematical processing with the help of E- network device and the creation of a 3D model allows to plan the site for processing ash and slag waste, select the appropriate technology and thereby ensure the environmental effect of both existing and projected power plants. The authors present the results of processing the technological process of recycling ash and slag wastes of TPP in terms of E-networks using mathematical processing in the Simulink application, which displays the device model from the standard blocks available in the program and performs the necessary calculations. 3D models of individual equipment units selected on the basis of mathematical processing calculations in the Simulink application, were created using computer 3D modeling in a graphical editor. A 3D visualization of the technological section of ash and slag waste processing was performed, which allows to clearly show the planned section at the design stage, which, if necessary, will allow to easily make changes to the project.
17

Ali, Walid S. Ibrahim. "Scaling the Evolutionary Models for Signal Processing System Optimization with Applications in Digital Video Processing." Color and Imaging Conference 9, no. 1 (January 1, 2001): 291–97. http://dx.doi.org/10.2352/cic.2001.9.1.art00053.

Full text of the source
18

Korneyev, Vladimir S., and Valery A. Raychert. "DIGITAL TECHNOLOGIES FOR OPTICAL IMAGES PROCESSING IN LABORATORY PRACTICE IN PHYSICS." Actual Problems of Education 1 (January 30, 2020): 185–90. http://dx.doi.org/10.33764/2618-8031-2020-1-185-190.

Full text of the source
Abstract:
Examples of a laboratory work on wave optics are considered, in which, using the MS Office Excel program, computer processing of images of interference patterns (Newton's rings) obtained with a digital video camera is performed. The sequence of computer processing of the obtained images is described and the possibility of comparing the experimentally obtained data with theoretical models of the intensity distribution is shown. Computer processing of the images obtained with the digital video camera can be used in subsequent courses in the study of special disciplines, when performing research work, in processing measurement results and constructing models of physical phenomena.
19

Badawi, Aiman, and Muhammad Bilal. "A Hardware-Software Co-Design for Object Detection Using High-Level Synthesis Tools." International Journal of Electronics, Communications, and Measurement Engineering 8, no. 1 (January 2019): 63–73. http://dx.doi.org/10.4018/ijecme.2019010105.

Full text of the source
Abstract:
Object detection is a vital component of modern video processing systems, and despite the availability of several efficient open-source feature-classifier frameworks and their corresponding implementation schemes, inclusion of this feature as a drop-in module in larger computer vision systems is still considered a daunting task. To this end, this work describes an open-source unified framework which can be used to train, test, and deploy an SVM-based object detector as a hardware-software co-design on FPGA using Simulink high-level synthesis tool. The proposed modular design can be seamlessly integrated within full systems developed using Simulink Computer Vision toolbox for rapid deployment. FPGA synthesis results show that the proposed hardware architecture utilizes fewer logic resources than the contemporary designs for similar operation. Moreover, experimental evidence has been provided to prove the generalization of the framework in efficiently detecting a variety of objects of interest including pedestrians, faces and traffic signs.
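
A software-only reference for one of the detectors covered (pedestrians) can be assembled from OpenCV's pretrained HOG-plus-linear-SVM model, as sketched below in Python; the image path is a placeholder, and the FPGA co-design itself is outside the scope of this sketch.

    import cv2

    # Pretrained HOG features + linear SVM people detector shipped with OpenCV.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    img = cv2.imread("street.jpg")             # hypothetical test image
    boxes, weights = hog.detectMultiScale(img, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
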
20

Chellappa, Rama. "Statistical Methods and Models for Video-Based Tracking, Modeling, and Recognition." Foundations and Trends® in Signal Processing 3, no. 1-2 (2009): 1–151. http://dx.doi.org/10.1561/2000000007.

Full text of the source
21

Badawi, Aiman, and Muhammad Bilal. "High-Level Synthesis of Online K-Means Clustering Hardware for a Real-Time Image Processing Pipeline." Journal of Imaging 5, no. 3 (March 14, 2019): 38. http://dx.doi.org/10.3390/jimaging5030038.

Full text of the source
Abstract:
The growing need for smart surveillance solutions requires that modern video capturing devices to be equipped with advance features, such as object detection, scene characterization, and event detection, etc. Image segmentation into various connected regions is a vital pre-processing step in these and other advanced computer vision algorithms. Thus, the inclusion of a hardware accelerator for this task in the conventional image processing pipeline inevitably reduces the workload for more advanced operations downstream. Moreover, design entry by using high-level synthesis tools is gaining popularity for the facilitation of system development under a rapid prototyping paradigm. To address these design requirements, we have developed a hardware accelerator for image segmentation, based on an online K-Means algorithm using a Simulink high-level synthesis tool. The developed hardware uses a standard pixel streaming protocol, and it can be readily inserted into any image processing pipeline as an Intellectual Property (IP) core on a Field Programmable Gate Array (FPGA). Furthermore, the proposed design reduces the hardware complexity of the conventional architectures by employing a weighted instead of a moving average to update the clusters. Experimental evidence has also been provided to demonstrate that the proposed weighted average-based approach yields better results than the conventional moving average on test video sequences. The synthesized hardware has been tested in real-time environment to process Full HD video at 26.5 fps, while the estimated dynamic power consumption is less than 90 mW on the Xilinx Zynq-7000 SOC.
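
The key hardware simplification, updating the winning cluster with a fixed-weight average instead of a count-based moving average, is easy to prototype in software. The Python sketch below is an assumed reference model; k, alpha and the pixel-stream shape are illustrative choices, not the paper's parameters.

    import numpy as np

    def online_kmeans(pixels, k=4, alpha=0.05, seed=0):
        # pixels: (N, 3) array, e.g. frame.reshape(-1, 3), processed as a stream.
        rng = np.random.default_rng(seed)
        centres = rng.choice(pixels, size=k, replace=False).astype(float)
        labels = np.empty(len(pixels), dtype=int)
        for i, p in enumerate(pixels.astype(float)):
            j = np.argmin(np.linalg.norm(centres - p, axis=1))  # nearest centre
            centres[j] += alpha * (p - centres[j])               # weighted average
            labels[i] = j
        return centres, labels
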
22

Sabot, F., M. Naaim, F. Granada, E. Suriñach, P. Planet, and G. Furdada. "Study of avalanche dynamics by seismic methods, image-processing techniques and numerical models." Annals of Glaciology 26 (1998): 319–23. http://dx.doi.org/10.3189/1998aog26-1-319-323.

Full text of the source
Abstract:
Seismic signals of avalanches, related video images and numerical models were compared to improve the characterization of avalanche phenomena. Seismic data and video images from two artificially released avalanches were analysed to obtain more information about the origin of the signals. Image processing was used to compare the evolution of one avalanche front and the corresponding seismic signals. A numerical model was also used to simulate an avalanche flow in order to obtain mean- and maximum-velocity profiles. Prior to this, the simulated avalanche was verified using video images. The results indicate that the seismic signals recorded correspond to changes in avalanche type and path slope, interaction with obstacles and to phenomena associated with the stopping stage of the avalanche, suggesting that only part of the avalanche was recorded. These results account for the seismic signals previously obtained automatically in a wide avalanche area.
23

Sabot, F., M. Naaim, F. Granada, E. Suriñach, P. Planet, and G. Furdada. "Study of avalanche dynamics by seismic methods, image-processing techniques and numerical models." Annals of Glaciology 26 (1998): 319–23. http://dx.doi.org/10.1017/s0260305500015032.

Full text of the source
Abstract:
Seismic signals of avalanches, related video images and numerical models were compared to improve the characterization of avalanche phenomena. Seismic data and video images from two artificially released avalanches were analysed to obtain more information about the origin of the signals. Image processing was used to compare the evolution of one avalanche front and the corresponding seismic signals. A numerical model was also used to simulate an avalanche flow in order to obtain mean- and maximum-velocity profiles. Prior to this, the simulated avalanche was verified using video images. The results indicate that the seismic signals recorded correspond to changes in avalanche type and path slope, interaction with obstacles and to phenomena associated with the stopping stage of the avalanche, suggesting that only part of the avalanche was recorded. These results account for the seismic signals previously obtained automatically in a wide avalanche area.
24

Zeng, Xiang Feng, and Yu Hong Yang. "Anti-Jamming Technology Performance Analysis of SATCOM Systems." Applied Mechanics and Materials 198-199 (September 2012): 1627–31. http://dx.doi.org/10.4028/www.scientific.net/amm.198-199.1627.

Full text of the source
Abstract:
Numerous technologies have been adopted in modern SATCOM to ensure smooth communication. Among them, frequency hopping and onboard processing technology are the main subjects researched in this paper. First, models of the frequency hopping system and the onboard processing system were built in Simulink. Then, the effect of the coupling phenomenon was analyzed theoretically. Finally, the anti-jamming capability was studied by simulating different types of jamming. All of the above provides convincing support for further research on SATCOM technologies and will guide the development of satellite communication.
25

Chen, Yong Qing, Xin He Xu, Hua Ling Zhu, Shi De Ye, and Liang Tao Li. "Simulation Research on the Basic Platform's Automatic Leveling System Based on SimHydraulics." Applied Mechanics and Materials 345 (August 2013): 99–103. http://dx.doi.org/10.4028/www.scientific.net/amm.345.99.

Full text of the source
Abstract:
Based on the MATLAB/SimHydraulics toolbox, an automatic leveling system controlled by electro-hydraulic proportional valve was simulated and researched. With the help of Hydraulic components models in the SimHydraulics toolbox, the SimHydraulics Physical Network simulation and the Simulink control system simulation was integrated used, and the Simulink modules powerful numerical processing capability helped to improve the efficiency and accuracy of the system design. The simulation results showed that:The use of SimHydraulics toolbox on the simulation study of automatic leveling system controlled by electro-hydraulic proportional valve is feasible; The adjustment time of the automatic leveling system is short and the steady-state accuracy is high based on the PID controller.
26

Chang, Hao, Dafang Wang, Hui Wei, Qi Zhang, and GuangLi Dong. "Design of Tracked Model Vehicle Measurement and Control System Based on VeriStand and Simulink." MATEC Web of Conferences 175 (2018): 03047. http://dx.doi.org/10.1051/matecconf/201817503047.

Full text of the source
Abstract:
This paper presents the composition and working principle of the hardware and software of the measurement and control system based on VeriStand and Simulink for a tracked model vehicle. The hardware of the system is composed of a CompactRIO controller and acquisition cards, torque/speed sensors and a gyroscope. The vehicle control logic model and data processing model are built in Matlab/Simulink. VeriStand is adopted to manage models and interact with people. The system can control motor speed on both sides of the vehicle and collect data such as sprocket torque and speed, vehicle attitude on real-time. At last, we test and verify the system can work successfully.
27

Wen, Hong Yuan. "Design of DSP Video Image Processing System Control Software." Advanced Materials Research 139-141 (October 2010): 2299–302. http://dx.doi.org/10.4028/www.scientific.net/amr.139-141.2299.

Full text of the source
Abstract:
In order to realize the DSP Video Image Processing System works well in the highlight environment, the system control software is designed. In this control part, the ultra-low power MSP430 single chip microcomputer (MCU) is the core, which can be programmed to control the DSP Video Image Processing System, the video A/D converter and the highlight protection circuit by the Inter-Integrated Circuit (I2C) bus. The Image Processing algorithm models can be selected. Whether the highlight protection circuit is turned on or not depends on the comparison result of the environment light and the MCU light threshold. The control program code has been debugged and tested through the MSP development board and the IAR C-SPY debugger. The result shows the DSP Video Image Processing System Control Software is ideal.
28

Yadav, Piyush, Dhaval Salwala, Felipe Arruda Pontes, Praneet Dhingra, and Edward Curry. "Query-driven video event processing for the internet of multimedia things." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2847–50. http://dx.doi.org/10.14778/3476311.3476360.

Full text of the source
Abstract:
Advances in Deep Neural Network (DNN) techniques have revolutionized video analytics and unlocked the potential for querying and mining video event patterns. This paper details GNOSIS, an event processing platform to perform near-real-time video event detection in a distributed setting. GNOSIS follows a serverless approach where its component acts as independent microservices and can be deployed at multiple nodes. GNOSIS uses a declarative query-driven approach where users can write customize queries for spatiotemporal video event reasoning. The system converts the incoming video streams into a continuous evolving graph stream using machine learning (ML) and DNN models pipeline and applies graph matching for video event pattern detection. GNOSIS can perform both stateful and stateless video event matching. To improve Quality of Service (QoS), recent work in GNOSIS incorporates optimization techniques like adaptive scheduling, energy efficiency, and content-driven windows. This paper demonstrates the Occupational Health and Safety query use cases to show the GNOSIS efficacy.
29

Pal, Ratnabali, Arif Ahmed Sekh, Debi Prosad Dogra, Samarjit Kar, Partha Pratim Roy, and Dilip K. Prasad. "Topic-based Video Analysis." ACM Computing Surveys 54, no. 6 (July 2021): 1–34. http://dx.doi.org/10.1145/3459089.

Full text of the source
Abstract:
Manual processing of a large volume of video data captured through closed-circuit television is challenging due to various reasons. First, manual analysis is highly time-consuming. Moreover, as surveillance videos are recorded in dynamic conditions such as in the presence of camera motion, varying illumination, or occlusion, conventional supervised learning may not work always. Thus, computer vision-based automatic surveillance scene analysis is carried out in unsupervised ways. Topic modelling is one of the emerging fields used in unsupervised information processing. Topic modelling is used in text analysis, computer vision applications, and other areas involving spatio-temporal data. In this article, we discuss the scope, variations, and applications of topic modelling, particularly focusing on surveillance video analysis. We have provided a methodological survey on existing topic models, their features, underlying representations, characterization, and applications in visual surveillance’s perspective. Important research papers related to topic modelling in visual surveillance have been summarized and critically analyzed in this article.
30

Xiang, Tao, and Shaogang Gong. "Optimising dynamic graphical models for video content analysis." Computer Vision and Image Understanding 112, no. 3 (December 2008): 310–23. http://dx.doi.org/10.1016/j.cviu.2008.05.011.

Full text of the source
31

Chen, Datong, Qiang Liu, Mingui Sun, and Jie Yang. "Mining Appearance Models Directly From Compressed Video." IEEE Transactions on Multimedia 10, no. 2 (February 2008): 268–76. http://dx.doi.org/10.1109/tmm.2007.911835.

Full text of the source
32

Song, Wei, and Dian W. Tjondronegoro. "Acceptability-Based QoE Models for Mobile Video." IEEE Transactions on Multimedia 16, no. 3 (April 2014): 738–50. http://dx.doi.org/10.1109/tmm.2014.2298217.

Full text of the source
33

Hyder, Rakib, and M. Salman Asif. "Generative Models for Low-Dimensional Video Representation and Reconstruction." IEEE Transactions on Signal Processing 68 (2020): 1688–701. http://dx.doi.org/10.1109/tsp.2020.2977256.

Full text of the source
34

Kumar, Dr T. Senthil. "Video based Traffic Forecasting using Convolution Neural Network Model and Transfer Learning Techniques." Journal of Innovative Image Processing 2, no. 3 (June 17, 2020): 128–34. http://dx.doi.org/10.36548/jiip.2020.3.002.

Full text of the source
Abstract:
The ideas, algorithms and models developed for application in one particular domain can be applied to solving similar issues in a different domain using the modern concept termed transfer learning. The connection between spatiotemporal forecasting of traffic and video prediction is identified in this paper. With the developments in technology, traffic signals are replaced with smart systems and video streaming for analysis and maintenance of the traffic all over the city. Processing of these video streams requires a lot of effort due to the amount of data that is generated. This paper proposes a simplified technique for processing such voluminous data. A large data set of real-world traffic is used for prediction and forecasting of urban traffic. A combination of predefined kernels is used for spatial filtering, and several such transferred techniques are combined with convolutional artificial neural networks that use spectral graphs and time series models. Spatially regularized vector autoregression models and non-spatial time series models are the baseline traffic forecasting models that are compared for forecasting performance. In terms of training effort, development as well as forecasting accuracy, the efficiency of urban traffic forecasting is high on implementation of video prediction algorithms and models. Further, potential research directions are presented along with the obstacles and problems in transferring schemes.
35

Kolarik, Michal, Martin Sarnovsky, Jan Paralic, and Frantisek Babic. "Explainability of deep learning models in medical video analysis: a survey." PeerJ Computer Science 9 (March 14, 2023): e1253. http://dx.doi.org/10.7717/peerj-cs.1253.

Full text of the source
Abstract:
Deep learning methods have proven to be effective for multiple diagnostic tasks in medicine and have been performing significantly better in comparison to other traditional machine learning methods. However, the black-box nature of deep neural networks has restricted their use in real-world applications, especially in healthcare. Therefore, explainability of the machine learning models, which focuses on providing of the comprehensible explanations of model outputs, may affect the possibility of adoption of such models in clinical use. There are various studies reviewing approaches to explainability in multiple domains. This article provides a review of the current approaches and applications of explainable deep learning for a specific area of medical data analysis—medical video processing tasks. The article introduces the field of explainable AI and summarizes the most important requirements for explainability in medical applications. Subsequently, we provide an overview of existing methods, evaluation metrics and focus more on those that can be applied to analytical tasks involving the processing of video data in the medical domain. Finally we identify some of the open research issues in the analysed area.
36

Aksenova, Olesya, and Evgenia Nikolaeva. "Application of various approaches to modeling the gas emissions from TPP cleaning processes." E3S Web of Conferences 315 (2021): 01003. http://dx.doi.org/10.1051/e3sconf/202131501003.

Full text of the source
Abstract:
The article discusses the possibility of 3D computer modeling tools complex use based on the mathematical processing of the gas emissions from TPP cleaning process. The authors propose to consider an approach to designing a site for capturing solid particles in gas emissions that appear during the production activities of an industrial enterprise by modeling a technological site in various programs. Mathematical processing with the use of the E-network device and the creation of a 3D model enables to plan a site for capturing solid particles in gas emissions, choose the appropriate technology and thereby ensure the ecological effect of both existing and projected power plants. The authors present the results of processing the technological process of gas purification at thermal power plants in terms of E-networks using mathematical processing in the Simulink application, which displays a device model from the standard units available in the program and performs the necessary calculations. 3D models of individual pieces of equipment selected on the basis of mathematical processing calculations in the Simulink application were created using computer 3D modeling in a graphical editor. A 3D visualization of the technological site for capturing solid particles in gas emissions was carried out, enabling a visual display of the planned site at the design stage, which, if necessary, will allow an easy introduction of modifications to the project.
37

Kepalas, Vytautas, Česlovas Ramonas, Stanislovas Marcinkevičius, and Romualdas Konstantinas Masteika. "Research of Tight Band Coiling Dynamics." Solid State Phenomena 113 (June 2006): 271–76. http://dx.doi.org/10.4028/www.scientific.net/ssp.113.271.

Full text of the source
Abstract:
Purpose of the work is to analyze tight band coiling machine dynamics for processing wire, thread, texture, paper, leatherette or other elastic bands with a steady-state tension force. Band coiling block diagram and mathematical models for cases with one and two regulators has been made for research of elastic band coiling processing using software MATLAB-SIMULINK. Transient response curves of linear speeds in the band feed section and in a coiling machine, of band tension force and of relative elongation are presented. The digital model was used for obtaining parameters of controllers when parameters of feedback and regulators have been changing.
38

Kang, Samuel, Young-Min Seo, and Yong-Suk Choi. "Video Super Resolution Using a Selective Edge Aggregation Network." Applied Sciences 12, no. 5 (February 27, 2022): 2492. http://dx.doi.org/10.3390/app12052492.

Full text of the source
Abstract:
An edge map is a feature map representing the contours of the object in the image. There was a Single Image Super Resolution (SISR) method using the edge map, which achieved a notable SSIM performance improvement. Unlike SISR, Video Super Resolution (VSR) uses video, which consists of consecutive images with temporal features. Therefore, some VSR models adopted motion estimation and motion compensation to apply spatio-temporal feature maps. Unlike the models above, we tried a different method by adding edge structure information and its related post-processing to the existing model. Our model “Video Super Resolution Using a Selective Edge Aggregation Network (SEAN)” consists of a total of two stages. First, the model selectively generates an edge map using the target frame and also the neighboring frame. At this stage, we adopt the magnitude loss function so that the output of SEAN more clearly learns the contours of each object. Second, the final output is generated using the refinement (post-processing) module. SEAN shows more distinct object contours and better color correction compared to other existing models.
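
For readers new to edge maps: a plain gradient-magnitude map, as sketched below with OpenCV's Sobel operator, conveys the kind of contour information involved, although SEAN generates its edge maps selectively from the target and neighbouring frames with a learned network rather than a fixed filter.

    import cv2
    import numpy as np

    def edge_map(frame_gray):
        # Gradient magnitude of one frame, normalised to 8 bits.
        gx = cv2.Sobel(frame_gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(frame_gray, cv2.CV_32F, 0, 1, ksize=3)
        mag = cv2.magnitude(gx, gy)
        return (255 * mag / (mag.max() + 1e-6)).astype(np.uint8)
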
39

Setlak, Lucjan, and Rafał Kowalik. "Examination of Multi-Pulse Rectifiers of PES Systems Used on Airplanes Compliant with the Concept of Electrified Aircraft." Applied Sciences 9, no. 8 (April 12, 2019): 1520. http://dx.doi.org/10.3390/app9081520.

Full text of the source
Abstract:
This article focuses on power electronic multi-pulse 12-, 24- and 36-impulse rectifiers based on multi-winding rectifier transformers. The effectiveness of voltage processing with different variants of supply voltage sources is discussed and arguments are formulated for limiting oneself to 24-pulse processing, which is used in the latest technological solutions of modern aviation technology. The main purpose of this article is to conduct a study (analysis, mathematical models, simulations) of selected multi-pulse rectifiers in the context of testing their properties in relation to the impact on the electrified power supply network. The secondary objective of the article is to assess the possibility of using Matlab/Simulink to analyze the work of rectifier circuits implemented in aircraft networks compliant with the more/all electric aircraft (MEA/AEA) concept. The simulation tests included designing a typical auto-transformer rectifier unit (ATRU) system in the Simulink program and generating output voltage waveforms in this program in the absence of damage to the rectifier elements. In the final part of this work, based on a critical analysis of the literature on the subject of the study, simulations were made of exemplary rectifiers in the Matlab/Simulink programming environment along with their brief analysis. Practical conclusions resulting from the implementation of the MEA/AEA concept in modern aviation were formulated.
40

González-San Román, J. D., J. U. Liceaga-Castro, I. I. Siller-Alcalá, and E. Campero-Littlewood. "Structural Analysis of 8/6 Switched Reluctance Motor Linear and Non-linear Models." International Journal of Circuits, Systems and Signal Processing 15 (September 17, 2021): 1464–74. http://dx.doi.org/10.46300/9106.2021.15.159.

Full text of the source
Abstract:
This work presents the process of obtaining the simplified model of a switched reluctance motor (SRM) 8/6. Subsequently, the structure of the single-phase model is analyzed, obtaining an exact linearization and the zero dynamics of the system. Finally, the model is linearized at an operating point set at 2000 rpm. The model includes Coulomb plus viscous friction nonlinearity and an ideal inverter circuit based on a bridge converter topology. The simplified and linear models are simulated and compared in the Matlab®/Simulink software in order to validate the design of a classic controller using the linear model.
41

Uslan, M. M., R. Shen, and Y. Shragai. "The Evolution of Video Magnification Technology." Journal of Visual Impairment & Blindness 90, no. 6 (November 1996): 465–78. http://dx.doi.org/10.1177/0145482x9609000604.

Full text of the source
Abstract:
Closed-circuit television (CCTV) systems are the product of a long line of technological advances in several fields, including optics, electrical signal processing, and video display technology. Many different models are now on the market, and more advanced ones are frequently introduced. This article traces the development of early CCTV systems, examines CCTVs that are on the market today, and speculates on video magnification technology of the future, which will make extensive use of computer-related technology.
42

Delakis, Manolis, Guillaume Gravier, and Patrick Gros. "Audiovisual integration with Segment Models for tennis video parsing." Computer Vision and Image Understanding 111, no. 2 (August 2008): 142–54. http://dx.doi.org/10.1016/j.cviu.2007.09.002.

Full text of the source
43

Qiu, Ji, Lide Wang, Yu Hen Hu, and Yin Wang. "Two motion models for improving video object tracking performance." Computer Vision and Image Understanding 195 (June 2020): 102951. http://dx.doi.org/10.1016/j.cviu.2020.102951.

Full text of the source
44

Paz Penagos, Hernan. "OFDM comparison with FFT and DWT processing for DVB-T2 wireless channels." INGE CUC 14, no. 2 (December 19, 2018): 97–105. http://dx.doi.org/10.17981/ingecuc.14.2.2018.09.

Full text of the source
Abstract:
Introduction: Recent studies on the FFT processing (Fast Fourier Transform) or DWT (Discrete Wavelet Transform) of the OFDM signal (Orthogonal Frequency Division Multiplexing) have shown pros and cons for DVB-T2 (Digital Video Broadcasting-Second Generation Terrestrial) radio communications; however, the benefits of both types of processing have yet to be compared for the same scenario. Objective: The objective of this research is to compare the response of the wireless channel with AWGN noise (Additive White Gaussian Noise Channel) and Rayleigh and Rician fading in the UHF (Ultra High Frequency) band. Methodology: The transmission of DVB-T2 information with OFDM modulation and FFT and DWT processing was simulated in Matlab®, specifically in Simulink. Results: The results of the study proved to be more efficient for DWT system than FFT system, due to the low rate of erroneous bits, spectral efficiency and reduction of the Peak-to-Average Power Ratio (PAPR), for Eb / No relations greater than 10dB. Conclusions: In this article, we present the designs of both systems and the results of the research experience; likewise, the practical applicability of these systems is discussed, and improvements are suggested for future work.
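
The FFT-based OFDM chain under comparison reduces to a few array operations. The NumPy sketch below maps QPSK symbols onto 64 carriers, adds a cyclic prefix and passes the signal through an AWGN channel; it is a toy illustration only, whereas the paper's Simulink models add DVB-T2 framing, Rayleigh/Rician fading and the DWT variant.

    import numpy as np

    n_carriers, cp = 64, 16
    bits = np.random.randint(0, 2, 2 * n_carriers)
    symbols = (2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)   # QPSK mapping

    tx = np.fft.ifft(symbols)                                    # OFDM modulation
    tx = np.concatenate([tx[-cp:], tx])                          # cyclic prefix

    noise = 0.05 * (np.random.randn(tx.size) + 1j * np.random.randn(tx.size))
    rx = tx + noise                                              # AWGN channel

    rx_symbols = np.fft.fft(rx[cp:])                             # strip CP, demodulate
    rx_bits = np.empty_like(bits)
    rx_bits[0::2] = (rx_symbols.real > 0).astype(int)
    rx_bits[1::2] = (rx_symbols.imag > 0).astype(int)
    print("bit errors:", int(np.sum(bits != rx_bits)))
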
45

Phuc, Dang Thi, Tran Quang Trieu, Nguyen Van Tinh, and Dau Sy Hieu. "Video captioning in Vietnamese using deep learning." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 3 (June 1, 2022): 3092. http://dx.doi.org/10.11591/ijece.v12i3.pp3092-3103.

Full text of the source
Abstract:
With the development of today's society, demand for applications using digital cameras grows year by year. However, analyzing large amounts of video data is one of the most challenging issues. In addition to storing the data captured by the camera, intelligent systems are required to quickly analyze the data to react correctly to important situations. In this paper, we use deep learning techniques to build automatic models that describe movements in video. To solve the problem, we use three deep learning models: a sequence-to-sequence model based on a recurrent neural network, a sequence-to-sequence model with attention, and a transformer model. We evaluate the effectiveness of the approaches based on the results of the three models. To train these models, we use the Microsoft Research Video Description Corpus (MSVD) dataset, including 1970 videos and 85,550 captions translated into Vietnamese. In order to ensure the description of the content in Vietnamese, we also combine it with a natural language processing (NLP) model for Vietnamese.
46

Boyun, Vitaliy. "Directions of Development of Intelligent Real Time Video Systems." Application and Theory of Computer Technology 2, no. 3 (April 27, 2017): 48. http://dx.doi.org/10.22496/atct.v2i3.65.

Full text of the source
Abstract:
Real time video systems play a significant role in many fields of science and technology. The range of their applications is constantly increasing together with requirements to them, especially it concerns to real time video systems with the feedbacks. Conventional fundamentals and principles of real-time video systems construction are extremely redundant and do not take into consideration the peculiarities of real time processing and tasks, therefore they do not meet the system requirements neither in technical plan nor in informational and methodical one. Therefore, the purpose of this research is to increase responsiveness, productivity and effectiveness of real time video systems with a feedback during the operation with the high-speed objects and dynamic processes. The human visual analyzer is considered as a prototype for the construction of intelligent real time video systems. Fundamental functions, structural and physical peculiarities of adaptation and processes taking place in a visual analyzer relating to the information processing, are considered. High selectivity of information perception and wide parallelism of information processing on the retinal neuron layers and on the higher brain levels are most important peculiarities of a visual analyzer for systems with the feedback. The paper considers two directions of development of intelligent real time video systems. First direction based on increasing intellectuality of video systems at the cost of development of new information and dynamic models for video information perception processes, principles of control and reading parameters of video information from the sensor, adapting them to the requirements of concrete task, and combining of input processes with data processing. Second direction is associated with the development of new architectures for parallel perception and level-based processing of information directly on a video sensor matrix. The principles of annular and linear structures on the neurons layers, of close-range interaction and specialization of layers, are used to simplify the neuron network.
APA, Harvard, Vancouver, ISO, and other styles
47

Yadav, Piyush, Dhaval Salwala, Dibya Prakash Das, and Edward Curry. "Knowledge Graph Driven Approach to Represent Video Streams for Spatiotemporal Event Pattern Matching in Complex Event Processing." International Journal of Semantic Computing 14, no. 03 (September 2020): 423–55. http://dx.doi.org/10.1142/s1793351x20500051.

Full text of the source
Abstract:
Complex Event Processing (CEP) is an event processing paradigm for performing real-time analytics over streaming data and matching high-level event patterns. At present, CEP is limited to processing structured data streams. Video streams are complicated due to their unstructured data model, which limits the ability of CEP systems to perform matching over them. This work introduces a graph-based structure for continuously evolving video streams that enables a CEP system to query complex video event patterns. We propose the Video Event Knowledge Graph (VEKG), a graph-driven representation of video data. VEKG models video objects as nodes and their relationship interactions as edges over time and space. It creates a semantic knowledge representation of video data derived from the detection of high-level semantic concepts in the video using an ensemble of deep learning models. A CEP-based state optimization, the VEKG-Time Aggregated Graph (VEKG-TAG), is proposed over the VEKG representation for faster event detection. VEKG-TAG is a spatiotemporal graph aggregation method that provides a summarized view of the VEKG graph over a given time length. We defined a set of nine event pattern rules for two domains (Activity Recognition and Traffic Management), which act as queries applied over VEKG graphs to discover complex event patterns. To show the efficacy of our approach, we performed extensive experiments over 801 video clips across 10 datasets. The proposed VEKG approach was compared with other state-of-the-art methods and was able to detect complex event patterns over videos with an F-score ranging from 0.44 to 0.90. In the given experiments, the optimized VEKG-TAG reduced the number of VEKG nodes and edges by 99% and 93%, respectively, with 5.19× faster search time, achieving a sub-second median latency of 4–20 ms.
APA, Harvard, Vancouver, ISO, and other styles
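The sketch below is only a loose illustration of the idea summarized above, namely modelling detected video objects as graph nodes and their spatial relations as edges, with a toy "person near car" pattern query. The detections, the "near" rule, and all names are invented for the example; this is not the VEKG implementation.

```python
# Illustrative sketch only: object detections become graph nodes, same-frame
# spatial relations become edges, and a simple event pattern is matched.
import networkx as nx

detections = [  # (frame, object_id, label, bounding box) - fabricated toy data
    (1, "o1", "person", (10, 10, 50, 120)),
    (1, "o2", "car",    (40, 20, 160, 100)),
    (2, "o1", "person", (60, 12, 100, 122)),
    (2, "o2", "car",    (42, 20, 162, 100)),
]

g = nx.MultiDiGraph()
for frame, oid, label, box in detections:
    g.add_node(f"{oid}@{frame}", label=label, frame=frame, box=box)

def near(a, b):
    # crude relation: horizontal overlap of two boxes
    return not (a[2] < b[0] or b[2] < a[0])

# Add a "near" edge between objects that overlap within the same frame.
for u, du in g.nodes(data=True):
    for v, dv in g.nodes(data=True):
        if u != v and du["frame"] == dv["frame"] and near(du["box"], dv["box"]):
            g.add_edge(u, v, relation="near")

# Simple event pattern: a person near a car in any frame.
matches = [(u, v) for u, v, d in g.edges(data=True)
           if d["relation"] == "near"
           and g.nodes[u]["label"] == "person" and g.nodes[v]["label"] == "car"]
print(matches)
```

A real system would build such graphs continuously from a detector's output and aggregate them over time windows before matching, which is the role VEKG-TAG plays in the paper.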
48

Nicolas, Henri, and Mathieu Brulin. "Video traffic analysis using scene and vehicle models." Signal Processing: Image Communication 29, no. 8 (September 2014): 807–30. http://dx.doi.org/10.1016/j.image.2014.06.010.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
49

Kaur, Lakhwinder, Turki Aljrees, Ankit Kumar, Saroj Kumar Pandey, Kamred Udham Singh, Pankaj Kumar Mishra, and Teekam Singh. "Gated Recurrent Units and Recurrent Neural Network Based Multimodal Approach for Automatic Video Summarization." Traitement du Signal 40, no. 3 (June 28, 2023): 1227–34. http://dx.doi.org/10.18280/ts.400340.

Full text of the source
Abstract:
A typical video record aggregation system requires the concurrent performance of a large number of image processing tasks, including but not limited to image acquisition, pre-processing, segmentation, feature extraction, verification, and description. These tasks must be executed with utmost precision to ensure smooth system performance. Among these tasks, feature extraction and selection are the most critical. Feature extraction involves converting the large-scale image data into smaller mathematical vectors, and this process requires great skill. Various feature extraction models are available, including wavelet, cosine, Fourier, histogram-based, and edge-based models. The key objective of any feature extraction model is to represent the image data with minimal attributes and no loss of information. In this study, we propose a novel feature-variance model that detects differences in video features and generates feature-reduced video frames. These frames are then fed into a GRU-based RNN model, which classifies them as either keyframes or non-keyframes. Keyframes are then extracted to create a summarized video, while non-keyframes are reduced. Various key-frame extraction models are also discussed in this section, followed by a detailed analysis of the proposed summarization model and its results. Finally, we present some interesting observations about the proposed model and suggest ways to improve it.
APA, Harvard, Vancouver, ISO, and other styles
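A hedged sketch of the general pipeline described above: frame-to-frame feature variance is used to drop low-change frames, and a GRU then labels the retained frames as keyframes or non-keyframes. The feature dimension, the threshold, and the random features are placeholders, not the paper's model.

```python
# Sketch only: feature-variance reduction followed by GRU keyframe classification.
import torch
import torch.nn as nn

class KeyframeGRU(nn.Module):
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.cls = nn.Linear(hidden, 2)  # 0 = non-keyframe, 1 = keyframe

    def forward(self, feats):
        out, _ = self.gru(feats)
        return self.cls(out)             # per-frame logits

# Toy pipeline: keep frames whose feature change exceeds the mean variance,
# then classify the retained frames.
feats = torch.randn(1, 100, 512)                        # 100 frames of 512-d features
diffs = (feats[:, 1:] - feats[:, :-1]).pow(2).mean(-1)  # frame-to-frame feature variance
keep = torch.cat([torch.ones(1, 1, dtype=torch.bool), diffs > diffs.mean()], dim=1)
reduced = feats[keep].unsqueeze(0)                      # feature-reduced frame sequence
logits = KeyframeGRU()(reduced)
keyframe_idx = logits.argmax(-1).nonzero()              # indices predicted as keyframes
print(reduced.shape, keyframe_idx.shape)
```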
50

Tseng, Shu-Ming, Zhi-Ting Yeh, Chia-Yang Wu, Jia-Bin Chang, and Mehdi Norouzi. "Video Scene Detection Using Transformer Encoding Linker Network (TELNet)." Sensors 23, no. 16 (August 9, 2023): 7050. http://dx.doi.org/10.3390/s23167050.

Full text of the source
Abstract:
This paper introduces a transformer encoding linker network (TELNet) for automatically identifying scene boundaries in videos without prior knowledge of their structure. Videos consist of sequences of semantically related shots or chapters, and recognizing scene boundaries is crucial for various video processing tasks, including video summarization. TELNet utilizes a rolling window to scan through video shots, encoding their features extracted from a fine-tuned 3D CNN model (transformer encoder). By establishing links between video shots based on these encoded features (linker), TELNet efficiently identifies scene boundaries where consecutive shots lack links. TELNet was trained on multiple video scene detection datasets and demonstrated results comparable to other state-of-the-art models in standard settings. Notably, in cross-dataset evaluations, TELNet demonstrated significantly improved results (F-score). Furthermore, TELNet’s computational complexity grows linearly with the number of shots, making it highly efficient in processing long videos.
APA, Harvard, Vancouver, ISO, and other styles
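The sketch below gives a rough, simplified picture of the rolling-window idea behind TELNet: shot features are encoded window by window with a transformer encoder, neighbouring shots are "linked" by a similarity score, and a scene boundary is declared where the link is weak. TELNet's learned linker is replaced here by cosine similarity with an arbitrary threshold; the feature dimension, window size, and random shot features are all assumptions.

```python
# Simplified sketch of rolling-window shot encoding and boundary detection.
import torch
import torch.nn as nn
import torch.nn.functional as F

d_model, window = 256, 8
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)

shot_feats = torch.randn(1, 40, d_model)  # placeholder for 3D-CNN shot features
boundaries = []
for start in range(0, shot_feats.size(1) - window + 1, window):
    win = shot_feats[:, start:start + window]                    # rolling window of shots
    enc = encoder(win)                                           # transformer-encoded shots
    sim = F.cosine_similarity(enc[:, :-1], enc[:, 1:], dim=-1)   # link strength between neighbours
    for i in range(sim.size(1)):
        if sim[0, i] < 0.5:                                      # weak link: candidate boundary
            boundaries.append(start + i + 1)
print(boundaries)
```

Because each window is processed independently, the cost grows linearly with the number of shots, which mirrors the complexity property highlighted in the abstract.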
