Journal articles on the topic "VIDEO COMPONENT"

To see other types of publications on this topic, follow the link: VIDEO COMPONENT.

Format your source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "VIDEO COMPONENT".

Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, when these details are available in the metadata.

Browse journal articles on a wide variety of disciplines and compose your bibliography correctly.

1

Sari, Ilmiyati, Asep Juarna, Suryadi Harmanto, and Djati Kerami. "Background Estimation Using Principal Component Analysis Based on Limited Memory Block Krylov Subspace Optimization." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 5 (October 1, 2018): 2847. http://dx.doi.org/10.11591/ijece.v8i5.pp2847-2856.

Full text of source
Abstract:
<p>Given a video of 𝑀 frames of size ℎ × 𝑤, the background components are the matrix elements that remain relatively constant over the 𝑀 frames. In the PCA (principal component analysis) method these elements are referred to as "principal components". In video processing, background subtraction means removing the background component from the video. The PCA method is used to obtain the background component: it transforms the 3-dimensional video (ℎ × 𝑤 × 𝑀) into a 2-dimensional matrix (𝑁 × 𝑀), where 𝑁 is a linear array of size ℎ × 𝑤. The principal components are the dominant eigenvectors, which form the basis of an eigenspace. Limited-memory block Krylov subspace optimization is then proposed to improve the computational performance. The background estimate is obtained by projecting each input image (the first frame of each image sequence) onto the space spanned by the principal components. The procedure was run on a standard dataset, the SBI (Scene Background Initialization) dataset, consisting of 8 videos with resolutions in the interval [146 × 150, 352 × 240] and total frame counts in [258, 500]. Performance is reported with 8 metrics, notably (averaged over the 8 videos) the percentage of error pixels (0.24%), the percentage of clustered error pixels (0.21%), the multiscale structural similarity index (0.88 out of a maximum of 1), and the running time (61.68 seconds).</p>
APA, Harvard, Vancouver, ISO, and other styles
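The projection step the abstract describes can be sketched in a few lines of numpy. This is an illustrative toy (not the paper's Krylov-subspace implementation): flatten each ℎ × 𝑤 frame into a length-𝑁 vector, stack the 𝑀 frames into an 𝑁 × 𝑀 matrix, take the dominant eigenvectors as the "principal components", and estimate the background of a frame by projecting it onto that subspace. The synthetic video and the choice of one component are assumptions for the demo.

```python
import numpy as np

# Illustrative sketch of PCA-based background estimation on a tiny synthetic
# video: a static background plus a transient foreground blob in a few frames.
rng = np.random.default_rng(0)

h, w, M = 8, 8, 30
N = h * w
background = rng.uniform(0.4, 0.6, size=N)      # static scene, flattened
frames = np.tile(background, (M, 1)).T          # N x M data matrix
frames[:10, 15:20] += 0.5                       # transient foreground blob

k = 1                                           # number of principal components
mean = frames.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(frames - mean, full_matrices=False)
basis = U[:, :k]                                # dominant eigenvectors

x = frames[:, 0] - mean[:, 0]                   # first frame, centered
bg_est = mean[:, 0] + basis @ (basis.T @ x)     # projection onto the subspace

err = float(np.abs(bg_est - background).mean())
```

On this noise-free toy the centered matrix has rank one, so the projection recovers the background exactly; real videos need more components and the paper's Krylov optimization to keep the computation tractable.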
2

Булага Костянтин Миколайович. "ДИДАКТИКО-ТЕХНІЧНЕ ЗАБЕЗПЕЧЕННЯ НАВЧАЛЬНОЇ ДІЯЛЬНОСТІ ВИХОВАНЦІВ ДИТЯЧОГО ХОРЕОГРАФІЧНОГО КОЛЕКТИВУ". Science Review, no. 4(31) (April 30, 2020): 20–24. http://dx.doi.org/10.31435/rsglobal_sr/30042020/7052.

Full text of source
Abstract:
The article describes the didactic and technical support of the educational activity of pupils in a children's choreographic team as a set of didactic and technical components. The didactic component provides videos of educational and general-information content (videos of lessons, exercises, concert numbers, performances, etc.) using creolized media texts. The technical component is represented by the cloud-based YouTube service as a modern platform for hosting and openly accessing educational and general-purpose video content. The development of the didactic and technical support was based on the provisions of media didactics and the principles of clarity and multimedia. The didactic and technical support of the educational activities of the children's choreographic team is a YouTube video channel serving as a training tool, filled with videos of educational and general-information content (videos of lessons, exercises, concert numbers, performances, useful videos, etc.) that use creolized media-text organization and interactive interaction.
APA, Harvard, Vancouver, ISO, and other styles
3

Xiao, He. "Gradient-Mapping-Based Method for Video Enhancement." Applied Mechanics and Materials 685 (October 2014): 559–62. http://dx.doi.org/10.4028/www.scientific.net/amm.685.559.

Full text of source
Abstract:
To improve the visual quality of nighttime videos, an effective video enhancement method based on gradient mapping is proposed. The method first converts the color space from RGB to HSI; second, the horizontal and vertical gradient components of each frame are calculated on the I component. The frame's gradient is then enhanced by the proposed global mapping method. Meanwhile, the saturation (S) component is enhanced to reduce color distortion. Finally, the color video frames are reconstructed. Experimental results show that the proposed method obtains more appealing perceptual quality than state-of-the-art algorithms.
APA, Harvard, Vancouver, ISO, and other styles
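The first two steps of that pipeline can be sketched with numpy. This is a rough sketch only: the intensity channel is taken as the RGB channel mean (one common HSI convention), and a gamma curve stands in for the paper's global gradient-mapping function, which the abstract does not specify.

```python
import numpy as np

# Sketch: intensity component of an HSI-like decomposition, its horizontal
# and vertical gradient components, and a stand-in global brightening map.
rng = np.random.default_rng(1)
rgb = rng.uniform(0.0, 0.3, size=(16, 16, 3))   # a dark "nighttime" frame

I = rgb.mean(axis=2)                            # intensity (I) component

gx = np.diff(I, axis=1)                         # horizontal gradient component
gy = np.diff(I, axis=0)                         # vertical gradient component

I_enh = np.power(I, 0.5)                        # gamma < 1 lifts dark pixels
```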
4

Plotnitskiy, Yuriy E. "Structural and semantic characteristics of music videos, containing a choreographic component." Neophilology, no. 22 (2020): 336–45. http://dx.doi.org/10.20310/2587-6953-2020-6-22-336-345.

Full text of source
Abstract:
The paper studies specific characteristics of music videos containing a choreographic component. Different viewpoints on the notion of corporality are analysed, along with its significance for understanding the conceptual and axiological aspects of the song text, as well as specific features of contemporary dance from the angle of its influence on the audience. We then give a general description of the study material, which includes 20 music videos in different styles. The author provides a detailed analysis of the videos in which the choreographic visual component carries out the function of illustration, symbolically conveys the conceptual meaning of the song lyrics, or stands in contrast with it. The research has also revealed cases of the visual choreographic component performing a complementary function by adding extra semantic aspects to the meaning of the song, as well as a function that can be called "providing a storyline", where the visual component is characterised by absolute novelty in relation to the verbal component, i.e., the song lyrics. Such parameters as the correlation between the verbal and visual components of a music video, the functions of the choreographic visual component, and the specifics of conveying conceptual information by means of dance movements in a music video have been investigated.
APA, Harvard, Vancouver, ISO, and other styles
5

Skelton, J. R., S. Field, and C. Wiskin. "The video component of summative assessment." Patient Education and Counseling 34 (May 1998): S63—S64. http://dx.doi.org/10.1016/s0738-3991(98)90152-5.

Full text of source
APA, Harvard, Vancouver, ISO, and other styles
6

Rzeszewski, Theodore S., and Robert L. Pawelski. "Efficient Transmission of Digital Component Video." SMPTE Journal 95, no. 9 (September 1986): 889–98. http://dx.doi.org/10.5594/j03245.

Full text of source
APA, Harvard, Vancouver, ISO, and other styles
7

Desai, T. S., and D. C. Kulkarni. "Assessment of Interactive Video to Enhance Learning Experience: A Case Study." Journal of Engineering Education Transformations 35, S1 (January 1, 2022): 74–80. http://dx.doi.org/10.16920/jeet/2022/v35is1/22011.

Full text of source
Abstract:
In modern STEM classrooms, video learning holds an important place, since it offers flexibility of time, place, and content. But much improvement is needed to enhance the learning experience, because a conventional video lecture lacks the interaction that is an indispensable component of the teaching-learning process. Interactive video is highly recommended to resolve this issue, as it allows proactive and random access to video content and promotes learner-content interactivity by inserting interactive elements. Interactive video facilitates student engagement and active learning through its incorporated interactive components. The present study employed two settings: learning using demonstrative video and learning using interactive video. It was observed that students' performance improved significantly in the post-video quiz for the interactive video, and thus interactive video leads to better learner satisfaction. The study was carried out with 240 first-year engineering students in an Applied Physics course. We collected data from post-video quiz performance and student feedback. The grades obtained by the students in the post-video quizzes for the demonstrative and interactive videos were compared: for the interactive videos the average score was 82.79%, and for the demonstrative videos it was 64.41%. This study demonstrates the superiority of interactive video over linear, demonstrative video, as it enhances conceptual understanding and the attainment of the desired learning outcomes through the management of cognitive and germane load by increasing student engagement through active learning. Keywords: cognitive load; demonstrative video; germane load; interactive video; learning design; learning outcomes.
APA, Harvard, Vancouver, ISO, and other styles
8

Hou, Yan Yan. "Video Copy Detection Based on Principal Component Analysis." Applied Mechanics and Materials 548-549 (April 2014): 693–97. http://dx.doi.org/10.4028/www.scientific.net/amm.548-549.693.

Full text of source
Abstract:
Content-based video hashing has been proposed for the purpose of video copy detection. Conventional video copy detection algorithms apply an image hashing algorithm to either every frame or each key frame, which is sensitive to video variation. In the proposed algorithm, key frames that include temporal and spatial information are used for video copy detection: a discrete cosine transform (DCT) is applied to each video key frame, and a feature vector is extracted by principal component analysis (PCA). An average true positive rate of 99.31% and a false positive rate of 0.37% demonstrate the robustness and uniqueness of the proposed algorithm. Experiments indicate that it is easy to implement and more efficient than other video copy detection algorithms.
APA, Harvard, Vancouver, ISO, and other styles
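The DCT-then-PCA feature extraction can be sketched as a toy signature pipeline. This is in the spirit of the abstract only; the paper's exact feature layout, key-frame selection, and matching thresholds are not given, and the frame sizes and component counts below are assumptions.

```python
import numpy as np

# Toy DCT + PCA key-frame signature: low-frequency DCT coefficients per key
# frame, compressed by projecting onto the top principal directions.
def dct2(block):
    """2-D DCT-II via the orthonormal DCT matrix."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C @ block @ C.T

rng = np.random.default_rng(2)
key_frames = rng.uniform(size=(12, 8, 8))       # 12 key frames, 8x8 each

# Low-frequency coefficients (top-left 4x4 of the DCT) as raw features.
feats = np.array([dct2(f)[:4, :4].ravel() for f in key_frames])

# PCA: project centered features onto the top-2 principal directions.
centered = feats - feats.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
signature = centered @ Vt[:2].T                 # compact per-frame signature
```

A copy-detection system would then compare signatures between a query video and reference videos, e.g. by Euclidean distance.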
9

Lonhus, Kirill, Renata Rychtáriková, Ali Ghaznavi, and Dalibor Štys. "Estimation of rheological parameters for unstained living cells." European Physical Journal Special Topics 230, no. 4 (April 19, 2021): 1105–12. http://dx.doi.org/10.1140/epjs/s11734-021-00084-2.

Full text of source
Abstract:
In video records, objects moving in intracellular regions are often hard to detect and identify. To extract information on the intracellular flows, we propose an automatic method for reconstructing intracellular flow velocity fields based only on a recorded video of an unstained cell. The basis of the method is the detection of speeded-up robust features (SURF) and their assembly into trajectories. Two components of motion, directed and Brownian, are separated by an original method based on minimum covariance estimation. The Brownian component gives a spatially resolved diffusion coefficient. The directed component yields a velocity field and, after fitting the vorticity equation, an estimate of the spatially distributed effective viscosity. The method was applied to videos of a human osteoblast and a hepatocyte. The obtained parameters are in agreement with the literature data.
APA, Harvard, Vancouver, ISO, and other styles
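One step the abstract mentions, estimating a diffusion coefficient from the Brownian component of a trajectory, can be sketched via mean squared displacement (MSD = 4·D·Δt per step in 2-D). The SURF tracking and covariance-based separation are not reproduced; the trajectory below is a synthetic random walk with a known D.

```python
import numpy as np

# Estimate a diffusion coefficient from a simulated 2-D Brownian trajectory.
rng = np.random.default_rng(3)
D_true, dt, n_steps = 0.5, 0.1, 20000

# Each step component has variance 2*D*dt (standard Brownian motion).
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_steps, 2))
traj = np.cumsum(steps, axis=0)

disp = np.diff(traj, axis=0)
msd = (disp ** 2).sum(axis=1).mean()     # mean squared displacement per step
D_est = msd / (4 * dt)                   # MSD = 4*D*dt in two dimensions
```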
10

Rekha, Bhanu, and Ravi Kumar AV. "High Quality Video Assessment Using Salient Features." Indonesian Journal of Electrical Engineering and Computer Science 7, no. 3 (September 1, 2017): 761. http://dx.doi.org/10.11591/ijeecs.v7.i3.pp761-772.

Full text of source
Abstract:
<p class="Abstract">An efficient modified HEVC video compression technique based on high-quality saliency features is presented for the assessment of high-quality videos. To create an efficient saliency map, we extract a global temporal alignment component and robust spatial components. To obtain high-quality saliency, we combine spatial and temporal saliency features for different macroblocks in association with the transformed residuals. In this way, our saliency model outperforms existing techniques. In this paper, we generate high-reconstruction-quality video after compression on the SFU dataset. Our experimental results outperform existing techniques in terms of saliency map detection, PSNR, and high-resolution quality.</p>
APA, Harvard, Vancouver, ISO, and other styles
11

Söderström, U., and H. Li. "Asymmetrical principal component analysis for video coding." Electronics Letters 44, no. 4 (2008): 276. http://dx.doi.org/10.1049/el:20083631.

Full text of source
APA, Harvard, Vancouver, ISO, and other styles
12

Kramber, W. J., A. J. Richardson, P. R. Nixon, and K. Lulla. "Principal component analysis of aerial video imagery." International Journal of Remote Sensing 9, no. 9 (September 1988): 1415–22. http://dx.doi.org/10.1080/01431168808954949.

Full text of source
APA, Harvard, Vancouver, ISO, and other styles
13

Gao, Lianli, Pengpeng Zeng, Jingkuan Song, Yuan-Fang Li, Wu Liu, Tao Mei, and Heng Tao Shen. "Structured Two-Stream Attention Network for Video Question Answering." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6391–98. http://dx.doi.org/10.1609/aaai.v33i01.33016391.

Full text of source
Abstract:
To date, visual question answering (VQA) (i.e., image QA and video QA) is still a holy grail in vision and language understanding, especially for video QA. Compared with image QA, which focuses primarily on understanding the associations between image region-level details and corresponding questions, video QA requires a model to jointly reason across both the spatial and long-range temporal structures of a video as well as the text to provide an accurate answer. In this paper, we specifically tackle the problem of video QA by proposing a Structured Two-stream Attention network, namely STA, to answer a free-form or open-ended natural language question about the content of a given video. First, we infer rich long-range temporal structures in videos using our structured segment component and encode the text features. Then, our structured two-stream attention component simultaneously localizes important visual instances, reduces the influence of background video, and focuses on the relevant text. Finally, the structured two-stream fusion component incorporates different segments of the query- and video-aware context representation and infers the answers. Experiments on the large-scale video QA dataset TGIF-QA show that our proposed method significantly surpasses the best counterpart (i.e., with one representation for the video input) by 13.0%, 13.5%, 11.0% and 0.3 on the Action, Trans., FrameQA and Count tasks. It also outperforms the best competitor (i.e., with two representations) on the Action, Trans., and FrameQA tasks by 4.1%, 4.7%, and 5.1%.
APA, Harvard, Vancouver, ISO, and other styles
14

C, Chanjal. "Feature Re-Learning for Video Recommendation." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 30, 2021): 3143–49. http://dx.doi.org/10.22214/ijraset.2021.35350.

Full text of source
Abstract:
Predicting the relevance between two given videos with respect to their visual content is a key component of content-based video recommendation and retrieval, with applications in video recommendation, video annotation, category or near-duplicate video retrieval, video copy detection, and so on. To estimate video relevance, previous works utilize the textual content of videos, which leads to poor performance. The proposed method is feature re-learning for video relevance prediction, focusing on the visual content to predict the relevance between two videos. A given feature is projected into a new space by an affine transformation. Different from previous works that use a standard triplet ranking loss, the projection process is optimized by a novel negative-enhanced triplet ranking loss. To generate more training data, a data augmentation strategy is proposed that works directly on video features. This multi-level augmentation strategy benefits the feature re-learning and can be flexibly applied to frame-level or video-level features. The loss function considers the absolute similarity of positive pairs and supervises the feature re-learning process, and a new formula for video relevance computation is given.
APA, Harvard, Vancouver, ISO, and other styles
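The affine projection plus triplet ranking loss described above can be sketched in a few lines. This is a minimal sketch of the *standard* triplet loss over affinely transformed features; the paper's negative-enhanced variant and its training procedure are not reproduced, and the dimensions and margin below are assumptions.

```python
import numpy as np

# Affine feature re-learning + standard triplet ranking loss (sketch).
rng = np.random.default_rng(4)

d_in, d_out = 16, 8
W = rng.normal(size=(d_out, d_in)) * 0.1    # affine parameters to be learned
b = np.zeros(d_out)

def project(x):
    return W @ x + b                        # affine map into the new space

def triplet_loss(anchor, pos, neg, margin=0.2):
    """Hinge loss: the positive pair should be closer than the negative pair."""
    a, p, n = project(anchor), project(pos), project(neg)
    d_pos = np.linalg.norm(a - p)
    d_neg = np.linalg.norm(a - n)
    return max(0.0, margin + d_pos - d_neg)

anchor = rng.normal(size=d_in)
pos = anchor + 0.01 * rng.normal(size=d_in)  # near-duplicate video feature
neg = rng.normal(size=d_in)                  # unrelated video feature

loss = triplet_loss(anchor, pos, neg)
```

Training would minimize this loss over many (anchor, positive, negative) triplets to learn `W` and `b`.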
15

Liang, Xinyun. "Communication Characteristics and Development Path of Online Game Video." Art and Society 2, no. 1 (February 2023): 57–60. http://dx.doi.org/10.56397/as.2023.02.09.

Full text of source
Abstract:
Since the development of we-media platforms, game video has always been an important component field of vertically subdivided platform content, and the popularity of game videos results from the appeal of the video games themselves. Video games provide participants with wonderful experiences that cannot be obtained in reality. In this paper, the communication characteristics and the development path of game videos are discussed. In future development, how can game video creators continue to utilize the advantages of game subjects and we-media platforms to give full play to their professional knowledge and innovative consciousness, so as to achieve rapid development?
APA, Harvard, Vancouver, ISO, and other styles
16

Lee, Jin Young, Cheonshik Kim, and Ching-Nung Yang. "Reversible Data Hiding Using Inter-Component Prediction in Multiview Video Plus Depth." Electronics 8, no. 5 (May 9, 2019): 514. http://dx.doi.org/10.3390/electronics8050514.

Full text of source
Abstract:
With the advent of 3D video compression and Internet technology, 3D videos have been deployed worldwide. Data hiding is a branch of watermarking technology with many capabilities. In this paper, we use 3D video as a cover medium for secret communication using reversible data hiding (RDH). RDH is advantageous because the cover image can be completely recovered after extraction of the hidden data. Recently, Chung et al. introduced RDH for depth maps using prediction-error expansion (PEE) and rhombus prediction for the marking of 3D videos. Chung et al.'s method is efficient, but it does not exploit additional pixel resources to maximize data capacity. In this paper, we improve the embedding capacity using PEE, inter-component prediction, and allowable pixel ranges. Inter-component prediction utilizes the strong correlation between the texture image and the depth map in multiview video plus depth (MVD). Moreover, our proposed scheme provides the ability to control the quality of the depth map through a simple formula. Experimental results demonstrate that the proposed method is more efficient than existing RDH methods in terms of capacity.
APA, Harvard, Vancouver, ISO, and other styles
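The prediction-error expansion (PEE) idea at the core of such schemes can be shown on a 1-D pixel row. This is a minimal sketch with a left-neighbor predictor, an assumption for illustration; the paper's scheme uses rhombus and inter-component prediction on MVD depth maps. The point it demonstrates is reversibility: the expanded prediction error carries the payload bit and the cover signal is recovered exactly.

```python
import numpy as np

# Minimal PEE embed/extract on a 1-D pixel row (left-neighbor predictor).
def pee_embed(pixels, bits):
    out = pixels.astype(int).copy()
    for i, b in enumerate(bits, start=1):
        pred = int(pixels[i - 1])          # original left neighbor
        e = int(pixels[i]) - pred          # prediction error
        out[i] = pred + 2 * e + b          # expand the error, append the bit
    return out

def pee_extract(marked):
    restored = marked.copy()
    bits = []
    for i in range(1, len(marked)):
        pred = int(restored[i - 1])        # already restored = original value
        e2 = int(marked[i]) - pred
        bits.append(e2 & 1)                # payload bit is the error's LSB
        restored[i] = pred + (e2 >> 1)     # shrink the error back
    return bits, restored

pixels = np.array([100, 102, 101, 99, 105, 100])
bits = [1, 0, 1, 1, 0]
marked = pee_embed(pixels, bits)
rec_bits, restored = pee_extract(marked)
```

A practical scheme also clamps the expansion to the allowable pixel range (the "allowable pixel ranges" the abstract mentions) so embedding never overflows.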
17

Dewi Juliah Ratnaningsih and Siti Hadijah Hasanah. "Development of Website-Based Statistics Learning Videos." JTP - Jurnal Teknologi Pendidikan 24, no. 2 (August 28, 2022): 271–82. http://dx.doi.org/10.21009/jtp.v24i2.27153.

Full text of source
Abstract:
One source of learning in the learning process is learning materials. In distance education, the learning resources prepared for students must be suitable for independent study. The separation of lecturers and students needs to be bridged by learning materials that students can understand. A video is a form of learning material that can be used to help students understand the material. Through videos, students can learn anywhere and anytime, according to the time available. In a video, the duration of the material delivery is very significant; likewise, components such as the intro, greeting, and outro greatly support student motivation. The effective video duration for learning website-based statistical data analysis is 10-15 minutes. The component of the material presentation most in demand by students is the discussion of sample questions and their application using the R software.
APA, Harvard, Vancouver, ISO, and other styles
18

Lutkovska, Natalia. "AUDIOVISUAL COMPONENT IN THE CYCLE OF DEVELOPING COMMUNICATIVE FOREIGN LANGUAGE COMPETENCE IN MONOLOGUE PRODUCTION BASED ON IMPLICIT SPECIALIZING OF NON-LINGUISTIC STUDENTS." АRS LINGUODIDACTICAE, no. 1 (2017): 62–73. http://dx.doi.org/10.17721/2663-0303.2017.1.10.

Full text of source
Abstract:
Background: Video is an audio-visual medium, with the sound and vision functioning as separate components or played together. On the strength of these two modes of video application, methodologists distinguish between two basic approaches: 1) a manipulative prognostic strategy and 2) an integrated audio-video combined strategy. The former fosters students' ability to predict developments in the video content in prospect or retrospect and to produce utterances on the basis of what they have seen or heard. The latter strategy is of more communicative value, as it presents complete communicative situations with verbal and non-verbal aspects of communication in the real world. Purpose: The purpose of our research is to devise audiovisual implicit specializing with a view to developing the integrated socio-cultural and professionally oriented communicative foreign language competence of non-linguistic students at the primary stage of higher education on the basis of the integrated audio-video combined strategy. Results: The implementation of the audiovisual component implies: 1) meeting the requirements of the National Curriculum for Universities in English for Specific Purposes (ESP); 2) using video for implicit specializing; 3) following successive stages of gradual speech-skill development; 4) employing collective interaction patterns. Guided by the Curriculum, we selected videos with typical communicative situations in the areas of economy and tourism. Though covering communication in non-professional spheres, these videos contain quite a number of lexical units that can be employed to develop integrated socio-cultural and professionally oriented communicative foreign language competence at the primary stage of higher education. We also devised successive stages of gradual speech-skill development with video and proposed exercises based on role play and collective interaction patterns. Discussion: Compared with implicit specializing based on printed materials, audiovisual implicit specializing has a number of advantages. The audiovisual component: doubles the number of lexical units memorized by students; provides articulation patterns for lexical units of the economic and tourist registers to be perceived and followed by students; promotes the development of speech fluency at the stage of synchronized reproductive speaking while viewing the video with the sound off; and imparts a dynamic character to communicative exercises.
APA, Harvard, Vancouver, ISO, and other styles
19

Chau, Gustavo, and Paul Rodríguez. "Panning and Jitter Invariant Incremental Principal Component Pursuit for Video Background Modeling." Journal of Electrical and Computer Engineering 2019 (February 3, 2019): 1–15. http://dx.doi.org/10.1155/2019/7675805.

Full text of source
Abstract:
Video background modeling is an important preprocessing stage for various applications, and principal component pursuit (PCP) is among the state-of-the-art algorithms for this task. One of the main drawbacks of PCP is its sensitivity to jitter and camera movement. This problem has only been partially solved by a few methods devised for jitter or small transformations. However, such methods cannot handle the case of moving or panning cameras in an incremental fashion. In this paper, we greatly expand the results of our earlier work, in which we presented a novel, fully incremental PCP algorithm, named incPCP-PTI, which was able to cope with panning scenarios and jitter by continuously aligning the low-rank component to the current reference frame of the camera. To the best of our knowledge, incPCP-PTI is the first low-rank plus additive incremental matrix method capable of handling these scenarios in an incremental way. The results on synthetic videos and Moseg, DAVIS, and CDnet2014 datasets show that incPCP-PTI is able to maintain a good performance in the detection of moving objects even when panning and jitter are present in a video. Additionally, in most videos, incPCP-PTI obtains competitive or superior results compared to state-of-the-art batch methods.
APA, Harvard, Vancouver, ISO, and other styles
20

Zhang, Zhenduo. "Cross-Category Highlight Detection via Feature Decomposition and Modality Alignment." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3525–33. http://dx.doi.org/10.1609/aaai.v37i3.25462.

Full text of source
Abstract:
Learning an autonomous highlight video detector with good transferability across video categories, called Cross-Category Video Highlight Detection (CC-VHD), is crucial for practical application on video-based media platforms. To tackle this problem, we first propose a framework that treats CC-VHD as learning a category-independent highlight feature representation. Under this framework, we propose a novel module, named the Multi-task Feature Decomposition Branch, which jointly conducts label prediction, cyclic feature reconstruction, and adversarial feature reconstruction to decompose the video features into two independent components: a highlight-related component and a category-related component. In addition, we propose to align the visual and audio modalities in one aligned feature space before conducting modality fusion, which has not been considered in previous works. Finally, extensive experimental results on three challenging public benchmarks validate the efficacy of our paradigm and its superiority over existing state-of-the-art approaches to video highlight detection.
APA, Harvard, Vancouver, ISO, and other styles
21

Mao, Jianguo, Wenbin Jiang, Hong Liu, Xiangdong Wang, and Yajuan Lyu. "Inferential Knowledge-Enhanced Integrated Reasoning for Video Question Answering." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13380–88. http://dx.doi.org/10.1609/aaai.v37i11.26570.

Full text of source
Abstract:
Recently, video question answering has attracted growing attention. It involves answering a question based on a fine-grained understanding of video multi-modal information. Most existing methods have successfully explored the deep understanding of visual modality. We argue that a deep understanding of linguistic modality is also essential for answer reasoning, especially for videos that contain character dialogues. To this end, we propose an Inferential Knowledge-Enhanced Integrated Reasoning method. Our method consists of two main components: 1) an Inferential Knowledge Reasoner to generate inferential knowledge for linguistic modality inputs that reveals deeper semantics, including the implicit causes, effects, mental states, etc. 2) an Integrated Reasoning Mechanism to enhance video content understanding and answer reasoning by leveraging the generated inferential knowledge. Experimental results show that our method achieves significant improvement on two mainstream datasets. The ablation study further demonstrates the effectiveness of each component of our approach.
APA, Harvard, Vancouver, ISO, and other styles
22

Fischer, Tillmann, Paul Stumpf, Peter E. H. Schwarz, and Patrick Timpel. "Video-based smartphone app (‘VIDEA bewegt’) for physical activity support in German adults: a single-armed observational study." BMJ Open 12, no. 1 (January 2022): e052818. http://dx.doi.org/10.1136/bmjopen-2021-052818.

Full text of source
Abstract:
Objectives: The primary objective of this study was to investigate the effect of the video-based smartphone app 'VIDEA bewegt' over eight programme weeks on physical activity in German adults. Design: The study used a single-arm observational design, assessing the app's effectiveness under real-life conditions. Data were collected from July 2019 to July 2020. Setting: The app enables users to access video-based educational content via their smartphone; a clinical visit or in-person contact was not required. Participants: All individuals registered in the freely available app were invited to take part in the study. Interventions: The app aims to increase physical activity in everyday life. It combines educative videos on lifestyle-related benefits and instructional videos of strength and endurance exercises to do at home with motivational components like goal setting, documentation of progress, and personalised messages. Primary and secondary outcome measures: Primary outcomes were physical activity based on MET (metabolic equivalent) minutes per week and step counts. Secondary outcomes included physical self-efficacy (motivational, maintenance, and recovery self-efficacy) and health-related quality of life (Mental Health Component Summary and Physical Health Component Summary scores). Results: Of 97 people included in the data analysis, 55 successfully completed the programme and all questionnaires. Significant increases over eight programme weeks (between T0 and T2) were observed in physical activity based on MET minutes per week, health-related quality of life, and recovery self-efficacy. Time spent sitting and body mass index significantly decreased for those completing the programme. Conclusions: Although significant benefits of physical activity were observed following a complete-case analysis, the results should be treated with caution. Studies with larger, less heterogeneous samples and robust study designs able to measure causal effects would be desirable. Trial registration number: DRKS00017392.
APA, Harvard, Vancouver, ISO, and other styles
23

강민구 and JeanHun Chung. "Research for the component of perceptible video content." Korean Journal of Art and Media 10, no. 2 (November 2011): 119–28. http://dx.doi.org/10.36726/cammp.2011.10.2.119.

Full text of source
APA, Harvard, Vancouver, ISO, and other styles
24

Choppin, Jeffrey, and Kevin Meuwissen. "Threats to Validity in the edTPA Video Component." Action in Teacher Education 39, no. 1 (February 2017): 39–53. http://dx.doi.org/10.1080/01626620.2016.1245638.

Full text of source
APA, Harvard, Vancouver, ISO, and other styles
25

Liu, Sheng, Mingming Gu, Qingchun Zhang, and Bing Li. "Principal component analysis algorithm in video compressed sensing." Optik 125, no. 3 (February 2014): 1149–53. http://dx.doi.org/10.1016/j.ijleo.2013.07.120.

Full text of source
APA, Harvard, Vancouver, ISO, and other styles
26

Rodriguez, Paul, and Brendt Wohlberg. "Incremental Principal Component Pursuit for Video Background Modeling." Journal of Mathematical Imaging and Vision 55, no. 1 (October 26, 2015): 1–18. http://dx.doi.org/10.1007/s10851-015-0610-z.

Full text of source
APA, Harvard, Vancouver, ISO, and other styles
27

Acker, David E. "Parallel Component Analog Video Timing and Amplitude Considerations." SMPTE Journal 96, no. 7 (July 1987): 654–59. http://dx.doi.org/10.5594/j03112.

Full text of source
APA, Harvard, Vancouver, ISO, and other styles
28

Gauci, Jean, Kenneth P. Camilleri, and Owen Falzon. "Principal component analysis for dynamic thermal video analysis." Infrared Physics & Technology 109 (September 2020): 103359. http://dx.doi.org/10.1016/j.infrared.2020.103359.

Full text of source
APA, Harvard, Vancouver, ISO, and other styles
29

Becerra Martinez, Helard, Andrew Hines, and Mylène C. Q. Farias. "Perceptual Quality of Audio-Visual Content with Common Video and Audio Degradations." Applied Sciences 11, no. 13 (June 23, 2021): 5813. http://dx.doi.org/10.3390/app11135813.

Повний текст джерела
Анотація:
Audio-visual quality assessment remains as a complex research field. A great effort is being made to understand how visual and auditory domains are integrated and processed by humans. In this work, we analyzed and compared the results of three psychophisical experiments that collected quality and content scores given by a pool of subjects. The experiments include diverse content audio-visual material, e.g., Sports, TV Commercials, Interviews, Music, Documentaries and Cartoons, impaired with several visual (bitrate compression, packet-loss, and frame-freezing) and auditory (background noise, echo, clip, chop) distortions. Each experiment explores a particular domain. In Experiment 1, the video component was degraded with visual artifacts, meanwhile, the audio component did not suffer any type of degradation. In Experiment 2, the audio component was degraded while the video component remained untouched. Finally, in Experiment 3 both audio and video components were degraded. As expected, results confirmed a dominance of the visual component in the overall audio-visual quality. However, a detailed analysis showed that, for certain types of audio distortions, the audio component played a more important role in the construction of the overall perceived quality.
30

Yong, Chen, Shuai Feng, and Zhan Di. "Real-time Colorized Video Images Optimization Method in Scotopic Vision." TELKOMNIKA Indonesian Journal of Electrical Engineering 15, no. 2 (August 1, 2015): 321. http://dx.doi.org/10.11591/tijee.v15i2.1545.

Abstract:
In a low-light environment, surveillance video images have lower contrast, less information and uneven brightness. To solve this problem, this paper puts forward a contrast resolution compensation algorithm based on a human visual perception model. It extracts the Y component from the YUV video image originally acquired by the camera to obtain contrast feature parameters, then adaptively applies a proportional-integral contrast resolution compensation to low-light pixels in the Y component and an exponential contrast resolution compensation to highlight pixels, enhancing the brightness of the video image while keeping the U and V components unchanged. It then compresses the video images and transmits them via the internet. Finally, it decodes and displays the video image on the device of the intelligent surveillance system. The experimental results show that the algorithm can effectively improve the contrast resolution of the video image and maintain its color well. It also meets the real-time requirement of video monitoring.
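The abstract's core idea — adjusting only the luma (Y) channel while leaving U and V untouched — can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: the function name, the fixed threshold, and the specific gain/gamma curves are assumptions standing in for the proportional-integral and exponential compensations the authors describe.

```python
import numpy as np

def compensate_luma(y, threshold=128, gain=1.3, gamma=0.8):
    """Piecewise contrast compensation on the luma (Y) channel only.

    Dark pixels (< threshold) get a proportional boost; bright pixels
    get a power-law adjustment. U and V are never touched, so the
    chrominance of the video is preserved by construction.
    """
    y = y.astype(np.float32)
    out = np.where(
        y < threshold,
        np.clip(y * gain, 0, 255),                      # boost shadows
        np.clip(255.0 * (y / 255.0) ** gamma, 0, 255),  # lift highlights gently
    )
    return out.astype(np.uint8)
```

Applied per frame before encoding, this keeps the compensation cheap enough for the real-time requirement the abstract mentions.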
31

Faiza, Delsina, Thamrin Thamrin, and Ahmaddul Hadi. "PENGARUH PENGGUNAAN ELECTRONIC COMPONENT TESTER SEBAGAI MEDIA PEMBELAJARAN TERHADAP HASIL BELAJAR SISWA PROGRAM KEAHLIAN TEKNIK AUDIO VIDEO SMKN 1 SUMBAR." Jurnal Teknologi Informasi dan Pendidikan 10, no. 3 (October 3, 2017): 73–86. http://dx.doi.org/10.24036/tip.v10i3.22.

Abstract:
This study aims to determine the effect of using an Electronic Component Tester as an instructional medium on students' learning outcomes in the Audio Video Engineering Program at SMK N 1 Sumbar, for the basic competence of classifying passive and active components in electrical and electronic circuits. The research method used, particularly for the design of the Electronic Component Tester, is Research and Development (R&D), consisting of media design, validation, revision, product manufacture, and testing. After the learning media were validated, an experiment was conducted in which the Electronic Component Tester was used as a learning medium for the students of the Audio Video Engineering Program at SMKN 1 Sumbar for the basic competence of classifying passive and active components in electrical and electronic circuits. Prior to treatment, students were given a pre-test to determine their initial ability. After the Electronic Component Tester was used as a learning medium, a post-test was given to determine the group's improvement. The pre-test yielded an average score of 50.60, while the post-test yielded an average score of 61.90; learning outcomes increased by 11.31 points after the Electronic Component Tester media were applied. The calculation at significance level α = 0.05 gives tcount > ttable, i.e., 2.905 > 2.056. It can be concluded that student learning outcomes improved after using the Electronic Component Tester instructional media for the basic competence of classifying passive and active components in electrical and electronic circuits among Audio Video Engineering Program students at SMK N 1 Sumbar. Keywords: Electronic Component Tester, Learning Media, Learning Outcomes.
32

Li, Minghui, Zhaohong Li, and Zhenzhen Zhang. "A VVC Video Steganography Based on Coding Units in Chroma Components with a Deep Learning Network." Symmetry 15, no. 1 (December 31, 2022): 116. http://dx.doi.org/10.3390/sym15010116.

Abstract:
Versatile Video Coding (VVC) is the latest video coding standard, but currently, most steganographic algorithms are based on High-Efficiency Video Coding (HEVC). The concept of symmetry is often adopted in deep neural networks. With the rapid rise of new multimedia, video steganography shows great research potential. This paper proposes a VVC steganographic algorithm based on Coding Units (CUs). Considering the novel techniques in VVC, the proposed steganography only uses chroma CUs to embed secret information. Based on modifying the partition modes of chroma CUs, we propose four different embedding levels to satisfy the different needs of visual quality, capacity and video bitrate. In order to reduce the bitrate of stego-videos and improve the distortion caused by modifying them, we propose a novel convolutional neural network (CNN) as an additional in-loop filter in the VVC codec to achieve better restoration. Furthermore, the proposed steganography algorithm based on chroma components has an advantage in resisting most of the video steganalysis algorithms, since few VVC steganalysis algorithms have been proposed thus far and most HEVC steganalysis algorithms are based on the luminance component. Experimental results show that the proposed VVC steganography algorithm achieves excellent performance on visual quality, bitrate cost and capacity.
33

Zhu, Xiaobin, Zhuangzi Li, Xiao-Yu Zhang, Changsheng Li, Yaqi Liu, and Ziyu Xue. "Residual Invertible Spatio-Temporal Network for Video Super-Resolution." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5981–88. http://dx.doi.org/10.1609/aaai.v33i01.33015981.

Abstract:
Video super-resolution is a challenging task, which has attracted great attention in research and industry communities. In this paper, we propose a novel end-to-end architecture, called Residual Invertible Spatio-Temporal Network (RISTN), for video super-resolution. The RISTN can sufficiently exploit the spatial information from low-resolution to high-resolution, and effectively models the temporal consistency across consecutive video frames. Compared with existing recurrent convolutional network based approaches, RISTN is much deeper but more efficient. It consists of three major components: in the spatial component, a lightweight residual invertible block is designed to reduce information loss during feature transformation and provide robust feature representations. In the temporal component, a novel recurrent convolutional model with residual dense connections is proposed to construct a deeper network and avoid feature degradation. In the reconstruction component, a new fusion method based on a sparse strategy is proposed to integrate the spatial and temporal features. Experiments on public benchmark datasets demonstrate that RISTN outperforms the state-of-the-art methods.
34

Maryanchik, Viktoriya A., and Larisa V. Popova. "Learning about the Arctic and the Russian North (Experience of Distance Schools)." Arctic and North, no. 47 (June 28, 2022): 268–76. http://dx.doi.org/10.37482/issn2221-2698.2022.47.268.

Abstract:
The article describes the experience of seasonal language and culture schools organized for foreign students at the Northern (Arctic) Federal University. One of the factors that attract participants is the Arctic component in the program content, acquaintance with nature, culture, customs and modern life of the Russian North. The pandemic situation made only a distance format of such schools possible. At the same time, the task of preserving the cultural specificity realized in the activity component was solved. The authors described the main content components of the program of the School of Russian Language and Culture related to the topic of the North and the Arctic: video tours of Solovki, the Museum “Malye Korely”, city tours (videos about Arkhangelsk), texts about Arctic research and travelling to the Arctic, “Northern text” of Russian literature, video lectures and master classes. It is emphasized that the images of the Arctic and the Russian North are the conceptual core of the content of remote seasonal schools. The next distance school of Russian Language and Culture is announced.
35

Chen, Jia, Cui-xia Ma, Hong-an Wang, Hai-yan Yang, and Dong-xing Teng. "Sketch Based Video Annotation and Organization System in Distributed Teaching Environment." International Journal of Distributed Systems and Technologies 1, no. 4 (October 2010): 27–41. http://dx.doi.org/10.4018/jdst.2010100103.

Abstract:
As the use of instructional video is becoming a key component of e-learning, there is an increasing need for a distributed system which supports collaborative video annotation and organization. In this paper, the authors construct a distributed environment on top of NaradaBrokering to support collaborative operations on video material when users are located in different places. The concept of video annotation is enriched, making it a powerful medium for improving instructional video organization and viewing. With panorama-based and interpolation-based methods, all related users can annotate or organize videos simultaneously. With these annotations, a video organization structure is consequently built by linking them with other video clips or annotations. Finally, an informal user study was conducted; the results show that this system improves the efficiency of video organizing and viewing and enhances users' participation in the design process, with a good user experience.
36

Selvan, B., and R. J. Green. "Objective noise performance evaluation of component video signals using an electronic color video simulator." IEEE Transactions on Broadcasting 39, no. 3 (1993): 327–30. http://dx.doi.org/10.1109/11.237711.

37

Le, Huy D., Tuyen Ngoc Le, Jing-Wein Wang, and Yu-Shan Liang. "Singular Spectrum Analysis for Background Initialization with Spatio-Temporal RGB Color Channel Data." Entropy 23, no. 12 (December 7, 2021): 1644. http://dx.doi.org/10.3390/e23121644.

Abstract:
In video processing, background initialization aims to obtain a scene without foreground objects. Recently, the background initialization problem has attracted the attention of researchers because of its real-world applications, such as video segmentation, computational photography, video surveillance, etc. However, the background initialization problem is still challenging because of the complex variations in illumination, intermittent motion, camera jitter, shadow, etc. This paper proposes a novel and effective background initialization method using singular spectrum analysis. Firstly, we extract the video’s color frames and split them into RGB color channels. Next, RGB color channels of the video are saved as color channel spatio-temporal data. After decomposing the color channel spatio-temporal data by singular spectrum analysis, we obtain the stable and dynamic components using different eigentriple groups. Our study indicates that the stable component contains a background image and the dynamic component includes the foreground image. Finally, the color background image is reconstructed by merging RGB color channel images obtained by reshaping the stable component data. Experimental results on the public scene background initialization databases show that our proposed method achieves a good color background image compared with state-of-the-art methods.
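The decomposition described above — stacking the spatio-temporal channel data and separating a stable (background) component from a dynamic (foreground) one — can be approximated with a plain SVD on the frame matrix. This is a hedged sketch of the general principle, not the paper's SSA pipeline: the function name and the rank-1 choice for the stable component are illustrative assumptions.

```python
import numpy as np

def stable_component(frames):
    """Split a stack of grayscale frames into stable and dynamic parts.

    frames: array of shape (num_frames, h, w). Each frame is flattened
    into a column of a spatio-temporal data matrix; the leading singular
    triple approximates the stable (background) component, and the
    remainder is treated as the dynamic (foreground) component.
    """
    n, h, w = frames.shape
    X = frames.reshape(n, h * w).astype(np.float64).T   # (pixels, frames)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    stable = s[0] * np.outer(U[:, 0], Vt[0])            # rank-1 reconstruction
    dynamic = X - stable
    background = stable.mean(axis=1).reshape(h, w)      # average stable frames
    return background, dynamic.T.reshape(n, h, w)
```

For color video, the same split would be applied to each RGB channel independently and the three background images merged, as in the paper.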
38

Al-Jarrah, Mohammad A., and Faruq A. Al-Omari. "Fast Video Shot Boundary Detection Technique based on Stochastic Model." International Journal of Computer Vision and Image Processing 6, no. 2 (July 2016): 1–17. http://dx.doi.org/10.4018/ijcvip.2016070101.

Abstract:
A video is composed of a set of shots, where a shot is defined as a sequence of consecutive frames captured by one camera without interruption. A shot transition can be abrupt (hard cut) or gradual (fade, dissolve, or wipe). Shot boundary detection is an essential component of video processing; these boundaries are utilized in many aspects of video processing, such as video indexing and video on demand. In this paper, the authors propose a new shot boundary detection algorithm that detects all types of shot boundaries with high accuracy. The algorithm is developed on the basis of a global stochastic model for the video stream, which utilizes the joint characteristic function and consequently the joint moments. The proposed algorithm is implemented and tested against different types of categorized videos, and it detects cuts, fades, dissolves, and wipe transitions. Experimental results show that the algorithm has high performance; the computed precision and recall rates validate this.
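The paper's stochastic model is not reproduced here, but the baseline it builds on — comparing consecutive frames and flagging a boundary when the difference exceeds a threshold — can be sketched with a histogram distance. The function name, bin count, and threshold are illustrative assumptions; this sketch covers abrupt cuts only.

```python
import numpy as np

def detect_hard_cuts(frames, bins=32, threshold=0.5):
    """Flag hard cuts by thresholding the histogram distance between
    consecutive frames. Gradual transitions (fade/dissolve/wipe) would
    need windowed statistics; this covers abrupt cuts only."""
    cuts = []
    prev = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()                 # normalize to a distribution
        if prev is not None:
            d = 0.5 * np.abs(hist - prev).sum()  # total variation distance
            if d > threshold:
                cuts.append(i)                   # frame i starts a new shot
        prev = hist
    return cuts
```

Returned indices mark the first frame of each new shot, which is the information indexing and video-on-demand systems consume.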
39

Zhu, Tao, and Wei Jun Hong. "Effect Evaluation of Video Surveillance System on the Basis of Principal Component Analysis." Applied Mechanics and Materials 713-715 (January 2015): 479–81. http://dx.doi.org/10.4028/www.scientific.net/amm.713-715.479.

Abstract:
The effect evaluation of a video surveillance system is important for determining whether the system delivers its expected protection. A comprehensive effect evaluation index system for video surveillance systems is established. The Principal Component Analysis (PCA) method is applied to the established index system to obtain a new evaluation index system. Case studies show that an effect evaluation method applying this index system is capable of evaluating a video surveillance system effectively and quantitatively. The protective effect of the video surveillance system is evaluated objectively on the basis of the new index system with PCA.
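The PCA step used here (and in the WeChat study below) — reducing a matrix of indicator scores to a few dominant components — can be sketched as follows. The function name and the synthetic data are illustrative assumptions; the paper's actual indicator system is not reproduced.

```python
import numpy as np

def principal_components(scores, k=2):
    """Project standardized indicator scores onto the top-k principal
    components. `scores` is (systems, indicators); returns the reduced
    (systems, k) representation and the explained-variance ratios."""
    X = (scores - scores.mean(axis=0)) / scores.std(axis=0)  # standardize
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]            # sort eigenvalues descending
    vals, vecs = vals[order], vecs[:, order]
    return X @ vecs[:, :k], vals[:k] / vals.sum()
```

The explained-variance ratios tell the evaluator how many components are worth keeping in the new index system.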
40

Dos Reis, Antonio, Nataliia Morze, and Svitlana Vasylenko. "DIDACTIC VIDEO CREATION AS A COMPONENT OF THE IMPLEMENTATION XXI CENTURY TEACHERS’ METHODOLOGICAL COMPETENCIES." OPEN EDUCATIONAL E-ENVIRONMENT OF MODERN UNIVERSITY, no. 4 (2018): 1–10. http://dx.doi.org/10.28925/2414-0325.2018.4.1a10.

Abstract:
This article is devoted to the results of a study of a developing method for creating didactic video and its use in the educational process of higher education institutions. The authors surveyed teachers about their experience in creating video and their knowledge of the principles, stages, and tools for creating and editing video. The article justifies the choice of topic by the predominance of the visual style of information perception among today's students. The IT competencies and methodological competencies of teachers necessary for the creation of didactic video are listed. A list of tools that can be used to create a didactic video is proposed, with algorithms for using these tools in the preparation of high-quality educational video materials. Some information about online and software applications for video editing is provided. In addition, the authors emphasize that the quality of a didactic video is determined by the quality of its multimedia presentations, the requirements for which are also described in the article.
41

Liu, Yi. "Evaluation Study on the Network Impact Index of WeChat Based on Principal Component Analysis." Journal of Computational and Theoretical Nanoscience 13, no. 10 (October 1, 2016): 7676–79. http://dx.doi.org/10.1166/jctn.2016.6085.

Abstract:
WeChat software is an important social tool in modern society. This paper discusses the network impact of WeChat from ten aspects including WeChat popularity, attention, video observability, network reputation, function usability, dissemination speed of information, transmission ratio of positive energy and impact of WeChat on network economy, politics and culture, and questionnaires on these ten influence factors are distributed to college students for investigation. Principal component analysis is used to deal with the survey results, the principal components of the ten factors are extracted, and the results show that WeChat popularity, attention, video observability, network reputation and function usability are the main components, in which WeChat popularity, attention and video observability are the factors having the greatest impact on the calculation. And this paper presents the function relationship between the main principal components of WeChat network impact index and these ten influence factors, to evaluate the network impact index of WeChat.
42

Romanenko, Olena. "UKRAINIAN VIDEO POETRY AS AN AESTHETIC PHENOMENON OF THE MODERN LITERARY PROCESS." Bulletin of Taras Shevchenko National University of Kyiv. Literary Studies. Linguistics. Folklore Studies, no. 31 (2022): 61–65. http://dx.doi.org/10.17721/1728-2659.2022.31.12.

Abstract:
The results of the study of video poetry as a genre are presented. The object of analysis is Ukrainian video poetry, created within the projects CYCLOP (Ukraine), "Overcoming Silence" (Ella Yevtushenko, Daryna Gladun, Lesyk Panasyuk, and others), #DigitalShevchenko, "Power of Speech", "Subjective" (Sofia Bezverkha and Marichka Yarmola), "rosdilovi" (Olga Mykhailyuk and Serhiy Zhadan), "By the way" (Natalia Parshchyk, etc.), ZEBRA festival (Germany). Ukrainian video poetry has been developing since the 2000s and was first presented as part of the CYCLOP festival, which became both a venue for a video poetry competition and a platform for theoretical discussions. The significance of poetic intonation and the interaction of reader and author in video poetry is described. Genre features of video poetry are established, and types of genre transformation are identified. It is proved that video poetry is a syncretic multimedia genre, which combines verbal and nonverbal components. The verbal component is the voice of the reader or poet, as well as the melody that complements them. The non-verbal component is based on the development of a visual metaphor or game plot; it also expresses the perception of the images, metaphors, and other elements of the poem. The movement of frames in video poetry is based either on the metric-rhythmic identity of the poem (and the melody that accompanies the reading of poetry) and the video sequence, or on the dissonance between the verbal and nonverbal components. Genre features of video poetry can be described as a combination of kinetic images, sound, visual metaphors, the voice of the reader, melody, and others. Genre transformations of video poetry are realized through animation, feature film scripts, improvisational transformation, and interaction between different arts or the use of digital technology. An important aspect of video poetry is the interaction of the author and the recipient. Watching video poetry, on the one hand, actualizes associations (verbal and nonverbal) in the memory of the recipient; on the other, it enhances the aesthetic experience of perceiving the work, producing a powerful aesthetic and emotional effect.
43

Lykkegaard, E., and K. R. Jensen. "Video component system controller for use with scrambled cable." IEEE Transactions on Consumer Electronics 35, no. 3 (1989): 469–75. http://dx.doi.org/10.1109/30.44306.

44

Adeli, Vida, Ehsan Fazl-Ersi, and Ahad Harati. "A component-based video content representation for action recognition." Image and Vision Computing 90 (October 2019): 103805. http://dx.doi.org/10.1016/j.imavis.2019.08.009.

45

Dalton, Chris J., and Norman W. Green. "Experience with an Experimental Digital Component Video Production Facility." SMPTE Journal 98, no. 5 (May 1989): 348–52. http://dx.doi.org/10.5594/j02756.

46

Awais, Muhammad, Sohail Abbas, Farahat Ali, and Ali Ashraf. "Media Exposure and Fear About Crime: An Application of Mediated Fear Model." Journal of Social Sciences Research, no. 67 (July 30, 2020): 720–26. http://dx.doi.org/10.32861/jssr.67.720.726.

Abstract:
Social behavior can be troubled by the constant concern of crime. Research on the relationship between traditional media crime exposure, social media crime videos, and fear about crime is scarce. The present study is designed to investigate whether social media exposure, TV news crime viewing, and crime drama exposure are directly or indirectly associated with fear about crime. The theoretical framework of the study is based on the mediated fear model and cultivation theory. A sample of 371 university students was selected through a convenience sampling technique. SPSS 25 was used to analyze the data, and Model 4 of Process Macro was used to examine the mediating role of the cognitive component of fear of crime (perceived seriousness, perceived risk, and perceived control). The results show that television news crime viewing, crime drama exposure, and social media crime video exposure are positively associated with fear about crime. Moreover, the three cognitive components of fear of crime played a mediating role between traditional media exposure and fear of crime. In addition, the relationship between social media crime video exposure and fear about crime was mediated by the cognitive component of fear of crime.
47

Singh, Raahat Devender, and Naveen Aggarwal. "Optical Flow and Prediction Residual Based Hybrid Forensic System for Inter-Frame Tampering Detection." Journal of Circuits, Systems and Computers 26, no. 07 (March 17, 2017): 1750107. http://dx.doi.org/10.1142/s0218126617501079.

Abstract:
In the wake of widespread proliferation of inexpensive and easy-to-use digital content editing software, digital videos have lost the idealized reputation they once held as universal, objective and infallible evidence of occurrence of events. The pliability of digital content and its innate vulnerability to unobtrusive alterations causes us to become skeptical of its validity. However, in spite of the fact that digital videos may not always present a truthful picture of reality, their usefulness in today’s world is incontrovertible. Therefore, the need to verify the integrity and authenticity of the contents of a digital video becomes paramount, especially in critical scenarios such as defense planning and legal trials where reliance on untrustworthy evidence could have grievous ramifications. Inter-frame tampering, which involves insertion/removal/replication of sets of frames into/from/within a video sequence, is among the most un-convoluted and elusive video forgeries. In this paper, we propose a potent hybrid forensic system that detects inter-frame forgeries in compressed videos. The system encompasses two forensic techniques. The first is a novel optical flow analysis based frame-insertion and removal detection procedure, where we focus on the brightness gradient component of optical flow and detect irregularities caused therein by post-production frame-tampering. The second component is a prediction residual examination based scheme that expedites detection and localization of replicated frames in video sequences. Subjective and quantitative results of comprehensive tests on an elaborate dataset under diverse experimental set-ups substantiate the effectuality and robustness of the proposed system.
48

Zheng, Kun, Junjie Shen, Guangmin Sun, Hui Li, and Yu Li. "Shielding facial physiological information in video." Mathematical Biosciences and Engineering 19, no. 5 (2022): 5153–68. http://dx.doi.org/10.3934/mbe.2022241.

Abstract:
With the recent development of non-contact physiological signal detection methods based on videos, it is possible to obtain physiological parameters, such as an individual's heart rate and its variability, from ordinary video alone. Therefore, personal physiological information may be leaked unknowingly as videos spread, which may cause privacy or security problems. In this paper a new method is proposed which can shield physiological information in the video without significantly reducing video quality. Firstly, the principle of the most widely used physiological signal detection algorithm, remote photoplethysmography (rPPG), was analyzed. Then face regions of interest (ROIs) containing physiological information with a high signal-to-noise ratio were selected. Two physiological information forgery operations are conducted on the ROIs: single-channel periodic noise addition with blur filtering, and brightness fine-tuning. Finally, the processed ROI images are merged back into the video frames to obtain the processed video. Experiments were performed on the VIPL-HR video dataset. The interference efficiencies of the proposed method on the two mainly used rPPG methods, Independent Component Analysis (ICA) and the Chrominance-based Method (CHROM), are 82.9% and 84.6% respectively, which demonstrates the effectiveness of the proposed method.
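The "single-channel periodic noise addition" operation can be sketched as follows: inject a faint sinusoid into one color channel of the face ROI so that rPPG extraction locks onto the fake rhythm instead of the real pulse. This is an illustrative sketch only — the function name, the default amplitude/frequency, and the omission of the paper's blur filtering and brightness fine-tuning steps are all assumptions.

```python
import numpy as np

def shield_rppg(frames, channel=1, amp=2.0, freq=1.2, fps=30.0):
    """Add a faint periodic perturbation to one color channel of a face
    ROI so that rPPG pulse extraction picks up a fake rhythm.

    frames: (n, h, w, 3) uint8; amp is kept small so visual quality is
    barely affected; freq (Hz) plays the role of the fake "heart rate".
    """
    n = frames.shape[0]
    t = np.arange(n) / fps
    noise = amp * np.sin(2 * np.pi * freq * t)   # fake pulse waveform
    out = frames.astype(np.float32)
    out[..., channel] += noise[:, None, None]    # per-frame channel offset
    return np.clip(out, 0, 255).astype(np.uint8)
```

A freq of 1.2 Hz corresponds to a plausible but wrong 72 bpm, which is why a detector that averages the perturbed channel over the ROI recovers the injected rhythm rather than the subject's.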
49

Khilya, Anna. "VIDEO PROJECTS AS A COMPONENT OF INCLUSIVE COMPETENCE OF FUTURE TEACHERS." ENVIRONMENT. TECHNOLOGIES. RESOURCES. Proceedings of the International Scientific and Practical Conference 2 (June 13, 2023): 132–35. http://dx.doi.org/10.17770/etr2023vol2.7257.

Abstract:
In our paper, we propose to focus on the use of video products in the educational process of teacher education in Ukraine. First of all, it is important for us to present a justification of the features and importance of using video projects as part of theoretical material / educational content / online and offline courses. After all, the use of video projects allows future teachers to deepen their knowledge, including audiovisual involvement in the perception of terminology, complex elements of teacher organization, legislation and other theoretical aspects. Secondly, we believe it is worth presenting the experience of involving Ukrainian students of pedagogical specialties in video projects (in particular, the creation of a social video, etc.). After all, such work allows them to reassess their own skills in using technology, acquire new knowledge, formulate a strategy for implementing project-based learning in secondary school practice, and build strong cause-and-effect relationships between knowledge and the experience of applying it through technology and social media. All of this together allows us to observe the formation of a future teacher who is not separated from the challenges of the technological world, but who uses the opportunities of our time in a harmonious and high-quality way.
50

Othman, Hanis Salwani, Syamsul Bahrin Zaibon, and Ahmad Hisham Zainal Abidin. "The Significance of Edutainment Concept in Video-Based Learning in Proposing the Elements of Educational Music Video for Children’s Learning." International Journal of Interactive Mobile Technologies (iJIM) 16, no. 05 (March 8, 2022): 91–106. http://dx.doi.org/10.3991/ijim.v16i05.23711.

Abstract:
Video-based learning (VBL) has long been used in practice in education to help students enhance their level of understanding. Educational content for VBL has been widely produced on various platforms to support achieving the optimum level of student performance. Additionally, the concept of edutainment using video in education is gaining traction, particularly in children's learning. Implementing the edutainment notion in the learning process is a critical component of successful learning. Previous research has examined the beneficial effects of edutainment using video for learning purposes; it has been shown to improve the effectiveness of the learning process and generate enjoyable learning experiences for students. As the paper is aimed at children, the EMV can capture their attention even more than a standard educational video does, due to its interactive content. However, edutainment for educational music videos in particular has not been studied much. Thus, the objective of this paper is two-fold: i) to review and discuss the conceptual approach of edutainment and its significance for student learning experiences, and ii) to propose the components and elements of edutainment in the educational music video (EMV) for children's learning. The findings of this study outline the characteristics of edutainment that can be utilised as criteria for selecting the finest EMV in learning for children.