Journal articles on the topic "Convert video files to mp4"

To see the other types of publications on this topic, follow the link: Convert video files to mp4.

Format your source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 27 journal articles for your research on the topic "Convert video files to mp4".

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the online abstract whenever available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Parapat, Rosdelima. "Kompresi Video Digital Menggunakan Metode Embedded Zerotree Wavelet (EZW)." Journal of Computing and Informatics Research 1, no. 3 (July 31, 2022): 65–70. http://dx.doi.org/10.47065/comforch.v1i3.320.

Full text of the source
Abstract:
In the multimedia era, the processing of moving images, usually called digital video, is developing very quickly, with formats competing on image and sound quality. Video files in MP4 format are very widely used on social media today, which leaves little storage space remaining in memory. Data compression is a process that converts an input data stream (the original data) into another data stream (the compressed data) of smaller size. Compression reduces the size of data to produce a digital representation that is dense or compact but still represents the same quantity of information. A digital video file is a computer file used to store a collection of digital components such as video, audio, metadata, chapter divisions, and titles at once, which can be played through certain software on a computer. Many data compression methods are currently available, but this study discusses the working principle of the EZW method. The Embedded Zerotree Wavelet (EZW) method is an image compression algorithm that is simple and very effective, sorts the bits in the bit stream according to their importance, and produces a fully embedded code.
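The core idea in this abstract, converting an input data stream into a smaller stream that still carries the same information, can be sketched with a lossless codec from the Python standard library (zlib here is only a stand-in for the wavelet-based EZW coder, which is considerably more involved):

```python
import zlib

def compress_stream(data, level=9):
    """Convert an input data stream into a (hopefully) smaller compressed stream."""
    return zlib.compress(data, level)

def decompress_stream(blob):
    """Recover the original stream exactly; lossless, unlike EZW's lossy modes."""
    return zlib.decompress(blob)

# A highly redundant byte stream, like flat regions of a video frame, compresses well.
original = bytes([i % 16 for i in range(10_000)])
compressed = compress_stream(original)
print(len(original), "->", len(compressed), "bytes")
assert decompress_stream(compressed) == original
```

The same "input stream in, smaller stream out" contract holds for EZW, except that EZW additionally orders bits by importance so the stream can be truncated at any point for a lossy preview.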
APA, Harvard, Vancouver, ISO, and other styles
2

Siregar, Rewindo. "Perancangan Aplikasi Duplicate Video Scanner Menerapkan Algoritma Sha-256." Bulletin of Artificial Intelligence 1, no. 1 (April 30, 2022): 25–32. http://dx.doi.org/10.62866/buai.v1i1.6.

Full text of the source
Abstract:
In this increasingly advanced era, new technologies that could hardly be imagined before keep emerging, from computers to mobile phones and much more. One of these technologies is the MPEG-4 (Moving Picture Experts Group 4) video player, better known as the MP4 player. An MP4 player lets a person watch video digitally, with no need for cassettes or CDs as in the past, and allows millions of people around the world to exchange recordings through networked computers. The MP4 format, which such players commonly play, is one of the most frequently used formats for video data storage. In cryptography, a hash function is a one-way function that produces a fixed-length value from a message; here the message is a video file. A hash function can be used to generate the identity of a video file: if two or more video files have the same hash value, it can be concluded that the files are duplicates. This makes it easy for users to find and delete duplicate video files.
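The duplicate-detection principle the abstract describes can be sketched with Python's standard hashlib; the file names and contents below are invented for illustration, and the chunked reading is an assumption made so that large video files do not have to fit in memory:

```python
import hashlib
import tempfile
from collections import defaultdict
from pathlib import Path

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash a (possibly large) video file in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(paths):
    """Group files whose SHA-256 digests collide, i.e. byte-identical content."""
    groups = defaultdict(list)
    for p in paths:
        groups[sha256_of_file(p)].append(p)
    return [g for g in groups.values() if len(g) > 1]

# Demo on throwaway files standing in for a video library (names are invented).
with tempfile.TemporaryDirectory() as d:
    a = Path(d, "clip_a.mp4"); a.write_bytes(b"\x00fake-mp4-payload" * 100)
    b = Path(d, "clip_b.mp4"); b.write_bytes(b"\x00fake-mp4-payload" * 100)  # copy of a
    c = Path(d, "clip_c.mp4"); c.write_bytes(b"\x01other-bytes-here" * 100)
    dups = find_duplicates([a, b, c])
print([[p.name for p in group] for group in dups])  # the a/b pair is flagged
```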
APA, Harvard, Vancouver, ISO, and other styles
3

D.P., Gangwar, Anju Pathania, Anand -, and Shivanshu -. "AUTHENTICATION OF DIGITAL MP4 VIDEO RECORDINGS USING FILE CONTAINERS AND METADATA PROPERTIES." International Journal of Computer Science Engineering 10, no. 2 (April 30, 2021): 28–38. http://dx.doi.org/10.21817/ijcsenet/2021/v10i2/211002004.

Full text of the source
Abstract:
The authentication of digital video recordings plays a very important role in forensic science as well as in other crime investigations, and the forensic examination of digital video continuously faces new challenges. At present, video authentication is carried out on the basis of pixel-based analysis. Changes in technology mean that a new approach is required for authenticating digital video recordings. In the present work a new approach, the analysis of media information and the structural analysis of the containers (boxes/atoms) of the MP4 file format, is applied to distinguish original from edited videos. The work is limited to the MP4 file format because MP4 is the compressed format most widely used in mobile phones for video recording and transmission. For this purpose, we recorded more than 200 video samples using more than 20 mobile phones of different makes and models, and edited them with more than 12 open-source video editors. The original and edited MP4 video files were analyzed for their metadata and for the structural contents of the different file containers (boxes/atoms) using various freeware tools. The details of the work are described below.
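The container structure the paper analyses follows the ISO Base Media File Format, in which an MP4 file is a sequence of boxes (atoms), each prefixed by a 4-byte big-endian size and a 4-byte type. A minimal parser over a synthetic file (a sketch of the format, not the authors' tooling) might look like:

```python
import struct

def parse_boxes(data):
    """Walk top-level ISO BMFF (MP4) boxes: 4-byte big-endian size + 4-byte type."""
    boxes, offset = [], 0
    while offset + 8 <= len(data):
        size, btype = struct.unpack_from(">I4s", data, offset)
        if size == 1:  # 64-bit "largesize" follows the type field
            size = struct.unpack_from(">Q", data, offset + 8)[0]
        if size < 8:   # malformed box; a structural red flag in itself
            break
        boxes.append((btype.decode("ascii", "replace"), size))
        offset += size
    return boxes

def make_box(btype, payload):
    return struct.pack(">I", 8 + len(payload)) + btype + payload

# A minimal synthetic file: the box order (e.g. ftyp/moov/mdat vs ftyp/mdat/moov)
# is one of the structural fingerprints that differs across recorders and editors.
sample = (make_box(b"ftyp", b"isom\x00\x00\x02\x00isomiso2")
          + make_box(b"moov", b"\x00" * 16)
          + make_box(b"mdat", b"\xde\xad\xbe\xef" * 8))
print(parse_boxes(sample))
```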
APA, Harvard, Vancouver, ISO, and other styles
4

ALEKSEY S., GERASKIN, and UKOLOV RODION V. "RESEARCH ON APPLICATION OF NEURAL NETWORKS FOR RESTORING DAMAGED VIDEO FILES OF AVI AND MP4 FORMATS." CASPIAN JOURNAL: Control and High Technologies 55, no. 3 (2021): 25–32. http://dx.doi.org/10.21672/2074-1707.2021.55.3.025-032.

Full text of the source
Abstract:
In the modern world, video files play a special role. The development of video compression algorithms and the growth of the Internet's capabilities make it possible to transfer video files, and damage inevitably occurs during transfer. Accordingly, the question arises of restoring a damaged file and obtaining information from it. The article considers the most commonly used video file extensions, AVI and MP4. The study revealed that the most common damage is to the headers, which leads to errors when players open the files; data is damaged less often. Data corruption causes a certain fragment of the video file either to be played with errors and distortions or to be skipped. The article discusses the possibility of recovering a damaged video file using the removal of undistorted data and proposes an algorithm for analyzing frames using a neural network. Within the algorithm, a neural network identifies damaged frames in the video data. The algorithm was implemented as a software product. In the first stage of checking the efficiency of the algorithm, one frame of each video file under study was deliberately distorted. Experimental verification of the developed algorithm revealed that it provides high accuracy in detecting distorted frame sequences.
APA, Harvard, Vancouver, ISO, and other styles
5

Zawali, Bako, Richard A. Ikuesan, Victor R. Kebande, Steven Furnell, and Arafat A-Dhaqm. "Realising a Push Button Modality for Video-Based Forensics." Infrastructures 6, no. 4 (April 2, 2021): 54. http://dx.doi.org/10.3390/infrastructures6040054.

Full text of the source
Abstract:
Complexity and sophistication among multimedia-based tools have made it easy for perpetrators to conduct digital crimes such as counterfeiting, modification, and alteration without being detected. It may not be easy to verify the integrity of video content that, for example, has been manipulated digitally. To address this perennial investigative challenge, this paper proposes the integration of a forensically sound push button forensic modality (PBFM) model for the investigation of the MP4 video file format as a step towards automated video forensic investigation. An open-source multimedia forensic tool was developed based on the proposed PBFM model. A comprehensive evaluation of the efficiency of the tool against file alteration showed that the tool was capable of identifying falsified files, which satisfied the underlying assertion of the PBFM model. Furthermore, the outcome can be used as a complementary process for enhancing the evidence admissibility of MP4 video for forensic investigation.
APA, Harvard, Vancouver, ISO, and other styles
6

Rakha Maulana, Bahteramon Bintang Sanjaya Manurung, Nathanael Berliano Novanka Putra, and Aqwam Rosadi Kardian. "Efficiency Analysis of Compression Software (WINRAR and 7-Zip) Across Diverse Data Types on Windows 11 and Ubuntu 23.10." Jurnal Info Sains : Informatika dan Sains 13, no. 03 (December 11, 2023): 921–26. http://dx.doi.org/10.54209/infosains.v13i03.3505.

Full text of the source
Abstract:
This paper presents a comprehensive analysis of the performance and efficiency of two widely used compression software, WINRAR and 7-Zip, across various data types. The study focuses on evaluating their effectiveness on different operating systems, specifically Windows 11 and Ubuntu 23.10. The analysis encompasses considerations such as compression ratios, resource utilization, and processing times. WINRAR and 7-Zip are examined in diverse scenarios, including the compression of text files (.txt), image files (.png), audio files (.flac), and video files (.mp4). The study reveals notable variations in compression outcomes influenced by intrinsic complexities of each file format. Moreover, the investigation extends beyond the initially studied operating systems, suggesting potential applications on other platforms like Kali Linux. The findings contribute insights into the nuanced performance of compression software across varied data types and operating environments, facilitating informed decision-making for users seeking optimal compression solutions.
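A rough sense of how compression outcomes vary by data type can be had with the standard library's zlib (DEFLATE) and lzma (the core algorithm of 7-Zip's .7z format) standing in for the benchmarked tools; the payloads below are synthetic models of text-like and already-compressed media data:

```python
import lzma
import random
import time
import zlib

def benchmark(name, data):
    """Compress one payload with DEFLATE (zlib) and LZMA (7-Zip's core algorithm)
    and report (name, codec, compression ratio, elapsed seconds)."""
    rows = []
    for codec, fn in (("deflate", zlib.compress), ("lzma", lzma.compress)):
        t0 = time.perf_counter()
        out = fn(data)
        rows.append((name, codec, len(out) / len(data), time.perf_counter() - t0))
    return rows

# Text-like data is highly redundant; random bytes model already-compressed
# media such as .mp4 payloads, which barely shrink further.
random.seed(0)
text_like = ("the quick brown fox jumps over the lazy dog\n" * 500).encode()
video_like = bytes(random.getrandbits(8) for _ in range(20_000))

for row in benchmark("text", text_like) + benchmark("video", video_like):
    print("{} {} ratio={:.3f} time={:.4f}s".format(*row))
```

This reproduces the paper's qualitative finding that the intrinsic complexity of each file format, not just the tool, dominates the achievable ratio.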
APA, Harvard, Vancouver, ISO, and other styles
7

Koptyra, Katarzyna, and Marek R. Ogiela. "Steganography in IoT: Information Hiding with Joystick and Touch Sensors." Sensors 23, no. 6 (March 20, 2023): 3288. http://dx.doi.org/10.3390/s23063288.

Full text of the source
Abstract:
This paper describes a multi-secret steganographic system for the Internet-of-Things. It uses two user-friendly sensors for data input: thumb joystick and touch sensor. These devices are not only easy to use, but also allow hidden data entry. The system conceals multiple messages into the same container, but with different algorithms. The embedding is realized with two methods of video steganography that work on mp4 files, namely, videostego and metastego. These methods were chosen because of their low complexity so that they may operate smoothly in environments with limited resources. It is possible to replace the suggested sensors with others that offer similar functionality.
APA, Harvard, Vancouver, ISO, and other styles
8

Krittinatham, W., K. Kaewkhong, and N. Emarat. "Python programming code for stellar photometry in astrophysics teaching on a cloud computing service." Journal of Physics: Conference Series 2431, no. 1 (January 1, 2023): 012038. http://dx.doi.org/10.1088/1742-6596/2431/1/012038.

Full text of the source
Abstract:
Nowadays, various software is used for both education and astronomy research. Photometry usually requires licensed software and high-performance computers, which is a funding limitation for some schools in Thailand. In this article we therefore develop and present the Demonstration Photometry Scripts for Astrophysics Teaching (DPSAT version 1.0). The program is designed to run on cloud computing services via an internet browser, avoiding the pain points of hardware and operating-system requirements. DPSAT is written in Python, a flexible, low-cost, and popular language, in the Jupyter Notebook online editor. Moreover, the code supports home-use image and video file formats, i.e., JPG, PNG, and MP4, making it accessible to teachers and students who do not have standard astronomical instruments. DPSAT measures stellar light intensity from time-series still images or video files taken with a smartphone or other digital device. The code can extract video files into sequences of still images and then transform the RGB color-space images into greyscale. The light-intensity signal of selected pixels is counted with a simple aperture method in time series. The results include, for example, the mean signal, the standard deviation, the measured signal as light intensity versus time, and an image of the light sources. This will be fruitful for low-cost and easily accessible teaching of astrophysics.
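The greyscale conversion and aperture measurement described here can be sketched in pure Python on a synthetic frame (the BT.601 luma weights are a common choice for RGB-to-greyscale conversion; whether DPSAT uses exactly these weights is an assumption):

```python
def to_grayscale(frame):
    """ITU-R BT.601 luma: a standard RGB-to-greyscale transform."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row] for row in frame]

def aperture_sum(gray, cx, cy, radius):
    """Simple aperture photometry: sum the intensity inside a circle around the star."""
    total = 0.0
    for y, row in enumerate(gray):
        for x, value in enumerate(row):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                total += value
    return total

# Synthetic 9x9 frame: dark sky with one bright white "star" pixel at (4, 4).
frame = [[(0, 0, 0) for _ in range(9)] for _ in range(9)]
frame[4][4] = (255, 255, 255)
gray = to_grayscale(frame)
print(aperture_sum(gray, cx=4, cy=4, radius=2))
```

Repeating the aperture sum over every extracted video frame yields the light-intensity-versus-time series the abstract mentions.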
APA, Harvard, Vancouver, ISO, and other styles
9

Zhao, Hongna, Chunxi Li, Yongxiang Zhao, Baoxian Zhang, and Cheng Li. "Transcoding Based Video Caching Systems: Model and Algorithm." Wireless Communications and Mobile Computing 2018 (August 1, 2018): 1–8. http://dx.doi.org/10.1155/2018/1818690.

Full text of the source
Abstract:
The explosive demand for online video brings huge bandwidth pressure to cellular networks. Efficient video caching is critical for providing high-quality streaming Video-on-Demand (VoD) services that satisfy the rapidly increasing demand from mobile users. Traditional caching algorithms typically treat individual video files separately and tend to keep the most popular video files in the cache. In reality, however, one video typically corresponds to multiple files (versions) with different sizes and resolutions. Caching such files separately leads to a lot of redundancy, since one version of a video can be used to produce other versions through video transcoding. Recently, fog computing has pushed computing power to the edge of the network to reduce the distance between service provider and users. In this paper, we take advantage of fog computing and deploy the cache system at the network edge. Specifically, we study transcoding-based video caching in cellular networks, where cache servers are deployed at the edge of the cellular network to provide improved quality of online VoD services to mobile users. With transcoding, a cached video can be converted in real time into the different lower-quality versions needed by different users. We first formulate transcoding-based caching as an integer linear programming problem. We then propose a Transcoding-based Caching Algorithm (TCA), which iteratively finds the placement with the maximal delay gain among all possible choices, and deduce its computational complexity. Simulation results demonstrate that TCA significantly outperforms a traditional greedy caching algorithm, decreasing average delivery delay by up to 40%.
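A toy version of the greedy placement idea, repeatedly caching the version with the best delay gain per unit of cache space while a cached high-quality version also serves lower-quality requests via transcoding, might look as follows. All numbers are invented and this is only a sketch of the general idea, not the authors' exact TCA formulation:

```python
# Toy catalogue: demand[(video, quality)] = requests/s. A cached quality q can be
# transcoded down, so it also serves requests for the same video at any q' <= q.
demand = {("v1", 1080): 5, ("v1", 480): 20, ("v2", 1080): 8, ("v2", 480): 4}
size = {1080: 4.0, 480: 1.0}       # GB per version (invented)
capacity = 5.0                      # edge-cache size in GB (invented)
delay_saved_per_req = 1.0           # backhaul delay avoided per cache hit (invented)

def gain(cached, candidate):
    """Extra delay saved per second if `candidate` = (video, quality) were cached."""
    vid, q = candidate
    best = max((cq for cv, cq in cached if cv == vid), default=-1)
    if q <= best:                   # a higher cached version already covers this one
        return 0.0
    newly_served = sum(r for (v, dq), r in demand.items()
                       if v == vid and best < dq <= q)
    return newly_served * delay_saved_per_req

def greedy_cache():
    """Iteratively cache the version with the best delay gain per GB that still fits."""
    cached, used = set(), 0.0
    while True:
        options = [(gain(cached, c) / size[c[1]], c) for c in demand
                   if c not in cached and used + size[c[1]] <= capacity]
        options = [o for o in options if o[0] > 0]
        if not options:
            return cached
        _, best = max(options)
        cached.add(best)
        used += size[best[1]]

print(sorted(greedy_cache()))
```

With these numbers the greedy pass prefers the small, heavily requested 480p versions over the bulky 1080p ones, illustrating why version-aware placement differs from simply caching the most popular files.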
APA, Harvard, Vancouver, ISO, and other styles
10

Fang, Jin, Jun Xiang, Li Ma, Hao Liu, Chenxiang Wang, and Shan Liang. "Gas-Driven Endoscopic Robot for Visual Inspection of Corrosion Defects Inside Gas Pipelines." Processes 11, no. 4 (April 4, 2023): 1098. http://dx.doi.org/10.3390/pr11041098.

Full text of the source
Abstract:
The internal inspection of corrosion in large natural gas pipelines is a fundamental task for the prevention of possible failures. Photos and videos provide direct proof of internal corrosion defects. However, the implementation of this technique is limited by fast robot motion and poor lighting conditions, with high-quality images being key to its success. In this work, we developed a natural gas-driven pipeline endoscopic robot (GDPER) for the visual inspection of the inner wall surfaces of pipelines. GDPER consists of driving, odometer, and vision modules connected by universal joints. It is designed to work in a 154 mm gas-pressurized pipeline up to a maximum of 6 MPa, allowing it to smoothly pass through bends and cross-ring welds at a maximum speed of 3 m/s using gas pressure driving. Test results have shown that HD MP4 video files can be obtained, and the location of defects on the pipelines can be detected by intelligent video image post-processing. The gas-driven function enables the survey of very long pipelines without impacting the transport of the pipage.
APA, Harvard, Vancouver, ISO, and other styles
11

Mohialden, Yasmin M., Nadia M. Hussien, and Muna A. Radhi. "GlobalLingua: Empowering Multilingual Access to YouTube Video Transcripts with Automated Translation." Journal of Prospective Researches 24, no. 2 (April 9, 2024): 37–41. http://dx.doi.org/10.61704/jpr.v24i2.pp37-41.

Full text of the source
Abstract:
GlobalLingua removes language barriers that block online video content worldwide. The application lets content creators, educators, and individuals obtain and translate YouTube transcripts. The software takes a user-provided YouTube video URL, retrieves the video, its title, and its transcript with PyTube and the YouTube_transcript_api, and then converts the English text into the user-specified language with the Google Convert API. It also removes temporary files and processes videos efficiently to increase speed and reliability. Automatic translation of YouTube video transcripts increases global content accessibility, knowledge exchange, and cross-cultural communication. By automating transcript extraction and translation, the application enables multilingual content and digital inclusion.
APA, Harvard, Vancouver, ISO, and other styles
12

Susilo, Agus, and Ratna Wulansari. "Pelatihan Membuat Media Pembelajaran Dengan Aplikasi Ulead Video Studio 11 Bagi Guru SMP dan Mahasiswa." Jurnal Pengabdian Masyarakat Madani (JPMM) 1, no. 2 (November 10, 2021): 120–31. http://dx.doi.org/10.51805/jpmm.v1i2.23.

Full text of the source
Abstract:
This two-day activity took as its theme training in creating learning media with the Ulead Video Studio 11 application for junior high school teachers and university students in Lubuklinggau. In this era of globalization, teachers and students must be accustomed to using online and offline applications to support learning activities. In carrying out this community-service activity, the implementation team used a direct, face-to-face method, so participants listened to presentations and practiced designing media with Ulead Video Studio 11. The activity was held in a shophouse on Jalan Simpang Priuk, Lubuklinggau, over two days. During the activity, the participating teachers and students were guided by presenters experienced in using Ulead Video Studio 11. Participants also brought their own equipment, namely laptops and data plans, to find material to use in the training. The two-day programme covered installing the application, introducing the uses of Ulead Video Studio 11, and converting the result into MP4 learning media. The activity closed with a group photo of the presenters and participants. In conclusion, the training in creating learning media with Ulead Video Studio 11 for junior high school teachers and university students in Lubuklinggau was highly beneficial and gave the participants valuable experience that they can apply in learning-related activities. Besides being simple to use, Ulead Video Studio 11 works offline, without an internet connection.
APA, Harvard, Vancouver, ISO, and other styles
13

Ketcham, Mahasak, and Thittaporn Ganokratanaa. "The analysis of lane detection algorithms using histogram shapes and Hough transform." International Journal of Intelligent Computing and Cybernetics 8, no. 3 (August 10, 2015): 262–78. http://dx.doi.org/10.1108/ijicc-05-2014-0024.

Full text of the source
Abstract:
Purpose – The purpose of this paper is to develop a lane detection analysis algorithm based on the Hough transform and histogram shapes that can effectively detect lane markers under various road conditions in a driving system. Design/methodology/approach – Step 1: receiving the image: the developed system acquires images from video files. Step 2: splitting the image: the system splits the video file into frames. Step 3: cropping the image: specifying the area of interest with a crop tool. Step 4: image enhancement: the frame is converted from an RGB color image into a grayscale image. Step 5: converting the grayscale image to a binary image. Step 6: segmenting and removing objects using opening morphological operations. Step 7: defining the analyzed area within the image for the Hough transform. Step 8: computing the Hough line transform on the defined segment. Findings – This paper presents a useful solution for lane detection by analyzing histogram shapes and the Hough transform through digital image processing. The method was tested on video sequences filmed with a webcam that recorded the road as a video file in AVI format. The experimental results compare the similarities and differences between the histogram and Hough transform algorithms; the Hough transform performs better than histogram shapes. Originality/value – This paper compares two algorithms, histogram shapes and the Hough transform, provides a lane-detection process, and identifies the algorithm with the better lane detection results.
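The Hough transform step can be sketched as a voting procedure in (rho, theta) space; this minimal pure-Python accumulator on synthetic edge pixels illustrates the principle rather than the paper's implementation:

```python
import math

def hough_lines(points, theta_steps=180):
    """Each edge pixel votes for every (rho, theta) line passing through it;
    peaks in the accumulator correspond to detected lines."""
    acc = {}
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc

# Edge pixels of a synthetic lane marker: the vertical line x = 20.
points = [(20, y) for y in range(0, 50, 2)]
acc = hough_lines(points)
(rho, t), votes = max(acc.items(), key=lambda kv: kv[1])
print(rho, t, votes)  # all 25 points vote for theta index 0, rho = 20
```

On a binarized road frame, the binary pixels from Step 5 play the role of `points`, and the strongest accumulator peaks give the lane-marker lines.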
APA, Harvard, Vancouver, ISO, and other styles
14

Perry, Jo. "Telling Small Stories With Power Point As Video In Lockdown." Pacific Journal of Technology Enhanced Learning 3, no. 1 (February 16, 2021): 20. http://dx.doi.org/10.24135/pjtel.v3i1.92.

Full text of the source
Abstract:
The 2020 pandemic and ensuing lockdowns came as a shock nationally and internationally. As a result, the change in approaches to teaching for many was fast and absolute. One minute the face-to-face ethos was humming along as 'normal'; the next it was fully online, taking teachers and students into a story many would never have considered. This brought with it the challenge of continuing to build and maintain relationships with students in order to support their road to success. Storytelling has always been an important part of my practice in developing relationships, through sharing my own experiences and encouraging the students to share theirs. In this way, we co-construct understanding of the class content and get to know each other. Going into fully online teaching would potentially change this. Given the speed of the changes required, this project was never meant to be overtly innovative but was designed to allow me to continue using narratives of content and practice to build communities of learning in the online environment. As a teacher, PowerPoint was familiar, so I started there and simply changed to saving the presentations as mp4 files. The presentation plots this journey as a teacher taking storytelling from a face-to-face classroom across the lockdown in a way that continued supporting relationships and learning. The first attempts showed me that online stories are not the same as class PowerPoints, where I physically created the narrative that linked the slides together. As I viewed my first attempt, it became clear that I was trying to tell a story that was in my head but not translated to the screen, and I needed to adopt an approach that clearly spoke to a listener/audience, i.e. my community of learning. I learned that, up to this point, I had used PowerPoint as a guide as I wove a story around the weekly content in a face-to-face classroom. In other words, the whole thing was heavily dependent on me.
In this new environment, the story had to be told in a different way. It had to stand as a discrete artefact on its own, speaking to anyone who logged on, enabling me to reach out to that other human being without the unique connection that develops between storyteller and listener in the face-to-face world. Through three more cycles of research, I found that this new kind of story depended on a delicate balance between the visual and the oral, the context, the content and the affective, and how each was portrayed. Ultimately, the focus had to remain on the relationships I could build and the impact they could have. Therefore, this project came to be about keeping storytelling, whether face-to-face or online, "a uniquely human experience through which people make sense of past experience, convey emotions and ultimately connect with each other" (Christianson, 2011, p. 289).
APA, Harvard, Vancouver, ISO, and other styles
15

Ferreira, Sara, Mário Antunes, and Manuel E. Correia. "A Dataset of Photos and Videos for Digital Forensics Analysis Using Machine Learning Processing." Data 6, no. 8 (August 5, 2021): 87. http://dx.doi.org/10.3390/data6080087.

Full text of the source
Abstract:
Deepfake and manipulated digital photos and videos are being increasingly used in a myriad of cybercrimes. Ransomware, the dissemination of fake news, and digital kidnapping-related crimes are the most recurrent, in which tampered multimedia content has been the primordial disseminating vehicle. Digital forensic analysis tools are being widely used by criminal investigations to automate the identification of digital evidence in seized electronic equipment. The number of files to be processed and the complexity of the crimes under analysis have highlighted the need to employ efficient digital forensics techniques grounded on state-of-the-art technologies. Machine Learning (ML) researchers have been challenged to apply techniques and methods to improve the automatic detection of manipulated multimedia content. However, the implementation of such methods have not yet been massively incorporated into digital forensic tools, mostly due to the lack of realistic and well-structured datasets of photos and videos. The diversity and richness of the datasets are crucial to benchmark the ML models and to evaluate their appropriateness to be applied in real-world digital forensics applications. An example is the development of third-party modules for the widely used Autopsy digital forensic application. This paper presents a dataset obtained by extracting a set of simple features from genuine and manipulated photos and videos, which are part of state-of-the-art existing datasets. The resulting dataset is balanced, and each entry comprises a label and a vector of numeric values corresponding to the features extracted through a Discrete Fourier Transform (DFT). The dataset is available in a GitHub repository, and the total amount of photos and video frames is 40,588 and 12,400, respectively. 
The dataset was validated and benchmarked with deep-learning Convolutional Neural Network (CNN) and Support Vector Machine (SVM) methods; however, a plethora of other existing methods can be applied. Overall, the results show a better F1-score for CNN than for SVM, for both photo and video processing. CNN achieved an F1-score of 0.9968 and 0.8415 for photos and videos, respectively. For SVM, the results obtained with 5-fold cross-validation are 0.9953 and 0.7955, respectively, for photos and videos. A set of methods written in Python is available to researchers, namely to preprocess and extract the features from the original photo and video files and to build the training and testing sets. Additional methods are also available to convert the original PKL files into CSV and TXT, which gives ML researchers more flexibility to use the dataset in existing ML frameworks and tools.
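The kind of DFT-derived numeric feature vector the dataset stores can be illustrated on a tiny flattened greyscale patch; this is a much-simplified sketch of the idea, not the authors' exact feature extraction:

```python
import cmath

def dft_features(pixels, k_features):
    """Magnitudes of the first k DFT bins of a flattened greyscale patch,
    normalised by length, usable as a numeric ML feature vector."""
    n = len(pixels)
    feats = []
    for k in range(k_features):
        s = sum(p * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, p in enumerate(pixels))
        feats.append(abs(s) / n)
    return feats

# A high-frequency "checkerboard" row: energy sits in the DC and Nyquist bins,
# the sort of spectral signature that separates natural from manipulated content.
flat = [10, 200, 10, 200, 10, 200, 10, 200]
feats = dft_features(flat, k_features=5)
print([round(f, 6) for f in feats])  # [105.0, 0.0, 0.0, 0.0, 95.0]
```

Pairing such a vector with a genuine/manipulated label gives exactly the (label, numeric features) row shape the dataset entries described above take.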
APA, Harvard, Vancouver, ISO, and other styles
16

Scott, George. "Digital imagery for making plates." Journal of Micropalaeontology 14, no. 2 (October 1, 1995): 118. http://dx.doi.org/10.1144/jm.14.2.118.

Full text of the source
Abstract:
Abstract. Although the resolution and depth of focus provided by scanning electron microscopy (SEM) revolutionized the examination of several groups of microfossils, conventional photographic techniques are normally outlined in instructions for preparation of micrographs for publication (Whittaker & Hodgkinson, 1991). While the quality of results attainable by following these methods is very high, digital image recording and processing techniques are now well developed and readily available. This note outlines some advantages of digital techniques in the preparation of SEM images for publication. DIGITAL RECORDING: Secondary electron and other detectors attached to the SEM produce analogue (waveform) signals. In early instruments only these analogue signals were processed and displayed. Modern designs quantize signals from the detector as pixels (picture elements) which represent grey levels along scan lines. Pixel information is processed by the SEM on-board computer and saved as an image file. Importantly, the basic hardware to convert the analogue signal to digital form is simple and can be readily retro-fitted to early instruments. Our Philips PSEM 500 was adapted to record 128 grey levels at 800 pixels/line over 600 lines/frame, a minimum specification for professional work. Many micropalaeontologists will find that their SEM laboratories can supply digital files at higher resolutions. However, an essential point is to work with images recorded digitally directly from the SEM video channel, so avoiding potential degradation due to scanning of images recorded on film from the SEM monitors. DIGITAL PROCESSING: I use Photostyler (a PC image editor by Aldus Corp.) for plate composition. It resembles. . .
APA, Harvard, Vancouver, ISO, and other styles
17

Windayani, Ni Luh Ika, and Ni Wayan Risna Dewi. "Instilling Hindu Character Values through The Development of E-Comic Based Balinese Satua in Kindergarten." Jurnal Pendidikan Anak Usia Dini Undiksha 12, no. 1 (June 8, 2024): 70–79. http://dx.doi.org/10.23887/paud.v12i1.68841.

Full text of the source
Abstract:
There are many benefits associated with the use of e-comics as a medium for developing Hindu character in early childhood. The aim of the research is to develop e-comic-based Balinese media to instill Hindu character in early childhood. The research uses Research and Development (R&D) with a 4D (define, design, develop, disseminate) model. The e-comic-based Satua Bali prototype for instilling Hindu character in students was developed through the stages of writing a script, characterizing, sketching, digitizing characters, making storyboards, and transferring the image files to the LibreOffice Impress (PowerPoint) program; the PowerPoint file was then converted into a video. Instruments include observation sheets, interviews, and validation, practicality, and effectiveness sheets. The validity analysis uses an average score, with results considered valid at a score of more than 8.00. The practicality aspect uses three criteria: very practical (7-10), practical (4-6), and impractical (1-3). The effectiveness aspect is assessed using the t-test. Validation results from experts and practitioners show that the material aspect is in the valid category. The practicality of the e-comic-based Satua Bali for instilling Hindu character values in students received a practical response. The results show that implementing e-comic-based Satua Bali influences the instilling of Hindu character values in students. Therefore, it can be concluded that e-comic-based Balinese Satua media for instilling Hindu character in early childhood has been well developed and is useful as a learning medium.
18

Izzatin Kamala and Tsaqifa Taqiyya Ulfah. "Inclusive Science Learning for Deaf Students in the Pandemi Era." Edulab : Majalah Ilmiah Laboratorium Pendidikan 7, no. 2 (April 13, 2023): 225–41. http://dx.doi.org/10.14421/edulab.2022.72.07.

Abstract:
This study aims to describe online learning in science courses for deaf students and the role of peers in that learning in the inclusion class of the Bachelor of Education for Islamic Elementary School Teachers programme. The problem addressed is the set of obstacles experienced by deaf students due to the transformation of learning during the COVID-19 pandemic. The study uses qualitative methods, with data collected through interviews and observations. The results show that 1) learning during the COVID-19 pandemic was carried out online, using several applications that helped deaf students participate; 2) in science learning, deaf students could understand the material through shared material files, accompanied by peers and deaf assistants who served as sign language interpreters and note-takers; 3) assignments in science learning for all students, both hearing and deaf, took the form of papers, video presentations, and practicums, supported by peers and deaf assistants; 4) when learning took place synchronously and no assistant was available, deaf students understood the material independently, aided by the Google transcript application, which converts speech to text; 5) collaboration between lecturers, deaf assistants, and peers is an essential element in implementing inclusive science learning.
19

Abiodun Adegbola, Oluwole, Abdulkadir Zinat Alabi, Peter Olalekan Idowu, and David Olugbenga Aborisade. "Detection and Tracking of a Moving Object Using Canny Edge and Optical Flow Techniques." Asian Journal of Research in Computer Science, January 17, 2022, 43–56. http://dx.doi.org/10.9734/ajrcos/2022/v13i130306.

Abstract:
Aims: In the discipline of computer vision, detecting and tracking moving objects in a succession of video frames is a critical process. Existing methods face challenges such as image noise, complicated object motion and shapes, and real-time video processing; hence they are computationally complex and susceptible to noise. This work utilized Canny Edge and Optical Flow (CE-OF) techniques for identifying and tracking moving objects in video files. Methodology: Video sequence datasets in AVI and MP4 formats from MathWorks and YouTube were used to evaluate the developed CE-OF technique. Frames were sampled from the video clips and the frame rate was calculated. The original images were converted to grayscale and preprocessed, and CE-OF was applied to identify and track the moving object. The performance of the developed technique was evaluated using accuracy, precision, false acceptance rate (FAR), false rejection rate (FRR), and processing time. The results obtained were 94.12%, 92.86%, 25.00%, 25.00%, and 19.51 s for the MP4 video; and 93.33%, 90.91%, 20.00%, 20.00%, and 44.11 s for AVI video 1, respectively. Conclusion: The developed CE-OF technique competes favourably in accuracy and time with well-known techniques in the literature and performed better than conventional methods in detecting and tracking a moving object. It can therefore be adopted in the design of intelligent surveillance systems.
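The edge-plus-motion idea behind CE-OF can be illustrated with a deliberately simplified NumPy sketch. This is not the paper's method: a finite-difference gradient mask stands in for Canny, and frame differencing stands in for optical flow; names and thresholds are assumptions.

```python
import numpy as np

def edge_mask(frame, thresh=0.2):
    # Gradient magnitude via finite differences: a crude stand-in for Canny.
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy) > thresh

def track_motion(prev, curr, diff_thresh=0.1):
    # Frame differencing restricted to edge pixels approximates what the
    # combined edge/flow pipeline localizes; we report the moving centroid.
    moving = (np.abs(curr.astype(float) - prev.astype(float)) > diff_thresh) & edge_mask(curr)
    ys, xs = np.nonzero(moving)
    if len(xs) == 0:
        return None          # nothing moved between the two frames
    return (xs.mean(), ys.mean())
```

Running this over consecutive grayscale frames yields a per-frame position estimate of the moving object.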
20

Mukesh Choudhary, Anshuman V. Ramani, and Vishwas Bhardwaj. "The Significance of Metadata and Video Compression for Investigating Video Files on Social Media Forensic." International Journal of Scientific Research in Computer Science, Engineering and Information Technology, May 10, 2023, 304–13. http://dx.doi.org/10.32628/cseit2390373.

Abstract:
Digital forensics is an essential aspect of cyber security and the investigation of digital crimes. Digital recordings are routinely used as important evidence sources in the identification, analysis, presentation, and reporting of evidence. There has recently been concern that images and videos cannot be used as solid evidence, since they may be altered very quickly with the abundance of technologies available for gathering and processing multimedia data. The main goal of this work is to understand advanced forensic video analysis methods that assist in criminal investigations. We first propose an acquisition-extraction-analysis forensic video framework that employs efficient video and image enhancement techniques for low-quality video transferred through social media applications and for CCTV footage analysis. The reliability of digital video recordings is essential in forensic science and other criminal investigation fields, and digital video forensic analysis constantly faces new challenges. Currently, videos are authenticated using a variety of parameters, including pixel-based analysis, frame rate analysis, bit rate analysis, hash value analysis, and, most importantly, metadata analysis. The development of technology demands new methods for verifying digital video recordings. In this review, we examine how media information and structural analysis of video containers in the MP4 file format have been used to distinguish between real and altered videos.
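Container-structure analysis of MP4 files starts from the ISO Base Media File Format box layout: each top-level box begins with a 32-bit big-endian size followed by a four-character type (e.g. ftyp, moov, mdat), and tampering often reorders or rewrites these boxes. A minimal walker over that layout, as a sketch only (it ignores 64-bit and to-end-of-file sizes):

```python
import struct

def parse_boxes(data):
    """Return (type, size) pairs for the top-level boxes of an MP4 byte stream."""
    boxes, offset = [], 0
    while offset + 8 <= len(data):
        size, = struct.unpack(">I", data[offset:offset + 4])   # big-endian 32-bit size
        btype = data[offset + 4:offset + 8].decode("ascii", errors="replace")
        if size < 8:          # 0 = box runs to EOF, 1 = 64-bit size; not handled here
            break
        boxes.append((btype, size))
        offset += size
    return boxes
```

Comparing the resulting box sequence against the layout a known-original recorder produces is one way the structural analyses surveyed in the article distinguish real from altered videos.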
21

"Video Summarization Based on Gaussian Mixture Model and Kernel Support Vector Machine for Forest Fire Detection." International Journal of Engineering and Advanced Technology 9, no. 1 (October 30, 2019): 1827–31. http://dx.doi.org/10.35940/ijeat.a1442.109119.

Abstract:
Exponential growth in the generation of multimedia data, especially videos, has resulted in the development of the video summarization concept. A video summary offers a collection of frames that precisely describes the video content in a considerably compacted form. Video summarization models find applicability in various domains, especially surveillance. This paper develops a video summarization technique for forest fire detection. The proposed method involves a set of processes, namely frame conversion, key frame extraction, feature extraction, and classification. A Merged Gaussian Mixture Model (MGMM) is applied for extracting key frames, and a kernel support vector machine (KSVM) is employed to classify each frame as a normal frame or a forest fire frame. Simulation analysis is performed on forest fire video files from the FIRESENSE database, and the results are assessed along several dimensions. The final outcome demonstrates the efficiency of the presented MGMM-KSVM model.
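The key frame extraction stage can be illustrated with a deliberately simplified stand-in: the abstract does not give the MGMM details, so this sketch merely keeps frames that differ enough from the last retained key frame (function name and threshold are assumptions):

```python
import numpy as np

def key_frames(frames, thresh=0.15):
    """Keep frame indices whose mean absolute change from the last key frame exceeds thresh."""
    keys = [0]                                     # always keep the first frame
    for i in range(1, len(frames)):
        delta = np.mean(np.abs(frames[i].astype(float) - frames[keys[-1]].astype(float)))
        if delta > thresh:
            keys.append(i)
    return keys
```

The retained frames would then be passed to the feature extraction and KSVM classification stages.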
22

Meinig, Holger, Teresa Nava, Sarah Thivierge, Sven Büchner, and Johannes Lang. "The masked singer: vocalization in the Garden Dormouse (Eliomys quercinus)." ARPHA Conference Abstracts 5 (April 15, 2022). http://dx.doi.org/10.3897/aca.5.e84775.

Abstract:
Many animals make sounds for various reasons, mostly for mating and agonistic behaviour, but also for more complex social communication. These sounds are used for mapping and monitoring many animal groups and species (e.g. birds, bats, whales, grasshoppers, crickets). Although Glirids are known to use sounds for communication, to our knowledge vocalisations have only been used to map the Edible Dormouse. We investigated the possibility of detecting Garden Dormouse calls and used oscillograms and spectrograms to analyze these sounds. Garden Dormouse calls were recorded as mp4 files and converted to WAV format for this purpose. In combination with video recordings, the vocalisations could often be associated with the respective behaviour of the animals. Most analysed calls were related to apparent arousal, intraspecific aggression, mating, or social communication between old and young animals within a family group. Some of the calls are not yet clearly understood in their ethological context. Regardless, Garden Dormouse vocalizations can be clearly assigned to the species and distinguished from those of other species, providing a new method for mapping this species. Since Garden Dormice mainly call in urban habitats, human impacts such as habitat fragmentation, direct disturbance, or noise pollution may challenge their acoustic behaviour in this environment.
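The mp4-to-WAV conversion mentioned above is commonly done with a tool such as ffmpeg. The sketch below only builds the command line; the tool choice and sample rate are assumptions, since the abstract does not say which converter was used:

```python
def mp4_to_wav_cmd(src, dst, sample_rate=44100):
    # -vn drops the video stream; pcm_s16le writes uncompressed 16-bit audio,
    # which spectrogram/oscillogram tools generally expect.
    return ["ffmpeg", "-i", src, "-vn",
            "-acodec", "pcm_s16le", "-ar", str(sample_rate), dst]
```

Passing the returned list to `subprocess.run` would produce the WAV file used for the spectrogram analysis.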
23

Qu, Susu, Qingjie Zhu, Han Zhou, Yuan Gao, Yi Wei, Yuan Ma, Zhicheng Wang, et al. "EasyFlyTracker: A Simple Video Tracking Python Package for Analyzing Adult Drosophila Locomotor and Sleep Activity to Facilitate Revealing the Effect of Psychiatric Drugs." Frontiers in Behavioral Neuroscience 15 (February 10, 2022). http://dx.doi.org/10.3389/fnbeh.2021.809665.

Abstract:
The mechanism of psychiatric drugs (stimulant and non-stimulant) is still unclear. Precision medication of psychiatric disorders faces challenges in pharmacogenetics and pharmacodynamics research due to difficulties in recruiting human subjects, given the possibility of substance abuse, and relatively small sample sizes. Drosophila is a powerful animal model for large-scale studies of drug effects based on the precise quantification of behavior. However, a user-friendly system for high-throughput simultaneous tracking and analysis of drug-treated individual adult flies is still lacking, and it is critical to be able to quickly set up a working environment, including both the hardware and software, at a reasonable cost. We have therefore developed EasyFlyTracker, an open-source Python package that can track a single fruit fly in each arena and analyze Drosophila locomotor and sleep activity from video recordings to facilitate revealing psychiatric drug effects. The current version does not support tracking multiple fruit flies. Compared with existing software, EasyFlyTracker has the advantages of low cost, easy setup and scaling, rich statistics of movement trajectories, and compatibility with different video recording systems. It also accepts multiple video formats, such as the common MP4 and AVI formats. EasyFlyTracker provides a cross-platform, user-friendly interface combining command line and graphic configurations, which allows users to intuitively understand the tracking process and downstream analyses, and it automatically generates multiple files, especially plots. Users can install EasyFlyTracker, go through tutorials, and give feedback at http://easyflytracker.cibr.ac.cn. Moreover, we tested EasyFlyTracker in a Drosophila melanogaster study of the hyperactivity-like behavioral effects of two psychiatric drugs, methylphenidate and atomoxetine, which are commonly used to treat attention-deficit/hyperactivity disorder (ADHD) in humans. This software has the potential to accelerate basic research on drug effects with fruit flies.
24

"An Apriori Method for Topic Extraction from Text Files." International Journal of Recent Technology and Engineering 8, no. 2 (July 30, 2019): 2516–21. http://dx.doi.org/10.35940/ijrte.a3068.078219.

Abstract:
In this data age, petabytes of data are generated every day. One of the biggest challenges today is to convert this data into useful information; this is known as data mining. Important kinds of data include text-based, audio-based, image-based, and video-based data. An important challenge in mining useful information from text-based data sources (text mining) is topic modeling: finding out the topic a text is talking about. Solutions to this problem find application in clustering files by topic, as a pre-processing method in information retrieval, in the ontology of medical records, and elsewhere. A lot of research work has gone into topic modeling, and many approaches have been formulated. Some take into account the occurrence and frequency of words/terms; these models fall under the Bag of Words (BOW) approach. Others take into account the underlying structure in the corpus of text used; the Wikipedia category graph is an example of this approach. This paper provides an unsupervised solution to the above problem by extracting keywords that represent the topic of a text document. In our approach, topic modeling is carried out with a hybrid model that makes use of WordNet and the Wikipedia corpus. Promising experimental results have been obtained on the well-known BBC News dataset. We present the experimental results for our proposed approach along with those of others in the same domain and show that our approach provides better results.
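The frequency-based side of the Bag of Words approach can be sketched in a few lines; this is a generic illustration (names and the stopword list are assumptions), not the paper's hybrid WordNet/Wikipedia model:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "this", "that", "for"}

def top_keywords(text, k=3):
    """Return the k most frequent non-stopword terms as candidate topic keywords."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(k)]
```

A hybrid model would then refine these candidates, e.g. by mapping them onto WordNet synsets or Wikipedia categories.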
25

Chaparro-Moreno, Leydi Johana, Hugo Gonzalez Villasanti, Laura M. Justice, Jing Sun, and Mary Beth Schmitt. "Accuracy of Automatic Processing of Speech-Language Pathologist and Child Talk During School-Based Therapy Sessions." Journal of Speech, Language, and Hearing Research, July 17, 2024, 1–16. http://dx.doi.org/10.1044/2024_jslhr-23-00310.

Abstract:
Purpose: This study examines the accuracy of Interaction Detection in Early Childhood Settings (IDEAS), a program that automatically transcribes audio files and estimates linguistic units relevant to speech-language therapy, including part-of-speech units that represent features of language complexity, such as adjectives and coordinating conjunctions. Method: Forty-five video-recorded speech-language therapy sessions involving 27 speech-language pathologists (SLPs) and 56 children were used. The F measure determines the accuracy of IDEAS diarization (i.e., speech segmentation and speaker classification). Two additional evaluation metrics, namely, median absolute relative error and correlation, indicate the accuracy of IDEAS for the estimation of linguistic units as compared with two conditions, namely, Oracle (manual diarization) and Voice Type Classifier (existing diarizer with acceptable accuracy). Results: The high F measure for SLP talk data suggests high accuracy of IDEAS diarization for SLP talk but less so for child talk. These differences are reflected in the accuracy of IDEAS linguistic unit estimates. IDEAS median absolute relative error and correlation values for nine of the 10 SLP linguistic unit estimates meet the accuracy criteria, but none of the child linguistic unit estimates meet these criteria. The type of linguistic units also affects IDEAS accuracy. Conclusions: IDEAS was tailored to educational settings to automatically convert audio recordings into text and to provide linguistic unit estimates in speech-language therapy sessions and classroom settings. Although not perfect, IDEAS is reliable in automatically capturing and returning linguistic units, especially in SLP talk, that are relevant in research and practice. The tool offers a way to automatically measure SLP talk in clinical settings, which will support research seeking to understand how SLP talk influences children's language growth.
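The evaluation metrics named in the abstract can be computed as follows; this is a generic sketch of the F measure and the median absolute relative error, not the IDEAS code:

```python
import statistics

def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall over true/false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def median_abs_relative_error(estimates, references):
    """Median of |estimate - reference| / reference across paired linguistic unit counts."""
    return statistics.median(abs(e - r) / r for e, r in zip(estimates, references))
```

In the study's terms, the F measure scores the diarization (speech segmentation and speaker classification), while the relative error compares IDEAS linguistic unit estimates against the manual Oracle counts.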
26

Hollier, Scott, Katie M. Ellis, and Mike Kent. "User-Generated Captions: From Hackers, to the Disability Digerati, to Fansubbers." M/C Journal 20, no. 3 (June 21, 2017). http://dx.doi.org/10.5204/mcj.1259.

Abstract:
Writing in the American Annals of the Deaf in 1931, Emil S. Ladner Jr, a Deaf high school student, predicted the invention of words on screen to facilitate access to “talkies”. He anticipated: “Perhaps, in time, an invention will be perfected that will enable the deaf to hear the ‘talkies’, or an invention which will throw the words spoken directly under the screen as well as being spoken at the same time” (Ladner, cited in Downey, Closed Captioning). This invention would eventually come to pass and be known as captions. Captions as we know them today have become widely available because of a complex interaction between technological change, volunteer effort, legislative activism, as well as increasing consumer demand. This began in the late 1950s when the technology to develop captions began to emerge. Almost immediately, volunteers began captioning and distributing both film and television in the US via schools for the deaf (Downey, Constructing Closed-Captioning in the Public Interest). Then, between the 1970s and 1990s Deaf activists and their allies began to campaign aggressively for the mandated provision of captions on television, leading eventually to the passing of the Television Decoder Circuitry Act in the US in 1990 (Ellis). This act decreed that any television with a screen greater than 13 inches must be designed/manufactured to be capable of displaying captions. The Act was replicated internationally, with countries such as Australia adopting the same requirements with their Australian standards regarding television sets imported into the country.
As other papers in this issue demonstrate, this market ultimately led to the introduction of broadcasting requirements. Captions are also vital to the accessibility of videos in today’s online and streaming environment—captioning is listed as the highest priority in the definitive World Wide Web Consortium (W3C) Web Content Accessibility Guidelines (WCAG) 2.0 standard (W3C, “Web Content Accessibility Guidelines 2.0”). This recognition of the requirement for captions online is further reflected in legislation, from both the US 21st Century Communications and Video Accessibility Act (CVAA) (2010) and from the Australian Human Rights Commission (2014). Television today is therefore much more freely available to a range of different groups. In addition to broadcast channels, captions are also increasingly available through streaming platforms such as Netflix and other subscription video on demand providers, as well as through user-generated video sites like YouTube. However, a clear discrepancy exists between guidelines, legislation and the industry’s approach. Guidelines such as the W3C’s are often resisted by industry until compliance is legislated. Historically, captions have been both unavailable (Ellcessor; Ellis) and inadequate (Ellis and Kent), and in many instances, they still are. For example, while the provision of captions in online video is viewed as a priority across international and domestic policies and frameworks, there is a stark contrast between the policy requirements and the practical implementation of these captions. This has led to the active development of a solution as part of an ongoing tradition of user-led development: user-generated captions.
However, within disability studies, research around the agency of this activity—and the media savvy users facilitating it—has gone significantly underexplored.
Agency of Activity
Information sharing has featured heavily throughout visions of the Web—from Vannevar Bush’s 1945 notion of the memex (Bush), to the hacker ethic, to Zuckerberg’s motivations for creating Facebook in his dorm room in 2004 (Vogelstein)—resulting in a wide agency of activity on the Web. Running through this development of first the Internet and then the Web as a place for a variety of agents to share information has been the hackers’ ethic that sharing information is a powerful, positive good (Raymond 234), that information should be free (Levey), and that to achieve these goals will often involve working around intended information access protocols, sometimes illegally and normally anonymously. From the hacker culture comes the digerati, the elite of the digital world, web users who stand out by their contributions, success, or status in the development of digital technology. In the context of access to information for people with disabilities, we describe those who find these workarounds—providing access to information through mainstream online platforms that are not immediately apparent—as the disability digerati. An acknowledged mainstream member of the digerati, Tim Berners-Lee, inventor of the World Wide Web, articulated a vision for the Web and its role in information sharing as inclusive of everyone: Worldwide, there are more than 750 million people with disabilities. As we move towards a highly connected world, it is critical that the Web be useable by anyone, regardless of individual capabilities and disabilities … The W3C [World Wide Web Consortium] is committed to removing accessibility barriers for all people with disabilities—including the deaf, blind, physically challenged, and cognitively or visually impaired.
We plan to work aggressively with government, industry, and community leaders to establish and attain Web accessibility goals. (Berners-Lee) Berners-Lee’s utopian vision of a connected world where people freely shared information online has subsequently been embraced by many key individuals and groups. His emphasis on people with disabilities, however, is somewhat unique. While maintaining a focus on accessibility, in 2006 he shifted focus to who could actually contribute to this idea of accessibility when he suggested the idea of “community captioning” to video bloggers struggling with the notion of including captions on their videos: The video blogger posts his blog—and the web community provides the captions that help others. (Berners-Lee, cited in Outlaw) Here, Berners-Lee was addressing community captioning in the context of video blogging and user-generated content. However, the concept is equally significant for professionally created videos, and media savvy users can now also offer instructions to audiences about how to access captions and subtitles. This shift—from user-generated to user access—must be situated historically in the context of an evolving Web 2.0 and changing accessibility legislation and policy. In the initial accessibility requirements of the Web, there was little mention of captioning at all, primarily due to video being difficult to stream over a dial-up connection. This was reflected in the initial WCAG 1.0 standard (W3C, “Web Content Accessibility Guidelines 1.0”), in which there was no requirement for videos to be captioned. WCAG 2.0 went some way in addressing this, making captioning online video an essential Level A priority (W3C, “Web Content Accessibility Guidelines 2.0”).
However, there were few tools that could actually be used to create captions, and little interest from emerging online video providers in making this a priority. As a result, the possibility of user-generated captions for video content began to be explored by both developers and users. One initial captioning tool that gained popularity was MAGpie, produced by the WGBH National Center for Accessible Media (NCAM) (WGBH). While cumbersome by today’s standards, the arrival of MAGpie 2.0 in 2002 provided an affordable and professional captioning tool that allowed people to create captions for their own videos. However, at that point there was little opportunity to caption videos online, so the focus was more on captioning personal video collections offline. This changed with the launch of YouTube in 2005 and its later purchase by Google (CNET), leading to an explosion of user-generated video content online. However, while the introduction of YouTube’s closed-captioned video support in 2006 ensured that captioned video content could be created (YouTube), the ability for users to create captions, save the output into one of the appropriate captioning file formats, upload the captions, and synchronise the captions to the video remained a difficult task. Improvements to the production and availability of user-generated captions arrived firstly through the launch of YouTube’s automated captions feature in 2009 (Google). This service meant that videos could be uploaded to YouTube and, if the user requested it, Google would caption the video within approximately 24 hours using its speech recognition software. While the introduction of this service was highly beneficial in terms of making captioning videos easier and ensuring that the timing of captions was accurate, the quality of captions ranged significantly. In essence, if the captions were not reviewed and errors not addressed, the automated captions were sometimes inaccurate to the point of hilarity (New Media Rock Stars).
These inaccurate YouTube captions are colloquially described as craptions. A #nomorecraptions campaign was launched to address inaccurate YouTube captioning and call on YouTube to make improvements. The ability to create professional user-generated captions across a variety of platforms, including YouTube, arrived in 2010 with the launch of Amara Universal Subtitles (Amara). The Amara subtitle portal provides users with the opportunity to caption online videos, even if they are hosted by another service such as YouTube. The captioned file can be saved after its creation and then uploaded to the relevant video source if the user has access to the location of the video content. The arrival of Amara continues to provide ongoing benefits—it contains a professional captioning editing suite specifically catering for online video, the tool is free, and it can caption videos located on other websites. Furthermore, Amara offers the additional benefit of being able to address the issues of YouTube automated captions—users can benefit from the machine-generated captions of YouTube in relation to their timing, then download the captions for editing in Amara to fix the issues, then return the captions to the original video, saving a significant amount of time when captioning large amounts of video content. In recent years Google has also endeavoured to simplify the captioning process for YouTube users by including its own captioning editors, but these tools are generally considered inferior to Amara (Media Access Australia). Similarly, several crowdsourced caption services such as Viki (https://www.viki.com/community) have emerged to facilitate the provision of captions. However, most of these crowdsourcing captioning services can’t tap into commercial products, instead offering a service for people who have a video they’ve created, or one that already exists on YouTube.
While Viki was highlighted as a useful platform in protests regarding Netflix’s lack of captions in 2009, commercial entertainment providers still have a responsibility to make improvements to their captioning. As we discuss in the next section, people have resorted to extreme measures to hack Netflix to access the captions they need. While the ability for people to publish captions on user-generated content has improved significantly, there is still a notable lack of captions for professionally developed videos, movies, and television shows available online.
User-Generated Netflix Captions
In recent years there has been a worldwide explosion of subscription video on demand service providers. Netflix epitomises the trend. As such, for people with disabilities, there has been significant focus on the availability of captions on these services (see Ellcessor, Ellis and Kent). Netflix, as the current leading provider of subscription video entertainment in the US, with large market shares in other countries, has been at the centre of these discussions. While Netflix offers a comprehensive range of captioned video on its service today, there are still videos that do not have captions, particularly in non-English regions. As a result, users have endeavoured to produce user-generated captions for personal use and to find workarounds to access these through the Netflix system. This has been achieved with some success. There are a number of ways in which captions or subtitles can be added to Netflix video content to improve its accessibility for individual users. An early guide in a 2011 blog post (Emil’s Celebrations) identified that when using the Netflix player with the Silverlight plug-in, it is possible to access a hidden menu which allows a subtitle file in the DFXP format to be uploaded to Netflix for playback. However, this does not appear to provide this file to all Netflix users, and is generally referred to as a “soft upload” just for the individual user.
Another method, generally credited as the “easiest” way, is to find an SRT file that already exists for the video title, edit the timing to line up with Netflix, use a third-party tool to convert it to the DFXP format, and then upload it using the hidden menu that requires a specific keyboard command to access. While this may be considered uncomplicated for some, a certain amount of technical knowledge is still required to complete this action, and it is likely to be too complex for many users. However, constant developments in technology are making access to captions an easier process. Recently, Cosmin Vasile highlighted that caption and subtitle tracks can still be uploaded providing that the older Silverlight plug-in is used for playback instead of the new HTML5 player. Others add that it is technically possible to access the hidden feature in an HTML5 player, but an additional Super Netflix browser plug-in is required (Sommergirl). Further, while the procedure for uploading the file remains similar to the approach discussed earlier, there are some additional tools available online, such as Subflicks, which can provide a simple online conversion of the more common SRT file format to the DFXP format (Subflicks). However, while the ability to use a personal caption or subtitle file remains, the most common way to watch Netflix videos with alternative caption or subtitle files is through the use of the Smartflix service (Smartflix). Unlike other ad-hoc solutions, this service provides a simplified mechanism to bring alternative caption files to Netflix. The Smartflix website states that the service “automatically downloads and displays subtitles in your language for all titles using the largest online subtitles database.” This automatic download and sharing of captions online—known as fansubbing—facilitates easy access for all.
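The SRT-to-DFXP conversion step performed by tools like Subflicks can be sketched as follows. This minimal converter handles only simple, well-formed cues and emits a bare TTML document without styling; it is an illustration of the format translation, not the Subflicks implementation:

```python
import re

def srt_to_dfxp(srt_text):
    """Convert simple SRT cues to a minimal DFXP/TTML document (no styling)."""
    cue_re = re.compile(
        r"\d+\s*\n"                                             # cue index
        r"(\d{2}:\d{2}:\d{2}),(\d{3}) --> (\d{2}:\d{2}:\d{2}),(\d{3})\s*\n"
        r"(.*?)(?:\n\n|\Z)",                                    # cue text up to blank line
        re.S)
    body = []
    for h1, ms1, h2, ms2, text in cue_re.findall(srt_text):
        text = text.strip().replace("\n", "<br/>")
        # DFXP/TTML uses a dot, not a comma, before the milliseconds.
        body.append(f'<p begin="{h1}.{ms1}" end="{h2}.{ms2}">{text}</p>')
    return ('<tt xmlns="http://www.w3.org/ns/ttml"><body><div>'
            + "".join(body) + "</div></body></tt>")
```

The resulting file is what the hidden Netflix menu described above expects for a “soft upload”.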
For example, blog posts suggest that technology such as this creates important access opportunities for people who are deaf and hard of hearing. Nevertheless, these practices can be met with suspicion by copyright holders. For example, a recent case in the Netherlands ruled that fansubbers were engaging in illegal activities and were encouraging people to download pirated videos. While the fansubbers, like the hackers discussed earlier, argued they were acting for the greater good, the Dutch anti-piracy association (BREIN) maintained that subtitles are mainly used by people downloading pirated media and sought to outlaw the manufacture and distribution of third-party captions (Anthony). The fansubbers took the issue to court in order to seek clarity about whether copyright holders can reserve exclusive rights to create and distribute subtitles. However, in a ruling against the fansubbers, the court agreed with BREIN that fansubbing violated copyright and incited piracy. What impact this ruling will have on the practice of user-generated captioning online, particularly around popular sites such as Netflix, is hard to predict; however, for people with disabilities who were relying on fansubbing to access content, it is of significant concern that the contention that the main users of user-generated subtitles (or captions) are engaging in illegal activities was so readily accepted.

Conclusion

This article has focused on user-generated captions and the types of platforms available to create them. It has shown that this desire to provide access, to set the information free, has resulted in the disability digerati finding workarounds that allow users to upload their own captions and make content accessible.
Indeed, the Internet and then the Web as a place for information sharing is evident throughout this history of user-generated captioning online, from Berners-Lee’s conception of community captioning, to Emil and Vasile’s instructions to a Netflix community of captioners, to finally a group of fansubbers who took BREIN to court and lost. Therefore, while we have conceived of the disability digerati as a conflation of the hacker and the acknowledged digital influencer, these two positions may again part ways, and the disability digerati may—like the hackers before them—be driven underground.

Captioned entertainment content offers a powerful, even vital, mode of inclusion for people who are deaf or hard of hearing. Yet, despite Berners-Lee’s urging that everything online be made accessible to people with all sorts of disabilities, captions were not addressed in the first iteration of the WCAG, perhaps reflecting the limitations of the speed of the medium itself. This continues to be the case today—although it is no longer difficult to stream video online, and Netflix has reached global dominance, audiences who require captions still find themselves fighting for access. Thus, in this sense, user-generated captions remain an important—yet seemingly technologically and legislatively complicated—avenue for inclusion.

References

Anthony, Sebastian. “Fan-Made Subtitles for TV Shows and Movies Are Illegal, Court Rules.” Arstechnica UK (2017). 21 May 2017 <https://arstechnica.com/tech-policy/2017/04/fan-made-subtitles-for-tv-shows-and-movies-are-illegal/>.

Amara. “Amara Makes Video Globally Accessible.” Amara (2010). 25 Apr. 2017 <https://amara.org/en/>.

Berners-Lee, Tim. “World Wide Web Consortium (W3C) Launches International Web Accessibility Initiative.” Web Accessibility Initiative (WAI) (1997). 19 June 2010 <http://www.w3.org/Press/WAI-Launch.html>.

Bush, Vannevar. “As We May Think.” The Atlantic (1945). 26 June 2010 <http://www.theatlantic.com/magazine/print/1969/12/as-we-may-think/3881/>.

CNET. “YouTube Turns 10: The Video Site That Went Viral.” CNET (2015). 24 Apr. 2017 <https://www.cnet.com/news/youtube-turns-10-the-video-site-that-went-viral/>.

Downey, Greg. Closed Captioning: Subtitling, Stenography, and the Digital Convergence of Text with Television. Baltimore: Johns Hopkins UP, 2008.

———. “Constructing Closed-Captioning in the Public Interest: From Minority Media Accessibility to Mainstream Educational Technology.” Info: The Journal of Policy, Regulation and Strategy for Telecommunications, Information and Media 9.2/3 (2007): 69–82.

Ellcessor, Elizabeth. “Captions On, Off on TV, Online: Accessibility and Search Engine Optimization in Online Closed Captioning.” Television & New Media 13.4 (2012): 329–52. <http://tvn.sagepub.com/content/early/2011/10/24/1527476411425251.abstract?patientinform-links=yes&legid=sptvns;51v1>.

Ellis, Katie. “Television’s Transition to the Internet: Disability Accessibility and Broadband-Based TV in Australia.” Media International Australia 153 (2014): 53–63.

Ellis, Katie, and Mike Kent. “Accessible Television: The New Frontier in Disability Media Studies Brings Together Industry Innovation, Government Legislation and Online Activism.” First Monday 20 (2015). <http://firstmonday.org/ojs/index.php/fm/article/view/6170>.

Emil’s Celebrations. “How to Add Subtitles to Movies Streamed in Netflix.” 16 Oct. 2011. 9 Apr. 2017 <https://emladenov.wordpress.com/2011/10/16/how-to-add-subtitles-to-movies-streamed-in-netflix/>.

Google. “Automatic Captions in YouTube.” 2009. 24 Apr. 2017 <https://googleblog.blogspot.com.au/2009/11/automatic-captions-in-youtube.html>.

Jaeger, Paul. “Disability and the Internet: Confronting a Digital Divide.” Disability in Society. Ed. Ronald Berger. Boulder, London: Lynne Rienner Publishers, 2012.

Levy, Steven. Hackers: Heroes of the Computer Revolution. Sebastopol: O’Reilly Media, 1984.

Media Access Australia. “How to Caption a YouTube Video.” 2017. 25 Apr. 2017 <https://mediaaccess.org.au/web/how-to-caption-a-youtube-video>.

New Media Rock Stars. “YouTube’s 5 Worst Hilariously Catastrophic Auto Caption Fails.” 2013. 25 Apr. 2017 <http://newmediarockstars.com/2013/05/youtubes-5-worst-hilariously-catastrophic-auto-caption-fails/>.

Outlaw. “Berners-Lee Applies Web 2.0 to Improve Accessibility.” Outlaw News (2006). 25 June 2010 <http://www.out-law.com/page-6946>.

Raymond, Eric S. The New Hacker’s Dictionary. 3rd ed. Cambridge: MIT P, 1996.

Smartflix. “Smartflix: Supercharge Your Netflix.” 2017. 9 Apr. 2017 <https://www.smartflix.io/>.

Sommergirl. “[All] Adding Subtitles in a Different Language?” 2016. 9 Apr. 2017 <https://www.reddit.com/r/netflix/comments/32l8ob/all_adding_subtitles_in_a_different_language/>.

Subflicks. “Subflicks V2.0.0.” 2017. 9 Apr. 2017 <http://subflicks.com/>.

Vasile, Cosmin. “Netflix Has Just Informed Us That Its Movie Streaming Service Is Now Available in Just About Every Country That Matters Financially, Aside from China, of Course.” 2016. 9 Apr. 2017 <http://news.softpedia.com/news/how-to-add-custom-subtitles-to-netflix-498579.shtml>.

Vogelstein, Fred. “The Wired Interview: Facebook’s Mark Zuckerberg.” Wired Magazine (2009). 20 June 2010 <http://www.wired.com/epicenter/2009/06/mark-zuckerberg-speaks/>.

W3C. “Web Content Accessibility Guidelines 1.0.” W3C Recommendation (1999). 25 June 2010 <http://www.w3.org/TR/WCAG10/>.

———. “Web Content Accessibility Guidelines (WCAG) 2.0.” 11 Dec. 2008. 21 Aug. 2013 <http://www.w3.org/TR/WCAG20/>.

WGBH. “Magpie 2.0—Free, Do-It-Yourself Access Authoring Tool for Digital Multimedia Released by WGBH.” 2002. 25 Apr. 2017 <http://ncam.wgbh.org/about/news/pr_05072002>.

YouTube. “Finally, Caption Video Playback.” 2006. 24 Apr. 2017 <http://googlevideo.blogspot.com.au/2006/09/finally-caption-playback.html>.
APA, Harvard, Vancouver, ISO, and other citation styles
27

Wagman, Ira. "Wasteaminute.com: Notes on Office Work and Digital Distraction." M/C Journal 13, no. 4 (August 18, 2010). http://dx.doi.org/10.5204/mcj.243.

Full text of the source
Abstract:
For those seeking a diversion from the drudgery of work there are a number of websites offering to take you away. Consider the case of wasteaminute.com. On the site there is everything from flash video games, soft-core pornography and animated nudity, to puzzles and parlour games like poker. In addition, the site offers links to video clips grouped in categories such as “funny,” “accidents,” or “strange.” With its bright yellow bubble letters and elementary design, wasteaminute will never win any Webby awards. It is also unlikely to be part of a lucrative initial public offering for its owner, a web marketing company based in Lexington, Kentucky. The internet ratings company Alexa gives wasteaminute a ranking of 5,880,401 when it comes to the most popular sites online over the last three months, quite some way behind sites like Wikipedia, Facebook, and Windows Live.

Wasteaminute is not unique. There exists a group of websites, a micro-genre of sorts, that go out of their way to offer momentary escape from the more serious work at hand, with a similar menu of offerings. These include sites with names such as ishouldbeworking.com, i-am-bored.com, boredatwork.com, and drivenbyboredom.com. These web destinations represent only the most overtly named time-wasting opportunities. Video sharing sites like YouTube or France’s DailyMotion, personalised home pages like iGoogle, and the range of applications available on mobile devices offer similar opportunities for escape. Wasteaminute inspired me to think about the relationship between digital media technologies and waste. In one sense, the site’s offerings remind us of the Internet’s capacity to re-purpose old media forms from earlier phases in the digital revolution, like the retro video game PacMan, or from aspects of print culture, like crosswords (Bolter and Grusin; Straw).
For my purposes, though, wasteaminute permits the opportunity to meditate, albeit briefly, on the ways media facilitate wasting time at work, particularly for those working in white- and no-collar work environments. In contemporary work environments work activity and wasteful activity exist on the same platform. With a click of a mouse or a keyboard shortcut, work and diversion can be easily interchanged on the screen, an experience of computing I know intimately from first-hand experience. The blurring of lines between work and waste has accompanied the extension of the ‘working day,’ a concept once tethered to the standardised work-week associated with modernity. Now people working in a range of professions take work out of the office and find themselves working in cafes, on public transportation, and at times once reserved for leisure, like weekends (Basso). In response to the indeterminate nature of when and where we are at work, the mainstream media routinely report about the wasteful use of computer technology for non-work purposes. Stories such as a recent one in the Washington Post which claimed that increased employee use of social media sites like Facebook and Twitter led to decreased productivity at work have become quite common in traditional media outlets (Casciato). Media technologies have always offered the prospect of making office work more efficient or the means for management to exercise control over employees. However, those same technologies have also served as the platforms on which one can engage in dilatory acts, stealing time from behind the boss’s back. I suggest stealing time at work may well be a “tactic,” in the sense used by Michel de Certeau, as a means to resist the rules and regulations that structure work and the working life. 
However, I also consider it to be a tactic in a different sense: websites and other digital applications offer users the means to take time back, in the form of ‘quick hits,’ providing immediate visual or narrative pleasures, or through interfaces which make the time-wasting look like work (Wagman). Reading sites like wasteaminute as examples of ‘office entertainment’ reminds us of the importance of workers as audiences for web content. An analysis of a few case studies also reveals how the forms of address of these sites themselves recognise and capitalise on an understanding of the rhythms of the working day, as well as those elements of contemporary office culture characterised by interruption, monotony and surveillance.

Work, Media, Waste

A mass of literature documents the transformations of work brought on by industrialisation and urbanisation. A recent biography of Franz Kafka outlines the rigors imposed upon the writer while working as an insurance agent: his first contract stipulated that “no employee has the right to keep any objects other than those belonging to the office under lock in the desk and files assigned for its use” (Murray 66). Siegfried Kracauer’s collection of writings on salaried workers in Germany in the 1930s argues that mass entertainment offers distractions that inhibit social change. Such restrictions and inducements are exemplary of the attempts to make work succumb to managerial regimes which are intended to maximise productivity and minimise waste, and to establish a division between ‘company time’ and ‘free time’. One does not have to be an industrial sociologist to know the efforts of Frederick W. Taylor, and the disciplines of “scientific management” in the early twentieth century which were based on the idea of making work more efficient, or of the workplace sociology scholarship from the 1950s that drew attention to the ways that office work can be monotonous or de-personalising (Friedmann; Mills; Whyte).
Historian JoAnne Yates has documented the ways those transformations, and what she calls an accompanying “philosophy of system and efficiency,” have been made possible through information and communication technologies, from the typewriter to carbon paper (107). Yates evokes the work of James Carey in identifying these developments, for example, the locating of workers in orderly locations such as offices, as spatial in nature. The changing meaning of work, particularly white-collar or bureaucratic labour in an age of precarious employment and neo-liberal economic regimes, and aggressive administrative “auditing technologies,” has subjected employees to more strenuous regimes of surveillance to ensure employee compliance and to protect against waste of company resources (Power). As Andrew Ross notes, after a deep period of self-criticism over the drudgery of work in North American settings in the 1960s, the subsequent years saw a re-thinking of the meaning of work, one that gradually traded greater work flexibility and self-management for more assertive forms of workplace control (9). As Ross notes, this too has changed, an after-effect of “the shareholder revolution,” which forced companies to deliver short-term profitability to its investors at any social cost. With so much at stake, Ross explains, the freedom of employees assumed a lower priority within corporate cultures, and “the introduction of information technologies in the workplace of the new capitalism resulted in the intensified surveillance of employees” (12). Others, like Dale Bradley, have drawn attention to the ways that the design of the office itself has always concerned itself with the bureaucratic and disciplinary control of bodies in space (77). 
The move away from physical workspaces such as ‘the pen’ to the cubicle and now from the cubicle to the virtual office is for Bradley a move from “construction” to “connection.” This spatial shift in the way in which control over employees is exercised is symbolic of the liquid forms in which bodies are now “integrated with flows of money, culture, knowledge, and power” in the post-industrial global economies of the twenty-first century. As Christena Nippert-Eng points out, receiving office space was seen as a marker of trust, since it provided employees with a sense of privacy to carry out affairs—both professional and personal—out of earshot of others. Privacy means a lot of things, she points out, including “a relative lack of accountability for our immediate whereabouts and actions” (163). Yet those same modalities of control which characterise communication technologies in workspaces may also serve as the platforms for people to waste time while working. In other words, wasteful practices utilise the same technology that is used to regulate and manage time spent in the workplace. The telephone has permitted efficient communication between units in an office building or between the office and outside, but ‘personal business’ can also be conducted on the same line. Radio stations offer ‘easy listening’ formats, providing unobtrusive music so as not to disturb work settings. However, they can easily be tuned to other stations for breaking news, live sports events, or other matters having to do with the outside world. Photocopiers and fax machines facilitate the reproduction and dissemination of communication regardless of whether it is work or non-work related. The same, of course, is true for computerised applications. Companies may encourage their employees to use Facebook or Twitter to reach out to potential clients or customers, but those same applications may be used for personal social networking as well.
Since the activities of work and play can now be found on the same platform, employers routinely remind their employees that their surfing activities, along with their e-mails and company documents, will be recorded on the company server, itself subject to auditing and review whenever the company sees fit. Employees must be careful to practice image management, in order to ensure that contradictory evidence does not appear online when they call in sick to the office. Over time the dynamics of e-mail and Internet etiquette have changed in response to such developments. Those most aware of the distractive and professionally destructive features of downloading a funny or comedic e-mail attachment have come to adopt the acronym “NSFW” (Not Safe for Work). Even those of us who don’t worry about those things are well aware that the cache and “history” functions of web browsers threaten to reveal the extent to which our time online is spent in unproductive ways. Many companies and public institutions, for example libraries, have taken things one step further by filtering out access to websites that may be peripheral to the primary work at hand.

At the same time, contemporary workplace settings have sought to mix work and play, or better yet to use play in the service of work, to make “work” more enjoyable for its workers. Professional development seminars, team-building exercises, company softball games, or group outings are examples intended to build morale and loyalty to the company among workers. Some companies offer their employees access to gyms, to game rooms, and to big screen TVs, in return for long and arduous—indeed, punishing—hours of time at the office (Dyer-Witheford and Sherman; Ross). In this manner, acts of not working are reconfigured as a form of work, or at least as a productive experience for the company at large.
Such perks are offered with an assumption of personal self-discipline, a feature of what Nippert-Eng characterises as the “discretionary workplace” (154). Of course, this also comes with an expectation that workers will stay close to the office, and to their work. As Sarah Sharma recently argued in this journal, such thinking is part of the way that late capitalism constructs “innovative ways to control people’s time and regulate their movement in space.” At the same time, however, there are plenty of moments of gentle resistance, in which the same machines of control and depersonalisation can be customised, and where individual expressions find their own platforms. A photo essay by Anna McCarthy in the Journal of Visual Culture records the inspirational messages and other personalised objects with which workers adorn their computers and work stations. McCarthy’s photographs represent the way people express themselves in relation to their work, making it a “place where workplace politics and power relations play out, often quite visibly” (McCarthy 214).

Screen Secrets

If McCarthy’s photo essay illustrates the overt ways in which people bring personal expression or gentle resistance to anodyne workplaces, there is also a series of other ‘screen acts’ that create opportunities to waste time in ways that are disguised as work. During the Olympics and US college basketball playoffs, the American broadcast networks CBS and NBC both offered a “boss button,” a graphic link that a user could immediately click “if the boss was coming by” that transformed the screen into something that was associated with the culture of work, such as a spreadsheet. Other purveyors of networked time-wasting make use of the spreadsheet to mask distraction. The website cantyouseeimbored turns a spreadsheet into a game of “Breakout!” while other sites, like Spreadtweet, convert your Twitter updates into the form of a spreadsheet.
Such boss buttons and screen interfaces that mimic work are the present-day avatars of the “panic button,” a graphic image found at the bottom of websites back in the days of Web 1.0. A click of the panic button transported users away from an offending website and towards something more legitimate, like Yahoo! Even if it is unlikely that boss keys actually convince one’s superiors that one is really working—clicking to a spreadsheet only makes sense for a worker who might be expected to be working on those kinds of documents—they are an index of how notions of personal space and privacy play out in the digitalised workplace. David Kiely, an employee at an Australian investment bank, experienced this first hand when he opened an e-mail attachment sent to him by his co-workers featuring a scantily-clad model (Cuneo and Barrett). Unfortunately for Kiely, at the time he opened the attachment his computer screen was visible in the background of a network television interview with another of the bank’s employees. Kiely’s inauspicious click (which made him the subject of an investigation by his employer) continues to circulate on the Internet, and it spawned a number of articles highlighting the precarious nature of work in a digitalised environment, where what might seem to be private can suddenly become very public, and thus able to be disseminated without restraint. At the same time, the public appetite for Kiely’s story indicates that not working at work, and using the Internet to do it, represents a mode of media consumption that is familiar to many of us, even if it is only the servers on the company computer that can account for how much time we spend doing it. Community attitudes towards time spent unproductively online remind us that waste carries with it a range of negative signifiers.
We talk about wasting time in terms of theft, “stealing time,” or even more dramatically as “killing time.” The popular construction of television as the “boob tube” distinguishes it from more ‘productive’ activities, like spending time with family, or exercise, or involvement in one’s community. The message is simple: life is too short to be “wasted” on such ephemera. If this kind of language is less familiar in the digital age, the discourse of ‘distraction’ is more prevalent. Yet, instead of judging distraction a negative symptom of the digital age, perhaps we should reinterpret wasting time as the worker’s attempt to assert some agency in an increasingly controlled workplace.

References

Basso, Pietro. Modern Times, Ancient Hours: Working Lives in the Twenty-First Century. London: Verso, 2003.

Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge: MIT Press, 2000.

Bradley, Dale. “Dimensions Vary: Technology, Space, and Power in the 20th Century Office.” Topia 11 (2004): 67–82.

Casciato, Paul. “Facebook and Other Social Media Cost UK Billions.” Washington Post, 5 Aug. 2010. 11 Aug. 2010 <http://www.washingtonpost.com/wp-dyn/content/article/2010/08/05/AR2010080503951.html>.

Cuneo, Clementine, and David Barrett. “Was Banker Set Up over Saucy Miranda.” The Daily Telegraph 4 Feb. 2010. 21 May 2010 <http://www.dailytelegraph.com.au/entertainment/sydney-confidential/was-banker-set-up-over-saucy-miranda/story-e6frewz0-1225826576571>.

De Certeau, Michel. The Practice of Everyday Life. Vol. 1. Berkeley: U of California P, 1988.

Dyer-Witheford, Nick, and Zena Sharman. “The Political Economy of Canada’s Video and Computer Game Industry.” Canadian Journal of Communication 30.2 (2005). 1 May 2010 <http://www.cjc-online.ca/index.php/journal/article/view/1575/1728>.

Friedmann, Georges. Industrial Society. Glencoe, Ill.: Free Press, 1955.

Kracauer, Siegfried. The Salaried Masses. London: Verso, 1998.

McCarthy, Anna. Ambient Television. Durham: Duke UP, 2001.

———. “Geekospheres: Visual Culture and Material Culture at Work.” Journal of Visual Culture 3 (2004): 213–21.

Mills, C. Wright. White Collar. Oxford: Oxford UP, 1951.

Murray, Nicholas. Kafka: A Biography. New Haven: Yale UP, 2004.

Newman, Michael. “Ze Frank and the Poetics of Web Video.” First Monday 13.5 (2008). 1 Aug. 2010 <http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2102/1962>.

Nippert-Eng, Christena. Home and Work: Negotiating Boundaries through Everyday Life. Chicago: U of Chicago P, 1996.

Power, Michael. The Audit Society. Oxford: Oxford UP, 1997.

Ross, Andrew. No Collar: The Humane Workplace and Its Hidden Costs. Philadelphia: Temple UP, 2004.

Sharma, Sarah. “The Great American Staycation and the Risk of Stillness.” M/C Journal 12.1 (2009). 11 May 2010 <http://journal.media-culture.org.au/index.php/mcjournal/article/viewArticle/122>.

Straw, Will. “Embedded Memories.” Residual Media. Ed. Charles Acland. U of Minnesota P, 2007. 3–15.

Whyte, William. The Organisation Man. New York: Simon and Schuster, 1957.

Wagman, Ira. “Log On, Goof Off, Look Up: Facebook and the Rhythms of Canadian Internet Use.” How Canadians Communicate III: Contexts for Popular Culture. Eds. Bart Beaty, Derek Briton, Gloria Filax, and Rebecca Sullivan. Athabasca: Athabasca UP, 2009. 55–77. <http://www2.carleton.ca/jc/ccms/wp-content/ccms-files/02_Beaty_et_al-How_Canadians_Communicate.pdf>.

Yates, JoAnne. “Business Use of Information Technology during the Industrial Age.” A Nation Transformed by Information. Eds. Alfred D. Chandler and James W. Cortada. Oxford: Oxford UP, 2000. 107–36.
