Academic literature on the topic 'Long-form Audio'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Long-form Audio.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Long-form Audio"

1

McHugh, Siobhan. "Audio Storytelling." Asia Pacific Media Educator 24, no. 2 (December 2014): 141–56. http://dx.doi.org/10.1177/1326365x14555277.

Full text
Abstract:
Audio storytelling is booming. From crafted, long-form documentaries to short digital narratives, podcasting, social media and online streaming have liberated audio from the confines of a live radio schedule and created huge new transnational audiences. But how can the burgeoning influence of audio storytelling be harnessed in educational and community sectors? This article examines an initiative designed to advance the use of audio storytelling in educational contexts: the emotional history project, an intensive teaching model that trains undergraduate students with no prior audio experience to create powerful short audio stories in a 4 x 3 hour module. It relies on the capacity of audio to convey emotion, and the power of emotion to transcend social, cultural and racial differences and forge a visceral connection. By gathering deeply personal emotional moments, students not only have a heightened incentive to learn technical production skills, they are also motivated to consider ethical issues and vital principles of empathy and responsibility.
APA, Harvard, Vancouver, ISO, and other styles
2

Lavechin, Marvin, Maureen de Seyssel, Lucas Gautheron, Emmanuel Dupoux, and Alejandrina Cristia. "Reverse Engineering Language Acquisition with Child-Centered Long-Form Recordings." Annual Review of Linguistics 8, no. 1 (January 14, 2022): 389–407. http://dx.doi.org/10.1146/annurev-linguistics-031120-122120.

Full text
Abstract:
Language use in everyday life can be studied using lightweight, wearable recorders that collect long-form recordings—that is, audio (including speech) over whole days. The hardware and software underlying this technique are increasingly accessible and inexpensive, and these data are revolutionizing the language acquisition field. We first place this technique into the broader context of the current ways of studying both the input being received by children and children's own language production, laying out the main advantages and drawbacks of long-form recordings. We then go on to argue that a unique advantage of long-form recordings is that they can fuel realistic models of early language acquisition that use speech to represent children's input and/or to establish production benchmarks. To enable the field to make the most of this unique empirical and conceptual contribution, we outline what this reverse engineering approach from long-form recordings entails, why it is useful, and how to evaluate success.
APA, Harvard, Vancouver, ISO, and other styles
3

Mulauzi, Felesia, Phiri Bwalya, Chishimba Soko, Vincent Njobvu, Jane Katema, and Felix Silungwe. "Preservation of audio-visual archives in Zambia." ESARBICA Journal: Journal of the Eastern and Southern Africa Regional Branch of the International Council on Archives 40 (November 6, 2021): 42–59. http://dx.doi.org/10.4314/esarjo.v40i1.4.

Full text
Abstract:
Audio-visual records and archives constitute a fundamental heritage that satisfies multiple needs, including education, training, research and entertainment. As such, there is a need to appropriately preserve and conserve them so they can be accessed for as long as they are needed. In spite of their significant role in safeguarding cultural heritage, audio-visual records and archives are often neglected and accorded less attention than paper-based records, especially in developing countries. Hence, there is a risk of losing information held in audio-visual form. That is why this study looked at how the National Archives of Zambia (NAZ) and the Zambia National Broadcasting Corporation (ZNBC) preserve audio-visual materials to ensure long-term accessibility of the information. The study investigated the types of audio-visual collections held, the storage equipment used, the measures put in place to ensure long-term accessibility of audio-visual materials, the disaster preparedness plans in place to safeguard audio-visual archives and the major challenges encountered in the preservation of audio-visual materials. The findings of the study revealed that films (microfilm and microfiche), photographs and manuscripts, and video (video tapes) and sound recordings (compact cassettes) constitute the biggest audio-visual collection preserved. The equipment used to store audio-visual materials included open shelves, specialised cabinets, an electronic database for digitised materials, aisle mobiles and cupboards. The measures taken to ensure the long-term accessibility of the audio-visual collection included digitisation and migration of endangered records and archives; fumigation of storage areas; conservation of damaged materials; and regulation of temperature and humidity in the storage areas. The disaster preparedness plans in place mostly covered insurance of the structures and protection against fire and water through the installation of fire extinguishers, smoke sensors and fire detectors, and the construction of purpose-built structures. The major challenges faced were financial constraints, technological obsolescence, lack of playback equipment, limited training, lack of strong back-up systems and inadequate storage facilities.
APA, Harvard, Vancouver, ISO, and other styles
4

Silitonga, Parasian D. P., and Irene Sri Morina. "Compression and Decompression of Audio Files Using the Arithmetic Coding Method." Scientific Journal of Informatics 6, no. 1 (May 24, 2019): 73–81. http://dx.doi.org/10.15294/sji.v6i1.17839.

Full text
Abstract:
Audio file size is relatively large when compared to files in text format. Large files can cause various obstacles, in the form of large storage space requirements and long transmission times. File compression is one solution to the problem of large file sizes. Arithmetic coding is one algorithm that can be used to compress audio files. The arithmetic coding algorithm encodes the audio file by replacing a row of input symbols with a single floating-point number, and obtains the output of the encoding in the form of a value greater than 0 and smaller than 1. The compression and decompression process in this study is performed on several wave files. Wave files are a standard audio file format developed by Microsoft and IBM that is stored using PCM (Pulse Code Modulation) coding. The wave file compression ratio obtained in this study was 16.12 percent, with an average compression time of 45.89 seconds and an average decompression time of 0.32 seconds.
APA, Harvard, Vancouver, ISO, and other styles
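For readers new to the technique, the core idea the abstract above describes (replacing a symbol sequence with a single number between 0 and 1) can be sketched in a few lines of Python. This is a textbook toy encoder with assumed symbol probabilities, not the authors' implementation:

```python
# Toy arithmetic encoder: narrows [low, high) once per symbol so the
# final interval uniquely identifies the whole input sequence.
# Float precision limits this to short inputs; real codecs use
# integer renormalisation. Probabilities are assumed, not the paper's.

def arithmetic_encode(symbols, probs):
    # Build the cumulative probability table, e.g. {'a': (0.0, 0.6), ...}
    cum, start = {}, 0.0
    for s, p in probs.items():
        cum[s] = (start, start + p)
        start += p

    low, high = 0.0, 1.0
    for s in symbols:
        span = high - low
        s_low, s_high = cum[s]
        low, high = low + span * s_low, low + span * s_high
    # Any number inside [low, high) encodes the sequence.
    return (low + high) / 2

code = arithmetic_encode("abba", {"a": 0.6, "b": 0.4})
print(code)  # a single value strictly between 0 and 1
```

A matching decoder walks the same intervals in reverse; production codecs replace the floats with integer arithmetic to avoid precision loss.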
5

Hartono, Henry, Viny Christanti Mawardi, and Janson Hendryli. "PERANCANGAN SISTEM PENCARIAN LAGU INDONESIA MENGGUNAKAN QUERY BY HUMMING BERBASIS LONG SHORT-TERM MEMORY." Jurnal Ilmu Komputer dan Sistem Informasi 9, no. 1 (January 18, 2021): 106. http://dx.doi.org/10.24912/jiksi.v9i1.11567.

Full text
Abstract:
Song identification and query by humming is an application developed using the Mel-frequency cepstral coefficients (MFCC) and Long Short-Term Memory (LSTM) algorithms. The application's purpose is to detect and recognize humming from the input data. In this application, the humming input is divided into two parts: the training audio and the test audio. The training audio goes through two process stages: recognizing the humming and extracting the unique features of a humming audio. To recognize the humming features, the humming is processed using the MFCC method. After the MFCC features are obtained, they are saved as a vector model. The extracted features are then learned by the LSTM method. The test audio goes through the same stages as the training audio; after its MFCC features are detected, recognition is performed based on the learning done with the LSTM method, and the output, in the form of the name of the song that is successfully recognized and detected, is labeled by the application.
APA, Harvard, Vancouver, ISO, and other styles
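The pipeline this abstract outlines (MFCC feature extraction followed by LSTM classification) is a common pattern; a minimal sketch using librosa and Keras might look as follows. Layer sizes, file names and the catalogue size are assumptions, not the authors' settings:

```python
# Sketch of the MFCC -> LSTM query-by-humming pipeline. Illustrative
# only; hyperparameters and file names are assumptions.
import librosa
import numpy as np
import tensorflow as tf

def mfcc_features(path, n_mfcc=13):
    # Load the humming clip and extract per-frame MFCC vectors.
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T  # shape: (frames, n_mfcc)

n_songs = 50  # assumed size of the song catalogue
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 13)),          # variable-length MFCC sequence
    tf.keras.layers.LSTM(64),                  # summarise the humming contour
    tf.keras.layers.Dense(n_songs, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")

# Training would pair humming clips with song labels:
#   model.fit(padded_mfcc_batches, labels, epochs=...)
# Recognition then returns the most probable song title:
#   song_id = np.argmax(model.predict(mfcc_features("hum.wav")[None, ...]))
```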
6

Berry, Richard. "No longer the only game in town: British indies, podcasts and the new audio economy of independent production." Interactions: Studies in Communication & Culture 12, no. 1 (April 1, 2021): 51–64. http://dx.doi.org/10.1386/iscc_00036_1.

Full text
Abstract:
The world of audio is undergoing seismic changes. Traditionally a space dominated by linear radio programmes, in the 2020s this form of listening remains relevant but is now being challenged by other forms and global media platforms. In particular, podcasting offers producers opportunities to pitch for commissions from brands and platforms or to make work independently. Historically, podcasting was a medium led and shaped by amateurs and distributed for free by independent creators. In the 2020s this relationship began to shift. In the United Kingdom, audio producers are now finding success outside of radio, in what was once a market dominated by the BBC. Forming what we might term a ‘new audio economy’, producers and creatives are working across multiple forms of production in the long tail of audio media. Through interviews and analysis, this article will explore the impact of podcasting and other forms of audio production on the UK independent radio/audio sector, noting the influence of the BBC and shifting patterns of production within the sector.
APA, Harvard, Vancouver, ISO, and other styles
7

Levaux, Christophe. "The Forgotten History of Repetitive Audio Technologies." Organised Sound 22, no. 2 (July 12, 2017): 187–94. http://dx.doi.org/10.1017/s1355771817000097.

Full text
Abstract:
In the literature dedicated to twentieth-century music, the early history of electronic music is regularly presented hand in hand with the development of technical repetitive devices such as closed grooves and magnetic tape loops. Consequently, the idea that such devices were ‘invented’ in the studios of the first great representatives of electronic music tends to appear as an implicit consequence. However, re-examination of the long history of musical technology, from the ninth-century Banu Musa automatic flute to the Hammond organ of the 1930s, reveals that repetitive devices not only go right back to the earliest days of musical automation, but also evolved in a wide variety of contexts wholly unconnected from any form of musical institution. This article aims to shed light on this other, forgotten, history of repetitive audio technologies.
APA, Harvard, Vancouver, ISO, and other styles
8

Goldstein, Seth Copen, Todd C. Mowry, Jason D. Campbell, Michael P. Ashley-Rollman, Michael De Rosa, Stanislav Funiak, James F. Hoburg, et al. "Beyond Audio and Video: Using Claytronics to Enable Pario." AI Magazine 30, no. 2 (July 7, 2009): 29. http://dx.doi.org/10.1609/aimag.v30i2.2241.

Full text
Abstract:
In this article, we describe the hardware and software challenges involved in realizing Claytronics, a form of programmable matter made out of very large numbers (potentially millions) of submillimeter-sized spherical robots. The goal of the claytronics project is to create ensembles of cooperating submillimeter robots, which work together to form dynamic 3D physical objects. For example, claytronics might be used in telepresence to mimic, with high fidelity and in 3-dimensional solid form, the look, feel, and motion of the person at the other end of the telephone call. To achieve this long-range vision we are investigating hardware mechanisms for constructing submillimeter robots, which can be manufactured en masse using photolithography. We also propose the creation of a new media type, which we call pario. The idea behind pario is to render arbitrary moving, physical 3-dimensional objects that you can see, touch, and even hold in your hands. In parallel with our hardware effort, we are developing novel distributed programming languages and algorithms, LDP and Meld, to control the ensembles. Pario may fundamentally change how we communicate with others and interact with the world around us. Our research results to date suggest that there is a viable path to implementing both the hardware and software necessary for claytronics, which is a form of programmable matter that can be used to implement pario. While we have made significant progress, there is still much research ahead in order to turn this vision into reality.
APA, Harvard, Vancouver, ISO, and other styles
9

Reddy, D. Akash, T. Venkat Raju, and V. Shashank. "Audio Assistant Based Image Captioning System Using RLSTM and CNN." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 1864–67. http://dx.doi.org/10.22214/ijraset.2022.44289.

Full text
Abstract:
As we know, visually impaired or partially sighted people face a lot of problems reading or identifying any local scenarios. To vanquish this situation, we developed an audio-based image captioner that identifies the objects in an image and forms a meaningful sentence, giving the output in aural form. Image processing is a widely used method for developing many new applications. It is also open source, so developers can use it easily. We used NLP (Natural Language Processing) to understand the description of an image and convert the text to speech. A combination of R-LSTM and CNN is used; R-LSTM is a reference-based long short-term memory that matches different text data, takes it as a reference, and gives the output. Some other applications of image captioning are social media platforms like Instagram, virtual assistants, and video editing software.
APA, Harvard, Vancouver, ISO, and other styles
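The abstract names a CNN image encoder combined with a reference-based LSTM (R-LSTM) and speech output. As a rough orientation, the standard CNN-encoder/LSTM-decoder "merge" captioner that such systems build on can be sketched as below; all sizes are assumed, and the R-LSTM reference-matching step is omitted:

```python
# Sketch of a merge-style image captioner: pre-extracted CNN features
# plus a partial caption predict the next caption word. Sizes are
# assumptions, not the paper's architecture.
import tensorflow as tf
from tensorflow.keras import layers, Model

vocab_size, max_len, feat_dim = 5000, 20, 2048  # assumed

# Image branch: feature vector from a pretrained CNN encoder.
img_in = layers.Input(shape=(feat_dim,))
img_emb = layers.Dense(256, activation="relu")(img_in)

# Text branch: the caption generated so far, fed to an LSTM.
txt_in = layers.Input(shape=(max_len,))
txt_emb = layers.Embedding(vocab_size, 256, mask_zero=True)(txt_in)
txt_feat = layers.LSTM(256)(txt_emb)

# Merge both branches and predict the next word of the caption.
merged = layers.add([img_emb, txt_feat])
hidden = layers.Dense(256, activation="relu")(merged)
out = layers.Dense(vocab_size, activation="softmax")(hidden)

model = Model([img_in, txt_in], out)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
```

The completed caption string would then be passed to a text-to-speech engine to produce the aural output the abstract describes.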

Books on the topic "Long-form Audio"

1

Stevenson, Robert Louis. Treasure Island: Audio IBook. Primedia eLaunch LLC, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Stevenson, Robert Louis. Treasure Island. Livello 3 (A2). Con CD Audio. Helbling Languages GmbH, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Stevenson, Robert Louis. Treasure Island (Classic Audios). Hodder Audio, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Long-form Audio"

1

Hartsock, Ralph, and Daniel G. Alemneh. "Electronic Theses and Dissertations (ETDs)." In Encyclopedia of Information Science and Technology, Fourth Edition, 6748–55. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-2255-3.ch584.

Full text
Abstract:
Electronic Theses and Dissertations (ETDs) have been a recent addition to the library's online access system, or digital project. This chapter traces the history of dissertations, from their printed form to their issuance in microform by various agencies. It examines the changes in textual content and its presentation from pre-digital to digitized documents, and the relation to software developed for music and other fields. It then examines the evolution of audio and video formats for the accompanying materials, particularly in the performing arts, and the content of these materials. It concludes with issues in ETD management and ensuring long-term access and preservation, such as digital quality and copyright.
APA, Harvard, Vancouver, ISO, and other styles
2

Hartsock, Ralph, and Daniel G. Alemneh. "Electronic Theses and Dissertations (ETDs)." In Advances in Library and Information Science, 543–51. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7659-4.ch043.

Full text
Abstract:
Electronic theses and dissertations (ETDs) have been a recent addition to the library's online access system, or digital project. This chapter traces the history of dissertations, from their printed form to their issuance in microform by various agencies. It examines the changes in textual content and its presentation from pre-digital to digitized documents, and the relation to software developed for music and other fields. It then examines the evolution of audio and video formats for the accompanying materials, particularly in the performing arts, and the content of these materials. It concludes with issues in ETD management and ensuring long-term access and preservation, such as digital quality and copyright.
APA, Harvard, Vancouver, ISO, and other styles
3

Blankenship, Rebecca J. "Extended Reality (XR) Teaching in the Era of Deepfakes." In Deep Fakes, Fake News, and Misinformation in Online Teaching and Learning Technologies, 24–38. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-6474-5.ch002.

Full text
Abstract:
The use of existing and emerging technologies in teaching and learning provides the opportunity to present subject-area content using devices, programs, and venues in ways that promote higher-order thinking and long-term retention. In the last decade, advances in artificial intelligence (AI) have resulted in the development of virtual reality programs that enable end-users to interact with content in third and fourth-dimensional interactive spaces (i.e., extended reality or XR). The transformation beyond traditional face-to-face or two-dimensional teaching and learning has resulted in an unforeseen digital side effect. The digital side effect presents in the form of deepfakes or the deliberate alteration of audio and visual content to advance a specific point of view. The premise of this chapter is to present a primer using the technological, pedagogical, and content knowledge (TPACK) and levels of use construct to mitigate the presence of deepfake and malinformation in subject-area content when working in XR environments.
APA, Harvard, Vancouver, ISO, and other styles
4

Daub, Adrian. "Epilogue." In What the Ballad Knows, 263—C9.N8. Oxford University PressNew York, 2022. http://dx.doi.org/10.1093/oso/9780190885496.003.0010.

Full text
Abstract:
The link between the ballad and nationalism was largely derived from the idea that the ballad preserved something: an oral culture about to vanish in an age of print, a popular trove of legends and tales that was on the verge of being erased by "high" national cultures, an authentic life of an organic community. To repeat a ballad, in however modernized and adapted a form, was to some extent to hearken back to the ballad as record. By the turn of the twentieth century, however, balladic records in this sense collided with a simple technological innovation: ballads, long preserved through recitation, became physical records. Whether the song settings of Carl Loewe and Franz Schubert, or the canonic ballad texts as spoken word, German ballads became easy and natural candidates for audio recording technologies as soon as they emerged. This chapter traces out the tensions between these two kinds of balladic record: the living, breathing, organic memory created by recitation and repetition on the one hand, and the mechanical churn of the cylinder or record on the other.
APA, Harvard, Vancouver, ISO, and other styles
5

Kang, Hang-Bong. "Video Abstraction Techniques for a Digital Library." In Distributed Multimedia Databases, 120–32. IGI Global, 2002. http://dx.doi.org/10.4018/978-1-930708-29-7.ch008.

Full text
Abstract:
The abstraction of a long video is often useful to a user in determining whether the video is worth viewing or not. In particular, video abstraction guarantees users of digital libraries fast, safe and reliable access to video data. Two approaches are possible in video abstraction: summary sequences and highlights. Summary sequences are good for documentaries because they give an overview of the contents of the entire video, whereas highlights are good for movie trailers because they contain only the most interesting video segments. A video abstraction is generated in three steps: analyzing the video, selecting video clips, and synthesizing the output. In the analysis step, salient features, structures, or patterns in visual, audio, and textual information are detected. In the selection step, meaningful clips are selected from the features detected in the previous step. In the synthesis step, the selected video clips are composed into the final form of the abstract. In this chapter, we will discuss various video abstraction techniques for digital libraries. In addition, we will also discuss a context-based video abstraction method in which contextual information of the video shot is computed. This method is useful in generating highlights because the contextual information of the video shot reflects semantics in the video data.
APA, Harvard, Vancouver, ISO, and other styles
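Of the three steps the abstract lists, the analysis step is the most mechanical; a minimal sketch of one common salient-feature detector (shot-boundary detection from colour-histogram change, using OpenCV) is shown below. The threshold is an assumption, and this illustrates the general technique rather than the chapter's specific method:

```python
# Minimal sketch of the "analyzing video" step: detect shot
# boundaries from frame-to-frame colour-histogram change.
import cv2

def shot_boundaries(path, threshold=0.5):
    cap = cv2.VideoCapture(path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # 8x8x8-bin BGR histogram as a cheap frame signature.
        hist = cv2.calcHist([frame], [0, 1, 2], None,
                            [8, 8, 8], [0, 256] * 3)
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Correlation near 1 means similar frames; a drop marks a cut.
            sim = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if sim < threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```

Clips selected around these boundaries could then be concatenated into a summary sequence or highlight reel in the synthesis step.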
6

Kumar Singh, Pritam, Swades Kumar Chaulya, and Vinod Kumar Singh. "Intelligent Mine Periphery Surveillance using Microwave Radar." In Mining Technology [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.100521.

Full text
Abstract:
This paper deals with an intelligent mine periphery surveillance system developed by CSIR-Central Institute of Mining and Fuel Research, Dhanbad, India, as an aid for keeping constant vigilance over a selected area even in adverse weather conditions such as fog, rain and dust. The developed system consists of a frequency-modulated continuous-wave radar, a pan-tilt camera, a wireless sensor network, a fast dedicated graphics processing unit, and a display unit. It can spot an unauthorized vehicle or person entering the opencast mine area, thereby averting a threat to safety and security in the area. When an intrusion is detected, the system automatically gives an audio-visual warning at the intrusion site where the radar is installed as well as in the control room. The system has the facility to record the intrusion data as well as video footage with timestamped events in the form of a log. Further, the system has a long-range detection capability covering around 400 m, with an integration facility using a dynamic wireless sensor network for deploying multiple systems to protect the extended periphery of an opencast mine. A field trial of this low-cost mine periphery surveillance system has been carried out at the Tirap Opencast Coal Mine of North Eastern Coalfields in Margherita Area, Assam, India, and it has proved its efficacy in preventing revenue loss due to illicit mining and unauthorized transportation of minerals, and in ensuring the safety and security of the mine to a great extent.
APA, Harvard, Vancouver, ISO, and other styles
7

Case, Thomas L., Geoffrey N. Dick, and Craig Van Slyke. "Expediting Personalized Just-in-Time Training with E Learning Management Systems." In Encyclopedia of Human Resources Information Systems, 378–85. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-59904-883-3.ch056.

Full text
Abstract:
E-learning may be described as the utilization of technology to support the delivery of education. Although e-learning has been around for a long time, the use of the "e" in front of "learning" began soon after the start of using the "e" in front of other terms such as "commerce," "business," and "governance." More than 25 years ago, training firms began bringing students into training centers and sitting them in front of terminals hooked to boxes equipped with headphones. Training center staff would assist trainees in inserting video disks that included lessons on new products, processes, or programs. Training sessions typically lasted two or three hours or more. This was e-learning in its infancy, and it was well received by students because they could get the training they needed when they wanted it; they no longer had to wait for the next instructor-led class scheduled for months in the future. E-learning also has roots in distance education (DE), the process of providing education where the instruction and learning are in different physical locations (Kelly, 2000). Historically, distance education first emerged in the form of correspondence courses; materials would be mailed to students, who would complete readings, reports, and exams and mail them back to course instructors to be evaluated. Television, videotaping, and satellite broadcasting allowed distance education to expand beyond textbooks and printed materials. Using these technologies, learners could experience a classroom-like environment without physically attending class. However, expensive production environments were required to achieve such learning experiences. Computer-based training (CBT) technologies are other precursors of e-learning. These evolved during the 1980s, but because early multimedia development tools were primitive and hardware-dependent, the cost associated with CBT delivery was too high to foster widespread adoption. CBT growth was also limited by the need to physically distribute new training media such as CDs whenever updates to training content were made. Today, intranets and the public Internet make it unnecessary for learners to travel to training centers because similar types of learning can be delivered directly to the desktop. Learning can take place 24/7 at locations and times that are most convenient to the learner. Intranets and the Internet provide a low-cost medium for content delivery and a cost-effective course development environment. Streaming video and audio are increasingly used to enliven the training/learning experience. Today's e-learning technologies also enable trainers to simulate the environment in which learning will be applied and to provide the practice needed to master context-specific skills. Training content is now being personalized to ensure that individual students complete only the learning modules that they need or want. And the development of systems to manage such learning is now producing world-class training program content from mixtures of internal and external expertise.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Long-form Audio"

1

Kanda, Naoyuki, Xiong Xiao, Jian Wu, Tianyan Zhou, Yashesh Gaur, Xiaofei Wang, Zhong Meng, Zhuo Chen, and Takuya Yoshioka. "A Comparative Study of Modular and Joint Approaches for Speaker-Attributed ASR on Monaural Long-Form Audio." In 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE, 2021. http://dx.doi.org/10.1109/asru51503.2021.9687974.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cano, Jaime A., and Calvin M. Stewart. "Application of the Wilshire Stress-Rupture and Minimum-Creep-Strain-Rate Prediction Models for Alloy P91 in Tube, Plate and Pipe Form." In ASME Turbo Expo 2019: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/gt2019-90625.

Full text
Abstract:
There exists a challenge in predicting the long-term creep of materials (≥10⁵ hours), where 11+ years of continuous testing would be required to physically collect creep data. As an alternative to physical testing, constitutive models are calibrated to short-term data (<10⁴ hours) and employed to extrapolate the long-term creep behavior. The Wilshire model was introduced to predict the stress-rupture and minimum-creep-strain-rate behavior of materials, and the model is well accepted due to its explicit description of stress and temperature dependence, allowing predictions across isotherms and stress levels. There is an ongoing effort to determine how alloy form affects the long-term creep predictions of the Wilshire model. In this study, stress-rupture and minimum-creep-strain-rate predictions are generated for alloy P91 in tube, plate, and pipe form. Data is gathered from the National Institute for Materials Science (NIMS) materials database for alloy P91 at multiple isotherms. Following the established calibration method for the Wilshire model, post-audit validation is performed using short-term data from NIMS to vet the extrapolation accuracy of each form at different isotherms. The Wilshire model demonstrates successful extrapolation of the stress-rupture and minimum-creep-strain-rate behavior of the tube, plate, and pipe forms across multiple isotherms. Overall, the form with the highest extrapolative accuracy for both stress-rupture and minimum-creep-strain-rate is the plate, and the lowest is the pipe. Stress-rupture design maps are provided in which stress and temperature are the axes and rupture time is in contour. The design maps can be applied to: (a) given the boundary conditions, determine the design life; or (b) given the design life, determine the acceptable range of boundary conditions. The latter is more useful in turbomachinery design.
APA, Harvard, Vancouver, ISO, and other styles
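For orientation, the Wilshire stress-rupture and minimum-creep-strain-rate relations referenced in the abstract are usually written in the following form (reproduced from the general Wilshire-equation literature, not from this paper's calibration), where σ_TS is the tensile strength, Q*c an apparent activation energy, R the gas constant, T temperature, and k1, k2, u, v fitted constants:

```latex
% Stress-rupture form: normalised stress vs. temperature-compensated
% rupture time t_f.
\frac{\sigma}{\sigma_{TS}} = \exp\left\{-k_1\left[t_f\,\exp\!\left(-\frac{Q_c^*}{RT}\right)\right]^{u}\right\}

% Minimum-creep-strain-rate form: the same shape with the compensated
% minimum creep rate \dot{\varepsilon}_m.
\frac{\sigma}{\sigma_{TS}} = \exp\left\{-k_2\left[\dot{\varepsilon}_m\,\exp\!\left(\frac{Q_c^*}{RT}\right)\right]^{v}\right\}
```

Normalising by σ_TS is what lets a single calibration extrapolate across isotherms and stress levels, which is the property the study exploits for its post-audit validation.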
3

Talbot, Thomas Brett, and Chinmay Chinara. "Open Medical Gesture: An Open-Source Experiment in Naturalistic Physical Interactions for Mixed and Virtual Reality Simulations." In 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1002054.

Full text
Abstract:
Mixed (MR) and Virtual Reality (VR) simulations are hampered by requirements for hand controllers or by attempts to perseverate in the use of two-dimensional computer interface paradigms from the 1980s. From our efforts to produce more naturalistic interactions for combat medic training for the military, we have developed an open-source toolkit that enables direct, hand-controlled, responsive interactions, is sensor independent, and can function with depth-sensing cameras, webcams or sensory gloves. From this research and a review of the current literature, we have discerned several best approaches for hand-based human-computer interactions which provide intuitive, responsive, useful, and low-frustration experiences for VR users. The center of an effective gesture system is a universal hand model that can map to inputs from several different kinds of sensors rather than depending on a specific commercial product. Parts of the hand are effectors in simulation space with a physics-based model. Therefore, translational and rotational forces from the hands will impact physical objects in VR, varying with the mass of the virtual objects. We incorporate computer code with objects, calling them "Smart Objects", which allows such objects to have movement properties and collision detection for expected manipulation. Examples of smart objects include scissors, a ball, a turning knob, a moving lever, or a human figure with moving limbs. Articulation points contain collision detectors and code to assist in expected hand actions. We include a library of more than 40 Smart Objects in the toolkit. Thus, it is possible to throw a ball, hit that ball with a bat, cut a bandage, turn on a ventilator or lift and inspect a human arm.

We mediate the interaction of the hands with virtual objects. Hands often violate the rules of a virtual world simply by passing through objects, so one must interpret user intent. This can be achieved by introducing stickiness of the hands to objects. If the human's hands overshoot an object, we place the hand onto that object's surface unless the hand passes the object by a significant distance. We also make hands and fingers contact an object according to the object's contours and do not allow fingers to sink into the interior of an object. Haptics, or a sense of physical resistance and tactile sensation from contacting physical objects, is a supremely difficult technical challenge and an expensive pursuit. Our approach ignores true haptics, but we have experimented with an alternative approach, called audio tactile synesthesia, where we substitute the sensation of touch with sound. The idea is to associate parts of each hand with a tone of a specific frequency upon contacting objects. The attack rate of the sound envelope varies with the velocity of contact and the hardness of the object being 'touched'. Such sounds can feel softer or harder depending on the nature of the 'touch' being experienced. This substitution technique can provide tactile feedback through indirect, yet still naturalistic, means. The artificial intelligence (AI) technique used to determine discrete hand gestures and motions within the physical space is a special form of AI called Long Short-Term Memory (LSTM). LSTM allows much faster and more flexible recognition than other machine learning approaches and is particularly effective with points in motion; latency of recognition is very low. In addition to LSTM, we apply other synthetic vision and object recognition AI to the discrimination of real-world objects.

This allows for methods to conduct virtual simulations. For example, it is possible to pick up a virtual syringe and inject a medication into a virtual patient through hand motions. We track the hand points to contact with the virtual syringe. We also detect when the hand is compressing the syringe plunger. We could also use virtual medications and instruments on human actors or manikins, not just on virtual objects. With object recognition AI, we can place a syringe on a tray in the physical world; the human user can pick up the syringe and use it on a virtual patient. Thus, we are able to blend physical and virtual simulation together seamlessly in a highly intuitive and naturalistic manner.

The techniques and technologies explained here represent a baseline capability whereby interacting in mixed and virtual reality can now be much more natural and intuitive than it has ever been. We have passed a threshold where we can do away with game controllers and magnetic trackers for VR, and this advancement will contribute to greater adoption of VR solutions. To foster this, our team has committed to freely sharing these technologies for all purposes and at no cost as an open-source tool. We encourage the scientific, research, educational and medical communities to adopt these resources, determine their effectiveness, and utilize these tools and practices to grow the body of useful VR applications.
APA, Harvard, Vancouver, ISO, and other styles
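The abstract treats hand keypoints in motion as a sequence problem for an LSTM. A minimal sketch of that idea in Keras follows; the 21-keypoint hand convention and all sizes are assumptions, not the toolkit's actual values:

```python
# Sketch of LSTM gesture recognition over hand-keypoint sequences,
# of the kind the abstract describes for low-latency discrimination
# of discrete gestures. 21 keypoints x 3 coordinates per frame follows
# common hand-tracking conventions; all sizes are assumptions.
import tensorflow as tf

n_gestures, frames, feat = 12, 30, 21 * 3

model = tf.keras.Sequential([
    tf.keras.Input(shape=(frames, feat)),         # window of hand poses
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(32),                     # summarise the motion
    tf.keras.layers.Dense(n_gestures, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam", metrics=["accuracy"])

# Inference on a live 30-frame buffer of keypoints (shape: frames x feat):
#   gesture_id = model.predict(buffer[None, ...]).argmax()
```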

Reports on the topic "Long-form Audio"

1

Rankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust, and Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, October 2019. http://dx.doi.org/10.57022/clzt5093.

Full text
Abstract:
Background: Lung cancer is the number one cause of cancer death worldwide.(1) It is the fifth most commonly diagnosed cancer in Australia (12,741 cases diagnosed in 2018) and the leading cause of cancer death.(2) The number of years of potential life lost to lung cancer in Australia is estimated to be 58,450, similar to that of colorectal and breast cancer combined.(3) While tobacco control strategies are most effective for disease prevention in the general population, early detection via low-dose computed tomography (LDCT) screening in high-risk populations is a viable option for detecting asymptomatic disease in current (13%) and former (24%) Australian smokers.(4) The purpose of this Evidence Check review is to identify and analyse existing and emerging evidence for LDCT lung cancer screening in high-risk individuals to guide future program and policy planning.

Evidence Check questions: This review aimed to address the following questions: 1. What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? 2. What is the evidence of potential harms from lung cancer screening for higher-risk individuals? 3. What are the main components of recent major lung cancer screening programs or trials? 4. What is the cost-effectiveness of lung cancer screening programs (including studies of cost-utility)?

Summary of methods: The authors searched the peer-reviewed literature across three databases (MEDLINE, PsycINFO and Embase) for existing systematic reviews and original studies published between 1 January 2009 and 8 August 2019. Fifteen systematic reviews (of which 8 were contemporary) and 64 original publications met the inclusion criteria set across the four questions.

Key findings

Question 1: What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? There is sufficient evidence from systematic reviews and meta-analyses of combined (pooled) data from screening trials (of high-risk individuals) to indicate that LDCT examination is clinically effective in reducing lung cancer mortality. In 2011, the landmark National Lung Screening Trial (NLST, a large-scale randomised controlled trial [RCT] conducted in the US) reported a 20% (95% CI 6.8%–26.7%; P=0.004) relative reduction in mortality among long-term heavy smokers over three rounds of annual screening. High-risk eligibility criteria were defined as people aged 55–74 years with a smoking history of ≥30 pack-years (years in which a smoker has consumed 20-plus cigarettes each day) and, for former smokers, ≥30 pack-years and having quit within the past 15 years.(5) All-cause mortality was reduced by 6.7% (95% CI, 1.2%–13.6%; P=0.02). Initial data from the second landmark RCT, the NEderlands-Leuvens Longkanker Screenings ONderzoek (known as the NELSON trial), have found an even greater reduction of 26% (95% CI, 9%–41%) in lung cancer mortality, with full trial results yet to be published.(6, 7) Pooled analyses, including several smaller-scale European LDCT screening trials insufficiently powered in their own right, collectively demonstrate a statistically significant reduction in lung cancer mortality (RR 0.82, 95% CI 0.73–0.91).(8) Despite the reduction in all-cause mortality found in the NLST, pooled analyses of seven trials found no statistically significant difference in all-cause mortality (RR 0.95, 95% CI 0.90–1.00).(8) However, cancer-specific mortality is currently the most relevant outcome in cancer screening trials. These seven trials demonstrated a significantly greater proportion of early stage cancers in LDCT groups compared with controls (RR 2.08, 95% CI 1.43–3.03). Thus, when considering results across mortality outcomes and early stage cancers diagnosed, LDCT screening is considered to be clinically effective.

Question 2: What is the evidence of potential harms from lung cancer screening for higher-risk individuals? The harms of LDCT lung cancer screening include false positive tests and the consequences of unnecessary invasive follow-up procedures for conditions that are eventually diagnosed as benign. While LDCT screening leads to an increased frequency of invasive procedures, it does not result in greater mortality soon after an invasive procedure (in trial settings when compared with the control arm).(8) Overdiagnosis, exposure to radiation, psychological distress and an impact on quality of life are other known harms. Systematic review evidence indicates the benefits of LDCT screening are likely to outweigh the harms. The potential harms are likely to be reduced as refinements are made to LDCT screening protocols through: i) the application of risk prediction models (e.g. the PLCOm2012), which enable a more accurate selection of the high-risk population through the use of specific criteria (beyond age and smoking history); ii) the use of nodule management algorithms (e.g. Lung-RADS, PanCan), which assist in the diagnostic evaluation of screen-detected nodules and cancers (e.g. more precise volumetric assessment of nodules); and iii) more judicious selection of patients for invasive procedures. Recent evidence suggests a positive LDCT result may transiently increase psychological distress but does not have long-term adverse effects on psychological distress or health-related quality of life (HRQoL). With regards to smoking cessation, there is no evidence to suggest screening participation invokes a false sense of assurance in smokers, nor a reduction in motivation to quit. The NELSON and Danish trials found no difference in smoking cessation rates between LDCT screening and control groups. Higher net cessation rates, compared with the general population, suggest those who participate in screening trials may already be motivated to quit.

Question 3: What are the main components of recent major lung cancer screening programs or trials? There are no systematic reviews that capture the main components of recent major lung cancer screening trials and programs. We extracted evidence from original studies and clinical guidance documents and organised this into key groups to form a concise set of components for potential implementation of a national lung cancer screening program in Australia:
1. Identifying the high-risk population: recruitment, eligibility, selection and referral
2. Educating the public, people at high risk and healthcare providers; this includes creating awareness of lung cancer, the benefits and harms of LDCT screening, and shared decision-making
3. Components necessary for health services to deliver a screening program: a. Planning phase, e.g. human resources to coordinate the program, electronic data systems that integrate medical records information and link to an established national registry; b. Implementation phase, e.g. human and technological resources required to conduct LDCT examinations, interpretation of reports and communication of results to participants; c. Monitoring and evaluation phase, e.g. monitoring outcomes across patients, radiological reporting, compliance with established standards and a quality assurance program
4. Data reporting and research, e.g. audit and feedback to multidisciplinary teams, reporting outcomes to enhance international research into LDCT screening
5. Incorporation of smoking cessation interventions, e.g. specific programs designed for LDCT screening or referral to existing community or hospital-based services that deliver cessation interventions.
Most original studies are single-institution evaluations that contain descriptive data about the processes required to establish and implement a high-risk population-based screening program. Across all studies there is a consistent message as to the challenges and complexities of establishing LDCT screening programs to attract people at high risk who will receive the greatest benefits from participation. With regards to smoking cessation, evidence from one systematic review indicates the optimal strategy for incorporating smoking cessation interventions into a LDCT screening program is unclear. There is widespread agreement that LDCT screening attendance presents a 'teachable moment' for cessation advice, especially among those people who receive a positive scan result. Smoking cessation is an area of significant research investment; for instance, eight US-based clinical trials are now underway that aim to address how best to design and deliver cessation programs within large-scale LDCT screening programs.(9)

Question 4: What is the cost-effectiveness of lung cancer screening programs (including studies of cost-utility)? Assessing the value or cost-effectiveness of LDCT screening involves a complex interplay of factors including data on effectiveness and costs, and institutional context. A key input is data about the effectiveness of potential and current screening programs with respect to case detection, and the likely outcomes of treating those cases sooner (in the presence of LDCT screening) as opposed to later (in the absence of LDCT screening). Evidence about the cost-effectiveness of LDCT screening programs has been summarised in two systematic reviews. We identified a further 13 studies (five modelling studies, one discrete choice experiment and seven articles) that used a variety of methods to assess cost-effectiveness. Three modelling studies indicated LDCT screening was cost-effective in the settings of the US and Europe. Two studies, one from Australia and one from New Zealand, reported LDCT screening would not be cost-effective using NLST-like protocols. We anticipate that, following the full publication of the NELSON trial, cost-effectiveness studies will likely be updated with new data that reduce uncertainty about factors that influence modelling outcomes, including the findings of indeterminate nodules.

Gaps in the evidence: There is a large and accessible body of evidence as to the effectiveness (Q1) and harms (Q2) of LDCT screening for lung cancer. Nevertheless, there are significant gaps in the evidence about the program components that are required to implement an effective LDCT screening program (Q3). Questions about LDCT screening acceptability and feasibility were not explicitly included in the scope. However, as the evidence is based primarily on US programs and UK pilot studies, the relevance to the local setting requires careful consideration. The Queensland Lung Cancer Screening Study provides feasibility data about clinical aspects of LDCT screening but little about program design. The International Lung Screening Trial is still in the recruitment phase and findings are not yet available for inclusion in this Evidence Check. The Australian Population Based Screening Framework was developed to "inform decision-makers on the key issues to be considered when assessing potential screening programs in Australia".(10) As the Framework is specific to population-based, rather than high-risk, screening programs, there is a lack of clarity about the transferability of criteria. However, the Framework criteria do stipulate that a screening program must be acceptable to "important subgroups such as target participants who are from culturally and linguistically diverse backgrounds, Aboriginal and Torres Strait Islander people, people from disadvantaged groups and people with a disability".(10) An extensive search of the literature highlighted that there is very little information about the acceptability of LDCT screening to these population groups in Australia. Yet they are part of the high-risk population.(10) There are also considerable gaps in the evidence about the cost-effectiveness of LDCT screening in different settings, including Australia. The evidence base in this area is rapidly evolving and is likely to include new data from the NELSON trial and incorporate data about the costs of targeted and immuno-therapies as these treatments become more widely available in Australia.
APA, Harvard, Vancouver, ISO, and other styles