Dissertations / Theses on the topic 'Influence of audio signal'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Influence of audio signal.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Saruwatari, Hiroshi. "Blind Signal Separation of Audio Signals." Intelligent Media Integration, Nagoya University / COE, 2006. http://hdl.handle.net/2237/10406.
Scott, Hugh R. R. "Multiresolution techniques for audio signal restoration." Thesis, University of Warwick, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307347.
Ясунова, Масума Пулатівна. "Метод оцінки інтегральної активності ЕЕГ під впливом аудіо сигналів" [A method for assessing integral EEG activity under the influence of audio signals]. Bachelor's thesis, КПІ ім. Ігоря Сікорського, 2021. https://ela.kpi.ua/handle/123456789/43674.
The report comprises 57 pages and contains 32 illustrations, 14 tables, and 2 annexes. In total, 36 sources were used. The relevance of this work lies in determining how the bioelectric activity of the brain depends on the amplitude-frequency characteristics of a sound signal. Music accompanies everyday life, so it is important to determine what impact it has on the electrical activity of the brain and how indicators of brain activity change when listening to musical signals. Purpose: to determine how effectively audio signals of different amplitude-frequency composition change the integral activity of the brain. To achieve this goal, the following tasks were formulated: 1. Analyze the amplitude-frequency characteristics of the selected audio signals; 2. Investigate the change in EEG rhythms under the influence of the selected audio signals; 3. Investigate the influence of the frequency characteristics of the audio signal on the integral electrical activity of the brain.
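The kind of analysis this abstract describes, comparing EEG rhythm (band) power under an audio stimulus, can be illustrated with a short sketch. The band edges, sampling rate, and test signal below are assumptions for illustration only and are not taken from the thesis.

```python
import numpy as np

# Conventional EEG rhythm bands in Hz (assumed; the thesis may use different edges).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs):
    """Relative power of each EEG rhythm for a 1-D signal sampled at fs Hz."""
    eeg = np.asarray(eeg, dtype=float)
    eeg = eeg - eeg.mean()                      # remove DC offset
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2    # unnormalised periodogram
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    total = spectrum[(freqs >= 0.5) & (freqs <= 30)].sum()
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = spectrum[mask].sum() / total
    return powers

# Example: relative band powers of a 10-second noise segment at an assumed 250 Hz rate.
fs = 250
rng = np.random.default_rng(0)
baseline = rng.standard_normal(fs * 10)
print(band_powers(baseline, fs))
```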
Chiu, Leung Kin. "Efficient audio signal processing for embedded systems." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44775.
Wellhausen, Jens. "Algorithms for audio signal segmentation and separation." Aachen : Shaker, 2007. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=016149157&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.
Kwong, Mylène. "Détection de transitoires dans un signal audio." Sherbrooke : Université de Sherbrooke, 2004.
Moinnereau, Marc-Antoine. "Encodage d'un signal audio dans un électroencéphalogramme." Mémoire, Université de Sherbrooke, 2017. http://hdl.handle.net/11143/10554.
Kwong, Mylène. "Détection de transitoires dans un signal audio." Mémoire, Université de Sherbrooke, 2004. http://savoirs.usherbrooke.ca/handle/11143/1254.
Anderson, David Verl. "Audio signal enhancement using multi-resolution sinusoidal modeling." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/15394.
Carlo, Diego Di. "Echo-aware signal processing for audio scene analysis." Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S075.
Most audio signal processing methods regard reverberation, and in particular acoustic echoes, as a nuisance. However, echoes convey important spatial and semantic information about sound sources, and recent echo-aware methods build on this. In this work we focus on two directions. First, we study how to estimate acoustic echoes blindly from microphone recordings. Two approaches are proposed: one leveraging continuous dictionaries, the other using recent deep learning techniques. Then, we focus on extending existing methods in audio scene analysis to their echo-aware forms. The multichannel NMF framework for audio source separation, the SRP-PHAT localization method, and the MVDR beamformer for speech enhancement are all extended to their echo-aware versions.
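The abstract names the MVDR beamformer among the methods given echo-aware extensions. As a point of reference only, a textbook narrowband MVDR weight computation (not the echo-aware variant developed in the thesis) can be sketched as follows; the covariance matrix and steering vector are assumed inputs.

```python
import numpy as np

def mvdr_weights(R, d):
    """Narrowband MVDR weights: w = R^{-1} d / (d^H R^{-1} d).

    R : (M, M) spatial covariance of the M microphone signals at one frequency bin.
    d : (M,) steering vector toward the desired source.
    """
    R = R + 1e-6 * np.trace(R) / R.shape[0] * np.eye(R.shape[0])  # diagonal loading
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Toy example: 4 microphones, a covariance estimated from random data, broadside steering.
M = 4
rng = np.random.default_rng(1)
X = rng.standard_normal((M, 200)) + 1j * rng.standard_normal((M, 200))
R = X @ X.conj().T / X.shape[1]
d = np.ones(M, dtype=complex)
w = mvdr_weights(R, d)
print(np.abs(w.conj() @ d))   # distortionless (unit) response toward d
```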
Ekström, Mattias. "Acoustic feedback suppression in audio mixer for PA applications." Thesis, Umeå universitet, Institutionen för fysik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-136841.
When a speaker addresses an audience, a PA system consisting of a microphone and a loudspeaker is often used. If the microphone picks up too much of the sound from the loudspeaker, there is an imminent risk of acoustic feedback in the form of a characteristic, unwanted howl. Limes Audio is a company that develops software for improving sound quality in digital communication, primarily in conference telephony. They have developed a demonstration product, the Magnetomixer, which can be used as a conference phone to demonstrate their TrueVoice software. The company now wishes to extend the Magnetomixer so that it also works as an audio mixer for PA scenarios, or for conference telephony where in-room sound reinforcement is needed, and this requires a function for removing any feedback. This thesis aims to lay the foundation for such a function in the Magnetomixer by surveying the market and the literature in the field. Three methods for eliminating feedback are evaluated in simulations and compared with respect to maximum stable gain (MSG) and subjective sound quality. The acoustic feedback cancellation method, combined with a 5 Hz frequency shift of the loudspeaker signal, gave the highest MSG and the best sound quality. The method uses an adaptive filter to approximate the acoustic feedback path between the loudspeaker and the microphone, and removes feedback components from the microphone signal. In the simulations, the method increased the maximum stable gain by up to 10 dB while maintaining good speech quality.
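For orientation, the acoustic feedback cancellation approach described above (an adaptive filter estimating the loudspeaker-to-microphone path) can be sketched roughly as below. This is a minimal NLMS illustration with assumed parameters, not the thesis implementation, and it omits the 5 Hz frequency shift mentioned in the abstract.

```python
import numpy as np

def afc_nlms(mic, ls, taps=64, mu=0.1, eps=1e-8):
    """Acoustic feedback cancellation with an NLMS adaptive filter.

    mic : microphone samples (speech + feedback).
    ls  : loudspeaker samples.
    Returns the feedback-reduced microphone signal.
    """
    w = np.zeros(taps)                    # estimate of the feedback path
    out = np.zeros(len(mic))
    buf = np.zeros(taps)                  # most recent loudspeaker samples
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = ls[n]
        y_hat = w @ buf                   # predicted feedback component
        e = mic[n] - y_hat                # error = estimate of the desired signal
        out[n] = e
        w += mu * e * buf / (buf @ buf + eps)   # NLMS update
    return out

# Toy demo: the feedback path is a delayed, attenuated copy of the loudspeaker signal.
rng = np.random.default_rng(2)
ls = rng.standard_normal(4000)
feedback = 0.5 * np.concatenate([np.zeros(10), ls[:-10]])
speech = 0.1 * rng.standard_normal(4000)
cleaned = afc_nlms(speech + feedback, ls)
print(np.mean(feedback**2), np.mean((cleaned - speech)**2))  # residual feedback shrinks
```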
Papadopoulos, Hélène. "Estimation conjointe d'information de contenu musical d'un signal audio." PhD thesis, Université Pierre et Marie Curie - Paris VI, 2010. http://tel.archives-ouvertes.fr/tel-00548952.
Wang, Shuai. "Embedding data in an audio signal, using acoustic OFDM." Thesis, Linköpings universitet, Kommunikationssystem, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-71427.
Palmer, Duncan. "Position estimation using the Digital Audio Broadcast (DAB) signal." Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/12456/.
Gower, Ephraim. "Mathematical Analysis and Audio Applications in Blind Signal Decomposition." Thesis, University of Essex, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.520088.
Wellhausen, Jens. "Algorithms for Audio Signal Segmentation and Separation." Aachen : Shaker, 2007. http://d-nb.info/1166510050/34.
Savard, Patrick-André. "Méthode hybride de modification de durée d'un signal audio." Mémoire, Université de Sherbrooke, 2008. http://savoirs.usherbrooke.ca/handle/11143/1440.
Xiao, Zhongzhe. "Recognition of emotions in audio signals." Ecully : Ecole Centrale de Lyon, 2008. http://bibli.ec-lyon.fr/exl-doc/zxiao.pdf.
Huber, Rainer. "Objective assessment of audio quality using an auditory processing model." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=971182167.
De Campos Teixeira Gomes, Leandro. "Tatouage de signaux audio." Paris 5, 2002. http://www.theses.fr/2002PA05S009.
Amphlett, Robert W. "Multiprocessor techniques for high quality digital audio." Thesis, University of Bristol, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.337273.
Balraj, Navaneethakrishnan. "Automated Accident Detection in Intersections via Digital Audio Signal Processing." MSSTATE, 2003. http://sun.library.msstate.edu/ETD-db/theses/available/etd-10212003-102715/.
Lindström, Fredric. "Digital signal processing methods and algorithms for audio conferencing systems." Karlskrona : Department of Signal Processing, School of Engineering, Blekinge Institute of Technology, 2007. http://www.bth.se/fou/Forskinfo.nsf/allfirst2/9cc008f2fa400e82c12572bb00331533?OpenDocument.
Papadopoulos, Hélène. "Joint estimation of musical content information [from an audio signal]." Paris 6, 2010. http://www.theses.fr/2010PA066224.
Marchand, Ugo. "Caractérisation du rythme à partir de l'analyse du signal audio." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066453/document.
This thesis is within the scope of Music Information Retrieval. The goal of this research field is to extract meaningful information from music. There are numerous applications: music recommendation systems, music transcription to a score, or automatic generation of music. In this manuscript, our objective is to propose new rhythm descriptions inspired by perceptual and neurological studies. Rhythm representation of a musical signal is a complex problem. Detecting attack positions and note durations is not sufficient: we have to model the temporal interaction between the different instruments collaborating to create rhythm. We try to obtain representations that are invariant to some parameters (like the position over time, or small tempo or instrumentation variations) but sensitive to other parameters (like the rhythm pattern or the swing factor). We study the three key aspects of rhythm description: tempo, deviations, and rhythm pattern.
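As background to the rhythm descriptors discussed here, a common baseline for the tempo component is autocorrelation of an onset-strength envelope. The sketch below is that generic baseline with assumed parameter values, not the descriptors proposed in the thesis.

```python
import numpy as np

def tempo_from_onsets(onset_env, fs_env, bpm_range=(60, 200)):
    """Estimate tempo (BPM) from the autocorrelation of an onset-strength envelope."""
    env = onset_env - onset_env.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]   # non-negative lags only
    lo = int(round(60.0 / bpm_range[1] * fs_env))              # shortest beat period
    hi = int(round(60.0 / bpm_range[0] * fs_env))              # longest beat period
    best = lo + np.argmax(ac[lo:hi + 1])                       # strongest periodicity
    return 60.0 * fs_env / best

# Toy onset envelope: impulses every 0.5 s (120 BPM) at an assumed 100 Hz envelope rate.
fs_env = 100
env = np.zeros(1000)
env[::50] = 1.0
print(tempo_from_onsets(env, fs_env))   # ≈ 120
```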
Tegendal, Lukas. "Watermarking in Audio using Deep Learning." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159191.
Bando, Yoshiaki. "Robust Audio Scene Analysis for Rescue Robots." Kyoto University, 2018. http://hdl.handle.net/2433/232410.
Xiao, Zhongzhe. "Recognition of emotions in audio signals." Ecully, Ecole centrale de Lyon, 2008. http://www.theses.fr/2008ECDL0002.
This PhD thesis work is dedicated to automatic emotion/mood recognition in audio signals. Indeed, audio emotion is high-level semantic information, and its automatic analysis may have many applications, such as smart human-computer interactions or multimedia indexing. The purpose of this thesis is thus to investigate machine-based audio emotion analysis solutions for both speech and music signals. Our work makes use of a discrete emotional model combined with the dimensional one, and relies upon existing studies on acoustic correlates of emotional speech and music mood. The key contributions are the following. First, we have proposed, in complement to popular frequency-based and energy-based features, some new audio features, namely harmonic and Zipf features, to better characterize the timbre and prosodic properties of emotional speech. Second, as there exist very few emotional resources for either speech or music for machine learning, as compared to the number of audio features that one can extract, an evidence theory-based feature selection scheme named Embedded Sequential Forward Selection (ESFS) is proposed to deal with the classic "curse of dimensionality" problem and thus over-fitting. Third, using a manually built dimensional emotion model-based hierarchical classifier to deal with fuzzy borders of emotional states, we demonstrated that a hierarchical classification scheme performs better than the single global classifier mostly used in the literature. Furthermore, as there is no universal agreement on the definition of basic emotions and as emotional states are typically application dependent, we also proposed an ESFS-based algorithm for automatically building a hierarchical classification scheme (HCS) best adapted to a specific set of application-dependent emotional states. The HCS divides a complex classification problem into simpler and smaller problems by combining several binary sub-classifiers in the structure of a binary tree in several stages, and gives the result as the type of emotional state of the audio samples. Finally, to deal with the subjective nature of emotions, we also proposed an evidence theory-based ambiguous classifier allowing multiple emotion labels, as humans often do. The effectiveness of all these recognition techniques was evaluated on the Berlin and DES datasets for emotional speech recognition and on a music mood dataset that we collected in our laboratory, as no public dataset existed so far. Keywords: audio signal, emotion classification, music mood analysis, audio features, feature selection, hierarchical classification, ambiguous classification, evidence theory.
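The ESFS scheme mentioned in this abstract embeds an evidence-theory classifier inside sequential forward selection. The sketch below shows only the plain sequential-forward-selection skeleton with an arbitrary scoring function; the evidence-theory part, the hierarchical classifier, and the actual feature sets are not reproduced.

```python
import numpy as np

def forward_select(X, y, score_fn, max_features=10):
    """Plain sequential forward selection: greedily add the feature that best improves the score."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = -np.inf
    while remaining and len(selected) < max_features:
        scores = [(score_fn(X[:, selected + [j]], y), j) for j in remaining]
        score, j = max(scores)
        if score <= best_score:          # stop when no candidate improves the score
            break
        best_score = score
        selected.append(j)
        remaining.remove(j)
    return selected, best_score

# Toy usage: score a feature subset by the correlation of its mean with the labels.
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 20))
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(float)
score = lambda Xs, y: abs(np.corrcoef(Xs.mean(axis=1), y)[0, 1])
print(forward_select(X, y, score, max_features=3))
```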
Coulibaly, Patrice Yefoungnigui. "Codage audio à bas débit avec synthèse sinusoïdale." Sherbrooke : Université de Sherbrooke, 2001.
Khemiri, Houssemeddine. "Approche générique appliquée à l'indexation audio par modélisation non supervisée." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0055/document.
The amount of available audio data, such as broadcast news archives, radio recordings, music and song collections, podcasts, or various internet media, is constantly increasing. Therefore, many audio indexing techniques have been proposed to help users browse audio documents. Nevertheless, these methods are developed for a specific audio content, which makes them unsuitable for simultaneously treating audio streams where different types of audio documents coexist. In this thesis we report our recent efforts in extending the ALISP approach, developed for speech, into a generic method for audio indexing, retrieval and recognition. The particularity of ALISP tools is that no textual transcriptions are needed during the learning step: any input speech data is transformed into a sequence of arbitrary symbols, and these symbols can be used for indexing purposes. The main contribution of this thesis is the exploitation of the ALISP approach as a generic method for audio indexing. The proposed system consists of three steps: unsupervised training to acquire the ALISP HMM models; ALISP segmentation of the audio data using these models; and comparison of ALISP symbols using the BLAST algorithm and the Levenshtein distance. The evaluations of the proposed system are done on YACAST and other publicly available corpora for several audio indexing tasks.
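The final step described above compares ALISP symbol sequences with the Levenshtein distance. A minimal implementation of that distance, applied to hypothetical symbol strings, looks like this.

```python
import numpy as np

def levenshtein(a, b):
    """Edit distance between two symbol sequences (e.g. ALISP unit labels)."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1,         # deletion
                          d[i, j - 1] + 1,         # insertion
                          d[i - 1, j - 1] + cost)  # substitution
    return d[len(a), len(b)]

# Hypothetical ALISP-like transcriptions of a query and a reference segment.
print(levenshtein(list("HbKjQnZ"), list("HbKjQmZ")))   # -> 1
```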
Vemulapalli, Smita. "Audio-video based handwritten mathematical content recognition." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45958.
Nishibori, Kento, Yoshinori Takeuchi, Tetsuya Matsumoto, Hiroaki Kudo, and Noboru Ohnishi. "An Active Correspondence of Audio-Visual Events by using Motor signal." Intelligent Media Integration, Nagoya University / COE, 2005. http://hdl.handle.net/2237/10376.
Hübner, Sebastian Valentin. "Wissensbasierte Modellierung von Audio-Signal-Klassifikatoren zur Bioakustik von Tursiops truncatus." Potsdam : Univ.-Verl., 2006. http://d-nb.info/1000230988/34.
Fong, W. N. W. "Model-based methods for linear and non-linear audio signal enhancement." Thesis, University of Cambridge, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.599095.
Benjelloun Touimi, Abdellatif. "Traitement du signal audio dans le domaine codé : techniques et applications." Paris : École nationale supérieure des télécommunications, 2001. http://catalogue.bnf.fr/ark:/12148/cb388319544.
Lipstreu, William F. "Digital Signal Processing Laboratory Using Real-Time Implementations of Audio Applications." Cleveland, Ohio : Case Western Reserve University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=case1240836810.
Benjelloun Touimi, Abdellatif. "Traitement du signal audio dans le domaine codé : techniques et applications." Paris, ENST, 2001. http://www.theses.fr/2001ENST0018.
Olivero, Anaik. "Les multiplicateurs temps-fréquence : Applications à l'analyse et la synthèse de signaux sonores et musicaux." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4788/document.
Analysis/Transformation/Synthesis is a general paradigm in signal processing that aims at manipulating or generating signals for practical applications. This thesis deals with time-frequency representations obtained with Gabor atoms. In this context, the complexity of a sound transformation can be modeled by a Gabor multiplier. Gabor multipliers are linear diagonal operators acting on signals, and are characterized by a time-frequency transfer function of complex values, called the Gabor mask. Gabor multipliers formalize the concept of filtering in the time-frequency domain. As they act by multiplication in the time-frequency domain, they are a priori well adapted to produce sound transformations like timbre transformations. In a first part, this work models the problem of Gabor mask estimation between two given signals and provides algorithms to solve it. The Gabor multiplier between two signals is not uniquely defined, and the proposed estimation strategies are able to generate Gabor multipliers that produce signals with satisfying sound quality. In a second part, we show that a Gabor mask contains relevant information, as it can be viewed as a time-frequency representation of the difference of timbre between two given sounds. By averaging the energy contained in a Gabor mask, we obtain a measure of this difference that allows discriminating different musical instrument sounds. We also propose strategies to automatically localize the time-frequency regions responsible for such a timbre dissimilarity between musical instrument classes. Finally, we show that Gabor multipliers can be used to construct many sound morphing trajectories, and propose an extension.
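To make the notion of a Gabor multiplier concrete: it acts diagonally on a time-frequency representation, i.e. each STFT coefficient is multiplied by the corresponding entry of the Gabor mask before resynthesis. The sketch below uses a simple Hann-window STFT and an assumed low-pass mask; it illustrates the operator itself, not the mask-estimation algorithms of the thesis.

```python
import numpy as np

def gabor_multiplier(x, mask, frame=512):
    """Apply a Gabor multiplier: pointwise mask in the STFT domain, then overlap-add resynthesis.

    x    : 1-D signal.
    mask : (n_frames, frame // 2 + 1) array of gains, i.e. the "Gabor mask".
    """
    hop = frame // 2
    win = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(frame) / frame)  # periodic Hann (COLA at 50%)
    n_frames = (len(x) - frame) // hop + 1
    y = np.zeros(len(x))
    for m in range(n_frames):
        seg = x[m * hop : m * hop + frame] * win
        spec = np.fft.rfft(seg) * mask[m]           # diagonal action in time-frequency
        y[m * hop : m * hop + frame] += np.fft.irfft(spec, n=frame)
    return y

# An identity mask reconstructs the interior of the signal; a low-pass mask filters it.
fs, frame = 16000, 512
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 6000 * t)
n_frames = (len(x) - frame) // (frame // 2) + 1
lowpass = np.zeros((n_frames, frame // 2 + 1))
lowpass[:, : int(3000 / fs * frame)] = 1.0          # keep bins below ~3 kHz
y = gabor_multiplier(x, lowpass)
```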
Bland, Denise. "Alias-free signal processing of nonuniformly sampled signals." Thesis, University of Westminster, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322992.
Potard, Guillaume. "3D-audio object oriented coding." Access electronically, 2006. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20061109.111639/index.html.
Lee, Hyeon. "Spatial Audio for Bat Biosonar." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/99833.
While bats are one of the most intriguing creatures to the general population, they are also a popular subject of study in various disciplines. Their extraordinary ability to navigate and forage irrespective of clutter using echolocation has gotten attention from many scientists and engineers. Research investigating bats typically includes analysis of acoustic signals from microphones and/or microphone arrays. Using time difference of arrival (TDOA) between the array elements or the microphones is probably the most popular method to locate flying bats (azimuth and elevation). Microphone responses to transmitted signals and echoes near a bat provide sound pressure but no directional information. This dissertation proposes a complementary way to the current TDOA methods, that delivers directional information by introducing spatial audio techniques. This work shows a couple of feasible methods based on spatial audio techniques, that can both track bats in flight and pinpoint the directions of echoes received by a bat. An ultrasonic tetrahedral soundfield microphone is introduced as a measurement tool for sounds in the sonar frequency range (20-80 kHz) of the big brown bat (Eptesicus fuscus). Ambisonics, a signal processing technique used in three-dimensional (3D) audio applications, is used for the basic processing of the signals measured by the soundfield microphone. Ambisonics also reproduces a measured signal containing its directional properties. As the first method, a spatial audio decoding technique called HARPEx (High Angular Resolution Planewave Expansion) was used to build a system providing angle and elevation estimates. HARPEx can estimate the direction of arrivals (DOA) for up to two simultaneous sound sources. Experiments proved that the estimation system based on HARPEx provides accurate DOA estimates of static or moving sources. The performance of the system was also assessed using statistical analyses of simulations. Medians and RMSEs (root-mean-square error) of 10,000 simulations for each simulation case represent the accuracy and precision of the estimations, respectively. Results show shorter distance between a capsule and the soundfield microphone center, or/and higher SNR (signal-to-noise ratio) are required to achieve higher performance. For the second method, the matched-filter technique is used to build another estimation system. This is a sonar-like estimation system that provides information of the target (range, direction, and velocity) using matched-filter responses and sonar fundamentals. Experiments using a loudspeaker (emitter) and an artificial or natural target (either stationary or moving) show the system provides accurate estimates of the target's direction and range. Simulations imitating a situation where a bat emits a pulse and receives an echo from a target (30°) were also performed. The system processed the virtual bat pulse and echo, and accurately estimated the direction, range, and velocity of the target. The suggested methods provide accurate estimates of the direction, range, or/and velocity of a bat based on its pulses or of a target based on echoes. This demonstrates these methods can be used as key tools to reconstruct bat biosonar. They would be also an independent tool or a complementary option to TDOA based methods, for bat echolocation studies. The developed methods are also believed to be useful in improving sonar technology.
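One of the building blocks described in this abstract, matched-filter ranging, reduces to locating the peak of the cross-correlation between the emitted pulse and the received echo. The sketch below illustrates that step with an assumed chirp pulse and target distance; it is not the dissertation's processing chain.

```python
import numpy as np

def matched_filter_range(pulse, echo, fs, c=343.0):
    """Estimate target range from the lag that maximises the matched-filter output.

    pulse : emitted waveform, echo : received signal containing a delayed copy,
    fs : sampling rate in Hz, c : speed of sound in m/s.
    """
    mf = np.correlate(echo, pulse, mode="full")        # cross-correlation = matched filter
    lag = np.argmax(np.abs(mf)) - (len(pulse) - 1)     # delay in samples
    return c * (lag / fs) / 2.0                        # two-way travel time -> one-way range

# Toy example: a 2 ms linear chirp echoed from a target roughly 1.7 m away.
fs = 250_000
t = np.arange(int(0.002 * fs)) / fs
pulse = np.sin(2 * np.pi * (25_000 * t + (55_000 - 25_000) / (2 * 0.002) * t**2))
delay_samples = int(2 * 1.7 / 343.0 * fs)              # round-trip delay for 1.7 m
echo = np.zeros(delay_samples + len(pulse))
echo[delay_samples:] = 0.2 * pulse
echo += 0.01 * np.random.default_rng(4).standard_normal(len(echo))
print(matched_filter_range(pulse, echo, fs))           # ≈ 1.7
```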
Lam, Vicky Yin Hay. "Audio signal compression and modelling using psychoacoustic excitation pattern and loudness models." Thesis, University of Strathclyde, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248501.
Oudre, Laurent. "Reconnaissance d'accords à partir de signaux audio par l'utilisation de gabarits théoriques." PhD thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00542840.
Parekh, Sanjeel. "Learning representations for robust audio-visual scene analysis." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT015/document.
The goal of this thesis is to design algorithms that enable robust detection of objects and events in videos through joint audio-visual analysis. This is motivated by humans' remarkable ability to meaningfully integrate auditory and visual characteristics for perception in noisy scenarios. To this end, we identify two kinds of natural associations between the modalities in recordings made using a single microphone and camera, namely motion-audio correlation and appearance-audio co-occurrence. For the former, we use audio source separation as the primary application and propose two novel methods within the popular non-negative matrix factorization framework. The central idea is to utilize the temporal correlation between audio and motion for objects/actions where the sound-producing motion is visible. The first proposed method focuses on soft coupling between audio and motion representations capturing temporal variations, while the second is based on cross-modal regression. We segregate several challenging audio mixtures of string instruments into their constituent sources using these approaches. To identify and extract many commonly encountered objects, we leverage appearance-audio co-occurrence in large datasets. This complementary association mechanism is particularly useful for objects where motion-based correlations are not visible or available. The problem is dealt with in a weakly-supervised setting wherein we design a representation learning framework for robust AV event classification, visual object localization, audio event detection and source separation. We extensively test the proposed ideas on publicly available datasets. The experiments demonstrate several intuitive multimodal phenomena that humans utilize on a regular basis for robust scene understanding.
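Both separation methods described above are built on non-negative matrix factorization. As a reference point, the plain audio-only multiplicative-update NMF baseline (without the audio-motion coupling that is the thesis's contribution) can be sketched as follows.

```python
import numpy as np

def nmf(V, rank, n_iter=200, seed=0):
    """Multiplicative-update NMF with the Euclidean cost: V ≈ W @ H, all factors non-negative."""
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, rank)) + 1e-3
    H = rng.random((rank, N)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Toy magnitude "spectrogram" built from two spectral templates and their activations.
rng = np.random.default_rng(5)
templates = np.abs(rng.standard_normal((64, 2)))
activations = np.abs(rng.standard_normal((2, 100)))
V = templates @ activations
W, H = nmf(V, rank=2)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # small reconstruction error
```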
Adistambha, Kevin. "Embedded lossless audio coding using linear prediction and cascade coding." Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20060724.122433/index.html.
Takeda, Kazuya, Takanori Nishino, and Kenta Niwa. "Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology." Institute of Electronics, Information and Communication Engineers, 2009. http://hdl.handle.net/2237/15055.
Fan, Yun-Hui. "A stereo audio coder with a nearly constant signal-to-noise ratio." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/14788.
Alameda-Pineda, Xavier. "Egocentric Audio-Visual Scene Analysis : a machine learning and signal processing approach." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM024/document.
Over the past two decades, the industry has developed several commercial products with audio-visual sensing capabilities. Most of them consist of a video camera with an embedded microphone (mobile phones, tablets, etc.). Others, such as Kinect, include depth sensors and/or small microphone arrays. There are also mobile phones equipped with a stereo camera pair. At the same time, many research-oriented systems have become available (e.g., humanoid robots such as NAO). Since all these systems are small in volume, their sensors are close to each other. Therefore, they are not able to capture the global scene, but only one point of view of the ongoing social interplay. We refer to this as "Egocentric Audio-Visual Scene Analysis". This thesis contributes to this field in several aspects. Firstly, by providing a publicly available data set targeting applications such as action/gesture recognition, speaker localization, tracking and diarisation, sound source localization, dialogue modelling, etc.; this data set has been used both inside and outside the thesis. We also investigated the problem of AV event detection. We showed how the trust in one of the modalities (the visual one, to be precise) can be modeled and used to bias the method, leading to a visually supervised EM algorithm (ViSEM). Afterwards we modified the approach to target audio-visual speaker detection, yielding an on-line method running on the humanoid robot NAO. In parallel to the work on audio-visual speaker detection, we developed a new approach for audio-visual command recognition. We explored different features and classifiers and confirmed that the use of audio-visual data increases the performance when compared to auditory-only and video-only classifiers. Later, we sought the best method using tiny training sets (5-10 samples per class). This is interesting because real systems need to adapt to and learn new commands from the user, and such systems need to be operational with only a few examples for general public use. Finally, we contributed to the field of sound source localization, in the particular case of non-coplanar microphone arrays. This is interesting because the array geometry can then be arbitrary, which opens the door to dynamic microphone arrays that adapt their geometry to particular tasks, and to commercial designs whose constraints rule out circular or linear arrays.
Paraskevas, Ioannis. "Phase as a feature extraction tool for audio classification and signal localisation." Thesis, University of Surrey, 2005. http://epubs.surrey.ac.uk/843856/.
Trinkaus, Trevor R. "Perceptual coding of audio and diverse speech signals." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/13883.