Dissertations / Theses on the topic 'Signed English'

To see the other types of publications on this topic, follow the link: Signed English.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 22 dissertations / theses for your research on the topic 'Signed English.'

Next to every source in the list of references is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Crawley, Victoria Louise. "Achieving understanding via interpreter participation in sign language/English map task dialogues : an analysis of repair sequences involving ambiguity and underspecificity in signed and spoken modes." Thesis, University of Leeds, 2016. http://etheses.whiterose.ac.uk/15694/.

Full text
Abstract:
Research into the role of the interpreter in dialogue interpreting has so far established that the interpreter participates in the interaction just as much as the two primary participants, particularly in the area of turn-taking. Less has been written about the nature of participation by the interpreter when interpreting. This thesis has contributed to knowledge through research into the extent and the manner of participation by the interpreter when there are problems due to seeing/hearing, producing or understanding: “repair” (Schegloff, Sacks and Jefferson 1977). Using an established tool (a Map Task) in order to distract participants from their language use, the actions of the interpreter were examined through a Conversation Analysis lens, to observe what it is that interpreters do in these situations of uncertainty. The findings were that the participation by interpreters, often described by practitioners as “clarifying”, was due, for the most part, to what I have defined as “ambiguity” and “underspecificity”. The interpreter must change stance from “other” to “self”. I have considered this action, positing a model Stop – Account – Act, and also the responses from the participants when the interpreter changes from “other” to “self” and back, using those responses to show whether the clients understand the interpreter’s change of stance. It is already known that understanding is collaboratively achieved in interpreted interactions just as it is in monolingual conversations. My contribution to interpreting studies is to strengthen this understanding by empirical research. Interlocutors do not present an absolute meaning in one language which is then reframed in another language; meanings are collaboratively differentiated through further talk. I show that an interpreter is tightly constrained in their participation, and that their overriding job of interpreting dictates the reasons for their participation.
The interpreter seeks not “what does that mean?” but rather “what do you mean?”.
APA, Harvard, Vancouver, ISO, and other styles
2

Nakano, Aiko. "Comparisons of harmony and rhythm of Japanese and English through signal processing." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54519.

Full text
Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 24-25).
Japanese and English speech structures are different in terms of harmony, rhythm, and frequency of sound. Voice samples of 5 native speakers of English and Japanese were collected and analyzed through fast Fourier transform, autocorrelation, and statistical analysis. The harmony of a language refers to the spectral frequency content of speech and is analyzed through two different measures: the Harmonics-to-Noise Ratio (HNR) developed by Boersma (1993) and a new parameter, "harmonicity", which evaluates the consistency of the frequency content of a speech sample. Higher HNR values and lower harmonicity values mean that the speech is more harmonious. The HNR values are 9.6±0.6 Hz and 8.9±0.4 Hz and the harmonicities are 27±13 Hz and 41±26 Hz, for Japanese and English, respectively; therefore, both parameters show that Japanese speech is more harmonious than English. A notable conclusion can be drawn from the harmonicity analysis: Japanese is a pitch-type language in which the exact pitch or tone of the voice is a critical parameter of speech, whereas in English the exact pitch is less important. The rhythm of a language is measured by "rhythmicity", which relates to the periodic structure of speech in time and identifies the overall periodicity in continuous speech. Lower rhythmicity values indicate that the speech of one language is more rhythmic than another. The rhythmicities are 0.84±0.02 and 1.35±0.02 for Japanese and English respectively, indicating that Japanese is more rhythmic than English. An additional parameter, the 80th percentile frequency, was also determined from the data to be 1407±242 Hz and 2021±642 Hz for the two languages. These values are comparable to those known from previous research.
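The HNR measure cited above (Boersma 1993) is conventionally computed from the normalized autocorrelation of a speech frame. The following is a minimal numpy sketch of that idea, not the thesis's code; the `hnr_db` helper, its pitch-range bounds, and the synthetic test signal are all illustrative assumptions:

```python
import numpy as np

def hnr_db(x, sr, fmin=75.0, fmax=500.0):
    """Rough Harmonics-to-Noise Ratio after Boersma (1993):
    HNR = 10*log10(r_max / (1 - r_max)), where r_max is the peak of the
    normalized autocorrelation within the plausible pitch-lag range."""
    x = x - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    r = r / r[0]                          # r[0] is the total energy
    lo, hi = int(sr / fmax), int(sr / fmin)
    r_max = min(float(np.max(r[lo:hi])), 1.0 - 1e-9)
    return 10.0 * np.log10(r_max / (1.0 - r_max))

sr = 16000
t = np.arange(sr) / sr                    # one second of signal
voiced = sum(np.sin(2 * np.pi * 120 * k * t) for k in (1, 2, 3))
rng = np.random.default_rng(0)
clean = hnr_db(voiced, sr)
noisy = hnr_db(voiced + 0.5 * rng.standard_normal(sr), sr)
print(clean > noisy)                      # adding noise lowers the HNR
```

A highly periodic (harmonious) signal has an autocorrelation peak near 1 at the pitch lag, giving a large HNR; added noise pulls the peak down and the HNR with it.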
by Aiko Nakano.
S.B.
APA, Harvard, Vancouver, ISO, and other styles
3

Vigliotti, Jeanette C. "The Double Sighted: Visibility, Identity, and Photographs on Facebook." UNF Digital Commons, 2014. http://digitalcommons.unf.edu/etd/506.

Full text
Abstract:
The primary objective of this analysis is to uncover the tools of Facebook identity construction. Because Facebook users have the ability to control the images and information associated with their profiles, reactionary scholars typically classify Facebook identity as a symptom of cultural narcissism. However, I seek to displace the fixation on the newness of the medium in order to interrogate the possibility of a society that has internalized surveillance. Using Michel Foucault’s theories on panopticism and heterotopia, I examine the role photographs play in the construction of an individual on Facebook, and the ways in which user photographs are positioned into social memory construction.
APA, Harvard, Vancouver, ISO, and other styles
4

Sáez, Sáez Natalia. "Auditory discrimination of highly similar L2 english consonant sounds by blind compared to sighted adult spanish speakers." Tesis, Universidad de Chile, 2012. http://www.repositorio.uchile.cl/handle/2250/111453.

Full text
Abstract:
Thesis submitted for the degree of Magíster en Estudios Cognitivos (Master in Cognitive Studies)
Objective: To carry out a pilot experiment so as to obtain results and research-design improvements supporting the hypothesis that sight deprivation, both for long periods of time and only during moments when auditory information is presented (blindfolding), can lead to better auditory discrimination of highly similar L2 English sounds. Method: 8 late blind adults (age M=36), 8 sighted and blindfolded adults (age M=26), and a control group of 8 sighted and not blindfolded adults (age M=31) participated in this study. All participants were Spanish native speakers of Chilean origin, with little knowledge of the English language. The participants attended five sessions, in which they underwent training stages where they were exposed to English words and nonsense words frequently containing 3 pairs of highly similar English consonant sounds. Two types of minimal pair discrimination tests were administered at the end of each session, with and without background noise. All participants’ levels of exposure to street noise, as well as blind participants’ years of blindness and ages of blindness onset, were correlated with their test scores. Results: The three groups showed increases in their scores on the minimal pair discrimination tests throughout the five sessions. The Blind Group tended to outperform the two Sighted Groups, especially in the tests with background noise. A strong correlation was found between the levels of exposure to street noise and the average scores on the auditory discrimination tests with background noise for the Blind and Sighted Blindfolded Groups. A tendency for the Blind Group’s ages of blindness onset to correlate with their test scores was observed, but no correlation was seen for their number of years of blindness. Conclusions: As expected, blind adults exhibited an enhanced potential to auditorily discriminate the highly similar English consonant sounds selected for this study, compared to the blindfolded and not blindfolded sighted groups.
Blind participants’ performance on the minimal pair tests with background noise was higher than any other score in this pilot study. This may be mediated by the levels at which they are generally exposed to street noise and by their enhanced capacity for Auditory Scene Analysis (Bregman, 1990) and selective attention, which, in turn, are supported by the neural remodelling that they undergo, as reported in the literature. Although the experimental design yielded results that tend to support the hypothesis of this pilot study, further studies with larger population samples should be carried out to validate these findings.
APA, Harvard, Vancouver, ISO, and other styles
5

Kudrins, Vitalijs. "Development of Software Library for Open Source GNSS Receiver with Focus on Physical Layer Signal Processing." Thesis, Luleå tekniska universitet, Rymdteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-84772.

Full text
Abstract:
In order to directly interface with the signals broadcast by global navigation satellite systems (GNSS) – such as GPS or Galileo – for the purpose of calculating location, a potential user is required to extract a great amount of information from interface control documents (ICDs) as well as build custom software tools to process this information. This is time-consuming and inefficient. It would instead be preferable if such tools and information were readily available in one single project. This thesis addresses this issue by designing a universal data structure able to accommodate all the information necessary to interface with any GNSS. The universal GNSS data structure is designed in such a way that software tools can be entirely generic across all GNSS, i.e. they do not include any functionality specific to only one GNSS. This is done by embedding certain logic parameters inside the data structure itself, which determine how the software tools behave. The data structure is realized in the form of an XML file with specific rules and syntax. Data from the GPS and Galileo ICDs is scraped and compiled into the XML file. A Rust tool-set is created to read the XML file and extract information such as pseudo-random noise codes and navigation message structure. Using this information, it is possible to decode a raw bit stream broadcast by a GNSS spacecraft, although additional tools currently need to be added to completely automate this process.
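The pseudo-random noise codes mentioned above are specified in the ICDs as shift-register sequences. As a flavor of the physical-layer information such a tool-set must reproduce (and independent of the thesis's Rust implementation), here is the textbook GPS C/A Gold-code generator from IS-GPS-200, sketched in Python with only the first five PRNs' phase taps listed:

```python
def ca_code(prn):
    """GPS C/A (Gold) code generator per IS-GPS-200: two 10-stage LFSRs,
    G1 (feedback taps 3,10) and G2 (taps 2,3,6,8,9,10), combined through
    per-PRN G2 phase taps. Only PRNs 1-5 are listed in this sketch."""
    phase_taps = {1: (2, 6), 2: (3, 7), 3: (4, 8), 4: (5, 9), 5: (1, 9)}
    s1, s2 = phase_taps[prn]
    g1, g2 = [1] * 10, [1] * 10               # registers start all-ones
    chips = []
    for _ in range(1023):                     # one full code period
        chips.append(g1[9] ^ g2[s1 - 1] ^ g2[s2 - 1])
        g1 = [g1[2] ^ g1[9]] + g1[:9]         # shift with feedback
        g2 = [g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]] + g2[:9]
    return chips

code = ca_code(1)
print(code[:10])   # [1, 1, 0, 0, 1, 0, 0, 0, 0, 0] == octal 1440 in the ICD
```

Each of the 1023-chip codes is balanced (512 ones, 511 zeros), which is part of what makes the PRN family usable for correlation-based acquisition.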
APA, Harvard, Vancouver, ISO, and other styles
6

Kanekama, Yori. "Effects of speechreading and signal-to-noise ratio on understanding mainstream American English by American and Indian adults." Diss., Wichita State University, 2009. http://hdl.handle.net/10057/2369.

Full text
Abstract:
The purpose of this study was to measure effects of speechreading and signal-to-noise ratio (SNR) on understanding mainstream American English (MAE) heard by 30 Indian adults compared to 30 American adults. Participants listened to a recording of a female speaker of MAE saying 10 lists of 10 different Everyday Speech Sentences per list. Participants heard sentences from a TV loudspeaker at a conversational speech level while a four-talker babble played through two surrounding loudspeakers at a +6, 0, -6, -12, or -18 dB SNR. Participants heard and watched a different list of sentences at each SNR (i.e., through the Auditory-Visual modality) and only heard a different list of sentences at each SNR (i.e., through an Auditory modality). After listening to each sentence, participants wrote verbatim what they thought the speaker said. Each participant’s speechreading performance at each SNR was computed as the difference in words correctly heard through Auditory-Visual versus Auditory modalities. Consistent with most previous research, American participants benefitted significantly more from speechreading at poorer SNRs than at favorable SNRs. The novel finding of this study, however, was that Indian participants benefitted less from speechreading than American participants at poorer SNRs, but benefitted more from speechreading than American participants at favorable SNRs. Linguistic (and, possibly, nonlinguistic) variables may have accounted for these findings, including an increased need for Indian participants to integrate more auditory cues with visual cues to benefit from speechreading, presumably because they only spoke English as a second language. These findings have theoretical implications for understanding the role of auditory-visual integration on cross-language perception of speech, and practical implications for understanding how much speechreading helps people understand a second language in noisy environments.
Thesis (Ph.D.)--Wichita State University, College of Health Professions, Dept. of Communication Sciences and Disorders
APA, Harvard, Vancouver, ISO, and other styles
7

Della, Corte Giuseppe. "Text and Speech Alignment Methods for Speech Translation Corpora Creation : Augmenting English LibriVox Recordings with Italian Textual Translations." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413064.

Full text
Abstract:
The recent rise of end-to-end speech translation models requires a new generation of parallel corpora, composed of a large amount of source language speech utterances aligned with their target language textual translations. We hereby show a pipeline and a set of methods to collect hundreds of hours of English audio-book recordings and align them with their Italian textual translations, using exclusively public domain resources gathered semi-automatically from the web. The pipeline consists of three main areas: text collection, bilingual text alignment, and forced alignment. For the text collection task, we show how to automatically find e-book titles in a target language by using machine translation, web information retrieval, and named entity recognition and translation techniques. For the bilingual text alignment task, we investigated three methods: the Gale–Church algorithm in conjunction with a small-size hand-crafted bilingual dictionary, the Gale–Church algorithm in conjunction with a bigger bilingual dictionary automatically inferred through statistical machine translation, and bilingual text alignment by computing the vector similarity of multilingual embeddings of concatenations of consecutive sentences. Our findings seem to indicate that the consecutive-sentence-embeddings similarity computation approach manages to improve the alignment of difficult sentences by indirectly performing sentence re-segmentation. For the forced alignment task, we give a theoretical overview of the preferred method depending on the properties of the text to be aligned with the audio, suggesting and using a TTS-DTW (text-to-speech and dynamic time warping) based approach in our pipeline. The result of our experiments is a publicly available multi-modal corpus composed of about 130 hours of English speech aligned with its Italian textual translation and split into 60561 triplets of English audio, English transcript, and Italian textual translation.
We also post-processed the corpus so as to extract 40-MFCCs features from the audio segments and released them as a data-set.
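The TTS-DTW forced-alignment stage mentioned above rests on dynamic time warping. As a hedged illustration, here is plain textbook DTW on 1-D sequences, not the corpus pipeline's actual feature-level implementation; the `dtw` helper and its toy inputs are assumptions:

```python
import numpy as np

def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Textbook dynamic time warping: returns the cumulative alignment
    cost and the optimal warping path as (i, j) index pairs."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, (i, j) = [], (n, m)
    while (i, j) != (0, 0):                # backtrace the cheapest route
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)],
                   key=lambda p: D[p])
    return float(D[n, m]), path[::-1]

cost, path = dtw([1, 1, 2], [1, 2])
print(cost, path)   # cost 0.0: the repeated 1 maps onto b's single 1
```

In a TTS-DTW pipeline the same recurrence runs over frame-level acoustic features of the real recording and of synthesized speech, and the warping path carries the synthetic side's known text timings onto the recording.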
APA, Harvard, Vancouver, ISO, and other styles
8

Méli, Adrien. "A longitudinal study of the oral properties of the French-English interlanguage : a quantitative approach of the acquisition of the /ɪ/-/iː/ and /ʊ/-/uː/ contrasts." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC097/document.

Full text
Abstract:
This study undertakes to assess the evolution of the phonological acquisition of the English /ɪ/-/i:/ and /ʊ/-/u:/ contrasts by French students. The corpus is made up of recordings of spontaneous conversations with native speakers. 12 students, 9 females and 3 males, were recorded over 4 sessions in six-month intervals. The approach adopted here is resolutely quantitative, and agnostic with respect to theories of second language acquisition such as Flege's, Best's or Kuhl's. In order to assess the potential changes in pronunciations, an automatic procedure of alignment and extraction has been devised, based on PRAAT (Boersma 2001). Phonemic and word alignments had been carried out with SPPAS (Bigi 2012) and P2FA (Yuan & Liberman 2008) beforehand. More than 90,000 vowels were thus collected and analysed. The extracted data consist of information such as the number of syllables in the word, the transcription of its dictionary pronunciation, the structure of the syllable the vowel appears in, the preceding and succeeding phonemes, their places and manners of articulation, whether they belong to the same word or not, and especially the F0, F1, F2, F3 and F4 formant values. These values were collected at each centile of the duration of the vowel, in order to be able to take into account the influences of consonantal environments. Besides, theories such as vowel-inherent spectral changes (Nearey & Assmann (1986), Morrison & Nearey (2006), Hillenbrand (2012), Morrison (2012)), and methods of signal modelling such as discrete cosine transforms (Harrington 2010) need formant values all throughout the duration of the vowel. Then the reliability of the automatic procedure, the per-vowel statistical distributions of the formant values, and the normalization methods appropriate to spontaneous speech are studied in turn. Speaker differences are assessed by analysing spectral changes, mid-temporal formant values and discrete cosine transforms with normalized values.
The methods used are the k nearest neighbours, linear and quadratic discriminant analyses, and linear mixed effects regressions. A temporary conclusion is that the acquisition of the /ɪ/-/i:/ contrast seems more robust than that of the /ʊ/-/u:/ contrast.
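The discrete cosine transform modelling cited above (Harrington 2010) summarizes a formant trajectory with its first few DCT-II coefficients: roughly, coefficient 0 tracks the mean, coefficient 1 the slope and coefficient 2 the curvature. A small numpy sketch of that idea, where the `dct_coeffs` helper and its normalization are assumptions, not the thesis's code:

```python
import numpy as np

def dct_coeffs(track, k=3):
    """First k DCT-II coefficients of a formant track. With this
    normalization c0 equals the trajectory mean, c1 reflects its slope
    and c2 its curvature."""
    x = np.asarray(track, dtype=float)
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * m * (2 * n + 1) / (2 * N))) / N
                     for m in range(k)])

flat = dct_coeffs([500.0] * 8)       # a perfectly level F1 track, in Hz
rising = dct_coeffs(np.arange(8.0))  # a rising track: nonzero slope term
print(flat, rising[1] < 0)
```

Because the coefficients compress a whole trajectory into a handful of numbers, they make convenient inputs to the discriminant analyses and k-nearest-neighbour classifiers the study applies.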
APA, Harvard, Vancouver, ISO, and other styles
9

Mitra, Jhimli. "Multimodal Image Registration applied to Magnetic Resonance and Ultrasound Prostatic Images." Phd thesis, Université de Bourgogne, 2012. http://tel.archives-ouvertes.fr/tel-00786032.

Full text
Abstract:
This thesis investigates the employment of different deformable registration techniques to register pre-operative magnetic resonance and inter-operative ultrasound images during prostate biopsy. Accurate registration ensures appropriate biopsy sampling of malignant prostate tissues and reduces the rate of re-biopsies. Therefore, we provide comparisons and experimental results for some landmark- and intensity-based registration methods: thin-plate splines and free-form deformation with B-splines. The primary contribution of this thesis is a new spline-based diffeomorphic registration framework for multimodal images. In this framework we ensure diffeomorphism of the thin-plate spline-based transformation by incorporating a set of non-linear polynomial functions. In order to ensure clinically meaningful deformations we also introduce approximating thin-plate splines, so that the solution is obtained by a joint minimization of the surface similarities of the segmented prostate regions and the thin-plate spline bending energy. The method to establish point correspondences for the thin-plate spline-based registration is a geometric method based on prostate shape symmetry, but a further improvement is suggested by computing the Bhattacharyya metric on a shape-context based representation of the segmented prostate contours. The proposed deformable framework is computationally expensive and is not well-suited for registration of inter-operative images during prostate biopsy. Therefore, we further investigate an off-line learning procedure to learn the deformation parameters of a thin-plate spline from a training set of pre-operative magnetic resonance and corresponding inter-operative ultrasound images, and build deformation models by applying spectral clustering on the deformation parameters. Linear estimations of these deformation models are then applied to a test set of inter-operative ultrasound and pre-operative magnetic resonance images respectively.
The problem of finding the pre-operative magnetic resonance image slice from a volume that matches the inter-operative ultrasound image has further motivated us to investigate shape-based and image-based similarity measures, and to propose a slice-to-slice correspondence method based on joint maximization of the similarity measures.
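The thin-plate spline transformations at the core of this framework interpolate landmark correspondences while minimizing bending energy, using the radial kernel U(r) = r² log r. Below is a compact numpy sketch of plain interpolating TPS fitting; it is illustrative only and omits the diffeomorphism and approximation constraints the thesis adds, and the `tps_fit` helper is an assumption:

```python
import numpy as np

def tps_fit(src, dst):
    """Fit an interpolating 2D thin-plate spline sending src landmarks to
    dst, with kernel U(r) = r^2 log r; returns a warp callable on (m,2)
    point arrays. (Plain TPS only: no diffeomorphism constraints.)"""
    def U(r):
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(r > 0, r ** 2 * np.log(r), 0.0)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    K = U(np.linalg.norm(src[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])       # affine basis [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    params = np.linalg.solve(A, rhs)            # kernel weights + affine part
    w, a = params[:n], params[n:]
    def warp(pts):
        pts = np.asarray(pts, float)
        Kp = U(np.linalg.norm(pts[:, None] - src[None, :], axis=-1))
        return Kp @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a
    return warp

src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.2]], float)
dst = src + np.array([[0.1, 0], [0, 0.1], [-0.05, 0], [0, 0], [0.02, 0.03]])
warp = tps_fit(src, dst)
print(np.allclose(warp(src), dst))   # interpolating spline hits every landmark
```

The approximating variant used in the thesis relaxes this exact interpolation by trading landmark fit against the bending-energy term, which is what keeps the clinical deformations smooth.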
APA, Harvard, Vancouver, ISO, and other styles
10

Cox, Troy L. "Investigating Prompt Difficulty in an Automatically Scored Speaking Performance Assessment." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3929.

Full text
Abstract:
Speaking assessments for second language learners have traditionally been expensive to administer because of the cost of rating the speech samples. To reduce the cost, many researchers are investigating the potential of using automatic speech recognition (ASR) as a means to score examinee responses to open-ended prompts. This study examined the potential of using ASR timing fluency features to predict speech ratings and the effect of prompt difficulty in that process. A speaking test with ten prompts representing five different intended difficulty levels was administered to 201 subjects. The speech samples obtained were then (a) rated by human raters holistically, (b) rated by human raters analytically at the item level, and (c) scored automatically using PRAAT to calculate ten different ASR timing fluency features. The ratings and scores of the speech samples were analyzed with Rasch measurement to evaluate the functionality of the scales and the separation reliability of the examinees, raters, and items. Three ASR timing fluency features best predicted human speaking ratings: speech rate, mean syllables per run, and number of silent pauses. However, only 31% of the score variance was predicted by these features. The significance of this finding is that those fluency features alone likely provide insufficient information to predict human-rated speaking ability accurately. Furthermore, neither the item difficulties calculated by the ASR nor those rated analytically by the human raters aligned with the intended item difficulty levels. The misalignment of the human raters with the intended difficulties led to a further analysis that found that it was problematic for raters to use a holistic scale at the item level.
However, modifying the holistic scale to a scale that examined if the response to the prompt was at-level resulted in a significant correlation (r = .98, p < .01) between the item difficulties calculated analytically by the human raters and the intended difficulties. This result supports the hypothesis that item prompts are important when it comes to obtaining quality speech samples. As test developers seek to use ASR to score speaking assessments, caution is warranted to ensure that score differences are due to examinee ability and not the prompt composition of the test.
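The three predictive timing features named above (speech rate, mean syllables per run, number of silent pauses) fall out directly of a time-aligned transcript. A sketch under stated assumptions: the `(start, end, syllables)` segment format and the 0.25 s minimum-pause threshold are hypothetical choices, not the study's actual PRAAT settings:

```python
def fluency_features(segments, min_pause=0.25):
    """Compute three ASR timing fluency features from a time-aligned
    transcript. `segments` is a hypothetical (start, end, syllables)
    list in seconds, where syllables == 0 marks a silent interval."""
    total_time = segments[-1][1] - segments[0][0]
    total_syll = sum(s for _, _, s in segments)
    pauses = [b - a for a, b, s in segments if s == 0 and b - a >= min_pause]
    runs, current = [], 0
    for a, b, s in segments:
        if s == 0:
            if b - a >= min_pause and current:   # a real pause ends the run
                runs.append(current)
                current = 0
        else:
            current += s
    if current:
        runs.append(current)
    return {"speech_rate": total_syll / total_time,   # syllables / second
            "mean_syll_per_run": sum(runs) / len(runs),
            "n_silent_pauses": len(pauses)}

feats = fluency_features([(0.0, 1.0, 4), (1.0, 1.5, 0), (1.5, 2.5, 5),
                          (2.5, 2.6, 0), (2.6, 3.6, 3)])
print(feats)
```

Note the design choice of ignoring pauses shorter than the threshold: very brief silences (stop closures, breaths) should not break a fluent run, which is why a minimum pause duration is standard in this kind of feature extraction.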
APA, Harvard, Vancouver, ISO, and other styles
11

Leni, Pierre-Emmanuel. "Nouvelles méthodes de traitement de signaux multidimensionnels par décomposition suivant le théorème de Superposition de Kolmogorov." Phd thesis, Université de Bourgogne, 2010. http://tel.archives-ouvertes.fr/tel-00581756.

Full text
Abstract:
Processing multidimensional signals remains a delicate problem when methods designed for one-dimensional signals are to be used. The one-dimensional methods must then be extended to several dimensions, which is not always possible, or the multidimensional signals must be converted into 1D signals. In the latter case, the objective is to preserve as many properties of the original signal as possible. In this context, Kolmogorov's superposition theorem provides a promising theoretical framework for the conversion of multidimensional signals. In 1957, Kolmogorov proved that any multivariate function can be written as sums and compositions of monovariate functions. Our work focused on decomposing images according to the scheme given by the superposition theorem, in order to study the possible applications of this decomposition to image processing. To that end, we first studied the construction of the monovariate functions. This problem has been the subject of numerous studies, and recently two algorithms have been proposed. Sprecher proposed in [Sprecher, 1996; Sprecher, 1997] an algorithm in which he explicitly describes the method for constructing the monovariate functions exactly, while introducing notions fundamental to understanding the theorem. Igelnik and Parikh proposed in [Igelnik and Parikh, 2003; Igelnik, 2009] an algorithm to approximate the monovariate functions with a network of splines. We applied both algorithms to the decomposition of images.
We then focused on studying Igelnik's algorithm, which is easier to modify and offers an analytical representation of the functions, in order to propose two original applications addressing classic image-processing problems. For compression: we studied the quality of an image reconstructed by a spline network generated with only a fraction of the pixels of the original image. To improve this reconstruction, we proposed performing the decomposition on detail images obtained from a wavelet transform. We then combined this method with JPEG 2000, and we show that it improves the JPEG 2000 compression scheme, even at low bitrates. For progressive transmission: by modifying the generation of the spline network, the image can be decomposed into a single monovariate function. This function can be transmitted progressively, which makes it possible to reconstruct the image at progressively increasing resolution. Moreover, we show that such a transmission is resilient to information loss.
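For context, the superposition theorem the decomposition above follows states (in its standard form) that any continuous function of d variables on the unit cube can be written using only monovariate inner and outer functions:

```latex
f(x_1,\ldots,x_d) \;=\; \sum_{q=0}^{2d} \Phi_q\!\left(\sum_{p=1}^{d} \varphi_{q,p}(x_p)\right)
```

The inner functions \(\varphi_{q,p}\) are universal (independent of \(f\)); only the outer functions \(\Phi_q\) depend on \(f\). Sprecher's algorithm constructs the inner functions explicitly, while Igelnik and Parikh's approach approximates the layers with spline networks, which is what makes it adaptable to image decomposition.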
APA, Harvard, Vancouver, ISO, and other styles
12

Mignard, Clément. "SIGA3D : modélisation, échange et visualisation d'objets 3D du bâtiment et d'objets urbains géoréférencés ; application aux IFC pour la gestion technique de patrimoine immobilier et urbain." Phd thesis, Université de Bourgogne, 2012. http://tel.archives-ouvertes.fr/tel-00842227.

Full text
Abstract:
This thesis defines a new approach to the technical management of building and urban assets. To this end, a process for producing and managing information about the building, its immediate surroundings and urban objects is defined. It makes it possible to manage these objects throughout their life cycle, within a concept called Urban Facility Management. This technology relies on an Urban Information Model, which models all city and building information in a dynamic, evolving ontology. A mechanism of geometric-contextual levels of detail was also defined to support the scaling-up of such a system: the number of objects to be managed in a scene is far greater than in a building-modelling system, as is the extent of the information to be processed. Contextual levels of detail therefore make it possible to optimize the scene according to semantic criteria. Our proposal is based on an architecture derived from work on adaptive hypermedia systems. It is composed of six layers that address the problem systemically: a layer connecting to data sources, a layer importing these data into the system, a semantic layer, a contextualization layer, a connection layer and a user-interface layer. This architecture allows the definition of a workflow, which we decomposed into a process architecture. It describes how to acquire data from GIS sources as well as from IFC or, more generally, CAD sources. These data are imported into the city information model within an ontology defined using tools in the semantic layer. These tools consist of a set of operators for defining concepts, relations and logical statements.
They are coupled with a mechanism of local contexts that makes it possible to define contextual levels of detail within the ontology. Combined with graphical representations from the data sources, it is thus possible to associate several geometric representations, 2D or 3D, with an object in our knowledge base, and to choose the model to apply according to semantic criteria. The context-management layer provides contextual information at the data-model level through a mechanism based on named graphs. Once the ontology is in place, the data can be exploited through a graphics engine that we developed, which handles the contextual information and the levels of detail. This process architecture was implemented and adapted on the Active3D platform. Part of the research consisted in adapting the formal architecture of the urban information model to the existing architecture, in order to meet the project's industrial constraints and the required performance criteria. The last part of this thesis presents the technical developments needed to achieve these objectives.
APA, Harvard, Vancouver, ISO, and other styles
13

Hall, William L. "The In Pulse." Wright State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=wright1225066283.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Geldenhuys, Vincent. "A signification in stone the lapis as metaphor for visual hybridisation in the Harry Potter films /." Pretoria : [s.n.], 2008. http://upetd.up.ac.za/thesis/available/etd-11132008-191836.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Ghose, Soumya. "Robust image segmentation applied to magnetic resonance and ultrasound images of the prostate." Doctoral thesis, Universitat de Girona, 2012. http://hdl.handle.net/10803/98524.

Full text
Abstract:
Prostate segmentation in transrectal ultrasound (TRUS) and magnetic resonance images (MRI) facilitates volume estimation, multi-modal image registration, surgical planning and image-guided prostate biopsies. The objective of this thesis is to develop computationally efficient prostate segmentation algorithms in both the TRUS and MRI image modalities. In this thesis we propose a probabilistic learning approach to achieve a soft classification of the prostate for automatic initialization and evolution of a deformable model for prostate segmentation. Two deformable models are developed for TRUS segmentation: an explicit shape- and region-prior-based deformable model, and an implicit deformable model guided by an energy minimization framework. In MRI, the posterior probabilities are fused with the soft segmentation obtained from an atlas segmentation, and a graph-cut-based energy minimization achieves the final segmentation. In both image modalities, statistically significant improvements are achieved compared to current works in the literature.
APA, Harvard, Vancouver, ISO, and other styles
16

Affeich, Andrée. "Rupture et continuité dans le discours technique arabe d’Internet." Thesis, Lyon 2, 2010. http://www.theses.fr/2010LYO20001.

Full text
Abstract:
This research, carried out on a corpus covering eleven Arab countries, examines rupture and continuity within the Arabic terminology of the Internet, a terminology created in the Anglophone world, more precisely in the United States. The terms “rupture” and “continuity” reflect a real conflict between two different linguistic systems: the Arabic language system, which we call the “indigenous system”, and the English language system, which we call the “foreign system”. The image that takes shape is that of two systems playing a game of chess. At the opening, the knights on both sides are mobilized quickly. Those of the “foreign system” try to impose at once elements which we call “elements of rupture”. These appear through the phenomenon of linguistic borrowing in its two forms: the integral loan and the acronym. In response to these “elements of rupture”, the “indigenous system” first mobilizes its two knights, i.e. its two morpho-syntactic means: the subsystem of nomination and the subsystem of communication. Then, in order not to collapse, the “indigenous system” fortifies its position with two further processes: a semantic process, metaphor, and a discursive process, rewording. This study does not aim to say which of the two systems won. Rather, over a period of ten years we followed the evolution of the Arabic terminology of the Internet in order to draw conclusions, and more precisely to identify a general tendency in light of the changes this terminology has undergone, changes which are certainly not final.
APA, Harvard, Vancouver, ISO, and other styles
17

Швед, Є. В. "Переклад невербальних одиниць в англомовних художніх текстах." Master's thesis, Сумський державний університет, 2022. https://essuir.sumdu.edu.ua/handle/123456789/87428.

Full text
Abstract:
This thesis studies the translation of nonverbal units in English-language literary texts. It considers modern approaches to the study of nonverbal communicative units, classifications of nonverbal communicative means in modern English, the functions of nonverbal communicative means in literary texts, and the lexical representation of nonverbal signs in English-language literary texts. Particular attention is paid to strategies and techniques for translating nonverbal units in English-language literary texts and to the analysis of the difficulties of their translation. A system of exercises has also been developed to teach students to translate nonverbal units in English-language literary texts.
APA, Harvard, Vancouver, ISO, and other styles
18

Gayk, Shannon Noelle. ""Sensible signes" mediating images in late medieval literature /." 2005. http://etd.nd.edu/ETD-db/theses/available/etd-07192005-141904/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Mehlenbacher, Alan. "Multiagent system simulations of sealed-bid, English, and treasury auctions." Thesis, 2007. http://hdl.handle.net/1828/255.

Full text
Abstract:
I have developed a multiagent system platform that provides a valuable complement to alternative research methods. The platform facilitates the development of heterogeneous agents in complex environments. The first application of the multiagent system is to the study of sealed-bid auctions with two-dimensional value signals from pure private to pure common value. I find that several auction outcomes are significantly nonlinear across the two-dimensional value signals. As the common value percent increases, profit, revenue, and efficiency all decrease monotonically, but they decrease in different ways. Finally, I find that forcing revelation by the auction winner of the true common value may have beneficial revenue effects when the common-value percent is high and there is a high degree of uncertainty about the common value. The second application of the multiagent system is to the study of English auctions with two-dimensional value signals using agents that learn a signal-averaging factor. I find that signal averaging increases nonlinearly as the common value percent increases, decreases with the number of bidders, and decreases at high common value percents when the common value signal is more uncertain. Using signal averaging, agents increase their profit when the value is more uncertain. The most obvious effect of signal averaging is on reducing the percentage of auctions won by bidders with the highest common value signal. The third application of the multiagent system is to the study of the optimal payment rule in Treasury auctions using Canadian rules. The model encompasses the when-issued, auction, and secondary markets, as well as constraints for primary dealers. I find that the Spanish payment rule is revenue inferior to the Discriminatory payment rule across all market price spreads, but the Average rule is revenue superior.
For most market-price spreads, Uniform payment results in less revenue than Discriminatory, but there are many cases in which Vickrey payment produces more revenue.
APA, Harvard, Vancouver, ISO, and other styles
20

Mehlenbacher, Alan. "Multiagent system simulations of sealed-bid, English, and treasury auctions." Thesis, 2007. http://hdl.handle.net/1828/255.

Full text
Abstract:
I have developed a multiagent system platform that provides a valuable complement to alternative research methods. The platform facilitates the development of heterogeneous agents in complex environments. The first application of the multiagent system is to the study of sealed-bid auctions with two-dimensional value signals from pure private to pure common value. I find that several auction outcomes are significantly nonlinear across the two-dimensional value signals. As the common value percent increases, profit, revenue, and efficiency all decrease monotonically, but they decrease in different ways. Finally, I find that forcing revelation by the auction winner of the true common value may have beneficial revenue effects when the common-value percent is high and there is a high degree of uncertainty about the common value. The second application of the multiagent system is to the study of English auctions with two-dimensional value signals using agents that learn a signal-averaging factor. I find that signal averaging increases nonlinearly as the common value percent increases, decreases with the number of bidders, and decreases at high common value percents when the common value signal is more uncertain. Using signal averaging, agents increase their profit when the value is more uncertain. The most obvious effect of signal averaging is on reducing the percentage of auctions won by bidders with the highest common value signal. The third application of the multiagent system is to the study of the optimal payment rule in Treasury auctions using Canadian rules. The model encompasses the when-issued, auction, and secondary markets, as well as constraints for primary dealers. I find that the Spanish payment rule is revenue inferior to the Discriminatory payment rule across all market price spreads, but the Average rule is revenue superior.
For most market-price spreads, Uniform payment results in less revenue than Discriminatory, but there are many cases in which Vickrey payment produces more revenue.
APA, Harvard, Vancouver, ISO, and other styles
21

Crickmore, Barbara Lee. "An Historical Perspective On the Academic Education Of Deaf Children In New South Wales 1860s-1990s." Thesis, 2000. http://hdl.handle.net/1959.13/24905.

Full text
Abstract:
This is an historical investigation into the provision of education services for deaf children in the State of New South Wales in Australia since 1860. The main focus is those deaf children without additional disabilities who have been placed in mainstream classes, special classes for the deaf and special schools for the deaf. The study places this group at centre stage in order to better understand their educational situation in the late 1990s. The thesis has taken a chronological and thematic approach. The chapters are defined by significant events that impacted on the education of the deaf, such as the establishment of special schools in New South Wales, the rise of the oral movement, and the aftermath of the rubella epidemic in Australia during the 1940s. Within each chapter, there is a core of key elements around which the analysis is based. These key elements tend to be based on institutions, players, and specific educational features, such as the mode of instruction or the curriculum. The study found general agreement that language acquisition was a fundamental prerequisite to academic achievement. Yet the available evidence suggests that educational programs for most deaf children in New South Wales have seldom focused on ensuring adequate language acquisition in conjunction with the introduction of academic subjects. As a result, the language and literacy competencies of deaf students in general have frequently been acknowledged as being below those of their hearing counterparts, to the point of presenting a barrier to successful post-secondary study. It is proposed that the reasons for the academic failings of the deaf are inherent in five themes.
PhD Doctorate
APA, Harvard, Vancouver, ISO, and other styles
22

Crickmore, Barbara Lee. "An Historical Perspective On the Academic Education Of Deaf Children In New South Wales 1860s-1990s." 2000. http://hdl.handle.net/1959.13/24905.

Full text
Abstract:
This is an historical investigation into the provision of education services for deaf children in the State of New South Wales in Australia since 1860. The main focus is those deaf children without additional disabilities who have been placed in mainstream classes, special classes for the deaf and special schools for the deaf. The study places this group at centre stage in order to better understand their educational situation in the late 1990s. The thesis has taken a chronological and thematic approach. The chapters are defined by significant events that impacted on the education of the deaf, such as the establishment of special schools in New South Wales, the rise of the oral movement, and the aftermath of the rubella epidemic in Australia during the 1940s. Within each chapter, there is a core of key elements around which the analysis is based. These key elements tend to be based on institutions, players, and specific educational features, such as the mode of instruction or the curriculum. The study found general agreement that language acquisition was a fundamental prerequisite to academic achievement. Yet the available evidence suggests that educational programs for most deaf children in New South Wales have seldom focused on ensuring adequate language acquisition in conjunction with the introduction of academic subjects. As a result, the language and literacy competencies of deaf students in general have frequently been acknowledged as being below those of their hearing counterparts, to the point of presenting a barrier to successful post-secondary study. It is proposed that the reasons for the academic failings of the deaf are inherent in five themes.
PhD Doctorate
APA, Harvard, Vancouver, ISO, and other styles