Dissertations / Theses on the topic 'Feature Adaptation'

Consult the top 49 dissertations / theses for your research on the topic 'Feature Adaptation.'

You can also download the full text of each academic publication as a PDF and read its abstract online whenever one is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Harris, Matthew. "Flow feature aligned mesh generation and adaptation." Thesis, University of Sheffield, 2013. http://etheses.whiterose.ac.uk/4192/.

Full text
Abstract:
Methods which allow for construction of flow feature aligned meshes in two and three dimensions have been developed in this thesis to investigate their potential for improvements in the numerical solution relative to globally refining the mesh. Of particular interest in the work is the generation of high-quality quadrilateral and hexahedral elements aligned with the dominant flow features. The two-dimensional techniques are applied to unstructured quad-dominant meshes, whilst the three-dimensional problems involve embedding high-quality hex-dominant mesh blocks into a hybrid volume mesh to improve their ability to capture anisotropic flow features such as shock waves, trailing shear layers/wakes and wing tip vortices. A method involving the medial axis has been studied to provide a geometric representation of two-dimensional flow features to allow feature-aligned meshes to be generated. Due to the flexibility of the approach, a range of complex features can be represented as simple geometric entities. These curves are embedded into the domain as virtual geometries to force alignment of unstructured quad-dominant surface mesh elements. The mesh locally mimics the attributes of a structured grid and provides high-quality numerical solutions due to the alignment of the cell interfaces with the flow features. To improve the capability of hybrid meshes to resolve anisotropic flow physics, a method involving the extrusion of quad-dominant surface meshes has been developed. Surface meshes are extruded in the direction of extracted flow features, yielding feature-aligned semi-structured hex-dominant mesh blocks which can be embedded into the hybrid volume mesh. The presence of feature-aligned hexahedra has been shown to greatly enhance the resolution of anisotropic flow features compared with both isotropic and anisotropic tetrahedral elements, due to a significant reduction in numerical diffusion.
Furthermore, improvements in the numerical solution have also been obtained in a more efficient manner than by isotropically refining the hybrid mesh. The results indicate that the type, orientation and size of the elements are significant contributing factors in the resolution of the dominant flow features.
2

Stoltzfus, Daniel Paul. "Predictions on markedness and feature resilience in loanword adaptation." Doctoral thesis, Université Laval, 2014. http://hdl.handle.net/20.500.11794/25567.

Full text
Abstract:
Normally, a loanword is adapted so that its foreign elements fit the phonological system of the borrowing language. Some authors (cf. Miao 2005; Steriade 2001b, 2009) have argued that, when a consonant is adapted, manner-of-articulation features are more resistant to change than laryngeal features (e.g. voicing) or place features. My results show, however, that manner features (e.g. [±continuant]) are involved in consonant adaptations just as frequently as other features (e.g. [±voice] and [±anterior]). For example, French /ʒ/ is illicit word-initially in English. The adaptation options include /ʒ/ → [z] (change of place), /ʒ/ → [ʃ] (change of voicing) and /ʒ/ → [dʒ] (change of manner). Contrary to the predictions of the authors cited above, the primary adaptation in English is /ʒ/ → [dʒ], with a change of manner (e.g. French [ʒelatin] gélatine → English [dʒɛlœtɪn]). Rather than a resistance of manner features, the adaptations studied in my thesis reveal a clear tendency towards simplification. My hypothesis is that languages adapt foreign consonants by eliminating their complexities; a change involving the deletion, rather than the insertion, of a marked feature is therefore preferred. My thesis also innovates by showing that a consonant is most often imported when its primary adaptation strategy would involve the insertion of a marked feature. Importation rates are systematically high for consonants whose adaptation would involve the insertion of such a feature (here [+continuant] or [+voice]). For example, English /dʒ/, when adapted, becomes /ʒ/ in French after the insertion of [+continuant]; however, importation of /dʒ/ is strongly preferred to its adaptation (89%).
By comparison, /dʒ/ is rarely imported (10%) into Pennsylvania German (PG), because the adaptation of /dʒ/ to [tʃ] (deletion of the marked feature [+voice]) is available, unlike in French. However, English word-initial /t/ is mostly imported (74%) into PG because its adaptation to /d/ would involve the insertion of the marked feature [+voice]. My thesis not only better identifies the direction of adaptations, but also pinpoints what strongly favours importations, on the basis of a notion already established in phonology: markedness.
A loanword is normally adapted to fit its foreign elements to the phonological system of the borrowing language (L1). Recently, some authors (e.g. Miao 2005; Steriade 2001b, 2009) have proposed that during the adaptation process of a second language (L2) consonant, manner features are more resistant to change than are non-manner features. A careful study of my data indicates that manner features (e.g. [±continuant]) are as likely to be involved in the adaptation process as are non-manner [±voice] and [±anterior]. For example, French /ʒ/ is usually not tolerated word-initially in English. Adaptation options include /ʒ/ → [z] (change of place), /ʒ/ → [ʃ] (change of voicing) and /ʒ/ → [dʒ] (change of manner). The primary adaptation in English is /ʒ/ → [dʒ] (e.g. French [ʒelatin] gélatine → English [dʒɛlœtɪn]), where manner is in fact the least resistant feature. Instead, during loanword adaptation there is a clear tendency towards unmarkedness. My hypothesis is that languages overwhelmingly adapt with the goal of eliminating the complexities of the L2; a change that involves deletion instead of insertion of a marked feature is preferred. Furthermore, my thesis shows for the first time that a consonant is statistically most likely to be imported if its preferred adaptation strategy involves insertion of a marked feature (e.g. [+continuant] or [+voice]). For example, the adaptation of English /dʒ/ is /ʒ/ in French after insertion of marked [+continuant], but /dʒ/ is overwhelmingly imported (89%) rather than adapted in French. I argue that this is to avoid the insertion of marked [+continuant]. This contrasts with Pennsylvania German (PG), where English /dʒ/ is rarely imported (10%). This is because, unlike in French, there is an option to adapt /dʒ/ to /tʃ/ (deletion of marked [+voice]) in PG. However, English word-initial /t/ is heavily imported (74%), not adapted, in PG because adaptation to /d/ involves insertion of marked [+voice].
Not only does my thesis better determine the direction of adaptations, but it also establishes the circumstances in which L2 consonants are most likely to be imported instead of adapted, on the basis of a well-known notion in phonology: markedness.
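The abstract's three test cases are concrete enough to encode directly. The sketch below is a hypothetical illustration, not the thesis's own formalism: the feature annotations and importation rates are taken from the abstract, while the dictionary layout and helper name are invented.

```python
# Illustrative encoding of the abstract's three test cases.  Feature
# changes and importation rates come from the abstract; the rest is a
# hypothetical sketch.
ADAPTATIONS = {
    ("French", "dʒ"): {"adapted_to": "ʒ",  "change": "insert [+continuant]", "import_rate": 0.89},
    ("PG",     "dʒ"): {"adapted_to": "tʃ", "change": "delete [+voice]",      "import_rate": 0.10},
    ("PG",     "t"):  {"adapted_to": "d",  "change": "insert [+voice]",      "import_rate": 0.74},
}

def importation_predicted(language, segment):
    """Markedness-based prediction: a consonant is mostly imported when
    its primary adaptation would INSERT a marked feature, and mostly
    adapted when the adaptation DELETES one."""
    return ADAPTATIONS[(language, segment)]["change"].startswith("insert")

# The prediction matches the reported importation rates in all three cases.
for key, info in ADAPTATIONS.items():
    assert importation_predicted(*key) == (info["import_rate"] > 0.5)
```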
3

Lu, Jianhua. "Missing feature decoding and model adaptation for noisy speech recognition." Thesis, Queen's University Belfast, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.527938.

Full text
4

Zennaro, Fabio. "Feature distribution learning for covariate shift adaptation using sparse filtering." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/feature-distribution-learning-for-covariate-shift-adaptation-using-sparse-filtering(67989db2-b8a0-4fac-8832-f611e9236ed5).html.

Full text
Abstract:
This thesis studies a family of unsupervised learning algorithms called feature distribution learning and their extension to perform covariate shift adaptation. Unsupervised learning is one of the most active areas of research in machine learning, and a central challenge in this field is to develop simple and robust algorithms able to work in real-world scenarios. A traditional assumption of machine learning is the independence and identical distribution of data. Unfortunately, in realistic conditions this assumption is often unmet and the performance of traditional algorithms may be severely compromised. Covariate shift adaptation has since developed as a lively sub-field concerned with designing algorithms that can account for covariate shift, that is, for a difference in the distribution of training and test samples. The first part of this dissertation focuses on the study of a family of unsupervised learning algorithms that has been recently proposed and has shown promise: feature distribution learning; in particular, sparse filtering, the most representative feature distribution learning algorithm, has commanded interest because of its simplicity and state-of-the-art performance. Despite its success and its frequent adoption, sparse filtering lacks any strong theoretical justification. This research questions how feature distribution learning can be rigorously formalized and how the dynamics of sparse filtering can be explained. These questions are answered by first putting forward a new definition of feature distribution learning based on concepts from information theory and optimization theory; relying on this, a theoretical analysis of sparse filtering is carried out, which is validated on both synthetic and real-world data sets. In the second part, the use of feature distribution learning algorithms to perform covariate shift adaptation is considered.
Indeed, because of their definition and apparent insensitivity to the problem of modelling data distributions, feature distribution learning algorithms seem particularly well suited to dealing with covariate shift. This research questions whether and how feature distribution learning may be fruitfully employed to perform covariate shift adaptation. After making explicit the conditions of success for performing covariate shift adaptation, a theoretical analysis of sparse filtering and another novel algorithm, periodic sparse filtering, is carried out; this allows for the determination of the specific conditions under which these algorithms successfully work. Finally, a comparison of these sparse filtering-based algorithms against other traditional algorithms aimed at covariate shift adaptation is offered, showing that the novel algorithm is able to achieve competitive performance. In conclusion, this thesis provides a new rigorous framework to analyse and design feature distribution learning algorithms; it sheds light on the hidden assumptions behind sparse filtering, offering a clear understanding of its conditions of success; and it uncovers the potential and the limitations of sparse filtering-based algorithms in performing covariate shift adaptation. These results are relevant both for researchers interested in furthering the understanding of unsupervised learning algorithms and for practitioners interested in deploying feature distribution learning in an informed way.
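Sparse filtering itself is compact enough to state in a few lines. The NumPy sketch below follows Ngiam et al.'s published formulation of the objective (soft-absolute activations, L2 normalisation across examples and then across features, summed L1 penalty); it is a generic illustration of the algorithm under discussion, not code from the thesis.

```python
import numpy as np

def sparse_filtering_objective(W, X, eps=1e-8):
    """Sparse filtering objective (after Ngiam et al., 2011):
    soft-absolute activations, L2 normalisation of each feature row,
    then of each example column, followed by an L1 sparsity penalty."""
    F = np.sqrt((W @ X) ** 2 + eps)                    # soft-absolute activations
    F = F / np.linalg.norm(F, axis=1, keepdims=True)   # normalise each feature row
    F = F / np.linalg.norm(F, axis=0, keepdims=True)   # normalise each example column
    return float(np.abs(F).sum())                      # minimise this w.r.t. W

# Toy usage: random weights and data, just to show the call signature.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 8))    # 16 learned features, 8 input dimensions
X = rng.standard_normal((8, 40))    # 40 examples
obj = sparse_filtering_objective(W, X)
```

Because each normalised column has unit L2 norm, the objective is bounded between the number of examples and that number times the square root of the feature count, which makes the value easy to sanity-check.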
5

Collet, Philippe. "Taming Complexity of Large Software Systems: Contracting, Self-Adaptation and Feature Modeling." Habilitation à diriger des recherches, Université de Nice Sophia-Antipolis, 2011. http://tel.archives-ouvertes.fr/tel-00657444.

Full text
Abstract:
Our work falls within the field of software engineering for large-scale software systems. Our objective is to provide techniques and tools to help software architects master the ever-growing complexity of these systems. Mainly based on model-driven engineering approaches, our contributions are organised around three axes. The first axis concerns the development of systems that are both reliable and flexible, built from hierarchical components equipped with dynamic reconfiguration capabilities. Through the use of new forms of software contracts, the systems and frameworks we propose take different specification formalisms into account and keep contracts up to date during execution. A second part of our work aims to provide these contract-based systems with self-adaptive capabilities, through contract negotiation mechanisms and monitoring subsystems that are themselves self-adaptive. A third axis concerns software product lines, in which feature models are widely used to model variability. Our contributions consist of a set of well-defined and efficiently implemented composition operators for feature models, as well as a domain-specific language enabling their management at large scale.
6

Yang, Baoyao. "Distribution alignment for unsupervised domain adaptation: cross-domain feature learning and synthesis." HKBU Institutional Repository, 2018. https://repository.hkbu.edu.hk/etd_oa/556.

Full text
Abstract:
In recent years, many machine learning algorithms have been developed and widely applied in various applications. However, most of them assume that the data distributions of the training and test datasets are similar. This thesis concerns the decrease in generalization ability on a test dataset whose data distribution differs from that of the training dataset. As labels may be unavailable in the test dataset in practical applications, we follow the effective approach of unsupervised domain adaptation and propose distribution alignment methods to improve the generalization ability of models learned from the training dataset in the test dataset. To solve the problem of joint distribution alignment without target labels, we propose a new criterion of domain-shared group sparsity that is an equivalent condition for equal conditional distribution. A domain-shared group-sparse dictionary learning model is built with the proposed criterion, and a cross-domain label propagation method is developed to learn a target-domain classifier using the domain-shared group-sparse representations and the target-specific information from the target data. Experimental results show that the proposed method achieves good performance on cross-domain face and object recognition. Moreover, most distribution alignment methods have not considered the difference in distribution structures, which results in insufficient alignment across domains. Therefore, a novel graph alignment method is proposed, which aligns both data representations and distribution structural information across the source and target domains. An adversarial network is developed for graph alignment by mapping both source and target data to a feature space where the data are distributed with unified structure criteria. Promising results have been obtained in the experiments on cross-dataset digit and object recognition.
The problem of dataset bias also exists in human pose estimation across datasets with different image qualities. Thus, this thesis proposes to synthesize target body parts for cross-domain distribution alignment, to address the problem of cross-quality pose estimation. A translative dictionary is learned to associate the source and target domains, and a cross-quality adaptation model is developed to refine the source pose estimator using the synthesized target body parts. We perform cross-quality experiments on three datasets with different image qualities using two state-of-the-art pose estimators, and compare the proposed method with five unsupervised domain adaptation methods. Our experimental results show that the proposed method outperforms not only the source pose estimators, but also the other unsupervised domain adaptation methods.
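As an illustration of what "distribution alignment" quantifies, the sketch below computes the squared maximum mean discrepancy (MMD) between source and target feature samples, a standard measure of the cross-domain distribution gap that alignment methods aim to drive towards zero. It is a generic example, not the thesis's graph-alignment or dictionary-learning method.

```python
import numpy as np

def gaussian_mmd2(X, Y, sigma=1.0):
    """Squared maximum mean discrepancy with a Gaussian kernel between
    two sample sets X (n, d) and Y (m, d).  Zero when the two sets are
    identically distributed; larger as the distributions drift apart."""
    def k(A, B):
        # Pairwise squared Euclidean distances via broadcasting.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```

A domain adaptation method would typically minimise such a discrepancy between source and target features while preserving task performance on the labelled source domain.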
7

Lock, G. "'Petticoat Sailor' to 'She Crossing': adaptation in process: a writer's reflection on adapting a feature length screenplay into a novel." Thesis, Nottingham Trent University, 2015. http://irep.ntu.ac.uk/id/eprint/27947/.

Full text
Abstract:
This PhD thesis consists of a novel entitled 'She Crossing', and a Commentary. The Commentary reflects on my practice of constructing the novel by adapting it from my previously existing screenplay, 'Petticoat Sailor'. They both derive from 'The Seafaring Maiden', a Nova Scotian newspaper article dating from 1957. All of these narratives are concerned with a nineteenth-century woman who had to captain a commercial sailing ship across the Atlantic. My screenplay, 'Petticoat Sailor', is set entirely in the nineteenth century. Both the newspaper article and my novel frame the protagonist’s voyage as a twentieth-century reminiscence. My novel also introduces a fictional subplot, which was not present in the screenplay. This subplot derives from my research into accounts of cross-dressed women who went to sea when only men were legally employed as sailors. The direction of my adaptation from a screenplay into a novel is unusual. Most adaptations move from novel to script and this is reflected in the secondary literature about adaptation. Novels resulting from adapting scripts have attracted little academic analysis as artefacts, and even less theorization of their creative processes. There is also an absence of sustained reflection by other writers who have turned their screenplays into novels. Aiming to increase understanding of the novelizing process, my thesis addresses these absences. The Commentary discusses the differences and similarities in writing screenplay and writing prose fiction by reflecting on my processes in writing this novel. I particularly explore the effects of contingency in adjusting theoretical principles during the creative process of novelization. I also examine to what extent the ways in which I write are themselves adapted from my own experience of directing and acting for stage and film.
8

Kleynhans, Neil Taylor. "Automatic speech recognition for resource-scarce environments." Thesis, North-West University, 2013. http://hdl.handle.net/10394/9668.

Full text
Abstract:
Automatic speech recognition (ASR) technology has matured over the past few decades and has made significant impacts in a variety of fields, from assistive technologies to commercial products. However, ASR system development is a resource-intensive activity and requires language resources in the form of text-annotated audio recordings and pronunciation dictionaries. Unfortunately, many languages found in the developing world fall into the resource-scarce category, and due to this resource scarcity the deployment of ASR systems in the developing world is severely inhibited. In this thesis we present research into developing techniques and tools to (1) harvest audio data, (2) rapidly adapt ASR systems and (3) select "useful" training samples in order to assist with resource-scarce ASR system development. We demonstrate an automatic audio harvesting approach which efficiently creates a speech recognition corpus by harvesting an easily available audio resource. We show that by starting with bootstrapped acoustic models, trained with language data obtained from a dialect, and then running through a few iterations of an alignment-filter-retrain phase, it is possible to create an accurate speech recognition corpus. As a demonstration we create a South African English speech recognition corpus by using our approach and harvesting an internet website which provides audio and approximate transcriptions. The acoustic models developed from harvested data are evaluated on independent corpora and show that the proposed harvesting approach provides a robust means to create ASR resources. As there are many acoustic model adaptation techniques which can be implemented by an ASR system developer, it becomes a costly endeavour to select the best adaptation technique. We investigate how the various adaptation techniques depend on the amount of adaptation data by systematically varying that amount and comparing the performance of the techniques.
We establish a guideline which can be used by an ASR developer to choose the best adaptation technique given a size constraint on the adaptation data, for the scenario where adaptation between narrow- and wide-band corpora must be performed. In addition, we investigate the effectiveness of a novel channel normalisation technique and compare its performance with standard normalisation and adaptation techniques. Lastly, we propose a new data selection framework which can be used to design a speech recognition corpus. We show that for limited data sets, independent of language and bandwidth, the most effective strategy for data selection is frequency-matched selection, and that the widely-used maximum entropy methods generally produced the least promising results. In our model, the frequency-matched selection method corresponds to a logarithmic relationship between accuracy and corpus size; we also investigated other model relationships, and found that a hyperbolic relationship (as suggested from simple asymptotic arguments in learning theory) may lead to somewhat better performance under certain conditions.
Thesis (PhD (Computer and Electronic Engineering))--North-West University, Potchefstroom Campus, 2013.
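The abstract does not spell out the exact selection criterion, so the following greedy sketch of frequency-matched selection is a hypothetical illustration of the idea (repeatedly pick the utterance that brings the selected subset's token distribution closest to the full corpus's), with invented helper names.

```python
from collections import Counter

def frequency_matched_selection(utterances, budget):
    """Greedy sketch: select `budget` utterances whose combined unigram
    distribution best matches the full corpus distribution (measured by
    total absolute deviation).  Illustrative only."""
    corpus = Counter(t for u in utterances for t in u)
    total = sum(corpus.values())
    target = {t: c / total for t, c in corpus.items()}  # corpus unigram frequencies

    selected, sel_counts, sel_total = [], Counter(), 0
    remaining = list(range(len(utterances)))
    for _ in range(min(budget, len(utterances))):
        def deficit(i):
            # Distribution mismatch if utterance i were added to the subset.
            c = sel_counts.copy()
            c.update(utterances[i])
            n = sel_total + len(utterances[i])
            return sum(abs(c[t] / n - p) for t, p in target.items())
        best = min(remaining, key=deficit)
        remaining.remove(best)
        selected.append(best)
        sel_counts.update(utterances[best])
        sel_total += len(utterances[best])
    return selected

# Toy usage: utterances as token lists (phones or words), budget of 2.
utts = [["a", "a", "b"], ["b", "b", "b"], ["a", "b", "c"], ["c", "c", "a"]]
subset = frequency_matched_selection(utts, 2)
```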
9

Cardace, Adriano. "Learning Features Across Tasks and Domains." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20050/.

Full text
Abstract:
The absence of in-domain labeled data hinders the applicability of powerful deep neural networks. Unsupervised Domain Adaptation (UDA) methods have emerged to exploit such models even when labeled data is not available in the target domain. All these techniques aim to reduce the distribution shift problem that afflicts these models when trained on one dataset and tested on a different one. However, most of these works do not consider relationships among tasks that could further boost performance. In this thesis, we study a recent method called AT/DT (Across Tasks Domain Transfer), which seeks to apply Domain Adaptation together with Task Adaptation, leveraging the correlation between two popular vision tasks: Semantic Segmentation and Monocular Depth Estimation. Inspired by the Domain Adaptation literature, we propose many extensions to the original work and show how they enhance the framework's performance. Our contributions are applied at different levels: we first study how different architectures affect the transferability of features across tasks. We further improve performance by deploying adversarial training. Finally, we explore the possibility of replacing Depth Estimation with popular self-supervised tasks, demonstrating that two tasks must be semantically connected for features to transfer between them.
10

Svensson, Ylva. "Embedded in a context : the adaptation of immigrant youth." Doctoral thesis, Örebro universitet, Institutionen för juridik, psykologi och socialt arbete, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-24172.

Full text
Abstract:
With rising levels of immigration comes a need to know what fosters positive adaptation for the youth growing up in a new culture of settlement. The issue is increasingly studied; however, little of the research conducted has combined a developmental with a contextual approach. The aim of this dissertation was to explore the adaptation of immigrant youth on the basis of developmental theories and models which put emphasis on setting or contextual conditions. This entailed viewing immigrant youths as developing organisms that actively interact with their environments. Further, immigrant youths were seen as embedded in multiple settings, at different levels and with different contextual features. Two of the overall research questions addressed how contextual features of the settings in which the youth are embedded were related to adaptation. Results from all three studies combined to show that the contextual feature of a setting is not of prime or sole importance for the adaptation of immigrant youth, and that the contextual feature of SES diversity is of greater importance than the ethnic composition of settings. The next two overall research questions addressed how the linkage between settings was related to adaptation. The results indicated that adaptation is not always setting-specific and that what is happening in one setting can be related to adaptation in another setting. Further, it was found that the cultural distance between settings is related to adaptation, but that contextual factors affect this relationship. Overall, the results of the dissertation suggest that the adaptation of immigrant youth is a complex matter that is explained better by interaction and indirect effects than by main and direct effects. This highlights the importance of taking into account all the settings in which immigrant youths are embedded, and of accounting for how the settings interact, in order to understand the factors that foster and hinder positive adaptation of immigrant youth.

The article "Homophily in friendship networks of immigrant and nonimmigrant youth: Does context matter?" in the list of studies is published electronically as "Peer selection and influence of delinquent behavior of immigrant and nonimmigrant youths: does context matter?"

11

Liu, Ye. "Application of Convolutional Deep Belief Networks to Domain Adaptation." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1397728737.

Full text
12

Salman, Nader. "From 3D point clouds to feature preserving meshes." Nice, 2010. http://www.theses.fr/2010NICE4086.

Full text
Abstract:
Most surface reconstruction algorithms are optimised for high-quality data, and the results they produce may be unusable when the data come from low-cost acquisition solutions. Our first contribution is an algorithm for reconstructing surfaces from stereo vision data. It combines the information attached to the 3D points with the calibrated images in order to compensate for the imprecision of the data. After a pre-processing phase on the point cloud, the algorithm builds a 3D triangle soup using the calibrated images. To fit the surface of the scene as closely as possible, this triangle soup is constrained to satisfy visibility and photo-consistency criteria. A mesh is then computed from the triangle soup using a reconstruction technique that combines restricted Delaunay triangulation and Delaunay refinement. Our second contribution is an algorithm that builds, from a 3D point cloud sampled on a surface, a surface mesh that faithfully represents the sharp edges. This algorithm offers a good trade-off between the accuracy and the complexity of the mesh. We first extract an approximation of the sharp edges of the underlying surface from the point cloud. We then use a variant of Delaunay refinement to generate a mesh that combines the extracted sharp edges with an implicit surface obtained from the point cloud. Our method proves flexible and robust to noise; it can take into account the resolution of the targeted mesh and a user-defined sizing field. Both of our contributions produce effective results on a variety of scenes and models. Our method improves on the state of the art in terms of accuracy.
Most of the current surface reconstruction algorithms target high-quality data and can produce intractable results when used with point clouds acquired through low-cost 3D acquisition methods. Our first contribution is a surface reconstruction algorithm for stereo vision data that copes with the data's fuzziness using information from both the acquired 3D point cloud and the calibrated images. After pre-processing the point cloud, the algorithm builds, using the calibrated images, a 3D triangle soup consistent with the surface of the scene through a combination of visibility and photo-consistency constraints. A mesh is then computed from the triangle soup using a combination of restricted Delaunay triangulation and Delaunay refinement methods. Our second contribution is an algorithm that builds, given a 3D point cloud sampled on a surface, an approximating surface mesh with an accurate representation of sharp surface edges, providing an enhanced trade-off between accuracy and mesh complexity. We first extract from the point cloud an approximation of the sharp edges of the underlying surface. Then a feature-preserving variant of a Delaunay refinement process generates a mesh combining a faithful representation of the extracted sharp edges with an implicit surface obtained from the point cloud. The method is shown to be flexible, robust to noise and tuneable to adapt to the scale of the targeted mesh and to a user-defined sizing field. We demonstrate the effectiveness of both contributions on a variety of scenes and models acquired with different hardware, and show results that compare favourably, in terms of accuracy, with the current state of the art.
13

DiCintio, Matt. "A Pilgrim, An Outlaw: Features of Dramatic Adaptation and Theodore Dreiser’s Sister Carrie." VCU Scholars Compass, 2012. http://scholarscompass.vcu.edu/etd/305.

Full text
Abstract:
Although there are countless manuals devoted to playwriting, very few take up the craft of dramatic adaptation in a practical context. My rendering of Theodore Dreiser’s Sister Carrie is an exploration of fundamental elements that require consideration when adapting for the stage. My approach to the characters’ inarticulateness reveals an inherent theatricality in the novel, which both respects Dreiser’s themes and makes them accessible through the conventions of the stage. I suggest the craft of dramatic adaptation should strike a delicate balance between being a “pilgrim” toward the intentions of the source and an “outlaw” in its innovative theatrical representation of them.
14

Martin, Helen Mary. "Children's adaptations: A consideration of children's adaptations from popular written texts into stage, filmic and televisual formats. Examined are the features that determine their popularity and the pertinent questions that surround the role of adaptation, and its future possibilities." Masters by Coursework thesis, Murdoch University, 1996. https://researchrepository.murdoch.edu.au/id/eprint/52859/.

Full text
Abstract:
This dissertation considers selected children's popular texts in print form as they have been adapted for stage, film and television. Examined are the features that determine their popularity, in their written mode and adapted form, and the apparent similarities and differences found in both. Questions surrounding the 'why' and 'how' of adaptation, its role and its future possibilities are also answered. The 'why' of adaptation looks at the reasons for turning a successful story into a film or play, while the 'how' question looks at the transformation processes necessary to transpose a story from one genre into another. As well, the adaptation itself is considered. A semiotic approach is used in looking at the texts, predominantly of Australian origin, although some classical examples are used. These are included to differentiate between the identified strands of children's adaptation, that is: those texts from classical origins; those texts from the bestsellers' lists and awarded shelves; those texts deemed to be popular; and finally those texts perceived as generators of additional financial gains for their producers. Discussions look at the dramatic world; the film auteur theory; and television's function and notions surrounding its benefits. Primary source materials of texts and films are combined with interviews with children's authors, viewed and attended, and notes from a recent Children's Television Foundation conference discussion on television for children. Secondary sources are published journal and newspaper articles. Theoretical texts cover stage, film and television theory, with several sources on text and performance. The conclusions reached proffer the view that children's adaptations, from a mainly written mode in this study, have a credibility and relevance in their own right, within a framework of highly refined reception indicators.
There is a plea to establish additional sign systems to cope with the newer, more technologically based emerging forms in children's genres, and to reconsider adult input regarding what children want. At the same time, the importance of children's adaptations as an ongoing art form owes much to their originating source, which provides the vital seed from which these adapted works grow. It is suggested that this vital seed not be forgotten in any comprehensive analysis of popular children's texts in all their diverse forms.
APA, Harvard, Vancouver, ISO, and other styles
15

Costa, Paulo Alexandre da Silva. "Uma ferramenta para análise automática de modelos de características de linhas de produtos de software sensível ao contexto." Universidade Federal do Ceará, 2012. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=10462.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
As Linhas de produtos de software são uma forma de maximizar o reuso de software, dado que proveem a customização de software em massa. Recentemente, Linhas de produtos de software (LPSs) têm sido usadas para oferecer suporte ao desenvolvimento de aplicações sensíveis ao contexto nas quais adaptabilidade em tempo de execução é um requisito importante. Neste caso, as LPSs são denominadas Linhas de produtos de software sensíveis ao contexto (LPSSCs). O sucesso de uma LPSSC depende, portanto, da modelagem de suas características e do contexto que lhe é relevante. Neste trabalho, essa modelagem é feita usando o diagrama de características e o diagrama de contexto. Entretanto, um processo manual para construção e configuração desses modelos pode facilitar a inclusão de diversos erros, tais como duplicação de características, ciclos, características mortas e falsos opcionais, sendo, portanto, necessário o uso de técnicas de verificação de consistência. A verificação de consistência neste domínio de aplicações assume um papel importante, pois as aplicações usam contexto tanto para prover serviços como para auto-adaptação caso seja necessário. Neste sentido, as adaptações disparadas por mudanças de contexto podem levar a aplicação a um estado indesejado. Além disso, a descoberta de que algumas adaptações podem levar a estados indesejados só pode ser atestada durante a execução, pois o erro é condicionado à configuração atual do produto. Ao considerar que tais aplicações estão sujeitas a um grande volume de mudanças contextuais, a verificação manual torna-se impraticável. Logo, é interessante que seja possível realizar a verificação da consistência de forma automatizada, de maneira que uma entidade computacional possa realizar essas operações.
Dado o pouco suporte automatizado oferecido a esses processos, o objetivo deste trabalho é propor a automatização completa desses processos com uma ferramenta, chamada FixTure, para realizar a verificação da construção dos modelos de características para LPSSC e da configuração de produtos a partir desses modelos. A ferramenta FixTure também provê uma simulação de situações de contexto no ciclo de vida de uma aplicação de uma LPSSC, com o objetivo de identificar inconsistências que ocorreriam em tempo de execução.
Software product lines are a way to maximize software reuse, since they provide mass software customization. Software product lines (SPLs) have also been used to support context-aware application development, where adaptability at runtime is an important issue. In this case, SPLs are known as context-aware software product lines. Context-aware software product line (CASPL) success depends on the modelling of their features and relevant context. However, a manual process to build and configure these models can introduce several errors, such as replicated features, loops, and dead and false optional features. Because of this, there is a need for techniques to verify model consistency. In the context-aware application domain, consistency verification plays an important role, since applications in this domain use context both to provide services and for self-adaptation, when it is needed. In this sense, context-triggered adaptations may lead the application to an undesired state. Moreover, in some cases, the statement that a context-triggered adaptation is undesired can only be made at runtime, because the error is conditioned on the current product configuration. Additionally, applications in this domain are subject to large volumes of contextual changes, which implies that manual verification is virtually not viable. It is therefore interesting to do consistency verification in an automated way, such that a computational entity may execute these operations. As there is little automated support for these processes, the objective of this work is to propose their complete automation with a software tool, called FixTure, that does consistency verification of feature diagrams during their development and product configuration. The FixTure tool also supports contextual-change simulation during the lifecycle of a CASPL application, in order to identify inconsistencies that can happen at runtime.
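The dead-feature and false-optional checks mentioned in this abstract can be illustrated with a minimal sketch. The toy feature model, its constraints and the OPTIONAL list below are assumptions for illustration only, not FixTure's actual input format or algorithm; a real tool would use a SAT solver rather than brute-force enumeration.

```python
from itertools import product

# Hypothetical toy feature model: each feature is a boolean, and each
# constraint is a predicate over a configuration dict.
FEATURES = ["root", "offline_maps", "gps", "wifi"]
OPTIONAL = ["gps", "wifi"]  # declared optional in the (hypothetical) diagram

CONSTRAINTS = [
    lambda c: c["root"],                          # root is always selected
    lambda c: c["offline_maps"],                  # mandatory child of root
    lambda c: not c["offline_maps"] or c["gps"],  # offline_maps requires gps
    lambda c: not (c["gps"] and c["wifi"]),       # gps excludes wifi
]

def valid_configurations():
    """Enumerate every assignment, keeping those satisfying all constraints."""
    for bits in product([False, True], repeat=len(FEATURES)):
        config = dict(zip(FEATURES, bits))
        if all(rule(config) for rule in CONSTRAINTS):
            yield config

def analyse():
    """Report dead features (in no valid product) and false optionals
    (declared optional, yet present in every valid product)."""
    configs = list(valid_configurations())
    dead = [f for f in FEATURES if not any(c[f] for c in configs)]
    false_optional = [f for f in OPTIONAL if all(c[f] for c in configs)]
    return dead, false_optional
```

In this example analyse() flags wifi as dead (gps is forced by offline_maps, and gps excludes wifi) and gps as a false optional, the two anomaly classes the abstract names.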
APA, Harvard, Vancouver, ISO, and other styles
16

Haque, Serajul. "Perceptual features for speech recognition." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0187.

Full text
Abstract:
Automatic speech recognition (ASR) is one of the most important research areas in the field of speech technology and research. It is also known as the recognition of speech by a machine or by some artificial intelligence. However, in spite of focused research in this field for the past several decades, robust speech recognition with high reliability has not been achieved, as it degrades in the presence of speaker variabilities, channel mismatch conditions, and noisy environments. The superb ability of the human auditory system has motivated researchers to include features of human perception in the speech recognition process. This dissertation investigates the roles of perceptual features of human hearing in automatic speech recognition in clean and noisy environments. Methods of simplified synaptic adaptation and two-tone suppression by companding are introduced by temporal processing of speech using a zero-crossing algorithm. It is observed that a high-frequency enhancement technique such as synaptic adaptation performs better in stationary Gaussian white noise, whereas a low-frequency enhancement technique such as two-tone suppression performs better in non-Gaussian, non-stationary noise types. The effects of static compression on ASR parametrization are investigated as observed in the psychoacoustic input/output (I/O) perception curves. A method of frequency-dependent asymmetric compression, that is, higher compression in the higher frequency regions than in the lower frequency regions, is proposed. By asymmetric compression, degradation of the spectral contrast of the low-frequency formants due to the added compression is avoided. A novel feature extraction method for ASR based on the auditory processing in the cochlear nucleus is presented.
Synchrony detection, average discharge (mean rate) processing and two-tone suppression are segregated and processed separately at the feature extraction level, according to the differential processing scheme observed in the AVCN, PVCN and DCN of the cochlear nucleus, respectively. It is further observed that improved ASR performance can be achieved by separating the synchrony detection from the synaptic processing. A time-frequency perceptual spectral subtraction method based on several psychoacoustic properties of human audition is developed and evaluated with an ASR front-end. An auditory masking threshold is determined based on these psychoacoustic effects. It is observed that in speech recognition applications, spectral subtraction utilizing psychoacoustics may be used for improved performance in noisy conditions. The performance may be further improved if masking of noise by the tonal components is augmented by spectral subtraction in the masked region.
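The spectral subtraction this abstract builds on can be reduced to its classical core: subtract a noise magnitude estimate from each spectral bin and floor the result. The sketch below is that generic baseline only; the function name and the alpha/beta parameters are illustrative assumptions, standing in for the thesis's psychoacoustic masking threshold.

```python
def spectral_subtract(noisy_mag, noise_mag, alpha=2.0, beta=0.1):
    """Classical magnitude-domain spectral subtraction.

    noisy_mag : per-bin magnitudes of the noisy speech frame
    noise_mag : per-bin noise magnitude estimate (e.g. from speech pauses)
    alpha     : over-subtraction factor
    beta      : spectral floor, as a fraction of the noise estimate
    """
    cleaned = []
    for y, n in zip(noisy_mag, noise_mag):
        s = y - alpha * n                 # subtract the (scaled) noise estimate
        cleaned.append(max(s, beta * n))  # floor to limit musical noise
    return cleaned
```

A perceptual variant like the one described above would replace the fixed floor beta * n with a bin-dependent auditory masking threshold.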
APA, Harvard, Vancouver, ISO, and other styles
17

Charnay, Clément. "Enhancing supervised learning with complex aggregate features and context sensitivity." Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAD025/document.

Full text
Abstract:
Dans cette thèse, nous étudions l'adaptation de modèles en apprentissage supervisé. Nous adaptons des algorithmes d'apprentissage existants à une représentation relationnelle. Puis, nous adaptons des modèles de prédiction aux changements de contexte.En représentation relationnelle, les données sont modélisées par plusieurs entités liées par des relations. Nous tirons parti de ces relations avec des agrégats complexes. Nous proposons des heuristiques d'optimisation stochastique pour inclure des agrégats complexes dans des arbres de décisions relationnels et des forêts, et les évaluons sur des jeux de données réelles.Nous adaptons des modèles de prédiction à deux types de changements de contexte. Nous proposons une optimisation de seuils sur des modèles à scores pour s'adapter à un changement de coûts. Puis, nous utilisons des transformations affines pour adapter les attributs numériques à un changement de distribution. Enfin, nous étendons ces transformations aux agrégats complexes
In this thesis, we study model adaptation in supervised learning. Firstly, we adapt existing learning algorithms to the relational representation of data. Secondly, we adapt learned prediction models to context change. In the relational setting, data is modeled by multiple entities linked with relationships. We handle these relationships using complex aggregate features. We propose stochastic optimization heuristics to include complex aggregates in relational decision trees and Random Forests, and assess their predictive performance on real-world datasets. We adapt prediction models to two kinds of context change. Firstly, we propose an algorithm to tune thresholds on pairwise scoring models to adapt to a change of misclassification costs. Secondly, we reframe numerical attributes with affine transformations to adapt to a change of attribute distribution between a learning and a deployment context. Finally, we extend these transformations to complex aggregates.
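The attribute-reframing idea can be sketched with a simple moment-matching affine map: assuming the shift between contexts is affine, estimate a and b so that the deployment-context attribute matches the mean and spread seen at learning time. This is a simplified stand-in for the thesis's reframing procedure; the function name is illustrative.

```python
import math

def affine_reframe(train_values, deploy_values):
    """Return x -> a*x + b mapping the deployment-context attribute
    distribution onto the training-context one by matching the mean
    and standard deviation of the two samples."""
    def mean(xs):
        return sum(xs) / len(xs)

    def std(xs):
        m = mean(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

    a = std(train_values) / std(deploy_values)
    b = mean(train_values) - a * mean(deploy_values)
    return lambda x: a * x + b
```

For instance, if an attribute was learned in Celsius but is deployed in Fahrenheit, the fitted map recovers the Celsius scale from unlabeled deployment samples.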
APA, Harvard, Vancouver, ISO, and other styles
18

Ramos, Jonathan da Silva. "Algoritmos de casamento de imagens com filtragem adaptativa de outliers." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-02022017-110428/.

Full text
Abstract:
O registro de imagens tem um papel importante em várias aplicações, tais como reconstrução de objetos 3D, reconhecimento de padrões, imagens microscópicas, entre outras. Este registro é composto por três passos principais: (1) seleção de pontos de interesse; (2) extração de características dos pontos de interesse; (3) correspondência entre os pontos de interesse de uma imagem para a outra. Para os passos 1 e 2, algoritmos como SIFT e SURF têm apresentado resultados satisfatórios. Entretanto, para o passo 3 ocorre a presença de outliers, ou seja, pontos de interesse que foram incorretamente correspondidos. Uma única correspondência incorreta leva a um resultado final indesejável. Os algoritmos para remoção de outliers (consenso) possuem um alto custo computacional, que cresce à medida que a quantidade de outliers aumenta. Com o objetivo de reduzir o tempo de processamento necessário por esses algoritmos, o algoritmo FOMP (do inglês, Filtering out Outliers from Matched Points) foi proposto e desenvolvido neste trabalho para realizar a filtragem de outliers no conjunto de pontos inicialmente correspondidos. O método FOMP considera cada conjunto de pontos como um grafo completo, no qual os pesos são as distâncias entre os pontos. Por meio da soma de diferenças entre os pesos das arestas, o vértice que apresentar maior valor é removido. Para validar o método FOMP, foram realizados experimentos utilizando quatro bases de imagens. Cada base apresenta características intrínsecas: (a) diferenças de rotação ou zoom da câmera; (b) padrões repetitivos, os quais geram duplicidade nos vetores de características; (c) objetos deformados, tais como plásticos, papéis ou tecido; (d) transformações afins (diferentes pontos de vista). Os experimentos realizados mostraram que o filtro FOMP remove mais de 65% dos outliers, enquanto mantém cerca de 98% dos inliers.
A abordagem proposta mantém a precisão dos métodos de consenso, enquanto reduz o tempo de processamento pela metade para os métodos baseados em grafos.
Image matching plays a major role in many applications, such as pattern recognition and microscopic imaging. It encompasses three steps: 1) interest point selection; 2) feature extraction from each point; 3) feature point matching. For steps 1 and 2, traditional interest point detectors/extractors have worked well. However, for step 3, even a few incorrectly matched points (outliers) might lead to an undesirable result. State-of-the-art consensus algorithms present a high time cost as the number of outliers increases. Aiming at overcoming this problem, we present FOMP, a preprocessing approach that reduces the number of outliers in the initial set of matched points. FOMP filters out the vertices that present a higher difference among their edges in a complete graph representation of the points. To validate the proposed method, experiments were performed with four image databases: (a) variations of rotation or camera zoom; (b) repetitive patterns, which lead to duplicated feature vectors; (c) deformable objects, such as plastics, clothes or papers; (d) affine transformations (different viewpoints). The experimental results showed that FOMP removes more than 65% of the outliers, while keeping over 98% of the inliers. Moreover, the precision of traditional methods is kept, while the processing time of graph-based approaches is reduced by half.
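The filtering rule described in this abstract lends itself to a compact sketch: treat each matched point set as a complete graph weighted by pairwise distances, score each correspondence by how much its edge lengths disagree between the two images, and drop the worst vertex. This is an illustrative reading of the description above, not the authors' code; names and the iteration scheme are assumptions.

```python
import math

def _dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def fomp_filter(pts_a, pts_b, rounds=1):
    """pts_a[i] and pts_b[i] are matched interest points in images A and B.
    Each round scores every match by the summed disagreement of its edge
    lengths across the two complete graphs and removes the worst vertex."""
    idx = list(range(len(pts_a)))
    for _ in range(rounds):
        scores = {
            i: sum(abs(_dist(pts_a[i], pts_a[j]) - _dist(pts_b[i], pts_b[j]))
                   for j in idx if j != i)
            for i in idx
        }
        idx.remove(max(idx, key=scores.get))
    return idx
```

Under a pure translation between images, inlier edge lengths agree exactly, so only a mismatched point accumulates a large score and is removed first.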
APA, Harvard, Vancouver, ISO, and other styles
19

Kuliešienė, Asta. "Vaikų iš socialiai remtinų šeimų ankstyvoji adaptacija ikimokyklinio ugdymo įstaigose." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2013~D_20130125_130134-90440.

Full text
Abstract:
Šiuolaikinėje industrinėje visuomenėje, kai susiklostė sudėtingos ekonominės sąlygos, iš esmės pakito ir bendravisuomeninė šeimos samprata. Vaikas kasdien suvokia realią tikrovę ir pats joje dalyvauja perimdamas iš tėvų įvairias elgesio, veiklos, charakterio savybes. Pakitusiose socialinėse ekonominėse sąlygose, tam tikros grupės žmonių jaučiasi nesaugios, joms sunku prisitaikyti prie šiuolaikinio gyvenimo tempo, adaptuotis prie socialinių ekonominių ir politinių pokyčių. Ypatingai didelis nedarbo lygis išryškino šeimų, gaunančių mažas pajamas, socialines problemas.Tyrime dalyvavo 185 apklaustieji: 87 tėvai ir 18 ikimokyklinio ugdymo pedagogų, iš Kauno miesto Dainavos mikrorajono ikimokyklinio ugdymo įstaigų, kurie stebėjo ir įvertino 80 vaikų elgesį.
In modern industrial society, with difficult economic conditions, the family concept has been substantially altered. A child on a daily basis understands reality and takes part in it, taking over the characteristics of parental behavior, actions and temper. Certain groups of people feel unsafe in changed socio-economic conditions. They find it difficult to adapt to the pace of modern life and to socio-economic and political changes. Extremely high unemployment has highlighted the social problems of families with low income. The study involved 185 respondents: 87 parents and 18 preschool teachers from the preschools of the Kaunas Dainava district, who monitored and evaluated the behavior of 80 children.
APA, Harvard, Vancouver, ISO, and other styles
20

Омаров, М. А., В. М. Карташов, and Р. И. Цехмистро. "Features of the Use of Microprocessors in the Systems of Ovojectors in their Adaptation to the Conditions of the Former CIS." Thesis, NURE, MC&FPGA, 2019. https://mcfpga.nure.ua/conf/2019-mcfpga/10-35598-mcfpga-2019-012.

Full text
Abstract:
This electronic document describes the use of microcontrollers in systems for the automatic sorting of eggs with a live (or dead) embryo, with subsequent automatic vaccination of the live embryos. These installations, called ovojectors, are produced in only three countries.
APA, Harvard, Vancouver, ISO, and other styles
21

Nicely, Kenneth Edward. "Middle Level Schools in an Era of Standards and Accountability: Adaptations of the Features of the Middle School Concept." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/26551.

Full text
Abstract:
The literature related to the development of education in the middle grades and to the features associated with the implementation of the middle school concept provides a theoretical grounding for the development and testing of an Innovation Configuration map for the middle school concept. The description provided of the historical development of middle-grades education presents the context for recent research studies and ongoing policy debate. In addition, features of the middle school concept as described within the literature are identified and an overview of salient research findings related to these features is given. A synthesis and critical review of previous research methodologies and findings reveal the need for further research. The purpose of the instrument development and testing process was to identify critical features of the middle school concept implemented in the context of standards and accountability. The instrument development and testing process investigated the nature of the implementation of middle school concept features, recognizing that actual practices in schools may vary somewhat without the schools losing their identity as middle level schools. The principal product of the process was the development of a diagnostic tool that may be used in future research to identify acceptable forms of implementation of the middle level philosophy of education. The instrument development and testing process employed research methodology based on the Concerns-Based Adoption Model (CBAM) of Hall and Hord (2006). Specifically, an Innovation Configuration map was developed identifying components of the middle level philosophy of education and describing variations in implementation of the components.
Ed. D.
APA, Harvard, Vancouver, ISO, and other styles
22

Muhammad, Aminu. "Contextual lexicon-based sentiment analysis for social media." Thesis, Robert Gordon University, 2016. http://hdl.handle.net/10059/1571.

Full text
Abstract:
Sentiment analysis concerns the computational study of opinions expressed in text. Social media domains provide a wealth of opinionated data, thus creating a greater need for sentiment analysis. Typically, sentiment lexicons that capture term-sentiment association knowledge are used to develop sentiment analysis systems. However, the nature of social media content calls for analysis methods and knowledge sources that are better able to adapt to changing vocabulary. Invariably, existing sentiment lexicon knowledge cannot usefully handle social media vocabulary, which is typically informal and changeable yet rich in sentiment. This, in turn, has implications for the analyser's ability to effectively capture the context therein and to interpret the sentiment polarity from the lexicons. In this thesis we use SentiWordNet, a popular sentiment-rich lexicon with substantial vocabulary coverage, and explore how to adapt it for social media sentiment analysis. Firstly, the thesis identifies a set of strategies to incorporate the effect of modifiers on sentiment-bearing terms (local context). These modifiers include contextual valence shifters, non-lexical sentiment modifiers typical in social media, and discourse structures. Secondly, the thesis introduces an approach in which a domain-specific lexicon is generated using a distant supervision method and integrated with a general-purpose lexicon, using a weighted strategy, to form a hybrid (domain-adapted) lexicon. This has the dual purpose of enriching the term coverage of the general-purpose lexicon with non-standard but sentiment-rich terms as well as adjusting the sentiment semantics of terms. Here, we identified two term-sentiment association metrics based on Term Frequency and Inverse Document Frequency that are able to outperform the state-of-the-art Point-wise Mutual Information on social media data.
As distant supervision may not be readily applicable on some social media domains, we explore the cross-domain transferability of a hybrid lexicon. Thirdly, we introduce an approach for improving distant-supervised sentiment classification with knowledge from local context analysis, domain-adapted (hybrid) and emotion lexicons. Finally, we conduct a comprehensive evaluation of all identified approaches using six sentiment-rich social media datasets.
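The Term Frequency / Inverse Document Frequency association metrics mentioned above can be sketched with plain TF-IDF arithmetic: score a term by the difference between its frequency in positively and negatively labelled posts, weighted by its IDF. The exact metrics in the thesis differ; the function below is a generic illustration with assumed names.

```python
import math

def tfidf_sentiment_scores(pos_docs, neg_docs):
    """pos_docs / neg_docs: lists of token lists from positively and
    negatively labelled posts. Returns term -> score; positive scores
    lean positive, negative scores lean negative."""
    docs = pos_docs + neg_docs
    vocab = {t for d in docs for t in d}

    def idf(term):
        df = sum(term in d for d in docs)    # document frequency
        return math.log(len(docs) / df)

    def tf(term, corpus):
        total = sum(len(d) for d in corpus)  # tokens in the corpus
        return sum(d.count(term) for d in corpus) / total

    return {t: (tf(t, pos_docs) - tf(t, neg_docs)) * idf(t) for t in vocab}
```

Applied to distantly supervised data (e.g. tweets labelled by emoticons), such scores can seed a domain-specific lexicon that is then blended with a general-purpose one.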
APA, Harvard, Vancouver, ISO, and other styles
23

Balan, André Guilherme Ribeiro. "Métodos adaptativos de segmentação aplicados à recuperação de imagens por conteúdo." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-15062007-150711/.

Full text
Abstract:
A possibilidade de armazenamento de imagens no formato digital favoreceu a evolução de diversos ramos de atividades, especialmente as áreas de pesquisa e clínica médica. Ao mesmo tempo, o volume crescente de imagens armazenadas deu origem a um problema de relevância e complexidade consideráveis: a Recuperação de Imagens Baseada em Conteúdo, que, em outras palavras, diz respeito à capacidade de um sistema de armazenamento processar operações de consulta de imagens a partir de características visuais, extraídas automaticamente por meio de métodos computacionais. Das principais questões que constituem este problema, amplamente conhecido pelo termo CBIR - Content-Based Image Retrieval, fazem parte as seguintes: Como interpretar ou representar matematicamente o conteúdo de uma imagem? Quais medidas que podem caracterizar adequadamente este conteúdo? Como recuperar imagens de um grande repositório utilizando o conteúdo extraído? Como estabelecer um critério matemático de similaridade entre estas imagens? O trabalho desenvolvido e apresentado nesta tese busca, exatamente, responder perguntas deste tipo, especialmente para os domínios de imagens médicas e da biologia genética, onde a demanda por sistemas computacionais que incorporam técnicas CBIR é consideravelmente alta por diversos motivos. Motivos que vão desde a necessidade de se buscar informação visual que estava até então inacessível pela falta de anotações textuais, até o interesse em poder contar com auxílio computacional confiável para a importante tarefa de diagnóstico clínico. Neste trabalho são propostos métodos e soluções inovadoras para o problema de segmentação e extração de características de imagens médicas e imagens de padrões espaciais de expressão genética. A segmentação é o processo de delimitação automático de regiões de interesse da imagem que possibilita uma caracterização bem mais coerente do conteúdo visual, comparado com as tradicionais técnicas de caracterização global e direta da imagem. 
Partindo desta idéia, as técnicas de extração de características desenvolvidas neste trabalho empregam métodos adaptativos de segmentação de imagens e alcançam resultados excelentes na tarefa de recuperação baseada em conteúdo
Storing images in digital format has supported the evolution of several branches of activity, especially research and the medical clinic. At the same time, the increasing volume of stored images has originated a topic of considerable relevance and complexity: Content-Based Image Retrieval, which, in other words, is related to the ability of a computational system to process image queries based on visual features automatically extracted by computational methods. Among the main questions that constitute this issue, widely known as CBIR, are these: How can image content be mathematically expressed? What measures can suitably characterize this content? How can images be retrieved from a large dataset employing the extracted content? How can a mathematical criterion of similarity among the images be established? The work developed and presented in this thesis aims at answering questions like these, especially for the domains of medical images and genetic biology, where the demand for computational systems that embody CBIR techniques is considerably high for several reasons. These reasons range from the need to retrieve visual information that was until then inaccessible due to the lack of textual annotations, to the interest in having reliable computational support for the important task of clinical diagnosis. In this work, innovative methods and solutions are proposed for the problem of image segmentation and feature extraction from medical images and images of gene expression patterns. Segmentation, the automatic delineation of an image's regions of interest, enables a more coherent representation of its visual content than that provided by traditional methods of global and direct representation. Grounded in this idea, the feature extraction techniques developed in this work employ adaptive image segmentation methods and achieve excellent results on the task of Content-Based Image Retrieval.
APA, Harvard, Vancouver, ISO, and other styles
24

Gangireddy, Siva Reddy. "Recurrent neural network language models for automatic speech recognition." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28990.

Full text
Abstract:
The goal of this thesis is to advance the use of recurrent neural network language models (RNNLMs) for large vocabulary continuous speech recognition (LVCSR). RNNLMs are currently state-of-the-art and have been shown to consistently reduce the word error rates (WERs) of LVCSR tasks when compared to other language models. In this thesis we propose various advances to RNNLMs: improved learning procedures, context enhancement, and adaptation. We learned better parameters with a novel pre-training approach and enhanced the context using prosody and syntactic features. We present a pre-training method for RNNLMs, in which the output weights of a feed-forward neural network language model (NNLM) are shared with the RNNLM. This is accomplished by first fine-tuning the weights of the NNLM, which are then used to initialise the output weights of an RNNLM with the same number of hidden units. To investigate the effectiveness of the proposed pre-training method, we have carried out text-based experiments on the Penn Treebank Wall Street Journal data, and ASR experiments on the TED lectures data. Across the experiments, we observe small but significant improvements in perplexity (PPL) and ASR WER. Next, we present unsupervised adaptation of RNNLMs. We adapted the RNNLMs to a target domain (topic, genre or television programme) at test time using ASR transcripts from first-pass recognition. We investigated two approaches to adapt the RNNLMs. In the first approach the forward-propagating hidden activations are scaled, known as learning hidden unit contributions (LHUC). In the second approach we adapt all parameters of the RNNLM. We evaluated the adapted RNNLMs by reporting the WERs on multi-genre broadcast speech data. We observe small (on average 0.1% absolute) but significant improvements in WER compared to a strong unadapted RNNLM model. Finally, we present the context enhancement of RNNLMs using prosody and syntactic features.
The prosody features were computed from the acoustics of the context words and the syntactic features were from the surface form of the words in the context. We trained the RNNLMs with word duration, pause duration, final phone duration, syllable duration, syllable F0, part-of-speech tag and Combinatory Categorial Grammar (CCG) supertag features. The proposed context-enhanced RNNLMs were evaluated by reporting PPL and WER on two speech recognition tasks, Switchboard and TED lectures. We observed substantial improvements in PPL (5% to 15% relative) and small but significant improvements in WER (0.1% to 0.5% absolute).
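The LHUC adaptation mentioned in this abstract has a very small computational core: each hidden unit's activation is rescaled by a per-unit amplitude 2·sigmoid(r), where the r values are the only parameters updated on adaptation data. A minimal sketch of that rescaling step, with illustrative names (not the thesis code):

```python
import math

def lhuc_scale(hidden, r):
    """Apply LHUC re-scaling to one frame of hidden activations.

    hidden : hidden-layer activations
    r      : per-unit LHUC parameters, the only values updated on the
             adaptation data; r = 0 gives amplitude 1.0 (no change),
             a very negative r effectively switches the unit off.
    """
    return [2.0 / (1.0 + math.exp(-ri)) * h for h, ri in zip(hidden, r)]
```

Because only one scalar per hidden unit is learned, the scheme adapts with very little target-domain data, which is why it suits unsupervised adaptation from first-pass transcripts.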
APA, Harvard, Vancouver, ISO, and other styles
25

Sellami, Akrem. "Interprétation sémantique d'images hyperspectrales basée sur la réduction adaptative de dimensionnalité." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0037/document.

Full text
Abstract:
Hyperspectral imaging acquires rich spectral information about a scene in several hundred, or even thousands, of narrow, contiguous spectral bands. However, given the high number of spectral bands, the strong inter-band correlation, and the redundancy of spectro-spatial information, interpreting these massive hyperspectral data is one of the major challenges for the remote sensing community. In this context, the key challenge is to reduce the number of unnecessary spectral bands, that is, to reduce their redundancy and high correlation while preserving the relevant information. Projection approaches transform the hyperspectral data into a reduced subspace by combining all original spectral bands, whereas band selection approaches seek a subset of relevant spectral bands. In this thesis, we first address hyperspectral image classification, integrating spectro-spatial information into dimensionality reduction in order to improve classification performance and to overcome the loss of spatial information inherent in projection approaches. We therefore propose a hybrid model that preserves spectro-spatial information by exploiting tensors in the tensor locality preserving projection approach (TLPP) and uses constraint-based band selection (CBS) as an unsupervised approach to select discriminant spectral bands. To model the uncertainty and imperfection affecting these reduction approaches and the classifiers, we propose an evidential approach based on Dempster-Shafer theory (DST).
In a second step, we extend the hybrid model by exploiting semantic knowledge extracted from the features obtained by the previously proposed TLPP approach to enrich the CBS technique. The proposed approach selects relevant spectral bands that are at once informative, discriminant, distinctive, and weakly redundant: it selects the discriminant and distinctive bands with the CBS technique, injecting the semantics extracted by knowledge-extraction techniques, so as to select the optimal subset of relevant spectral bands automatically and adaptively. The performance of our approach is evaluated on several real hyperspectral datasets.
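The band-selection idea in this abstract, keeping informative bands while discarding redundant, highly correlated ones, can be sketched with a generic greedy selector. This is a toy illustration only, not the TLPP/CBS method of the thesis; the synthetic data cube and its shape are invented:

```python
import numpy as np

def select_bands(cube, k):
    """Greedy redundancy-aware band selection: start from the band with the
    highest variance, then repeatedly add the band least correlated with the
    bands already chosen."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    corr = np.abs(np.corrcoef(X, rowvar=False))      # b x b inter-band correlation
    selected = [int(np.argmax(cube.reshape(-1, b).var(axis=0)))]
    while len(selected) < k:
        redundancy = corr[:, selected].max(axis=1)   # worst correlation with the picks
        redundancy[selected] = np.inf                # never re-pick a band
        selected.append(int(np.argmin(redundancy)))
    return sorted(selected)

# Synthetic 4x4 image with 4 bands; band 2 nearly duplicates band 0.
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
cube = np.stack([base,
                 rng.normal(size=(4, 4)),
                 base + 0.01 * rng.normal(size=(4, 4)),
                 rng.normal(size=(4, 4))], axis=-1)
bands = select_bands(cube, 3)   # the near-duplicate pair is not picked twice
```

A real pipeline would replace the variance and correlation criteria with the constrained, semantics-driven selection scores the abstract describes.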
APA, Harvard, Vancouver, ISO, and other styles
26

Evmenova, Anna S. "Lights! Camera! Captions! The effects of picture and/or word captioning adaptations, alternative narration, and interactive features on video comprehension by students with intellectual disabilities /." Fairfax, VA : George Mason University, 2008. http://hdl.handle.net/1920/3071.

Full text
Abstract:
Thesis (Ph.D.)--George Mason University, 2008.
Vita: p. 388. Thesis director: Michael M. Behrmann. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Education. Title from PDF t.p. (viewed July 3, 2008). Includes bibliographical references (p. 349-387). Also issued in print.
APA, Harvard, Vancouver, ISO, and other styles
27

Tomashenko, Natalia. "Speaker adaptation of deep neural network acoustic models using Gaussian mixture model framework in automatic speech recognition systems." Thesis, Le Mans, 2017. http://www.theses.fr/2017LEMA1040/document.

Full text
Abstract:
Differences between training and testing conditions can significantly degrade recognition accuracy in automatic speech recognition (ASR) systems. Adaptation is an efficient way to reduce the mismatch between models and data from a particular speaker or channel. There are two dominant types of acoustic models (AMs) used in ASR: Gaussian mixture models (GMMs) and deep neural networks (DNNs). The GMM hidden Markov model (GMM-HMM) approach has been one of the most common techniques in ASR systems for decades, speaker adaptation is very effective for these AMs, and various adaptation techniques have been developed for them. DNN-HMM AMs have recently achieved major advances and outperformed GMM-HMM models on various ASR tasks, but speaker adaptation remains very challenging for them: many adaptation algorithms that work well for GMM systems cannot easily be applied to DNNs because of the different nature of these models. The main purpose of this thesis is to develop a method for efficiently transferring adaptation algorithms from the GMM framework to DNN models. A novel approach for speaker adaptation of DNN AMs is proposed and investigated, based on using so-called GMM-derived features as input to a DNN. The proposed technique provides a general framework for transferring adaptation algorithms developed for GMMs to DNN adaptation. It is explored on various state-of-the-art ASR systems and shown to be effective in comparison with, and complementary to, other speaker adaptation techniques.
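The "GMM-derived features" idea, feeding a DNN with quantities computed from a GMM, can be sketched as per-frame posterior probabilities under a diagonal-covariance GMM. This is one illustrative reading; the thesis's exact feature recipe may differ:

```python
import numpy as np

def gmm_derived_features(frames, means, variances, weights):
    """Per-frame posterior probabilities under a diagonal-covariance GMM,
    usable as DNN input features."""
    # log N(x | mu_k, diag(var_k)) for every frame/component pair
    diff = frames[:, None, :] - means[None, :, :]                  # (T, K, D)
    log_gauss = -0.5 * (np.log(2 * np.pi * variances)[None]
                        + diff**2 / variances[None]).sum(-1)       # (T, K)
    log_post = np.log(weights)[None] + log_gauss                   # unnormalised
    log_post -= log_post.max(axis=1, keepdims=True)                # stabilise exp
    post = np.exp(log_post)
    return post / post.sum(axis=1, keepdims=True)                  # rows sum to 1

# Two well-separated components: posteriors should be near one-hot.
means = np.array([[0.0, 0.0], [5.0, 5.0]])
variances = np.ones((2, 2))
weights = np.array([0.5, 0.5])
frames = np.array([[0.1, -0.1], [5.2, 4.9]])
feats = gmm_derived_features(frames, means, variances, weights)
```

In a full system, adaptation algorithms developed for the GMM (e.g. mean transforms) would modify the GMM before these features are extracted, which is what lets GMM adaptation carry over to the DNN.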
APA, Harvard, Vancouver, ISO, and other styles
28

Lamago, Merlin Ferdinand. "Réingénierie des fonctions des plateformes LMS par l'analyse et la modélisation des activités d'apprentissage : application à des contextes éducatifs avec fracture numérique." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0589/document.

Full text
Abstract:
This thesis models learning activity on Learning Management Systems (LMS) in order to maximize users' efficiency. The project grew out of a practical concern: how to facilitate the use of LMS platforms for teachers and learners in developing countries affected by the digital and educational divide. The research addresses the problem of LMS adaptability and involves two levels of modeling: the learning tool itself and the planned context of use. To address this adaptability question, we adopt a two-pronged approach: functional analysis of LMS tools and reengineering of user interfaces. The first step defines an approach for analyzing teaching and learning activity on LMS platforms. This entails modeling common learning situations and cross-checking them against the features available in existing LMS solutions. This preliminary work led to a usage-analysis model for LMS platforms that we call the OCGPI approach (Organize-Collaborate-Guide-Produce-Inform). The second step, building on this groundwork, proposes an adaptive reengineering of LMS based on the context of use: an embedded automatic configurator that adapts the working environment to each use and each user. The prototype is explicitly designed to let novices get started quickly and to be as undemanding as possible from a technological standpoint.
APA, Harvard, Vancouver, ISO, and other styles
29

Luo, Guoliang. "Segmentation de maillages dynamiques et son application pour le calcul de similarité." Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAD026/document.

Full text
Abstract:
With the abundance of animation techniques available today, animated meshes have become the subject of various data-processing techniques in the computer graphics community, such as mesh segmentation and compression. Created with animation software or from motion-capture data, a large portion of animated meshes are deforming meshes, i.e. ordered sequences of static meshes whose topology is fixed (a fixed number of vertices and fixed connectivity). Although a great deal of research on static meshes has been reported over the last two decades, the analysis, retrieval, and compression of deforming meshes remain open research challenges. Such tasks require efficient representations of animated meshes, such as segmentation. Several spatial segmentation methods based on the movements of each vertex, or each triangle, have been presented in existing works to partition a given deforming mesh into rigid components. In this thesis, we present segmentation techniques that compute temporal and spatio-temporal segmentations of deforming meshes, neither of which had been studied before. We further extend the segmentation results to the measurement of motion similarity between deforming meshes, a significant contribution since, to our knowledge, no existing approach can compare deforming meshes in this way.
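A minimal stand-in for this kind of motion-based segmentation is to cluster per-vertex displacement trajectories of a fixed-topology mesh sequence; the thesis's actual spatio-temporal algorithm is more elaborate, so treat this only as a sketch of the idea:

```python
import numpy as np

def motion_segments(frames, k, iters=20):
    """Cluster mesh vertices into k groups by the similarity of their
    displacement trajectories, approximating rigid components."""
    # frames: (T, V, 3) vertex positions of a deforming mesh (fixed topology)
    V = frames.shape[1]
    traj = (frames[1:] - frames[:-1]).transpose(1, 0, 2).reshape(V, -1)
    # farthest-point initialisation keeps the toy example deterministic
    centers = [traj[0]]
    for _ in range(1, k):
        dist = np.min([np.linalg.norm(traj - c, axis=1) for c in centers], axis=0)
        centers.append(traj[dist.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):  # plain k-means on the trajectories
        labels = np.linalg.norm(traj[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = traj[labels == j].mean(axis=0)
    return labels

# Toy deforming mesh: vertices 0-2 translate along x, vertices 3-5 stay put.
frames = np.zeros((5, 6, 3))
for t in range(5):
    frames[t, :3, 0] = float(t)
labels = motion_segments(frames, 2)
```

A temporal segmentation would instead cluster along the time axis, grouping frames with similar global motion.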
APA, Harvard, Vancouver, ISO, and other styles
30

Borelli, Helberth. "Uma linguagem de modelagem de domínio específico para linhas de produto de software dinâmicas." Universidade Federal de Goiás, 2016. http://repositorio.bc.ufg.br/tede/handle/tede/5893.

Full text
Abstract:
Made available in DSpace on 2016-08-10 (GMT). Previous issue date: 2016-05-06.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Systems that must adapt to context changes face the challenge of adapting the software at runtime. This thesis proposes adapting resources in the form of features, drawing on concepts from Feature-Oriented Domain Analysis. One approach to developing systems based on features that are adaptable at runtime is the concept of a Dynamic Software Product Line (DSPL), which can be implemented through metamodels. The aim of this thesis is the development of a Domain-Specific Modeling Language (DSML) for DSPLs, designed around a metamodel for DSPL development that is divided into three metamodels: features, variabilities, and applications for deriving products. The variability metamodel, in particular, models contracts that negotiate the adaptation of products to the features that may or may not be present in the execution environment. Adaptations are based on state machines, which handle either a change in a feature's state or a transition to an equivalent feature, in order to keep the software product running. The developed DSML also extends the constraints imposed by the metamodels and generates code in a general-purpose language from the feature, variability, and application models. To validate the proposal, the DSML was used to model two DSPLs, including product derivation and execution on a platform based on the OSGi specification.
APA, Harvard, Vancouver, ISO, and other styles
31

Kachouri, Rostom. "Classification multi-modèles des images dans les bases Hétérogènes." Phd thesis, Université d'Evry-Val d'Essonne, 2010. http://tel.archives-ouvertes.fr/tel-00526649.

Full text
Abstract:
Image recognition is a research field that has been widely studied by the scientific community. Work in this area mainly targets the various applications of computer vision systems and the categorization of images from multiple sources. In this thesis we are particularly interested in content-based image recognition systems over heterogeneous databases, whose images belong to different concepts and represent heterogeneous content. A broad description is therefore often required to ensure a reliable representation. However, the extracted features are not necessarily all appropriate for discriminating the different image classes present in a given database, hence the need to select the relevant features according to the content of each database. In this work, an original adaptive selection method is proposed, which retains only the features judged best suited to the content of the image database in use. Furthermore, the selected features do not generally perform equally well, so a classification algorithm that adapts to the discriminative power of the different selected features, relative to the content of the image database, is strongly recommended. In this context, the multiple kernel learning approach is studied and an improvement of kernel weighting methods is presented. Since this approach proves unable to describe the non-linear relations between the different description types, we propose a new hierarchical multi-model classification method that allows a more flexible combination of multiple features.
In our experiments, this new classification method achieves very interesting recognition rates. Finally, the performance of the proposed method is highlighted through a comparison with a set of approaches from the recent literature in the field.
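One classic heuristic for the kernel-weighting step in multiple kernel learning is centered kernel-target alignment. The sketch below illustrates that generic idea; it is not the improved weighting scheme proposed in the thesis:

```python
import numpy as np

def aligned_kernel_combination(kernels, y):
    """Weight each candidate kernel by its centered alignment with the
    ideal kernel yy^T, then return the convex combination."""
    n = len(y)
    H = np.eye(n) - np.ones((n, n)) / n                  # centering matrix
    Yc = H @ np.outer(y, y).astype(float) @ H            # centered target kernel
    weights = []
    for K in kernels:
        Kc = H @ K @ H
        a = (Kc * Yc).sum() / (np.linalg.norm(Kc) * np.linalg.norm(Yc) + 1e-12)
        weights.append(max(a, 0.0))                      # clip negative alignments
    weights = np.array(weights) / sum(weights)
    return weights, sum(w * K for w, K in zip(weights, kernels))

y = np.array([1, 1, -1, -1])
K_good = np.outer(y, y).astype(float)    # kernel perfectly aligned with labels
K_noise = np.eye(4)                      # uninformative kernel
w, K_comb = aligned_kernel_combination([K_good, K_noise], y)
```

The combined kernel `K_comb` can then be handed to any kernel classifier; the hierarchical multi-model method of the thesis goes further by combining features non-linearly.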
APA, Harvard, Vancouver, ISO, and other styles
32

Ferreira, Leite Alessandro. "A user-centered and autonomic multi-cloud architecture for high performance computing applications." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112355/document.

Full text
Abstract:
Cloud computing has been seen as an option for executing high performance computing (HPC) applications. While traditional HPC platforms such as grids and supercomputers offer a stable environment in terms of failures, performance, and number of resources, cloud computing offers on-demand resources, generally with unpredictable performance, at low financial cost; furthermore, in a cloud environment, failures are part of normal operation. To overcome the limits of a single cloud, clouds can be combined into a cloud federation, often at minimal additional cost to the users. A cloud federation can help both providers and users achieve goals such as reducing execution time, minimizing cost, increasing availability, and reducing power consumption. Hence, federation can be an elegant way to avoid over-provisioning, reducing operational costs under average load and removing resources that would otherwise remain idle and waste power. However, federation widens the range of available resources, so cloud or system administration skills may be demanded of the users, along with considerable time to learn the available options. In this context, several questions arise: (a) which cloud resource is appropriate for a given application? (b) how can users execute their HPC applications with acceptable performance and financial cost, without re-engineering the applications to fit the clouds' constraints? (c) how can non-specialists in cloud computing maximize the features of the clouds without being tied to one provider? and (d) how can cloud providers use the federation to reduce power consumption while still offering service-level agreement (SLA) guarantees?
Motivated by these questions, this thesis presents an SLA-aware application consolidation solution for cloud federations. Using a multi-agent system (MAS) to negotiate virtual machine (VM) migrations between clouds, simulation results show that our approach can reduce power consumption by up to 46% while trying to meet performance requirements. Using the federation, we developed and evaluated an approach that executed a huge bioinformatics application at zero cost and, moreover, decreased its execution time by 22.55% over the best single-cloud execution. In addition, this thesis presents a cloud architecture called Excalibur for auto-scaling cloud-unaware applications. Running a genomics workflow, Excalibur seamlessly scaled the applications up to 11 virtual machines, reducing execution time by 63% and cost by 84% compared with the user's own configuration. Finally, this thesis presents a product line engineering (PLE) process to handle the variability of infrastructure-as-a-service (IaaS) clouds, and an autonomic multi-cloud architecture that uses this process to configure resources and deal with failures autonomously. The PLE process uses an extended feature model (EFM) with attributes to describe the resources and to select them based on the users' objectives. Experiments with two different cloud providers show that, using the proposed model, users can execute their applications in a cloud federation without needing to know the clouds' variability and constraints.
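The resource-selection role of an extended feature model with attributes can be sketched as constraint filtering plus objective optimization over attributed features. Instance names, sizes, and prices below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class InstanceFeature:
    """One leaf of an attributed feature model describing an IaaS offer."""
    name: str
    vcpus: int
    memory_gb: float
    hourly_cost: float

def select_instance(catalog, min_vcpus, min_memory_gb):
    """Pick the cheapest instance feature satisfying the user's constraints,
    mirroring selection from an extended feature model by user objectives."""
    feasible = [f for f in catalog
                if f.vcpus >= min_vcpus and f.memory_gb >= min_memory_gb]
    if not feasible:
        raise ValueError("no instance satisfies the constraints")
    return min(feasible, key=lambda f: f.hourly_cost)

catalog = [
    InstanceFeature("small",  2,  4.0, 0.05),
    InstanceFeature("medium", 4,  8.0, 0.10),
    InstanceFeature("large",  8, 32.0, 0.40),
]
best = select_instance(catalog, min_vcpus=4, min_memory_gb=8)
```

In the thesis's architecture this selection is performed autonomically across federated providers, with cross-tree constraints between features rather than a flat catalog.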
APA, Harvard, Vancouver, ISO, and other styles
33

Fang, Chao-Chi, and 方肇基. "Speech Recognition by Dynamic Adaptation of Speaker Feature." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/75694878163906105604.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Department of Information and Computer Engineering
Academic year 94 (ROC calendar)
This research uses Mel-frequency cepstral coefficients (MFCCs) as the characteristic parameters at the speech-signal level, and uses discrete hidden Markov models to construct speaker-dependent monosyllable models. After a sufficient number of speaker models has been accumulated, a model for a non-specific speaker can be generated dynamically by adapting to that speaker's characteristics, and syllable segmentation and syllable-model recognition are then performed with the One-Stage dynamic programming algorithm. To adapt to a speaker's pronunciation characteristics, general model parameters and transformation matrices are obtained with maximum likelihood linear regression during training, and each speaker's specific model and model skew vector are recorded. When recognizing the speech of a non-specific speaker, the same method is used to obtain the model skew vector, and the speaker's feature vector is then recovered by the method of least squares. Compared with previous speaker adaptation methods, the dynamic speaker-characteristic adaptation method proposed here achieves higher computational efficiency and is well suited to real-time recognition. The implementation also exploits the time-sharing property of the operating system to perform feature extraction, analysis, optimal model construction, and recognition simultaneously while the user is speaking. The method achieves a recognition rate of up to 80.16% for continuous monosyllabic speech, and 92.13% when a keyword model is further applied.
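The least-squares step mentioned in the abstract, recovering a new speaker's weights over reference speakers from an observed skew vector, might look like the following sketch. The matrix names and dimensions are hypothetical, not taken from the thesis:

```python
import numpy as np

# Suppose each reference speaker i contributes a model-skew vector b_i,
# stacked as the columns of B, and a new speaker's observed skew vector is s.
# A least-squares fit expresses the new speaker as a weighted combination
# of the reference speakers.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # 3-dim skew vectors of 2 reference speakers
s = 0.3 * B[:, 0] + 0.7 * B[:, 1]   # new speaker lies in their span
w, *_ = np.linalg.lstsq(B, s, rcond=None)
```

The weights `w` would then be used to synthesize a speaker model for the new speaker from the stored reference models.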
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Chia-Hsin, and 陳佳欣. "Mandarin Syllable Recognition Based on Landmark Knowledge and Dynamic Feature Adaptation." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/35990480683718161884.

Full text
Abstract:
Master's thesis
National Taiwan University
Department of Electrical Engineering
Academic year 85 (ROC calendar)
In speech recognition research there are two main approaches: the knowledge-based approach and the statistical approach. In this thesis, landmark knowledge is incorporated into a statistics-based system to enhance its performance. In the first part of the thesis, a landmark detection algorithm is developed; experiments show a detection rate higher than 88% for the three test speakers. In the second part, landmark knowledge is used to improve the initial modeling. Where the conventional method uses uniform segmentation for the initial segmentation, this thesis uses landmark knowledge to find the initial-final boundary and then applies uniform segmentation to the initial and the final separately. Experiments show that accuracy improves from 63.79% to 65.72% for isolated syllables and from 67.46% to 68.34% for continuous speech in speaker-independent recognition. In the last part of the thesis, dynamic feature weighting is incorporated into the model parameters and GPD is used to adapt these parameters. For speaker-independent recognition, accuracy improves from 68.55% to 70.53% when the dynamic feature weights are adapted, and further to 75.37% when the mixture gains, means, and variances are adapted at the same time.
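The landmark-refined initial segmentation described in the second part can be sketched as uniform segmentation applied separately on each side of the detected initial-final boundary (state counts and frame counts below are made up):

```python
def initial_segmentation(n_frames, boundary, n_initial_states, n_final_states):
    """Uniform state segmentation of a syllable, split first at the detected
    initial/final landmark.  Returns one state label per frame."""
    labels = []
    for start, end, n_states, offset in [
            (0, boundary, n_initial_states, 0),
            (boundary, n_frames, n_final_states, n_initial_states)]:
        length = end - start
        for i in range(length):
            labels.append(offset + min(i * n_states // length, n_states - 1))
    return labels

# 10-frame syllable, landmark at frame 4, 2 initial states and 3 final states
print(initial_segmentation(10, 4, 2, 3))  # -> [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
```

Conventional uniform segmentation would instead spread all five states evenly across the ten frames, ignoring the landmark.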
APA, Harvard, Vancouver, ISO, and other styles
35

Hung, Ying-Ning, and 洪穎寧. "A Domain Adaptation Method Based on Feature-disentangling Generative Adversarial Networks." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/yft79s.

Full text
Abstract:
Master's thesis
National Taiwan Ocean University
Department of Computer Science and Engineering
107
This thesis focuses on the domain adaptation effects of a specific feature-disentangling generative adversarial network, in which the characteristics of handwritten digit images are automatically disentangled into class and style features. The class features consist primarily of the invariant information relevant to distinguishing the ten digits, while the style features contain the remaining information common to all images, such as color, font, thickness and angle. The training algorithm extracts the class-relevant feature vectors directly from training data carrying only class labels, through adversarial learning strategies. Two handwritten digit datasets in quite different image domains, one in gray level, which serves as the main training data, and the other in random color, are used to demonstrate the effects of domain adaptation. We show that the disentangling networks help a classifier acquire generalization ability: it is trained mainly on the first dataset yet still recognizes the untrained classes of digits in the second dataset.
APA, Harvard, Vancouver, ISO, and other styles
36

Wu, Chun Hsin, and 吳俊欣. "A Speaker Adaptation Method Based on MFCC Feature Space Coordinate System Mapping." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/26805190317076600956.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
91
In this thesis, a speaker adaptation method is developed. The method needs only a small quantity of training utterances because the adaptation mechanism operates at the level of MFCC feature parameters. First, an individual coordinate system is built for each new speaker so that the speaker's MFCC feature vectors can be decomposed into coordinate coefficients of that system. These coefficients are then directly mapped to coefficients of the coordinate system of a target person. Even though this mechanism is simple, it obtains good adaptation performance. To verify the performance of our adaptation method, we executed several recognition experiments under different conditions, covering different kinds of vocabularies: a single-vowel vocabulary, a multi-vowel vocabulary, a nasal-containing syllable vocabulary and a dissyllabic-word vocabulary. In the speaker non-adapted mode, the original recognition error rates are 30.3%, 20.7%, 38.3% and 21.3%, respectively; in the speaker-adapted mode, the error rates are reduced to 3.3%, 9.8%, 22.5% and 12.3%, respectively.
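A rough sketch of the coordinate-system idea above, assuming a PCA-style basis per speaker; the thesis does not specify this construction, so the basis choice and all names here are hypothetical.

```python
import numpy as np

def build_coordinate_system(mfcc_frames, k):
    """Per-speaker coordinate system: the mean vector plus the top-k
    principal axes of the speaker's MFCC frames (a hypothetical basis)."""
    mean = mfcc_frames.mean(axis=0)
    _, _, vt = np.linalg.svd(mfcc_frames - mean, full_matrices=False)
    return mean, vt[:k]          # shapes (dim,), (k, dim)

def map_vector(x, src, dst):
    """Decompose x into coefficients of the source speaker's system and
    re-synthesize it with the same coefficients in the target system."""
    src_mean, src_axes = src
    dst_mean, dst_axes = dst
    coeffs = src_axes @ (x - src_mean)
    return dst_mean + dst_axes.T @ coeffs

# sanity check: mapping a vector onto its own full-rank system returns it
rng = np.random.default_rng(1)
frames = rng.normal(size=(50, 4))
sys_a = build_coordinate_system(frames, k=4)
x = frames[0]
x_back = map_vector(x, sys_a, sys_a)
```

In actual adaptation the source and target systems would come from different speakers, so the mapping shifts a frame toward the target speaker's feature space rather than reproducing it.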
APA, Harvard, Vancouver, ISO, and other styles
37

Mazloom, Reza. "Classification of Twitter disaster data using a hybrid feature-instance adaptation approach." Thesis, 2018. http://hdl.handle.net/2097/38872.

Full text
Abstract:
Master of Science
Department of Computer Science
Doina Caragea
Huge amounts of data generated on social media during emergency situations are regarded as troves of critical information. The use of supervised machine learning techniques in the early stages of a disaster is challenged by the lack of labeled data for that particular disaster. Furthermore, supervised models trained on labeled data from a prior disaster may not produce accurate results. To address these challenges, domain adaptation approaches can be used, which learn models for predicting the target by using unlabeled data from the target disaster in addition to labeled data from prior source disasters. However, the resulting models can still be affected by the variance between the target domain and the source domain. In this context, we propose a hybrid feature-instance adaptation approach based on matrix factorization and the k-nearest neighbors algorithm, respectively. The proposed hybrid adaptation approach is used to select a subset of the source disaster data that is representative of the target disaster. The selected subset is subsequently used to learn accurate supervised or domain adaptation Naïve Bayes classifiers for the target disaster. In other words, this study focuses on transforming the existing source data to bring it closer to the target data, thus overcoming the domain variance that may prevent effective transfer of information from source to target. A combination of selective and transformative methods is used on instances and features, respectively. We show experimentally that the proposed approaches are effective in transferring information from source to target. Furthermore, we provide insights with respect to which types and combinations of selections/transformations result in more accurate models for the target.
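The instance-selection half of the hybrid approach can be sketched as follows. This numpy-only illustration scores each source instance by the distance to its k nearest target instances and keeps the closest fraction; the matrix-factorization feature step is omitted, and the scoring rule, ratio and names are assumptions rather than the thesis's method.

```python
import numpy as np

def select_source_subset(source_X, target_X, k=5, keep_ratio=0.5):
    """Return indices of the source examples closest (by mean distance to
    their k nearest target examples) to the target domain."""
    # pairwise Euclidean distances, shape (n_source, n_target)
    d = np.linalg.norm(source_X[:, None, :] - target_X[None, :, :], axis=2)
    knn_dist = np.sort(d, axis=1)[:, :k].mean(axis=1)
    n_keep = int(len(source_X) * keep_ratio)
    return np.argsort(knn_dist)[:n_keep]

# toy data: one source cluster overlaps the target, the other is far away
rng = np.random.default_rng(2)
near = rng.normal(0.0, 1.0, size=(50, 8))
far = rng.normal(6.0, 1.0, size=(50, 8))
source = np.vstack([near, far])
target = rng.normal(0.0, 1.0, size=(40, 8))
keep = select_source_subset(source, target, k=5, keep_ratio=0.5)
```

The kept subset would then be handed to the downstream classifier in place of the full source data.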
APA, Harvard, Vancouver, ISO, and other styles
38

Chien, Yu-Lin, and 簡友琳. "Machine Learning Based Rate Adaptation with Elastic Feature Selection for HTTP-Based Streaming." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/59932945918614891709.

Full text
Abstract:
碩士
國立臺灣大學
電機工程學研究所
103
Dynamic Adaptive Streaming over HTTP (DASH) has become an emerging application nowadays. Video rate adaptation is key to determining the video quality of HTTP-based media streaming. Recent works have proposed several algorithms that allow a DASH client to adapt its video encoding rate to network dynamics. While network conditions are typically affected by many different factors, these algorithms usually consider only a few representative pieces of information, e.g., the predicted available bandwidth or the fullness of the playback buffer. In addition, errors in bandwidth estimation can significantly degrade their performance. Therefore, this paper presents Machine-Learning-based Adaptive Streaming over HTTP (MLASH), an elastic framework that exploits a wide range of useful network-related features to train a rate classification model. The distinct properties of MLASH are that its machine-learning-based framework can be incorporated with any existing adaptation algorithm and can utilize big-data characteristics to improve prediction accuracy. We show via trace-based simulations that machine-learning-based adaptation can achieve better performance than traditional adaptation algorithms in terms of their target quality of experience (QoE) metrics.
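A toy illustration of training a rate classification model on network-related features: a nearest-centroid classifier stands in for whatever model MLASH actually uses, and the feature set, data and rate labels are invented for the example.

```python
import numpy as np

class RateClassifier:
    """Toy rate-selection model: nearest centroid over network features."""
    def fit(self, X, y):
        self.labels = np.unique(y)
        self.centroids = np.array([X[y == c].mean(axis=0) for c in self.labels])
        return self

    def predict(self, X):
        # distance from each sample to each class centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids[None, :, :], axis=2)
        return self.labels[d.argmin(axis=1)]

# features: [estimated bandwidth (Mbps), buffer fullness (s)]; labels: video rate
X = np.array([[0.5, 2.0], [0.8, 3.0], [4.0, 10.0], [5.0, 12.0]])
y = np.array([240, 240, 1080, 1080])
model = RateClassifier().fit(X, y)
pred = model.predict(np.array([[0.6, 2.5], [4.5, 11.0]]))
```

In the framework described above, such a classifier would be trained on many more features and traces, and its prediction fed back into an existing adaptation algorithm.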
APA, Harvard, Vancouver, ISO, and other styles
39

Prabuchandran, K. J. "Feature Adaptation Algorithms for Reinforcement Learning with Applications to Wireless Sensor Networks And Road Traffic Control." Thesis, 2016. http://etd.iisc.ernet.in/handle/2005/2664.

Full text
Abstract:
Many sequential decision making problems under uncertainty arising in engineering, science and economics are often modelled as Markov Decision Processes (MDPs). In the setting of MDPs, the goal is to find a state-dependent optimal sequence of actions that minimizes a certain long-term performance criterion. The standard dynamic programming approach to solve an MDP for the optimal decisions requires a complete model of the MDP and is computationally feasible only for MDPs with small state-action spaces. Reinforcement learning (RL) methods, on the other hand, are model-free simulation-based approaches for solving MDPs. In many real-world applications, one is often faced with MDPs that have large state-action spaces, whose model is unknown but whose outcomes can be simulated. In order to solve such (large) MDPs, one either resorts to the technique of function approximation in conjunction with RL methods or develops application-specific RL methods. A solution based on RL methods with function approximation comes with the associated problem of choosing the right features for approximation, and a solution based on application-specific RL methods primarily relies on utilizing the problem structure. In this thesis, we investigate the problem of choosing the right features for RL methods based on function approximation, and develop novel RL algorithms that adaptively obtain the best features for approximation. Subsequently, we also develop problem-specific RL methods for applications arising in the areas of wireless sensor networks and road traffic control. In the first part of the thesis, we consider the problem of finding the best features for value function approximation in reinforcement learning for the long-run discounted cost objective. We quantify the error in the approximation for any given feature and approximation parameter by the mean square Bellman error (MSBE) objective and develop an online algorithm to optimize MSBE.
Subsequently, we propose the first online actor-critic scheme with adaptive bases to find a locally optimal (control) policy for an MDP under the weighted discounted cost objective. The actor performs gradient search in the space of policy parameters using simultaneous perturbation stochastic approximation (SPSA) gradient estimates. This gradient computation however requires estimates of the value function of the policy. The value function is approximated using a linear architecture and its estimate is obtained from the critic. The error in approximation of the value function, however, results in sub-optimal policies. Thus, we obtain the best features by performing a gradient descent on the Grassmannian of features to minimize a MSBE objective. We provide a proof of convergence of our control algorithm to a locally optimal policy and show numerical results illustrating the performance of our algorithm. In our next work, we develop an online actor-critic control algorithm with adaptive feature tuning for MDPs under the long-run average cost objective. In this setting, a gradient search in the policy parameters is performed using policy gradient estimates to improve the performance of the actor. The computation of the aforementioned gradient however requires estimates of the differential value function of the policy. In order to obtain good estimates of the differential value function, the critic adaptively tunes the features to obtain the best representation of the value function using gradient search in the Grassmannian of features. We prove that our actor-critic algorithm converges to a locally optimal policy. Experiments on two different MDP settings show performance improvements resulting from our feature adaptation scheme. In the second part of the thesis, we develop problem specific RL solution methods for the two aforementioned applications. In both the applications, the size of the state-action space in the formulated MDPs is large. 
However, by utilizing the problem structure we develop scalable RL algorithms. In the wireless sensor networks application, we develop RL algorithms to find optimal energy management policies (EMPs) for energy harvesting (EH) sensor nodes. First, we consider the case of a single EH sensor node and formulate the problem of finding an optimal EMP in the discounted cost MDP setting. We then propose two RL algorithms to maximize network performance. Through simulations, our algorithms are seen to outperform the algorithms in the literature. Our RL algorithms for the single EH sensor node do not scale when there are multiple sensor nodes. In our second work, we consider the problem of finding optimal energy sharing policies that maximize the network performance of a system comprising multiple sensor nodes and a single energy harvesting (EH) source. We develop efficient energy sharing algorithms, namely a Q-learning algorithm with exploration mechanisms based on the ε-greedy method as well as the upper confidence bound (UCB). We extend these algorithms by incorporating state and action space aggregation to tackle state-action space explosion in the MDP. We also develop a cross-entropy based method that incorporates policy parameterization in order to find near-optimal energy sharing policies. Through numerical experiments, we show that our algorithms yield energy sharing policies that outperform the heuristic greedy method. In the context of road traffic control, optimal control of traffic lights at junctions, or traffic signal control (TSC), is essential for reducing the average delay experienced by road users. This problem is hard to solve when simultaneously considering all the junctions in the road network. So, we propose a decentralized multi-agent reinforcement learning (MARL) algorithm for solving this problem by considering each junction in the road network as a separate agent (controller) to obtain dynamic TSC policies.
We propose two approaches to minimize the average delay. In the first approach, each agent decides the signal duration of its phases in a round-robin (RR) manner using the multi-agent Q-learning algorithm. We show through simulations over VISSIM (microscopic traffic simulator) that our round-robin MARL algorithms perform significantly better than both the standard fixed signal timing (FST) algorithm and the saturation balancing (SAT) algorithm over two real road networks. In the second approach, instead of optimizing green light duration, each agent optimizes the order of the phase sequence. We then employ our MARL algorithms by suitably changing the state-action space and cost structure of the MDP. We show through simulations over VISSIM that our non-round robin MARL algorithms perform significantly better than the FST, SAT and the round-robin MARL algorithms based on the first approach. However, on the other hand, our round-robin MARL algorithms are more practically viable as they conform with the psychology of road users.
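The tabular Q-learning with ε-greedy exploration mentioned in the abstract above can be sketched on a toy chain MDP; the MDP, hyperparameters and all names here are illustrative, not those of the thesis.

```python
import random

def q_learning(step, n_states, n_actions, episodes=500, alpha=0.5,
               gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration.
    `step(s, a)` returns (next_state, reward, done)."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:                      # explore
                a = rng.randrange(n_actions)
            else:                                       # exploit
                a = max(range(n_actions), key=lambda a: Q[s][a])
            s2, r, done = step(s, a)
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# toy 3-state chain: action 1 advances toward a terminal reward at state 2
def step(s, a):
    if a == 1:
        return (s + 1, 1.0 if s + 1 == 2 else 0.0, s + 1 == 2)
    return (s, 0.0, False)

Q = q_learning(step, n_states=3, n_actions=2)
```

After training, the greedy policy at every non-terminal state prefers the advancing action, with values discounted by distance from the reward.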
APA, Harvard, Vancouver, ISO, and other styles
40

Peng, Xingchao. "Domain adaptive learning with disentangled features." Thesis, 2020. https://hdl.handle.net/2144/42065.

Full text
Abstract:
Recognizing visual information is crucial for many real artificial-intelligence-based applications, ranging from domestic robots to autonomous vehicles. However, the success of deep learning methods on visual recognition tasks is highly dependent on access to large-scale labeled datasets, which are expensive and cumbersome to collect. Transfer learning alleviates the burden of annotating data by transferring the knowledge learned from a rich-labeled source domain to a scarce-labeled target domain. However, the performance of deep learning models degrades significantly when tested on novel domains due to the presence of domain shift. To tackle the domain shift, conventional domain adaptation methods diminish the shift between two domains with a distribution matching loss or an adversarial loss. These models align the domain-specific and the domain-invariant feature distributions simultaneously, which is sub-optimal for deep domain adaptation tasks, given that deep neural networks are known to extract features in which multiple hidden factors are highly entangled. This thesis explores how to learn effective transferable features by disentangling the deep features. The following questions are studied: (1) how to disentangle the deep features into domain-invariant and domain-specific features? (2) how would feature disentanglement help to learn transferable features under a synthetic-to-real domain adaptation scenario? (3) how would feature disentanglement facilitate transfer learning with multiple source or target domains? (4) how to leverage feature disentanglement to boost performance in a federated system? To address these needs, this thesis proposes deep adversarial feature disentanglement: a class/domain identifier is trained on the labeled source domain and the disentangler generates features to fool the class/domain identifier.
Extensive experiments and empirical analysis demonstrate the effectiveness of the feature disentanglement method on many real-world domain adaptation tasks. Specifically, the following three unsupervised domain adaptation scenarios are explored: (1) domain agnostic learning with disentangled representations, (2) unsupervised federated domain adaptation, (3) multi-source domain adaptation.
APA, Harvard, Vancouver, ISO, and other styles
41

Nemri, Abdellatif. "Codage de l’information visuelle par la plasticité et la synchronisation des réponses neuronales dans le cortex visuel primaire du chat." Thèse, 2010. http://hdl.handle.net/1866/4756.

Full text
Abstract:
Sensory systems encode information about our environment into electrical impulses that propagate in networks of neurons. Understanding the neural code – the principles by which information is represented in neuronal activity – is one of the most fundamental issues in neuroscience. This thesis investigates in a series of 3 studies (S) two coding mechanisms, synchrony and adaptation, in neurons of the cat primary visual cortex (V1). In V1, neurons display selectivity for image features such as contour orientation, motion direction and velocity. Each neuron has at least one combination of features that elicits its maximum firing rate. Visual information is thus distributed among numerous neurons within and across cortical columns, modules and areas. Synchronized electrical activity between cells was proposed as a potential mechanism underlying the binding of related features to form coherent perception. However, the precise nature of the relations between image features that may elicit neuronal synchrony remains unclear (S1). In another coding strategy, sensory neurons display transient changes of their response properties following prolonged exposure to an appropriate stimulus (adaptation). In adult cat V1, orientation-selective neurons shift their preferred orientation after being exposed to a non-preferred orientation. How the adaptive behavior of a neuron is related to that of its neighbors remains unclear (S2). Finally, we investigated the relationship between synchrony and orientation tuning in neuron pairs, especially how synchrony is modulated during adaptation-induced plasticity (S3). Main results — (S1) We show that two stimuli in either convergent or divergent motion elicit significantly more synchrony in V1 neuron pairs than two stimuli with the same motion direction. 
Synchronization seems to encode the relation of cocircularity, of which convergent (centripetal) and divergent (centrifugal) motion are two special instances, and could thus play a role in contour integration. Our results suggest that V1 neuron pairs transmit specific information on distinct image configurations through stimulus-dependent synchrony of their action potentials. (S2) We show that after being adapted to a non-preferred orientation, cells shift their preferred orientation in the same direction as their neighbors in most cases (75%). Several response properties of V1 neurons depend on their location within the cortical orientation map. The differences we found between cell clusters that shift in the same direction and cell clusters with both attractive and repulsive shifts suggest a different cortical location, iso-orientation domains for the former and pinwheel centers for the latter. (S3) We found that after adaptation, neuron pairs that share closer tuning properties display a significant increase of synchronization. Recovery from adaptation is accompanied by a return to the initial synchrony level. Synchrony therefore seems to reflect the similarity in neurons’ response properties, and varies accordingly when these properties change. Conclusions — This thesis further advances our understanding of how visual neurons adapt to a changing environment, especially regarding cortical network dynamics. We also propose novel data about the potential role of synchrony. Especially, synchrony appears capable of binding various features, whether similar or dissimilar, suggesting superimposed neural assemblies.
APA, Harvard, Vancouver, ISO, and other styles
42

Hu, Shu-E., and 胡淑娥. "The relationship of cognitive features, coping, and adaptation in rheumatoid arthritis patients." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/vpx4m7.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Psychology
93
Drawing on the concepts of self-regulation and positive psychology, this study aimed to establish a hypothetical model of the coping and adaptation of rheumatoid arthritis patients, and to examine the relations among the four groups of variables in this model: expectation discrepancy, cognitive expectancy, coping types and adaptational outcomes. The study was divided into three stages. First, through analysis of semi-structured interviews with rheumatoid arthritis patients, the researcher compiled two questionnaires, the Expectation Discrepancy Questionnaire and the Coping Questionnaire. Second, a preliminary study was conducted to revise and validate these two questionnaires. Third, a final study was carried out with 171 rheumatoid arthritis patients, 21 males and 150 females. Five measures, including the Expectation Discrepancy Questionnaire, the Expectancy Appraisal Questionnaire, the Coping Questionnaire, the Rheumatoid Arthritis Quality of Life Questionnaire and the Positive-Negative Affect Schedule, were used to test the hypotheses. As the proposed model predicted that the coping types could have mediating effects via various paths, the results show that the giving-up coping type mediated the prediction of quality of life from certainty expectancy and from cognitive discrepancies in finance, dignity, career and interpersonal relationships, and also when certainty expectancy and cognitive discrepancies in finance and dignity predicted quality of life. Additionally, the proactive coping type mediated the prediction of positive affect from positive expectation, and discrepancy in dignity appeared to mediate the prediction of giving-up coping from certainty expectancy. Furthermore, cognitive expectancy appeared to mediate the prediction of goal adjustment, proactive coping, quality of life and negative affect from the cognitive discrepancy variables. Whereas most previous studies focused on general stress, this study specifically considered the chronic characteristics of rheumatoid arthritis and incorporated variables such as cognitive features and proactive coping into the regulatory model of coping and adaptation. The results demonstrate that coping types and cognitive expectancy play important mediating and protective roles for rheumatoid arthritis patients, even though patients may also be aware of their own cognitive discrepancies. It appears that appropriate cognitive and behavioral interventions provided by health psychology professionals could help rheumatoid arthritis patients achieve better adaptation.
APA, Harvard, Vancouver, ISO, and other styles
43

Barrowman, Hannah M. "Assessing emerging governance features for community-based adaptation in Timor-Leste and Indonesia: What works and why?" Phd thesis, 2018. http://hdl.handle.net/1885/149421.

Full text
Abstract:
Community-based adaptation (CBA) aims to address local vulnerabilities and build the adaptive capacities of rural, remote and/or poor communities to plan for, and cope with, the impacts of future climate changes. Despite its growing popularity in international development, recent experience indicates that CBA interventions are rarely sustained past their project life cycle and often fail to reduce vulnerability. To address these challenges, calls have been made to embed CBA interventions into governance systems that engage stakeholder groups and institutions operating across multiple governance domains and scales, promote local accountability and are supported by leaders or institutional entrepreneurs. However, there is limited understanding as to how and to what extent such features of ‘emerging governance’ can foster more sustained and durable CBA in low-income nations. This thesis addresses this gap. To do so, this thesis assesses the features of governance associated with two CBA programmes implemented in rural and remote communities in Timor-Leste and Indonesia: the Mudansa Klimatica iha Ambiente Seguru (MAKA’AS) programme and the Climate Change Adaptation Project (CCAP). Specifically, the thesis focuses on the roles of networks and multi-scale interactions; accountability in local-level institutions and institutional entrepreneurship in generating more durable CBA in rural areas of low-income nations. Complex systems thinking is used to frame the analytical approach of this thesis and data is collected and assessed through a mixed-methods approach which combines social network analysis with more qualitative research forms of investigation. Three main conclusions emerge from the thesis. First, local participation and ownership of CBA interventions are critical for their durability as local participation helps to ensure activities are locally relevant and legitimate. 
Second, local participation is enhanced through greater stakeholder diversity and by engagement with private sector groups. Private sector groups are found to play a particularly important role in CBA governance by providing access to funds and resources needed to maintain CBA interventions and support behaviour change. The presence of the private sector is particularly important in areas where local government remains weak or lacks the will or mandate to participate in CBA interventions. To this end, emerging governance approaches can be effective in building more durable CBA because of their emphasis on engaging private and public sector actors and on building the general adaptive capacities of all of those involved in governance. However, numerous contextual challenges and the conceptual underpinnings of emerging governance make such approaches highly challenging in low-income nations like Timor-Leste and Indonesia. More specifically, this thesis finds that emerging governance approaches may be unrealistic in the context of low-income nations as they require both a well-developed private sector and/or private actors that are willing to go beyond or even contradict their own self-interest. In addition, the social and political shifts taking place in rural communities, and the uncertainties of climate change itself, combine to make governance goals and objectives ambiguous. Accordingly, this thesis argues that calls to embed CBA into multi-scale governance systems should be treated with strong caution; linking CBA with emerging forms of governance is challenging and not a panacea that should be expected. This thesis therefore calls for a shift away from project-based interventions supported by multi-scale governance towards a focus on local decision-making and support for cognate institutional structures that are already supporting rural development efforts in low-income nations.
APA, Harvard, Vancouver, ISO, and other styles
44

Regel, Diemut. "Complex Dynamics Enabled by Basic Neural Features." Doctoral thesis, 2019. http://hdl.handle.net/21.11130/00-1735-0000-0005-13D9-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Van, Jaarsveld Ernst Jacobus. "Cremnophilous succulents of southern Africa : diversity, structure and adaptations." Thesis, 2012. http://hdl.handle.net/2263/25107.

Full text
Abstract:
The vertical cliff-face habitat is renowned for many specifically adapted plant species that exhibit a high degree of local endemism. Over a period of nine years the succulents and bulbous succulents on cliff faces in South Africa and Namibia were systematically surveyed and documented. Distinction was made between succulents growing on cliffs as part of a wider habitat and those found only on cliffs (obligate cremnophytes). Most major cliff-face habitats in the study area were visited and all plants were documented. A check list and descriptions (including adaptive traits) of the 220 obligate cremnophilous taxa are provided. During the study some 45 new cremnophilous succulent taxa were discovered and named, representing almost 20% of the total and proving that cliff habitats are some of the least studied environments, not only in southern Africa but globally. Among the newly described cremnophilous taxa is the genus Dewinteria (Pedaliaceae). Using stem length, three basic cliff-face growth forms are identified - compact or cluster-forming ‘cliff huggers’, cliff shrublets or ‘cliff squatters’ and pendent ‘cliff hangers’. Compact growth (often tight clusters or mats) is mainly associated with the winter-rainfall Succulent Karoo and Thicket regions, especially Namaqualand. However, further north the same compact growth forms are associated with an increase in altitude such as the Drakensberg Escarpment and other northern mountains. Most pendent growth forms are associated with the eastern and southeastern summer-rainfall regions; a number of smaller pendent shrublets occur on the high quartzitic sandstone mountains of the Western Cape. The degree of specialisation varies from highly adapted (smaller percentage) to less specialised (often eco-forms), and some taxa have no obvious adaptations. 
This study revealed a general increase in succulence in most obligate cremnophilous succulent species (compared to closely related species in other habitats), a reflection of their xeric habitat, and plants tend to be more compact. Also, there is a shift in reproductive output, including an increase in vegetative reproduction (backup), wind-dispersed seed and enriched flowering associated with certain species. Most obligate cremnophilous succulent plants in the study area have cliff-adapted features, ensuring long-term survival.
Thesis (PhD)--University of Pretoria, 2012.
Plant Science
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
46

彭祥恩. "A tracking system with features of fuzzy contour control and feed adaptation for hybrid motion platform." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/43549000155439127251.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Mechanical Engineering
Academic year 92 (ROC calendar)
In machining there is a continuous effort to achieve higher speed and higher precision. The purpose of this work is to improve the precision of the tracking scheme "multi-axis cross-coupled pre-compensation method (MCCPM)" by adopting two proven features. The tracking method is implemented on a hybrid motion platform that combines a Stewart platform with an X-Y table. The first feature adapts the feed rate to the curvature of the trajectory being tracked; the second is a fuzzy logic controller for dealing with contour errors. It has been shown that incorporating these two features into the MCCPM improves tracking precision.
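The first feature the abstract describes, adapting the feed rate to the local curvature of the tracked trajectory, can be sketched in a few lines. The three-point curvature estimate and the reciprocal feed law below are illustrative assumptions, not the controller actually used in the thesis:

```python
import math

def curvature(p0, p1, p2):
    """Approximate curvature at p1 from three consecutive 2-D path points
    using the circumscribed-circle formula k = 4*A / (a*b*c)."""
    a = math.dist(p1, p2)
    b = math.dist(p0, p2)
    c = math.dist(p0, p1)
    if a * b * c == 0:
        return 0.0
    # Twice the triangle area, via the cross product of the edge vectors.
    area2 = abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                - (p2[0] - p0[0]) * (p1[1] - p0[1]))
    return 2.0 * area2 / (a * b * c)

def adapted_feed(k, v_max=100.0, gain=50.0):
    """Reduce the commanded feed rate as path curvature grows,
    so the axes slow down through tight corners."""
    return v_max / (1.0 + gain * k)

# Straight segment: zero curvature, full feed.
k_line = curvature((0, 0), (1, 0), (2, 0))
# Tight arc (unit circle through the three points): curvature 1, reduced feed.
k_arc = curvature((0, 0), (1, 1), (2, 0))
```

The `gain` parameter sets how aggressively the feed drops with curvature; in practice it would be tuned against the machine's acceleration limits.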
APA, Harvard, Vancouver, ISO, and other styles
47

Lysenko, Oleksandra Yuriivna. "Social and Psychological Factors of First-Graders' Adaptation in the New Ukrainian School." Master's thesis, 2020. https://dspace.znu.edu.ua/jspui/handle/12345/1765.

Full text
Abstract:
Lysenko O. Yu. Social and Psychological Factors of First-Graders' Adaptation in the New Ukrainian School: master's qualification thesis, specialty 053 "Psychology" / supervisor O. M. Hredynarova. Zaporizhzhia: ZNU, 2020. 99 p.
The thesis comprises 99 pages, 10 tables, and 2 appendices; the reference list includes 52 sources. Object of study: the adaptation process of first-graders. The New Ukrainian School is the main reform of the Ministry of Education and Science, begun in recent years and planned for decades ahead. Its key goal is to create a school "in which learning is enjoyable and which gives pupils not only knowledge, as happens now, but also the ability to apply it in life." When a child enters the first grade, their life changes: everything becomes subordinate to learning, school affairs, and school itself. Adaptation is an important stage in the life of young pupils, because it is in this period that first-graders master and accept school norms, develop an interest in schooling, gain confidence in their own abilities, and grow more mature. The scientific novelty of the work is that, for the first time, it was established theoretically and empirically that a child's successful adaptation to school depends on the support of parents and teachers.
APA, Harvard, Vancouver, ISO, and other styles
48

Belmeha, Iryna Vasylivna. "The Influence of Accentuated Character Traits on the Emergence of Social and Psychological Maladaptation in NTU Servicemen." Master's thesis, 2020. https://dspace.znu.edu.ua/jspui/handle/12345/2133.

Full text
Abstract:
Belmeha I. V. The Influence of Accentuated Character Traits on the Emergence of Social and Psychological Maladaptation in NTU Servicemen: master's qualification thesis, specialty 053 "Psychology" / supervisor O. A. Lukasevych. Zaporizhzhia: ZNU, 2020. 102 p.
The thesis comprises 102 pages, 7 tables, 1 figure, and 5 appendices; the reference list includes 57 sources. Object of study: the social and psychological adaptation of servicemen. In all spheres of modern society, change demands of every individual an ever higher level of social and personal competence and the capacity for self-organisation in all areas of life. The adaptation of servicemen to the conditions of service has long been a subject of research, but particular interest in the topic stems from the specific nature and high social significance of activity aimed at protecting state interests and national security, which is why it continues to attract researchers from different scientific fields. The mental health of young men of pre-conscription and conscription age is one of the medical and social problems of manning the National Guard of Ukraine. At the present stage, important problems remain: the worsening adaptation of young soldiers during their first months of service in the National Guard and the declining readiness of young men for service. The scientific novelty of the work lies in identifying the factors behind the development of destructive character accentuations specifically in conscript servicemen of the NGU, and in refining ideas about the influence of character accentuations on activity under special conditions of life.
APA, Harvard, Vancouver, ISO, and other styles
49

Pinto, Pedro Miguel Sequeira. "Influence of Preparation Types in Tooth-Supported Restorations." Master's thesis, 2019. http://hdl.handle.net/10284/9034.

Full text
Abstract:
Oral rehabilitation with fixed prostheses is an increasingly common solution among dentists, and with it comes the need to prepare the tooth to receive the restoration. The choice of preparation type and the location of the finish line require a study of the conditions and possibilities of each case. It is therefore essential to understand, on the basis of scientific evidence, the situations in which each type of preparation applies, in order to achieve clinical success. To that end, knowledge of the influence of each preparation type is indispensable, namely on the periodontal parameters (plaque index, gingival index, probing depth, and bleeding on probing) and the consequent histological tissue response.
APA, Harvard, Vancouver, ISO, and other styles
