To view the other types of publications on this topic, follow this link: Acquisition modele.

Dissertations on the topic "Acquisition modele"

Browse the top 50 dissertations for your research on the topic "Acquisition modele".

Next to every entry in the bibliography you will find the option "Add to bibliography". Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read its online annotation, provided the relevant parameters are available in the work's metadata.

Browse dissertations from a wide range of disciplines and put together your bibliography correctly.

1

Bellecave, Isabelle. „Vers un modèle d'acquisition -sémantique et ou syntaxique- des prépositions chez l'enfant de 2 à 10 ans". Toulouse 2, 1993. http://www.theses.fr/1993TOU20072.

Annotation:
This thesis aims to establish the respective roles of different parameters in a model of the acquisition of several prepositions (à, de, dans, près de, sur and entre) by French-speaking children aged 2 to 10-11. First, it tests the validity of a dichotomy of prepositions according to their value (spatial or case-related). Second, it attempts to determine which acquisition model (perceptual-semantic and/or cognitive) is the most pertinent to explain the mastery of these elements. After a summary of the theories proposed on prepositions and on learning, it presents a protocol of four tasks: two comprehension tests and two production tests. The results show that the spatial or case-related value of prepositions is not a factor that facilitates learning, although children seem to have a certain awareness of it very early. Prepositions are mastered in comprehension before they are mastered in production. In comprehension, however, the results do not allow a categorical decision in favour of one model or the other. In production, things appear clearer: a model based on cognitive development seems pertinent to explain how children master these elements. They begin by producing the prepositions "près de" and "dans", which draw on notions of topological space (enclosure, proximity, etc.). With the emergence of the projective line around age 6, prepositions involving projective space appear. In conclusion, it is quite possible that the "truth" lies in a "mixed" model, integrating semantic-perceptual facts and cognitive concepts, without forgetting the part played by the strategies children use.
2

Rodet, Luc. „La représentation des objets en mémoire : rôle de l'historique d'apprentissage". Grenoble INPG, 1996. http://www.theses.fr/1996INPG0123.

Annotation:
Experimental results obtained with unfamiliar objects show that learning history is a key element of how objects are represented in memory. The human visual system appears to represent the same object differently depending on which objects were seen previously. The parts of an object would thus not be extracted by perceptual segmentation alone, but also through comparison with the objects already represented in memory. The proposed theory shows how this top-down process can drive object segmentation.
3

Zurcher, James. „Model-based knowledge acquisition using adaptive piecewise linear models“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0018/NQ46956.pdf.

4

Waegner, Nicholas Paul. „Stochastic models for language acquisition“. Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309214.

5

Buttery, P. J. „Computational models for first language acquisition“. Thesis, University of Cambridge, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.597195.

Annotation:
This work investigates a computational model of first language acquisition: the Categorial Grammar Learner, or CGL. The model builds on the work of Villavicencio, who created a parametric Categorial Grammar learner that organises its parameters into an inheritance hierarchy, and also on the work of Buszkowski and Kanazawa, who demonstrated the learnability of a k-valued Classic Categorial Grammar (which uses only the rules of function application) from strings. The CGL is able to learn a k-valued General Categorial Grammar (which uses the rules of function application, function composition and Generalised Weak Permutation). The novel concept of Sentence Objects (simple strings, augmented strings, unlabelled structures and functor-argument structures) is presented as potential points from which learning may commence. Augmented strings (strings augmented with some basic syntactic information) are suggested as a sensible input to the CGL as they are cognitively plausible objects and have greater information content than strings alone. Building on the work of Siskind, a method for constructing augmented strings from unordered logic forms is detailed, and it is suggested that augmented strings are simply a representation of the constraints placed on the space of possible parses by a string's associated semantic content. The CGL makes crucial use of a statistical Memory Module (constructed from a type memory and a word order memory) that is used both to constrain hypotheses and to handle data which is noisy or parametrically ambiguous. A consequence of the Memory Module is that the CGL learns in an incremental fashion. This echoes real child learning as documented in Brown's Stages of Language Development and as alluded to by an included corpus study of child speech. Furthermore, the CGL learns faster when initially presented with simpler linguistic data; a further corpus study of child-directed speech suggests that this echoes the input provided to children. The CGL is demonstrated to learn from real data. It is evaluated against previous parametric learners (the Triggering Learning Algorithm of Gibson and Wexler and the Structural Triggers Learner of Fodor and Sakas) and is found to be more efficient.
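To make the rule the abstract relies on concrete, here is a minimal, hypothetical Python sketch of function application over a k-valued categorial lexicon. The toy categories and lexicon are invented for illustration; the thesis's learner additionally uses function composition and Generalised Weak Permutation, which this sketch omits.

```python
from itertools import product

def apply_forward(left, right):
    """Forward application: X/Y combined with Y yields X."""
    if isinstance(left, tuple) and left[1] == '/' and left[2] == right:
        return left[0]
    return None

def apply_backward(left, right):
    """Backward application: Y combined with X\\Y yields X."""
    if isinstance(right, tuple) and right[1] == '\\' and right[2] == left:
        return right[0]
    return None

# A k-valued lexicon assigns at most k categories to each word (toy data).
LEXICON = {
    'Kim':    ['NP'],
    'sleeps': [('S', '\\', 'NP')],              # S\NP: takes an NP to its left
    'sees':   [(('S', '\\', 'NP'), '/', 'NP')]  # (S\NP)/NP: transitive verb
}

def derivable(cats):
    """All categories derivable from a category sequence by application."""
    if len(cats) == 1:
        return {cats[0]}
    out = set()
    for i in range(1, len(cats)):
        for l in derivable(cats[:i]):
            for r in derivable(cats[i:]):
                for c in (apply_forward(l, r), apply_backward(l, r)):
                    if c is not None:
                        out.add(c)
    return out

def is_sentence(words):
    """True if some lexical category assignment derives the category S."""
    return any('S' in derivable(assign)
               for assign in product(*(LEXICON[w] for w in words)))

print(is_sentence(['Kim', 'sees', 'Kim']))   # True
print(is_sentence(['Kim', 'sleeps']))        # True
print(is_sentence(['sleeps', 'Kim']))        # False
```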
6

Frank, Stella Christina. „Bayesian models of syntactic category acquisition“. Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/6693.

Annotation:
Discovering a word's part of speech is an essential step in acquiring the grammar of a language. In this thesis we examine a variety of computational Bayesian models that use linguistic input available to children, in the form of transcribed child directed speech, to learn part of speech categories. Part of speech categories are characterised by contextual (distributional/syntactic) and word-internal (morphological) similarity. In this thesis, we assume language learners will be aware of these types of cues, and investigate exactly how they can make use of them. Firstly, we enrich the context of a standard model (the Bayesian Hidden Markov Model) by adding sentence type to the wider distributional context. We show that children are exposed to a much more diverse set of sentence types than is evident in standard corpora used for NLP tasks, and previous work suggests that they are aware of the differences between sentence types as signalled by prosody and pragmatics. Sentence type affects local context distributions, and as such can be informative when relying on local context for categorisation. Adding sentence types to the model improves performance, depending on how it is integrated into our models. We discuss how to incorporate novel features into the model structure we use in a flexible manner, and present a second model type that learns to use sentence type as a distinguishing cue only when it is informative. Secondly, we add a model of morphological segmentation to the part of speech categorisation model, in order to model joint learning of syntactic categories and morphology. These two tasks are closely linked: categorising words into syntactic categories is aided by morphological information, and finding morphological patterns in words is aided by knowing the syntactic categories of those words. In our joint model, we find improved performance vis-à-vis single-task baselines, but the nature of the improvement depends on the morphological typology of the language being modelled. This is the first token-based joint model of unsupervised morphology and part of speech category learning of which we are aware.
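As an illustration of the sentence-type idea (not the thesis's actual unsupervised Bayesian model), the toy Python sketch below keeps tag-transition counts separately per sentence type; all tags and sentences are invented.

```python
from collections import defaultdict

# trans[sent_type][(t1, t2)] counts tag bigrams separately per sentence type.
trans = defaultdict(lambda: defaultdict(int))
emit = defaultdict(int)

def observe(tagged_sentence, sent_type):
    """Accumulate transition and emission counts from one tagged sentence."""
    tags = ['<s>'] + [t for _, t in tagged_sentence]
    for t1, t2 in zip(tags, tags[1:]):
        trans[sent_type][(t1, t2)] += 1
    for word, tag in tagged_sentence:
        emit[(tag, word)] += 1

observe([('you', 'PRO'), ('slept', 'V')], 'declarative')
observe([('did', 'AUX'), ('you', 'PRO'), ('sleep', 'V')], 'question')

# The two sentence types now induce different distributions out of the
# sentence-initial state, which is the extra signal the abstract describes.
print(dict(trans['declarative']))   # {('<s>', 'PRO'): 1, ('PRO', 'V'): 1}
print(dict(trans['question']))      # {('<s>', 'AUX'): 1, ...}
```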
7

Iiyama, Masaaki. „3D object model acquisition from silhouettes“. 京都大学 (Kyoto University), 2006. http://hdl.handle.net/2433/64946.

8

Rodriguez-Sanchez, I. „Matrix models of second language vocabulary acquisition". Thesis, Swansea University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.638702.

Annotation:
Most of the current research in L2 vocabulary acquisition has been too focused on what it is to learn a word, and has neglected how whole vocabularies grow or decline. In general, it is assumed that vocabulary gains and losses are incremental and follow a linear progression. This thesis postulates a model which considers several discrete stages of knowledge and accounts for the unstable nature of vocabulary knowledge, where words can change from one state to any other. Matrix algebra is a tool capable of operating with such a model and producing long-term forecasts of vocabulary size. Our experimental work describes the retention and the overall growth of the vocabulary of advanced learners of Spanish. These experiments show that forecasts of vocabulary size generated by the matrix model are far more accurate than those generated by a linear model. With data from two self-rating tasks containing a large number of words, completed within a given interval, we build matrices which generate forecasts of vocabulary knowledge. These forecasts correlate highly with the actual knowledge measured three and four months later. This methodology is tested with subjects from various groups, using words from different frequency bands and different measurement scales. In addition, we indicate ways of identifying matrices likely to generate inaccurate predictions. This methodology is considered a step towards the establishment of a model for L2 vocabulary acquisition.
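The matrix idea can be illustrated with a small sketch. The states, transition probabilities, and counts below are invented, not taken from the thesis; the point is only that repeated multiplication by a transition matrix yields non-linear long-term forecasts.

```python
import numpy as np

states = ['unknown', 'recognised', 'known']
# P[i, j] = probability that a word in state i is in state j one interval later.
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.60, 0.30],
              [0.02, 0.08, 0.90]])

v = np.array([5000.0, 2000.0, 3000.0])   # words currently in each state

for steps in (3, 4):
    forecast = v @ np.linalg.matrix_power(P, steps)
    print(steps, 'intervals:', dict(zip(states, forecast.round())))
```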
9

Pasiouras, Fotios. „Development of bank acquisition targets prediction models“. Thesis, Coventry University, 2005. http://curve.coventry.ac.uk/open/items/ecf1b00d-da92-9bd2-5b02-fa4fab8afb0c/1.

Annotation:
This thesis develops a range of prediction models for the purpose of predicting the acquisition of commercial banks in the European Union using publicly available data. Over the last thirty years, there have been approximately 30 studies that have attempted to identify potential acquisition targets, all of them focusing on non-bank sectors. We consider that prediction models developed specifically for the banking industry are essential due to the unusual structure of banks' financial statements, differences in the environment in which banks operate and other specific characteristics of banks that in general distinguish them from non-financial firms. We focus specifically on the EU banking sector, where M&A activity has been considerable in recent years, yet academic research relating to the EU has been rather limited compared to the case of the US. The methodology for developing prediction models involved identifying past cases of acquired banks and combining these with non-acquired banks in order to evaluate the prediction accuracy of various quantitative classification techniques. In this study, we construct a base sample of commercial banks covering 15 EU countries, and financial variables measuring capital strength, profit and cost efficiency, liquidity, growth, size and market power, with data in both raw and country-adjusted (i.e. raw variables divided by the average of the banking sector for the corresponding country) form. In order to allow for a proper comparative evaluation of classification methods, we select common subsets of the base sample and variables with high discriminatory power, dividing the sample period (1998-2002) into a training sub-sample for model development (1998-2000) and a holdout sub-sample for model evaluation (2001-2002). Although the results tend to support the findings of studies on non-financial firms, highlighting the difficulties in predicting acquisition targets, the prediction models we develop show classification accuracies generally higher than chance assignment based on prior probabilities. We also consider the use of equal and unequal matched holdout samples for evaluation, and find that overall classification accuracy tends to increase in the unequal matched samples, implying that equal matched samples do not necessarily overstate the prediction ability of models. The main goal of this study has been to compare and evaluate a variety of classification methods including statistical, econometric, machine learning and operational research techniques, as well as integrated techniques combining the predictions of individual classification methods. We found that some methods achieved very high accuracies in classifying non-acquired banks, but at the cost of relatively poor accuracy performance in classifying acquired banks. This suggests a trade-off in achieving high classification accuracy, although some methods (e.g. Discriminant) performed reasonably well in terms of achieving balanced overall classification accuracies of above-chance predictions. Integrated prediction models offer the advantage of counterbalancing relatively poor performance of some classification methods with good performance of others, but in doing so could not outperform all individual classification methods considered. In general, we found that the outcome of which method performed best depended largely on the group classification accuracy considered, as well as to some extent on the choice of the discriminatory variables.
Concerning the use of raw or country-adjusted data, we found no clear effect on the prediction ability of the classification methods.
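As a rough illustration of this evaluation set-up (with invented data, not the thesis's sample), the sketch below country-adjusts synthetic financial ratios, trains a discriminant classifier on the earlier years, and scores it on the holdout period.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 300
country = rng.integers(0, 15, n)          # 15 EU countries
raw = rng.normal(size=(n, 4))             # e.g. capital, profit, liquidity, size
acquired = rng.integers(0, 2, n)          # 1 = bank was later acquired (random toy labels)
year = rng.integers(1998, 2003, n)        # 1998-2002 sample period

# Country adjustment: divide each ratio by a national sector average
# (absolute mean used here only to keep the synthetic division stable).
adj = raw.copy()
for c in range(15):
    mask = country == c
    if mask.any():
        adj[mask] /= np.abs(adj[mask]).mean(axis=0)

train, hold = year <= 2000, year >= 2001  # training vs holdout split
clf = LinearDiscriminantAnalysis().fit(adj[train], acquired[train])
print('holdout accuracy:', clf.score(adj[hold], acquired[hold]))
```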
10

Neo, Say Beng. „On some Markovian Salvo combat models“. Thesis, Monterey, Calif. : Naval Postgraduate School, 2008. http://edocs.nps.edu/npspubs/scholarly/theses/2008/Dec/08Dec%5FNeo.pdf.

Annotation:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, December 2008.
Thesis Advisor(s): Kress, Moshe; Szechtman, Roberto. "December 2008." Description based on title screen as viewed on January 28, 2009. Includes bibliographical references (p. 55-56). Also available in print.
11

Campbell, N. D. F. „Automatic 3D model acquisition from uncalibrated images“. Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.597247.

Annotation:
This work addresses all of the stages required to take a sequence of images of an object and recover a 3D model, in order to produce a system that maximises automation and minimises the demands placed on the user. To that end we present a practical implementation of an automatic method for recovering the positions and properties of the cameras used to take a series of images using a textured ground-plane. We then offer two contributions to simplify the task of segmenting an object observed in multiple images. The first, applicable to simpler scenes, automatically segments the object fixated upon by the camera. We achieve this by iteratively exploiting the rigid structure of the scene, to perform the segmentation in 3D across all the images simultaneously, and the consistent appearance of the object. For more complex scenes we move to our second algorithm, which allows the user to select the required object in an interactive manner whilst minimising demands on their time. We combine the different appearance and spatial constraints to produce a clustering problem that groups regions across images and allows the user to label many images at the same time. Finally we present an automatic reconstruction algorithm that improves the performance of existing state-of-the-art methods, allowing accurate models to be obtained from smaller image sequences. This takes the form of a filtering process that rejects erroneous depth estimates by considering multiple depth hypotheses and identifying the true depth, or an unknown state, using a 2D Markov Random Field framework.
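A minimal sketch of the window-based matching underlying such multi-view stereo pipelines is given below; the images, window size, and disparity range are invented for illustration, and a real system would add the hypothesis filtering the abstract describes.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equal-sized windows."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else -1.0

def best_match(ref, other, y, x, half=3, max_disp=20):
    """Score candidate matches along a row and keep the best disparity."""
    win = ref[y - half:y + half + 1, x - half:x + half + 1]
    scores = []
    for d in range(max_disp):
        if x - d - half < 0:
            break
        cand = other[y - half:y + half + 1, x - d - half:x - d + half + 1]
        scores.append((ncc(win, cand), d))
    return max(scores)   # (score, disparity); a filter would reject low scores

rng = np.random.default_rng(1)
left = rng.random((50, 60))
right = np.roll(left, -5, axis=1)   # synthetic 5-pixel horizontal shift
print(best_match(left, right, y=25, x=40))   # finds disparity 5, score ~1.0
```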
12

Ingle, Nicholas. „A process model for acquisition integration success“. Thesis, Heriot-Watt University, 2014. http://hdl.handle.net/10399/2856.

Annotation:
On average over the last 30 years 50% of all mergers and acquisitions have failed, with a third of all these failures having been caused by poor integration. This study sets out to examine a potential solution to improve the chances of integration success. An evaluation of the published acquisition integration process models that had strategic alignment of the acquisition strategy at their core was carried out and these were found to be incomplete and deficient in various aspects, including integrating fit factors, defined process stages and their interconnectedness. A conceptual acquisition integration process model was developed, based on a review of the literature which was subsequently used to design an appropriate research methodology to enhance and validate this model. In subsequent field work a qualitative case study approach, incorporating interviews, documents and comparative data analysis, was undertaken using four organisations and sixteen interviews, to assess how those organisations carry out the integration process. The results were combined with the conceptual model to develop an interim integration process model. This model was subsequently tested on the previous case organisations through semi-structured interviews. The conceptual process model was re-appraised and an internal and a limited external validation study were carried out on the revised model. From this the final complete acquisition integration process model and acquisition planning and integration implementation ‘onion’ was developed that is both practical and empirically tested, albeit on a small sample set.
13

Frermann, Lea. „Bayesian models of category acquisition and meaning development“. Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/25379.

Annotation:
The ability to organize concepts (e.g., dog, chair) into efficient mental representations, i.e., categories (e.g., animal, furniture) is a fundamental mechanism which allows humans to perceive, organize, and adapt to their world. Much research has been dedicated to the questions of how categories emerge and how they are represented. Experimental evidence suggests that (i) concepts and categories are represented through sets of features (e.g., dogs bark, chairs are made of wood) which are structured into different types (e.g., behavior, material); (ii) categories and their featural representations are learnt jointly and incrementally; and (iii) categories are dynamic and their representations adapt to changing environments. This thesis investigates the mechanisms underlying the incremental and dynamic formation of categories and their featural representations through cognitively motivated Bayesian computational models. Models of category acquisition have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this thesis, we focus on categories acquired from natural language stimuli, using nouns as a stand-in for their reference concepts, and their linguistic contexts as a representation of the concepts' features. The use of text corpora allows us to (i) develop large-scale unsupervised models thus simulating human learning, and (ii) model child category acquisition, leveraging the linguistic input available to children in the form of transcribed child-directed language. In the first part of this thesis we investigate the incremental process of category acquisition. We present a Bayesian model and an incremental learning algorithm which sequentially integrates newly observed data. We evaluate our model output against gold standard categories (elicited experimentally from human participants), and show that high-quality categories are learnt both from child-directed data and from large, thematically unrestricted text corpora. We find that the model performs well even under constrained memory resources, resembling human cognitive limitations. While lists of representative features for categories emerge from this model, they are neither structured nor jointly optimized with the categories. We address these shortcomings in the second part of the thesis, and present a Bayesian model which jointly learns categories and structured featural representations. We present both batch and incremental learning algorithms, and demonstrate the model's effectiveness on both encyclopedic and child-directed data. We show that high-quality categories and features emerge in the joint learning process, and that the structured features are intuitively interpretable through human plausibility judgment evaluation. In the third part of the thesis we turn to the dynamic nature of meaning: categories and their featural representations change over time, e.g., children distinguish some types of features (such as size and shade) less clearly than adults, and word meanings adapt to our ever-changing environment and its structure. We present a dynamic Bayesian model of meaning change, which infers time-specific concept representations as a set of feature types and their prevalence, and captures their development as a smooth process. We analyze the development of concept representations in their complexity over time from child-directed data, and show that our model captures established patterns of child concept learning.
We also apply our model to diachronic change of word meaning, modeling how word senses change internally and in prevalence over centuries. The contributions of this thesis are threefold. Firstly, we show that a variety of experimental results on the acquisition and representation of categories can be captured with computational models within the framework of Bayesian modeling. Secondly, we show that natural language text is an appropriate source of information for modeling categorization-related phenomena suggesting that the environmental structure that drives category formation is encoded in this data. Thirdly, we show that the experimental findings hold on a larger scale. Our models are trained and tested on a larger set of concepts and categories than is common in behavioral experiments and the categories and featural representations they can learn from linguistic text are in principle unrestricted.
14

McCandless, Michael Kyle. „Automatic acquisition of language models for speech recognition“. Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/36462.

Annotation:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (leaves 138-141).
by Michael Kyle McCandless.
M.S.
15

Clark, Stephen. „Class-based statistical models for lexical knowledge acquisition“. Thesis, University of Sussex, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341541.

Annotation:
This thesis is about the automatic acquisition of a particular kind of lexical knowledge, namely the knowledge of which noun senses can fill the argument slots of predicates. The knowledge is represented using probabilities, which agrees with the intuition that there are no absolute constraints on the arguments of predicates, but that the constraints are satisfied to a certain degree; thus the problem of knowledge acquisition becomes the problem of probability estimation from corpus data. The problem with defining a probability model in terms of senses is that this involves a huge number of parameters, which results in a sparse data problem. The proposal here is to define a probability model over senses in a semantic hierarchy, and exploit the fact that senses can be grouped into classes consisting of semantically similar senses. A novel class-based estimation technique is developed, together with a procedure that determines a suitable class for a sense (given a predicate and argument position). The problem of determining a suitable class can be thought of as finding a suitable level of generalisation in the hierarchy. The generalisation procedure uses a statistical test to locate areas consisting of semantically similar senses, and, as well as being used for probability estimation, is also employed as part of a re-estimation algorithm for estimating sense frequencies from incomplete data. The rest of the thesis considers how the lexical knowledge can be used to resolve structural ambiguities, and provides empirical evaluations. The estimation techniques are first integrated into a parse selection system, using a probabilistic dependency model to rank the alternative parses for a sentence. Then, a PP-attachment task is used to provide an evaluation which is more focussed on the class-based estimation technique, and, finally, a pseudo disambiguation task is used to compare the estimation technique with alternative approaches.
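The class-based intuition can be sketched as follows; the toy hierarchy, counts, and the uniform spreading of class mass over members are invented simplifications for illustration, not the thesis's actual estimator.

```python
# Sparse sense counts for one predicate slot; 'juice' was never observed.
counts = {('drink', 'obj', 'water'): 20,
          ('drink', 'obj', 'beer'): 12,
          ('drink', 'obj', 'juice'): 0}
CLASSES = {'liquid': ['water', 'beer', 'juice']}   # invented semantic class

def p_sense_given_slot(pred, slot, sense, cls='liquid'):
    """Estimate a sense probability from the aggregate count of its class."""
    members = CLASSES[cls]
    class_count = sum(counts.get((pred, slot, m), 0) for m in members)
    total = sum(v for (p, s, _), v in counts.items() if (p, s) == (pred, slot))
    # Class probability spread uniformly over members: the unseen sense
    # still receives mass because its class is common in this slot.
    return (class_count / total) * (1 / len(members)) if total else 0.0

print(p_sense_given_slot('drink', 'obj', 'juice'))   # ~0.333 despite zero count
```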
16

Lockerman, Yitzchak. „Facilitating the Acquisition of Realistic Material Appearance Models“. Thesis, Yale University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10584954.

Annotation:

Over the last two decades, tools for rendering realistic three dimensional scenes have become available to anyone. Unfortunately, tools for acquiring realistic material appearance models to render have lagged behind. In this dissertation, we demonstrate a number of tools that provide low cost methods to capture these models. Particularly, we focus on two different aspects of appearance: the spatial variance encoded in texture and the subsurface scattering of light within an object.

Our first tool allows users to extract textures from arbitrary natural images. These images can be acquired from the web or captured by a camera. An interface then allows even a novice user to easily specify the minimum information needed to extract a desired texture. This tool is freely available as an online web application.

Next we provide a generalization of our first tool to allow for the extraction of all textures in an image at multiple levels of scale. In addition to creating more realistic textures, this allows the user to use this information in a number of novel ways to create new works of art.

Finally, we show that low cost consumer level equipment can be used to acquire the subsurface scattering properties of three dimensional objects. Additionally, we obtain geometric information from objects with strong subsurface scattering, a difficult challenge for most commercial shape acquisition systems.

We provide a discussion of future plans to continue our work to democratize material acquisition. This includes a design for a handheld version of our material acquisition system. We also discuss the possibility of applying our work to other fields such as medical imaging.

17

Mlakar, Joseph A. „Aggregate models for target acquisition in urban terrain“. Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Jun%5FMlakar.pdf.

Annotation:
Thesis (M.S. in Operations Research and M.S. in Applied Mathematics)--Naval Postgraduate School, June 2004.
Thesis advisor(s): Craig W. Rasmussen, Thomas M. Cioppa. Includes bibliographical references (p. 131-132). Also available online.
18

Rahman, Atiqur. „Technological progress and technology acquisition : models with and without rivalry“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape2/PQDD_0030/NQ64654.pdf.

19

Cameron, Heather M. „Constraint satisfaction for interactive 3-D model acquisition“. Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/28937.

Annotation:
More and more computer applications are using three-dimensional models for a variety of uses (e.g. CAD, graphics, recognition). A major bottleneck is the acquisition of these models. The easiest method for designing the models is to build them directly from images of the object being modelled. This paper describes the design of a system, MOLASYS (for MOdeL Acquisition SYStem), that allows the user to build object models interactively from underlying images. This would not only be easier for the user, but also more accurate as the models will be built directly satisfying the dimensions, shape, and other constraints present in the images. The object models are constructed by constraining model points and edges to match points in the image objects. The constraints are defined by the user and expressed using a Jacobian matrix of partial derivatives of the errors with respect to a set of camera and model parameters. MOLASYS then uses Newton's method to solve for corrections to the parameters that will reduce the errors specified in the constraints to zero. Thus the user describes how the system will change, and the program determines the best way to accomplish the desired changes. The above techniques, implemented in MOLASYS, have resulted in an intuitive and flexible tool for the interactive creation of three-dimensional models.
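A minimal numerical sketch of the Newton-style core the abstract describes is given below: a toy constraint ties a transformed model point to an image point, and least-squares solves of the Jacobian system drive the error toward zero. The parameterisation (one rotation angle plus a 2-D translation) is invented for illustration, not MOLASYS's actual camera/model parameters.

```python
import numpy as np

p0 = np.array([1.0, 0.0])        # a model point
target = np.array([0.0, 2.0])    # image point it is constrained to match

def predict(params):
    """Model point after rotation by theta and translation by (tx, ty)."""
    th, tx, ty = params
    rot = np.array([[np.cos(th), -np.sin(th)],
                    [np.sin(th),  np.cos(th)]])
    return rot @ p0 + np.array([tx, ty])

def jacobian(params):
    """Partial derivatives of the error with respect to (theta, tx, ty)."""
    th, _, _ = params
    dth = np.array([-np.sin(th) * p0[0] - np.cos(th) * p0[1],
                     np.cos(th) * p0[0] - np.sin(th) * p0[1]])
    return np.column_stack([dth, [1.0, 0.0], [0.0, 1.0]])

params = np.zeros(3)
for _ in range(10):
    e = predict(params) - target
    # Least-squares solve for the correction reducing the error to zero;
    # the system is underdetermined, so lstsq picks the minimum-norm step.
    delta, *_ = np.linalg.lstsq(jacobian(params), -e, rcond=None)
    params += delta
print(params, predict(params))   # predicted point converges onto the target
```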
20

Swartzendruber, Kara Louise. „The picture word inductive model and vocabulary acquisition". Diss., A link to full text of this thesis in SOAR, 2007. http://soar.wichita.edu/dspace/handle/10057/1178.

Annotation:
Thesis (M.Ed.)--Wichita State University, College of Education, Dept. of Curriculum and Instruction.
"May 2007." Title from PDF title page (viewed on Dec. 29, 2007). Thesis adviser: Kim McDowell. Includes bibliographic references (leaves 42-46).
21

McQueen, Thomas. „STORM : an unsupervised connectionist model for language acquisition“. Thesis, Nottingham Trent University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.429266.

Annotation:
Language acquisition is one of the core problems in artificial intelligence. Current performance bottlenecks in natural language processing (NLP) systems result from a prerequisite for an incalculable amount of language and domain-specific knowledge. Consequently, the creation of an automated language acquisition system would revolutionize the field of NLP. Connectionist models that learn by example (i.e. artificial neural networks) have been successfully applied to many areas of language acquisition. However, the most widely used class of these models, known as supervised connectionist models, have a number of major limitations, including an inability to represent variables and a limited ability to generalize from sparse data. Such limitations have prevented connectionist models from being applied to large-scale language acquisition. This research considers the alternative and less widely used class of unsupervised connectionist models and investigates whether such models can capture the finite-state properties of language. A novel unsupervised connectionist model, STORM (Spatio-Temporal Self-Organizing Recurrent Map), is proposed that uses a memory-rule based approach to learn a regular grammar from a set of positive example sequences. STORM's learning algorithm uses a derivation of functional-equivalence theory that allows the model to learn via similarity of behaviour, rather than just similarity of form. This novel functional generalization ability allows STORM to learn a perfect and stable representation of the Reber grammar from a sparse training set of just 30 sequences, as opposed to the 60,000 sequences required to train a supervised connectionist model. Unlike supervised models, once STORM has learnt the grammar it can generalize to test sequences of any length or depth of embedding. Extensions to the model are proposed to show how STORM can learn context-free grammars. These extensions also solve the logical problem of language acquisition by recovering from overgeneralizations without the need for negative evidence.
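For readers wanting to reproduce the kind of training data mentioned (e.g. a sparse set of 30 positive sequences), below is a small generator for the standard Reber grammar; the state encoding is one common published form of the automaton, not code from the thesis.

```python
import random

# transitions[state] = list of (symbol, next_state); None marks termination.
TRANSITIONS = {
    0: [('T', 1), ('P', 2)],
    1: [('S', 1), ('X', 3)],
    2: [('T', 2), ('V', 4)],
    3: [('X', 2), ('S', None)],
    4: [('P', 3), ('V', None)],
}

def reber_string(rng=random):
    """Generate one positive example of the Reber grammar."""
    state, out = 0, ['B']
    while state is not None:
        symbol, state = rng.choice(TRANSITIONS[state])
        out.append(symbol)
    out.append('E')
    return ''.join(out)

random.seed(0)
training_set = [reber_string() for _ in range(30)]
print(training_set[:5])   # e.g. 'BTXXVVE', 'BPVVE', ...
```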
22

Walter, Michael. „Automatic model acquisition and recognition of human gestures“. Thesis, University of Westminster, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434422.

23

Van den Honert, Robin Charles. „Game-theoretic models for mergers and acquisitions". Doctoral thesis, University of Cape Town, 1995. http://hdl.handle.net/11427/22556.

Annotation:
This thesis examines the corporate merger process as a bargaining game, under the assumption that the two companies are essentially in conflict over the single issue of the price to be offered by the acquirer to the target. The first part of the thesis deals with the construction and testing of analytical game-theoretic models to explain the proportion of the synergy gains accruing to the target company under different assumptions about the players' a priori knowledge. Assuming full certainty amongst the players about the pre- and post-merger values of the companies, the distribution of gains between target and acquiring companies that would be consistent with the Nash-Kalai axioms is determined in principle. The resulting model depends on the players' utility functions, and is parameterised by the relative bargaining strength of the players and their risk aversion coefficients. An operational version of the model is fitted to empirical data from a set of 24 recent mergers of companies quoted on the Johannesburg Stock Exchange. The model is shown to have good predictive power within this data set. Under the more realistic assumption of shared uncertainty amongst the two players about the post-merger value of the combined company, a Nash-Kalai bargaining model incorporating this uncertainty is developed. This model is an improvement over those with complete certainty in that it offers improved model fit in terms of predicting the total amount paid by an acquirer, and is able to dichotomise this payment into a cash amount and a share transfer amount. The theoretical model produced some results of practical value. Firstly, a cash-only offer is never optimal. Conditions under which shares only should be tendered are identified. Secondly, the optimal offer amount depends on the form of payment and the level of perceived risk. In a share-only offer the amount is constant regardless of risk, whilst if cash is included an increase in risk will imply a decrease in the optimal amount of cash offered. The Nash-Kalai model incorporating shared uncertainty is empirically tested on the same data set used previously. This allows a comparison with earlier results and estimation of the extent of the uncertainty. An extension of this model is proposed, incorporating an alternative form of the utility functions. The second part of the thesis makes use of ideas from negotiation analysis to construct a dynamic model of the complex processes involved in negotiation. It offers prescriptive advice to one of the players on likely Pareto-optimal bargaining strategies, given a description of the strategy the other party is likely to employ. The model describes the negotiating environment and each player's negotiating strategy in terms of a few simple parameters. The model is implemented via a Monte Carlo simulation procedure, which produces expected gains to each player and average transaction values for a wide range of each of the players' strategies. The resulting two-person game bimatrix is analysed to offer general insights into negotiated outcomes, and using conventional game-theoretic and Bayesian approaches to identify "optimal" strategies for each of the players. 
It is shown that for the purposes of identifying optimal negotiating strategies, the players' strategies (described by parameters which are continuous in nature) can be adequately approximated by a sparse grid of discrete strategies, providing that these discrete strategies are chosen so as to achieve an even spread across the set of continuous strategies. A sensitivity analysis on the contextual parameters shows that the optimal strategy pair is very robust to changes to the negotiating environment, and any such changes that have the players start negotiating from positions more removed from one another are more detrimental to the target. A conceptual decision support system which uses the model and simulated results as key components is proposed and outlined.
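A minimal numerical sketch of an asymmetric Nash-Kalai bargaining solution of the kind the thesis builds on is shown below; the utility functions (concave power utilities) and all parameter values are invented for illustration.

```python
import numpy as np

G = 100.0            # total synergy gain to divide between the companies
alpha = 0.6          # target's relative bargaining strength
r_t, r_a = 0.5, 0.8  # risk-aversion exponents for target and acquirer

# Maximise u_t(x)^alpha * u_a(G - x)^(1 - alpha) with u(x) = x^r,
# done in log space over a grid of candidate payments x to the target.
x = np.linspace(0.001, G - 0.001, 10_000)
objective = alpha * r_t * np.log(x) + (1 - alpha) * r_a * np.log(G - x)
best = x[np.argmax(objective)]
print(f'target share of gains: {best / G:.3f}')   # ~0.484 for these values
```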
24

Hagiwara, Masato, Yasuhiro Ogawa and Katsuhiko Toyama. „AUTOMATIC ACQUISITION OF LEXICAL KNOWLEDGE USING LATENT SEMANTIC MODELS". INTELLIGENT MEDIA INTEGRATION NAGOYA UNIVERSITY / COE, 2006. http://hdl.handle.net/2237/10444.

25

Thakur, Ravi Bhushan. „Low power design implementation of a signal acquisition module“. Thesis, Kansas State University, 2010. http://hdl.handle.net/2097/4617.

Annotation:
Master of Science, Department of Electrical and Computer Engineering. Advisor: Don M. Gruenbacher.
As semiconductor technologies advance, the smallest feature sizes that can be fabricated get smaller. This has led to the development of high-density FPGAs capable of supporting high clock speeds, which allows for the implementation of larger, more complex designs on a single chip. Over the past decade the technology market has shifted toward mobile devices, with low power consumption at or near the top of design considerations. By reducing power consumption in FPGAs we can achieve greater reliability, lower cooling cost, simpler power supply and delivery, and longer battery life. In this thesis, FPGA technology is discussed for the design and commercial implementation of low power systems as compared to ASICs or microprocessors, and a few techniques are suggested for lowering power consumption in FPGA designs. The objective of this research is to implement some of these approaches and attempt to design a low power signal acquisition module. Designing for low power consumption without compromising performance requires a power-efficient FPGA architecture and good design practices to leverage the architectural features. With various power conservation techniques suggested for every stage of the FPGA design flow, the following approach was used in the design process implementation: the switching activity is addressed at the design entry and synthesis levels, and software tools are utilized to get an initial estimate of, and to optimize, the design's power consumption. Finally, the device choice is made based on its features that will enhance the optimization achieved in the previous stages; it is configured, and real-time board-level power measurements are made to verify the implementation's efficacy.
26

Ross, James P. „A risk management model for the Federal Acquisition Process". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA368012.

Annotation:
Thesis (M.S. in Management) Naval Postgraduate School, June 1999.
"June 1999". Thesis advisor(s): David A. Smith, Mark E. Nissen. Includes bibliographical references (p. 155-158). Also available online.
27

Yu, Ting. „Stereo-Based Three-Dimensional Model Acquisition and Motion Detection“. Thesis, University of Ottawa (Canada), 2010. http://hdl.handle.net/10393/28866.

Annotation:
Deformable models have a long tradition in computer graphics and computer vision. This thesis looks at the capture of surface deformation based on stereo vision. In recent years, 3D reconstruction and motion detection have attracted great attention. In this thesis a framework for 3D reconstruction from multi-view images followed by isometry-based motion detection is proposed. For 3D reconstruction, the thesis proposes a multi-view stereo algorithm based on well-known window-based matching combined with fusion of multiple matching results. To improve the matching result, some low-level image processing algorithms, camera calibration and background detection are utilized. For window-based matching, a new hybrid matching method is introduced that combines a measure of intensity difference with a measure of intensity distribution difference. Multiple MVS pointclouds from different reference views are fused with two new fusion strategies to generate a better final reconstruction. To characterize the performance of our matching method and fusion strategies, an evaluation based on the quality of reconstruction is given in the thesis. Based on the 3D pointclouds of the object surface obtained with stereo, the deformation of the surface is captured. To generate dense motion vectors over a deformed surface, a simple window-based 3D flow method is applied, using isometry of the observed surface as its primary matching constraint. The method uses feature points as anchoring references for the surface deformation. Given a set of matched features, no other intensity information is used, and hence the method can tolerate intensity changes over time. The approach is shown to work well on two example scenes which capture non-rigid isometric and general deformations. The thesis also presents experiments demonstrating the stability of the geodesic approximation employed in the isometry-based matching when the 3D pointclouds are sparse.
28

Nicholson, Chris L. „A Markov Model for Marine Corps Acquisition Force Planning“. Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/7393.

Annotation:
This research is in response to a request by the Marine Aviation Detachment at Naval Air Station Patuxent River, MD. Currently, no manpower planning tools exist for force shaping of the Marine Corps Acquisition Community. This thesis creates a force shaping and forecasting tool for Marine Corps manpower planners. The tool assists planners in forecasting inventory levels across rank and Military Occupational Specialty combinations and in determining the most robust force structure for the acquisition officer community. Validation of the model reveals the usefulness of the planning tool for forecasting inventory levels, but it also indicates weakness in force structure analysis. This weakness is due to the small size and nascency of the current community; further data collection is required to validate the model for future use in force structure development.
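A Markov manpower forecast of the kind described can be sketched in a few lines; the ranks, transition rates, and accession numbers below are invented for illustration, not taken from the thesis. Unlike a closed Markov chain, this open system adds new entrants each period.

```python
import numpy as np

ranks = ['Capt', 'Maj', 'LtCol']
# Annual rank-transition probabilities; rows sum to < 1, the remainder
# being attrition out of the community.
P = np.array([[0.70, 0.20, 0.00],
              [0.00, 0.75, 0.15],
              [0.00, 0.00, 0.80]])
accessions = np.array([40.0, 0.0, 0.0])   # new entrants per year, by rank

inventory = np.array([200.0, 120.0, 60.0])
for year in range(1, 6):
    inventory = inventory @ P + accessions
    print(year, dict(zip(ranks, inventory.round(1))))
```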
29

Molnar, Raymond A. (Raymond Alexander) 1977. „"Generalize and Sift" as a model of inflection acquisition“. Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86820.

Annotation:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (leaves 72-76).
by Raymond A. Molnar.
M.Eng.
30

Voisin, Sophie. „3D model acquisition, segmentation and reconstruction using primitive fitting“. Dijon, 2008. http://www.theses.fr/2008DIJOS056.

Annotation:
The reverse engineering of a 3D object consists in identifying the main parts, or primitives, that best reconstruct its 3D point cloud. Because the success of the reconstruction process is greatly influenced by the errors generated along the reverse-engineering chain, we focus our research on improving two phases of the process. First, in order to minimize the point-cloud acquisition errors associated with the use of a structured-light projection scanner, we present a method to select the best illumination source and the best object appearance colours depending on the characteristics of the scanner used. Second, in order to obtain a simplified representation of the object while maintaining accuracy and a realistic representation of its parts, we present novel 3D reconstruction and segmentation methods. The originality of these methods is the use of genetic algorithms to obtain a representation of the model in terms of primitives, in our case superquadrics or supershapes. Their particular strengths lie in the flexibility genetic algorithms bring to solving optimization problems, since the outcome does not depend on the initialization of the process, and in the capabilities of the supershape representation, which allows objects of very complex shape to be reconstructed. Despite relatively long computation times, the results show good performance in terms of reconstruction and segmentation of objects and scenes.
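The primitive-fitting objective behind this approach can be sketched as follows; a genetic algorithm would search the superquadric parameter space minimising this error. The point cloud and parameter values are invented for illustration.

```python
import numpy as np

def inside_outside(pts, a1, a2, a3, e1, e2):
    """Superquadric implicit function F; F == 1 on the primitive's surface."""
    x, y, z = np.abs(pts).T
    return ((x / a1) ** (2 / e2) + (y / a2) ** (2 / e2)) ** (e2 / e1) \
        + (z / a3) ** (2 / e1)

def fitness(pts, params):
    """Error of a candidate primitive: deviation of F from 1 over the cloud."""
    return np.mean((inside_outside(pts, *params) - 1.0) ** 2)

rng = np.random.default_rng(2)
# Points on a unit sphere (a superquadric with a1 = a2 = a3 = e1 = e2 = 1).
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(fitness(pts, (1, 1, 1, 1, 1)))   # ~0: correct primitive
print(fitness(pts, (2, 1, 1, 1, 1)))   # larger error: wrong size parameter
```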
31

Telgen, S. J. „Motor learning during reaching movements : model acquisition and recalibration“. Thesis, University College London (University of London), 2015. http://discovery.ucl.ac.uk/1464422/.

Annotation:
This thesis marks a departure from the traditional task-based distinction between sensorimotor adaptation and skill learning by focusing on the mechanisms that underlie adaptation and skill learning. I argue that adaptation is a recalibration of an existing control policy, whereas skill learning is the acquisition and subsequent automatization of a new control policy. A behavioral criterion to distinguish the two mechanisms is offered. The first empirical chapter contrasts learning in visuomotor rotations of 40° with learning left-right reversals during reaching movements. During left-right reversals, speed-accuracy trade-offs increased and offline gains emerged, whereas during visual rotations, speed-accuracy trade-offs remained constant and, instead of offline gains, there was offline forgetting. I argue that these dissociations reflect differences in the underlying learning mechanisms: acquisition and recalibration. The second empirical chapter tests whether the dissociation based on time-accuracy trade-offs reveals a general property of recalibration or whether instead the interpretation is limited to the specific contrast between left-right reversals and visuomotor rotations. When the size of the prediction error (the difference between intended and perceived movement) was gradually increased, participants switched from recalibration to control policy acquisition. This switching point can be derived by considering the role of internal models in recalibration: if the internal model that learns from errors and the environment are too dissimilar (e.g. in left-right reversals and large rotations), recalibration would cause the system to learn from errors in the wrong way, such that prediction errors would increase further. To address this problem, the final empirical chapter explores whether the way the system learns from errors can be reversed. In conclusion, the results provide behavioral criteria to differentiate between adaptation and skill learning. By exploring the boundaries of recalibration this thesis contributes to a more principled understanding of the mechanisms involved in adaptation and skill learning.
32

Adams, Richard. „The Advanced Data Acquisition Model (ADAM): A process model for digital forensic practice". PhD thesis, Murdoch University, 2012. https://researchrepository.murdoch.edu.au/id/eprint/14422/.

Annotation:
Given the pervasive nature of information technology, the nature of evidence presented in court is now less likely to be paper-based and in most instances will be in electronic form. However, evidence relating to computer crime is significantly different from that associated with the more 'traditional' crimes for which, in contrast to digital forensics, there are well-established standards, procedures and models to which law courts can refer. The key problem is that, unlike some other areas of forensic practice, digital forensic practitioners work in a number of different environments and existing process models have tended to focus on one particular area, such as law enforcement, and fail to take into account the different needs of those working in other areas such as incident response or 'commerce'. This thesis makes an original contribution to knowledge in the field of digital forensics by developing a new process model for digital data acquisition that addresses both the practical needs of practitioners working in different areas of the field and the expectation of law courts for a formal description of the process undertaken to acquire digital evidence. The methodology adopted for this research is design science, on the basis that it is particularly suited to the task of creating a new process model and an 'ideal approach' in the problem domain of digital forensic evidence. The process model employed is the Design Science Research Process (DSRP) (Peffers, Tuunanen, Gengler, Rossi, Hui, Virtanen and Bragge, 2006) that has been widely utilised within information systems research. A review of current process models involving the acquisition of digital data is followed by an assessment of each of the models from a theoretical perspective, by drawing on the work of Carrier and Spafford (2003), and from a legal perspective by reference to the Daubert test. The result of the model assessment is that none provide a description of a generic process for the acquisition of digital data, although a few models contain elements that could be considered for adaptation as part of a new model. Following the identification of key elements for a new model (based on the literature review and model assessment), the outcome of the design stage is a three-stage process model called the Advanced Data Acquisition Model (ADAM) that comprises three UML Activity diagrams, overriding Principles and an Operation Guide for each stage. Initial testing of the ADAM (the Demonstration stage from the DSRP) involves a 'desk check' using both in-house documentation relating to three digital forensic investigations and four narrative scenarios. The results of this exercise are fed back into the model design stage and alterations made as appropriate. The main testing of the model (the DSRP Evaluation stage) involves independent verification and validation of the ADAM utilising two groups of 'knowledgeable people'. The first group, the Expert Panel, consists of international 'subject matter experts' from the domain of digital forensics. The second group, the Practitioner Panel, consists of peers from around Australia who are digital forensic practitioners and includes a representative from each of the areas of relevance for this research, namely: law enforcement, commerce and incident response. Feedback from the two panels is considered and modifications applied to the ADAM as appropriate.
This thesis builds on the work of previous researchers and demonstrates how the UML can be practically applied to produce a generic model of one of the fundamental digital forensic processes, paving the way for future work in this area that could include the creation of models for other activities undertaken by digital forensic practitioners. It also includes the most comprehensive review and critique yet undertaken of process models incorporating the acquisition of digital evidence.
33

Ekström, Thomas. „Public Private Business Models for Defence Acquisition : A Multiple Case Study of Defence Acquisition Projects in the UK“. Licentiate thesis, Division of Engineering Logistics, Department of Industrial Management and Logistics, Lund University, Lund, Sweden, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:fhs:diva-9061.

Annotation:
Since the ending of the Cold War, the defence sector, particularly the areas of military logistics and defence acquisition, has been undergoing a comprehensive transformation. Several factors explain this transformation: changes in defence and security policies for nations and organisations; reductions in defence expenditure; participation in Peace Support Operations; Lessons Learned from these operations, especially in the area of logistics; revolutionary development in the area of Information and Communication Technology; the emergence of novel Commercial Best Practices in the areas of business and business logistics; and changes in the legislation regarding the conduct of public procurement in Europe. In military logistics, the relatively easily described static supply and support chains of the Cold War Era, designed for military units that stood in preparedness, Just-in-Case, for full-scale military conflicts in Europe, are now being replaced by flexible, dynamic operational supply and support chains, designed for military units that are deployed on Peace Support Operations around the globe. Hence, new types of missions have to be provided for. As a consequence, new military concepts have to be considered, new technology is being implemented, and new Commercial Best Practices are being evaluated, adapted and adopted, in order to enhance performance and ensure Value-for-Money. In defence acquisition, the single Business Model of the Cold War Era, i.e. procurement of equipment, is being replaced by a spectrum of emerging Business Models, ranging from the traditional procurement of equipment, via acquisition of equipment and support, to acquisition of availability and capability, i.e. acquisition of performance. Consequently, new Commercial Best Practices are being evaluated, adapted and adopted; Commercial and Military-Off-The-Shelf products and services are being utilised; and Public Private Participation, Cooperation, and Partnerships are being investigated and initiated; in order to enhance performance and ensure Value-for-Money, while simultaneously mitigating operational risk in the supply and support chains. This licentiate thesis reports on a research project that was commissioned by FMV, the Swedish Defence Materiel Administration, and conducted in order to "study, analyse, and evaluate Business Models regarding how they can handle the new supply concept that a new logistical interface brings about, with a particular emphasis on the risk taking that is part of the business concept". This research purpose was used to formulate three Research Questions:
• Research Question 1: How can a generic Business Model for a non-profit, governmental, Defence Procurement Agency be described?
• Research Question 2: Which strengths and weaknesses do different Business Models have in the context of defence acquisition?
• Research Question 3: Which risks are associated with different Business Models in the context of defence acquisition?
Using constructs from Business Model theory, Public Private Participation theory, defence acquisition theory and practice, and military logistics theory and practice, a generic Public Private Business Model for defence acquisition was developed. The generic model consists of numerous variables, which enables an array of possible configurations. The model was used in a multiple case study to describe and analyse four defence acquisition projects in the UK.
The multiple case study demonstrated that the generic Public Private Business Model is useful for describing defence acquisition projects. The model has also demonstrated that it is useful for analysing acquisition projects, including performance and risk. The Public Private Business Model has demonstrated its usefulness by discovering internal and external misalignments. Internal misalignments are Business Model configurations where the different building blocks work against each other. The research has revealed examples where the mitigation of operational risk in the supply and support chains creates new risks in other building blocks. An external misalignment occurs when a Business Model configuration works against the deal for which it was designed, or the strategy that it is intended to realise. The research has revealed examples where there is a risk that the Business Model configuration is detrimental to the overarching strategy, e.g. transferring risk to the private sector or incentivising industry to enhance performance. Hence, the Public Private Business Model ought to be useful for identifying and eradicating negative patterns and for identifying and reinforcing positive patterns. The research has revealed three potential generic problems for Performance Based Contracts: a "definition problem" (i.e. what to measure); a "measurement problem" (i.e. when, where and how to measure); and a "comparison problem" (i.e. with what to compare). The research results demonstrate that it must be made explicit which dimensions of performance (e.g. speed, quality, cost, flexibility and dependability) should be measured, and why others are omitted. The research suggests that performance must be explicitly specified in any Performance Based Contract in order to avoid unnecessary problems of interpretation. Furthermore, the research indicates that performance metrics must be explicitly described. In addition, the results emphasise the importance of having an established baseline against which to compare the measurements of Key Performance Indicators.
34

Paviotti, Anna. „Acquisition and processing of multispectral data for texturing 3D models“. Doctoral thesis, Università degli studi di Padova, 2009. http://hdl.handle.net/11577/3426851.

Annotation:
This thesis deals with three problems concerning the use of a multispectral imaging spectrograph in applications of cultural heritage. The multispectral camera is part of an instrument developed within the project "Shape&Color" (CARIPARO, 2003-2005), coupling the spectrograph with a 3D laser scanner. Although the issues we have addressed arose from the characteristics of this specific instrument, they can be regarded as general problems concerning multispectral imaging, and are therefore of broader interest. The first part reports on the characterization of the spectrograph's performance in measuring spectral reflectance under different illumination conditions. Four different illumination setups have been used to acquire a set of colored calibrated tiles. The system performance has been evaluated through a metrologically inspired procedure, using as descriptors the average error (AE) and the average error standard deviation (AESTD), calculated by means of the error propagation formula. The best results have been obtained with a metallic iodide lamp and an incandescence lamp used in sequence, juxtaposing the spectral reflectance measured with the metallic iodide lamp in the 400-600 nm interval and that obtained with the halogen lamp in the 600-900 nm interval. The second issue concerns the problem of separating spectral illumination and spectral reflectance in the acquired color signal (the global radiation signal reflected by a target object). Since the latter can be considered as the product of illumination and spectral reflectance, this is an ill-posed problem: methods in the literature estimate the two functions only up to a scale factor. The proposed solution attempts to recover this scale factor using a statistical approach. The core of the algorithm is the estimation of the illumination intensity through a modification of the RANSAC algorithm, using relations derived from the physical constraints on the illumination and the spectral reflectance. The spectral reflectance is subsequently computed from the measured color signal and the estimated illumination function. The algorithm has been tested on four case studies, representing artworks of different pictorial techniques, color characteristics and dimensions. The results are good in terms of mean relative error, while the infinity norm of the relative error sometimes assumes high values. The last problem we have dealt with is that of using the multispectral images acquired with the Shape&Color scanner to texture uncalibrated 3D data. What makes the problem worth addressing is that the spectral camera is not a pinhole camera, but can be classified as a cylindrical panoramic camera. In this thesis, the general problem of estimating the extrinsic parameters of the camera from a known set of 3D-2D correspondences has been considered. The chosen approach is the classical reprojection error minimization procedure. As the projection operator is nonlinear, the objective function has a very complicated structure. Due to this and to the high dimensionality of the problem, the minimization results are strongly sensitive to the choice of the initial parameter values. This work proposes a way of finding a reliable initial point for the minimization, so as to lower the risk of being trapped in local minima.
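The scale ambiguity at the heart of the illumination/reflectance separation described above can be made concrete with a short sketch. The following Python fragment is an illustration only, not the author's algorithm: it models the measured color signal as C(lambda) = E(lambda) * R(lambda), uses the physical constraint 0 <= R <= 1 to bound the illumination intensity k from below, and recovers k with a robust quantile standing in for the thesis's modified RANSAC estimator; all names (split_color_signal, e_shape) are invented for the example.

import numpy as np

def split_color_signal(C, e_shape, q=0.99):
    """C: (n_pixels, n_bands) measured color signals.
    e_shape: (n_bands,) illumination spectrum known only up to scale.
    Returns the estimated intensity k and per-pixel reflectances."""
    # R = C / (k * e_shape) <= 1 forces k >= max over bands of C / e_shape at every pixel.
    k_min = (C / e_shape).max(axis=1)
    # A high quantile instead of the global maximum discards a small fraction of
    # outlier pixels (specular highlights, sensor noise), which is the role a
    # RANSAC-style consensus step plays in the thesis.
    k = np.quantile(k_min, q)
    R = np.clip(C / (k * e_shape), 0.0, 1.0)
    return k, R

# Synthetic check: with a few near-white surfaces in the scene, the
# estimate recovers the true intensity (here k = 3.0) almost exactly.
rng = np.random.default_rng(0)
true_R = rng.uniform(0.05, 0.95, size=(1000, 31))
true_R[rng.choice(1000, 20, replace=False)] = 1.0
e_shape = np.linspace(0.5, 1.0, 31)
C = 3.0 * e_shape * true_R
k, R = split_color_signal(C, e_shape)
print(round(k, 2))

Without a statistical criterion of this kind, any k at least as large as every pixel's lower bound yields physically admissible reflectances, which is exactly why the color signal alone cannot fix the scale.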
35

Oladejo, J. A. „The acquisition of English modals by Yoruba learners“. Thesis, University of Reading, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.370645.

36

Gandell, Joy. „Mergers and acquisitions : a unified human resources model“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ59280.pdf.

37

Nutting, Ryan Todd. „Model lives : the changing meanings of miniature ethnographic models from acquisition to interpretation at the Horniman Free Museum, 1894-1898“. Thesis, University of Leicester, 2017. http://hdl.handle.net/2381/40036.

Annotation:
Although many contemporary museums possess collections of miniature ethnographic models, few scholars have explored how these objects emphasize ideas of intellectual control. This thesis examines the use and interpretation of miniature ethnographic models in the late nineteenth century. I demonstrate how the interpretation of these objects reinforced British intellectual control over the peoples of India and Burma during this period by focusing on four sets of miniature ethnographic models purchased by Frederick Horniman in the mid-1890s and displayed in the Horniman Free Museum until it closed in January 1898. Building on the theories of miniature objects developed by Susan Stewart and others, scholarship on the development of tourist art, late nineteenth-century museum education theories, and postcolonial theories, the thesis examines the biography of these objects between 1894 and 1898. By drawing on archival documents from the museum, articles about Horniman and the museum from this period, and newspaper articles chronicling Horniman’s journal of his travels between 1894 and 1896, this thesis traces the interpretation of these miniature models from their purchase, through their display within the museum, to the description of these models by visitors to the museum, and in each case shows how these models embodied notions of intellectual control over the peoples of India and Burma. Where previous studies have focused on only one or two of these phases of objects’ lives, this thesis demonstrates that all three phases of these models’ lives (collection, display, and visitor interpretations) within the period reveal aspects of colonial control. Consequently, this thesis provides a basis for further work investigating how late nineteenth-century collectors and museums utilized objects both to construct knowledge and to implicitly highlight aspects of colonial control.
38

Bauer, Louis M., und Marc M. Meeker. „An acquistion [i.e. acquisition] leader's model for building collaborative capacity“. Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/10738.

Annotation:
MBA Professional Report
This report begins by defining collaboration. Next, it provides examples of how effective collaboration within the Department of Defense's (DoD) acquisition community is lacking. Based on these examples, the project asks its main research question: "How can DoD acquisition leaders improve their collaborative capacity to improve cost, schedule and performance?" The project then provides a model for doing just that. The project, "An Acquisition Leader's Model for Building Collaborative Capacity", presents a three-step model. Step one is to assess and analyze collaboration capacity with regard to the elements of one's own organization, the organization's stakeholders, and the network (the relationships between stakeholders). Based on the analyses from step one, step two calls for making plans to improve collaboration capacity along the same elements previously analyzed: one's organization, stakeholders, and the network. Lastly, the model calls for executing the plans made in step two. This process is repeated until the desired collaboration capacity has been reached. Finally, the project provides a detailed hypothetical example of how the model can be applied.
39

Reid, Jack Burnett. „Assessing and mitigating vulnerability chains in model-centric acquisition programs“. Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/117789.

Annotation:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2018.
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 143-157).
Acquisition programs increasingly use model-centric approaches, generating and using digital assets throughout the lifecycle. Model-centric practices have matured, yet in spite of sound practices there are uncertainties that may impact programs over time. The emergent uncertainties (policy change, budget cuts, disruptive technologies, threats, changing demographics, etc.) and related programmatic decisions (e.g., staff cuts, reduced training hours) may lead to cascading vulnerabilities within model-centric acquisition programs, potentially jeopardizing program success. Program managers are increasingly faced with novel vulnerabilities. They need to be equipped with the means to identify model-centric program vulnerabilities and determine where interventions can most effectively be taken. In this research, Cause-Effect Mapping (CEM), a vulnerability assessment technique, is employed to examine these vulnerabilities. Using a combination of literature investigation, expert interviews, and usability testing, a CEM is created to represent the novel vulnerabilities posed by model-centric practices in acquisition programs. Particular attention is paid to cybersecurity vulnerabilities, which pose a serious threat to the successful implementation of model-centric practices. From this CEM, key gaps in program manager knowledge and organizational policies are identified and potential responses proposed.
Naval Postgraduate School Acquisition Research Programs Grant No. N00244-17-1-0011
by Jack Burnett Reid.
S.M.
S.M. in Technology and Policy
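The cascading structure described in the abstract above lends itself to a small illustration. The sketch below is a toy stand-in, not the thesis's Cause-Effect Mapping notation: emergent uncertainties and programmatic decisions become nodes in a directed cause-effect graph, and vulnerability chains are simply root-to-leaf paths; every node name is an invented example.

def chains(graph, node, path=()):
    # Depth-first enumeration of cause-effect chains starting at `node`.
    path = path + (node,)
    effects = graph.get(node, [])
    if not effects:            # terminal effect: emit the completed chain
        yield path
    for nxt in effects:
        yield from chains(graph, nxt, path)

CEM = {
    "budget cut": ["staff cut", "reduced training hours"],
    "staff cut": ["loss of modeling expertise"],
    "reduced training hours": ["loss of modeling expertise"],
    "loss of modeling expertise": ["unvalidated digital assets"],
    "unvalidated digital assets": [],
}

for chain in chains(CEM, "budget cut"):
    print(" -> ".join(chain))

Even this toy version shows where interventions pay off: a node that appears on many chains (here, "loss of modeling expertise") is a natural place for a program manager to intervene.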
40

Sanoudaki, Eirini. „A CVCV model of consonant cluster acquisition : evidence from Greek“. Thesis, University College London (University of London), 2007. http://discovery.ucl.ac.uk/1446084/.

Annotation:
The aim of this thesis is to develop a model of the acquisition of consonant clusters within the phonological framework of CVCV theory. This is the first attempt to link CVCV to the area of language acquisition. It thus provides a new domain within which CVCV can be evaluated against other phonological theories. The core claim of CVCV is that syllable structure consists solely of onsets and nuclei, without any branching constituents. Consonant clusters are separated by empty nuclei, whose distribution is controlled by binary parameters. The model developed in this thesis is based on the assumption that a central part of the acquisition process is the gradual setting of these parameters to the appropriate value. The model, apart from covering familiar acquisition data, makes a number of predictions about the order of acquisition of consonant clusters. Of particular importance are predictions regarding word initial clusters of non-rising sonority, whose acquisition has attracted little attention. The predictions are tested against experimental data of cluster production by fifty-nine children acquiring Greek as their first language. The experimental results indicate that a CVCV model can account for consonant cluster acquisition. With regard to word initial position, the results support the proposed CVCV analysis by providing evidence for the existence of a word initial Onset-Nucleus unit. Moreover, the notoriously complex issue of s+consonant clusters is examined, and new evidence for the structure and markedness of these clusters is provided. Finally, the results offer a new perspective on a manner dissimilation phenomenon in Greek, whereby clusters of two voiceless fricatives or two voiceless stops turn into a fricative plus stop. A parametric analysis, based on segmental complexity, is proposed, and it is argued that this analysis can explain the acquisition data as well as the historical evolution of Greek clusters.
41

Cluff, Sarah Zitting. „A Model of Grammatical Category Acquisition Using Adaptation and Selection“. BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4086.

Annotation:
By the later preschool years, most children have a knowledge of the grammatical categories of their native language and are capable of expanding this knowledge to novel words. To model this accomplishment, researchers have created a variety of explicit, testable models or algorithms. These have had partial but promising success in extracting grammatical word categories from transcriptions of caregiver input to young children. Additional insight into children's learning of the grammatical categories of words might be gained from evolutionary computing algorithms, which apply principles of evolutionary biology such as variation, adaptive change, self-regulation, and inheritance to computational models. The current thesis applied such a model to the language addressed to five children, whose ages ranged from 1;1 to 5;1 (years;months). The model evolved dictionaries linking words to their grammatical tags and was run for 4000 cycles; four different rates of mutation of offspring dictionaries were assessed. The accuracy for coding the words in the corpora of language addressed to the children averaged 92.74%. Directions for further development and evaluation of the model are proposed.
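The adaptation-and-selection loop summarized above can be sketched in a few lines. The following is a toy illustration under invented data, not the thesis's model: candidate word-to-tag dictionaries compete on how well they tag a (here, tiny and hand-made) corpus, survivors reproduce, and offspring mutate at a configurable rate, mirroring the mutation rates the study varied.

import random

TAGS = ["noun", "verb", "adj", "other"]                    # placeholder tagset
CORPUS = [("dog", "noun"), ("runs", "verb"),
          ("big", "adj"), ("the", "other")]                # placeholder caregiver corpus

def fitness(dictionary):
    # Fraction of corpus tokens the dictionary tags correctly.
    return sum(dictionary.get(w) == t for w, t in CORPUS) / len(CORPUS)

def mutate(dictionary, rate):
    child = dict(dictionary)
    for word, _ in CORPUS:
        if random.random() < rate:                         # re-guess this word's tag
            child[word] = random.choice(TAGS)
    return child

def evolve(pop_size=50, cycles=4000, rate=0.05):
    population = [mutate({}, 1.0) for _ in range(pop_size)]  # random initial dictionaries
    for _ in range(cycles):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]            # selection
        population = survivors + [mutate(random.choice(survivors), rate)
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)

The reported 92.74% accuracy suggests the real fitness landscape is far less forgiving than this toy one, where the single optimum is found quickly; varying `rate` is the direct analogue of the four mutation rates assessed in the thesis.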
42

Attallah, May. „Strategies of Information Acquisition Under Uncertainty“. Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1G023/document.

Annotation:
The objective of this thesis is to present four essays in behavioral and experimental economics on decision-making under risk and ambiguity. The first essay presents a synthesis and a point of view on the representativeness of experimental results regarding individual preferences (social preferences and risk and time preferences) in developed as well as developing countries. The second essay experimentally explores the effect of risk and ambiguity on job search behavior over an infinite horizon. The results show that, under both risk and ambiguity, reservation wages are lower than the theoretical values and decrease during the search process; likewise, subjects behave as ambiguity-neutral agents. The third and fourth essays examine the effect of the social context and of the correlation of payoffs on attitudes towards risk and ambiguity, respectively, in the gain, loss and mixed domains. The results show that the introduction of the social context has a significant effect on attitudes towards risk in all three domains, whereas the correlation of risks affects risk attitudes only in the mixed domain. Ambiguity attitudes vary across domains, and the correlation of payoffs decreases ambiguity aversion.
43

PEH, LIK CHUN. „DEVELOPMENT OF A MODELING AND SIMULATION TRAINING NEEDS MODEL FOR SELECTED DEFENSE ACQUISITION WORKFORCE COMMUNITIES“. Master's thesis, University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2868.

Annotation:
The DoD Modeling and Simulation Steering Committee (M&S SC) identified Modeling and Simulation (M&S) as an educational objective for the Acquisition, Technology and Logistics (AT&L) workforce. Notably, past uses of M&S in system acquisitions, in both DoD and commercial industry, have demonstrated improvements in efficiency and effectiveness over traditional acquisition techniques. However, to achieve expected and consistent performance by this workforce in these new techniques, the essential M&S skill requirements for this workforce may be extensive. This research aims to validate the content and level of competency in selected M&S tools and technology necessary for consistent workforce performance. The notion here is to achieve greater efficiency and effectiveness in the acquisition process through thresholds of competency that must be resident in, or available to, the acquisition workforce. This research proposes a matrix of training objectives and levels of competency for portions of the AT&L workforce that was validated through a survey of individuals who are leading experts in both M&S and acquisition. This effort combines learning objectives and parameters rigorously defined by academia with practical learning insights from military and industry ground perspectives. The resultant Joint Learning Model aims to identify the workforce educational foundations necessary to achieve more widespread efficiency and effectiveness in current and future DoD acquisitions.
M.S.
Department of Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering MS
44

Rubinstein, Judith. „Study of the Light Utility Helicopter (LUH) acquisition program as a model for defense acquisition of non-developmental items“. Thesis, Monterey, California: Naval Postgraduate School, 2014. http://hdl.handle.net/10945/44655.

Annotation:
Approved for public release; distribution is unlimited
The UH-72A Light Utility Helicopter (LUH) was acquired for performance of general support tasks (training, medical evacuation, law enforcement, etc.) in permissive (non-combat) environments, to replace Vietnam-era helicopters, and to free up Black Hawk UH-60 helicopters for combat use. This acquisition program is the Army’s first major acquisition of commercially available helicopters subsequently modified for military use. Although initial testing and use indicated the need for unforeseen modifications to the helicopters, in most respects this program was successful. The successes included expeditious acquisition and fielding, avoidance of excessive costs, and acquisition of helicopters that incorporated the latest available technology (developed at industry, not government, expense). Additionally, the helicopters could be, and were, readily tailored for diverse uses, and they fully satisfied users’ requirements. Finally, all deliveries were on time or ahead of schedule. These successes occurred largely because the UH-72A was a non-developmental item with mature technology at the time of acquisition. The time and expense that would otherwise have been needed for development and for ramp-up of production were avoided. Additional factors contributing to the success of the program were clear definition of the requirement, avoidance of scope creep, and close cooperation among all stakeholders.
45

Warren, Robert Henry. „Domain knowledge acquisition from solid waste management models using rough sets“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ60259.pdf.

46

Taylor, Conrad F. „The acquisition of phonemic constraints : implications for models of phonological encoding“. Thesis, Bangor University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.409651.

47

Fong, Philip. „Data-based models for deformable objects : sensing, acquisition, and interactive playback“. May be available electronically, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

48

Dillon, Andrew. „Knowledge acquisition and conceptual models: A Cognitive analysis of the interface“. Cambridge: Cambridge University Press, 1987. http://hdl.handle.net/10150/106468.

Annotation:
This item is not the definitive copy. Please use the following citation when referencing this material: Dillon, A. (1987) Knowledge acquisition and conceptual models: a cognitive analysis of the interface. In: D. Diaper and R. Winder (eds.) People and Computers III. Cambridge: Cambridge University Press, 371-379. Abstract: Understanding how users process the information available to them through the computer interface can greatly enhance our ability to design usable systems. This paper details the results of a longitudinal psychological experiment investigating the effect of interface style on user performance, knowledge acquisition and conceptual model development. Through the use of standard performance measures, interactive error scoring and protocol analysis techniques it becomes possible to identify crucial psychological factors in successful human-computer use. Results indicate that a distinction between "deep" and "shallow" knowledge of system functioning can be drawn, where both types of user appear to interact identically with the machine although significant differences in their respective knowledge exist. The effect of these differences on users' ability to perform under stress and to transfer to similar systems is noted. Implications for the design of usable systems are discussed.
49

Furlan, Benjamin, Harald Oberhofer und Hannes Winner. „A Note on Merger and Acquisition Evaluation“. Oxford University Press, 2016. http://dx.doi.org/10.1093/icc/dtv033.

Annotation:
This note proposes the continuous treatment approach as a valuable alternative to propensity score matching for evaluating the economic effects of mergers and acquisitions (M&As). This framework allows the variation in treatment intensities to be considered explicitly, and it does not call for an arbitrary definition of cutoff values in traded ownership shares to construct a binary treatment indicator. We demonstrate the usefulness of this approach using data from European M&As and by relying on the example of post-M&A employment effects. The empirical exercise reveals some heterogeneities over the whole distribution of acquired ownership shares and across different types of M&As and country groups.
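To see what the continuous-treatment approach involves in practice, here is a minimal sketch in the spirit of a Hirano-Imbens generalized propensity score estimator, one standard way to implement a continuous-treatment dose-response analysis. It is not the authors' code: it assumes a normal linear treatment model and a quadratic outcome model purely for illustration, and all variable names are invented.

import numpy as np

def dose_response(X, t, y, grid):
    """X: (n, k) firm covariates; t: (n,) acquired ownership share;
    y: (n,) outcome, e.g. post-M&A employment growth.
    Returns the estimated average outcome at each dose in `grid`."""
    n = len(t)
    Xc = np.column_stack([np.ones(n), X])
    # Stage 1: treatment model t | X ~ N(Xc @ beta, sigma2); the generalized
    # propensity score (GPS) is the conditional density of the observed dose.
    beta, *_ = np.linalg.lstsq(Xc, t, rcond=None)
    resid = t - Xc @ beta
    sigma2 = resid @ resid / n
    gps = np.exp(-resid**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    # Stage 2: flexible outcome model in treatment and GPS.
    def feats(tt, rr):
        return np.column_stack([np.ones(len(tt)), tt, tt**2, rr, rr**2, tt * rr])
    gamma, *_ = np.linalg.lstsq(feats(t, gps), y, rcond=None)
    # Stage 3: average the fitted outcome over the sample at each dose.
    curve = []
    for dose in grid:
        r = np.exp(-(dose - Xc @ beta)**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
        curve.append(feats(np.full(n, dose), r) @ gamma)
    return np.array([c.mean() for c in curve])

Unlike a matched binary comparison, the resulting curve can reveal exactly the kind of heterogeneity over the distribution of acquired ownership shares that the note reports.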
50

Martin, Samuel, Jackeline Mayer, Parker Owan, Kyle Stephens und Lee Suring. „NASA Remote Imaging System Acquisition (RISA) Multispectral Imager Development Updates“. International Foundation for Telemetering, 2012. http://hdl.handle.net/10150/581742.

Annotation:
ITC/USA 2012 Conference Proceedings / The Forty-Eighth Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2012 / Town and Country Resort & Convention Center, San Diego, California
The NASA Remote Imaging System Acquisition (RISA) project is a prototype camera intended to be used by future NASA astronauts. NASA has commissioned the development of this engineering camera to support new mission objectives and perform multiple functions. These objectives require the final prototype to be radiation hardened, multispectral, completely wireless in data transmission and communication, and capable of taking high-quality still images. This year's team successfully developed an optical system that uses a liquid lens element for focus adjustment. The electrical system uses an Overo Fire computer-on-module (COM) developed by Gumstix. The onboard OMAP processor handles all communication with a monochromatic CMOS sensor, the liquid lens control circuitry, pixel data acquisition and processing, and wireless communication with a host computer.