To view other types of publications on this topic, follow the link: Data and human knowledge learning.

Dissertations on the topic "Data and human knowledge learning"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "Data and human knowledge learning".

Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse dissertations from a wide range of disciplines and organise your bibliography correctly.

1

McKay, Elspeth. "Instructional strategies integrating cognitive style construct: A meta-knowledge processing model." Deakin University. School of Computing and Mathematics, 2000. http://tux.lib.deakin.edu.au./adt-VDU/public/adt-VDU20061011.122556.

Full text source
Abstract:
The overarching goal of this dissertation was to evaluate the contextual components of instructional strategies for the acquisition of complex programming concepts. A meta-knowledge processing model is proposed on the basis of the research findings, thereby facilitating the selection of media treatment for electronic courseware. When implemented, this model extends the work of Smith (1998), as a front-end methodology, for his glass-box interpreter called Bradman, for teaching novice programmers. Technology now provides the means to produce individualized instructional packages with relative ease. Multimedia and Web courseware development accentuate a highly graphical (or visual) approach to instructional formats. Typically, little consideration is given to the effectiveness of screen-based visual stimuli, and curiously, students are expected to be visually literate, despite the complexity of human-computer interaction. Visual literacy is much harder for some people to acquire than for others (see Chapter Four: Conditions-of-the-Learner). An innovative research programme was devised to investigate the interactive effect of instructional strategies, enhanced with text-plus-textual metaphors or text-plus-graphical metaphors, and cognitive style, on the acquisition of a special category of abstract (process) programming concepts. This type of concept was chosen to focus on the role of analogic knowledge involved in computer programming. The results are discussed within the context of the internal/external exchange process, drawing on Ritchey's (1980) concepts of within-item and between-item encoding elaborations. The methodology developed for the doctoral project integrates earlier research knowledge in a novel, interdisciplinary, conceptual framework: instructional science in the USA supplied the concept learning models; British cognitive psychology and human memory research defined the cognitive style construct; and Australian educational research provided the measurement tools for instructional outcomes. The experimental design consisted of a screening test to determine cognitive style, a pretest to determine prior domain knowledge of abstract programming knowledge elements, the instruction period, and a post-test to measure improved performance. This research design provides a three-level discovery process that articulates: 1) the fusion of strategic knowledge required by the novice learner for dealing with contexts within instructional strategies; 2) the acquisition of knowledge using measurable instructional outcomes and learner characteristics; and 3) knowledge of the innate environmental factors which influence instructional outcomes. This research has successfully identified the interactive effect of instructional strategy, within an individual's cognitive style construct, on the acquisition of complex programming concepts. However, the significance of the three-level discovery process lies in the scope of the methodology to inform the design of a meta-knowledge processing model for instructional science. Firstly, the British cognitive style testing procedure is a low-cost, user-friendly computer application that effectively measures an individual's position on the two cognitive style continua (Riding & Cheema, 1991). Secondly, the QUEST Interactive Test Analysis System (Izard, 1995) allows for a probabilistic determination of an individual's knowledge level, relative to other participants and relative to test-item difficulties. Test-items can be related to skill levels and can consequently be used by instructional scientists to measure knowledge acquisition. Finally, an Effect Size Analysis (Cohen, 1977) allows for a direct comparison between treatment groups, giving a statistical measurement of how large an effect the independent variables have on the dependent outcomes. Combined with QUEST's hierarchical positioning of participants, this tool can assist in identifying preferred learning conditions for the evaluation of treatment groups. By combining these three assessment analysis tools in instructional research, a computerized learning shell customised for individuals' cognitive constructs can be created (McKay & Garner, 1999). While this approach has widespread application, individual researchers/trainers would nonetheless need to validate the interactive effects within their specific learning domain with an extensive pilot study programme (McKay, 1999a; McKay, 1999b). Furthermore, the instructional material need not be limited to a textual/graphical comparison, but could be applied to any two or more instructional treatments of any kind, for instance a structured versus an exploratory strategy. The possibilities and combinations are believed to be endless, provided the focus is maintained on linking the front-end identification of cognitive style with an improved performance outcome. My in-depth analysis provides a better understanding of the interactive effects of the cognitive style construct and instructional format on the acquisition of abstract concepts involving spatial relations and logical reasoning. In providing the basis for a meta-knowledge processing model, this research is expected to be of interest to educators, cognitive psychologists, communications engineers and computer scientists specialising in human-computer interaction.
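As an illustration of the effect-size comparison mentioned above, here is a minimal Python sketch of Cohen's d, the standardized mean difference between two treatment groups; the score arrays are invented placeholders, not data from the dissertation:

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Standardized mean difference between two treatment groups (Cohen, 1977)."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    n_a, n_b = len(a), len(b)
    # Pooled standard deviation, weighting each group's sample variance
    # by its degrees of freedom (n - 1).
    pooled_sd = np.sqrt(((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1))
                        / (n_a + n_b - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical post-test scores for two instructional treatments.
textual = [62, 70, 68, 75, 71, 66]
graphical = [74, 80, 77, 83, 79, 76]
print(cohens_d(graphical, textual))  # |d| > 0.8 counts as a large effect
```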
APA, Harvard, Vancouver, ISO, and other styles
2

Pomponio, Laura. "Definition of a human-machine learning process from timed observations : application to the modelling of human behaviourfor the detection of abnormal behaviour of old people at home." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4358.

Full text source
Abstract:
Knowledge acquisition has traditionally been approached either from a primarily people-driven perspective, through Knowledge Engineering and Management, or from a primarily data-driven perspective, through Knowledge Discovery in Databases, rather than from an integral standpoint. This thesis therefore proposes a human-machine learning approach that combines a Knowledge Engineering modelling approach called TOM4D (Timed Observation Modelling For Diagnosis) with a process of Knowledge Discovery in Databases based on an automatic data mining technique called TOM4L (Timed Observation Mining For Learning). The combination and comparison of models obtained through TOM4D and through TOM4L is possible because both are based on the Theory of Timed Observations and share the same representation formalism. Consequently, a learning process nourished with experts' knowledge and with knowledge discovered in data is defined in the present work. In addition, this dissertation puts forward a theoretical framework of abstraction levels, in line with the mentioned theory and inspired by Newell's Knowledge Level work, in order to reduce the broad gap in semantic content between data about an observed process in a database and what can be inferred at a higher level, that is, at the experts' discursive level. The human-machine learning approach, along with the notion of abstraction levels, is then applied to the modelling of human behaviour in smart environments, in particular the modelling of elderly people's behaviour at home in the GerHome Project of the CSTB (Centre Scientifique et Technique du Bâtiment) of Sophia Antipolis, France.
APA, Harvard, Vancouver, ISO, and other styles
3

Gaspar, Paulo Miguel da Silva. "Computational methods for gene characterization and genomic knowledge extraction." Doctoral thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/13949.

Full text source
Abstract:
Joint MAPi doctoral programme in Computer Science
Motivation: Medicine and the health sciences are changing from the classical symptom-based paradigm to a more personalized, genetics-based one, with an invaluable impact on health care. While advances in genetics were already contributing significantly to our knowledge of the human organism, the breakthroughs achieved by several recent initiatives provided a comprehensive characterization of human genetic differences, paving the way for a new era of medical diagnosis and personalized medicine. Data generated from these and subsequent experiments are now becoming available, but their volume is well beyond what is humanly feasible to explore. It is therefore the responsibility of computer scientists to create the means for extracting the information and knowledge contained in those data. Within the available data, genetic structures contain significant amounts of encoded information that has been uncovered over the past decades. Finding, reading and interpreting that information are necessary steps for building computational models of genetic entities, organisms and diseases; a goal that in due course leads to human benefits. Aims: Numerous patterns can be found within the human variome and exome. Exploring these patterns enables the computational analysis and manipulation of digital genomic data, but requires specialized algorithmic approaches. In this work we sought to create and explore efficient methodologies to computationally calculate and combine known biological patterns for various purposes, such as the in silico optimization of genetic structures, the analysis of human genes, and the prediction of pathogenicity from human genetic variants. Results: We devised several computational strategies to evaluate genes, explore genomes, manipulate sequences, and analyze patients' variomes. By resorting to combinatorial and optimization techniques we created and combined sequence redesign algorithms to control genetic structures; by combining access to several web services and external resources we created tools to explore and analyze available genetic data and patient data; and by using machine learning we developed a workflow for analyzing human mutations and predicting their pathogenicity.
APA, Harvard, Vancouver, ISO, and other styles
4

Zeni, Mattia. "Bridging Sensor Data Streams and Human Knowledge." Doctoral thesis, University of Trento, 2017. http://eprints-phd.biblio.unitn.it/2724/1/Thesis.pdf.

Full text source
Abstract:
Generating useful knowledge out of personal big data in the form of sensor streams is a difficult task that presents multiple challenges due to the intrinsic characteristics of these data, namely their volume, velocity, variety and noisiness. This is a well-known, long-standing problem in computer science called the Semantic Gap Problem. It was originally defined in the research area of image processing as "... the lack of coincidence between the information that one can extract from the visual data and the interpretation that the same data have for a user in a given situation..." [Smeulders et al., 2000]. In the context of this work, the lack of coincidence is between low-level raw streaming sensor data collected by sensors in a machine-readable format and the higher-level semantic knowledge that can be generated from these data, which only humans can understand thanks to their intelligence, habits and routines. This thesis addresses the semantic gap problem in the context above, proposing an interdisciplinary approach able to generate human-level knowledge from streaming sensor data in open domains. It draws on two research fields, one concerning the collection, management and analysis of big data, and the other semantic computing, focused on ontologies; these map respectively to the two elements of the semantic gap mentioned above. The contributions of this thesis are:
• The definition of a methodology based on the idea that the user and the world surrounding them can be modeled, defining most of the elements of their context as entities (locations, people and objects, among others, and the relations among them), together with attributes for each of them. The modeling aspects of this ontology are outside the scope of this work. Given such a structure, the task of bridging the semantic gap is divided into many less complex, modular and compositional micro-tasks, which consist in mapping the streaming sensor data, using contextual information, to the attribute values of the corresponding entities (a minimal code sketch of this mapping idea follows this list). In this way a structure is created out of the unstructured, noisy and highly variable sensor data, which the machine can then use to provide personalized, context-aware services to the final user;
• The definition of a reference architecture that applies the methodology above and addresses the semantic gap problem in streaming sensor data;
• The instantiation of the architecture above in the Stream Base System (SB), resulting in the implementation of its main components using state-of-the-art software solutions and technologies;
• The adoption of the Stream Base System in four use cases with very different objectives, proving that it works in open domains.
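As referenced in the first contribution above, here is a minimal Python sketch of turning a raw sensor reading into the attribute value of a context entity; the entity, sensor names and mapping rules are hypothetical illustrations, not taken from the thesis:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A context entity (e.g. the user) with semantic attributes."""
    name: str
    attributes: dict = field(default_factory=dict)

def map_reading(entity: Entity, sensor: str, value: float) -> None:
    # One modular "micro-task": a sensor-specific rule that writes a
    # semantic attribute value instead of storing the raw number.
    if sensor == "gps_speed_mps":
        entity.attributes["moving"] = value > 0.5
    elif sensor == "ambient_light_lux":
        entity.attributes["location_lit"] = value > 100

me = Entity("user")
map_reading(me, "gps_speed_mps", 1.4)  # a raw stream datum ...
print(me.attributes)                   # ... becomes {'moving': True}
```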
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Ping. "Learning from Multiple Knowledge Sources." Diss., Temple University Libraries, 2013. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/214795.

Full text source
Abstract:
Computer and Information Science
Ph.D.
In supervised learning, it is usually assumed that true labels are readily available from a single annotator or source. However, recent advances in corroborative technology have given rise to situations where the true label of the target is unknown. In such problems, multiple sources or annotators are often available that provide noisy labels of the targets. In these multi-annotator problems, building a classifier in the traditional single-annotator manner, without regard for the annotator properties, may not be effective in general. In recent years, how to make the best use of the labeling information provided by multiple annotators to approximate the hidden true concept has drawn the attention of researchers in machine learning and data mining. In our previous work, a probabilistic method (the MAP-ML algorithm) was developed that iteratively evaluates the different annotators and estimates the hidden true labels. However, the method assumes the error rate of each annotator is consistent across all the input data. This is an impractical assumption in many cases, since annotator knowledge can fluctuate considerably depending on the groups of input instances. In this dissertation, one of our proposed methods, the GMM-MAPML algorithm, follows MAP-ML but relaxes the data-independent assumption, i.e., we assume an annotator may not be consistently accurate across the entire feature space. GMM-MAPML uses a Gaussian mixture model (GMM) and the Bayesian information criterion (BIC) to find the fittest model to approximate the distribution of the instances. Then the maximum a posteriori (MAP) estimation of the hidden true labels and the maximum-likelihood (ML) estimation of the quality of multiple annotators at each Gaussian component are provided alternately. Recent studies show that employing more annotators regardless of their expertise does not necessarily improve the aggregated performance. In this dissertation, we also propose a novel algorithm to integrate multiple annotators by Aggregating Experts and Filtering Novices, which we call AEFN. AEFN iteratively evaluates annotators, filters out the low-quality annotators, and re-estimates the labels based only on information obtained from the good annotators. The noisy annotations we integrate come from any combination of humans and previously existing machine-based classifiers, so AEFN can be applied to many real-world problems. Emotional speech classification, CASP9 protein disorder prediction, and biomedical text annotation experiments show a significant performance improvement of the proposed methods (GMM-MAPML and AEFN) compared to the majority-voting baseline and the previous data-independent MAP-ML method. Recent experiments include predicting novel drug indications (i.e., drug repositioning) for both approved drugs and new molecules by integrating multiple chemical, biological or phenotypic data sources.
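For context, the model-selection step that GMM-MAPML relies on can be sketched in a few lines: fit Gaussian mixtures of increasing size and keep the one with the lowest BIC. The synthetic two-cluster data and the candidate range below are illustrative, not from the dissertation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

best_k, best_bic = None, np.inf
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bic = gmm.bic(X)  # lower BIC = better fit/complexity trade-off
    if bic < best_bic:
        best_k, best_bic = k, bic

# Annotator quality would then be estimated separately within each of
# the best_k components, instead of once across the whole feature space.
print(best_k)  # expected: 2 for this two-cluster sample
```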
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
6

Lazzarini, Nicola. "Knowledge extraction from biomedical data using machine learning." Thesis, University of Newcastle upon Tyne, 2017. http://hdl.handle.net/10443/3839.

Full text source
Abstract:
Thanks to the breakthroughs in biotechnology that have occurred in recent years, biomedical data is accumulating at a previously unseen pace. In the field of biomedicine, decades-old statistical methods are still commonly used to analyse such data. However, the simplicity of these approaches often limits the amount of useful information that can be extracted from the data. Machine learning methods represent an important alternative due to their ability to capture complex patterns within the data that are likely missed by simpler methods. This thesis focuses on the extraction of useful knowledge from biomedical data using machine learning. Within the biomedical context, the vast majority of machine learning applications focus their effort on the generation and validation of prediction models. Rarely are the inferred models used to discover meaningful biomedical knowledge. The work presented in this thesis goes beyond this scenario and devises new methodologies to mine machine learning models for the extraction of useful knowledge. The thesis targets two important and challenging biomedical analytic tasks: (1) the inference of biological networks and (2) the discovery of biomarkers. The first task aims to identify associations between different biological entities, while the second tries to discover sets of variables that are relevant for specific biomedical conditions. Successful solutions to both problems rely on the ability to recognise complex interactions within the data, hence the use of multivariate machine learning methods. The network inference problem is addressed with FuNeL: a protocol to generate networks based on the analysis of rule-based machine learning models. The second task, biomarker discovery, is studied with RGIFE, a heuristic that exploits the information extracted from machine learning models to guide its search for minimal subsets of variables. The extensive analysis conducted for this dissertation shows that the networks inferred with FuNeL capture relevant knowledge complementary to that extracted by standard inference methods, and that the associations defined by FuNeL are more pertinent in a disease context. The biomarkers selected by RGIFE are found to be disease-relevant and to have high predictive power. When applied to osteoarthritis data, RGIFE confirmed the importance of previously identified biomarkers, whilst also extracting novel biomarkers with possible future clinical applications. Overall, the thesis presents new effective methods to leverage the information, often left buried, encapsulated within machine learning models, and to discover useful biomedical knowledge.
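To make the model-guided biomarker search concrete, below is a simplified Python sketch of the general idea behind heuristics such as RGIFE: repeatedly drop the least informative variable as long as cross-validated performance does not degrade. This is an illustrative stand-in, not the actual RGIFE procedure; the dataset, tolerance and stopping size are assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=40, n_informative=5,
                           random_state=0)
kept = np.arange(X.shape[1])
baseline = cross_val_score(RandomForestClassifier(random_state=0),
                           X, y, cv=5).mean()

while len(kept) > 5:
    model = RandomForestClassifier(random_state=0).fit(X[:, kept], y)
    # Drop the feature the model considers least important.
    trial = kept[np.argsort(model.feature_importances_)[1:]]
    score = cross_val_score(RandomForestClassifier(random_state=0),
                            X[:, trial], y, cv=5).mean()
    if score < baseline - 0.02:  # tolerance before stopping
        break
    kept, baseline = trial, max(baseline, score)

print(len(kept), "candidate biomarkers kept")
```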
APA, Harvard, Vancouver, ISO, and other styles
7

Lipton, Zachary C. "Learning from Temporally-Structured Human Activities Data." Thesis, University of California, San Diego, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10683703.

Full text source
Abstract:

Despite the extraordinary success of deep learning on diverse problems, these triumphs are too often confined to large, clean datasets and well-defined objectives. Face recognition systems train on millions of perfectly annotated images. Commercial speech recognition systems train on thousands of hours of painstakingly-annotated data. But for applications addressing human activity, data can be noisy, expensive to collect, and plagued by missing values. In electronic health records, for example, each attribute might be observed on a different time scale. Complicating matters further, deciding precisely what objective warrants optimization requires critical consideration of both algorithms and the application domain. Moreover, deploying human-interacting systems requires careful consideration of societal demands such as safety, interpretability, and fairness.

The aim of this thesis is to address the obstacles to mining temporal patterns in human activity data. The primary contributions are: (1) the first application of RNNs to multivariate clinical time series data, with several techniques for bridging long-term dependencies and modeling missing data; (2) a neural network algorithm for forecasting surgery duration while simultaneously modeling heteroscedasticity; (3) an approach to quantitative investing that uses RNNs to forecast company fundamentals; (4) an exploration strategy for deep reinforcement learners that significantly speeds up dialogue policy learning; (5) an algorithm to minimize the number of catastrophic mistakes made by a reinforcement learner; (6) critical works addressing model interpretability and fairness in algorithmic decision-making.
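Contribution (1) above pairs an RNN with explicit modeling of missing values; a minimal PyTorch sketch of the missingness-indicator idea is below. The dimensions, the zero-imputation choice and the architecture details are illustrative assumptions, not the thesis's exact models:

```python
import torch
import torch.nn as nn

class ClinicalLSTM(nn.Module):
    def __init__(self, n_vars: int, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        # Input = measurements plus one "was this observed?" flag per variable,
        # so missingness itself becomes a learnable signal.
        self.lstm = nn.LSTM(2 * n_vars, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, values, observed):
        # values: (batch, time, n_vars), zeros where unobserved
        # observed: same shape, 1.0 where a measurement exists
        x = torch.cat([values * observed, observed], dim=-1)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])  # classify from the last hidden state

model = ClinicalLSTM(n_vars=13)
vals = torch.randn(4, 48, 13)            # e.g. 48 hourly time steps
mask = (torch.rand(4, 48, 13) > 0.5).float()
print(model(vals, mask).shape)           # torch.Size([4, 2])
```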

APA, Harvard, Vancouver, ISO, and other styles
8

Varol, Gül. "Learning human body and human action representations from visual data." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEE029.

Full text source
Abstract:
The focus of visual content is often people. Automatic analysis of people from visual data is therefore of great importance for numerous applications in content search, autonomous driving, surveillance, health care, and entertainment. The goal of this thesis is to learn visual representations for human understanding. Particular emphasis is given to two closely related areas of computer vision: human body analysis and human action recognition. In summary, our contributions are the following: (i) we generate photo-realistic synthetic data for people that allows training CNNs for human body analysis, (ii) we propose a multi-task architecture to recover a volumetric body shape from a single image, (iii) we study the benefits of long-term temporal convolutions for human action recognition using 3D CNNs, (iv) we incorporate similarity training in multi-view videos to design view-independent representations for action recognition.
APA, Harvard, Vancouver, ISO, and other styles
9

Kaithi, Bhargavacharan Reddy. "Knowledge Graph Reasoning over Unseen RDF Data." Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1571955816559707.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
10

Toussaint, Ben-Manson. "Apprentissage automatique à partir de traces multi-sources hétérogènes pour la modélisation de connaissances perceptivo-gestuelles." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM063/document.

Full text source
Abstract:
Perceptual-gestural knowledge is multimodal: it combines theoretical, perceptual and gestural knowledge. It is difficult to capture in Intelligent Tutoring Systems. In fact, capturing it in such systems involves the use of multiple devices or sensors covering all the modalities of the underlying interactions. The "traces" of these interactions, also referred to as "activity traces", are the raw material for the production of key tutoring services that take their multimodal nature into account. Learning analytics methods and tutoring services that favor one facet of this knowledge over the others are incomplete. However, the use of diverse devices generates heterogeneous activity traces, which are hard to model and treat. My doctoral project addresses the challenge of producing tutoring services that are congruent with this type of knowledge. I am specifically interested in this type of knowledge in the context of ill-defined domains. My research case study is the Intelligent Tutoring System TELEOS, a simulation platform dedicated to percutaneous orthopedic surgery. The contributions of this thesis are threefold: (1) the formalization of perceptual-gestural interaction sequences; (2) the implementation of tools capable of reifying the proposed conceptual model; (3) the conception and implementation of algorithmic tools fostering the analysis of these sequences from a didactic point of view.
APA, Harvard, Vancouver, ISO, and other styles
11

Buehner, Marc. "Delay and knowledge mediation in human causal reasoning." Thesis, University of Sheffield, 2002. http://etheses.whiterose.ac.uk/3418/.

Full text source
Abstract:
Contemporary theories of causal induction have focussed largely on the question of how evidence in the form of covariations between causes and effects is used to compute measures of causal strength. A very important precursor enabling such computations is that the reasoner notices that a cause and effect have co-occurred. Standard laboratory experiments have usually bypassed this problem by presenting participants directly with covariational information. As a result, relatively little is known about how humans identify causal relations in real time. What evidence exists, however, paints a rather unflattering picture of human causal induction and converges to the conclusion that humans cannot identify causal relations if cause and effect are separated by more than a few seconds. Associative learning theory has interpreted these findings to indicate that temporal contiguity is essential to causal inference. I argue instead that contiguity is not essential, but that the influence of time in causal inference is crucially dependent on people's beliefs and expectations about the timeframe of the causal relation in question. First I demonstrate that humans are capable of dissociating temporal contiguity from causal strength; more specifically, they can learn that a given event exerts a stronger causal influence when it is temporally separated from the effect than when it is contiguous with it. Then I re-investigate a paradigm commonly used to study the effects of delay on human causal induction. My experiments employed one crucial additional manipulation regarding participants' awareness of potential delays. This manipulation was sufficient to reduce the detrimental effects of delay. Three other experiments employed a similar strategy, but relied on implicit instructions about the timeframe of the causal relation in question. Overall, results support the hypothesis that knowledge mediates the timeframe of covariation assessment in human causal induction. Implications for associative learning and causal power theories are discussed.
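The covariation-based measures of causal strength that this abstract contrasts with knowledge-mediated accounts can be stated compactly. Below is a short Python sketch of two standard indices, the ΔP rule and Cheng's (1997) causal power, which the closing reference to "causal power theories" alludes to; the toy trial counts are invented:

```python
def delta_p(e_given_c, e_given_not_c):
    # Delta-P rule: P(e|c) - P(e|not c)
    return e_given_c - e_given_not_c

def causal_power(e_given_c, e_given_not_c):
    # Cheng's generative causal power: Delta-P rescaled by the headroom
    # the effect's base rate leaves for the cause to act on.
    return (e_given_c - e_given_not_c) / (1.0 - e_given_not_c)

# Toy data: effect occurs on 16/20 trials with the cause, 4/20 without.
p_ec, p_enc = 16 / 20, 4 / 20
print(delta_p(p_ec, p_enc))       # 0.6
print(causal_power(p_ec, p_enc))  # 0.75
```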
APA, Harvard, Vancouver, ISO, and other styles
12

Borchmann, Daniel. "Learning Terminological Knowledge with High Confidence from Erroneous Data." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-152028.

Full text source
Abstract:
Description logic knowledge bases are a popular approach to representing terminological and assertional knowledge in a form suitable for computers to work with. Despite that, the practicality of description logics is impaired by the difficulties one has to overcome to construct such knowledge bases. Previous work has addressed this issue by providing methods to learn valid terminological knowledge from data, making use of ideas from formal concept analysis. A basic assumption there is that the data is free of errors, an assumption that in general cannot be made for practical applications. This thesis presents extensions of these results that make it possible to handle errors in the data. For this, knowledge that is "almost valid" in the data is retrieved, where the notion of "almost valid" is formalized using the notion of confidence from data mining. This thesis presents two algorithms that achieve this retrieval: the first simply extracts all almost valid knowledge from the data, while the second utilizes expert interaction to distinguish errors from rare but valid counterexamples.
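The data-mining notion of confidence the thesis builds on is easy to state: for an implication A → B over a data table, it is the fraction of rows satisfying A that also satisfy B; an implication is "almost valid" when this value exceeds a chosen threshold. A minimal Python sketch with an invented toy dataset:

```python
def confidence(rows, premise, conclusion):
    # rows: iterable of attribute sets; premise/conclusion: attribute sets
    support_a = [r for r in rows if premise <= r]
    if not support_a:
        return 1.0  # vacuously valid: nothing satisfies the premise
    return sum(1 for r in support_a if conclusion <= r) / len(support_a)

# Toy data: the last row is an erroneous entry missing "mammal".
data = [{"cat", "mammal"}, {"dog", "mammal"}, {"cat"}]
print(confidence(data, {"cat"}, {"mammal"}))  # 0.5; 1.0 on error-free data
```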
APA, Harvard, Vancouver, ISO, and other styles
13

Zhang, Shanshan. "Deep Learning for Unstructured Data by Leveraging Domain Knowledge." Diss., Temple University Libraries, 2019. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/580099.

Full text source
Abstract:
Computer and Information Science
Ph.D.
Unstructured data such as texts, strings, images, audio and video are everywhere, owing to social interaction on the Internet and high-throughput technology in the sciences, e.g., chemistry and biology. However, for traditional machine learning algorithms, classifying a text document is far more difficult than classifying a data entry in a spreadsheet. We have to convert the unstructured data into numeric vectors that can then be understood by machine learning algorithms. For example, a sentence is first converted to a vector of word counts, and then fed into a classification algorithm such as logistic regression or a support vector machine (a minimal sketch of this baseline follows below). The creation of such numerical vectors is very challenging. Recent progress in deep learning provides a new way to jointly learn features and train classifiers for unstructured data. For example, recurrent neural networks have proved successful at learning from sequences of word indices; convolutional neural networks are effective at learning from videos, which are sequences of pixel matrices. Our research focuses on developing novel deep learning approaches for text and graph data. Breakthroughs using deep learning have been made during the last few years for many core tasks in natural language processing, such as machine translation, POS tagging and named entity recognition. However, when it comes to informal and noisy text data, such as tweets, HTML and OCR output, there are two major issues with modern deep learning technologies. First, deep learning requires a large amount of labeled data to train an effective model; second, neural network architectures that work with natural language are not well suited to informal text. In this thesis, we address these two important issues and develop new deep learning approaches for four supervised and unsupervised tasks with noisy text. We first present a deep feature engineering approach for discovering informative tweets during emerging disasters. We propose to use unlabeled microblogs to cluster words into a limited number of clusters and to use the word clusters as features for tweet discovery. Our results indicate that when the number of labeled tweets is 100 or fewer, the proposed approach is superior to standard classification based on the bag-of-words feature representation. We then introduce a human-in-the-loop (HIL) framework for entity identification from noisy web text. Our work explores ways to combine the expressive power of regular expressions with the ability of deep learning to learn from large data in a new integrated framework for entity identification from web data. The evaluation on several entity identification problems shows that the proposed framework achieves very high accuracy while requiring only modest human involvement. We further extend the framework of entity identification to an iterative HIL framework that addresses the entity recognition problem, and investigate in particular how humans invest their time when allowed to choose between regex construction and manual labeling. Finally, we address a fundamental problem in the text mining domain, i.e., the embedding of rare and out-of-vocabulary (OOV) words, by refining word embedding models and character embedding models in an iterative way. We illustrate the simplicity but effectiveness of our method by applying it to online professional profiles that allow noisy user input.
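As referenced above, here is the word-count baseline the paragraph describes, as a minimal scikit-learn sketch; the example texts and labels are made-up placeholders:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["flooding reported downtown", "great coffee this morning",
         "bridge closed due to storm", "lovely weather for a walk"]
labels = [1, 0, 1, 0]  # 1 = disaster-related, 0 = not

# Sentences -> word-count vectors -> linear classifier.
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["storm damage near the bridge"]))  # likely [1]
```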
Graph neural networks have shown great success in the domains of drug design and materials science, where organic molecules and the crystal structures of materials are represented as attributed graphs. A deep learning architecture that is capable of learning from graph nodes and graph edges is crucial for property estimation of molecules. In this dissertation, we propose a simple graph representation for molecules and three neural network architectures that are able to directly learn predictive functions from graphs. We find that graph networks are indeed superior to feature-driven algorithms for formation energy prediction; however, this superiority does not carry over to band gap prediction. We also find that our proposed simple shallow neural networks perform comparably with state-of-the-art deep neural networks.
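For orientation, a bare-bones sketch of learning from an attributed molecular graph: one round of neighbor averaging (message passing) followed by pooling to a whole-graph property prediction. Sizes, the single-layer design and the target property are illustrative assumptions, not the dissertation's architectures:

```python
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    def __init__(self, in_dim=8, hid=32):
        super().__init__()
        self.msg = nn.Linear(in_dim, hid)
        self.out = nn.Linear(hid, 1)  # e.g. a formation-energy regressor

    def forward(self, x, adj):
        # x: (n_atoms, in_dim) node features; adj: (n_atoms, n_atoms)
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        h = torch.relu(self.msg(adj @ x / deg))  # average neighbor features
        return self.out(h.mean(0))               # mean-pool to graph level

x = torch.randn(5, 8)                  # 5 atoms with 8 features each
adj = (torch.rand(5, 5) > 0.6).float()
adj = ((adj + adj.T) > 0).float()      # symmetric adjacency
print(TinyGNN()(x, adj))
```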
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
14

Allen, Brett. "Learning body shape models from real-world data /." Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/6969.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
15

Ali, Syed. "Towards Human-Like Automated Driving| Learning Spacing Profiles from Human Driving Data." Thesis, Wayne State University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10637971.

Full text source
Abstract:

For automated driving vehicles to be accepted by their users and to integrate safely with traffic involving human drivers, they need to act and behave like human drivers. This not only involves understanding how the human driver or occupant of the automated vehicle expects the vehicle to operate, but also how other road users perceive the automated vehicle's intentions. This research aimed at learning how drivers space themselves while driving around other vehicles. It is shown that an optimized lane change maneuver creates a solution that is much different from what a human would do. There is a need to learn complex driving preferences by studying human drivers.

This research fills the gap in learning human driving styles by providing an example of learned behavior (vehicle spacing) and the framework needed for encapsulating the learned data. A complete framework, from problem formulation to data gathering and learning from human driving data, was formulated as part of this research. On-road vehicle data were gathered while a human driver drove a vehicle. The driver was asked to make lane changes around stationary vehicles in his path under various road curvature conditions and speeds. The gathered data, as well as Learning from Demonstration techniques, were used in formulating the spacing profile as a lane change maneuver. A concise feature set that strongly represents a driver's spacing profile was identified from the captured data, and a model was developed. The learned model represented the driver's spacing profile around stationary vehicles within acceptable statistical tolerance. This work provides a methodology for many other scenarios from which human-like driving styles and related parameters can be learned and applied to automated vehicles.
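As a toy illustration of encoding a learned spacing profile, one could fit a smooth lateral-offset curve to demonstrated lane-change trajectories. The feature choice (longitudinal distance to the parked vehicle), the polynomial form and all numbers below are assumptions for illustration, not the thesis's model:

```python
import numpy as np

# Demonstrations: distance to the stationary vehicle (m) vs. lateral offset (m).
dist   = np.array([-40, -30, -20, -10,   0,  10,  20,  30])
offset = np.array([0.0, 0.1, 0.6, 1.4, 1.8, 1.4, 0.5, 0.0])

# Fit a smooth profile to the demonstrated offsets.
profile = np.polynomial.Polynomial.fit(dist, offset, deg=4)
print(profile(-15.0))  # predicted offset while approaching, roughly ~1 m
```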

APA, Harvard, Vancouver, ISO, and other styles
16

Kong, Shumin. "Towards Lightweight Neural Networks with Few Data via Knowledge Distillation." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/24650.

Full text source
Abstract:
The advancement of deep learning technology has concentrated on deploying end-to-end solutions using high-dimensional data, such as images. With the rapid increase in benchmark performance comes a significant resource requirement to train the network and make inferences with it. Deep learning models that achieve state-of-the-art benchmark results may require a huge amount of computing resources and data. To alleviate this problem, knowledge distillation with teacher-student learning has drawn much attention for compressing neural networks onto low-end edge devices, such as mobile phones and wearable watches. However, current teacher-student learning algorithms mainly assume that the complete dataset used for the teacher network is also available for training the student network. In real-world scenarios, users may only have access to part of the training examples due to commercial interests or data privacy, and severe over-fitting issues arise as a result. In this study, we tackle the challenge of learning student networks with few data by investigating the ground-truth data-generating distribution underlying these few data. Taking the Wasserstein distance as the measurement, we assume that this ideal data distribution lies in a neighborhood of the discrete empirical distribution induced by the training examples. We therefore propose to safely optimize the worst-case cost within this neighborhood to boost generalization. Furthermore, through theoretical analysis, we derive a novel and easy-to-implement loss for training the student network in an end-to-end fashion. Our analysis is empirically validated through experiments on benchmark datasets, and the results indicate the effectiveness of our proposed method.
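For readers unfamiliar with the teacher-student setup the thesis starts from, here is a minimal PyTorch sketch of the classic distillation loss (the thesis's own contribution, the Wasserstein-ball worst-case objective, is not reproduced here); the temperature and mixing weight are conventional illustrative choices:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.7):
    # Soft targets: match the teacher's tempered class distribution.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    # Hard targets: the usual cross-entropy on ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 10)                 # student outputs, 10 classes
t = torch.randn(8, 10)                 # teacher outputs
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y))
```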
APA, Harvard, Vancouver, ISO, and other styles
17

Grubinger, Thomas. "Knowledge Extraction from Logged Truck Data using Unsupervised Learning Methods." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-1147.

Full text source
Abstract:

The goal was to extract knowledge from data that is logged by the electronic system of every Volvo truck. This allowed the evaluation of large populations of trucks without requiring additional measuring devices and facilities. An evaluation cycle, similar to the knowledge discovery from databases model, was developed and applied to extract knowledge from data. The focus was on extracting information in the logged data that is related to the class labels of different populations, but knowledge extraction inherent to the given classes was also supported. The methods used come from the field of unsupervised learning, a sub-field of machine learning, and include self-organizing maps, multi-dimensional scaling and fuzzy c-means clustering. The developed evaluation cycle was exemplified by the evaluation of three data-sets. Two data-sets were arranged from populations of trucks differing by their operating environment regarding road condition or gross combination weight. The results showed that there is relevant information in the logged data that describes these differences in the operating environment. A third data-set consisted of populations with different engine configurations, making the two groups of trucks unequally powerful. Using the knowledge extracted in this task, engines that were sold in one of the two configurations and modified later could be detected. Information in the logged data that describes the vehicle's operating environment makes it possible to detect trucks that are operated differently from their intended use. Initial experiments to find such vehicles were conducted, and recommendations for an automated application were given.
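To illustrate one of the unsupervised methods named above, the sketch below uses multi-dimensional scaling to project per-truck feature vectors to 2-D, where populations from different operating environments can be inspected visually; the synthetic "logged data" is a stand-in for the real Volvo logs:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
smooth_roads = rng.normal(0.0, 1.0, (30, 12))  # 12 logged aggregates per truck
rough_roads = rng.normal(2.0, 1.0, (30, 12))
X = np.vstack([smooth_roads, rough_roads])

# Project to 2-D while preserving pairwise distances as well as possible.
coords = MDS(n_components=2, random_state=0).fit_transform(X)
print(coords.shape)  # (60, 2): ready for a scatter plot by population
```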

APA, Harvard, Vancouver, ISO, and other styles
18

Farrash, Majed. "Machine learning ensemble method for discovering knowledge from big data." Thesis, University of East Anglia, 2016. https://ueaeprints.uea.ac.uk/59367/.

Full text source
Abstract:
Big data, generated from various business, internet and social media activities, has become a big challenge for researchers in the field of machine learning and data mining, who must develop new methods and techniques for analysing big data effectively and efficiently. Ensemble methods represent an attractive approach to the problem of mining large datasets because of their accuracy and their ability to exploit the divide-and-conquer mechanism in parallel computing environments. This research proposes a machine learning ensemble framework and implements it in a high-performance computing environment. The research begins by identifying and categorising the effects of partitioned data subset size on ensemble accuracy when dealing with very large training datasets. An algorithm is then developed to ascertain the patterns of the relationship between ensemble accuracy and the size of partitioned data subsets. The research concludes with the development of a selective modelling algorithm, an efficient alternative to static model selection methods for big datasets. The results show that maximising the size of partitioned data subsets does not necessarily improve the performance of an ensemble of classifiers that deals with large datasets. Identifying the patterns exhibited by the relationship between ensemble accuracy and partitioned data subset size facilitates the determination of the best subset size for partitioning huge training datasets. Finally, traditional model selection is shown to be inefficient when large datasets are involved.
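The subset-size question studied here is easy to probe experimentally; below is an illustrative scikit-learn sketch in which each ensemble member is trained on a random fraction of the data, and accuracy is compared across fractions. The dataset, member count and fractions are assumptions, not the thesis's experimental setup:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

for frac in (0.01, 0.05, 0.2, 1.0):
    # Each of the 20 members sees only `frac` of the training data.
    ens = BaggingClassifier(DecisionTreeClassifier(),
                            n_estimators=20, max_samples=frac,
                            bootstrap=False, random_state=0)
    score = cross_val_score(ens, X, y, cv=3).mean()
    print(f"subset fraction {frac:>4}: accuracy {score:.3f}")
```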
APA, Harvard, Vancouver, ISO, and other styles
19

Li, Xin. "Graph-based learning for information systems." Diss., The University of Arizona, 2009. http://hdl.handle.net/10150/193827.

Full text source
Abstract:
The advance of information technologies (IT) makes it possible to collect a massive amount of data in business applications and information systems. The increasing data volumes require more effective knowledge discovery techniques to make the best use of the data. This dissertation focuses on knowledge discovery on graph-structured data, i.e., graph-based learning. In this study, graph-structured data refers to data instances with relational information indicating their interactions. Graph-structured data exist in a variety of application areas related to information systems, such as business intelligence, knowledge management, e-commerce, medical informatics, etc. Developing knowledge discovery techniques on graph-structured data is critical to decision making and the reuse of knowledge in business applications. In this dissertation, I propose a graph-based learning framework and identify four major knowledge discovery tasks using graph-structured data: topology description, node classification, link prediction, and community detection. I present a series of studies to illustrate the knowledge discovery tasks and propose solutions for these example applications. As to the topology description task, in Chapter 2 I examine the global characteristics of relations extracted from documents. Such relations are extracted using different information processing techniques and aggregated to different analytical unit levels. As to the node classification task, Chapter 3 and Chapter 4 study the patent classification problem and the gene function prediction problem, respectively. In Chapter 3, I model knowledge diffusion and evolution with patent citation networks for patent classification. In Chapter 4, I extend the context assumption in previous research and model context graphs in gene interaction networks for gene function prediction. As to the link prediction task, Chapter 5 presents an example application in recommendation systems. I frame the recommendation problem as link prediction on user-item interaction graphs, and propose capturing graph-related features to tackle this problem. Chapter 6 examines the community detection task in the context of online interactions. In this study, I propose to take advantage of the sentiments (agreements and disagreements) expressed in users' interactions to improve community detection effectiveness. All these examples show that the graph representation allows the graph structure and node/link information to be more effectively utilized in addressing the four knowledge discovery tasks. In general, the graph-based learning framework contributes to the domain of information systems by categorizing related knowledge discovery tasks, promoting the further use of the graph representation, and suggesting approaches for knowledge discovery on graph-structured data. In practice, the proposed graph-based learning framework can be used to develop a variety of IT artifacts that address critical problems in business applications.
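To ground the link-prediction task mentioned above, here is a minimal sketch of one classic baseline, scoring candidate links by shared neighbors; the toy interaction graph is a placeholder, and this baseline is an illustration rather than the dissertation's method:

```python
import networkx as nx

# A toy user-item interaction graph.
G = nx.Graph([("u1", "i1"), ("u1", "i2"), ("u2", "i2"),
              ("u2", "i3"), ("u3", "i1"), ("u3", "i3")])

def common_neighbor_score(g, u, v):
    # More shared neighbors -> a future link is considered more likely.
    return len(list(nx.common_neighbors(g, u, v)))

for u, v in [("u1", "u2"), ("u1", "i3")]:
    print(u, v, common_neighbor_score(G, u, v))
```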
APA, Harvard, Vancouver, ISO, and other styles
20

Simonsson, Simon. "Learning of robot-to-human object handovers." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-251505.

Full text source
Abstract:
In this thesis we propose a system that lets robots learn, through a semi-supervised approach and from observations, proper handover features for objects; the learned features can then be applied to new objects. Using recordings of handovers, features are extracted for the purpose of classifying the objects through unsupervised learning. The results of the classification are used to train a network in a supervised fashion to identify the handover class from images. The results of this work show that objects with similar visual features are handed over in a similar way, and that with a limited amount of data a model can be fitted that properly predicts handover settings for an object, whether it has been encountered before or not.
APA, Harvard, Vancouver, ISO, and other styles
21

Rosquist, Christine. "Text Classification of Human Resources-related Data with Machine Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302375.

Full text source
Abstract:
Text classification has been an important application and research subject since the origin of digital documents. Today, as more and more data are stored in the form of electronic documents, the text classification approach is even more vital. Various studies apply machine learning methods such as Naive Bayes and Convolutional Neural Networks (CNN) to text classification and sentiment analysis. However, most of these studies do not focus on cross-domain classification, i.e., machine learning models that have been trained on a dataset from one context being tested on a dataset from another context. This is useful when there is not enough training data for the specific domain in which text data is to be classified. This thesis investigates how the machine learning methods Naive Bayes and CNN perform when they are trained in one context and then tested in another, slightly different, context. The study uses data from employee reviews to train the models, and the models are then tested both on the employee-review data and on human resources-related data. The aim of the thesis is thus to gain insights into how to develop a system capable of accurate cross-domain classification, and to provide more insights for the text classification research area in general. A comparative analysis of Naive Bayes and CNN was carried out, and the results showed that the two models performed quite similarly when classifying sentences using only the employee-review data for training and testing. However, CNN performed slightly better on multiclass classification of the employee data, which indicates that CNN might be the better model in that context. From a cross-domain perspective, Naive Bayes turned out to be the better model, since it performed better on all of the metrics evaluated. Both models can nonetheless be used as guidance tools to classify human resources-related data quickly. The results could likely be improved with more research and need to be verified with more data. Suggestions for improving the results include enhancing the hyperparameter optimization, using another approach to handle the data imbalance, and adjusting the preprocessing methods used. It is also worth noting that statistical significance could not be confirmed in all of the test cases, meaning that no absolute conclusions can be drawn, but the results of this thesis work still provide an indication of how well the models perform.
Стилі APA, Harvard, Vancouver, ISO та ін.
22

Trinh, Viet. "CONTEXTUALIZING OBSERVATIONAL DATA FOR MODELING HUMAN PERFORMANCE." Doctoral diss., University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2747.

Full text of the source
Abstract:
This research focuses on the ability to contextualize observed human behaviors in an effort to automate the process of tactical human performance modeling through learning from observations. This effort to contextualize human behavior is aimed at minimizing the role and involvement of the knowledge engineers required in building intelligent Context-based Reasoning (CxBR) agents. More specifically, the goal is to automatically discover the context in which a human actor is situated when performing a mission, to facilitate the learning of such CxBR models. This research is derived from the contextualization problem left behind in Fernlund's research on using the Genetic Context Learner (GenCL) to model CxBR agents from observed human performance [Fernlund, 2004]. To accomplish the process of context discovery, this research proposes two contextualization algorithms: Contextualized Fuzzy ART (CFA) and Context Partitioning and Clustering (COPAC). The former is a more naive approach utilizing the well-known Fuzzy ART strategy, while the latter is a robust algorithm developed on the principles of CxBR. Using Fernlund's original five drivers, the CFA and COPAC algorithms were tested and evaluated on their ability to effectively contextualize each driver's individualized set of behaviors into well-formed and meaningful context bases, as well as on generating high-fidelity agents through integration with Fernlund's GenCL algorithm. The resultant set of agents was able to capture and generalize each driver's individualized behaviors.
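The CFA approach builds on the Fuzzy ART strategy; the following minimal sketch shows a plain Fuzzy ART clusterer of the kind that could group observations into candidate contexts. Parameter values (vigilance rho, choice parameter alpha, learning rate beta) are illustrative, and the code is a sketch rather than the thesis's implementation:

# Minimal Fuzzy ART clusterer, illustrating the kind of context discovery
# CFA builds on; parameter values are illustrative, not those of the thesis.
import numpy as np

class FuzzyART:
    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.w = []                          # one weight vector per category

    def _code(self, x):
        return np.concatenate([x, 1.0 - x])  # complement coding

    def learn(self, x):
        i = self._code(np.asarray(x, float))
        # rank existing categories by the choice function
        order = sorted(range(len(self.w)),
                       key=lambda j: -np.minimum(i, self.w[j]).sum()
                                     / (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:            # vigilance test passed: resonance
                self.w[j] = (self.beta * np.minimum(i, self.w[j])
                             + (1 - self.beta) * self.w[j])
                return j
        self.w.append(i.copy())              # no category matched: create one
        return len(self.w) - 1

art = FuzzyART(rho=0.8)
for obs in np.random.rand(100, 4):           # observations scaled to [0, 1]
    art.learn(obs)
print(len(art.w), "candidate contexts discovered")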
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Engineering PhD
23

Wang, Yuan. "Mastering the Game of Gomoku without Human Knowledge." DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1865.

Full text of the source
Abstract:
Gomoku, also called Five in a Row, is one of the earliest board games invented by humans, and it has long brought us countless hours of pleasure. Human players have developed many skills for playing it, and scientists have formalized and entered these skills into computers so that a computer knows how to play Gomoku. However, such a computer merely follows the pre-entered skills; it does not know how to develop skills by itself. Inspired by Google's AlphaGo Zero, this thesis combines the technologies of Monte Carlo Tree Search, Deep Neural Networks, and Reinforcement Learning to propose a system that trains machine Gomoku players without prior human skills. These are self-evolving players to which no prior knowledge is given; they develop their own skills from scratch by themselves. We ran this system for a month and a half, during which 150 different players were generated; the later a player was generated, the stronger its abilities. During the training, beginning with zero knowledge, the players developed a row-based bottom-up strategy, followed by a column-based bottom-up strategy, and finally a more flexible and intelligible strategy with a preference for the surrounding squares. Although even the latest players do not have strong capacities and thus could not be regarded as strong AI agents, they still show the ability to learn from previous games. This thesis therefore shows that it is possible for a machine Gomoku player to evolve by itself without human knowledge. These players are on the right track: with continued training, they would become better Gomoku players.
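At the heart of an AlphaGo Zero-style player is the PUCT selection rule inside Monte Carlo Tree Search. A compact sketch follows; the Node fields, the move keys, and the constant c_puct are illustrative assumptions, and the priors would come from the trained policy network:

# Sketch of the PUCT rule used in AlphaZero-style search; the statistics
# below are hand-filled stand-ins for values produced by self-play.
import math

class Node:
    def __init__(self, prior):
        self.prior = prior      # P(s, a) from the policy network
        self.visits = 0         # N(s, a)
        self.value_sum = 0.0    # accumulated backed-up values
        self.children = {}      # move -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    # Pick the move maximizing Q + U: exploitation plus a prior-weighted
    # exploration bonus that decays as a child is visited more often.
    total = sum(ch.visits for ch in node.children.values())
    def score(item):
        move, ch = item
        u = c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits)
        return ch.q() + u
    return max(node.children.items(), key=score)

root = Node(prior=1.0)
root.children = {"(7, 7)": Node(0.6), "(7, 8)": Node(0.4)}
root.children["(7, 7)"].visits = 3
root.children["(7, 7)"].value_sum = 1.5
print(select_child(root)[0])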
24

Natvig, Filip. "Knowledge Transfer Applied on an Anomaly Detection Problem Using Financial Data." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-451884.

Full text of the source
Abstract:
Anomaly detection in high-dimensional financial transaction data is challenging and resource-intensive, particularly when the dataset is unlabeled. Sometimes, one can alleviate the computational cost and improve the results by utilizing a pre-trained model, provided that the features learned from the pre-training are useful for learning the second task. Investigating this issue was the main purpose of this thesis. More specifically, it was to explore the potential gain of pre-training a detection model on one trader's transaction history and then retraining the model to detect anomalous trades in another trader's transaction history. In the context of transfer learning, the pre-trained and the retrained model are usually referred to as the source model and target model, respectively.  A deep LSTM autoencoder was proposed as the source model due to its advantages when dealing with sequential data, such as financial transaction data. Moreover, to test its anomaly detection ability despite the lack of labeled true anomalies, synthetic anomalies were generated and included in the test set. Various experiments confirmed that the source model learned to detect synthetic anomalies with highly distinctive features. Nevertheless, it is hard to draw any conclusions regarding its anomaly detection performance due to the lack of labeled true anomalies. While the same is true for the target model, it is still possible to achieve the thesis's primary goal by comparing a pre-trained model with an identical untrained model. All in all, the results suggest that transfer learning offers a significant advantage over traditional machine learning in this context.
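A hedged Keras sketch of the source/target idea described above: pre-train an LSTM autoencoder on one trader's sequences, transfer the weights, fine-tune on the other trader, and flag sequences with unusually high reconstruction error. Shapes, layer sizes, epoch counts, and the 3-sigma threshold are illustrative assumptions, not the thesis's settings:

# Hedged sketch: LSTM autoencoder pre-trained on a source trader, then
# transferred and fine-tuned on a target trader. Data are random stand-ins.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

timesteps, n_features = 30, 8

def build_autoencoder():
    model = keras.Sequential([
        layers.Input(shape=(timesteps, n_features)),
        layers.LSTM(64),                          # encoder -> latent vector
        layers.RepeatVector(timesteps),           # unroll latent over time
        layers.LSTM(64, return_sequences=True),   # decoder
        layers.TimeDistributed(layers.Dense(n_features)),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

source_model = build_autoencoder()
X_source = np.random.rand(1000, timesteps, n_features)  # stand-in data
source_model.fit(X_source, X_source, epochs=5, verbose=0)

target_model = build_autoencoder()
target_model.set_weights(source_model.get_weights())    # the transfer step
X_target = np.random.rand(200, timesteps, n_features)
target_model.fit(X_target, X_target, epochs=2, verbose=0)  # fine-tune

# Flag sequences whose reconstruction error is unusually high as anomalies.
err = np.mean((target_model.predict(X_target) - X_target) ** 2, axis=(1, 2))
print("suspect sequences:", np.where(err > err.mean() + 3 * err.std())[0])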
25

Fredin, Haslum Johan. "Deep Reinforcement Learning for Adaptive Human Robotic Collaboration." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-251013.

Full text of the source
Abstract:
Robots are expected to become an increasingly common part of most humans' everyday lives. As the number of robots increases, so will the number of human-robot interactions. For these interactions to be valuable and intuitive, new advanced robotic control policies will be necessary. Current policies often lack flexibility, rely heavily on human expertise, and are often programmed for very specific use cases. A promising alternative is the use of Deep Reinforcement Learning, a family of algorithms that learn by trial and error. Following the recent success of Reinforcement Learning (RL) in areas previously considered too complex, RL has emerged as a possible method for learning robotic control policies. This thesis explores the possibility of using Deep Reinforcement Learning (DRL) as a method to learn robotic control policies for Human Robotic Collaboration (HRC). Specifically, it evaluates whether DRL algorithms can be used to train a robot to collaboratively balance a ball with a human along a predetermined path on a table. To evaluate whether this is possible, several experiments are performed in a simulator, where two robots jointly balance a ball, one emulating a human and one relying on the policy from the DRL algorithm. The experiments suggest that DRL can be used to enable HRC that performs as well as or better than an emulated human performing the task alone. Further, the experiments indicate that the performance of less skilled human collaborators can be improved by cooperating with a DRL-trained robot.
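A minimal sketch of such a DRL training loop, using PPO from stable-baselines3. A custom ball-balancing environment wrapping the simulator is what the thesis would need; it is hypothetical here, so a standard Gymnasium task stands in for it, and all hyperparameters are illustrative:

# Hedged sketch: learn a control policy by trial and error with PPO.
# Pendulum-v1 is a placeholder for a custom ball-balancing environment.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Pendulum-v1")          # stand-in for the HRC simulator
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=100_000)   # trial-and-error policy improvement

obs, _ = env.reset()
for _ in range(200):                   # roll out the learned policy
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()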
26

Gheyas, Iffat A. "Novel computationally intelligent machine learning algorithms for data mining and knowledge discovery." Thesis, University of Stirling, 2009. http://hdl.handle.net/1893/2152.

Full text of the source
Abstract:
This thesis addresses three major issues in data mining: feature subset selection in high-dimensionality domains, plausible reconstruction of incomplete data in cross-sectional applications, and forecasting of univariate time series. For the automated selection of an optimal subset of features in real time, we present an improved hybrid algorithm, SAGA. SAGA combines the ability of Simulated Annealing to avoid being trapped in local minima with the very high convergence rate of the crossover operator of Genetic Algorithms, the strong local search ability of greedy algorithms, and the high computational efficiency of generalized regression neural networks (GRNN). For imputing missing values and forecasting univariate time series, we propose a homogeneous neural network ensemble. The proposed ensemble consists of a committee of Generalized Regression Neural Networks (GRNNs) trained on different subsets of features generated by SAGA, and the predictions of the base classifiers are combined by a fusion rule. This approach makes it possible to discover all important interrelations between the values of the target variable and the input features. The proposed ensemble scheme has two innovative features which make it stand out amongst ensemble learning algorithms: (1) the ensemble makeup is optimized automatically by SAGA; and (2) GRNN is used for both the base classifiers and the top-level combiner classifier. Because of GRNN, the proposed ensemble is a dynamic weighting scheme. This is in contrast to existing ensemble approaches, which rely on simple voting and static weighting strategies. The basic idea of the dynamic weighting procedure is to give a higher reliability weight to those scenarios that are similar to the new ones. The simulation results demonstrate the validity of the proposed ensemble model.
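A GRNN is essentially Nadaraya-Watson kernel regression, which also conveys the dynamic weighting idea: training samples closer to the query get higher reliability weights. A small sketch follows, with an illustrative bandwidth sigma and synthetic data:

# Compact GRNN (Nadaraya-Watson) regressor; sigma is an illustrative choice.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    preds = []
    for x in X_query:
        d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))         # closer samples weigh more
        preds.append(np.dot(w, y_train) / (w.sum() + 1e-12))
    return np.array(preds)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 200)
print(grnn_predict(X, y, np.array([[0.0], [1.5]])))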
27

Zhao, Zilong. "Extracting knowledge from macroeconomic data, images and unreliable data." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALT074.

Full text of the source
Abstract:
System identification and machine learning are two similar concepts used independently in the automatic control and computer science communities. System identification uses statistical methods to build mathematical models of dynamical systems from measured data. Machine learning algorithms build a mathematical model based on sample data, known as "training data" (clean or not), in order to make predictions or decisions without being explicitly programmed to do so. Besides prediction accuracy, convergence speed and stability are two other key factors for evaluating the training process, especially in the online learning scenario, and these properties have already been well studied in control theory. This thesis therefore carries out interdisciplinary research on the following topics: 1) System identification and optimal control of macroeconomic data: We first model Chinese macroeconomic data with a Vector Auto-Regression (VAR) model, then identify the cointegration relation between variables and use a Vector Error Correction Model (VECM) to study the short-term fluctuations around the long-term equilibrium; Granger causality is also studied with the VECM. This work reveals the trend of China's economic growth transition from export-oriented to consumption-oriented. Due to limitations of the Chinese economic data, the second study uses French macroeconomic data. We represent the model in state space and put it into a feedback-control framework, with the controller designed as a Linear-Quadratic Regulator (LQR). The system can apply the control law to bring the system to a desired state. We can also impose perturbations on outputs and constraints on inputs, which emulates the real-world situation of an economic crisis. Economists can observe the recovery trajectory of the economy, which gives meaningful implications for policy-making. 2) Using control theory to improve the online learning of deep neural networks: We propose a performance-based learning rate algorithm, E (Exponential)/PD (Proportional Derivative) feedback control, which considers the Convolutional Neural Network (CNN) as the plant, the learning rate as the control signal, and the loss value as the error signal. Results show that E/PD outperforms the state of the art in final accuracy, final loss and convergence speed, and the results are also more stable. However, one observation from the E/PD experiments is that the learning rate decreases while the loss continuously decreases; but a decreasing loss means the model is approaching an optimum, so the learning rate should not be decreased. To prevent this, we propose an event-based E/PD. Results show that it improves E/PD in final accuracy, final loss and convergence speed. Another observation from the E/PD experiments is that online learning fixes a constant number of training epochs for each batch. Since E/PD converges fast, the significant improvement comes only from the beginning epochs. We therefore propose another event-based E/PD, which inspects the historical loss; when the progress of training is lower than a certain threshold, we move to the next batch. Results show that it can save up to 67% of epochs on the CIFAR-10 dataset without degrading performance much. 3) Machine learning from unreliable data: We propose a generic framework, the Robust Anomaly Detector (RAD). The data selection part of RAD is a two-layer framework, where the first layer is used to filter out suspicious data, and the second layer detects anomaly patterns from the remaining data.
We also derive three variations of RAD, namely voting, active learning and slim, which use additional information, e.g., opinions of conflicting classifiers and queries of oracles. We iteratively update the historical selected data to improve accumulated data quality. Results show that RAD can continuously improve the model's performance in the presence of noise on the labels. The three variations of RAD all improve on the original setting, and RAD Active Learning performs almost as well as the case where there is no noise on the labels.
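The control-loop view of the learning rate can be sketched as follows. This is a PD-flavoured scheduler in the spirit of E/PD (loss as error signal, learning rate as control signal, an exponential ramp followed by proportional-derivative regulation); the gains, ramp rule, and switch condition are illustrative assumptions, not the thesis's exact control law:

# Hedged sketch of a performance-based learning-rate controller.
# All constants are illustrative; this is not the published E/PD law.
def epd_learning_rate(losses, lr0=0.01, kp=0.5, kd=0.2, grow=2.0):
    # losses: history of loss values, most recent last
    if len(losses) < 2:
        return lr0
    if losses[-1] < losses[-2] and len(losses) <= 5:
        return lr0 * grow ** (len(losses) - 1)      # E phase: exponential ramp
    e_now = losses[-1] / losses[0]                  # normalized error signal
    e_prev = losses[-2] / losses[0]
    lr = kp * e_now + kd * (e_now - e_prev)         # PD phase: P term + D term
    return max(lr, 1e-5)

history = []
for step, loss in enumerate([2.3, 1.7, 1.2, 0.9, 0.8, 0.75, 0.9, 0.7]):
    history.append(loss)
    print(f"step {step}: loss={loss:.2f} -> lr={epd_learning_rate(history):.4f}")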
28

Qian, Weizhu. "Discovering human mobility from mobile data : probabilistic models and learning algorithms." Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCA025.

Full text of the source
Abstract:
Smartphone usage data can be used to study human indoor and outdoor mobility. In our work, we investigate both aspects by proposing machine learning-based algorithms adapted to the different information sources that can be collected. In terms of outdoor mobility, we use collected GPS coordinate data to discover the daily mobility patterns of the users. To this end, we propose an automatic clustering algorithm using the Dirichlet process Gaussian mixture model (DPGMM) to cluster the daily GPS trajectories. This clustering method is based on estimating probability densities of the trajectories, which alleviates the problems caused by data noise. By contrast, we utilize collected WiFi fingerprint data to study indoor human mobility. In order to predict the indoor user location at the next time points, we devise a hybrid deep learning model, called the convolutional mixture density recurrent neural network (CMDRNN), which combines the advantages of multiple different deep neural networks. Moreover, for accurate indoor location recognition, we presume that there exists a latent distribution governing the input and output at the same time. Based on this assumption, we develop a variational autoencoder (VAE)-based semi-supervised learning model. In the unsupervised learning procedure, we employ a VAE model to learn a latent distribution of the input, the WiFi fingerprint data. In the supervised learning procedure, we use a neural network to compute the target, the user coordinates. Furthermore, based on the same assumption used in the VAE-based semi-supervised learning model, we leverage information bottleneck theory to devise a variational information bottleneck (VIB)-based model. This is an end-to-end deep learning model that is easier to train and has better performance. Finally, we validate these proposed methods on several public real-world datasets, providing results that verify the efficiency of our methods compared to other existing methods.
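The DPGMM clustering step can be approximated with scikit-learn's truncated variational implementation of the Dirichlet process mixture; the trajectory features below are random stand-ins for features extracted from real GPS trajectories:

# Hedged sketch: Dirichlet-process mixture clustering of daily trajectories.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
daily_features = rng.normal(size=(60, 10))   # stand-in trajectory features

dpgmm = BayesianGaussianMixture(
    n_components=15,                         # truncation level, not the final k
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
).fit(daily_features)

labels = dpgmm.predict(daily_features)
print("effective clusters:", len(np.unique(labels)))

The model prunes unneeded components itself, so the number of daily mobility patterns does not have to be fixed in advance.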
29

Tuovinen, L. (Lauri). "From machine learning to learning with machines:remodeling the knowledge discovery process." Doctoral thesis, Oulun yliopisto, 2014. http://urn.fi/urn:isbn:9789526205243.

Full text of the source
Abstract:
Knowledge discovery (KD) technology is used to extract knowledge from large quantities of digital data in an automated fashion. The established process model represents the KD process in a linear and technology-centered manner, as a sequence of transformations that refine raw data into more and more abstract and distilled representations. Any actual KD process, however, has aspects that are not adequately covered by this model. In particular, some of the most important actors in the process are not technological but human, and the operations associated with these actors are interactive rather than sequential in nature. This thesis proposes an augmentation of the established model that addresses this neglected dimension of the KD process. The proposed process model is composed of three sub-models: a data model, a workflow model, and an architectural model. Each sub-model views the KD process from a different angle: the data model examines the process from the perspective of different states of data and transformations that convert data from one state to another, the workflow model describes the actors of the process and the interactions between them, and the architectural model guides the design of software for the execution of the process. For each of the sub-models, the thesis first defines a set of requirements, then presents the solution designed to satisfy the requirements, and finally, re-examines the requirements to show how they are accounted for by the solution. The principal contribution of the thesis is a broader perspective on the KD process than what is currently the mainstream view. The augmented KD process model proposed by the thesis makes use of the established model, but expands it by gathering data management and knowledge representation, KD workflow and software architecture under a single unified model. Furthermore, the proposed model considers issues that are usually either overlooked or treated as separate from the KD process, such as the philosophical aspect of KD. The thesis also discusses a number of technical solutions to individual sub-problems of the KD process, including two software frameworks and four case-study applications that serve as concrete implementations and illustrations of several key features of the proposed process model.
30

Sadeghian, Paria. "Human mobility behavior : Transport mode detection by GPS data." Licentiate thesis, Högskolan Dalarna, Institutionen för information och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:du-36346.

Full text of the source
Abstract:
GPS tracking data are widely used to understand human travel behavior and to evaluate the impact of travel. A major advantage of using GPS tracking devices for collecting data is that they enable the researcher to collect large amounts of highly accurate and detailed human mobility data. However, unlabeled GPS tracking data do not easily lend themselves to detecting transportation mode, and this has given rise to a range of methods and algorithms for this purpose. The algorithms used vary in design and functionality, from defining specific rules to advanced machine learning algorithms. There is, however, no previous comprehensive review of these algorithms, and this thesis aims to identify their essential features and methods and to develop and demonstrate a method for the detection of transport mode in GPS tracking data. To do this, it is necessary to have a detailed description of the particular journey undertaken by an individual. Therefore, as part of the investigation, a microdata analytic approach is applied to the problem areas, including the stages of data collection, data processing, data analysis, and decision making. In order to fill the research gap, Paper I consists of a systematic literature review of the methods and essential features used for detecting the transport mode in unlabeled GPS tracking data. Selected empirical studies were categorized into rule-based methods, statistical methods, and machine learning methods. The evaluation shows that machine learning algorithms are the most common. In the evaluation, I compared the methods previously used, the extracted features, the types of dataset, and the model accuracy of transport mode detection. The results show that there is no standard method used in transport mode detection. In the light of these results, in Paper II I propose a stepwise methodology that takes advantage of the unlabeled GPS data by first using an unsupervised algorithm to detect five transport modes. A GIS multi-criteria process was applied to label part of the dataset. The performance of five supervised algorithms was evaluated by applying them to different portions of the labeled dataset. The results show that the stepwise methodology can achieve high accuracy in detecting the transport mode by labeling only 10% of the entire dataset. For the future, one interesting area to explore would be the application of the stepwise methodology to a balanced and larger dataset. A semi-supervised deep-learning approach is suggested for development in transport mode detection, since this method can detect transport modes with only small amounts of labeled data. Thus, the stepwise methodology can be improved upon in further studies.
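A rough scikit-learn sketch of the stepwise idea: cluster the unlabeled segments first, label a small fraction (random labels stand in here for the GIS multi-criteria step), and train a supervised model on that 10%. Features and labels are synthetic placeholders:

# Hedged sketch of the stepwise methodology; all data are stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 6))        # speed/acceleration features (stand-in)
y = rng.integers(0, 5, size=2000)     # 5 transport modes (stand-in labels)

# Step 1: unsupervised grouping of all unlabeled segments.
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Step 2: only 10% of segments receive labels (here: the GIS rules' role).
X_lab, X_rest, y_lab, y_rest = train_test_split(X, y, train_size=0.10,
                                                random_state=0)

# Step 3: train a supervised model on the small labeled portion.
clf = RandomForestClassifier(random_state=0).fit(X_lab, y_lab)
print("accuracy on the remaining 90%:", clf.score(X_rest, y_rest))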
31

FERRARI, ANNA. "Personalization of Human Activity Recognition Methods using Inertial Data." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2021. http://hdl.handle.net/10281/305222.

Full text of the source
Abstract:
Recognizing human activities and monitoring population behavior are fundamental needs of our society. Population security, crowd surveillance, healthcare support and living assistance, and lifestyle and behavior tracking are some of the main applications that require the recognition of activities. Activity recognition involves many phases, i.e. the collection, elaboration and analysis of information about human activities and behavior. These tasks can be fulfilled manually or automatically, even though a human-based recognition system is not sustainable or scalable in the long term. Nevertheless, transforming a human-based recognition system into a computer-based automatic system is not a simple task, because it requires dedicated hardware and sophisticated engineering, computational and statistical techniques for data preprocessing and analysis. Recently, considerable changes in technology have greatly facilitated this transformation. Indeed, new hardware and software have drastically modified activity recognition systems. For example, progress in Micro-Electro-Mechanical Systems (MEMS) has enabled a reduction in the size of the hardware, and consequently costs have decreased. Size and cost reduction make it possible to embed sophisticated sensors into simple devices such as phones, watches, and even shoes and clothes, also called wearable devices. Furthermore, low cost, lightness, and small size have made wearable devices highly pervasive and have accelerated their spread among the population. Today, only a very small part of the world population does not own a smartphone: according to Digital 2020: Global Digital Overview, more than 5.19 billion people now use mobile phones. In western countries, smartphones and smartwatches are gadgets of everyday life. This pervasiveness is an undoubted advantage in terms of data generation: huge amounts of data, that is, big data, are produced every day. Furthermore, wearable devices together with new advanced software technologies enable data to be sent to servers and instantly analyzed by high-performing computers. The availability of big data and new technological improvements enabled the rise of Artificial Intelligence models; in particular, machine learning and deep learning algorithms are predominant in activity recognition. Together with these technological and algorithmic innovations, the Human Activity Recognition (HAR) research field was born. HAR is a field of research that aims at automatically recognizing people's physical activities. HAR investigates the selection of the best hardware, e.g. the best devices to be used for a given application, the choice of the software to be dedicated to a specific task, and the improvement of algorithm performance. HAR has been a very active field of research for years and is still considered one of the most promising research topics for a large spectrum of applications. In particular, it remains a very challenging research field for many reasons: the selection of devices and sensors, the algorithms' performance, and the collection and preprocessing of the data all require further investigation to improve the overall activity recognition system performance. In this work, two main aspects have been investigated: • the benefits of personalization on algorithm performance when models are trained on small datasets (see the sketch after this list): one of the main issues concerning the HAR research community is the lack of availability of public datasets and labelled data. [...]
• a comparison of the performances in HAR obtained from traditional and from personalized machine learning and deep learning techniques. [...]
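A minimal sketch of the personalization step, assuming a population model that is then updated with a handful of samples from the target user; the incremental learner, feature shapes, and number of passes are illustrative choices, not the thesis's models:

# Hedged sketch: train on many subjects, then personalize on one user.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)
X_pop, y_pop = rng.normal(size=(5000, 20)), rng.integers(0, 6, 5000)
X_user, y_user = rng.normal(size=(50, 20)), rng.integers(0, 6, 50)

clf = SGDClassifier(loss="log_loss", random_state=0)
classes = np.arange(6)
clf.partial_fit(X_pop, y_pop, classes=classes)   # population (generic) model
for _ in range(10):                              # personalization passes
    clf.partial_fit(X_user, y_user)              # adapt to the target user
print("user accuracy:", clf.score(X_user, y_user))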
32

Zhong, Yuqing. "Investigating Human Gut Microbiome in Obesity with Machine Learning Methods." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc1011875/.

Full text of the source
Abstract:
Obesity is a common disease among all ages that threatens human health and has become a global concern. Gut microbiota can affect human metabolism and thus may modulate obesity. Certain mixes of gut microbiota can protect the host to be healthy or predispose the host to obesity. Modern next-generation sequencing techniques allow access to huge amounts of genetic information underlying microbiota and thus provide new insights into the functionality of these micro-organisms and their interactions with the host. Multiple previous studies have demonstrated that the microbiome might contribute to obesity by increasing dietary energy harvest, promoting fat deposition and triggering systemic inflammation. However, these studies are based either on lab cultivation or on basic statistical analysis. In order to further explore how gut microbiota affect obesity, this thesis utilizes a series of machine learning methods to analyze a large amount of metagenomic data from the human gut microbiome. The publicly available HMP (Human Microbiome Project) metagenomic sequencing data, which contain microbiome data for healthy adults, including overweight and obese individuals, were used for this study. The HMP gut data were organized based on two different feature definitions: taxonomic information and metabolic reconstruction information. Several widely used classification algorithms, namely Naive Bayes, Random Forest, SVM and elastic net logistic regression, were applied to predict the healthy or obese status of the subjects based on cross-validation accuracy. Furthermore, the corresponding feature selection algorithms were used to identify signature features in each dataset that lead to the differences between healthy and obese samples. The results showed that these algorithms perform more poorly on taxonomic data than on metabolic pathway data, though many of the selected taxa are still supported by the literature. Among all the combinations of algorithms and data, elastic net logistic regression had the best cross-validation performance and thus became the best model. In this model, several important features were found, some of which are consistent with previous studies. Rerunning the classifiers using the features selected by elastic net logistic regression further improved the performance of the classifiers. On the other hand, this study uncovered some new features that have not been supported by previous studies; these new features could also be potential targets for distinguishing obese and healthy subjects. The present thesis work compares the strengths and weaknesses of different machine learning techniques with different types of features originating from the same metagenomic data. The features selected by these models could provide a deep understanding of the metabolic mechanisms of micro-organisms. It is therefore worthwhile to comprehensively understand the differences in gut microbiota between healthy and obese subjects, and particularly how the gut microbiome affects obesity.
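Elastic net logistic regression, the best-performing model above, is available directly in scikit-learn; the abundance matrix below is a random stand-in for the HMP pathway features, and the hyperparameters are illustrative:

# Hedged sketch: elastic net logistic regression with feature selection.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.lognormal(size=(150, 300))   # pathway abundance features (stand-in)
y = rng.integers(0, 2, 150)          # 0 = healthy, 1 = obese (stand-in)

clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=5000),
)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# The L1 part of the penalty zeroes out coefficients, acting as the
# feature selector that surfaces the signature features discussed above.
clf.fit(X, y)
print("selected features:", np.flatnonzero(clf[-1].coef_[0]).size)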
33

Swart, Juani. "Self-awareness and collective tacit knowledge : an exploratory approach." Thesis, University of Bath, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341144.

Full text of the source
34

Distel, Felix. "Learning Description Logic Knowledge Bases from Data Using Methods from Formal Concept Analysis." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-70199.

Full text of the source
Abstract:
Description Logics (DLs) are a class of knowledge representation formalisms that can represent terminological and assertional knowledge using a well-defined semantics. Often, knowledge engineers are experts in their own fields, but not in logics, and require assistance in the process of ontology design. This thesis presents three methods that can extract terminological knowledge from existing data and thereby assist in the design process. They are based on similar formalisms from Formal Concept Analysis (FCA), in particular the Next-Closure Algorithm and Attribute-Exploration. The first of the three methods computes terminological knowledge from the data, without any expert interaction. The two other methods use expert interaction where a human expert can confirm each terminological axiom or refute it by providing a counterexample. These two methods differ only in the way counterexamples are provided.
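The FCA machinery these methods build on can be illustrated with the basic Next-Closure algorithm, which enumerates all closed attribute sets of a formal context in lectic order; the toy context below is an illustration, not data from the thesis:

# Next-Closure over a toy formal context (objects -> attribute sets).
def closure(X, context, attributes):
    # derive: attributes common to all objects that have every attribute in X
    objs = [g for g, row in context.items() if X <= row]
    if not objs:
        return set(attributes)
    out = set(attributes)
    for g in objs:
        out &= context[g]
    return out

def next_closure(A, context, attributes):
    # attributes: list fixing the linear order m_1 < ... < m_n
    A = set(A)
    for i in range(len(attributes) - 1, -1, -1):
        m = attributes[i]
        if m in A:
            A.discard(m)
        else:
            B = closure(A | {m}, context, attributes)
            # B is the lectic successor iff it adds no attribute smaller than m
            if all(attributes.index(x) >= i for x in B - A):
                return B
    return None   # input was the lectically largest closed set

context = {"g1": {"a", "b"}, "g2": {"b", "c"}, "g3": {"a", "b", "c"}}
attributes = ["a", "b", "c"]
A = closure(set(), context, attributes)
while A is not None:
    print(sorted(A))
    A = next_closure(A, context, attributes)

In the expert-interaction variants, each enumerated closed set would be turned into a candidate axiom for the expert to confirm or refute with a counterexample.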
35

Dam, Hai Huong, Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW. "A scalable evolutionary learning classifier system for knowledge discovery in stream data mining." Awarded by: University of New South Wales - Australian Defence Force Academy, 2008. http://handle.unsw.edu.au/1959.4/38865.

Full text of the source
Abstract:
Data mining (DM) is the process of finding patterns and relationships in databases. The breakthrough in computer technologies triggered a massive growth in the data collected and maintained by organisations. In many applications, these data arrive continuously in large volumes as a sequence of instances known as a data stream. Mining these data is known as stream data mining. Due to the large amount of data arriving in a data stream, each record is normally expected to be processed only once. Moreover, this process can be carried out on different sites in the organisation simultaneously, making the problem distributed in nature. Distributed stream data mining poses many challenges to the data mining community, including scalability and coping with changes in the underlying concept over time. In this thesis, the author hypothesizes that learning classifier systems (LCSs) - a class of classification algorithms - have the potential to work efficiently in distributed stream data mining. LCSs are incremental learners and, being evolutionary based, are inherently adaptive. However, they suffer from two main drawbacks that hinder their use as fast data mining algorithms. First, they require a large population size, which slows down the processing of arriving instances. Second, they require a large number of parameter settings, some of which are very sensitive to the nature of the learning problem. As a result, it becomes difficult to choose the right setup for totally unknown problems. The aim of this thesis is to attack these two problems in LCSs, with a specific focus on UCS - a supervised evolutionary learning classifier system. UCS is chosen as it has been tested extensively on classification tasks and is the supervised version of XCS, a state-of-the-art LCS. In this thesis, the architectural design for a distributed stream data mining system is first introduced. The problems that UCS faces in a distributed data stream task are confirmed through a large number of experiments with UCS and the proposed architectural design. To overcome the problem of large population sizes, the idea of using a neural network to represent the action in UCS is proposed. This new system - called NLCS - was validated experimentally using a small fixed population size and showed a large reduction in the population size needed to learn the underlying concept in the data. An adaptive version of NLCS called ANCS is then introduced, which dynamically controls the population size of NLCS. A comprehensive analysis of the behaviour of ANCS revealed interesting patterns in the behaviour of the parameters, which motivated an ensemble version of the algorithm with 9 nodes, each using a different parameter setting; in total they cover all patterns of behaviour noticed in the system. A voting gate is used for the ensemble. The resultant ensemble does not require any parameter setting and showed better performance on all datasets tested. The thesis concludes with testing the ANCS system in the architectural design for distributed environments proposed earlier. The contributions of the thesis are: (1) reducing the UCS population size by an order of magnitude using a neural representation; (2) introducing a mechanism for adapting the population size; (3) proposing an ensemble method that does not require parameter setting; and primarily (4) showing that the proposed LCS can work efficiently for distributed stream data mining tasks.
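The ensemble-with-voting-gate idea can be sketched generically: several differently parameterized copies of one base learner combined by majority vote, so that no single parameter setting has to be chosen up front. The base learner here is a stand-in (an MLP), since no public UCS/NLCS implementation is assumed, and the nine settings are illustrative:

# Hedged sketch: a nine-node voting ensemble over one base learner.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
X, y = rng.normal(size=(600, 10)), rng.integers(0, 2, 600)

members = [
    (f"node{i}", MLPClassifier(hidden_layer_sizes=(h,), learning_rate_init=lr,
                               max_iter=300, random_state=i))
    for i, (h, lr) in enumerate([(8, 0.001), (8, 0.01), (8, 0.1),
                                 (16, 0.001), (16, 0.01), (16, 0.1),
                                 (32, 0.001), (32, 0.01), (32, 0.1)])
]
gate = VotingClassifier(estimators=members, voting="hard").fit(X, y)
print("train accuracy:", gate.score(X, y))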
36

Fabian, Alain. "Creating an Interactive Learning Environment with Reusable HCI Knowledge." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/33339.

Full text of the source
Abstract:
This thesis proposes creating an interactive learning environment for Human Computer Interaction (HCI) to facilitate access to, and learning of, important design knowledge. By encapsulating HCI knowledge into reusable claims stored in a knowledge repository, or claims library, this learning environment aims at allowing students to effectively explore design features, limiting their reliance on intuition to mold their interfaces, helping them address proper design concerns, and letting them evaluate alternatives for their designs. This learning approach is based on active learning, where students create their own knowledge by gathering information. However, building adequate development records from which students can gather HCI knowledge is critical to support this approach. This thesis explores using effective reusable design components to act as design records to create an interactive learning environment for students learning HCI design. An initial prototype for the learning environment introduces claims as an encapsulation mechanism for design features from which students can gather HCI knowledge. Pilot testing outlines the accessibility, applicability and reusability problems associated with this approach. To solve these issues, a taxonomic organization of an improved form of claims (reference claims) is introduced to share core design knowledge among students. A taxonomy is designed as a way to expose students to important design concerns as well as a method to categorize claims. Reference claims are introduced as improved claims, inspired by reference tasks, that expose students to design alternatives for design concerns. A detailed taxonomy and a set of reference claims for the domain of notification systems demonstrate how existing theories of design can be translated into reference claims to create an interactive learning environment. An experiment illustrates the applicability and reusability of reference claims for various designs within a particular domain. Finally, an evaluation assesses the benefits of this learning environment based on reference claims in terms of improving student designs and increasing the amount of HCI knowledge they reuse. Results show that by exposing students to valuable concerns and alternatives for the design of interactive systems, an interactive learning environment based on reference claims can improve students' understanding of the design scope and lead to an increased use of existing HCI knowledge in their designs.
Master of Science
37

Suutala, J. (Jaakko). "Learning discriminative models from structured multi-sensor data for human context recognition." Doctoral thesis, Oulun yliopisto, 2012. http://urn.fi/urn:isbn:9789514298493.

Full text of the source
Abstract:
In this work, statistical machine learning and pattern recognition methods were developed and applied to sensor-based human context recognition. More precisely, we concentrated on an effective discriminative learning framework, where input-output mapping is learned directly from a labeled dataset. Non-parametric discriminative classification and regression models based on kernel methods were applied. They include support vector machines (SVM) and Gaussian processes (GP), which play a central role in modern statistical machine learning. Based on these established models, we propose various extensions for handling structured data that usually arise from real-life applications, for example, in the field of context-aware computing. We applied both SVM and GP techniques to handle data with multiple classes in a structured multi-sensor domain. Moreover, a framework for combining data from several sources in this setting was developed using multiple classifiers and fusion rules, where kernel methods are used as base classifiers. We developed two novel methods for handling sequential input and output data. For sequential time-series data, a novel kernel based on graphical presentation, called a weighted walk-based graph kernel (WWGK), is introduced. For sequential output labels, discriminative temporal smoothing (DTS) is proposed. Again, the proposed algorithms are modular, so different kernel classifiers can be used as base models. Finally, we propose a group of techniques based on Gaussian process regression (GPR) and particle filtering (PF) to learn to track multiple targets. We applied the proposed methodology to three different human-motion-based context recognition applications: person identification, person tracking, and activity recognition, where floor (pressure-sensitive and binary switch) and wearable acceleration sensors are used to measure human motion and gait during walking and other activities. Furthermore, we extracted a useful set of specific high-level features from raw sensor measurements based on time, frequency, and spatial domains for each application. As a result, we developed practical extensions to kernel-based discriminative learning to handle many kinds of structured data applied to human context recognition.
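A small sketch of decision-level fusion across sensor sources, in the spirit of the multi-classifier framework above: one kernel classifier per source, combined by a mean rule over their class posteriors. The floor and accelerometer features are random stand-ins, and the mean rule is just one of the fusion rules the thesis considers:

# Hedged sketch: per-sensor SVMs fused at the decision level.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
y = rng.integers(0, 4, 400)                       # e.g., four persons
X_floor = rng.normal(size=(400, 12)) + y[:, None] * 0.3
X_accel = rng.normal(size=(400, 24)) + y[:, None] * 0.3

clf_floor = SVC(kernel="rbf", probability=True).fit(X_floor[:300], y[:300])
clf_accel = SVC(kernel="rbf", probability=True).fit(X_accel[:300], y[:300])

# Mean fusion rule over the per-sensor posteriors.
proba = (clf_floor.predict_proba(X_floor[300:])
         + clf_accel.predict_proba(X_accel[300:])) / 2.0
pred = clf_floor.classes_[np.argmax(proba, axis=1)]
print("fused accuracy:", (pred == y[300:]).mean())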
38

Morgan, Bo. "Learning commonsense human-language descriptions from temporal and spatial sensor-network data." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37383.

Full text of the source
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006.
Includes bibliographical references (p. 105-109) and index.
Embedded-sensor platforms are advancing toward such sophistication that they can differentiate between subtle actions. For example, when placed in a wristwatch, such platforms can tell whether a person is shaking hands or turning a doorknob. Sensors placed on objects in the environment now report many parameters, including object location, movement, sound, and temperature. A persistent problem, however, is the description of these sense data in meaningful human language. This is an important problem that appears across domains ranging from organizational security surveillance to individual activity journaling. Previous models of activity recognition pigeon-hole descriptions into small, formal categories specified in advance; for example, location is often categorized as "at home" or "at the office." These models have not been able to adapt to the wider range of complex, dynamic, and idiosyncratic human activities. We hypothesize that commonsense, semantically related knowledge bases can be used to bootstrap learning algorithms for classifying and recognizing human activities from sensors.
Our system, LifeNet, is a first-person commonsense inference model, which consists of a graph with nodes drawn from a large repository of commonsense assertions expressed in human-language phrases. LifeNet is used to construct a mapping between streams of sensor data and partially ordered sequences of events, co-located in time and space. Further, by gathering sensor data in vivo, we are able to validate and extend the commonsense knowledge from which LifeNet is derived. LifeNet is evaluated in the context of its performance on a sensor-network platform distributed in an office environment. We hypothesize that mapping sensor data into LifeNet will act as a "semantic mirror" to meaningfully interpret sensory data into cohesive patterns in order to understand and predict human action.
by Bo Morgan.
S.M.
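The core idea of mapping sensor streams onto human-language activity phrases can be illustrated with a toy graph; the events, phrases, and counting score below are invented for illustration, whereas LifeNet itself performs probabilistic inference over a much larger commonsense assertion graph.

```python
# A toy sketch: sensor events vote for the commonsense phrases they
# support. All links and names here are hypothetical placeholders.
from collections import Counter

links = {  # hypothetical: sensor event -> human-language phrases it supports
    "wrist-shake":    ["I shake someone's hand", "I open a door"],
    "door-knob-turn": ["I open a door", "I leave the office"],
    "chair-pressure": ["I sit at my desk", "I attend a meeting"],
    "keyboard-sound": ["I sit at my desk", "I write an e-mail"],
}

def rank_phrases(observed_events):
    """Score each phrase by how many observed events support it."""
    scores = Counter()
    for event in observed_events:
        for phrase in links.get(event, []):
            scores[phrase] += 1
    return scores.most_common()

print(rank_phrases(["chair-pressure", "keyboard-sound"]))
# [('I sit at my desk', 2), ('I attend a meeting', 1), ('I write an e-mail', 1)]
```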
APA, Harvard, Vancouver, ISO, and other styles
39

Hjelm, Hans. "Cross-language Ontology Learning : Incorporating and Exploiting Cross-language Data in the Ontology Learning Process." Doctoral thesis, Stockholms universitet, Institutionen för lingvistik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-8414.

Full text of the source
Abstract:
An ontology is a knowledge-representation structure, where words, terms or concepts are defined by their mutual hierarchical relations. Ontologies are becoming ever more prevalent in the world of natural language processing, where we currently see a tendency towards using semantics for solving a variety of tasks, particularly tasks related to information access. Ontologies, taxonomies and thesauri (all related notions) are also used in various forms by humans, to standardize business transactions or to find conceptual relations between terms in, e.g., the medical domain. The acquisition of machine-readable, domain-specific semantic knowledge is time-consuming and prone to inconsistencies. The field of ontology learning therefore provides tools for automating the construction of domain ontologies (ontologies describing the entities and relations within a particular field of interest) by analyzing large quantities of domain-specific texts. This thesis studies three main topics within the field of ontology learning. First, we examine which sources of information are useful within an ontology learning system and how the information sources can be combined effectively. Second, we do this with a special focus on cross-language text collections, to see if we can learn more from studying several languages at once than we can from a single-language text collection. Finally, we investigate new approaches to the formal and automatic evaluation of the quality of a learned ontology. We demonstrate how to combine information sources from different languages and use them to train automatic classifiers to recognize lexico-semantic relations. The cross-language data is shown to have a positive effect on the quality of the learned ontologies. We also give theoretical and experimental results showing that our ontology evaluation method is a good complement to, and in some aspects improves on, the evaluation measures in use today.
To order the book, send an e-mail to exp@ling.su.se.
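The benefit of pooling evidence across languages can be sketched as follows; the similarity features are synthetic stand-ins for real distributional statistics, and the two languages are arbitrarily labeled English and Swedish for illustration.

```python
# A minimal sketch: classify term pairs as hypernym-related or not,
# first from one language's similarity evidence, then from two.
# All data here are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
y = rng.integers(0, 2, size=n)                # 1 = related pair, 0 = unrelated
sim_en = y * 0.4 + rng.normal(0.3, 0.15, n)   # "English" similarity feature
sim_sv = y * 0.35 + rng.normal(0.3, 0.15, n)  # "Swedish" similarity feature

X_mono = sim_en[:, None]                      # single-language evidence
X_cross = np.column_stack([sim_en, sim_sv])   # cross-language evidence

for name, X in [("monolingual", X_mono), ("cross-language", X_cross)]:
    clf = LogisticRegression().fit(X[:300], y[:300])
    print(name, "accuracy:", clf.score(X[300:], y[300:]))
```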
APA, Harvard, Vancouver, ISO, and other styles
40

Kaden, Marika. "Integration of Auxiliary Data Knowledge in Prototype Based Vector Quantization and Classification Models." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-206413.

Full text of the source
Abstract:
This thesis deals with the integration of auxiliary data knowledge into machine learning methods, especially prototype-based classification models. The problem of classification is diverse, and evaluating the result using only the accuracy is not adequate in many applications. Therefore, the classification tasks are analyzed more deeply. Possibilities to extend prototype-based methods to integrate extra knowledge about the data or the classification goal are presented, in order to obtain problem-adequate models. One of the proposed extensions is Generalized Learning Vector Quantization for the direct optimization of statistical measures other than the classification accuracy. Modifying the metric adaptation of Generalized Learning Vector Quantization for functional data, i.e. data with lateral dependencies in the features, is also considered.
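For readers unfamiliar with the base model being extended, a single Generalized Learning Vector Quantization (GLVQ) training step can be sketched as follows; squared Euclidean distance and an identity transfer function are simplifying assumptions.

```python
# A minimal sketch of one GLVQ update (Sato-Yamada style): pull the
# nearest correct prototype toward the sample, push the nearest
# incorrect one away, weighted by the gradient of the relative distance.
import numpy as np

def glvq_step(x, label, protos, proto_labels, lr=0.05):
    d = ((protos - x) ** 2).sum(axis=1)           # squared distances
    same, diff = proto_labels == label, proto_labels != label
    i_pos = np.where(same)[0][d[same].argmin()]   # nearest correct prototype
    i_neg = np.where(diff)[0][d[diff].argmin()]   # nearest incorrect prototype
    dp, dn = d[i_pos], d[i_neg]
    xi_pos = 2.0 * dn / (dp + dn) ** 2            # d(mu)/d(d+) factor
    xi_neg = 2.0 * dp / (dp + dn) ** 2            # d(mu)/d(d-) factor
    protos[i_pos] += lr * xi_pos * (x - protos[i_pos])
    protos[i_neg] -= lr * xi_neg * (x - protos[i_neg])
    return (dp - dn) / (dp + dn)                  # relative-distance cost mu(x)

rng = np.random.default_rng(0)
protos = rng.normal(size=(4, 2))
labels = np.array([0, 0, 1, 1])
for _ in range(100):                              # toy two-class stream
    x = rng.normal(size=2)
    glvq_step(x, int(x.sum() > 0), protos, labels)
```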
APA, Harvard, Vancouver, ISO, and other styles
41

Nasiri, Khoozani Ehsan. "An ontological framework for the formal representation and management of human stress knowledge." Thesis, Curtin University, 2011. http://hdl.handle.net/20.500.11937/2220.

Full text of the source
Abstract:
There is a great deal of information on the topic of human stress embedded within numerous papers across various databases. However, this information is often stored, retrieved, and used in a discrete and dispersed fashion. As a result, discovering and identifying the links and interrelatedness between different aspects of knowledge on stress is difficult, which restricts the effective search and retrieval of desired information. There is a need to organize this knowledge under a unifying framework, linking and analysing it in mutual combinations so that we can obtain an inclusive view of the related phenomena and new knowledge can emerge. Furthermore, there is a need to establish evidence-based and evolving relationships between the ontology concepts. Previous efforts to classify and organize stress-related phenomena have not been sufficiently inclusive, and none of them has considered the use of ontology as an effective facilitating tool for the abovementioned issues. There have also been some research works on the evolution and refinement of ontology concepts and relationships. However, these fail to provide any proposal for an automatic and systematic methodology with the capacity to establish evidence-based, evolving ontology relationships. In response to these needs, we have developed the Human Stress Ontology (HSO), a formal framework which specifies, organizes, and represents the domain knowledge of human stress. This machine-readable knowledge model is likely to help researchers and clinicians find theoretical relationships between different concepts, resulting in a better understanding of the human stress domain and its related areas. The HSO is formalized using the OWL language and the Protégé tool. With respect to the evolution and evidentiality of ontology relationships in the HSO and other scientific ontologies, we have proposed the Evidence-Based Evolving Ontology (EBEO), a methodology for the refinement and evolution of ontology relationships based on evidence gleaned from the scientific literature. The EBEO is based on the implementation of a Fuzzy Inference System (FIS). Our evaluation results showed that almost all stress-related concepts of the sample articles could be placed under one or more categories of the HSO. Nevertheless, there were a number of limitations in this work which need to be addressed in future undertakings. The developed ontology has the potential to be used for different data integration and interoperation purposes in the domain of human stress. It can also be regarded as a foundation for the future development of semantic search engines in the stress domain.
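The kind of fuzzy inference the EBEO methodology describes, turning literature evidence into a graded strength for an ontology relationship, can be sketched in a few lines; the membership functions, rules, and thresholds below are invented placeholders, not the thesis's actual FIS.

```python
# A toy fuzzy-inference sketch: evidence volume and quality are
# fuzzified, two illustrative rules fire, and a weighted average
# defuzzifies the result into a relationship strength in [0, 1].
def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def relation_strength(n_studies, avg_quality):
    many = tri(n_studies, 5, 20, 35)        # "many supporting studies"
    few = tri(n_studies, -10, 0, 10)        # "few supporting studies"
    high_q = tri(avg_quality, 0.5, 1.0, 1.5)  # "high average quality"
    strong = min(many, high_q)              # rule: many AND high-quality
    weak = few                              # rule: few studies -> weak
    return (strong * 0.9 + weak * 0.1) / max(strong + weak, 1e-9)

print(round(relation_strength(12, 0.8), 2))  # -> 0.9, strong evidence
print(round(relation_strength(2, 0.9), 2))   # -> 0.1, weak evidence
```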
APA, Harvard, Vancouver, ISO, and other styles
42

Makarand, Tare, and tmakarand@swin edu au. "A future for human resources: A Specialised role in knowledge management." Swinburne University of Technology. School of Business, 2003. http://adt.lib.swin.edu.au./public/adt-VSWT20040311.093956.

Full text of the source
Abstract:
This thesis is broadly concerned with the future of the Human Resources function within organisations. The nature of these concerns is two-fold: first, how can Human Resources deal effectively with the challenges of organisational life today; second, how can Human Resources convince senior management that it is both relevant, and necessary, to the economic success of the enterprise, and so assure its future as an internal organisational function. This thesis posits that not only does an involvement in the knowledge management process hold considerable benefits for an organisation through a direct and positive influence on the 'bottom line', but that such an involvement takes on a specialised set of aims and objectives within the human resource perspective that should not be ignored. The argument is that Human Resources, with its own knowledge-awareness and overview of the structures, manpower, performance and reward systems, and training and development programs, is uniquely placed to be instrumental in creating the open, unselfish culture required to make a success of Knowledge Management, and secure its own future as a trusted and valued strategic partner, fully contributing to the enhancement of organisational performance, and ultimately, the organisation's place in the world. The thesis commences with an overview of how Human Resources has defined its role within organisations since the 1980s. The challenges and concerns of human resources professionals are discussed, and the opportunity for them to take the lead in developing the social networks that are vital to the capture and transfer of knowledge is foreshadowed. An examination of knowledge and knowledge management concepts and principles, and a discussion of the specialised aims and objectives that a knowledge management system can be argued to have within a human resources management perspective in the 21st century is discussed next. As learning from experience with the aim of improving business performance is one of the uses of knowledge management, a discussion of 'learning' and the concepts of the 'learning organisation' follows. The chapters in the first part of the thesis contain the theoretical material concerning knowledge and knowledge management, learning and the Learning Organisation, and the argument that Human Resources is in a position to play a major role in moving the organisation's culture to one of value creation and valuable strategic decision-making capability, through its awareness of the concept of knowledge and its implementation of knowledge systems, policies, and practices. The second part of the thesis is more empirically based, and reports the results of recent research by the author into the levels of awareness of the knowledge concept, and the degree to which knowledge management systems, policies, and practices are being implemented. The purpose of the study was to test a number of hypotheses about knowledge and knowledge management and the role of the Human Resources function vis-a-vis these issues. The results and their implications are subsequently discussed. The thesis concludes with some reflections on the concepts of knowledge and learning, and the specialised role that the Human Resources professional can play in knowledge work.
APA, Harvard, Vancouver, ISO, and other styles
43

Michieletto, Stefano. "Robot Learning by observing human actions." Doctoral thesis, Università degli studi di Padova, 2014. http://hdl.handle.net/11577/3423766.

Full text of the source
Abstract:
Nowadays, robotics is entering our lives. One can see robots in industry, in offices, and even in homes. The more robots come into contact with people, the more the demand for new capabilities and features grows, so that robots can act in case of need, help humans, or serve as companions. It therefore becomes essential to have a quick and easy way to teach robots new skills, and that is the aim of Robot Learning from Demonstration. This paradigm allows new tasks to be programmed in a robot directly through demonstrations. This thesis proposes a novel approach to Robot Learning from Demonstration able to learn new skills from natural demonstrations carried out by naive users. To this aim, we introduce a novel Robot Learning from Demonstration framework, proposing novel approaches in all functional sub-units: from data acquisition to motion processing, from information modeling to robot control. We explain a novel method for extracting 3D motion flow information from both RGB and depth data acquired with recently introduced consumer RGB-D cameras. The motion data are computed over time to recognize and classify human actions. In this thesis, we describe new techniques for remapping human motion to robot joints. Our methods allow people to interact naturally with robots by re-targeting whole-body movements in an intuitive way. We develop algorithms for both humanoid and manipulator motion and test them in different situations. Finally, we improve modeling techniques by using a probabilistic method: the Donut Mixture Model. This model is able to manage the several interpretations that different people can produce when performing a task. The estimated model can also be updated directly using new attempts carried out by the robot. This feature is very important for rapidly obtaining correct robot trajectories by means of a few human demonstrations. A further contribution of this thesis is the creation of a number of new virtual models for the different robots we used to test our algorithms. All the developed models are compliant with ROS, so they can be used by the whole community of this widespread robotics framework to foster research in the field. Moreover, a new 3D dataset was collected to compare different action recognition algorithms. The dataset contains both RGB-D information coming directly from the sensor and skeleton data provided by a skeleton tracker.
Abstract (translated from the Italian): Robotics is now entering our lives. Robots can be found in industry, in offices, and even in homes. The more robots are in contact with people, the more the demand grows for new functions and features that make robots able to act in case of need, help people, or provide companionship. It is therefore essential to have a quick and easy way to teach robots new skills, and this is precisely the goal of Robot Learning from Demonstration. This paradigm makes it possible to program new tasks in a robot through the use of demonstrations. This thesis proposes a new approach to Robot Learning from Demonstration capable of learning new skills from demonstrations performed naturally by inexperienced users. To this end, an innovative framework for Robot Learning from Demonstration has been introduced, proposing new approaches in all functional sub-units: from data acquisition to motion processing, from information modeling to robot control. Within this work, a new method has been proposed to extract 3D motion flow information by combining RGB and depth data acquired with RGB-D cameras recently introduced to the consumer market. This algorithm computes motion data over time in order to recognize and classify human actions. The thesis describes new techniques for remapping human motion onto robot joints. The proposed methods allow people to interact naturally with robots by performing an intuitive re-targeting of all body movements. A motion re-targeting algorithm was developed for both humanoid robots and manipulators, and both were tested in different situations. Finally, the modeling techniques were improved using a probabilistic method: the Donut Mixture Model. This model can handle the numerous interpretations that different people may produce when performing a task. Moreover, the estimated model can be updated directly using attempts performed by the robot. This feature is very important for quickly obtaining correct robot trajectories by means of a few human demonstrations. A further contribution of this thesis is the creation of a series of new virtual models for the different robots used to test our algorithms. All the developed models are compatible with ROS, so that they can be used by the whole community of this widespread robotics framework to promote research in the field. In addition, a new 3D dataset was collected in order to compare different action recognition algorithms; the dataset contains both RGB-D information coming directly from the sensor and skeleton information provided by a skeleton tracker.
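The joint-space re-targeting idea can be illustrated very simply: tracked human joint angles are mapped onto a robot's joints and clamped to its limits. The joint names and limits below are hypothetical; the thesis's methods handle full-body correspondence for humanoids and manipulators.

```python
# A minimal re-targeting sketch under assumed joint names and limits.
import numpy as np

ROBOT_LIMITS = {                      # radians, invented for illustration
    "shoulder_pitch": (-2.0, 2.0),
    "shoulder_roll":  (-0.5, 1.5),
    "elbow":          (0.0, 2.5),
}

def retarget(human_angles):
    """Clamp tracked human joint angles into the robot's joint ranges."""
    cmd = {}
    for joint, (lo, hi) in ROBOT_LIMITS.items():
        cmd[joint] = float(np.clip(human_angles.get(joint, 0.0), lo, hi))
    return cmd

# One frame from a skeleton tracker (e.g., angles derived from RGB-D data).
frame = {"shoulder_pitch": 2.4, "shoulder_roll": 0.3, "elbow": 1.1}
print(retarget(frame))  # shoulder_pitch saturates at the robot limit 2.0
```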
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, Zhiang. "Deep-learning Approaches to Object Recognition from 3D Data." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1496303868914492.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
45

Goldstein, Adam B. "Responding to Moments of Learning." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/685.

Full text of the source
Abstract:
In the field of Artificial Intelligence in Education, many contributions have been made toward estimating student proficiency in Intelligent Tutoring Systems (cf. Corbett & Anderson, 1995). Although the community is increasingly capable of estimating how much a student knows, this does not shed much light on when the knowledge was acquired. In recent research (Baker, Goldstein, & Heffernan, 2010), we created a model that attempts to answer that exact question. We call the model P(J), for the probability that a student just learned from the last problem they answered. We demonstrated an analysis of changes in P(J) that we call "spikiness", defined as the maximum value of P(J) for a student/knowledge component (KC) pair divided by the average value of P(J) for that same student/KC pair. Spikiness is directly correlated with final student knowledge, meaning that spikes can be an early predictor of success. It has been shown that both over-practice and under-practice can be detrimental to student learning, so using this model can potentially help bias tutors toward ideal practice schedules. After demonstrating the validity of the P(J) model in both CMU's Cognitive Tutor and WPI's ASSISTments Tutoring System, we conducted a pilot study to test the utility of our model. The experiment included a balanced pre/post-test and three conditions for proficiency assessment, tested across six knowledge components. In the first condition, students are considered to have mastered a KC after correctly answering three questions in a row. The second condition uses Bayesian Knowledge Tracing and accepts a student as proficient once they earn a current knowledge probability (Ln) of 0.95 or higher. Finally, we test P(J), which accepts mastery if a student's P(J) value spikes after one problem and the first response on the next problem is correct. In this work, we discuss the details of deriving P(J), our experiment and its results, as well as potential ways this model could be utilized to improve the effectiveness of cognitive mastery learning.
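The three mastery conditions and the spikiness statistic lend themselves to a compact sketch. The P(J) and Ln sequences are assumed to come from a Bayesian Knowledge Tracing model, and the numeric spike threshold is an assumption, since the abstract does not specify one.

```python
# A minimal sketch of the three mastery-decision rules compared in the
# pilot study, plus the "spikiness" statistic, over one student/KC pair.
import numpy as np

def three_in_a_row(correct):
    """Mastery after three consecutive correct answers."""
    run = 0
    for i, c in enumerate(correct):
        run = run + 1 if c else 0
        if run == 3:
            return i
    return None

def bkt_threshold(ln, thresh=0.95):
    """Mastery once the BKT knowledge estimate Ln reaches the threshold."""
    hits = np.flatnonzero(np.asarray(ln) >= thresh)
    return int(hits[0]) if hits.size else None

def pj_rule(pj, correct, spike=0.2):
    """Mastery when P(J) spikes and the next first response is correct."""
    for i in range(len(pj) - 1):
        if pj[i] >= spike and correct[i + 1]:
            return i + 1
    return None

def spikiness(pj):
    """max P(J) / mean P(J) for a student/KC pair."""
    pj = np.asarray(pj, dtype=float)
    return pj.max() / pj.mean()

correct = [0, 1, 1, 1, 1]
pj = [0.02, 0.31, 0.05, 0.04, 0.03]
ln = [0.10, 0.55, 0.90, 0.96, 0.99]
print(three_in_a_row(correct), bkt_threshold(ln), pj_rule(pj, correct))  # 3 3 2
print(round(spikiness(pj), 2))  # 3.44
```

Note how the P(J) rule grants mastery earliest here (problem index 2), which is exactly the kind of practice-schedule difference the pilot study was designed to measure.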
APA, Harvard, Vancouver, ISO, and other styles
46

Edman, Anneli. "Combining Knowledge Systems and Hypermedia for User Co-operation and Learning." Doctoral thesis, Uppsala : Dept. of Information Science [Institutionen för informationsvetenskap], Univ, 2001. http://publications.uu.se/theses/91-506-1526-2/.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
47

Snyders, Sean. "Inductive machine learning bias in knowledge-based neurocomputing." Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53463.

Full text of the source
Abstract:
Thesis (MSc)--Stellenbosch University, 2003.
ENGLISH ABSTRACT: The integration of symbolic knowledge with artificial neural networks is becoming an increasingly popular paradigm for solving real-world problems. This paradigm, named knowledge-based neurocomputing, provides means for using prior knowledge to determine the network architecture, to program a subset of weights to induce a learning bias which guides network training, and to extract refined knowledge from trained neural networks. The role of neural networks then becomes that of knowledge refinement. It thus provides a methodology for dealing with uncertainty in the initial domain theory. In this thesis, we address several advantages of this paradigm and propose a solution for the open question of determining the strength of this learning, or inductive, bias. We develop a heuristic for determining the strength of the inductive bias that takes the network architecture, the prior knowledge, the learning method, and the training data into consideration. We apply this heuristic to well-known synthetic problems as well as to published, difficult real-world problems in the domains of molecular biology and medical diagnosis. We found that not only do the networks trained with this adaptive inductive bias show superior performance over networks trained with the standard method of determining the strength of the inductive bias, but also that the refined knowledge extracted from these trained networks delivers more concise and accurate domain theories.
AFRIKAANSE OPSOMMING (translated from the Afrikaans): The integration of symbolic knowledge with artificial neural networks is becoming an increasingly popular paradigm for solving real-world problems. This paradigm, called knowledge-based neurocomputing, provides the ability to use prior knowledge to determine the network architecture, to program a subset of weights to induce a learning bias that guides network training, and to extract refined knowledge from trained networks. The role of neural networks then becomes that of knowledge refinement. It thus provides a methodology for dealing with uncertainty in the initial domain theory. In this thesis we address several advantages contained in this paradigm and propose a solution for the open question of determining the weight of this learning, or inductive, bias. We develop a heuristic for determining the inductive bias that takes into account the network architecture, the prior knowledge, the learning method, and the data for the learning process. We apply this heuristic to well-known synthetic problems as well as to published, difficult real-world problems in the fields of molecular biology and medical diagnostics. We find that not only do the networks trained with the adaptive inductive bias show superior performance over the networks trained with the standard method of determining the weight of the inductive bias, but also that the refined knowledge extracted from these trained networks yields more concise and accurate domain theories.
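The idea of programming a subset of weights from prior knowledge, with the strength of the inductive bias as a free parameter, can be sketched as follows; the rule, its encoding, and the omega values are illustrative assumptions rather than the thesis's heuristic.

```python
# A minimal sketch of encoding the symbolic rule "y :- a AND NOT b"
# into one unit's weights, where omega controls how strongly the
# prior knowledge biases the network before training.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def program_rule(omega):
    """Positive weight for antecedent a, negative for negated b."""
    w = np.array([omega, -omega])
    b = -(omega / 2.0)               # threshold between 0 and 1 true inputs
    return w, b

inputs = np.array([[1, 0], [1, 1], [0, 0]])  # (a, b) truth assignments
for omega in (1.0, 4.0, 8.0):                # weak -> strong prior commitment
    w, b = program_rule(omega)
    print(omega, np.round(sigmoid(inputs @ w + b), 3))
# Only (a=1, b=0) activates the unit, and the decision sharpens as
# omega (the inductive bias strength) grows.
```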
APA, Harvard, Vancouver, ISO, and other styles
48

Sun, Feng-Tso. "Nonparametric Discovery of Human Behavior Patterns from Multimodal Data." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/359.

Full text of the source
Abstract:
Recent advances in sensor technologies and the growing interest in context-aware applications, such as targeted advertising and location-based services, have led to a demand for understanding human behavior patterns from sensor data. People engage in routine behaviors. Automatic routine discovery goes beyond low-level activity recognition, such as sitting or standing, and analyzes human behaviors at a higher level (e.g., commuting to work). The goal of the research presented in this thesis is to automatically discover high-level semantic human routines from low-level sensor streams. One recent line of research is to mine human routines from sensor data using parametric topic models. The main shortcoming of parametric models is that they assume a fixed, pre-specified parameter regardless of the data. Choosing an appropriate parameter usually requires an inefficient trial-and-error model selection process. Furthermore, it is even more difficult to find optimal parameter values in advance for personalized applications. The research presented in this thesis offers a novel nonparametric framework for human routine discovery that can infer high-level routines without knowing the number of latent low-level activities beforehand. More specifically, the framework automatically finds the size of the low-level feature vocabulary from sensor feature vectors at the vocabulary extraction phase. At the routine discovery phase, the framework further automatically selects the appropriate number of latent low-level activities and discovers latent routines. Moreover, we propose a new generative graphical model to incorporate multimodal sensor streams for the human activity discovery task. The hypotheses and approaches presented in this thesis are evaluated on public datasets from two routine domains: two daily-activity datasets and a transportation-mode dataset. Experimental results show that our nonparametric framework can automatically learn the appropriate model parameters from multimodal sensor data without any form of manual model selection procedure and can outperform traditional parametric approaches for human routine discovery tasks.
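The nonparametric selection of the number of latent activities can be illustrated with a Dirichlet-process mixture; scikit-learn's BayesianGaussianMixture is used here as a stand-in for the thesis's own framework, and the data are synthetic stand-ins for real sensor feature vectors.

```python
# A minimal sketch: a DP mixture is given room for up to ten latent
# activities but infers that only three carry non-trivial mass.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Three true "activities" generate the 2-D feature vectors.
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(200, 2))
               for m in ([0, 0], [3, 0], [0, 3])])

dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

active = (dpgmm.weights_ > 0.01).sum()   # components with non-trivial mass
print("effective latent activities:", active)  # typically 3
```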
APA, Harvard, Vancouver, ISO, and other styles
49

Boulis, Constantinos. "Topic learning in text and conversational speech." Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/5914.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
50

Fu, Tianjun. "CSI in the Web 2.0 Age: Data Collection, Selection, and Investigation for Knowledge Discovery." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/217073.

Full text of the source
Abstract:
The growing popularity of various Web 2.0 media has created massive amounts of user-generated content such as online reviews, blog articles, shared videos, forum threads, and wiki pages. Such content provides insights into web users' preferences and opinions, online communities, knowledge generation, etc., and presents opportunities for many knowledge discovery problems. However, several challenges need to be addressed: the data collection procedure has to deal with the unique characteristics and structures of various Web 2.0 media; advanced data selection methods are required to identify data relevant to specific knowledge discovery problems; and the interactions between Web 2.0 users that are often embedded in user-generated content need effective methods to identify, model, and analyze them. In this dissertation, I address the above challenges and aim at three types of knowledge discovery tasks: (data) collection, selection, and investigation. Organized in this "CSI" framework, five studies are presented which explore and propose solutions to these tasks for particular Web 2.0 media. In Chapter 2, I study focused and hidden Web crawlers and propose a novel crawling system for Dark Web forums by addressing several issues unique to hidden web data collection. In Chapter 3, I explore the use of both topical and sentiment information in web crawling. This information is also used to label nodes in web graphs that are employed by a graph-based tunneling mechanism to improve collection recall. Chapter 4 further extends the work in Chapter 3 by exploring the possibility of using other graph comparison techniques in tunneling for focused crawlers. A subtree-based tunneling method which can scale up to large graphs is proposed and evaluated. Chapter 5 examines the usefulness of user-generated content in online video classification. Three types of text features are extracted from the collected user-generated content and utilized by several feature-based classification techniques to demonstrate the effectiveness of the proposed text-based video classification framework. Chapter 6 presents an algorithm to identify forum user interactions and shows how they can be used for knowledge discovery. The algorithm utilizes a bevy of system and linguistic features and adopts several similarity-based methods to account for interactional idiosyncrasies.
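A best-first frontier is the core of any focused crawler. The toy below shows relevance-ordered crawling over an invented link graph; the pages, topic lexicon, and scoring are illustrative assumptions, whereas the dissertation's systems add sentiment labels, tunneling, and graph comparison.

```python
# A toy focused-crawler skeleton: pages are scored for topical
# relevance and a priority queue decides what to "fetch" next.
import heapq

PAGES = {   # hypothetical page -> (text, outlinks); no real network I/O
    "seed": ("forum index on security threads", ["a", "b"]),
    "a":    ("dark web forum discussion of exploits", ["c"]),
    "b":    ("cooking recipes and gardening", ["d"]),
    "c":    ("hidden service forum credentials exchange", []),
    "d":    ("flower arrangement tips", []),
}
TOPIC = {"forum", "security", "exploits", "hidden", "dark"}

def relevance(text):
    return len(set(text.split()) & TOPIC) / len(TOPIC)

frontier, seen, order = [(-1.0, "seed")], set(), []
while frontier:
    _, url = heapq.heappop(frontier)
    if url in seen:
        continue
    seen.add(url)
    text, links = PAGES[url]
    order.append((url, round(relevance(text), 2)))
    for nxt in links:                 # enqueue children, most relevant first
        heapq.heappush(frontier, (-relevance(PAGES[nxt][0]), nxt))
print(order)  # the relevant branch ("a", "c") is visited before "b", "d"
```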
APA, Harvard, Vancouver, ISO, and other styles