Dissertations on the topic "Theory of applied learning of competencivism"

To view other types of publications on this topic, follow the link: Theory of applied learning of competencivism.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 dissertations for your research on the topic "Theory of applied learning of competencivism".

Next to each entry in the list of references you will find an "Add to bibliography" button. Use it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, if these details are available in the record's metadata.

Browse dissertations across a wide range of disciplines and compile an accurate bibliography.

1

Mauricio, Palacio Sebastián. "Machine-Learning Applied Methods." Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/669286.

Full text of the source
Abstract:
The presented discourse followed several topics where every new chapter introduced an economic prediction problem and showed how traditional approaches can be complemented with new techniques like machine learning and deep learning. These powerful tools, combined with principles of economic theory, are greatly increasing the scope for empiricists. Chapter 3 addressed this discussion by progressively moving from Ordinary Least Squares, Penalized Linear Regressions and Binary Trees to advanced ensemble trees. Results showed that ML algorithms significantly outperform statistical models in terms of predictive accuracy. Specifically, ML models perform 49-100% better than unbiased methods. However, we cannot rely on their parameter estimates. For example, Chapter 4 introduced a net prediction problem regarding fraudulent property claims in insurance. Despite the fact that we got extraordinary results in terms of predictive power, the complexity of the problem restricted us from getting behavioral insight. Contrarily, statistical models are easily interpretable. Coefficients give us the sign, the magnitude and the statistical significance. We can learn behavior from marginal impacts and elasticities. Chapter 5 analyzed another prediction problem in the insurance market, particularly how the combination of self-reported data and risk categorization could improve the detection of risky potential customers in insurance markets. Results were also quite impressive in terms of prediction, but again, we did not know anything about the direction or the magnitude of the features. However, by using a Probit model, we showed the benefits of combining statistical models with ML-DL models. The Probit model let us get generalizable insights on what types of customers are likely to misreport, enhancing our results. Likewise, Chapter 2 is a clear example of how causal inference can benefit from ML and DL methods. These techniques allowed us to capture that 70 days before each auction there were abnormal behaviors in daily prices. By doing so, we could apply a solid statistical model and estimate precisely what the net effect of the mandated auctions in Spain was. This thesis aims at combining the advantages of both methodologies, machine learning and econometrics, boosting their strengths and attenuating their weaknesses. Thus, we used ML and statistical methods side by side, exploring predictive performance and interpretability. Several conclusions can be inferred from the nature of both approaches. First, as we have observed throughout the chapters, ML and traditional econometric approaches solve fundamentally different problems. We use ML and DL techniques to predict, not in terms of traditional forecasting, but by making our models generalizable to unseen data. On the other hand, traditional econometrics has been focused on causal inference and parameter estimation. Therefore, ML is not replacing traditional techniques, but rather complementing them. Second, ML methods focus on out-of-sample data instead of in-sample data, while statistical models typically focus on goodness of fit. It is then not surprising that ML techniques consistently outperformed traditional techniques in terms of predictive accuracy. The cost is then biased estimators. Third, the tradition in economics has been to choose a unique model based on theoretical principles, to fit the full dataset to it and, in consequence, to obtain unbiased estimators and their respective confidence intervals.
On the other hand, ML relies on data-driven model selection and does not consider causal inference. Instead of manually choosing the covariates, the functional form is determined by the data. This also translates into the main weakness of ML, which is the lack of inference about the underlying data-generating process; that is, we cannot derive economically meaningful conclusions from the coefficients. Focusing on out-of-sample performance comes at the expense of the ability to infer causal effects, due to the lack of standard errors on the coefficients. Therefore, predictors are typically biased, and estimators may not be normally distributed. Thus, we can conclude that in terms of out-of-sample performance it is hard to compete against ML models. However, ML cannot contend with the powerful insights that causal inference analysis gives us, which allow us not only to identify the most important variables and their magnitudes but also to understand economic behaviors.
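As an illustration of the trade-off described above, interpretable coefficients versus out-of-sample accuracy, the following sketch compares an OLS fit with a gradient-boosted ensemble on held-out synthetic data. It is not taken from the thesis; the data-generating process and the scikit-learn estimators are assumptions chosen for illustration.

```python
# Illustrative sketch only: synthetic data, scikit-learn estimators (assumed available).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
# Nonlinear data-generating process: a linear model is misspecified here.
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] ** 2 + np.sin(3 * X[:, 2]) + rng.normal(scale=0.5, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ols = LinearRegression().fit(X_tr, y_tr)                          # interpretable coefficients
gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)   # flexible ensemble

print("OLS coefficients:", ols.coef_)   # signs and magnitudes support economic interpretation
print("OLS test MSE:", mean_squared_error(y_te, ols.predict(X_te)))
print("GBM test MSE:", mean_squared_error(y_te, gbm.predict(X_te)))  # usually lower, but no coefficients to read
```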
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Yue. "Sparsity in Image Processing and Machine Learning: Modeling, Computation and Theory." Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1523017795312546.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Andersson, Carl. "Deep learning applied to system identification : A probabilistic approach." Licentiate thesis, Uppsala universitet, Avdelningen för systemteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-397563.

Full text of the source
Abstract:
Machine learning has been applied to sequential data for a long time in the field of system identification. As deep learning grew during the late 2000s, machine learning was again applied to sequential data, but from a new angle, not utilizing much of the knowledge from system identification. Likewise, the field of system identification has yet to adopt many of the recent advancements in deep learning. This thesis is a response to that. It introduces the field of deep learning in a probabilistic machine learning setting for problems known from system identification. Our goal for sequential modeling within the scope of this thesis is to obtain a model with good predictive and/or generative capabilities. The motivation behind this is that such a model can then be used in other areas, such as control or reinforcement learning. The model could also be used as a stepping stone for other machine learning problems or for purely recreational purposes. Paper I and Paper II focus on how to apply deep learning to common system identification problems. Paper I introduces a novel way of regularizing the impulse response estimator for a system. In contrast to previous methods using Gaussian processes for this regularization, we propose to parameterize the regularization with a neural network and train it using a large dataset. Paper II introduces deep learning and many of its core concepts for a system identification audience. In the paper we also evaluate several contemporary deep learning models on standard system identification benchmarks. Paper III is the odd one out in the collection in that it focuses on the mathematical formulation and evaluation of calibration in classification, especially for deep neural networks. The paper proposes a new formalized notation for calibration and some novel ideas for evaluating calibration. It also provides some experimental results on calibration evaluation.
APA, Harvard, Vancouver, ISO, and other styles
4

Mouton, Hildegarde Suzanne. "Reinforcement learning : theory, methods and application to decision support systems." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/5304.

Full text of the source
Abstract:
Thesis (MSc (Applied Mathematics))--University of Stellenbosch, 2010.
ENGLISH ABSTRACT: In this dissertation we study the machine learning subfield of Reinforcement Learning (RL). After developing a coherent background, we apply a Monte Carlo (MC) control algorithm with exploring starts (MCES), as well as an off-policy Temporal-Difference (TD) learning control algorithm, Q-learning, to a simplified version of the Weapon Assignment (WA) problem. For the MCES control algorithm, a discount parameter of τ = 1 is used. This gives very promising results when applied to 7 × 7 grids, as well as 71 × 71 grids. The same discount parameter cannot be applied to the Q-learning algorithm, as it causes the Q-values to diverge. We take a greedy approach, setting ε = 0, and vary the learning rate (α) and the discount parameter (τ). Experimentation shows that the best results are found with α set to 0.1 and τ constrained to the region 0.4 ≤ τ ≤ 0.7. The MC control algorithm with exploring starts gives promising results when applied to the WA problem. It performs significantly better than the off-policy TD algorithm, Q-learning, even though it is almost twice as slow. The modern battlefield is a fast-paced, information-rich environment, where discovery of intent, situation awareness and the rapid evolution of concepts of operation and doctrine are critical success factors. Combining the techniques investigated and tested in this work with other techniques in Artificial Intelligence (AI) and modern computational techniques may hold the key to solving some of the problems we now face in warfare.
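A minimal tabular sketch of the two ingredients mentioned above, a greedy (ε = 0) policy and the Q-learning update with learning rate α and a discount factor (written τ in the thesis, gamma below), on a toy problem. The environment, grid size and reward structure here are invented for illustration and are not the weapon assignment problem itself.

```python
# Toy tabular Q-learning sketch (illustrative; not the thesis's weapon-assignment environment).
import numpy as np

n_states, n_actions = 49, 4          # e.g. a 7 x 7 grid with 4 moves
alpha, gamma = 0.1, 0.6              # learning rate and discount, in the ranges the abstract reports
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(state, action):
    """Hypothetical environment: random next state, reward 1 only in the final state."""
    next_state = int(rng.integers(n_states))
    return next_state, float(next_state == n_states - 1)

for episode in range(500):
    s = int(rng.integers(n_states))
    for t in range(50):
        a = int(np.argmax(Q[s]))                     # greedy action selection (epsilon = 0)
        s_next, r = step(s, a)
        # Off-policy temporal-difference update (Q-learning).
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next
```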
APA, Harvard, Vancouver, ISO, and other styles
5

Grieve, Susan M. "Cognitive Load Theory Principles Applied to Simulation Instructional Design for Novice Health Professional Learners." Diss., NSUWorks, 2019. https://nsuworks.nova.edu/hpd_pt_stuetd/78.

Full text of the source
Abstract:
While the body of evidence supporting the use of simulation-based learning in the education of health professionals is growing, how or why simulation-based learning works is not yet understood. There is a clear need for evidence, grounded in contemporary educational theory, to clarify the features of simulation instructional design that optimize learning outcomes and efficiency in health care professional students. Cognitive Load Theory (CLT) is a theoretical framework focused on a learner's working memory capacity. One principle of CLT is example-based learning. While this principle has been applied in both traditional classroom and laboratory settings, and has shown positive performance and learning outcomes, example-based learning has not yet been applied to the simulation setting. This study had two main objectives: to explore whether the example-based learning principle could successfully be applied to the simulation learning environment, and to establish response process validation evidence for a tool designed to measure types of cognitive load. Fifty-eight novice students from nursing, podiatric medicine, physician assistant, physical and occupational therapy programs participated in a blinded randomized control study. The independent variable was the simulation brief. Participants were randomly assigned to either a traditional brief or a facilitated tutored problem brief. Performance outcomes were measured with verbal communication skills presented in the Introduction, Situation, Background, Assessment, Recommendation (I-SBAR) format. Response process evidence was collected from cognitive interviews of 11 students. Results indicate that participation in a tutored problem brief led to a statistically significant difference, t(52) = -3.259, p = .002, in verbal communication performance compared to students who participated in a traditional brief. The effect size for this comparison was d = (6.06 - 4.61)/1.63 = .89 (95% CI 0.32-1.44). Response process evidence demonstrated that additional factors unique to the simulation learning environment should be accounted for when measuring cognitive load in simulation-based learning (SBL). This study suggests that example-based learning principles can be successfully applied to SBL and result in positive performance outcomes for health professions students. Additionally, measures of cognitive load do not appear to capture all contributions to load imposed by the simulation environment.
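The reported effect size follows the usual Cohen's d formula, the difference in group means divided by the pooled standard deviation. A quick check of the arithmetic quoted above (the means 6.06 and 4.61 and the pooled SD 1.63 come from the abstract itself):

```python
# Reproducing the effect-size arithmetic quoted in the abstract.
mean_tutored, mean_traditional, pooled_sd = 6.06, 4.61, 1.63
d = (mean_tutored - mean_traditional) / pooled_sd   # Cohen's d = mean difference / pooled SD
print(round(d, 2))                                  # 0.89, matching the reported d = .89
```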
APA, Harvard, Vancouver, ISO, and other styles
6

Chim, Tat-mei Alice, and 詹達美. "An instructional design theory guide for blended learning courses." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30406213.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Hu, Qiao Ph D. Massachusetts Institute of Technology. "Application of statistical learning theory to plankton image analysis." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/39206.

Full text of the source
Abstract:
Thesis (Ph. D.)--Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and the Woods Hole Oceanographic Institution), 2006.
Includes bibliographical references (leaves 155-173).
A fundamental problem in limnology and oceanography is the inability to quickly identify and map distributions of plankton. This thesis addresses the problem by applying statistical machine learning to video images collected by an optical sampler, the Video Plankton Recorder (VPR). The research is focused on the development of a real-time automatic plankton recognition system to estimate plankton abundance. The system includes four major components: pattern representation/feature measurement, feature extraction/selection, classification, and abundance estimation. After an extensive study of a traditional learning vector quantization (LVQ) neural network (NN) classifier built on shape-based features and different pattern representation methods, I developed a classification system that combines multi-scale co-occurrence matrix features with a support vector machine classifier. This new method outperforms the traditional shape-based NN classifier by 12% in classification accuracy. Subsequent plankton abundance estimates are improved in the regions of low relative abundance by more than 50%. Neither the NN nor the SVM classifier has a rejection metric. In this thesis, two rejection metrics were developed. One was based on the Euclidean distance in the feature space for the NN classifier. The other used dual-classifier (NN and SVM) voting as output. Using the dual-classification method alone yields almost as good abundance estimation as human labeling on a test bed of real-world data. However, the distance rejection metric for the NN classifier might be more useful when the training samples are not "good", i.e., representative of the field data. In summary, this thesis advances the current state of the art in plankton recognition by demonstrating that multi-scale texture-based features are more suitable for classifying field-collected images. The system was verified on a very large real-world dataset in a systematic way for the first time. The accomplishments include developing a multi-scale co-occurrence matrix and support vector machine system, a dual-classification system, automatic correction in abundance estimation, and the ability to obtain accurate abundance estimates from real-time automatic classification. The methods developed are generic and are likely to work on a range of other image classification applications.
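A schematic of the feature pipeline described above, gray-level co-occurrence counts feeding a support vector machine, written with a hand-rolled single-scale co-occurrence computation so that no particular image library has to be assumed; the images and labels below are placeholders, not VPR data.

```python
# Sketch of co-occurrence features + SVM classification (placeholder data, single scale/offset).
import numpy as np
from sklearn.svm import SVC

def cooccurrence_features(img, levels=8):
    """Normalized gray-level co-occurrence matrix for the horizontal neighbor offset (1, 0)."""
    q = (img * (levels - 1)).astype(int)              # quantize grayscale [0, 1) into `levels` bins
    C = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        C[i, j] += 1                                   # count co-occurring neighbor pairs
    C /= C.sum()
    return C.ravel()                                   # flatten the matrix into a feature vector

rng = np.random.default_rng(0)
images = rng.random((100, 32, 32))                     # placeholder "plankton" images
labels = rng.integers(0, 2, size=100)                  # placeholder taxa labels

X = np.array([cooccurrence_features(im) for im in images])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```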
APA, Harvard, Vancouver, ISO, and other styles
8

Shi, Bin. "A Mathematical Framework on Machine Learning: Theory and Application." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3876.

Full text of the source
Abstract:
The dissertation addresses the research topics of machine learning outlined below. We developed theory for traditional first-order algorithms from convex optimization and provide new insights into nonconvex objective functions from machine learning. Based on this theoretical analysis, we designed and developed new algorithms to overcome the difficulty of nonconvex objectives and to accelerate convergence to the desired result. In this thesis, we answer two questions: (1) How to design a step size for gradient descent with random initialization? (2) Can we accelerate the current convex optimization algorithms and extend them to nonconvex objectives? For application, we apply the optimization algorithms in sparse subspace clustering. A new algorithm, CoCoSSC, is proposed to improve the current sample complexity in the presence of noise and missing entries. Gradient-based optimization methods have been increasingly modeled and interpreted by ordinary differential equations (ODEs). Existing ODEs in the literature are, however, inadequate to distinguish between two fundamentally different methods, Nesterov's accelerated gradient method for strongly convex functions (NAG-SC) and Polyak's heavy-ball method. In this thesis, we derive high-resolution ODEs as more accurate surrogates for the two methods, as well as for Nesterov's accelerated gradient method for general convex functions (NAG-C). These novel ODEs can be integrated into a general framework that allows for a fine-grained analysis of the discrete optimization algorithms by translating properties of the amenable ODEs into those of their discrete counterparts. As a first application of this framework, we identify the effect of a term referred to as gradient correction in NAG-SC but not in the heavy-ball method, shedding deep insight into why the former achieves acceleration while the latter does not. Moreover, in this high-resolution ODE framework, NAG-C is shown to boost the squared gradient norm minimization at the inverse cubic rate, which is the sharpest known rate concerning NAG-C itself. Finally, by modifying the high-resolution ODE of NAG-C, we obtain a family of new optimization methods that are shown to maintain the same accelerated convergence rates as NAG-C for minimizing convex functions.
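To make the contrast between the two momentum methods concrete, here is a toy comparison of Polyak's heavy-ball method and Nesterov's accelerated gradient on a strongly convex quadratic. The step size, momentum coefficient and test function are arbitrary illustrative choices, and this is the standard discrete form of the methods rather than the high-resolution ODE analysis of the thesis.

```python
# Heavy-ball vs. Nesterov's accelerated gradient on f(x) = 0.5 * x^T A x (illustrative settings).
import numpy as np

A = np.diag([1.0, 100.0])            # ill-conditioned quadratic
def grad(x):
    return A @ x

s, beta = 0.009, 0.9                 # step size and momentum (arbitrary but stable choices)

x_hb = x_nag = x_prev_hb = x_prev_nag = np.array([1.0, 1.0])
for k in range(200):
    # Heavy-ball: momentum added after the gradient is evaluated at the current point.
    x_hb, x_prev_hb = x_hb - s * grad(x_hb) + beta * (x_hb - x_prev_hb), x_hb
    # Nesterov: gradient evaluated at the extrapolated ("look-ahead") point.
    y = x_nag + beta * (x_nag - x_prev_nag)
    x_nag, x_prev_nag = y - s * grad(y), x_nag

print("heavy-ball distance to optimum:", np.linalg.norm(x_hb))
print("nesterov distance to optimum:  ", np.linalg.norm(x_nag))
```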
APA, Harvard, Vancouver, ISO, and other styles
9

Youngleson, Penelope. "Flourishing in fragility: how to build antifragile ecosystems of learning, that nurture healthy vulnerability, in fragile environments in the Western Cape (South Africa) with at-risk learners." Master's thesis, Faculty of Commerce, 2019. http://hdl.handle.net/11427/32352.

Full text of the source
Abstract:
This research is a qualitative, autoethnographic study of antifragility in fragile spaces. It was written using data from Applied Theatre workshops, rehearsals and exercises; as well as questionnaires, semi-structured interviews and open discussions in focus groups with at-risk learners from Quintile 1-3 high schools, their educators, senior management staff, parents, caregivers and peers. Methodologically, social constructionism functioned as the schematic map that positioned the writing/writer between the self and others, and provided the philosophical scaffolding necessary to elucidate data analysis and interpretation. Institutional theory and organisational culture centered the analytical framework once thematic analysis had been conducted across the data sets. This reflexive, feminist paper exhumes and explores fragile spaces in Western Cape Quintile 1-3 schools, using drama and conscious, performed acts of vulnerability (on and off stage) as a means of activating antifragility in the performer and the observer. The data collection took place in the Western Cape in South Africa, and specifically refers to learners and their networks and blended learning ecosystems in that context. Noted conversants include Brown, Taleb and Butler. The findings of this study include a shift in how we define “success” in a fragile environment and an acknowledgment of antifragility as a strategy that is always in motion. Static achievement and a singular definition of learner excellence are shown to be the undesirable opposite of iterative antifragility and adaptive, holistic executive function and socio-cultural competence; and learner wholeness (as experienced and embodied by the learner themselves) is referred to as “flourishing”.
APA, Harvard, Vancouver, ISO, and other styles
10

Opdenbosch, Patrick. "Auto-Calibration and Control Applied to Electro-Hydraulic Poppet Valves." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19758.

Full text of the source
Abstract:
Modern control design is sometimes accompanied by the challenge of dealing with nonlinear systems or plants. In some situations, due to the complexity of the plant and the unavailability of suitable models, the controls engineer opts for developing control schemes based on look-up tables. These tables, typically populated with the steady state inverse input-output characteristics of the plant, are used to compensate the plant via open-loop or closed-loop to solve the control problem. In an effort to present a new alternative, a general theoretical framework for online auto-calibration and control of general nonlinear systems is developed in this dissertation. This technique simultaneously learns the inverse input-state mapping (i.e. the calibration mapping) of the plant while forcing its state to follow a prescribed desired trajectory. The main requirements for the successful application of the novel control law are knowledge of the order of the plant and some generic data to initialize the inverse mapping. This last requirement can be easily fulfilled by using steady-state data or the equilibrium points of the plant. In this approach, the inverse mapping is learned from the current and past states. The learning is accomplished in a composite manner by employing input and state errors. The map is used simultaneously in the feedforward path to control the plant. The performance of the plant subject to this novel controller is validated through simulations and experimental data. The new control method is applied to a novel Electro-Hydraulic Poppet Valve (EHPV). These valves are used in a Wheatstone bridge arrangement for motion control of hydraulic actuators. This is preferred over the conventional use of spool valves due to the energy savings potential. It is shown in this dissertation that this method improves the value of using these types of valves for motion control in hydraulics. This is due to the combination of self-learning (auto-calibration) and better performance for a more efficient operation of hydraulic equipment. Additionally, it is shown that the auto-calibration of the valves can be used for health monitoring of the same, which consequently improves their reliability and expedites maintenance downtime.
APA, Harvard, Vancouver, ISO, and other styles
11

Yu, Shen. "A Bayesian machine learning system for recognizing group behaviour." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:8881/R/?func=dbin-jump-full&object_id=32565.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
12

Gard, Rikard. "Design-based and Model-assisted estimators using Machine learning methods : Exploring the k-Nearest Neighbor metod applied to data from the Recreational Fishing Survey." Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-72488.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
13

Rumantir, Grace Widjaja. "Minimum message length criterion for second-order polynomial model selection applied to tropical cyclone intensity forecasting." Monash University, School of Computer Science and Software Engineering, 2003. http://arrow.monash.edu.au/hdl/1959.1/5813.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
14

Agerberg, Jens. "Statistical Learning and Analysis on Homology-Based Features." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273581.

Full text of the source
Abstract:
Stable rank has recently been proposed as an invariant to encode the result of persistent homology, a method used in topological data analysis. In this thesis we develop methods for statistical analysis as well as machine learning methods based on stable rank. As stable rank may be viewed as a mapping to a Hilbert space, a kernel can be constructed from the inner product in this space. First, we investigate this kernel in the context of kernel learning methods such as support-vector machines. Next, using the theory of kernel embedding of probability distributions, we give a statistical treatment of the kernel by showing some of its properties and develop a two-sample hypothesis test based on the kernel. As an alternative approach, a mapping to a Euclidean space with learnable parameters can be conceived, serving as an input layer to a neural network. The developed methods are first evaluated on synthetic data. Then the two-sample hypothesis test is applied on the OASIS open access brain imaging dataset. Finally a graph classification task is performed on a dataset collected from Reddit.
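The kernel two-sample machinery referred to above can be sketched with a generic kernel standing in for the stable-rank kernel. The snippet below computes a (biased) maximum mean discrepancy statistic between two samples with an RBF kernel, which is one common way such a test is built; the bandwidth and the data are placeholders.

```python
# Kernel two-sample (MMD) sketch with an RBF kernel standing in for the stable-rank kernel.
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd_biased(X, Y, sigma=1.0):
    """Biased MMD^2 estimate: mean k(x,x') + mean k(y,y') - 2 mean k(x,y)."""
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2 * rbf_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 3))   # placeholder feature vectors (e.g. discretized invariants)
Y = rng.normal(0.5, 1.0, size=(200, 3))
print(mmd_biased(X, Y))                   # larger values suggest the two samples differ

# A permutation test (recomputing MMD under shuffled group labels) would turn this into a hypothesis test.
```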


APA, Harvard, Vancouver, ISO, and other styles
15

Abbas, Kaja Moinudeen. "Bayesian Probabilistic Reasoning Applied to Mathematical Epidemiology for Predictive Spatiotemporal Analysis of Infectious Diseases." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5302/.

Full text of the source
Abstract:
Probabilistic reasoning under uncertainty is well suited to the analysis of disease dynamics. The stochastic nature of disease progression is modeled by applying the principles of Bayesian learning. Bayesian learning predicts the disease progression, including prevalence and incidence, for a geographic region and demographic composition. Public health resources, prioritized by the order of risk levels of the population, will efficiently minimize the disease spread and curtail the epidemic at the earliest. A Bayesian network representing the outbreak of influenza and pneumonia in a geographic region is ported to a newer region with a different demographic composition. Upon analysis for the newer region, the corresponding prevalence of influenza and pneumonia among the different demographic subgroups is inferred. Bayesian reasoning coupled with a disease timeline is used to reverse engineer an influenza outbreak for a given geographic and demographic setting. The temporal flow of the epidemic among the different sections of the population is analyzed to identify the corresponding risk levels. In comparison to uniformly spread vaccination, prioritizing the limited vaccination resources to the higher-risk groups results in relatively lower influenza prevalence. HIV incidence in Texas from 1989-2002 is analyzed using demographic-based epidemic curves. Dynamic Bayesian networks are integrated with probability distributions of HIV surveillance data coupled with the census population data to estimate the proportion of HIV incidence among the different demographic subgroups. Demographic-based risk analysis lends itself to observation of a varied spectrum of HIV risk among the different demographic subgroups. A methodology using hidden Markov models is introduced that enables investigation of the impact of social behavioral interactions on the incidence and prevalence of infectious diseases. The methodology is presented in the context of simulated disease outbreak data for influenza. Probabilistic reasoning analysis enhances the understanding of disease progression in order to identify the critical points of surveillance, control and prevention. Public health resources, prioritized by the order of risk levels of the population, will efficiently minimize the disease spread and curtail the epidemic at the earliest.
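The kind of Bayesian reasoning described above, inferring prevalence and the subgroup composition of cases from conditional probability tables, can be sketched with a hand-coded two-node network. All numbers below are invented for illustration and are not taken from the dissertation.

```python
# Minimal Bayesian-network-style calculation (invented numbers, two nodes: AgeGroup -> Influenza).
age_prior = {"child": 0.25, "adult": 0.55, "senior": 0.20}          # P(age group)
p_flu_given_age = {"child": 0.12, "adult": 0.06, "senior": 0.15}    # P(influenza | age group)

# Marginal prevalence: sum over subgroups of P(flu | age) * P(age).
p_flu = sum(p_flu_given_age[a] * age_prior[a] for a in age_prior)

# Posterior composition of cases: P(age | flu) by Bayes' rule, used to prioritize subgroups.
p_age_given_flu = {a: p_flu_given_age[a] * age_prior[a] / p_flu for a in age_prior}

print("marginal prevalence:", round(p_flu, 4))
print("share of cases by subgroup:", {a: round(p, 3) for a, p in p_age_given_flu.items()})
```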
APA, Harvard, Vancouver, ISO, and other styles
16

Danielson, Jared Andrew. "The Design, Development and Evaluation of a Web-based Tool for Helping Veterinary Students Learn How to Classify Clinical Laboratory Data." Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/28511.

Full text of the source
Abstract:
Veterinary students face the difficult task of learning to classify clinical laboratory data. In an effort to make this task easier, a computer and web based tool known as the Problem List Generator (PLG) was designed based on current literature dealing with learning theory and medical education which are reviewed in chapter 1. Chapter 2 describes the design and the development process for the PLG. The PLG allows the students to access any number of cases (determined by the professor) of increasing complexity which provide signalment, history, physical exam, and laboratory data for a number of patients. Using the PLG, students analyze the data, identify data abnormalities and mechanisms, arrange them in a problem list, diagnose the problem, and compare their problem list and diagnosis to an expert problem list and diagnosis. The PLG was evaluated using a four step evaluation process involving an expert review, one-to-one evaluations, small group evaluations, and a two-part field trial, and was evaluated in terms of clarity, feasibility, and impact. The PLG is usable, in terms of clarity and feasibility, though fixes are recommended. There is no evidence to infer, statistically, that the PLG has any effect on learning outcomes. However, trends in the quantitative data and logical inference based on the context of the evaluation suggest that the PLG might help students, particularly those of low and average ability to produce more accurate problem lists.
APA, Harvard, Vancouver, ISO, and other styles
17

Liu, Yukang. "Virtualized Welding Based Learning of Human Welder Behaviors for Intelligent Robotic Welding." UKnowledge, 2014. http://uknowledge.uky.edu/ece_etds/51.

Full text of the source
Abstract:
Combining human welder (with intelligence and sensing versatility) and automated welding robots (with precision and consistency) can lead to next generation intelligent welding systems. In this dissertation intelligent welding robots are developed by process modeling / control method and learning the human welder behavior. Weld penetration and 3D weld pool surface are first accurately controlled for an automated Gas Tungsten Arc Welding (GTAW) machine. Closed-form model predictive control (MPC) algorithm is derived for real-time welding applications. Skilled welder response to 3D weld pool surface by adjusting the welding current is then modeled using Adaptive Neuro-Fuzzy Inference System (ANFIS), and compared to the novice welder. Automated welding experiments confirm the effectiveness of the proposed human response model. A virtualized welding system is then developed that enables transferring the human knowledge into a welding robot. The learning of human welder movement (i.e., welding speed) is first realized with Virtual Reality (VR) enhancement using iterative K-means based local ANFIS modeling. As a separate effort, the learning is performed without VR enhancement utilizing a fuzzy classifier to rank the data and only preserve the high ranking “correct” response. The trained supervised ANFIS model is transferred to the welding robot and the performance of the controller is examined. A fuzzy weighting based data fusion approach to combine multiple machine and human intelligent models is proposed. The data fusion model can outperform individual machine-based control algorithm and welder intelligence-based models (with and without VR enhancement). Finally a data-driven approach is proposed to model human welder adjustments in 3D (including welding speed, arc length, and torch orientations). Teleoperated training experiments are conducted in which a human welder tries to adjust the torch movements in 3D based on his observation on the real-time weld pool image feedback. The data is off-line rated by the welder and a welder rating system is synthesized. ANFIS model is then proposed to correlate the 3D weld pool characteristic parameters and welder’s torch movements. A foundation is thus established to rapidly extract human intelligence and transfer such intelligence into welding robots.
APA, Harvard, Vancouver, ISO, and other styles
18

Lindahl, Fred. "Detection of Sparse and Weak Effects in High-Dimensional Supervised Learning Problems, Applied to Human Microbiome Data." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288503.

Full text of the source
Abstract:
This project studies the signal detection and identification problem in high-dimensional noisy data and the possibility of applying it to microbiome data. An extensive simulation study was performed on generated data as well as on a microbiome dataset collected from patients with Parkinson's disease, using Donoho and Jin's Higher Criticism, Jager and Wellner's phi-divergence-based goodness-of-fit tests and Stepanova and Pavlenko's CsCsHM statistic. We present some novel approaches based on established theory that perform better than existing methods and show that it is possible to use the signal identification framework to detect differentially abundant features in microbiome data. Although the novel approaches produce good results, they lack substantial mathematical foundations and should be avoided if theoretical rigour is needed. We also conclude that while we have found it possible to use signal identification methods to find abundant features in microbiome data, further refinement is necessary before they can be properly used in research.
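For reference, Donoho and Jin's Higher Criticism statistic scans the sorted p-values and takes the maximum of sqrt(n) * (i/n - p_(i)) / sqrt(p_(i) * (1 - p_(i))). The sketch below implements one common variant of that computation on placeholder p-values; variants differ in how the maximization range is restricted.

```python
# Higher Criticism sketch: scan sorted p-values for sparse, weak signal (one common variant).
import numpy as np

def higher_criticism(pvals):
    p = np.sort(np.asarray(pvals))
    n = p.size
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p) + 1e-12)
    return hc[: n // 2].max()         # restrict the maximum to the smaller p-values (variant-dependent)

rng = np.random.default_rng(0)
null_p = rng.uniform(size=1000)                            # pure noise
mixed_p = np.concatenate([rng.uniform(size=990),           # noise ...
                          rng.uniform(0, 1e-3, size=10)])  # ... plus a few weak, sparse signals
print(higher_criticism(null_p), higher_criticism(mixed_p))  # the second value is typically larger
```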
APA, Harvard, Vancouver, ISO, and other styles
19

Grady, Daniel J. "A Critical Review of the Application of Kolb's Experiential Learning Theory Applied Through the use of Computer Based Simulations Within Virtual Environments 2000-2016." Thesis, State University of New York at Albany, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10282034.

Full text of the source
Abstract:

This integrative research review aims to examine the application of Kolb's theory of experiential learning through the use of simulations within virtual learning environments. It will first cover the framework of experiential learning as stated by Kolb, a learning theory that is finding new life within the context of simulations, role-playing games (RPGs), massively multiplayer online role-playing games (MMORPGs) and virtual environments. This analysis was conducted by making use of combined research strategies that focused specifically on both qualitative and quantitative reviews that utilized Kolb's experiential learning theory (ELT) within the context of the application of computer-based simulations in virtual environments used to facilitate learning. The review was guided by three principal questions: From the year 2000 to 2016, which research studies that examine the use of simulations to facilitate learning use experiential learning theory as their foundational theoretical approach? Of the works that were selected, which studies used computer-based simulations in virtual environments and demonstrated firm connections between Kolb's ELT and the results of the study? And lastly, within the final group of studies identified, what patterns emerge through the application of Kolb's ELT within the context of computer-based simulations in virtual environments?

APA, Harvard, Vancouver, ISO, and other styles
20

Pastorek, Lukáš. "Bio-Inspired Prototype-Based Models and Applied Gompertzian Dynamics in Cluster Analysis." Doctoral thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-200218.

Full text of the source
Abstract:
The thesis deals with the analysis of the clustering and mapping techniques derived from the principles of the neural and statistical learning and growth theory. The selected branch of the unsupervised bio-inspired prototype-based models is described in terms of the proposed logical framework, which highlights the continuity of these methods with the classical "pure" statistical methods. Moreover, as those methods are broadly understood as the "black boxes" with the unpredictable, unclear and especially hidden behavior, the examples of the spatial computational and organizational patterns in two-dimensional space are provided. Additionally, this thesis presents the novel concept based on the non-linear, non-Gaussian Gompertzian function, which has been widely used as the universal law in dynamic growth models, but has not yet been applied in the field of computational intelligence. The essence of Gompertzian dynamics is mathematically analyzed and a novel simple version of the Gompertzian normalized function is introduced. Furthermore, the function was modified for use in the field of artificial intelligence and neural implications were discussed. Additionally, the novel neural networks were proposed and derived from the topological principles of Kohonen's self-organizing maps and neural gas algorithm. The Gompertzian networks were evaluated using several indicators for various generated and real datasets. Gompertzian neural networks with fixed grid and integrated neighborhood ranking principle generally show lower mean squared errors than the original SOM algorithms. Likewise, the unconstrained Gompertzian networks have demonstrated overall low error rates comparable to neural gas algorithm, more stable and lower error solutions than the k-means sequential procedure. In conclusion, the Gompertzian function has been shown to be a viable concept and an effective computational tool for multidimensional data analysis.
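For orientation, the classical Gompertz growth curve the thesis builds on is G(t) = a * exp(-b * exp(-c * t)). The sketch below defines one simple normalized version on [0, 1] and uses it as a decay schedule of the kind a self-organizing map applies to its neighborhood width; the thesis's exact normalized form and its neural embedding are not reproduced here.

```python
# Gompertz function and its use as a decay/neighborhood schedule (illustrative, not the thesis's exact form).
import numpy as np

def gompertz(t, a=1.0, b=5.0, c=6.0):
    """Classic Gompertz curve G(t) = a * exp(-b * exp(-c * t)): slow start, rapid growth, saturation."""
    return a * np.exp(-b * np.exp(-c * t))

def gompertz_normalized(t, b=5.0, c=6.0):
    """Rescaled so that the curve runs from 0 at t = 0 to 1 at t = 1 (one simple normalization)."""
    g0, g1 = gompertz(0.0, 1.0, b, c), gompertz(1.0, 1.0, b, c)
    return (gompertz(t, 1.0, b, c) - g0) / (g1 - g0)

# Example: use 1 - normalized Gompertz as a neighborhood-width schedule over training time.
for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(frac, round(1.0 - gompertz_normalized(frac), 3))
```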
APA, Harvard, Vancouver, ISO, and other styles
21

Elentari, Aruna. "Evaluating the effect of the Sensavis visual learning tool on student performance in a Swedish elementary school." Thesis, Umeå universitet, Institutionen för psykologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-136302.

Full text of the source
Abstract:
Dual coding theory implies that engaging multiple modalities (e.g., visual, auditory) in instruction enhances learning. Presenting information via 3D images and 3D animations appears to improve student performance but the results are inconsistent across multiple studies. The present study investigated the effect of the Sensavis visual learning tool, a 3D educational software, on performance in chemistry among students in a Swedish elementary school. Thirty-seven students from grades 7 and 9 received training involving a 3D animation on chemical bonds while nineteen students in grade 8 had traditional instruction. ANCOVA results controlling for age and average chemistry grade revealed a statistically significant difference in the posttest performance with the control group outperforming both experimental groups. These results indicate that the Sensavis tool did not have a positive effect on learning chemistry compared to traditional instruction. Interpretation of the results is presented in discussion.
APA, Harvard, Vancouver, ISO, and other styles
22

Brückner, Michael. "Prediction games : machine learning in the presence of an adversary." Phd thesis, Universität Potsdam, 2012. http://opus.kobv.de/ubp/volltexte/2012/6037/.

Full text of the source
Abstract:
In many applications one is faced with the problem of inferring some functional relation between input and output variables from given data. Consider, for instance, the task of email spam filtering, where one seeks to find a model which automatically assigns new, previously unseen emails to the class spam or non-spam. Building such a predictive model based on observed training inputs (e.g., emails) with corresponding outputs (e.g., spam labels) is a major goal of machine learning. Many learning methods assume that these training data are governed by the same distribution as the test data which the predictive model will be exposed to at application time. That assumption is violated when the test data are generated in response to the presence of a predictive model. This becomes apparent, for instance, in the above example of email spam filtering. Here, email service providers employ spam filters and spam senders engineer campaign templates so as to achieve a high rate of successful deliveries despite any filters. Most of the existing work casts such situations as learning robust models which are insensitive to small changes of the data generation process. The models are constructed under the worst-case assumption that these changes are performed so as to produce the highest possible adverse effect on the performance of the predictive model. However, this approach is not capable of realistically modeling the true dependency between the model-building process and the process of generating future data. We therefore establish the concept of prediction games: we model the interaction between a learner, who builds the predictive model, and a data generator, who controls the process of data generation, as a one-shot game. The game-theoretic framework enables us to explicitly model the players' interests, their possible actions, their level of knowledge about each other, and the order in which they decide on an action. We model the players' interests as minimizing their own cost functions, both of which depend on both players' actions. The learner's action is to choose the model parameters, and the data generator's action is to perturb the training data, which reflects the modification of the data generation process with respect to the past data. We extensively study three instances of prediction games which differ regarding the order in which the players decide on their actions. We first assume that both players choose their actions simultaneously, that is, without knowledge of their opponent's decision. We identify conditions under which this Nash prediction game has a meaningful solution, that is, a unique Nash equilibrium, and derive algorithms that find the equilibrial prediction model. As a second case, we consider a data generator who is potentially fully informed about the move of the learner. This setting establishes a Stackelberg competition. We derive a relaxed optimization criterion to determine the solution of this game and show that this Stackelberg prediction game generalizes existing prediction models. Finally, we study the setting where the learner observes the data generator's action, that is, the (unlabeled) test data, before building the predictive model. As the test data and the training data may be governed by differing probability distributions, this scenario reduces to learning under covariate shift. We derive a new integrated as well as a two-stage method to account for this dataset shift.
In case studies on email spam filtering we empirically explore the properties of all derived models as well as of several existing baseline methods. We show that spam filters resulting from the Nash prediction game as well as the Stackelberg prediction game outperform other existing baseline methods in the majority of cases.
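The one-shot game idea can be illustrated with a deliberately tiny numerical example: a learner choosing a scalar model weight and a data generator choosing a scalar input shift, each minimizing its own cost, with an approximate equilibrium found by iterating numerical best responses over grids. The cost functions, grids and data below are invented for illustration and are far simpler than the spam-filtering setting of the thesis.

```python
# Toy "prediction game" sketch: learner picks weight w, data generator picks input shift delta.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 1.5 * x + 1.0 + rng.normal(scale=0.3, size=500)    # relationship the learner tries to capture

W = np.linspace(-3, 3, 301)                            # learner's action grid
D = np.linspace(-2, 2, 201)                            # generator's action grid

def learner_cost(w, d):
    # Squared error on the perturbed inputs plus a small regularizer.
    return np.mean((y - w * (x + d)) ** 2) + 0.01 * w ** 2

def generator_cost(w, d):
    # Wants to degrade the learner's fit, but pays a quadratic price for large perturbations.
    return -np.mean((y - w * (x + d)) ** 2) + 5.0 * d ** 2

w, d = 0.0, 0.0
for _ in range(100):                                    # iterate numerical best responses
    w = W[np.argmin([learner_cost(wi, d) for wi in W])]
    d = D[np.argmin([generator_cost(w, di) for di in D])]

print("approximate equilibrium (w, delta):", round(float(w), 2), round(float(d), 2))
```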
APA, Harvard, Vancouver, ISO, and other styles
23

Van, Heerden Thomas. "A cultural-historical activity theory based analysis of lecturer and student understanding of learning in the Department of Mathematics and Applied Mathematics at the University of Cape Town." Master's thesis, Faculty of Humanities, 2019. http://hdl.handle.net/11427/30135.

Full text of the source
Abstract:
Cultural-historical activity theory (CHAT) provides a framework for analysing activity systems. I use that framework to investigate teaching and learning in two first-year university mathematics courses at the University of Cape Town. The focus of this investigation is whether the different subjects of this activity system (i.e. the students and the lecturers) have different conceptions of learning, and what those possible differences mean for teaching and learning. The CHAT framework is well-suited to this type of work. CHAT’s theoretical roots are in Hegel’s dialectics and Vygotsky’s mediation. Teaching and learning are higher-order mental phenomena. Dialectics allow us to aggregate our data to draw conclusions about this type of higher-order phenomenon, and the notion of mediation (extended from Vygotsky’s initial work by Leont’ev and others) provides a means to understand how learning happens. Data are collected both through face-to-face interviews with a small group of subjects (n = 6) and more broadly through an online questionnaire (n = 55). The face-to-face interviews and the questionnaires make it clear that students and lecturers do have different conceptions of learning; in the language of CHAT, there are tensions in the system. These tensions can be categorised into two major themes: what students do and how they do it. These tensions will not be easily resolved; I suggest teaching some meta-cognitive skills rather than only mathematics as a first step.
APA, Harvard, Vancouver, ISO, and other styles
24

Fuglesang, Rutger. "Particle-Based Online Bayesian Learning of Static Parameters with Application to Mixture Models." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279847.

Full text of the source
Abstract:
This thesis investigates the possibility of using Sequential Monte Carlo (SMC) methods to create an online algorithm to infer properties from a dataset, such as unknown model parameters. Statistical inference from data streams tends to be difficult, and this is particularly the case for parametric models, which will be the focus of this thesis. We develop a sequential Monte Carlo algorithm sampling sequentially from the model's posterior distributions. As a key ingredient of this approach, unknown static parameters are jittered towards the shrinking support of the posterior on the basis of an artificial Markovian dynamic, allowing for correct pseudo-marginalisation of the target distributions. We then test the algorithm on a simple Gaussian model, a Gaussian Mixture Model (GMM), as well as a variable-dimension GMM. All tests and coding were done using Matlab. The outcome of the simulation is promising, but more extensive comparisons to other online algorithms for static parameter models are needed to properly gauge the computational efficiency of the developed algorithm.
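A stripped-down version of the particle idea described above, carrying particles for an unknown static parameter, reweighting them by the likelihood of each new observation and jittering them with shrinking noise, applied to the simplest possible model, an unknown Gaussian mean. The model and the jitter schedule are illustrative choices, not the algorithm of the thesis (which was implemented in Matlab).

```python
# Minimal online SMC sketch for a static parameter (unknown Gaussian mean), with jittering.
import numpy as np

rng = np.random.default_rng(0)
true_mu, sigma = 1.5, 1.0
N = 2000                                     # number of particles

theta = rng.normal(0.0, 5.0, size=N)         # particles for the static parameter (draw from a wide prior)
for t in range(1, 301):
    y = rng.normal(true_mu, sigma)           # new observation from the data stream
    logw = -0.5 * ((y - theta) / sigma) ** 2 # log-likelihood of the new datum under each particle
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)         # multinomial resampling
    theta = theta[idx]
    theta += rng.normal(0.0, 1.0 / np.sqrt(t), size=N)  # jitter, shrinking as the posterior concentrates

print("posterior mean estimate:", theta.mean())
```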
APA, Harvard, Vancouver, ISO, and other styles
25

Samuelsson, Emma. "Using activity theory to describe patient safety : How Region Östergötland supports patient safety development in a low and middle-income country’s healthcare system." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170095.

Full text of the source
Abstract:
Region Östergötland engages in many international collaborations as a way to exchange knowledge and insights with other organizations. The organization has had a collaboration focused on patient safety with Moi Teaching and Referral Hospital in Eldoret, Kenya, since 2015. Kenya is considered a low and middle-income country, while Sweden is considered a high-income country. The aim of this study was to describe patient safety development using activity theory, with a special focus on how Region Östergötland supports patient safety development in a low and middle-income country’s healthcare system. Data was collected by conducting interviews with six participants involved in the patient safety collaboration, by visiting Eldoret to conduct a participant observation and by analyzing relevant policy documents. The results showed that many factors are involved in patient safety development, both within an organization and in supporting the development in a low and middle-income country’s healthcare system. Healthcare organizations should strive for commitment to patient safety development from all levels of the organization, and for a safety culture where staff members are comfortable reporting errors. The management must pursue patient safety questions and put aside resources for patient safety development. As Sweden and low and middle-income countries are different in many aspects, it’s important for the supporting part, in this case Region Östergötland, to be attentive to and understanding of prevailing differences caused by available resources, cultural norms, rules and organizational structures. Many of the requirements for an organization’s patient safety development, and for a successful collaboration between Region Östergötland and Moi Teaching and Referral Hospital, were shown to be achieved or at least functioning. Even though all requirements are not fulfilled, they are all matters that can be improved by the continuation of the collaboration. Region Östergötland can learn from the collaboration by seeing how results can be achieved in an organization with few resources, how efficiently changes can be made within an organization, as well as by gaining knowledge about another culture and country. These factors create opportunities for project participants to be inspired and question current methods and norms in their own organization, which can result in improvements of Region Östergötland as on organization in the future.
Стилі APA, Harvard, Vancouver, ISO та ін.
26

Suta, Adin. "Multilabel text classification of public procurements using deep learning intent detection." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252558.

Повний текст джерела
Анотація:
Textual data is one of the most widespread forms of data and the amount of such data available in the world increases at a rapid rate. Text can be understood as either a sequence of characters or a sequence of words, where the latter approach is the most common. With the breakthroughs within the area of applied artificial intelligence in recent years, more and more tasks are aided by automatic processing of text in various applications. The models introduced in the following sections rely on deep-learning sequence processing in order to process text and produce a regression algorithm for classifying what the text input refers to. We investigate and compare the performance of several model architectures along with different hyperparameters. The data set was provided by e-Avrop, a Swedish company which hosts a web platform for posting and bidding of public procurements. It consists of titles and descriptions of Swedish public procurements posted on the website of e-Avrop, along with the respective category or categories of each text. When the texts are described by several categories (the multi-label case), we suggest a deep learning sequence-processing regression algorithm in which a set of deep learning classifiers is used. Each model uses one of the several labels in the multi-label case, along with the text input, to produce a set of text-label observation pairs. The goal is to investigate whether these classifiers can carry different levels of intent, an intent which should theoretically be imposed by the different training data sets used by each of the individual deep learning classifiers.
Data i form av text är en av de mest utbredda formerna av data och mängden tillgänglig textdata runt om i världen ökar i snabb takt. Text kan tolkas som en följd av bokstäver eller ord, där tolkning av text i form av ordföljder är absolut vanligast. Genombrott inom artificiell intelligens under de senaste åren har medfört att fler och fler arbetsuppgifter med koppling till text assisteras av automatisk textbearbetning. Modellerna som introduceras i denna uppsats är baserade på djupa artificiella neuronnät med sekventiell bearbetning av textdata, som med hjälp av regression förutspår tillhörande ämnesområde för den inmatade texten. Flera modeller och tillhörande hyperparametrar utreds och jämförs enligt prestanda. Datamängden som använts är tillhandahållet av e-Avrop, ett svenskt företag som erbjuder en webbtjänst för offentliggörande och budgivning av offentliga upphandlingar. Datamängden består av titlar, beskrivningar samt tillhörande ämneskategorier för offentliga upphandlingar inom Sverige, tagna från e-Avrops webtjänst. När texterna är märkta med ett flertal kategorier, föreslås en algoritm baserad på ett djupt artificiellt neuronnät med sekventiell bearbetning, där en mängd klassificeringsmodeller används. Varje sådan modell använder en av de märkta kategorierna tillsammans med den tillhörande texten, som skapar en mängd av text - kategori par. Målet är att utreda huruvida dessa klassificerare kan uppvisa olika former av uppsåt som teoretiskt sett borde vara medfört från de olika datamängderna modellerna mottagit.
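As a hedged, simplified illustration of the per-label setup described above (not the thesis' deep sequence models), the sketch below builds one binary classifier per category from text-label pairs using scikit-learn; the toy corpus and category names are invented for the example.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical toy corpus standing in for procurement titles and descriptions.
texts = [
    "road maintenance and snow removal",
    "school catering services",
    "IT consultants for web platform",
    "construction of bicycle road",
]
labels = [["infrastructure"], ["food"], ["it"], ["infrastructure"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)                    # one binary column per category
X = TfidfVectorizer().fit_transform(texts)

# One binary classifier per label mirrors the per-label "intent" models in the thesis.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(mlb.inverse_transform(clf.predict(X[:1])))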
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Allbrink, Sofie, and Rebecka Sundin. "INDIVIDUELLA IDROTTARES FÖRUTSÄTTNINGAR FÖR SJÄLVREGLERAT LÄRANDE." Thesis, Umeå universitet, Institutionen för psykologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-183873.

Повний текст джерела
Анотація:
Självreglerat lärande (SRL) har visat sig vara en användbar strategi för både idrottslig inlärning och utveckling. Vilka miljömässiga förutsättningar idrottaren ges kan både främja och hämma denna utveckling, vilket få studier undersökt i en idrottslig kontext. Studiens syfte var därav att undersöka grad av SRL inom individuell idrott utifrån self-efficacy, könsidentitet och miljömässiga förutsättningar. Miljömässiga förutsättningar innefattade ledarskapsbeteenden som främjar både motivation, enligt Self-determination theory (SDT), och självreglerat lärande. Urvalet bestod av individuella idrottare, 16–60 år, som har en tränare (N = 251). Dessa tävlade inom 28 olika individuella idrotter och identifierade sig som kvinnor (n = 144), män (n = 106) och annat (n = 1). Frågeställningarna besvarades med hjälp av självskattningsformulären Self-Regulated Learning in Sport Practice (SRL-SP), Self-Regulated Environment (SRE) och Interpersonal Supportiveness Scale - Coach (ISS-C). Resultat från multipla och hierarkiska regressionsanalyser indikerade att self-efficacy hade ett positivt samband med utfallsmåtten planering, övervakning och reflektion, men inte ansträngning. Könsidentitet verkade inte moderera denna effekt. Vad gäller miljömässiga förutsättningar bidrog främst tränares förmåga att skapa möjligheter för SRL till idrottares självreglering. Samtidigt visades att idrottarnas grad av SRL påverkades av tränarens närvaro; ju mer närvarande tränare desto lägre grad av självregleringsstrategier uppvisar idrottarna. Slutsatsen blir därmed att idrottare verkar behöva ha en tilltro till sin förmåga samt befinna sig i en miljö med möjligheter för SRL, för att engagera sig i sin idrottsliga utveckling på ett fördelaktigt sätt. Detta samband verkar även påverkas av tränarens fysiska närvaro. Framtida studier kan med fördel vidare undersöka påverkan av de miljömässiga förutsättningarna på grad av SRL, samt om det skiljer sig åt beroende på idrott.
Self-Regulated Learning (SRL) has proven to be a useful strategy for athletes' learning and development. What conditions are given to athletes from their surrounding environment can both promote and inhibit these processes of learning and development. However, few studies have examined this relationship in a sports context. Thus, the present study aimed to investigate Self-Regulated Learning in individual sports based on self-efficacy, gender and environmental conditions. The environmental conditions were defined as leadership behaviors that promote motivation, according to Self-Determination Theory (SDT), and Self-Regulated Learning. The sample consisted of individual athletes, ranging from 16-60 years, with a coach (N = 251). The athletes competed in 28 different individual sports and identified themselves as women (n = 144), men (n = 106) and other (n = 1). The participants answered the self-report questionnaires Self-Regulated Learning in Sport Practice (SRL-SP), Self-Regulated Environment (SRE) and Interpersonal Supportiveness Scale - Coach (ISS-C). Using multiple and hierarchical regression analyses, this study provided support that self-efficacy positively influenced the outcome measures planning, monitoring, and reflection, but not effort. Gender did not appear to moderate this relationship. The environmental conditions associated with SRL was mainly the coaches' ability to create opportunities for SRL. Additionally, athletes' SRL were negatively influenced by how often the coach was present. The conclusion is that athletes, to beneficially engage in their own development, need to have a belief in their own ability and also be in an environment that enhances opportunities for SRL. However, this relationship is influenced by the coach's presence at practice. Future studies can further examine the relationship between the environmental conditions and SRL, and if the results may differ depending on sport.
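For readers unfamiliar with the analysis strategy, the sketch below shows, in hedged form, what a hierarchical regression with a moderation term looks like in Python; the column names and simulated data are placeholders and do not reproduce the study's variables or results.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data with invented column names (not the SRL-SP/SRE/ISS-C scores).
rng = np.random.default_rng(1)
n = 251
df = pd.DataFrame({
    "self_efficacy": rng.normal(0, 1, n),
    "gender": rng.choice(["woman", "man"], n),
    "coach_support": rng.normal(0, 1, n),
})
df["planning"] = 0.4 * df.self_efficacy + 0.2 * df.coach_support + rng.normal(0, 1, n)

# Step 1: main effect of self-efficacy; Step 2: add environment and the gender interaction (moderation).
step1 = smf.ols("planning ~ self_efficacy", data=df).fit()
step2 = smf.ols("planning ~ self_efficacy * gender + coach_support", data=df).fit()
print(step1.rsquared, step2.rsquared)   # the R-squared change indicates the added explanatory value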
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Amethier, Patrik, and André Gerbaulet. "Sales Volume Forecasting of Ericsson Radio Units - A Statistical Learning Approach." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288504.

Повний текст джерела
Анотація:
Demand forecasting is a well-established internal process at Ericsson, where employees from various departments within the company collaborate in order to predict future sales volumes of specific products over horizons ranging from months to a few years. This study aims to evaluate current predictions regarding radio unit products of Ericsson, draw insights from historical volume data, and finally develop a novel, statistical prediction approach. Specifically, a two-part statistical model with a decision tree followed by a neural network is trained on previous sales data of radio units, and then evaluated (also on historical data) regarding predictive accuracy. To test the hypothesis that mid-range volume predictions of a 1-3 year horizon made by data-driven statistical models can be more accurate, the two-part model makes predictions per individual radio unit product based on several predictive attributes, mainly historical volume data and information relating to geography, country and customer trends. The majority of wMAPEs per product from the predictive model were shown to be less than 5% for the three different prediction horizons, which can be compared to global wMAPEs from Ericsson's existing long range forecast process of 9% for 1 year, 13% for 2 years and 22% for 3 years. These results suggest the strength of the data-driven predictive model. However, care must be taken when comparing the two error measures and one must take into account the large variances of wMAPEs from the predictive model.
Ericsson har en väletablerad intern process för prognostisering av försäljningsvolymer, där produktnära samt kundnära roller samarbetar med inköpsorganisationen för att säkra noggranna uppskattningar angående framtidens efterfrågan. Syftet med denna studie är att evaluera tidigare prognoser, och sedan utveckla en ny prediktiv, statistisk modell som prognostiserar baserad på historisk data. Studien fokuserar på produktkategorin radio, och utvecklar en två-stegsmodell bestående av en trädmodell och ett neuralt nätverk. För att testa hypotesen att en 1-3 års prognos för en produkt kan göras mer noggran med en datadriven modell, tränas modellen på attribut kopplat till produkten, till exempel historiska volymer för produkten, och volymtrender inom produktens marknadsområden och kundgrupper. Detta resulterade i flera prognoser på olika tidshorisonter, nämligen 1-12 månader, 13-24 månader samt 25-36 månder. Majoriteten av wMAPE-felen för dess prognoser visades ligga under 5%, vilket kan jämföras med wMAPE på 9% för Ericssons befintliga 1-årsprognoser, 13% för 2-årsprognerna samt 22% för 3-årsprognoserna. Detta pekar på att datadrivna, statistiska metoder kan användas för att producera gedigna prognoser för framtida försäljningsvolymer, men hänsyn bör tas till jämförelsen mellan de kvalitativa uppskattningarna och de statistiska prognoserna, samt de höga varianserna i felen.
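Since the comparison above hinges on wMAPE, here is a short, hedged sketch of how a volume-weighted MAPE can be computed; the numbers are invented and the exact weighting used internally at Ericsson may differ.

import numpy as np

def wmape(actual, forecast):
    # Volume-weighted MAPE: absolute errors summed and scaled by total actual volume.
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.abs(actual - forecast).sum() / np.abs(actual).sum()

# Hypothetical monthly volumes for one radio unit product.
actual   = [120, 90, 150, 80]
forecast = [110, 100, 140, 90]
print(f"wMAPE: {wmape(actual, forecast):.1%}")   # about 9% in this toy case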
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Nilsson, Viktor. "Prediction of Dose Probability Distributions Using Mixture Density Networks." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273610.

Повний текст джерела
Анотація:
In recent years, machine learning has become utilized in external radiation therapy treatment planning. This involves automatic generation of treatment plans based on CT-scans and other spatial information such as the location of tumors and organs. The utility lies in relieving clinical staff from the labor of manually or semi-manually creating such plans. Rather than predicting a deterministic plan, there is great value in modeling it stochastically, i.e. predicting a probability distribution of dose from CT-scans and delineated biological structures. The stochasticity inherent in the RT treatment problem stems from the fact that a range of different plans can be adequate for a patient. The particular distribution can be thought of as the prevalence in preferences among clinicians. Having more information about the range of possible plans represented in one model entails that there is more flexibility in forming a final plan. Additionally, the model will be able to reflect the potentially conflicting clinical trade-offs; these will occur as multimodal distributions of dose in areas where there is a high variance. At RaySearch, the current method for doing this uses probabilistic random forests, an augmentation of the classical random forest algorithm. A current direction of research is learning the probability distribution using deep learning. A novel parametric approach to this is letting a suitable deep neural network approximate the parameters of a Gaussian mixture model in each volume element. Such a neural network is known as a mixture density network. This thesis establishes theoretical results of artificial neural networks, mainly the universal approximation theorem, applied to the activation functions used in the thesis. It will then proceed to investigate the power of deep learning in predicting dose distributions, both deterministically and stochastically. The primary objective is to investigate the feasibility of mixture density networks for stochastic prediction. The research question is the following. U-nets and Mixture Density Networks will be combined to predict stochastic doses. Does there exist such a network, powerful enough to detect and model bimodality? The experiments and investigations performed in this thesis demonstrate that there is indeed such a network.
Under de senaste åren har maskininlärning börjat nyttjas i extern strålbehandlingsplanering. Detta involverar automatisk generering av behandlingsplaner baserade på datortomografibilder och annan rumslig information, såsom placering av tumörer och organ. Nyttan ligger i att avlasta klinisk personal från arbetet med manuellt eller halvmanuellt skapa sådana planer. I stället för att predicera en deterministisk plan finns det stort värde att modellera den stokastiskt, det vill säga predicera en sannolikhetsfördelning av dos utifrån datortomografibilder och konturerade biologiska strukturer. Stokasticiteten som förekommer i strålterapibehandlingsproblemet beror på att en rad olika planer kan vara adekvata för en patient. Den särskilda fördelningen kan betraktas som förekomsten av preferenser bland klinisk personal. Att ha mer information om utbudet av möjliga planer representerat i en modell innebär att det finns mer flexibilitet i utformningen av en slutlig plan. Dessutom kommer modellen att kunna återspegla de potentiellt motstridiga kliniska avvägningarna; dessa kommer påträffas som multimodala fördelningar av dosen i områden där det finns en hög varians. På RaySearch används en probabilistisk random forest för att skapa dessa fördelningar, denna metod är en utökning av den klassiska random forest-algoritmen. En aktuell forskningsriktning är att generera in sannolikhetsfördelningen med hjälp av djupinlärning. Ett oprövat parametriskt tillvägagångssätt för detta är att låta ett lämpligt djupt neuralt nätverk approximera parametrarna för en Gaussisk mixturmodell i varje volymelement. Ett sådant neuralt nätverk är känt som ett mixturdensitetsnätverk. Den här uppsatsen fastställer teoretiska resultat för artificiella neurala nätverk, främst det universella approximationsteoremet, tillämpat på de aktiveringsfunktioner som används i uppsatsen. Den fortsätter sedan att utforska styrkan av djupinlärning i att predicera dosfördelningar, både deterministiskt och stokastiskt. Det primära målet är att undersöka lämpligheten av mixturdensitetsnätverk för stokastisk prediktion. Forskningsfrågan är följande. U-nets och mixturdensitetsnätverk kommer att kombineras för att predicera stokastiska doser. Finns det ett sådant nätverk som är tillräckligt kraftfullt för att upptäcka och modellera bimodalitet? Experimenten och undersökningarna som utförts i denna uppsats visar att det faktiskt finns ett sådant nätverk.
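As a minimal sketch of the mixture density idea (a network outputting Gaussian mixture parameters and trained by negative log-likelihood), assuming PyTorch and a one-dimensional target rather than the voxelwise U-net model used in the thesis:

import torch
import torch.nn as nn

class MDN(nn.Module):
    """Minimal mixture density network: maps features to a K-component 1-D Gaussian mixture."""
    def __init__(self, in_dim, hidden, k):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.pi = nn.Linear(hidden, k)         # mixture weights (logits)
        self.mu = nn.Linear(hidden, k)         # component means
        self.log_sigma = nn.Linear(hidden, k)  # component log-standard deviations

    def forward(self, x):
        h = self.body(x)
        return self.pi(h), self.mu(h), self.log_sigma(h)

def mdn_nll(pi_logits, mu, log_sigma, y):
    # Negative log-likelihood of y under the predicted Gaussian mixture.
    log_pi = torch.log_softmax(pi_logits, dim=-1)
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = dist.log_prob(y.unsqueeze(-1))             # shape (batch, K)
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

model = MDN(in_dim=4, hidden=64, k=2)   # two components allow bimodal dose distributions
x, y = torch.randn(32, 4), torch.randn(32)
loss = mdn_nll(*model(x), y)
loss.backward()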
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Waggoner, Alexander A. "Triple Non-negative Matrix Factorization Technique for Sentiment Analysis and Topic Modeling." Scholarship @ Claremont, 2017. http://scholarship.claremont.edu/cmc_theses/1550.

Повний текст джерела
Анотація:
Topic modeling refers to the process of algorithmically sorting documents into categories based on some common relationship between the documents. This common relationship is considered the “topic” of the documents. Sentiment analysis refers to the process of algorithmically sorting a document into a positive or negative category depending on whether the document expresses a positive or negative opinion on its respective topic. In this paper, I consider the open problem of classifying a document into a topic category as well as a sentiment category. This has a direct application to the retail industry, where companies may want to scour the web in order to find documents (blogs, Amazon reviews, etc.) which both speak about their product and give an opinion on it (positive, negative or neutral). My solution to this problem uses a Non-negative Matrix Factorization (NMF) technique to determine the topic classifications of a document set, and further factors the matrix in order to discover the sentiment behind each product category.
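A hedged illustration of the first factorization step (standard two-factor NMF for topics, not the full triple factorization used for sentiment in the thesis), with an invented four-document corpus:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = [
    "great battery life, love this phone",
    "battery died quickly, terrible phone",
    "the blender is quiet and powerful",
    "blender broke after a week, awful",
]
X = TfidfVectorizer(stop_words="english").fit_transform(docs)  # non-negative term-document weights

# Standard two-factor NMF: X ~ W * H, rows of H are topics, rows of W are document-topic weights.
nmf = NMF(n_components=2, init="nndsvda", random_state=0)
W = nmf.fit_transform(X)
print(W.argmax(axis=1))   # topic assignment per document (phones vs. blenders)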
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Herron, Christopher, and André Zachrisson. "Machine Learning Based Intraday Calibration of End of Day Implied Volatility Surfaces." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273419.

Повний текст джерела
Анотація:
The implied volatility surface plays an important role for front office and risk management functions at Nasdaq and other financial institutions, which require intraday mark-to-market of derivative books in order to properly value their instruments and measure risk in trading activities. Based on these business needs, being able to calibrate an end-of-day implied volatility surface using new market information is a sought-after capability. In this thesis, a statistical learning approach is used to calibrate the implied volatility surface intraday. This is done by using OMXS30 implied volatility surface data from 2019 in combination with market information from close-to-at-the-money options, and feeding it into three machine learning models. The models, a Feed Forward Neural Network, a Recurrent Neural Network and a Gaussian Process, were compared based on optimal input and data preprocessing steps. When comparing the best machine learning model to the benchmark, the performance was similar, indicating that the calibration approach did not offer much improvement. However, the calibrated models had a slightly lower spread and average error compared to the benchmark, indicating that there is potential in using machine learning to calibrate the implied volatility surface.
Implicita volatilitetsytor är ett viktigt vektyg för front office- och riskhanteringsfunktioner hos Nasdaq och andra finansiella institut som behöver omvärdera deras portföljer bestående av derivat under dagen men också för att mäta risk i handeln. Baserat på ovannämnda affärsbehov är det eftertraktat att kunna kalibrera de implicita volatilitets ytorna som skapas i slutet av dagen nästkommande dag baserat på ny marknadsinformation. I denna uppsats används statistisk inlärning för att kalibrera dessa ytor. Detta görs genom att uttnytja historiska ytor från optioner i OMXS30 under 2019 i kombination med optioner nära at the money för att träna 3 Maskininlärnings modeller. Modellerna inkluderar Feed Forward Neural Network, Recurrent Neural Network och Gaussian Process som vidare jämfördes baserat på data som var bearbetat på olika sätt. Den bästa Maskinlärnings modellen jämfördes med ett basvärde som bestod av att använda föregående dags yta där resultatet inte innebar någon större förbättring. Samtidigt hade modellen en lägre spridning samt genomsnittligt fel i jämförelse med basvärdet som indikerar att det finns potential att använda Maskininlärning för att kalibrera dessa ytor.
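As a toy, hedged sketch of one of the model families mentioned above, a Gaussian Process is fitted to a synthetic volatility surface; the smile and term-structure formula and the noise level are invented and unrelated to the OMXS30 data.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic end-of-day surface: implied vol as a function of (moneyness, maturity).
rng = np.random.default_rng(0)
X = rng.uniform([0.8, 0.1], [1.2, 2.0], size=(200, 2))           # (moneyness, maturity in years)
vol = 0.2 + 0.3 * (X[:, 0] - 1.0) ** 2 + 0.02 * X[:, 1]          # smile plus term structure
vol += rng.normal(0, 0.005, len(vol))                            # intraday quote noise

gp = GaussianProcessRegressor(kernel=RBF([0.1, 0.5]) + WhiteKernel(1e-4), normalize_y=True)
gp.fit(X, vol)
mean, std = gp.predict([[1.0, 0.5]], return_std=True)            # calibrated ATM vol with uncertainty
print(mean, std)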
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Svanberg, Philip. "Officersprogrammets etik- och moralutbildning : En idealtypsanalys." Thesis, Försvarshögskolan, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:fhs:diva-10106.

Повний текст джерела
Анотація:
As Swedish Armed Forces personnel continue to be deployed in international service and the national defence focus grows with new units being created, the average age of cadets enrolling in officer school is decreasing. Studies have previously shown that ethics and morals are linked to cognitive development, and cognitive development to age and education. Neo-Kohlbergian schema theory defines three general schemas of ethical and moral decision making, of which the Postconventional schema is considered most beneficial for military officers, while military culture seems to promote the Maintaining norms schema. This study examines the regulating documents, from the government statute and the programme directions from the university to individual course programmes and descriptions, in order to examine how the ethics and morals education in the Swedish officer school relates to Neo-Kohlbergian schema theory. The study concludes that the education promotes the Maintaining norms and Postconventional schemas rather equally, with some aspects tending more towards the Maintaining norms schema and others more towards the Postconventional schema. This means that the cadets are given multiple tools for handling ethical and moral dilemmas, but since no single schema is emphasised, the speed of decision making is slowed.
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Fredriksson, Gustav, and Anton Hellström. "Restricted Boltzmann Machine as Recommendation Model for Venture Capital." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252703.

Повний текст джерела
Анотація:
Denna studie introducerar restricted Boltzmann machines (RBMs) som rekommendationsmodell i kontexten av riskkapital. Ett nätverk av relationer används som proxy för att modellera investerares bolagspreferenser. Studiens huvudfokus är att undersöka hur RBMs kan implementeras för ett dataset bestående av relationer mellan personer och bolag, samt att undersöka om modellen går att förbättra genom att tillföra av ytterligare information. Nätverket skapas från styrelsesammansättningar för svenska bolag. För nätverket implementeras RBMs både med och utan den extra informationen om bolagens ursprungsort. Vardera RBM-modell undersöks genom att utvärdera dess inlärningsförmåga samt förmåga att återskapa manuellt gömda relationer. Resultatet påvisar att RBM-modellerna har en bristfällig förmåga att återskapa borttagna relationer, dock noteras god inlärningsförmåga. Genom att addera ursprungsort som extra information förbättras modellerna markant och god potential som rekommendationsmodell går att urskilja, både med avseende på inlärningsförmåga samt förmåga att återskapa gömda relationer.
In this thesis, we introduce restricted Boltzmann machines (RBMs) as a recommendation model in the context of venture capital. A network of connections is used as a proxy for investors’ preferences of companies. The main focus of the thesis is to investigate how RBMs can be implemented on a network of connections and investigate if conditional information can be used to boost RBMs. The network of connections is created by using board composition data of Swedish companies. For the network, RBMs are implemented with and without companies’ place of origin as conditional data, respectively. The RBMs are evaluated by their learning abilities and their ability to recreate withheld connections. The findings show that RBMs perform poorly when used to recreate withheld connections but can be tuned to acquire good learning abilities. Adding place of origin as conditional information improves the model significantly and show potential as a recommendation model, both with respect to learning abilities and the ability to recreate withheld connections.
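A compact, hedged sketch of a binary RBM trained with one step of contrastive divergence (CD-1) on an invented person-company connection matrix; it is not the thesis implementation and omits conditional inputs such as place of origin.

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Rows: investors (board members), columns: companies; 1 = an existing connection.
V = rng.integers(0, 2, size=(100, 20)).astype(float)

n_vis, n_hid, lr = V.shape[1], 8, 0.05
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

for epoch in range(50):                        # CD-1 training loop
    h_prob = sigmoid(V @ W + b_h)
    h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
    v_recon = sigmoid(h_sample @ W.T + b_v)    # reconstruction doubles as recommendation scores
    h_recon = sigmoid(v_recon @ W + b_h)
    W += lr * ((V.T @ h_prob) - (v_recon.T @ h_recon)) / len(V)
    b_v += lr * (V - v_recon).mean(axis=0)
    b_h += lr * (h_prob - h_recon).mean(axis=0)

scores = sigmoid(sigmoid(V @ W + b_h) @ W.T + b_v)   # rank unseen companies per investor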
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Åkerström, Otto. "Multi-Agent System for Coordinated Defence." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273582.

Повний текст джерела
Анотація:
Today, defence systems are becoming more complex as technology advances, and it is of great importance to explore new ways of solving problems and to keep national defence current. In particular, Artificial Intelligence (AI) is used in an increasing number of industries such as logistic solutions, inventory management and defence. This thesis evaluates the possibility of using Reinforcement Learning (RL) in an Air Defence Coordination (ADC) scenario at Saab AB. To evaluate RL, a simplified ADC scenario is considered and solved using two different methods, Q-learning and Deep Q-learning (DQL). The results of the two methods are discussed, as well as the limitations in scope and complexity for Q-learning. Deep Q-learning, on the other hand, proves relatively easy to apply to more complicated scenarios. Finally, one last experiment with a far more complex scenario is constructed in order to show the scalability of DQL and create a foundation for future work in this field.
Dagens försvarssystem blir allt mer komplexa när tekniken utvecklas och det blir allt viktigare att utforska nya sätt att lösa problem för att ha ett toppmodernt försvar. I synnerhet används Artificiell intelligens (AI) i ett ökande antal branscher så som logistik, lagerhantering och försvar. Detta arbete kommer att utvärdera möjligheten att använda Förstärkt inlärning (RL) i ett Koordinerat luftförsvar (ADC) scenario hos Saab AB. För att utvärdera RL, löses ett förenklat ADC-scenario med två olika metoder, Q-learning och Deep Q-learning (DQL). Resultatet av de två metoderna diskuteras så väl som begränsningar för Q-learning. Å andra sidan visar sig DQL vara relativt enkelt att tillämpa i ett mer komplext scenario. Slutligen görs ett sista experiment med ett mycket mer komplicerat scenario för att visa skalbarheten för DQL och skapa en naturlig övergång till framtida arbete.
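To make the tabular starting point concrete, here is a hedged, minimal Q-learning sketch on an invented one-dimensional toy environment; the real ADC scenario, its states and its rewards are of course far richer.

import numpy as np

# Tabular Q-learning on a toy 1-D grid: the agent moves left or right to reach a target cell.
n_states, n_actions = 10, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

for episode in range(2000):
    s = rng.integers(n_states)
    for _ in range(50):
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Core Q-learning update: bootstrap from the best next-state value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if r > 0:
            break

print(Q.argmax(axis=1))   # learned policy: move right everywhere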
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Karlsson, Anton, and Torbjörn Sjöberg. "Synthesis of Tabular Financial Data using Generative Adversarial Networks." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273633.

Повний текст джерела
Анотація:
Digitalization has led to tons of available customer data and possibilities for data-driven innovation. However, the data needs to be handled carefully to protect the privacy of the customers. Generative Adversarial Networks (GANs) are a promising recent development in generative modeling. They can be used to create synthetic data which facilitate analysis while ensuring that customer privacy is maintained. Prior research on GANs has shown impressive results on image data. In this thesis, we investigate the viability of using GANs within the financial industry. We investigate two state-of-the-art GAN models for synthesizing tabular data, TGAN and CTGAN, along with a simpler GAN model that we call WGAN. A comprehensive evaluation framework is developed to facilitate comparison of the synthetic datasets. The results indicate that GANs are able to generate quality synthetic datasets that preserve the statistical properties of the underlying data and enable a viable and reproducible subsequent analysis. It was however found that all of the investigated models had problems with reproducing numerical data.
Digitaliseringen har fört med sig stora mängder tillgänglig kunddata och skapat möjligheter för datadriven innovation. För att skydda kundernas integritet måste dock uppgifterna hanteras varsamt. Generativa Motstidande Nätverk (GANs) är en ny lovande utveckling inom generativ modellering. De kan användas till att syntetisera data som underlättar dataanalys samt bevarar kundernas integritet. Tidigare forskning på GANs har visat lovande resultat på bilddata. I det här examensarbetet undersöker vi gångbarheten av GANs inom finansbranchen. Vi undersöker två framstående GANs designade för att syntetisera tabelldata, TGAN och CTGAN, samt en enklare GAN modell som vi kallar för WGAN. Ett omfattande ramverk för att utvärdera syntetiska dataset utvecklas för att möjliggöra jämförelse mellan olika GANs. Resultaten indikerar att GANs klarar av att syntetisera högkvalitativa dataset som bevarar de statistiska egenskaperna hos det underliggande datat, vilket möjliggör en gångbar och reproducerbar efterföljande analys. Alla modellerna som testades uppvisade dock problem med att återskapa numerisk data.
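A hedged, minimal GAN training loop in PyTorch on an invented two-column numeric table; it illustrates the adversarial setup only and is far simpler than the TGAN, CTGAN and WGAN models evaluated in the thesis (no mode-specific normalization or conditional sampling).

import torch
import torch.nn as nn

torch.manual_seed(0)
# Hypothetical "tabular" data: two correlated numeric columns standing in for transaction features.
real = torch.randn(5000, 2) @ torch.tensor([[1.0, 0.0], [0.8, 0.6]])

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    batch = real[torch.randint(0, len(real), (128,))]
    fake = G(torch.randn(128, 8))

    # Discriminator: real rows toward label 1, generated rows toward label 0.
    d_loss = bce(D(batch), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to fool the discriminator.
    g_loss = bce(D(G(torch.randn(128, 8))), torch.ones(128, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

synthetic = G(torch.randn(1000, 8)).detach()   # shareable synthetic rows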
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Ragland, Debra A. "The Structural Basis for the Interdependence of Drug Resistance in the HIV-1 Protease." eScholarship@UMMS, 2012. http://escholarship.umassmed.edu/gsbs_diss/879.

Повний текст джерела
Анотація:
The human immunodeficiency virus type 1 (HIV-1) protease (PR) is a critical drug target as it is responsible for virion maturation. Mutations within the active site (1°) of the PR directly interfere with inhibitor binding, while mutations distal to the active site (2°) act to restore enzymatic fitness. Increasing mutation number is not directly proportional to the severity of resistance, suggesting that resistance is not simply additive but interdependent. The interdependency of primary and secondary mutations in driving protease inhibitor (PI) resistance is grossly understudied. To structurally and dynamically characterize the direct role of secondary mutations in drug resistance, I selected a panel of single-site mutant protease crystal structures complexed with the PI darunavir (DRV). From these studies, I developed a network hypothesis that explains how mutations outside the active site are able to propagate changes to the active site of the protease and disrupt inhibitor binding. I then expanded the panel to include highly mutated multi-drug resistant variants. To elucidate the interdependency between primary and secondary mutations, I used statistical and machine-learning techniques to determine which specific mutations underlie the perturbations of key inter-molecular interactions. From these studies, I have determined that mutations distal to the active site are able to perturb the global PR hydrogen bonding patterns, while primary and secondary mutations cooperatively perturb hydrophobic contacts between the PR and DRV. Discerning and exploiting the mechanisms that underlie drug resistance in viral targets could proactively improve both current treatment and inhibitor design for HIV-1 targets.
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Ragland, Debra A. "The Structural Basis for the Interdependence of Drug Resistance in the HIV-1 Protease." eScholarship@UMMS, 2016. https://escholarship.umassmed.edu/gsbs_diss/879.

Повний текст джерела
Анотація:
The human immunodeficiency virus type 1 (HIV-1) protease (PR) is a critical drug target as it is responsible for virion maturation. Mutations within the active site (1°) of the PR directly interfere with inhibitor binding, while mutations distal to the active site (2°) act to restore enzymatic fitness. Increasing mutation number is not directly proportional to the severity of resistance, suggesting that resistance is not simply additive but interdependent. The interdependency of primary and secondary mutations in driving protease inhibitor (PI) resistance is grossly understudied. To structurally and dynamically characterize the direct role of secondary mutations in drug resistance, I selected a panel of single-site mutant protease crystal structures complexed with the PI darunavir (DRV). From these studies, I developed a network hypothesis that explains how mutations outside the active site are able to propagate changes to the active site of the protease and disrupt inhibitor binding. I then expanded the panel to include highly mutated multi-drug resistant variants. To elucidate the interdependency between primary and secondary mutations, I used statistical and machine-learning techniques to determine which specific mutations underlie the perturbations of key inter-molecular interactions. From these studies, I have determined that mutations distal to the active site are able to perturb the global PR hydrogen bonding patterns, while primary and secondary mutations cooperatively perturb hydrophobic contacts between the PR and DRV. Discerning and exploiting the mechanisms that underlie drug resistance in viral targets could proactively improve both current treatment and inhibitor design for HIV-1 targets.
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Grossman, Mikael. "Proposal networks in object detection." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-241918.

Повний текст джерела
Анотація:
Locating and extracting useful data from images is a task that has been revolutionized in the last decade, as computing power has risen to a level where deep neural networks can be used with success. A type of neural network that uses the convolution operation, the convolutional neural network (CNN), is well suited for image-related tasks. The convolution operation creates opportunities for the network to learn its own filters, which previously had to be hand-engineered. For locating objects in an image, the state-of-the-art Faster R-CNN model predicts objects in two parts. Firstly, the region proposal network (RPN) extracts regions from the picture where it is likely to find an object. Secondly, a detector verifies the likelihood of an object being in that region. For this thesis, we review the current literature on artificial neural networks, object detection methods and proposal methods, and present our new way of generating proposals. By replacing the RPN with our network, the multiscale proposal network (MPN), we increase the average precision (AP) by 12% and reduce the computation time per image by 10%.
Lokalisering av användbar data från bilder är något som har revolutionerats under det senaste decenniet när datorkraften har ökat till en nivå då man kan använda artificiella neurala nätverk i praktiken. En typ av neuralt nätverk som använder faltning passar utmärkt till bilder eftersom det ger möjlighet för nätverket att skapa sina egna filter, som tidigare skapades för hand. För lokalisering av objekt i bilder används huvudsakligen Faster R-CNN-arkitekturen. Den fungerar i två steg: först skapar RPN boxar som innehåller regioner där nätverket tror att det är störst sannolikhet att hitta ett objekt, sedan verifierar en detektor om boxen ligger på ett objekt. I denna uppsats går vi igenom den nuvarande litteraturen om artificiella neurala nätverk, objektdetektering och förslagsmetoder, och presenterar ett nytt sätt att generera förslag på regioner. Vi visar att genom att byta ut RPN mot vår metod (MPN) ökar vi precisionen med 12% och reducerar tiden med 10%.
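As a hedged illustration of the proposal filtering that any RPN-style network relies on (not the MPN itself), the sketch below computes intersection-over-union and applies greedy non-maximum suppression to invented boxes.

import numpy as np

def iou(box, boxes):
    # Boxes as (x1, y1, x2, y2); intersection-over-union of one box against an array of boxes.
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(np.asarray(box)) + area(boxes) - inter)

def nms(boxes, scores, thr=0.5):
    # Keep the highest-scoring proposals and drop near-duplicates, as an RPN/MPN post-step would.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < thr]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
print(nms(boxes, np.array([0.9, 0.8, 0.7])))   # -> [0, 2]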
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Guazzelli, Alex. "Aprendizagem em sistemas hibridos." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1994. http://hdl.handle.net/10183/25776.

Повний текст джерела
Анотація:
O presente trabalho apresenta dois novas modelos conexionistas, baseados na teoria da adaptação ressonante (ART): Simplified Fuzzy ARTMAP e Semantic ART (SMART). Descreve-se a modelagem, adaptação, implementação e validação destes, enquanto incorporados ao sistema hibrido HYCONES, para resolução de problemas de diagnostico medico em cardiopatias congênitas e nefrologia. HYCONES é uma ferramenta para a construção de sistemas especialistas híbridos que integra redes neurais com frames, assimilando as qualidades inerentes aos dois paradigmas. 0 mecanismo de frames fornece tipos construtores flexíveis para a modelagem do conhecimento do domínio, enquanto as redes neurais, representadas na versão original de HYCONES pelo modelo neural combinatório (MNC), possibilitam tanto a automação da aquisição de conhecimento, a partir de uma base de casos, quanta a implementação de aprendizado indutivo e dedutivo. A teoria da adaptação ressonante 6 caracterizada, principalmente, pela manutenção do equilíbrio entre as propriedades de plasticidade e estabilidade durante o processo de aprendizagem. ART inclui vários modelos conexionistas, tais como: Fuzzy ARTMAP, Fuzzy ART, ART 1, ART 2 e ART 3. Dentre estes, a rede neural Fuzzy ARTMAP destaca-se por possibilitar o tratamento de padr6es analógicos a partir de dois módulos ART básicos. O modelo Simplified Fuzzy ARTMAP, como o pr6prio nome o diz, a uma simplificação da rede neural Fuzzy ARTMAP. Ao contrario desta, o novo modelo possibilita o tratamento de padrões analógicos, a partir de apenas um modulo ART, responsável pelo tratamento dos padrões de entrada, adicionado de uma camada, responsável pelos padrões alvo. Mesmo com apenas um modulo ART, o modelo Simplified Fuzzy ARTMAP 6 capaz de reter o mesmo nível de desempenho obtido com a rede neural Fuzzy ARTMAP pois, continua a garantir, conjuntamente, a maximização da generalização e a minimização do erro preditivo, através da execução da estratégia match-tracking. Para a construção da base de casos de cardiopatias congênitas, 66 prontuários médicos, das três cardiopatias congênitas mais freqüentes, foram extraídos do banco de dados de pacientes submetidos a cirurgia cardíaca no Instituto de Cardiologia RS (ICFUC-RS). Tais prontuários abrangem o período de janeiro de 1986 a dezembro de 1990 e reportam 22 casos de Comunicação Interatrial (CIA), 29 de Comunicação Interventricular (CIV) e 15 de Defeito Septal Atrioventricular (DSAV). Para a análise de desempenho do sistema, 33 casos adicionais, do referido período, foram extraídos aleatoriamente do banco de dados do ICFUC-RS. Destes 33 casos, 13 apresentam CIA, 10 CIV e 10 DSAV. Para a construção da base de casos de síndromes renais, 381 prontuários do banco de dados de síndromes renais da Escola Paulista de Medicina foram analisados e 58 evidencias, correspondentes a dados de hist6ria clinica e exame físico dos pacientes, foram extraídas semi-automaticamente. Do total de casos selecionados, 136 apresentam Uremia, 85 Nefrite, 100 Hipertensão e 60 Litiase. Dos 381 casos analisados, 254 foram escolhidos aleatoriamente para a composicao do conjunto de treinamento, enquanto que os demais foram utilizados para a elaboração do conjunto de testes. Para que HYCONES II fosse validado, foram construídas 46 versões da base de conhecimento hibrida (BCH) para o domínio de cardiopatias congênitas e 46 versões da BCH para o de nefrologia. 
Em ambos os domínios médicos as respectivas bases de conhecimento foram construídas, automaticamente, a partir das respectivas bases de casos de treinamento. Das 46 versões geradas para cada grupo, uma representa o modelo MNC e 45 os modelos ART. As versões ART dividem-se em grupos de 3: 15 versões foram formadas a partir do modelo Simplified Fuzzy ARTMAP; 15 a partir deste mesmo modelo, sem que os padrões de entrada fossem normalizados; e, finalmente, 15 para o modelo Semantic ART. Na base de testes CHD, o desempenho da versa° HYCONES II - Simplified Fuzzy ARTMAP foi semelhante ao da versa° MNC. A primeira acertou 29 dos 33 diagnósticos (87,9%), enquanto a segunda apontou corretamente 31 dos 33 diagnósticos apresentados (93,9%). Na base de testes de síndromes renais, o desempenho de HYCONES II Fuzzy ARTMAP foi superior ao da versão MNC (p < 0,05). Ambas -Simplified acertaram, respectivamente, 108 (85%) e 95 (74,8%) diagnósticos, em 127 casos submetidos. Ainda que o desempenho da versão HYCONES II - Simplified Fuzzy ARTMAP se revelasse promissor, ao se examinar o conteúdo das redes geradas por este modelo, pode-se observar que estas divergiam completamente daquelas obtidas pelo MNC. As redes que levaram a conclusão diagnostica, na versão HYCONES - MNC, possuíam conteúdo praticamente igual aos grafos de conhecimento, elicitados de especialistas em cardiopatias congênitas. JA, as redes ativadas na versa° HYCONES II - Simplified Fuzzy ARTMAP, além de representarem numero bem major de evidencias que as redes MNC, a grande maioria destas ultimas representam a negação do padrão de entrada. Este fato deve-se a um processo de normalização, inerente ao modelo Simplified Fuzzy ARTMAP, no qual cada padrão de entrada e duplicado. Nesta duplicação, são representadas as evidências presentes em cada caso e, ao mesmo tempo, complementarmente, as evidencias ausentes, em relação ao total geral das mesmas na base de casos. Esta codificação inviabiliza o mecanismo de explanação do sistema HYCONES, pois, na área módica, os diagnósticos costumam ser feitos a partir de um conjunto de evidencias presentes e, não, pela ausência delas. Tentou-se, então, melhorar o conteúdo semântico das redes Simplified Fuzzy ARTMAP. Para tal, o processo de normalização ou codificação complementar da implementação do modelo foi retirado, validando-o novamente, contra o mesma base de testes. Na base de testes CHD, o desempenho de HYCONES II - Simplified Fuzzy ARTMAP, sem a codificação complementar, foi inferior ao da versão MNC (p < 0,05). A primeira acertou 25 dos 33 diagnósticos (75,8%), enquanto a segunda apontou corretamente 31 dos mesmos (93,9%). Na base de testes renais, o desempenho da versa° HYCONES II - Simplified Fuzzy ARTMAP, sem a codificação complementar, foi semelhante ao da versa° MNC. Dos 127 casos apresentados, a primeira acertou 98 diagn6sticos (77,2%), contra 95 da segunda (74,8%). Constatou-se, ainda, que as categorias de reconhecimento formadas pelo modelo Simplified Fuzzy ARTMAP continuavam a apresentar diferenças marcantes quanto ao seu conteúdo, quando comparadas as redes MNC ou aos grafos de conhecimento elicitados de especialistas. O modelo Semantic ART foi, então, proposto, na tentativa de se melhorar o conteúdo semantic° das redes ART. Modificou-se, então, o algoritmo de aprendizado do modelo Simplified Fuzzy ARTMAP, introduzindo-se o mecanismo de aprendizado indutivo do modelo MNC, i.e., o algoritmo de punições e recompensas, associado ao de poda e normalização. 
Nova validação com a mesma base de testes foi realizada. Para a base de testes de CHD, o desempenho de HYCONES II - SMART foi semelhante ao da versão Simplified Fuzzy ARTMAP e da versão MNC. A primeira e a segunda acertaram 29 dos 33 diagnósticos (87,9%), enquanto a versão MNC apontou corretamente 31 dos 33 diagnósticos apresentados (93,9%). Na base de testes de síndromes renais, o desempenho de HYCONES II - SMART foi superior ao da versão MNC (p < 0,05) e igual ao da versão Simplified Fuzzy ARTMAP. A primeira e a Ultima acertaram 108 dos 127 diagnósticos (85%), enquanto a segunda apontou corretamente 95 dos mesmos (74,8%). Desta feita, observou-se que as redes neurais geradas por HYCONES II - SMART eram semelhantes em conteúdo as redes MNC e aos grafos de conhecimento elicitados de múltiplos especialistas. As principais contribuições desta dissertação são: o projeto, implementação e validação dos modelos Simplified Fuzzy ARTMAP e SMART. Destaca-se, porem, o modelo SMART, que apresentou major valor semântico nas categorias de reconhecimento do que o observado nos modelos ART convencionais, graças a incorporação dos conceitos de especificidade e relevância. Esta dissertação, entretanto, representa não só a modelagem e validação de dois novos modelos neurais, mas sim, o enriquecimento do sistema HYCONES, a partir da continuação de dissertação de mestrado previamente defendida. A partir do presente trabalho, portanto, é dada a possibilidade de escolha, ao engenheiro de conhecimento, de um entre três modelos neurais: o MNC, o Semantic ART e o Simplified Fuzzy ARTMAP que, sem exceção, apresentam Born desempenho. Os dois primeiros destacam-se, contudo, por suportarem semanticamente o contexto.
This dissertation presents two new connectionist models based on the adaptive resonance theory (ART): Simplified Fuzzy ARTMAP and Semantic ART (SMART). The modeling, adaptation, implementation and validation of these models are described in their association with HYCONES, a hybrid connectionist expert system for solving classification problems. HYCONES integrates the knowledge representation mechanism of frames with neural networks, incorporating the inherent qualities of the two paradigms. While the frames mechanism provides flexible constructs for modeling the domain knowledge, neural networks, implemented in HYCONES' first version by the combinatorial neuron model (CNM), provide the means for automatic knowledge acquisition from a case database, enabling, as well, the implementation of deductive and inductive learning. The Adaptive Resonance Theory (ART) deals with systems that self-stabilize input patterns into recognition categories while maintaining a balance between the properties of plasticity and stability. ART includes a series of different connectionist models: Fuzzy ARTMAP, Fuzzy ART, ART 1, ART 2, and ART 3. Among them, the Fuzzy ARTMAP model stands out for being capable of learning analogical patterns using two basic ART modules. The Simplified Fuzzy ARTMAP model is a simplification of the Fuzzy ARTMAP neural network. In contrast to the first model, the new one is capable of learning analogical patterns using only one ART module. This module is responsible for the categorization of the input patterns; however, it has one more layer, which is responsible for receiving and propagating the target patterns through the network. The presence of a single ART module does not hamper the Simplified Fuzzy ARTMAP model: the same performance levels are attained when the latter runs without the second ART module. This is ensured by the match-tracking strategy, which jointly maximizes generalization and minimizes predictive error. Two medical domains were chosen to validate HYCONES performance: congenital heart diseases (CHD) and renal syndromes. To build the CHD case base, 66 medical records were extracted from the cardiac surgery database of the Institute of Cardiology RS (ICFUC-RS). These records cover the period from January 1986 to December 1990 and describe 22 cases of Atrial Septal Defect (ASD), 29 of Ventricular Septal Defect (VSD), and 15 of Atrial-Ventricular Septal Defect (AVSD), the three most frequent congenital heart diseases. For validation purposes, 33 additional cases from the same database and period were also extracted. Of these cases, 13 report ASD, 10 VSD and 10 AVSD. To build the renal syndromes case base, 381 medical records from the database of the Escola Paulista de Medicina were analyzed and 58 evidences, covering the patients' clinical history and physical examination data, were semi-automatically extracted. Of the selected cases, 136 exhibit Uremia, 85 Nephritis, 100 Hypertension, and 60 Calculosis. From the 381 cases analyzed, 245 were randomly chosen to build the training set, while the remaining ones were used to build the testing set. To validate HYCONES II, 46 versions of the hybrid knowledge base (HKB) with congenital heart diseases were built; for the renal domain, another set of 46 HKB versions was constructed. For both medical domains, the HKBs were automatically generated from the training databases. Of these 46 versions, one operates with the CNM model and the other 45 deal with the two ART models.
These ART versions are divided into three groups: 15 versions were built using the Simplified Fuzzy ARTMAP model; 15 used the Simplified Fuzzy ARTMAP model without normalization of the input patterns; and 15 used the Semantic ART model. HYCONES II - Simplified Fuzzy ARTMAP and HYCONES - CNM performed similarly in the CHD domain. The first correctly identified 29 of the 33 testing cases (87.9%), while the second correctly identified 31 of the same cases (93.9%). In the renal syndromes domain, however, the performance of HYCONES II - Simplified Fuzzy ARTMAP was superior to that of CNM (p < 0.05). The two versions correctly identified, respectively, 108 (85%) and 95 (74.8%) of the 127 testing cases presented to the system. HYCONES II - Simplified Fuzzy ARTMAP therefore displayed a satisfactory performance. However, the semantic contents of the neural nets it generated were completely different from those stemming from the CNM version. The networks that pointed out the final diagnosis in HYCONES - CNM were very similar to the knowledge graphs elicited from experts in congenital heart diseases. On the other hand, the networks activated in HYCONES II - Simplified Fuzzy ARTMAP operated with far more evidences than the CNM version. Besides this quantitative difference, there was a striking qualitative discrepancy between the two models. The Simplified Fuzzy ARTMAP version, even though it pointed to the correct diagnoses, used evidences that represented the complementary coding of the input pattern. This coding, inherent to the Simplified Fuzzy ARTMAP model, duplicates the input pattern, generating a new one depicting the evidence observed (on-cell) and, at the same time, the absent evidence, in relation to the total evidence employed to represent the input cases (off-cell). This coding shuts out the HYCONES explanation mechanism, since medical doctors usually reach a diagnostic conclusion from a set of observed evidences rather than from their absence. The next step was to improve the semantic contents of the Simplified Fuzzy ARTMAP model. To achieve this, the complement coding process was removed and the modified model was then revalidated on the same testing sets described above. In the CHD domain, the performance of HYCONES II - Simplified Fuzzy ARTMAP without complementary coding proved inferior to that of CNM (p < 0.05). The first model correctly identified 25 of the 33 testing cases (75.8%), while the second correctly identified 31 of the same 33 cases (93.9%). In the renal syndromes domain, the performances of HYCONES II - Simplified Fuzzy ARTMAP without complementary coding and HYCONES - CNM were similar. The first correctly identified 98 of the 127 testing cases (77.2%), while the second correctly identified 95 of the same cases (74.8%). However, the recognition categories formed by this modified Simplified Fuzzy ARTMAP still presented quantitative and qualitative differences in their contents when compared to the networks activated by CNM and to the knowledge graphs elicited from experts. This discrepancy, although smaller than the one observed in the original Fuzzy ARTMAP model, still restrained the HYCONES explanation mechanism. The Semantic ART model (SMART) was then proposed, with the goal of improving the semantic contents of ART recognition categories.
To build this new model, the Simplified Fuzzy ARTMAP architecture was preserved, while its learning algorithm was replaced by the CNM inductive learning mechanism (the punishments and rewards algorithm, associated with the pruning and normalization mechanisms). A new validation phase was then performed over the same testing sets. For the CHD domain, the performance comparison among the SMART, Simplified Fuzzy ARTMAP, and CNM versions showed similar results. The first and second versions correctly identified 29 of the 33 testing cases (87.9%), while the third correctly identified 31 of the same testing cases (93.9%). For the renal syndromes domain, the performance of HYCONES II - SMART was superior to that of the CNM version (p < 0.05) and equal to that of the Simplified Fuzzy ARTMAP version. SMART and Simplified Fuzzy ARTMAP correctly identified 108 of the 127 testing cases (85%), while the CNM version correctly identified 95 of the same 127 testing cases (74.8%). Finally, it was observed that the neural networks generated by HYCONES II - SMART had a content similar to the networks generated by CNM and to the knowledge graphs elicited from multiple experts. The main contributions of this dissertation are the design, implementation and validation of the Simplified Fuzzy ARTMAP and SMART models. The latter stands out for its learning mechanism, which provides a higher semantic value to the recognition categories when compared to the categories formed by conventional ART models. This important enhancement is obtained by incorporating specificity and relevance concepts into ART's dynamics. This dissertation, however, represents not only the design and validation of two new connectionist models, but also the enrichment of HYCONES, obtained through the continuation of a previous MSc dissertation under the same supervision. From the present work, therefore, the knowledge engineer is given the choice among three different neural network models: CNM, Semantic ART and Simplified Fuzzy ARTMAP, all of which display good performance. Indeed, the first and second models, in contrast to the third, support the context in a semantic way.
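To make the complement-coding discussion concrete, here is a hedged Python sketch of the Fuzzy ART choice and match computations referred to above; the evidence vectors, the vigilance value and the parameter alpha are illustrative, and the snippet is not HYCONES code.

import numpy as np

def complement_code(x):
    # Complement coding doubles the input to [x, 1 - x]; this is the step that hurt explainability
    # in the plain Simplified Fuzzy ARTMAP version, since "absent evidence" enters every category.
    x = np.asarray(x, float)
    return np.concatenate([x, 1.0 - x])

def choice_and_match(I, weights, alpha=0.001):
    # Fuzzy ART choice function T_j = |I ^ w_j| / (alpha + |w_j|) and match |I ^ w_j| / |I|,
    # where ^ is the component-wise minimum and |.| the L1 norm.
    T, match = [], []
    for w in weights:
        overlap = np.minimum(I, w).sum()
        T.append(overlap / (alpha + w.sum()))
        match.append(overlap / I.sum())
    return np.array(T), np.array(match)

I = complement_code([1, 0, 1, 0])                      # a case with two evidences present
weights = [complement_code([1, 0, 1, 0]), complement_code([0, 1, 0, 1])]
T, match = choice_and_match(I, weights)
winner = int(T.argmax())
vigilance = 0.8
print(winner, match[winner] >= vigilance)              # resonance test against the vigilance parameter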
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Bergström, Sebastian. "Customer segmentation of retail chain customers using cluster analysis." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252559.

Повний текст джерела
Анотація:
In this thesis, cluster analysis was applied to data comprising of customer spending habits at a retail chain in order to perform customer segmentation. The method used was a two-step cluster procedure in which the first step consisted of feature engineering, a square root transformation of the data in order to handle big spenders in the data set and finally principal component analysis in order to reduce the dimensionality of the data set. This was done to reduce the effects of high dimensionality. The second step consisted of applying clustering algorithms to the transformed data. The methods used were K-means clustering, Gaussian mixture models in the MCLUST family, t-distributed mixture models in the tEIGEN family and non-negative matrix factorization (NMF). For the NMF clustering a slightly different data pre-processing step was taken, specifically no PCA was performed. Clustering partitions were compared on the basis of the Silhouette index, Davies-Bouldin index and subject matter knowledge, which revealed that K-means clustering with K = 3 produces the most reasonable clusters. This algorithm was able to separate the customer into different segments depending on how many purchases they made overall and in these clusters some minor differences in spending habits are also evident. In other words there is some support for the claim that the customer segments have some variation in their spending habits.
I denna uppsats har klusteranalys tillämpats på data bestående av kunders konsumtionsvanor hos en detaljhandelskedja för att utföra kundsegmentering. Metoden som använts bestod av en två-stegs klusterprocedur där det första steget bestod av att skapa variabler, tillämpa en kvadratrotstransformation av datan för att hantera kunder som spenderar långt mer än genomsnittet och slutligen principalkomponentanalys för att reducera datans dimension. Detta gjordes för att mildra effekterna av att använda en högdimensionell datamängd. Det andra steget bestod av att tillämpa klusteralgoritmer på den transformerade datan. Metoderna som användes var K-means klustring, gaussiska blandningsmodeller i MCLUST-familjen, t-fördelade blandningsmodeller från tEIGEN-familjen och icke-negativ matrisfaktorisering (NMF). För klustring med NMF användes förbehandling av datan, mer specifikt genomfördes ingen PCA. Klusterpartitioner jämfördes baserat på silhuettvärden, Davies-Bouldin-indexet och ämneskunskap, som avslöjade att K-means klustring med K=3 producerar de rimligaste resultaten. Denna algoritm lyckades separera kunderna i olika segment beroende på hur många köp de gjort överlag och i dessa segment finns vissa skillnader i konsumtionsvanor. Med andra ord finns visst stöd för påståendet att kundsegmenten har en del variation i sina konsumtionsvanor.
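A hedged sketch of the two-step procedure described above (square-root transform, PCA, then K-means scored with the silhouette and Davies-Bouldin indices), using simulated spend data in place of the retail chain's customer records:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, davies_bouldin_score

# Hypothetical spend matrix: rows = customers, columns = product categories.
rng = np.random.default_rng(0)
spend = rng.gamma(shape=2.0, scale=50.0, size=(1000, 12))

X = np.sqrt(spend)                          # square-root transform dampens big spenders
X = PCA(n_components=5).fit_transform(X)    # reduce dimensionality before clustering

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(silhouette_score(X, labels), davies_bouldin_score(X, labels))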
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Silva, Daniel Lucas Alves da. "Racismo antinegro no português brasileiro e uma proposta de avaliação para professores de PLE." Universidade Estadual Paulista (UNESP), 2018. http://hdl.handle.net/11449/153687.

Повний текст джерела
Анотація:
In the context of increasing interest in Portuguese as a foreign language (henceforth PFL), this project contributes to PFL teachers' understanding of the racial dynamics that manifest themselves in Brazilian Portuguese, in particular anti-black racism. We argue that considerations about race should inform the design of an assessment instrument for PFL teachers, the EPPLE-PLE (a proficiency exam for foreign-language teachers, in its Portuguese acronym). As an expected result, teaching can then foster greater awareness, among both teachers and learners of PFL, of this cultural dimension that forms part of the history of Brazilian Portuguese. To this end, we draw on critical race theory applied to foreign-language teacher education as presented by Ferreira (2015), the idea of racial literacy according to Skerret (2011), the concept of washback by design from Messick (1989) and sociocultural theory in the terms of Vigotski (1987) in order to select items for the aforementioned exam. This is a process of legitimating the proposition and elaboration of items for an exam that we deem can be a beneficial intervention in the practice of PFL teachers.
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Bori, Pau. "Anàlisi crítica de llibres de text de català per a no catalanoparlants adults en temps de neoliberalisme." Doctoral thesis, Universitat Pompeu Fabra, 2015. http://hdl.handle.net/10803/350798.

Повний текст джерела
Анотація:
This thesis studies contemporary course books of Catalan as a foreign language published between 2005 and 2015 from a critical perspective. The two main objectives are: (a) to describe how the macro context influences the nature of the studied materials, and (b) to examine the relationship between the content of the course books and the socio-economic conditions of the latest phase of capitalism. To accomplish the first objective, the study explores how foreign and second language teaching and textbook design have evolved in relation to the wider macro context. The study suggests that the language policies of the Council of Europe have a major impact on foreign language teaching in Europe and consequently influence the curriculum and course-book design of Catalan as a foreign language. This institution has been actively involved in the creation and promotion of communicative language teaching, with its emphasis on instrumental language, and has been a firm promoter of the standardization, centralization and homogenization of foreign language learning and course-book design. The Council of Europe's language-learning projects have been developed in accord with the mercantilist spirit of neoliberalism that extends to all spheres of life. To accomplish the second objective, a quantitative analysis of the corpus was carried out, followed by a more interpretative one centered on the topics of work, housing and travel. The results suggest that neoliberal practices and values usually appear in a positive, naturalized way, without mention of their negative aspects or limitations. Moreover, the course books analyzed propose activities that encourage in students the roles of consumers and of flexible, entrepreneurial workers that the current economic order requires.
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Malmgren, Henrik. "Revision of an artificial neural network enabling industrial sorting." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-392690.

Повний текст джерела
Анотація:
Convolutional artificial neural networks can be applied to image-based object classification to inform automated actions, such as the handling of objects on a production line. The present thesis describes the theoretical background for creating a classifier and explores the effects of introducing a set of relatively recent techniques into an existing ensemble of classifiers in use for an industrial sorting system. The findings indicate that it is important to use dropout regularization with spatial variety for high-resolution image inputs, and to use an optimizer configuration with good convergence properties. The findings also demonstrate examples of ensemble classifiers being effectively consolidated into unified models using the distillation technique. An analogous arrangement, optimized against multiple output targets incorporating additional information, showed accuracy gains comparable to ensembling. For use of the classifier on test data whose statistics differ from those of the training dataset, results indicate that augmenting the input data during classifier creation helps performance, but would, in the current case, likely need to be guided by information about the distribution shift in order to have a sufficiently positive impact for practical application. For future development, I suggest updated architectures, automated hyperparameter search and leveraging the abundant unlabeled data potentially available from production lines.
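As a hedged illustration of the distillation technique mentioned above (not the thesis's actual implementation), the sketch below combines a temperature-softened KL term against a teacher ensemble's logits with ordinary cross-entropy against the true labels; the temperature and weighting values are assumptions.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
        # Soft-target term: KL divergence between temperature-softened distributions,
        # scaled by T*T to keep gradient magnitudes comparable across temperatures.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        # Hard-target term: standard cross-entropy against the true labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard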
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Azemikhah, Homayoon. "The Double Heuristic Method: perspectives on how teachers deal with an alternative model for teaching in the VET sector." Thesis, 2013. http://hdl.handle.net/2440/86274.

Повний текст джерела
Анотація:
The aim of this research has been to investigate how teachers in Vocational Education and Training (VET) in Australia deal with the Double Heuristic Method (DHM) as an alternative model for teaching in the VET sector. The context is the global vocational education landscape, highlighting approaches to the delivery of vocational education in the USA, European countries and Australia, and various typologies of competency-based training (CBT) are explored in relation to these approaches. VET systems around the world have been undergoing continuous reform, moving towards a more holistic approach to teaching and learning in higher education. In this context, the Australian VET sector has been and continues to be faced with challenges in implementing the Australian Training Packages, the core curriculum for the sector. The inquiry was conducted within an interpretive paradigm, seeking to capture teachers' perspectives on using the DHM and to investigate how VET teachers deal with the pedagogical challenges in their work. Central to the research question is how teachers deal with an alternative model of pedagogy, the Double Heuristic Method, from their own frames of reference and as they experience it. Qualitative methods were used for data collection and analysis, with a series of semi-structured interviews as the primary source of data. The analysis was based on the principles of the grounded theory method as outlined by Strauss and Corbin (1990), whereby participants are placed in a position to consider a phenomenon and how they make meaning of it. Across the two years of data collection and analysis, all three types of coding were utilised: open coding, axial coding and selective coding (Strauss and Corbin, 1990). As the Double Heuristic Method is a relatively new approach in Australian vocational education, there has been no prior research on its use in this area. Hence, this research contributes new insights into teachers' perspectives on and responses to one approach to teaching the Units of Competency in the Australian Training Packages, using an alternative pedagogical framework.
Thesis (Ph.D.) -- University of Adelaide, School of Education, 2013
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Gordon, Denise. "A Case Study of the Applied Learning Academy: Reconceptualized Quantum Design of Applied Learning." 2009. http://hdl.handle.net/1969.1/ETD-TAMU-2009-12-7612.

Повний текст джерела
Анотація:
The purpose of this qualitative study was to examine the Applied Learning Academy (ALA) and to allow the lessons learned from this public school to emerge from the narrative stories of past students, parents, teachers, administrators, and local business associates who have been directly involved in and influenced by the applied learning teaching method. Accountability is critical for all public and charter schools, and districts have been trying to raise standards with new programs and strategies in an effort to make learning experiences relevant to students' daily lives. Revisiting John Dewey's philosophy from the progressive movement, project-based and service learning, community partnerships, and portfolio assessment helped to create the applied learning method. In the present study, a qualitative case study approach was used to identify the success factors, benefits, and drawbacks of applied learning, to describe how portfolio assessment, project-based learning, and community-based partnerships operate within the classroom, and to understand the impact and misconceptions of applied learning as experienced through the Recognized Campus, ALA, a grade 6-8 public middle school within a large urban school district. Participant interviews, field observations, and historical records indicated that a student-centered project-based curriculum, a small school size creating family-like relationships, community involvement through partnerships, service learning projects, and metacognitive development from portfolio assessments were the major factors that supported academic rigor and relevance, owing to the real educational applications in this applied learning middle school. Briefly defined, applied learning occurs when a problem is identified in the surrounding community and the solution is generated by the students. Over 15 years, the impact of applied learning ultimately led to the development of four applied learning schools, despite the misconception that applied learning was a remedial or gifted program. Redefining applied learning for better understanding produced a reconceptualized diagram borrowed from the quantum mechanics model; reconceptualization expands the interpretation by increasing intellectual flexibility. As students become energized by the applicable skills acquired through service learning, a project-based curriculum, and portfolio assessment, their academic growth should rise to a higher educational "energy level" supported by critical, situated-learning, and feminist theories.
Стилі APA, Harvard, Vancouver, ISO та ін.
46

Tsai, Ching-Horng, and 蔡青宏. "Learning Theory and Regional Development of Applied Research in Taichung City." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/65628053278253873474.

Повний текст джерела
Анотація:
Doctorate
Feng Chia University
Ph.D. Program in Civil and Hydraulic Engineering
Academic year 99 (ROC calendar)
Information technology has given rise to an information explosion, driving a transformation from the traditional mode of production, supported mainly by material and energy techniques, to an Internet mode supported by information technology. This transformation has changed methods of production and ways of life, and it has become a common concern of many disciplines, including philosophy, sociology, economics and geography. Regions are the core object of geographical study, regional development is its main field, and technological advance is the internal driver of regional development. The information era brings both opportunities and challenges; in order to adapt to rapid change and uncertainty, how to use information and create knowledge has become the key to all regional development. This study synthesizes theories and approaches from applied economics, sociology, economic geography and regional science, and regards the region as a developing organism in strong interaction with its environment: the region is the internal cause of regional development, the environment is the external one, and the initiative and creativity of regions determine the direction of their development. The core of the study is the mechanism by which a learning region forms and develops, including its action mechanism, cooperation mechanism, regional-difference mechanism and adjustment mechanism. From the enterprise, industrial and regional levels, the causes of a learning region's development are examined in order to explore the relationship between learning and the regional economy, to analyze the connection between knowledge production, value actualization and regional development, to expound the intrinsic logic, and to construct a complete development framework for a learning region. First, the study analyzes regional development theory and the basic theory of the innovative city, and reviews and evaluates existing research results on learning regions; from the perspectives of informatization, globalization and a dynamic market, it sets out the macroscopic background from which the learning region derives. Next, the action mechanism is used to analyze regional development, specifically the reform of knowledge production, the reasons why enterprises become the main body of knowledge production, the mechanism of the enterprise mode of knowledge production and the mechanism of the requirements of industrial cooperation. Finally, the study analyzes the evolution of learning regions in time and space and explores the adjustment mechanisms and measures of the learning region and the innovative city. Domestic and foreign case studies illustrate the basic features and types of innovative countries and the development mode and evolution of innovative cities; the Scottish learning region case is reviewed for the lessons it offers, and a domestic reform case illustrates the transformation from a regional technology-center city into an innovative city. Taking Taichung as the example, the theory of the learning region is applied to the city's developmental problems from a historical perspective in order to identify and analyze weak learning capacity and to propose measures for the city's regional development.
Стилі APA, Harvard, Vancouver, ISO та ін.
47

CHEN, YUAN-KENG, and 陳源庚. "The Analyses of Teaching Effectiveness Based Learning Community Theory Applied on Interactive Whiteboard." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/76092394381970282962.

Повний текст джерела
Анотація:
Master's degree
Asia University
In-service Master's Program, Department of Photonics and Communication Engineering
Academic year 102 (ROC calendar)
This study designs an instructional model that uses the interactive whiteboard (IWB) in accordance with the theory of the "learning community" proposed by Professor Misaki Satoto, emphasizing teacher-student as well as peer interaction, and investigates the advantages of the proposed instructional model for promoting learning effectiveness. The IWB system can be applied to the development of self-constructed knowledge and self-directed learning. The participants, students from two elementary schools in Nantou County, were tested during the experiment period. Results were obtained with t-tests in the statistical software SPSS, based on the research hypotheses of the study. The findings show that combining learning community theory with IWB instruction, the so-called "LCIWB" model, yields a remarkable improvement in learning effectiveness in the elementary school nature and life technology course, with scores improving by about 10 points. The proposed LCIWB methodology is therefore helpful to students in promoting learning effectiveness in this course.
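The thesis reports its group comparison as t-tests run in SPSS; as a rough Python equivalent, the sketch below runs an independent-samples Welch t-test. The score lists are invented placeholders for illustration, not data from the study.

    from scipy import stats

    # Placeholder post-test scores for an LCIWB class and a conventionally taught class.
    lciwb = [82, 88, 75, 91, 84, 79, 86, 90]
    control = [70, 74, 81, 68, 77, 72, 80, 69]

    t, p = stats.ttest_ind(lciwb, control, equal_var=False)  # Welch's t-test
    print(f"t = {t:.2f}, p = {p:.4f}")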
Стилі APA, Harvard, Vancouver, ISO та ін.
48

Fuentes, Erika. "Statistical and Machine Learning Techniques Applied to Algorithm Selection for Solving Sparse Linear Systems." 2007. http://trace.tennessee.edu/utk_graddiss/171.

Повний текст джерела
Анотація:
Many applications and problems in science and engineering require large-scale numerical simulations and computations. The issue of choosing an appropriate method to solve these problems is very common; however, it is not a trivial one, principally because the decision is often too hard for humans to make, or because it requires a certain degree of expertise and knowledge in the particular discipline or in mathematics. Thus, a methodology that can facilitate or automate this process, and that helps to understand the problem, would be of great interest. The proposal is to use various statistically based machine-learning and data-mining techniques to analyze and automate the process of choosing an appropriate numerical algorithm for solving a specific class of problems (sparse linear systems) based on their individual properties.
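As a sketch of the kind of pipeline proposed, under assumed file names and features rather than the thesis's actual setup, one could describe each sparse linear system by a vector of structural properties and train a classifier to predict which solver performed best:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # X: one row per linear system; columns are matrix properties such as size,
    # nonzeros per row, symmetry and diagonal dominance (hypothetical feature file).
    X = np.load("matrix_features.npy")
    # y: label of the solver/preconditioner combination that solved each system fastest.
    y = np.load("best_solver_labels.npy")

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())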
Стилі APA, Harvard, Vancouver, ISO та ін.
49

Wu, Mei-Chih, and 吳美枝. "A Research on the Learning Effect of Cooperative Learning Applied to Music Theory Teaching of the Junior High School." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/60004120145997113773.

Повний текст джерела
Анотація:
Master's degree
National Taichung University of Education
Master's Program in Curriculum and Instruction, Department of Education
Academic year 100 (ROC calendar)
The main purpose of this research is to investigate the effects of cooperative learning applied to music theory teaching in junior high school. A nonequivalent pretest-posttest quasi-experimental design was adopted. Students of two junior high school classes were assigned to an experimental group and a control group: the experimental group was taught by cooperative learning and the control group by traditional teaching. The teaching content, teacher, time length and classroom were the same for both groups, and the experimental treatment lasted 9 weeks for a total of 18 classes. Research data were collected through self-designed instruments, including "Music Theory Test A", "Music Theory Test B" and the "Questionnaire of Students' Opinion about Cooperative Learning", among others. The main findings of this research are as follows: 1. The experimental treatment was significantly effective in enhancing students' grasp of basic music theory, but there were no statistically significant differences between the learning achievements of the experimental and control groups. 2. Applying cooperative learning to junior high school music theory teaching was effective in increasing students' learning motivation, cultivating a spirit of mutual assistance and cooperation among peers, and having a positive impact on the classroom climate. 3. Applying cooperative learning to junior high school music theory teaching was effective in developing the researcher's expertise and changing her beliefs about teaching. Based on these results, suggestions are made for music teaching and future research.
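For a nonequivalent pretest-posttest design like this one, a common analysis is to compare post-test scores while adjusting for the pre-test; the sketch below does so with an ANCOVA-style regression in Python. The data file and column names are hypothetical, not the study's materials.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical file with one row per student: pretest score, posttest score,
    # and group ("cooperative" or "traditional").
    df = pd.read_csv("music_theory_scores.csv")

    # ANCOVA-style model: does group membership predict post-test scores
    # once pre-test scores are controlled for?
    model = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
    print(model.summary())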
Стилі APA, Harvard, Vancouver, ISO та ін.
50

Tong-Fa, Li, and 李同法. "Situational Learning Theory Applied to Display Research and Creation - take the results of the researcher company as an example." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/92rnqn.

Повний текст джерела
Анотація:
Master's degree
Shu-Te University
Master's Program, Department of Visual Communication Design
Academic year 106 (ROC calendar)
The main functions of a museum are collection, research, display, and education. Through exhibitions, museums provide visitors with a spontaneous and joyful learning environment. Exhibitions are visual feasts that combine technology and the arts, constructing interactive platforms between exhibits and visitors. The purpose of this research is to discuss how to design a "people-oriented" exhibition that not only offers people a great learning environment but also serves as an exhibition design strategy. In the digital generation, people rely on computer, communication, and consumer products to interact, which means specific information can be found easily; the question, then, is how to motivate audiences to visit an exhibition of their own accord. How to design a realistic, exciting, and moving real-life display environment that enhances the audience's enjoyment of learning and gaining knowledge is an important topic that must be addressed at present. Given the rapid development of digital technology, the purposes of this research are to balance the use of virtual images with the integration of real exhibits and to use technology, art, and creativity to construct a new, interactive, and engaging learning environment. Based on Lave's situational learning theory, we believe that the arrangement of the learning environment is the key to success: knowledge should be constructed in authentic activities, and the audience should use the knowledge they have learned to comprehend the content and meaning behind an exhibition. By applying this theory, individual exploratory learning, mutual learning among peers, and mentored learning can all generate knowledge exchange and recognition. The researcher has been engaged in display design for many years and has rich practical experience. In recent years, the firm's cases have focused on display production for reservoir anti-silting tunnel engineering, designing virtual animated films and physical models of reservoirs and integrating "virtual" and "real" design methods. Creating a dynamic situational display for the anti-silting tunnel project, using the company's own cases to analyze situational learning theory, sharing the design development process, and expressing it as clearly as possible provide strong evidence for this exhibition design research; exploring situational learning theory shows that visitors achieve substantial learning results.
Стилі APA, Harvard, Vancouver, ISO та ін.