Dissertations / Theses on the topic 'Accelerative learning'

Consult the top 50 dissertations / theses for your research on the topic 'Accelerative learning.'


1

Scharn, Kay. "Accelerative learning in review." Online version, 1999. http://www.uwstout.edu/lib/thesis/1999/1999scharnk.pdf.

2

Mou, Dai. "The Use of Suggestion as a Classroom Learning Strategy in China and Australia: An Assessment Scale with Structural Equation Explanatory Models in Terms of Stress, Depression, Learning Styles and Academic Grades." RMIT University. Education, 2006. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20070207.152256.

Abstract:
This study is innovative in that it draws together concepts of suggestion from several cultural groups and develops an inventory to account for variations in its occurrence, applying the scale to the relatively new area of the effects of suggestion in classrooms and comparing its effect on personality and academic variables. As new ideas and knowledge become more widespread and accepted by the community and the teaching profession, precision in the application of suggestion in the classroom is seen as increasingly important. Although new to education, suggestion and its variations have always been central to influencing behaviour and learning in the pastoral, counselling and hypnotherapy fields. Teachers with experience or influence from those fields, or from the ideas of Lozanov (1978) or accelerated learning groups, were and are the exception rather than the rule. However, as new ideas become more influential, the role of suggestion is becoming increasingly important in progressive, modern education. A major goal of the study was to provide a valid instrument to compare Chinese and Australian differences and similarities in the use of suggestion in learning. It was hoped that such a comparison would provide increased mutual understanding of the values, strategies, practices and preferences of teachers and students. A second goal was to develop a causative model that explained the relationships between the measured variables of personality, learning behaviour, and suggestion in teaching and learning. A third aim was to compare the effects and performance of suggestion in teaching and learning in Australian, Chinese, and Australian accelerative learning classes. This study examined differences between Australian and Chinese high school Science classrooms in their use of suggestion in teaching and learning.
To ascertain the prevalence and types of suggestion in the classroom, the 39-item Suggestion in Teaching and Learning (STL) scale was developed and validated in Year 7, 9, and 11 high school classes in China and Australia. The STL scale categorized suggestion into the following types or subscales: self-suggestion, metaphor, indirect non-verbal suggestion, general spoken suggestion, negative suggestion, intuitive suggestion, direct verbal suggestion, relaxation, and de-suggestion. The study involved surveying 344 participants (182 female, 162 male) from four high schools in Australia and China. A further 374 participants (108 teachers, 266 students) from six high schools were surveyed in a pilot study to select a Chinese sample. About 284 participants (China: 200 students; Australia: 84 students, including 8 adults) were observed for validation of the STL instrument. All subjects and classes were randomly selected and were surveyed and observed for the purpose of scale and model development. The STL scale was found to be capable of distinguishing different types of suggestion within Chinese, Australian, and Australian accelerative learning classes, and was significant as the first scale to measure suggestion in teaching and learning in Australian and Chinese classrooms. Items in the scale were strongly and significantly correlated with other items within the subscales and with the overall scale. Path-analytic techniques were used to explain relationships between the STL scale, its subscales, nation, gender, and high school students' profiles on stress, depression, learning styles and academic grades. Limitations of the study included problems arising from language and cultural differences, as well as the newness of the scale and the field of study.
Recommendations for further study included strengthening aspects of the scale with new items and further qualitative and quantitative studies on the uses of suggestion in academic learning and other forms of change in childhood and adolescence.
3

Mathari, Bakthavatsalam Pagalavan. "Hardware Acceleration of a Neighborhood Dependent Component Feature Learning (NDCFL) Super-Resolution Algorithm." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1366034621.

4

Samal, Kruttidipta. "FPGA acceleration of CNN training." Thesis, Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54467.

Abstract:
This thesis presents the results of an architectural study on the design of FPGA-based architectures for convolutional neural networks (CNNs). We analyzed the memory access patterns of a CNN (one of the largest networks in the family of deep learning algorithms) by creating a trace of a well-known CNN architecture and by developing a trace-driven DRAM simulator. The simulator uses the traces to analyze the effect that different storage patterns, and the mismatch in speed between memory and processing elements, can have on the CNN system. This insight is then used to create an initial layer architecture for the CNN on an FPGA platform. The FPGA is designed to have multiple parallel execution units, and a data layout for the on-chip memory is designed to increase parallelism, since the number of parallel units depends on the memory layout of inputs and outputs, in particular on whether parallel read and write accesses can be scheduled. The on-chip memory layout minimizes access contention during the operation of parallel units. The result is an SoC (System on Chip) accelerator that supports more parallel units than previous work. The improvement was also observed by comparing post-synthesis loop latency tables between our design and a single-unit design. This initial design can inform FPGAs targeted at deep learning algorithms that can compete with GPUs in terms of performance.
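The trace-driven analysis of storage patterns described above can be illustrated with a toy open-page DRAM model. The row size and hit/miss timings below are invented; the point is how an access pattern that walks a feature map against its storage order keeps reopening DRAM rows:

```python
def dram_cycles(trace, row_size=1024, t_hit=1, t_miss=10):
    """Toy open-page DRAM model: an access to the currently open
    row costs t_hit cycles, any other row costs t_miss."""
    open_row, cycles = None, 0
    for addr in trace:
        row = addr // row_size
        cycles += t_hit if row == open_row else t_miss
        open_row = row
    return cycles

def row_major_trace(h, w, elem=4):
    """Byte addresses of an h-by-w feature map read in storage order."""
    return [(y * w + x) * elem for y in range(h) for x in range(w)]

def column_major_trace(h, w, elem=4):
    """The same map read column-by-column, against its storage order."""
    return [(y * w + x) * elem for x in range(w) for y in range(h)]

H, W = 64, 64
sequential = dram_cycles(row_major_trace(H, W))
strided = dram_cycles(column_major_trace(H, W))
```

A tiled or transposed layout that matches the access order of the processing elements converts most misses back into hits, which is the kind of insight such a simulator can feed into the on-chip data layout.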
5

Singh, Karanpreet. "Accelerating Structural Design and Optimization using Machine Learning." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/104114.

Abstract:
Machine learning techniques promise to greatly accelerate structural design and optimization. In this thesis, deep learning and active learning techniques are applied to different non-convex structural optimization problems. Standard optimization methods based on Finite Element Analysis (FEA) for aircraft panels with bio-inspired curvilinear stiffeners are computationally expensive. The main reason for employing many of these standard methods is the ease of their integration with FEA; however, each optimization requires multiple computationally expensive FEA evaluations, making their use impractical at times. To accelerate optimization, the use of Deep Neural Networks (DNNs) is proposed to approximate the FEA buckling response. The results show that the DNNs evaluated the buckling load with 95% accuracy and accelerated the optimization by a factor of nearly 200. The presented work demonstrates the potential of DNN-based machine learning algorithms for accelerating the optimization of bio-inspired curvilinearly stiffened panels, although the approach is specific to similar structural design problems and requires large datasets for DNN training. An adaptive machine learning technique called active learning is then used to accelerate the evolutionary optimization of complex structures. The active learner helps the Genetic Algorithm (GA) by predicting whether a candidate design will satisfy the required constraints. The approach does not need a surrogate model trained prior to the optimization; instead, the active learner adaptively improves its own accuracy during the optimization, saving FEA evaluations. The results show that the approach has the potential to reduce the total required FEA evaluations by more than 50%. Lastly, machine learning is used to make recommendations for modeling choices when analyzing a structure using FEA.
The decisions about the selection of appropriate modeling techniques are usually based on an analyst's judgement, drawing on knowledge and intuition from past experience. The machine learning-based approach provides recommendations within seconds, thus saving significant computational resources while supporting accurate design choices.
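The active-learning idea described above, a cheap classifier trained on the fly that decides when the expensive constraint check can be skipped, can be sketched as follows. The feasibility rule, the 1-nearest-neighbour learner, and all numbers are invented stand-ins, not the thesis's method:

```python
import math
import random

def expensive_fea_feasible(x):
    """Stand-in for an FEA constraint check (hypothetical rule:
    a design is feasible when it lies inside the unit circle)."""
    return x[0] ** 2 + x[1] ** 2 <= 1.0

class ActiveFeasibilityLearner:
    """1-nearest-neighbour feasibility predictor trained during the
    search: confident predictions skip the expensive check."""
    def __init__(self, radius=0.15):
        self.seen = []          # (point, feasible) pairs
        self.radius = radius
        self.fea_calls = 0

    def is_feasible(self, x):
        if self.seen:
            p, label = min(self.seen, key=lambda s: math.dist(s[0], x))
            if math.dist(p, x) < self.radius:
                return label                  # confident: no FEA needed
        label = expensive_fea_feasible(x)     # fall back to "FEA"
        self.fea_calls += 1
        self.seen.append((x, label))
        return label

random.seed(0)
learner = ActiveFeasibilityLearner()
population = [(random.uniform(-2, 2), random.uniform(-2, 2))
              for _ in range(2000)]
feasible = [x for x in population if learner.is_feasible(x)]
saved = 2000 - learner.fea_calls   # constraint evaluations avoided
```

Inside a GA loop the same learner would filter offspring before fitness evaluation, which is where the reported savings in FEA calls come from.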
Doctor of Philosophy
This thesis presents an innovative application of artificial intelligence (AI) techniques for designing aircraft structures. An important objective for the aerospace industry is to design robust and fuel-efficient aerospace structures. The state-of-the-art research in the literature shows that aircraft structures in the future could mimic organic cellular structure. However, the design of these new panels with arbitrary structures is computationally expensive. For instance, applying the standard optimization methods currently used for aerospace structures to design an aircraft can take anywhere from a few days to months. The presented research demonstrates the potential of AI for accelerating the optimization of aircraft structures. This will provide an efficient way for designers to create futuristic fuel-efficient aircraft, with a positive impact on the environment and the world.
6

Li, Zheng. "Accelerating Catalyst Discovery via Ab Initio Machine Learning." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/95915.

Abstract:
In recent decades, machine learning techniques have received an explosion of interest in the domain of high-throughput materials discovery, which is largely attributed to the fast-growing development of quantum-chemical methods and learning algorithms. Nevertheless, machine learning for catalysis is still in its initial stage due to our insufficient knowledge of the structure-property relationships. In this regard, we demonstrate a holistic machine-learning framework as a surrogate model for expensive density functional theory to facilitate the discovery of high-performance catalysts. The framework, which integrates descriptor-based kinetic analysis, material fingerprinting and machine learning algorithms, can rapidly explore a broad materials space with enormous compositional and configurational degrees of freedom prior to expensive quantum-chemical calculations and/or experimental testing. Importantly, advanced machine learning approaches (e.g., global sensitivity analysis, principal component analysis, and exploratory analysis) can be utilized to shed light on the underlying physical factors governing catalytic activity on diverse types of catalytic materials with different applications. Chapter 1 introduces basic concepts and knowledge relating to computational catalyst design. Chapters 2 and 3 demonstrate the methodology for constructing the machine-learning models for bimetallic catalysts. In Chapter 4, the multi-functionality of the machine-learning models is illustrated in understanding the metalloporphyrins' underlying structure-property relationships. In Chapter 5, an uncertainty-guided machine learning strategy is introduced to tackle the challenge of data deficiency for perovskite electrode materials design in the electrochemical water-splitting cell.
Doctor of Philosophy
Machine learning and deep learning techniques have revolutionized a range of industries in recent years and have huge potential to improve every aspect of our daily lives. Essentially, machine learning gives algorithms the ability to automatically discover the hidden patterns of data without being explicitly programmed. Because of this, machine learning models have achieved huge success in applications such as website recommendation systems, online fraud detection, robotic technologies, image recognition, etc. Nevertheless, implementing machine-learning techniques in the field of catalyst design remains difficult due to two primary challenges. The first challenge is our insufficient knowledge about the structure-property relationships for diverse material systems. Typically, developing a physically intuitive material feature method requires in-depth expert knowledge about the underlying physics of the material system, and it remains an active field. The second challenge is the lack of training data in academic research. In many cases, collecting a sufficient amount of training data is not feasible due to the limitation of computational and experimental resources. Consequently, a machine learning model optimized with small data tends to be over-fitted and can give biased predictions with huge uncertainties. To address these challenges, this thesis focuses on the development of robust feature methods and strategies for a variety of catalyst systems using density functional theory (DFT) calculations. Through the case studies in the chapters, we show that bulk electronic structure characteristics are successful features for capturing the adsorption properties of metal alloys and metal oxides, while molecular graphs are robust features for molecular properties, e.g., the energy gap, of metal-organic compounds.
Besides, we demonstrate that the adaptive machine learning workflow is an effective strategy to tackle the data deficiency issue in search of perovskite catalysts for the oxygen evolution reaction.
7

Erickson, Xavante. "Acceleration of Machine-Learning Pipeline Using Parallel Computing." Thesis, Uppsala universitet, Signaler och system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-441722.

Abstract:
Researchers from Lund have conducted research on classifying images in three different categories (faces, landmarks and objects) from EEG data [1]. The researchers used SVMs (Support Vector Machines) to classify between the three categories [2, 3]. The scripts written for this computation had the potential to be heavily parallelized and could be optimized to complete the computations much faster. The scripts were originally written in MATLAB, which is proprietary software and not the most popular language for machine learning. The aim of this project is to translate the MATLAB code in the aforementioned Lund project to Python and perform code optimization and parallelization in order to reduce the execution time. With much of data science transitioning into Python as well, a key part of this project was understanding the differences between MATLAB and Python and how to translate MATLAB code to Python. With the exception of the preprocessing scripts, all the original MATLAB scripts were translated to Python. The translated Python scripts were optimized for speed and parallelized to decrease the execution time even further. Two major parallel implementations of the Python scripts were made: one using the Ray framework to compute in the cloud [4], and one using the Accelerator, a framework that computes using local threads [5]. After translation, the code was tested against the original results and profiled for key mistakes, for example functions which took unnecessarily long to execute. After optimization, the single-threaded script was twelve times faster than the original MATLAB script. The final execution times were around 12-15 minutes; compared to the benchmark of 48 hours, this is about 200 times faster. The benchmark of the original code used fewer iterations than the researchers had used, which decreased the computational time from a week to 48 hours.
The results of the project highlight the importance of learning and teaching basic profiling of slow code. While not fully considered in this project, complexity analysis of code is important as well. Future work includes a deeper complexity analysis at both a high and a low level, since a high-level language such as Python relies heavily on modules with low-level code. Future work also includes an in-depth analysis of the NumPy source code, as the current code relies heavily on NumPy, which has shown to be a bottleneck in this project.
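Two of the patterns the project relied on, dispatching independent units of work in parallel and reusing previously computed results rather than recomputing them, can be sketched with the standard library alone. The fold computation here is a trivial stand-in, and this is not the Ray or Accelerator API:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

@lru_cache(maxsize=None)
def evaluate_fold(fold_id):
    """Stand-in for one expensive, independent unit of work
    (e.g. training and scoring an SVM on one data split)."""
    return sum(i * i for i in range(10_000)) + fold_id

# Independent folds can be dispatched in parallel ...
with ThreadPoolExecutor(max_workers=4) as pool:
    scores = list(pool.map(evaluate_fold, range(8)))

# ... and, like the Accelerator's bookkeeping of what has already
# been computed, cached results are returned instead of recomputed
# when the same work is requested again.
rerun = [evaluate_fold(i) for i in range(8)]
```

For CPU-bound NumPy-free Python a process pool (or a framework like Ray) is the usual choice instead of threads; the thread pool here keeps the sketch self-contained.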
Computers are a central and unavoidable part of many people's everyday lives today, and the advances made in machine learning have made it almost as important. With these advances, machine learning has begun to be used to interpret brain signals, in the hope of creating a BCI (Brain-Computer Interface). Researchers at Lund University conducted an experiment in which they tried to categorize brain signals using machine learning, distinguishing between three categories: objects, faces and landmarks. One of the major challenges of the project was that the computations took a very long time on an ordinary computer, around a week. The task of this project was to improve and speed up the execution of that code. The code was translated from MATLAB to Python, and the project made use of profiling, clusters, and an acceleration tool. With profiling one can locate the slow parts of code and make them faster; it is, simply put, an optimization tool. A cluster is a collection of computers that can be used to compute larger problems collectively, increasing computational speed. This project used a framework called Ray, which made it possible to run the code on a cluster owned by Ericsson. An acceleration tool called the Accelerator was also implemented, separately from the Ray implementation. The Accelerator uses only local processors to parallelize a problem, rather than several computers. Its greatest advantage is that it keeps track of what has and has not been computed and saves all results automatically, so that old results can be reused in new computations when old code is rerun.
Reusing old results avoids the computation time it would take to recompute what has already been computed. This project improved the computation speed to more than two hundred times faster than before. Both Ray and the Accelerator achieved improvements of over two hundred times, with the best results from the Accelerator at around two hundred and fifty times faster. It should be noted, however, that the best Accelerator results were obtained on a good server processor. A good server processor is a large investment, whereas a cluster service only charges for the time used, which can be cheaper in the short term; if the computing power is needed often, a server processor may be more economical in the long run. A two-hundred-fold improvement can have major consequences if such a speed-up carries over to BCI in general: brain signals could potentially be interpreted closer to real time, which could be used to control devices or electronics. The results of this project also showed that NumPy, a common computational library in Python, slowed the code down with the default settings it ships with. NumPy made code slower by using multiple processor threads, even in a multi-threaded environment where manual parallelization had already been done. NumPy turned out to be slower for both the multi-threaded and the single-threaded implementation, suggesting that NumPy can slow down code in general, something many are unaware of. After manually fixing the environment variables that NumPy ships with, the code was more than three times as fast as before.
8

Irani, Arya John. "Utilizing negative policy information to accelerate reinforcement learning." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53481.

Abstract:
A pilot study by Subramanian et al. on Markov decision problem task decomposition by humans revealed that participants break down tasks into both short-term subgoals with a defined end-condition (such as "go to food") and long-term considerations and invariants with no end-condition (such as "avoid predators"). In the context of Markov decision problems, behaviors having clear start and end conditions are well-modeled by an abstraction known as options, but no abstraction exists in the literature for continuous constraints imposed on the agent's behavior. We propose two representations to fill this gap: the state constraint (a set or predicate identifying states that the agent should avoid) and the state-action constraint (identifying state-action pairs that should not be taken). State-action constraints can be directly utilized by an agent, which must choose an action in each state, while state constraints require an approximation of the MDP’s state transition function to be used; however, it is important to support both representations, as certain constraints may be more easily expressed in terms of one as compared to the other, and users may conceive of rules in either form. Using domains inspired by classic video games, this dissertation demonstrates the thesis that explicitly modeling this negative policy information improves reinforcement learning performance by decreasing the amount of training needed to achieve a given level of performance. In particular, we will show that even the use of negative policy information captured from individuals with no background in artificial intelligence yields improved performance. We also demonstrate that the use of options and constraints together form a powerful combination: an option and constraint can be taken together to construct a constrained option, which terminates in any situation where the original option would violate a constraint. 
In this way, a naive option defined to perform well in a best-case scenario may still accelerate learning in domains where the best-case scenario is not guaranteed.
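A minimal sketch of how a state-action constraint can be consumed by a learner: tabular Q-learning on an invented corridor task, where a hypothetical human-supplied rule ("never step left") simply masks forbidden actions out of the agent's choices:

```python
import random

def q_learning(constraints, episodes=300, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a 1-D corridor: states 0..5, actions
    -1 (left) / +1 (right), reward 1.0 on reaching state 5.
    `constraints` is a set of forbidden (state, action) pairs that
    are masked out of the agent's choices.  Returns total steps."""
    rng = random.Random(0)
    q = {(s, a): 0.0 for s in range(6) for a in (-1, 1)}
    steps = 0
    for _ in range(episodes):
        s = 0
        while s != 5:
            allowed = [a for a in (-1, 1)
                       if 0 <= s + a <= 5 and (s, a) not in constraints]
            if rng.random() < eps:
                a = rng.choice(allowed)                    # explore
            else:
                a = max(allowed, key=lambda b: q[(s, b)])  # exploit
            s2 = s + a
            reward = 1.0 if s2 == 5 else 0.0
            future = 0.0 if s2 == 5 else max(q[(s2, b)] for b in (-1, 1))
            q[(s, a)] += alpha * (reward + gamma * future - q[(s, a)])
            s, steps = s2, steps + 1
    return steps

unconstrained = q_learning(constraints=set())
# Hypothetical negative policy information: "never step left".
constrained = q_learning(constraints={(s, -1) for s in range(6)})
```

Because the constraint prunes the useless half of the action space, the constrained agent walks straight to the goal from the first episode, while the unconstrained one must discover the same policy through exploration.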
9

Maia-Pinto, Renata Rodrigues, and Denise de Souza Fleith. "Learning acceleration for gifted students: Favorable and unfavorable arguments." Pontificia Universidad Católica del Perú, 2012. http://repositorio.pucp.edu.pe/index/handle/123456789/102530.

Abstract:
This paper analyzes acceleration in education as a practice for meeting the educational needs of gifted students, and points out favorable and unfavorable arguments on the use of this practice. Acceleration is an educational practice consisting of several teaching strategies designed to encourage academically gifted students and reduce their time spent in school. It promotes faster learning by matching the curriculum to the student's level of knowledge, interest and motivation. There are several arguments in favor of acceleration, such as the improvement of academic performance, self-esteem and students' social adjustment. However, educators are reluctant to implement this practice, arguing that students may be immature or lose part of the content of the regular curriculum.
10

Obeda, Larry. "Impact of Learning Acceleration Program on Students Academic Success." Thesis, Wingate University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10685692.

Abstract:

This study is a review of the Learning Acceleration Program and the impact it has on student academic success in the Rural School District (pseudonym). This mixed-methods study used qualitative and quantitative data analyses to identify the impact that the Learning Acceleration Program has on the district's overall attendance and graduation rates, and to provide an understanding of participants' perceptions of the program. Data were collected over a period of three academic school years on attendance and graduation rates for each year, together with surveys completed by participants with first-hand knowledge of the Learning Acceleration Program. The participants in this study were high school principals, one assistant principal, high school counselors, and Learning Acceleration Program personnel. The findings exhibited no statistically significant difference in attendance or graduation rates in the district. Furthermore, the findings from the survey highlighted the program's ability to meet students' needs on an individual basis, and provide future recommendations.

11

Walsh, Debra. "An analysis of the competencies that instructors need to teach using accelerated learning." Online version, 2002. http://www.uwstout.edu/lib/thesis/2002/2002walshd.pdf.

12

Jones, Matthew Cecil. "Accelerating Conceptual Design Analysis of Marine Vehicles through Deep Learning." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/89341.

Abstract:
Evaluation of the flow field imparted by a marine vehicle reveals the underlying efficiency and performance. However, the relationship between precise design features and their impact on the flow field is not well characterized. The goal of this work is first, to investigate the thermally-stratified near field of a self-propelled marine vehicle to identify the significance of propulsion and hull-form design decisions, and second, to develop a functional mapping between an arbitrary vehicle design and its associated flow field to accelerate the design analysis process. The unsteady Reynolds-Averaged Navier-Stokes equations are solved to compute near-field wake profiles, showing good agreement with experimental data and providing a balance between simulation fidelity and numerical cost, given the database of cases considered. Machine learning through convolutional networks is employed to discover the relationship between vehicle geometries and their associated flow fields with two distinct deep-learning networks. The first network directly maps explicitly-specified geometric design parameters to their corresponding flow fields. The second network considers the vehicle geometries themselves, as tensors of geometric volume fractions, to implicitly learn the underlying parameter space. Once trained, both networks effectively generate realistic flow fields, accelerating the design analysis from a process that takes days to one that takes a fraction of a second. The implicit-parameter network successfully learns the underlying parameter space for geometries within the scope of the training data, showing comparable performance to the explicit-parameter network. With additions to the size and variability of the training database, this network has the potential to abstractly generalize the design space for arbitrary geometric inputs, even those beyond the scope of the training data.
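The implicit-parameter input representation described above (the geometry itself supplied as a tensor of volume fractions) can be illustrated with a coarse rasteriser. The binary cell-centre approximation and the parabolic hull profile are invented for illustration:

```python
def volume_fraction_grid(radius_fn, length, height, nx, ny):
    """Rasterise an axisymmetric hull profile onto an nx-by-ny grid.
    Each cell holds 1.0 if its centre lies inside the body and 0.0
    otherwise: a binary approximation of the volume-fraction tensor
    an implicit-parameter network would take as input."""
    dx, dy = length / nx, height / ny
    grid = []
    for j in range(ny):
        y = (j + 0.5) * dy - height / 2      # cell-centre height
        row = []
        for i in range(nx):
            x = (i + 0.5) * dx               # cell-centre station
            row.append(1.0 if abs(y) <= radius_fn(x) else 0.0)
        grid.append(row)
    return grid

# Hypothetical parabolic hull: pointed ends, maximum radius 0.4 amidships.
hull = lambda x: 1.6 * x * (1.0 - x)
grid = volume_fraction_grid(hull, length=1.0, height=1.0, nx=8, ny=8)
occupied = sum(sum(row) for row in grid)
```

A true volume-fraction field would store partial coverage for boundary cells; the appeal of either form is that the network needs no hand-chosen design parameters, only the geometry tensor.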
Doctor of Philosophy
Evaluation of the flow field of a marine vehicle reveals the underlying performance; however, the exact relationship between design features and their impact on the flow field is not well established. The goal of this work is first, to investigate the flow surrounding a self-propelled marine vehicle to identify the significance of various design decisions, and second, to develop a functional relationship between an arbitrary vehicle design and its flow field, thereby accelerating the design analysis process. Near-field wake profiles are computed through simulation, showing good agreement with experimental data. Machine learning is employed to discover the relationship between vehicle geometries and their associated flow fields with two distinct approaches. The first approach directly maps explicitly specified geometric design parameters to their corresponding flow fields. The second approach considers the vehicle geometries themselves to implicitly learn the underlying relationships. Once trained, both approaches generate a realistic flow field corresponding to a user-provided vehicle geometry, accelerating the design analysis from a multi-day process to one that takes a fraction of a second. The implicit-parameter approach successfully learns from the underlying geometric features, showing comparable performance to the explicit-parameter approach. With a larger and more diverse training database, this network has the potential to abstractly learn the design space relationships for arbitrary marine vehicle geometries, even those beyond the scope of the training database.
13

Martins, Fabio Jessen Werneck de Almeida. "Methods for Acceleration of Learning Process of Reinforcement Learning Neuro-Fuzzy Hierarchical Politree Model." Pontifícia Universidade Católica do Rio de Janeiro, 2010. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=16421@1.

Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
FUNDAÇÃO DE APOIO À PESQUISA DO ESTADO DO RIO DE JANEIRO
In this work, methods were developed and evaluated in order to improve and accelerate the learning process of the Reinforcement Learning Neuro-Fuzzy Hierarchical Politree Model (RL-NFHP). This model can be employed to provide an agent with intelligence, making it autonomous through its capacity to reason (infer actions) and to learn, acquiring knowledge by interacting with the environment through the Reinforcement Learning process. The RL-NFHP model has the following features: automatic learning of the model structure; self-adjustment of the parameters associated with that structure; the ability to learn the action to be taken when the agent is in a particular state of the environment; the ability to handle a larger number of inputs than traditional neuro-fuzzy systems; and generation of linguistically interpretable hierarchical rules. With the aim of improving and accelerating the learning process, six action-selection policies were implemented, one of them an innovation of this work (Q-DC-roulette); the early stopping method was implemented to automatically determine the end of training; a cumulative eligibility trace was developed; a pruning method was created to remove unnecessary cells from the structure; and the original computer code was rewritten. The modified RL-NFHP model was evaluated in three applications: the simulated Mountain Car benchmark, well known in the area of autonomous agents; a robotic simulation based on the Khepera robot; and an application on a real NXT robot. The experiments show that the modified model fits control-system and robotics problems well, with good generalization. Compared with the original model, the modified RL-NFHP learned faster and produced smaller trained models.
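The thesis's RL-NFHP model and its Q-DC-roulette policy are not publicly documented, but the cumulative eligibility trace mentioned above is a standard device for accelerating reinforcement learning. A minimal sketch, assuming an invented corridor task and plain tabular Q(λ) with accumulating traces (all names and parameters are ours, not the thesis's):

```python
import random

def q_lambda_corridor(n_states=6, episodes=300, alpha=0.2, gamma=0.95,
                      lam=0.8, eps=0.1, seed=0):
    """Tabular Q(lambda) with cumulative (accumulating) eligibility traces.
    Toy corridor: start in state 0; reward 1 on reaching the last state."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]      # actions: 0 = left, 1 = right
    for _ in range(episodes):
        E = [[0.0, 0.0] for _ in range(n_states)]  # eligibility traces
        s = 0
        while s < n_states - 1:
            if rng.random() < eps:                 # epsilon-greedy selection
                a = rng.randrange(2)
            else:                                  # greedy, random tie-breaking
                m = max(Q[s])
                a = rng.choice([i for i in (0, 1) if Q[s][i] == m])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            delta = r + gamma * max(Q[s2]) - Q[s][a]
            E[s][a] += 1.0                         # cumulative: accumulate, don't replace
            for st in range(n_states):
                for ac in (0, 1):
                    Q[st][ac] += alpha * delta * E[st][ac]
                    E[st][ac] *= gamma * lam       # decay every trace each step
            s = s2
    return Q

Q = q_lambda_corridor()
```

The acceleration effect is that a single terminal reward updates every recently visited state-action pair in one sweep, instead of propagating backwards one state per episode.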
APA, Harvard, Vancouver, ISO, and other styles
14

Ida, Yasutoshi. "Algorithms for Accelerating Machine Learning with Wide and Deep Models." Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263771.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Bhave, Sampada Vasant. "Novel dictionary learning algorithm for accelerating multi-dimensional MRI applications." Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/2182.

Full text
Abstract:
The clinical utility of multi-dimensional MRI applications like multi-parameter mapping and 3D dynamic lung imaging is limited by long acquisition times. Quantification of multiple tissue MRI parameters has been shown to be useful for early detection and diagnosis of various neurological diseases and psychiatric disorders. They also provide useful information about disease progression and treatment efficacy. Dynamic lung imaging enables the diagnosis of abnormalities in respiratory mechanics in dyspnea and of regional lung function in pulmonary diseases like chronic obstructive pulmonary disease (COPD) and asthma. However, the need to acquire multiple contrast-weighted images, as in multi-parameter mapping, or multiple time points, as in pulmonary imaging, makes these techniques less applicable in the clinical setting, as scan time increases considerably. To achieve reasonable scan times, there are often tradeoffs between SNR and resolution. Since most MRI images are sparse in a known transform domain, they can be recovered from fewer samples. Several compressed sensing schemes have been proposed that exploit the sparsity of the signal in pre-determined transform domains (e.g., the Fourier transform) or exploit the low-rank character of the data. However, these methods perform sub-optimally in the presence of inter-frame motion, since the pre-determined dictionary does not account for the motion and the rank of the data is considerably higher. They rely on a two-step approach: first estimating the dictionary from low-resolution data, and then, using these basis functions, estimating the coefficients by fitting the measured data to the signal model. The main focus of the thesis is accelerating the multi-parameter mapping and 3D dynamic lung imaging applications to achieve the desired volume coverage and spatio-temporal resolution.
We propose a novel dictionary learning framework called the Blind compressed sensing (BCS) scheme to recover the underlying data from undersampled measurements, in which the underlying signal is represented as a sparse linear combination of basis functions from a learned dictionary. We also provide an efficient implementation using a variable splitting technique that reduces the computational complexity by up to 15-fold. In both multi-parameter mapping and 3D dynamic lung imaging, comparisons of the BCS scheme with other schemes indicate superior performance, as it provides a richer representation of the data. Reconstructions from the BCS scheme yield highly accurate parameter maps in parameter imaging and diagnostically relevant image series for characterizing respiratory mechanics in pulmonary imaging.
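The BCS implementation itself is not reproduced here; as a hedged illustration of the sparse-coding step at the core of such reconstructions (representing data as a sparse combination of dictionary atoms), here is plain-Python ISTA for the Lasso. The dictionary and signal are invented for illustration:

```python
def ista(D, y, lam, steps=500):
    """Iterative soft-thresholding (ISTA) for min_x 0.5*||D x - y||^2 + lam*||x||_1.
    D is given as a list of rows; the columns of D are the dictionary atoms."""
    soft = lambda v, t: max(v - t, 0.0) - max(-v - t, 0.0)
    cols = list(zip(*D))
    L = sum(v * v for row in D for v in row)   # crude Lipschitz bound >= ||D||_2^2
    x = [0.0] * len(cols)
    for _ in range(steps):
        resid = [sum(r * xi for r, xi in zip(row, x)) - yi
                 for row, yi in zip(D, y)]     # D x - y
        grad = [sum(c * rv for c, rv in zip(col, resid)) for col in cols]
        x = [soft(xi - gi / L, lam / L) for xi, gi in zip(x, grad)]
    return x

# toy dictionary with two atoms; y is generated by the first atom alone
D = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
x = ista(D, y=[2.0, 0.0, 2.0], lam=0.1)   # expect x ~ [1.95, 0.0]
```

Dictionary learning alternates such a sparse-coding step with an update of D itself; BCS makes the dictionary estimation part of the same optimization rather than a separate pre-processing stage.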
APA, Harvard, Vancouver, ISO, and other styles
16

McDonald, Terry E. "A comprehensive literature review and critique of the identification of methods and practical applications of accelerated learning strategies." Online version, 2001. http://www.uwstout.edu/lib/thesis/2001/2001mcdonaldt.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Meloy, Faye A. Haslam Elizabeth L. "Managing the maelstrom: self-regulated learning, academic outcomes, and the student learning experience in a second-degree accelerated baccalaureate nursing program /." Philadelphia, Pa. : Drexel University, 2009. http://hdl.handle.net/1860/3118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Amar, Yehia. "Accelerating process development of complex chemical reactions." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/288220.

Full text
Abstract:
Process development of new complex reactions in the pharmaceutical and fine chemicals industries is challenging and expensive. The field is beginning to see a bridging between fundamental first-principles investigations and the utilisation of data-driven statistical methods, such as machine learning. Nonetheless, process development and optimisation in these industries is mostly driven by trial-and-error and experience. Approaches that move beyond these are limited to the well-developed optimisation of continuous variables, and often do not yield physical insights. This thesis describes several new methods developed to address research questions related to this challenge. First, we investigated whether utilising physical knowledge could aid statistics-guided self-optimisation of a C-H activation reaction, in which the optimisation variables were continuous. We then considered algorithmic treatment of the more challenging discrete variables, focussing on solvents. We parametrised a library of 459 solvents with physically meaningful molecular descriptors. Our case study was a homogeneous Rh-catalysed asymmetric hydrogenation to produce a chiral γ-lactam, with conversion and diastereoselectivity as objectives. We adapted a state-of-the-art multi-objective machine learning algorithm, based on Gaussian processes, to utilise the descriptors as inputs, and to create a surrogate model for each objective. The aim of the algorithm was to determine a set of Pareto solutions with a minimum experimental budget, whilst simultaneously addressing model uncertainty. We found that descriptors are a valuable tool for Design of Experiments, and can produce predictive and interpretable surrogate models. Subsequently, a physical investigation of this reaction led to the discovery of an efficient catalyst-ligand system, which we studied by operando NMR, and identified a parametrised kinetic model. 
Turning the focus then to ligands for asymmetric hydrogenation, we calculated versatile empirical descriptors based on the similarity of atomic environments, for 102 chiral ligands, to predict diastereoselectivity. Whilst the model fit was good, it failed to accurately predict the performance of an unseen ligand family, due to analogue bias. Physical knowledge has then guided the selection of symmetrised physico-chemical descriptors. This produced more accurate predictive models for diastereoselectivity, including for an unseen ligand family. The contribution of this thesis is a development of novel and effective workflows and methodologies for process development. These open the door for process chemists to save time and resources, freeing them up from routine work, to focus instead on creatively designing new chemistry for future real-world applications.
APA, Harvard, Vancouver, ISO, and other styles
19

Xu, Yi. "Accelerating convex optimization in machine learning by leveraging functional growth conditions." Diss., University of Iowa, 2019. https://ir.uiowa.edu/etd/7048.

Full text
Abstract:
In recent years, unprecedented growth in the scale and dimensionality of data has raised serious computational challenges for traditional optimization algorithms, making it very important to develop efficient and effective optimization algorithms for the many problems arising in machine learning. Many traditional algorithms (e.g., the gradient descent method) are black-box algorithms: simple to implement, but blind to the underlying geometrical properties of the objective function. A recent trend in accelerating these black-box algorithms is to leverage geometrical properties of the objective function such as strong convexity. However, most existing methods rely heavily on knowledge of strong convexity, which makes them inapplicable to problems that lack strong convexity or for which it is unknown. To bridge the gap between traditional black-box algorithms that ignore the problem's geometry and accelerated algorithms that require strong convexity, can we develop algorithms that adapt to the objective function's underlying geometrical property? To answer this question, this dissertation focuses on convex optimization problems and explores an error bound condition that characterizes the growth of the objective function around a global minimum. Under this error bound condition, we develop algorithms that (1) adapt to the problem's geometrical property to enjoy faster convergence in stochastic optimization; (2) leverage the problem's structured regularizer to further improve convergence speed; (3) address both deterministic and stochastic optimization problems with an explicit max-structured loss; and (4) leverage the objective function's smoothness to improve the convergence rate for stochastic optimization. We first considered stochastic optimization problems with a general stochastic loss. 
We proposed two accelerated stochastic subgradient (ASSG) methods with theoretical guarantees, obtained by iteratively solving the original problem approximately in a local region around a historical solution, with the size of the region gradually decreasing as the solution approaches the optimal set. Second, we developed a new theory of the alternating direction method of multipliers (ADMM) with a new adaptive scheme for the penalty parameter, covering both deterministic and stochastic optimization problems with structured regularizers. Under a generic local error bound (LEB) condition, the proposed deterministic and stochastic ADMM enjoy improved iteration complexities of $\widetilde O(1/\epsilon^{1-\theta})$ and $\widetilde O(1/\epsilon^{2(1-\theta)})$, respectively. Then, we considered a family of optimization problems with an explicit max-structural loss. We developed a novel homotopy smoothing (HOPS) algorithm that employs Nesterov's smoothing technique and the accelerated gradient method, and runs in stages. Under the same LEB condition, it improves the iteration complexity from $O(1/\epsilon)$ to $\widetilde O(1/\epsilon^{1-\theta})$, up to a logarithmic factor, with $\theta\in(0,1]$. Next, we proposed new restarted stochastic primal-dual (RSPD) algorithms for solving the problem with a stochastic explicit max-structural loss. We obtained an iteration complexity better than $O(1/\epsilon^2)$ without the bilinear structure assumption, a major obstacle to faster convergence for the considered problem. Finally, we considered finite-sum optimization problems with a smooth loss and a simple regularizer. We proposed novel techniques that automatically search for the unknown growth parameter on the fly during optimization while maintaining almost the same convergence rate as an oracle setting in which the parameter is given. 
Under the Hölderian error bound (HEB) condition with $\theta\in(0,1/2)$, the proposed algorithm also enjoys faster intermediate convergence rates than its standard counterparts that use only the smoothness assumption.
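The actual ASSG method involves problem-dependent constants and shrinking search regions; the following is only a stripped-down sketch of the restart idea it exemplifies — run a fixed-step subgradient stage, restart from the stage average, halve the step — applied to the sharp toy objective f(x) = |x - 3| (all parameters are illustrative, not the thesis's):

```python
def restarted_subgradient(subgrad, x0, eta0=1.0, stages=25, iters=40):
    """Stage-wise restarted subgradient method: each stage runs a fixed
    step size from the previous stage's averaged iterate, then halves it.
    Under a sharp growth (error bound) condition, the error shrinks
    geometrically with the number of stages, unlike the plain method."""
    x = x0
    eta = eta0
    for _ in range(stages):
        z, total = x, 0.0
        for _ in range(iters):
            z -= eta * subgrad(z)   # plain subgradient step
            total += z
        x = total / iters           # restart from the stage average
        eta *= 0.5                  # geometrically decrease the step
    return x

# f(x) = |x - 3| is sharp around its minimum: a fixed-step subgradient
# method stalls at distance ~eta, but restarting with halved steps homes in
x = restarted_subgradient(lambda v: (v > 3) - (v < 3), x0=10.0)
```

The point of the sketch: within each stage the iterates oscillate in a band of width proportional to the step, so averaging and halving the step contracts the error by a constant factor per stage.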
APA, Harvard, Vancouver, ISO, and other styles
20

Biswas, Rajarshi. "Benchmarking and Accelerating TensorFlow-based Deep Learning on Modern HPC Systems." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1531827968620294.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Morozs, Nils. "Accelerating reinforcement learning for dynamic spectrum access in cognitive wireless networks." Thesis, University of York, 2015. http://etheses.whiterose.ac.uk/11523/.

Full text
Abstract:
This thesis studies the applications of distributed reinforcement learning (RL) based machine intelligence to dynamic spectrum access (DSA) in future cognitive wireless networks. In particular, this work focuses on ways of accelerating distributed RL based DSA algorithms in order to improve their adaptability in terms of the initial and steady-state performance, and the quality of service (QoS) convergence behaviour. The performance of the DSA schemes proposed in this thesis is empirically evaluated using large-scale system-level simulations of a temporary event scenario which involves a cognitive small cell network installed in a densely populated stadium, and in some cases a base station on an aerial platform and a number of local primary LTE base stations, all sharing the same spectrum. Some of the algorithms are also theoretically evaluated using a Bayesian network based probabilistic convergence analysis method proposed by the author. The thesis presents novel distributed RL based DSA algorithms that employ a Win-or-Learn-Fast (WoLF) variable learning rate and an adaptation of the heuristically accelerated RL (HARL) framework in order to significantly improve the initial performance and the convergence speed of classical RL algorithms and, thus, increase their adaptability in challenging DSA environments. Furthermore, a distributed case-based RL approach to DSA is proposed. It combines RL and case-based reasoning to increase the robustness and adaptability of distributed RL based DSA schemes in dynamically changing wireless environments.
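The thesis's DSA algorithms and HARL heuristics are specific to its network simulations; as a hedged sketch of the underlying Win-or-Learn-Fast rule it builds on (update the policy cautiously when winning, fast when losing), here is stateless WoLF policy hill-climbing on an invented two-channel toy problem. All numbers are illustrative:

```python
import random

def wolf_bandit(p_success=(0.2, 0.8), steps=3000, alpha=0.1,
                delta_win=0.01, delta_lose=0.04, eps=0.1, seed=1):
    """Stateless WoLF policy hill-climbing: two 'channels' with different
    transmission success probabilities. The policy learning rate is small
    when winning and large when losing (delta_lose > delta_win)."""
    rng = random.Random(seed)
    Q = [0.0, 0.0]          # action-value estimates
    pi = [0.5, 0.5]         # current stochastic policy
    avg = [0.5, 0.5]        # running average policy
    for t in range(1, steps + 1):
        if rng.random() < eps:                       # keep probing both channels
            a = rng.randrange(2)
        else:
            a = 0 if rng.random() < pi[0] else 1
        r = 1.0 if rng.random() < p_success[a] else 0.0
        Q[a] += alpha * (r - Q[a])
        for i in (0, 1):
            avg[i] += (pi[i] - avg[i]) / t           # update average policy
        winning = (sum(p * q for p, q in zip(pi, Q))
                   > sum(p * q for p, q in zip(avg, Q)))
        delta = delta_win if winning else delta_lose
        best = 0 if Q[0] >= Q[1] else 1
        pi[best] = min(1.0, pi[best] + delta)        # hill-climb toward best action
        pi[1 - best] = 1.0 - pi[best]
    return pi, Q

pi, Q = wolf_bandit()
```

The variable learning rate is what matters here: losing agents move quickly toward better behaviour, which is one source of the faster initial convergence the abstract reports.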
APA, Harvard, Vancouver, ISO, and other styles
22

Kodi, Ramanah Doogesh. "Bayesian statistical inference and deep learning for primordial cosmology and cosmic acceleration." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS169.

Full text
Abstract:
Cette thèse a pour vocation le développement et l’application de nouvelles techniques d’inférence statistique bayésienne et d’apprentissage profond pour relever les défis statistiques imposés par les gros volumes de données complexes des missions du fond diffus cosmologique (CMB) ou des relevés profonds de galaxies de la prochaine génération, dans le but d'optimiser l’exploitation des données scientifiques afin d’améliorer, à terme, notre compréhension de l’Univers. La première partie de cette thèse concerne l'extraction des modes E et B du signal de polarisation du CMB à partir des données. Nous avons développé une méthode hiérarchique à haute performance, nommée algorithme du dual messenger, pour la reconstruction du champ de spin sur la sphère et nous avons démontré les capacités de cet algorithme à reconstruire des cartes E et B pures, tout en tenant compte des modèles de bruit réalistes. La seconde partie porte sur le développement d’un cadre d'inférence bayésienne pour contraindre les paramètres cosmologiques en s’appuyant sur une nouvelle implémentation du test géométrique d'Alcock-Paczyński et nous avons présenté nos contraintes cosmologiques sur la densité de matière et l'équation d'état de l'énergie sombre. Etant donné que le contrôle des effets systématiques est un facteur crucial, nous avons également présenté une fonction de vraisemblance robuste, qui résiste aux contaminations inconnues liées aux avant-plans. Finalement, dans le but de construire des émulateurs de dynamiques complexes dans notre modèle, nous avons conçu un nouveau réseau de neurones qui apprend à peindre des distributions de halo sur des champs approximatifs de matière noire en 3D
This doctoral research develops and applies novel Bayesian statistical inference and deep learning techniques to meet the statistical challenges of massive and complex data sets from next-generation cosmic microwave background (CMB) missions and galaxy surveys, and to optimize their scientific returns, ultimately improving our understanding of the Universe. The first theme deals with the extraction of the E and B modes of the CMB polarization signal from the data. We have developed a high-performance hierarchical method, known as the dual messenger algorithm, for spin field reconstruction on the sphere and demonstrated its capabilities in reconstructing pure E and B maps, while accounting for complex and realistic noise models. The second theme lies in the development of various aspects of Bayesian forward modelling machinery for optimal exploitation of state-of-the-art galaxy redshift surveys. We have developed a large-scale Bayesian inference framework to constrain cosmological parameters via a novel implementation of the Alcock-Paczyński test and showcased our cosmological constraints on the matter density and dark energy equation of state. With the control of systematic effects being a crucial limiting factor for modern galaxy redshift surveys, we also presented an augmented likelihood which is robust to unknown foreground and target contaminations. Finally, with a view to building fast complex dynamics emulators in our above Bayesian hierarchical model, we have designed a novel halo painting network that learns to map approximate 3D dark matter fields to realistic halo distributions.
APA, Harvard, Vancouver, ISO, and other styles
23

Javed, Muhammad Haseeb. "Characterizing and Accelerating Deep Learning and Stream Processing Workloads using Roofline Trajectories." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1574445196024129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Williams, Richard Larry. "The impact of accelerated versus traditional learning with a practical test in advanced culinary skills at Fox Valley Technical College." Online version, 2008. http://www.uwstout.edu/lib/thesis/2008/2008williamsr.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Mukherjee, Rajaditya. "Accelerating Data-driven Simulations for Deformable Bodies and Fluids." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1523634514740489.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Van, Mai Vien. "Large-Scale Optimization With Machine Learning Applications." Licentiate thesis, KTH, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263147.

Full text
Abstract:
This thesis aims at developing efficient algorithms for solving some fundamental engineering problems in data science and machine learning. We investigate a variety of acceleration techniques for improving the convergence times of optimization algorithms. First, we investigate how problem structure can be exploited to accelerate the solution of highly structured problems such as generalized eigenvalue problems and elastic net regression. We then consider Anderson acceleration, a generic and parameter-free extrapolation scheme, and show how it can be adapted to accelerate practical convergence of proximal gradient methods for a broad class of non-smooth problems. For all the methods developed in this thesis, we design novel algorithms, perform mathematical analysis of convergence rates, and conduct practical experiments on real-world data sets.
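Anderson acceleration admits a particularly compact special case with memory m = 1, which is enough to convey the extrapolation idea the abstract refers to. A minimal sketch on the scalar fixed-point problem x = cos(x) (the test function is ours, not from the thesis):

```python
import math

def anderson1(g, x0, iters=20):
    """Anderson acceleration with memory m = 1 (secant-style mixing)
    for the fixed-point iteration x = g(x). Each step combines the two
    latest evaluations of g using residuals f(x) = g(x) - x."""
    x_prev, g_prev = x0, g(x0)
    x, gx = g_prev, g(g_prev)
    for _ in range(iters):
        f, f_prev = gx - x, g_prev - x_prev
        denom = f - f_prev
        if abs(denom) < 1e-15:        # residuals identical: converged
            break
        gamma = f / denom             # least-squares mixing weight (scalar case)
        x_new = gx - gamma * (gx - g_prev)
        x_prev, g_prev = x, gx
        x, gx = x_new, g(x_new)
    return x

root = anderson1(math.cos, 1.0)       # solves x = cos(x), near 0.739085
```

For comparison, the plain Picard iteration x ← cos(x) contracts at rate roughly 0.67 and needs dozens of iterations for 1e-8 accuracy; the mixed iterate typically gets there in well under ten, which is the flavour of speedup such extrapolation schemes aim for.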


APA, Harvard, Vancouver, ISO, and other styles
27

Koufetta, Christiana. "Teaching thinking in schools : an investigation into the teaching of CASE and its contribution to student learning." Thesis, University of Sheffield, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Arale, Brännvall Marian. "Accelerating longitudinal spinfluctuation theory for iron at high temperature using a machine learning method." Thesis, Linköpings universitet, Teoretisk Fysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170314.

Full text
Abstract:
In the development of materials, the understanding of their properties is crucial. For magnetic materials, magnetism is an apparent property that needs to be accounted for. There are multiple factors explaining the phenomenon of magnetism, one being the effect of vibrations of the atoms on longitudinal spin fluctuations. This effect can be investigated by simulations, using density functional theory, and calculating energy landscapes. Through such simulations, the energy landscapes have been found to depend on the magnetic background and the positions of the atoms. However, when simulating a supercell of many atoms, calculating energy landscapes for all atoms consumes many hours on a supercomputer. In this thesis, the possibility of using machine learning models to accelerate the approximation of energy landscapes is investigated. The material under investigation is body-centered cubic iron in the paramagnetic state at 1043 K. Machine learning enables statistical predictions to be made on new data based on patterns found in a previous set of data. Kernel ridge regression is used as the machine learning method. An important issue when training a machine learning model is the representation of the data in the so-called descriptor (feature vector representation) or, more specifically in this case, how the environment of an atom in a supercell is accounted for and represented properly. Four different descriptors are developed and compared to investigate which one yields the best result and why. Apart from comparing the descriptors, the results when using machine learning models are compared to when using other methods to approximate the energy landscapes. The machine learning models are also tested in a combined atomistic spin dynamics and ab initio molecular dynamics simulation (ASD-AIMD) where they were used to approximate energy landscapes and, from that, magnetic moment magnitudes at 1043 K. 
The results of these simulations are compared to the results from two other cases: one where the magnetic moment magnitudes are set to a constant value and one where they are set to their magnitudes at 0 K. From these investigations it is found that using machine learning methods to approximate the energy landscapes does, to a large degree, decrease the errors compared to the other approximation methods investigated. Some weaknesses of the respective descriptors were detected and if, in future work, these are accounted for, the errors have the potential of being lowered further.
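Kernel ridge regression, the method named in the abstract, reduces to solving (K + λI)α = y on the training set and predicting with a kernel expansion. A one-dimensional sketch with a Gaussian kernel and a toy target standing in for the energy landscape (the descriptor, kernel width, and ridge values are all illustrative, not the thesis's):

```python
import math

def rbf(a, b, gamma=10.0):
    """Gaussian (RBF) kernel between two scalar descriptors."""
    return math.exp(-gamma * (a - b) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(xs, ys, lam=1e-6, gamma=10.0):
    """Kernel ridge regression: solve (K + lam*I) alpha = y, then
    predict with f(x) = sum_i alpha_i * k(x, x_i)."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], gamma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, ys)
    return lambda x: sum(a * rbf(x, xi, gamma) for a, xi in zip(alpha, xs))

xs = [i / 10 for i in range(11)]              # 11 training "descriptors" in [0, 1]
energy = krr_fit(xs, [x * x for x in xs])     # toy energy surface E(x) = x^2
```

In the thesis's setting the scalar inputs would be replaced by the atomic-environment descriptor vectors, and the speedup comes from evaluating this cheap kernel expansion instead of a new density functional theory calculation.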
APA, Harvard, Vancouver, ISO, and other styles
29

Mills, Alessaundra D. "Strategic school solutions| A capacity building framework for leaders accelerating 21st century teaching and learning." Thesis, Pepperdine University, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10182306.

Full text
Abstract:

This grounded theory study sought to create a viable framework that may help school leaders accelerate the expansion of an authentic 21st century instructional model. The U.S. economy is now more dependent on knowledge work than manufacturing. Yet, many for-profit, non-profit, and public sectors perceive schools as not adequately preparing students for 21st century careers and colleges. However, customary principal-led change is challenging. Leaders face several complex organizational challenges, including a modern-day duty and role expansion that limits time, and the inherent difficulty of human-behavior and organizational change, observed in the fact that schools have deeply entrenched norms: an estimated 150 years of traditional lecture-dominant instruction.

As such, a singular research question informed this study: What leadership competencies do 21st century change-savvy school administrators perceive as critical to accelerate successful change to a 21st century instructional model? Using a purposive sampling method, change-savvy school leaders (n = 22) with lived experience were interviewed covering germane topics such as what worked for them, professional development, and change management.

Utilizing Charmaz’s (2014) constructed grounded theory coding process and data analysis technique, the results include two key findings: five leadership competencies (discerning, authentic, facilitative, collaborative, and communicative) and the Authentic 21st Century Leadership Framework, which integrates the respective competencies to provide a user guide for the contemporary time-burdened school leader. Ultimately, the study concluded the following: (a) the leadership competencies are essential; (b) the framework provides a supportive guide to accelerate expansion of the 21st century instructional model; (c) 21st century leadership is chiefly collaborative; (d) leader created and sustained growth culture is critical; and, lastly (e) as the 21st century instructional model magnifies in utilization across schools, opportunities for all students improve.

APA, Harvard, Vancouver, ISO, and other styles
30

Dantas, Cássio Fraga. "Accelerating sparse inverse problems using structured approximations." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S065.

Full text
Abstract:
En raison de la vertigineuse croissance des données disponibles, la complexité computationnelle des algorithmes traitant les problèmes inverses parcimonieux peut vite devenir un goulot d'étranglement. Dans cette thèse, nous explorons deux stratégies pour accélérer de tels algorithmes. D'abord, nous étudions l'utilisation de dictionnaires structurés rapides à manipuler. Une famille de dictionnaires écrits comme une somme de produits Kronecker est proposée. Ensuite, nous développons des tests d'élagage sûrs, capables d'identifier et éliminer des atomes inutiles (colonnes de la matrice dictionnaire ne correspondant pas au support de la solution), malgré l'utilisation de dictionnaires approchés
As the quantity and size of available data grow, existing algorithms for solving sparse inverse problems can become computationally intractable. In this work, we explore two main strategies for accelerating such algorithms. First, we study the use of structured dictionaries which are fast to operate with. A particular family of dictionaries, written as a sum of Kronecker products, is proposed. Then, we develop safe screening tests, which can reliably identify and discard useless atoms (columns of the dictionary matrix which do not correspond to the solution support), despite manipulating approximate dictionaries.
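The thesis's contribution is screening that remains safe when the dictionary is only approximate; as a sketch of the classical exact-dictionary sphere test it builds on, here is a basic safe test for the Lasso: if the dual optimum is known to lie in a ball B(c, r), any atom with |⟨d_j, c⟩| + r·‖d_j‖ < 1 is provably inactive. The toy atoms and signal below are invented, and the test assumes λ < λ_max:

```python
def sphere_screen(atoms, y, lam):
    """Basic safe sphere test for the Lasso min_x 0.5*||y - D x||^2 + lam*||x||_1,
    with the dictionary given as a list of atom (column) vectors.
    The dual optimum lies in the ball centred at y/lam with radius
    ||y|| * (1/lam - 1/lam_max); any atom with |<d_j, c>| + r*||d_j|| < 1
    cannot be in the support and can be discarded before solving."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    norm = lambda u: dot(u, u) ** 0.5
    lam_max = max(abs(dot(d, y)) for d in atoms)   # smallest lam giving x = 0
    c = [v / lam for v in y]                       # ball centre
    r = norm(y) * (1.0 / lam - 1.0 / lam_max)      # ball radius (needs lam < lam_max)
    return [j for j, d in enumerate(atoms)
            if abs(dot(d, c)) + r * norm(d) < 1.0]

atoms = [[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]]
discarded = sphere_screen(atoms, y=[1.0, 0.0], lam=0.9)   # expect [1, 2]
```

Discarding atoms this way shrinks the effective dictionary before the solver runs, which is where the acceleration comes from; the thesis's stable variants keep this guarantee even when D itself is replaced by a fast structured approximation.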
APA, Harvard, Vancouver, ISO, and other styles
31

TAKEUCHI, Yoshinori, Hiroaki KUDO, Noboru OHNISHI, Tetsuya MATSUMOTO, and Ukrit WATCHAREERUETAI. "Acceleration of Genetic Programming by Hierarchical Structure Learning: A Case Study on Image Recognition Program Synthesis." Institute of Electronics, Information and Communication Engineers, 2009. http://hdl.handle.net/2237/15003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

ALMEIDA, RAPHAEL CELESTINO DE. "THE FATE OF THE WEAKEST: SUBORDINATE INCLUSION: A STUDY OF STUDENTS PLACED IN LEARNING ACCELERATION CLASSES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=25997@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
Esta pesquisa investiga o destino de alunos do Ensino Médio em defasagem idade-série em uma escola da rede estadual de educação do Rio de Janeiro, analisando a experiência de inserção em duas classes de aceleração da aprendizagem do Programa Autonomia da Fundação Roberto Marinho. A partir da conjugação de observações com entrevistas, busca, na fronteira entre etnografia e educação, aproximar-se da perspectiva destes alunos para tentar construir um conhecimento sob novos ângulos. Investiga o sentido do que pensam estes alunos sobre seu atraso escolar, sobre a transição para o programa de aceleração, sobre a experiência de estudar nestas classes e sobre os ganhos simbólicos e concretos percebidos por eles. Identifica nos problemas de comportamento dos alunos a principal explicação escolar para o fracasso, explicação internalizada pelos próprios estudantes. Caracteriza a ênfase dada no Programa Autonomia à socialização e adequação dos comportamentos, e não à aprendizagem, o que acaba por assegurar uma inclusão subalterna no sistema escolar.
This study investigates the fate of high school students with an age-grade gap at a state school in Rio de Janeiro, analyzing the placement experience in two learning acceleration classes in the Roberto Marinho Foundation Autonomy Program. By combining observations with interviews, it works on the boundary between ethnography and education to approach these students' perspective and build knowledge from new angles. It investigates what these students think about their educational delay, the transition to the accelerated program, the experience of studying in these classes, and the symbolic and concrete gains they perceive. It identifies students' behavior problems as the school's main explanation for failure, an explanation internalized by the students themselves. It characterizes the acceleration classes as a space for socialization and behavioral adjustment at the expense of learning, which ultimately ensures a subordinate inclusion in the school system.
APA, Harvard, Vancouver, ISO, and other styles
33

Rose, Linda Dean. "Teaching and learning in community college: a close-up view of student success in accelerated developmental writing classes /." Diss., Restricted to subscribing institutions, 2007. http://proquest.umi.com/pqdweb?did=1459901941&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Abdelouahab, Kamel. "Reconfigurable hardware acceleration of CNNs on FPGA-based smart cameras." Thesis, Université Clermont Auvergne‎ (2017-2020), 2018. http://www.theses.fr/2018CLFAC042/document.

Full text
Abstract:
Les Réseaux de Neurones Convolutifs profonds (CNNs) ont connu un large succès au cours de la dernière décennie, devenant un standard de la vision par ordinateur. Ce succès s’est fait au détriment d’un large coût de calcul, où le déploiement des CNNs reste une tâche ardue surtout sous des contraintes de temps réel.Afin de rendre ce déploiement possible, la littérature exploite le parallélisme important de ces algorithmes, ce qui nécessite l’utilisation de plate-formes matérielles dédiées. Dans les environnements soumis à des contraintes de consommations énergétiques, tels que les nœuds des caméras intelligentes, les cœurs de traitement à base de FPGAs sont reconnus comme des solutions de choix pour accélérer les applications de vision par ordinateur. Ceci est d’autant plus vrai pour les CNNs, où les traitements se font naturellement sur un flot de données, rendant les architectures matérielles à base de FPGA d’autant plus pertinentes. Dans ce contexte, cette thèse aborde les problématiques liées à l’implémentation des CNNs sur FPGAs. En particulier, ces travaux visent à améliorer l’efficacité des implantations grâce à deux principales stratégies d’optimisation; la première explore le modèle et les paramètres des CNNs, tandis que la seconde se concentre sur les architectures matérielles adaptées au FPGA
Deep Convolutional Neural Networks (CNNs) have become a de-facto standard in computer vision. This success came at the price of a high computational cost, making the implementation of CNNs under real-time constraints a challenging task. To address this challenge, the literature exploits the large amount of parallelism exhibited by these algorithms, motivating the use of dedicated hardware platforms. In power-constrained environments, such as smart camera nodes, FPGA-based processing cores are known to be adequate solutions for accelerating computer vision applications. This is especially true for CNN workloads, which have a streaming nature that is well suited to reconfigurable hardware architectures. In this context, this thesis addresses the problem of mapping CNNs onto FPGAs. In particular, it aims at improving the efficiency of CNN implementations through two main optimization strategies: the first one focuses on the CNN model and parameters, while the second one considers the hardware architecture and the fine-grain building blocks.
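The parallelism the abstract refers to can be seen in the plain loop nest of a direct convolution. The sketch below is only an illustration in Python, not the thesis's FPGA implementation; the comments mark the loops that a dataflow design would typically unroll or pipeline:

```python
import numpy as np

def conv2d_direct(x, w):
    """Direct convolution with 'valid' padding.

    x: input feature maps, shape (C_in, H, W)
    w: filters, shape (C_out, C_in, K, K)

    Every (output channel, output pixel) pair is computed independently,
    which is the kind of parallelism FPGA dataflow designs exploit.
    """
    c_out, c_in, k, _ = w.shape
    _, h, wd = x.shape
    out = np.zeros((c_out, h - k + 1, wd - k + 1))
    for co in range(c_out):            # parallelizable across compute units
        for i in range(h - k + 1):     # pipelined over the pixel stream
            for j in range(wd - k + 1):
                out[co, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[co])
    return out
```

A 2x2 all-ones filter over a 3x3 all-ones single-channel input yields a 2x2 map of 4s, which is a quick sanity check of the loop bounds.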
APA, Harvard, Vancouver, ISO, and other styles
35

Dahlin, Johan. "Accelerating Monte Carlo methods for Bayesian inference in dynamical models." Doctoral thesis, Linköpings universitet, Reglerteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-125992.

Full text
Abstract:
Making decisions and predictions from noisy observations are two important and challenging problems in many areas of society. Some examples of applications are recommendation systems for online shopping and streaming services, connecting genes with certain diseases and modelling climate change. In this thesis, we make use of Bayesian statistics to construct probabilistic models given prior information and historical data, which can be used for decision support and predictions. The main obstacle with this approach is that it often results in mathematical problems lacking analytical solutions. To cope with this, we make use of statistical simulation algorithms known as Monte Carlo methods to approximate the intractable solution. These methods enjoy well-understood statistical properties but are often computationally prohibitive to employ. The main contribution of this thesis is the exploration of different strategies for accelerating inference methods based on sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). That is, strategies for reducing the computational effort while keeping or improving the accuracy. A major part of the thesis is devoted to proposing such strategies for the MCMC method known as the particle Metropolis-Hastings (PMH) algorithm. We investigate two strategies: (i) introducing estimates of the gradient and Hessian of the target to better tailor the algorithm to the problem and (ii) introducing a positive correlation between the point-wise estimates of the target. Furthermore, we propose an algorithm based on the combination of SMC and Gaussian process optimisation, which can provide reasonable estimates of the posterior but with a significant decrease in computational effort compared with PMH. Moreover, we explore the use of sparseness priors for approximate inference in over-parametrised mixed effects models and autoregressive processes. This can potentially be a practical strategy for inference in the big data era.
Finally, we propose a general method for increasing the accuracy of the parameter estimates in non-linear state space models by applying a designed input signal.
Should the Riksbank raise or lower the repo rate at its next meeting in order to reach the inflation target? Which genes are associated with a certain disease? How can Netflix and Spotify know which films I want to watch and which music I want to listen to next? These three problems are examples of questions where statistical models can be useful for providing support and a basis for decisions. Statistical models combine theoretical knowledge about, for example, the Swedish economic system with historical data to produce forecasts of future events. These forecasts can then be used to evaluate, for instance, what would happen to inflation in Sweden if unemployment falls, or how the value of my pension savings changes when the Stockholm stock exchange crashes. Applications like these and many others make statistical models important for many parts of society. One way of building statistical models is to continuously update a model as more information is collected. This approach is called Bayesian statistics and is particularly useful when one has good prior insight into the model or access to only a small amount of historical data with which to build it. A drawback of Bayesian statistics is that the computations required to update the model with the new information are often very complicated. In such situations one can instead simulate the outcome of millions of variants of the model and compare these against the historical observations at hand. One can then average over the variants that gave the best results to obtain a final model. It can therefore sometimes take days or weeks to build a model. The problem becomes particularly severe when using more advanced models that could give better forecasts but take too long to build. In this thesis we use a number of different strategies to facilitate or improve these simulations.
For example, we propose taking more insight about the system into account, thereby reducing the number of model variants that need to be examined. We can thus rule out certain models from the start, because we have a good idea of roughly what a good model should look like. We can also modify the simulation so that it moves more easily between different types of models. In this way, the space of all possible models is explored more efficiently. We propose a number of combinations and modifications of existing methods to speed up the fitting of the model to the observations. We show that the computation time can in some cases be reduced from a few days to an hour. Hopefully this will in the future make it possible to use more advanced models in practice, which in turn will lead to better forecasts and decisions.
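Strategy (ii) in the abstract, introducing a positive correlation between the point-wise likelihood estimates inside pseudo-marginal Metropolis-Hastings, can be illustrated on a deliberately tiny toy model. This is a hedged sketch, not the thesis's algorithm or code: the model, all parameter values and the function names are invented for illustration.

```python
import numpy as np

def pmh_correlated(y, n_iter=2000, n_mc=50, rho=0.95, seed=0):
    """Correlated pseudo-marginal MH for the toy model
    x ~ N(theta, 0.5), y | x ~ N(x, 0.5), so that exactly y ~ N(theta, 1).
    The likelihood is estimated by Monte Carlo from auxiliary normals u."""
    rng = np.random.default_rng(seed)

    def loglik_hat(theta, u):
        # Unbiased importance-sampling estimate of p(y | theta).
        x = theta + np.sqrt(0.5) * u
        w = np.exp(-0.5 * (y - x) ** 2 / 0.5) / np.sqrt(2 * np.pi * 0.5)
        return np.log(np.mean(w))

    theta, u = 0.0, rng.standard_normal(n_mc)
    ll = loglik_hat(theta, u)
    samples = []
    for _ in range(n_iter):
        theta_p = theta + 0.5 * rng.standard_normal()
        # Crank-Nicolson move on the auxiliary variables: rho close to 1
        # couples the two likelihood estimates, reducing the variance of
        # the acceptance ratio (the "positive correlation" strategy).
        u_p = rho * u + np.sqrt(1 - rho ** 2) * rng.standard_normal(n_mc)
        ll_p = loglik_hat(theta_p, u_p)
        # Flat prior on theta: accept on the estimated likelihood ratio.
        if np.log(rng.uniform()) < ll_p - ll:
            theta, u, ll = theta_p, u_p, ll_p
        samples.append(theta)
    return np.array(samples)
```

With a flat prior and one observation y = 1, the posterior is N(1, 1), so the chain's mean should settle near 1 after burn-in.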
APA, Harvard, Vancouver, ISO, and other styles
36

Nowak, Michel. "Accelerating Monte Carlo particle transport with adaptively generated importance maps." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS403/document.

Full text
Abstract:
Les simulations Monte Carlo de transport de particules sont un outil incontournable pour l'étude de problèmes de radioprotection. Leur utilisation implique l'échantillonnage d'événements rares grâce à des méthodes de réduction de variance qui reposent sur l'estimation de la contribution d'une particule au détecteur. On construit cette estimation sous forme d'une carte d'importance. L’objet de cette étude est de proposer une stratégie qui permette de générer de manière adaptative des cartes d'importance durant la simulation Monte Carlo elle-même. Le travail a été réalisé dans le code de transport des particules TRIPOLI-4®, développé à la Direction de l’énergie nucléaire du CEA (Saclay, France). Le cœur du travail a consisté à estimer le flux adjoint à partir des trajectoires simulées avec l'Adaptive Multilevel Splitting, une méthode de réduction de variance robuste. Ce développement a été validé à l'aide de l'intégration d'un module déterministe dans TRIPOLI-4®. Trois stratégies sont proposées pour la réutilisation de ce score en tant que carte d'importance dans la simulation Monte Carlo. Deux d'entre elles proposent d'estimer la convergence du score adjoint lors de phases d'exploitation. Ce travail conclut sur le lissage du score adjoint avec des méthodes d'apprentissage automatique, en se concentrant plus particulièrement sur les estimateurs de densité à noyaux.
Monte Carlo methods are a reference asset for the study of radiation transport in shielding problems. Their use naturally implies the sampling of rare events, which needs to be tackled with variance reduction methods. These methods require the definition of an importance function/map. The aim of this study is to propose an adaptive strategy for the generation of such importance maps during the Monte Carlo simulation itself. The work was performed within TRIPOLI-4®, a Monte Carlo transport code developed at the nuclear energy division of CEA in Saclay, France. The core of this PhD thesis is the implementation of a forward-weighted adjoint score that relies on the trajectories sampled with Adaptive Multilevel Splitting, a robust variance reduction method. It was validated with the integration of a deterministic module in TRIPOLI-4®. Three strategies were proposed for the reintegration of this score as an importance map, and accelerations were observed. Two of these strategies assess the convergence of the adjoint score during exploitation phases by evaluating the figure of merit yielded by the use of the current adjoint score. Finally, the smoothing of the importance map with machine learning algorithms concludes this work, with a special focus on kernel density estimators.
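The final step, smoothing a noisy point-wise adjoint score into an importance map, can be sketched with a simple kernel smoother on a hypothetical 1-D geometry. This is not TRIPOLI-4® code; a Nadaraya-Watson smoother stands in here for the kernel-density-based smoothing studied in the thesis, and all names and values are illustrative.

```python
import numpy as np

def smoothed_importance_map(positions, scores, grid, bandwidth=0.5):
    """Kernel smoothing of noisy point-wise adjoint scores.

    positions: 1-D locations where the adjoint score was sampled
    scores:    the (noisy) score values at those locations
    grid:      locations where the smoothed importance map is evaluated

    Each grid point gets a Gaussian-weighted average of nearby scores,
    turning scattered estimates into a usable importance map.
    """
    diff = (grid[:, None] - positions[None, :]) / bandwidth
    k = np.exp(-0.5 * diff ** 2)              # Gaussian kernel weights
    return (k @ scores) / np.maximum(k.sum(axis=1), 1e-12)
```

For a score that decays away from the detector (here exp(-x)), the smoothed map should remain positive and monotonically decreasing.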
APA, Harvard, Vancouver, ISO, and other styles
37

Ghneim, Jabra F. "The Practice of Belonging: Can Learning Entrepreneurship Accelerate and Aid the Social Inclusion of Refugees in the United States." BYU ScholarsArchive, 2021. https://scholarsarchive.byu.edu/etd/8979.

Full text
Abstract:
This dissertation examines whether culinary entrepreneurship communities of practice, viewed through Lave and Wenger's Legitimate Peripheral Participation (LPP) model (Lave & Wenger, 1991), can lead to better social and economic inclusion for Middle Eastern Muslim refugee chefs in Utah. The life history approach was used to construct life histories for two Middle Eastern Muslim refugee chefs in Utah who joined the Spice Kitchen Incubator (SKI) program. SKI is a community of practice funded by the International Rescue Committee to assist refugee chefs in the resettlement process. This was an exploratory study, and given the limited number of cases reviewed, the conclusions cannot be generalized. However, this study concludes that SKI, as a community of practice, despite the many difficulties faced by refugee programs in the period 2016-2018 (the study period), had a positive impact on the social and economic inclusion outcomes for the participants.
APA, Harvard, Vancouver, ISO, and other styles
38

Sepp, Löfgren Nicholas. "Accelerating bulk material property prediction using machine learning potentials for molecular dynamics : predicting physical properties of bulk Aluminium and Silicon." Thesis, Linköpings universitet, Teoretisk Fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-179894.

Full text
Abstract:
In this project machine learning (ML) interatomic potentials are trained and used in molecular dynamics (MD) simulations to predict the physical properties of total energy, mean squared displacement (MSD) and specific heat capacity for systems of bulk Aluminium and Silicon. The interatomic potentials investigated are potentials trained using the ML models kernel ridge regression (KRR) and moment tensor potentials (MTPs). The simulations using these ML potentials are then compared with results obtained from ab-initio simulations using the gold-standard method of density functional theory (DFT), as implemented in the Vienna ab-initio simulation package (VASP). The results show that the MTP simulations reach accuracy comparable to the DFT simulations for the properties total energy and MSD for Aluminium, with errors on the orders of magnitude of meV and 10⁻⁵ Å². Specific heat capacity is not reasonably replicated for Aluminium. The MTP simulations do not reasonably replicate the studied properties for the system of Silicon. The KRR models are implemented in the most direct way, and do not yield reasonably low errors even when trained on all available 10000 time steps of DFT training data. On the other hand, the MTPs require training on only approximately 100 time steps to replicate the physical properties of Aluminium with accuracy comparable to DFT. After being trained on 100 time steps, the trained MTPs achieve mean absolute errors for the energy-per-atom and force-magnitude predictions on the orders of magnitude of 10⁻³ and 10⁻¹ respectively for Aluminium, and 10⁻³ and 10⁻² respectively for Silicon. At the same time, the MTP simulations require fewer core hours to simulate the same number of time steps as the DFT simulations. In conclusion, MTPs could very likely play a role in accelerating both materials simulations themselves and subsequently the emergence of the data-driven materials design and informatics paradigm.
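The KRR side of the comparison can be illustrated with a minimal kernel ridge regression fit of a scalar target from toy one-dimensional descriptors. This is a sketch under invented data: the thesis's actual atomic descriptors, DFT training sets and MTP machinery are far more involved.

```python
import numpy as np

def krr_fit(X, y, sigma=1.0, lam=1e-6):
    """Fit kernel ridge regression: solve (K + lam*I) alpha = y
    with a Gaussian kernel over descriptor vectors X (n_samples, n_features)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-d2 / (2 * sigma ** 2))
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, sigma=1.0):
    """Predict targets for new descriptors as a kernel expansion over training points."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha
```

Fitting a smooth function such as sin(x) on a dense 1-D grid, KRR interpolates accurately between training points, which is the regression mechanism the thesis applies to energies and forces.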
APA, Harvard, Vancouver, ISO, and other styles
39

Petti, Amy Daggett. "Comprehensive School Reform Influence on Teacher Practice: Listening in the Classroom: An Examination of Powerful Learning Labs within the Accelerated Schools Project." PDXScholar, 2002. https://pdxscholar.library.pdx.edu/open_access_etds/614.

Full text
Abstract:
Focusing on teacher learning, this study follows fifteen teachers in the crux of comprehensive school reform. These "regular" classroom teachers are the ubiquitous players of this theatre of school reform. "Regular" teacher is defined as a typical classroom teacher who is not actively involved in the district's school reform project or one who hasn't taken an active leadership role. The teachers in this study work in the challenging environment of a poor, diverse urban school district that was in its third year of a comprehensive school reform program, the Accelerated Schools Project. Fifteen teachers volunteered to take part in a teaching laboratory where they met, planned, taught, assessed and reflected on their practice. The study tells, analyzes and speculates about their journey. The Accelerated Schools Project (ASP) is a national comprehensive school improvement model that provides professional development to schools. The study described the experiences of regular classroom teachers who engaged in a yearlong professional development program that is part of the ASP service to schools. This study employs qualitative research methods in a multiple case study analysis. By examining the teaching practices of regular classroom teachers who are often depicted as "closing the door" to the outside influences of school, district, state or federal policy, the study seeks to fully understand the planning, teaching, assessing and reflecting of classroom teachers who are caught in the center of school reform. The key findings of this study suggest teacher practice for all teacher cohorts (novice, mid-career and veteran) was influenced by participation in the Powerful Learning Laboratory. Each aspect of teaching (planning, teaching, assessing and reflection) was influenced, with differing emphasis by each cohort. 
The findings suggest the Powerful Learning Lab is a positive professional development experience for teachers, and that teacher learning labs should remain an integral part of the Accelerated Schools Project.
APA, Harvard, Vancouver, ISO, and other styles
40

Vaupel, Yannic [Verfasser], Alexander [Akademischer Betreuer] Mitsos, and Sergio [Akademischer Betreuer] Lucia. "Accelerating nonlinear model predictive control through machine learning with application to automotive waste heat recovery / Yannic Vaupel ; Alexander Mitsos, Sergio Lucia." Aachen : Universitätsbibliothek der RWTH Aachen, 2020. http://d-nb.info/1231738715/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Barduhn, Susan. "Traits and conditions that accelerate teacher learning : a consideration of the four-week Cambridge RSA Certificate in English Language Teaching to Adults." Thesis, University of West London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262820.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Fortes, Maria Auxiliadora Soares. "Aceleração da Aprendizagem - resultado de decisões curriculares no contexto escolar?" Universidade Federal do Ceará, 2006. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=511.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Esse trabalho decorre da preocupação com resultados de pesquisas, as quais indicam que, ainda hoje, a educação de nível fundamental continua um grave problema do ensino público brasileiro a ser solucionado. Nesse contexto, privilegiou-se as classes de aceleração da aprendizagem, a qual compôs o cenário educacional cearense de 1998 a 2002, com forte apelo de inovação pedagógica capaz de inserir os alunos “defasados” no ensino dito regular. Desse modo, o presente estudo tem como objetivo central explicitar o sentido e o alcance que a implantação desse programa adquire quando preconiza o desenvolvimento de uma escola sem exclusão. A Teoria Crítica ancora esta discussão, cuja análise coloca como questão básica as relações de poder, esclarecendo que as contradições e resistências têm papel de destaque nessa teorização. A metodologia empregada engloba um estudo de caso múltiplo com técnicas da história oral, delineado com base na pesquisa em jornal e bibliográfica, análise documental, conversas informais e entrevistas semi-estruturadas com técnicos das secretarias estadual e municipal, diretores, professores e alunos de uma escola da rede estadual e outra da rede municipal, em Fortaleza. Os resultados esclarecem que a implantação das classes de aceleração não resolveu o problema da exclusão escolar, notadamente porque os alunos não retornaram, em sua grande maioria, ao ensino regular.
Results of recent research indicate that the low quality of basic education is one of the most serious problems of Brazilian public education. In this work we study the "classes de aceleração da aprendizagem" (learning acceleration classrooms) in the State of Ceará educational system in the period 1998-2002. The "acceleration classrooms" appeal to pedagogical innovation as a means to insert "defasados" (out-of-phase) pupils into regular education. We aim to identify the reach of this modality of education for the development of a non-exclusive school. The study is based on Critical Theory; it focuses on power relations in society and points to the role of contradictions and resistance in explaining students' progress. The methodology used here is the case study, with oral history techniques, newspaper and bibliographical searches, documentary analysis, and informal and semi-structured interviews. Technicians of the Boards of Education (both of the State of Ceará and of the city of Fortaleza), managers, teachers, and pupils of a State school and of a city of Fortaleza school were interviewed. The results show that the acceleration classrooms did not solve the problem of school exclusion, because the great majority of pupils did not return to regular education.
APA, Harvard, Vancouver, ISO, and other styles
43

Lin, Hongzhou. "Algorithmes d'accélération générique pour les méthodes d'optimisation en apprentissage statistique." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM069/document.

Full text
Abstract:
Les problèmes d’optimisation apparaissent naturellement pendant l’entrainement de modèles d’apprentissage supervisés. Un exemple typique est le problème de minimisation du risque empirique (ERM), qui vise à trouver un estimateur en minimisant le risque sur un ensemble de données. Le principal défi consiste à concevoir des algorithmes d’optimisation efficaces permettant de traiter un grand nombre de données dans des espaces de grande dimension. Dans ce cadre, les méthodes classiques d’optimisation, telles que l’algorithme de descente de gradient et sa variante accélérée, sont coûteuses en termes de calcul car elles nécessitent de passer à travers toutes les données à chaque évaluation du gradient. Ce défaut motive le développement de la classe des algorithmes incrémentaux qui effectuent des mises à jour avec des gradients incrémentaux. Ces algorithmes réduisent le coût de calcul par itération, entraînant une amélioration significative du temps de calcul par rapport aux méthodes classiques. Une question naturelle se pose : serait-il possible d’accélérer davantage ces méthodes incrémentales ? Nous donnons ici une réponse positive, en introduisant plusieurs schémas d’accélération génériques. Dans le chapitre 2, nous développons une variante proximale de l’algorithme Finito/MISO, qui est une méthode incrémentale initialement conçue pour des problèmes lisses et fortement convexes. Nous introduisons une étape proximale dans la mise à jour de l’algorithme pour prendre en compte la pénalité de régularisation qui est potentiellement non lisse. L’algorithme obtenu admet un taux de convergence similaire à l’algorithme Finito/MISO original. Dans le chapitre 3, nous introduisons un schéma d’accélération générique, appelé Catalyst, qui s’applique à une grande classe de méthodes d’optimisation, dans le cadre d’optimisations convexes. La caractéristique générique de notre schéma permet à l’utilisateur de sélectionner sa méthode préférée, la plus adaptée à ses problèmes.
Nous montrons qu’en appliquant Catalyst, nous obtenons un taux de convergence accéléré. Plus important, ce taux coïncide avec le taux optimal des méthodes incrémentales à un facteur logarithmique près dans l’analyse du pire des cas. Ainsi, notre approche est non seulement générique mais aussi presque optimale du point de vue théorique. Nous montrons ensuite que l’accélération est bien présente en pratique, surtout pour des problèmes mal conditionnés. Dans le chapitre 4, nous présentons une seconde approche générique qui applique les principes Quasi-Newton pour accélérer les méthodes de premier ordre, appelée QNing. Le schéma s’applique à la même classe de méthodes que Catalyst. En outre, il admet une simple interprétation comme une combinaison de l’algorithme L-BFGS et de la régularisation de Moreau-Yosida. À notre connaissance, QNing est le premier algorithme de type Quasi-Newton compatible avec les objectifs composites et la structure de somme finie. Nous concluons cette thèse en proposant une extension de l’algorithme Catalyst au cas non convexe. Il s’agit d’un travail en collaboration avec Dr. Courtney Paquette et Pr. Dmitriy Drusvyatskiy, de l’Université de Washington, et mes encadrants de thèse. Le point fort de cette approche réside dans sa capacité à s’adapter automatiquement à la convexité. En effet, aucune information sur la convexité de la fonction n’est nécessaire avant de lancer l’algorithme. Lorsque l’objectif est convexe, l’approche proposée présente les mêmes taux de convergence que l’algorithme Catalyst convexe, entraînant une accélération. Lorsque l’objectif est non convexe, l’algorithme converge vers les points stationnaires avec le meilleur taux de convergence connu pour les méthodes de premier ordre. Des résultats expérimentaux prometteurs sont observés en appliquant notre méthode à des problèmes de factorisation de matrice parcimonieuse et à l’entraînement de modèles de réseaux de neurones.
Optimization problems arise naturally in machine learning for supervised problems. A typical example is the empirical risk minimization (ERM) formulation, which aims to find the best a posteriori estimator minimizing the regularized risk on a given dataset. The current challenge is to design efficient optimization algorithms that are able to handle large amounts of data in high-dimensional feature spaces. Classical optimization methods such as the gradient descent algorithm and its accelerated variants are computationally expensive under this setting, because they require to pass through the entire dataset at each evaluation of the gradient. This was the motivation for the recent development of incremental algorithms. By loading a single data point (or a minibatch) for each update, incremental algorithms reduce the computational cost per-iteration, yielding a significant improvement compared to classical methods, both in theory and in practice. A natural question arises: is it possible to further accelerate these incremental methods? We provide a positive answer by introducing several generic acceleration schemes for first-order optimization methods, which is the main contribution of this manuscript. In chapter 2, we develop a proximal variant of the Finito/MISO algorithm, which is an incremental method originally designed for smooth strongly convex problems. In order to deal with the non-smooth regularization penalty, we modify the update by introducing an additional proximal step. The resulting algorithm enjoys a similar linear convergence rate as the original algorithm, when the problem is strongly convex. In chapter 3, we introduce a generic acceleration scheme, called Catalyst, for accelerating gradient-based optimization methods in the sense of Nesterov. Our approach applies to a large class of algorithms, including gradient descent, block coordinate descent, incremental algorithms such as SAG, SAGA, SDCA, SVRG, Finito/MISO, and their proximal variants. 
For all of these methods, we provide acceleration and explicit support for non-strongly convex objectives. The Catalyst algorithm can be viewed as an inexact accelerated proximal point algorithm, applying a given optimization method to approximately compute the proximal operator at each iteration. The key for achieving acceleration is to appropriately choose an inexactness criterion and control the required computational effort. We provide a global complexity analysis and show that acceleration is useful in practice. In chapter 4, we present another generic approach called QNing, which applies Quasi-Newton principles to accelerate gradient-based optimization methods. The algorithm is a combination of the inexact L-BFGS algorithm and the Moreau-Yosida regularization, and applies to the same class of functions as Catalyst. To the best of our knowledge, QNing is the first Quasi-Newton type algorithm compatible with both composite objectives and the finite sum setting. We provide extensive experiments showing that QNing gives significant improvement over competing methods in large-scale machine learning problems. We conclude the thesis by extending the Catalyst algorithm to the nonconvex setting. This is a joint work with Courtney Paquette and Dmitriy Drusvyatskiy, from the University of Washington, and my PhD advisors. The strength of the approach lies in its automatic adaptation to convexity, meaning that no information about the convexity of the objective function is required before running the algorithm. When the objective is convex, the proposed approach enjoys the same convergence result as the convex Catalyst algorithm, leading to acceleration. When the objective is nonconvex, it achieves the best known convergence rate to stationary points for first-order methods. Promising experimental results have been observed when applying it to sparse matrix factorization problems and neural network models.
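The Catalyst scheme described above, an inexact accelerated proximal point method wrapping a user-chosen inner solver, can be sketched as follows. This is a simplified illustration with plain gradient descent as the inner method and invented parameter values; the actual algorithm's choice of kappa, extrapolation schedule and inner stopping criterion is more careful.

```python
import numpy as np

def catalyst(grad, x0, kappa=1.0, n_outer=50, n_inner=100, lr=0.1):
    """Catalyst-style outer loop (a sketch).

    At each outer iteration, approximately minimise the proximal subproblem
    f(x) + (kappa/2)||x - y||^2 with a few inner gradient steps, then move
    the anchor y by Nesterov-style extrapolation.
    """
    x, x_prev, y = x0.copy(), x0.copy(), x0.copy()
    for k in range(n_outer):
        # Inner solver: plain gradient descent on the regularised subproblem
        # (any first-order method could be plugged in here instead).
        z = x.copy()
        for _ in range(n_inner):
            z -= lr * (grad(z) + kappa * (z - y))
        x_prev, x = x, z
        beta = k / (k + 3)                 # simple extrapolation schedule
        y = x + beta * (x - x_prev)
    return x
```

On a small ill-conditioned quadratic f(x) = 0.5 x'Ax - b'x, the iterates converge to the minimiser A⁻¹b, which exercises both the inner prox solves and the extrapolation step.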
APA, Harvard, Vancouver, ISO, and other styles
44

Barbosa, Raimundo José Pereira. "Análise da implementação do projeto avançar na coordenadoria distrital de educação 4 da secretaria estadual de educação do estado do Amazonas." Universidade Federal de Juiz de Fora, 2015. https://repositorio.ufjf.br/jspui/handle/ufjf/2195.

Full text
Abstract:
A presente dissertação foi desenvolvida no âmbito do Mestrado Profissional em Gestão e Avaliação da Educação (PPGP) do Centro de Políticas Públicas e Avaliação da Educação da Universidade Federal de Juiz de Fora (CAEd/UFJF). O caso de gestão estudado tem o objetivo de compreender quais as principais dificuldades na implementação do “Programa de Correção do Fluxo Escolar do Ensino Fundamental: Projeto Avançar”, na Coordenadoria Distrital de Educação – 4 (CDE-4), a partir da descrição do programa e da análise de sua implementação nas escolas e na sede da CDE-4 e assim propor ações que contribuam para solucionar as dificuldades de implementação encontradas. A CDE-4 esta localizada na zona centro-oeste da cidade de Manaus - Amazonas - Brasil, faz parte da Secretaria Estadual de Educação do Amazonas (SEDUC-AM) e coordena 34 escolas de Ensino Fundamental e Médio, das quais, dezessete implementaram o programa em 2015. O Projeto Avançar foi implantado pela SEDUC-AM, com o objetivo de corrigir a distorção idade-ano dos alunos matriculados no Ensino Fundamental, com até dois anos de atraso escolar, através de uma metodologia diferenciada, baseada na interdisciplinaridade e na aprendizagem significativa. As coordenadorias distritais e regionais de educação da SEDUC-AM são as responsáveis pela implementação e monitoramento do projeto em suas escolas, seguindo as orientações do Departamento de Politicas e Programas Educacionais (DEPPE) e da Proposta Curricular do Projeto Avançar (PCPAV). O motivo desta pesquisa foi a dificuldade enfrentada pelo pesquisador, no período em que atuou como gestor escolar, para implementar o programa na escola onde trabalhava e por observar, enquanto supervisor pedagógico, atuando desde 2012 na CDE-4, que a referida coordenadoria, também enfrentava dificuldades com a implementação do Projeto Avançar. 
A pesquisa é um estudo de caso, baseado na análise documental do programa e de suas atividades de implementação (PCPAV, legislação, atas, pautas de reuniões e registros acadêmicos, dentre outros), na aplicação de questionários e entrevistas com principais atores que implementam o programa no âmbito da CDE-4. Os achados da pesquisa permitiram identificar os seguintes problemas na implementação do Projeto Avançar na CDE-4: inadequação da formação continuada oferecida aos atores envolvidos; falta da prática pedagógica interdisciplinar e ineficiência do monitoramento do programa feito pela CDE-4. A partir dos achados da pesquisa, o PAE apresentado propõe as seguintes medidas visando a melhoria da implementação do programa na CDE-4: fortalecer a interdisciplinaridade como prática pedagógica, melhorar a formação continuada e tornar o monitoramento do programa mais eficiente.
This work was developed under the Professional Master in Management and Education Assessment (PPGP) of the Center for Public Policy and Federal University of Education Evaluation of Juiz de Fora (CAEd / UFJF). The case management study aims to understand what the main difficulties in implementing the "School Flow Correction Program Elementary School: Next Project", the District Coordinator of Education - 4 (CDE-4), starting from the description the program and the analysis of its implementation in schools and the headquarters of the CDE-4 and so propose actions aimed at contributing to resolving implementation difficulties. The CDE-4 is located in the center-west of the city of Manaus - Amazonas - Brazil, is part of the State Department of Amazonas Education (SEDUC-AM) and coordinates 34 primary schools and East, of which seventeen implementaramm the program in 2015. The next project was implemented by SEDUC-AM, in order to correct the age-year students enrolled in elementary school, within two years of school delay, through a different methodology based on interdisciplinary and meaningful learning. The district and regional coordinators SEDUC-AM of education are responsible for the implementation and monitoring of the project in their schools, following the guidelines of the Department of Policies and Educational Programs (DEPPE) and the Curriculum Proposal of the Forward Project. The reason for this research was the difficulty faced by the researcher, in the period when he served as school manager, to implement the program at the school where he worked and watched as teaching supervisor, working since 2012 in the CDE-4, said coordinating body also facing difficulties with implementation of the Next Project. 
The research is a case study based on document analysis of the program and its implementation activities (the Curriculum Proposal, legislation, minutes, meeting agendas and academic records, among others), on questionnaires and on interviews with the key actors who implement the program within the CDE-4. The findings identified the following problems in the implementation of the Projeto Avançar in the CDE-4: inadequacy of the continuing education offered to stakeholders; lack of interdisciplinary teaching practice; and inefficiency of the program monitoring done by the CDE-4. Based on the research findings, the PAE presented here proposes the following measures aimed at improving program implementation in the CDE-4: strengthening interdisciplinarity as a pedagogical practice, enhancing continuing education and making program monitoring more efficient.
APA, Harvard, Vancouver, ISO, and other styles
45

Cyr-Mutty, Paul B. "Accelerating Experience| Using Learning Scenarios Based on Master Teacher Experiences and Specific School Contexts to Help Induct Novice Faculty into Teaching at an Independent Boarding School." Thesis, University of Pennsylvania, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10264630.

Full text
Abstract:

Many independent boarding schools have customarily hired significant numbers of novice faculty who are not certified teachers and who do not have significant teaching experience. Additionally, the time available to help such novice faculty learn the many aspects of their jobs is quite limited. The methods used to help novice faculty learn while they are already enacting their roles as educators are therefore important. This study examined the effectiveness of school-context-based learning scenarios as a tool for teaching novice faculty at independent boarding schools. Specifically, the study tried to determine whether such scenarios helped novice faculty feel greater self-efficacy and helped them gain the benefits of their own experiential learning more effectively, thus acquiring more quickly the craft knowledge that senior teachers developed through their own experiential learning. I theorized that this would ultimately lead to better educational outcomes for their students in all facets of their jobs. First, the researcher interviewed six master teachers from three different junior boarding schools to gather information about the key experiential learning events of successful teachers, and then analyzed these data to identify common themes and types of experiences. These narrated, real experiences and the analyses of them served as the basis for constructing the learning scenarios. The scenarios attempted both to highlight important concepts and approaches to working with adolescents that the master teachers felt they had gleaned from the actual experiences and to reflect the specific details of the independent boarding middle school where they were used. The scenarios were then read and discussed with the novice faculty at the school as part of their induction to life and work there over the course of a four-month period.
To assess the impact of the scenarios, the researcher audio- and video-recorded and analyzed two of the scenario learning sessions; had the new faculty respond in writing to two scenarios; conducted a focus group with the new faculty; and administered a self-efficacy scale before and after the scenario learning experience.

APA, Harvard, Vancouver, ISO, and other styles
46

Lingala, Sajan Goud. "Novel adaptive reconstruction schemes for accelerated myocardial perfusion magnetic resonance imaging." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/5016.

Full text
Abstract:
Coronary artery disease (CAD) is one of the leading causes of death in the world. In the United States alone, it is estimated that approximately every 25 seconds a new CAD event occurs, and approximately every minute someone dies of one. Detecting CAD in its early stages is critical to reducing mortality rates. Magnetic resonance imaging of myocardial perfusion (MR-MPI) has received significant attention over the last decade due to its ability to provide a unique view of microcirculatory blood flow in the myocardial tissue through the coronary vascular network. The ability of MR-MPI to detect changes in microcirculation during the early stages of ischemic events makes it a useful tool in identifying myocardial tissue that is alive but at risk of dying. However, this technique is not yet fully established clinically due to fundamental limitations imposed by MRI device physics. The limitations of current MRI schemes often make it challenging to simultaneously achieve high spatio-temporal resolution, sufficient spatial coverage, and good image quality in myocardial perfusion MRI. Furthermore, acquisitions are typically set up to acquire images during breath holding, which often results in motion artifacts due to improper breath-hold patterns. This dissertation develops novel image reconstruction methods, in conjunction with non-Cartesian sampling, for reconstructing dynamic MRI data from highly accelerated, under-sampled Fourier measurements. The reconstruction methods are based on adaptive signal models that represent the dynamic data using few model coefficients. Three novel adaptive reconstruction methods are developed and validated: (a) low-rank and sparsity based modeling, (b) blind compressed sensing, and (c) motion-compensated compressed sensing. The developed methods are applicable to a wide range of dynamic imaging problems.
In the context of MR-MPI, this dissertation demonstrates the feasibility of using the developed methods to enable free-breathing myocardial perfusion MRI acquisitions with high spatio-temporal resolution (< 2 mm x 2 mm, 1 heartbeat) and slice coverage (up to 8 slices).
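The low-rank modeling idea behind method (a) can be illustrated with a minimal sketch (an illustration of the general technique, not the author's implementation; all function names and parameters here are assumptions): stack the dynamic frames as columns of a space-time matrix, then alternate singular-value soft-thresholding (which exploits temporal redundancy) with data consistency on the sampled Fourier entries.

```python
import numpy as np

def svt(X, tau):
    # Singular-value soft-thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def low_rank_recon(y, mask, shape, tau=1.0, n_iter=100):
    """Reconstruct a dynamic series from under-sampled Fourier data.

    y     : measured k-t data, zeros where not sampled, shape (n_space, n_time)
    mask  : boolean sampling mask, same shape as y
    shape : (n_space, n_time) of the image-domain matrix
    """
    n = shape[0]
    X = np.zeros(shape, dtype=complex)
    for _ in range(n_iter):
        X = svt(X, tau)                          # enforce low rank across time
        F = np.fft.fft(X, axis=0) / np.sqrt(n)   # unitary spatial Fourier transform
        F[mask] = y[mask]                        # restore measured k-space samples
        X = np.fft.ifft(F, axis=0) * np.sqrt(n)
    return X
```

On a synthetic low-rank series with 50% random sampling, this iteration typically reduces the error relative to a plain zero-filled reconstruction.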
APA, Harvard, Vancouver, ISO, and other styles
47

Karlsson, Johanna. "Identifying patterns in physiological parameters of expert and novice marksmen in simulation environment related to performance outcomes." Thesis, Linköpings universitet, Avdelningen för medicinsk teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139589.

Full text
Abstract:
The goal of this thesis is to investigate whether measurements of physiological parameters can be used to accelerate the learning of target shooting for novice marksmen in Saab's Ground Combat Indoor Trainer (GC-IDT). This was done through a literature study that identified brain activity, eye movements, heart activity, muscle activity and breathing as related to shooting technique. The sensor types electroencephalography (EEG), electrooculography (EOG), electrocardiography (ECG), electromyography (EMG) and impedance pneumography (IP) were found to be suitable for measuring the respective parameters in the GC-IDT. The literature study also showed that previous studies had found differences in the physiological parameters in the seconds leading up to the shot when comparing experts and novices. Those studies further showed that it was possible to accelerate learning by giving novices feedback about their physiological parameters, allowing them to mimic the behavior of the experts. An experiment was performed in the GC-IDT, measuring EOG, ECG, EMG and IP on expert and novice marksmen, to investigate whether results similar to those of previous studies could be found. The experiment showed a correlation between eye movements and shooting score, in line with what previous studies had shown. The respiration measurement did not show any correlation with shooting score in this experiment; it was, however, possible to see a slight difference between experts and novices. The other measurements showed no correlation with shooting score in this experiment. In the future, further experiments need to be conducted, as not all parameters could be explored in depth here. Possible improvements to such experiments include increasing the number of participants and/or the number of shots, marking shots automatically in the data, and increasing the time between shots.
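The kind of analysis described above, relating a pre-shot physiological feature to shooting scores, can be sketched with a standard Pearson correlation (a generic illustration; the feature name is hypothetical and not taken from the thesis):

```python
import numpy as np

def pearson_r(feature, scores):
    """Pearson correlation between a per-shot physiological feature
    (e.g. a hypothetical pre-shot eye-movement amplitude) and shooting scores."""
    x = np.asarray(feature, dtype=float)
    y = np.asarray(scores, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    # Covariance normalized by the product of standard deviations.
    return float(xm @ ym / np.sqrt((xm @ xm) * (ym @ ym)))
```

A value near +1 or -1 indicates a strong linear relationship between the feature and the score, while a value near 0 indicates none, matching how the thesis describes which sensor channels did or did not correlate with performance.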
APA, Harvard, Vancouver, ISO, and other styles
48

McCaw, Donna S. "Teaching reading using small flexible-skills grouping and whole classroom instruction: a study of Project FIRST." Diss., Illinois State University, 2001. http://wwwlib.umi.com/cr/ilstu/fullcit?p3006623.

Full text
Abstract:
Thesis (Ed. D.)--Illinois State University, 2001.
Title from title page screen, viewed April 20, 2006. Dissertation Committee: Susan Davis-Lenski, Joseph Braun (co-chairs), Anthony Lorsbach. Includes bibliographical references (leaves 115-139) and abstract. Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
49

Gruvstad, Kim, and Ebba Remes. "Särskilt begåvade elever i matematikklassrummet : Hur kan lärare upptäcka, stimulera och utmana särskilt begåvade elever i matematik?" Thesis, Linköpings universitet, Pedagogik och didaktik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-152015.

Full text
Abstract:
The purpose of this literature study is to find out how teachers identify gifted students in mathematics and how they best support and stimulate these students. A further purpose is to make teachers and educators aware of how gifted students express themselves and of their needs. To find relevant literature for our study we used database searches and manual searches. The databases we used were ERIC, UniSearch and Google Scholar. The results show that gifted students can express themselves in many different ways, which can make them difficult to identify. There are characteristics common to most gifted students, such as motivation, thirst for knowledge, curiosity and good problem-solving ability. However, there are also characteristics that differ from student to student, such as ways of explaining and social competence. The results also show the special needs that gifted students have, as well as didactic choices that can benefit them, for example enrichment, acceleration, grouping and mentorship.
APA, Harvard, Vancouver, ISO, and other styles
50

Barbosa, Tânia Maria Meneses Farias. "A implementação do Projeto Acelerar para Vencer (PAV) em uma unidade escolar: das intenções às ações." Universidade Federal de Juiz de Fora, 2013. https://repositorio.ufjf.br/jspui/handle/ufjf/1000.

Full text
Abstract:
The objective of this research is to analyze the effectiveness of the implementation of the Projeto Acelerar Para Vencer (PAV) from 2009 to 2012 and its relation to the management of a school unit belonging to the Regional Superintendency of Education (SRE) of São João del-Rei, in order to verify which actions by the school manager of the studied unit contribute to the effective implementation of the project. The PAV is a strategy to serve elementary school (EF) students who are at least two years over-age for their grade. Its main objectives are to increase students' average proficiency and to reduce the age-grade distortion. The present research reveals, however, that the objectives set out in the project's structuring documents have not been achieved. For the development of the research, one of the schools that has had PAV classes since 2009 was selected. Its choice is associated with the good results obtained by the institution under the policy, evidenced by the Assessment Program of the Public Network of Basic Education (PROEB) and by the lowest rates of dropout and absence in the project. It is a school that, within the researched jurisdiction, presents fewer problems regarding the implementation of the PAV. The research was developed through non-participant observation, questionnaires administered to students and parents, a focus group, and interviews with the project's teachers, Basic Education Specialists (EEB) and the school principal. The professionals of the SRE/SJDR are part of the group of actors studied, given that institution's role as mediator between the policies established by the SEE/MG and their implementation in the schools of the state network. The analysis of the research results is carried out on the basis of two categories whose discussion centers on the PAV curriculum proposal and the contradictions of the policy.
The methodological instruments and theoretical references supported the confirmation that the PAV has not achieved its objectives as a result of its inadequate implementation: it was a policy introduced in a barely democratic way; the teaching staff had difficulty accepting the continued-progression regime; the school manager lacked a pedagogical profile for working with the PAV; and the professionals who work directly on the project were unfamiliar with its structuring documents and, consequently, with its premises, objectives and guidelines. Based on the observed results, an action plan is proposed that involves training school managers with a focus on pedagogical management and, for teachers, an emphasis on building a curriculum proposal for the project. For the school and the SRE/SJDR, considering their specific roles, actions are proposed to promote a more effective implementation of the PAV. Finally, in view of the contradictions observed in the implementation of the policy, suggestions are presented to the SEE/MG for adapting the guidelines contained in the official documents to the context of practice.
The aim of this research is to analyze the effectiveness of the implementation of the Projeto Acelerar Para Vencer (PAV, in Portuguese), from 2009 to 2012, and its relation to the management of a school unit belonging to the Regional Superintendency of Education (SRE, in Portuguese) of São João del-Rei, aiming to verify which actions by the school manager of the studied school have contributed to the effectiveness of the project's implementation. The PAV is a strategy to serve Basic Education students with two or more years of age-grade distortion. Its main goals are to increase students' average proficiency levels and to reduce the age-grade distortion. The present research reveals, however, that the goals established in the documents which structure the project have not been met. In order to conduct the research we selected a school which has had PAV classes since 2009. Such choice is associated with the good results obtained by the institution regarding the policy, highlighted by the Assessment Program of the Public Network of Basic Education (PROEB, in Portuguese) and by the lower rates of dropout and absence in the project. It is a school that, under the researched jurisdiction, presents fewer problems regarding the PAV's implementation. The research was developed by means of non-participant observation, surveys conducted among students and parents, a focus group, and interviews with the project's teachers, Specialists in Basic Education (EEB, in Portuguese) and the school manager. The professionals from the SRE/SJDR are part of the group of researched actors, considering their role in mediating between the policies established by the SEE/MG and their implementation in the state schools. The analysis of the results is conducted from two categories whose discussion centers on the PAV's curricular proposal and the contradictions of the policy.
The methodological instruments and theoretical references supported the confirmation that the PAV has not reached its goals due to its inadequate implementation: it was a policy implemented in a less than democratic way; the teaching staff had difficulty accepting the continued-progression regimen; the school manager lacked a pedagogical profile for working with the PAV; and the professionals who work directly on the project were unfamiliar with its structuring documents and, consequently, with its premises, goals and orientations. From the results observed, we propose an action plan which involves the training of school managers with a focus on pedagogical management and, for the teachers, an emphasis on the construction of a curricular proposal for the project. To the school and the SRE/SJDR, considering their specific roles, we propose actions which promote a more effective implementation of the PAV. We also suggest, in view of the contradictions observed in implementing the policy in question, proposals to the SEE/MG which may adapt the guidelines contained in the official documents to the context of practice.
APA, Harvard, Vancouver, ISO, and other styles
