Theses on the topic "TD2 prediction"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles

Consult the top 50 theses (master's and doctoral dissertations) for your research on the topic "TD2 prediction".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.

Browse theses from many scientific disciplines and compile a correct bibliography.

1

Dursoniah, Danilo. "Modélisation computationnelle de l’absorption intestinale du glucose pour la prédiction du diabète de Type 2". Electronic Thesis or Diss., Université de Lille (2022-....), 2024. http://www.theses.fr/2024ULILB023.

Full text
Abstract:
Research on type 2 diabetes (T2D) has so far predominantly focused on the role of pancreatic beta function and insulin sensitivity. Numerous indices, of varying precision and relevance, have been proposed to measure these factors. These indices are calculated using more or less complex models based on static fasting glucose data or dynamic oral glucose test data. Bariatric surgery has highlighted the existence of a third parameter that could potentially be a cause of T2D: intestinal glucose absorption (IGA). Unlike pancreatic beta function and insulin sensitivity, no index has yet been proposed to measure the effect of this parameter on T2D. Experimentally measuring intestinal glucose absorption requires access to the portal vein, which is practically impossible in humans. An experimental multi-tracer technique using labeled glucose has been proposed as an alternative, but it remains very difficult to implement and requires expertise that prevents its routine clinical use. It should also be noted that the modeling approaches proposed so far to predict the postprandial glucose response require this gold standard. The few existing models are only partially mechanistic and relatively complex. This thesis proposes to overcome these problems. Thus, as a first contribution, we reproduce the postprandial model of Dalla Man and the simulations from the reference article (Dalla Man et al., 2007). Since this model is described exclusively with ODEs, we partially transcribed it into a system of chemical reactions to put the underlying physiological mechanisms into perspective. This implementation first allowed us to carry out reproducibility work, despite the absence of the original data from the reference article, and then to confront the model with our OBEDIAB clinical data, revealing its limitations in terms of estimation and identifiability. As a second major contribution, to circumvent the multi-tracer gold standard, we used D-xylose, a glucose analog available in our pre-clinical dataset from experiments conducted on minipigs, as a biomarker to observe IGA directly. To our knowledge, we developed the first D-xylose model. This model was selected through parameter estimation on our datasets, followed by a practical identifiability analysis and a global sensitivity analysis. These analyses also allowed us to study the relative contributions of gastric emptying and intestinal absorption to the D-xylose dynamic profile. Finally, we discuss the links between blood glucose modeling and postprandial D-xylose response modeling, while considering the clinical applications and limitations of the D-xylose model. Keywords: systems biology, modeling, chemical reaction networks, ordinary differential equations, parameter estimation, identifiability analysis, type 2 diabetes, D-xylose
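The thesis builds on compartmental ODE models of oral glucose appearance. As a rough illustration of that class of model (not the Dalla Man model itself), the following sketch chains gastric emptying, intestinal absorption, and plasma clearance with invented first-order rate constants:

```python
# A minimal compartmental sketch: glucose moves from the stomach to the gut
# and is absorbed into plasma at first-order rates. Rates are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

k_empt, k_abs, k_clear = 0.05, 0.08, 0.02  # assumed rate constants (1/min)

def rhs(t, y):
    stomach, gut, plasma = y
    d_stomach = -k_empt * stomach              # gastric emptying
    d_gut = k_empt * stomach - k_abs * gut     # intestinal transit/absorption
    d_plasma = k_abs * gut - k_clear * plasma  # appearance minus clearance
    return [d_stomach, d_gut, d_plasma]

sol = solve_ivp(rhs, (0, 300), [75000.0, 0.0, 0.0],  # 75 g oral dose, in mg
                t_eval=np.linspace(0, 300, 61))
print(sol.y[2].max())  # peak plasma glucose mass under this toy model
```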
APA, Harvard, Vancouver, ISO, and other styles
2

Bostock, Adam K. "Prediction and reduction of traffic pollution in urban areas". Thesis, University of Nottingham, 1994. http://eprints.nottingham.ac.uk/14352/.

Full text
Abstract:
This thesis is the result of five years' research into road traffic emissions of air pollutants. It includes a review of traffic pollution studies and models, and a description of the PREDICT model suite and PREMIT emissions model. These models were used to evaluate environmentally sensitive traffic control strategies, some of which were based on the use of Advanced Transport Telematics (ATT). This research has improved our understanding of traffic emissions. It studied emissions of the following pollutants: carbon monoxide (CO), hydrocarbons (HC) and oxides of nitrogen (NOx). PREMIT modelled emissions from each driving mode (cruise, acceleration, deceleration and idling) and, consequently, predicted relatively complex emission characteristics for some scenarios. Results suggest that emission models should represent emissions by driving mode, instead of using urban driving cycles or average speeds. Emissions of NOx were more complex than those of CO and HC. The change in NOx caused by a particular strategy could be similar or opposite to the changes in CO and HC. Similarly, for some scenarios, a reduction in stops and delay did not reduce emissions of NOx. It was also noted that the magnitude of changes in emissions of NOx was usually much less than the corresponding changes in CO and HC. In general, the traffic control strategies based on the adjustment of signal timings were not effective in reducing total network emissions. However, high emissions of pollutants on particular links could, potentially, be reduced by changing signal timings. For many links, mutually exclusive strategies existed for reducing emissions of CO and HC, and emissions of NOx. Hence, a decision maker may have to choose which pollutants are to be reduced, and which can be allowed to increase. The environmental area licensing strategy gave relatively large reductions in emissions of all pollutants. This strategy was superior to the traffic signal timing strategies because it had no detrimental impact on the efficiency of the traffic network and gave simultaneous reductions in emissions of CO, HC and NOx.
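The abstract's central claim is that emissions should be represented per driving mode rather than by average speed. A toy version of that bookkeeping, with invented modal rates, might look like:

```python
# Toy modal emissions estimate: total link emissions are the sum over driving
# modes of (vehicle-seconds in mode) x (modal emission rate). The rates below
# are illustrative placeholders, not measured values.
modal_rates_g_per_s = {          # hypothetical CO emission rates
    "cruise": 0.015, "acceleration": 0.045,
    "deceleration": 0.008, "idling": 0.005,
}
vehicle_seconds = {"cruise": 1800, "acceleration": 600,
                   "deceleration": 500, "idling": 900}

total_co_g = sum(modal_rates_g_per_s[m] * vehicle_seconds[m]
                 for m in modal_rates_g_per_s)
print(f"Estimated link CO emissions: {total_co_g:.1f} g")
```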
APA, Harvard, Vancouver, ISO, and other styles
3

Reynolds, Shirley Anne. "Monitoring and prediction of air pollution from traffic in the urban environment". Thesis, University of Nottingham, 1996. http://eprints.nottingham.ac.uk/11740/.

Full text
Abstract:
Traffic-related air pollution is now a major concern. The Rio Earth Summit and the Government's commitment to Agenda 21 have led Local Authorities to take responsibility for managing the growing number of vehicles and reducing the impact of traffic on the environment. There is an urgent need to monitor urban air quality effectively at reasonable cost and to develop long- and short-term air pollution prediction models. The aim of the research described was to investigate relationships between traffic characteristics and kerbside air pollution concentrations. Initially, the only pollution monitoring equipment available was basic and required constant supervision. The traffic data was made available from the demand-responsive traffic signal control systems in Leicestershire and Nottinghamshire. However, it was found that the surveys were too short to produce statistically significant results, and no useful conclusions could be drawn. Subsequently, an automatic, remote kerbside monitoring system was developed specifically for this research. The data collected was analysed using multiple regression techniques in an attempt to obtain an empirical relationship which could be used to predict roadside pollution concentrations from traffic and meteorological data. However, the residual series were found to be autocorrelated, which meant that the statistical tests were invalid. It was then found to be possible to fit an accurate model to the data using time series analysis, but this model could not predict levels even in the short term. Finally, a semi-empirical model was developed by estimating the proportion of vehicles passing a point in each operating mode (cruising, accelerating, decelerating and idling) and using real data to derive the coefficients. Unfortunately, it was again not possible to define a reliable predictive relationship. However, suggestions have been made about how this research could be progressed to achieve its aim.
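The methodological pitfall described, autocorrelated residuals invalidating the regression's statistical tests, can be illustrated with a short diagnostic check; the data and coefficients below are synthetic:

```python
# Regress kerbside CO on traffic flow, then test the residuals for
# autocorrelation (Durbin-Watson near 2 means none; values far from 2
# invalidate the usual OLS t-tests). All data here are synthetic.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)
n = 200
flow = rng.uniform(100, 1000, n)               # vehicles per hour (synthetic)
noise = np.zeros(n)
for t in range(1, n):                          # AR(1) noise mimics the serial
    noise[t] = 0.8 * noise[t - 1] + rng.normal(scale=5.0)  # correlation seen
co = 0.02 * flow + noise                       # kerbside CO concentration

model = sm.OLS(co, sm.add_constant(flow)).fit()
print("Durbin-Watson:", durbin_watson(model.resid))  # well below 2 here
```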
APA, Harvard, Vancouver, ISO, and other styles
4

Zerbeto, Ana Paula. "Melhor preditor empírico aplicado aos modelos beta mistos". Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-09042014-132109/.

Full text
Abstract:
Mixed beta regression models are extensively used to analyze data that have a hierarchical structure and take values in a known, restricted interval. In order to propose a prediction method for their random components, results previously obtained in the literature for the empirical Bayes predictor were extended to beta regression models with a normally distributed random intercept. The proposed predictor, called the empirical best predictor (EBP), applies in two situations: when the interest is to predict individual effects for new elements of groups that were already part of the fitting base, and when the groups did not belong to that base. Simulation studies were designed, and their results indicated that the EBP performed efficiently and satisfactorily in several scenarios. The same behavior was observed when the proposal was used to analyze two health databases, with good performance in both the simulations and the real-data analyses. The proposed methodology is therefore promising for mixed beta models in which predictions are required.
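The EBP here is essentially a posterior mean of the random intercept under a beta likelihood and a normal prior. A numerical-integration sketch of that quantity, with all parameters assumed known rather than estimated from a fitted model as in the thesis, could read:

```python
# Illustrative EBP computation: E[u | y] for one group's random intercept u,
# under a beta likelihood (mean/precision parameterization) and a N(0, s^2)
# prior, evaluated by quadrature on a grid. Parameter values are assumptions.
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import beta, norm
from scipy.special import expit

phi, sigma_u, eta0 = 20.0, 0.5, -0.3   # precision, RE sd, fixed linear predictor
y = np.array([0.30, 0.42, 0.37])       # observed proportions for one group

u_grid = np.linspace(-3 * sigma_u, 3 * sigma_u, 2001)
mu = expit(eta0 + u_grid[:, None])               # group mean for each u
lik = beta.pdf(y, mu * phi, (1 - mu) * phi).prod(axis=1)
w = lik * norm.pdf(u_grid, scale=sigma_u)        # likelihood times prior
ebp = trapezoid(u_grid * w, u_grid) / trapezoid(w, u_grid)
print("Predicted random intercept:", round(ebp, 4))
```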
APA, Harvard, Vancouver, ISO, and other styles
5

Porto, Faimison Rodrigues. "Cross-project defect prediction with meta-Learning". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-21032018-163840/.

Full text
Abstract:
Defect prediction models assist testers in prioritizing the most defect-prone parts of the software. The approach called Cross-Project Defect Prediction (CPDP) refers to the use of known external projects to compose the training set. This approach is useful when the amount of a company's historical defect data available to compose the training set is inappropriate or insufficient. Although the principle is attractive, the predictive performance is a limiting factor. In recent years, several methods were proposed aiming at improving the predictive performance of CPDP models. However, to the best of our knowledge, there is no evidence of which CPDP methods typically perform best, nor of which CPDP methods perform better for a specific application domain. In fact, there is no machine learning algorithm suitable for all domains. The decision task of selecting an appropriate algorithm for a given application domain is investigated in the meta-learning literature. A meta-learning model is characterized by its capacity to learn from previous experiences and to adapt its inductive bias dynamically according to the target domain. In this work, we investigate the feasibility of using meta-learning for the recommendation of CPDP methods. Three main goals were pursued. First, we provide an experimental analysis of the feasibility of using Feature Selection (FS) methods as an internal procedure to improve the performance of two specific CPDP methods. Second, we investigate which CPDP methods present typically best performances, and whether the typically best methods perform best for the same project datasets. The results reveal that the most suitable CPDP method for a project can vary according to the project characteristics, which leads to the third investigation of this work. We investigate the several particularities inherent to the CPDP context and propose a meta-learning solution able to learn from previous experiences and recommend a suitable CPDP method according to the characteristics of the project being predicted. We evaluate the learning capacity of the proposed solution and its performance in relation to the typically best CPDP methods.
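The CPDP setting itself is easy to sketch: fit on modules from external projects and evaluate on a distribution-shifted target project. The snippet below uses synthetic stand-ins for static code metrics and an arbitrary learner, not the thesis's methods:

```python
# Minimal cross-project defect prediction sketch: train on external projects,
# predict defect-proneness on a target project with a shifted distribution.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_project(n, shift):
    X = rng.normal(loc=shift, scale=1.0, size=(n, 4))  # 4 toy code metrics
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=n) > shift + 0.5).astype(int)
    return X, y

X_ext, y_ext = make_project(500, 0.0)    # external (training) project
X_tgt, y_tgt = make_project(200, 0.4)    # target project, shifted distribution

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_ext, y_ext)
auc = roc_auc_score(y_tgt, clf.predict_proba(X_tgt)[:, 1])
print("Cross-project AUC:", round(auc, 3))
```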
APA, Harvard, Vancouver, ISO, and other styles
6

Kattekola, Sravanthi. "Weather Radar image Based Forecasting using Joint Series Prediction". ScholarWorks@UNO, 2010. http://scholarworks.uno.edu/td/1238.

Full text
Abstract:
Accurate rainfall forecasting using weather radar imagery has always been a crucial and predominant task in the field of meteorology [1], [2], [3], [4]. Competitive Radial Basis Function Neural Networks (CRBFNN) [5] are one of the methods used for weather-radar-image-based forecasting. Recently, an alternative CRBFNN-based approach [6] was introduced to model precipitation events. The difference between the techniques presented in [5] and [6] lies in the approach used to model the rainfall image. Overall, it was shown that the modified CRBFNN approach [6] is more computationally efficient than the CRBFNN approach [5]. However, both techniques share the same prediction stage. In this thesis, a different GRBFNN approach is presented for forecasting the Gaussian envelope parameters. The proposed method investigates the concept of parameter dependency among Gaussian envelopes. Experimental results are presented to illustrate the advantage of joint parameter prediction over independent series prediction.
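The contrast the thesis draws, joint prediction of dependent envelope parameters versus independent per-series prediction, can be illustrated on two coupled synthetic series:

```python
# Toy comparison of the joint-prediction idea: forecast two correlated
# "envelope parameters" from their joint lagged values (VAR-style least
# squares) versus one independent AR(1) fit per series. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
T = 400
x = np.zeros((T, 2))
for t in range(1, T):                      # coupled dynamics between params
    x[t, 0] = 0.6 * x[t-1, 0] + 0.3 * x[t-1, 1] + rng.normal(scale=0.1)
    x[t, 1] = 0.2 * x[t-1, 0] + 0.7 * x[t-1, 1] + rng.normal(scale=0.1)

past, future = x[:-1], x[1:]
A_joint, *_ = np.linalg.lstsq(past, future, rcond=None)   # 2x2 joint model
err_joint = np.mean((past @ A_joint - future) ** 2)
err_indep = np.mean([(past[:, i] * (past[:, i] @ future[:, i] /
                      (past[:, i] @ past[:, i])) - future[:, i]) ** 2
                     for i in range(2)])
print(f"joint MSE {err_joint:.4f}  vs  independent MSE {err_indep:.4f}")
```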
APA, Harvard, Vancouver, ISO, and other styles
7

Conroy, Sean F. "Nonproliferation Regime Compliance: Prediction and Measure Using UNSCR 1540". ScholarWorks@UNO, 2017. http://scholarworks.uno.edu/td/2308.

Full text
Abstract:
This dissertation investigates factors that predict compliance with international regimes, specifically the Non-Proliferation (NP) Regime. Generally accepted in the international relations literature is Krasner's (1983) definition that regimes are "sets of implicit or explicit principles, norms, rules, and decision-making procedures around which actor expectations converge in a given [issue] area of international relations." Using institutionalization as a framework, I hypothesize that compliance is a function of a nation's respect for the rule of law. I investigate the NP regime through the lens of United Nations Security Council Resolution 1540, a mandate for member nations to enact domestic legislation criminalizing the proliferation of Weapons of Mass Destruction. Using NP regime compliance and implementation of UNSCR 1540's mandates as dependent variables, I test the hypotheses with the following independent variables: rule of law, political competition, and regional compliance. I also present qualitative case studies on Argentina, South Africa, and Malaysia. The quantitative results indicated a strong relationship between rule of law and regional compliance, on the one hand, and a nation's compliance with the overall NP regime and implementation of UNSCR 1540, on the other. These results indicate that a nation will institutionalize the NP norms and comply with the specifics of implementation. The in-depth analyses of Argentina, South Africa, and Malaysia showed that predicting an individual nation's compliance is more complex than descriptions of government capacity or geography. Argentina and South Africa, expected by the hypotheses to exhibit low to medium compliance and implementation, scored high and well above their regions on both measures; Malaysia, expected to score high in compliance, scored low. Findings thus reveal that rule of law is probably less influential in individual cases, and that regional compliance and cooperation are better predictors of a nation's compliance with a security regime.
APA, Harvard, Vancouver, ISO, and other styles
8

Mishra, Avdesh. "Effective Statistical Energy Function Based Protein Un/Structure Prediction". ScholarWorks@UNO, 2019. https://scholarworks.uno.edu/td/2674.

Full text
Abstract:
Proteins are an important component of living organisms, composed of one or more polypeptide chains, each containing hundreds or even thousands of amino acids of 20 standard types. The structure that a protein adopts from its sequence determines crucial functions, such as initiating metabolic reactions, DNA replication, cell signaling, and transporting molecules. In the past, proteins were considered to always have a well-defined stable shape (structured proteins); however, it has recently been shown that there exist intrinsically disordered proteins (IDPs), which lack a fixed or ordered 3D structure, have dynamic characteristics, and therefore exist in multiple states. Based on this, we extend the mapping of a protein sequence not only to a fixed stable structure but also to an ensemble of protein conformations, which helps us explain complex interactions within a cell that were otherwise obscured. The objective of this dissertation is to develop effective ab initio methods and tools for protein un/structure prediction by developing an effective statistical energy function, a conformational search method, and a disulfide connectivity pattern predictor. The key outcomes of this dissertation research are: i) a sequence- and structure-based energy function for structured proteins that includes energetic terms extracted from hydrophobic-hydrophilic properties, accessible surface area, torsion angles, and ubiquitously computed dihedral angles uPhi and uPsi; ii) an ab initio protein structure predictor that combines an optimal energy function derived from sequence- and structure-based properties of proteins with an effective conformational search method that includes angular rotation and segment translation strategies; iii) an SVM with RBF kernel-based framework to predict disulfide connectivity patterns; iv) a hydrophobic-hydrophilic property based energy function for unstructured proteins; and v) an ab initio conformational ensemble generator that combines an energy function and a conformational search method for unstructured proteins, which can help understand the biological systems involving IDPs and assist in rational drug design to cure critical diseases such as cancer or cardiovascular diseases caused by challenging states of IDPs.
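The core loop of such ab initio prediction, score a conformation with an energy function and search conformations stochastically, can be caricatured in a few lines. This toy uses a 2D stick chain and a hydrophobic-compactness "energy", nothing like the thesis's actual statistical potential:

```python
# Toy conformational search: random angular rotations accepted by a
# Metropolis criterion, minimizing a made-up hydrophobic-burial energy.
import numpy as np

rng = np.random.default_rng(3)
hydrophobic = np.array([1, 0, 1, 1, 0, 1, 0, 1])   # toy H/P sequence

def coords(angles):                 # chain of unit bonds in the plane
    dirs = np.cumsum(angles)
    steps = np.stack([np.cos(dirs), np.sin(dirs)], axis=1)
    return np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])

def energy(angles):                 # reward hydrophobic residues being close
    xyz = coords(angles)[1:]
    d = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)
    mask = (np.outer(hydrophobic, hydrophobic) > 0) & ~np.eye(len(xyz), dtype=bool)
    return d[mask].sum()            # smaller = more compact hydrophobic core

angles = rng.uniform(-np.pi, np.pi, len(hydrophobic))
E, T = energy(angles), 5.0
for step in range(20000):           # simulated annealing over torsion angles
    trial = angles.copy()
    trial[rng.integers(len(angles))] += rng.normal(scale=0.3)
    dE = energy(trial) - E
    if dE < 0 or rng.random() < np.exp(-dE / T):
        angles, E = trial, E + dE
    T *= 0.9997                     # cooling schedule
print("final toy energy:", round(E, 2))
```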
APA, Harvard, Vancouver, ISO, and other styles
9

Satiro, Lucas Santos. "Crop prediction and soil response to sugarcane straw removal". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/11/11140/tde-03052018-171843/.

Full text
Abstract:
Concerns about global warming and climate change have triggered a growing demand for renewable energy. In this scenario, interest in using sugarcane straw as a raw material for energy production has increased. However, straw plays an important role in maintaining soil quality, and uncertainties about the amount of straw produced and the impact of straw removal on stalk yield have raised doubts about the use of this raw material. The objective of this study was therefore to evaluate the short-term (2-year) impacts of sugarcane straw removal on the soil, and to model sugarcane stalk and straw yield using soil attributes from different layers. Two experiments were carried out in São Paulo state, Brazil: one at Capivari (sandy clay loam soil) and another at Valparaíso (sandy loam soil). We tested five rates of straw removal (equivalent to 0, 25, 50, 75, and 100%). Soil samples were taken from the 0-2.5, 2.5-5, 5-10, 10-20, and 20-30 cm layers to analyze pH, total C and N, P, K, Ca, Mg, bulk density, and soil penetration resistance. Plant samples were collected to determine straw and stalk yield. The impacts caused by straw removal differed between the areas but were concentrated in the more superficial soil layer. In the sandy clay loam soil, straw removal led to organic carbon depletion and soil compaction, while in the sandy loam soil the chemical attributes (Ca and Mg contents) were the most impacted. In general, the results suggest that straw removal causes a more significant reduction in soil quality in the sandy clay loam soil. The results indicate that about half of the straw deposited on the soil surface can be removed (8.7 Mg ha-1 of straw remaining) without severe implications for the quality of this soil. In contrast, although removing any amount of straw was sufficient to alter the quality of the sandy loam soil, these impacts were less intense and were not magnified as straw removal increased. It was possible to model sugarcane straw and stalk yield using soil attributes. The 0-20 cm layer was the most important in defining stalk yield, whereas the 0-5 cm layer, in which the impacts of straw removal were concentrated, was less important. Thus, the impacts of straw removal on the soil have little influence on crop productivity. Straw prediction proved more complex and possibly requires additional information (e.g., crop and climate data) for good results. Overall, the results suggest that planned straw removal for energy purposes can occur in a sustainable way, but should take local conditions, such as soil properties, into account. However, long-term research with different approaches is still necessary, both to follow up and confirm our results and to develop ways to reduce the damage caused by this activity.
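The yield-modeling half of the study amounts to regressing stalk yield on per-layer soil attributes and asking which layers carry predictive weight. A synthetic sketch of that analysis, with invented numbers in place of the plot data:

```python
# Regress stalk yield on one soil attribute measured per layer and inspect
# which layers matter. Data are synthetic placeholders for the field plots.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(9)
n = 120
layers = ["0-5cm", "5-10cm", "10-20cm", "20-30cm"]
X = rng.normal(size=(n, len(layers)))            # e.g., organic C per layer
yield_t_ha = 80 + 2.5 * X[:, 2] + 1.5 * X[:, 3] + rng.normal(scale=2, size=n)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, yield_t_ha)
for name, imp in zip(layers, rf.feature_importances_):
    print(f"{name:>8}: {imp:.2f}")   # deeper layers dominate in this toy setup
```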
APA, Harvard, Vancouver, ISO, and other styles
10

Segura, Gustavo Alonso Nuñez. "Energy consumption prediction in software-defined wireless sensor networks". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-04052018-113551/.

Full text
Abstract:
Energy conservation is a main concern in Wireless Sensor Networks (WSN). To reduce energy consumption, it is important to know how it is spent and how much is available during node and network operation. Several previous works have proposed energy consumption models focused on the communication module while neglecting the processing and sensing activities. Other works presented more complex and complete models but lacked experiments to demonstrate their accuracy in real deployments. The main objective of this work is to design and evaluate an accurate energy consumption model for WSN that considers the sensing, processing, and communication modules. This model was used to implement two energy consumption prediction mechanisms, one based on Markov chains and the other on time series analysis. The metrics used to evaluate the model and the prediction mechanisms were: energy consumption estimation accuracy, energy consumption prediction accuracy, and the node's communication and processing resource usage. The performance of the prediction mechanisms was compared under two implementation schemes: running the prediction algorithm on the sensor node, and running it on a Software-Defined Networking controller. The implementation used IT-SDN, a Software-Defined Wireless Sensor Network framework. For the evaluation, simulation and emulation used COOJA, while testbed experiments used TelosB devices. Results showed that by including sensing, processing, and communication consumption in the model, it is possible to obtain an accurate energy consumption estimate for Wireless Sensor Networks. Also, the use of a Software-Defined Networking controller to process complex prediction algorithms can improve prediction accuracy.
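The modeling position taken here, account for sensing and processing rather than just the radio, reduces to simple per-state bookkeeping. The current draws below are illustrative figures for a TelosB-class mote, not values from the thesis:

```python
# Per-module energy model sketch: energy = voltage x current x time in state.
V = 3.0  # assumed supply voltage (volts)
current_mA = {"mcu_active": 1.8, "mcu_sleep": 0.005,
              "radio_tx": 17.4, "radio_rx": 18.8, "sensor": 0.8}
seconds_in_state = {"mcu_active": 120, "mcu_sleep": 3480,
                    "radio_tx": 15, "radio_rx": 45, "sensor": 60}

energy_J = {s: V * current_mA[s] * 1e-3 * seconds_in_state[s]
            for s in current_mA}
total = sum(energy_J.values())
for state, e in energy_J.items():
    print(f"{state:>10}: {e:7.3f} J ({100 * e / total:4.1f}%)")
print(f"{'total':>10}: {total:7.3f} J per duty cycle")
```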
APA, Harvard, Vancouver, ISO, and other styles
11

Ajibulu, Ayodeji Opeoluwa. "Robust adaptive model predictive control for intelligent drinking water distribution systems". Thesis, University of Birmingham, 2018. http://etheses.bham.ac.uk//id/eprint/8193/.

Full text
Abstract:
Large-scale complex systems have large numbers of variables, a network structure of interconnected subsystems, nonlinearity, spatial distribution with several time scales in their dynamics, uncertainties, and constraints. Decomposing large-scale complex systems into smaller, more manageable subsystems allows distributed control and coordination mechanisms to be implemented. This thesis proposes the use of distributed softly switched robustly feasible model predictive controllers (DSSRFMPC) for the control of large-scale complex systems. Each DSSRFMPC is made up of reconfigurable robustly feasible model predictive controllers (RRFMPC) that adapt to different operational states or fault scenarios of the plant. RRFMPC reconfiguration to adapt to different operational states of the plant is achieved by soft switching between the RRFMPC controllers. The RRFMPC is designed by utilizing safety zones and robustly feasible invariant sets in the state space, which are established off-line using Karush-Kuhn-Tucker conditions. This is used to achieve robust feasibility and recursive feasibility for the RRFMPC under different operational states of the plant. Feasible adaptive cooperation among DSSRFMPC agents under different operational states is proposed. The proposed methodology is verified by applying it to water quality control in a simulated benchmark drinking water distribution system (DWDS).
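The receding-horizon mechanics underlying any MPC variant can be sketched for a scalar plant. This shows only the optimize-apply-repeat loop, none of the robust-feasibility, soft-switching, or distributed machinery the thesis contributes:

```python
# Toy receding-horizon MPC for a scalar tank level x[k+1] = x[k] + b*u[k] - d:
# at each step, optimize the next H control moves, apply only the first one.
import numpy as np
from scipy.optimize import minimize

b, d, H, x_ref = 0.5, 0.2, 10, 1.0      # gain, demand, horizon, setpoint

def cost(u, x0):
    x, J = x0, 0.0
    for uk in u:
        x = x + b * uk - d              # predicted level
        J += (x - x_ref) ** 2 + 0.01 * uk ** 2
    return J

x = 0.0
for k in range(15):
    res = minimize(cost, np.zeros(H), args=(x,),
                   bounds=[(0.0, 1.0)] * H)       # actuator limits
    u0 = res.x[0]                                  # apply first move only
    x = x + b * u0 - d + np.random.default_rng(k).normal(scale=0.01)
    print(f"step {k:2d}: u={u0:.3f}  level={x:.3f}")
```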
APA, Harvard, Vancouver, ISO, and other styles
12

Liu, Gang. "A Study on Remaining Useful Life Prediction for Prognostic Applications". ScholarWorks@UNO, 2011. http://scholarworks.uno.edu/td/456.

Full text
Abstract:
We consider prediction algorithms and performance evaluation for prognostics and health management (PHM) problems, especially the prediction of remaining useful life (RUL) for a milling machine cutter and a lithium-ion battery. We modeled the battery as a voltage source with internal resistors. By analyzing the voltage trend during discharge, we predicted the battery's remaining discharge time within one discharge cycle. By analyzing the internal resistance trend across multiple cycles, we were able to predict the battery's remaining useful life. We showed that the battery rest profile is correlated with the RUL. Numerical results using realistic battery aging data from the NASA prognostics data repository yielded satisfactory performance for battery prognosis as measured by certain performance metrics. We built a battery test platform, simulated more usage patterns, and verified the prediction algorithm. Prognostic performance metrics were used to compare different algorithms.
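The resistance-trend idea lends itself to a minimal sketch: fit the growth of internal resistance over cycles and extrapolate to an assumed end-of-life threshold. All numbers below are synthetic:

```python
# Fit a linear growth model to internal resistance across cycles and
# extrapolate to an end-of-life threshold to estimate remaining useful life.
import numpy as np

cycles = np.arange(0, 200, 10)
resistance = 0.050 + 1.5e-4 * cycles + np.random.default_rng(4).normal(
    scale=5e-4, size=cycles.size)             # measured resistance (ohm)

slope, intercept = np.polyfit(cycles, resistance, 1)
R_eol = 0.090                                  # assumed end-of-life threshold
cycles_at_eol = (R_eol - intercept) / slope
print(f"Estimated RUL: {cycles_at_eol - cycles[-1]:.0f} cycles remaining")
```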
APA, Harvard, Vancouver, ISO, and other styles
13

Hamide, Mahmoud. "Schedule and Cost Performance Analysis and Prediction in Louisiana DOTD". ScholarWorks@UNO, 2017. http://scholarworks.uno.edu/td/2311.

Full text
Abstract:
Many construction projects in the United States face the risk of cost overruns and schedule delays, and this is also happening in the State of Louisiana. Cost overruns can be passed on to taxpayers and may cause the state to take on fewer projects than it normally would. Many researchers have studied the reasons behind both cost overruns and delays, resulting in private firms developing project management tools and best practices to prevent this risk. In this research, I study the historical trend in 2,912 publicly funded projects in the State of Louisiana. The study reveals the overall state-level accuracy of cost and schedule forecasting. A forecasting formula based on those historical projects is developed to assist estimators at the parish level in predicting cost and schedule performance. The State of Louisiana has many projects that involve the transportation system (roadways, bridges, drainage, traffic signs, traffic signals, lighting, etc.). This dissertation analyzes the time and cost of LADOTD projects: whether projects finish on time, early, or late, and whether completed projects come in over, under, or exactly at the bid amount. With this analysis, my intention is to create schedule and cost tools for accurately finishing projects on time and at the bid amount (excluding weather conditions, extra work, and unexpected problems that may arise during the project).
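A forecasting formula of the kind described is, at its simplest, a regression of final cost on bid-time variables over historical projects. The sketch below uses synthetic placeholders, not the LADOTD data:

```python
# Regress final cost on bid amount and planned duration over historical
# projects, then apply the fitted formula to a new bid. Synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
n = 300
bid = rng.uniform(0.5, 20, n)                       # bid amount ($M)
duration = rng.uniform(3, 36, n)                    # planned months
final = bid * rng.normal(1.08, 0.1, n) + 0.01 * duration  # mild overrun trend

model = LinearRegression().fit(np.c_[bid, duration], final)
print("Forecast final cost ($M):",
      model.predict([[5.0, 12.0]])[0].round(2))     # a hypothetical new project
```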
APA, Harvard, Vancouver, ISO, and other styles
14

Danker-McDermot, Holly. "A Fuzzy/Neural Approach to Cost Prediction with Small Data Sets". ScholarWorks@UNO, 2004. http://scholarworks.uno.edu/td/86.

Full text
Abstract:
The objective of this work is to create an accurate cost estimate for NASA engine tests at the John C. Stennis Space Center testing facilities using various combinations of fuzzy and neural systems. The data set available for this cost prediction problem consists of variables such as test duration, thrust, and many other similar quantities; unfortunately, it is small and incomplete. The first method implemented uses the locally linear embedding (LLE) algorithm for nonlinear dimensionality reduction, whose output is then put through an adaptive network-based fuzzy inference system (ANFIS). The second method is a two-stage system in which various ANFIS, with either single or multiple inputs, produce cost estimates whose outputs are then put through a backpropagation-trained neural network for the final cost prediction. Finally, the third method uses a radial basis function network (RBFN) to predict the engine test cost.
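Of the three methods, the RBFN is the most compact to sketch: Gaussian features centred on clustered prototypes followed by a linear read-out. Inputs and values below are invented placeholders for the test-cost variables:

```python
# Minimal RBFN-style regressor: Gaussian features on k-means centres plus
# ridge regression. Feature meanings and gamma are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(40, 3))               # e.g., duration, thrust, ...
y = 100 + 80 * X[:, 0] + 40 * X[:, 1] ** 2 + rng.normal(scale=5, size=40)

centers = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X).cluster_centers_

def rbf_features(X, centers, gamma=4.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

model = Ridge(alpha=1.0).fit(rbf_features(X, centers), y)
x_new = np.array([[0.5, 0.7, 0.2]])
print("Predicted test cost:", model.predict(rbf_features(x_new, centers))[0])
```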
APA, Harvard, Vancouver, ISO, and other styles
15

Panta, Manisha. "Prediction of Hierarchical Classification of Transposable Elements Using Machine Learning Techniques". ScholarWorks@UNO, 2019. https://scholarworks.uno.edu/td/2677.

Full text
Abstract:
Transposable Elements (TEs), or jumping genes, are DNA sequences that have an intrinsic capability to move within a host genome from one genomic location to another. Studies show that the presence of a TE within or adjacent to a functional gene may alter its expression. TEs can also increase the rate of mutation and can even promote gross genetic rearrangements. Thus, the proper classification of identified jumping genes is important for understanding their genetic and evolutionary effects. While computational methods have been developed that perform either binary or multi-label classification of TEs, few studies have focused on their hierarchical classification, and the existing methods have limited accuracy. In this study, we examine the performance of a variety of machine learning (ML) methods and propose a robust augmented stacking-based ML method, ClassifyTE, for the hierarchical classification of TEs with high accuracy.
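The stacking component can be sketched generically: out-of-fold predictions from base learners feed a meta-learner. This shows flat multi-class stacking only; ClassifyTE's hierarchical layer (one decision per node of the TE taxonomy) is not reproduced, and the features are random stand-ins:

```python
# Generic stacking sketch: base learners feed a logistic-regression
# meta-learner via out-of-fold predictions (handled by StackingClassifier).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000), cv=5)
print("CV accuracy:", cross_val_score(stack, X, y, cv=3).mean().round(3))
```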
APA, Harvard, Vancouver, ISO, and other styles
16

Sadaike, Marcelo Tetsuhiro. "Melhoria do tempo de resposta para execução de jogos em um sistema em Cloud Gaming com implementação de camadas e predição de movimento". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-27092017-084328/.

Full text
Abstract:
With the growth of the video game industry, new markets and technologies are emerging. Latest-generation games demand ever more processing power and more capable video cards. One solution that has been gaining prominence is Cloud Gaming, in which the player issues a command, the information is sent to and processed remotely in a cloud on the Internet, and the resulting images return to the player as a video stream. To improve the Quality of Experience (QoE), a model is proposed that reduces the response time between the player's command and the stream of the resulting game scenes, through a framework called Cloud Manager that uses layer caching for the background layer and future-state prediction, based on a prediction matrix, for the character layer. To validate the results, an action game with an omnipresent (god-view) perspective is used in the Cloud Gaming system Uniquitous.
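A "prediction matrix" in the simplest motion-prediction sense is a linear extrapolation of the character state. The constant-velocity matrix below is purely illustrative, not the thesis's construction:

```python
# Toy prediction matrix: constant-velocity extrapolation of the character
# state one frame ahead of the last known input.
import numpy as np

dt = 1 / 30                                   # one frame at 30 fps (assumed)
P = np.array([[1, 0, dt, 0],                  # x  <- x + vx*dt
              [0, 1, 0, dt],                  # y  <- y + vy*dt
              [0, 0, 1,  0],                  # vx unchanged
              [0, 0, 0,  1]])                 # vy unchanged

state = np.array([120.0, 45.0, 90.0, -30.0])  # x, y, vx, vy from last command
predicted = P @ state                          # character pose next frame
print("predicted position:", predicted[:2])
```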
APA, Harvard, Vancouver, ISO, and other styles
17

Bessa, Iuri Sidney. "Laboratory and field study of fatigue cracking prediction in asphalt pavements". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3138/tde-15012018-160715/.

Full text
Abstract:
The prediction of asphalt pavement performance with respect to its main distresses has been proposed by different researchers by means of laboratory characterization and field data evaluation. For fatigue cracking, there is no universal consensus about the laboratory test to be performed, the damage criterion to be considered, the testing conditions to be set (level and frequency of loading, and temperature), or the specimen geometry to be used. Tests performed on asphalt binders and on asphalt mixes are used to study fatigue behavior and to predict fatigue life. The characterization of asphalt binders is relevant, since fatigue cracking is highly dependent on the rheological characteristics of these materials. In the present research, linear viscoelastic characterization, time sweep tests, and amplitude sweep tests were performed. For the laboratory characterization of asphalt mixes, tests based on indirect tension, four-point flexural bending, and axial tension-compression were performed. Field damage evolution data for two asphalt pavement sections were collected from an experimental test site on a very heavy-traffic highway. Three asphalt binders (one neat binder, one SBS-modified binder, and one highly modified binder, HiMA) and one asphalt concrete made with the neat binder were tested in the laboratory. The experimental test site comprised two segments with different base layers (an unbound granular course and cement-treated crushed stone) that provided different mechanical responses in the asphalt wearing course. The field damage data were compared with fatigue life models that use empirical results obtained in the laboratory and computer simulations. Correlations among the asphalt material scales are discussed in this dissertation, with the objective of predicting the fatigue cracking performance of asphalt pavements.
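Empirical fatigue-life models of the kind compared against field data often take a power-law form fitted on log-log axes. A hedged sketch with invented strain/cycle pairs, not the thesis's data or model:

```python
# Classic power-law fatigue fit: N = k * (1/strain)^m, fitted in log space.
import numpy as np

strain = np.array([200e-6, 300e-6, 400e-6, 600e-6])   # tensile strain (toy)
cycles = np.array([2.0e6, 5.5e5, 2.0e5, 5.0e4])       # cycles to failure (toy)

m, logk = np.polyfit(np.log(1 / strain), np.log(cycles), 1)
print(f"N = {np.exp(logk):.3g} * (1/strain)^{m:.2f}")
```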
APA, Harvard, Vancouver, ISO, and other styles
18

Matsumoto, Élia Yathie. "A methodology for improving computed individual regressions predictions". Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-12052016-140407/.

Full text
Abstract:
This research proposes a methodology to improve computed individual prediction values provided by an existing regression model without having to change either its parameters or its architecture. In other words, we are interested in achieving more accurate results by adjusting the calculated regression prediction values, without modifying or rebuilding the original regression model. Our proposition is to adjust the regression prediction values using individual reliability estimates that indicate whether a single regression prediction is likely to produce an error considered critical by the user of the regression. The proposed method was tested in three sets of experiments using three different types of data. The first set of experiments worked with synthetically produced data, the second with cross-sectional data from the public data source UCI Machine Learning Repository, and the third with time series data from ISO-NE (Independent System Operator in New England). The experiments with synthetic data were performed to verify how the method behaves in controlled situations; there, the method produced its best improvements on cleaner, artificially produced datasets, with progressive worsening as random elements were added. The experiments with real data extracted from UCI and ISO-NE were done to investigate the applicability of the methodology in the real world. The proposed method was able to improve regression prediction values in about 95% of the experiments with real data.
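The methodology's outer loop, leave the regression untouched, estimate per-prediction reliability, and adjust only the risky predictions, can be sketched as follows. The shrinkage rule used here is an illustrative stand-in for the thesis's adjustment:

```python
# Keep the original regression fixed; train a second model on held-out
# residuals as a reliability estimator; shrink only low-reliability predictions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = rng.uniform(-2, 2, size=(800, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.3, size=800)

X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, random_state=0)
base = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)   # fixed model
resid_model = RandomForestRegressor(random_state=0).fit(            # reliability
    X_cal, np.abs(y_cal - base.predict(X_cal)))                     # estimator

X_new = rng.uniform(-2, 2, size=(5, 3))
pred = base.predict(X_new)
risk = resid_model.predict(X_new)                 # expected |error|
adjusted = np.where(risk > 0.5, 0.7 * pred + 0.3 * y_tr.mean(), pred)
print(np.c_[pred.round(2), risk.round(2), adjusted.round(2)])
```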
APA, Harvard, Vancouver, ISO, and other styles
19

Islam, Md Nasrul. "A Balanced Secondary Structure Predictor". ScholarWorks@UNO, 2015. http://scholarworks.uno.edu/td/1995.

Full text
Abstract:
Secondary structure (SS) refers to the local spatial organization of the polypeptide backbone atoms of a protein. Accurate prediction of SS is a vital clue for resolving the 3D structure of a protein. SS has three different components: helix (H), beta (E), and coil (C). Most SS predictors are imbalanced: their accuracy in predicting helix and coil is high but significantly lower for beta. The objective of this thesis is to develop a balanced SS predictor that achieves good accuracy in all three SS components. We propose a novel approach that combines a genetic algorithm (GA) with a support vector machine. We prepared two test datasets (CB471 and N295) to compare the performance of our predictor with SPINE X. The overall accuracy of our predictor was 76.4% and 77.2% on the CB471 and N295 datasets respectively, while SPINE X gave 76.5% overall accuracy on both test datasets.
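The imbalance problem and the simplest counter-measure, per-class weighting inside the SVM, can be shown directly. The GA in the thesis searches for such weights and parameters, which are simply hard-coded here, and the features are random stand-ins for sequence windows:

```python
# Class-weighted SVM as a minimal balanced SS predictor sketch; labels 0/1/2
# stand for H/E/C with beta (1) deliberately underrepresented.
import numpy as np
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n = 900
y = rng.choice([0, 1, 2], size=n, p=[0.40, 0.20, 0.40])   # beta is rare
X = rng.normal(size=(n, 20)) + y[:, None] * 0.8            # separable-ish

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
svm = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)
print(classification_report(y_te, svm.predict(X_te),
                            target_names=["helix", "beta", "coil"]))
```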
APA, Harvard, Vancouver, ISO, and other styles
20

Iqbal, Sumaiya. "Machine Learning based Protein Sequence to (un)Structure Mapping and Interaction Prediction". ScholarWorks@UNO, 2017. http://scholarworks.uno.edu/td/2379.

Full text
Abstract:
Proteins are the fundamental macromolecules within a cell that carry out most of the biological functions. The computational study of protein structure and its functions, using machine learning and data analytics, is elemental in advancing life-science research, given the fast-growing biological data and the extensive complexities involved in analyzing them toward discovering meaningful insights. The mapping of a protein's primary sequence is not limited to its structure; we extend it to the disordered component, known as Intrinsically Disordered Proteins or Regions (IDPs/IDRs), and hence to the dynamics involved, which helps explain complex interactions within a cell that are otherwise obscured. The objective of this dissertation is to develop machine learning based tools to predict disordered proteins, their properties and dynamics, and their interaction paradigm by systematically mining and analyzing large-scale biological data. We propose a robust framework to predict disordered proteins given only sequence information, using an optimized SVM with RBF kernel. Through appropriate reasoning, we highlight the structure-like behavior of IDPs in disease-associated complexes. Further, we develop a fast and effective predictor of the Accessible Surface Area (ASA) of protein residues, a useful structural property that defines a protein's exposure to partners, using regularized regression with a third-degree polynomial kernel function and a genetic algorithm. As a key outcome of this research, we then introduce a novel method to extract the position-specific energy (PSEE) of protein residues by modeling pairwise thermodynamic interactions and the hydrophobic effect. PSEE is found to be an effective feature in identifying the enthalpy gain of the folded state of a protein and, otherwise, the neutral state of unstructured proteins. Moreover, we study peptide-protein transient interactions, which involve the induced folding of short peptides through disorder-to-order conformational changes to bind to an appropriate partner. A suite of predictors is developed, using the stacked generalization ensemble technique, to identify the residue patterns of Peptide-Recognition Domains from protein sequence that can recognize and bind to peptide motifs and phospho-peptides with post-translational modifications (PTMs) of amino acids, responsible for critical human diseases. The biologically relevant case studies involved demonstrate possibilities of discovering new knowledge using the developed tools.
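One ingredient, regularized regression with a third-degree polynomial kernel for ASA prediction, maps naturally onto kernel ridge regression. The sketch below substitutes synthetic features for real sequence profiles:

```python
# Kernel ridge with a degree-3 polynomial kernel as a stand-in for the
# regularized ASA regressor the abstract mentions. Features are synthetic.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(11)
X = rng.normal(size=(400, 15))                     # windowed sequence features
asa = (X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2 +
       rng.normal(scale=0.1, size=400))            # toy ASA target

kr = KernelRidge(kernel="polynomial", degree=3, alpha=1.0).fit(X[:300], asa[:300])
pred = kr.predict(X[300:])
print("test RMSE:", round(float(np.sqrt(np.mean((pred - asa[300:]) ** 2))), 3))
```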
APA, Harvard, Vancouver, ISO and other styles
21

Gattani, Suraj. "StackCBpred: A Stacking based Prediction of Protein-Carbohydrate Binding Sites from Sequence". ScholarWorks@UNO, 2019. https://scholarworks.uno.edu/td/2605.

Full text
Abstract (summary):
Carbohydrate-binding proteins play vital roles in many biological processes, and the study of these interactions at the residue level is useful in treating many critical diseases. Analyzing the local sequential environments of binding and non-binding regions to predict protein-carbohydrate binding sites is one of the challenging problems in molecular and computational biology. Predicting such binding sites directly from sequence using computational methods can help annotate binding sites quickly and guide the experimental process. Because the number of carbohydrate-binding residues is significantly lower than that of non-binding residues, most existing methods are biased towards over-predicting non-binding residues. Here, we propose a balanced predictor, called StackCBPred, which utilizes features extracted from the evolution-derived sequence profile, called the position-specific scoring matrix (PSSM), and several predicted structural properties of amino acids to effectively train a stacking-based machine learning method for the accurate prediction of protein-carbohydrate binding sites.
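A hedged sketch of a stacking-based binding-site predictor in the spirit of StackCBPred; the data, base learners and meta-learner below are illustrative assumptions, not the published configuration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Imbalanced toy data standing in for PSSM + predicted-structure features.
X, y = make_classification(n_samples=1000, n_features=30, weights=[0.9, 0.1],
                           random_state=0)
stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True, class_weight="balanced")),
                ("rf", RandomForestClassifier(class_weight="balanced",
                                              random_state=0))],
    final_estimator=LogisticRegression())   # meta-learner on base predictions
print(cross_val_score(stack, X, y, cv=3, scoring="balanced_accuracy").mean())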
APA, Harvard, Vancouver, ISO and other styles
22

Bautista, de los Santos Quyen Melina. "Towards a predictive framework for microbial management in drinking water systems". Thesis, University of Glasgow, 2017. http://theses.gla.ac.uk/8261/.

Full text
Abstract (summary):
The application of DNA sequencing-based approaches to drinking water microbial ecology has revealed the presence of an abundant and diverse microbiome; harnessing drinking water (DW) microbial communities is therefore an attractive prospect for addressing some of the current and emerging challenges in the sector. Moreover, these multiple challenges suggest that a shift in the DW sector from a "reactive and sanctioning" paradigm to a "due diligence/proactive" approach may be key to identifying potentially adverse events. My research project has focused on characterizing the microbial ecology of full-scale DW systems using DNA sequencing-based approaches, with the aim of exploring how the insights obtained could be applied in a predictive/proactive microbial management approach. To achieve this aim, I have focused my efforts on sampling multiple full-scale DW systems in order to elucidate the impacts of (i) methodological variation and (ii) system properties on DW microbial communities, using a combination of bioinformatics, molecular biology, microbial ecology and multivariate statistical analyses. Regarding methodological variation, I have elucidated the impacts of sample replication, PCR replication, sample volume and sampling flow rate on the structure and membership of DW microbial communities. This was the first time that methodological variation was explored in the DW context, and the first time that multi-level replication was tested and applied in DW molecular microbial ecology; my findings have direct implications for the design of future sampling campaigns. Regarding system properties, I have shown that microbial communities in DW distribution systems (DWDSs) undergo diurnal variation and are therefore linked to water use patterns/hydraulics in the systems. I have also shown that sampling locations in the same distribution system are similar, with OTUs found across sampling locations at different relative abundance and detection frequency levels. An assessment of the impact of source water type and treatment processes showed that disinfection is a key treatment step for community composition and functional potential, and that several genes related to protection against chlorine/oxygen species are overabundant in chlorinated and chloraminated systems. Looking to the future, I believe that the application of a "toolbox" of techniques is key to shifting towards a proactive approach in DW management, and that multidisciplinary synergies hold the possibility of changing the way in which DW systems have been studied and managed for over 100 years.
APA, Harvard, Vancouver, ISO and other styles
23

Lima, Marcos Pereira. "Equações preditivas para determinar a temperatura interna do ar: envolventes em painel alveolar com cobertura verde". Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/18/18139/tde-03122009-160414/.

Full text
Abstract (summary):
Introduction: Using the statistical tool of multiple linear regression analysis, we generated equations for predicting the indoor air temperature of a building whose walls and slabs are built from hollow-core (alveolar) concrete panels, with a green roof system. Justification: Predictive equations enable the simulation of indoor temperatures of buildings from a small amount of input data with satisfactory precision; they also allow design errors to be corrected before construction. Objectives: To generate predictive equations for the dry season (autumn and winter) and for the rainy season (spring and summer) for the building analyzed. Method: Two data series were selected, one for the dry season and one for the rainy season, and predictive equations for the maximum, mean and minimum indoor air temperature were generated for both periods using linear regression analysis. Results: Seven predictive equations were generated for the dry season and five for the rainy season. The largest absolute differences between the temperatures estimated by the equations and those monitored experimentally were approximately 2°C. Conclusion: The predictive equations generated for the two periods describe the thermal behavior of the building satisfactorily.
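A small worked example of the multiple-linear-regression step; the predictor variables and coefficients below are invented for illustration only.

import numpy as np

# Synthetic stand-ins for monitored data: outdoor max temperature,
# solar radiation and relative humidity as predictors of indoor max temperature.
rng = np.random.default_rng(1)
t_out = rng.uniform(15, 35, 60)
rad = rng.uniform(100, 800, 60)
rh = rng.uniform(30, 90, 60)
t_in = 5.0 + 0.7 * t_out + 0.004 * rad - 0.02 * rh + rng.normal(0, 0.5, 60)

A = np.column_stack([np.ones_like(t_out), t_out, rad, rh])
coef, *_ = np.linalg.lstsq(A, t_in, rcond=None)   # ordinary least squares
print("T_in = %.2f + %.3f*T_out + %.4f*Rad + %.3f*RH" % tuple(coef))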
APA, Harvard, Vancouver, ISO and other styles
24

L'Altrella, Claudio. "Stormwater Runoff from Elevated Highways: Prediction of COD from Field Measurements and TSS". ScholarWorks@UNO, 2007. http://scholarworks.uno.edu/td/532.

Full text
Abstract (summary):
This research focused on the prediction and identification of chemical oxygen demand (COD) concentrations in stormwater runoff from elevated roadways, which transports a significant load of contaminants. The objective was to develop a mathematical model relating COD concentration to parameters that are easily available and routinely measurable for elevated roadways. The test site was selected at the intersection of Interstate-10 and Interstate-610, Orleans Parish, New Orleans, Louisiana; a research test site was developed there and highway stormwater runoff was collected. The developed model enables the user to predict COD concentrations within a 95% prediction interval. The reliability of the model was verified by carrying out significant-difference tests between the observed and predicted data sets at a 5% significance level.
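A sketch of fitting a regression and reporting a 95% prediction interval, as the model above does; the TSS/COD values are synthetic, not the study's field data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
tss = rng.uniform(20, 300, 40)                 # stand-in TSS measurements (mg/L)
cod = 15 + 0.6 * tss + rng.normal(0, 10, 40)   # synthetic COD response

fit = sm.OLS(cod, sm.add_constant(tss)).fit()
x_new = np.array([[1.0, 150.0]])               # intercept + TSS = 150 mg/L
frame = fit.get_prediction(x_new).summary_frame(alpha=0.05)
print(frame[["mean", "obs_ci_lower", "obs_ci_upper"]])  # 95% prediction interval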
APA, Harvard, Vancouver, ISO and other styles
25

Baribault, Carl. "Meta State Generalized Hidden Markov Model for Eukaryotic Gene Structure Identification". ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/1098.

Full text
Abstract (summary):
Using a generalized-clique hidden Markov model (HMM) as the starting point for a eukaryotic gene finder, the objective here is to strengthen the signal information at the transitions between coding and non-coding (c/nc) regions. This is done by enlarging the primitive hidden states associated with individual base labeling (as exon, intron, or junk) into substrings of primitive hidden states, or footprint states. Moreover, the allowed footprint transitions are restricted to those that include either one c/nc transition or none at all (this effectively imposes a minimum length on exons and the other regions). These footprint states allow the c/nc transitions to be seen sooner and have their contributions to the gene-structure identification weighted more heavily, yet with a natural weighting determined by the HMM itself according to the training data, rather than by introducing an artificial gain-parameter tuning on major transitions. The selected generalized HMM is interpolated to the highest Markov order on emission probabilities and to the highest Markov order (subsequence length) on the footprint states; the former is accomplished via simple count cutoff rules, the latter via the identification of anomalous base statistics near the major transitions using Shannon entropy. Preliminary indications from applications to the C. elegans genome are that the sensitivity/specificity (SN/SP) results for both individual-state and full-exon predictions are greatly enhanced with the generalized-clique HMM compared to the standard HMM, where the standard HMM corresponds to the smallest footprint-state size in the generalized-clique HMM. Even with these improvements, we observe that both extremely long and extremely short exon and intron segments would go undetected without an explicit model of state duration. The key contributions of this effort are the full derivation and experimental confirmation of a rudimentary yet powerful and competitive gene finding method based on a higher order hidden Markov model. With suitable extensions, this method is expected to provide superior gene finding capability, not only on pre-conditioned data sets as in the evaluations cited, but also in the wider context of less preconditioned and/or raw genomic data.
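For orientation, a minimal two-state Viterbi decoder over coding/non-coding labels; the real model uses generalized footprint states and higher Markov orders, so this is only the primitive-state baseline, with made-up probabilities.

import numpy as np

# Minimal 2-state (coding/non-coding) Viterbi decoder; the footprint-state
# generalization above enlarges these states to substrings of labels.
states = ["c", "nc"]
log_trans = np.log([[0.9, 0.1], [0.1, 0.9]])
emit = {"c": {"A": 0.2, "C": 0.3, "G": 0.3, "T": 0.2},
        "nc": {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}}

def viterbi(seq):
    v = np.log([0.5 * emit[s][seq[0]] for s in states])
    back = []
    for ch in seq[1:]:
        scores = v[:, None] + log_trans          # all previous-state transitions
        back.append(scores.argmax(axis=0))
        v = scores.max(axis=0) + np.log([emit[s][ch] for s in states])
    path = [int(v.argmax())]
    for bp in reversed(back):                    # trace the best path backwards
        path.append(int(bp[path[-1]]))
    return [states[i] for i in reversed(path)]

print("".join(lab[0] for lab in viterbi("ATGCGCGCATATATAT")))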
APA, Harvard, Vancouver, ISO and other styles
26

Simpson, Tiffany P. "Factors Predicting Therapeutic Alliance in Antisocial Adolescents". ScholarWorks@UNO, 2008. http://scholarworks.uno.edu/td/858.

Full text
Abstract (summary):
Therapeutic alliance is a robust predictor of future therapeutic outcomes. While the treatment of typical children and adolescents is often hard, treating antisocial youth is especially difficult because of the social, cognitive, and emotional deficits these youth experience. This study investigated whether differing levels of callous-unemotional (CU) traits influenced the formation of therapeutic alliance in a sample of 51 adjudicated youth in juvenile institutions. We also tested whether therapeutic alliance influenced success in the institution and whether this association differed by level of CU traits. Results revealed that CU traits and self-reported delinquency were both modestly related to institutional infractions, and that children low on both dimensions showed the lowest levels of institutional infractions. Additionally, the findings suggest that children high on both CU traits and delinquency reported better therapeutic alliance, but that youth with high CU traits committed more institutional infractions regardless of their level of therapeutic alliance.
APA, Harvard, Vancouver, ISO and other styles
27

Beeravolu, Nagendrakumar. "Predicting Voltage Abnormality Using Power System Dynamics". ScholarWorks@UNO, 2013. http://scholarworks.uno.edu/td/1722.

Full text
Abstract (summary):
The purpose of this dissertation is to analyze the dynamic behavior of a stressed power system and to correlate the dynamic responses with a near-future system voltage abnormality. It is postulated that the dynamic response of a stressed power system over a short period of time (seconds) contains sufficient information to allow prediction of voltage abnormality in future time (minutes). The PSSE dynamics simulator is used to study the dynamics of the IEEE 39-bus equivalent test system. To correlate dynamic behavior with system voltage abnormality, this research utilizes two different pattern recognition methods: an algorithmic method known as Regularized Least Squares Classification (RLSC) and a statistical method known as Classification and Regression Trees (CART). The dynamics of the stressed test system are captured by introducing numerous contingencies, driving the system to the point of abnormal operation, and identifying the simulated contingencies that cause system voltage abnormality. Normal and abnormal voltage cases are simulated using the PSSE dynamics tool, and the simulation results are divided into training and testing sets, each containing both normal and abnormal voltage cases, used for the development and validation of a discriminator. This research uses the stressed-system simulation results to train the RLSC and CART pattern recognition models on the training set; after the training phase, the trained models are validated on the remainder of the simulation data. This process determines the prominent features and parameters for classifying normal and abnormal voltage cases from dynamic simulation data. The algorithmic and statistical pattern recognition methods each have their advantages and disadvantages, and the intention of this dissertation is to use them only to find correlations between the dynamic behavior of a stressed system in response to severe contingencies and the outcome of the system behavior a few minutes into the future.
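A hedged sketch of the two classifier families named above, with sklearn's RidgeClassifier standing in for RLSC (regularized least squares) and DecisionTreeClassifier for CART, on synthetic features.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import RidgeClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Toy features standing in for short-window dynamic responses (bus voltages,
# angles, ...) labeled normal/abnormal a few minutes later.
X, y = make_classification(n_samples=800, n_features=25, random_state=0)
for name, clf in [("RLSC (ridge)", RidgeClassifier(alpha=1.0)),
                  ("CART (tree)", DecisionTreeClassifier(random_state=0))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())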
APA, Harvard, Vancouver, ISO and other styles
28

Ramirez, Miguel Arjona. "Codificador preditivo de voz por análise mediante síntese". Universidade de São Paulo, 1992. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-11052017-144144/.

Full text
Abstract (summary):
Analysis-by-synthesis linear predictive speech coders are widely applied in mobile and secure telecommunications. Linear prediction of speech signals and analysis-by-synthesis techniques are presented so that some perceptual features of human hearing may be related to signal-processing techniques and parameters. The basic operation of this class of coders is described in the framework of the code-excited linear predictive (CELP) coder. Special coder structures such as adaptive, sparse and vector-basis codebooks are introduced, as well as processing enhancements such as orthogonal searches. A recently introduced representation of voice excitation is complemented by adaptive codebook searches to give rise to the proposed coder, the singular-vector-decomposed excitation linear predictive coder. The results of a study of some important coders in this class are presented, with the coders compared on the basis of waveform and spectral objective distortion measures. A further study of the spectral selection of excitation features and of the quantization of the whole set of parameters is performed on the proposed coder, yielding interesting results concerning the adaptive spectral representation and the sensitivity of the excitation features to quantization.
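A minimal sketch of the analysis-by-synthesis codebook search at the heart of such coders; the codebook and target are random placeholders and no perceptual weighting or synthesis filter is applied.

import numpy as np

# Analysis-by-synthesis codebook search in its simplest form: pick the code
# vector (with optimal gain) whose synthesized output best matches the target.
rng = np.random.default_rng(3)
codebook = rng.normal(size=(64, 40))     # 64 candidate excitation vectors
target = rng.normal(size=40)             # stand-in for the weighted target signal

best, best_err, best_gain = None, np.inf, 0.0
for i, c in enumerate(codebook):
    gain = c @ target / (c @ c)          # least-squares optimal gain
    err = np.sum((target - gain * c) ** 2)
    if err < best_err:
        best, best_err, best_gain = i, err, gain
print("chosen index:", best, "gain:", round(best_gain, 3))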
APA, Harvard, Vancouver, ISO and other styles
29

Silva, Karla Cristina Rodrigues. "Assessing the transferability of crash prediction models for two lane highways in Brazil". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18144/tde-10112017-215500/.

Full text
Abstract (summary):
The present study evaluated crash prediction models for two-lane highways under Brazilian conditions. The transferability of the models was also considered, specifically by means of a comparison between Brazil, the Highway Safety Manual (HSM) and Florida. The analysis of two-lane highway crash prediction models was promising when the road characteristics were well known and did not differ much from the base conditions. This conclusion was reached by comparing results for all segments, non-curved segments and curved segments, confirming that a transferred model can be used with caution. In addition, two novel models for Brazilian two-lane highway segments were estimated. The model developed showed better results for non-curved segments in the calibration/validation sample and is therefore recommended for general analyses of non-curved segments. Finally, many factors that reflect the various conditions of road safety could not be measured by these models. Even so, the study of crash prediction models in the Brazilian context provides a better starting point for road safety analysis.
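A small example of the kind of local calibration used when transferring HSM-style models (calibration factor = total observed crashes divided by total predicted crashes); the counts are invented.

import numpy as np

# HSM-style local calibration: scale the transferred model by the ratio of
# observed to predicted crashes on a local sample of segments.
observed = np.array([3, 0, 2, 5, 1, 4])                # crashes per segment
predicted = np.array([2.1, 0.8, 1.5, 3.9, 1.2, 2.6])   # transferred-model output

C = observed.sum() / predicted.sum()                   # calibration factor
print("calibration factor C = %.2f" % C)
print("calibrated predictions:", np.round(C * predicted, 2))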
APA, Harvard, Vancouver, ISO and other styles
30

Cabrera, Carlos Andres Cuenca. "Ductile failure prediction using phenomenological fracture model for steels: calibration, validation and application". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/3/3135/tde-27082018-075853/.

Full text
Abstract (summary):
This thesis presents the analysis, calibration and application of the stress modified critical strain (SMCS) criterion to predict ductile failure of an A285 steel. To characterize the mechanical behavior of the material, experimental tests were carried out on five different geometries: smooth round bars, notched round bars (R = 1, 2, 3 mm), and deep- and shallow-cracked SE(B) specimens. For the calibration of the mechanical properties, finite element models matching the geometry and properties of the tested specimens were generated using 3D solid elements with 8 nodes (C3D8). The elastoplastic behavior was calibrated using the experimental and numerical responses of the smooth and notched round bars, and the damage behavior was calibrated using the responses of the deep- and shallow-cracked SE(B) specimens. Once the mechanical properties were calibrated, the SMCS criterion factors were obtained, together with the damage condition, which is represented by the displacement at failure and an exponential softening factor. The calibrated model was able to recover the SE(B) experimental responses, which validates the use of the characterized material in a complex structure. The fully characterized material was then applied to two pipelines containing external circumferential elliptical initial cracks, the first with a shallow crack and the second with a deep crack. Finally, both pipes were subjected to tension loads to predict the ductile damage behavior, yielding the load needed for the crack to start growing and the evolution of the failure.
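A sketch of the SMCS idea, in which the critical plastic strain decays exponentially with stress triaxiality; the material constant used below is illustrative, not the calibrated A285 value.

import numpy as np

# The critical strain falls off with triaxiality (sigma_m / sigma_e); alpha is
# a material constant calibrated from notched-bar tests (value here is made up).
def smcs_critical_strain(triaxiality, alpha=2.0):
    return alpha * np.exp(-1.5 * triaxiality)

for T in (0.33, 1.0, 2.0):   # smooth bar vs. increasingly constrained states
    print("T = %.2f -> critical strain %.3f" % (T, smcs_critical_strain(T)))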
APA, Harvard, Vancouver, ISO and other styles
31

Xie, Mingyu. "Model predictive control of water quality in drinking water distribution systems considering disinfection by-products". Thesis, University of Birmingham, 2017. http://etheses.bham.ac.uk//id/eprint/7207/.

Full text
Abstract (summary):
Shortages of water resources have been observed all over the world. At the same time, the safety of drinking water has received much attention from scientists, because disinfectants react with organic matter in drinking water to generate disinfection by-products (DBPs), which are considered carcinogenic. Although much research has been carried out on the water quality control problem in drinking water distribution systems (DWDS), the water quality models considered have been linear, with only chlorine dynamics. In contrast to the linear water quality model, the nonlinear water quality model considers the interaction between chlorine and DBP dynamics. This thesis proposes a nonlinear model predictive controller that utilises the newly derived nonlinear water quality model as a control alternative for controlling water quality. EPANET and EPANET-MSN are the simulators used for modelling in the developed nonlinear MPC controller; uncertainty is not considered in these simulators. The thesis therefore proposes the bounded PPM, in a multi-input multi-output form, to robustly bound the parameters of chlorine and DBPs jointly and to robustly predict the water quality control outputs for control purposes. The methodologies and algorithms developed in this thesis are verified by applying extended case studies to an example DWDS. The simulation results are presented and critically analysed.
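A minimal receding-horizon (MPC) sketch for a first-order chlorine-decay model; the plant parameters, setpoint and weights are illustrative, and the thesis's actual controller is nonlinear and simulator-driven.

import numpy as np
from scipy.optimize import minimize

# First-order decay x[k+1] = a*x[k] + b*u[k]; u is the dosing input.
a, b, setpoint, horizon = 0.9, 0.5, 1.0, 5

def cost(u, x0):
    x, J = x0, 0.0
    for uk in u:
        x = a * x + b * uk
        J += (x - setpoint) ** 2 + 0.01 * uk ** 2   # tracking + dosing effort
    return J

x = 0.2
for step in range(8):
    res = minimize(cost, np.zeros(horizon), args=(x,),
                   bounds=[(0.0, 1.0)] * horizon)
    u0 = res.x[0]                 # apply only the first move, then re-solve
    x = a * x + b * u0
    print("step %d: u=%.3f, x=%.3f" % (step, u0, x))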
APA, Harvard, Vancouver, ISO and other styles
32

Silva, Vinícius Ferreira da. "Otimização multinível em predição de links". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-18102018-170343/.

Full text
Abstract (summary):
Link prediction in networks is a task with applications in several scenarios. With the automation of processes, social networks, technological networks and others have grown considerably in the number of vertices and edges. Therefore, running link predictors on networks of high structural complexity is not trivial, even for algorithms of low computational complexity: the large number of operations required to decide which edges are promising makes considering the whole network impracticable in many cases. Existing approaches face this characteristic in several ways, the most popular being those that limit the set of vertex pairs considered as candidate edges. This project addresses a strategy that uses multilevel optimization to coarsen networks, execute link prediction algorithms on the coarsened networks and project the results back onto the original network, in order to reduce the number of operations needed for link prediction. The experiments show that the approach can reduce the prediction time, despite some expected loss of accuracy.
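A hedged sketch of the multilevel idea, coarsening by contracting matched vertices, predicting on the smaller graph and projecting back; the graph, matching scheme and predictor are placeholders, not the thesis's actual algorithm.

import networkx as nx

# Contract matched vertex pairs into super-nodes, predict links on the
# smaller graph, then expand candidates back to original-vertex pairs.
G = nx.karate_club_graph()
matching = nx.maximal_matching(G)                 # disjoint pairs to contract

coarse, member = G.copy(), {v: v for v in G}
for u, v in matching:
    coarse = nx.contracted_nodes(coarse, u, v, self_loops=False)
    member[v] = u                                 # v now lives inside super-node u

scores = nx.jaccard_coefficient(coarse)           # predictor on the coarse graph
top = sorted(scores, key=lambda t: -t[2])[:5]
print("top coarse-level candidate links:", [(u, v) for u, v, s in top])
# Each coarse pair (u, v) projects to all original-vertex pairs it represents.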
APA, Harvard, Vancouver, ISO and other styles
33

Ferreira, Raoni Simões. "A wikification prediction model based on the combination of latent, dyadic and monadic features". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-29112016-164654/.

Full text
Abstract (summary):
Most reference information nowadays is found in repositories of semantically linked documents, created collaboratively and freely available on the web. Among the many problems faced by content providers in these repositories, one of the most important is wikification, that is, the placement of links in the articles. These links have to support user navigation and should provide a deeper semantic interpretation of the content. Wikification is a hard task, since the continuous growth of such repositories makes it increasingly demanding for editors; as a consequence, their focus shifts away from content creation, which should be their main objective. This has motivated the design of automatic wikification tools which, traditionally, address two distinct problems: (a) how to identify which words (or phrases) in an article should be selected as anchors, and (b) how to determine to which article the link associated with each anchor should point. Most methods in the literature that address these problems are based on machine learning approaches which attempt to capture, through statistical features, characteristics of the concepts and their associations. Although these strategies handle the repository as a graph of concepts, they normally take limited advantage of the topological structure of this graph, as they describe it by means of human-engineered link statistical features. Despite the effectiveness of these machine learning methods, better models could take full advantage of the topology by describing it with data-oriented approaches such as matrix factorization, which has been applied successfully in other domains, such as movie recommendation. In this work, we fill this gap, proposing a wikification prediction model that combines the strengths of traditional predictors based on statistical features with a latent component which models the concept graph topology by means of matrix factorization. Comparing our model with a state-of-the-art wikification method on a sample of Wikipedia articles, we obtained a gain of up to 13% in the F1 metric. We also provide a comprehensive analysis of the model's performance, showing the importance of the latent predictor component and of the attributes derived from the associations between concepts. The study also includes an analysis of the impact of ambiguous concepts, which allows us to conclude that the model is resilient to ambiguity even though it does not include any explicit disambiguation phase. Finally, we study the impact of selecting training samples from specific content-quality classes, information that is available in some repositories, such as Wikipedia. We empirically show that the quality of the training samples impacts precision and overlinking when training on random-quality samples is compared with training on high-quality samples.
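A toy illustration of blending a latent matrix-factorization score with a dyadic statistical feature; the adjacency matrix, rank and blend weights are arbitrary assumptions, not the model described above.

import numpy as np

rng = np.random.default_rng(4)
A = (rng.random((50, 50)) < 0.1).astype(float)    # concept-link adjacency

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 5
latent = (U[:, :k] * s[:k]) @ Vt[:k, :]           # rank-k latent link score

feature = A @ A.T                                 # e.g., shared-neighbor counts
score = 0.7 * latent + 0.3 * (feature / feature.max())  # simple linear blend
best = np.unravel_index(np.argmax(score * (1 - A)), A.shape)
print("top predicted (non-existing) link:", best)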
APA, Harvard, Vancouver, ISO and other styles
34

Srivastava, Shivank. "A Numerical Study in Prediction of Pressure on High-Speed Planing Craft during Slamming Events". ScholarWorks@UNO, 2018. https://scholarworks.uno.edu/td/2494.

Full text
Abstract (summary):
This thesis is an attempt to create a computer-based tool that can be used academically, and later industrially, by naval architects in the analysis and development of efficient planing hull forms. The work is based on the theory of Vorus (1996), which falls between empirical asymptotic solutions and the intractable non-linear boundary value problem in the time domain. The computer code developed predicts pressures on the bottom of high-speed planing craft during slamming events, and is validated against available numerical data as a benchmark case. An aluminum wedge is dropped from various heights, resulting in unsteady pressure distributions with high peaks over the bottom plate. These pressure distributions are compared to the pressures predicted numerically by the code and presented in this thesis. The predicted flow velocities are within 8% of the experimental data, the graphs show similar trends in the experimental and numerical data, and the predicted peak pressures deviate by 4% to 20% from the experimental data. The analysis and comparison illustrate the efficacy of the code.
APA, Harvard, Vancouver, ISO and other styles
35

Castro, Emiliano Gonçalves de. "Proposta para previsão de evasão baseada em padrões de acesso de usuários em jogos online". Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-11082011-125123/.

Full text
Abstract (summary):
The online gaming market has grown rapidly in recent years, particularly since the rise of the service-based business model. As a result, the publishers of these games have come to share problems common to service businesses, such as the profit erosion caused by customer churn. Predictive models have been used to address churn in the mobile phone and credit card markets, where companies hold a huge volume of demographic and economic data about their customers, while game publishers often have only their users' email addresses. The goal of this study is to propose a model for churn prediction based solely on the access patterns of online game users, where these time entries are fed into a set of operators that analyze the data in the time-frequency plane using the Discrete Wavelet Transform. Its main contribution is the input-data parameterization proposed for probabilistic classifiers based on the k-Nearest Neighbors algorithm. Tested with real access data from an online game's users over a few months, the classifiers were evaluated using ROC (Receiver Operating Characteristic) and lift curves. The approach proposed in this thesis, based on analysis in the time-frequency plane, showed satisfactory results: not only better than approaches based on the time or frequency domain alone, but also comparable to the performance of models with hundreds of predictive variables used in other markets.
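A minimal sketch of the wavelet-energy-plus-kNN pipeline described above, using PyWavelets on synthetic access series with fake churn labels; wavelet family, level and features are assumptions.

import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Wavelet-decompose each user's daily-access series and feed the
# subband energies to a kNN classifier.
rng = np.random.default_rng(5)
series = rng.poisson(3, size=(200, 64)).astype(float)   # 200 users x 64 days
labels = rng.integers(0, 2, 200)                        # fake churn flags

def dwt_energies(x):
    coeffs = pywt.wavedec(x, "db2", level=3)            # multi-level DWT
    return [np.sum(c ** 2) for c in coeffs]             # energy per subband

X = np.array([dwt_energies(s) for s in series])
print(cross_val_score(KNeighborsClassifier(5), X, labels, cv=5).mean())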
APA, Harvard, Vancouver, ISO and other styles
36

Suguisawa, Liliane. "Ultra-sonografia para predição das características e composição da carcaça de bovinos". Universidade de São Paulo, 2002. http://www.teses.usp.br/teses/disponiveis/11/11139/tde-05072002-101058/.

Full text
Abstract (summary):
In the present study, real-time ultrasonography was used to predict the ribeye area (AOL) and subcutaneous fat thickness (ECG) in live animals. A total of 115 yearling steers from four genetic groups (30 ½ Angus x Nelore, 30 ½ Canchim x Nelore, 30 ½ Simental x Nelore and 25 Nelore), with a 329 kg initial average weight and two finishing frame sizes (small and large), were used. Animals were kept in a feedlot during the whole study. Four ultrasonographic measurements were taken every 28 days until slaughter; AOL and ECG were then measured in the carcasses and compared to the measurements made in the live animals. The predictive accuracy of the ultrasonographic measurements increased as the animals approached the slaughter date, reaching its maximum at the last measurement (R² = 0.68 and 0.82 for AOL and ECG, respectively). Genetic group influenced (P<0.05) AOL and ECG measured in the carcass, but only ECG was significant with ultrasound (P<0.05). Ultrasonographic AOL per 100 kg of live weight (AOL4PKG) differed significantly (P<0.05) among the genetic groups, but not between frame sizes. AOL per 100 kg of carcass (AOLPCAR) for all genetic groups exceeded the values suggested by the literature as indicative of good meat-cut yield. Frame size did not influence AOL or ECG, probably due to the small, though distinctive, differences between the two groups. The Simental crosses had the highest (P<0.05) carcass muscle percentage and the Nelore steers the lowest; the Nelore, Canchim and Angus crosses showed a higher carcass fat-tissue percentage (P<0.05) than the Simental crosses. Ultrasound AOL showed positive correlations with muscle weight (0.49), carcass cutability (0.42) and carcass muscle percentage (0.30). Ultrasound ECG was positively correlated with fat-tissue percentage (0.49) and total carcass fat (0.52), and negatively correlated with hindquarter yield (-0.49). Carcass meat yield was correlated only with AOLPCAR (0.26), live weight (-0.45) and hot carcass weight (-0.44). Ultrasound AOL had a low prediction accuracy for carcass AOL. Overall, the accuracy of the carcass-composition predictive equations based on ultrasound measurements was similar to or greater than that obtained with the same measurements taken after slaughter, suggesting that the error in AOL prediction is not necessarily a fault of the ultrasonographic technique but may also arise from natural differences between measurements taken on the carcass and on the live animal.
APA, Harvard, Vancouver, ISO and other styles
37

Sezer, Ozcelik Ganime Asli. "Prediction Techniques Of Acid Mine Drainage: A Case Study Of A New Poly- Metallic Mine Development In Erzincan-ilic, Turkey". Phd thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/3/12608263/index.pdf.

Full text
Abstract (summary):
Acid Mine Drainage (AMD) is an environmental problem that eventually occurs in sulfide-rich mine sites. In Turkey most metal mines are associated with sulphide minerals and are potential AMD generators. The purpose of this PhD thesis is to apply universally accepted tools for predicting the AMD potential of a new metallic mine development. The study involves the evaluation of geological, geochemical, mineralogical and acid-base accounting (static test) data obtained from the Erzincan-İliç Çöpler Gold Prospect. The mineralization in Çöpler is of sulfide and oxide types; the oxide is a supergene alteration, and the porphyry-copper type gold mineralization is classified as intermediate sulfidation. The major lithologies observed in the study area are the regionally uncorrelated meta-sedimentary lithologies, the Munzur Limestone, and the Çöpler Granitoid. Thirty-eight representative samples were tested for AMD prediction purposes, and sixteen more were included in the sampling scheme for site characterization. Both acid-producing and acid-neutralizing lithologies are present at the mine site. The sulphate sulfur content of the samples was found to be insignificant, so any measured total sulfur amount can be treated directly as the driver of AMD production. Geochemical data revealed arsenic enrichments of up to 10000 ppm in the study area. Therefore, during the operational stage, in addition to planning to avoid or minimize AMD, precautions against arsenic mobilization must be taken when designing the AMD neutralization scheme. Kinetic studies and the heavy-metal mobilization related to AMD are outside the scope of this investigation, as are the management and abatement stages of AMD.
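The static-test arithmetic behind acid-base accounting can be sketched as follows; the conversion factor 31.25 is the standard Sobek value (kg CaCO3 per tonne per %S), while the sample values are illustrative, not thesis data.

# Acid potential from total sulfur, net neutralization potential, and the
# NP/AP ratio, as commonly used in AMD screening.
def acid_base_accounting(total_s_pct, np_kg_caco3_per_t):
    ap = 31.25 * total_s_pct          # kg CaCO3/t needed to neutralize the acid
    nnp = np_kg_caco3_per_t - ap      # net neutralization potential
    ratio = np_kg_caco3_per_t / ap if ap else float("inf")
    return ap, nnp, ratio

for s, np_ in [(0.2, 40.0), (2.5, 20.0)]:          # illustrative samples
    ap, nnp, r = acid_base_accounting(s, np_)
    print(f"S={s}% -> AP={ap:.1f}, NNP={nnp:.1f}, NP/AP={r:.2f}")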
APA, Harvard, Vancouver, ISO and other styles
38

Semmelroth, Carrie Lisa. "Response to intervention at the secondary level : identifying students at risk for high school dropout /". [Boise, Idaho] : Boise State University, 2009. http://scholarworks.boisestate.edu/td/30/.

Full text
APA, Harvard, Vancouver, ISO and other styles
39

Brugnolli, Mateus Mussi. "Predictive adaptive cruise control in an embedded environment". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-24092018-151311/.

Full text
Abstract (summary):
The development of Advanced Driving Assistance Systems (ADAS) brings comfort and safety through the application of several control theories. One of these systems is Adaptive Cruise Control (ACC). In this work, a two-loop control structure for such a system is developed for embedded application in a vehicle. The vehicle model was estimated using system identification theory. An outer control loop processes the radar data to compute a suitable cruise speed, and an inner control loop drives the vehicle to reach the cruise speed with a desired performance. For the inner loop, two different approaches to model predictive control are used: a finite-horizon predictive control, known as MPC, and an infinite-horizon predictive control, known as IHMPC. Both controllers were embedded in a microcontroller able to communicate directly with the electronic unit of the vehicle. This work validates the controllers using simulations with varying systems and practical experiments with the aid of a dynamometer. Both predictive controllers had satisfactory performance, providing safety to the passengers.
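A sketch of the system-identification step, fitting a first-order ARX model by least squares; the plant, noise and data below are synthetic, not the vehicle logs used in the thesis.

import numpy as np

# Fit v[k+1] = a*v[k] + b*u[k] to logged speed/throttle data.
rng = np.random.default_rng(6)
u = rng.uniform(0, 1, 200)                       # throttle command
v = np.zeros(201)
for k in range(200):                             # "true" plant: a=0.95, b=0.8
    v[k + 1] = 0.95 * v[k] + 0.8 * u[k] + rng.normal(0, 0.05)

Phi = np.column_stack([v[:-1], u])               # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, v[1:], rcond=None)
print("estimated a=%.3f, b=%.3f" % tuple(theta))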
APA, Harvard, Vancouver, ISO and other styles
40

Anbar, Sultan. "Development Of A Predictive Model For Carbon Dioxide Sequestration In Deep Saline Carbonate Aquifers". Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610692/index.pdf.

Full text
Abstract (summary):
Although deep saline aquifers are found in all sedimentary basins and provide very large storage capacities, little is known about them because they are rarely a target for exploration. Furthermore, nearly all experiments and simulations of CO2 sequestration in deep saline aquifers concern sandstone formations. The aim of this study is to create a predictive model to estimate the CO2 storage capacity of deep saline carbonate aquifers, about which little is known. To create the predictive model, the variables that affect CO2 storage capacity and their ranges were determined from published literature data: rock properties (porosity, permeability, vertical-to-horizontal permeability ratio), fluid properties (irreducible water saturation, gas permeability end point, Corey water and gas coefficients), reaction properties (forward and backward reaction rates), reservoir properties (depth, pressure gradient, temperature gradient, formation dip angle, salinity), the diffusion coefficient and the Kozeny-Carman coefficient. Other parameters, such as pore volume compressibility and brine density, are calculated from correlations found in the literature. To cover all possibilities, a Latin Hypercube space-filling design was used to construct 100 simulation cases, and CMG STARS was used for the simulation runs. By the least squares method, a linear correlation for calculating the CO2 storage capacity of deep saline carbonate aquifers was found, with a correlation coefficient of 0.81, using the variables taken from the literature and the simulation results. Numerical dispersion effects were considered by increasing the grid dimensions; the correlation coefficient decreased to 0.77 when the grid size was increased from 250 ft to 750 ft. The sensitivity analysis shows that the most important parameter affecting CO2 storage capacity is depth, since the difference between formation pressure and fracture pressure increases with depth. CO2 storage mechanisms were also investigated at the end of 300 years of simulation: most of the injected gas (up to 90%) dissolves into the formation water, and a negligible amount of CO2 reacts with the carbonate. This result is consistent with the sensitivity analysis, since the variables affecting the solubility of CO2 in brine have the greatest effect on the storage capacity of the aquifers. Dimensionless linear and nonlinear predictive models were constructed to estimate the CO2 storage capacity of all deep saline carbonate aquifers, and the best dimensionless predictive model was found to be the linear one, independent of the bulk volume of the aquifer.
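A sketch of the design-plus-regression workflow (Latin Hypercube sampling followed by a linear least-squares fit); the variable ranges and the response are placeholders, not the simulation study's values.

import numpy as np
from scipy.stats import qmc

# Latin Hypercube samples over three illustrative variables
# (depth, porosity, salinity), then a linear fit to a fake response.
sampler = qmc.LatinHypercube(d=3, seed=0)
u = sampler.random(n=100)
X = qmc.scale(u, [1000, 0.05, 10], [3000, 0.30, 200])   # lower/upper bounds

rng = np.random.default_rng(7)
y = 2e-3 * X[:, 0] + 8.0 * X[:, 1] - 0.01 * X[:, 2] + rng.normal(0, 0.3, 100)

A = np.column_stack([np.ones(100), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)            # least squares fit
print("fitted coefficients:", np.round(coef, 4))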
APA, Harvard, Vancouver, ISO and other styles
41

Romero, Danilo Jefferson. "Procedimento para uniformização de espectros de solos (VIS-NIR-SWIR)". Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/11/11140/tde-07012016-165309/.

Full text
Abstract (summary):
Remote sensing techniques have evolved within soil science to overcome the time and cost limitations of the chemical analyses traditionally used to quantify soil attributes. Spectral analysis has long proven to be an alternative that complements traditional analysis and is currently considered a mature, widely applied technique. Studies in spectral pedology have used wavelengths between 350 and 25000 nm, but most often the region from 350 to 2500 nm, which is divided into the visible (VIS, 350-700 nm), near infrared (NIR, 700-1000 nm) and shortwave infrared (SWIR, 1000-2500 nm). As with the laboratory techniques traditionally used in soil analysis, standards must be established for worldwide scientific communication in soil spectroscopy. With the future of soil spectroscopy in view, this study evaluated the effect of using standard samples in the acquisition of spectral data from tropical soil samples, under three different acquisition geometries, on three spectroradiometers (350-2500 nm). Ninety-seven soil samples documented in the Brazilian Soil Spectral Library (BESB), from Mato Grosso do Sul State and provided by the AGSPEC project, were used, together with two white master samples used as reference standards, taken from the beach dunes of Wylie Bay (WB, 99% quartz) and Lucky Bay (LB, 90% quartz and 10% aragonite) in southwestern Australia. To judge the standardization, the morphologies of the spectral curves were examined for curvature, absorption features and albedo; complementing these descriptive observations, the reflectance differences between configurations (sensor x geometry x correction) were studied by analysis of variance and Tukey's test at 5% significance in three average spectral bands (VIS-NIR-SWIR), and clay content was modelled by partial least squares regression (PLSR) with cross-validation for each configuration and for a simulated mixed spectral library composed of combinations of configurations. The proposed standardization method reduces the differences between spectra obtained with different sensors and geometries. The prediction of clay by a spectral library using data with different configurations is improved by the standardization, with R² rising from 0.83 to 0.85 after correction, indicating the validity of unifying spectra with the proposed technique.
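A minimal sketch of the PLSR modelling step; the spectra, clay values and component count are synthetic placeholders, not the BESB data.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Predict clay content from reflectance spectra with partial least squares.
rng = np.random.default_rng(8)
spectra = rng.random((97, 215))                # 97 samples x 215 spectral bands
clay = spectra[:, 40] * 300 + spectra[:, 180] * 200 + rng.normal(0, 10, 97)

pls = PLSRegression(n_components=8)            # component count is illustrative
print("cross-validated R²:", cross_val_score(pls, spectra, clay, cv=5).mean())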
42

González, Marcos Tulio Amarís. "Performance prediction of application executed on GPUs using a simple analytical model and machine learning techniques". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-06092018-213258/.

Full text
Abstract:
The parallel and distributed high-performance computing platforms available today have become increasingly heterogeneous (CPUs, GPUs, FPGAs, etc.). Graphics Processing Units (GPUs) are specialized co-processors that accelerate parallel vector operations. They offer a high degree of parallelism, can execute thousands or millions of threads concurrently, and hide scheduling latency; they also have a deep hierarchy of memories of different types and configurations. Predicting the performance of applications executed on these devices is a major challenge, yet it is essential for the efficient use of machines equipped with such co-processors. Different approaches exist for this prediction, such as analytical modeling and machine learning techniques. This thesis presents an analysis and characterization of the performance of applications executed on GPUs. We propose a simple, intuitive BSP-based model for predicting the execution time of CUDA kernels on different GPUs. The model is based on the number of computations and memory accesses of the GPU, with additional information on cache usage obtained through profiling. We also compare three machine learning (ML) approaches, linear regression, support vector machines, and random forests, against the BSP-based analytical model. The comparison is made in two contexts: first, the ML techniques receive the same inputs (features) as the analytical model; second, features are extracted through correlation analysis and hierarchical clustering. We show that GPU applications that scale regularly can be predicted with a simple analytical model plus one adjusting parameter, and that this parameter can be reused to predict the same applications on other GPUs. We also show that the ML approaches provide reasonable predictions in a variety of cases while requiring no detailed knowledge of the application code, hardware characteristics, or explicit modeling. Consequently, whenever a large dataset with information about similar applications is available or can be created, ML techniques are useful for deploying automated online performance prediction for scheduling applications on heterogeneous architectures with GPUs.
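The BSP-style predictor described above can be condensed into a one-line cost function: execution time grows with computation and memory traffic, scaled by the GPU's throughput and a single adjusting parameter. The functional form and parameter names below are assumptions for illustration, not the thesis's definitive model.

```python
# Hedged sketch of a BSP-style GPU kernel time predictor.
def predict_kernel_time(comp_ops, gmem_accesses, smem_accesses,
                        clock_hz, cores, g_lat, l_lat, lam):
    """comp_ops: computation count; *_accesses: memory transactions;
    g_lat / l_lat: global / shared memory cost weights (cycles);
    lam: adjusting parameter calibrated on one GPU, reusable on others."""
    comm = gmem_accesses * g_lat + smem_accesses * l_lat
    return lam * (comp_ops + comm) / (clock_hz * cores)

# Example: calibrate lam on a measured run, then predict on another GPU.
lam = 1.3  # hypothetical value obtained by fitting one profiled execution
print(predict_kernel_time(1e9, 1e7, 5e7, 1.5e9, 2048, 5, 1, lam))
```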
43

Barreto, Guilherme de Alencar. "Redes neurais não-supervisionadas temporais para identificação e controle de sistemas dinâmicos". Universidade de São Paulo, 2003. http://www.teses.usp.br/teses/disponiveis/18/18133/tde-25112015-115752/.

Full text
Abstract:
Research on artificial neural networks (ANNs) is currently witnessing growing interest in models that use time as an extra degree of freedom to be explored in neural representations. This emphasis on temporal coding has been hotly debated in neuroscience and related fields, but in recent years a large volume of physiological and behavioral data has emerged supporting a key role for this kind of representation in the brain [BALLARD et al. (1998)]. Contributions to the study of temporal representation in neural networks span research topics such as nonlinear dynamical systems, oscillatory networks, chaotic networks, spiking neurons, and pulse-coupled networks. As a consequence, many information-processing tasks have been investigated through temporal coding, including pattern classification, learning, associative memory, sensorimotor control, dynamical systems identification, and robotics. Frequently, however, it is unclear to what extent modeling the temporal aspects of a task actually improves the information-processing capabilities of neural models. This thesis presents, in a clear and comprehensive way, the main concepts and results concerning two proposed unsupervised neural network models, and shows how each exploits temporal coding to perform its task better. The first model, the competitive temporal Hebbian (CTH) network, is applied to learning and reproducing trajectories of the PUMA 560 robot manipulator. The CTH is a competitive neural network whose main feature is fast learning, in a single training epoch, of several complex trajectories containing repeated elements. The temporal relationships of the task, represented by the temporal order of the trajectory, are captured by lateral weights trained with Hebbian learning. The computational properties of the CTH network are evaluated through simulations and through the implementation of a distributed control system for a real PUMA 560 robot. The CTH outperforms traditional look-up-table methods for robot trajectory learning as well as other neural techniques, such as supervised recurrent networks and bidirectional associative memory (BAM) models. The second model, the self-organizing NARX network (SONARX), is based on the well-known SOM algorithm proposed by KOHONEN (1997). From the computational viewpoint, the properties of SONARX are evaluated in different application domains, such as chaotic time-series prediction, identification of a hydraulic actuator, and predictive control of a nonlinear plant. From the theoretical viewpoint, it is shown that SONARX can serve as an asymptotic approximator of nonlinear dynamical mappings, thanks to a new neural modeling technique called Vector-Quantized Temporal Associative Memory (VQTAM). VQTAM, like the Hebbian learning of the CTH network, is a temporal associative learning technique. SONARX is compared with supervised NARX models built on MLP and RBF networks. In all tests carried out on the tasks listed above, SONARX performed as well as or better than traditional supervised models, at a considerably lower computational cost. SONARX is also compared with the CTH network on learning complex robot trajectories, in order to highlight the main differences between the two types of temporal associative learning. The thesis also proposes a mathematical taxonomy, based on the state-space representation from systems theory, for classifying temporal unsupervised neural networks with emphasis on their computational properties. The main goal of this taxonomy is to unify the description of dynamic neural models, facilitating the analysis and comparison of different architectures by contrasting their representational and operational characteristics. As an example, the CTH and SONARX networks are described using the proposed taxonomy.
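A minimal sketch of the temporal Hebbian idea behind the CTH network follows: lateral weights accumulate the order of winning neurons in a single pass, and reproduction follows the strongest outgoing link. Function names and the learning-rate handling are illustrative assumptions, not the thesis's implementation.

```python
# Illustrative sketch (not the thesis code) of temporal Hebbian order learning.
import numpy as np

def train_temporal_hebbian(winners, n_units, eta=1.0):
    """winners: sequence of winning-unit indices along one trajectory;
    returns lateral weights W where W[i, j] links winner at t-1 to winner at t."""
    W = np.zeros((n_units, n_units))
    for prev, curr in zip(winners[:-1], winners[1:]):
        W[prev, curr] += eta   # one-shot Hebbian reinforcement of the transition
    return W

def next_unit(W, current):
    """Trajectory reproduction: follow the strongest outgoing lateral link."""
    return int(np.argmax(W[current]))

W = train_temporal_hebbian([0, 3, 1, 4, 2], n_units=5)
print(next_unit(W, 0))   # -> 3, the stored successor of unit 0
```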
44

Júnior, José Genario de Oliveira. "Model predictive control applied to A 2-DOF helicopter". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-11042018-082532/.

Full text
Abstract:
This work presents an embedded model predictive control (MPC) application for a bench-scale 2-DOF helicopter. The mathematical model of the plant is presented first, together with an analysis of the resulting linear model. Two incremental state-space representations, later used in the MPC formulation, are then derived. The MPC technique is defined next, along with the way the plant's physical constraints are written into the problem formulation. A discussion of the quadratic programming solver follows, including possible alternatives and considerations on which matrices to compute beforehand for an embedded application. Finally, system identification is performed on the prototype and the experimental results are presented.
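At each sampling instant, the constrained MPC step described above reduces to a quadratic program in the future input increments. The sketch below is a generic illustration: the matrices H, f, G, h and their sizes are assumptions, and cvxpy stands in for whatever embedded QP solver the thesis adopted.

```python
# Minimal sketch of one receding-horizon MPC step as a quadratic program.
import numpy as np
import cvxpy as cp

n_u = 10                          # control horizon (stacked input increments)
H = np.eye(n_u)                   # Hessian built from prediction matrices and weights
f = np.zeros(n_u)                 # linear term from the current state and reference
G = np.vstack([np.eye(n_u), -np.eye(n_u)])
h = np.ones(2 * n_u) * 0.5        # e.g. |du| <= 0.5, an actuator limit

du = cp.Variable(n_u)
qp = cp.Problem(cp.Minimize(0.5 * cp.quad_form(du, H) + f @ du), [G @ du <= h])
qp.solve()
print("first increment to apply:", du.value[0])   # only the first move is applied
```

For an embedded target, H and G are constant and can be precomputed offline; only f and h need updating at run time, which is the kind of consideration the abstract alludes to.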
45

Maruyama, William Takahiro. "Predição de coautorias em redes sociais acadêmicas". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-07052016-232625/.

Full text
Abstract:
Social networks are gaining ever more prominence in everyday life. In these networks, relationships are established between entities that share some characteristic or common goal. A wealth of information about Brazilian scientific production can be found in the Lattes Platform, the system used to record researchers' curricula in Brazil. From this information it is possible to build an academic social network in which a relation between two researchers represents a partnership in producing a publication (co-authorship), i.e. a link. Within social network analysis, the research line known as link prediction aims to identify future relationships. This task can foster communication among researchers and optimize the scientific production process by identifying potential collaborators. This project analyzed the influence of different attributes found in the literature, and of data filters, on predicting co-authorship relations in academic social networks. Two link-prediction problems were addressed: the general problem, which considers all possible co-authorship relations, and the new co-authorship problem, which concerns relations never before observed in the network. The experimental results were promising for the general problem, using the chosen combination of attributes and filters. For the new co-authorship problem, however, which is considerably harder, the results were not as good. The experiments evaluated different strategies and analyzed the costs and benefits of each. We conclude that tackling co-authorship prediction in academic social networks requires weighing the advantages and disadvantages of each strategy, striking a balance between the recall of the positive class and overall accuracy.
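As an illustration of the attribute-computation stage, the sketch below derives two classic topological link-prediction scores for candidate researcher pairs; the thesis's actual attribute set and filters are richer, and the toy graph is an assumption.

```python
# Sketch of topological features for co-authorship link prediction.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")])

candidates = [("a", "d"), ("b", "d")]           # researcher pairs not yet linked
for u, v, score in nx.adamic_adar_index(G, candidates):
    cn = len(list(nx.common_neighbors(G, u, v)))
    print(u, v, "common neighbors:", cn, "Adamic-Adar:", round(score, 3))
```

Scores like these become the input features of a classifier that labels each candidate pair as a future co-authorship or not.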
46

Gusson, Eduardo. "Avaliação de métodos para a quantificação de biomassa e carbono em florestas nativas e restauradas da Mata Atlântica". Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/11/11150/tde-13022014-084600/.

Full text
Abstract:
Quantifying biomass and carbon in forests requires appropriate methods to obtain reliable stock estimates. The objective of this work was to evaluate methods used to predict and estimate these variables in native and restored areas of the Atlantic Forest. The first chapter addresses the use of the NDVI vegetation index as an auxiliary tool in inventories of biomass stocks in forest restoration areas. Different sampling methods were compared in terms of precision and conservativeness of the estimates. The results showed an adequate correlation between NDVI and the biomass estimated in the field inventory plots, making the index useful either for defining strata in stratified sampling or as a supplementary variable in a regression estimator relating it to biomass within a double-sampling procedure. The latter method minimized the uncertainty of the estimates while using a reduced sampling intensity, which makes it attractive, particularly for large-scale studies, for increasing the reliability of carbon stock quantification in forest biomass at a lower inventory cost. The second chapter discusses the methodological approach used to judge the quality of predictive models when selecting among candidate models for biomass studies in native forests. Six models with different combinations of predictor variables, including diameter, total height, and some information on wood density, were fitted to data from a sample of 80 trees. The resulting dry-biomass prediction equations were evaluated for goodness of fit and for prediction performance, the latter by applying them to another sample of 146 trees from nine destructive plots installed in different successional stages of the forest, making it possible to assess predictive bias. To verify the discrepancies in biomass estimates arising from different prediction equations, the equations developed here, together with others available in the literature, were applied to data from a forest inventory carried out in the study area. The study confirms the empirical nature of these equations and the need to evaluate their prediction performance before use, especially for equations fitted with samples from other forests, and it identifies some of the main factors behind the uncertainty in biomass stock quantification in studies of native forests.
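The double-sampling procedure mentioned above can be sketched as follows: NDVI is observed on a large first-phase sample, biomass only on a small field subsample, and the regression estimator corrects the subsample mean using the first-phase NDVI mean. Variable names and the synthetic data are illustrative assumptions.

```python
# Sketch of the double-sampling (two-phase) regression estimator of mean biomass.
import numpy as np

rng = np.random.default_rng(1)
ndvi_all = rng.uniform(0.3, 0.9, 500)             # phase 1: NDVI on many plots (cheap)
idx = rng.choice(500, 40, replace=False)          # phase 2: small field subsample
ndvi_s = ndvi_all[idx]
biomass_s = 120 * ndvi_s + rng.normal(0, 5, 40)   # placeholder field biomass (Mg/ha)

b = np.polyfit(ndvi_s, biomass_s, 1)[0]           # slope of biomass on NDVI
y_reg = biomass_s.mean() + b * (ndvi_all.mean() - ndvi_s.mean())
print("regression estimate of mean biomass:", round(y_reg, 2))
```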
47

Granato, Italo Stefanine Correia. "snpReady and BGGE: R packages to prepare datasets and perform genome-enabled predictions". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/11/11137/tde-21062018-134207/.

Full text
Abstract:
The use of molecular markers increases selection efficiency and improves the understanding of genetic resources in breeding programs. As the number of markers grows, however, the data must be processed before they are ready for use. Moreover, to explore genotype x environment (GE) interaction in the context of genomic prediction, several covariance matrices must be set up before the prediction step. To ease the introduction of genomic practices into breeding program pipelines, we developed two R packages. The first, snpReady, prepares datasets for genomic studies. It offers three functions for this purpose: organizing the data and applying quality control, building the genomic relationship matrix, and summarizing population genetics parameters. It also introduces a new imputation method for missing markers. The second package, BGGE, generates kernels for several GE genomic models and performs the predictions. It consists of two functions, getK and BGGE: the former builds kernels for the GE genomic models, while the latter performs genomic prediction, with features specific to GE kernels that reduce computation time. Together, the two packages provide a fast, straightforward option for introducing genomic analysis into the stages of a breeding program pipeline.
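As a generic illustration of the genomic relationship matrix that a package like snpReady builds after quality control, here is a minimal sketch of the VanRaden (2008) formulation in Python; this is not the R packages' API, only the underlying computation.

```python
# Sketch of the VanRaden genomic relationship matrix from marker dosages.
import numpy as np

def vanraden_G(M):
    """M: n x m matrix of marker dosages coded 0/1/2 (after quality control)."""
    p = M.mean(axis=0) / 2.0                 # allele frequency of each marker
    Z = M - 2.0 * p                          # centre by twice the frequency
    denom = 2.0 * np.sum(p * (1.0 - p))      # scaling toward unit average diagonal
    return Z @ Z.T / denom

M = np.random.default_rng(2).integers(0, 3, size=(10, 200))  # toy genotypes
print(vanraden_G(M).shape)                   # (10, 10) relationship matrix
```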
48

Néto, Karla Ferraz. "Análise gênica de comorbidades a partir da integração de dados epidemiológicos". Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/95/95131/tde-29012015-150351/.

Full text
Abstract:
Identifying the genes responsible for human diseases can provide knowledge about pathological and physiological mechanisms that is essential for developing new diagnostics and therapies. A disease is rarely the consequence of an abnormality in a single gene; rather, it reflects disorders of a complex intra- and intercellular network. Many bioinformatics methodologies can prioritize genes related to a given disease, and some approaches can also validate whether those genes are indeed pertinent to it. One gene prioritization approach is to investigate diseases that affect the same patients at the same time, i.e. comorbidities. Many sources of biomedical data can be used to collect comorbidities; from them we can gather pairs of diseases that form epidemiological comorbidities and then analyze the genes of each disease. This analysis serves to expand the candidate gene list of each disease and to justify the genetic relationship between the comorbid diseases. The main objective of this project is to integrate epidemiological and genetic data to predict disease-causing genes through the study of disease comorbidity.
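A toy sketch of the integration idea follows: for a disease pair that forms an epidemiological comorbidity, genes shared by the two disease gene sets become prioritized candidates. The gene symbols and the overlap score below are placeholders, not results from the project.

```python
# Illustrative sketch: prioritizing shared genes across a comorbid disease pair.
disease_genes = {
    "D1": {"TCF7L2", "KCNJ11", "PPARG"},
    "D2": {"TCF7L2", "APOE", "PPARG"},
}
comorbid_pairs = [("D1", "D2")]   # pairs mined from epidemiological records

for a, b in comorbid_pairs:
    shared = disease_genes[a] & disease_genes[b]          # candidate expansion
    jaccard = len(shared) / len(disease_genes[a] | disease_genes[b])
    print(a, b, "shared candidates:", sorted(shared), "Jaccard:", round(jaccard, 2))
```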
49

Tamura, Karin Ayumi. "Métodos de predição para modelo logístico misto com k efeitos aleatórios". Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-10032013-125846/.

Full text
Abstract:
Predicting a future observation in a mixed model is a problem that has been studied extensively. This work addresses the problem of assigning values to the random effects and/or the response of new groups in the mixed logistic model, where the goal is to predict future responses from previously estimated parameters. The literature offers some prediction methods for this model, but only for the random-intercept case; for mixed logistic regression with k random effects, no method had yet been proposed for predicting the random effects of new groups. We therefore propose new approaches based on the zero-mean method, the empirical best predictor (EBP), linear regression, and nonparametric regression models. All prediction methods were evaluated under the following estimation methods: Laplace approximation, adaptive Gauss-Hermite quadrature, and penalized quasi-likelihood. The estimation and prediction methods were analyzed through simulation studies across seven scenarios, comparing different values of group size, random-effect standard deviations, correlation between the random effects, and the fixed effect. The prediction methods were also applied to two real datasets, both with hierarchical structure, where the objective was to predict the response for new groups. The results indicate that the EBP performed best in terms of prediction but carried a high computational cost for large datasets; the other methodologies achieved prediction levels similar to the EBP while drastically reducing the computational effort.
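For a random-intercept special case, the contrast between setting the new group's random effect to zero and marginalizing over its distribution can be sketched as below. The fitted parameter values are illustrative assumptions, and the EBP used in the thesis conditions on observed data from the new group rather than simply marginalizing.

```python
# Sketch: two simple predictions for a new group in a random-intercept
# logistic model, using previously estimated parameters.
import numpy as np

beta0, beta1, sigma_b = -1.0, 0.8, 1.2   # hypothetical fitted parameters
x_new = 1.5                               # covariate of the new observation

def logistic(eta):
    return 1.0 / (1.0 + np.exp(-eta))

# (i) zero method: random intercept of the unseen group fixed at 0
p_zero = logistic(beta0 + beta1 * x_new)

# (ii) marginal probability via Gauss-Hermite quadrature over b ~ N(0, sigma_b^2)
nodes, weights = np.polynomial.hermite.hermgauss(20)
p_marg = np.sum(weights * logistic(beta0 + beta1 * x_new
                                   + np.sqrt(2) * sigma_b * nodes)) / np.sqrt(np.pi)
print("zero method:", round(p_zero, 3), "marginal:", round(p_marg, 3))
```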
50

Sousa, Massáine Bandeira e. "Improving accuracy of genomic prediction in maize single-crosses through different kernels and reducing the marker dataset". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/11/11137/tde-07032018-163203/.

Full text
Abstract:
In plant breeding, genomic prediction (GP) can be an efficient tool to increase the accuracy of genotype selection, particularly in multi-environment trials, with the advantages of increasing genetic gain for complex traits and reducing costs. Strategies are still needed, however, to increase the accuracy and reduce the bias of genomic estimated breeding values. In this context, the objectives were: (i) to compare two strategies for building marker subsets based on estimated marker effects, assessing their impact on the prediction accuracy of genomic selection; and (ii) to compare the accuracy of four GP models that include genotype x environment (GxE) interaction and two kernels (GBLUP and Gaussian). We used a rice diversity panel (RICE) and two maize datasets (HEL and USP), evaluated for grain yield and plant height. Overall, marker subsets increased the prediction accuracy and the relative efficiency of genomic selection, and they could be used to build fixed arrays and thereby reduce genotyping costs. Furthermore, using the Gaussian kernel and including the GxE effect increased the accuracy of the genomic prediction models.
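The two kernels compared above can be sketched as follows: the linear GBLUP kernel from centred markers and a Gaussian kernel from squared marker distances. The bandwidth treatment (scaling distances by their median) is an illustrative assumption rather than the thesis's exact choice.

```python
# Sketch of the GBLUP (linear) and Gaussian kernels built from marker data.
import numpy as np

def gblup_kernel(Z):
    """Z: centred/scaled marker matrix, n genotypes x m markers."""
    return Z @ Z.T / Z.shape[1]

def gaussian_kernel(Z, h=1.0):
    """Gaussian kernel exp(-h * d^2 / q), with d^2 the squared Euclidean
    marker distance and q a normalizing quantile (here the median)."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    q = np.median(d2[np.triu_indices_from(d2, k=1)])
    return np.exp(-h * d2 / q)

Z = np.random.default_rng(3).normal(size=(8, 100))   # toy centred genotypes
print(gblup_kernel(Z).shape, gaussian_kernel(Z).shape)
```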
