Follow this link to see other types of publications on the topic: Hydrology Mathematical models Data processing.

Theses / dissertations on the topic "Hydrology Mathematical models Data processing"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


See the top 50 works (theses / dissertations) on the subject "Hydrology Mathematical models Data processing".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate a bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online if it is included in the metadata.

Browse theses / dissertations from a wide range of scientific fields and compile an accurate bibliography.

1

Samper Calvete, F. Javier (Francisco Javier), 1958. "Statistical methods of analyzing hydrochemical, isotopic, and hydrological data from regional aquifers". Diss., The University of Arizona, 1986. http://hdl.handle.net/10150/191115.

Abstract:
This dissertation is concerned with the development of mathematical aquifer models that combine hydrological, hydrochemical and isotopic data. One prerequisite for the construction of such models is that prior information about the variables and parameters be quantified in space and time by appropriate statistical methods. Various techniques using multivariate statistical data analyses and geostatistical methods are examined in this context. The available geostatistical methods are extended to deal with the problem at hand. In particular, a three-dimensional interactive geostatistical package has been developed for the estimation of intrinsic and nonintrinsic variables. This package is especially designed for groundwater applications and incorporates a maximum likelihood cross-validation method for estimating the parameters of the covariance function. Unique features of this maximum likelihood cross-validation method include: the use of an adjoint state method to compute the gradient of the likelihood function, the computation of the covariance of the parameter estimates and the use of identification criteria for the selection of a covariance model. In addition, it can be applied to data containing measurement errors, data regularized over variable lengths, and to nonintrinsic variables. The above methods of analysis are applied to synthetic data as well as hydrochemical and isotopic data from the Tucson aquifer in Arizona and the Madrid Basin in Spain. The dissertation also includes a discussion of the processes affecting the transport of dissolved constituents in groundwater, the mathematical formulation of the inverse solute transport problem and a proposed numerical method for its solution.
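The maximum-likelihood estimation of covariance parameters described above can be illustrated in miniature. Below is a minimal Python sketch that fits an isotropic exponential covariance with a nugget (measurement-error) term to scattered observations by direct likelihood maximization; it illustrates the general technique only, not the dissertation's adjoint-state implementation, and all names and values are invented.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.distance import cdist

    def neg_log_likelihood(log_params, coords, values):
        # sill, range and nugget of an isotropic exponential covariance;
        # the log-parameterization keeps all three positive
        sill, rng, nugget = np.exp(log_params)
        h = cdist(coords, coords)                  # pairwise separation distances
        C = sill * np.exp(-h / rng) + nugget * np.eye(len(values))
        L = np.linalg.cholesky(C)
        alpha = np.linalg.solve(L, values - values.mean())
        # 0.5*log|C| + 0.5*r'C^{-1}r, dropping the constant term
        return np.sum(np.log(np.diag(L))) + 0.5 * alpha @ alpha

    coords = np.random.rand(80, 3)                 # synthetic 3-D sample locations
    values = np.random.randn(80)                   # synthetic observations
    fit = minimize(neg_log_likelihood, x0=np.log([1.0, 0.3, 0.1]),
                   args=(coords, values), method="Nelder-Mead")
    print(np.exp(fit.x))                           # estimated sill, range, nugget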
2

Ma, Chunyan. "Mathematical security models for multi-agent distributed systems". CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2568.

Abstract:
This thesis presents a taxonomy of the security threats in agent-based distributed systems. Based on this taxonomy, a set of theories is developed to facilitate analyzing the security threats of mobile-agent systems. We propose the idea of using the developed security risk graph to model the system's vulnerabilities.
3

Ethington, Corinna A. "The robustness of LISREL estimates in structural equation models with categorical data". Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/54504.

Abstract:
This study was an examination of the effect of type of correlation matrix on the robustness of LISREL maximum likelihood and unweighted least squares structural parameter estimates for models with categorical manifest variables. Two types of correlation matrices were analyzed: one containing Pearson product-moment correlations and one containing tetrachoric, polyserial, and product-moment correlations as appropriate. Using continuous variables generated according to the equations defining the population model, three cases were considered by dichotomizing some of the variables with varying degrees of skewness. When Pearson product-moment correlations were used to estimate associations involving dichotomous variables, the structural parameter estimates were biased when skewness was present in the dichotomous variables. Moreover, the degree of bias was consistent for both the maximum likelihood and unweighted least squares estimates. The standard errors of the estimates were found to be inflated, making significance tests unreliable. The analysis of mixed matrices produced average estimates that more closely approximated the model parameters except in the case where the dichotomous variables were skewed in opposite directions. However, since goodness-of-fit statistics and standard errors are not available in LISREL when tetrachoric and polyserial correlations are used, the unbiased estimates are not of practical significance. Until alternative computer programs are available that employ distribution-free estimation procedures that consider the skewness and kurtosis of the variables, researchers are ill-advised to employ LISREL in the estimation of structural equation models containing skewed categorical manifest variables.
Ph. D.
4

Weed, Richard Allen. "Computational strategies for three-dimensional flow simulations on distributed computing systems". Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/12154.

5

Witkowski, Walter Roy, 1961. "Simulation routine for the study of transient behavior of chemical processes". Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/276537.

6

Sathisan, Shashi Kumar. "Encapsulation of large scale policy assisting computer models". Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/101261.

Abstract:
In the past two decades policy assisting computer models have made a tremendous impact in the analysis of national security issues and the analysis of problems in various government affairs. SURMAN (Survivability Management) is a policy assisting model that has been developed for use in national security planning. It is a large scale model formulated using the system dynamics approach of treating a problem in its entirety rather than in parts. In this thesis, an encapsulation of SURMAN is attempted so as to sharpen and focus its ability to perform policy/design evaluation. It also aims to make SURMAN more accessible to potential users and to provide decision makers with a simple tool that does not require mainframe computers. To achieve these objectives, a personal/microcomputer version of SURMAN (PC SURMAN) and a series of curves relating inputs to outputs are developed. PC SURMAN reduces the complexity of SURMAN by dealing with generic aircraft. It details the essential survivability management parameters and their causal relationships through the life-cycle of aircraft systems. The model strives to link the decision parameters (inputs) to the measures of effectiveness (outputs). The principal decision variables identified are survivability, availability, and inventory of the aircraft system. The measures of effectiveness identified are the Increase Payload Delivered to Target Per Loss (ITDPL), Cost Elasticity of Targets Destroyed Per Loss (CETDPL), Combat Value Ratio (COMVR), Kill to Loss Ratio (KLR), and Decreased Program Life-Cycle Cost (DPLCC). The model provides an opportunity for trading off decision parameters. The trading off of survivability enhancement techniques against defense budget allocation parameters, to select those techniques/parameters with higher benefits and lower penalties, is discussed. The information relating inputs to outputs for the tradeoff analysis is presented graphically using curves derived from experimentally designed computer runs.
M.S.
7

Yan, Hongxiang. "From Drought Monitoring to Forecasting: a Combined Dynamical-Statistical Modeling Framework". PDXScholar, 2016. http://pdxscholar.library.pdx.edu/open_access_etds/3292.

Abstract:
Drought is the most costly hazard among all natural disasters. Despite the significant improvements in drought modeling over the last decade, the accurate and timely provision of drought conditions remains one of the major research challenges. In order to improve current drought monitoring and forecasting skill, this study presents a hybrid system that combines remotely sensed data assimilation based on particle filtering with a probabilistic drought forecasting model. Besides the proposed drought monitoring system built on land data assimilation, another novel aspect of this dissertation is the use of data assimilation to quantify land initial condition uncertainty, rather than relying entirely on the hydrologic model or the land surface model to generate a single deterministic initial condition. Monthly to seasonal drought forecasting products are generated using the updated initial conditions. The computational complexity of the distributed data assimilation system required a modular parallel particle filtering framework, which was developed and allows for a large ensemble size in the particle filtering implementation. The application of the proposed system is demonstrated with two case studies, at the regional scale (Columbia River Basin) and over the conterminous United States. Results from both synthetic and real case studies suggest that the land data assimilation system significantly improves drought monitoring and forecasting skills. These results also show how sensitive seasonal drought forecasting skill is to the initial conditions, which can lead to better facilitation of state and federal drought preparation and response actions.
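For orientation on the data assimilation step, the core of a bootstrap particle filter fits in a few lines. The sketch below is a generic Python/numpy illustration assuming user-supplied propagate and likelihood functions (for example, a land surface model step and an observation error model); it is not the dissertation's parallel implementation.

    import numpy as np

    def particle_filter_step(particles, weights, obs, propagate, likelihood, rng):
        particles = propagate(particles)                 # push each particle through the model
        weights = weights * likelihood(obs, particles)   # reweight by the new observation
        weights /= weights.sum()
        ess = 1.0 / np.sum(weights ** 2)                 # effective sample size
        if ess < 0.5 * len(weights):                     # resample when weights degenerate
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(weights), 1.0 / len(weights))
        return particles, weights

The updated particles then serve as the ensemble of initial conditions from which the forecast is launched.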
8

Hall, David Eric. "Transient thermal models for overhead current-carrying hardware". Thesis, Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/17133.

9

Shmeleva, Nataliya V. "Making sense of cDNA : automated annotation, storing in an interactive database, mapping to genomic DNA". Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/25178.

10

West, Karen Frances. "An extension to the analysis of the shift-and-add method: Theory and simulation (speckle, atmospheric turbulence, image restoration)". Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/188021.

Abstract:
The turbulent atmosphere degrades images of objects viewed through it by introducing random amplitude and phase errors into the optical wavefront. Various methods have been devised to obtain true images of such objects, including the shift-and-add method, which is examined in detail in this work. It is shown theoretically that shift-and-add processing may preserve diffraction-limited information in the resulting image, both in the point source and extended object cases, and the probability of ghost peaks in the case of an object consisting of two point sources is discussed. Also, a convergence rate for the shift-and-add algorithm is established and simulation results are presented. The combination of shift-and-add processing and Wiener filtering is shown to provide excellent image restorations.
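The core of the shift-and-add method is compact enough to state in code. A minimal numpy sketch follows, assuming a stack of short-exposure frames; real speckle pipelines add calibration, windowing and background subtraction.

    import numpy as np

    def shift_and_add(frames):
        # Align each short-exposure frame on its brightest speckle and average;
        # the co-added image can retain diffraction-limited structure.
        ny, nx = frames[0].shape
        acc = np.zeros((ny, nx))
        for frame in frames:
            iy, ix = np.unravel_index(np.argmax(frame), frame.shape)
            acc += np.roll(frame, (ny // 2 - iy, nx // 2 - ix), axis=(0, 1))
        return acc / len(frames)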
11

Gustafson, Nathaniel Lee. "A Confidence-Prioritization Approach to Data Processing in Noisy Data Sets and Resulting Estimation Models for Predicting Streamflow Diel Signals in the Pacific Northwest". BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3294.

Abstract:
Streams in small watersheds are often known to exhibit diel fluctuations, in which streamflow oscillates on a 24-hour cycle. Streamflow diel fluctuations, which we investigate in this study, are an informative indicator of environmental processes. However, in environmental data sets, as in many others, there is a range of noise associated with individual data points. Some points are extracted under relatively clear and defined conditions, while others may include a range of known or unknown confounding factors that decrease those points' validity. These points may or may not remain useful for training, depending on how much uncertainty they contain. We submit that in situations where some variability exists in the clarity, or confidence, associated with individual data points (notably environmental data), an approach that takes this confidence into account during the training phase is beneficial. We propose a methodological framework for assigning confidence to individual data records and augmenting training with that information. We then exercise this methodology on two separate datasets: a simulated data set, and a real-world environmental science data set focused on streamflow diel signals. The simulated data set provides an integral understanding of the nature of the data involved, and the environmental science data set provides a real-world case study of applying this methodology to noisy data. Both studies' results indicate that applying and utilizing confidence in training increases performance and assists the data mining process.
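One common way to realize this kind of confidence-prioritized training is to pass the per-record confidences as sample weights to the learner. The fragment below is a minimal sketch with scikit-learn and synthetic data; the thesis's own weighting scheme may differ.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    X = np.random.rand(200, 5)                     # predictors
    y = X @ np.random.rand(5) + 0.1 * np.random.randn(200)
    confidence = np.random.rand(200)               # per-record confidence in [0, 1]

    model = GradientBoostingRegressor()
    model.fit(X, y, sample_weight=confidence)      # low-confidence records contribute less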
12

Murrel, Benjamin. "Improved models of biological sequence evolution". Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/71870.

Abstract:
Thesis (PhD)--Stellenbosch University, 2012.
Computational molecular evolution is a field that attempts to characterize how genetic sequences evolve over phylogenetic trees – the branching processes that describe the patterns of genetic inheritance in living organisms. It has a long history of developing progressively more sophisticated stochastic models of evolution. Through a probabilist’s lens, this can be seen as a search for more appropriate ways to parameterize discrete state continuous time Markov chains to better encode biological reality, matching the historical processes that created empirical data sets, and creating useful tools that allow biologists to test specific hypotheses about the evolution of the organisms or the genes that interest them. This dissertation is an attempt to fill some of the gaps that persist in the literature, solving what we see as existing open problems. The overarching theme of this work is how to better model variation in the action of natural selection at multiple levels: across genes, between sites, and over time. Through four published journal articles and a fifth in preparation, we present amino acid and codon models that improve upon existing approaches, providing better descriptions of the process of natural selection and better tools to detect adaptive evolution.
13

Zhu, Tulong. "Meshless methods in computational mechanics". Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/11795.

14

Theodoridis, John Apostolis, 1972. "Borehole electromagnetic prospecting for weak conductors". Monash University, School of Geosciences, 2004. http://arrow.monash.edu.au/hdl/1959.1/5225.

15

王日昇 and Yat-sing Wong. "Production scheduling for virtual cellular manufacturing systems". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31239468.

16

Camp, Nicholas Julian. "A model for the time dependent behaviour of rock joints". Master's thesis, University of Cape Town, 1989. http://hdl.handle.net/11427/21138.

Abstract:
This thesis is a theoretical investigation into the time-dependent behaviour of rock joints. Much of the research work that has been conducted to date in the area of finite element analysis has been involved with the development of special elements to deal with these discontinuities. A comprehensive literature survey is undertaken highlighting some of the significant contributions to the modelling of joints. It is then shown how internal variables can be used to model discontinuities in the rock mass. A finite element formulation is described, resulting in a system of equations which can easily be adapted to cope with various constitutive behaviours on the discontinuities. In particular, a viscoplastic relationship, which uses a homogeneous, hyperbolic yield function, is adopted. The viscoplastic relationship can be used for both time-dependent (creep) and quasi-static (elasto-plastic) problems. Time-dependent behaviour requires a time integration scheme, and therefore a generalised explicit/implicit scheme is chosen. The resulting numerical algorithms are all implemented in the finite element program NOSTRUM. Various examples are presented to illustrate certain features of both the formulation and the numerical algorithm. Jointed rock beams and a jointed infinite rock mass are modelled assuming plane strain conditions. Reasons are proposed to explain the predicted behaviour. The results of the analysis show that the internal variable formulation successfully models time-dependent joint movements in a continuous medium. The method gives good, qualitative results which agree with observations in deep level mines. It is recommended that quantitative mine observations be used to calibrate the model so that usable predictions of joint movement can be made. This would enable any new developments to be implemented in the model. Further work on implicit methods might allow greater modelling flexibility by reducing computer run times.
17

Beller, Douglas K. "Alternate Computer Models of Fire Convection Phenomena for the Harvard Computer Fire Code". Digital WPI, 2000. https://digitalcommons.wpi.edu/etd-theses/892.

Abstract:
"Alternate models for extended ceiling convection heat transfer and ceiling vent mass flow for use in the Harvard Computer fire Code are developed. These models differ from current subroutines in that they explicitly consider the ceiling jet resulting from the fire plume of a burning object. The Harvard Computer fire Code (CFC) was used to compare the alternate models against the models currently used in CFC at Worcester Polytechnic Institute and with other available data. The results indicate that convection heat transfer to the ceiling of the enclosure containing the fire may have been previously underestimated at times early in the fire. Also, the results of the ceiling vent model provide new insight into ceiling vent phenomena and how ceiling vents can be modeled given sufficient experimental data. this effort serves as a qualitative verification of the models as implemented; complete quantitative verification requires further experimentation. Recommendations are also included so that these alternate models may be enhanced further. "
18

Hayes, Thomas S. "Evaluation of a refined lattice dome model". Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/56187.

Abstract:
A general review of lattice dome geometry and connection details leads to a modeling approach that introduces intermediate elements to represent connections. The method provides improved modeling of joint behavior and flexibility for comparative studies. The discussion of lattice domes is further specialized for parallel lamella geometry. A procedure is developed for minimizing the number of different member lengths. This procedure is incorporated into a program which generates the geometric data for a specified dome. The model is developed from a background which considers commercial space frame systems, static and dynamic loads, and modeling techniques using ABAQUS, a finite element program. An optional output of the generation program creates input data for ABAQUS. Modal analysis, static design loads, and earthquake loads are used in the evaluation of the model.
Master of Science
19

Nifong, Nathaniel H. "Learning General Features From Images and Audio With Stacked Denoising Autoencoders". PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1550.

Abstract:
One of the most impressive qualities of the brain is its neuro-plasticity. The neocortex has roughly the same structure throughout its whole surface, yet it is involved in a variety of different tasks from vision to motor control, and regions which once performed one task can learn to perform another. Machine learning algorithms which aim to be plausible models of the neocortex should also display this plasticity. One such candidate is the stacked denoising autoencoder (SDA). SDAs have shown promising results in the field of machine perception, where they have been used to learn abstract features from unlabeled data. In this thesis I develop a flexible distributed implementation of an SDA and train it on images and audio spectrograms to experimentally determine properties comparable to neuro-plasticity. Specifically, I compare the visual-auditory generalization between a multi-level denoising autoencoder trained with greedy, layer-wise pre-training (GLWPT) and one trained without it. I test the hypothesis that multi-modal networks will perform better than uni-modal networks due to the greater generality of the features that may be learned. Furthermore, I also test the hypothesis that the magnitude of improvement gained from this multi-modal training is greater when GLWPT is applied than when it is not. My findings indicate that these hypotheses were not confirmed, but that GLWPT still helps multi-modal networks adapt to their second sensory modality.
20

Nakakita, Kunio. "Toward real-time aero-icing simulation using reduced order models". Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99781.

Abstract:
Even though the power of supercomputers has increased extraordinarily, there is still an insatiable need for more advanced multi-disciplinary CFD simulations in the aircraft analysis and design fields. A particular current interest is in the realistic three-dimensional fully viscous turbulent flow simulation of the highly non-linear aspects of aero-icing. This highly complex simulation is still computationally too demanding for industry, especially when several runs, such as parametric studies, are needed. In order to make such compute-intensive simulations more affordable, this work presents a reduced order modeling approach, based on the "Proper Orthogonal Decomposition" (POD) method, to predict a wider swath of flow fields and ice shapes from a limited number of "snapshots" obtained from complete high-fidelity CFD computations. The procedure of the POD approach is to first decompose the fields into modes, using a limited number of full-calculation snapshots, and then to reconstruct the fields and/or ice shapes from those decomposed modes at other conditions, leading to reduced order calculations. The use of the POD technique drastically reduces the computational cost and can provide a more complete map of the performance degradation of an iced aircraft over a wide range of flight and weather conditions.
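The POD step itself reduces to a singular value decomposition of a snapshot matrix. A minimal numpy sketch follows, with random data standing in for the high-fidelity CFD snapshots; it shows mode extraction and reconstruction only, not the icing-specific machinery.

    import numpy as np

    snapshots = np.random.rand(10000, 20)          # each column: one high-fidelity solution
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

    k = 5                                          # retain the k most energetic modes
    modes = U[:, :k]
    coeffs = modes.T @ (snapshots[:, [0]] - mean)  # project a field onto the modes
    field = mean + modes @ coeffs                  # low-order reconstruction

New flow fields or ice shapes at unseen conditions are then approximated by interpolating the modal coefficients rather than rerunning the full CFD computation.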
21

Chakraborty, Amal. "An integrated computer simulator for surface mine planning and design". Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/90920.

Abstract:
In the increasingly competitive coal market, it is becoming more important for coal operators to develop mathematical models for surface mining which can estimate mining costs before the actual mining begins. The problem becomes even more acute with the new reclamation laws, as they affect surface coal mining methods, productivity, and costs. This study presents a computer simulator for a mountaintop removal type of surface mining operation. It will permit users to compare the costs associated with different overburden handling and reclamation plans. It may be used to minimize productivity losses and, perhaps, to increase productivity and consequently to reduce operating costs through design and implementation of modified mountaintop removal methods.
M.S.
22

Moore, Thomas P. "Optimal design, procurement and support of multiple repairable equipment and logistic systems". Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/71158.

Abstract:
A concept for the mathematical modeling of multiple repairable equipment and logistic systems (MREAL systems) is developed. These systems consist of multiple populations of repairable equipment, and their associated design, procurement, maintenance, and supply support. MREAL systems present management and design problems which parallel the management and design of multiple, consumable item inventory systems. However, the MREAL system is more complex since it has a repair component. The MREAL system concept is described in a classification hierarchy which attempts to categorize the components of such systems. A specific mathematical model (MREAL1) is developed for a subset of these components. Included in MREAL1 are representations of the equipment reliability and maintainability design problem, the maintenance capacity problem, the retirement age problem, and the population size problem, for each of the multiple populations. MREAL1 models the steady state stochastic behavior of the equipment repair facilities using an approximation which is based upon the finite source, multiple server queuing system. System performance measures included in MREAL1 are: the expected MREAL total system life cycle cost (including a shortage cost penalty); the steady state expected number of shortages; the probability of catastrophic failure in each equipment population; and two budget based measures of effectiveness. Two optimization methods are described for a test problem developed for MREAL1. The first method computes values of the objective function and the constraints for a specified subset of the solution space. The best feasible solution found is recorded. This method can also examine all possible solutions, or can be used in a manual search. The second optimization method performs an exhaustive enumeration of the combinatorial programming portion of MREAL1, which represents equipment design. For each enumerated design combination, an attempt is made to find the optimal solution to the remaining nonlinear discrete programming problem. A sequential unconstrained minimization technique is used which is based on an augmented Lagrangian penalty function adapted to the integer nature of MREAL1. The unconstrained minimization is performed by a combination of Rosenbrock's search technique, the steepest descent method, and Fibonacci line searches, adapted to the integer nature of the search. Since the model contains many discrete local minima, the sequential unconstrained minimization is repeated from different starting solutions, based upon a heuristic selection procedure. A gradient projection method provides the termination criteria for each unconstrained minimization.
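The repair-facility approximation mentioned here rests on the finite-source, multiple-server queue, whose steady state follows from a standard birth-death recursion. The Python sketch below uses illustrative parameter values only.

    import numpy as np

    def repair_queue_dist(N, s, lam, mu):
        # Steady state of a finite-source (machine-repair) queue: N units,
        # s repair channels, per-unit failure rate lam, per-channel repair rate mu.
        p = [1.0]
        for n in range(N):
            birth = (N - n) * lam            # failures from the units still operating
            death = min(n + 1, s) * mu       # busy repair channels
            p.append(p[-1] * birth / death)
        p = np.array(p)
        return p / p.sum()

    dist = repair_queue_dist(N=10, s=2, lam=0.1, mu=1.0)
    expected_down = (np.arange(11) * dist).sum()   # mean number of units in repair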
Ph. D.
23

陸穎剛 and Wing-kong Luk. "Concept space approach for cross-lingual information retrieval". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B30147724.

24

Hwang, Kuo-Ping. "Applying heuristic traffic assignment in natural disaster evacuation: a decision support system". Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/54455.

Abstract:
The goal of this research is to develop a heuristic traffic assignment method to simulate the traffic flow of a transportation network at real-time speed. The existing assignment methods are reviewed and a heuristic path-recording assignment method is proposed. Using the new heuristic assignment method, trips are loaded onto the network probabilistically for the first iteration; paths are recorded, and path impedance is computed as the basis for further assignment iterations. The real-time traffic assignment model developed with the new assignment method is called HEUPRAE. The difference in link traffic between this new assignment and Dial's multipath assignment ranges from 10 to 25 percent. The saving in computer time is about 55 percent. The proposed heuristic path-recording assignment is believed to be an efficient and reliable method. Successful development of this heuristic assignment method helps solve those transportation problems which need assignment results at real-time speed and for which the assignment process lasts a couple of hours. Evacuation planning and operations are well suited to the application of this real-time heuristic assignment method. Evacuation planning and operations are major activities in emergency management. Evacuation planning instructs people where to go, which route to take, and the time needed to accomplish an evacuation. Evacuation operations help the execution of an evacuation plan in response to the changing nature of a disaster. The Integrated Evacuation Decision Support System (IEDSS) is a computer system which employs the evacuation planning model MASSVAC2 and the evacuation operation model HEUPRAE to deal with evacuations. The IEDSS uses computer graphics to prepare input and interpret output. It helps a decision maker analyze the evacuation system, review evacuation plans, and issue an evacuation order at the proper time. Users of the IEDSS can work on evacuation problems in a friendly, interactive, visual environment. The application of the IEDSS to hurricane and flood problems for the city of Virginia Beach shows how the IEDSS is practically implemented. It proves the usefulness of the IEDSS in coping with disasters.
Ph. D.
25

McEntire, Barney Joseph. "Biodynamic modeling enhancement to KRASH program". Thesis, Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/12164.

26

Bernier, Thomas. "Development of an algorithmic method for the recognition of biological objects". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ29656.pdf.

27

Ngaye, Zonke. "User experience metrics for Dr Math". Thesis, Nelson Mandela Metropolitan University, 2012. http://hdl.handle.net/10948/d1012036.

Abstract:
The purpose of this research study is to propose guidelines for providing a positive user experience for pupils using Dr Math®. User experience was found to have a positive impact on the acceptance and adoption of a product; thus the proposed guidelines contribute to maximizing the adoption and acceptance of Dr Math® among pupils. This study begins with an introductory chapter that describes the problem that forms the basis for this research. The chapter defines the objectives that this study is intended to achieve in order to accomplish its ultimate goal. The methodology followed to conduct this research study, as well as its scope, are also defined here. The results from a preliminary survey revealed that despite its potential accessibility, Dr Math® has a low adoption rate. However, when compared to other mobile learning (m-learning) applications for mathematics learning, Dr Math® is more popular. Thus Dr Math® was selected as a case for study. Chapter 2 of this study provides a detailed description of Dr Math® as a local mobile application for mathematics learning. It was found that the affordability and accessibility of Dr Math® did not necessarily imply a high adoption rate. There are various possible barriers behind its low adoption. User experience (UX), which is the focus of this study, is one of them. Thus, a subsequent chapter deals with UX. Chapter 3 discusses UX, its scope, components and definition, and places particular emphasis on its significance in the success of any product. The chapter also highlights the characteristics of a positive UX and the importance of designing for this outcome. In Chapter 4, the methodology used to conduct this research is discussed and justified. This study primarily employs a qualitative inductive approach within an interpretivist paradigm. An exploratory single case study was used to obtain an in-depth analysis of the case. Data was collected using Dr Math® log files as a documentary source. Gathered data was then analysed and organized into themes and categories using qualitative content analysis, as outlined in Chapter 5, where the findings, mainly the factors found to have an impact on user interaction with Dr Math®, are also presented. The identified factors served as a basis from which the guidelines presented in Chapter 6 were developed. Chapter 7 presents the conclusions and recommendations of the research. From both theoretical and empirical work, it was concluded that Dr Math® has the potential to improve mathematics learning in South Africa. Its adoption rate, however, is not satisfactory: hence the investigation of the factors impacting on user interaction with Dr Math®, on which the proposed guidelines are based.
28

Chartree, Jedsada. "Monitoring Dengue Outbreaks Using Online Data". Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc500167/.

Abstract:
Internet technology has affected humans' lives in many disciplines. The search engine is one of the most important Internet tools in that it allows people to search for what they want. Search queries entered in a web search engine can be used to predict dengue incidence. This vector-borne disease causes severe illness and kills a large number of people every year. This dissertation utilizes the capabilities of search queries related to dengue and climate to forecast the number of dengue cases. Several machine learning techniques are applied for data analysis, including Multiple Linear Regression, Artificial Neural Networks, and the Seasonal Autoregressive Integrated Moving Average (SARIMA) model. Predictive models produced from these machine learning methods are measured for their performance to find which technique generates the best model for dengue prediction. The results of experiments presented in this dissertation indicate that search query data related to dengue and climate can be used to forecast the number of dengue cases. The performance measurement of predictive models shows that Artificial Neural Networks outperform the others. These results will help public health officials in planning to deal with the outbreaks.
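Of the techniques listed, the seasonal ARIMA model with search-query regressors can be sketched directly with statsmodels. The orders and data below are placeholders, not the dissertation's fitted specification.

    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    cases = np.random.poisson(50, size=120).astype(float)  # monthly dengue counts (synthetic)
    queries = np.random.rand(120, 2)                       # search-query predictors (synthetic)

    model = SARIMAX(cases, exog=queries, order=(1, 0, 1),
                    seasonal_order=(1, 0, 1, 12))          # 12-month seasonality
    result = model.fit(disp=False)
    forecast = result.forecast(steps=3, exog=np.random.rand(3, 2))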
29

Lam, Fung, and 林峰. "Internet inter-domain traffic engineering and optimization". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31224581.

30

Wang, Yongqiang, and 王永強. "A study on structured covariance modeling approaches to designing compact recognizers of online handwritten Chinese characters". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42664305.

31

Whitaker, David Lee. "Supersonic conical flow computations using a rectangular finite volume method". Thesis, Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/101335.

Abstract:
A method is developed to solve the conical flow equations in spherical coordinates using a rectangular finite volume approach. The only mapping done is the mapping of the spherical solution surface to that of a flat plane using a stereographic projection. The mapped plane is then discretised into rectangular finite volumes. The rectangular volumes are allowed to intersect the body surface in an arbitrary manner. A full potential formulation is used to represent the flow-field velocities. The full potential formulation prevents the formation of vortices in the flow-field but all other essential features of the supersonic conical flow are resolved. An upwind density shift is used to introduce an artificial viscosity in a conservative manner to eliminate non-physical expansion shocks and add numerical damping. The rectangular finite volume method is then extended to deal with infinitely thin conical fins. Numerical tests of cones, elliptical cones, conical wing-bodies and waveriders (with very thin winglets) have been done. Very good agreement with experimental results is found.
M.S.
32

Garrett, Joseph Lee. "A comparison of flux-splitting algorithms for the Euler equations with equilibrium air chemistry". Thesis, Virginia Tech, 1989. http://hdl.handle.net/10919/44636.

Abstract:

The use of flux-splitting techniques on the Euler equations is considered for high Mach number, high temperature flows in which the fluid is assumed to be inviscid air in equilibrium. Three different versions of real gas extensions to the Steger-Warming and Van Leer flux-vector splitting, and four different versions of real gas extensions to the Roe flux-difference splitting, are compared with regard to general applicability and ease of implementation in existing perfect gas algorithms. Test computations are performed for the M = 5, high temperature flow over a 10-degree wedge and the M = 24.5 flow over a blunt body. Although there were minor differences between the computed results for the three types of flux-splitting algorithms considered, little variation is observed between different versions of the same algorithm.
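For orientation, Van Leer's flux-vector splitting for the one-dimensional Euler equations (subsonic branch) can be written and checked in a few lines. This is the textbook perfect-gas form, not one of the real-gas extensions the thesis compares.

    import numpy as np

    def van_leer_split(rho, u, p, gamma=1.4):
        # Van Leer splitting for |M| <= 1; F = F_plus + F_minus by construction.
        a = np.sqrt(gamma * p / rho)                 # speed of sound
        M = u / a
        fluxes = []
        for s in (+1.0, -1.0):
            fm = s * 0.25 * rho * a * (M + s) ** 2   # split mass flux
            fmom = fm * ((gamma - 1.0) * u + s * 2.0 * a) / gamma
            fen = fm * ((gamma - 1.0) * u + s * 2.0 * a) ** 2 / (2.0 * (gamma ** 2 - 1.0))
            fluxes.append(np.array([fm, fmom, fen]))
        return fluxes

    rho, u, p, gamma = 1.0, 50.0, 1.0e5, 1.4         # a subsonic test state
    Fp, Fm = van_leer_split(rho, u, p, gamma)
    E = p / (gamma - 1.0) + 0.5 * rho * u ** 2
    F = np.array([rho * u, rho * u ** 2 + p, u * (E + p)])
    print(np.allclose(Fp + Fm, F))                   # True: the split parts sum to the flux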


Master of Science
33

Shortt, James S. "A comparison of forest growth and yield models for inventory updating". Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-01102009-063919/.

34

Smith, Edwin L. "A system dynamics computer model for long-term water quality planning". Thesis, Virginia Tech, 1985. http://hdl.handle.net/10919/41562.

Abstract:

The objective of this study was to develop a comprehensive, basin-wide, water-quality-planning model using system dynamics methodology. Later, the model was to be interfaced with a more conventional system dynamics model: one simulating social, technological, economic, and political interactions. By doing so, it is envisioned that such management policies as zoning, abatement facilities, and best management practices may be simulated together.


Master of Science
35

Gwaze, Arnold Rumosa. "A Cox proportional hazard model for mid-point imputed interval censored data". Thesis, University of Fort Hare, 2011. http://hdl.handle.net/10353/385.

Abstract:
There has been increasing interest in survival analysis with interval-censored data, where the event of interest (such as infection with a disease) is not observed exactly but is only known to happen between two examination times. However, because research has concentrated on right-censored data, statistical tests and techniques for right-censoring are abundant, while methods for interval-censored data are comparatively scarce. In this study, right-censoring methods are used to fit a proportional hazards model to interval-censored data. The interval-censored observations were transformed using mid-point imputation, a method which assumes that an event occurs at the midpoint of its recorded interval. The results gave conservative regression estimates, but a comparison with the conventional methods showed that the estimates were not significantly different. However, the censoring mechanism and interval lengths should be given serious consideration before deciding to use mid-point imputation on interval-censored data.
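Mid-point imputation followed by a standard right-censoring fit is straightforward to express in code. Below is a minimal Python sketch assuming the lifelines package; the column names and toy numbers are invented for illustration.

    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.DataFrame({
        "left":  [2.0, 4.0, 1.0, 3.0, 5.0, 2.0],   # last exam before the event
        "right": [4.0, 6.0, 5.0, 7.0, 8.0, 6.0],   # first exam at which it was seen
        "event": [1, 1, 0, 1, 1, 0],               # 0 = censored
        "age":   [50, 61, 47, 55, 66, 52],
    })
    # Mid-point imputation: place each event at the middle of its interval
    df["time"] = (df["left"] + df["right"]) / 2.0

    cph = CoxPHFitter()
    cph.fit(df[["time", "event", "age"]], duration_col="time", event_col="event")
    cph.print_summary()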
36

Mousumi, Fouzia Ashraf. "Exploiting the probability of observation for efficient Bayesian network inference". Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Mathematics and Computer Science, 2013. http://hdl.handle.net/10133/3457.

Abstract:
It is well-known that the observation of a variable in a Bayesian network can affect the effective connectivity of the network, which in turn affects the efficiency of inference. Unfortunately, the observed variables may not be known until runtime, which limits the amount of compile-time optimization that can be done in this regard. This thesis considers how to improve inference when users know the likelihood of a variable being observed. It demonstrates how these probabilities of observation can be exploited to improve existing heuristics for choosing elimination orderings for inference. Empirical tests over a set of benchmark networks using the Variable Elimination algorithm show reductions of up to 50% and 70% in multiplications and summations, as well as runtime reductions of up to 55%. Similarly, tests using the Elimination Tree algorithm show reductions by as much as 64%, 55%, and 50% in recursive calls, total cache size, and runtime, respectively.
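The idea of biasing an elimination-ordering heuristic by observation probabilities can be made concrete. The function below is an illustrative greedy min-degree variant in Python in which each neighbour is discounted by its chance of being observed (an observed variable drops out of the factors at runtime); it is a toy under stated assumptions, not the thesis's exact heuristic.

    def elimination_order(adj, p_obs):
        # adj: {var: set of neighbours} in the moral graph
        # p_obs: {var: probability the variable will be observed}
        adj = {v: set(nbrs) for v, nbrs in adj.items()}
        order = []
        while adj:
            def cost(v):
                return sum(1.0 - p_obs.get(u, 0.0) for u in adj[v])
            v = min(adj, key=cost)
            order.append(v)
            nbrs = adj.pop(v)
            for u in nbrs:                 # connect v's former neighbours (fill-in)
                adj[u] |= (nbrs - {u})
                adj[u].discard(v)
        return order

    adj = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
    print(elimination_order(adj, {"B": 0.9}))  # variables next to B become cheap to eliminate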
xi, 88 leaves : ill. ; 29 cm
37

Simpson, Charles Robert Jr. "Analysis of Passive End-to-End Network Performance Measurements". Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14612.

Abstract:
NETI@home, a distributed network measurement infrastructure that collects passive end-to-end network measurements from Internet end-hosts, was developed and discussed. The data collected by this infrastructure, as well as other datasets, were used to conduct studies on the behavior of the network and network users as well as the security issues affecting the Internet. A flow-based comparison of honeynet traffic, representing malicious traffic, and NETI@home traffic, representing typical end-user traffic, was conducted. This comparison showed that a large portion of flows in both datasets were failed and potentially malicious connection attempts. We additionally found that worm activity can linger for more than a year after the initial release date. Malicious traffic was also found to originate from across the allocated IP address space. Other security-related observations made include the suspicious use of ICMP packets and attacks on our own NETI@home server. Utilizing observed TTL values, studies were also conducted into the distance of Internet routes and the frequency with which they vary. The frequency and use of network address translation and the private IP address space were also discussed. Various protocol options and flags were analyzed to determine their adoption and use by the Internet community. Network-independent empirical models of end-user network traffic were derived for use in simulation. Two such models were created: the first modeled traffic for a specific TCP or UDP port, and the second modeled all TCP or UDP traffic for an end-user. These models were implemented and used in GTNetS. Further anonymization of the dataset and the public release of the anonymized data and their associated analysis tools were also discussed.
38

Devadason, Tarith Navendran. "The virtual time function and rate-based schedulers for real-time communications over packet networks". University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0108.

Abstract:
[Truncated abstract] The accelerating pace of convergence of communications from disparate application types onto common packet networks has made quality of service an increasingly important and problematic issue. Applications of different classes have diverse service requirements at distinct levels of importance. Also, these applications offer traffic to the network with widely variant characteristics. Yet a common network is expected at all times to meet the individual communication requirements of each flow from all of these application types. One group of applications that has particularly critical service requirements is the class of real-time applications, such as packet telephony. They require both the reproduction of a specified timing sequence at the destination, and nearly instantaneous interaction between the users at the endpoints. The associated delay limits (in terms of upper bound and variation) must be consistently met; at every point where these are violated, the network transfer becomes worthless, as the data cannot be used at all. In contrast, other types of applications may suffer appreciable deterioration in quality of service as a result of slower transfer, but the goal of the transfer can still largely be met. The goal of this thesis is to evaluate the potential effectiveness of a class of packet scheduling algorithms in meeting the specific service requirements of real-time applications in a converged network environment. Since the proposal of Weighted Fair Queueing, there have been several schedulers suggested to be capable of meeting the divergent service requirements of both real-time and other data applications. ... This simulation study also sheds light on false assumptions that can be made about the isolation produced by start-time and finish-time schedulers based on the deterministic bounds obtained. The key contributions of this work are as follows. We clearly show how the definition of the virtual time function affects both delay bounds and delay distributions for a real-time flow in a converged network, and how optimality is achieved. Despite apparent indications to the contrary from delay bounds, the simulation analysis demonstrates that start-time rate-based schedulers possess useful characteristics for real-time flows that the traditional finish-time schedulers do not. Finally, it is shown that all the virtual time rate-based schedulers considered can produce isolation problems over multiple hops in networks with high loading. It becomes apparent that the benchmark First-Come-First-Served scheduler, with spacing and call admission control at the network ingresses, is a preferred arrangement for real-time flows (although lower priority levels would also need to be implemented for dealing with other data flows).
39

Deragisch, Patricia Amelia. "Electronic portfolio for mathematical problem solving in the elementary school". CSUSB ScholarWorks, 1997. https://scholarworks.lib.csusb.edu/etd-project/1299.

Abstract:
Electronic portfolio for mathematical problem solving in the elementary school is an authentic assessment tool for teachers and students to use in evaluating mathematical skills. It is a computer-based interactive software program that allows teachers to easily access student work in the problem-solving area for assessment purposes and to store multimedia work samples over time.
40

Hon, Alan, 1976. "Compressive membrane action in reinforced concrete beam-and-slab bridge decks". Monash University, Dept. of Civil Engineering, 2003. http://arrow.monash.edu.au/hdl/1959.1/5629.

41

Shi, Zhenwu. "Non-worst-case response time analysis for real-time systems design". Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51827.

Abstract:
A real-time system is a system in which the correctness of operations depends not only on the logical results, but also on the time at which these results are available. A fundamental problem in designing real-time systems is to analyze the response time of operations, defined as the time elapsed from the moment an operation is requested to the moment it is completed. Response time analysis is challenging due to the complex dynamics among operations. A common technique is to study response time under a worst-case scenario. However, using worst-case response time may lead to conservative real-time system designs. To improve real-time system design, we analyze the non-worst-case response time of operations and apply these results in the design process. The main contribution of this thesis includes mathematical modeling of real-time systems, calculation of non-worst-case response times, and improved real-time system design. We perform analysis and design on three common types of real-time systems: real-time computing systems, real-time communication networks, and real-time energy management. For real-time computing systems, our non-worst-case response time analysis leads to a necessary and sufficient online schedulability test and a measure of the robustness of real-time systems. For real-time communication networks, our non-worst-case response time analysis improves the performance of model predictive control designs built on real-time communication networks. For real-time energy management, we use the non-worst-case response time to check whether a micro-grid can operate independently from the main grid.
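For contrast with the non-worst-case analysis developed in the thesis, the classical worst-case response time of a fixed-priority task is the fixed point of the recurrence R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j. A small Python sketch with made-up task parameters:

    import math

    def worst_case_response_time(C, T, i):
        # Fixed-point iteration for task i; tasks 0..i-1 have higher priority.
        R = C[i]
        while True:
            R_next = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
            if R_next == R:
                return R
            if R_next > T[i]:              # deadline (taken equal to the period) missed
                return None
            R = R_next

    print(worst_case_response_time(C=[1, 2, 3], T=[4, 8, 16], i=2))   # -> 7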
42

Dvorak, Gary John. "Economic analysis of irrigation pumping plants". Thesis, Kansas State University, 1985. http://hdl.handle.net/2097/9834.

43

Luxhoj, James T. "A dynamic programming approach to the multi-stream replacement problem". Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/49829.

44

Olivier, Brett Gareth. "Simulation and database software for computational systems biology : PySCes and JWS Online". Thesis, Stellenbosch : Stellenbosch University, 2005. http://hdl.handle.net/10019.1/50449.

Abstract:
Thesis (PhD)--Stellenbosch University, 2005.
ENGLISH ABSTRACT: Since their inception, biology and biochemistry have been spectacularly successful in characterising the living cell and its components. As the volume of information about cellular components continues to increase, we need to ask how we should use this information to understand the functioning of the living cell? Computational systems biology uses an integrative approach that combines theoretical exploration, computer modelling and experimental research to answer this question. Central to this approach is the development of computational models, new modelling strategies and computational tools. Against this background, this study aims to: (i) develop a new modelling package: PySCeS, (ii) use PySCeS to study discontinuous behaviour in a metabolic pathway in a way that was very difficult, if not impossible, with existing software, (iii) develop an interactive, web-based repository (JWS Online) of cellular system models. Three principles that, in our opinion, should form the basis of any new modelling software were laid down: accessibility (there should be as few barriers as possible to PySCeS use and distribution), flexibility (pySCeS should be extendable by the user, not only the developers) and usability (PySCeS should provide the tools we needed for our research). After evaluating various alternatives we decided to base PySCeS on the freely available programming language, Python, which, in combination with the large collection of science and engineering algorithms in the SciPy libraries, would give us a powerful modern, interactive development environment.
AFRIKAANS ABSTRACT: Since their inception, biology and, more specifically, biochemistry have been extremely successful in characterising the components of the living cell. The amount of information about the molecular constituents of the cell still grows daily; we must therefore ask ourselves how this information can be integrated into a comprehensible description of the workings of the living cell. To answer this question, computational systems biology uses an integrated approach that combines theory, computational modelling and experimental research. Central to this approach is the development of new models, modelling strategies and software. Against this background, the main aims of this project are: (i) the development of a new modelling package, PySCeS; (ii) the use of PySCeS to study discontinuous behaviour in a metabolic system (something that is rather difficult with currently available software); and (iii) the development of an interactive, internet-based repository of cellular system models, JWS Online. We are of the opinion that new software should be based on three important principles: accessibility (the software must be easy to obtain and to use), flexibility (the user must be able to modify and further develop PySCeS) and usability (all the functionality we need for our research must be built into PySCeS). We considered several options and decided to use the freely available programming language Python together with SciPy, a large collection of scientific algorithms. This combination provides a powerful, interactive development and user environment. PySCeS was developed to run under both the Windows and Linux operating systems and, more specifically, to use a command line interface. This means that PySCeS will work on any interactive computer terminal that supports Python. This property also makes it possible to use PySCeS as a modelling component in a larger software package under any operating system that supports Python. PySCeS is based on a modular design, which makes it possible for the end user to develop the software's source code further. As an application, PySCeS was used to investigate the cause of the hysteretic behaviour of a linear, end-product-inhibited metabolic pathway. We discovered this interesting behaviour in a previous study but could not pursue it with the software available to us at the time. With PySCeS's built-in ability to perform parameter continuation, we could fully characterise the causes of this discontinuous behaviour. Furthermore, we developed a new method of visualising this behaviour as an interaction between the subcomponents of the complete system. During the development of PySCeS we noticed that it was very difficult to rebuild and study metabolic models published in the literature. This situation is largely a consequence of the fact that no central repository for metabolic models exists (as it does for genomic data or protein structures). The JWS Online database was developed specifically to fill this gap. JWS Online makes it possible for the user to study published models via the internet, without installing any specialised modelling software, and also to download them for use with other modelling packages such as PySCeS. JWS Online has already become an indispensable resource for systems biology research and education.
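The hysteresis study described in both abstracts can be made concrete with a small numerical sketch. The following is a hypothetical illustration in plain NumPy/SciPy rather than the PySCeS input format or API; the rate laws, parameter values and sweep range are invented for illustration, and whether a given parameter set actually produces bistability has to be checked numerically.

```python
# A minimal sketch (plain NumPy/SciPy, not PySCeS) of a linear pathway
# X0 -> S1 -> S2 -> X1 whose supply step is inhibited by the end product S2.
# All rate laws and parameter values below are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def pathway(t, s, vmax1, ki, h):
    s1, s2 = s
    v1 = vmax1 / (1.0 + (s2 / ki) ** h)   # supply step, inhibited by S2
    v2 = 5.0 * s1 / (1.0 + s1)            # Michaelis-Menten intermediate step
    v3 = 2.0 * s2 / (4.0 + s2)            # demand step
    return [v1 - v2, v2 - v3]

# Crude "continuation": march vmax1 up and then back down, reusing the last
# relaxed state as the next initial condition. If the system is bistable,
# the up and down sweeps settle on different steady-state branches.
s0 = [0.1, 0.1]
for vmax1 in list(np.linspace(1.0, 10.0, 19)) + list(np.linspace(10.0, 1.0, 19)):
    sol = solve_ivp(pathway, (0.0, 500.0), s0, args=(vmax1, 1.0, 4.0))
    s0 = sol.y[:, -1]                     # relaxed (quasi-)steady state
    print(f"vmax1 = {vmax1:5.2f}  ->  S1 = {s0[0]:.3f}, S2 = {s0[1]:.3f}")
```

A dedicated continuation routine, as built into PySCeS, traces the unstable branch as well; the naive sweep above can only reveal the two stable branches and the jumps between them.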
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
45

Botha, Stephen Gordon. "The effect of evolutionary rate estimation methods on correlations observed between substitution rates in models of evolution". Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/19938.

Full text of the source
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
46

Majeke, Lunga. "Preliminary investigation into estimating eye disease incidence rate from age specific prevalence data". Thesis, University of Fort Hare, 2011. http://hdl.handle.net/10353/464.

Full text of the source
Abstract:
This study presents a methodology for estimating incidence rates from the age-specific prevalence data of three different eye diseases. We consider both situations, with and without the disease, in which mortality may differ from one person to another. The method used, developed by Marvin J. Podgor for estimating incidence rates from prevalence data, applies logistic regression to obtain smoothed prevalence rates from which the incidence rate is derived. The study concluded that logistic regression can produce a meaningful model, and that the incidence rates of these diseases were not affected by the assumption of differential mortality.
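Under one common reading of Podgor's approach, and assuming an irreversible disease with equal mortality in the diseased and disease-free groups, prevalence p(a) satisfies dp/da = I(a)(1 - p(a)), so a logistic fit p(a) = expit(b0 + b1*a) yields the smoothed incidence I(a) = b1 * p(a). The sketch below illustrates this with made-up age bands and counts; the data are assumptions, and the thesis may instead use Podgor's finite-difference formulae between adjacent age bands.

```python
# A minimal sketch, under the equal-mortality assumption, of logistic
# smoothing of age-specific prevalence followed by incidence estimation.
# The age bands, case counts and sample sizes are fabricated for illustration.
import numpy as np
import statsmodels.api as sm

age_mid = np.array([45.0, 55.0, 65.0, 75.0, 85.0])  # band midpoints (years)
cases = np.array([4, 11, 30, 52, 40])               # prevalent cases
n = np.array([400, 420, 500, 430, 180])             # persons examined

# Grouped logistic regression of prevalence on age.
X = sm.add_constant(age_mid)
fit = sm.GLM(np.column_stack([cases, n - cases]), X,
             family=sm.families.Binomial()).fit()
b0, b1 = fit.params

ages = np.linspace(45, 85, 5)
p = 1.0 / (1.0 + np.exp(-(b0 + b1 * ages)))         # smoothed prevalence
incidence = b1 * p                                   # approx. per person-year
for a, pi, ii in zip(ages, p, incidence):
    print(f"age {a:4.0f}: prevalence {pi:.3f}, incidence {ii:.4f}/yr")
```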
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
47

Aliaga, Rivera Cristhian Neil. "An unsteady multiphase approach to in-flight icing /". Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=112552.

Full text of the source
Abstract:
Ice accretion is a purely unsteady phenomenon that is presently approximated by most icing codes using quasi-steady modeling. The accuracy of ice prediction is thus directly related to the arbitrarily prescribed time span during which the impact of ice growth on both flow and droplets is neglected. The objective of this work is to remove this limitation by implementing a cost-effective unsteady approach. This is done by fully coupling, in time, a diphasic flow (interacting air and droplet particles) with the ice accretion model. The two-phase flow is solved using the Navier-Stokes and Eulerian droplet equations with dual-time stepping to reduce computational time. The ice shape is obtained either from the conservation of mass and energy within a thin film layer for glaze and mixed icing conditions, or from a mass balance between water droplet impingement and the mass flux of ice for rime icing conditions. Because the iced surface is constantly displaced in time, Arbitrary Lagrangian-Eulerian terms are added to the governing equations to account for mesh movement. Moreover, surface smoothing techniques are developed to prevent degradation of the iced-surface geometric discretization. For rime ice, the numerical results clearly show that the new fully unsteady modeling improves the accuracy of ice prediction compared to the quasi-steady approach, while in addition ensuring time span independence. The applicability of the unsteady icing model for predicting glaze ice accretion is also demonstrated by coupling the diphasic model to the Shallow Water Icing Model. A more rigorous analysis reveals that this model requires the implementation of local surface roughness and that previous quasi-steady validations cannot be carried out using a small number of shots, hence the need for unsteady simulation.
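The rime-ice branch of the model reduces to a mass balance between droplet impingement and ice growth, which a few lines can illustrate. This is a deliberately simplified sketch, not the thesis's solver: the collection efficiency, liquid water content and airspeed are assumed values, and the actual unsteady approach recomputes the diphasic flow around the displaced surface at every step.

```python
# A minimal sketch of the rime-icing mass balance: in rime conditions every
# impinging droplet freezes on impact, so the local ice mass flux equals the
# droplet impingement flux beta * LWC * V_inf. All values are assumptions.
RHO_ICE = 880.0      # rime ice density, kg/m^3 (assumed)
LWC = 0.5e-3         # liquid water content, kg/m^3 (assumed)
V_INF = 90.0         # freestream airspeed, m/s (assumed)

def ice_growth_rate(beta):
    """Local ice thickness growth rate [m/s] for collection efficiency beta."""
    return beta * LWC * V_INF / RHO_ICE

# Explicit time marching of the local thickness. In the fully coupled
# unsteady model, beta changes each step because the displaced iced surface
# alters both the air flow and the droplet trajectories.
dt, thickness, beta = 1.0, 0.0, 0.6
for step in range(300):                 # 300 s of exposure
    thickness += ice_growth_rate(beta) * dt
print(f"rime thickness after 300 s: {thickness * 1000:.2f} mm")
```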
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
48

Melius, Matthew Scott. "Identification of Markov Processes within a Wind Turbine Array Boundary Layer". PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/1422.

Full text of the source
Abstract:
The Markovianity within a wind turbine array boundary layer is explored for data taken in a wind tunnel containing a model wind turbine array. A stochastic analysis of the data is carried out using Markov chain theory. The data were obtained via hot-wire anemometry, thus providing point velocity statistics. The theory of Markovian processes is applied to obtain a statistical description of longitudinal velocity increments inside the turbine wake using conditional probability density functions. It is found that two- and three-point conditional probability density functions are similar for scale differences larger than the Taylor micro-scale. This result is quantified by the Wilcoxon rank-sum test, which verifies that the relationship holds independent of the initial scale selection outside of the near-wake region behind a wind turbine. Furthermore, the locations that demonstrate Markovian properties exhibit a well-defined inertial sub-range that follows Kolmogorov's -5/3 scaling behavior. Results indicate the existence of Markovian properties at scales on the order of the Taylor micro-scale, λ, for most locations in the wake, the exceptions being directly behind the tips of the rotor and the hub, where the complex turbulent interactions characteristic of the near-wake influence the Markov process. The presence of a Markov process in the remaining locations leads to a characterization of the multi-point statistics of the wind turbine wakes using the most recent states of the flow.
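The core statistical test is easy to sketch: if the process is Markovian across scales, the increment distribution conditioned on one larger-scale increment should match the distribution conditioned on two. A minimal illustration follows, using a synthetic Brownian-motion stand-in for the hot-wire signal (Brownian motion is Markov by construction, so the test should pass); the scales, bin centres and tolerances are arbitrary choices, not the study's values.

```python
# A minimal sketch of a two- vs three-point conditional PDF comparison using
# the Wilcoxon rank-sum test. The signal, scales and tolerances are assumed.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
u = rng.standard_normal(200_000).cumsum() * 0.01   # synthetic stand-in signal

def increments(u, r):
    """Velocity increments u(i + r) - u(i) at separation r."""
    return u[r:] - u[:-r]

r1, r2, r3 = 20, 40, 80                 # separations, r1 < r2 < r3
du1 = increments(u, r1)[: len(u) - r3]  # truncate so all share the same origin i
du2 = increments(u, r2)[: len(u) - r3]
du3 = increments(u, r3)[: len(u) - r3]

# Compare P(du1 | du2 = b) against P(du1 | du2 = b, du3 = c) in one bin.
tol, b, c = 0.05, 0.0, 0.0
single = du1[np.abs(du2 - b) < tol]
double = du1[(np.abs(du2 - b) < tol) & (np.abs(du3 - c) < tol)]

# A large p-value means the two conditional samples are statistically
# indistinguishable, consistent with Markovianity at these scales.
stat, p = ranksums(single, double)
print(f"n1={single.size}, n2={double.size}, rank-sum p = {p:.3f}")
```

In practice the comparison is repeated over many conditioning bins and scale combinations, as the study does, rather than the single bin shown here.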
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
49

Munyakazi, Justin Bazimaziki. "Transport modelling in the Cape Town Metropolitan Area". Thesis, University of the Western Cape, 2005. http://etd.uwc.ac.za/index.php?module=etd&amp.

Full text of the source
Abstract:
MEPLAN, used by the Metropolitan Transport Planning Branch of the Cape Town City Council since 1984, proved unsuccessful owing to apartheid-era anomalies, and EMME/2 was introduced in 1991 to replace it. This study records the strengths and weaknesses of both MEPLAN and EMME/2.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
50

Said, Munzir. "Computational optimal control modeling and smoothing for biomechanical systems". University of Western Australia. Dept. of Mathematics and Statistics, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0082.

Full text of the source
Abstract:
[Truncated abstract] The study of biomechanical system dynamics consists of research to obtain an accurate model of biomechanical systems and to find appropriate torques or forces that reproduce the motions of a biomechanical subject. In the first part of this study, specific computational models are developed to maintain relative angle constraints for 2-dimensional segmented bodies. This is motivated by the possibility that, in models of segmented bodies moving under gravitational acceleration and joint torques, segments may move past their natural relative angle limits. Three models for maintaining angle constraints between segments are proposed and compared: all-time angle constraints, a restoring torque in the state equations, and an exponential penalty model. The models are applied to a 2-D three-segment body to test the behaviour of each model when optimizing torques to minimize an objective. The optimization is run to find torques such that the end effector of the body follows the trajectory of a half circle. The results show how each model behaves in maintaining the angle constraints. The all-time constraints case does not allow torques (at a solution) that make segments move past the constraints, while the other two handle the angle constraints with a flexibility more similar to that of a real biomechanical system. With three computational methods to represent the angle constraint, a workable set of initial torques for the motion of a segmented body can be obtained without causing integration failure in the ordinary differential equation (ODE) solver and without the need for the "blind man method" that restarts the optimal control many times. ... With one layer of penalty weights, balancing the trajectory compliance penalty against the other optimal control objectives (minimizing or smoothing torque) is already difficult (as explained by the L-curve phenomenon); adding a second layer of penalty weights for the closeness of fit of each body segment further complicates the weight balancing, and much trial-and-error computation may be needed to obtain a reasonably good set of weighting values. Second-order regularization is also added to the optimal control objective, and the optimization obtains smoother torques for all body joints. To make the current approach more competitive with inverse dynamics, an algorithm to speed up the computation of the optimal control is required as potential future work.
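Of the three constraint treatments, the exponential penalty model is the easiest to sketch in isolation. The following is a minimal illustration under assumed limit values, sharpness and weight, not the thesis's formulation.

```python
# A minimal sketch of an exponential penalty for a relative joint angle:
# negligible well inside [theta_min, theta_max], growing rapidly near and
# past either natural limit. Limits, sharpness k and weight w are assumed.
import numpy as np

def angle_penalty(theta, theta_min, theta_max, k=50.0, w=1.0):
    """Smooth soft-limit penalty on a relative joint angle theta [rad]."""
    return w * (np.exp(k * (theta - theta_max)) + np.exp(k * (theta_min - theta)))

# Added to an optimal-control objective, e.g. for a three-segment body,
#   J = tracking_error + torque_smoothing + sum_j angle_penalty(theta_j, ...),
# the term leaves the interior of the feasible range almost untouched but
# makes limit violations expensive, mimicking a joint's soft end-stop.
for theta in (0.0, 0.5, 1.0, 1.1):
    print(f"theta = {theta:4.1f}: penalty = {angle_penalty(theta, -1.0, 1.0):.4g}")
```

Unlike hard all-time constraints, this penalty keeps the state equations smooth, which is what allows the ODE solver to proceed from rough initial torque guesses without integration failure.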
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
