Dissertations / Theses on the topic 'Modeling algorithms'


Consult the top 50 dissertations / theses for your research on the topic 'Modeling algorithms.'


1

Frank, Matthew I. "LoPC: Modeling contention in parallel algorithms." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/47439.

2

DeBrunner, Linda Sumners. "Modeling reconfiguration algorithms for regular architecture." Diss., Virginia Tech, 1991. http://hdl.handle.net/10919/29254.

Abstract:
Three models are proposed to evaluate and design distributed reconfigurable systems for fault tolerant, highly reliable applications. These models serve as valuable tools for developing fault tolerant systems. In each model, cells work together in parallel to change the global structure through a series of separate actions. In the Local Supervisor Model (LSM), selected cells guide the reconfiguration process. In the Tessellation Automata Model (TAM), each cell determines its next state based on its state and its neighbors' states, and communicates its state information to its neighbors. In the Interconnected Finite State Machine Model (IFSMM), each cell determines its next state and outputs based on its state and its inputs. The hierarchical nature of the TAM and IFSMM provides advantages in evaluating, comparing, and designing systems. The use of each of these models in describing systems is demonstrated. The IFSMM is emphasized since it is the most versatile of the three models. The IFSMM is used to identify algorithm weaknesses and improvements, compare existing algorithms, and develop a novel design for a reconfigurable hypercube.
Ph. D.
3

Zhang, Wangyang. "IC Spatial Variation Modeling: Algorithms and Applications." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/136.

Abstract:
Rapidly improving the yield of today's complicated manufacturing process is a key challenge to ensure profitability for the IC industry. In this thesis, we propose accurate and efficient modeling techniques for spatial variation, which is becoming increasingly important in the advanced technology nodes. Based on the spatial model, we develop algorithms for two applications that help identify the important yield-limiting factors and prioritize yield improvement efforts. Variation decomposition narrows down the sources of variation by decomposing the overall variation into multiple different components, each corresponding to a different subset of variation sources. Wafer spatial signature clustering automatically partitions a large number of wafers into groups exhibiting different spatial signatures, which helps process engineers find important factors that prevent the process from stably maintaining a high yield across different lots and wafers. An important problem in variation decomposition is to accurately model and extract the wafer-level and within-die spatially correlated variation. Towards this goal, we first develop a physical basis function dictionary based on our study of several common physical variation sources. We further propose the DCT dictionary to discover spatially correlated systematic patterns not modeled by the physical dictionary. Moreover, we propose to apply sparse regression to significantly reduce the over-fitting problem posed by a large basis function dictionary. We further extend the sparse regression algorithm to a robust sparse regression algorithm for outlier detection, which provides superior accuracy compared to the traditional IQR method. Finally, we propose several efficient methods to make the computational cost of sparse regression tractable for large-scale problems. We further develop an algorithm for the wafer spatial signature clustering problem based on three steps. First, we re-use the spatial variation modeling technique developed for variation decomposition to automatically capture the spatial signatures of wafers by a small number of features. Next, we select a complete-link hierarchical clustering algorithm to perform clustering on the features. Finally, we develop a modified L-method to select the number of clusters from the hierarchical clustering result.
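To make the abstract's dictionary-plus-sparse-regression idea concrete, here is a minimal sketch (our illustration, not the author's code), assuming a wafer map sampled on a regular grid, a low-frequency 2D DCT basis dictionary, and scikit-learn's Lasso standing in for the sparse regressor; the grid size, regularization weight, and coefficients are made-up values.

```python
import numpy as np
from sklearn.linear_model import Lasso

def dct_dictionary(nx, ny, kmax):
    """Build a dictionary of low-frequency 2D DCT basis functions."""
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    atoms = []
    for u in range(kmax):
        for v in range(kmax):
            atom = (np.cos(np.pi * (2 * xs + 1) * u / (2 * nx)) *
                    np.cos(np.pi * (2 * ys + 1) * v / (2 * ny)))
            atoms.append(atom.ravel())
    return np.column_stack(atoms)            # shape (nx*ny, kmax*kmax)

# Hypothetical measurement: per-die delay values on a 20x20 wafer map.
rng = np.random.default_rng(0)
nx = ny = 20
D = dct_dictionary(nx, ny, kmax=6)
true_coef = np.zeros(D.shape[1])
true_coef[[0, 1, 7]] = [3.0, 1.5, 0.8]        # a few systematic patterns
y = D @ true_coef + 0.1 * rng.standard_normal(nx * ny)   # noisy wafer map

# Sparse regression keeps only a few active basis functions, which is what
# controls over-fitting when the dictionary is large.
fit = Lasso(alpha=0.05).fit(D, y)
print("nonzero atoms:", np.flatnonzero(fit.coef_))
```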
4

Yee, Seung Hee. "Three algorithms for planar-patch terrain modeling." Thesis, Monterey, California. Naval Postgraduate School, 1988. http://hdl.handle.net/10945/23136.

Abstract:
Providing a simplified model of real terrain has applications to route planning for robotic vehicles and military maneuvers. In this thesis I explore planar-patch surface modeling to represent terrain in a simple and effective way. In planar-patch surface modeling the terrain is subdivided into a set of planar subregions. The homogeneity of the gradient within a planar subregion simplifies calculating the cost of traversing the region, thus simplifying route planning. I have explored three main strategies to model the surface: joint top-down and bottom-up, strict bottom-up, and presmoothing bottom-up approaches. Results of the algorithms are shown graphically by using the APL and Grafstat packages, verifying their correctness and accuracy.
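As a rough illustration of the bottom-up strategy (our construction from the abstract's description, not Yee's code), the sketch below greedily merges patches while a single least-squares plane still fits the combined points; the adjacency check is omitted and the tolerance value is an assumption.

```python
import numpy as np

def plane_fit_error(points):
    """RMS residual of the best-fit plane z = ax + by + c."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return float(np.sqrt(np.mean((A @ coef - points[:, 2]) ** 2)))

def merge_pass(patches, tol=0.5):
    """One bottom-up pass: absorb patches that still fit one plane.
    (A real terrain model would only merge *neighboring* patches.)"""
    merged, used = [], set()
    for i, cur in enumerate(patches):
        if i in used:
            continue
        for j in range(i + 1, len(patches)):
            if j in used:
                continue
            cand = np.vstack([cur, patches[j]])
            if plane_fit_error(cand) < tol:
                cur = cand
                used.add(j)
        merged.append(cur)
    return merged

# Toy usage: each patch is an (m, 3) array of terrain points (x, y, z).
rng = np.random.default_rng(0)
pts = [np.column_stack([rng.random(8), rng.random(8), np.zeros(8)])
       for _ in range(4)]                     # four flat patches...
print(len(merge_pass(pts)))                   # ...merge into one (prints 1)
```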
5

Chen, Lin. "Causal modeling in quantitative genomics." Thesis, Connect to this title online; UW restricted, 2008. http://hdl.handle.net/1773/9577.

6

Bosson, Maël. "Adaptive algorithms for computational chemistry and interactive modeling." PhD thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00846458.

Abstract:
At the atomic scale, interactive physically-based modeling tools are more and more in demand. Unfortunately, solving the underlying physics equations at interactive rates is computationally challenging. In this dissertation, we propose new algorithms that allow for interactive modeling of chemical structures. We first present a modeling tool to construct structural models of hydrocarbon systems. The physically-based feedbacks are based on the Brenner potential. In order to be able to interactively edit systems containing numerous atoms, we introduce a new adaptive simulation algorithm. Then, we introduce what we believe to be the first interactive quantum chemistry simulation algorithm at the Atom Superposition and Electron Delocalization Molecular Orbital (ASED-MO) level of theory. This method is based on the divide-and-conquer (D&C) approach, which we show is accurate and efficient for this non-self-consistent semi-empirical theory. We then propose a novel Block-Adaptive Quantum Mechanics (BAQM) approach to interactive quantum chemistry. BAQM constrains some nuclei positions and some electronic degrees of freedom on the fly to simplify the simulation. Finally, we demonstrate several applications, including one study of graphane formation, interactive simulation for education purposes, and virtual prototyping at the atomic scale, both on desktop computers and in virtual reality environments.
7

Lam, Warren Michael. "Modeling algorithms for a class of fractal signals." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/31034.

Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1992.
Includes bibliographical references (leaves 86-87).
by Warren Michael Lam.
M.S.
8

Stuhlmüller, Andreas. "Modeling cognition with probabilistic programs : representations and algorithms." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100860.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 167-176).
This thesis develops probabilistic programming as a productive metaphor for understanding cognition, both with respect to mental representations and the manipulation of such representations. In the first half of the thesis, I demonstrate the representational power of probabilistic programs in the domains of concept learning and social reasoning. I provide examples of richly structured concepts, defined in terms of systems of relations, subparts, and recursive embeddings, that are naturally expressed as programs and show initial experimental evidence that they match human generalization patterns. I then proceed to models of reasoning about reasoning, a domain where the expressive power of probabilistic programs is necessary to formalize our intuitive domain understanding due to the fact that, unlike previous formalisms, probabilistic programs allow conditioning to be represented in a model, not just applied to a model. I illustrate this insight with programs that model nested reasoning in game theory, artificial intelligence, and linguistics. In the second half, I develop three inference algorithms with the dual intent of showing how to efficiently compute the marginal distributions defined by probabilistic programs, and providing building blocks for process-level accounts of human cognition. First, I describe a Dynamic Programming algorithm for computing the marginal distribution of discrete probabilistic programs by compiling to systems of equations and show that it can make inference in models of "reasoning about reasoning" tractable by merging and reusing subcomputations. Second, I introduce the setting of amortized inference and show how learning inverse models lets us leverage samples generated by other inference algorithms to compile probabilistic models into fast recognition functions. Third, I develop a generic approach to coarse-to-fine inference in probabilistic programs and provide evidence that it can speed up inference in models with large state spaces that have appropriate hierarchical structure. Finally, I substantiate the claim that probabilistic programming is a productive metaphor by outlining new research questions that have been opened up by this line of investigation.
by Andreas Stuhlmüller.
Ph. D.
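The point about representing conditioning inside a model can be made in a few lines of code. The toy below (our example, not from the thesis) enumerates an inner agent's posterior and lets an outer model query it, which is the basic shape of "reasoning about reasoning"; the coin biases and uniform prior are arbitrary choices.

```python
def enumerate_dist(support, weight):
    """Exact inference by enumeration: normalize weights over a support."""
    z = sum(weight(x) for x in support)
    return {x: weight(x) / z for x in support}

# Inner agent: posterior over a coin's bias after observing one head,
# under a uniform prior on three candidate biases (likelihood = b).
def inner_posterior():
    return enumerate_dist([0.3, 0.5, 0.9], lambda b: b)

# Outer model: queries the *inner agent's inference* as a subroutine --
# conditioning represented inside a model, not just applied to one.
def outer_prediction():
    post = inner_posterior()
    return sum(b * p for b, p in post.items())  # P(next toss is heads)

print(outer_prediction())   # ~0.68
```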
9

Chaudhari, Soumee. "Modeling distance functions induced by face recognition algorithms." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000516.

10

Hedberg, Vilhelm. "Evaluation of Hair Modeling, Simulation and Rendering Algorithms for a VFX Hair Modeling System." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-65592.

Abstract:
Creating realistic virtual hair consists of several major areas: creating the geometry, moving the hair strands realistically and rendering the hair. In this thesis, a background survey covering each one of these areas is given. A node-based, procedural hair system is presented, which utilizes the capabilities of modern GPUs. The hair system is implemented as a plugin for Autodesk Maya, and a user interface is developed to allow the user to control the various parameters. A number of nodes are developed to create effects such as clumping, noise and frizz. The proposed system can easily handle a variety of hairstyles, and pre-renders the result in real-time using a local shading model.
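For a sense of what a procedural node like "clumping" computes, here is a small sketch (an assumed formulation, not the plugin's code): each strand is pulled toward a clump-guide curve, with the pull growing toward the tip.

```python
import numpy as np

def clump(strands, guide, amount=0.8):
    """Pull each strand toward the clump-guide curve, hardest at the tip.
    strands: (n_strands, n_points, 3); guide: (n_points, 3)."""
    t = np.linspace(0.0, 1.0, guide.shape[0])      # 0 at root, 1 at tip
    w = amount * t
    return strands + w[None, :, None] * (guide[None, :, :] - strands)

# Tiny usage: three straight strands pulled toward a guide at x = 1.
strands = np.zeros((3, 5, 3))
strands[:, :, 0] = np.array([[0.0], [1.0], [2.0]])  # x offset per strand
guide = np.zeros((5, 3))
guide[:, 0] = 1.0
clumped = clump(strands, guide)                     # tips converge near x = 1
```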
11

Pittman, Jennifer L. "Adaptive splines and genetic algorithms for optimal statistical modeling." 2000. http://www.etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-23/index.html.

12

Choi, Jee Whan. "Power and performance modeling for high-performance computing algorithms." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53561.

Abstract:
The overarching goal of this thesis is to provide an algorithm-centric approach to analyzing the relationship between time, energy, and power. This research is aimed at algorithm designers and performance tuners so that they may be able to make decisions on how algorithms should be designed and tuned depending on whether the goal is to minimize time or to minimize energy on current and future systems. First, we present a simple analytical cost model for energy and power. Assuming a simple von Neumann architecture with a two-level memory hierarchy, this model predicts energy and power for algorithms using just a few simple parameters, such as the number of floating point operations (FLOPs or flops) and the amount of data moved (bytes or words). Using highly optimized microbenchmarks and a small number of test platforms, we show that although this model uses only a few simple parameters, it is, nevertheless, accurate. We can also visualize this model using energy "arch lines," analogous to the "rooflines" in time. These "rooflines in energy" allow users to easily assess and compare different algorithms' intensities in energy and time to various target systems' balances in energy and time. This visualization of our model gives us many interesting insights, and as such, we refer to our analytical model as the energy roofline model. Second, we present the results of our microbenchmarking study of time, energy, and power costs of computation and memory access of several candidate compute-node building blocks of future high-performance computing (HPC) systems. Over a dozen server-, desktop-, and mobile-class platforms that span a range of compute and power characteristics were evaluated, including x86 (both conventional and Xeon Phi accelerator), ARM, graphics processing units (GPU), and hybrid (AMD accelerated processing units (APU) and other system-on-chip (SoC)) processors. The purpose of this study was twofold: first, to extend the validation of the energy roofline model to a more comprehensive set of target systems to show that the model works well independent of system hardware and microarchitecture; second, to improve the model by uncovering and remedying potential shortcomings, such as incorporating the effects of power "capping," a multi-level memory hierarchy, and different implementation strategies on power and performance. Third, we incorporate dynamic voltage and frequency scaling (DVFS) into the energy roofline model to explore its potential for saving energy. Rather than the more traditional approach of using DVFS to reduce energy, whereby a "slack" in computation is used as an opportunity to dynamically cycle down the processor clock, the energy roofline model can be used to determine precisely how the time and energy costs of different operations, both compute and memory, change with respect to frequency and voltage settings. This information can be used to target a specific optimization goal, whether that be time, energy, or a combination of both. In the final chapter of this thesis, we use our model to predict the energy dissipation of a real application running on a real system. The fast multipole method (FMM) kernel was executed on the GPU component of the Tegra K1 SoC under various frequency and voltage settings, and a breakdown of instructions and data access pattern was collected via performance counters. The total energy dissipation of FMM was then calculated as a weighted sum of these instructions and the associated costs in energy.
Across eight different voltage and frequency settings and eight different algorithm-specific input parameters per setting, for a total of 64 test cases, the accuracy of the energy roofline model for predicting total energy dissipation was within 6.2%, with a standard deviation of 4.7%, when compared to actual energy measurements. Despite its simplicity and its foundation on the first principles of algorithm analysis, the energy roofline model has proven to be both practical and accurate for real applications running on a real system. As such, it can be an invaluable tool for algorithm designers and performance tuners with which they can more precisely analyze the impact of their design decisions on both performance and energy efficiency.
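The energy model the abstract outlines can be sketched in a few lines (our reading of it, with made-up cost constants rather than measured values): total energy is a weighted sum of flops, memory traffic, and a constant-power term, and the flop-per-word ratio at which compute and memory energies balance plays the role that machine balance plays in the time roofline.

```python
FLOP_ENERGY = 20e-12      # joules per flop (hypothetical placeholder)
WORD_ENERGY = 600e-12     # joules per word moved from DRAM (hypothetical)
CONST_POWER = 5.0         # constant (idle/leakage) power in watts, assumed

def energy(flops, words, seconds):
    """E = flops*e_flop + words*e_word + p0*T, the 'energy roofline' form."""
    return flops * FLOP_ENERGY + words * WORD_ENERGY + CONST_POWER * seconds

# Energy balance point: the flop:word ratio at which compute and memory
# contribute equal energy -- the analogue of machine balance in time.
balance = WORD_ENERGY / FLOP_ENERGY
print(f"energy balance: {balance:.0f} flops per word")
```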
13

Schmeißer, Andre [Verfasser]. "Contact Modeling Algorithms for Fiber Dynamics Simulations / Andre Schmeißer." München : Verlag Dr. Hut, 2016. http://d-nb.info/1115549960/34.

14

Phanaphat, Piyajit 1980. "Modeling and algorithms for optimizing beam steering optical crossconnects." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29690.

Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.
Includes bibliographical references (p. 81).
One of the most significant applications of Micro-Electromechanical Systems (MEMS) technology in optical communications today is in building large non-blocking optical crossconnects based on arrays of tiltable micro-mirrors. The complexity for these crossconnects to make all possible connections lies in the calibration or fine-tuning of the mirror tilt angles to optimize the transmissivity through each possible input/output pair. The result from the fine-tuning process that produces optimization at one point in time, however, does not guarantee optimization for future attempts. This thesis models the transmissivity as a function of control variables in the vicinity of an optimal point and uses this model to re-optimize the connections quickly when a connection is reestablished. The re-optimization algorithm achieves the goal of optimizing quickly by requiring that some prior knowledge about each connection is already known. Scalable methods for representing the per-connection transmissivity model are also studied. Experimental results of the algorithm performance on real crossconnect systems are reported, including connection setup in under 50 milliseconds.
by Piyajit Phanaphat.
M.Eng.
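A toy version of the re-optimization idea (our illustration, not the thesis's algorithm): fit a quadratic model of transmissivity to previously stored control settings and jump directly to the model's stationary point. The two-variable control space and the synthetic data are assumptions.

```python
import numpy as np

def refit_and_reoptimize(X, t):
    """X: (m, 2) control settings tried before; t: transmissivity samples.
    Fit t ~ quadratic(X) and return the fitted peak location."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([x1**2, x2**2, x1 * x2, x1, x2, np.ones_like(x1)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, t, rcond=None)[0]
    # Stationary point of a*x1^2 + b*x2^2 + c*x1*x2 + d*x1 + e*x2 + f.
    H = np.array([[2 * a, c], [c, 2 * b]])
    return np.linalg.solve(H, -np.array([d, e]))

# Hypothetical history of dither points around the last known optimum:
rng = np.random.default_rng(2)
X = rng.normal(0.0, 0.1, size=(25, 2))
t = 1.0 - (X[:, 0] - 0.02)**2 - 2 * (X[:, 1] + 0.01)**2  # peak (0.02, -0.01)
print(refit_and_reoptimize(X, t))    # recovers roughly (0.02, -0.01)
```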
15

Mahmood, Zohaib. "Algorithms for passive dynamical modeling and passive circuit realizations." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97760.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 163-174).
The design of modern electronic systems is based on extensive numerical simulations, aimed at predicting the overall system performance and compliance since early design stages. Such simulations rely on accurate dynamical models. Linear passive components are described by their frequency response in the form of admittance, impedance or scattering parameters which are obtained by physical measurements or electromagnetic field simulations. Numerical dynamical models for these components are constructed by a fitting to frequency response samples. In order to guarantee stable system level simulations, the dynamical models of the passive components need to preserve the passivity property (or inability to generate power), in addition to being causal and stable. A direct formulation results into a non-convex nonlinear optimization problem which is difficult to solve. In this thesis, we propose multiple algorithms that fit linear passive multiport dynamical models to given frequency response samples. The algorithms are based on convex relaxations of the original non-convex problem. The proposed techniques improve accuracy and computational complexity compared to the existing approaches. Compared to sub-optimal schemes based on singular value or Hamiltonian eigenvalue perturbation, we are able to guarantee convergence to the optimal solution within the given relaxation. Compared to convex formulations based on direct Bounded-Real (or Positive-Real) Lemma constraints, we are able to reduce both memory and time requirements by orders of magnitude. We show how these models can be extended to include geometrical and design parameters. We have applied our passive modeling algorithms and developed new strategies to realize passive multiport circuits to decouple multichannel radio frequency (RF) arrays, specifically for magnetic resonance imaging (MRI) applications. In a coupled parallel transmit array, because of the coupling, the power delivered to a channel is partially distributed to other channels and is dissipated in the circulators. This dissipated power causes a significant reduction in the power efficiency of the overall system. In this work, we propose an automated eigen-decomposition based approach to designing a passive decoupling matrix interfaced between the RF amplifiers and the coils. The decoupling matrix, implemented via hybrid couplers and reactive elements, is optimized to ensure that all forward power is delivered to the load. The results show that our decoupling matrix achieves nearly ideal decoupling. The methods presented in this work scale to any arbitrary number of channels and can be readily applied to other coupled systems such as antenna arrays.
by Zohaib Mahmood.
Ph. D.
16

Lashley, Matthew. "Modeling and performance analysis of GPS vector tracking algorithms." Auburn, Ala., 2009. http://hdl.handle.net/10415/2009.

17

Nazarian, Bamshad. "Integrated Field Modeling." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Engineering Science and Technology, 2003. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-84.

Abstract:

This research project studies the feasibility of developing and applying an integrated field simulator to simulate the production performance of an entire oil or gas field. It integrates the performance of the reservoir, the wells, the chokes, the gathering system, the surface processing facilities and, whenever applicable, gas and water injection systems.

The approach adopted for developing the integrated simulator is to couple existing commercial reservoir and process simulators using available linking technologies. The simulators are dynamically linked and customized into a single hybrid application that benefits from the concept of open software architecture. The integrated field simulator is linked to an optimization routine developed based on the genetic algorithm search strategies. This enables optimization of the system at field level, from the reservoir to the process. Modeling the wells and the gathering network is achieved by customizing the process simulator.

This study demonstrates that the integrated simulation improves current capabilities to simulate the performance of an entire field and optimize its design. This is achieved by evaluating design options including spread and layout of the wells and gathering system, processing alternatives, reservoir development schemes, and production strategies.

Effectiveness of the integrated simulator is demonstrated and tested through several field-level case studies that discuss and investigate technical problems relevant to offshore field development. The case studies cover topics such as process optimization, optimum tie-in of satellite wells into existing process facilities, optimal well location, and field layout assessment of a high pressure high temperature deepwater oil field.

Case study results confirm the viability of the total field simulator by demonstrating that the field performance simulation and optimal design were obtained in an automated process with reasonable computation time. No significant simplifying assumptions were required to solve the system and tedious manual data transfer between simulators, as conventionally practiced, was avoided.
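For readers unfamiliar with the optimization routine mentioned above, here is a bare-bones genetic-algorithm loop (a generic GA, not the thesis's implementation); the stand-in fitness function replaces what would be a full integrated-field simulation run, and all rates and sizes are arbitrary.

```python
import random

def ga(fitness, n_genes, pop_size=30, gens=50, p_mut=0.1, seed=3):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_genes)
            child = a[:cut] + b[cut:]                 # one-point crossover
            if rng.random() < p_mut:                  # mutation
                child[rng.randrange(n_genes)] = rng.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in fitness: in the thesis this would be a full field simulation run.
best = ga(lambda x: -sum((g - 0.5) ** 2 for g in x), n_genes=5)
print(best)
```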

18

Shi, Tian. "Novel Algorithms for Understanding Online Reviews." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/104998.

Abstract:
This dissertation focuses on the review understanding problem, which has gained attention from both industry and academia, and has found applications in many downstream tasks, such as recommendation, information retrieval and review summarization. In this dissertation, we aim to develop machine learning and natural language processing tools to understand and learn structured knowledge from unstructured reviews, which can be investigated in three research directions, including understanding review corpora, understanding review documents, and understanding review segments. For the corpus-level review understanding, we have focused on discovering knowledge from corpora that consist of short texts. Since they have limited contextual information, automatically learning topics from them remains a challenging problem. We propose a semantics-assisted non-negative matrix factorization model to deal with this problem. It effectively incorporates the word-context semantic correlations into the model, where the semantic relationships between the words and their contexts are learned from the skip-gram view of a corpus. We conduct extensive sets of experiments on several short text corpora to demonstrate the proposed model can discover meaningful and coherent topics. For document-level review understanding, we have focused on building interpretable and reliable models for the document-level multi-aspect sentiment analysis (DMSA) task, which can help us to not only recover missing aspect-level ratings and analyze sentiment of customers, but also detect aspect and opinion terms from reviews. We conduct three studies in this research direction. In the first study, we collect a new DMSA dataset in the healthcare domain and systematically investigate reviews in this dataset, including a comprehensive statistical analysis and topic modeling to discover aspects. We also propose a multi-task learning framework with self-attention networks to predict sentiment and ratings for given aspects. In the second study, we propose corpus-level and concept-based explanation methods to interpret attention-based deep learning models for text classification, including sentiment classification. The proposed corpus-level explanation approach aims to capture causal relationships between keywords and model predictions via learning importance of keywords for predicted labels across a training corpus based on attention weights. We also propose a concept-based explanation method that can automatically learn higher level concepts and their importance to model predictions. We apply these methods to the classification task and show that they are powerful in extracting semantically meaningful keywords and concepts, and explaining model predictions. In the third study, we propose an interpretable and uncertainty aware multi-task learning framework for DMSA, which can achieve competitive performance while also being able to interpret the predictions made. Based on the corpus-level explanation method, we propose an attention-driven keywords ranking method, which can automatically discover aspect terms and aspect-level opinion terms from a review corpus using the attention weights. In addition, we propose a lecture-audience strategy to estimate model uncertainty in the context of multi-task learning. For the segment-level review understanding, we have focused on the unsupervised aspect detection task, which aims to automatically extract interpretable aspects and identify aspect-specific segments from online reviews. 
The existing deep learning-based topic models suffer from several problems such as extracting noisy aspects and poorly mapping aspects discovered by models to the aspects of interest. To deal with these problems, we propose a self-supervised contrastive learning framework in order to learn better representations for aspects and review segments. We also introduce a high-resolution selective mapping method to efficiently assign aspects discovered by the model to the aspects of interest. In addition, we propose using a knowledge distillation technique to further improve the aspect detection performance.
Doctor of Philosophy
Nowadays, online reviews are playing an important role in our daily lives. They are also critical to the success of many e-commerce and local businesses because they can help people build trust in brands and businesses, provide insights into products and services, and improve consumers' confidence. As a large number of reviews accumulate every day, a central research problem is to build an artificial intelligence system that can understand and interact with these reviews, and further use them to offer customers better support and services. In order to tackle challenges in these applications, we first have to get an in-depth understanding of online reviews. In this dissertation, we focus on the review understanding problem and develop machine learning and natural language processing tools to understand reviews and learn structured knowledge from unstructured reviews. We have addressed the review understanding problem in three directions, including understanding a collection of reviews, understanding a single review, and understanding a piece of a review segment. In the first direction, we proposed a short-text topic modeling method to extract topics from review corpora that consist of primary complaints of consumers. In the second direction, we focused on building sentiment analysis models to predict the opinions of consumers from their reviews. Our deep learning models can provide good prediction accuracy as well as a human-understandable explanation for the prediction. In the third direction, we develop an aspect detection method to automatically extract sentences that mention certain features consumers are interested in, from reviews, which can help customers efficiently navigate through reviews and help businesses identify the advantages and disadvantages of their products.
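As a concrete, simplified companion to the topic-modeling discussion, the sketch below runs plain NMF on a handful of hypothetical short review texts (plain NMF, not the semantics-assisted variant the dissertation proposes).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = ["long wait times at the clinic", "doctor was kind and helpful",
        "billing was confusing and slow", "friendly staff and short wait"]
vec = TfidfVectorizer()
X = vec.fit_transform(docs)

# Factor the term-document matrix into two nonnegative topics.
topics = NMF(n_components=2, init="nndsvda", random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, row in enumerate(topics.components_):
    print(f"topic {k}:", [terms[i] for i in row.argsort()[-3:][::-1]])
```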
19

Ravaglia, Leonardo. "Modeling and control algorithms for product phasing using smart belts." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016.

Abstract:
This thesis describes the development of an industrial feeding machine. The system is to be installed between two industrial machines. Its task is to phase and synchronize the products arriving as input with the downstream machine. The machine orders the items using a series of variable-speed conveyor belts.
The work was carried out at the LIAM Laboratory at the request of the company Sitma. Sitma already manufactured a system of the type described in this thesis; its goal was therefore to modernize the earlier application, since the device that performed product phasing was a Siemens PLC that is no longer on the market. The thesis covers the study of the application and its modeling in Matlab-Simulink, followed by an implementation, though not a conclusive one, in TwinCAT 3.
20

Iakymchuk, Roman [Verfasser]. "Performance modeling and prediction for linear algebra algorithms / Roman Iakymchuk." Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2012. http://d-nb.info/1026308690/34.

21

O'Connor, Ruaidhrí M. (Ruaidhrí Manfried). "A distributed discrete element modeling environment : algorithms, implementation and applications." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/11184.

22

Luo, Yuan. "Towards unified biomedical modeling with subgraph mining and factorization algorithms." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101575.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 157-181).
This dissertation applies subgraph mining and factorization algorithms to clinical narrative text, ICU physiologic time series and computational genomics. These algorithms aim to build clinical models that improve both prediction accuracy and interpretability, by exploring relational information in different biomedical data modalities including clinical narratives, physiologic time series and exonic mutations. This dissertation focuses on three concrete applications: implicating neurodevelopmentally coregulated exon clusters in phenotypes of Autism Spectrum Disorder (ASD), predicting mortality risk of ICU patients based on their physiologic measurement time series, and identifying subtypes of lymphoma patients based on pathology report text. In each application, we automatically extract relational information into a graph representation and collect important subgraphs that are of interest. Depending on the degree of structure in the data format, heavier machinery of factorization models becomes necessary to reliably group important subgraphs. We demonstrate that these methods lead to not only improved performance but also better interpretability in each application.
by Yuan Luo.
Ph. D.
23

Lubin, Miles (Miles C.). "Mixed-integer convex optimization : outer approximation algorithms and modeling power." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113434.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 137-143).
In this thesis, we study mixed-integer convex optimization, or mixed-integer convex programming (MICP), the class of optimization problems where one seeks to minimize a convex objective function subject to convex constraints and integrality restrictions on a subset of the variables. We focus on two broad and complementary questions on MICP. The first question we address is, "what are efficient methods for solving MICP problems?" The methodology we develop is based on outer approximation, which allows us, for example, to reduce MICP to a sequence of mixed-integer linear programming (MILP) problems. By viewing MICP from the conic perspective of modern convex optimization as defined by Ben-Tal and Nemirovski, we obtain significant computational advances over the state of the art, e.g., by automating extended formulations by using disciplined convex programming. We develop the first finite-time outer approximation methods for problems in general mixed-integer conic form (which includes mixed-integer second-order-cone programming and mixed-integer semidefinite programming) and implement them in an open-source solver, Pajarito, obtaining competitive performance with the state of the art. The second question we address is, "which nonconvex constraints can be modeled with MICP?" This question is important for understanding both the modeling power gained in generalizing from MILP to MICP and the potential applicability of MICP to nonconvex optimization problems that may not be naturally represented with integer variables. Among our contributions, we completely characterize the case where the number of integer assignments is bounded (e.g., mixed-binary), and to address the more general case we develop the concept of "rationally unbounded" convex sets. We show that under this natural restriction, the projections of MICP feasible sets are well behaved and can be completely characterized in some settings.
by Miles Lubin.
Ph. D.
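The outer-approximation idea reduces nicely to a few lines. The sketch below is our toy, with brute-force search over a small integer range standing in for a real MILP solver: it alternates between optimizing over the current linear cuts and adding a gradient cut wherever the convex constraint is violated.

```python
g = lambda x: x * x - 4.0          # convex constraint g(x) <= 0, i.e. |x| <= 2
dg = lambda x: 2.0 * x             # gradient, used to build linear cuts
f = lambda x: -x                   # objective: minimize -x (maximize x)

cuts = []                          # outer (linear) approximation of g <= 0
while True:
    # "MILP" step: best integer point satisfying all current cuts.
    feasible = [x for x in range(-10, 11)
                if all(gk + dk * (x - xk) <= 0 for gk, dk, xk in cuts)]
    x = min(feasible, key=f)
    if g(x) <= 1e-9:               # convex oracle confirms feasibility
        break                      # => x is optimal for the toy MICP
    cuts.append((g(x), dg(x), x))  # gradient cut at the infeasible point

print("OA optimum:", x)            # expect 2
```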
24

Nicholson, Bethany. "Applications, Modeling Tools, and Parallel Solution Algorithms for Dynamic Optimization." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/732.

Abstract:
Dynamic optimization problems directly incorporate detailed dynamic models as constraints within an optimization framework. Model predictive control, state estimation, and parameter estimation are all common applications of dynamic optimization which can lead to significant improvements in process efficiency, reliability, safety, and profitability. This dissertation deals with dynamic optimization from three perspectives. We begin with an application of dynamic optimization. State estimation is a crucial part of the monitoring and/or control of all chemical processes. We make use of a state estimation technique called moving horizon estimation (MHE) which can be formulated as a dynamic optimization problem. However, large-scale MHE formulations may require non-negligible computational time to solve, limiting their application for real-time state estimation. An extension of MHE, called Advanced Step Moving Horizon Estimation (asMHE), eliminates this computational delay. Both MHE and asMHE perform well under the assumption of Gaussian measurement noise. We consider the case where this assumption does not hold and measurements are contaminated with large errors. Standard least squares based estimators generate biased estimates even with relatively few gross measurement errors. We therefore extend MHE and asMHE formulations using robust M-Estimators in order to mitigate the bias of these errors on the state estimates. We demonstrate this approach on dynamic models of a CSTR and a distillation column and find that our approach produces fast and accurate state estimates even in the presence of many gross measurement errors. A well-established method to solve dynamic optimization problems is direct transcription, where the differential equations are replaced with algebraic approximations using some numerical method such as a finite-difference or Runge-Kutta scheme. In the second part of this work we present pyomo.dae, an open-source modeling framework that enables high-level abstract representations of optimization problems with differential and algebraic equations. A key distinctive feature of pyomo.dae is that it does not adhere to standard, predefined formats of optimal control and estimation problems. This enables high modeling flexibility and the consideration of constraints and objective functions in non-standard forms that cannot be easily handled by traditional solution methods and cannot be expressed in other modeling frameworks. pyomo.dae also enables the specification of optimization problems with high-order differential equations and partial differential equations on restricted domain types and it provides automatic discretization capabilities to transcribe high-level abstract models into finite-dimensional algebraic problems that can be solved with off-the-shelf optimization solvers. However, for problems with thousands of state variables and discretization points, direct transcription may result in nonlinear optimization problems that exceed memory and speed limits of most serial computers. In particular, when applying interior point optimization methods, the computational bottleneck and dominant computational cost lies in solving the linear systems resulting from the Newton steps that solve the discretized optimality conditions. To overcome these limits, we exploit the parallelizable structure of the linear system to accelerate the overall interior point algorithm.
We investigate two algorithms that take advantage of this property, cyclic reduction and Schur complement decomposition, and study their performance when applied to dynamic optimization problems.
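To illustrate the Schur-complement decomposition mentioned above (generic linear algebra, not the dissertation's code): for a block-bordered system, the block-diagonal part can be factored period-by-period in parallel, leaving only a small dense system to solve serially. The sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
# A: block-diagonal across 4 "time periods" (factorable in parallel);
# B, C: the coupling border of the full KKT-like system [[A, B], [B', C]].
A = np.kron(np.eye(4), rng.standard_normal((3, 3)) + 4 * np.eye(3))
B = rng.standard_normal((12, 2))
C = 10.0 * np.eye(2)
a, b = rng.standard_normal(12), rng.standard_normal(2)

AinvB = np.linalg.solve(A, B)             # per-block solves, parallelizable
Ainva = np.linalg.solve(A, a)
S = C - B.T @ AinvB                       # small, dense Schur complement
y = np.linalg.solve(S, b - B.T @ Ainva)   # serial, but only 2x2 here
x = Ainva - AinvB @ y                     # recover the block variables

# Check: the assembled system is satisfied.
K = np.block([[A, B], [B.T, C]])
assert np.allclose(K @ np.concatenate([x, y]), np.concatenate([a, b]))
```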
25

Kenniche, Hichem. "Large Wireless Sensor Networks : Some Contributions to Modeling and Algorithms." Paris 13, 2011. http://www.theses.fr/2011PA132021.

Abstract:
Recent advances in sensor and wireless communication technologies, in conjunction with developments in microelectronics, have made available a new type of communication network. Wireless Sensor Networks (WSNs) are self-configured, infrastructure-less wireless networks made of small devices equipped with specialized sensors and wireless transceivers. The main goal of a WSN is to collect data from the environment and send it to a reporting site where the data can be observed and analyzed. While many fundamental ideas existed twenty to thirty years ago, recent years have seen tremendous research activity in wireless networks due to their applications in various situations. Potential applications of sensor networks abound; they can be used to monitor remote and/or hostile geographical regions, to trace animal movements, to improve weather forecasts, and so on. However, the development of working large-scale WSNs still requires solutions to a number of technical and theoretical challenges, due mainly to the constraints imposed by the wireless sensor devices. This thesis is concerned first with problems in modeling wireless sensor networks, namely the problem of finding a realistic model. Communication networks have their origin in classic areas of theoretical computer science and applied mathematics, and the framework for wireless sensor networks does not depart from the rule: regardless of the radio technology used, from the topology point of view, at any instant in time a WSN can be represented as a graph whose vertices are the nodes of the network and whose edges are the links between the nodes. The notion of a graph is important because a large number of parameters used in graph theory can quantify physical or topological properties and the performance of the modeled network. However, WSN applications involve a large number of sensors to deploy over inaccessible areas, leading to a random deployment which makes the classical graph models obsolete. We show that modeling with Random Geometric Graphs and Poisson Graphs is the most appropriate. Another important issue is network coverage, i.e., how well an area of interest is being monitored by a network. To study dense random sensor networks and to show how to exploit their densities in order to design protocols that can maintain high degrees of coverage while prolonging the lifetime of the network, we investigate the fundamental limits of sensor network lifetime that any algorithm can achieve. In our setting, n nodes are deployed as a Poisson point process with density λ in a region of size S and each sensor node can cover a unit-area disk. We investigate the required value of λ ≡ λ(k), i.e., in terms of k, to guarantee k-coverage of the region R (with |R| = l² = S) by the nodes for unbounded values of k, while all previous work dealt only with constant values. In the second part of this thesis, we consider the Maximal Independent Set problem. We first consider the performance of the simplest of the maximal independent set algorithms on two types of random graphs. In the first case, we consider Poisson graphs and establish that the greedy algorithm finds a maximal independent set whose size is asymptotically normal. In the second case, we study the same algorithm whenever the inputs are Erdős-Rényi random graphs and show that the limit distribution does not exist.
Finally, we present and analyze a fast randomized distributed algorithm for the Maximal Independent Set problem. The average running time of the algorithm is E(T_n) = O(log n) and the running time is O(log n) with high probability, n being the number of nodes. Each message contains one bit.
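The "simplest" sequential greedy algorithm analyzed in the second part is short enough to state in full; the sketch below (our illustration) runs it on a sampled Erdős-Rényi graph, with n and p chosen arbitrarily.

```python
import random

def greedy_mis(n, edges):
    """Scan vertices in order; keep a vertex if it has no kept neighbor."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    mis = set()
    for v in range(n):                 # fixed scan order
        if not (adj[v] & mis):
            mis.add(v)
    return mis

n, p = 1000, 0.01
edges = [(u, v) for u in range(n) for v in range(u + 1, n)
         if random.random() < p]       # one G(n, p) sample
print(len(greedy_mis(n, edges)))
```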
26

Choi, Bong-Jin. "Statistical Analysis, Modeling, and Algorithms for Pharmaceutical and Cancer Systems." Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5200.

Abstract:
The aim of the present study is to develop statistical algorithms and models associated with breast and lung cancer patients. In this study, we developed several statistical softwares, R packages, and models using our new statistical approach. In the present study, we used the five-parameter logistic model for determining the optimal doses of pharmaceutical drugs, including dynamic initial points, an automatic process for outlier detection, and an algorithm that develops a graphical user interface (GUI) program. The developed statistical procedure assists medical scientists by reducing their time in determining the optimal dose of new drugs, and can also easily identify which drugs need more experimentation. Secondly, in the present study, we developed a new classification method that is very useful in the health sciences. We used a new decision tree algorithm and a random forest method to rank our variables and to build a final decision tree model. The decision tree can identify and communicate complex data systems to scientists with minimal knowledge in statistics. Thirdly, we developed statistical packages using the Johnson SB probability distribution, which is important in parametrically studying a variety of health, environmental, and engineering problems. Scientists are experiencing difficulties in obtaining estimates for the four parameters of the subject probability distribution. The developed algorithm combines several statistical procedures, such as the Newton-Raphson, the bisection, the least squares estimation, and the regression method, to develop our R package. This R package has functions that generate random numbers, calculate probabilities and inverse probabilities, and estimate the four parameters of the Johnson SB probability distribution. Researchers can use the developed R package to build their own statistical models or perform desirable statistical simulations. The final aspect of the study involves building a statistical model for lung cancer survival time. In developing the subject statistical model, we have taken into consideration the number of cigarettes the patient smoked per day, the duration of smoking, and the age at diagnosis of lung cancer. The response variable is the survival time, and the significant factors include interactions. The probability density function of the survival times has been obtained and the survival function determined. The analysis is based on four groups that involve gender and smoking factors, and a comparison with the ordinary survival function is given.
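For concreteness, here is the five-parameter logistic model mentioned above, fit with SciPy's generic curve_fit (a standard approach, not the authors' R package); the dose-response data are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def fivepl(x, a, b, c, d, g):
    """5PL curve: d and a are the lower/upper asymptotes, c the inflection
    dose, b the slope, and g the asymmetry parameter."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

dose = np.array([0.01, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])  # made up
resp = np.array([0.02, 0.05, 0.15, 0.42, 0.70, 0.92, 0.98, 1.00])
popt, _ = curve_fit(fivepl, dose, resp, p0=[1.0, -1.0, 1.0, 0.0, 1.0],
                    maxfev=10000)
print(dict(zip("abcdg", popt.round(3))))
```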
27

Chu, Chung-kwan. "Computationally efficient passivity-preserving model order reduction algorithms in VLSI modeling." Thesis, The University of Hong Kong, 2007. http://sunzi.lib.hku.hk/hkuto/record/B38719551.

28

Sun, Mingxuan. "Visualizing and modeling partial incomplete ranking data." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45793.

Abstract:
Analyzing ranking data is an essential component in a wide range of important applications including web-search and recommendation systems. Rankings are difficult to visualize or model due to the computational difficulties associated with the large number of items. On the other hand, partial or incomplete rankings induce more difficulties since approaches that adapt well to typical types of rankings cannot apply generally to all types. While analyzing ranking data has a long history in statistics, construction of an efficient framework to analyze incomplete ranking data (with or without ties) is currently an open problem. This thesis addresses the problem of scalability for visualizing and modeling partial incomplete rankings. In particular, we propose a distance measure for top-k rankings with the following three properties: (1) metric, (2) emphasis on top ranks, and (3) computational efficiency. Given the distance measure, the data can be projected into a low dimensional continuous vector space via multi-dimensional scaling (MDS) for easy visualization. We further propose a non-parametric model for estimating distributions of partial incomplete rankings. For the non-parametric estimator, we use a triangular kernel that is a direct analogue of the Euclidean triangular kernel. The computational difficulties for large n are simplified using combinatorial properties and generating functions associated with symmetric groups. We show that our estimator is computational efficient for rankings of arbitrary incompleteness and tie structure. Moreover, we propose an efficient learning algorithm to construct a preference elicitation system from partial incomplete rankings, which can be used to solve the cold-start problems in ranking recommendations. The proposed approaches are examined in experiments with real search engine and movie recommendation data.
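As a concrete reference point for distances on top-k lists, here is the Kendall distance with penalty parameter p of Fagin et al., which handles items missing from one list; it is shown as a standard example and is not necessarily the measure proposed in the thesis.

```python
def topk_kendall(r1, r2, p=0.5):
    """Fagin et al.'s K^(p) distance between two top-k lists (best first)."""
    pos1 = {x: i for i, x in enumerate(r1)}
    pos2 = {x: i for i, x in enumerate(r2)}
    items = sorted(set(r1) | set(r2))
    d = 0.0
    for i, x in enumerate(items):
        for y in items[i + 1:]:
            in1 = [x in pos1, y in pos1]
            in2 = [x in pos2, y in pos2]
            if all(in1) and all(in2):          # ranked in both lists
                d += (pos1[x] < pos1[y]) != (pos2[x] < pos2[y])
            elif all(in1) and any(in2):        # one of x, y fell out of r2
                a, b = (x, y) if x in pos2 else (y, x)
                d += pos1[b] < pos1[a]         # r2 implies a ahead of b
            elif all(in2) and any(in1):        # symmetric case for r1
                a, b = (x, y) if x in pos1 else (y, x)
                d += pos2[b] < pos2[a]
            elif all(in1) or all(in2):         # pair present in only one list
                d += p
            else:                              # x, y in different lists
                d += 1.0
    return d

print(topk_kendall(["a", "b", "c"], ["a", "c", "d"]))   # 2.0
```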
29

Tolentino, Sean Lucio. "Effective and efficient algorithms for simulating sexually transmitted diseases." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/1509.

Abstract:
Sexually transmitted diseases affect millions of lives every year. In order to most effectively use prevention resources, epidemiologists deploy models to understand how the disease spreads through the population and which intervention methods will be most effective at reducing disease perpetuation. Increasingly, agent-based models are being used to simulate population heterogeneity and fine-grain sociological effects that are difficult to capture with traditional compartmental and statistical models. A key challenge is using a sufficiently large number of agents to produce robust and reliable results while also running in a reasonable amount of time. In this thesis we show the effectiveness of agent-based modeling in planning coordinated responses to a sexually transmitted disease epidemic and present efficient algorithms for running these models in parallel and in a distributed setting. The model is able to account for population heterogeneity like age preference, concurrent partnership, and coital dilution, and the implementation scales well to large population sizes to produce robust results in a reasonable amount of time. The work helps epidemiologists and public health officials plan a targeted and well-informed response to a variety of epidemic scenarios.
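The shape of such an agent-based simulation loop can be sketched briefly (a generic SIS-style toy on a random partnership network, not the dissertation's model; all parameters are invented).

```python
import random

def simulate(n=500, steps=100, beta=0.05, recover=0.02, partners=2, seed=7):
    rng = random.Random(seed)
    # Random static partnership network: each agent picks a few partners.
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in rng.sample(range(n), partners):
            if i != j:
                adj[i].add(j)
                adj[j].add(i)
    infected = set(rng.sample(range(n), 5))       # initial seed cases
    for _ in range(steps):
        new = {j for i in infected for j in adj[i] if rng.random() < beta}
        cured = {i for i in infected if rng.random() < recover}
        infected = (infected | new) - cured
    return len(infected)

print(simulate())   # prevalence after 100 steps for this toy scenario
```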
30

Balmer, Michael. "Travel demand modeling for multi-agent transport simulations : algorithms and systems /." Zürich : ETH, 2007. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17238.

31

Müller, Johannes Christian [Verfasser]. "Auctions in Exchange Trading Systems: Modeling Techniques and Algorithms / Johannes Müller." Berlin : epubli GmbH, 2014. http://d-nb.info/106322747X/34.

32

Chu, Chung-kwan, and 朱頌君. "Computationally efficient passivity-preserving model order reduction algorithms in VLSI modeling." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38719551.

33

Wan, Jin Hao. "Geometric modeling and optimization in 3D solar cells : implementation and algorithms." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92087.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 63).
Conversion of solar energy in three-dimensional (3D) devices has been essentially untapped. In this thesis, I design and implement a C++ program that models and optimizes a 3D solar cell ensemble embedded in a given landscape. The goal is to find the optimum arrangement of these solar cells with respect to the landscape buildings so as to maximize the total energy collected. On the modeling side, in order to calculate the energies generated from both direct and reflected sunlight, I store all the geometric inputs in a binary space partition tree; this data structure in turn efficiently supports a crucial polygon clipping algorithm. On the optimization side, I deploy simulated annealing (SA). Both advantages and limitation of SA lead me to restrict the solar cell docking sites to orthogonal grids imposed on the building surfaces. The resulting program is an elegant trade-off between accuracy and efficiency.
by Jin Hao Wan.
M. Eng.
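The optimization step lends itself to a compact illustration. Below is a minimal simulated-annealing sketch over discrete docking sites; the energy function is a stand-in for the thesis's direct-plus-reflected sunlight computation, and all names and values are assumptions made for illustration.

import math, random

# Minimal simulated annealing over discrete grid docking sites.
SITES = [(x, y) for x in range(10) for y in range(10)]  # docking grid
K = 5  # number of cells to place

def energy(placement):
    # Hypothetical stand-in objective: spread-out cells collect more light.
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
               for a in placement for b in placement)

state = random.sample(SITES, K)
T = 10.0
while T > 1e-3:
    cand = state[:]
    cand[random.randrange(K)] = random.choice(SITES)  # move one cell
    delta = energy(cand) - energy(state)
    # Accept improvements always, worse moves with Boltzmann probability.
    if delta > 0 or random.random() < math.exp(delta / T):
        state = cand
    T *= 0.999  # geometric cooling schedule

print("best placement:", state, "energy:", energy(state))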
APA, Harvard, Vancouver, ISO, and other styles
34

Jin, Fang. "Algorithms for Modeling Mass Movements and their Adoption in Social Networks." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/72292.

Full text
Abstract:
Online social networks have become a staging ground for many modern movements, with the Arab Spring being the most prominent example. In an effort to understand and predict those movements, social media can be regarded as a valuable social sensor for disclosing underlying behaviors and patterns. To fully understand how mass-movement information propagates in social networks, several problems need to be considered and addressed. Specifically, modeling mass movements that incorporate multiple spaces, a dynamic network structure, and misinformation propagation can be exceptionally useful in understanding information propagation in social media. This dissertation explores four research problems underlying efforts to identify and track the adoption of mass movements in social media. First, how do mass movements become mobilized on Twitter, especially in a specific geographic area? Second, can we detect protest activity in social networks by observing group anomalies in graphs? Third, how can we distinguish real movements from rumors or misinformation campaigns? And fourth, how can we infer the indicators of a specific type of protest, say climate-related protests? A fundamental objective of this research has been to conduct a comprehensive study of how mass movement adoption functions in social networks. For example, it may cross multiple spaces, evolve with dynamic network structures, or consist of swift outbreaks or long-term, slowly evolving transmissions. In many cases, it may also be mixed with misinformation campaigns, either deliberate or in the form of rumors. Each of those issues requires the development of new mathematical models and algorithmic approaches such as those explored here. This work aims to facilitate advances in information propagation, group anomaly detection, and misinformation distinction and, ultimately, to help improve our understanding of mass movements and their adoption in social networks.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
35

Sun, Fangzhou. "Modeling, Analysis, and Algorithms for Some Supply Chain Logistics Optimization Problems." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/101054.

Full text
Abstract:
In today's competitive marketplace, all the components of a supply chain must be well coordinated to achieve economic and service goals. This dissertation is devoted to the modeling, analysis, and development of solution approaches for some logistics problems, with emphasis on the coordination of various supply chain components and decisions. Specifically, we have addressed four problems in this domain that span various decision levels. The first problem deals with integrated production and shipping scheduling for a single manufacturer and multiple customers. We develop an optimum-seeking algorithm and a fast heuristic, both of which exploit structural properties of the problem. The second problem is a joint production and delivery scheduling problem in which a single vendor supplies goods to a single buyer over a finite horizon. We model this multi-period problem by using a dynamic programming framework and develop an effective Lagrange multiplier method for the solution of the single-period problem, which is then used to solve the multi-period problem. We show that the optimal shipments in each period follow a pattern of geometric-then-equal sizes, except for the last shipment, which may be larger. We also show that an optimal solution for the infinite horizon problem can be derived as a special case of our finite horizon approach. In addition, we propose two fast heuristic methods, which, as we show, can obtain almost optimal solutions. We also address the design and logistics operations of biomass feedstock supply chains. To that end, we consider two problems. The first arises in the context of delivering biomass sorghum to a biorefinery. We propose multi-period, mixed-integer linear programming models, which prescribe the strategic and tactical logistics decisions. Our aim is to investigate different logistical configurations available in a sorghum biomass feedstock logistics system. The second problem further allows the sharing of loadout equipment among storage facilities. We develop an efficient Benders decomposition-based algorithm, as well as two heuristic methods that are capable of effectively solving large-scale instances. We also show the advantage of using mobile equipment.
Doctor of Philosophy
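The geometric-then-equal shipment pattern is easy to make concrete. The sketch below simply generates a schedule of that shape; the inputs are made up for illustration, whereas the thesis derives the optimal ratio and sizes from the model.

# Illustrative sketch of the "geometric-then-equal" shipment-size
# pattern. All inputs are hypothetical; the thesis determines the
# optimal values, including the possibly larger final shipment.
def shipment_sizes(q1, ratio, n_geo, n_eq, last_bump=0.0):
    sizes = [q1 * ratio**i for i in range(n_geo)]   # geometric phase
    sizes += [sizes[-1]] * n_eq                     # equal phase
    if last_bump:
        sizes[-1] += last_bump                      # larger final shipment
    return sizes

print(shipment_sizes(q1=10.0, ratio=1.5, n_geo=4, n_eq=3, last_bump=5.0))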
APA, Harvard, Vancouver, ISO, and other styles
36

Williams, Kjerstin Irja Burdick Joel Wakeman. "Multi-robot systems : modeling swarm dynamics and designing inspection planning algorithms /." Diss., Pasadena, Calif. : Caltech, 2006. http://resolver.caltech.edu/CaltechETD:etd-05192006-063455.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Ladkau, Marcel. "Stochastic volatility Libor modeling and efficient algorithms for optimal stopping problems." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17559.

Full text
Abstract:
This thesis deals with several aspects of financial mathematics. An extended Libor market model is considered, offering enough flexibility to calibrate accurately to market data for caplets and swaptions. The evaluation of more complex financial derivatives, for instance by simulation, is also considered. In high dimensions such simulations can be very time consuming; possible improvements regarding the complexity of the simulation are shown, e.g. factor reduction. In addition, the well-known Andersen simulation scheme is extended from one to multiple dimensions, using the concept of moment matching for the approximation of the volatility process in a Heston model. This results in improved convergence of the whole process and thus a reduced complexity. Further, the problem of evaluating so-called American options as an optimal stopping problem is considered. For an efficient evaluation of these options, particularly in high dimensions, a simulation-based approach offering dimension-independent convergence is often the only practicable solution. A new method of variance reduction, given by the multilevel idea, is applied to this approach, and a lower bound for the option price is obtained using the multilevel policy iteration method. Convergence rates for the simulation of the option price are derived and a detailed complexity analysis is presented. Finally, the valuation of American options under model uncertainty is examined, which lifts the restriction of considering only one particular probabilistic model: different models might be plausible and may lead to different option values. This approach leads to a non-linear expectation functional, calling for a generalization of the standard expectation case. A generalized Snell envelope is obtained, enabling a backward recursion via the Bellman principle. A numerical algorithm to valuate American options under ambiguity provides lower and upper price bounds.
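As background for the simulation-based pricing the abstract refers to, here is a minimal least-squares Monte Carlo (Longstaff-Schwartz) lower bound for a Bermudan put under Black-Scholes dynamics. This is the standard baseline that multilevel policy iteration refines, not the thesis's algorithm, and all parameters are illustrative.

import numpy as np

# Least-squares Monte Carlo lower bound for a Bermudan put.
# All market parameters are illustrative assumptions.
rng = np.random.default_rng(0)
S0, K, r, sigma, T, M, N = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 20_000
dt = T / M
disc = np.exp(-r * dt)

# Simulate N paths of the underlying on M exercise dates.
Z = rng.standard_normal((N, M))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * Z, axis=1))

payoff = np.maximum(K - S[:, -1], 0.0)
for t in range(M - 2, -1, -1):
    payoff *= disc
    itm = K - S[:, t] > 0  # regress only on in-the-money paths
    if itm.sum() > 10:
        X = np.vander(S[itm, t], 4)  # cubic polynomial basis
        cont = X @ np.linalg.lstsq(X, payoff[itm], rcond=None)[0]
        exercise = (K - S[itm, t]) > cont
        payoff[itm] = np.where(exercise, K - S[itm, t], payoff[itm])

print("LSM lower-bound estimate:", disc * payoff.mean())

Because the regression-based policy is suboptimal, the resulting estimate is (up to Monte Carlo error) a lower bound, which is why refinements such as policy iteration and multilevel variance reduction are of interest.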
APA, Harvard, Vancouver, ISO, and other styles
38

Groder, Seth. "Modeling and synthesis of the HD photo compression algorithm /." Online version of thesis, 2008. http://hdl.handle.net/1850/7118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Xu, Tianze. "Variational Inequality Based Dynamic Travel Choice Modeling." University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1234999856.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Lee, Kyoung-Jin. "Efficient ray tracing algorithms based on wavefront construction and model based interpolation method." Texas A&M University, 2005. http://hdl.handle.net/1969.1/3771.

Full text
Abstract:
Understanding and modeling seismic wave propagation is important in regional and exploration seismology. Ray tracing is a powerful and popular method for this purpose. The wavefront construction (WFC) method handles wavefronts instead of individual rays, thereby maintaining proper ray density on the wavefront; by adaptively controlling rays over a wavefront, it models wave propagation efficiently. Algorithms for a quasi-P wave wavefront construction method and a new coordinate system used to generate the wavefront construction mesh are proposed and tested for numerical properties and modeling capabilities. Traveltimes, amplitudes, and other parameters, which can be used for seismic imaging such as migration and for synthetic seismograms, are computed with the wavefront construction method. Modeling with the wavefront construction code is applied to anisotropic as well as isotropic media. Synthetic seismograms are computed using the wavefront construction method as a new way of generating synthetics. To incorporate layered velocity models, the model based interpolation (MBI) ray tracing method, designed to combine the advantages of the wavefront construction method and conventional ray tracing methods, is proposed, and experimental codes are developed for it. Many wavefront construction codes are limited to smoothed velocity models when handling complicated problems in layered velocity models, while conventional ray tracing methods suffer from the inability to control ray density during wave propagation. By interpolating the wavefront near model boundaries, the new method can handle layered velocity models and overcome the ray-density control problems of conventional methods. The test results show that this new method can be an accurate and efficient modeling tool.
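The core idea of wavefront construction, advancing a whole front and inserting rays where it becomes under-sampled, can be sketched in a few lines. The sketch below assumes a constant velocity and straight rays, which real WFC does not; it is meant only to illustrate the ray-density control.

import numpy as np

# Minimal 2D wavefront-construction sketch: march rays forward and
# insert a new ray wherever adjacent rays spread too far apart.
# Constant velocity and straight rays are simplifying assumptions.
v, dt, dmax = 2.0, 0.1, 0.5   # velocity, time step, max ray spacing
angles = np.linspace(0, np.pi, 16)
pos = np.zeros((len(angles), 2))        # rays start at a point source
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)

for step in range(20):
    pos += v * dt * dirs                # advance the wavefront
    gaps = np.linalg.norm(np.diff(pos, axis=0), axis=1)
    for i in np.where(gaps > dmax)[0][::-1]:
        # Interpolate a new ray between under-sampled neighbours.
        pos = np.insert(pos, i + 1, (pos[i] + pos[i + 1]) / 2, axis=0)
        dirs = np.insert(dirs, i + 1, (dirs[i] + dirs[i + 1]) / 2, axis=0)
        dirs[i + 1] /= np.linalg.norm(dirs[i + 1])

print("rays on final wavefront:", len(pos))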
APA, Harvard, Vancouver, ISO, and other styles
41

Zhang, Yong. "Robust algorithms for property recovery in motion modeling, medical imaging and biometrics." [Tampa, Fla.] : University of South Florida, 2005. http://purl.fcla.edu/fcla/etd/SFE0001177.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Ientilucci, Emmett J. "Hyperspectral sub-pixel target detection using hybrid algorithms and physics based modeling /." Link to online version, 2005. https://ritdml.rit.edu/dspace/handle/1850/1185.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Chavannes, Nicolas Pierre. "Local mesh refinement algorithms for enhanced modeling capabilities in the FDTD method /." Konstanz : Hartung-Gorre, 2002. http://www.loc.gov/catdir/toc/fy0801/2006483066.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Mirikitani, Derrick Takeshi. "Sequential recurrent connectionist algorithms for time series modeling of nonlinear dynamical systems." Thesis, Goldsmiths College (University of London), 2010. http://research.gold.ac.uk/3239/.

Full text
Abstract:
This thesis deals with the methodology of building data-driven models of nonlinear systems within the framework of dynamic modeling. More specifically, it focuses on sequential optimization of nonlinear dynamic models called recurrent neural networks (RNNs). In particular, the thesis considers fully connected recurrent neural networks with one hidden layer of neurons for modeling nonlinear dynamical systems. The general objective is to improve sequential training of the RNN through sequential second-order methods and to improve generalization of the RNN by regularization. The contributions of the thesis can be summarized as follows:
1. A sequential Bayesian training and regularization strategy for recurrent neural networks, based on an extension of the Evidence Framework.
2. An efficient ensemble method for Sequential Monte Carlo filtering, which allows efficient O(H^2) sequential training of the RNN.
3. The Expectation Maximization (EM) framework for training RNNs sequentially.
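To make the sequential setting concrete, here is a minimal online training loop for a small fully connected RNN using a plain first-order gradient step with one-step truncation. The thesis develops second-order Bayesian and Monte Carlo methods; this sketch shows only the baseline style of update they improve upon, with all sizes and rates chosen arbitrarily.

import numpy as np

# Sequential (online) first-order training of a tiny one-hidden-layer
# RNN on a toy time series. Illustrative baseline, not the thesis's method.
rng = np.random.default_rng(1)
H = 5  # hidden units
Wx = rng.normal(0, 0.3, (H, 1))
Wh = rng.normal(0, 0.3, (H, H))
Wo = rng.normal(0, 0.3, (1, H))
h = np.zeros((H, 1))
lr = 0.05

series = np.sin(np.linspace(0, 20, 400))  # toy nonlinear signal
for t in range(len(series) - 1):
    x, target = series[t], series[t + 1]
    h_new = np.tanh(Wx * x + Wh @ h)      # hidden state update
    y = float(Wo @ h_new)                 # one-step-ahead prediction
    err = y - target
    # One-step-truncated gradients (ignore dependence on past weights).
    dh = (Wo.T * err) * (1 - h_new**2)
    Wo -= lr * err * h_new.T
    Wx -= lr * dh * x
    Wh -= lr * dh @ h.T
    h = h_new

print("final one-step prediction error:", abs(err))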
APA, Harvard, Vancouver, ISO, and other styles
45

Pang, Huey, and 彭栩怡. "Computer modeling of building-integrated photovoltaic systems using genetic algorithms for optimization." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B31227764.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Pedjeu, Jean-Claude. "Multi-time Scales Stochastic Dynamic Processes: Modeling, Methods, Algorithms, Analysis, and Applications." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4383.

Full text
Abstract:
By introducing a concept of dynamic processes operating under multiple time scales in science and engineering, a mathematical model is formulated that leads to a system of multi-time-scale stochastic differential equations. The classical Picard-Lindelöf successive approximation scheme is extended to the model validation problem, namely, the existence and uniqueness of the solution process. This naturally leads to the problem of finding closed-form solutions of both linear and nonlinear multi-time-scale stochastic differential equations. To illustrate the scope of the ideas and results presented, multi-time-scale stochastic models for ecological and epidemiological processes in population dynamics are exhibited. Without loss of generality, the modeling and analysis of three-time-scale fractional stochastic differential equations is followed by the development of a numerical algorithm for multi-time-scale dynamic equations. The development of the numerical algorithm is based on the idea of numerical integration in the context of the notion of multi-time-scale integration. The multi-time-scale approach is then applied to the study of higher-order stochastic differential equations (HOSDE). This study utilizes the variation-of-constants technique to develop a method for finding closed-form solution processes for classes of HOSDE. The probability distribution of the solution processes is then investigated in the context of second-order equations.
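A minimal Euler-Maruyama sketch shows what numerical integration of a two-time-scale SDE can look like. The drift, diffusion terms, and scale parameter below are illustrative choices, not the model studied in the dissertation.

import numpy as np

# Euler-Maruyama sketch for a toy two-time-scale SDE
#   dX = -X dt + 0.2 dW1(t) + 0.1 X dW2(t/eps),
# where the second noise source runs on a faster clock, so its
# increment over dt has variance dt/eps. All choices are illustrative.
rng = np.random.default_rng(2)
eps, dt, T = 0.01, 1e-4, 1.0
x = 1.0
for _ in range(int(T / dt)):
    dW1 = rng.normal(0, np.sqrt(dt))        # slow-scale increment
    dW2 = rng.normal(0, np.sqrt(dt / eps))  # fast-scale increment
    x += -x * dt + 0.2 * dW1 + 0.1 * x * dW2

print("X(T) sample:", x)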
APA, Harvard, Vancouver, ISO, and other styles
47

Foulds, James Richard. "Latent Variable Modeling for Networks and Text: Algorithms, Models and Evaluation Techniques." Thesis, University of California, Irvine, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3631094.

Full text
Abstract:

In the era of the internet, we are connected to an overwhelming abundance of information. As more facets of our lives become digitized, there is a growing need for automatic tools to help us find the content we care about. To tackle the problem of information overload, a standard machine learning approach is to perform dimensionality reduction, transforming complicated high-dimensional data into a manageable, low-dimensional form. Probabilistic latent variable models provide a powerful and elegant framework for performing this transformation in a principled way. This thesis makes several advances for modeling two of the most ubiquitous types of online information: networks and text data.

Our first contribution is to develop a model for social networks as they vary over time. The model recovers latent feature representations of each individual, and tracks these representations as they change dynamically. We also show how to use text information to interpret these latent features.

Continuing the theme of modeling networks and text data, we next build a model of citation networks. The model finds influential scientific articles and the influence relationships between the articles, potentially opening the door for automated exploratory tools for scientists. The increasing prevalence of web-scale data sets provides both an opportunity and a challenge. With more data we can fit more accurate models, as long as our learning algorithms are up to the task. To meet this challenge, we present an algorithm for learning latent Dirichlet allocation topic models quickly, accurately and at scale. The algorithm leverages stochastic techniques, as well as the collapsed representation of the model. We use it to build a topic model on 4.6 million articles from the open encyclopedia Wikipedia in a matter of hours, and on a corpus of 1740 machine learning articles from the NIPS conference in seconds.

Finally, evaluating the predictive performance of topic models is an important yet computationally difficult task. We develop one algorithm for comparing topic models, and another for measuring the progress of learning algorithms for these models. The latter method achieves better estimates than previous algorithms, in many cases with an order of magnitude less computational effort.
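To make the collapsed representation mentioned above concrete, here is a plain collapsed Gibbs sampler for LDA on a toy corpus. The thesis's algorithm is a stochastic method built on this collapsed representation rather than a Gibbs sampler, so treat this only as background; the corpus and hyperparameters are made up.

import numpy as np

# Collapsed Gibbs sampling for LDA on a toy corpus (illustrative only).
rng = np.random.default_rng(3)
docs = [[0, 1, 2, 1], [2, 3, 3, 4], [0, 4, 4, 1]]  # word ids per doc
V, K, alpha, beta = 5, 2, 0.1, 0.01

z = [[rng.integers(K) for _ in d] for d in docs]    # topic assignments
ndk = np.zeros((len(docs), K)); nkw = np.zeros((K, V)); nk = np.zeros(K)
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        ndk[d, z[d][i]] += 1; nkw[z[d][i], w] += 1; nk[z[d][i]] += 1

for sweep in range(200):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            # Remove the token's counts, then resample its topic.
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = rng.choice(K, p=p / p.sum())
            z[d][i] = k
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

print("topic-word counts:\n", nkw)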

APA, Harvard, Vancouver, ISO, and other styles
48

Toledo, Sivan Avraham. "Quantitative performance modeling of scientific computations and creating locality in numerical algorithms." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/37768.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (p. 141-150) and index.
by Sivan Avraham Toledo.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
49

Alsadhan, Majed. "An application of topic modeling algorithms to text analytics in business intelligence." Thesis, Kansas State University, 2014. http://hdl.handle.net/2097/17580.

Full text
Abstract:
Master of Science
Department of Computing and Information Sciences
Doina Caragea
William H. Hsu
In this work, we focus on the task of clustering businesses in the state of Kansas based on the content of their websites and their business listing information. Our goal is to cluster the businesses and overcome the challenges facing current approaches, such as data noise, a low number of clustered businesses, and the lack of an evaluation approach. We propose an LSA-based approach to analyze the businesses' data and cluster those businesses using the bisecting K-means algorithm. In this approach, we analyze the businesses' data using LSA and produce business representations in a reduced space. We then use these representations to cluster the businesses by applying the bisecting K-means algorithm. We also apply an existing LDA-based approach to cluster the businesses and compare the results with our proposed LSA-based approach. We evaluate the results using a human-expert-based evaluation procedure, and we visualize the clusters produced in this work using Google Earth and Tableau. According to our evaluation procedure, the LDA-based approach performed slightly better than the LSA-based approach. However, the LDA-based approach had some limitations: a low number of clustered businesses, and the inability to produce a hierarchical tree for the clusters. With the LSA-based approach, we were able to cluster all the businesses and produce a hierarchical tree for the clusters.
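The LSA-plus-bisecting-K-means pipeline can be sketched directly with scikit-learn. The documents below are stand-ins for website text, and the splitting rule (always bisect the largest cluster) is one common choice, not necessarily the thesis's.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

# LSA (TF-IDF + truncated SVD) followed by bisecting K-means.
texts = ["bakery fresh bread cakes", "auto repair oil change",
         "bread pastry coffee shop", "car engine brake service",
         "farm grain wheat supply", "wheat corn seed dealer"]
X = TruncatedSVD(n_components=3).fit_transform(
        TfidfVectorizer().fit_transform(texts))

clusters = [np.arange(len(texts))]
while len(clusters) < 3:                 # target number of clusters
    big = max(range(len(clusters)), key=lambda i: len(clusters[i]))
    idx = clusters.pop(big)              # bisect the largest cluster
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X[idx])
    clusters += [idx[labels == 0], idx[labels == 1]]

for c in clusters:
    print([texts[i] for i in c])

Recording each bisection as it happens is what yields the hierarchical tree the abstract mentions.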
APA, Harvard, Vancouver, ISO, and other styles
50

Jin, Ying. "New Algorithms for Mining Network Datasets: Applications to Phenotype and Pathway Modeling." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/40493.

Full text
Abstract:
Biological network data is plentiful, with practically every experimental methodology giving "network views" into cellular function and behavior. Bioinformatic screens that yield network data include, for example, genome-wide deletion screens, protein-protein interaction assays, RNA interference experiments, and methods to probe metabolic pathways. Efficient and comprehensive computational approaches are required to model these screens and gain insight into the nature of biological networks. This thesis presents three new algorithms to model and mine network datasets. First, we present an algorithm that models genome-wide perturbation screens by deriving relations between phenotypes and subsequently using these relations in a local manner to derive gene-phenotype relationships. We show how this algorithm outperforms all previously described algorithms for gene-phenotype modeling. We also present theoretical insight into the convergence and accuracy properties of this approach. Second, we define a new data mining problem, constrained minimal separator mining, and propose algorithms as well as applications to modeling gene perturbation screens by viewing the perturbed genes as a graph separator. Both of these data mining applications are evaluated on network datasets from S. cerevisiae and C. elegans. Finally, we present an approach to model the relationship between metabolic pathways and operon structure in prokaryotic genomes. In this approach, we present a new pattern class, biclusters over domains with supplied partial orders, and present algorithms for systematically detecting such biclusters. Together, our data mining algorithms provide a comprehensive arsenal of techniques for modeling gene perturbation screens and metabolic pathways.
Ph. D.
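To make the separator notion concrete, the one-call sketch below computes a plain minimum vertex separator with NetworkX. The thesis's constrained minimal separator mining problem adds constraints and systematic enumeration that this sketch does not attempt; the graph is a made-up example.

import networkx as nx

# A plain minimum vertex separator between two nodes. The thesis's
# problem is richer; this only illustrates what a separator is.
G = nx.Graph([("a", "b"), ("b", "c"), ("a", "d"), ("d", "c"),
              ("c", "e"), ("b", "e")])
cut = nx.minimum_node_cut(G, "a", "e")
print("nodes whose removal disconnects a from e:", cut)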
APA, Harvard, Vancouver, ISO, and other styles