
Dissertations / Theses on the topic 'Reduction of automata'



Consult the top 50 dissertations / theses for your research on the topic 'Reduction of automata.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Turcel, Matej. "Minimalizace automatů s jednoduchými čítači." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445565.

Abstract:
This thesis deals with reducing the size of so-called counting automata. Counting automata extend classical finite automata with counters ranging over a bounded set of values, which allows them to process efficiently, for example, regular expressions with repetition such as a{5,10}. In this work we study the simulation relation on counting automata, by means of which we are able to reduce their size. We build on the classical simulation relation on finite automata, which we extend to counting automata in a non-trivial way; the key difference is the need to simulate not only the states but also the counters. To this end we introduce the new concept of a parameterized simulation relation, and we propose methods for computing this relation and for reducing the size of counting automata with its help. The proposed methods are implemented and their effectiveness is evaluated.
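For orientation, the classical simulation relation on finite automata that the thesis builds on can be computed by a naive fixpoint refinement. The sketch below illustrates only that classical notion (the states, alphabet and transitions are invented toy data, not taken from the thesis):

```python
# Naive computation of the forward simulation preorder on an NFA:
# q is simulated by r iff (q accepting => r accepting) and for every
# transition q -a-> q' there is r -a-> r' with q' simulated by r'.
def simulation_preorder(states, alphabet, delta, accepting):
    # Start from the coarsest candidate relation and refine to a fixpoint.
    sim = {(q, r) for q in states for r in states
           if (q not in accepting) or (r in accepting)}
    changed = True
    while changed:
        changed = False
        for (q, r) in list(sim):
            for a in alphabet:
                for q2 in delta.get((q, a), set()):
                    if not any((q2, r2) in sim for r2 in delta.get((r, a), set())):
                        sim.discard((q, r))   # (q, r) violates the condition
                        changed = True
                        break
                else:
                    continue
                break
    return sim

# Toy automaton: states simulated by others can be merged or pruned.
states, alphabet = {0, 1, 2}, {"a"}
delta = {(0, "a"): {1}, (1, "a"): {2}, (2, "a"): {2}}
print(simulation_preorder(states, alphabet, delta, accepting={2}))
```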
2

Kaati, Lisa. "Reduction Techniques for Finite (Tree) Automata." Doctoral thesis, Uppsala universitet, Avdelningen för datorteknik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9330.

Abstract:
Finite automata appear in almost every branch of computer science, for example in model checking, in natural language processing and in database theory. In many applications where finite automata occur, it is highly desirable to deal with automata that are as small as possible, in order to save memory as well as execution time. Deterministic finite automata (DFAs) can be minimized efficiently, i.e., a DFA can be converted to an equivalent DFA that has a minimal number of states. This is not the case for non-deterministic finite automata (NFAs). To minimize an NFA we need to compute the corresponding DFA using subset construction and minimize the resulting automaton. However, subset construction may lead to an exponential blow-up in the size of the automaton, and therefore even if the minimal DFA is small, it might not be feasible to compute it in practice since we need to perform the expensive subset construction. To avoid subset construction we can reduce the size of an NFA using heuristic methods. This can be done by identifying and collapsing states that are equal with respect to some suitable equivalence relation that preserves the language of the automaton. The choice of an equivalence relation is a trade-off between the desired amount of reduction and the computation time, since the coarser a relation is, the more expensive it is to compute. This way we obtain a reduction method for NFAs that is useful in practice.

In this thesis we address the problem of reducing the size of non-deterministic automata. We consider two different computation models: finite tree automata and finite automata. Finite automata can be seen as a special case of finite tree automata, and all of the previously mentioned results concerning finite automata are applicable to tree automata as well. For non-deterministic bottom-up tree automata, we present a broad spectrum of different relations that can be used to reduce their size. The relations differ in their computational complexity and reduction capabilities. We also provide efficient algorithms to compute the relations, where we translate the problem of computing a given relation on a tree automaton to the problem of computing the relation on a finite automaton. For finite automata, we have extended and re-formulated two algorithms for computing bisimulation and simulation on transition systems to operate on finite automata with alphabets. In particular, we consider a model of automata where the labels are encoded symbolically, and we provide an algorithm for computing bisimulation on this partial symbolic encoding.
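The subset construction mentioned above is standard; a minimal sketch (illustrative toy code, not from the thesis) shows where the exponential blow-up comes from: each DFA state is a set of NFA states, so up to 2^n of them may appear.

```python
from itertools import chain

# Classical subset construction: determinize an NFA whose transitions are
# given as a dict mapping (state, symbol) -> set of successor states.
def determinize(start, alphabet, delta):
    initial = frozenset({start})
    dfa_delta, worklist, seen = {}, [initial], {initial}
    while worklist:
        current = worklist.pop()
        for a in alphabet:
            # The successor DFA state is the union of all NFA successors.
            nxt = frozenset(chain.from_iterable(delta.get((q, a), ()) for q in current))
            dfa_delta[(current, a)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                worklist.append(nxt)
    return dfa_delta

# NFA for (a|b)*a(a|b): 3 NFA states, 4 reachable DFA states after determinization.
delta = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "a"): {2}, (1, "b"): {2}}
print(len({s for (s, _) in determinize(0, {"a", "b"}, delta)}))  # -> 4
```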
3

Almeida, Ricardo Manuel de Oliveira. "Efficient algorithms for hard problems in nondeterministic tree automata." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28794.

Abstract:
We present PTIME language-preserving techniques for the reduction of non-deterministic tree automata, both for the case of finite trees and for infinite trees. Our techniques are based on new transition-removing and state-merging results, which rely on binary relations that compare the downward and upward behaviours of states in the automaton. We use downward/upward simulation preorders and the more general but EXPTIME-complete trace inclusion relations, for which we introduce good under-approximations computable in polynomial time. We provide a complete picture of the combinations of downward and upward simulation/trace inclusions that can be used in our reduction techniques. We define an algorithm that puts together all the reduction results found for finite trees, and we implemented it under the name minotaut, a tool built on top of the well-known tree automata library libvata. We tested minotaut on large collections of automata originating from program verification, as well as on different classes of randomly generated automata. Our algorithm yields substantially smaller and sparser automata than all previously known reduction techniques, and it is still fast enough to handle large instances. Taking reduction of automata on finite trees one step further, we then introduce saturation, a technique that consists of adding new transitions to an automaton while preserving its language. We implemented this technique in minotaut and we show how it can make subsequent state-merge and transition-removal operations more effective. Thus we obtain a PTIME algorithm that reduces the number of states of tree automata even more than before. Additionally, we explore how minotaut alone can play an important role when performing hard operations like complementation, allowing us to obtain smaller complement automata at lower computation times overall. We then show how saturation can extend this contribution even further. An overview of the tool, highlighting some of its implementation features, is presented as well.
4

Charvát, Lucie. "Deep Pushdown Automata and Their Restricted Versions." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363840.

Abstract:
For a natural number n, n-expandable deep pushdown automata always contain at most n occurrences of non-input symbols in their pushdown during any computation. As its main result, this thesis demonstrates that these automata have the same expressive power as automata with #, occurring only at the bottom of the pushdown, and a single additional non-input symbol. An infinite hierarchy of languages accepted by these automata follows from this conclusion.
5

Carpi, Arturo. "Reduction et synchronisation d'automates non ambigus." Paris 7, 1988. http://www.theses.fr/1988PA077025.

Abstract:
The notion of unambiguous reduction of unambiguous automata is introduced. It is shown that, under very general hypotheses, for every Boolean reduction between unambiguous automata there is an unambiguous reduction between the automata considered. It is also shown that there is a reduction between suitable finite unambiguous automata recognizing two free submonoids if and only if they are generated by two rational codes x and z such that x is a composition of a complete code with z. In the last part, the synchronization problem for unambiguous automata is studied.
6

Havlena, Vojtěch. "Porovnávání jazyků a redukce automatů používaných při filtraci síťového provozu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363849.

Abstract:
The focus of this thesis is the comparison of languages and the reduction of automata used in network traffic monitoring. In this work, several approaches for approximate (language non-preserving) reduction of automata and for comparison of their languages are proposed. The reductions are based on either under-approximating the languages of automata by pruning their states, or over-approximating the language by introducing new self-loops (and pruning redundant states later). The proposed approximate reduction methods and the proposed probabilistic distance utilize information from network traffic. Formal guarantees are provided with respect to a model of network traffic represented as a probabilistic automaton. The methods were implemented and evaluated on automata used in network traffic filtering.
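As a crude illustration of the over-approximating direction described above (my own toy rendering of the self-loop idea, not the algorithm from the thesis): making a chosen state loop on every symbol — and, in the full construction, also accepting — over-approximates the language, after which the states hanging below it can be pruned.

```python
# Toy sketch: turn each chosen state into an "accept everything" state
# (self-loops on every symbol; a real reduction would also mark it accepting),
# then keep only the states that remain reachable. Invented helper.
def saturate_and_prune(alphabet, delta, initial, chosen):
    for q in chosen:
        for a in alphabet:
            delta[(q, a)] = {q}            # replace outgoing edges by self-loops
    reachable, stack = {initial}, [initial]
    while stack:                           # depth-first reachability
        q = stack.pop()
        for a in alphabet:
            for r in delta.get((q, a), ()):
                if r not in reachable:
                    reachable.add(r)
                    stack.append(r)
    pruned = {(q, a): t for (q, a), t in delta.items() if q in reachable}
    return reachable, pruned

# States 2 and 3 hang below state 1; saturating state 1 makes them prunable.
delta = {(0, "a"): {1}, (1, "a"): {2}, (2, "a"): {3}}
print(saturate_and_prune({"a"}, delta, 0, chosen={1}))
```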
7

Luukkainen, Matti. "A process algebraic reduction strategy for automata theoretic verification of untimed and timed concurrent systems." Helsinki : University of Helsinki, 2003. http://ethesis.helsinki.fi/julkaisut/mat/tieto/vk/luukkainen/.

8

Malinowski, Janusz. "Algorithmes pour la synthèse et le model checking." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4780/document.

Abstract:
We consider a discretization-based approach to controller synthesis of hybrid systems that can handle non-linear dynamics. In such an approach, states are grouped together in a finite-index partition at the price of a non-deterministic over-approximation of the transition relation. The main contribution of this work is a technique to reduce the state explosion generated by the discretization: exploiting structural properties of ODE systems, we propose a hierarchical approach to the synthesis problem by solving it first for sub-problems and using the results for state space reduction in the full problem. A secondary contribution concerns combined safety and liveness control objectives that approximate stabilization. Results implemented on a prototype show the benefit of this approach. For the verification, we study the model checking problem of timed automata based on SAT solving. Our work investigates alternative possibilities for coding the SAT reductions that are based on parallel executions of independent transitions. While such an optimization has been studied for discrete systems, its transposition to timed automata poses the question of what it means for timed transitions to be executed "in parallel". The most obvious interpretation is that the transitions in parallel take place at the same time (synchronously). However, it is possible to relax this condition. On the whole, we define and analyse three different semantics of timed sequences with parallel transitions. We prove the correctness of the proposed semantics and report experimental results with a prototype implementation.
9

Schwartzmann, Benjamin. "Automatic X-Ray Parameter Analysis and Reduction." Thesis, KTH, Ljud- och bildbehandling, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-55378.

Abstract:
This thesis describes research in the area of parameter analysis for X-ray imaging. This work was performed at Philips Healthcare in Best (the Netherlands) as a final project for the Master's programme at the Sound and Image Processing Laboratory at Kungliga Tekniska Högskolan (KTH), Stockholm. The objective of this project is to provide methods for automatic parameter analysis and reduction for X-ray tuning. These methods can be used to reduce the number of parameters involved in X-ray tuning. X-ray processing is performed via a black-box process, and parameter analysis consists in looking at the impact on the resulting X-ray image. The visual quality of this image depends on parameter tuning. With a large number of parameters, analysing their visual impact directly is not feasible, which is why objective image quality (OIQ) assessment is used to get numerical results. Several image quality assessment models are reviewed, leading to further research into the full-reference and no-reference model approaches. Both assessments are explored, with investigation of four different full-reference metrics, namely the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity (SSIM), the Visual Information Fidelity (VIF) and one using wavelet properties that we have called the Wavelet Method (WM), and three no-reference metrics: noise, contrast and sharpness. Search algorithms are used to get a set of parameters which give the same image quality (using OIQ) such that dimensionality reduction can be performed. Several search algorithms are reviewed, from the simplest (looking at the function evaluations of all points) to the most sophisticated algorithms for global optimization (e.g. the genetic algorithm). Depending on the function to optimize, different algorithms are used. Finding correlated parameters, or parameters that have no impact on the image quality, is the way to reduce the number of parameters. Principal Component Analysis (PCA), one of the most common methods for dimensionality reduction, is performed on the combined results of OIQ and the search algorithms. For each step of the project, we test the assessments or the algorithms on some examples to validate the used procedure. We finally test all our methods with one IP function which acts like a real X-ray process. The results will enable us to see if parameter analysis and reduction is feasible.
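Of the four full-reference metrics listed, PSNR is the simplest; a minimal numpy version (assuming 8-bit images, with made-up example data) looks like this:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a noisy copy of a random 8-bit image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 5, size=img.shape), 0, 255)
print(round(psnr(img, noisy), 1))
```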
10

Rioual, Jean-Luc. "The automatic control of boundary layer transition." Thesis, University of Southampton, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259625.

11

Larsson, Camilla. "Reduction of oil pump losses in automatic transmissions." Thesis, Linköpings universitet, Fordonssystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-111937.

Abstract:
In the vehicle industry it is of great interest to reduce the emissions and lower the fuel consumption. Up to now a lot of effort has been put into increasing the efficiency of the engine, but it starts to get expensive to keep improving the engine. In this master thesis the transmission, and especially the oil supply to the transmission, is investigated. An example of how the requirements of an oil pump can be decided is described. Knowing the requirements, different pumps may be adapted to meet the demands. The gear pump used today is compared with a variable displacement pump and an electric pump. The gear pump is not possible to control, but the other two are. A few simple control strategies are introduced. The strategies are implemented and the three pumps are used in the same drive cycle. It is shown that it is possible to reduce the energy that the pump requires if it is replaced by a variable vane pump or an electric pump.
12

Salzmann, Roger. "Fuel staging for NOx reduction in automatic wood furnaces /." [S.l.] : [s.n.], 2000. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=13531.

13

Turner, Andrew William. "Computational methods of automatic reduction planning of acetabular fractures." Thesis, Imperial College London, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.529377.

14

Al-Khazraji, Yusra [Verfasser], and Bernhard [Akademischer Betreuer] Nebel. "Analysis of partial order reduction techniques for automated planning." Freiburg : Universität, 2017. http://d-nb.info/1163200824/34.

15

Voss, T. J. "Automated Analysis Tools for Reducing Spacecraft Telemetry Data." International Foundation for Telemetering, 1993. http://hdl.handle.net/10150/611898.

Abstract:
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada
A practical description is presented of the methods used to reduce spacecraft telemetry data using a hierarchical toolkit of software programs developed for a UNIX environment.
16

Nijjar, Paul. "An attempt to automate NP-hardness reductions via SOE logic." Waterloo, Ont. : University of Waterloo, 2004. http://etd.uwaterloo.ca/etd/pnijjar2004.pdf.

Abstract:
Thesis (MMath)--University of Waterloo, 2004.
"A thesis presented to the University of Waterloo in fulfillment of the thesis requirements for the degree of Master of Mathematics in Computer Science." Includes bibliographical references.
17

Hague, Douglas James. "The automatic classification of building maintenance." Thesis, De Montfort University, 1997. http://hdl.handle.net/2086/4325.

18

Tang, Mi. "Torque ripple reduction in a.c. permanent magnet servo motor drives." Thesis, University of Nottingham, 2017. http://eprints.nottingham.ac.uk/43379/.

Abstract:
Servo systems play an important role in industrial automation. A servo system denotes a closed-loop controlled system capable of tracking required demands. One way of achieving a high-performance servo drive system is to apply closed-loop control of an a.c. permanent magnet synchronous machine (PMSM). A PMSM is a type of machine that rotates once three-phase a.c. voltages are supplied. The use of permanent magnet materials contributes to the high efficiency of the PMSM and makes it a popular type of machine in industrial applications. However, the interaction between the permanent magnets and the machine stator generates torque ripple and consequently unsmooth speed. Therefore, the torque ripple of a PMSM needs to be considered carefully in the control of such servo systems. An innovative control scheme combining an enhanced high-bandwidth deadbeat current controller and a fractional-delay variable-frequency angle-based repetitive controller is developed in this work in order to minimize torque ripple. For the purpose of accurately modelling the cogging torque and flux harmonics in the PMSM, a lookup-table-embedded PMSM model is also proposed. It has been validated by both simulation and experimental tests that the proposed control scheme is able to reduce torque ripple in a PMSM drive system effectively over a wide range of frequencies, and even during transients, which has never been achieved before according to the author's knowledge. The proposed method is not only adaptive to variable frequencies, but also to the variations of electrical and mechanical parameters in normal operating conditions.
19

Russo, Charles. "AVL AND RESPONSE TIME REDUCTION: IMAGE AND REALITY." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2647.

Abstract:
Automatic vehicle locator (AVL) systems, utilizing the military's global positioning system, may impact response time to law enforcement calls for service. In order to evaluate the impacts of AVL on response time to calls for service at the Altamonte Springs Police Department (ASPD), computer-aided dispatch (CAD) data from the years 1999 to 2003 were analyzed. The analysis of each of the data sets consisted of an initial sequence chart, an analysis of variance (ANOVA), a means plot and a linear regression. Interviews of ASPD personnel were conducted to understand user perceptions of AVL. Based on the ANOVA results, trends indicate that weekly response time was significantly lower during the AVL partial implementation period than during the pre- or post-AVL stages across all categories of data analyzed. Based on the regression results, trends indicate that the overall impact of AVL on response time for all categories analyzed is flat, showing AVL as having no overall impact on response time across all calls for service analyzed. An exception to this is the findings related to Priority 3 calls for service; however, this exception can be attributed to performance during the pre-AVL implementation stage. These results do not suggest a capability for AVL to reduce response time to calls for service in a meaningful, comprehensive way. Thus, the study's hypotheses are not supported.
Ph.D.
Department of Criminal Justice and Legal Studies
Health and Public Affairs
Public Affairs: Ph.D.
20

Sjöberg, Johan. "Optimal Control and Model Reduction of Nonlinear DAE Models." Doctoral thesis, Linköpings universitet, Reglerteknik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11345.

Abstract:
In this thesis, different topics for models that consist of both differential and algebraic equations are studied. The interest in such models, denoted DAE models, has increased substantially during the last years. One of the major reasons is that several modern object-oriented modeling tools used to model large physical systems yield models in this form. The DAE models will, at least locally, be assumed to be described by a decoupled set of ordinary differential equations and purely algebraic equations. In theory, this assumption is not very restrictive because index reduction techniques can be used to rewrite rather general DAE models to satisfy this assumption. One of the topics considered in this thesis is optimal feedback control. For state-space models, it is well known that the Hamilton-Jacobi-Bellman equation (HJB) can be used to calculate the optimal solution. For DAE models, a similar result exists where a Hamilton-Jacobi-Bellman-like equation is solved. This equation has an extra term in order to incorporate the algebraic equations, and it is investigated how the extra term must be chosen in order to obtain the same solution from the different equations. A problem when using the HJB to find the optimal feedback law is that it involves solving a nonlinear partial differential equation. Often, this equation cannot be solved explicitly. An easier problem is to compute a locally optimal feedback law. For analytic nonlinear time-invariant state-space models, this problem was solved in the 1960s, and in the 1970s the time-varying case was solved as well. In both cases, the optimal solution is described by convergent power series. In this thesis, both of these results are extended to analytic DAE models. Usually, the power series solution of the optimal feedback control problem consists of an infinite number of terms. In practice, an approximation with a finite number of terms is used. A problem is that for certain problems, the region in which the approximate solution is accurate may be small. Therefore, another parametrization of the optimal solution, namely rational functions, is studied. It is shown that for some problems, this parametrization gives a substantially better result than the power series approximation in terms of approximating the optimal cost over a larger region. A problem with the power series method is that the computational complexity grows rapidly both in the number of states and in the order of approximation. However, for DAE models where the underlying state-space model is control-affine, the computations can be simplified. Therefore, conditions under which this property holds are derived. Another major topic considered is how to include stochastic processes in nonlinear DAE models. Stochastic processes are used to model uncertainties and noise in physical processes, and are often an important part in, for example, state estimation. Therefore, conditions are presented under which noise can be introduced in a DAE model such that it becomes well-posed. For well-posed models, it is then discussed how particle filters can be implemented for estimating the time-varying variables in the model. The final topic in the thesis is model reduction of nonlinear DAE models. The objective of model reduction is to reduce the number of states while not affecting the input-output behavior too much. Three different approaches are studied, namely balanced truncation, balanced truncation using minimization of the co-observability function, and balanced residualization. To compute the reduced model for the different approaches, a method originally derived for nonlinear state-space models is extended to DAE models.
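For reference, the Hamilton-Jacobi-Bellman equation mentioned above has, for an ordinary state-space model with dynamics x' = f(x, u) and running cost L(x, u), the standard infinite-horizon form below. This is the textbook statement, not a quotation from the thesis (the DAE variant discussed there carries an extra term for the algebraic equations):

```latex
0 = \min_{u} \left( L(x,u) + \frac{\partial V}{\partial x}(x)\, f(x,u) \right)
```

where V denotes the optimal cost-to-go function.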
21

Latu, Ioana M. "Reducing Automatic Stereotype Activation: Mechanisms and Moderators of Situational Attribution Training." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/psych_diss/72.

Abstract:
Individuals tend to underestimate situational causes and overly rely on trait causes in explaining negative behaviors of outgroup members, a tendency named the ultimate attribution error (Pettigrew, 1979). This attributional pattern is directly related to stereotyping, because attributing negative behaviors to internal, stable causes tends to perpetuate negative stereotypes of outgroup members. Recent research on implicit bias reduction revealed that circumventing individuals’ tendency to engage in the ultimate attribution error led to reduced stereotyping. More specifically, training White participants to consider situational factors in determining Blacks’ negative stereotypic behaviors led to decreased automatic stereotype activation. This technique was named Situational Attribution Training (Stewart, Latu, Kawakami, & Myers, 2010). In the current studies, I investigated the mechanisms and moderators of Situational Attribution Training. In Study 1, I investigated the effect of training on spontaneous situational inferences. Findings revealed that training did not increase spontaneous situational inferences: both training and control participants showed evidence of spontaneous situational inferences. In Study 2, I investigated whether correcting trait inferences by taking into account situational factors has become automatic after training. In addition, explicit prejudice, motivations to control prejudice, and cognitive complexity variables (need for cognition, personal need for structure) were investigated as moderators of training success. These findings revealed that Situational Attribution Training works best for individuals high in need for cognition, under conditions of no cognitive load, but not high cognitive load. Training increased implicit bias for individuals high in modern racism, regardless of their cognitive load. Possible explanations of these findings were discussed, including methodological limitations and theoretical implications.
22

Fehr, Jörg [Verfasser]. "Automated and Error Controlled Model Reduction in Elastic Multibody Systems / Jörg Fehr." Aachen : Shaker, 2011. http://d-nb.info/1069050350/34.

23

Ahrens, Jared. "A Compositional Approach to Asynchronous Design Verification with Automated State Space Reduction." Scholar Commons, 2007. http://scholarcommons.usf.edu/etd/3751.

Abstract:
Model checking is the most effective means of verifying the correctness of asynchronous designs, and state space exploration is central to model checking. Although model checking can achieve very high verification coverage, the high degree of concurrency in asynchronous designs often leads to state explosion during state space exploration. To inhibit this explosion, our approach builds on the ideas of compositional verification. In our approach, a design modeled in a high-level description is partitioned into a set of parallel components. Before state space exploration, each component is paired with an over-approximated environment to decouple it from the rest of the design. Then, a global state transition graph is constructed by reducing and incrementally composing component state transition graphs. We take great care during reduction and composition to preserve all failures found during the initial state space exploration of each component. To further reduce complexity, interface constraints are automatically derived for the over-approximated environment of each component. We prove that our approach is conservative in that false positive results are never produced. The effectiveness of our approach is demonstrated by the experimental results of several case studies showing that our approach can verify designs that cannot be handled by traditional flat approaches. The experiments also show that constraints can reduce the size of the global state transition graph and prevent some false failures.
24

Battey, Heather Suzanne. "Dimension reduction and automatic smoothing in high dimensional and functional data analysis." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609849.

25

Procházka, Lukáš. "Redukce nedeterministických konečných automatů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-237032.

Abstract:
The nondeterministic finite automaton is an important tool used to process strings in many different areas of programming. It is important to try to reduce its size in order to increase programs' effectiveness. However, this problem is computationally hard, so we need to search for new techniques. The basics of finite automata are described in this work. Some methods for their reduction are then introduced. Usable reduction algorithms are described in greater detail, then implemented and tested. The test results are finally evaluated.
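A typical building block of such reduction algorithms is the quotient construction: once a language-preserving equivalence on the states has been computed, each class is collapsed to a single state. A generic sketch follows (toy code with made-up data, independent of the thesis implementation):

```python
# Collapse NFA states according to a language-preserving equivalence.
# `classes` maps each state to a representative of its equivalence class.
def quotient(states, delta, initial, accepting, classes):
    q_states = {classes[q] for q in states}
    q_delta = {}
    for (q, a), targets in delta.items():
        q_delta.setdefault((classes[q], a), set()).update(classes[t] for t in targets)
    q_accepting = {classes[q] for q in accepting}
    return q_states, q_delta, classes[initial], q_accepting

# Merging states 1 and 2 (assumed equivalent: both accept exactly "b")
# shrinks the automaton by one state.
delta = {(0, "a"): {1, 2}, (1, "b"): {3}, (2, "b"): {3}}
print(quotient({0, 1, 2, 3}, delta, 0, {3}, {0: 0, 1: 1, 2: 1, 3: 3}))
```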
26

Bozkurt, M. "Automated realistic test input generation and cost reduction in service-centric system testing." Thesis, University College London (University of London), 2013. http://discovery.ucl.ac.uk/1400300/.

Abstract:
Service-centric System Testing (ScST) is more challenging than testing traditional software due to the complexity of service technologies and the limitations that are imposed by the SOA environment. One of the most important problems in ScST is the problem of realistic test data generation. Realistic test data is often generated manually or using an existing source; it is thus hard to automate and laborious to generate. One of the limitations that makes ScST challenging is the cost associated with invoking services during the testing process. This thesis aims to provide solutions to the aforementioned problems: automated realistic input generation and cost reduction in ScST. To address automation in realistic test data generation, the concept of Service-centric Test Data Generation (ScTDG) is presented, in which existing services are used as realistic data sources. ScTDG minimises the need for tester input and the dependence on existing data sources by automatically generating service compositions that can generate the required test data. In experimental analysis, our approach achieved between 93% and 100% success rates in generating realistic data, while state-of-the-art automated test data generation achieved only between 2% and 34%. The thesis addresses cost concerns at the test data generation level by enabling data source selection in ScTDG. Source selection in ScTDG has many dimensions, such as cost, reliability and availability. This thesis formulates this problem as an optimisation problem and presents a multi-objective characterisation of service selection in ScTDG, aiming to reduce the cost of test data generation. A cost-aware Pareto optimal test suite minimisation approach addressing testing cost concerns during test execution is also presented. The approach adapts traditional multi-objective minimisation approaches to the ScST domain by formulating ScST concerns, such as invocation cost and test case reliability. In experimental analysis, the approach achieved reductions between 69% and 98.6% in the monetary cost of service invocations during testing.
27

Hao, Yan. "Automated Reductions of Markov Chain Models of Calcium Release Site Models." W&M ScholarWorks, 2012. https://scholarworks.wm.edu/etd/1539623353.

Abstract:
Markov chain models have played an important role in understanding the relationship between single channel gating of intracellular calcium (Ca2+) channels, specifically inositol 1,4,5-trisphosphate receptors (IP3Rs) and ryanodine receptors (RyRs), and the stochastic dynamics of Ca2+ release events, known as Ca2+ puffs and sparks. Mechanistic Ca2+ release site models are defined by the composition of single channel models whose transition probabilities depend on the local calcium concentration and thus the state of the other channels. Unfortunately, the large state space of such compositional models impedes simulation and computational analysis of whole cell Ca2+ signaling, in which the stochastic dynamics of localized Ca2+ release events play an important role. This dissertation introduces, implements and validates the application of several automated model reduction techniques that significantly reduce the computational cost of mechanistic, compositionally defined Ca2+ release site models.

A common feature of Ca2+ channel models is the separation of time scales. For example, the well-known bell-shaped equilibrium open probability of IP3Rs can be reproduced by Markov chain models that include transitions mediated by fast Ca2+ activation and slower Ca2+ inactivation. Chapter 2 introduces an automated model reduction technique based on fast/slow analysis that leverages these time scale differences. Rate constants in the single channel model are categorized as either fast or slow, groups of release site states that are connected by fast transitions are identified and lumped, and transition rates between reduced states are chosen consistent with the conditional probability distributions among states within each group. The fast/slow reduction approach is validated by the fact that puff/spark statistics can be efficiently computed from reduced Ca2+ release site models with small and transient error.

For Markov chain Ca2+ release site models without time-scale separation, the manner in which the full model states should be aggregated for optimal reduction is difficult to determine a priori. In Chapter 3, a genetic algorithm based approach that mimics the inheritance, mutation and selection processes of natural evolution is implemented to reduce these models. Given a full model of interest and a target reduced model size, this genetic algorithm searches for set partitions, each corresponding to a potential scheme for state aggregation, that lead to reduced models that well-approximate the full model. A whole cell model with coupled local and global Ca2+ signaling is simplified by replacing a compositionally defined full Ca2+ release site model with a reduced model obtained through the genetic algorithm.

In Chapter 4, a Langevin formulation of Ca2+ release sites is introduced as an alternative model reduction technique that is applicable when the number of channels per Ca2+ release site is too large for the previously discussed reduction methods, but not so large that the stochasticity of Ca2+ release is negligible. The Langevin formulation for coupled intracellular Ca2+ channels results in stochastic differential equations that well-approximate the corresponding Markov chain models when release sites possess as few as 20 channels, and the agreement improves as the number of channels per release site increases. Importantly, the computational time required by the Langevin approach does not increase with the size of the Ca2+ release site.
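The lumping step described for Chapter 2 can be pictured as grouping states connected by fast transitions, for instance with a union-find pass. The sketch below (invented states and rates) only illustrates that grouping idea; choosing the reduced transition rates requires the conditional probability distributions discussed in the abstract and is not shown.

```python
# Group Markov-chain states that are connected by transitions labelled "fast".
# transitions: list of (i, j, rate, is_fast) tuples over states 0..n-1.
def lump_fast_states(n, transitions):
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i, j, _rate, is_fast in transitions:
        if is_fast:
            parent[find(i)] = find(j)       # union fast-connected states
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Four states; 0-1 and 2-3 are linked by fast transitions -> two lumped states.
print(lump_fast_states(4, [(0, 1, 1e3, True), (1, 2, 1.0, False), (2, 3, 5e2, True)]))
```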
28

Kalinovská, Romana. "Efektivnost harm reduction v drogové politice s hlavním zaměřením na programy výměny jehel a výdejní automaty." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-75148.

Abstract:
The thesis deals with harm reduction (HR) in drug addiction and its effectiveness around the world, and especially in the Czech Republic. At the beginning, the term HR in drug addiction is explained: why it originated, what it involves, and what opinions other states hold about it, as reflected in foreign studies and economic theory. The main focus is dedicated to needle exchange programs (NEPs) in low-threshold centres and field programs, and to their additional services: syringe vending machines (SVMs) and fixed/mobile vans. Next, I concentrate on different general views of effectiveness analysis, particularly CEA, CBA, CMA and CUA. The following part concerns the situation of HR in the Czech Republic, mainly the NEPs and SVMs operated by Progressive o.s. Thanks to a questionnaire and other data from Progressive o.s., I apply an effectiveness analysis using the appropriate method. At the end, I summarize the results, evaluate the effectiveness of the services and suggest possible recommendations for the Czech Republic concerning HR.
29

West, Terrance Roshad. "HYPERSPECTRAL DIMENSIONALITY REDUCTION VIA SEQUENTIAL PARAMETRIC PROJECTION PURSUITS FOR AUTOMATED INVASIVE SPECIES TARGET RECOGNITION." MSSTATE, 2006. http://sun.library.msstate.edu/ETD-db/theses/available/etd-09152006-094948/.

Abstract:
This thesis investigates the use of sequential parametric projection pursuits (SPPP) for hyperspectral dimensionality reduction and invasive species target recognition. The SPPP method is implemented in a top-down fashion, where hyperspectral bands are used to form an increasing number of smaller groups, with each group being projected onto a subspace of dimensionality one. Both supervised and unsupervised potential projections are investigated for their use in the SPPP method. Fisher's linear discriminant analysis (LDA) is used as a potential supervised projection. Average, Gaussian-weighted average, and principal component analysis (PCA) are used as potential unsupervised projections. The Bhattacharyya distance is used as the SPPP performance index. The performance of the SPPP method is compared to two other currently used dimensionality reduction techniques, namely best spectral band selection (BSBS) and best wavelet coefficient selection (BWCS). The SPPP dimensionality reduction method is combined with a nearest mean classifier to form an automated target recognition (ATR) system. The ATR system is tested on two invasive species hyperspectral datasets: a terrestrial case study of Cogongrass versus Johnsongrass and an aquatic case study of Waterhyacinth versus American Lotus. For both case studies, the SPPP approach either outperforms or performs on par with the BSBS and BWCS methods in terms of classification accuracy; however, the SPPP approach requires significantly less computational time. For the Cogongrass and Waterhyacinth applications, the SPPP method results in overall classification accuracy in the mid-to-upper 90s.
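The Bhattacharyya distance used as the SPPP performance index has, for two Gaussian classes, a standard closed form; a numpy sketch follows (the class statistics here are made-up placeholders, not data from the thesis):

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussian classes."""
    cov = (cov1 + cov2) / 2.0                  # pooled covariance
    diff = mu1 - mu2
    term1 = diff @ np.linalg.solve(cov, diff) / 8.0
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

mu1, mu2 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
cov = np.eye(2)
print(bhattacharyya_gaussian(mu1, cov, mu2, cov))  # 0.25 for equal unit covariances
```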
30

Revanur, Vandan, and Ayodeji Ayibiowu. "Automatic Generation of Descriptive Features for Predicting Vehicle Faults." Thesis, Högskolan i Halmstad, CAISR Centrum för tillämpade intelligenta system (IS-lab), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-42885.

Abstract:
Predictive Maintenance (PM) has been increasingly adopted in the automotive industry in recent decades, alongside conventional approaches such as preventive maintenance and diagnostic/corrective maintenance, since it provides many advantages: it estimates failure proactively, before the actual occurrence, and it adapts to the present status of the vehicle, in turn allowing flexible maintenance schedules for efficient repair or replacement of faulty components. PM necessitates the storage and analysis of large amounts of sensor data. This requirement can be a challenge in deploying this method on board the vehicles due to the limited storage and computational power of the vehicle's hardware. Hence, this thesis seeks to obtain low dimensional descriptive features from high dimensional data using Representation Learning. This low dimensional representation will be used for predicting vehicle faults, specifically Turbocharger-related failures. Since the Logged Vehicle Data (LVD) formed the basis of all the data utilized in this thesis, it allowed for the evaluation of large populations of trucks without requiring additional measuring devices and facilities. A gradual degradation methodology is considered for describing vehicle condition, which allows for modeling the malfunction/failure as a continuous process rather than a discrete flip from a healthy to an unhealthy state. This approach eliminates the challenge of data imbalance between healthy and unhealthy samples. Two important hypotheses are presented. Firstly, Parallel Stacked Classical Autoencoders would produce better representations compared to individual Autoencoders. Secondly, employing Learned Embeddings on Categorical Variables would improve the performance of the dimensionality reduction. Based on these hypotheses, a model architecture is proposed and developed on the LVD. The model is shown to achieve good performance, close to the standards of previous state-of-the-art research. This thesis finally illustrates the potential of applying parallel stacked architectures with Learned Embeddings for the categorical features, and a combination of feature selection and extraction for numerical features, to predict the Remaining Useful Life (RUL) of a vehicle, in the context of the Turbocharger. A performance improvement of 21.68% with respect to the Mean Absolute Error (MAE) loss, with an 80.42% reduction in the size of the data, was observed.
31

Nijjar, Paul. "An Attempt to Automate NP-Hardness Reductions via SO∃ Logic." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/1162.

Abstract:
We explore the possibility of automating NP-hardness reductions. We motivate the problem from an artificial intelligence perspective, then propose the use of second-order existential (SO∃) logic as representation language for decision problems. Building upon the theoretical framework of J. Antonio Medina, we explore the possibility of implementing seven syntactic operators. Each operator transforms SO∃ sentences in a way that preserves NP-completeness. We subsequently propose a program which implements these operators. We discuss a number of theoretical and practical barriers to this task. We prove that determining whether two SO∃ sentences are equivalent is as hard as GRAPH ISOMORPHISM, and prove that determining whether an arbitrary SO∃ sentence represents an NP-complete problem is undecidable.
32

Jung, Uk. "Wavelet-based Data Reduction and Mining for Multiple Functional Data." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5084.

Abstract:
Advanced technology, such as various types of automatic data acquisition, management, and networking systems, has created a tremendous capability for managers to access valuable production information to improve their operation quality and efficiency. Signal processing and data mining techniques are more popular than ever in many fields, including intelligent manufacturing. As data sets increase in size, their exploration, manipulation, and analysis become more complicated and resource consuming. Timely synthesized information, such as functional data, is needed for product design, process trouble-shooting, quality/efficiency improvement and resource allocation decisions. A major obstacle in those intelligent manufacturing systems is that tools for processing a large volume of information coming from numerous stages of manufacturing operations are not available. Thus, the underlying theme of this thesis is to reduce the size of data in a mathematically rigorous framework, and apply existing or new procedures to the reduced-size data for various decision-making purposes. This thesis first proposes the Wavelet-based Random-effect Model, which can generate multiple functional data signals that have wide fluctuations (between-signal variations) in the time domain. The random-effect wavelet atom position in the model has a locally focused impact, which distinguishes it from other traditional random-effect models in the biological field. For the data-size reduction, in order to deal with heterogeneously selected wavelet coefficients for different single curves, this thesis introduces the newly defined Wavelet Vertical Energy metric of multiple curves and utilizes it for an efficient data reduction method. The newly proposed method selects important positions for the whole set of multiple curves by comparing each vertical energy metric with a threshold (the Vertical Energy Threshold, VET), which is optimally decided based on an objective function. The objective function balances the reconstruction error against a data reduction ratio. Based on the class membership information of each signal, this thesis then proposes the Vertical Group-Wise Threshold method to increase the discriminative capability of the reduced-size data, so that the reduced data set retains salient differences between classes as much as possible. A real-life example (tonnage data) shows that our proposed method is promising.
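One way to picture the vertical-energy selection described above: decompose each curve with the same wavelet transform, sum the squared coefficients across curves at each coefficient position, and keep the positions whose summed energy exceeds the threshold. The sketch below is my reading of the abstract, not the thesis code; it uses a one-level Haar transform and an arbitrary threshold in place of the optimized VET.

```python
import numpy as np

def haar_level1(x):
    # One-level Haar transform: approximation then detail coefficients.
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.concatenate([a, d])

def select_positions(curves, threshold):
    coeffs = np.array([haar_level1(c) for c in curves])  # one row per curve
    vertical_energy = (coeffs ** 2).sum(axis=0)          # energy per position
    return np.nonzero(vertical_energy > threshold)[0]    # kept positions

# Five shifted sine curves stand in for multiple functional data signals.
curves = [np.sin(np.linspace(0, 2 * np.pi, 64)) + 0.1 * k for k in range(5)]
print(select_positions(curves, threshold=1.0))
```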
33

Kubo, Takeshi. "Efficiency and reproducibility in pulmonary nodule detection in simulated dose reduction lung CT images." Kyoto University, 2019. http://hdl.handle.net/2433/243276.

34

Zhao, Hong. "Automatic generation and reduction of the semi-fuzzy knowledge base in symbolic processing and numerical calculation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1995. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ27811.pdf.

35

Tidefelt, Henrik. "Structural algorithms and perturbations in differential-algebraic equations." Licentiate thesis, Linköping : Department of Electrical Engineering, Linköpings universitet, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9011.

36

Panzer, Heiko [Verfasser]. "Model Order Reduction by Krylov Subspace Methods with Global Error Bounds and Automatic Choice of Parameters / Heiko Panzer." München : Verlag Dr. Hut, 2014. http://d-nb.info/1063222176/34.

37

Furuhashi, Takeshi, Tomohiro Yoshikawa, Hiromu Takahashi, and Yusuke Kaneda. "A Study on Reliability-based Selective Repeat Automatic Repeat Request for Reduction of Discrimination Time of P300 Speller." 日本知能情報ファジィ学会, 2010. http://hdl.handle.net/2237/20692.

Abstract:
Session ID: SA-B1-2
SCIS & ISIS 2010, Joint 5th International Conference on Soft Computing and Intelligent Systems and 11th International Symposium on Advanced Intelligent Systems. December 8-12, 2010, Okayama Convention Center, Okayama, Japan
38

Malfante, Marielle. "Automatic classification of natural signals for environmental monitoring." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAU025/document.

Abstract:
This manuscript summarizes three years of work addressing the use of machine learning for the automatic analysis of natural signals. The main goal of this PhD is to produce efficient and operative frameworks for the analysis of environmental signals, in order to gather knowledge and better understand the considered environment. In particular, we focus on the automatic tasks of detection and classification of natural events. This thesis proposes two tools based on supervised machine learning (Support Vector Machine, Random Forest) for (i) the automatic classification of events and (ii) the automatic detection and classification of events. The success of the proposed approaches lies in the feature space used to represent the signals. This relies on a detailed description of the raw acquisitions in various domains: temporal, spectral and cepstral. A comparison with features extracted using convolutional neural networks (deep learning) is also made, and favours the physical features over deep learning methods for representing transient signals. The proposed tools are tested and validated on real-world acquisitions from different environments: (i) underwater and (ii) volcanic areas. The first application considered in this thesis is devoted to the monitoring of coastal underwater areas using acoustic signals: continuous recordings are analysed to automatically detect and classify fish sounds. A day-to-day pattern in the fish behaviour is revealed. The second application targets volcano monitoring: the proposed system classifies seismic events into categories which can be associated with different phases of the internal activity of volcanoes. The study is conducted on six years of volcano-seismic data recorded on Ubinas volcano (Peru). In particular, the outcomes of the proposed automatic classification system helped in the discovery of misclassifications in the manual annotation of the recordings. In addition, the proposed automatic classification framework for volcano-seismic signals has been deployed and tested in Indonesia for the monitoring of Mount Merapi. The software implementation of the framework developed in this thesis has been collected in the Automatic Analysis Architecture (AAA) package and is freely available.
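A minimal scikit-learn sketch of the supervised setup the abstract describes: precomputed feature vectors (temporal/spectral/cepstral descriptors) fed to a Random Forest classifier. The feature matrix below is random placeholder data and the hyper-parameters are not the thesis settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data standing in for feature vectors extracted from signals.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))           # 500 events, 40 descriptors each
y = rng.integers(0, 3, size=500)         # 3 hypothetical event classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))   # ~chance on random data
```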
39

Martín, Montoya Ligia Andrea [Verfasser]. "Automatic reduction of large x-ray fluorescence data-sets applied to XAS and mapping experiments / Ligia Andrea Martín Montoya." Paderborn : Universitätsbibliothek, 2017. http://d-nb.info/1126005630/34.

40

Thomas, Clayton Austin. "Modeling and Performance Analysis of a 10-Speed Automatic Transmission for X-in-the-Loop Simulation." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1532014317646195.

41

Abdel-Rahman, Tarek. "Mixture of Factor Analyzers (MoFA) Models for the Design and Analysis of SAR Automatic Target Recognition (ATR) Algorithms." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1500625807524146.

42

Lekkas, Sotirios. "Life Cycle Assessment on Bridge Abutments : Automated Design in Structural Engineering." Thesis, KTH, Bro- och stålbyggnad, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259573.

Abstract:
Life Cycle Assessment (LCA) is globally the most recognised method for quantifying the impact that a product or service has on the environment through its whole life-span. The construction sector plays a key role in the depletion of natural resources and the energy consumption on the planet. Thus it is fundamental that an environmental assessment tool like LCA should be in close cooperation with the construction process. This thesis focuses on the environmental impact of bridge abutments, and can be divided into two parts. The first one focuses on enhancing automated design in the construction field. A Python code is created that focuses on creating the geometry of any type of bridge abutment and conducting the calculations for the required concrete and reinforcement. The aim is to make the process completely automated. The second part introduces three alternative designs for a bridge abutment that attempt to have the same structural properties and cooperate successfully with the superstructure, while at the same time utilizing as little material as possible. The possible reduction in material is quantified in environmental terms after an environmental impact assessment is performed. The results show that different designs can have a great impact on the reduction of material consumption and on the impact that the whole structure has on the environment. The results in this study might provide designers with valuable motivation and guidelines to achieve higher sustainability standards in the future.
APA, Harvard, Vancouver, ISO, and other styles
43

Mössinger, Klaus. "Innovative Duplex Filter for Hydraulic Applications." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-199519.

Full text
Abstract:
For decades, duplex filters have been put to use virtually unmodified. Technologies, handling and use of materials show enormous potential for improvement. Filter element removal/replacement is performed according to a complex process sequence. With the newly developed Duplex Filter, the market demands concerning simple filter element removal/replacement, as well as weight and pressure-loss reduction, are fully met.
APA, Harvard, Vancouver, ISO, and other styles
44

Laverty, Stephen William. "Detection of Nonstationary Noise and Improved Voice Activity Detection in an Automotive Hands-free Environment." Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-051105-110646/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Al Akhras, Hassan. "Automatic isogeometric analysis suitable trivariate models generation : Application to reduced order modeling." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI047/document.

Full text
Abstract:
This thesis presents an automatic algorithm for constructing a volumetric NURBS model from a model represented by its boundary (meshes or splines). This type of model is essential for isogeometric analysis using NURBS as shape functions. The input of the algorithm is a triangulation of the model's boundary. After two decomposition steps, the model is approximated by a polycube. A surface parameterization between the boundary of the model and that of the polycube is then established by computing a global parameterization aligned with a direction field that interpolates the principal curvature directions of the model. Finally, the volumetric parameterization is obtained from this surface parameterization. In the context of parametric studies based on geometric shape parameters, this method can be applied to model reduction techniques to obtain the same representation for objects with different geometries but the same topology.
This thesis presents an effective method to automatically construct trivariate tensor-product spline models of complicated geometry and arbitrary topology. Our method takes as input a solid model defined by its triangulated boundary. Using cuboid decomposition, an initial polycube approximating the input boundary mesh is built. This polycube serves as the parametric domain of the tensor-product spline representation required for isogeometric analysis. The polycube's nodes and arcs decompose the input model locally into quadrangular patches, and globally into hexahedral domains. Using aligned global parameterization, the nodes are re-positioned and the arcs are re-routed across the surface so as to achieve low overall patch distortion and alignment with principal curvature directions and sharp features. The optimization process is based on one of the main contributions of this thesis: a novel way to design cross fields with topological (i.e., imposed singularities) and geometrical (i.e., imposed directions) constraints by solving only sparse linear systems. Based on the optimized polycube and parameterization, compatible B-spline boundary surfaces are reconstructed. Finally, the interior volumetric parameterization is computed using Coons interpolation with the B-spline surfaces as boundary conditions. This method can be applied to reduced order modeling for parametric studies based on geometrical parameters: for models with the same topology but different geometries, it makes it possible to work with the same representation, i.e., meshes (or parameterizations) with the same topology.
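For reference, the bilinearly blended Coons patch used in the final interpolation step can be written as follows (the interior volume uses the analogous trivariate construction). This is the standard textbook formula, shown here for the bivariate case:

```latex
% Standard bilinearly blended Coons patch: boundary curves c_0, c_1
% (at u = 0, 1) and d_0, d_1 (at v = 0, 1), corner points P_ij.
S(u,v) = (1-u)\,c_0(v) + u\,c_1(v) + (1-v)\,d_0(u) + v\,d_1(u)
       - \bigl[(1-u)(1-v)\,P_{00} + u(1-v)\,P_{10}
             + (1-u)\,v\,P_{01} + u\,v\,P_{11}\bigr]
```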
APA, Harvard, Vancouver, ISO, and other styles
46

Wilkerson, Jerod W. "Closing the Defect Reduction Gap between Software Inspection and Test-Driven Development: Applying Mutation Analysis to Iterative, Test-First Programming." Diss., The University of Arizona, 2008. http://hdl.handle.net/10150/195160.

Full text
Abstract:
The main objective of this dissertation is to assist in reducing the chaotic state of the software engineering discipline by providing insights into both the effectiveness of software defect reduction methods and ways these methods can be improved. The dissertation is divided into two main parts. The first is a quasi-experiment comparing the software defect rates and initial development costs of two methods of software defect reduction: software inspection and test-driven development (TDD). Participants, consisting of computer science students at the University of Arizona, were divided into four treatment groups and were asked to complete the same programming assignment using either TDD, software inspection, both, or neither. Resulting defect counts and initial development costs were compared across groups. The study found that software inspection is more effective than TDD at reducing defects, but that it also has a higher initial cost of development. The study establishes the existence of a defect-reduction gap between software inspection and TDD and highlights the need to improve TDD because of its other benefits. The second part of the dissertation explores a method of applying mutation analysis to TDD to reduce the defect-reduction gap between the two methods and to make TDD more reliable and predictable. A new change impact analysis algorithm (CHA-AS), based on CHA, is presented and evaluated for applications of software change impact analysis where a predetermined set of program entry points is not available or not known. An estimated average-case complexity analysis indicates that the algorithm's time and space complexity is linear in the size of the program under analysis, and a simulation experiment indicates that the algorithm can capitalize on the iterative nature of TDD to produce a cost savings in mutation analysis applied to TDD projects. The algorithm should also be useful for other change impact analysis situations with undefined program entry points, such as code library and framework development. An enhanced TDD method is proposed that incorporates mutation analysis, and a set of future research directions is proposed for developing tools to support mutation-analysis-enhanced TDD and to continue improving the TDD method.
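The core idea of mutation analysis that the second part builds on can be shown in a few lines. The sketch below is a generic illustration (a single hand-rolled mutation operator and a toy test suite), not the CHA-AS algorithm or the dissertation's tooling:

```python
# Minimal sketch of the mutation-analysis idea: flip an operator in the
# code under test and check whether the test suite "kills" the mutant.
import ast

SOURCE = """
def price_with_tax(price, rate):
    return price + price * rate
"""

class SwapAddSub(ast.NodeTransformer):
    """Mutation operator: replace the first '+' with '-'."""
    def __init__(self):
        self.done = False
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add) and not self.done:
            self.done = True
            node.op = ast.Sub()
        return node

def run_tests(namespace):
    """A tiny test suite; returns True if all assertions pass."""
    try:
        assert abs(namespace["price_with_tax"](100, 0.25) - 125) < 1e-9
        return True
    except AssertionError:
        return False

# The original code should pass; the mutant should fail (i.e., be killed).
for label, tree in [("original", ast.parse(SOURCE)),
                    ("mutant", SwapAddSub().visit(ast.parse(SOURCE)))]:
    ast.fix_missing_locations(tree)
    ns = {}
    exec(compile(tree, "<string>", "exec"), ns)
    print(label, "passes tests:", run_tests(ns))
```

A test suite that kills all such mutants gives some evidence that it actually constrains the program's behaviour, which is the property the enhanced TDD method aims to make measurable.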
APA, Harvard, Vancouver, ISO, and other styles
47

Maji, Nabanita. "An Interactive Tutorial for NP-Completeness." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/52973.

Full text
Abstract:
A Theory of Algorithms course is essential to any Computer Science curriculum at both the undergraduate and graduate levels. It is also considered to be difficult material to teach or to learn. In particular, the topics of Computational Complexity Theory, reductions, and the NP-Complete class of problems are considered difficult by students. Numerous algorithm visualizations (AVs) have been developed over the years to portray the dynamic nature of known algorithms commonly taught in undergraduate classes. However, to the best of our knowledge, the instructional material available for NP-Completeness is mostly static and textual, which does little to alleviate the complexity of the topic. Our aim is to improve the pedagogy of NP-Completeness by providing intuitive, interactive, and easy-to-understand visualizations for standard NP-Complete problems, reductions, and proofs. In this thesis, we present a set of visualizations that we developed using the OpenDSA framework for certain NP-Complete problems. Our paradigm is a three-step process. We first use an AV to illustrate a particular NP-Complete problem. Then we present an exercise to provide first-hand experience with attempting to solve a problem instance. Finally, we present a visualization of a reduction as part of the proof of NP-Completeness. Our work has been delivered as a collection of modules in OpenDSA, an interactive eTextbook system developed at Virginia Tech. The tutorial has been introduced as a teaching supplement in both a senior undergraduate and a graduate class. We present an analysis of system use based on records of online interactions by students who used the tutorial. We also present results from a survey of the students.
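A concrete instance of the kind of reduction such a tutorial visualizes is the classic reduction from Independent Set to Clique via the graph complement. The brute-force checker below is only there to demonstrate on a toy graph that the reduction preserves the answer; it is not part of the thesis material:

```python
# A standard textbook reduction: G has an independent set of size k
# iff its complement has a clique of size k, so Independent Set
# reduces to Clique in polynomial time.
from itertools import combinations

def complement(n, edges):
    """Edge set of the complement of an n-vertex graph."""
    return {frozenset(e) for e in combinations(range(n), 2)} \
         - {frozenset(e) for e in edges}

def has_clique(n, edges, k):
    """Brute-force check (exponential; fine for a small example)."""
    es = {frozenset(e) for e in edges}
    return any(all(frozenset(p) in es for p in combinations(c, 2))
               for c in combinations(range(n), k))

# 4-cycle 0-1-2-3: {0, 2} is an independent set of size 2.
n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0)]
k = 2
print(has_clique(n, complement(n, edges), k))  # True: the reduction preserves the answer
```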
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
48

Schulze, Struchtrup Sarah [Verfasser]. "A concept for the rapid prediction of microbiological reduction in automatic dish cleaning processes: the Microbiological Inactivation Equivalent (MIE) unit / Sarah Schulze Struchtrup." Düren : Shaker, 2021. http://d-nb.info/1229779620/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Rahim, Fahim. "Techniques symboliques pour la réduction des automates d'états finis et application à la vérification formelle modulaire et l'optimisation de circuits séquentiels VLSI complexes." Paris 6, 1999. http://www.theses.fr/1999PA066419.

Full text
Abstract:
VLSI (Very Large Scale Integration) circuits are very commonly used in modern electronic products. Since the creation of the first integrated circuit in 1959, the number of transistors used to manufacture a chip has roughly doubled every year. As the complexity and required performance of VLSI circuits have grown exponentially, the design process has been automated using CAD (Computer-Aided Design) tools. A CAD tool is a program that the designer of an integrated circuit can use in the VLSI design process. This thesis seeks to solve the problem of automatic formal verification of a class of digital circuits, which is a key element in automating this process. We are particularly interested in so-called model-checking techniques, which make it possible to prove automatically whether a circuit behaves as intended. This thesis presents several so-called abstraction or reduction methods that make model checking feasible on large circuits. In addition, an appendix of this thesis shows how the techniques developed for model checking can also be used for the synthesis of sequential circuits, notably for power consumption estimation and optimization aimed at reducing consumption.
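As a toy, explicit-state counterpart to the symbolic reduction techniques studied in the thesis, the following sketch merges bisimulation-equivalent states of a small deterministic automaton by naive partition refinement. The automaton and function names are made up for the example:

```python
# Explicit-state illustration of the reduction idea: merge states of a
# deterministic automaton that are bisimulation-equivalent, using naive
# partition refinement (the thesis itself works with symbolic methods).
def refine(states, alphabet, delta, accepting):
    # Initial partition: accepting vs. non-accepting states.
    blocks = [b for b in (set(accepting), set(states) - set(accepting)) if b]
    changed = True
    while changed:
        changed = False
        block_of = {s: i for i, b in enumerate(blocks) for s in b}
        new_blocks = []
        for b in blocks:
            # Split a block if its states disagree on successor blocks.
            groups = {}
            for s in b:
                sig = tuple(block_of[delta[s][a]] for a in alphabet)
                groups.setdefault(sig, set()).add(s)
            new_blocks.extend(groups.values())
            changed |= len(groups) > 1
        blocks = new_blocks
    return blocks

# Automaton with two interchangeable states: q1 and q2 behave identically.
states = ["q0", "q1", "q2", "q3"]
alphabet = ["a"]
delta = {"q0": {"a": "q1"}, "q1": {"a": "q3"},
         "q2": {"a": "q3"}, "q3": {"a": "q3"}}
accepting = ["q3"]
print(refine(states, alphabet, delta, accepting))
# q1 and q2 end up in the same block, so the automaton shrinks by one state.
```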
APA, Harvard, Vancouver, ISO, and other styles
50

Johansson, Ingrid. "Post-processing for roughness reduction of additive manufactured polyamide 12 using a fully automated chemical vapor technique - The effect on micro and macrolevel." Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279316.

Full text
Abstract:
Additive manufacturing has increased in popularity in recent years, partly due to the possibility of producing complex geometries in a rapid manner. Selective laser sintering (SLS) is an additive manufacturing technique that utilizes polymer powder and a layer-by-layer approach to build up the desired geometry. The main drawbacks of this technique concern reproducibility, mechanical performance and the poor surface finish of printed parts. Surface roughness increases the risk of bacterial adhesion and biofilm formation, which is undesirable for parts to be used in the healthcare industry. This thesis investigated the possibility of reducing the surface roughness of SLS-printed polyamide 12 with the fully automated post-processing technology PostPro3D. The post-processing relies on chemical treatment for smoothening the parts' surface: PostPro3D utilizes vaporized solvent which condenses on the printed parts, causing the surface to reflow. By this, roughness in the form of unmolten particles is dissolved and surface pores are sealed. The influence of the post-processing parameters pressure, temperature, time and solvent volume was evaluated with a Design of Experiments (DoE). The roughness reduction was quantified by monitoring the arithmetic mean average roughness (Ra), the ten-point height roughness (Rz) and the average waviness (Wa) using a stylus profilometer and confocal laser scanning microscopy (CLSM). The effect of post-processing on mechanical properties was evaluated with tensile testing, and the effect on microstructure by scanning electron microscopy (SEM). A comparison was made between post-processed samples and a non-post-processed reference, as well as between samples post-processed with different degrees of aggressivity, with regard to roughness values, mechanical properties and microstructure. Results indicated that solvent volume and time had the largest effect in reducing the roughness parameters Ra and Rz, while time had the largest influence in increasing the elongation at break, tensile strength at break and toughness. The effect of post-processing on waviness and Young's modulus was less evident. SEM established that complete dissolution of powder particles was not achieved for the tested parameter ranges, but a clear improvement of the surface was observed for all post-processing conditions compared to a non-post-processed specimen. The reduction in roughness with increased solvent volume and time was thought to be due to increased condensation of solvent droplets on the SLS parts. The increase in mechanical properties was likely related to the elimination of crack initiation points at the surface. In general, the mechanical properties showed a wide spread in the results; this was concluded to be related to differences in the intrinsic properties of the printed parts, and highlights the reproducibility problems associated with SLS. An optimal roughness of Ra less than 1 µm was not obtained for the tested post-processing conditions, and further parameter optimization is required.
The possibility of manufacturing complex geometries in a rapid manner has made additive manufacturing grow in popularity. Selective laser sintering (SLS) is a type of additive manufacturing in which polymer powder is sintered together successively, layer by layer; together these layers build up the desired geometry. The main drawbacks of SLS are that the manufactured parts have limited mechanical properties, suffer from poor reproducibility, and have a poor, uneven surface quality. Surface roughness increases the risk of bacteria adhering and a biofilm forming; since the product is to be used in healthcare, it is important that biofilm formation is avoided. This thesis has investigated the possibilities of reducing the surface roughness of SLS-printed polyamide 12 by means of chemical post-processing in PostPro3D. This machine is fully automatic and achieves surface treatment by vaporizing a solvent which then condenses on the SLS-printed material. The surface of the material is dissolved, which reduces surface irregularities in the form of powder particles and seals pores on the surface. By changing the post-processing parameters (pressure, temperature, time and solvent volume) the degree of aggressiveness can be influenced. The optimal parameters for achieving a smooth surface were evaluated with a Design of Experiments (DoE). The reduction in surface roughness was measured using the arithmetic mean roughness (Ra), the ten-point height roughness (Rz) and the mean waviness (Wa), with a stylus profilometer and a confocal microscope. The effect of the post-processing on the mechanical properties was evaluated in a tensile test, while the microstructure was examined with a scanning electron microscope (SEM). Surface roughness, mechanical properties and microstructure were compared between untreated samples and surface-treated samples with varying degrees of aggressiveness. The results indicated that time and volume had the largest effect on Ra and Rz, while time had the largest positive influence on elongation, strength and toughness. The effect on stiffness (Young's modulus) and waviness (Wa) was less apparent, and no clear influence could be observed. SEM analysis showed that complete dissolution of particles on the surface does not occur for the tested treatments, but a clear improvement could be seen when comparing an untreated sample with a treated one. The increased surface smoothness for longer times and higher volumes is believed to be due to increased condensation of solvent on the surface during the post-processing. The increase in mechanical properties is probably related to the elimination of critical defects at the surface. In general, the mechanical properties showed a large spread in the results, which is believed to be due to intrinsic properties of the test specimens; this conclusion underlines the limited reproducibility of SLS printing. An optimal surface roughness is assumed to be an Ra value below 1 µm; this roughness has not been achieved with the tested post-processing parameter values, and further parameter optimization is therefore required to reach optimal post-processing.
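For illustration, the two profile parameters tracked above can be computed from a measured height trace as follows. This sketch uses one common ten-point definition of Rz, which may differ in detail from the profilometer's implementation:

```python
# Illustrative computation of the profile-roughness parameters monitored
# in the study: Ra (arithmetic mean deviation) and a simple ten-point Rz.
import numpy as np

def roughness(profile):
    """profile: measured surface heights (µm) along the stylus trace."""
    z = np.asarray(profile, dtype=float)
    z = z - z.mean()                      # deviations from the mean line
    ra = np.mean(np.abs(z))               # arithmetic mean roughness
    peaks = np.sort(z)[-5:]               # five highest peaks
    valleys = np.sort(z)[:5]              # five deepest valleys
    rz = peaks.mean() - valleys.mean()    # ten-point height roughness
    return ra, rz

z = [0.8, -1.1, 0.5, 1.9, -0.7, 0.2, -1.6, 1.0, -0.3, 0.9, -1.2, 0.4]
ra, rz = roughness(z)
print(f"Ra = {ra:.2f} µm, Rz = {rz:.2f} µm")
```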
APA, Harvard, Vancouver, ISO, and other styles