Dissertations / Theses on the topic 'Minimal subset'

Below are the top 19 dissertations / theses for research on the topic 'Minimal subset.'

1

Barrus, Michael David. "A forbidden subgraph characterization problem and a minimal-element subset of universal graph classes." Diss., Brigham Young University, 2004. http://contentdm.lib.byu.edu/ETD/image/etd374.pdf.

2

Barrus, Michael D. "A Forbidden Subgraph Characterization Problem and a Minimal-Element Subset of Universal Graph Classes." BYU ScholarsArchive, 2004. https://scholarsarchive.byu.edu/etd/125.

Abstract:
The direct sum of a finite number of graph classes H_1, ..., H_k is defined as the set of all graphs formed by taking the union of graphs from each of the H_i. The join of these graph classes is similarly defined as the set of all graphs formed by taking the join of graphs from each of the H_i. In this paper we show that if each H_i has a forbidden subgraph characterization then the direct sum and join of these H_i also have forbidden subgraph characterizations. We provide various results which in many cases allow us to exactly determine the minimal forbidden subgraphs for such characterizations. As we develop these results we are led to study the minimal graphs which are universal over a given list of graphs, or those which contain each graph in the list as an induced subgraph. As a direct application of our results we give an alternate proof of a theorem of Barrett and Loewy concerning a forbidden subgraph characterization problem.
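The induced-subgraph containment at the heart of such forbidden subgraph characterizations can be sketched with a small brute-force check. This is a minimal illustration, not code from the thesis; the example graphs and the standard fact that cographs are exactly the P4-free graphs are assumed for illustration.

```python
from itertools import combinations, permutations

def has_induced(G, H):
    """Brute-force check: does graph G contain H as an induced subgraph?
    A graph is a pair (vertex set, set of frozenset edges)."""
    gv, ge = G
    hv, he = H
    for subset in combinations(sorted(gv), len(hv)):
        for perm in permutations(subset):
            mapping = dict(zip(sorted(hv), perm))
            # Induced: every vertex pair of H must map edge-to-edge and
            # non-edge-to-non-edge.
            if all((frozenset((mapping[u], mapping[v])) in ge) == (frozenset((u, v)) in he)
                   for u, v in combinations(sorted(hv), 2)):
                return True
    return False

# P4 (path on 4 vertices) is the single minimal forbidden induced subgraph
# characterizing cographs.
P4 = ({0, 1, 2, 3}, {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]})
C4 = ({0, 1, 2, 3}, {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]})
P5 = ({0, 1, 2, 3, 4}, {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4)]})

print(has_induced(C4, P4))  # False: the 4-cycle is P4-free (a cograph)
print(has_induced(P5, P4))  # True: P5 contains an induced P4
```

Enumerating all vertex subsets is exponential, which is exactly why characterizations by a short list of minimal forbidden subgraphs are valuable.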
3

Papacchini, Fabio. "Minimal model reasoning for modal logic." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/minimal-model-reasoning-for-modal-logic(dbfeb158-f719-4640-9cc9-92abd26bd83e).html.

Abstract:
Model generation and minimal model generation are useful for tasks such as model checking, query answering and debugging of logical specifications. Due to this variety of applications, several minimality criteria and model generation methods for classical logics have been studied. Minimal model generation for modal logics, however, has not received the same attention from the research community. This thesis aims to fill this gap by investigating minimality criteria and designing minimal model generation procedures for all the sublogics of the multi-modal logic S5(m) and their extensions with universal modalities. All the procedures are minimal model sound and complete, in the sense that they generate all and only minimal models. The starting point of the investigation is the definition of a Herbrand semantics for modal logics, on which a syntactic minimality criterion is devised. The syntactic nature of the minimality criterion allows for an efficient minimal model generation procedure, but, on the other hand, the resulting minimal models can be redundant or semantically non-minimal with respect to each other. To overcome the syntactic limitations of the first minimality criterion, the thesis moves from minimal modal Herbrand models to semantic minimality criteria based on subset-simulation. At first, theoretical procedures for the generation of models minimal modulo subset-simulation are presented. These procedures are minimal model sound and complete, but they might not terminate. The minimality criterion and the procedures are then refined in such a way that termination can be ensured while preserving minimal model soundness and completeness.
4

Nicol, Janet L., Andrew Barss, and Jason E. Barker. "Minimal Interference from Possessor Phrases in the Production of Subject-Verb Agreement." FRONTIERS MEDIA SA, 2016. http://hdl.handle.net/10150/615107.

Abstract:
We explore the language production process by eliciting subject-verb agreement errors. Participants were asked to create complete sentences from sentence beginnings such as The elf's/elves' house with the tiny window/windows and The statue in the elf's/elves' gardens. These are subject noun phrases containing a head noun and controller of agreement (statue), and two nonheads, a "local noun" (window(s)/garden(s)), and a possessor noun (elf's/elves'). Past research has shown that a plural nonhead noun (an "attractor") within a subject noun phrase triggers the production of verb agreement errors, and further, that the nearer the attractor to the head noun, the greater the interference. This effect can be interpreted in terms of relative hierarchical distance from the head noun, or via a processing window account, which claims that during production, there is a window in which the head and modifying material may be co-active, and an attractor must be active at the same time as the head to give rise to errors. Using possessors attached at different heights within the same window, we are able to empirically distinguish these accounts. Possessors also allow us to explore two additional issues. First, case marking of local nouns has been shown to reduce agreement errors in languages with "rich" inflectional systems, and we explore whether English speakers attend to case. Second, formal syntactic analyses differ regarding the structural position of the possessive marker, and we distinguish them empirically with the relative magnitude of errors produced by possessors and local nouns. Our results show that, across the board, plural possessors are significantly less disruptive to the agreement process than plural local nouns. Proximity to the head noun matters: a possessor directly modifying the head noun induced a significant number of errors, but a possessor within a modifying prepositional phrase did not, though the local noun did.
These findings suggest that proximity to a head noun is independent of a "processing window" effect. They also support a noun phrase-internal, case-like analysis of the structural position of the possessive ending and show that even speakers of inflectionally impoverished languages like English are sensitive to morphophonological case-like marking.
5

Bekkouche, Mohammed. "Combinaison des techniques de Bounded Model Checking et de programmation par contraintes pour l'aide à la localisation d'erreurs : exploration des capacités des CSP pour la localisation d'erreurs." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4096/document.

Abstract:
A model checker can produce a counterexample trace for an erroneous program, which is often difficult to exploit to locate errors in source code. In my thesis, we proposed an error localization algorithm based on counterexamples, named LocFaults, combining Bounded Model Checking (BMC) with constraint satisfaction problems (CSP). This algorithm analyzes the paths of the CFG (Control Flow Graph) of the erroneous program to compute the subsets of suspicious instructions whose correction would fix the program. Indeed, we generate a system of constraints for the paths of the control flow graph on which at most k conditional statements may be wrong. Then we compute the MCSs (Minimal Correction Sets) of bounded size on each of these paths. Removing one of these sets of constraints yields a maximal satisfiable subset, in other words, a maximal subset of constraints satisfying the postcondition. To compute the MCSs, we extend the generic algorithm proposed by Liffiton and Sakallah in order to deal with programs with numerical instructions more efficiently. This approach has been evaluated experimentally on a set of academic and realistic programs.
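The notion of a Minimal Correction Set can be illustrated with a naive enumeration over a toy constraint system: an MCS is a minimal subset of constraints whose removal makes the rest satisfiable. This brute-force sketch over an assumed toy system is for illustration only; it is not the extended Liffiton-Sakallah algorithm the thesis implements.

```python
from itertools import combinations, product

def satisfiable(constraints, domain, nvars):
    """Exhaustively check whether some assignment satisfies every constraint."""
    return any(all(c(a) for c in constraints) for a in product(domain, repeat=nvars))

def minimal_correction_sets(constraints, domain, nvars, max_size):
    """Enumerate MCSs up to max_size, smallest first; supersets of an
    already-found MCS are skipped, which guarantees minimality."""
    mcss = []
    for k in range(max_size + 1):
        for idxs in combinations(range(len(constraints)), k):
            if any(set(m) <= set(idxs) for m in mcss):
                continue  # contains a known MCS -> not minimal
            rest = [c for i, c in enumerate(constraints) if i not in idxs]
            if satisfiable(rest, domain, nvars):
                mcss.append(idxs)
    return mcss

# Toy unsatisfiable system over one variable x in {0..5}:
#   c0: x > 3,  c1: x < 2,  c2: x == 1
cs = [lambda a: a[0] > 3, lambda a: a[0] < 2, lambda a: a[0] == 1]
print(minimal_correction_sets(cs, range(6), 1, 2))  # [(0,), (1, 2)]
```

Dropping c0 leaves {x < 2, x == 1} satisfiable by x = 1; otherwise both c1 and c2 must go, mirroring how LocFaults points at small sets of suspicious statements.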
6

Jonsson, Robin. "Optimal Linear Combinations of Portfolios Subject to Estimation Risk." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-28524.

Abstract:
The combination of two or more portfolio rules is theoretically convex in return-risk space, which provides for a new class of portfolio rules that gives purpose to the Mean-Variance framework out-of-sample. The author investigates the performance loss from estimation risk between the unconstrained Mean-Variance portfolio and the out-of-sample Global Minimum Variance portfolio. A new two-fund rule is developed within a specific class of combined rules, between the equally weighted portfolio and a mean-variance portfolio whose covariance matrix is estimated by linear shrinkage. The study shows that this rule performs well out-of-sample when covariance estimation error and bias are balanced, and at least as well as its peer group in this class of combined rules.
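A combined rule of this kind can be sketched numerically for two assets: shrink the covariance estimate toward a scaled identity, form the global minimum variance weights, and mix them with the equally weighted portfolio. The shrinkage target, intensity, and 50/50 combination weight below are assumptions for illustration, not the thesis's actual estimator or calibration.

```python
def gmv_weights_2asset(s11, s22, s12):
    """Global minimum variance weights for two assets: w proportional to
    Sigma^{-1} 1, normalised to sum to one."""
    a, b = s22 - s12, s11 - s12
    total = a + b
    return a / total, b / total

def shrink(cov, delta):
    """Linear shrinkage of a covariance matrix toward a scaled identity
    (a simplified stand-in for Ledoit-Wolf style estimators)."""
    n = len(cov)
    mu = sum(cov[i][i] for i in range(n)) / n  # average variance
    return [[(1 - delta) * cov[i][j] + delta * (mu if i == j else 0.0)
             for j in range(n)] for i in range(n)]

def combine(w_a, w_b, alpha):
    """Two-fund rule: convex combination of two portfolio rules."""
    return tuple(alpha * a + (1 - alpha) * b for a, b in zip(w_a, w_b))

cov = [[0.04, 0.01], [0.01, 0.09]]           # sample covariance (assumed)
scov = shrink(cov, 0.3)                      # shrinkage intensity 0.3 (assumed)
w_gmv = gmv_weights_2asset(scov[0][0], scov[1][1], scov[0][1])
w_eq = (0.5, 0.5)
print(combine(w_eq, w_gmv, 0.5))             # weights sum to 1
```

The combined weights sit between naive diversification and the estimated GMV portfolio, which is how such rules trade estimation risk against optimality.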
7

Tomaras, Panagiotis J. "Decomposition of general queueing network models : an investigation into the implementation of hierarchical decomposition schemes of general closed queueing network models using the principle of minimum relative entropy subject to fully decomposable constraints." Thesis, University of Bradford, 1989. http://hdl.handle.net/10454/4212.

Abstract:
Decomposition methods based on the hierarchical partitioning of the state space of queueing network models offer powerful evaluation tools for the performance analysis of computer systems and communication networks. As conventionally implemented, these methods capture the exact solution of separable queueing network models, but their credibility differs when applied to general queueing networks. This thesis provides a universal information theoretic framework for the implementation of hierarchical decomposition schemes, based on the principle of minimum relative entropy given fully decomposable subset and aggregate utilization, mean queue length and flow-balance constraints. This principle is used, in conjunction with asymptotic connections to infinite capacity queues, to derive new closed form approximations for the conditional and marginal state probabilities of general queueing network models. The minimum relative entropy solutions are implemented iteratively at each decomposition level involving the generalized exponential (GE) distributional model in approximating the general service and asymptotic flow processes in the network. It is shown that the minimum relative entropy joint state probability, subject to mean queue length and flow-balance constraints, is identical to the exact product-form solution obtained as if the network were separable. An investigation into the effect of different couplings of the resource units on the relative accuracy of the approximation is carried out, based on extensive experimentation. The credibility of the method is demonstrated with some illustrative examples involving first-come-first-served general queueing networks with single and multiple servers, and favourable comparisons against exact solutions and other approximations are made.
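The minimum relative entropy principle with a mean constraint can be sketched for a discrete queue-length distribution: the solution tilts the prior exponentially, p_i proportional to q_i * exp(-lam * i), with the multiplier lam chosen to hit the target mean. The uniform prior, target mean, and bisection solver below are assumed illustrations, not the GE-based procedure of the thesis.

```python
import math

def min_relative_entropy(prior, values, target_mean, lo=-50.0, hi=50.0):
    """Distribution minimising relative entropy to `prior` subject to a
    mean constraint; the Lagrange multiplier is found by bisection
    (the constrained mean is monotone decreasing in lam)."""
    def mean_for(lam):
        w = [q * math.exp(-lam * v) for q, v in zip(prior, values)]
        z = sum(w)
        return sum(wi * v for wi, v in zip(w, values)) / z
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [q * math.exp(-lam * v) for q, v in zip(prior, values)]
    z = sum(w)
    return [wi / z for wi in w]

# Uniform prior over queue lengths 0..5, constrain mean queue length to 1.5.
p = min_relative_entropy([1 / 6] * 6, list(range(6)), 1.5)
print([round(x, 4) for x in p])
```

With a mean below the prior's, the multiplier is positive and the solution decays geometrically in queue length, which is the flavour of closed form such entropy methods produce.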
8

Tran, Quoc Huy. "Robust parameter estimation in computer vision: geometric fitting and deformable registration." Thesis, 2014. http://hdl.handle.net/2440/86270.

Abstract:
Parameter estimation plays an important role in computer vision. Many computer vision problems can be reduced to estimating the parameters of a mathematical model of interest from the observed data. Parameter estimation in computer vision is challenging, since vision data unavoidably have small-scale measurement noise and large-scale measurement errors (outliers) due to imperfect data acquisition and preprocessing. Traditional parameter estimation methods developed in the statistics literature mainly deal with noise and are very sensitive to outliers. Robust parameter estimation techniques are thus crucial for effectively removing outliers and accurately estimating the model parameters with vision data. The research conducted in this thesis focuses on single structure parameter estimation and makes a direct contribution to two specific branches under that topic: geometric fitting and deformable registration. In geometric fitting problems, a geometric model is used to represent the information of interest, such as a homography matrix in image stitching, or a fundamental matrix in three-dimensional reconstruction. Many robust techniques for geometric fitting involve sampling and testing a number of model hypotheses, where each hypothesis consists of a minimal subset of data for yielding a model estimate. It is commonly known that, due to the noise added to the true data (inliers), drawing a single all-inlier minimal subset is not sufficient to guarantee a good model estimate that fits the data well; the inliers therein should also have a large spatial extent. This thesis investigates the theoretical reasoning behind this long-standing principle, and shows a clear correlation between the span of data points used for estimation and the quality of model estimate.
Based on this finding, the thesis explains why naive distance-based sampling fails as a strategy to maximise the span of all-inlier minimal subsets produced, and develops a novel sampling algorithm which, unlike previous approaches, consciously targets all-inlier minimal subsets with large span for robust geometric fitting. The second major contribution of this thesis relates to another computer vision problem which also requires the knowledge of robust parameter estimation: deformable registration. The goal of deformable registration is to align regions in two or more images corresponding to a common object that can deform nonrigidly such as a bending piece of paper or a waving flag. The information of interest is the nonlinear transformation that maps points from one image to another, and is represented by a deformable model, for example, a thin plate spline warp. Most of the previous approaches to outlier rejection in deformable registration rely on optimising fully deformable models in the presence of outliers due to the assumption of the highly nonlinear correspondence manifold which contains the inliers. This thesis makes an interesting observation that, for many realistic physical deformations, the scale of errors of the outliers usually dwarfs the nonlinear effects of the correspondence manifold on which the inliers lie. The finding suggests that standard robust techniques for geometric fitting are applicable to model the approximately linear correspondence manifold for outlier rejection. Moreover, the thesis develops two novel outlier rejection methods for deformable registration, which are based entirely on fitting simple linear models and shown to be considerably faster but at least as accurate as previous approaches.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2014
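The hypothesise-and-verify scheme built on minimal subsets can be sketched for 2-D line fitting: repeatedly sample a minimal subset (two points), fit a line, and keep the model with the largest consensus set. This is a plain RANSAC-style illustration with assumed data and threshold, not the span-maximising sampler the thesis develops.

```python
import random

def fit_line(p, q):
    """Line y = m*x + c through a minimal subset of two points."""
    (x1, y1), (x2, y2) = p, q
    m = (y2 - y1) / (x2 - x1)
    return m, y1 - m * x1

def ransac_line(points, n_iters=200, thresh=0.2, seed=0):
    """Sample minimal subsets and keep the hypothesis with the
    largest consensus (inlier) set."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(n_iters):
        p, q = rng.sample(points, 2)
        if p[0] == q[0]:
            continue  # degenerate minimal subset (vertical / repeated x)
        m, c = fit_line(p, q)
        inliers = [pt for pt in points if abs(pt[1] - (m * pt[0] + c)) < thresh]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (m, c), inliers
    return best, best_inliers

# Ten exact inliers on y = 2x + 1 plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40.0), (7, -25.0)]
(m, c), inliers = ransac_line(pts)
print(m, c, len(inliers))  # the consensus model recovers y = 2x + 1
```

Any all-inlier minimal subset here recovers the true line; with noisy inliers, the thesis's point is that subsets with a wide spatial span yield markedly better hypotheses than nearby pairs.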
9

Guestrin, Elias Daniel. "Remote, Non-contact Gaze Estimation with Minimal Subject Cooperation." Thesis, 2010. http://hdl.handle.net/1807/24349.

Abstract:
This thesis presents a novel system that estimates the point-of-gaze (where a person is looking) remotely while allowing for free head movements and minimizing personal calibration requirements. The point-of-gaze is estimated from the pupil and corneal reflections (virtual images of infrared light sources that are formed by reflection on the front corneal surface, which acts as a convex mirror) extracted from eye images captured by video cameras. Based on the laws of geometrical optics, a detailed general mathematical model for point-of-gaze estimation using the pupil and corneal reflections is developed. Using this model, the full range of possible system configurations (from one camera and one light source to multiple cameras and light sources) is analyzed. This analysis shows that two cameras and two light sources form the simplest system configuration that can be used to reconstruct the optic axis of the eye in 3-D space, and therefore measure eye movements, without the need for personal calibration. To estimate the point-of-gaze, a simple single-point personal calibration procedure is needed. The performance of the point-of-gaze estimation depends on the geometrical arrangement of the cameras and light sources and the method used to reconstruct the optic axis of the eye. Using a comprehensive simulation framework developed from the mathematical model, the performance of several gaze estimation methods of varied complexity is investigated for different geometrical system setups in the presence of noise in the extracted eye features, deviation of the corneal shape from the ideal spherical shape, and errors in system parameters. The results of this investigation indicate the method(s) and geometrical setup(s) that are optimal for different sets of conditions, thereby providing guidelines for system implementation.
Experimental results with adults, obtained with a system that follows those guidelines, exhibit RMS point-of-gaze estimation errors of 0.4-0.6º of visual angle (comparable to the best commercially available systems, which require multiple-point personal calibration procedures). Preliminary results with infants demonstrate the ability of the proposed system to record infants' visual scanning patterns, enabling applications that are very difficult or impossible to carry out with previously existing technologies (e.g., study of infants' visual and oculomotor systems).
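A basic building block of such two-camera reconstruction is triangulating a 3-D point from two rays (e.g., locating an eye feature seen by both cameras). The helper below is the standard closest-point-between-two-rays computation, assumed here purely as an illustration rather than taken from the thesis.

```python
def closest_point_between_rays(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two 3-D lines given by
    origin and direction; with noiseless rays this is their intersection."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def at(o, d, t): return tuple(x + t * y for x, y in zip(o, d))
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    den = a * c - b * b           # zero only for parallel rays
    t1 = (b * e - c * d) / den
    t2 = (a * e - b * d) / den
    p1, p2 = at(o1, d1, t1), at(o2, d2, t2)
    return tuple((x + y) / 2 for x, y in zip(p1, p2))

# Two rays that (noiselessly) intersect at (1, 1, 1):
print(closest_point_between_rays((0, 0, 0), (1, 1, 1), (2, 0, 1), (-1, 1, 0)))
# -> (1.0, 1.0, 1.0)
```

With noisy image features the two rays are skew, and the midpoint of the shortest segment is a natural least-squares-style estimate of the 3-D feature position.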
10

Ondreka, David. "Construction of minimal gauge invariant subsets of Feynman diagrams with loops in gauge theories." Phd thesis, 2005. http://tuprints.ulb.tu-darmstadt.de/569/1/diss_ondreka.pdf.

Abstract:
In this work, we consider Feynman diagrams with loops in renormalizable gauge theories with and without spontaneous symmetry breaking. We demonstrate that the set of Feynman diagrams with a fixed number of loops, contributing to the expansion of a connected Green's function in a fixed order of perturbation theory, can be partitioned into minimal gauge invariant subsets by means of a set of graphical manipulations of Feynman diagrams, called gauge flips. To this end, we decompose the Slavnov-Taylor identities for the expansion of the Green's function in such a way that these identities can be defined for subsets of the set of all Feynman diagrams. We then prove, using diagrammatical methods, that the subsets constructed by means of gauge flips really constitute minimal gauge invariant subsets. Thereafter, we employ gauge flips in a classification of the minimal gauge invariant subsets of Feynman diagrams with loops in the Standard Model. We discuss in detail an explicit example, comparing it to the results of a computer program which has been developed in the context of the present work.
11

Ondreka, David [Verfasser]. "Construction of minimal gauge invariant subsets of Feynman diagrams with loops in gauge theories / von David Ondreka." 2005. http://d-nb.info/975446266/34.

12

Hsu, Shao-Jui, and 許劭睿. "Minimum weight topology optimization subject to displacement or frequency constraints." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/50590576612602386088.

Abstract:
Master's thesis
National Chung Hsing University
Department of Mechanical Engineering
Academic year 98 (2009-2010)
Since Bendsøe and Kikuchi [1] published the homogenization method for solving structural topology optimization problems in 1988, more and more researchers have used this method to generate initial shapes of structures. In this thesis, the objective function is defined as minimum weight subject to two types of constraints: one on displacement and the other on natural frequency. The normalized density of each finite element is adopted as the design variable. Several formulas representing the relationship between Young's modulus and the normalized density are used in the optimization process, and the results are compared. A MATLAB program using an SQP optimizer solves the optimization problems, while MSC/NASTRAN and MSC/PATRAN are used for pre- and post-processing. Because elements of intermediate (uncertain) density can appear, higher-order elements or a penalty function are employed to make the structure clearer and more recognizable. Compared with using minimum compliance as the objective function, the biggest advantage of using minimum weight is that there is no need to assign an amount of mass in the design space. By choosing an appropriate α in the formula relating Young's modulus to the design variables, weight savings are found in the cantilever plate case.
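The density-to-modulus formulas referred to above can be illustrated with the widely used SIMP power law; the exponent and lower modulus bound below are conventional assumptions, not necessarily the α-parameterised formula compared in the thesis.

```python
def simp_modulus(rho, e0=1.0, p=3.0, e_min=1e-9):
    """SIMP-style interpolation from normalized element density to Young's
    modulus: E(rho) = E_min + rho**p * (E0 - E_min). The small E_min keeps
    the stiffness matrix nonsingular for void (rho = 0) elements."""
    return e_min + rho ** p * (e0 - e_min)

# Penalisation (p > 1) makes intermediate densities structurally
# inefficient, driving the optimum toward clear 0/1 (void/solid) layouts.
for rho in (0.0, 0.25, 0.5, 1.0):
    print(rho, simp_modulus(rho))
```

At rho = 0.5 the element carries only one eighth of the full stiffness while still contributing half the weight, which is precisely what pushes the optimizer away from grey, hard-to-interpret regions.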
13

Ou, Beiyan. "Minimax designs for comparing treatment means for field experiments." Thesis, 2006. http://hdl.handle.net/1828/2068.

Abstract:
This thesis studies the linear model, estimators of the treatment means, and optimality criteria for designs and analysis of spatially arranged experiments. Four types of commonly used spatial correlation structures are discussed, and a neighbourhood of covariance matrices is investigated. Various properties of the neighbourhood are explored. When the covariance matrix of the error process is unknown but belongs to a neighbourhood of a covariance matrix, a modified generalized least squares estimator (GLSE) is proposed. This estimator seems more efficient than the ordinary least squares estimator in many practical applications. We also propose a criterion to find minimax designs that are efficient for a neighbourhood of correlations. When the number of plots is small, minimax designs can be computed exactly. When the number of plots is large, a simulated annealing algorithm is applied to find minimax or near-minimax designs. Minimax designs for the least squares and generalized least squares estimators are compared in detail. In general, we recommend using the GLSE and the minimax design based on it.
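The simulated annealing search mentioned above can be sketched generically: propose a neighbouring design, always accept improvements, and accept worsenings with a probability that shrinks as the temperature cools. The cooling schedule, swap neighbourhood, and toy adjacency criterion below are assumptions standing in for the thesis's minimax criterion.

```python
import math, random

def simulated_annealing(init, neighbour, cost, n_steps=5000, t0=1.0, seed=0):
    """Generic simulated annealing skeleton for discrete design search."""
    rng = random.Random(seed)
    x, cx = init, cost(init)
    best, cbest = x, cx
    for step in range(1, n_steps + 1):
        t = t0 / step                      # cooling schedule (assumed)
        y = neighbour(x, rng)
        cy = cost(y)
        # Metropolis rule: accept improvements, sometimes accept worsenings.
        if cy < cx or rng.random() < math.exp(-(cy - cx) / t):
            x, cx = y, cy
            if cx < cbest:
                best, cbest = x, cx
    return best, cbest

# Toy criterion: arrange plot treatments so spatial neighbours differ
# (a simple stand-in for a spatial-correlation design criterion).
def cost(arr):
    return sum(a == b for a, b in zip(arr, arr[1:]))

def neighbour(arr, rng):
    i, j = rng.sample(range(len(arr)), 2)
    y = list(arr)
    y[i], y[j] = y[j], y[i]
    return y

init = [1, 1, 1, 2, 2, 2, 3, 3, 3]
best, c = simulated_annealing(init, neighbour, cost)
print(best, c)
```

The same skeleton applies to minimax design search by swapping in the design criterion as `cost` and treatment permutations within the field layout as the neighbourhood.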
14

Hong, Jhen-Zong, and 洪振宗. "A study of the optimal used period and number of minimal repairs of repairable product subject to shocks." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/48916911368547442109.

Abstract:
Master's thesis
Huafan University
Master's Program, Department of Industrial Management
Academic year 91 (2002-2003)
We consider two-phase warranty models for repairable products, with the interval [0, W) as the first phase and [W, W+T) as the second phase. The products have two types of failures: type I failures (minor failures) and type II failures (catastrophic failures), and the two failure types are number dependent. In our model, type I failures are removed by minimal repairs in both the first and the second phases. Type II failures are removed by replacements in the first phase; if a type II failure takes place in the second phase, the life of the product is assumed to end. A new product is purchased at time W+T, upon a type II failure, or upon the nth type I failure, whichever occurs first. When a type II failure occurs, a spare must be ordered; its delivery involves a lead time and a shortage cost for the consumer. In this thesis, we consider warranty and maintenance models from the consumer's perspective. Our objective is to obtain the optimal T* and n* when W is fixed, and a numerical example is provided.
15

Guenther, Phatchanok. "Efficacy of disinfectants against multidrug-resistant Enterobacter cloacae strains isolated from humans in a clinical setting." 2020. https://ul.qucosa.de/id/qucosa%3A73091.

Abstract:
Introduction: Enterobacter (E.) cloacae subsp. cloacae are important human pathogens, particularly in hospitalized patients. They tend to contaminate various medical devices and nosocomial outbreaks have been reported to be associated with the colonization of surgical equipment. Therefore, it is critical to determine the efficacy and effectiveness of disinfectants against this bacterial species. Objectives: The current study was undertaken to prove whether single active ingredients (i.e. peracetic acid, ethanol, benzalkonium chloride, and sodium hypochlorite) of widely used commercial disinfectants provide proper efficacy against multidrug-resistant human isolates of E. cloacae. Material and Methods: Six multidrug-resistant E. cloacae isolates obtained from patients in a clinical setting were tested and compared to the E. cloacae type strain. The studies were performed in vitro using peracetic acid, ethanol, benzalkonium chloride and sodium hypochlorite following the guidelines specified by the Disinfectants Commission within the Association of Applied Hygiene. Tests included determination of minimum inhibitory concentrations, bactericidal values by qualitative and quantitative suspension tests, and so-called germ carrier tests. The influence of exposure time and organic load on bacteriostatic and bactericidal concentrations was evaluated for each disinfectant using the two-tailed Mann-Whitney U-test. Results: Study results showed that multidrug-resistant E. cloacae strains were equally susceptible to disinfectants as the type strain. Organic matter highly interfered with sodium hypochlorite thereby decreasing its efficacy whereas peracetic acid and ethanol were not influenced by organic soiling. Contact time had only a minor effect on bactericidal values. This was in contrast to benzalkonium chloride where organic soiling and contact time played an important role. 
On the whole, minimum inhibitory concentrations and bactericidal concentrations were lower than the in-use concentrations of commercial products. Drying on smooth surfaces in the carrier tests affected the survival of one E. cloacae strain. The results also showed that the efficacious values determined by the different tests can differ markedly. Results were difficult to compare with other studies because an international practical standard for testing disinfectant efficacy against multidrug-resistant bacteria is lacking. Conclusion: Peracetic acid, ethanol, benzalkonium chloride and sodium hypochlorite are suitable for disinfecting multidrug-resistant E. cloacae, but the effectiveness of sodium hypochlorite and benzalkonium chloride is strongly influenced by organic matter. This underlines the importance of proper cleaning before disinfection. When this is done, the tested disinfectants proved to be as effective against multidrug-resistant E. cloacae as against the type strain.
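The statistical comparison named above, the two-tailed Mann-Whitney U-test, can be sketched in a few lines. The implementation below is a generic illustration using the normal approximation (without tie correction), not the author's code, and the sample log-reduction values are hypothetical:

```python
from math import erf, sqrt

def mann_whitney_u(xs, ys):
    """U statistic: count of pairs (x, y) with x > y, counting ties as 0.5."""
    return sum((x > y) + 0.5 * (x == y) for x in xs for y in ys)

def two_tailed_p(xs, ys):
    """Two-tailed p-value via the normal approximation (no tie correction)."""
    n1, n2 = len(xs), len(ys)
    u = mann_whitney_u(xs, ys)
    mu = n1 * n2 / 2                              # mean of U under H0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)    # std of U under H0
    z = (u - mu) / sigma
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical log-reduction values with and without organic load
clean  = [5.1, 5.3, 4.9, 5.2, 5.0]
soiled = [3.2, 3.5, 3.1, 3.4, 3.0]
print(two_tailed_p(clean, soiled))  # small p: organic load matters
```

For small samples such as these, an exact test (as implemented in standard statistics packages) would be preferred over the normal approximation.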
APA, Harvard, Vancouver, ISO, and other styles
16

Guo, Tzung-Rung, and 郭宗榮. "Applying Pulse Shaping Techniques for PAPR Reduction in OFDM Systems Subject to the Constraint of Minimum Error Probability." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/13248159242138549877.

Full text
Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Institute of Computer and Communication Engineering
93 (ROC academic year, i.e. 2004)
An OFDM signal consists of a number of independently modulated subcarriers, which can give rise to a large peak-to-average power ratio (PAPR). In this thesis, we focus on techniques that reduce the PAPR while keeping the system error probability as low as possible. We introduce the pulse shaping technique and show the principles behind its PAPR reduction. The pulse shaping approach can be implemented with a discrete pulse shaping matrix, which makes it well suited to digital implementation. The system error probability reaches its minimum when the shaping matrix satisfies the condition we propose. Our simulation results show that the pulse shaping matrix generated from the square root of a raised cosine is an optimal solution.
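The PAPR quantity discussed in this abstract is easy to compute numerically. The sketch below is an independent illustration, not taken from the thesis: it builds one OFDM symbol from random QPSK subcarriers via an IFFT and measures its PAPR in dB (the subcarrier count and constellation are assumptions for the example).

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

N = 64  # number of subcarriers (assumed for illustration)
# Random unit-power QPSK symbols on each subcarrier
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N) / np.sqrt(2)
# OFDM time-domain signal: IFFT of the subcarrier symbols
x = np.fft.ifft(symbols) * np.sqrt(N)  # scaling keeps unit average power
print(f"PAPR = {papr_db(x):.2f} dB")
```

Independently modulated subcarriers can add constructively at some sample, which is why the measured PAPR is well above 0 dB; pulse shaping aims to tame exactly this peak-to-average ratio.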
APA, Harvard, Vancouver, ISO, and other styles
17

Yan, Jie. "Designs of orthogonal filter banks and orthogonal cosine-modulated filter banks." Thesis, 2010. http://hdl.handle.net/1828/2647.

Full text
Abstract:
This thesis investigates several design problems concerning two-channel conjugate quadrature (CQ) filter banks and orthogonal wavelets, as well as orthogonal cosine-modulated (OCM) filter banks. It is well known that the optimal design of CQ filters and wavelets, and of the prototype filters (PFs) of OCM filter banks, in the least squares (LS) or minimax sense is a nonconvex problem, and to date only local solutions can be claimed. In this thesis, we first improve several direct design techniques for local design problems in terms of convergence and solution accuracy. Building on recent progress in global polynomial optimization and on these improved local design methods, we then develop several design strategies aimed at global solutions for LS CQ filter banks, minimax CQ filter banks, and OCM filter banks. In brief, the proposed strategies are based on observations made among the globally optimal impulse responses of low-order filter banks; they are essentially order-recursive algorithms in filter length, combined with techniques for identifying a desirable initial point in each round of iteration. This main idea is applied to three design scenarios: LS design of orthogonal filter banks and wavelets, minimax design of orthogonal filter banks and wavelets, and design of orthogonal cosine-modulated filter banks. Simulation studies evaluate and compare the performance of the proposed methods against several well-established algorithms in the literature.
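The orthogonality constraints that make these designs nonconvex can be checked numerically. The sketch below is an independent illustration, not from the thesis: it takes the classic Daubechies db2 lowpass coefficients (a well-known CQ filter), forms the conjugate-quadrature highpass g[n] = (−1)^n h[N−1−n], and verifies the double-shift orthonormality conditions.

```python
import numpy as np

# Daubechies db2 lowpass coefficients: a classic CQ filter
s3 = np.sqrt(3)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))

# Conjugate-quadrature highpass: g[n] = (-1)^n * h[N-1-n]
N = len(h)
g = np.array([(-1) ** n * h[N - 1 - n] for n in range(N)])

# Orthonormality conditions: sum_n h[n] h[n+2k] = delta[k],
# and the two channels are mutually orthogonal.
print(np.dot(h, h))            # ≈ 1.0  (unit norm)
print(np.dot(h[:-2], h[2:]))   # ≈ 0.0  (double-shift orthogonality)
print(np.dot(h, g))            # ≈ 0.0  (cross-filter orthogonality)
```

Imposing these quadratic equality constraints on the impulse response is exactly what renders the LS and minimax design problems nonconvex, motivating the global-optimization strategies the thesis develops.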
APA, Harvard, Vancouver, ISO, and other styles
18

Williams, Aaron Michael. "Shift gray codes." Thesis, 2009. http://hdl.handle.net/1828/1966.

Full text
Abstract:
Combinatorial objects can be represented by strings, such as 21534 for the permutation (1 2)(3 5 4), or 110100 for the binary tree corresponding to the balanced parentheses (()()). Given a string s = s1 s2 ⋯ sn, the right-shift operation shift(s, i, j) replaces the substring si si+1 ⋯ sj by si+1 ⋯ sj si. In other words, si is right-shifted into position j by applying the permutation (j j−1 ⋯ i) to the indices of s. Right-shifts include prefix-shifts (i = 1) and adjacent-transpositions (j = i+1). A fixed-content language is a set of strings that contain the same multiset of symbols. Given a fixed-content language, a shift Gray code is a list of its strings in which consecutive strings differ by a shift. This thesis asks whether shift Gray codes exist for a variety of combinatorial objects, and this abstract question leads to a number of practical answers. The first prefix-shift Gray code for multiset permutations is discovered, and it provides the first algorithm for generating multiset permutations in O(1) time while using O(1) additional variables. Applications of these results include more efficient exhaustive solutions to stacker-crane problems, which are natural NP-complete variants of the traveling salesman problem. This thesis also produces the fastest algorithm for generating balanced parentheses in an array, and the first minimal-change order for fixed-content necklaces and Lyndon words. These results are consequences of the following theorem: every bubble language has a right-shift Gray code. Bubble languages are fixed-content languages that are closed under certain adjacent-transpositions. These languages generalize classic combinatorial objects: k-ary trees, ordered trees with fixed branching sequences, unit interval graphs, restricted Schröder and Motzkin paths, linear extensions of B-posets, and their unions, intersections, and quotients. Each Gray code is circular and is obtained from a new variation of lexicographic order known as cool-lex order.
Gray codes using only shift(s, 1, n) and shift(s, 1, n−1) are also found for multiset permutations. A universal cycle that omits the last (redundant) symbol from each permutation is obtained by recording the first symbol of each permutation in this Gray code. As a special case, these shorthand universal cycles provide a new fixed-density analogue to de Bruijn cycles, and the first universal cycle for the "middle levels" (binary strings of length 2k + 1 with sum k or k + 1).
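The right-shift operation defined in this abstract translates directly into code. The sketch below is an illustration of the operation itself (with 1-indexed i and j, as in the text), not the thesis's generation algorithms:

```python
def shift(s, i, j):
    """Right-shift s_i into position j (1-indexed, as in the abstract):
    replaces the substring s_i s_{i+1} .. s_j by s_{i+1} .. s_j s_i."""
    i, j = i - 1, j - 1  # convert to 0-indexed positions
    return s[:i] + s[i + 1:j + 1] + s[i] + s[j + 1:]

# A prefix-shift (i = 1) and an adjacent-transposition (j = i + 1)
print(shift("21534", 1, 3))  # '15234'
print(shift("21534", 2, 3))  # '25134'
```

A shift Gray code for a fixed-content language is then simply an ordering of its strings in which each string is obtained from its predecessor by one such call.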
APA, Harvard, Vancouver, ISO, and other styles
19

Ferreira, Helena Prata Garrido. "A justiciabilidade dos direitos económicos e sociais : o espaço de capacidade e funcionamentos como parâmetro decisório." Doctoral thesis, 2020. http://hdl.handle.net/10400.14/36368.

Full text
Abstract:
The present work analyzes the possibilities of realizing economic and social rights through the courts, based on an examination of the type of norms/guidelines that can be extracted from the constitutionally enshrined model for those rights, seeking to answer the question of what kinds of obligations and rights can be drawn from the constitutional norms enshrining economic and social rights. Considering the consequences that the judicial enforcement of these rights has at the level of public policy, our approach assesses to what extent it is the courts' function to rule on the ways those rights are realized, whether or not rights to benefits can be extracted directly from the respective norms and their implementation coercively determined, and, if so, what criteria may or should guide the judge in the corresponding decision-making process. Our analysis starts from a critical reflection on the contemporary debate over the various conceptions of liberty, considering the type of justice and freedom that is desired and believed to be reasonable in a democratic society governed by the rule of law, and the respective consequences for the system of protection of economic and social rights.
Our approach also involves (i) a reading of economic and social rights within the value structure of social choice, starting from a reformulation of the traditional view of welfare economics developed by Amartya Sen, and its rereading from a normative perspective that seeks to introduce value judgments into the analysis of well-being and to evaluate inclusive freedom as that which can only correspond to the recognition and exercise by individuals of minimum rights; and (ii) a proposal for constructing the essential content of those rights by inserting the capabilities approach into the treatment of their possibilities of realization, interpreting it as the space of rights and prerogatives that determines the set of opportunities for realizing the conditions of freedom, equality and dignity of each individual. Finally, once the possibilities of judicial realization are established, our investigation culminates with answers to the question of how far the capability approach can serve as a justifiable standard for economic and social rights, what its possibilities and limits are, and what considerations may assist the judge in the decision-making process.
The aim of this essay is to address the possibilities for judicial protection of economic and social rights by inquiring what type of norms/guidelines can be extracted from the constitutionally enshrined model for those rights, and seeking to answer the question of what kinds of obligations and rights can be drawn from that constitutional design. Considering the consequences that the judicial enforcement of these rights has at the level of public policy, our approach seeks to assess the extent to which it is the courts' role to accommodate concerns and claims arising from those rights, whether they can directly extract rights to benefits from the respective norms and coercively determine their implementation, and what criteria can or should guide the judge in the corresponding decision-making process. Our approach starts with a critical analysis of the contemporary debate over the different conceptions of liberty, seeking to reflect on the type of justice and freedom that is desired and believed to be reasonable in a democratic society and on the respective consequences for the system of protection of economic and social rights. Our analysis also involves (i) a reading of economic and social rights inserted in the value structure of social choice, whose starting point is a reformulation of the traditional view of welfare economics developed by Amartya Sen, and its reinterpretation according to a normative perspective that seeks to introduce value judgments into the analysis of well-being and to evaluate inclusive freedom as corresponding to the recognition and exercise by individuals of minimum rights; and (ii) a proposal for constructing the minimum core content of those rights through the insertion of the capabilities approach into their analysis, and its interpretation as the space of rights and prerogatives that determines the set of opportunities for realizing the conditions of freedom, equality and dignity for each individual.
Finally, once the possibilities for judicial adjudication are verified, our analysis culminates with answers to the question of how far the capability approach can serve as a justifiable standard for economic and social rights, what its possibilities and limits are, and what considerations may assist the judge in the decision-making process.
APA, Harvard, Vancouver, ISO, and other styles