Dissertations / Theses on the topic 'Multiple constraints'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Multiple constraints.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Guasco, Luciano M. "Multiple sequence alignment correction using constraints." Master's thesis, Faculdade de Ciências e Tecnologia, 2010. http://hdl.handle.net/10362/5143.

Full text
Abstract:
Work presented within the scope of the European Master in Computational Logics, as a partial requirement for obtaining the degree of Master in Computational Logics.
One of the most important fields in bioinformatics has been the study of protein sequence alignments. The study of homologous proteins, related by evolution, shows the conservation of many amino acids because of their functional and structural importance. One particular relationship between amino acid sites, within the same sequence or across different sequences, is protein coevolution, interest in which has increased as a consequence of the mathematical and computational methods used to understand the spatial, functional and evolutionary dependencies between amino acid sites. The principle of coevolution means that some amino acids are related through evolution because mutations at one site can create evolutionary pressure to select compensatory mutations at other, functionally or structurally related sites. Using current methods to detect coevolution, specifically mutual information techniques from information theory, we show in this work that much of the information between coevolved sites is lost because of mistakes in the multiple sequence alignment of variable regions. Moreover, we show that using these statistical methods to detect coevolved sites in multiple sequence alignments results in a high rate of false positives. Given the number of errors in the detection of coevolved sites from multiple sequence alignments, we propose in this work a method to improve the detection efficacy of coevolved sites, and we implement an algorithm that fixes such sites by correcting the misalignment produced at those specific locations. The detection part of our work is based on the mutual information between sites presumed to have coevolved owing to their high statistical correlation score. With this information we search those regions for possible misalignments due to the incorrect matching of amino acids during the alignment.
The re-alignment part is based on constraint programming techniques, to avoid the combinatorial complexity that arises when one amino acid can be aligned with many others and to avoid inconsistencies in the alignments. In this work, we present a framework to impose constraints over the sequences, and we show how it is possible to compute alignments based on different criteria simply by setting constraints between the amino acids. This framework can be applied not only to improving the alignment and detection of coevolved regions, but also with any desired constraints that may be used to express functional or structural relations among the amino acids in multiple sequences. We also show that after we fix these misalignments using constraint-based techniques, the correlation between coevolved sites increases and, in general, the new alignment is closer to the correct alignment than the original MSA. Finally, we outline possible future research lines aimed at overcoming some drawbacks detected during this work.
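The detection step this abstract describes relies on mutual information between alignment columns. A minimal sketch of such a score, on hypothetical toy columns (the thesis's own normalisation and filtering are not reproduced here):

```python
from collections import Counter
from math import log2

def mutual_information(col_i, col_j):
    """Mutual information between two alignment columns,
    given as lists of amino-acid characters (one per sequence)."""
    n = len(col_i)
    p_i = Counter(col_i)           # marginal counts, column i
    p_j = Counter(col_j)           # marginal counts, column j
    p_ij = Counter(zip(col_i, col_j))  # joint counts
    mi = 0.0
    for (a, b), c in p_ij.items():
        p_ab = c / n
        mi += p_ab * log2(p_ab / ((p_i[a] / n) * (p_j[b] / n)))
    return mi

# Two perfectly co-varying columns score high; independent columns score 0.
print(round(mutual_information(list("AAAAGGGG"), list("TTTTCCCC")), 3))  # 1.0
print(round(mutual_information(list("AGAGAGAG"), list("TTTTCCCC")), 3))  # 0.0
```

A high score flags a candidate coevolved pair, which the thesis then inspects for local misalignment.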
APA, Harvard, Vancouver, ISO, and other styles
2

Maesto, Tony V. (Tony Vu) 1973. "Nulling performance on antenna patterns using multiple null constraints vs. derivative constraints." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/84209.

Full text
3

Tae, Yun-Jin. "Leisure constraints multiple hileararchy [sic] stratification perspectives /." Connect to this title online, 2007. http://etd.lib.clemson.edu/documents/1202500372.

Full text
4

Herrmann, Felix J., and Eric Verschuur. "Curvelet-domain multiple elimination with sparseness constraints." Society of Exploration Geophysicists, 2004. http://hdl.handle.net/2429/426.

Full text
Abstract:
Predictive multiple suppression methods consist of two main steps: a prediction step, in which multiples are predicted from the seismic data, and a subtraction step, in which the predicted multiples are matched with the true multiples in the data. The last step appears crucial in practice: an incorrect adaptive subtraction method will cause multiples to be sub-optimally subtracted, primaries to be distorted, or both. Therefore, we propose a new domain for separation of primaries and multiples via the Curvelet transform. This transform maps the data into almost orthogonal localized events with a directional and spatio-temporal component. The multiples are suppressed by thresholding the input data at those Curvelet components where the predicted multiples have large amplitudes. In this way the more traditional filtering of predicted multiples to fit the input data is avoided. An initial field data example shows a considerable improvement in multiple suppression.
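The thresholding idea in this abstract, muting the input data wherever the predicted multiples are strong, can be sketched in a generic transform domain (toy coefficient vectors and a hypothetical scale factor `lam`; not the actual curvelet-domain implementation):

```python
def mute_predicted_multiples(data, prediction, lam=1.0):
    """Keep a data coefficient only where it exceeds the (scaled)
    amplitude of the predicted multiples at that component;
    the surviving coefficients are taken as primaries."""
    return [d if abs(d) > lam * abs(p) else 0.0
            for d, p in zip(data, prediction)]

data       = [5.0, 0.3, 2.0, 0.1]
prediction = [0.1, 4.0, 0.2, 3.0]   # large where multiples live
print(mute_predicted_multiples(data, prediction))
# [5.0, 0.0, 2.0, 0.0]
```

The point of the transform domain is that primaries and multiples rarely share large coefficients, so this mask separates them without the traditional adaptive filtering.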
5

Robinson, Michael 1982. "Robust minimum variance beamforming with multiple response constraints." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99791.

Full text
Abstract:
Conventional beamformers can be sensitive to mismatches between presumed and actual steering vectors of the signal-of-interest. A recently proposed class of robust beamformers aims to counteract this problem by using a non-attenuation constraint inside a single hypersphere centered at the presumed steering vector of the signal-of-interest. In an effort to strike a balance between robustness to steering vector error and interference-plus-noise suppression, we propose in this manuscript to use multiple concentric hyperspheres instead of one, with different degrees of protection in each. We derive several useful properties of this multiply constrained beamformer and use numerical simulations to show that using two constraints yields improved signal-to-interference-plus-noise-ratio compared to one constraint in certain scenarios, particularly at a large input signal-to-noise-ratio.
The manuscript also includes an overview of conventional beamforming, the mismatch problem and previously proposed robust beamformers.
6

Gera, Geetanjali. "Motor abundance contributes to resolve multiple task constraints." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 109 p, 2009. http://proquest.umi.com/pqdweb?did=1885754581&sid=5&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
7

Humphris, Elisabeth Lyn. "Computational protein design with multiple functional and structural constraints." Diss., Search in ProQuest Dissertations & Theses. UC Only, 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3390110.

Full text
8

Chen, Yuhan. "Formation of the complex neural networks under multiple constraints." HKBU Institutional Repository, 2013. http://repository.hkbu.edu.hk/etd_ra/1504.

Full text
9

Herrmann, Felix J., and Dirk J. Verschuur. "Robust curvelet-domain primary-multiple separation with sparseness constraints." European Association of Geoscientists & Engineers, 2005. http://hdl.handle.net/2429/454.

Full text
Abstract:
A non-linear primary-multiple separation method using curvelet frames is presented. The advantage of this method is that curvelets arguably provide an optimal sparse representation for both primaries and multiples. As such, curvelet frames are ideal candidates for separating primaries from multiples given inaccurate predictions for these two data components. The method derives its robustness to the presence of noise, errors in the prediction, and missing data from the curvelet frame's ability (i) to represent both signal components with a limited number of multi-scale and directional basis functions; (ii) to separate the components on the basis of differences in location, orientation and scale; and (iii) to minimize correlations between the coefficients of the two components. A brief sketch of the theory is provided, as well as a number of examples on synthetic and real data.
10

Kieu, Duy Thong. "Inversion of multiple geophysical data sets using petrophysical constraints." Thesis, Curtin University, 2018. http://hdl.handle.net/20.500.11937/65987.

Full text
Abstract:
The inversion of multiple geophysical datasets co-operatively is a challenge that I address with the fuzzy cluster method. Techniques to incorporate petrophysical constraints and other data in a robust manner in seismic and magnetotelluric inversion were devised and tested upon synthetic data, and real datasets from Nevada, USA and Kevitsa, Finland. The resulting earth models were demonstrated to be better than with smooth constrained approaches and produced useful outputs for exploration and mining.
11

Li, Dan 1969. "Low-frequency bottom backscattering data analysis using multiple constraints beamforming." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36060.

Full text
12

Hartman, Joseph C. "Evaluating multiple options in parallel replacement under demand and rationing constraints." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/24881.

Full text
13

Karem, Tope Razaq. "A low-cost design of multiservice SDH networks with multiple constraints." Master's thesis, University of Cape Town, 2006. http://hdl.handle.net/11427/5196.

Full text
Abstract:
Word processed copy.
Includes bibliographical references (leaves 63-64)
This study investigates the ring-node assignment problem in Multiservice SDH/SONET optical network design with constraints on capacity and differential delay. The problem is characterized as a graph-partitioning problem, and a heuristic algorithm based on constraint satisfaction programming techniques is proposed.
14

Hossain, K. S. M. Tozammel. "Modeling Evolutionary Constraints and Improving Multiple Sequence Alignments using Residue Couplings." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/83218.

Full text
Abstract:
Residue coupling in protein families has received much attention as an important indicator toward predicting protein structures and revealing functional insight into proteins. Existing coupling methods identify largely pairwise couplings and express couplings over amino acid combinations, which do not yield a mechanistic explanation. Most of these methods primarily use a multiple protein sequence alignment---most likely a resultant alignment---which better exposes couplings and is obtained through manual tweaking of an alignment constructed by a classical alignment algorithm. Classical alignment algorithms primarily focus on capturing conservations and may not fully unveil couplings in the alignment. In this dissertation, we propose methods for capturing both pairwise and higher-order couplings in protein families. Our methods provide mechanistic explanations for couplings using physicochemical properties of amino acids and discernibility between orders. We also investigate a method for mining frequent episodes---called coupled patterns---in an alignment produced by a classical algorithm for proteins and for exploiting the coupled patterns for improving the alignment quality in terms of exposition of couplings. We demonstrate the effectiveness of our proposed methods on a large collection of sequence datasets for protein families.
Ph. D.
15

Zhu, Ji 1964. "Multiple choice modular design when linear and separable constraints are present." Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276938.

Full text
Abstract:
In this thesis we give two extensions to the multiple choice modular design problem. In the first case, we consider the situation that parts are purchased from different vendors. In the second case, we consider the situation that linear and separable constraints are present in our model. We propose a heuristic for solving each of the problems. Some computational results are included.
16

Weisenburger, Shawn D. "Effect of constraints and multiple receivers for on-the-fly ambiguity resolution." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq20888.pdf.

Full text
17

Chen, Jin. "The use of multiple cameras and geometric constraints for 3-D measurement." Thesis, City University London, 1995. http://openaccess.city.ac.uk/7542/.

Full text
Abstract:
This thesis addresses some of the problems involved in the automation of 3-D photogrammetric measurement using multiple camera viewpoints. The primary research discussed in this thesis concerns the automatic solution of the correspondence problem. This and associated research has led to the development of an automated photogrammetric measuring system which combines techniques from both machine vision and photogrammetry. Such a system is likely to contribute greatly to the accessibility of 3-D measurement to non-photogrammetrists who will generally have little knowledge and expertise of photogrammetry. A matching method, which is called the 3-D matching method, is developed in the thesis. This method is based on a 3-D intersection and "epipolar plane", as opposed to the 2-D intersection of the epipolar line method. The method is shown to provide a robust and flexible procedure, especially where camera orientation parameters are not well known. The theory of the method is derived and discussed. It is further developed by combination with a bundle adjustment process to iteratively improve the estimated camera orientations and to gradually introduce legitimate matched target images from multiple cameras. The 3-D target matching method is also optimised using a 3-D space constrained search technique. A globally consistent search is developed in which pseudo target images are defined to overcome problems due to occlusion. Hypothesis based heuristic algorithms are developed to optimise the matching process. This method of solving target correspondences is thoroughly tested and evaluated by simulation and by its use in practical applications. The characteristics of the components necessary for a photogrammetric measuring system are investigated. These include sources of illumination, targets, sensors, lenses, and framegrabbers. Methods are introduced for analysis of their characteristics. CCD cameras are calibrated using both plumb line and self calibration methods.
These methods provide an estimation of some of the sources of error, which influence the performance of the system as a whole. The design of an automated photogrammetric measuring system with a number of novel features is discussed and a prototype system is developed for use in a constrained environment. The precision, accuracy, reliability, speed, and flexibility of the developed system are explored in a number of laboratory and experimental applications. Trials show that with further development the system could have commercial value and be used to provide a solution suitable for photogrammetrists and trained operators in a wide range of applications.
18

Phan, Khing Fong. "Optimal design of large-scale structures with multiple displacement and frequency constraints /." The Ohio State University, 1990. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487683401441828.

Full text
19

Tao, Jinxin. "Comparison Between Confidence Intervals of Multiple Linear Regression Model with or without Constraints." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/404.

Full text
Abstract:
Regression analysis is one of the most applied statistical techniques. The statistical inference of a linear regression model with a monotone constraint had been discussed in earlier analysis. A natural question arises when it comes to the difference between the cases with and without the constraint. Although the comparison between confidence intervals of linear regression models with and without restriction for one predictor variable had been considered, this discussion for multiple regression is required. In this thesis, I discuss the comparison of the confidence intervals between a multiple linear regression model with and without constraints.
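For the unconstrained case discussed here, a textbook confidence interval for a regression slope can be sketched as follows (simple linear regression only, with a hard-coded t quantile; the thesis's constrained multiple-regression intervals are more involved):

```python
def slope_ci(xs, ys, t_crit=2.776):
    """95% confidence interval for the slope b of y = a + b*x under
    ordinary least squares. t_crit is the Student-t quantile for
    n - 2 degrees of freedom (2.776 corresponds to n = 6)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    # residual variance and standard error of the slope
    s2 = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2)
    se = (s2 / sxx) ** 0.5
    return b - t_crit * se, b + t_crit * se

# Noiseless data: the interval collapses onto the true slope 2.
print(slope_ci([0, 1, 2, 3, 4, 5], [0, 2, 4, 6, 8, 10]))
```

Imposing a monotone constraint such as b >= 0 would truncate this interval, which is exactly the kind of difference the thesis quantifies.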
20

Zhang, Jian. "Advance Surgery Scheduling with Consideration of Downstream Capacity Constraints and Multiple Sources of Uncertainty." Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCA023.

Full text
Abstract:
This thesis deals with the advance scheduling of elective surgeries in an operating theatre that is composed of operating rooms and downstream recovery units. The arrivals of new patients in each week, the duration of each surgery, and the length-of-stay of each patient in the downstream recovery unit are subject to uncertainty. In each week, the surgery planner should determine the surgical blocks to open and assign some of the surgeries in the waiting list to the open surgical blocks. The objective is to minimize the patient-related costs incurred by performing and postponing surgeries as well as the hospital-related costs caused by the utilization of surgical resources. Considering that the pure mathematical programming models commonly used in literature do not optimize the long-term performance of the surgery schedules, we propose a novel two-phase optimization model that combines Markov decision process (MDP) and stochastic programming to overcome this drawback. The MDP model in the first phase determines the surgeries to be performed in each week and minimizes the expected total costs over an infinite horizon, then the stochastic programming model in the second phase optimizes the assignments of the selected surgeries to surgical blocks. In order to cope with the huge complexity of realistically sized problems, we develop a reinforcement-learning-based approximate dynamic programming algorithm and several column-generation-based heuristic algorithms as the solution approaches. We conduct numerical experiments to evaluate the model and algorithms proposed in this thesis. The experimental results indicate that the proposed algorithms are considerably more efficient than the traditional ones, and that the resulting schedules of the two-phase optimization model significantly outperform those of a conventional stochastic programming model in terms of the patients' waiting times and the total costs in the long run.
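The first-phase MDP can be pictured with generic value iteration over a toy two-state surgery-queue model (hypothetical costs and transition probabilities; not the thesis's actual formulation, which uses approximate dynamic programming at realistic scale):

```python
def value_iteration(transitions, costs, gamma=0.95, tol=1e-9):
    """transitions[s][a]: list of (prob, next_state) pairs;
    costs[s][a]: immediate cost of action a in state s.
    Returns the optimal expected discounted cost of each state."""
    v = [0.0] * len(costs)
    while True:
        new_v = [min(costs[s][a] + gamma * sum(p * v[t]
                                               for p, t in transitions[s][a])
                     for a in range(len(costs[s])))
                 for s in range(len(costs))]
        if max(abs(x - y) for x, y in zip(new_v, v)) < tol:
            return new_v
        v = new_v

# State 0: short waiting list; state 1: long waiting list.
# Action 0: open few surgical blocks (cheap); action 1: open many (costly).
costs = [[1.0, 3.0], [5.0, 4.0]]
transitions = [
    [[(0.5, 0), (0.5, 1)], [(1.0, 0)]],   # from state 0
    [[(1.0, 1)], [(0.7, 0), (0.3, 1)]],   # from state 1
]
v = value_iteration(transitions, costs)
print(round(v[0], 2), round(v[1], 2))
```

As expected, the long-waiting-list state carries the higher long-run cost, which is the signal the first phase uses to decide how many surgeries to schedule each week.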
21

Mahadevan, Srisudha. "Network Selection Algorithm for Satisfying Multiple User Constraints Under Uncertainty in a Heterogeneous Wireless Scenario." University of Cincinnati / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1302550606.

Full text
22

Vyska, Martin. "Analysis of epidemiological models for disease control in single and multiple populations under resource constraints." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/276746.

Full text
Abstract:
Efficient management of epidemics is one of the primary motivations for computational modelling of disease dynamics. Examples range from reactive control measures, where the resources used to manage the epidemic in real time may be limited, to prophylactic control measures such as deployment of genetically resistant plant varieties, which may lead to economic trade-offs. In such situations the question is how resources for disease control should be deployed to ensure the efficient management of the epidemic. Mathematical models are a powerful tool to investigate such questions since experiments are usually infeasible, and the primary aim of this thesis is to study selected mathematical models of disease control to improve the current understanding of their behaviour. We initially analyse the dynamical behaviour that arises from incorporating an economic constraint into two simple, but widely used epidemic models with reactive control. Despite the selection of simple models, the addition of constrained control leads to mathematically rich dynamics, including the coexistence of multiple stable equilibria and stable limit cycles arising from global bifurcations. We use the analytical understanding obtained from the simple model to explore how to allocate a limited resource optimally between a number of separate populations that are exposed to an epidemic. Initially, we assume that the allocation is done at the beginning and cannot be changed later. We seek to answer the question of how the resource should be allocated efficiently to minimise the long-term number of infections. We show that the optimal allocation strategy can be approximated by a solution to a knapsack-type problem, that is, the problem of how to select items of varying values and weights to maximise combined value without going over a certain combined weight. The weights and values are given as functions of the population sizes, initial conditions, and the disease parameters.
Later, we relax the assumptions to allow for reallocation and use the understanding of the dynamics gained from the simple models in the beginning to devise a new continuous time reallocation strategy, which outperforms previously considered approaches. In the final part of the thesis, we focus on plant disease and study a model of prophylactic control using a genetically resistant variety. We consider a trade-off where the genetic resistance carries with it a fitness penalty and therefore reduces yield. We identify the conditions on the parameters under which the resistant variety should be deployed and investigate how these change when the outbreak is uncertain. We show that deploying the resistant variety reduces the probability of an outbreak occurring and therefore can be optimal even when it would not be optimal to deploy it during the outbreak.
23

Healey, Christopher M. "Advances in ranking and selection: variance estimation and constraints." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34768.

Full text
Abstract:
In this thesis, we first show that the performance of ranking and selection (R&S) procedures in steady-state simulations depends highly on the quality of the variance estimates that are used. We study the performance of R&S procedures using three variance estimators (overlapping area, overlapping Cramér-von Mises, and overlapping modified jackknifed Durbin-Watson estimators) that show better long-run performance than other estimators previously used in conjunction with R&S procedures for steady-state simulations. We devote additional study to the development of the new overlapping modified jackknifed Durbin-Watson estimator and demonstrate some of its useful properties. Next, we consider the problem of finding the best simulated system under a primary performance measure, while also satisfying stochastic constraints on secondary performance measures, known as constrained ranking and selection. We first present a new framework that allows certain systems to become dormant, halting sampling for those systems as the procedure continues. We also develop general procedures for constrained R&S that guarantee a nominal probability of correct selection, under any number of constraints and correlation across systems. In addition, we address new topics critical to the efficiency of these procedures, namely the allocation of error between the feasibility check and selection, the use of common random numbers, and the cost of switching between simulated systems.
24

Chakka, Varun Raj. "Models and algorithms for the Multiple Knapsack problem with conflicts." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Find full text
Abstract:
In the Multiple Knapsack Problem with Conflicts, we are given a set of items, each with its own weight and profit, as well as a set of multiple knapsacks, each with its own capacity. The goal is to maximize the total profit of the items inserted in the knapsacks, while respecting the knapsack capacities and the incompatibility constraints between items. The thesis builds on the research article by Basnet (2018). I developed some heuristic algorithms and tested them on various instances with up to 500 items and 15 knapsacks. The results of the computations are reported.
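A minimal greedy heuristic for the problem as stated, placing items in profit-density order into the first knapsack with room and no conflicting occupant, might look like this (an illustrative sketch, not one of the thesis's or Basnet's algorithms):

```python
def greedy_mkp_conflicts(items, capacities, conflicts):
    """items: list of (profit, weight); capacities: one per knapsack;
    conflicts: set of frozensets of conflicting item-index pairs.
    Returns total profit and the item indices placed in each knapsack."""
    bins = [{"cap": c, "items": []} for c in capacities]
    # consider items in decreasing profit-per-weight order
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1],
                   reverse=True)
    profit = 0
    for i in order:
        p, w = items[i]
        for b in bins:
            fits = w <= b["cap"]
            compatible = all(frozenset((i, j)) not in conflicts
                             for j in b["items"])
            if fits and compatible:
                b["items"].append(i)
                b["cap"] -= w
                profit += p
                break
    return profit, [b["items"] for b in bins]

items = [(10, 5), (8, 4), (6, 3)]      # (profit, weight)
conflicts = {frozenset((0, 1))}        # items 0 and 1 clash
print(greedy_mkp_conflicts(items, [7, 7], conflicts))
# (24, [[0], [1, 2]])
```

The conflict check simply forbids two incompatible items from sharing a knapsack; the heuristics in the thesis refine this basic construct-and-check pattern.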
25

Rastgoufard, Rastin. "The Interacting Multiple Models Algorithm with State-Dependent Value Assignment." ScholarWorks@UNO, 2012. http://scholarworks.uno.edu/td/1477.

Full text
Abstract:
The value of a state is a measure of its worth, so that, for example, waypoints have high value and regions inside of obstacles have very small value. We propose two methods of incorporating world information as state-dependent modifications to the interacting multiple models (IMM) algorithm, and then we use a game's player-controlled trajectories as ground truths to compare the normal IMM algorithm to versions with our proposed modifications. The two methods involve modifying the model probabilities in the update step and modifying the transition probability matrix in the mixing step based on the assigned values of different target states. The state-dependent value assignment modifications are shown experimentally to perform better than the normal IMM algorithm in both estimating the target's current state and predicting the target's next state.
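The mixing-step modification described, biasing the transition probability matrix by the assigned values of destination states, can be sketched as follows (toy numbers; the exact weighting used in the thesis may differ):

```python
def reweight_transitions(tpm, state_values):
    """Scale each transition probability by the value assigned to the
    destination state, then renormalise each row to sum to 1."""
    out = []
    for row in tpm:
        weighted = [p * v for p, v in zip(row, state_values)]
        total = sum(weighted)
        out.append([w / total for w in weighted])
    return out

tpm = [[0.9, 0.1],
       [0.1, 0.9]]
values = [1.0, 0.5]   # e.g. a waypoint model vs. a near-obstacle model
print(reweight_transitions(tpm, values))
```

Transitions into low-value states (such as regions inside obstacles) are suppressed, while each row remains a valid probability distribution.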
26

Hanisch, Susan [Verfasser]. "Improving cropping systems of semi-arid south-western Madagascar under multiple ecological and socio-economic constraints / Susan Hanisch." Kassel : Universitätsbibliothek Kassel, 2015. http://d-nb.info/1073894169/34.

Full text
27

Htun, Ei Hnun Phyu. "Assessing the policy constraints and limitations in state-led developing land policy in Myanmar: Using Kingdon's multiple streams framework approach." Masters by Coursework thesis, Murdoch University, 2015. https://researchrepository.murdoch.edu.au/id/eprint/29397/.

Full text
Abstract:
Policy literature frames this research in order to investigate land use policy in Myanmar. Specifically, this paper draws on the multiple streams framework to explore how the Government of Myanmar defined the problem of land use. This project situates Myanmar as a nation in transition and it examines the development of the reformist administration through the introduction of the National Land Use Policy (NLUP) to improve land governance. The project argues that the process of policy development was captured by government policy entrepreneurs and ‘cronies’ who, in turn, set the policy agenda in ways that limited the views of traditional land users and failed to address land-use conflicts within the broader society. The project highlights the usefulness of multiple streams framework in examining land use policy in the context of a political transition. This project illustrates that the political changes in Myanmar constitute a ‘window of opportunity’ to germinate the new policy initiative, however the paper demonstrates the significant power of political cronies to influence the process. The project makes several recommendations to generate successful land reform in Myanmar; these include: reducing the influence of cronies in the political system, building institutional capacity and to learn successful strategies for land reform from neighbouring countries.
28

Kchaou, Mouna. "Modeling and solving a distribution network design problem with multiple operational constraints : Application to a case-study in the automotive industry." Phd thesis, Ecole Centrale Paris, 2013. http://tel.archives-ouvertes.fr/tel-00978486.

Full text
Abstract:
The object of our research project is the development of a design model for a distribution network composed of three levels: plants, distribution centres (DCs) and customers. We assume that the number and location of the plants, as well as the number and location of the customers, are known. Given customer demand and a list of potential DCs, the objective is to determine the location of the DCs to open and to assign customers to them so as to minimise the total cost. In terms of modelling, we consider various operational aspects inspired by a case study in the automotive industry. These aspects have been taken into account separately in the literature but never combined in a single model. In particular, we introduce a clustering step in preprocessing in order to model truck routes. We also integrate minimum-volume constraints on transport links, minimum-volume and maximum-capacity constraints on the distribution centres, maximum covering-distance constraints, and single-assignment constraints. Furthermore, we study a multi-period extension of the problem, using dynamic clustering to model multi-period truck routes. In terms of solution methods, as the studied problem is NP-hard in the strong sense, we propose several efficient heuristic methods based on linear relaxation. Through the tests performed, we show that these methods provide solutions close to the optimum in less computing time than the direct application of a linear solver. We also analyse the structure of the distribution networks obtained and compare the results from several versions of the model in order to show the added value of the clustering as well as of the multi-period approach.
APA, Harvard, Vancouver, ISO, and other styles
29

Kchaou-Boujelben, Mouna. "Modeling and solving a distribution network design problem with multiple operational constraints. Application to a case-study in the automotive industry." Phd thesis, Ecole Centrale Paris, 2013. http://tel.archives-ouvertes.fr/tel-00946890.

Full text
Abstract:
Because of their strategic nature and the various challenges they pose in terms of modelling and solution methods, location and network design problems have been widely studied by operations researchers. Moreover, although case studies in this field are rare in the literature, several recent works have incorporated operational aspects to make these optimisation problems more realistic. Our research project develops a distribution network design model that takes into account several operational aspects inspired by a case study in the automotive industry. Although our modelling choices are motivated by this case study, they remain applicable in other industrial sectors. The distribution network considered has three levels: plants at the first level, distribution centres (DCs) at the second and customers at the last. We assume that the number and locations of the plants, as well as the number and locations of the customers, are known. Given customer demand and a list of potential DCs, the objective is to decide which DCs to open and how to assign customers to them so as to minimise total cost. Our contributions with respect to existing work concern the modelling and solution of the problem as well as the computational experiments carried out. In terms of modelling, we consider several operational aspects that have been treated separately in the literature but never combined in a single model. In particular, we introduce a pre-processing “clustering” step to model truck routes. We also incorporate minimum-volume constraints on transport links to ensure the use of full trucks, minimum-volume and maximum-capacity constraints on the distribution centres, maximum covering-distance constraints and single-assignment constraints. Furthermore, we study a multi-period extension of the problem, using dynamic clustering to model multi-period truck routes. In terms of solution methods, since the problem is strongly NP-hard, we propose several efficient heuristics based on the linear relaxation. Through computational experiments, we show that these methods provide near-optimal solutions in less computing time than directly applying a linear solver. We also analyse the structure of the resulting distribution networks and compare the results of several versions of the model to show the added value of the clustering step and of the multi-period approach.
APA, Harvard, Vancouver, ISO, and other styles
30

Oliveira, Talmai B. "Dealing with Uncertainty and Conflicting Information in Heterogeneous Wireless Networks." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1352490436.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Lehmann, Rüdiger. "Transformation model selection by multiple hypotheses testing." Hochschule für Technik und Wirtschaft Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:520-qucosa-211719.

Full text
Abstract:
Transformations between different geodetic reference frames are often performed by first determining the transformation parameters from control points. If we do not know at the outset which of the numerous transformation models is appropriate, we can set up a multiple hypotheses test. The paper extends the common method of testing transformation parameters for significance to the case where constraints on such parameters are also tested. This provides more flexibility when setting up such a test. One can formulate a general model with a maximum number of transformation parameters and specialize it by adding constraints on those parameters that need to be tested. The proper test statistic in a multiple test is shown to be either the extreme normalized or the extreme studentized Lagrange multiplier, and these are shown to outperform the more intuitive test statistics derived from misclosures. It is also shown how model selection by multiple hypotheses testing relates to the use of information criteria such as AICc and Mallows’ Cp, which are based on an information-theoretic approach; whenever the approaches are comparable, the results of an exemplary computation almost coincide.
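The link to information criteria mentioned in the abstract can be sketched with a small example: two candidate 2D frame transformations (translation-only versus a four-parameter similarity/Helmert transformation) are fitted to control points by least squares and scored with AICc. The control points, noise values, and the complex-number fitting shortcut are illustrative assumptions, not taken from the paper:

```python
import math
import cmath

def fit_translation(src, dst):
    # 2 parameters: a constant shift, estimated as the mean residual
    n = len(src)
    shift = sum(d - s for s, d in zip(src, dst)) / n
    rss = sum(abs(d - (s + shift)) ** 2 for s, d in zip(src, dst))
    return rss, 2  # (residual sum of squares, parameter count)

def fit_similarity(src, dst):
    # 4 parameters (rotation, scale, two shifts) via complex least squares:
    # model dst ~ a*src + b with complex a, b
    n = len(src)
    sm, dm = sum(src) / n, sum(dst) / n
    a = sum((d - dm) * (s - sm).conjugate() for s, d in zip(src, dst)) \
        / sum(abs(s - sm) ** 2 for s in src)
    b = dm - a * sm
    rss = sum(abs(d - (a * s + b)) ** 2 for s, d in zip(src, dst))
    return rss, 4

def aicc(rss, k, n_obs):
    # small-sample corrected Akaike information criterion
    return n_obs * math.log(rss / n_obs) + 2 * k + 2 * k * (k + 1) / (n_obs - k - 1)

# control points as complex numbers (x + iy); the "true" frame change is a
# small rotation + scale + shift, so the similarity model should be selected
src = [complex(x, y) for x, y in [(0, 0), (10, 0), (10, 10), (0, 10), (5, 3), (2, 8)]]
true_a = 1.001 * cmath.exp(1j * math.radians(0.5))
noise = [0.01, -0.02, 0.015, -0.01, 0.02, -0.015]
dst = [true_a * s + complex(3, -2) + complex(e, -e) for s, e in zip(src, noise)]

n_obs = 2 * len(src)  # each point contributes two coordinate residuals
models = {"translation": fit_translation(src, dst),
          "similarity": fit_similarity(src, dst)}
scores = {name: aicc(rss, k, n_obs) for name, (rss, k) in models.items()}
best = min(scores, key=scores.get)
```

The paper's point is that the same model choice can instead be made by a multiple hypothesis test on parameter constraints (e.g. "scale = 1, rotation = 0" reduces similarity to translation), and that the two routes tend to agree.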
APA, Harvard, Vancouver, ISO, and other styles
32

Lehmann, Rüdiger. "Transformation model selection by multiple hypotheses testing." Hochschule für Technik und Wirtschaft Dresden, 2014. https://htw-dresden.qucosa.de/id/qucosa%3A23299.

Full text
Abstract:
Transformations between different geodetic reference frames are often performed by first determining the transformation parameters from control points. If we do not know at the outset which of the numerous transformation models is appropriate, we can set up a multiple hypotheses test. The paper extends the common method of testing transformation parameters for significance to the case where constraints on such parameters are also tested. This provides more flexibility when setting up such a test. One can formulate a general model with a maximum number of transformation parameters and specialize it by adding constraints on those parameters that need to be tested. The proper test statistic in a multiple test is shown to be either the extreme normalized or the extreme studentized Lagrange multiplier, and these are shown to outperform the more intuitive test statistics derived from misclosures. It is also shown how model selection by multiple hypotheses testing relates to the use of information criteria such as AICc and Mallows’ Cp, which are based on an information-theoretic approach; whenever the approaches are comparable, the results of an exemplary computation almost coincide.
APA, Harvard, Vancouver, ISO, and other styles
33

Batur, Demet. "Variance Estimation in Steady-State Simulation, Selecting the Best System, and Determining a Set of Feasible Systems via Simulation." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/10541.

Full text
Abstract:
In this thesis, we first present a variance estimation technique based on the standardized time series methodology for steady-state simulations. The proposed variance estimator has competitive bias and variance compared to the existing estimators in the literature. We also present the technique of rebatching to further reduce the bias and variance of our variance estimator. Second, we present two fully sequential indifference-zone procedures to select the best system from a number of competing simulated systems when “best” is defined by the maximum or minimum expected performance. These two procedures have parabola-shaped continuation regions rather than the triangular continuation regions employed in several papers. The procedures we present accommodate unequal and unknown variances across systems and the use of common random numbers; however, we assume that basic observations are independent and identically normally distributed. Finally, we present procedures for finding a set of feasible or near-feasible systems among a finite number of simulated systems in the presence of multiple stochastic constraints, especially when the number of systems or constraints is large.
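For context on the first contribution, the classic nonoverlapping batch-means estimator below shows the kind of steady-state variance estimation the thesis improves upon (the thesis itself uses standardized time series, which is not reproduced here). The AR(1) output process is an invented stand-in for autocorrelated simulation output:

```python
import random
import statistics

def batch_means_variance(data, n_batches):
    # split the output series into nonoverlapping batches, average each,
    # and estimate Var(sample mean) from the spread of the batch means
    b = len(data) // n_batches
    means = [statistics.fmean(data[i * b:(i + 1) * b]) for i in range(n_batches)]
    grand = statistics.fmean(means)
    s2 = sum((m - grand) ** 2 for m in means) / (n_batches - 1)
    return s2 / n_batches  # estimated variance of the overall sample mean

# AR(1)-style autocorrelated series mimicking steady-state simulation output
random.seed(7)
x, series = 0.0, []
for _ in range(20000):
    x = 0.8 * x + random.gauss(0, 1)
    series.append(x)
series = series[2000:]  # discard warm-up observations
est = batch_means_variance(series, 30)
```

Because the output is positively autocorrelated, the naive i.i.d. estimate `variance(series)/n` badly understates the variance of the mean; batching recovers a far more honest estimate, and the quality of such estimators (bias, variance, batching choices) is exactly what the thesis studies.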
APA, Harvard, Vancouver, ISO, and other styles
34

Smith, Jacques. "Constructing low cost core-satellite portfolios with multiple risk constraints: practical applications to Robo advising in South Africa using active, passive and smart-beta strategies." Master's thesis, Faculty of Commerce, 2020. http://hdl.handle.net/11427/32985.

Full text
Abstract:
Risk and tracking error budgeting was originally adopted by large institutional investors, including pension funds, plan sponsors, foundations, and endowments. More recently, it has gained popularity among financial advisors, multi-managers, fund-of-funds managers, and high-net-worth and retail investors. These techniques contribute to the portfolio optimisation process by limiting the extent to which a portfolio can deviate from its benchmark with regard to risk and tracking error. This paper attempts to determine the optimal strategy for practically implementing risk and tracking error budgeting as a portfolio optimisation technique in South Africa. The study attempts to bridge the gap between active, passive, and smart-beta investment management styles by introducing a low-cost portfolio construction technique for core-satellite portfolio management that contributes to the risk and tracking error budgeting process. Core-satellite portfolios are designed to expose the portfolio to a low-cost primary “core” consisting of passive and enhanced index funds, thus systematic risk “beta”, limiting the tracking error of the portfolio. The secondary “satellite” component is allocated to active and smart-beta managers to exploit expected excess return “alpha”. The primary aim of this research is to construct a rule-based product range of core-satellite portfolios called “replica portfolios”. The product range builds on the foundation of the Association for Savings & Investments South Africa (ASISA) framework. The study identifies three “target portfolios” from ASISA's framework, namely (1) High Risk: SA General Equity, (2) Medium Risk: SA Multi-Asset High Equity and (3) Low Risk: SA Multi-Asset Low Equity. Through this framework, active managers from each category are shortlisted using a Sharpe and Information Ratio filter.
A secondary filtering technique, namely Returns-Based Style Analysis (RBSA), is used to determine the style, R-squared and alpha-generating ability of active managers versus the passive asset classes and style indices they seek to replicate. Applying Euler's theorem for homogeneous functions, we decompose the risk of the core-satellite portfolio into the risk contributed by each of its components. The primary mandate of the core-satellite portfolios in the product range is to allocate risk and tracking error efficiently across several investment management styles and asset classes in order to maximise returns while remaining within the specified risk parameters. The results highlighted that active managers, after fees, predominantly failed to outperform their benchmarks and passive building blocks, as identified through RBSA over the sample period (October 2009 – September 2019). However, a small number of active managers did generate superior risk-adjusted returns and were included in the core-satellite range of products. This study recommends that investors exploit the “hot-hands effect” by investing in specialised, benchmark-agnostic active managers who consistently produce superior risk-adjusted returns. By blending active, passive and smart-beta strategies, investors are exposed to less total risk, less risk per holding and a lower tracking error. The three core-satellite portfolios developed in this study generated higher absolute and risk-adjusted returns than their active and passive counterparts. Fee arbitrage was derived through the range of core-satellite products, resulting in tangible alpha over the sample period. The study encourages investors to use smart-beta strategies alongside active and passive funds, since doing so improves Sharpe and Information ratios while enhancing the original portfolio's characteristics.
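The Euler decomposition mentioned above has a compact form: portfolio volatility sigma(w) = sqrt(w'Cw) is homogeneous of degree one in the weights, so sigma equals the sum over holdings of w_i times the partial derivative of sigma with respect to w_i. A minimal sketch, with an invented three-component core-satellite weighting and covariance matrix:

```python
import math

def risk_contributions(w, cov):
    # sigma = sqrt(w' C w); Euler's theorem for a degree-1 homogeneous
    # function gives sigma = sum_i w_i * d(sigma)/d(w_i), where the partial
    # derivative is (C w)_i / sigma
    n = len(w)
    cw = [sum(cov[i][j] * w[j] for j in range(n)) for i in range(n)]  # C w
    sigma = math.sqrt(sum(w[i] * cw[i] for i in range(n)))
    return sigma, [w[i] * cw[i] / sigma for i in range(n)]

# hypothetical split: 70% passive core, 20% active, 10% smart-beta satellite
w = [0.70, 0.20, 0.10]
cov = [[0.0225, 0.0180, 0.0150],   # annualised covariance matrix (invented)
       [0.0180, 0.0400, 0.0200],
       [0.0150, 0.0200, 0.0300]]
sigma, contrib = risk_contributions(w, cov)
```

The contributions sum exactly to total portfolio volatility, which is what makes this decomposition the natural bookkeeping device for a risk budget: each sleeve's share of the budget can be read off directly.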
APA, Harvard, Vancouver, ISO, and other styles
35

Quinones, Gerardo. "The embeddedness of e-entrepreneurship : institutional constraints and strategic choice in Latin American digital start-ups." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/the-embeddedness-of-eentrepreneurship-institutional-constraints-and-strategic-choice-in-latin-american-digital-startups(e9efe00d-6f27-48a8-bc41-636adb6cb1e2).html.

Full text
Abstract:
The so-called digital economy has been growing exponentially in the emerging economies and it is expected to continue growing around the globe. For this reason, many governments are funding support programmes (e.g. Start-up America in the USA, the UK’s Tech City, and Brazil Startup) to both encourage and facilitate the creation of Digital Start-ups (DSs), defined here as recently created enterprises that produce solely digital products or services. Whilst in some regions there is some evidence that these efforts are starting to pay off, the majority of DSs that have grown to become global digital enterprises remain concentrated in the United States and Europe. In the case of Latin America, the digital economy already accounts for between 2% and 3.2% of GDP. Nonetheless, most e-commerce transactions occur through platforms based in the United States, with a scarcity of examples of Latin American DSs (LADSs) that have grown to become large digital firms. Despite this, the literature has paid little attention to the relationship that exists between the institutional environment and LADSs’ agency. The few extant studies have focused on either institutional or infrastructure constraints and public policies, or business models and resource analysis. To address this knowledge gap, this research studied LADSs in the four largest Latin American countries (Brazil, Mexico, Argentina, and Colombia), representing three-quarters of the region’s GDP, in order to answer the following questions: How do environmental pressures influence the development of LADSs? How do LADSs respond to these pressures and seize potential business opportunities? The research followed a critical realist philosophical foundation and was operationalised through a qualitative exploratory field study of forty organisations, including DSs, accelerators, investors, government agencies, and not-for-profits.
Geels’ (2014) Triple Embeddedness Framework (TEF) was chosen as the theoretical framework to guide this research and integrates constructs from the Lean Start-up method (LSM), which was widely adopted by the LADSs to develop their business models. This study provides empirical support for the constructs outlined in the TEF, identifies crucial shortcomings in LSM, and uncovers new constructs that are necessary to accommodate the DSs’ digital properties, which result in tensions between their embeddedness in the institutional environment, their hybrid embeddedness in a product-sector industry and a digital industry, and their embeddedness in a multi-level organisational field that creates a core-periphery relationship between Latin America and the United States. Therefore, a new framework, entitled DIME, is proposed to assist e-entrepreneurs when developing digital business models to achieve the right firm-environment fit in Latin America. The findings of this study will also contribute to future research and guide policy makers interested in fostering the development of the digital economy in emerging economies.
APA, Harvard, Vancouver, ISO, and other styles
36

Styles, Julie Maree. "Inverse Modelling of Trace Gas Exchange at Canopy and Regional Scales." The Australian National University. Research School of Biological Sciences, 2003. http://thesis.anu.edu.au./public/adt-ANU20030905.040030.

Full text
Abstract:
This thesis deals with the estimation of plant-atmosphere trace gas exchange and isotopic discrimination from atmospheric concentration measurements. Two space scales were investigated: canopy and regional. The canopy-scale study combined a Lagrangian model of turbulent dispersal with ecophysiological principles to infer vertical profiles of fluxes of CO2, H2O and heat as well as carbon and oxygen isotope discrimination during CO2 assimilation, from concentration measurements within a forest. The regional-scale model used a convective boundary layer budget approach to infer average regional isotopic discrimination and fluxes of CO2 and sensible and latent heat from the evolution during the day of boundary layer height and mean concentrations of CO2 and H2O, temperature and carbon and oxygen isotope composition of CO2. For the canopy study, concentrations of five scalar quantities, CO2, 13CO2, C18O16O, H2O and temperature, were measured at up to nine heights within and above a mixed fir and spruce forest in central Siberia over several days just after snow melt in May 2000. Eddy covariance measurements of CO2, H2O and heat fluxes were made above the canopy over the same period, providing independent verification of the model flux estimates. Photosynthesis, transpiration, heat exchange and isotope discrimination during CO2 assimilation were modelled for sun and shade leaves throughout the canopy through a combination of inversion of the concentration data and principles of biochemistry, plant physiology and energy balance. In contrast to the more usual inverse modelling concept where fluxes are inferred directly from concentrations, in this study the inversion was used to predict unknown parameters within a process-based model of leaf gas and energy exchange. 
Parameters relating to photosynthetic capacity, stomatal conductance, radiation penetration and turbulence structure were optimised by the inversion to provide the best fit of modelled to measured concentration profiles of the five scalars. Model results showed that carbon isotope discrimination, stomatal conductance and intercellular CO2 concentration were depressed due to the low temperatures experienced during snow melt, oxygen isotope discrimination was positive and consistent with other estimates, radiation penetrated further than simple theoretical predictions because of leaf clumping and penumbra, the turbulence coherence was lower than expected and stability effects were important in the morning and evening. For the regional study, five flights were undertaken over two days in and above the convective boundary layer above a heterogeneous pine forest and bog region in central Siberia. Vertical profiles of CO2 and H2O concentrations, temperature and pressure were obtained during each flight. Air flask samples were taken at various heights for carbon and oxygen isotopic analysis of CO2. Two budget methods were used to estimate regional surface fluxes of CO2 and plant isotopic discrimination against 13CO2 and C18O16O, with the first method also used to infer regional sensible and latent heat fluxes. Flux estimates were compared to ground-based eddy covariance measurements. Model results showed that afternoon estimates for carbon and oxygen isotope discrimination were close to those expected from source water isotopic measurements and theory of isotope discrimination. Estimates for oxygen isotope discrimination for the morning period were considerably different and could be explained by contrasting influences of the two different ecosystem types and non-steady state evaporative enrichment of leaf water.
APA, Harvard, Vancouver, ISO, and other styles
37

Tsai, Pei-Fang. "Tight Flow-Based Formulations for the Asymmetric Traveling Salesman Problem and Their Applications to some Scheduling Problems." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/27863.

Full text
Abstract:
This dissertation is devoted to the development of new flow-based formulations for the asymmetric traveling salesman problem (ATSP) and to the demonstration of their applicability in effectively solving some scheduling problems. The ATSP is commonly encountered in the areas of manufacturing planning and scheduling, and transportation logistics. The integration of decisions pertaining to production and shipping, in the supply chain context, has given rise to an additional and practical relevance to this problem especially in situations involving sequence-dependent setups and routing of vehicles. Our objective is to develop new ATSP formulations so that algorithms can be built by taking advantage of their relaxations (of integer variables, thereby resulting in linear programs) to effectively solve large-size problems.

In view of our objective, it is essential to have a formulation that is amenable to the development of an effective solution procedure for the underlying problem. One characteristic of a formulation that is helpful in this regard is its tightness. The tightness of a formulation usually refers to the quality of its approximation to the convex hull of integer feasible solutions. Another characteristic is its compactness. The compactness of a formulation is measured by the number of variables and constraints that are used to formulate a given problem. Our formulations for the ATSP and the scheduling problems that we address are both tight and compact.

We present a new class of polynomial length formulations for the asymmetric traveling salesman problem (ATSP) by lifting an ordered path-based model using logical restrictions in concert with the Reformulation-Linearization Technique (RLT). We show that a relaxed version of this formulation is equivalent to a flow-based ATSP model, which, in turn, is tighter than the formulation based on the exponential number of Dantzig-Fulkerson-Johnson (DFJ) subtour elimination constraints. The proposed lifting idea is applied to derive a variety of new formulations for the ATSP, and a detailed analysis of these formulations is carried out to show that some of these formulations are the tightest among those presented in the literature. Computational results are presented to exhibit the relative tightness of our formulations and the efficacy of the proposed lifting process.
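To make the "exponential number of DFJ subtour elimination constraints" concrete: a directed tour over n cities requires one cut for every proper subset S with 2 ≤ |S| ≤ n−1, i.e. 2^n − n − 2 cuts in total, which is why compact flow-based formulations are attractive. The brute-force solver and cost matrix below are purely illustrative, not the dissertation's formulations:

```python
from itertools import permutations, combinations

def atsp_bruteforce(cost):
    # smallest-cost directed tour visiting every city once, starting at 0;
    # feasible only for tiny n, but makes the problem definition explicit
    n = len(cost)
    best_tour, best = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        c = sum(cost[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if c < best:
            best, best_tour = c, tour
    return best, best_tour

def dfj_constraint_count(n):
    # one DFJ subtour-elimination cut per proper subset S, 2 <= |S| <= n-1;
    # this count equals 2**n - n - 2 and grows exponentially with n
    return sum(1 for k in range(2, n) for _ in combinations(range(n), k))

cost = [[0, 2, 9, 10],   # asymmetric cost matrix (invented example)
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
best, tour = atsp_bruteforce(cost)
```

Even at n = 12 there are already 4,082 DFJ cuts; polynomial-length flow-based models like those studied here avoid enumerating them while, as the dissertation shows, remaining at least as tight.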

While the computational results demonstrate the efficacy of employing the proposed theoretical RLT and logical lifting ideas, it remains of practical interest to take due advantage of the tightest formulations. The key requirement to accomplish this is the ability to solve the underlying LP relaxations more effectively. One approach, to that end, is to solve these LP relaxations to (near-) optimality by using deflected subgradient methods on Lagrangian dual formulations. We solve the LP relaxation of our tightest formulation, ATSP6, to (near-) optimality by using a deflected subgradient algorithm with average direction strategy (SA_ADS) (see Sherali and Ulular [69]). We also use two nondifferentiable optimization (NDO) methods, namely, the variable target value method (VTVM) presented by Sherali et al. [66] and the trust region target value method (TRTV) presented by Lim and Sherali [46], on the Lagrangian dual formulation of ATSP6. The preliminary results show that the near-optimal values obtained by the VTVM on solving the problem in the canonical format are the closest to the target optimal values. Another approach that we use is to derive a set of strong valid inequalities based on our tighter formulations through a suitable surrogation process for inclusion within the more compact manageable formulations. Our computational results show that, when the dual optimal solution is available, the associated strong valid inequalities generated from our procedure can successfully lift the LP relaxation of a less tight formulation, such as ATSP2R¯, to be as tight as the tightest formulation, such as ATSP6.

We extend our new formulations to include precedence constraints in order to enforce a partial order on the sequence of cities to be visited in a tour. The presence of precedence constraints within the ATSP framework is encountered quite often in practice. Examples include disassembly optimization (see Sarin et al. [62]) and the scheduling of wafers/ICs on automated testing equipment in a semiconductor manufacturing facility (see Chen and Hsia [17]), among others. Our flow-based ATSP formulation can very conveniently capture these precedence constraints. We also present computational results to depict the tightness of our precedence-constrained asymmetric traveling salesman problem (PCATSP) formulations.

We, then, apply our formulations to the hot strip rolling scheduling problem, which involves the processing of hot steel slabs, in a pre-specified precedence order, on one or more rollers. The single-roller hot strip rolling scheduling problem can be directly formulated as a PCATSP. We also consider the multiple-roller hot strip rolling scheduling problem. This gives rise to the multiple-asymmetric traveling salesman problem (mATSP). Not many formulations have been presented in the literature for the mATSP, and there are none for the mATSP formulations involving a precedence order among the cities to be visited by the salesmen, which is the case for the multiple-roller hot strip rolling scheduling problem. To begin with, we develop new formulations for the mATSP and show the validity of our formulations, and present computational results to depict their tightness. Then, we extend these mATSP formulations to include a pre-specified, special type of precedence order in which to process the slabs, and designate the resulting formulations as the restricted precedence-constrained multiple-asymmetric traveling salesman problem (rPCmATSP) formulations. We directly formulate the multiple-roller hot strip rolling scheduling problem as a rPCmATSP. Furthermore, we consider the hot strip rolling scheduling problem with slab selection in which not all slabs need to be processed. We model the single-roller hot strip rolling scheduling problem with slab selection as a multiple-asymmetric traveling salesman problem with exactly two traveling salesmen. Similarly, the multiple-roller hot strip rolling scheduling problem with slab selection is modeled as a multiple-asymmetric traveling salesman problem with (m+1) traveling salesmen.

A series of computational experiments is conducted to exhibit the effectiveness of our formulations for the solution of hot strip rolling scheduling problems. Furthermore, we develop two mixed-integer programming algorithms to solve our formulations. These are based on Benders’ decomposition [13] and are designated the Benders’ decomposition and Modified Benders’ methods. In concert with the special type of precedence order present in the hot strip rolling scheduling problems, we further introduce an adjustable density ratio for the associated precedence network, and we use randomly generated test problems to study the effect of various density ratios on solving these scheduling problems. Our experimentation shows the efficacy of our methods over CPLEX.

Finally, we present a compact formulation for the job shop scheduling problem, designated the JSCD (job shop conjunctive-disjunctive) formulation, which is an extension of our ATSP formulations. We use two test problems given in Muth and Thompson [53] to demonstrate the optimal schedule and the lower bound values obtained by solving the LP relaxations of our formulations. However, we observe that the lower bound values obtained by solving the LP relaxations of all variations of our JSCD formulation are equal to the maximum total processing time among the jobs in the problem.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
38

Nguyen, Thi Hai Yen. "Multiple exposures and co-exposures to chemical neurotoxic agents and intense physical constraints among male blue-collar workers in the agriculture, manufacturing, and construction sectors in France. Multiple Exposures and Co-exposures to Occupational Hazards among Agricultural Workers: A Systematic Review of Observational Studies." Thesis, Angers, 2017. http://www.theses.fr/2017ANGE0065.

Full text
Abstract:
Les effets délétères sur la santé de certaines expositions professionnelles, prises indépendamment, ont été observés dans un large nombre d’études. Pourtant, la prévalence et l'impact de multi-exposition ou co-exposition à des diverses nuisances ont plus rarement été explorée, malgré le caractère ubiquitaire de nombreuses nuisances. Par conséquent, l’étude de multi-exposition/co-exposition dans le cadre professionnel est considérée comme un enjeu majeur de la recherche épidémiologique en santé au travail. Une revue systématique de la littérature concernant le secteur de l’agriculture a été réalisé en s'appuyant sur le titre, le résumé, et le texte intégral des 36.404 articles originaux grâce à 5 bases de données reconnues et 2 sources de données nord-américaines complémentaires. Les résultats des 15 articles inclus suggèrent que l’exposition aux multiples chimiques est significativement associée au risque de maladies respiratoires, de cancers, de dommages sur l’ADN et les cytogénétiques. L’exposition aux multiples physiques a été associée à une augmentation du risque de perte d'audition, tandis que la co-exposition aux facteurs physiques et biomécaniques a été associée à un risque accru de troubles musculo-squelettiques. Aucune étude n'a exploré la co-exposition professionnelle à des facteurs chimiques et physiques, ainsi qu'à la co-exposition professionnelle à des facteurs chimiques et biomécaniques. Les résultats de cette revue de la littérature indiquent la nécessité l’évaluer la prévalence de l’exposition professionnel à des multiples nuisances en France. Les multiple/co-expositions aux agents neurotoxiques chimiques(ANCs) et aux contraintes physiques intenses (CPIs) ont ainsi été analysées chez 5587 hommes ouvriers français des secteurs de l'agriculture, de l’industrie manufacturière, et de la construction à partir de l’enquête nationale transversale SUMER 2010. 
Environ 6% des ouvriers étaient co-exposés aux ANCs et CPIs dans les trois secteurs étudiés (p = 0,29). La multi-exposition aux CPIs était plus nettement plus fréquente (35%, p <0,001) que la multi-exposition aux ANCs (2%, p <0,001) chez les hommes de trois secteurs. Ces recherches mettent en évidence la nécessité de conduire davantage d’études liées à multi-exposition/coexposition professionnelle. Elles seront essentielles pour améliorer la sécurité au travail et permettre la surveillance et la prévention risques et des maladies professionnelles
A wide range of studies has demonstrated the relationships between diverse types of occupational exposures,taken independently, and adverse health outcomes. Yet, the prevalence and impact of multiple occupational exposures or co-exposures have rarely been explored despite the ubiquity of numerous hazards. Therefore, multiple occupational exposures/co-exposures and their impact on health are considered as a major challenge of epidemiologic research inthe occupational health and safety area. A systematic review concerning the agriculture sector was carried out based on the titles, abstracts and fulltexts screening of 36,404 initial articles from 5 well-known databases and 2 North American complementary sources. The findings from the 15 papers finally included suggested that multiple chemical exposures were significantly associated with an increased risk of respiratory diseases, cancers, DNA and cytogenetic damages. Multiple physical exposures were shown to increase the risk of hearing loss while co-exposures to physical and biomechanical hazardswere associated with an increased risk of musculoskeletal disorders. However, no studies included in the systematic review explored either occupational co-exposures to both physical and chemical factors or occupational co-exposures to biomechanical and chemical factors.The results described in the systematic review raised the necessity to conduct further studies multipleoccupational exposures and co-exposures among workers. Therefore, multiple occupational exposures and coexposures’ prevalences to chemical neurotoxic agents (CNAs) and intense physical constraints (IPCs) were examined among 5,587 French male blue-collar workers (BCWs) in the agriculture, manufacturing, and construction sectors based on the cross-sectional and national SUMER 2010 survey. About 6% of male BCWs were co-exposed to IPCs andCNAs in these three sectors (p=0.29). 
Multiple exposure to IPCs was predominant (35%, p <0.001), while multiple exposure to CNAs was much lower (2%, p <0.001) among male BCWs in the three sectors. The findings highlight the necessity of carrying out further studies on multiple occupational exposures/co-exposures to diverse hazards and their impact on workers' health. Such research is required to improve occupational safety and the efficiency of health care surveillance and occupational disease prevention
APA, Harvard, Vancouver, ISO, and other styles
39

Chima, Chidiebere Daniel. "Socio-economic determinants of modern agricultural technology adoption in multiple food crops and its impact on productivity and food availability at the farm-level : a case study from south-eastern Nigeria." Thesis, University of Plymouth, 2015. http://hdl.handle.net/10026.1/3310.

Full text
Abstract:
Farmers generally produce multiple crops while selectively adopting modern technologies to meet various needs. The main aim of this study is, therefore, to identify the range of socio-economic factors influencing the adoption of modern agricultural technology in multiple food crops and the corresponding impacts on productivity and food availability at the farm-level in South-eastern Nigeria. In this study, three major food crops (i.e., rice, yam and cassava) and two elements of modern technology (i.e., HYV seeds and inorganic fertilizers) are considered. The hypotheses of the study are that inverse farm size-technology adoption, farm size-productivity, farm size-profitability and farm size-food availability relationships exist in Nigerian agriculture. The research is based on an in-depth farm survey of 400 farmers from two states (251 from Ebonyi and 149 from Anambra) of South-eastern Nigeria. Data has also been derived from surveys and interviews of ADP Program Managers and NGOs. A range of qualitative and quantitative methods, including inferential statistics, a bivariate probit model and regression analysis, was used in order to achieve the specific objectives and test the hypotheses. The results show that the sample is dominated by small-scale farmers (81% of the total) owning less than 1 ha of land; the average farm size is small, estimated at 1.27 ha. Farmers grow multiple crops rather than a single crop: 68% of the surveyed farmers grew at least two food crops. The level of modern technology adoption is low and mixed; farmers selectively adopt components of technologies, as expected, and apply far less than the recommended dose of fertilizers. Only 29% of farmers adopted both HYV seeds and fertilizers as a package. The study clearly demonstrates that inverse farm size-technology adoption, farm size-productivity, and farm size-food availability relationships exist in agriculture in this region of Nigeria, but not an inverse farm size-profitability relationship.
The bivariate probit model diagnostics reveal that the decisions to adopt modern technologies are significantly correlated, implying that univariate analysis of such decisions is biased, thereby justifying the bivariate approach. Overall, the most dominant determinants are the positive influence of farming experience and the negative influence of remoteness of extension services on modern technology adoption. The mean food produced per capita per day is 12322.74 calories from one ha of land, and food available for consumption is 4693.34 calories, which is higher than the daily requirement of 2000 calories. Yam is produced mainly for sale while cassava is produced for consumption. Regression analysis shows that farm size and the share of cassava in the total crop portfolio significantly increase food availability. A host of constraints affects Nigerian agriculture, including lack of extension agents, credit facilities, farm inputs, irrigation and value addition, corruption, lack of support for ADP staff, and ineffective government policies. Policy implications include investment in extension and credit services and other infrastructure (e.g., irrigation, ADP staff), training of small farmers in business skills, and promotion of modern technology as a package, as well as special projects targeted at cassava (e.g., the Cassava Plus project), in order to boost modern technology adoption in food crops and to improve productivity, profitability and food availability at the farm-level in Nigeria.
APA, Harvard, Vancouver, ISO, and other styles
40

Jishnu, A. "QoS Routing With Multiple Constraints." Thesis, 2004. http://hdl.handle.net/2005/1125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Levy, David. "Multiple Vehicle Routing Problem with Fuel Constraints." Thesis, 2013. http://hdl.handle.net/1969.1/151093.

Full text
Abstract:
In this paper, a Multiple Vehicle Routing Problem with Fuel Constraints (MVRPFC) is considered. The problem consists of a field of targets to be visited and a collection of vehicles with fuel tanks that may visit them. The focus is mainly on improving feasible solutions, but the full solution process is discussed: cost matrix transformation, field partitioning, tour generation and rerouting, and tour improvement. Four neighborhoods (2-opt, 3-opt, Target Vehicle Exchange, Depot Exchange) were investigated under the Variable Neighborhood Descent and Variable Neighborhood Search schemes, with APD and Voronoi partition methods, and their performance was compared on various instances. In general, 2-opt performed as well as 3-opt in less time; in fact, 3-opt was the slowest of the four neighborhoods. Additionally, the Variable Neighborhood Descent scheme was found to produce better results than Variable Neighborhood Search.
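For illustration only (this sketch is not taken from the thesis), a basic first-improvement 2-opt pass, the neighborhood that performed best above, can be written as follows; the function names are hypothetical:

```python
def tour_length(tour, dist):
    """Total length of a closed tour, given a distance lookup dist[u][v]."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse tour segments while doing so shortens the tour."""
    best = tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                # candidate: reverse the segment best[i..j]
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if tour_length(cand, dist) < tour_length(best, dist):
                    best, improved = cand, True
    return best
```

A 3-opt neighborhood would enumerate pairs of segment reversals in the same way, which is why it is markedly slower per pass.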
APA, Harvard, Vancouver, ISO, and other styles
42

LIN, ZONG-ZHI, and 林宗志. "Midcourse guidance law design with multiple constraints." Thesis, 1986. http://ndltd.ncl.edu.tw/handle/68124443775878728713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

林國訓. "Dispatching Rules for Multiple Queue Time Constraints." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/74351526832466924095.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Industrial Engineering and Engineering Management
92
A queue time constraint requires that the elapsed time between two processes stay within a set period in order to maintain the quality of the work pieces. Although many studies have examined queue time constraints in semiconductor manufacturing, few have discussed multiple queue time constraints applied to consecutive manufacturing processes, a situation that appears more frequently with the introduction of recent advanced processes. This research proposes dispatching and lot release methods for a section of manufacturing processes under multiple queue time and batching constraints. First, an algorithm and associated software were developed to calculate the earliest lot release time that assures non-violation of queue times and satisfies the lot batching requirements of the work pieces in the observed manufacturing section. Second, a dispatching method based on a waiting time allocation concept at the beginning of the manufacturing section, with FIFO for the ensuing processes, is proposed. Simulation results show that this "section-wise" dispatching method outperforms other commonly used dispatching methods such as First-In First-Out (FIFO), Global FIFO, Earliest Due Date (EDD), and Critical Ratio (CR) in all three performance indices: target hit rate, average throughput, and average cycle time.
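As a minimal illustration of the feasibility check such methods rely on (a sketch with hypothetical names, not the thesis's algorithm), a schedule of step start times can be screened against per-transition queue-time limits:

```python
def queue_time_violations(start, proc, max_queue):
    """Return indices of transitions whose queue-time limit is violated.

    start[i]     -- start time of process step i
    proc[i]      -- processing time of step i
    max_queue[i] -- max allowed wait between the end of step i and the
                    start of step i+1
    """
    bad = []
    for i, limit in enumerate(max_queue):
        wait = start[i + 1] - (start[i] + proc[i])
        if wait > limit:
            bad.append(i)
    return bad
```

An earliest-release-time computation would push this check backward: delay a lot's release until every projected wait in the section fits its limit.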
APA, Harvard, Vancouver, ISO, and other styles
44

CHEN, YI-TING, and 陳逸庭. "Travel Path Planning under Multiple User Constraints." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/7ee6d2.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Communications Engineering
105
In recent years, more and more tourists choose to travel independently rather than join a tour group. However, travel planning is a tedious and time-consuming job. In this study, we propose an algorithm, TPS-RC (Travel Path Scheduling under User Requirement Constraints), which considers multiple constraints (travel time, travel budget and specific attractions) during the schedule-making process. The experimental results show that TPS-RC outperforms the other approaches in the travel scores gained. We also develop a prototype travel website which demonstrates the applicability of TPS-RC in real life.
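The abstract does not spell out TPS-RC itself, but the flavor of constrained itinerary selection can be sketched with a simple greedy heuristic; every name and the score-density rule below are illustrative assumptions, not the thesis's method:

```python
def greedy_itinerary(attractions, max_time, max_budget, must_visit=()):
    """Pick attractions maximizing total score under time and budget caps.

    attractions -- dict: name -> (score, time_cost, money_cost)
    must_visit  -- names forced into the plan first (the "specific
                   attractions" constraint)
    """
    plan, t, m = [], 0.0, 0.0
    for name in must_visit:
        score, dt, dm = attractions[name]
        plan.append(name); t += dt; m += dm
    # fill remaining capacity greedily by score per unit of combined cost
    rest = sorted((n for n in attractions if n not in plan),
                  key=lambda n: attractions[n][0] / (attractions[n][1] + attractions[n][2]),
                  reverse=True)
    for name in rest:
        score, dt, dm = attractions[name]
        if t + dt <= max_time and m + dm <= max_budget:
            plan.append(name); t += dt; m += dm
    return plan
```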
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Chih-Yu, and 陳治宇. "Robust Broadband Array Beamforming with Multiple-Beam Constraints." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/05956276204490360992.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Communication Engineering
93
The theory of narrowband array signal processing is nearly complete, and many robust algorithms exist to handle each non-ideal condition and improve array performance. In this thesis, I extend robust algorithms developed for narrowband signals and narrowband beamformers to broadband beamformers dealing with broadband signals. In the broadband beamformer structure, a tapped delay line follows each array element to yield the desired frequency response. Two non-ideal situations are discussed: angle mismatch and coherent interference. Under the criteria used previously in the narrowband case, I extend the robust algorithms to the broadband case and simulate their performance. For angle mismatch, I extend Cheng's method to the broadband case; however, finding the signal subspace is not as easy as in the narrowband case because of the broad frequency band and the tapped-delay-line structure, so I develop a method to construct the signal subspace. For coherent interference, I extend the two-stage array structure from the narrowband case to the broadband case and compare its performance with spatial smoothing.
APA, Harvard, Vancouver, ISO, and other styles
46

YU, JIN-LANG, and 余金郎. "Performance analysis of antenna arrays multiple linear constraints." Thesis, 1988. http://ndltd.ncl.edu.tw/handle/23847936013419893445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Jung, Guo-Wei, and 鍾國暐. "Adaptive Broadband Array Beamforming with Multiple-Beam Constraints." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/46711279202424546574.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Communication Engineering
95
By adjusting its weights, an adaptive antenna array can suppress interference and noise while receiving a signal from a specific angle. However, the performance of the array may be highly degraded if there are uncertainties in it. In this thesis, we consider uncertainties including array element position perturbations and mutual coupling effects, and we focus on correcting these effects for the broadband uniform linear array, the broadband uniform circular array, and the GSC. We use a broadband noise-subspace projection method to correct the problems caused by the uncertainties: the eigendecomposition of the spectral density matrix and a gradient method are used to update the steering vector of the antenna array iteratively. According to the simulation results, our method performs better than Diagonal Loading and Optimum Diagonal Loading.
APA, Harvard, Vancouver, ISO, and other styles
48

Hung, Sheng-Yi, and 洪笙益. "High Performance System-In-Package Partitioning With Multiple Constraints." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/35938561184370475529.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
97
In this thesis, we present two partitioning approaches for a die-stacking SiP design: an Integer Linear Programming (ILP) based method and a Simulated Annealing (SA) based method. Our SiP partitioning goal is to minimize the total length of inter-layer connections, the total number of I/O pads, on-chip wire length and temperature in order to achieve better performance. Experimental results on the Gigascale Systems Research Center (GSRC) benchmarks show that our SA-based method improves the total length of inter-layer connections by 0.31%-7.72%, total I/O pads by 0.34%-3.13%, temperature by 0.83%-11.74% and on-chip wire length by 0.05%-2.8% compared with hmetis [10].
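The SA-based approach above rests on a standard annealing loop. As a generic illustration (not the thesis's implementation; the names and the toy balancing objective below are assumptions), such a loop accepts worsening moves with a temperature-dependent probability:

```python
import math
import random

def anneal(init_state, neighbor, cost, t0=2.0, alpha=0.995, iters=4000, seed=0):
    """Generic simulated annealing: worse moves are accepted with
    probability exp(-delta / temperature)."""
    rng = random.Random(seed)
    state = best = init_state
    temp = t0
    for _ in range(iters):
        cand = neighbor(state, rng)
        delta = cost(cand) - cost(state)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            state = cand
            if cost(state) < cost(best):
                best = state
        temp *= alpha  # geometric cooling schedule
    return best
```

In a SiP setting the state would be a layer assignment of dies and the cost a weighted sum of inter-layer wire length, I/O pads, and temperature; the toy usage below balances "cell areas" across two layers.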
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Chih-Chang, and 王智璋. "Robust Adaptive Array Signal Processing with Multiple-Beam Constraints." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/54725506621311732742.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Communication Engineering
92
We can receive a signal from a specific DOA (direction of arrival), or more than one signal from different DOAs, by using an adaptive antenna array, while suppressing noise and interference. However, the adaptive antenna array may suffer from many non-ideal effects, for example array element position perturbations and signal direction angle errors, which deteriorate its performance. In this thesis, we focus on correcting these two non-ideal effects using one-dimensional and two-dimensional antenna array structures. Four methods are used. Three were developed in our laboratory: DITAM, discussed in [14]; the Method discussed in [19]; and an Improved Method introduced in this thesis. The fourth is Diagonal Loading, which has been discussed in many papers [16-17,25-27] in recent years. We use DITAM, the Method, and Diagonal Loading to solve the array element position perturbation problem in chapters 3 and 4, where the Method shows better robustness. In chapter 5, we use the Method, Diagonal Loading, and the Improved Method to solve the signal direction angle error problem; according to the experiments, the Improved Method has the best robustness.
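As a generic illustration of the Diagonal Loading idea mentioned above (a sketch under stated assumptions, not the thesis's algorithm), MVDR-style weights for a two-element array can be computed by adding a loading term to the covariance diagonal before inversion; the function name and the 2x2 restriction are illustrative:

```python
def dl_mvdr_weights(R, steering, loading=0.1):
    """MVDR weights with diagonal loading for a 2-element array.

    R        -- 2x2 spatial covariance matrix (lists of complex numbers)
    steering -- length-2 steering vector toward the desired DOA
    """
    # add loading to the diagonal so the inverse is well conditioned
    r = [[R[0][0] + loading, R[0][1]],
         [R[1][0], R[1][1] + loading]]
    det = r[0][0] * r[1][1] - r[0][1] * r[1][0]
    inv = [[r[1][1] / det, -r[0][1] / det],
           [-r[1][0] / det, r[0][0] / det]]
    # ri_a = (R + loading*I)^-1 @ steering
    ri_a = [inv[0][0] * steering[0] + inv[0][1] * steering[1],
            inv[1][0] * steering[0] + inv[1][1] * steering[1]]
    # normalize so that w^H steering = 1 (distortionless response)
    denom = (steering[0].conjugate() * ri_a[0]
             + steering[1].conjugate() * ri_a[1])
    return [ri_a[0] / denom, ri_a[1] / denom]
```

The loading term trades a small loss of interference rejection for robustness against covariance estimation errors and steering mismatch.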
APA, Harvard, Vancouver, ISO, and other styles
50

Morgenstern, Burkhard, Nadine Werner, Sonja J. Prohaska, Rasmus Steinkamp, Isabelle Schneider, Amarendran R. Subramanian, Peter F. Stadler, and Jan Weyer-Menkhoff. "Multiple sequence alignment with user-defined constraints at GOBICS." 2005. https://ul.qucosa.de/id/qucosa%3A32117.

Full text
Abstract:
Most multi-alignment methods are fully automated, i.e. they are based on a fixed set of mathematical rules. For various reasons, such methods may fail to produce biologically meaningful alignments. Herein, we describe a semi-automatic approach to multiple sequence alignment in which biological expert knowledge can be used to influence the alignment procedure. The user can specify parts of the sequences that are biologically related to each other; our software program uses these sites as anchor points and creates a multiple alignment respecting these user-defined constraints. By using known functionally, structurally or evolutionarily related positions of the input sequences as anchor points, our method can produce alignments that reflect the true biological relationships among the input sequences more accurately than fully automated procedures can.
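The anchoring idea can be illustrated with a toy pairwise scorer (this sketch is not the GOBICS program's method; `nw_align` and `anchored_score` are hypothetical names): user-defined anchors cut both sequences into segments that are aligned independently, so the alignment is forced to pass through the anchors.

```python
def nw_align(a, b, match=1, mismatch=-1, gap=-2):
    """Plain Needleman-Wunsch global alignment score (no traceback)."""
    rows = [[j * gap for j in range(len(b) + 1)]]
    for i in range(1, len(a) + 1):
        row = [i * gap]
        for j in range(1, len(b) + 1):
            diag = rows[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            row.append(max(diag, rows[i - 1][j] + gap, row[j - 1] + gap))
        rows.append(row)
    return rows[-1][-1]

def anchored_score(a, b, anchors, **kw):
    """Score an alignment constrained by user-defined anchors.

    anchors -- list of (i, j) cut points: the alignment must align
               a[:i] with b[:j], i.e. pass through each anchor.
    """
    score, pi, pj = 0, 0, 0
    for i, j in sorted(anchors) + [(len(a), len(b))]:
        score += nw_align(a[pi:i], b[pj:j], **kw)  # align segment pair
        pi, pj = i, j
    return score
```

A well-chosen anchor leaves the optimal score intact, while a biologically wrong one forces gaps into every segment, which is exactly why anchors let expert knowledge override the fixed mathematical rules.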
APA, Harvard, Vancouver, ISO, and other styles