Dissertations / Theses on the topic 'Minimisation'

To see the other types of publications on this topic, follow the link: Minimisation.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Minimisation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Savulescu, L. E. "Simultaneous energy and water minimisation." Thesis, University of Manchester, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.504684.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mostafavi, S. Mostafa. "Delay minimisation in network coding." Thesis, University of Surrey, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.529446.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Priest, Andrew. "Waste minimisation for bromination chemistry." Thesis, University of York, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325651.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bhikha, Harshad. "Water minimisation at Skorpion zinc." Master's thesis, University of Cape Town, 2009. http://hdl.handle.net/11427/5350.

Full text
Abstract:
Includes synopsis.
Includes bibliographical references (leaves 80-84).
This work proposes a systematic optimisation of the water balance of the Skorpion Zinc refinery, the case study selected for this work. The Skorpion process is located in Rosh Pinah, Namibia, and was selected because it lies in an ecologically sensitive region where water is scarce. The project argues that process optimisation cannot occur by focusing on unit operations in isolation; rather, the interactions between the different operations must be used to optimise the process as a whole.
APA, Harvard, Vancouver, ISO, and other styles
5

Chen, Zheng. "Minimisation L¹ en mécanique spatiale." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS229/document.

Full text
Abstract:
In astronautics, an important issue is to control the motion of a satellite subject to the gravitation of celestial bodies in such a way that certain performance indices are minimized (or maximized). In this thesis, we are interested in minimizing the L¹-norm of the control for the circular restricted three-body problem. The necessary conditions for optimality are derived using the Pontryagin maximum principle, revealing the existence of bang-bang and singular controls. Singular extremals are analyzed, and the Fuller phenomenon shows up according to the theories developed by Marchal [1] and Zelikin et al. [2, 3]. The controllability of the controlled two-body problem (a degenerate case of the circular restricted three-body problem) with control taking values in a Euclidean ball is addressed first (cf. Chapter 2). The controllability result readily extends to the three-body problem since the drift vector field of the three-body problem is recurrent. As a result, if the admissible controlled trajectories remain in a fixed compact set, the existence of solutions of the L¹-minimization problem can be obtained by combining the Filippov theorem (see [4, Chapter 10], e.g.) with a suitable convexification procedure (see, e.g., [5]). In finite dimensions, the L¹-minimization problem is well known to generate solutions where the control vanishes on some time intervals. While the Pontryagin maximum principle is a powerful tool for identifying candidate solutions for the L¹-minimization problem, it cannot guarantee that these candidates are at least locally optimal unless sufficient optimality conditions are satisfied. Indeed, establishing (as well as being able to verify) the necessary and sufficient optimality conditions is a prerequisite for solving the L¹-minimization problem.
In this thesis, the crucial idea for establishing such conditions is to construct a parameterized family of extremals such that the reference extremal can be embedded into a field of extremals. Two no-fold conditions for the canonical projection of the parameterized family of extremals are devised. For the scenario of fixed endpoints, these no-fold conditions are sufficient to guarantee that the reference extremal is locally minimizing provided that each switching point is regular (cf. Chapter 3). If the terminal point is not fixed but varies on a smooth submanifold, an extra sufficient condition involving the geometry of the target manifold is established (cf. Chapter 4). Although various numerical methods in the literature, including the ones categorized as direct [6, 7], indirect [5, 8, 9], and hybrid [10], are able to compute optimal solutions, one cannot expect a satellite steered by the precomputed optimal control (or nominal control) to move on the precomputed optimal trajectory (or nominal trajectory) due to unavoidable perturbations and errors. In order to avoid recomputing a new optimal trajectory once a deviation from the nominal trajectory occurs, the neighboring optimal feedback control, which is probably the most important practical application of optimal control theory [11, Chapter 5], is derived by parameterizing the neighboring extremals around the nominal one (cf. Chapter 5). Since the optimal control function is bang-bang, the neighboring optimal control consists of not only the feedback on the thrust direction but also that on the switching times. Moreover, a geometric analysis shows that it is impossible to construct the neighboring optimal control once a conjugate point occurs either between or at switching times.
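The sparsity phenomenon described above (L¹-minimal controls that vanish on whole time intervals) can be illustrated on a toy problem far removed from the thesis's three-body setting: a discretised double integrator with a hypothetical horizon, bound, and target. The L¹-minimal control is found by linear programming and exhibits "coast arcs" where it is exactly zero.

```python
import numpy as np
from scipy.optimize import linprog

N, dt, umax = 20, 0.1, 4.0
A = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity dynamics
B = np.array([dt**2 / 2.0, dt])         # effect of one control step

# Terminal constraint: steer x0 = (0, 0) to xf = (1, 0) in N steps.
# x_N = sum_k A^(N-1-k) B u_k, so build the 2 x N influence matrix.
G = np.zeros((2, N))
for k in range(N):
    G[:, k] = np.linalg.matrix_power(A, N - 1 - k) @ B

# Epigraph LP: variables (u, t), minimise sum(t) with |u_k| <= t_k.
c = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[np.eye(N), -np.eye(N)],    #  u - t <= 0
                 [-np.eye(N), -np.eye(N)]])  # -u - t <= 0
b_ub = np.zeros(2 * N)
A_eq = np.hstack([G, np.zeros((2, N))])
b_eq = np.array([1.0, 0.0])
bounds = [(-umax, umax)] * N + [(0, None)] * N

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
u = res.x[:N]
coast = int(np.sum(np.abs(u) < 1e-6))
print(f"control is zero on {coast} of {N} steps")
```

The minimiser thrusts briefly at the start and end and coasts in between, the discrete analogue of the bang-off-bang structure the thesis analyses.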
APA, Harvard, Vancouver, ISO, and other styles
6

Tseng, Wen-Kung. "Sound minimisation for local active control." Thesis, University of Southampton, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342925.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sadfi, Chérif. "Problèmes d'ordonnancement avec minimisation des encours." Grenoble INPG, 2002. http://www.theses.fr/2002INPG0016.

Full text
Abstract:
In this thesis, we study scheduling problems with work-in-process minimisation. This objective translates into minimising the mean flow time (the mean time products spend in the shop). The flow-time criterion is a performance measure frequently encountered in practice. Minimising work-in-process shortens the product cycle time and thus helps control the date at which the product leaves the shop. We focus on three types of problems: the flow-shop problem, the single-machine problem with a machine-unavailability constraint, and the single-machine problem with job release dates. We begin with a general presentation of scheduling problems and their complexity, and a state of the art on scheduling with work-in-process minimisation. For the flow-shop problem, to better understand the influence of processing times on the results of solution methods, we present a theoretical study of how the objective function and the resulting schedule behave when job processing times vary. Finally, to solve each of the problems considered, we propose several approximate and exact methods. A theoretical and experimental analysis is presented for each proposed method in order to assess its performance.
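A classical textbook result behind mean flow time minimisation (background, not specific to this thesis): on a single machine with all jobs available at time zero, sequencing jobs in Shortest Processing Time (SPT) order minimises total, and hence mean, flow time. A small sketch with made-up processing times, checked against exhaustive search:

```python
from itertools import permutations

def total_flow_time(processing_times):
    """Total flow time (sum of completion times) for jobs run in this order."""
    t, total = 0, 0
    for p in processing_times:
        t += p          # completion time of this job
        total += t      # it waited in the shop for t time units
    return total

def spt_order(jobs):
    """Shortest Processing Time first: optimal for single-machine flow time."""
    return sorted(jobs)

jobs = [4, 1, 3, 2, 5]                     # hypothetical processing times
spt = total_flow_time(spt_order(jobs))
best = min(total_flow_time(list(p)) for p in permutations(jobs))
print(spt, best)  # SPT matches the exhaustive optimum
```

The harder variants the thesis treats (unavailability periods, release dates, flow shops) break this simple rule, which is why approximate and exact methods are needed there.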
APA, Harvard, Vancouver, ISO, and other styles
8

Babahaji Meibodi, Amir. "On-site concrete waste minimisation in Iran." Thesis, Kingston University, 2015. http://eprints.kingston.ac.uk/35583/.

Full text
Abstract:
Construction waste minimisation and management plays an important role in achieving sustainability by giving appropriate consideration to the environment, community, and social conditions when delivering built assets. The construction industry has a significant effect on the environment in terms of resource consumption and waste production. Recent statistics published by the UK Government disclose that the construction and demolition sector generates approximately 32% of the total waste in the UK, which is three times more than the waste generated by all households combined. Concrete has been a leading construction material for more than a century. However, current and ongoing studies in the field of construction waste minimisation and management mostly focus on general waste management or examine one specific method of waste minimisation. While only a limited number of studies have examined on-site concrete waste minimisation, the literature reveals that research in this context is required. This research aimed to propose an on-site concrete waste minimisation framework (OCWMF) for construction projects that could potentially be applicable and achievable in Iran. In this pursuit, six objectives were determined to guide the research: to identify the common methods of OCWM in the UK as a successful pattern in WM; to rank OCWM methods in the UK; to rank OCWM methods in Iran; to identify the differences between common methods of OCWM in the UK and Iran and explore the possible causes of these differences; and to investigate the causes of differences in the favoured methods in the UK and the favoured methods in Iran. Finally, the last objective was to propose a framework for Iran. Both quantitative and qualitative strategies, as well as a combination of the two, were adopted for this research.
Data was collected through face-to-face semi-structured interviews in the UK (N=5), a self-administered postal questionnaire survey in the UK (N=196 distributed, N=73 received), a self-administered postal questionnaire survey in Iran (N=196 distributed, N=110 received), and face-to-face semi-structured interviews in Iran (N=10). Interviewees were project managers, site superintendents, consultants, and engineers selected from the top 100 contractor companies and the top 100 consultant companies in the UK and in Iran. The questionnaire questions were developed from the findings of the literature review and the semi-structured interviews in the UK. Then, to examine the outcomes of the interviews in Iran, three case studies in Iran were observed. Finally, emanating from the study results, an OCWMF was developed and refined using discussions (N=2), a questionnaire (N=6), and interviews (N=7). Key findings that emerged from the study include: legislation and regulations in the UK are the main drivers for construction waste reduction; governmental initiatives to reduce waste, use of pre-fabricated building components, and education and training are the most recommended OCWM methods in the UK in terms of overall worthiness of spending to create savings or minimise waste; governmental incentives to reduce waste, education and training, and purchase management are the most recommended methods in Iran; the main differences between proposed OCWM methods in Iran and in the UK are in the use of pre-fabricated concrete elements (PCEs) and ready-mix concrete; the cost of using PCEs is the main cause of the difference in methods between the countries; and the consultants and contractors involved in the case study were not interested in using PCEs in their projects due to the high costs involved, despite the significant reduction in waste when this method is used. In conclusion, the framework proposed various remedies that could potentially be used to improve OCWM in Iran.
This study has also made some recommendations for the industry, policy makers, and for further research. The content should be of interest to contractors, clients, and engineers.
APA, Harvard, Vancouver, ISO, and other styles
9

Brown, M. D. "Energy minimisation in variational quantum Monte Carlo." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.596975.

Full text
Abstract:
After reviewing previously published techniques, a new algorithm is presented for optimising variable parameters in explicitly correlated many-body trial wavefunctions for use in variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) calculations. The method optimises the parameters with respect to the VMC energy by extending a low-noise VMC implementation of diagonalisation to include parameters which affect the wavefunction to higher than first-order. Similarly to minimising the variance of the local energy by fixed-sampling, accurate results are achieved using a relatively small number of VMC configurations because the optimisation is based on a least-squares fitting procedure. The method is tested by optimising six small examples intended to broadly cover the range of systems and wavefunctions typically treated using VMC and DMC, including atoms, molecules, and extended systems. Least-squares energy minimisation is found to be stable, fast enough to be practical, and capable of achieving lower VMC energies than minimisation of the filtered underweighted variance of the local energy (and the underweighted mean absolute deviation from the median local energy) by fixed-sampling. Least-squares energy minimisation is used to optimise four different wavefunctions for each of the all-electron first row atoms, from lithium to neon: single-determinant Slater-Jastrow wavefunctions with and without backflow transformations, and multi-determinant Slater-Jastrow wavefunctions with and without backflow transformations. The optimisations are more stable and successful than some previous variance minimisations using similar wavefunctions. 
The DMC energies of the energy-optimised wavefunctions for the atoms from boron to neon are significantly lower than previously published results, and, using the multi-determinant Slater-Jastrow wavefunctions with backflow, the calculations recover at least 90% of the correlation energies for lithium, beryllium, boron, carbon, nitrogen and neon, 97% for oxygen, and 98% for fluorine.
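The variational principle underlying such calculations can be illustrated with a toy example (not the thesis's method or systems): for the hydrogen atom with trial wavefunction ψ_a = e^(-ar), the local energy is E_L = -a²/2 + (a-1)/r, and the VMC energy estimate is minimised at the exact ground state a = 1, where the local energy has zero variance. A minimal Metropolis-based sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def vmc_energy(a, n_steps=30000, step=0.5):
    """Estimate <E_L> for the hydrogen-atom trial wavefunction psi = exp(-a r)
    by Metropolis sampling of |psi|^2. Local energy: E_L = -a^2/2 + (a-1)/r."""
    r = np.array([1.0, 0.0, 0.0])
    energies = []
    for i in range(n_steps):
        trial = r + step * rng.uniform(-1, 1, 3)
        # Metropolis acceptance ratio for |psi|^2 = exp(-2 a |r|)
        if rng.random() < np.exp(-2 * a * (np.linalg.norm(trial) - np.linalg.norm(r))):
            r = trial
        if i > n_steps // 10:  # discard equilibration steps
            energies.append(-0.5 * a**2 + (a - 1) / np.linalg.norm(r))
    return np.mean(energies)

# Energy minimisation over the variational parameter: the exact ground
# state (a = 1, E = -0.5 hartree) minimises the estimated energy.
params = [0.6, 0.8, 1.0, 1.2, 1.4]
best_a = min(params, key=vmc_energy)
print(best_a)
```

Real VMC optimisations, like those in the thesis, work with many correlated parameters and noisy gradients, which is precisely what makes stable energy minimisation non-trivial.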
APA, Harvard, Vancouver, ISO, and other styles
10

Schmitz, Marcus Thomas. "Energy minimisation techniques for distributed embedded systems." Thesis, University of Southampton, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.274048.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Mistry, Jatin N. "Leakage power minimisation techniques for embedded processors." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/348805/.

Full text
Abstract:
Leakage power is a growing concern in modern technology nodes. In some current and emerging applications, speed performance is uncritical, but many of these applications rely on untethered power, making energy a primary constraint. Leakage power minimisation is therefore key to maximising energy efficiency for these applications. This thesis proposes two new leakage power minimisation techniques to improve the energy efficiency of embedded processors. The first technique, called sub-clock power gating, can be used to reduce leakage power during the active mode. The technique capitalises on the observation that there can be large combinational idle time within the clock period in low-performance applications, and power gates the logic during that idle time. Sub-clock power gating is the first study into the application of power gating within the clock period, and simulation results on post-layout netlists using a 90nm technology library show 3.5x, 2x and 1.3x improvements in energy efficiency for three test cases (a 16-bit multiplier, an ARM Cortex-M0 and an Event Processor) at a given performance point. To reduce the energy cost associated with moving between the sleep and active modes of operation, a second technique called symmetric virtual rail clamping is proposed. Rather than shutting down completely during sleep mode, the proposed technique uses a pair of NMOS and PMOS transistors at the head and foot of the power-gated logic to lower the supply voltage by 2Vth. This reduces the energy needed to recharge the supply rails and eliminates the signal glitching energy cost during wake-up. Experimental results from a 65nm test chip show that applying symmetric virtual rail clamping in sub-clock power gating improves energy efficiency, extending its applicable clock frequency range by 400x. The physical layout of power gating requires dedicated techniques, and this thesis proposes dRail, a new physical layout technique for power gating.
Unlike the traditional voltage area approach, dRail allows both power gated and non-power gated cells to be placed together in the physical layout to reduce area and routing overheads. Results from a post layout netlist of an ARM Cortex-M0 with sub-clock power gating shows standard cell area and signal routing are improved by 3% and 19% respectively. Sub-clock power gating, symmetric virtual rail clamping and dRail are incorporated into power gating design flows and are compatible with commercial EDA tools and gate libraries.
APA, Harvard, Vancouver, ISO, and other styles
12

David, Julien. "Génération aléatoire d'automates et analyse d'algorithmes de minimisation." Phd thesis, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00587637.

Full text
Abstract:
This thesis deals with the uniform random generation of finite automata and the analysis of the minimisation algorithms that apply to them. Random generation enables an experimental study of the properties of the generated objects and of the algorithmic methods that apply to them. It is also a research tool that facilitates the theoretical study of the average-case behaviour of algorithms. Average-case analysis of algorithms follows the pioneering work of Donald Knuth. The classical approach in algorithm analysis is to study the worst case, which is often not representative of the algorithm's behaviour in practice. From a theoretical standpoint, one defines what happens "often" by fixing a probability distribution on the algorithm's inputs; average-case analysis then consists in estimating the resources used under this distribution. In this framework, I worked on algorithms for the random generation of accessible deterministic automata (complete or not). These algorithms are based on bijective combinatorics, which allows the use of a generic process: Boltzmann samplers. I then implemented these methods in two software packages, REGAL and PREGA. I studied the average-case analysis of automata minimisation algorithms and obtained results showing that the average case of Moore's and Hopcroft's algorithms is much better than their worst case.
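As background, Moore's algorithm (one of the two minimisation algorithms analysed in the thesis) refines the accepting/non-accepting partition of a DFA's states until no class splits further. A short sketch on a hypothetical four-state automaton:

```python
def moore_minimise(n_states, alphabet, delta, accepting):
    """Moore's DFA minimisation: iteratively refine the partition
    {accepting, non-accepting} until it stabilises.
    delta: dict (state, symbol) -> state."""
    # Initial partition: class 0 = non-accepting, class 1 = accepting.
    cls = [1 if q in accepting else 0 for q in range(n_states)]
    while True:
        # Signature = (own class, classes of successors on each symbol).
        sigs, new_cls = {}, []
        for q in range(n_states):
            sig = (cls[q],) + tuple(cls[delta[q, a]] for a in alphabet)
            new_cls.append(sigs.setdefault(sig, len(sigs)))
        if new_cls == cls:
            return len(set(cls)), cls  # class count, class of each state
        cls = new_cls

# Hypothetical 4-state DFA over {a, b}; states 2 and 3 are equivalent,
# and so are states 0 and 1, so the minimal automaton has 2 states.
delta = {(0, 'a'): 1, (0, 'b'): 2, (1, 'a'): 1, (1, 'b'): 3,
         (2, 'a'): 1, (2, 'b'): 2, (3, 'a'): 1, (3, 'b'): 3}
n, classes = moore_minimise(4, 'ab', delta, accepting={2, 3})
print(n, classes)
```

Each refinement round costs O(n) here; the thesis's result concerns how many such rounds are needed on average versus in the worst case.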
APA, Harvard, Vancouver, ISO, and other styles
13

Vlaspouloas, Nikolaos. "Waste minimisation through sustainable magnesium oxide cement products." Thesis, Imperial College London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.512049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Ochoa-Montiel, Marco A. "Investigation into power minimisation algorithms for behavioural synthesis." Thesis, University of Southampton, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.485540.

Full text
Abstract:
The rapid growth of mobile electronics has led power consumption to be considered a critical design priority. This necessitates the development of algorithms and design tools that target power minimisation at all levels of design abstraction. The work presented in this thesis addresses the problem of dynamic power minimisation at the behavioural level. A detailed investigation into power reduction algorithms during behavioural synthesis is presented. The research undertaken has produced two novel power-aware algorithms: time-constrained scheduling and datapath synthesis. The power-aware time-constrained scheduling algorithm selects the clock period and operation throughput such that power consumption can be reduced by scaling the voltage until the slack of at least one of the design operations is zero. It has been shown that by carefully choosing the clock period and operation throughput, it is possible to produce a set of solutions with different power-area tradeoffs. To demonstrate the efficiency of the new scheduling algorithm in terms of solution quality, scheduling results for various benchmark examples have been included and compared with a multiple supply voltage (MSV) algorithm. It has been shown that the proposed algorithm is capable of obtaining schedules with a single supply voltage (SSV) that have identical resource requirements and comparable power consumption to schedules obtained using an MSV algorithm. Using SSV avoids the difficulties of MSV, including the area and power overhead of the level shifters required to transfer data between functional units operating at different voltages. To solve the highly interrelated tasks of behavioural synthesis together with the power minimisation problem, an efficient algorithm for concurrent scheduling, binding, and clock and operation throughput selection has been introduced. This represents the second contribution of this work.
Using a simulated annealing-based optimisation and a compound cost function, the exploration of different power-area tradeoffs is possible. The new scheduling and datapath synthesis algorithms have been incorporated into a power-aware behavioural compiler (PABCOM). Synthesis results for various benchmark examples are included to demonstrate the higher solution quality when compared with a previously reported power-aware algorithm. Furthermore, to demonstrate the applicability of PABCOM to a real-life design, two solutions for the motion vector reconstructor from an MPEG-1 decoder have been implemented using 0.12 µm technology. Power and area values for both solutions have been obtained using the reports generated after logic synthesis with Synplify ASIC and power analysis with PrimePower. The solutions dissipate 31% and 42% less power than if they were operated at the maximum supply voltage of the library components.
APA, Harvard, Vancouver, ISO, and other styles
15

Semling, M. "Minimisation of filling time in resin transfer moulding." Thesis, University of Warwick, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343145.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Ramos, Gabriel de Oliveira. "Regret minimisation and system-efficiency in route choice." Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/178665.

Full text
Abstract:
Multiagent reinforcement learning (MARL) is a challenging task, where self-interested agents concurrently learn a policy that maximise their utilities. Learning here is difficult because agents must adapt to each other, which makes their objective a moving target. As a side effect, no convergence guarantees exist for the general MARL setting. This thesis exploits a particular MARL problem, namely route choice (where selfish drivers aim at choosing routes that minimise their travel costs), to deliver convergence guarantees. We are particularly interested in guaranteeing convergence to two fundamental solution concepts: the user equilibrium (UE, when no agent benefits from unilaterally changing its route) and the system optimum (SO, when average travel time is minimum). The main goal of this thesis is to show that, in the context of route choice, MARL can be guaranteed to converge to the UE as well as to the SO upon certain conditions. Firstly, we introduce a regret-minimising Q-learning algorithm, which we prove that converges to the UE. Our algorithm works by estimating the regret associated with agents’ actions and using such information as reinforcement signal for updating the corresponding Q-values. We also establish a bound on the agents’ regret. We then extend this algorithm to deal with non-local information provided by a navigation service. Using such information, agents can improve their regrets estimates, thus performing empirically better. Finally, in order to mitigate the effects of selfishness, we also present a generalised marginal-cost tolling scheme in which drivers are charged proportional to the cost imposed on others. We then devise a toll-based Q-learning algorithm, which we prove that converges to the SO and that is fairer than existing tolling schemes.
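A much-simplified sketch of this setting (a hypothetical two-route network with made-up cost functions, and a plain cost-estimate learner rather than the thesis's regret-minimising Q-learning with its convergence guarantees): drivers who greedily follow their own cost estimates, with some exploration, drift towards the user equilibrium where both routes cost the same, and their estimated regret shrinks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical congestion costs: c1 = 1 + f1, c2 = 1.5 + f2/2, where
# f1, f2 are the fractions of drivers on each route. Equal costs
# (the user equilibrium) put 2/3 of the drivers on route 1.
N, episodes, alpha, eps = 300, 1000, 0.1, 0.1
Q = rng.uniform(1.0, 2.0, size=(N, 2))      # per-driver cost estimates
fractions, regrets = [], []
for ep in range(episodes):
    explore = rng.random(N) < eps           # epsilon-greedy route choice
    choice = np.where(explore, rng.integers(0, 2, N), Q.argmin(axis=1))
    f1 = np.mean(choice == 0)
    cost = np.where(choice == 0, 1.0 + f1, 1.5 + (1.0 - f1) / 2.0)
    # Estimated regret: observed cost minus the driver's best estimate.
    regrets.append(np.mean(cost - Q.min(axis=1)))
    # Update the chosen route's cost estimate towards the observed cost.
    Q[np.arange(N), choice] += alpha * (cost - Q[np.arange(N), choice])
    fractions.append(f1)

avg_f1 = np.mean(fractions[-300:])
print(f"fraction on route 1: {avg_f1:.2f} (user equilibrium: {2/3:.2f})")
```

The thesis goes further: it feeds the regret estimate itself back as the reinforcement signal, bounds it, and adds marginal-cost tolls to pull the population from the user equilibrium towards the system optimum.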
APA, Harvard, Vancouver, ISO, and other styles
17

Liu, Zhen. "Building Information Modelling (BIM) aided waste minimisation framework." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/14971.

Full text
Abstract:
Building design can have a major impact on sustainability through material efficiency and construction waste minimisation (CWM). The construction industry consumes over 420 million tonnes of material resources every year and generates 120 million tonnes of waste containing approximately 13 million tonnes of unused materials. The current and ongoing field of CWM research is focused on separate project stages, with an overwhelming endeavour to manage on-site waste. Although design stages are vital to achieving progress towards CWM, there are currently insufficient tools for CWM. In recent years, Building Information Modelling (BIM) has been adopted to improve sustainable building design, such as energy efficiency and carbon reduction. Very little has been achieved in this field of research to evaluate the use of BIM to aid CWM during design. However, recent literature emphasises a need to carry out further research in this context. This research aims to investigate the use of BIM as a platform to help with CWM during design stages by developing and validating a BIM-aided CWM (BaW) Framework. A mixed research method, known as triangulation, was adopted as the research design method. Research data was collected through a set of data collection methods, i.e. a self-administered postal questionnaire (N=100 distributed, n=50 completed) and semi-structured follow-up interviews (n=11) with architects from the top 100 UK architectural companies. Descriptive statistics and constant comparative methods were used for data analysis. The BaW Framework was developed based on the findings of the literature review, questionnaire survey and interviews. The BaW Framework validation process included a validation questionnaire (N=6) and validation interviews (N=6) with architects. Key research findings revealed that: BIM has the potential to aid CWM during design; the Concept and Design Development stages have major potential in helping waste reduction through BIM; BIM-enhanced practices (i.e.
clash detection, detailing, visualisation and simulation, and improved communication and collaboration) have impacts on waste reduction; BIM has the most potential to address waste causes (e.g. ineffective coordination and communication, and design changes); and the BaW Framework has the potential to enable improvements towards waste minimisation throughout all design stages. Participating architects recommended that the adoption of the BaW Framework could enrich both CWM and BIM practices, and most importantly, would enhance waste reduction performance in design. The content should be suitable for project stakeholders, architects in particular, when dealing with construction waste and BIM during design.
APA, Harvard, Vancouver, ISO, and other styles
18

Lotze, Sven. "Ion backdrift minimisation in a GEM-based TPC readout." [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=979865158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Weibel, Thomas. "Modèles de minimisation d'énergies discrètes pour la cartographie cystoscopique." PhD thesis, Université de Lorraine, 2013. http://tel.archives-ouvertes.fr/tel-00866824.

Full text
Abstract:
The objective of this thesis is to facilitate the diagnosis of bladder cancer. During a cystoscopy, an endoscope is introduced into the bladder to explore the internal wall of the organ, which is visualised on a screen. However, the instrument's small field of view complicates the diagnosis and follow-up of lesions. This thesis presents algorithms for building two- and three-dimensional maps with a large field of view from cystoscopic video sequences. Using recent advances in discrete energy minimisation, we propose cost functions, independent of the geometric transformations involved, to robustly and accurately register pairs of images with little spatial overlap. These transformations are required to build maps when image trajectories cross or overlap. Our algorithms automatically detect such trajectories and perform a global correction of the image positions in the map. Finally, an energy minimisation algorithm compensates for the small remaining texture discontinuities and attenuates the strong illumination variations of the scene. The textured maps are thus built using only the best information (colours and textures) that can be extracted from the redundant data of the video sequences. The algorithms are evaluated quantitatively and qualitatively on realistic phantoms and on clinical data. These tests highlight the robustness and accuracy of our algorithms. The visual coherence of the resulting maps exceeds that of the bladder-mapping methods reported in the literature.
APA, Harvard, Vancouver, ISO, and other styles
20

Wang, Y. "Wastewater minimisation and the design of wastewater treatment systems." Thesis, University of Manchester, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.488391.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Rey, David. "Minimisation des conflits aériens par des modulations de vitesse." PhD thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00819879.

Full text
Abstract:
To meet future air transport demand, it is necessary to increase airspace capacity. Air traffic controllers, who play a central role in traffic management, must deal daily with conflict situations (conflicts) in which two flights risk violating the separation standards in force unless their trajectories are modified. Detecting and resolving potential conflicts adds to controller workload and can lead controllers to direct flights towards less dense areas of the airspace, in turn inducing delays for the flights. The airspace capacity problem can therefore be addressed by regulating traffic flows so as to reduce the number of air conflicts. The objective of this thesis is to develop a methodology for minimising the risk of air conflicts by slightly modifying aircraft speeds. This approach is mainly motivated by the conclusions of the ERASMUS project on subliminal speed regulation. This type of regulation is designed not to disturb air traffic controllers in their task. Using small speed modulations, imperceptible to controllers, flight trajectories can be modified so as to minimise the total number of conflicts and thus ease traffic flow through the air network. The method chosen to implement this type of regulation is constrained optimisation. In this thesis, we develop a deterministic optimisation model to handle two-aircraft conflicts. This model is then adapted to solve large traffic instances by formulating it as an Integer Linear Program. 
To reproduce realistic traffic conditions, we introduce a perturbation on flight speeds, intended to represent the impact of trajectory-prediction uncertainty on air traffic management. To validate our approach, we use a simulation tool capable of replaying entire days of traffic over European airspace. The main results of this work demonstrate the performance of the conflict detection and resolution model and highlight the robustness of the formulation against trajectory-prediction uncertainty. Finally, the impact of our approach is evaluated through various indicators specific to air traffic management, validating the developed methodology.
APA, Harvard, Vancouver, ISO, and other styles
22

Houghton, Claire. "Development of a methodology for batch process waste minimisation." Thesis, University of Bath, 1998. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.760712.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Al-Shammari, Zaid Shakir Kadhim. "Power minimisation techniques for space-based wireless sensor networks." Thesis, University of Leicester, 2017. http://hdl.handle.net/2381/40862.

Full text
Abstract:
Wireless sensor networks (WSNs) have received much attention in recent years. Such networks comprise spatially distributed sensors to monitor various parameters. Space-based wireless sensor networks (SB-WSNs) consisting of tiny, low power, inexpensive satellites flying in a fleet with a close formation can offer a wide range of applications. Since communication is typically the major factor in power consumption, the activity of the transceiver should be reduced to increase the nodes’ lifetime. To understand the network power behaviour, a space-based wireless sensor network consisting of 40 nodes was designed as an experimental testbed. Several tests were undertaken to investigate the nodes’ lifetime and the packet loss with various sleep/wake up methods. The study found that the nodes with shorter paths to the sink benefit from improvement in their lifetime. In contrast, the other nodes with routes including many hops obtain less enhancement in their lifetime and high packet loss. To further reduce the power consumption, a novel sleep/wake up technique where the nodes have different sleep periods based on their locations has been proposed and tested. This modification enhanced the network operation time by 24% and increased the total delivered packets by 51% compared to when the nodes stay active all their duty cycle. Another concern was uneven power consumption due to the extra packet-relaying duties imposed on central nodes. This was addressed first by altering the connectivity of the network, and then by adding extra nodes dedicated to this task. The proposed sleep/wake up scheme was extended further through the adoption of transmission power control (TPC) and the introduction of multiple sinks. Both mechanisms were used to decrease the power budget required to deliver a packet from source to destination by reducing the number of hops in the paths. This improves the nodes’ lifetime and the total amount of collected data. 
Findings in this research have direct relevance to the use of commercial off the shelf (COTS) nodes in a SB-WSN and will provide an impetus for accurate estimation of the performance and design of such a network.
APA, Harvard, Vancouver, ISO, and other styles
24

Al-Zobaidi, Zaid. "Coherent minimisation : aggressive optimisation for symbolic finite state transducers." Thesis, University of Birmingham, 2014. http://etheses.bham.ac.uk//id/eprint/5012/.

Full text
Abstract:
Automata minimisation is considered one of the key techniques for reducing the cost of computation. Most of the conventional minimisation techniques are based on the notion of bisimulation to determine equivalent states which can be identified. Although minimisation of automata has been an established topic of research, the optimisation of automata operating in constrained environments is a novel idea which we examine in this dissertation, along with a motivating, non-trivial application to efficient tamper-proof hardware compilation. This thesis introduces a new notion of equivalence, coherent equivalence, between states of a transducer. It is weaker than the usual notions of bisimulation, so it leads to more states being identified as equivalent. This new equivalence relation can be utilised to aggressively optimise transducers by reducing the number of states, a technique which we call coherent minimisation. We note that coherent minimisation always outperforms the conventional minimisation algorithms. The main result of this thesis is that coherent minimisation is sound and compositional. In order to support more realistic applications to hardware synthesis, we also introduce a refined model of transducers, which we call symbolic finite state transducers, that can model systems which involve very large or infinite data-types.
APA, Harvard, Vancouver, ISO, and other styles
25

Nicolici, N. "Power minimisation techniques for testing low power VLSI circuits." Thesis, University of Southampton, 2000. https://eprints.soton.ac.uk/254107/.

Full text
Abstract:
Testing low power very large scale integrated (VLSI) circuits has recently become an area of concern due to yield and reliability problems. This dissertation focuses on minimising power dissipation during test application at the logic level and the register-transfer level (RTL) of abstraction of the VLSI design flow. The first part of this dissertation addresses power minimisation techniques in scan sequential circuits at the logic level of abstraction. A new best primary input change (BPIC) technique based on a novel test application strategy has been proposed. The technique increases the correlation between successive states during shifting in test vectors and shifting out test responses by changing the primary inputs such that the smallest number of transitions is achieved. The new technique is test set dependent and is applicable to small to medium sized full and partial scan sequential circuits. Since the proposed test application strategy depends only on controlling primary input change time, power is minimised with no penalty in test area, performance, test efficiency, test application time or volume of test data. Furthermore, it is shown that partial scan provides not only the commonly known benefits, such as less test area overhead and test application time, but also less power dissipation during test application when compared to full scan. To achieve power savings in large scan sequential circuits, a new test set independent multiple scan chain-based technique, which employs a new design for test (DFT) architecture and a novel test application strategy, is presented. The technique has been validated using benchmark examples, and it has been shown that power is minimised with low computational time, low overhead in test area and volume of test data, and with no penalty in test application time, test efficiency, or performance. 
The second part of this dissertation addresses power minimisation techniques for testing low power VLSI circuits using built-in self-test (BIST) at RTL. First, it is important to overcome the shortcomings associated with traditional BIST methodologies. It is shown how a new BIST methodology for RTL data paths using a novel concept called test compatibility classes (TCC) overcomes high test application time, BIST area overhead, performance degradation, volume of test data, fault-escape probability, and complexity of the testable design space exploration. Second, power minimisation in BIST RTL data paths is achieved by analysing the effect of test synthesis and test scheduling on power dissipation during test application and by employing new power conscious test synthesis and test scheduling algorithms. Third, the new BIST methodology has been validated using benchmark examples. Further, it is shown that when the proposed power conscious test synthesis and test scheduling is combined with novel test compatibility classes, simultaneous reduction in test application time and power dissipation is achieved with low overhead in computational time.
APA, Harvard, Vancouver, ISO, and other styles
26

Nessah, Rabia. "Ordonnancement de la production pour la minimisation des encours." Troyes, 2005. http://www.theses.fr/2005TROY0015.

Full text
Abstract:
Cette thèse est consacrée aux problèmes d’ordonnancement sur machines parallèles identiques. Les travaux développés portent sur l’un des critères les plus difficiles de la théorie de l’ordonnancement, à savoir la minimisation du temps total pondéré de séjour. Dans un premier temps, nous avons considéré le problème où tous les poids des tâches sont identiques. Ensuite, nous nous sommes intéressés au cas où les poids des tâches sont quelconques. Pour ces deux critères, nous avons considéré des contraintes assez fréquentes en entreprise à savoir les temps de changement entre les tâches et les dates d’arrivée différentes des tâches. Nous avons développé des méthodes exactes de type Branch-and-Bound pour la minimisation du temps total de séjour, avec des disponibilités des tâches et avec ou sans temps de changement, sur machines parallèles, et également la minimisation du temps total pondéré de séjour avec des disponibilités des tâches sur une seule machine. Nous avons démontré pour chaque problème, des propriétés de dominance, des bornes inférieures et supérieures. Les tests numériques ont montré l’efficacité de nos algorithmes. Pour le problème de la minimisation de la somme pondérée du temps total de séjour avec des disponibilités des tâches, sur des machines parallèles, nous avons démontré de nouvelles bornes inférieures. Nous avons aussi construit une méthode approchée dont l’efficacité a été établie par des expérimentations numériques
This thesis is devoted to identical parallel machine scheduling problems. It concerns one of the most difficult criteria of scheduling theory, namely the minimization of the total weighted completion time. First, we consider the problem where all the job weights are identical. Then we deal with the case where the job weights are arbitrary. For these two cases we consider constraints that are quite frequent in industry, such as setup times and job release dates. We develop exact Branch-and-Bound methods for minimizing the total completion time, with different release dates and with or without setup times, on parallel machines, and also the total weighted completion time with different release dates on a single machine. For each problem, we establish dominance properties and lower and upper bounds. Numerical tests show the efficiency of our algorithms. For the minimization of the total weighted completion time with different release dates on parallel machines, we establish new lower bounds. We also introduce an approximate method whose efficiency is established by numerical experiments.
APA, Harvard, Vancouver, ISO, and other styles
27

Echague, Eugénio. "Optimisation globale sans dérivées par minimisation de modèles simplifiés." Versailles-St Quentin en Yvelines, 2013. http://www.theses.fr/2013VERS0016.

Full text
Abstract:
Dans cette thèse, on étudie deux méthodes d’optimisation globale sans dérivées : la méthode des moments et les méthodes de surface de réponse. Concernant la méthode des moments, nous nous sommes intéressés à ses aspects numériques et à l'un de ses aspects théoriques : l’approximation à une constante près d'une fonction par des polynômes somme de carrés. Elle a aussi été implémentée dans les sous-routines d'une méthode sans dérivées et testée avec succès sur un problème de calibration de moteur. Concernant les surface de réponse, nous construisons un modèle basée sur la technique de Sparse Grid qui permet d’obtenir une approximation précise avec un nombre faible d'évaluations de la fonction. Cette surface est ensuite localement raffinée autour des points les plus prometteurs. La performance de cette méthode, nommée GOSgrid, a été testée sur différentes fonctions et sur un cas réel. Elle surpasse les performances d'autres méthodes existantes d’optimisation globale en termes de coût
In this thesis, we study two global derivative-free optimization methods: the method of moments and surrogate methods. The method of moments is implemented as the solver of the sub-problems in a derivative-free optimization method and tested successfully on an engine calibration problem. We also explore its dual approach, and we study the approximation of a function, up to a constant, by a sum of squares of polynomials. Concerning surrogate methods, we construct a new approximation using the Sparse Grid interpolation technique, which builds an accurate model from a limited number of function evaluations. This model is then locally refined near the points with low function values. The numerical performance of this new method, called GOSgrid, is tested on classical optimization test functions and finally on an inverse parameter identification problem, showing good results compared to some of the other existing methods in terms of the number of function evaluations.
APA, Harvard, Vancouver, ISO, and other styles
28

Papa, Guillaume. "Méthode d'échantillonnage appliqué à la minimisation du risque empirique." Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0005.

Full text
Abstract:
Dans ce manuscrit, nous présentons et étudions des stratégies d’échantillonnage appliquées, à problèmes liés à l’apprentissage statistique. L’objectif est de traiter les problèmes qui surviennent généralement dans un contexte de données volumineuses lorsque le nombre d’observations et leur dimensionnalité contraignent le processus d’apprentissage. Nous proposons donc d’aborder ce problème en utilisant deux stratégies d’échantillonnage: - Accélérer le processus d’apprentissage en échantillonnant les observations les plus utiles. - Simplifier le problème en écartant certaines observations pour réduire la complexité et la taille du problème. Pour commencer, nous nous plaçons dans le contexte de la classification binaire, lorsque les observations utilisées pour former un classificateur sont issues d’un schéma d’échantillonnage/sondage et présentent une structure de dépendance complexe pour lequel nous établissons des bornes de généralisation. Ensuite nous étudions le problème d’implémentation de la descente de gradient stochastique quand les observations sont tirées non uniformément. Nous concluons cette thèse par l’étude du problème de reconstruction de graphes pour lequel nous établissons de nouveau résultat théoriques
In this manuscript, we present and study sampling strategies applied to problems in statistical learning. The goal is to deal with the problems that usually arise in a large-data context, when the number of observations and their dimensionality constrain the learning process. We therefore propose to address this problem using two sampling strategies: - Accelerate the learning process by sampling the most helpful observations. - Simplify the problem by discarding some observations to reduce its complexity and size. We first consider the context of binary classification, when the observations used to train a classifier come from a sampling/survey scheme and present a complex dependency structure, for which we establish generalization bounds. Then we study the implementation problem of stochastic gradient descent when observations are drawn non-uniformly. We conclude this thesis by studying the problem of graph reconstruction, for which we establish new theoretical results.
APA, Harvard, Vancouver, ISO, and other styles
29

Boiger, Wolfgang Josef. "Stabilised finite element approximation for degenerate convex minimisation problems." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://dx.doi.org/10.18452/16790.

Full text
Abstract:
Infimalfolgen nichtkonvexer Variationsprobleme haben aufgrund feiner Oszillationen häufig keinen starken Grenzwert in Sobolevräumen. Diese Oszillationen haben eine physikalische Bedeutung; Finite-Element-Approximationen können sie jedoch im Allgemeinen nicht auflösen. Relaxationsmethoden ersetzen die nichtkonvexe Energie durch ihre (semi)konvexe Hülle. Das entstehende makroskopische Modell ist degeneriert: es ist nicht strikt konvex und hat eventuell mehrere Minimalstellen. Die fehlende Kontrolle der primalen Variablen führt zu Schwierigkeiten bei der a priori und a posteriori Fehlerschätzung, wie der Zuverlässigkeits- Effizienz-Lücke und fehlender starker Konvergenz. Zur Überwindung dieser Schwierigkeiten erweitern Stabilisierungstechniken die relaxierte Energie um einen diskreten, positiv definiten Term. Bartels et al. (IFB, 2004) wenden Stabilisierung auf zweidimensionale Probleme an und beweisen dabei starke Konvergenz der Gradienten. Dieses Ergebnis ist auf glatte Lösungen und quasi-uniforme Netze beschränkt, was adaptive Netzverfeinerungen ausschließt. Die vorliegende Arbeit behandelt einen modifizierten Stabilisierungsterm und beweist auf unstrukturierten Netzen sowohl Konvergenz der Spannungstensoren, als auch starke Konvergenz der Gradienten für glatte Lösungen. Ferner wird der sogenannte Fluss-Fehlerschätzer hergeleitet und dessen Zuverlässigkeit und Effizienz gezeigt. Für Interface-Probleme mit stückweise glatter Lösung wird eine Verfeinerung des Fehlerschätzers entwickelt, die den Fehler der primalen Variablen und ihres Gradienten beschränkt und so starke Konvergenz der Gradienten sichert. Der verfeinerte Fehlerschätzer konvergiert schneller als der Fluss- Fehlerschätzer, und verringert so die Zuverlässigkeits-Effizienz-Lücke. Numerische Experimente mit fünf Benchmark-Tests der Mikrostruktursimulation und Topologieoptimierung ergänzen und bestätigen die theoretischen Ergebnisse.
Infimising sequences of nonconvex variational problems often do not converge strongly in Sobolev spaces due to fine oscillations. These oscillations are physically meaningful; finite element approximations, however, fail to resolve them in general. Relaxation methods replace the nonconvex energy with its (semi)convex hull. This leads to a macroscopic model which is degenerate in the sense that it is not strictly convex and possibly admits multiple minimisers. The lack of control on the primal variable leads to difficulties in the a priori and a posteriori finite element error analysis, such as the reliability-efficiency gap and no strong convergence. To overcome these difficulties, stabilisation techniques add a discrete positive definite term to the relaxed energy. Bartels et al. (IFB, 2004) apply stabilisation to two-dimensional problems and thereby prove strong convergence of gradients. This result is restricted to smooth solutions and quasi-uniform meshes, which prohibit adaptive mesh refinements. This thesis concerns a modified stabilisation term and proves convergence of the stress and, for smooth solutions, strong convergence of gradients, even on unstructured meshes. Furthermore, the thesis derives the so-called flux error estimator and proves its reliability and efficiency. For interface problems with piecewise smooth solutions, a refined version of this error estimator is developed, which provides control of the error of the primal variable and its gradient and thus yields strong convergence of gradients. The refined error estimator converges faster than the flux error estimator and therefore narrows the reliability-efficiency gap. Numerical experiments with five benchmark examples from computational microstructure and topology optimisation complement and confirm the theoretical results.
APA, Harvard, Vancouver, ISO, and other styles
30

Peters, Michael D. "Improving the environmental performance of small and medium sized enterprises : an assessment of attitudes and voluntary action in the UK." Thesis, University of East Anglia, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365135.

Full text
Abstract:
The environmental performance of small and medium sized enterprises (SMEs) was chosen to be the topic of study for this thesis. While this policy-relevant research area has gained increased coverage in the literature over the last decade, it has still proved difficult to generate empirical data and information of sufficient quality and quantity. A major aspect of environmental performance involves the management of waste, and waste minimisation was of particular interest to this programme of research. Another area of special interest for this thesis was the extent to which voluntary policy tools (voluntary initiatives, or VIs) could be utilised at the local level to engage with SMEs on the issue of improved environmental performance. The early desk study research revealed the major barriers preventing more environmental action by SMEs to date. The barriers included low-priority attachment to environmental issues, a lack of time/manpower and limited understanding. It also revealed that while VIs have proved successful at the 'macro' level there is little evidence or experience to draw on for their design or implementation at the local scale. The programme of empirical research involved an original analysis of a recent nation-wide survey into the environmental attitudes of UK manufacturing businesses; the completion of an environmental attitudes survey with approximately 60 SMEs situated in East Anglia; observation of a waste-oriented local authority environment project involving small businesses and a similar project with a rural village community in Suffolk; and finally the establishment of two voluntary waste minimisation initiatives on industrial estates in Norfolk and Suffolk. The national survey analysis identified smaller sites as consistently less proactive in most areas of environmental thinking and action. 
This finding was not strongly confirmed by the survey of East Anglian SMEs which showed that a small business does not have to be a member of an environmental group/initiative to have already adopted certain sound environmental practices, even if primarily these measures were geared towards cost savings/efficiency gains. The industrial estates projects have proved to be particularly useful, demonstrating the potential benefits of this type of voluntary action which capitalises on the close geographical proximity of a number of SMEs sharing common problems. The benefits included a reduction of waste generation, the development of more environmentally responsive business cultures and improved relations with the local authority. The village community project that brought together all elements of the local society from the businesses to the school, in a rural setting, seems to be a sensible way to focus minds on the reduction of waste and consequent benefits.
APA, Harvard, Vancouver, ISO, and other styles
31

Zvolinschi, Anita. "On exergy analysis and entropy production minimisation in industrial ecology." Doctoral thesis, Norwegian University of Science and Technology, Department of Chemistry, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1591.

Full text
Abstract:

The objective of this thesis is to improve the basis for applying industrial ecology to the evaluation of material and energy resource use and transformation in industrial systems. The underlying hypothesis was that when the second law of thermodynamics is applied it improves the basis for using industrial ecology for the evaluation of the use and transformation of resources in industrial systems. Exergy analysis and entropy production calculation and minimisation of industrial processes are used as methods for analysis.

APA, Harvard, Vancouver, ISO, and other styles
32

Ooi, Hoe Seng. "Position sensorless switched reluctance motor drive with torque ripple minimisation." Thesis, Imperial College London, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.398085.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

McPherson, Gladys. "The role of minimisation in treatment allocation for clinical trials." Thesis, University of Aberdeen, 2011. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=167718.

Full text
Abstract:
Simple randomisation is the easiest method for allocating participants to treatment groups in clinical trials. In the long run it balances all features of participants across the groups but may not be suitable for small to medium sized trials. If important prognostic factors are identified at the design stage then stratified randomisation or minimisation can help to balance these features. Aim: To examine the relative benefits of different randomisation algorithms and determine guidelines for which randomisation design is advisable for a given trial. For a trial of known size with a specified number of important prognostic factors, and levels within these, it will be possible to identify the most appropriate randomisation technique for that trial. Methods: A review of methods of randomisation was first conducted, followed by a survey of trialists into the current use of randomisation methods in clinical trials. Using simulations, the following comparisons were made: simple randomisation compared with minimisation; whether to stratify or minimise by centre; and predictability versus balance when using minimisation. The recommendations resulting from the simulations were used to design a prototype generic randomisation program. Results: The review and the survey both highlighted the probability of imbalance using simple randomisation. Minimisation was seen to be superior in producing balanced groups but the method was criticised for being more complex and unpredictable. The simulations showed that several factors influence imbalance, including the size of the trial, the number of prognostic factors and the number of categories within these. Optimal algorithms for maintaining balance while reducing predictability were presented for varying trial parameters. Conclusions: Minimisation is a suitable method of randomisation for most clinical trials. 
Several strategies can be employed to address the conflicting issues of predictability and imbalance without resorting to complex mathematical algorithms.
APA, Harvard, Vancouver, ISO, and other styles
34

Alatorre-Frenk, Claudio. "Cost minimisation in micro-hydro systems using pumps-as-turbines." Thesis, University of Warwick, 1994. http://wrap.warwick.ac.uk/36099/.

Full text
Abstract:
The use of reverse-running pumps as turbines (PATs) is a promising technology for small-scale hydropower. This thesis reviews the published knowledge about PATs and deals with some areas of uncertainty that have hampered their dissemination, especially in 'developing' countries. Two options for accommodating seasonal flow variations using PATs are examined and compared with using conventional turbines (that have flow control devices). This has been done using financial parameters, and it is shown that, under typical conditions, PATs are more economic. The various published techniques for predicting the turbine-mode performance of a pump without expensive tests are reviewed; a new heuristic one is developed, and it is shown (using the same financial parameters and a large set of test data in both modes of operation) that the cost of prediction inaccuracy is negligible under typical circumstances. The economics of different ways of accommodating water-hammer are explored. Finally, the results of laboratory tests on a PAT are presented, including cavitation tests, and for the latter a theoretical framework is presented.
APA, Harvard, Vancouver, ISO, and other styles
35

Ofori-Darko, Francis Kwame. "Crack frequency and the minimisation of reinforcement corrosion in concrete." Thesis, London South Bank University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310062.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Arda, Dawn Rungnada. "The sharkskin extrusion instability and its minimisation in polyethylene processing." Thesis, University of Cambridge, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.620084.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Millard, D. M. "Environmental learning and SMEs : the role of waste minimisation projects." Thesis, Manchester Metropolitan University, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Cutforth, Claire Louise. "Understanding waste minimisation practices at the individual and household level." Thesis, Cardiff University, 2014. http://orca.cf.ac.uk/69484/.

Full text
Abstract:
Over recent years, the issue of how to manage waste sustainably has intensified for both researchers and policy makers. From a policy perspective, the reason for this intensification can be traced to European legislation and its transposition into UK policy. The Welsh Government in particular has set challenging statutory targets for Local Authorities, including targets for increases in recycling and composting as well as for waste reduction and reuse. From a research perspective there has been dissatisfaction with behavioural models and a willingness to explore alternative social science thinking (such as leading approaches to practice). Despite policy interest in sustainable waste practices, there remains little research which focuses specifically on waste minimisation at the individual or household level. The research that does exist focuses on pro-environmental or recycling behaviour, and tends to focus upon values, intention and behavioural change, rather than on what actual practices occur, and for what reasons. This research focuses on what practices take place in order to access a more complex range of reasons why such practices take place. The methodology adopts a qualitative approach to uncovering practices in a variety of contexts, and discovers a number of key insights which underpin waste minimisation practice. This thesis demonstrates that waste minimisation performances take place, but often do so ‘unwittingly’. Coupled to this, many witting or unwitting waste minimisation actions occur for reasons other than concern for the environment. Furthermore, this research suggests that practices (and their motivations) vary depending upon the context in which they occur. In general, three key themes were found to be significant in influencing the take-up and transfer of practice: cost, convenience, and community.
As a waste practitioner, the researcher is able to engage with these themes in order to suggest future directions for waste minimisation policy as well as research.
APA, Harvard, Vancouver, ISO, and other styles
39

Benaim, Abderrahim. "Unicité pour un problème de minimisation à valeurs dans Sn." Metz, 1996. http://www.theses.fr/1996METZ018S.

Full text
Abstract:
On étudie l'unicité pour le problème de minimisation de la fonctionnelle qui intervient dans le modèle de Landau-Lifshitz pour le ferromagnétisme. Sous certaines hypothèses sur le champ extérieur, on caractérise tous les cas de non unicité. Cette caractérisation prend une forme simple et explicite pour les applications a images dans la sphère de dimension deux. On applique le résultat général pour certains problèmes concrets, notamment en présence de la symétrie axiale. Dans la dernière partie de cette thèse, on applique nos techniques pour l'étude du problème avec condition au bord
We study the uniqueness problem for minimizers of the Landau-Lifshitz model in ferromagnetism. Under certain assumptions on the external field, we characterize all the cases where uniqueness fails to hold. This characterization takes a very explicit and simple form when the image of the maps lies in the two-dimensional sphere. We apply the general result to some concrete problems, notably in the presence of axial symmetry. In the last part of the thesis we apply our techniques to the study of a problem with boundary data.
APA, Harvard, Vancouver, ISO, and other styles
40

Moisan, Thierry. "Minimisation des perturbations et parallélisation pour la planification et l'ordonnancement." Doctoral thesis, Université Laval, 2016. http://hdl.handle.net/20.500.11794/26631.

Full text
Abstract:
Nous étudions dans cette thèse deux approches réduisant le temps de traitement nécessaire pour résoudre des problèmes de planification et d'ordonnancement dans un contexte de programmation par contraintes. Nous avons expérimenté avec plusieurs milliers de processeurs afin de résoudre le problème de planification et d'ordonnancement des opérations de rabotage du bois d'oeuvre. Ces problèmes sont d'une grande importance pour les entreprises, car ils permettent de mieux gérer leur production et d'économiser des coûts reliés à leurs opérations. La première approche consiste à effectuer une parallélisation de l'algorithme de résolution du problème. Nous proposons une nouvelle technique de parallélisation (nommée PDS) des stratégies de recherche atteignant quatre buts : le respect de l'ordre de visite des noeuds de l'arbre de recherche tel que défini par l'algorithme séquentiel, l'équilibre de la charge de travail entre les processeurs, la robustesse aux défaillances matérielles et l'absence de communications entre les processeurs durant le traitement. Nous appliquons cette technique pour paralléliser la stratégie de recherche Limited Discrepancy-based Search (LDS) pour ainsi obtenir Parallel Limited Discrepancy-Based Search (PLDS). Par la suite, nous démontrons qu'il est possible de généraliser cette technique en l'appliquant à deux autres stratégies de recherche : Depth-Bounded discrepancy Search (DDS) et Depth-First Search (DFS). Nous obtenons, respectivement, les stratégies Parallel Discrepancy-based Search (PDDS) et Parallel Depth-First Search (PDFS). Les algorithmes parallèles ainsi obtenus créent un partage intrinsèque de la charge de travail : la différence de charge de travail entre les processeurs est bornée lorsqu'une branche de l'arbre de recherche est coupée. En utilisant des jeux de données de partenaires industriels, nous avons pu améliorer les meilleures solutions connues. 
Avec la deuxième approche, nous avons élaboré une méthode pour minimiser les changements effectués à un plan de production existant lorsque de nouvelles informations, telles que des commandes additionnelles, sont prises en compte. Replanifier entièrement les activités de production peut mener à l'obtention d'un plan de production très différent qui mène à des coûts additionnels et des pertes de temps pour les entreprises. Nous étudions les perturbations causées par la replanification à l'aide de trois métriques de distances entre deux plans de production : la distance de Hamming, la distance d'édition et la distance de Damerau-Levenshtein. Nous proposons trois modèles mathématiques permettant de minimiser ces perturbations en incluant chacune de ces métriques comme fonction objectif au moment de la replanification. Nous appliquons cette approche au problème de planification et ordonnancement des opérations de finition du bois d'oeuvre et nous démontrons que cette approche est plus rapide qu'une replanification à l'aide du modèle d'origine.
We study in this thesis two approaches that reduce the processing time needed to solve planning and scheduling problems in a constraint programming context. We experimented with several thousand processors on the planning and scheduling problem of wood-finish operations. These problems are of great importance for businesses, because solving them allows production to be managed better and operating costs to be reduced. The first approach is a parallelisation of the problem-solving algorithm. We propose a new parallelisation technique (named PDS) for search strategies that achieves four goals: preservation of the node visit order in the search tree as defined by the sequential algorithm, balancing of the workload between the processors, robustness against hardware failures, and absence of communication between processors during processing. We apply this technique to parallelise the Limited Discrepancy-based Search (LDS) strategy to obtain Parallel Limited Discrepancy-Based Search (PLDS). We then show that this technique can be generalised by parallelising two other search strategies: Depth-Bounded discrepancy Search (DDS) and Depth-First Search (DFS). We obtain, respectively, Parallel Discrepancy-based Search (PDDS) and Parallel Depth-First Search (PDFS). The algorithms obtained this way create an intrinsic workload balance: the imbalance of the workload among the processors is bounded when a branch of the search tree is pruned. Using datasets from industrial partners, we were able to improve the best known solutions. With the second approach, we developed a method to minimise the changes made to an existing production plan when new information, such as additional orders, is taken into account. Completely re-planning the production activities can lead to a very different production plan, which creates additional costs and loss of time for businesses.
We study the perturbations caused by re-planning with three distance metrics: the Hamming distance, the edit distance, and the Damerau-Levenshtein distance. We propose three mathematical models that minimise these perturbations by including each of these metrics as the objective function when re-planning. We apply this approach to the planning and scheduling problem of wood-finish operations and we demonstrate that this approach outperforms the use of the original model.
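The three stability metrics named in this abstract are standard sequence distances; a minimal sketch follows, treating a production plan as a sequence of job identifiers. The plan encoding is an assumption for illustration (the thesis embeds these metrics inside the optimisation models rather than computing them after the fact), and the Damerau-Levenshtein implementation shown is the common "optimal string alignment" restriction.

```python
def hamming(plan_a, plan_b):
    """Number of positions at which two equal-length plans differ."""
    if len(plan_a) != len(plan_b):
        raise ValueError("Hamming distance requires equal-length plans")
    return sum(a != b for a, b in zip(plan_a, plan_b))

def damerau_levenshtein(a, b):
    """Minimum number of edits (insertion, deletion, substitution,
    adjacent transposition) turning plan a into plan b."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]
```

Swapping two adjacent jobs costs 2 under the Hamming distance but only 1 under the Damerau-Levenshtein distance, which is why the choice of metric changes how much a re-planned schedule is penalised.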
APA, Harvard, Vancouver, ISO, and other styles
41

Oueslati, Laroussi. "Commande multivariable d'une serre agricole par minimisation d'un critere quadratique." Toulon, 1990. http://www.theses.fr/1990TOUL0003.

Full text
Abstract:
The agricultural greenhouse is a system in which one seeks to create a microclimate as favourable as possible to plant growth, while taking the economic aspect into account. This microclimate results from biological functions, external climatic conditions and the actuators. The interactions between the different variables lead us to consider multivariable control by microcomputer. In this thesis, we present a study of the control of an agricultural greenhouse that regulates the internal temperature and humidity while taking meteorological disturbances into account. A knowledge-based model of the internal climatic state is developed in order to obtain a simulation tool for testing control laws over a given period. The control models, in state-space form, are identified from measurements recorded at an experimental greenhouse site. Control of the microclimate is proposed first by means of optimal control and then by means of indirect adaptive control. Both types of control are based on the minimisation of a quadratic criterion. This criterion includes two weighting matrices Q and R that realise a trade-off between precision performance and energy cost constraints. The adaptive control additionally handles non-stationary aspects. To solve the problem of choosing the elements of the weighting matrices Q and R, we developed a method that systematically determines relations between these matrices and the closed-loop poles of the controlled process. This method makes it possible to control the dynamics of the internal temperature and humidity while minimising the quadratic criterion. The results obtained in simulation make it possible to envisage the next stage, which consists of making
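The quadratic criterion with weighting matrices Q and R described in this abstract is the standard linear-quadratic regulation set-up. As a minimal sketch, the scalar discrete-time Riccati recursion below iterates to a steady-state feedback gain; the scalar model x[k+1] = a·x[k] + b·u[k] and the numeric values are illustrative assumptions, not the greenhouse model identified in the thesis.

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Steady-state solution p of the scalar discrete-time Riccati
    equation  p = q + a^2 p - (a b p)^2 / (r + b^2 p),  and the
    feedback gain k such that u = -k x minimises sum(q x^2 + r u^2)."""
    p = q
    for _ in range(iters):  # fixed-point iteration of the recursion
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    k = a * b * p / (r + b * b * p)
    return p, k
```

Raising q relative to r yields a larger gain (tighter tracking of temperature and humidity at higher actuator cost), which is exactly the precision-versus-energy trade-off the weighting matrices encode.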
APA, Harvard, Vancouver, ISO, and other styles
42

Benouali, Jugurtha. "Etude et minimisation des consommations des systèmes de climatisation automobile." Paris, ENMP, 2002. http://www.theses.fr/2002ENMP1107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Yu, Jiaqian. "Minimisation du risque empirique avec des fonctions de perte nonmodulaires." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLC012/document.

Full text
Abstract:
Cette thèse aborde le problème de l'apprentissage avec des fonctions de perte nonmodulaires. Pour les problèmes de prédiction, où plusieurs sorties sont prédites simultanément, l'affichage du résultat comme un ensemble commun de prédiction est essentiel afin de mieux incorporer les circonstances du monde réel. Dans la minimisation du risque empirique, nous visons à réduire au minimum une somme empirique sur les pertes encourues sur l'échantillon fini avec une certaine fonction de perte qui pénalise la prévision compte tenu de la réalité du terrain. Dans cette thèse, nous proposons des méthodes analytiques et algorithmiquement efficaces pour traiter les fonctions de perte non-modulaires. L'exactitude et l'évolutivité sont validées par des résultats empiriques. D'abord, nous avons introduit une méthode pour les fonctions de perte supermodulaires, basée sur la méthode d'orientation alternée des multiplicateurs, qui ne dépend que de deux problèmes individuels pour la fonction de perte et pour l'inférence. Deuxièmement, nous proposons une nouvelle fonction de substitution pour les fonctions de perte submodulaires, la Lovász hinge, qui conduit à une complexité en O(p log p) avec O(p) appels à l'oracle de la fonction de perte pour calculer un sous-gradient ou un plan de coupe. Enfin, nous introduisons un opérateur de fonction de substitution convexe pour des fonctions de perte nonmodulaires, qui fournit pour la première fois une solution facile pour les pertes qui ne sont ni supermodulaires ni submodulaires. Cet opérateur est basé sur une décomposition canonique submodulaire-supermodulaire.
This thesis addresses the problem of learning with non-modular losses. In a prediction problem where multiple outputs are predicted simultaneously, viewing the outcome as a joint set prediction is essential so as to better incorporate real-world circumstances. In empirical risk minimization, we aim at minimizing an empirical sum over losses incurred on the finite sample with some loss function that penalizes on the prediction given the ground truth. In this thesis, we propose tractable and efficient methods for dealing with non-modular loss functions with correctness and scalability validated by empirical results. First, we present the hardness of incorporating supermodular loss functions into the inference term when they have different graphical structures. We then introduce an alternating direction method of multipliers (ADMM) based decomposition method for loss augmented inference, that only depends on two individual solvers for the loss function term and for the inference term as two independent subproblems. Second, we propose a novel surrogate loss function for submodular losses, the Lovász hinge, which leads to O(p log p) complexity with O(p) oracle accesses to the loss function to compute a subgradient or cutting-plane. Finally, we introduce a novel convex surrogate operator for general non-modular loss functions, which provides for the first time a tractable solution for loss functions that are neither supermodular nor submodular. This surrogate is based on a canonical submodular-supermodular decomposition
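The Lovász hinge named in this abstract builds on the Lovász extension of a set function, which interpolates a loss defined on subsets to real-valued score vectors via the greedy (sorted) computation. The sketch below computes only the extension itself, not the full hinge surrogate, and the example set functions in the usage note are illustrative assumptions.

```python
def lovasz_extension(set_loss, scores):
    """Lovász extension of set_loss (frozenset -> float, with
    set_loss(frozenset()) == 0) evaluated at a real score vector.
    Greedy computation: sort indices by decreasing score and
    accumulate the marginal gains of the set function."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    value, prefix = 0.0, set()
    prev = set_loss(frozenset())
    for i in order:
        prefix.add(i)
        cur = set_loss(frozenset(prefix))
        value += scores[i] * (cur - prev)  # marginal gain of adding i
        prev = cur
    return value
```

For a modular set function (a sum of per-element weights) the extension collapses to the ordinary weighted sum of the scores, and for a submodular set function it is the convex closure, which is what makes the resulting surrogate tractable. The sort dominates the cost, giving the O(p log p) complexity with O(p) oracle calls mentioned in the abstract.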
APA, Harvard, Vancouver, ISO, and other styles
44

Michalska, Maria. "Algèbres de polynômes bornés sur ensembles semi-algébriques non bornés." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00684253.

Full text
Abstract:
In this thesis we study the algebras of polynomials that are bounded on an unbounded semi-algebraic set. First, we address the problem of deciding whether a polynomial is bounded on a set. We solve this problem for polynomials in two variables defined on arbitrary semi-algebraic sets. In the next section we give a method for determining generators of the algebra of bounded polynomials for a large class of semi-algebraic subsets of the real plane. In Section 3 we establish a relation between the bifurcation values of the complexification of a polynomial $f$ in two variables and the stability of the family of algebras of polynomials bounded on the sets $\{f \leq c\}$. In Section 4 we describe the structure of the algebra of polynomials bounded on a certain type of subset of $\mathbb{R}^n$, for arbitrary $n$, which we call weighted tentacles. We also give a geometric proof of the fact that the algebra of an unbounded subset of a proper algebraic set is not finitely generated. In the next section we establish a correspondence between convex cones and the algebras of sets defined by inequalities on suitable monomials. Finally, we prove a version of Schmüdgen's Positivstellensatz for polynomials bounded on a non-compact set.
APA, Harvard, Vancouver, ISO, and other styles
45

Vega, Martínez Esther. "Minimisation and abatement of volatile sulphur compounds in sewage sludge processing." Doctoral thesis, Universitat de Girona, 2014. http://hdl.handle.net/10803/283655.

Full text
Abstract:
Environmental pollution related to odour emission has become an important public concern in recent years. The closeness of odour-causing facilities such as waste water treatment plants (WWTPs) to urban areas further aggravates the problem. Volatile sulphur compounds (VSC) are one of the main groups of odour-causing compounds in WWTPs, especially in sludge processing. Nowadays, a variety of options is available for the effective treatment of odorous sulphur compound emissions. This thesis has focused on the minimisation of these compounds during the sewage sludge conditioning process and on their abatement at the end of the pipe by treatments such as advanced oxidation processes and adsorption.
La contaminació atmosfèrica relacionada amb la emissió de males olors s’ha convertit en els darrers anys en un motiu de preocupació social. La proximitat d’instal·lacions causants de males olors com les estacions depuradores d’aigües residuals (EDAR) a les àrees urbanes, agreuja encara més el problema. Els compostos volàtils de sofre (CVS) són un dels principals grups de compostos causants de males olors, especialment en el tractament i processament dels fangs generats a les EDARs. Actualment, existeix una gran varietat d’opcions disponibles per al tractament efectiu de les emissions dels CVS causants de males olors. La present tesi s’ha focalitzat en la minimització de emissions durant el condicionament químic dels fangs i la eliminació mitjançant tractaments a final de procés com processos d'oxidació avançada o adsorció en carbons actius
APA, Harvard, Vancouver, ISO, and other styles
46

Bel, Haj Ali Wafa. "Minimisation de fonctions de perte calibrée pour la classification des images." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00934062.

Full text
Abstract:
Image classification is today a major challenge, concerning on the one hand the millions, even billions, of images found all over the web, and on the other hand images for critical real-time applications. This classification generally relies on learning methods and on classifiers that must be both accurate and fast. Such learning problems now affect a large number of application domains: the web (profiling, targeting, social networks, search engines), "Big Data" and of course computer vision, such as object recognition and image classification. This thesis belongs to the latter category and presents supervised learning algorithms based on the minimisation of so-called "calibrated" loss functions for two types of classifiers: k-Nearest Neighbours (kNN) and linear classifiers. These learning methods were tested on large image datasets and then applied to biomedical images. The thesis first reformulates a boosting algorithm for kNN classifiers, and then presents a second learning method for these NN classifiers using a Newton descent approach for faster convergence. In a second part, the thesis introduces a new stochastic Newton descent learning algorithm for linear classifiers, which are known for their simplicity and computational speed. Finally, these three methods were used in a medical application concerning the classification of cells in biology and pathology.
APA, Harvard, Vancouver, ISO, and other styles
47

Haben, Stephen A. "Conditioning and preconditioning of the minimisation problem in variational data assimilation." Thesis, University of Reading, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.541945.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Bertout, Antoine. "Minimisation du nombre de tâches d'un système temps réel par regroupement." Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10110/document.

Full text
Abstract:
Les systèmes embarqués des domaines de l'aéronautique ou de l'automobile sont en interaction permanente avec leur environnement. Ils récupèrent de l'information depuis leurs capteurs, traitent les données et réagissent par le biais de leurs actionneurs. Ces systèmes critiques se doivent non seulement de produire des résultats corrects du point de vue logique mais aussi de les réaliser dans le temps imparti. Cette particularité les classe dans la famille des systèmes temps réel. Dans les domaines cités, les fonctionnalités sont à l'origine définies au regard de la dynamique du système et leur nombre peut atteindre plusieurs milliers. Les systèmes d'exploitation temps réel, logiciels responsables du traitement de ces fonctionnalités sur le matériel, limitent généralement le nombre de traitements implantables, en raison des surcoûts engendrés par leur gestion. Dans ce travail, nous nous intéressons donc à des techniques de réduction du nombre de ces traitements, de manière à passer outre les limitations des systèmes d'exploitation temps réel. Nous proposons des algorithmes de regroupement qui assurent que les contraintes de temps soient respectées. Ces méthodes visent des architectures monoprocesseurs et multiprocesseurs pour des traitements communicants
Embedded systems in domains such as aeronautics and automotive interact permanently with their environment. They get information from their sensors, process the data and react through their actuators. Such systems must not only execute their functionalities correctly, but also process them within the allocated time. This feature places these systems in the category of real-time systems. In the cited domains, the functionalities are originally defined according to the dynamics of the system, and their number can reach several thousand. Real-time operating systems, the software which handles the processing of these functionalities on the hardware, generally limit the number of functionalities, due to the overhead caused by their management. In this work, we are interested in techniques that reduce the number of these functionalities so as to circumvent those restrictions. We propose clustering algorithms that ensure that timing constraints are respected. These methods are applied to monoprocessor and multiprocessor architectures with communicating processes.
APA, Harvard, Vancouver, ISO, and other styles
49

Sfynia, Chrysoula. "Minimisation of regulated and unregulated disinfection by-products in drinking water." Thesis, Imperial College London, 2017. http://hdl.handle.net/10044/1/58879.

Full text
Abstract:
This research, a collaboration between Imperial College London and Anglian Water, had the overall aim of understanding the occurrence and fate of a wide range of disinfection by-products (DBPs) during drinking water distribution and of establishing operational strategies to simultaneously control them in water supply systems. The research is therefore essentially centred on two main issues: i) improving our understanding of the impact of water quality and operational parameters on regulated and unregulated DBPs in water distribution networks, and ii) the validation of a prediction tool to proactively design and adapt operational practices to minimise DBPs. The research explored these issues through a series of experiments focused on the analysis of 29 DBPs upon chlorination and chloramination, under various water ages and water quality conditions, by sampling from four locations in four full-scale distribution systems in four sampling rounds and simultaneously running Simulated Distribution System (SDS) tests. This resulted in one of the most comprehensive databases of the occurrence and behaviour in distribution systems of regulated trihalomethanes (THMs), the likely-to-be-soon-regulated-in-the-UK haloacetic acids (HAAs), as well as unregulated haloacetonitriles (HANs) and haloacetamides (HAcAms) of potential health significance, and their individual species. For the first time, SDS tests were shown to be able to successfully predict the levels and speciation of HANs and HAcAms in chlorinated and chloraminated systems, by direct comparison with actual distribution water samples. The configuration of the SDS tests addressed the spatial and temporal variation of the selected DBPs, indicating that THM concentrations significantly increase with water age (on average by ~54% between water ages of 6-106 h) and show a high seasonal dependence, together with HAAs.
The latter, along with HANs and HAcAms, showed concentration fluctuations that resulted in less pronounced overall increases, with the two N-DBP groups relatively unaffected by water temperature. To explore the impact of disinfectant alteration in distribution, free chlorine and chloramination were applied to the same real water samples in SDS tests. This showed that the implementation of chloramination minimises the formation not only of THMs and HAAs, but also of HANs and HAcAms, though it shifts speciation towards more brominated HAA, HAN and HAcAm species. Through this research, SDS tests can be recommended to water utilities both to estimate the concentrations in their supply systems of the DBPs included in this study and to assess the effect of potential DBP minimisation strategies. The interesting behavioural patterns of HAcAms in distribution systems raised questions concerning their formation mechanisms and determining factors. Therefore, a laboratory study was conducted whereby chlorination and chloramination were applied to six model amide compounds to investigate their relative contributions as N-DBP precursors, under a range of water quality conditions (pH, bromide dose, water age). The findings of this study suggest that the N-oxidation of amide structures, more evident in aromatic moieties, is a potential mechanism for HAcAm formation, which occurs completely separately from HAN hydrolysis. This suggests that if precursor removal is to be used as a treatment strategy for minimising HAcAms and HANs, the success in minimising these groups of N-DBPs may differ based on the relative success in removing their independent precursors.
APA, Harvard, Vancouver, ISO, and other styles
50

Duerden, Christopher James. "Minimisation of energy consumption variance in manufacturing through production schedule manipulation." Thesis, University of Central Lancashire, 2016. http://clok.uclan.ac.uk/16545/.

Full text
Abstract:
In the manufacturing sector, despite the vital role it plays, the consumption of energy is rarely considered as a manufacturing process variable during the scheduling of production jobs. Due to both physical and contractual limits, the local power infrastructure can only deliver a finite amount of electrical energy at any one time. As a consequence of not considering the energy usage during the scheduling process, this limited capacity can be inefficiently utilised or exceeded, potentially resulting in damage to the infrastructure. To address this, this thesis presents a novel schedule optimisation system. Here, a Genetic Algorithm is used to optimise the start times of manufacturing jobs such that the variance in production line energy consumption is minimised, while ensuring that typical hard and soft schedule constraints are maintained. Prediction accuracy is assured through the use of a novel library-based system which is able to provide historical energy data at a high temporal granularity, while accounting for the influence of machine conditions on the energy consumption. In cases where there is insufficient historical data for a particular manufacturing job, the library-based system is able to analyse the available energy data and utilise machine learning to generate temporary synthetic profiles compensated for probable machine conditions. The performance of the entire proposed system is optimised through significant experimentation and analysis, which allows for an optimised schedule to be produced within an acceptable amount of time. Testing in a lab-based production line demonstrates that the optimised schedule is able to significantly reduce the energy consumption variance produced by a production schedule, while providing a highly accurate prediction as to the energy consumption during the schedule's execution.
The proposed system is also demonstrated to be easily expandable, allowing it to consider local renewable energy generation and energy storage, along with objectives such as the minimisation of peak energy consumption, and energy drawn from the National Grid.
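The scheduling objective described in this abstract, minimising the variance of aggregate energy consumption over job start times, can be sketched as the fitness function a genetic algorithm would evaluate for each candidate schedule. The slot-based energy profiles and the numbers below are illustrative assumptions, not the thesis's library data.

```python
import statistics

def aggregate_profile(job_profiles, start_slots, horizon):
    """Sum the per-slot energy of every job, shifted to its start slot."""
    total = [0.0] * horizon
    for profile, start in zip(job_profiles, start_slots):
        for offset, energy in enumerate(profile):
            total[start + offset] += energy
    return total

def variance_fitness(job_profiles, start_slots, horizon):
    """Fitness to minimise: population variance of aggregate consumption."""
    return statistics.pvariance(aggregate_profile(job_profiles, start_slots, horizon))
```

Staggering two identical jobs flattens the aggregate profile and drives the fitness to zero, whereas starting them together produces a peak followed by idle slots, which is precisely the kind of schedule the optimiser penalises.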
APA, Harvard, Vancouver, ISO, and other styles