
Doctoral dissertations on the topic "Bid Optimization"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles


Consult the top 50 doctoral dissertations on the topic "Bid Optimization".

The "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the annotation to the work online, if the relevant parameters are available in the work's metadata.

Browse doctoral dissertations from a wide range of disciplines and compile the appropriate bibliographies.

1

Yu, Zhenjian. "Strategic sourcing and bid optimization for ocean freight". View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?IEEM%202004%20YU.

2

Wang, Qian. "Pre-bid network analysis for transportation procurement auction under stochastic demand". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41727.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007.
Includes bibliographical references (p. 67-68).
Transportation procurement is one of the most critical sourcing decisions to be made in many companies. This thesis addresses a real-life industrial problem of creating package bids for a company's transportation procurement auction. The purpose of offering package bids is to increase the carriers' capacity and to improve the reliability of services. In this thesis, we investigate the possibility of forming packages using the company's own distribution network. Effective distribution of packages requires balanced cycles. A balanced cycle is a cycle containing no more than 3 nodes with equal loads (or volume of package) on every link in the cycle. We develop mixed-integer programs to find the maximum amount of network volume that can be covered by well-balanced cycles. These models are deterministic models that provide a rough guide on the optimal way of package formation when loads are known in advance. Since demand is random in real life, we perform a stochastic analysis of the problem using various techniques including simulation, probabilistic analysis and stochastic programming. Results from the stochastic analysis show that the effectiveness of package distribution depends on how we allocate the volumes on the lanes to create balanced cycles. If we always assign a fixed proportion of the lanes' volumes to the cycles, then it is only possible to have well-balanced cycles when the average volumes on the lanes are very large, validating the advantage of joint bids between several companies. However, if we assign a different proportion of the lanes' volumes to the cycles each time demand changes, then it is possible to create cycles that are balanced most of the time. An approximated solution method is provided to obtain a set of balanced cycles that can be bid out.
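To make the balanced-cycle notion concrete, here is a minimal sketch, assuming invented lane volumes and using the open-source PuLP modeler: it enumerates candidate cycles with no more than 3 nodes and assigns each an equal per-lane load, a deliberate simplification of the thesis's mixed-integer models.

```python
from itertools import permutations
from pulp import LpProblem, LpVariable, LpMaximize, lpSum

volume = {  # hypothetical directed lane -> expected load
    ("A", "B"): 10, ("B", "A"): 7, ("B", "C"): 5,
    ("C", "B"): 3, ("C", "A"): 6, ("A", "C"): 4,
}
nodes = {n for lane in volume for n in lane}

# Enumerate candidate cycles of 2 or 3 nodes whose lanes all exist in the network.
cycles, seen = [], set()
for k in (2, 3):
    for perm in permutations(sorted(nodes), k):
        lanes = tuple((perm[i], perm[(i + 1) % k]) for i in range(k))
        if all(l in volume for l in lanes) and frozenset(lanes) not in seen:
            seen.add(frozenset(lanes))
            cycles.append(lanes)

prob = LpProblem("balanced_cycles", LpMaximize)
# f[c] is the identical load assigned to every lane of cycle c (the equal
# loads on all links are exactly what makes the cycle "balanced").
f = {c: LpVariable(f"f_{i}", lowBound=0) for i, c in enumerate(cycles)}
prob += lpSum(len(c) * f[c] for c in cycles)   # maximize volume covered by cycles
for lane, vol in volume.items():               # cannot assign more than a lane carries
    prob += lpSum(f[c] for c in cycles if lane in c) <= vol

prob.solve()
for c in cycles:
    if f[c].value() and f[c].value() > 1e-6:
        print(c, "carries", f[c].value(), "per lane")
```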
by Qian Wang.
S.M.
3

Aly, Mazen. "Automated Bid Adjustments in Search Engine Advertising". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210651.

Abstract:
In digital advertising, major search engines allow advertisers to set bid adjustments on their ad campaigns in order to capture the valuation differences that are a function of query dimensions. In this thesis, a model that uses bid adjustments is developed in order to increase the number of conversions and decrease the cost per conversion. A statistical model is used to select campaigns and dimensions that need bid adjustments along with several techniques to determine their values since they can be between -90% and 900%. In addition, an evaluation procedure is developed that uses campaign historical data in order to evaluate the calculation methods as well as to validate different approaches. We study the problem of interactions between different adjustments and a solution is formulated. Real-time experiments showed that our bid adjustments model improved the performance of online advertising campaigns with statistical significance. It increased the number of conversions by 9%, and decreased the cost per conversion by 10%.
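As a rough illustration of the mechanics, here is a minimal sketch of how such adjustments compose, assuming hypothetical dimension names and a multiplicative stacking rule; only the -90% to 900% range is taken from the abstract above.

```python
# Sketch of multiplicative bid adjustments; dimension names are hypothetical.
BASE_BID = 1.20  # campaign base bid, in account currency

def clamp_adjustment(pct: float) -> float:
    """Keep a single adjustment inside the allowed [-90%, +900%] range."""
    return max(-90.0, min(900.0, pct))

def effective_bid(base: float, adjustments_pct: dict[str, float]) -> float:
    """Apply one adjustment per query dimension; the factors multiply."""
    bid = base
    for dim, pct in adjustments_pct.items():
        bid *= 1.0 + clamp_adjustment(pct) / 100.0
    return bid

# Example: mobile traffic devalued, a high-converting region boosted.
print(effective_bid(BASE_BID, {"device:mobile": -30, "region:north": 25}))
# 1.20 * 0.70 * 1.25 = 1.05
```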
4

Mikheev, Sergej. "Portfolio optimization in arbitrary dimensions in the presence of small bid-ask spreads". Kiel: Universitätsbibliothek Kiel, 2018. http://d-nb.info/1155760778/34.

5

Balkan, Kaan. "Robust Optimization Approach For Long-term Project Pricing". Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12612104/index.pdf.

Abstract:
In this study, we address the long-term project pricing problem for a company that operates in the defense industry. The pricing problem is a bid project pricing problem which includes various technical and financial uncertainties, such as estimations of the workhour content of the project and exchange and inflation rates. We propose a Robust Optimization (RO) approach that can deal with the uncertainties during the project lifecycle through the identification of several discrete scenarios. The bid project's performance measures, other than the monetary measures, for R&D projects are identified and the problem is formulated as a multi-attribute utility project pricing problem. In our RO approach, the bid pricing problem is decomposed into two parts which are solved sequentially: the Penalty-Model and the RO model. In the Penalty-Model, penalty costs for the possible violations in the company's workforce level due to the bid project's workhour requirements are determined. Then the RO model searches for the optimum bid price by considering the penalty cost from the Penalty-Model, the bid project's performance measures, the probability of winning the bid for a given bid price, and the deviations in the bid project's cost. Especially for R&D type projects, the model tends to place lower bid prices in the expected value solutions in order to win the bid. Thus, due to the possible deviations in the project cost, R&D projects have a high probability of suffering a financial loss in the expected value solutions. The robust solutions, however, provide results which are more aware of the deviations in the bid project's cost and thus eliminate the financial risks by making a tradeoff between the bid project's benefits, the probability of winning the bid, and the financial loss risk. The probability of winning in the robust solutions is observed to be lower than in the expected value solutions, whereas the expected value solutions have higher probabilities of suffering a financial loss.
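A toy numerical sketch of the expected-value versus robust trade-off described here, with an invented win-probability curve and cost scenarios (the thesis's multi-attribute models are far richer):

```python
import numpy as np

cost_scenarios = np.array([80.0, 100.0, 130.0])   # discrete project-cost scenarios
scenario_prob = np.array([0.3, 0.5, 0.2])

def p_win(price: float) -> float:
    """Chance of winning the bid falls as the quoted price rises (invented curve)."""
    return float(np.clip(1.8 - price / 100.0, 0.0, 1.0))

prices = np.linspace(70, 180, 221)
exp_profit = [p_win(p) * np.dot(scenario_prob, p - cost_scenarios) for p in prices]
worst_profit = [p_win(p) * (p - cost_scenarios.max()) for p in prices]

p_ev = prices[int(np.argmax(exp_profit))]
p_rob = prices[int(np.argmax(worst_profit))]
print(f"expected-value price {p_ev:.1f} (win prob {p_win(p_ev):.2f})")
print(f"robust price         {p_rob:.1f} (win prob {p_win(p_rob):.2f})")
# The robust price comes out higher: it wins less often but never bids below
# the worst-case cost, mirroring the finding reported in the abstract.
```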
6

Lyu, Ke. "Studies on Auction Mechanism and Bid Generation in the Procurement of Truckload Transportation Services". Thesis, Troyes, 2021. http://www.theses.fr/2021TROY0032.

Abstract:
Truckload transportation is a common mode of freight transportation, accounting for a substantial portion of the transportation industry, in which shippers procure transportation services from carriers. Transportation service procurement is often realized through auctions. By designing effective auction mechanisms and efficient methods for solving the related bid generation problems, shippers and carriers can save costs and increase profits, respectively. This thesis studies three problems raised in the procurement of truckload transportation services realized by combinatorial auctions. Firstly, two two-phase combinatorial auction mechanisms are designed, with supplementary bundles of requests offered for bid generated by the auctioneer and the carriers, respectively, in the second phase. Secondly, a column generation algorithm is proposed to solve the bid generation problem that appears in the combinatorial auction. Finally, the bid generation problem is extended to one that considers both multiple periods and uncertainty in truckload transportation service procurement. This stochastic optimization problem is formulated through scenario optimization and deterministic equivalence. To solve this model, a Benders decomposition approach is proposed.
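To illustrate the column-generation mechanics on a deliberately stripped-down bid-generation model: the sketch below treats "bids" as bundles of requests that fit one truck, prices new bundles with a brute-force knapsack, and uses SciPy's LP solver for the restricted master. All numbers are invented, and the real pricing problem in the thesis involves routing, not pure packing.

```python
import numpy as np
from scipy.optimize import linprog

loads = [4, 3, 2, 2]        # size of each transport request
capacity = 7                # capacity of one truck/bundle
# Start with one singleton column (bundle) per request.
columns = [[1 if i == j else 0 for j in range(len(loads))] for i in range(len(loads))]

def solve_master(cols):
    """Restricted master LP: cover every request once, minimize bundles used."""
    c = np.ones(len(cols))
    A_eq = np.array(cols).T                       # rows = requests, cols = bundles
    return linprog(c, A_eq=A_eq, b_eq=np.ones(len(loads)), method="highs")

def price(duals):
    """Pricing subproblem as a knapsack: maximize total dual value of a bundle."""
    best_val, best_col = 0.0, None
    for mask in range(1, 2 ** len(loads)):
        sel = [(mask >> i) & 1 for i in range(len(loads))]
        if sum(s * l for s, l in zip(sel, loads)) <= capacity:
            val = sum(s * d for s, d in zip(sel, duals))
            if val > best_val + 1e-9:
                best_val, best_col = val, sel
    return best_col if best_val > 1.0 + 1e-9 else None   # reduced cost = 1 - val

while True:
    master = solve_master(columns)
    new_col = price(master.eqlin.marginals)       # duals of the covering rows
    if new_col is None or new_col in columns:
        break                                     # no improving bundle left
    columns.append(new_col)

print("bundles used:", [c for c, x in zip(columns, master.x) if x > 1e-6])
```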
7

Mubark, Athmar. "Computer Science Optimization Of Reverse auction : Reverse Auction". Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-68140.

Abstract:
Many people are still confused about the differences between auction types. In fact, there are only two major types of auctions: the forward auction and the reverse auction [22]. In a forward auction, a single seller offers an item for sale, with many competing buyers driving the price upward. In a reverse auction, a single buyer wants to purchase a service or an item from many sellers, who drive the price downward. The two types differ in many respects, including the progress of the auction, the winner selection criterion, and other factors. The reverse auction is nowadays one of the most preferred types of online auction. It has gained popularity rapidly because it represents the buyer's side and helps the buyer drive prices down, in contrast with the forward or traditional auction. The aim of this study is to identify the most common types of reverse auctions, compare them to one another to determine when each should be used by a buyer, and propose an efficient implementation model for some types. The results of this study are a written report and a small demonstrator model showing how to implement an English auction and a second-price sealed-bid auction.
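For concreteness, a minimal sketch of the winner-selection rule in a second-price sealed-bid reverse auction of the kind the demonstrator implements (supplier names and bids are invented):

```python
def reverse_second_price(bids: dict[str, float]) -> tuple[str, float]:
    """Lowest bidder wins but is paid the second-lowest bid (Vickrey-style)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winner, _ = ranked[0]
    payment = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, payment

print(reverse_second_price({"supplierA": 95.0, "supplierB": 88.0, "supplierC": 99.5}))
# ('supplierB', 95.0): B wins at A's price, which makes truthful bidding optimal.
```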
8

Taylor, Kendra C. "Sequential Auction Design and Participant Behavior". Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7250.

Abstract:
This thesis studies the impact of sequential auction design on participant behavior from both a theoretical and an empirical viewpoint. In the first of the two analyses, three sequential auction designs are characterized and compared based on expected profitability to the participants. The optimal bid strategy is derived as well. One of the designs, the alternating design, is a new auction design and is a blend of the other two. It assumes that the ability to bid in or initiate an auction is given to each side of the market in an alternating fashion to simulate seasonal markets. The conditions for an equilibrium auction design are derived and characteristics of the equilibrium are outlined. The primary result is that the alternating auction is a viable compromise auction design when buyers and suppliers disagree on whether to hold a sequence of forward or reverse auctions. We also found the value of information on future private value for a strategic supplier in a two-period case of the alternating and reverse auction designs. The empirical work studies the cause of low aggregation of timber supply in reverse auctions of an online timber exchange. Unlike previous research results regarding timber auctions, which focus on offline public auctions held by the U.S. Forest Service, we study online private auctions between logging companies and mills. A limited survey of the online auction data revealed that the auctions were successful less than 50% of the time. Regression analysis is used to determine which internal and external factors to the auction affect the aggregation of timber in an effort to determine the reason that so few auctions succeeded. The analysis revealed that the number of bidders, the description of the good and the volume demanded had a significant influence on the amount of timber supplied through the online auction exchange. A plausible explanation for the low aggregation is that the exchange was better suited to check the availability for custom cuts of timber and to transact standard timber.
9

Müller, Sibylle D. "Bio-inspired optimization algorithms for engineering applications". Zürich, 2002. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=14719.

10

Zuniga, Virgilio. "Bio-inspired optimization algorithms for smart antennas". Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/5766.

Abstract:
This thesis studies the effectiveness of bio-inspired optimization algorithms in controlling adaptive antenna arrays. Smart antennas are able to automatically extract the desired signal from interferer signals and external noise. The angular pattern depends on the number of antenna elements, their geometrical arrangement, and their relative amplitudes and phases. In the present work different antenna geometries are tested and compared when their array weights are optimized by different techniques. First, the Genetic Algorithm and Particle Swarm Optimization algorithms are used to find the best set of phases between antenna elements to obtain a desired antenna pattern. This pattern must meet several constraints, for example maximizing the power of the main lobe in a desired direction while keeping nulls towards interferers. A series of experiments shows that the PSO achieves better and more consistent radiation patterns than the GA in terms of the total area of the antenna pattern. A second set of experiments uses the Signal-to-Interference-plus-Noise-Ratio as the fitness function of the optimization algorithms to find the array weights that configure a rectangular array. The results suggest a performance advantage from reducing the number of iterations taken by the PSO, thus lowering the computational cost. During the development of this thesis, it was found that the initial states and particular parameters of the optimization algorithms affected their overall outcome. The third part of this work deals with the meta-optimization of these parameters to achieve the best results independently of the particular initial parameters. Four algorithms were studied: Genetic Algorithm, Particle Swarm Optimization, Simulated Annealing and Hill Climbing. It was found that the meta-optimization algorithms Local Unimodal Sampling and Pattern Search performed best at setting the initial parameters and obtaining the best performance of the bio-inspired methods studied.
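A compact sketch of the kind of PSO loop used for phase optimization, with a textbook uniform-linear-array factor as the fitness; the array geometry, swarm parameters and steering angles are illustrative assumptions, not the thesis setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N_EL, N_PART, N_ITER = 8, 30, 200
d = 0.5  # element spacing in wavelengths

def fitness(phases: np.ndarray) -> float:
    """Penalty: array-factor power toward an interferer minus power toward the target."""
    def gain(theta_deg):
        k = 2 * np.pi * d * np.sin(np.radians(theta_deg)) * np.arange(N_EL)
        return np.abs(np.sum(np.exp(1j * (k + phases)))) ** 2
    return gain(40.0) - gain(0.0)   # null toward 40 deg, main lobe at broadside

x = rng.uniform(-np.pi, np.pi, (N_PART, N_EL))     # particle positions (element phases)
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(N_ITER):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = np.array([fitness(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("best fitness:", pbest_f.min())
```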
11

França, Fabricio Olivetti de. "Algoritmos bio-inspirados aplicados a otimização dinamica". [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259091.

Abstract:
Advisors: Fernando Jose Von Zuben, Leandro Nunes de Castro
Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação
Abstract: This dissertation proposes bio-inspired algorithms to solve dynamic optimization problems, i.e., problems for which the optimization surface on the search space suffers several changes over time. With such variation of number, position and quality of local optima, mathematical programming techniques may present degradation of performance, because they were usually conceived to deal with static problems. Population-based algorithms, dynamic control of the population size, local search strategies and an efficient memory usage are desirable requirements to a proper treatment of dynamic optimization problems, thus being incorporated into the solution strategies implemented here. The algorithms to be presented, and compared with competitive alternatives available in the literature, are based on functionalities and processing structures of immune systems and ant colonies. Due to the capability of incorporating all the requirements for an efficient search on dynamic environments, the immune-inspired approaches overcome the others in all the performance criteria adopted to evaluate the experimental results.
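A minimal sketch of the dynamic-optimization setting the dissertation targets: the optimum drifts over time, so the population is re-evaluated each step and diversity is restored with mutated clones and random immigrants, a crude stand-in for the immune- and ant-colony-inspired mechanisms studied.

```python
import random

def fitness(x: float, t: int) -> float:
    peak = 5.0 + 0.1 * t          # the optimum drifts as time advances (invented)
    return -(x - peak) ** 2

pop = [random.uniform(0, 20) for _ in range(20)]
for t in range(100):
    # Re-evaluate everyone under the current environment and keep the best half...
    pop.sort(key=lambda x: fitness(x, t), reverse=True)
    survivors = pop[:10]
    # ...then restore diversity with mutated clones plus random immigrants.
    pop = survivors + [x + random.gauss(0, 0.5) for x in survivors[:5]] \
                    + [random.uniform(0, 20) for _ in range(5)]

best = max(pop, key=lambda x: fitness(x, 99))
print("tracking error:", abs(best - (5.0 + 0.1 * 99)))
```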
Master's degree in Electrical Engineering, concentration in Computer Engineering
12

Castro Junior, Olacir Rodrigues. "Bio-inspired optimization algorithms for multi-objective problems". Repositório Institucional da UFPR, 2017. http://hdl.handle.net/1884/46312.

Abstract:
Advisor: Aurora Trinidad Ramirez Pozo
Co-advisor: Roberto Santana Hermida
Doctoral thesis - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defended: Curitiba, 06/03/2017
Includes references: ff. 161-172
Area of concentration: Computer Science
Abstract: Multi-Objective Problems (MOPs) are characterized by having two or more objective functions to be simultaneously optimized. In these problems, the goal is to find a set of non-dominated solutions usually called Pareto optimal set whose image in the objective space is called Pareto front. MOPs presenting more than three objective functions to be optimized are known as Many-Objective Problems (MaOPs) and several studies indicate that the search ability of Pareto-based algorithms is severely deteriorated in such problems. The development of bio-inspired optimizers to tackle MOPs and MaOPs is a field that has been gaining attention in the community, however there are many opportunities to innovate. Multi-objective Particle Swarm Optimization (MOPSO) is one of the bio-inspired algorithms suitable to be modified and improved, mostly due to its simplicity, flexibility and good results. To enhance the search ability of MOPSOs, we followed two different research lines: The first focus on leader and archiving methods. Previous works have pointed that these components can influence the algorithm performance, however the selection of these components can be problem-dependent. An alternative to dynamically select them is by employing hyper-heuristics. By combining hyper-heuristics and MOPSO, we developed a new framework called H-MOPSO. The second research line, is also based on previous works of the group that focus on multi-swarm. This is done by selecting as base framework the iterated multi swarm (I-Multi) algorithm, whose search procedure can be divided into diversity and multi-swarm searches, and the latter employs clustering to split a swarm into several sub-swarms. In order to improve the performance of I-Multi, we explored two possibilities: the first was to further investigate the effect of different characteristics of the clustering mechanism of I-Multi. The second was to investigate alternatives to improve the convergence of each sub-swarm, like hybridizing it to an Estimation of Distribution Algorithm (EDA). This work on EDA increased our interest in this approach, hence we followed another research line by investigating alternatives to create multi-objective versions of one of the most powerful EDAs from the literature, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). In order to validate our work, several empirical studies were conducted to investigate the search ability of the approaches proposed. In all studies, our investigated algorithms have reached competitive or better results than well established algorithms from the literature. Keywords: multi-objective, estimation of distribution algorithms, particle swarm optimization, multi-swarm, hyper-heuristics.
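The Pareto-dominance test at the core of the archiving methods discussed above can be stated in a few lines (minimization is assumed for every objective; the sample points are invented):

```python
def dominates(a: tuple, b: tuple) -> bool:
    """a dominates b if it is no worse in all objectives and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points: list[tuple]) -> list[tuple]:
    """Filter a set of objective vectors down to its Pareto front."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = non_dominated([(1, 5), (2, 3), (3, 3), (4, 1), (2, 6)])
print(front)  # [(1, 5), (2, 3), (4, 1)]: the Pareto front of the sample
```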
13

Takac, Martin. "Randomized coordinate descent methods for big data optimization". Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/9670.

Abstract:
This thesis consists of 5 chapters. We develop new serial (Chapter 2), parallel (Chapter 3), distributed (Chapter 4) and primal-dual (Chapter 5) stochastic (randomized) coordinate descent methods, analyze their complexity and conduct numerical experiments on synthetic and real data of huge sizes (GBs/TBs of data, millions/billions of variables). In Chapter 2 we develop a randomized coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth separable convex function and prove that it obtains an ε-accurate solution with probability at least 1 - p in at most O((n/ε) log(1/p)) iterations, where n is the number of blocks. This extends recent results of Nesterov [43], which cover the smooth case, to composite minimization, while at the same time improving the complexity by the factor of 4 and removing ε from the logarithmic term. More importantly, in contrast with the aforementioned work in which the author achieves the results by applying the method to a regularized version of the objective function with an unknown scaling factor, we show that this is not necessary, thus achieving the first true iteration complexity bounds. For strongly convex functions the method converges linearly. In the smooth case we also allow for arbitrary probability vectors and non-Euclidean norms. Our analysis is also much simpler. In Chapter 3 we show that the randomized coordinate descent method developed in Chapter 2 can be accelerated by parallelization. The speedup, as compared to the serial method, and referring to the number of iterations needed to approximately solve the problem with high probability, is equal to the product of the number of processors and a natural and easily computable measure of separability of the smooth component of the objective function. In the worst case, when no degree of separability is present, there is no speedup; in the best case, when the problem is separable, the speedup is equal to the number of processors. Our analysis also works in the mode when the number of coordinates being updated at each iteration is random, which allows for modeling situations with a variable (busy or unreliable) number of processors. We demonstrate numerically that the algorithm is able to solve huge-scale l1-regularized least squares problems with a billion variables. In Chapter 4 we extend coordinate descent into a distributed environment. We initially partition the coordinates (features or examples, based on the problem formulation) and assign each partition to a different node of a cluster. At every iteration, each node picks a random subset of the coordinates from those it owns, independently from the other computers, and in parallel computes and applies updates to the selected coordinates based on a simple closed-form formula. We give bounds on the number of iterations sufficient to approximately solve the problem with high probability, and show how they depend on the data and on the partitioning. We perform numerical experiments with a LASSO instance described by a 3TB matrix. Finally, in Chapter 5, we address the issue of using mini-batches in stochastic optimization of Support Vector Machines (SVMs). We show that the same quantity, the spectral norm of the data, controls the parallelization speedup obtained for both primal stochastic subgradient descent (SGD) and stochastic dual coordinate ascent (SDCA) methods and use it to derive novel variants of mini-batched (parallel) SDCA.
Our guarantees for both methods are expressed in terms of the original nonsmooth primal problem based on the hinge-loss. Our results in Chapters 2 and 3 are cast for blocks (groups of coordinates) instead of coordinates, and hence the methods are better described as block coordinate descent methods. While the results in Chapters 4 and 5 are not formulated for blocks, they can be extended to this setting.
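A minimal sketch of randomized coordinate descent on the smooth special case min over x of ½‖Ax - b‖², updating one uniformly sampled coordinate per iteration with its coordinate-wise Lipschitz step (the thesis analyzes the far more general composite, block, parallel and distributed settings):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)
L = (A ** 2).sum(axis=0)            # per-coordinate Lipschitz constants ||A_i||^2
x = np.zeros(50)
residual = A @ x - b

for _ in range(20000):
    i = rng.integers(50)            # uniform sampling; the analysis also covers
    g = A[:, i] @ residual          # arbitrary probability vectors
    step = -g / L[i]                # exact minimization along coordinate i
    x[i] += step
    residual += step * A[:, i]      # O(m) residual update instead of O(mn)

print("final objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```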
14

Sturt, Bradley Eli. "Dynamic optimization in the age of big data". Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127292.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 241-249).
This thesis revisits a fundamental class of dynamic optimization problems introduced by Dantzig (1955). These decision problems remain widely studied in many applications domains (e.g., inventory management, finance, energy planning) but require access to probability distributions that are rarely known in practice. First, we propose a new data-driven approach for addressing multi-stage stochastic linear optimization problems with unknown probability distributions. The approach consists of solving a robust optimization problem that is constructed from sample paths of the underlying stochastic process. As more sample paths are obtained, we prove that the optimal cost of the robust problem converges to that of the underlying stochastic problem. To the best of our knowledge, this is the first data-driven approach for multi-stage stochastic linear optimization problems which is asymptotically optimal when uncertainty is arbitrarily correlated across time.
Next, we develop approximation algorithms for the proposed data-driven approach by extending techniques from the field of robust optimization. In particular, we present a simple approximation algorithm, based on overlapping linear decision rules, which can be reformulated as a tractable linear optimization problem with size that scales linearly in the number of data points. For two-stage problems, we show the approximation algorithm is also asymptotically optimal, meaning that the optimal cost of the approximation algorithm converges to that of the underlying stochastic problem as the number of data points tends to infinity. Finally, we extend the proposed data-driven approach to address multi-stage stochastic linear optimization problems with side information. The approach combines predictive machine learning methods (such as K-nearest neighbors, kernel regression, and random forests) with the proposed robust optimization framework.
We prove that this machine learning-based approach is asymptotically optimal, and demonstrate the value of the proposed methodology in numerical experiments in the context of inventory management, scheduling, and finance.
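A toy sketch of the sample-path idea: each observed demand becomes the center of a small uncertainty ball, and the decision minimizes the average worst-case cost over those balls. The newsvendor-style costs, data and ball radius are invented; the thesis treats general multi-stage linear problems.

```python
import numpy as np

demand_samples = np.array([8.0, 10.0, 12.0, 9.0, 15.0])  # observed demand "paths"
radius = 1.0                                             # uncertainty ball size
hold, short = 1.0, 4.0                                   # over/under-stock unit costs

def worst_case_cost(q: float, d_obs: float) -> float:
    """Worst cost when true demand may sit anywhere in [d_obs - r, d_obs + r].
    The cost is convex piecewise linear in d, so the worst case is an endpoint."""
    candidates = (d_obs - radius, d_obs + radius)
    return max(hold * max(q - d, 0.0) + short * max(d - q, 0.0) for d in candidates)

grid = np.linspace(0, 20, 401)
obj = [np.mean([worst_case_cost(q, d) for d in demand_samples]) for q in grid]
print("robust order quantity:", grid[int(np.argmin(obj))])
```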
by Bradley Eli Sturt.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center
15

Chan, Cameron D. "MECHANICAL OPTIMIZATION AND BUCKLING ANALYSIS OF BIO-COMPOSITES". DigitalCommons@CalPoly, 2012. https://digitalcommons.calpoly.edu/theses/877.

Abstract:
Today's environmental concerns have led to a renewed search in industry for new sustainable materials to replace non-renewable resources. President Barack Obama noted in a 2012 presidential debate "that there is a need to build the energy sources of the future and invest in solar, wind, and bio-fuels." Bio-composites are believed to be the future and the new substitute for non-renewable resources. Bio-composites are similar to composites in that they are made up of two constituent materials; the main difference is that bio-composites are made from natural fibers and a biopolymer matrix. This research investigates the buckling behavior of bamboo and determines the slenderness ratio that will induce buckling when bamboo is used as a column. Along with the investigation of bamboo under buckling, this study also shows the potential of bio-composites to replace non-renewable resources in industry through experimental and numerical analysis. However, in order to study the buckling behavior of bamboo, its mechanical characteristics and the optimal curing treatment first had to be established, because bamboo must be treated to acquire proper strength characteristics. Due to the scarcity of bamboo material in the lab, the mechanical properties of the bamboo as well as the optimal curing treatment were obtained in collaboration with Jay Lopez. In the first study, a total of four different types of natural treatments were analyzed to optimize the mechanical characteristics of bamboo. To assess each curing method, tensile and compression tests were performed to obtain the mechanical properties. Because each bamboo culm has a different thickness and cross section, the specific strength property is used to normalize the data and allow each curing method to be compared and assessed equally. The specific strength parameter is defined as the ultimate stress divided by the density of the material. The curing treatments consisted of four thermo-treatments, three different percentages of salt treatments, one lime treatment, and one oil treatment. The thermo-treatments consisted of heating the bamboo internodes in an autoclave with no pressure at 150°F, 180°F, 200°F, and 220°F. The experimental results of the thermo-treatments determined that bamboo obtains higher mechanical properties as well as reduced weight when heated at higher temperatures, explained by the increasing amount of bound water extracted from the bamboo material at higher temperatures. In addition to finding the optimal heat treatment, the internodes of bamboo were soaked in natural additives, including 3%, 6%, and 9% Instant Ocean sea salt solutions, a Bonide hydrated lime solution, and a Kirkland canola oil solution, for approximately five days and then heat treated at the optimal temperature of 220°F. The experimental results showed that all of the different additives had a significant effect on the mechanical properties. After determining the mechanical properties of each curing method, the results were analyzed through a trade study. The trade study parameters consisted of the weight drop of the material, the specific strength, and the ultimate stress for both compression and tension. Each parameter of the trade study is kept unbiased, as the weighting of each parameter is set equal to the others.
The results of the trade study indicated that the 3% salt solution was the optimal curing treatment, yielding a higher specific strength value for both compression and tension, along with a significantly lower weight drop after curing. With the optimal treatment established, the buckling behavior of bamboo was investigated to determine at what slenderness ratio a bamboo column would buckle. A total of seven cases were investigated using different lengths, ranging from 1.5" to 10". The experimental results determined that a slenderness ratio above approximately 34.7 would induce global buckling in the bamboo column. The last part of this study consisted of building a small prototype wall structure using bio-composites. The prototype wall was manufactured using a combination of bamboo and a bi-directional woven hemp fabric, with dimensions of 15.13" long and 7.75" tall. The wall structure was tested under compression in the Aerospace Structures/Composites Lab and the Architectural Engineering Department's high bay laboratory. The results of the experimental test showed great potential for bio-composites, as the structure withstood a force of 46,800 pounds. A numerical analysis was also performed with the finite element method using the Abaqus software, in order to validate the experimental results by comparing the buckling behavior of the tests. The numerical analysis showed very good agreement with the experimental results.
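A short sketch of the slenderness-ratio check that underlies the buckling study, using an Euler-column idealization of a thin-walled tube; the dimensions, modulus-free criterion and pinned-pinned end condition are assumptions, while the ≈34.7 threshold is the experimental finding quoted above.

```python
import math

outer_d, wall_t = 1.00, 0.10      # inches; idealized culm cross-section (invented)
length, k_end = 12.0, 1.0         # column length (in), pinned-pinned effective-length factor

inner_d = outer_d - 2 * wall_t
area = math.pi / 4 * (outer_d**2 - inner_d**2)
inertia = math.pi / 64 * (outer_d**4 - inner_d**4)
r_gyr = math.sqrt(inertia / area)                 # radius of gyration

slenderness = k_end * length / r_gyr
print(f"slenderness ratio = {slenderness:.1f}")
print("global buckling expected" if slenderness > 34.7 else "crushing likely governs")
```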
16

Spagna, Nicolò. "Design and optimization of bed transportation at Herlev hospital". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3390/.

17

Arifeen, N. "Process design and optimization of bio-ethanol production system". Thesis, University of Manchester, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.509174.

18

Tsang, Wai-pong Wilburn, and 曾瑋邦. "Bio-inspired algorithms for single and multi-objective optimization". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42182104.

19

Chen, Longbiao. "Big data-driven optimization in transportation and communication networks". Electronic Thesis or Diss., Sorbonne université, 2018. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2018SORUS393.pdf.

Abstract:
The evolution of metropolitan structures and the development of urban systems have created various kinds of urban networks, among which two types are of great importance for our daily life: the transportation networks corresponding to human mobility in the physical space, and the communication networks supporting human interactions in the digital space. The rapid expansion in the scope and scale of these two networks raises a series of fundamental research questions on how to optimize these networks for their users. Some of the major objectives include demand responsiveness, anomaly awareness, cost effectiveness, energy efficiency, and service quality. Despite the distinct design intentions and implementation technologies, both the transportation and communication networks share common fundamental structures, and exhibit similar spatio-temporal dynamics. Correspondingly, there exists an array of key challenges that are common to the optimization of both networks, including network profiling, mobility prediction, traffic clustering, and resource allocation. To achieve the optimization objectives and address the research challenges, various analytical models, optimization algorithms, and simulation systems have been proposed and extensively studied across multiple disciplines. Generally, these simulation-based models are not evaluated in real-world networks, which may lead to sub-optimal results in deployment. With the emergence of ubiquitous sensing, communication and computing paradigms, massive amounts of urban network data can be collected. Recent advances in big data analytics techniques have given researchers great potential to understand these data. Motivated by this trend, we aim to explore a new big data-driven network optimization paradigm, in which we address the above-mentioned research challenges by applying state-of-the-art data analytics methods to achieve network optimization goals. Following this research direction, in this dissertation we propose two data-driven algorithms for network traffic clustering and user mobility prediction, and apply these algorithms to real-world optimization tasks in the transportation and communication networks. First, by analyzing large-scale traffic datasets from both networks, we propose a graph-based traffic clustering algorithm to better understand the traffic similarities and variations across different areas and times. Upon this basis, we apply the traffic clustering algorithm to the following two network optimization applications. 1. Dynamic traffic clustering for demand-responsive bikeshare networks. In this application, we dynamically cluster bike stations with similar usage patterns to obtain stable and predictable cluster-wise bike traffic demands, so as to foresee over-demand stations in the network and enable demand-responsive bike scheduling. Evaluation results using real-world data from New York City and Washington, D.C. show that our framework accurately foresees over-demand clusters (e.g. with 0.882 precision and 0.938 recall in NYC), and outperforms other baseline methods significantly. 2. Complementary traffic clustering for cost-effective C-RAN. In this application, we cluster RRHs with complementary traffic patterns (e.g., an RRH in a residential area and an RRH in a business district) to reuse the total capacity of the BBUs, so as to reduce the overall deployment cost. We evaluate our framework with real-world network data collected from the city of Milan, Italy and the province of Trentino, Italy.
Results show that our method effectively reduces the overall deployment cost to 48.4% and 51.7% of the traditional RAN architecture in the two datasets, respectively, and consistently outperforms other baseline methods. Second, by analyzing large-scale user mobility datasets from both networks, we propose [...]
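A minimal sketch of graph-based traffic clustering in the spirit described above: stations become nodes, edges link stations whose demand time series are strongly correlated, and connected components form the clusters (the toy series and the 0.9 threshold are illustrative assumptions).

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
t = np.arange(48)
base_a = np.sin(2 * np.pi * t / 24)            # "residential" daily rhythm (invented)
base_b = np.cos(2 * np.pi * t / 24)            # "business" daily rhythm (invented)
series = {f"s{i}": (base_a if i < 3 else base_b) + 0.1 * rng.standard_normal(48)
          for i in range(6)}

g = nx.Graph()
g.add_nodes_from(series)
names = list(series)
for i, u in enumerate(names):
    for v in names[i + 1:]:
        if np.corrcoef(series[u], series[v])[0, 1] > 0.9:
            g.add_edge(u, v)

print([sorted(c) for c in nx.connected_components(g)])
# expected clusters: ['s0', 's1', 's2'] and ['s3', 's4', 's5']
```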
20

Tsang, Wai-pong Wilburn. "Bio-inspired algorithms for single and multi-objective optimization". Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B42182104.

21

Maronga, Savini. "On the optimization of the fluidized bed particulate coating process". Doctoral thesis, KTH, Chemical Engineering and Technology, 1998. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-2720.

Abstract:

Different aspects that influence the fluidized bed particulate coating process have been investigated. An easy to use procedure for creating temperature and humidity profiles for the gas inside the bed was developed. The procedure involves measuring a limited number of points inside the bed and using these points to generate more data points used for the creation of the profiles. The profiles revealed that different parts of the bed have a different involvement in the coating process. Apart from showing the hydrodynamic conditions inside the bed, the profiles can be used to optimize the size of the bed, to map the net migration of particles and to set the spraying rate limits.

The three-domain representation of the bed was used to model the coating process. The model showed that large beds and the existence of a stagnant region within the bed widen the coating distribution. Increasing the rate of transfer to the spraying domain or increasing the overall mixing of the bed will have the opposite effect of narrowing the coating distribution. The unequal transfer between domains can be used to manipulate the coating distribution. The combination of bed size, different rates of transfer between domains, size of the spraying domain and the rate of spraying can be used to obtain particles that are coated to a particular distribution. The mechanism and kinetics of growth in a top spraying process were determined by coating two poly-distributed seed particles with a cellulose under different operating conditions. The resulting particle distribution reveals that not all particles in the bed are equally coated. For the top spraying process, smaller particles capture more coating than larger particles. A narrowing of the seed distribution was found to increase the chance of coating the large particles. The results also confirmed that particles are coated only after visiting the spraying region, which is small compared to the rest of the bed. A growth model developed using the experimental results introduces a segregation factor which represents the probability of different particle sizes being coated. For the top spraying coating of lactose particles with a cellulose, the segregation factor was found to be an exponentially decaying function of the particle weight.

Keywords: granulation, coating, surface layering, segregation, coating uniformity, temperature profile, humidity profile, growth kinetics, particle coating.
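A small numerical sketch of the segregation factor reported above, assuming an exponentially decaying coating probability; the decay constant, particle weights and counts are invented for illustration.

```python
import numpy as np

weights = np.array([0.5, 1.0, 2.0, 4.0])       # particle weights (arbitrary units)
counts = np.array([250, 400, 250, 100])        # particles per size class
k = 0.6                                        # assumed exponential decay constant

seg = np.exp(-k * weights)                     # segregation factor per size class
coating_share = counts * seg / np.sum(counts * seg)
for w, s in zip(weights, coating_share):
    print(f"weight {w:.1f}: {100 * s:.1f}% of the sprayed coating")
# Lighter particles capture a disproportionate share of the coating per pass.
```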

22

Wei, Kang. "Bio-inspired Reconfigurable Elastomer-liquid Lens: Design, Actuation and Optimization". The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1429657034.

23

Khan, Mukhtaj. "Hadoop performance modeling and job optimization for big data analytics". Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/11078.

Abstract:
Big data has received momentum from both academia and industry. The MapReduce model has emerged as a major computing model in support of big data analytics. Hadoop, an open source implementation of the MapReduce model, has been widely taken up by the community. Cloud service providers such as the Amazon EC2 cloud now support Hadoop user applications. However, a key challenge is that cloud service providers do not have a resource provisioning mechanism to satisfy user jobs with deadline requirements. Currently, it is solely the user's responsibility to estimate the required amount of resources for a job running in a public cloud. This thesis presents a Hadoop performance model that accurately estimates the execution duration of a job and further provisions the required amount of resources for a job to be completed within a deadline. The proposed model employs a Locally Weighted Linear Regression (LWLR) model to estimate the execution time of a job and the Lagrange Multiplier technique for resource provisioning to satisfy user jobs with a given deadline. The performance of the proposed model is extensively evaluated on both an in-house Hadoop cluster and the Amazon EC2 cloud. Experimental results show that the proposed model is highly accurate in job execution estimation and that jobs are completed within the required deadlines when the resource provisioning scheme of the proposed model is followed. In addition, the Hadoop framework has over 190 configuration parameters, some of which have significant effects on the performance of a Hadoop job. Manually setting the optimum values for these parameters is a challenging and time consuming task. This thesis also presents optimization work that enhances the performance of Hadoop by automatically tuning its parameter values. It employs the Gene Expression Programming (GEP) technique to build an objective function that represents the performance of a job and the correlation among the configuration parameters. For the purpose of optimization, Particle Swarm Optimization (PSO) is employed to automatically find optimal or near-optimal configuration settings. The performance of the proposed work is extensively evaluated on a Hadoop cluster, and the experimental results show that it enhances the performance of Hadoop significantly compared with the default settings.
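A minimal sketch of the Locally Weighted Linear Regression step used above for execution-time estimation; the job history and the bandwidth tau are hypothetical.

```python
import numpy as np

# Hypothetical history of (input size in GB, measured execution time in seconds).
X = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
y = np.array([40.0, 65.0, 120.0, 230.0, 450.0, 900.0])

def lwlr_predict(x_query: float, tau: float = 4.0) -> float:
    """Fit a weighted line around the query point and evaluate it there."""
    w = np.exp(-((X - x_query) ** 2) / (2 * tau ** 2))   # Gaussian kernel weights
    A = np.vstack([np.ones_like(X), X]).T
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)     # weighted least squares
    return beta[0] + beta[1] * x_query

print(f"predicted runtime for a 12 GB input: {lwlr_predict(12.0):.0f} s")
```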
24

Curti, Nico <1992>. "Implementation and optimization of algorithms in Biomedical Big Data Analytics". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amsdottorato.unibo.it/9371/1/PhDThesis.pdf.

Abstract:
Big Data Analytics poses many challenges to the research community, which has to handle several computational problems related to the vast amount of data. There is increasing interest in biomedical data, with the aim of achieving so-called personalized medicine, where therapy plans are designed on the specific genotype and phenotype of an individual patient, and algorithm optimization plays a key role to this purpose. In this work we discuss several topics related to Biomedical Big Data Analytics, with special attention to the numerical issues and algorithmic solutions related to them. We introduce a novel feature selection algorithm tailored to omics datasets, proving its efficiency on synthetic and real high-throughput genomic datasets. We tested our algorithm against other state-of-the-art methods, obtaining better or comparable results. We also implemented and optimized different types of deep learning models, testing their efficiency on biomedical image processing tasks. Three novel frameworks for deep learning neural network model development are discussed and used to describe the numerical improvements proposed on various topics. In the first implementation we optimize two Super Resolution models, showing their results on NMR images and proving their efficiency in generalization tasks without retraining. The second optimization involves a state-of-the-art Object Detection neural network architecture, obtaining a significant speedup in computational performance. In the third application we discuss the femur head segmentation problem on CT images using deep learning algorithms. The last section of this work involves the implementation of a novel biomedical database obtained by the harmonization of multiple data sources, which provides network-like relationships between biomedical entities. Data related to diseases and other biological entities were mined using web-scraping methods, and a novel natural language processing pipeline was designed to maximize the overlap between the different data sources involved in this project.
25

Sohangir, Soroosh. "MACHINE LEARNING ALGORITHM PERFORMANCE OPTIMIZATION: SOLVING ISSUES OF BIG DATA ANALYSIS". OpenSIUC, 2015. https://opensiuc.lib.siu.edu/dissertations/1111.

Abstract:
Because of the high time and space complexity involved, generating machine learning models for big data is difficult. This research introduces a novel approach to optimizing the performance of learning algorithms, with a particular focus on big data manipulation. To implement this method, a machine learning platform supporting eighteen machine learning algorithms was built. The platform is tested on four different use cases, and the results are illustrated and analyzed.
26

Zandi, Atashbar Nasim. "Modeling and Optimization of Biomass Supply Chains for Several Bio-refineries". Thesis, Troyes, 2017. http://www.theses.fr/2017TROY0038.

Abstract:
Biomass can play a crucial role as one of the main sources of renewable energies. As logistics holds a significant share of biomass cost, efficient biomass supply chains must be designed to provide bio-refineries with adequate quantities of biomass at reasonable prices and appropriate times. This thesis focuses on modeling and optimization of multi-biomass supply chains for several bio-refineries. A data model is developed to list, analyze and structure the set of required data, in a logical way. The result is a set of tables that can be loaded into mathematical models for solving optimization problems. Then, a multi-period mixed integer linear programming model is proposed to optimize a multi-biomass supply chains for several bio-refineries, at the tactical and strategic level. Refineries can be already placed or located by the model. The aim is to minimize the total costs, including biomass production, storage, handling, refineries setup and transportation costs, while satisfying the demand of refineries in each period. Additionally, a multi-objective model is developed to optimize simultaneously the economic and environmental performance of biomass supply chains. The model is solved by using the ε-constraint method. Furthermore, large-scale tests on real data for two regions of France (Picardie & Champagne-Ardenne) are prepared to evaluate the proposed models. Finally, two-phase approaches are proposed to solve large-scale instances in reasonable running times, while evaluating the loss of optimality compared to the exact model
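A toy version of such a multi-period supply model can be written in a few lines with PuLP; the sets, costs and demands below are invented for illustration, and the binary refinery-location decisions of the full model are omitted.

```python
# Illustrative multi-period biomass shipment LP (a simplified cousin of the
# thesis's MILP): all data below are made-up numbers, not thesis data.
import pulp

farms, refineries, periods = ["f1", "f2"], ["r1"], [1, 2]
supply = {("f1", 1): 60, ("f1", 2): 40, ("f2", 1): 50, ("f2", 2): 70}
demand = {("r1", 1): 80, ("r1", 2): 90}
cost = {"f1": 3.0, "f2": 4.0}  # transport cost per tonne from each farm to r1

m = pulp.LpProblem("biomass", pulp.LpMinimize)
x = pulp.LpVariable.dicts("ship", (farms, refineries, periods), lowBound=0)
m += pulp.lpSum(cost[f] * x[f][r][t] for f in farms for r in refineries for t in periods)
for f in farms:
    for t in periods:  # cannot ship more than each farm produces per period
        m += pulp.lpSum(x[f][r][t] for r in refineries) <= supply[f, t]
for r in refineries:
    for t in periods:  # each refinery's demand must be met in every period
        m += pulp.lpSum(x[f][r][t] for f in farms) >= demand[r, t]
m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(m.objective))
```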
Style APA, Harvard, Vancouver, ISO itp.
27

WILSON, GREGORY. "ANAEROBIC/AEROBIC BIODEGRADATION OF PENTACHLOROPHENOL USING GAC FLUIDIZED BED BIOREACTORS: OPTIMIZATION OF THE EMPTY BED CONTACT TIME". University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1018531262.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
28

Casazza, Marco. "Algorithms for Optimization Problems with Fractional Resources". Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCD048/document.

Pełny tekst źródła
Streszczenie:
In this thesis we consider a class of optimization problems having a distinctive feature: both discrete and continuous decisions need to be taken simultaneously. These problems arise in many practical applications, for example broadband telecommunications and green transportation, where the available resources can be fractionally consumed or assigned. Such problems are proven to be harder than their purely discrete counterparts. We propose effective methodologies to tackle them. Our approach is to consider variants of classical combinatorial optimization problems belonging to three domains: packing, routing, and integrated routing/packing. Our results suggest that effective approaches do exist, reducing the computational effort required to solve these problems. Mostly, they are based on exploiting the structure of optimal solutions to reduce the search space.
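As a minimal, related illustration of fractional resource assignment (not one of the thesis's variants), the classic greedy for the fractional knapsack shows how allowing splits changes a discrete problem:

```python
# Classic fractional knapsack: items may be split, so a density-ordered
# greedy is optimal. This contrasts with the purely discrete 0/1 version.
def fractional_knapsack(items, capacity):
    """items: list of (value, weight). Returns max value when splitting is allowed."""
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        take = min(weight, capacity)      # take the item, or the fraction that fits
        total += value * take / weight
        capacity -= take
        if capacity == 0:
            break
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```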
Style APA, Harvard, Vancouver, ISO itp.
29

Rajachidambaram, Sarojini Priyadarshini. "NANOCONTROLLER PROGRAM OPTIMIZATION USING ITE DAGS". UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/479.

Pełny tekst źródła
Streszczenie:
Kentucky Architecture nanocontrollers employ a bit-serial SIMD-parallel hardware design to execute MIMD control programs. A MIMD program is transformed into equivalent SIMD code by a process called Meta-State Conversion (MSC), which makes heavy use of enable masking to distinguish which code should be executed by each processing element. Both the bit-serial operations and the enable masking imposed on them are expressed in terms of if-then-else (ITE) operations implemented by a 1-of-2 multiplexor, greatly simplifying the hardware. However, it takes many ITEs to implement even a small program fragment. Traditionally, bit-serial SIMD machines had been programmed by expanding a fixed bit-serial pattern for each word-level operation. Instead, nanocontrollers can exploit the fact that ITEs are equivalent to the operations in Binary Decision Diagrams (BDDs) and apply BDD analysis to optimize the ITEs. This thesis proposes and experimentally evaluates a number of techniques for minimizing the complexity of the BDDs, primarily by manipulating normalization ordering constraints. The best method found is a new approach in which a simple set of optimization transformations is followed by normalization using an ordering determined by a Genetic Algorithm (GA).
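A tiny sketch of the ITE/BDD connection, assuming nothing beyond the abstract: ITE nodes with hash-consing so that structurally identical subtrees are shared, which is the sharing property BDD-style analysis exploits (the thesis's normalization and GA-based ordering are not reproduced here).

```python
# Hedged illustration: ITE expression nodes with hash-consing, so that
# structurally identical subexpressions become one shared node.
class ITE:
    _table = {}  # hash-consing table: identical nodes are shared, as in a BDD

    def __new__(cls, var, then, els):
        key = (var, id(then), id(els))
        node = cls._table.get(key)
        if node is None:
            node = super().__new__(cls)
            node.var, node.then, node.els = var, then, els
            cls._table[key] = node
        return node

def ite_eval(node, env):
    """Evaluate an ITE tree under a truth assignment (leaves are Python bools)."""
    if isinstance(node, bool):
        return node
    return ite_eval(node.then, env) if env[node.var] else ite_eval(node.els, env)

x_or_y = ITE("x", True, ITE("y", True, False))    # x OR y as nested ITEs
print(ite_eval(x_or_y, {"x": False, "y": True}))  # True
```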
Style APA, Harvard, Vancouver, ISO itp.
30

Ilicak, Isil. "Bi-objective Bin Packing Problems". Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/2/1079987/index.pdf.

Pełny tekst źródła
Streszczenie:
In this study, we consider two bi-objective bin packing problems that assign a number of weighted items to bins of identical capacity. First, we aim to minimize both the total deviation over bin capacity and the number of bins; we show that these two objectives conflict. Second, we study the problem of minimizing the maximum overdeviation together with the number of bins. We show the similarities of these two problems to parallel machine scheduling problems and benefit from those results while developing our solution approaches. For both problems, we propose exact procedures that generate efficient solutions relative to the two objectives. To increase the efficiency of the solutions, we propose lower and upper bounding procedures. The results of our experiments show that the total overdeviation problem is easier to solve than the maximum overdeviation problem, and that the bin capacity, the weights of the items and the number of items are important factors that affect solution time and quality. Our procedures can solve problems with up to 100 items in reasonable solution times.
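A heuristic sketch of the trade-off studied here: for each candidate number of bins, pack items with the LPT rule and measure total overdeviation above capacity, keeping only the non-dominated (bins, deviation) pairs. This is an illustrative approximation, not the thesis's exact procedure.

```python
# Approximate the efficient frontier of (number of bins, total overdeviation)
# using the LPT (longest processing time) load-balancing heuristic.
def lpt_loads(weights, k):
    loads = [0] * k
    for w in sorted(weights, reverse=True):
        loads[loads.index(min(loads))] += w  # place each item on the lightest bin
    return loads

def frontier(weights, capacity, max_bins):
    pts = []
    for k in range(1, max_bins + 1):
        dev = sum(max(0, load - capacity) for load in lpt_loads(weights, k))
        pts.append((k, dev))
    # keep the non-dominated points (fewer bins vs. less overdeviation)
    return [(k, d) for k, d in pts
            if not any((k2 < k and d2 <= d) or (k2 <= k and d2 < d) for k2, d2 in pts)]

print(frontier([7, 5, 4, 4, 3, 2], capacity=10, max_bins=4))
# e.g. [(1, 15), (2, 5), (3, 0)]: adding bins trades off against overdeviation
```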
Style APA, Harvard, Vancouver, ISO itp.
31

Lee, Moon-bor Bob, i 李滿坡. "Optimization of highway networks". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31952677.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
32

Vanden, Berg Andrew M. "Optimization-simulation framework to optimize hospital bed allocation in academic medical centers". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120223.

Pełny tekst źródła
Streszczenie:
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 99-100).
Congestion, overcrowding, and increasing patient wait times are major challenges that many large academic centers currently face. To address these challenges, hospitals must effectively utilize available beds through proper strategic bed allocation and robust operational day-to-day bed assignment policies. Since daily patient demand for beds is highly variable, the physical capacity allocated to a given clinical service is frequently insufficient to accommodate all of the patients who belong to that service. This situation can lead to extensive patient wait times in various locations in the hospital (e.g., the emergency department), as well as to clinically and operationally undesirable misplacements of patients on hospital floors/beds managed by clinical services other than the ones to which the patients belong. In this thesis, we develop an optimization-simulation framework to optimize the bed allocation at Mass General Hospital. Detailed, data-driven simulation suggests that the newly proposed bed allocation would lead to a significant reduction in patient intra-day wait times in the emergency department and other hospital locations, as well as a major reduction in misplacements of patients in the Medicine service, the largest service in the hospital. We employ a two-pronged approach. First, we developed a detailed simulation of the entire hospital that can be used to assess the effectiveness of day-to-day operational bed assignment policies given a specific bed allocation. However, the simulation does not allow tractable optimization seeking the best bed allocation among all possible allocations. This motivates the development of a network-flow/network-design-inspired mixed integer program that approximates the operational performance of bed allocations and allows us to search effectively for an approximately best allocation. The mixed integer program can be solved via a scenario-sampling approach to provide candidate bed allocations, which are then tested and evaluated in the simulation. These tools facilitate expert discussions on how to modify the existing bed allocation at MGH to improve the day-to-day performance of the bed assignment process.
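The simulation half of such an optimization-simulation loop can be caricatured in a few lines: given a candidate allocation of beds per service, simulate daily Poisson demand and count patients who cannot be placed in their own service. The rates and allocations below are invented, and length-of-stay dynamics are deliberately ignored.

```python
# Toy bed-allocation evaluator: all rates and allocations are assumptions,
# not MGH data, and beds are treated as fully renewed each day.
import numpy as np

def simulate(allocation, rates, days=365, seed=0):
    rng = np.random.default_rng(seed)
    misplaced = 0
    for _ in range(days):
        for service, rate in rates.items():
            demand = rng.poisson(rate)                    # today's arrivals
            misplaced += max(0, demand - allocation[service])
    return misplaced / days                               # avg daily misplacements

rates = {"medicine": 40, "surgery": 25, "oncology": 10}   # hypothetical services
for alloc in [{"medicine": 42, "surgery": 27, "oncology": 11},
              {"medicine": 48, "surgery": 24, "oncology": 8}]:
    print(alloc, "->", round(simulate(alloc, rates), 2))
```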
by Andrew M. Vanden Berg.
S.M.
Style APA, Harvard, Vancouver, ISO itp.
33

Boyle, Kellie. "Optimization of Moving Bed Biofilm Reactor (MBBR) Operation for Brewery Wastewater Treatment". Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39147.

Pełny tekst źródła
Streszczenie:
The significant rise in the number of micro-breweries in North America has increased the need for efficient on-site industrial wastewater facilities. Brewery wastewater is considered a high-strength food-industry wastewater with high variability in both organic and hydraulic loading. Small breweries require cost-effective, reliable, and simple-to-operate treatment technologies to properly manage their wastewater. Moving bed biofilm reactor (MBBR) technology has shown promise at the lab scale and full scale for brewery effluent treatment. MBBR systems offer short hydraulic retention times (HRT), high organic loading rates, and increased treatment capacity and stability due to biofilm retention, all within a compact reactor size compared to other aerobic and attached-growth treatment options. Two MBBR systems utilizing two different carrier types (Kaldnes K5 and Kontakt), plus a suspended growth (SG) control reactor, were used in this study to investigate the impacts of surface area loading rate (SALR) and HRT on attached growth (AG) and SG kinetics and on carrier type for brewery wastewater at 2000 mg-sCOD/L. An increase in SALR from 10 to 55 g-sCOD/m2/d at an HRT of 12 hr resulted in no significant difference in total volumetric removal rates between the MBBR systems and the SG control reactor; however, MLSS concentrations were lower for the MBBR systems at SALRs below 55 g-sCOD/m2/d, which indicated an AG contribution. Over 92% soluble chemical oxygen demand (sCOD) removal was achieved at each SALR in each of the three reactors. These results indicated that the reactors were substrate-limited and SG-controlled. Owing to the SG dependency, the difference between the two carrier types was indeterminate. A decrease in HRT from 12 to 3 hr while maintaining an SALR of 40 g-sCOD/m2/d resulted in a shift from SG to AG dependency in the MBBR systems. The total volumetric removal rates for the MBBR systems were significantly higher at HRTs of 3 and 4 hr than for the SG control reactor, and the AG volumetric removal rates of both MBBR systems were highest at HRTs of 3 and 4 hr. At an HRT of 12 hr, all three reactors maintained over 92% sCOD removal; however, at an HRT of 4 hr the SG control reactor dropped to 88%, and at 3 hr to 61%, whereas the MBBR systems maintained 95% removal at 4 hr and only decreased to 73% at 3 hr. These results indicated that the MBBR systems were more effective at lower HRT than the SG control reactor, with no significant difference between the two carrier types tested. Biofilm morphology and viability on the two carriers were investigated using stereoscopy and confocal laser scanning microscopy (CLSM) combined with live/dead cell staining. Both carriers showed thicker and more viable biofilms at high SALR, and denser, less viable biofilms at low SALR. At lower HRT, the carriers reacted differently, resulting in thicker but less dense biofilms on the Kontakt carriers and thinner but denser biofilms on the K5 carriers; however, no trend in cell viability was observed with changing HRT. Although the systems were SG dominated, based on the MBBR kinetics and the carrier biofilm morphology and cell viability, either carrier would be a viable choice for an MBBR treating brewery wastewater at HRTs between 4 and 12 hr and SALRs between 10 and 55 g-sCOD/m2/d.
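For readers unfamiliar with the loading metrics, the standard definitions used above (SALR as substrate mass per carrier surface area per day, HRT as reactor volume over flow) can be checked with a short helper; the numbers below are illustrative but chosen to reproduce the 40 g-sCOD/m2/d and 12 hr operating point mentioned in the abstract.

```python
# Back-of-envelope helpers for the standard loading metrics; example values
# are assumptions, not the thesis's actual reactor dimensions.
def salr(flow_L_d, scod_mg_L, carrier_area_m2):
    """Surface area loading rate in g-sCOD/m2/d."""
    return flow_L_d * scod_mg_L / 1000.0 / carrier_area_m2

def hrt_hours(volume_L, flow_L_d):
    """Hydraulic retention time in hours."""
    return 24.0 * volume_L / flow_L_d

print(salr(flow_L_d=20, scod_mg_L=2000, carrier_area_m2=1.0))  # 40.0 g-sCOD/m2/d
print(hrt_hours(volume_L=10, flow_L_d=20))                     # 12.0 h
```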
Style APA, Harvard, Vancouver, ISO itp.
34

Mansuri, Dolly N. "Optimization of Formwork Management Using Building Information Modeling (BIM) and Cascading Tool". University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1470743739.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
35

DeBruyne, Sandra DeBruyne. "Bio-Inspired Evolutionary Algorithms for Multi-Objective Optimization Applied to Engineering Applications". University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1542282067378143.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
36

Hahne, William. "Optimization of laser powder bed fusion process parameters for 316L stainless steel". Thesis, Uppsala universitet, Oorganisk kemi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-448263.

Pełny tekst źródła
Streszczenie:
Interest in additive manufacturing techniques has increased considerably in recent years because of their association with good printing resolution, unique design possibilities and distinctive microstructures. In this master's project, 316L stainless steel was printed using metal laser powder bed fusion in an attempt to find process parameters that yield good productivity while maintaining material properties as good as possible. Laser powder bed fusion works by locally melting a powder bed with a laser: when one slice of the material is done, the powder bed is lowered, new powder is added on top, and the process is repeated, building the component layer by layer. In this thesis, samples produced with powder layer thicknesses of 80 μm and 100 μm were investigated. Process parameters such as laser power, scanning speed and hatch spacing were varied in order to establish clear processing windows where the highest productivity and lowest porosity are obtained. The most common defects in all sample batches were lack of fusion, gas pores, and spatter-related pores. The best samples with regard to both porosity and build rate were obtained at normalized build rates between 1.3 and 1.6, with porosity values in the 0.01-0.1% range.
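The build-rate/porosity trade-off explored here is commonly summarized through the volumetric build rate and the volumetric energy density; the sketch below shows these standard LPBF formulas with invented parameter values rather than the thesis's settings.

```python
# Standard LPBF process-parameter formulas; the parameter values below are
# illustrative assumptions, not the thesis's process window.
def build_rate_mm3_s(scan_speed_mm_s, hatch_mm, layer_mm):
    """Volumetric build rate = scan speed x hatch spacing x layer thickness."""
    return scan_speed_mm_s * hatch_mm * layer_mm

def energy_density_J_mm3(power_W, scan_speed_mm_s, hatch_mm, layer_mm):
    """Volumetric energy density = laser power / (speed x hatch x layer)."""
    return power_W / (scan_speed_mm_s * hatch_mm * layer_mm)

print(build_rate_mm3_s(800, 0.12, 0.08))           # mm^3/s
print(energy_density_J_mm3(250, 800, 0.12, 0.08))  # J/mm^3
```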
Style APA, Harvard, Vancouver, ISO itp.
37

Navarro-Brull, Francisco J. "Modeling, Simulation and Optimization of Multiphase Micropacked-Bed Reactors and Capillary Sonoreactors". Doctoral thesis, Universidad de Alicante, 2018. http://hdl.handle.net/10045/90920.

Pełny tekst źródła
Streszczenie:
In the last decades, miniaturized flow chemistry has promised to bring the benefits of process intensification, continuous manufacturing and greener chemistry to the fine chemical industry. However, miniaturized catalytic processes involving gas, liquid, and solids have always been impeded by two main drawbacks: multiphase-flow maldistribution (i.e. gas channeling) and clogging of capillary reactors. In this thesis, first-principles models are used to capture the complexity of multiphase flow in micropacked-bed reactors, which can suffer from poor and unpredictable mass-transfer performance. When the particle diameter is around 100 µm, capillary and viscous forces control the hydrodynamics. Under such conditions, the gas, and not the liquid, flows through preferential channels, causing poor radial dispersion. Experimental observations from the literature were reproduced to validate a physics-based modeling approach, the Phase Field Method (PFM). This simulation strategy sheds light on the impact of micropacked-bed geometry and wettability on the formation of preferential gas channels. Counterintuitively, to homogenize the two-phase flow hydrodynamics and reduce radial mass-transfer limitations, the solvent wettability of the support needs to be restricted, with the best performance obtained when the contact angle is around 60° and capillary forces are still dominant. Visualization experiments showed that ultrasound irradiation can also be used to partially fluidize the bed and modify the hydrodynamics. Under sonication, residence time distributions (RTD) in micropacked-bed reactors revealed a two-order-of-magnitude reduction in dispersion, allowing nearly plug-flow behavior at high gas and liquid flow rates. At a reduced scale, surfaces vibrating with low amplitude were shown to fluidize the bed and to prevent and resolve the capillary tube blockage problems commonly encountered in continuous product synthesis in the fine chemical industry. The modeling and simulation strategy used in this thesis enables a fast prototyping methodology for the proper acoustic design of sonoreactors, whose scale-up was achieved by introducing slits in sonotrodes. In addition, a patent-pending helicoidal capillary sonoreactor has been shown to transform longitudinal vibrating modes into radial and torsional modes, pioneering a new range of chemistry able to handle high concentrations of particles. The contributions of this thesis in the fields of reaction engineering and process intensification demonstrate how computational methods and experimental techniques from other areas of research can be used to foster innovation at a fast pace.
Style APA, Harvard, Vancouver, ISO itp.
38

PAMPARARO, GIOVANNI. "Nanostructured catalysts for the bio-ethanol conversion to acetaldehyde: development and optimization". Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1073525.

Pełny tekst źródła
Streszczenie:
Heterogeneous catalysts, especially when nanostructured, can play a crucial role in minimizing by-products and stabilizing the catalyst itself, thanks to their tailored properties. The efforts of my PhD were focused on synthesizing new nanostructured catalysts for acetaldehyde production from ethanol via oxidative and non-oxidative dehydrogenation reactions. For the oxidative dehydrogenation process, Mo-based catalysts supported on alumina, silica-alumina or silica were synthesized and tested. High acetaldehyde yield was obtained in the temperature range 573-673 K, even if dehydration products form at higher temperatures. Catalytic activity was enhanced when a small amount of silica was introduced into the alumina support, yielding more stable catalysts. Two other catalytic systems were then tested, Nb-P-Si oxides and copper-based catalysts, but the former shows low selectivity to acetaldehyde while the latter catalyses total oxidation to CO2. For non-oxidative dehydrogenation, the research activity started with the synthesis of supported copper catalysts (with Al2O3, Mg2Al2O4 and ZnAl2O4 as supports) by the conventional impregnation method. Although these materials are active for acetaldehyde production, they suffer from deactivation, and several by-products were detected. To improve the catalytic performance of the best catalyst identified, CuO-ZnAl2O4, new synthetic procedures were investigated and optimized. Moreover, during the period abroad, another innovative technique, based on an aerosol-assisted sol-gel process, was employed to synthesize CuO-SiO2 catalysts. All the synthesized catalysts were tested on a laboratory plant and characterized in depth, both morphologically and structurally. Particular attention was also dedicated to the characterization of exhausted catalysts to elucidate the causes of deactivation and the possibilities for regeneration.
Style APA, Harvard, Vancouver, ISO itp.
39

Jottreau, Benoît. "Financial models and price formation : applications to sport betting". Thesis, Paris Est, 2009. http://www.theses.fr/2009PEST1031.

Pełny tekst źródła
Streszczenie:
This thesis is composed of four chapters. The first one deals with the pricing of financial products in a single-jump model for the risky asset, the jump representing the bankruptcy of the quoted firm. We study the pricing of derivatives by utility indifference with an exponential utility. By means of dynamic programming, we show that the bond price is the solution of an ordinary differential equation and that stock-price-dependent options are solutions of a partial differential equation of Hamilton-Jacobi-Bellman type generalizing the Black-Scholes one. We then try to quantify the differences between the prices obtained here and those from the Merton model without jumps. The second chapter deals with a specific jump market: the soccer betting market. We recall the different families of models for a soccer match and introduce a full model which allows pricing of the products that have appeared in this market over the last ten years. Nevertheless, the model's complexity leads us to study a simplified model introduced by Dixon and Robinson, from which we are able to derive closed formulas and simulate prices that we compare to market prices. We note that the implicit calibration gives a pretty good fit of market data. The third chapter develops the approach of Levitt [Lev04] on price formation in a binary betting market held by a monopolistic market-maker operating in a single trading step. We generalize Levitt's results to the European format of betting and propose a method to estimate the quoting method used by the bookmaker. We show that prices are distorted under the pressure of demand and supply, a phenomenon that introduces a market probability under which products can be priced; two intertwined hypotheses can explain this price formation, namely the public's uncertainty about the true value and the extreme risk aversion of the bookmaker. The fourth chapter generalizes this approach to the case of general payoffs and continuous time. The task is more complex, and we derive from dynamic programming the partial differential equations that give the bid and ask prices of the product traded by the market-maker. One result is that, in most models, the bid-ask spread does not depend on the inventory held by the dealer, whereas the mid-quote price strongly reflects the dealer's position. A simplified approach is finally proposed in the multidimensional case.
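A small sketch of the "market probability" idea from the third chapter: a bookmaker's quoted decimal odds embed a margin, and normalising the implied probabilities recovers a probability measure under which bets are effectively priced. The odds below are illustrative.

```python
# Implied probabilities from decimal odds: the raw inverses sum to more than
# one (the bookmaker's overround); normalising them gives a "market" measure.
def implied_probabilities(decimal_odds):
    raw = [1.0 / o for o in decimal_odds]
    margin = sum(raw) - 1.0                  # the bookmaker's overround
    probs = [p / sum(raw) for p in raw]      # normalised market probabilities
    return probs, margin

probs, margin = implied_probabilities([2.40, 3.30, 3.10])  # home / draw / away
print([round(p, 3) for p in probs], round(margin, 3))
```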
Style APA, Harvard, Vancouver, ISO itp.
40

Perin, Giorgio. "Biotechnological optimization of microalgae for the sustainable production of biocommodities". Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424372.

Pełny tekst źródła
Streszczenie:
Current global economic development trends rely strongly on the exploitation of fossil fuels, which are responsible for a net release of greenhouse gases (e.g. CO2) into the atmosphere. In 2014, the Intergovernmental Panel on Climate Change (IPCC) stated that this net atmospheric CO2 increase is anthropogenic and will lead to a rise in global temperature of more than 2 °C before the end of this century. The latter will permanently change the behavior of the climate and the oceans (e.g. leading to their acidification and oxygenation), with a consequent magnification of demographic pressures on food and water security, as well as on the functional biodiversity of several ecosystems. To avoid this scenario, the development of renewable and clean energy sources to sustain a consistent part of the global economy is an unavoidable challenge for our society. A plant-biomass-based economy could meet this need, but several studies have predicted food-price inflation and a concrete carbon debt as consequences of this scenario. By contrast, the exploitation of microalgal biomass could avoid these issues and have a positive effect on atmospheric CO2 levels, leading to its sequestration and fixation as organic carbon. Currently, microalgae are the best CO2-sequestering organisms, thanks to photosynthetic rates higher than those of plants. Their ability to grow on marginal land and to use wastewater opens the door to their application as agents mitigating the environmental impact of current industrial processes. Their exploitation could be the key to developing an integrated process in which biomass is used to convert a majority of the current economy toward environmentally friendly processes. Despite this promising scenario, mostly wild-type microalgal species are currently available for these purposes. Their evolution in a natural environment, different from the artificial one used during intensive industrial cultivation, strongly impairs their theoretical biomass productivities, leading to unsatisfactory values. The development of an algae-based economy depends on the efficient conversion of light energy into biomass and on the optimization of metabolic pathways to maximize the synthesis of products of interest. Still far from economically competitive and energetically sustainable industrial cultivation, these organisms therefore need to undergo biotechnological optimization to reach the competitiveness threshold. Although there is no single ideal species that could meet all human needs, we focused on the seawater species Nannochloropsis gaditana, a promising candidate for both basic biological and applied investigations. This PhD thesis was conceived to provide molecular insight into the photosynthetic efficiency and metabolic regulation of N. gaditana, so as to provide molecular targets for its biotechnological improvement. A prerequisite of this experimental work was the optimization of the currently available molecular toolkit for the genetic manipulation of N. gaditana, and the available molecular information was therefore significantly improved as well. In chapter II, random mutagenesis approaches were performed in order to isolate mutant strains with photosynthetic phenotypes likely to be more suitable for intensive growth conditions.
The selected strains indeed showed improved photosynthesis under limiting growth conditions, also serving as biological tools to improve the available information on the molecular elements controlling light-use efficiency in this organism. When tested under lab-scale growth conditions, strain E2, selected for a reduced chlorophyll content, showed improved biomass productivity, providing a proof of concept that the biotechnological approaches developed here can improve the photosynthetic performance of this organism in the artificial environment of a photobioreactor. Among the isolated mutant strains, in chapter III two major photosynthetic phenotypes, reduced chlorophyll content (strain E2) and inability to activate NPQ (strain I48), were chosen as selection criteria to improve light penetration and energy conversion, respectively, in an intensive culture. Both showed enhanced biomass productivity in industry-simulating cultures, proving the theoretical advantage of their exploitation. However, when the growth conditions changed, altering light availability, the selected mutants changed their behavior: they were not able to outperform the WT in all the conditions tested. This would explain why the data published to date for the same mutants in other species often provide contrasting results. We concluded that photosynthetic mutants can modulate their phenotype in relation to the growth conditions, some of which may highlight their drawbacks rather than their benefits; genetic engineering efforts therefore have to be tailored to the growth conditions used. The forward-genetics strategy developed here could open the door to identifying the molecular basis of photosynthesis regulation in this promising species. In chapter IV, mutant strain I48 was further investigated to identify the genetic basis of its phenotype. Thanks to whole-genome re-sequencing, we identified a splicing variant in the 5' donor splicing site of the 4th intron of the gene Naga_100173g12, which encodes the LHCX1 protein. This mutation causes retention of the intron sequence, leading to a truncated protein product which is likely degraded. The absence of the LHCX1 protein correlates strongly with the inability to activate NPQ, since this protein clade is well known to be involved in the activation of this mechanism; future complementation of the phenotype will serve to validate this conclusion. Moreover, the LHCX1 protein was found to co-localize with PSI in N. gaditana, so strain I48 could also serve as an optimal tool to investigate its biological role further. To develop an efficient biotechnological optimization strategy, the information on the metabolic regulation of N. gaditana has to be greatly enriched. Understanding the direction of metabolic fluxes could make it possible to act specifically on those involved in the accumulation of a given product, without affecting other pathways and thereby risking a negative impact on growth. In chapter V, an integrated analysis combining genome-wide, biochemical and physiological approaches helped us decipher the metabolic remodeling by which N. gaditana switches its metabolism toward greater lipid production under excess-light conditions. Excess light induced the accumulation of DAGs and TAGs, together with the up-regulation of genes involved in their biosynthesis.
We observed the induction of cytosolic fatty acid synthase (FAS1) genes and the down-regulation of the chloroplast ones (FAS2). Lipid accumulation is accompanied by the regulation of triose phosphate/inorganic phosphate transport across the chloroplast membranes, tuning the metabolic allocation of carbon between cell compartments and favoring the cytoplasm and endoplasmic reticulum at the expense of the chloroplast. This highlights the flexibility of the metabolism of N. gaditana in responding to environmental needs. In chapter VI, the information gained from the latter work was exploited to test the potential of this promising species as a protein expression platform. We built a modular system for protein overexpression in which the regulatory sequences were chosen among those shown to induce a high level of transcription or to be highly regulated by light availability. N. gaditana proved to be a very promising host for protein expression, given the higher luciferase activity observed with respect to the reference species for these applications, C. reinhardtii. A method to test the efficacy of several regulatory sequences in driving protein expression was developed, along with several expression vectors ready to be tested. The investigation of the metabolic regulation of N. gaditana in chapter V showed fine tuning of its photosynthetic apparatus components under different light conditions. Focusing on LHC proteins, we identified a new LHCX protein in this species, called LHCX3 (gene ID: Naga_101036g3), whose coding sequence had not been annotated correctly. In chapter VII, the correct coding sequence of this gene was further investigated and experimentally validated with molecular techniques. The LHCX3 protein proved to be fused with an N-terminal fasciclin I-like domain, and a sequence analysis together with a preliminary evolutionary study was performed to infer the biological role of this association. Since algal metabolism relies entirely on light availability, investigating the effect of light intensity on growth is essential for industrial application. In chapter VIII, we developed a micro-scale platform, which we called a micro-photobioreactor, to easily investigate the impact of light intensity on N. gaditana growth. We were able to test the effect of different light regimes simultaneously, including on photosynthetic performance, in an integrated system that could be combined with nutrient-availability studies, speeding up the characterization of N. gaditana.
Three appendix sections are also included in the thesis, in which some of the experimental techniques exploited in this work were applied to different organisms, toward the common goal of investigating light-use efficiency and the molecular elements involved in its regulation. In appendix I, the development of a mathematical model predicting growth and fluorescence data for the species Nannochloropsis salina is described. The work was carried out in collaboration with Prof. Fabrizio Bezzo of the industrial engineering department of the University of Padova. Predictive models representing the phenomena affecting algal growth could be very helpful in designing and optimizing production systems at the industrial level. The developed model represents N. salina growth well over a wide range of light intensities, and could be further extended to describe the influence on growth of other parameters, such as nutrient availability and mixing. In appendix II, in vivo chlorophyll fluorescence monitoring was exploited to study the photosynthetic features of rice plants exposed to salt stress. The results presented are part of a wider project (in collaboration with Prof. Fiorella Lo Schiavo of the biology department of the University of Padova) aiming to describe the physiological, biochemical and molecular remodeling undergone by one of the world's major food crops in response to salt stress. Describing the impact of environmental stresses on photosynthesis is essential for controlling biomass productivity, since plant metabolism relies strongly on photosynthesis for growth. We showed the activation of the NPQ mechanism in salt-tolerant plants, highlighting the importance of monitoring photosynthetic features to predict plant performance directly in the field. In appendix III, 13C-15N-labeled thylakoids of Chlamydomonas reinhardtii were isolated from the cw15 and npq2 mutant strains in order to study their structure and dynamics, in terms of protein and lipid components, in situ by solid-state NMR, in collaboration with Prof. Anjali Pandit's group at the Leiden Institute of Chemistry. These analyses will serve to investigate the remodeling of the photosynthetic membranes from an active state (strain cw15) to a photo-protective state (mutant strain npq2) during the switch toward excess-light conditions, with the final aim of understanding the biochemical processes regulating this event.
Style APA, Harvard, Vancouver, ISO itp.
41

Kalavri, Vasiliki. "Performance Optimization Techniques and Tools for Distributed Graph Processing". Doctoral thesis, KTH, Programvaruteknik och Datorsystem, SCS, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-192471.

Pełny tekst źródła
Streszczenie:
In this thesis, we propose optimization techniques for distributed graph processing. First, we describe a data processing pipeline that leverages an iterative graph algorithm for automatic classification of web trackers. Using this application as a motivating example, we examine how the asymmetrical convergence of iterative graph algorithms can be used to reduce the amount of computation and communication in large-scale graph analysis. We propose an optimization framework for fixpoint algorithms and a declarative API for writing fixpoint applications. Our framework uses a cost model to automatically exploit asymmetrical convergence and evaluate execution strategies at runtime. We show that our cost model achieves speedups of up to 1.7x and communication savings of up to 54%. Next, we propose using the concepts of semi-metricity and the metric backbone to reduce the amount of data that needs to be processed in large-scale graph analysis. We provide a distributed algorithm for computing the metric backbone using the vertex-centric programming model. Using the backbone, we can reduce graph sizes by up to 88% and achieve speedups of up to 6.7x.
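The metric-backbone idea can be illustrated on a toy graph: an edge is semi-metric when some indirect path is shorter than the edge itself, and removing such edges preserves all shortest paths. The sketch below is sequential and illustrative, not the thesis's distributed vertex-centric algorithm.

```python
# Compute the metric backbone of a tiny weighted graph by dropping
# semi-metric edges (edges with a shorter indirect route).
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1), ("b", "c", 1), ("a", "c", 5)])

backbone = G.copy()
for u, v, w in G.edges(data="weight"):
    dist = nx.shortest_path_length(G, u, v, weight="weight")
    if dist < w:                      # a shorter indirect route exists
        backbone.remove_edge(u, v)    # (u, v) is semi-metric

print(sorted(backbone.edges()))       # [('a', 'b'), ('b', 'c')]
```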


Style APA, Harvard, Vancouver, ISO itp.
42

Karppanen, E. (Erkki). "Advanced control of an industrial circulating fluidized bed boiler using fuzzy logic". Doctoral thesis, University of Oulu, 2000. http://urn.fi/urn:isbn:9514255194.

Pełny tekst źródła
Streszczenie:
Circulating Fluidized Bed (CFB) boilers are widely used for multi-fuel combustion of waste and bio-fuels. When several non-homogeneous fuels with varying heat values are burned simultaneously, the boiler control system faces various challenges, especially since it is not feasible to reliably measure the energy content of the multi-fuel flow. In order to fulfill energy production needs and maintain the ability to burn low-grade fuels, co-firing with high-heat-value fuels such as gas, oil or coal is needed. Fuzzy Logic Control (FLC) has been used successfully to solve control challenges where operators' process expertise can be transformed into automation. Real-life control objects are often non-linear because the dynamics change with the operating point, or because of other essential non-linearities in the combustion process. The proposed fuzzy control applications were developed to solve control challenges that operators meet in the daily operation of a 150 MW(th) CFB, Varenso Oy's (Stora Enso Oyj) K6 boiler in Varkaus, Finland. Before being implemented in the full-scale boiler, they were tested on a 2 MW(e) pilot-plant boiler at Foster Wheeler Energia Oy's Research Center in Karhula, Finland. According to the industrial experiments, the four applications discussed in this thesis (steam pressure control, compensation of fuel quality fluctuation, fuel-feed optimization and increased bed inventory monitoring) showed satisfactory performance, and various improvements to boiler control were achieved. Fuzzy logic control was shown to be a notable tool for improving multi-fuel CFB boiler control.
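A toy fuzzy controller in the spirit of the steam-pressure application: triangular membership functions over the pressure error and weighted-average defuzzification. The rule base, ranges and gains are invented for illustration and do not reproduce the thesis's applications.

```python
# Minimal fuzzy-logic control step: fuzzify the pressure error, apply three
# rules (low / ok / high), defuzzify by weighted average. All numbers invented.
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuel_feed_change(pressure_error_bar):
    # memberships: pressure too low / about right / too high
    low = tri(pressure_error_bar, -4, -2, 0)
    ok = tri(pressure_error_bar, -1, 0, 1)
    high = tri(pressure_error_bar, 0, 2, 4)
    # rule outputs: increase / hold / decrease fuel feed (in %)
    actions = {10.0: low, 0.0: ok, -10.0: high}
    total = sum(actions.values()) or 1.0
    return sum(a * m for a, m in actions.items()) / total

print(fuel_feed_change(-1.5))  # pressure below setpoint -> feed more fuel
```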
Style APA, Harvard, Vancouver, ISO itp.
43

李國誠 i Kwok-shing Lee. "Convergences of stochastic optimization algorithms". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B3025632X.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
44

Lü, Lin, i 吕琳. "Geometric optimization for shape processing". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B46483640.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
45

Boisard, Olivier. "Optimization and implementation of bio-inspired feature extraction frameworks for visual object recognition". Thesis, Dijon, 2016. http://www.theses.fr/2016DIJOS016/document.

Pełny tekst źródła
Streszczenie:
Industry has growing needs for so-called "intelligent systems", capable of not only acquiring data, but also analysing it and making decisions accordingly. Such systems are particularly useful for video surveillance, in which case alarms must be raised in case of an intrusion. For cost-saving and power-consumption reasons, it is better to perform that processing as close to the sensor as possible. To address that issue, a promising approach is to use bio-inspired frameworks, which consist in applying models from computational biology and the cognitive sciences to industrial problems. The work carried out during this doctorate consisted in selecting bio-inspired feature extraction frameworks and optimizing them with the aim of implementing them on dedicated hardware platforms for computer vision applications. First, we propose a generic algorithm, usable in several scenarios, that has acceptable complexity and a low memory footprint. Then, we propose optimizations for a more general framework, based essentially on degrading the precision of the data encoding, thus easing its implementation on embedded systems, along with a hardware implementation based on these optimizations. Results suggest that while the framework we developed may not be as accurate as the state of the art, it is more generic. Furthermore, the optimizations we propose are fully compatible with other optimizations from the literature, and provide encouraging prospects for future developments. Finally, both contributions have a scope that goes beyond the frameworks studied in this document, and may be applied to other, more widely used frameworks as well.
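As a hedged sketch of the precision-degradation idea (the bit widths and the symmetric quantization scheme are assumptions, not the thesis's actual encoding), the following snippet quantizes feature weights to low-bit fixed point and measures the worst-case rounding error:

```python
# Quantize real-valued feature weights to low-bit fixed point so a
# feature extractor fits on embedded hardware. Illustrative only.

import numpy as np

def quantize(weights, bits=8):
    """Symmetric uniform quantization of an array to `bits` bits."""
    scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
    q = np.round(weights / scale).astype(np.int32)   # integer codes
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float64) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1000)
q, s = quantize(w, bits=4)                  # aggressive 4-bit degradation
err = np.max(np.abs(w - dequantize(q, s)))  # worst-case rounding error
print(f"max abs error at 4 bits: {err:.4f}")
```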
Style APA, Harvard, Vancouver, ISO itp.
46

Duespohl, Dale W. "Modeling and optimization of a cross-flow, moving-bed, flue gas desulfurization reactor". Ohio : Ohio University, 1995. http://www.ohiolink.edu/etd/view.cgi?ohiou1179511746.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
47

Bentley, Jason A. "Systematic process development by simultaneous modeling and optimization of simulated moving bed chromatography". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47531.

Pełny tekst źródła
Streszczenie:
Adsorption separation processes are extremely important to the chemical industry, especially in the manufacturing of food, pharmaceutical, and fine chemical products. This work addresses three main topics: first, systematic decision-making between rival gas-phase adsorption processes for the same separation problem; second, process development for liquid-phase simulated moving bed chromatography (SMB); third, accelerated startup for SMB units. All of the work in this thesis uses model-based optimization to answer complicated questions about process selection, process development, and control of transient operation. It is shown in this thesis that there is a trade-off between productivity and product recovery in the gaseous separation of enantiomers using SMB and pressure swing adsorption (PSA). These processes are considered as rivals for the same separation problem, and it is found that each process has a particular advantage that may be exploited depending on the production goals and economics. The processes are compared on a fair basis of equal capital investment, and the same multi-objective optimization problem is solved with equal constraints on the operating parameters. Secondly, this thesis demonstrates by experiment a systematic algorithm for SMB process development that utilizes dynamic optimization, transient experimental data, and parameter estimation to arrive at optimal operating conditions for a new separation problem in a matter of hours. By comparison, conventional process development for SMB relies on careful system characterization using single-column experiments and manual tuning of operating parameters, which may take days or weeks. The optimal operating conditions found by this new method ensure that both the high-purity constraints and optimal productivity are satisfied. The proposed algorithm proceeds until the SMB process is optimized, without manual tuning. In some case studies, it is shown for both linear and nonlinear isotherm systems that the optimal performance can be reached in only two changes of operating conditions following the proposed algorithm. Finally, it is shown experimentally that the startup time for a real SMB unit is significantly reduced by solving model-based startup optimization problems using the SMB model developed from the proposed algorithm. The startup acceleration with purity constraints reduces the startup time by about 44%, and it is confirmed that product purities are maintained during the operation. Significant cost savings, in terms of decreased processing time and increased average product concentration, can be attained using a relatively simple startup acceleration strategy.
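The general shape of such a model-based optimization, maximizing productivity subject to a purity constraint, can be sketched as below; the algebraic "model" is a deliberately simple placeholder, not the thesis's chromatography model, and the variable names and bounds are assumptions.

```python
# Toy stand-in for model-based SMB optimization: maximize
# productivity over two operating variables subject to a purity
# constraint. A real application would call a simulated SMB model.

from scipy.optimize import minimize

def productivity(x):
    feed_rate, switch_time = x
    return feed_rate / switch_time          # placeholder objective

def purity(x):
    feed_rate, switch_time = x
    # Placeholder: purity degrades with feed rate, improves with
    # longer switching; a full model would be simulated here.
    return 1.0 - 0.05 * feed_rate / switch_time

result = minimize(
    lambda x: -productivity(x),             # SciPy minimizes
    x0=[1.0, 2.0],
    bounds=[(0.1, 10.0), (0.5, 10.0)],
    constraints=[{"type": "ineq",           # enforce purity(x) >= 0.99
                  "fun": lambda x: purity(x) - 0.99}],
    method="SLSQP",
)
print(result.x, -result.fun, purity(result.x))
```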
Style APA, Harvard, Vancouver, ISO itp.
48

Price, Dorielle T. "Optimization of Bio-Impedance Sensor for Enhanced Detection and Characterization of Adherent Cells". Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4208.

Pełny tekst źródła
Streszczenie:
This research focuses on the detection and characterization of cells using impedance-based techniques to understand the behavior and response of cells to internal and environmental changes. In combination with impedimetric sensing techniques, the biosensors in this work allow rapid, label-free, quantitative measurements and are very sensitive to changes in environment and cell morphology. The biosensor design and measurement setup are optimized to detect and differentiate cancer cells and healthy (normal) cells. The outcome of this work will provide a foundation for enhanced 3-dimensional tumor analysis and characterization, thus creating an avenue for earlier cancer detection and reduced healthcare costs. The magnitude of cancer-related deaths is a result of late diagnosis and the fact that cancer is challenging to treat due to the non-uniform nature of the tumor. In order to characterize and treat individual cells based on their malignant potential, it is important to have a measurement technique with enhanced spatial resolution and increased sensitivity. This requires the study of individual or small groups of cells that make up the entire tissue mass. The overall objective of this research is to optimize a microelectrode biosensor and obtain statistically relevant data from a cell culture using an independent multi-electrode design. This provides a means to explore the feasibility of electrically characterizing cells with greater accuracy and enhanced sensitivity.
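The underlying sensing principle can be illustrated with a simplified electrode/cell equivalent circuit (solution resistance in series with a parallel RC representing the cell layer); the component values below are hypothetical, not the thesis's fitted parameters.

```python
# Impedance spectrum of a simplified electrode/cell equivalent
# circuit: R_sol in series with (R_cell parallel C_cell).
# All component values are illustrative assumptions.

import numpy as np

def cell_impedance(freq_hz, r_sol=200.0, r_cell=5e3, c_cell=1e-9):
    """Complex impedance of R_sol + (R_cell || C_cell) at freq_hz."""
    omega = 2 * np.pi * freq_hz
    z_cap = 1.0 / (1j * omega * c_cell)            # capacitor branch
    z_par = (r_cell * z_cap) / (r_cell + z_cap)    # parallel RC
    return r_sol + z_par

freqs = np.logspace(2, 7, 6)                       # 100 Hz .. 10 MHz
for f, z in zip(freqs, cell_impedance(freqs)):
    phase = np.degrees(np.angle(z))
    print(f"{f:10.0f} Hz  |Z| = {abs(z):8.1f} ohm  phase = {phase:6.1f} deg")
```

Changes in cell coverage or morphology shift R_cell and C_cell, which is what makes the measured spectrum a label-free readout of cell state.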
Style APA, Harvard, Vancouver, ISO itp.
49

Vaneeckhaute, Céline. "Nutrient recovery from bio-digestion waste : from field experimentation to model-based optimization". Doctoral thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/26116.

Pełny tekst źródła
Streszczenie:
"Thèse en cotutelle pour le doctorat en génie des eaux Université Laval Québec et Ghent University à Ghent, Belgique"
The increasing awareness of natural resource depletion, the increasing demand for nutrients and energy for food production, and ever more stringent nutrient discharge and fertilization limits have resulted in increased attention to nutrient recovery from municipal and agricultural wastes. This PhD dissertation aims at stimulating the transition to a bio-based economy by providing (tools to develop) sustainable strategies for nutrient recovery from organic wastes following bio-energy production through anaerobic digestion (= bio-digestion waste). Particular attention is paid to the valorization of the recovered products as renewable substitutes for chemical fertilizers and/or as sustainable organo-mineral fertilizers in agriculture. Three complementary research phases were conducted: 1) technology inventory and product classification, 2) product value evaluation, and 3) process modelling and optimization. In the first phase, a systematic technology review and product classification were performed. In phase 2, product characterizations and mass balance analyses at full-scale waste(water) resource recovery facilities (WRRFs) were executed. An economic and ecological evaluation of different bio-based fertilization scenarios was conducted, and the most sustainable scenarios were selected for subsequent agronomic evaluation at field and greenhouse scale. In phase 3, a generic nutrient recovery model library was developed, aiming at fertilizer quantity and quality as model outputs. Increased insight into unit process performance and interactions was obtained through global sensitivity analyses. The models were successfully used as a tool for treatment train configuration and optimization. Based on all acquired knowledge, a generic roadmap for setting up nutrient recovery strategies as a function of fertilizer markets, legislation, and waste characterization was established. As such, the present dissertation further develops the concepts of maximally closing nutrient cycles in a cradle-to-cradle approach. The work reveals important evidence of the positive impact of recovered products on the economy, agronomy, and ecology of intensive plant production. Moreover, it provides the fundamental information and tools to facilitate the implementation and optimization of sustainable nutrient recovery strategies. All of this may open up new opportunities for sustainable and more bio-based economic growth and thus create a win-win situation for the environment, society, and the economy in Belgium, Canada, and beyond.
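The global-sensitivity-analysis step mentioned above can be sketched with a crude Monte Carlo estimate of first-order sensitivity indices; the toy "fertilizer quality" model and the parameter ranges are assumptions, not the thesis's model library.

```python
# Crude variance-based sensitivity screening: sample inputs, bin
# each input, and compare the variance of conditional means to the
# total output variance. The surrogate model is a placeholder.

import numpy as np

def fertilizer_n_content(ph, mg_dose, temp):
    """Toy surrogate for recovered-product N content (%)."""
    return 2.0 + 0.8 * ph - 0.3 * mg_dose + 0.05 * temp

rng = np.random.default_rng(42)
n = 10_000
ph   = rng.uniform(7.0, 9.0, n)     # assumed operating ranges
mg   = rng.uniform(0.8, 1.5, n)
temp = rng.uniform(10.0, 35.0, n)
y = fertilizer_n_content(ph, mg, temp)

for name, x in [("pH", ph), ("Mg dose", mg), ("temperature", temp)]:
    bins = np.quantile(x, np.linspace(0, 1, 21))
    idx = np.clip(np.digitize(x, bins) - 1, 0, 19)
    cond_means = np.array([y[idx == b].mean() for b in range(20)])
    print(f"{name:12s} S1 ~ {cond_means.var() / y.var():.2f}")
```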
Style APA, Harvard, Vancouver, ISO itp.
50

Vyhnalek, Brian. "Bio-Inspired Optimization of Ultra-Wideband Patch Antennas Using Graphics Processing Unit Acceleration". Cleveland State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=csu1398262685.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.