Dissertations / Theses on the topic "Optimization-based evaluation"

Follow this link to see other types of publications on the topic: Optimization-based evaluation.

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Choose the source type:

Consult the top 47 dissertations / theses (bachelor's, master's, or doctoral) for your research on the topic "Optimization-based evaluation".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its abstract online, whenever it is available in the metadata.

Browse dissertations / theses from many scientific disciplines and compile a correct bibliography.

1

Alvarez, Mendoza Patricio. "Evaluating the effectiveness of signal timing optimization based on microscopic simulation evaluation". FIU Digital Commons, 2008. http://digitalcommons.fiu.edu/etd/1271.

Full text
Abstract:
The optimization of the timing parameters of traffic signals provides for efficient operation of traffic along a signalized transportation system. Optimization tools with macroscopic simulation models have been used to determine optimal timing plans. These plans have, in some cases, been evaluated and fine-tuned using microscopic simulation tools. A number of studies show inconsistencies between the results of optimization tools based on macroscopic simulation and the results obtained from microscopic simulation, and no attempts have been made to determine the reason behind these inconsistencies. This research investigates whether adjusting the parameters of macroscopic simulation models to correspond to the calibrated microscopic simulation model parameters can reduce these inconsistencies. The adjusted parameters include platoon dispersion model parameters, saturation flow rates, and cruise speeds. The results show that adjusting cruise speeds and saturation flow rates can significantly improve the optimization/macroscopic simulation results, as assessed by microscopic simulation models.
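
Of the parameters the study adjusts, platoon dispersion has a standard closed form in TRANSYT-style macroscopic tools: Robertson's model. A minimal sketch follows, with commonly cited default coefficients rather than the calibrated values used in the thesis.

```python
def robertson_dispersion(upstream, travel_time, alpha=0.35, beta=0.8):
    """Robertson's platoon dispersion model: the downstream flow profile
    is a recursive smoothing of the upstream departure profile,
    qd[t] = F * qu[t - T] + (1 - F) * qd[t - 1]."""
    T = max(1, round(beta * travel_time))          # lagged travel time (steps)
    F = 1.0 / (1.0 + alpha * beta * travel_time)   # smoothing factor
    downstream = [0.0] * (len(upstream) + 3 * T)   # room for the dispersed tail
    for t in range(T, len(downstream)):
        qu = upstream[t - T] if t - T < len(upstream) else 0.0
        downstream[t] = F * qu + (1.0 - F) * downstream[t - 1]
    return downstream

# a 4-step platoon of 10 veh/step dispersing over a 12-step link
profile = robertson_dispersion([10, 10, 10, 10], travel_time=12)
```
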
APA, Harvard, Vancouver, ISO, and other styles
2

Johnson, Christian Axel. "Optimization-based biomechanical evaluation of isometric exertions on a brake wheel". Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-03032009-041004/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Martins, Pedro Miguel Pereira Serrano. "Evaluation and optimization of a session-based middleware for data management". Master's thesis, Faculdade de Ciências e Tecnologia, 2014. http://hdl.handle.net/10362/12609.

Full text
Abstract:
Dissertation submitted for the degree of Master in Informatics Engineering
The current massive daily production of data has created an unprecedented opportunity for information extraction in many domains. However, this huge rise in the quantity of generated data that needs to be processed, stored, and delivered in a timely fashion has created several new challenges. In an effort to attack these challenges, [Dom13] proposed a middleware built around the concept of a Session, capable of dynamically aggregating, processing, and disseminating large amounts of data to groups of clients depending on their interests. However, this middleware is deployed on a commercial cloud with limited processing support in order to reduce its costs. Moreover, it does not explore the scalability and elasticity capabilities provided by the cloud infrastructure, which presents a problem even if the associated costs are not a concern. This thesis proposes to improve the middleware's performance and to add the capability of scaling inside a cloud by requesting or dismissing additional instances. Additionally, this thesis addresses the scalability and cost problems by exploring alternative deployment scenarios for the middleware that consider free infrastructure providers and open-source cloud management providers. To achieve this, an extensive evaluation of the middleware's architecture is performed using a profiling tool and several test applications. This information is then used to propose a set of solutions for the performance and scalability problems; a subset of these is implemented and tested again to evaluate the benefits gained.
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Dongwei. "A REUSED DISTANCE BASED ANALYSIS AND OPTIMIZATION FOR GPU CACHE". VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4840.

Full text
Abstract:
As a throughput-oriented device, the Graphics Processing Unit (GPU) has integrated caches similar to those of CPU cores. However, applications in GPGPU computing exhibit distinct memory access patterns. The caches in GPU cores typically suffer from thread contention and resource over-utilization, yet few detailed studies have examined the root of this phenomenon. In this work, we analyze the memory accesses of twenty benchmarks based on reuse distance theory and quantify their patterns. Additionally, we discuss optimization suggestions and implement a Bypass-Aware (BA) cache that can intelligently bypass thrashing-prone candidates. The BA cache is a cost-efficient design with two extra bits in each line, serving as flags to make the bypassing decision and to find the victim cache line. Experimental results show that the BA cache can improve system performance by around 20% and reduce the cache miss rate by around 11% compared with a traditional design.
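
The abstract does not spell out the flag semantics, so the sketch below is only one plausible reading of a two-bit bypass-aware line, not the thesis's actual design: a block bypasses the set when every resident line has shown reuse.

```python
from dataclasses import dataclass

@dataclass
class Line:
    tag: int = -1
    valid: bool = False
    reused: bool = False    # extra bit 1: line was hit after insertion
    protect: bool = False   # extra bit 2: shield the line from eviction

class BACacheSet:
    """Toy bypass-aware set: an incoming block bypasses the cache when
    every resident line is protected (i.e., is showing reuse)."""

    def __init__(self, ways=4):
        self.lines = [Line() for _ in range(ways)]

    def access(self, tag):
        for ln in self.lines:
            if ln.valid and ln.tag == tag:          # hit: record reuse
                ln.reused = ln.protect = True
                return "hit"
        candidates = [ln for ln in self.lines if not ln.protect]
        if not candidates:                          # all lines show reuse:
            for ln in self.lines:                   # age the protection bits
                ln.protect = False
            return "bypass"                         # serve without filling
        victim = next((ln for ln in candidates if not ln.valid),
                      candidates[0])                # prefer an empty way
        victim.tag, victim.valid = tag, True
        victim.reused = victim.protect = False
        return "miss-fill"
```
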
APA, Harvard, Vancouver, ISO, and other styles
5

McMulkin, Mark L. "Investigation and empirical evaluation of inputs to optimization-based biomechanical trunk models". Diss., This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-10032007-172024/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Garcés, Monge Luis. "Knowledge-based configuration : a contribution to generic modeling, evaluation and evolutionary optimization". Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2019. http://www.theses.fr/2019EMAC0003/document.

Full text
Abstract:
In a context of mass customization, the concurrent configuration of a product and its production process constitutes an important industrial challenge: numerous options or alternatives, numerous links or constraints, and a need to optimize the choices made must all be taken into account. This problem is called O-CPPC (Optimization of Concurrent Product and Process Configuration). We consider this problem as a CSP (Constraint Satisfaction Problem) and optimize it with evolutionary algorithms. A state of the art shows that: i) most studies are illustrated with examples specific to an industrial or academic case and not representative of the existing diversity; and ii) optimization performance needs to improve in order to gain interactivity and face larger problems. In response to the first point, this thesis proposes a generic model of the O-CPPC problem, which is used to generate a realistic benchmark for evaluating optimization algorithms. This benchmark is then used to analyze the performance of the CFB-EA evolutionary approach, one of whose strengths is to quickly propose a Pareto front near the optimum. To answer the second point, an improvement of this method is proposed and evaluated: starting from a first approximate Pareto front, the user is asked to choose an area of interest, and the search for solutions is restricted to that area only. This improvement results in significant computing time savings.
APA, Harvard, Vancouver, ISO, and other styles
7

Ghimire, Rajiv, and Mustafa Noor. "Evaluation and Optimization of Quality of Service (QoS) In IP Based Networks". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3920.

Full text
Abstract:
The purpose of this thesis is to evaluate and analyze the performance of the RED (Random Early Detection) algorithm and of our proposed RED variant. As an active queue management scheme, RED has been an emerging topic in the last few years. Quality of Service (QoS) is a central issue in today's Internet; the name itself signifies that special treatment is given to particular traffic. As network traffic has grown exponentially, end users have often failed to get the service they paid for and expected. To overcome this problem, QoS for packet transmission came under discussion in the Internet community. RED is an active queue management system that randomly drops packets whenever congestion occurs, and it is one of the active queue management systems designed for achieving QoS. In order to address this problem and increase the performance of the existing algorithm, we modified the RED algorithm. Our proposed solution is able to minimize packet drops over a particular duration of time while achieving the desired QoS. An experimental approach is used to validate the research hypothesis. Results show that the packet-dropping probability of our proposed RED algorithm during the simulation scenarios is significantly reduced by calculating the probability value early and then invoking the pushback mechanism according to that calculated value.
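
For reference, classic RED (the starting point for the proposed variant) derives its drop probability from an exponentially weighted moving average of the queue length. A minimal sketch with typical textbook parameters, not the thesis's settings:

```python
def red_step(avg, qlen, w=0.002, min_th=5, max_th=15, max_p=0.02):
    """One RED update (Floyd & Jacobson, 1993): smooth the queue length
    with an EWMA, then map the average to a drop probability."""
    avg = (1.0 - w) * avg + w * qlen     # exponentially weighted average
    if avg < min_th:
        p = 0.0                          # below min threshold: never drop
    elif avg >= max_th:
        p = 1.0                          # above max threshold: always drop
    else:                                # between thresholds: linear ramp
        p = max_p * (avg - min_th) / (max_th - min_th)
    return avg, p

# usage: feed instantaneous queue lengths packet by packet
avg = 0.0
for qlen in [2, 4, 8, 12, 14, 16]:
    avg, p = red_step(avg, qlen)
```
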
APA, Harvard, Vancouver, ISO, and other styles
8

Moalemiyan, Mitra. "Optimization and evaluation of a pectin-based composite coating on mango and cucumber". Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=112542.

Full text
Abstract:
The current research was designed to determine the effects of different compositions of a pectin-based emulsion coating on the quality indices and shelf-life extension of mango and cucumber. The fruits were treated with the pectin-based coating (coated) or kept as such (control), and stored under different temperatures and relative humidities. Samples were then tested periodically to note changes in quality as determined by visual observation, weight loss, respiration rate, color, firmness, pH, titratable acidity (TA), total soluble solids (TSS), chlorophyll content, and decay. Coated fruits displayed retarded color development, higher TA, higher chlorophyll content, greater firmness, lower pH, and lower TSS. Loss in weight and CO2 evolution were also reduced significantly. The results of this research suggest that the pectin-based coating increased the shelf life of mango and cucumber by more than 100% without perceptible losses in quality.
APA, Harvard, Vancouver, ISO, and other styles
9

Salehi, H. (Hamed). "A mean-variance portfolio optimization based on firm characteristics and its performance evaluation". Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201305201300.

Full text
Abstract:
A flexible and financially sensible methodology is described that takes quantifiable firm characteristics into account when constructing a portfolio, inspired by Brandt et al. (2009) and Hjalmarsson and Manchev (2010). The method imposes weights that are a linear function of characteristics for an investor who maximizes return and penalizes volatility, and solves the optimization model with a statistical method suggested by Britten-Jones (1999). It is designed to be dollar- and beta-neutral. In order to exploit the information in some of the return-predictive factors, we form various single- and combined-factor strategies on a portfolio of 76 stocks out of the FTSE 100 over the period January 2000 to October 2011, both in-sample and out-of-sample with a 60-period rolling window. The results show that strategies based on abnormal return, Jensen's alpha, and the bootstrapped Sharpe ratio lead to better performance in most of the designed strategies. Holding-based and expectation-based evaluation methods also support our results.
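
For background, the Brandt et al. (2009) policy underlying the method writes each weight as a benchmark weight plus a linear tilt in standardized characteristics. The sketch below illustrates the rule with invented numbers; in practice the coefficient vector theta is estimated, e.g. via the Britten-Jones (1999) regression.

```python
import numpy as np

def parametric_weights(benchmark_w, chars, theta):
    """Parametric portfolio policy in the spirit of Brandt et al. (2009):
    w_i = benchmark_i + (1/N) * theta' x_i, where x_i are cross-sectionally
    standardized characteristics; the tilt sums to zero across stocks."""
    x = (chars - chars.mean(axis=0)) / chars.std(axis=0)
    return benchmark_w + (x @ theta) / len(benchmark_w)

# toy usage: 4 stocks, 2 characteristics (say, size and momentum)
w_b = np.full(4, 0.25)                       # equal-weight benchmark
x = np.array([[1.2, 0.3], [-0.5, 1.1], [0.7, -0.9], [-1.4, -0.5]])
w = parametric_weights(w_b, x, theta=np.array([0.5, -0.2]))
```
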
APA, Harvard, Vancouver, ISO, and other styles
10

Kashfi, S. Ruhollah. "Towards Evaluation of the Adaptive-Epsilon-R-NSGA-II algorithm (AE-R-NSGA-II) on industrial optimization problems". Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-10841.

Full text
Abstract:
Simulation-based optimization methodologies are widely applied to real-world optimization problems. In developing these methodologies, besides simulation models, algorithms play a critical role. One example is an evolutionary multi-objective optimization algorithm known as the Reference-point-based Non-dominated Sorting Genetic Algorithm II (R-NSGA-II), which has shown promising results in this regard. Its successor, R-NSGA-II with adaptive diversity control (hereafter the Adaptive Epsilon R-NSGA-II (AE-R-NSGA-II) algorithm), is one of the latest proposed extensions of R-NSGA-II and is in the early stages of its development. So far, little research exists on its applicability and usefulness, especially for real-world optimization problems. This thesis evaluates the behavior and performance of AE-R-NSGA-II and, to the best of our knowledge, is the first of its kind. To this end, we investigated the algorithm in two experiments, using two benchmark functions, 10 performance measures, and a behavioral characteristics analysis method. The experiments were designed to (i) assess the behavior and performance of AE-R-NSGA-II, and (ii) facilitate efficient use of the algorithm in real-world optimization problems. This is achieved through algorithm parameter configuration (a parametric study) according to the problem characteristics. The behavior and performance of the algorithm, in terms of the diversity of the solutions obtained and their convergence to the optimal Pareto front, is studied in the first experiment by manipulating a parameter of the algorithm referred to as the adaptive epsilon coefficient value (C), and in the second experiment by manipulating the reference point (R) according to the distance between the reference point and the global Pareto front. As one contribution of this study, two new diversity performance measures (called Modified spread and Population diversity) and a behavioral characteristics analysis method called R-NSGA-II adaptive epsilon value have been introduced and applied; they can be modified and applied to the evaluation of any reference-point-based algorithm such as AE-R-NSGA-II. Additionally, this project contributed to improving the Benchmark software, for instance by identifying new features that can facilitate future research in this area. Some of the findings are as follows: (i) systematic changes of the C and R parameters influence the diversity and convergence of the obtained solutions (to the optimal Pareto front and to the reference point); (ii) there is a tradeoff between diversity and convergence speed under systematic changes of the settings; (iii) the proposed diversity measures and the method are applicable and useful in combination with other performance measures. Moreover, because of unexpected abnormal behaviors of the algorithm, in some cases the results are conflicting and therefore impossible to interpret. This shows that further research is still required to verify the applicability and usefulness of AE-R-NSGA-II in practice. The knowledge gained in this study helps improve the algorithm.
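
The reference-point mechanism at the heart of this algorithm family ranks members of a front by distance to the user's reference point, while an epsilon parameter thins out near-duplicate solutions. The sketch below is a loose illustration of that idea, not the AE-R-NSGA-II selection procedure itself.

```python
import numpy as np

def preference_order(front, ref, eps):
    """Rank solutions in one non-dominated front by distance to a
    reference point, demoting any solution within epsilon (per objective)
    of an already accepted one (a simplified epsilon-clearing step)."""
    d = np.linalg.norm(front - ref, axis=1)      # distance to ref point
    accepted, demoted = [], []
    for i in np.argsort(d):                      # closest first
        near = any(np.all(np.abs(front[i] - front[j]) < eps)
                   for j in accepted)
        (demoted if near else accepted).append(i)
    return accepted + demoted                    # preferred order

# toy usage: 2-objective front, reference point favoring low f1
front = np.array([[0.1, 0.9], [0.12, 0.88], [0.5, 0.5], [0.9, 0.1]])
order = preference_order(front, ref=np.array([0.0, 0.8]), eps=0.05)
```
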
APA, Harvard, Vancouver, ISO, and other styles
11

Ullah, Zain. "Performance Evaluation and Optimization of Continuous Integration Based Automated Toolchain for Safety Related Embedded Applications Software". Master's thesis, Universitätsbibliothek Chemnitz, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-225250.

Full text
Abstract:
Continuous Integration (CI) has become a vital part of the software development process, making development fast and reliable. A number of actors, supported by third-party tools, play an important role in making the development process effective and productive. A CI toolchain is capable of much more than compiling the software project: it covers developers' daily tasks such as testing, documentation, etc. An important part of an automated toolchain is the conversion of source code artifacts into executables with the help of the build system. The selection of a proper build system is subjective and depends on a number of factors that should be analyzed before proceeding with the selection. This thesis focuses on software rebuilding and shows experimentally what could help developers and managers decide between two important software build systems, SCons and CMake. It is experimentally established under which conditions and situations SCons performs better and in which cases it is wise to select CMake as the build tool. At first, the individual build tools are evaluated in terms of scalability, convenience, consistency, correctness, and performance (in terms of speed and targets); later, the build systems are exercised by automating the workflow while increasing the number of source code artifacts, to evaluate performance when there is limited user interaction. The behavior of the build systems is also tested with third-party tools, such as Tessy for testing, Jenkins as a CI server, and Polarion as a requirements engineering tool, to show how much effort is required to integrate third-party tools with the build system in order to increase its functionality. The evaluation of the build systems is important because it highlights the areas where each candidate is better and where functional specifications are lacking. Generally speaking, SCons has the advantage of being Pythonic in nature and gives developers the ease of specifying build configurations using programming skills. CMake, on the other hand, comes out on top where there is no need to understand or care about the underlying platform and where developers want to generate native build-tool solutions that are readily available for export into IDEs. The two build systems have different goals: SCons is ready to sacrifice performance while providing the user with build correctness, whereas CMake focuses on generating native build tools by understanding the underlying platform. All of these situations are discussed with experiments in this thesis, which serves as a practical guide for managers deciding among build tools. After the evaluation, this thesis first suggests general techniques for removing bottlenecks and then discusses build-tool-specific optimizations and recommendations to speed up the development process.
APA, Harvard, Vancouver, ISO, and other styles
12

Lu, Tao. "A Metrics-based Sustainability Assessment of Cryogenic Machining Using Modeling and Optimization of Process Performance". UKnowledge, 2014. http://uknowledge.uky.edu/me_etds/47.

Full text
Abstract:
The development of a sustainable manufacturing process requires a comprehensive evaluation method and a fundamental understanding of the processes. Coolant application is a critical sustainability concern in the widely used machining process. Cryogenic machining is considered a candidate for sustainable coolant application. However, the lack of comprehensive evaluation methods leaves significant uncertainties about the overall sustainability performance of cryogenic machining, and the lack of practical application guidelines based on a scientific understanding of the heat transfer mechanism keeps process optimization from achieving the most sustainable performance. In this dissertation, based on a proposed Process Sustainability Index (ProcSI) methodology, the sustainability performance of the cryogenic machining process is optimized, with application guidelines established by scientific modeling of the heat transfer mechanism in the process. Based on the experimental results, the process optimization is carried out with a Genetic Algorithm (GA). The metrics-based ProcSI method considers all three major aspects of sustainable manufacturing, namely economy, environment, and society, based on the 6R concept and a total life-cycle view. There are sixty-five metrics, categorized into six major clusters. Data for all relevant metrics are collected, normalized, weighted, and then aggregated to form the ProcSI score, an overall judgment of the sustainability performance of the process. The ProcSI method focuses on process design from the manufacturer's perspective, aiming to improve the sustainability performance of the manufactured products and the manufacturing system. A heat transfer analysis of cryogenic machining with flank-side liquid nitrogen jet delivery is carried out by means of micro-scale high-speed temperature measurement experiments. The experimental results are processed with an innovative inverse heat transfer solution method to calculate the surface heat transfer coefficient at various locations throughout a wide temperature range. Based on the results, application guidelines, including suggestions for a minimal but sufficient coolant flow rate, are established. Cryogenic machining experiments are carried out, and the ProcSI evaluation is applied to the experimental scenario. Based on the ProcSI evaluation, the optimization process implemented with GA provides optimal machining process parameters for minimum manufacturing cost, minimal energy consumption, or the best sustainability performance.
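
The normalize-weight-aggregate pipeline described above reduces to a nested weighted average. A minimal sketch with invented metric bounds and weights (the real method spans sixty-five metrics in six clusters):

```python
def procsi_score(clusters):
    """ProcSI-style aggregation sketch: normalize each metric to [0, 1]
    against its bounds, take the weighted mean within each cluster, then
    the weighted mean across clusters (reported here on a 0-10 scale)."""
    total = wsum = 0.0
    for c in clusters:
        s = sum(m["w"] * (m["value"] - m["lo"]) / (m["hi"] - m["lo"])
                for m in c["metrics"]) / sum(m["w"] for m in c["metrics"])
        total += c["w"] * s
        wsum += c["w"]
    return 10.0 * total / wsum

# toy usage: two clusters with invented bounds and weights
score = procsi_score([
    {"w": 2.0, "metrics": [{"value": 4.2, "lo": 0, "hi": 10, "w": 1.0}]},
    {"w": 1.0, "metrics": [{"value": 0.8, "lo": 0, "hi": 1,  "w": 1.0}]},
])
```
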
APA, Harvard, Vancouver, ISO, and other styles
13

Willey, Landon Clark. "A Systems-Level Approach to the Design, Evaluation, and Optimization of Electrified Transportation Networks Using Agent-Based Modeling". BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8532.

Full text
Abstract:
Rising concerns related to the effects of traffic congestion have led to the search for alternative transportation solutions. Advances in battery technology have resulted in an increase of electric vehicles (EVs), which serve to reduce the impact of many of the negative consequences of congestion, including pollution and the cost of wasted fuel. Furthermore, the energy-efficiency and quiet operation of electric motors have made feasible concepts such as Urban Air Mobility (UAM), in which electric aircraft transport passengers in dense urban areas prone to severe traffic slowdowns. Electrified transportation may be the solution needed to combat urban gridlock, but many logistical questions related to the design and operation of the resultant transportation networks remain to be answered. This research begins by examining the near-term effects of EV charging networks. Stationary plug-in methods have been the traditional approach to recharge electric ground vehicles; however, dynamic charging technologies that can charge vehicles while they are in motion have recently been introduced that have the potential to eliminate the inconvenience of long charging wait times and the high cost of large batteries. Using an agent-based model verified with traffic data, different network designs incorporating these dynamic chargers are evaluated based on the predicted benefit to EV drivers. A genetic optimization is designed to optimally locate the chargers. Heavily-used highways are found to be much more effective than arterial roads as locations for these chargers, even when installation cost is taken into consideration. This work also explores the potential long-term effects of electrified transportation on urban congestion by examining the implementation of a UAM system. Interdependencies between potential electric air vehicle ranges and speeds are explored in conjunction with desired network structure and size in three different regions of the United States. A method is developed to take all these considerations into account, thus allowing for the creation of a network optimized for UAM operations when vehicle or topological constraints are present. Because the optimization problem is NP-hard, five heuristic algorithms are developed to find potential solutions with acceptable computation times, and are found to be within 10% of the optimal value for the test cases explored. The results from this exploration are used in a second agent-based transportation model that analyzes operational parameters associated with UAM networks, such as service strategy and dispatch frequency, in addition to the considerations associated with network design. General trends between the effectiveness of UAM networks and the various factors explored are identified and presented.
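
The charger-placement step couples an agent-based traffic model with a genetic search. The toy sketch below shows the shape of such a search over candidate road segments, with score(s) standing in for the simulated driver benefit net of installation cost; the encoding and operators here are assumptions, not the dissertation's implementation.

```python
import random

def ga_charger_sites(segments, score, k, pop=30, gens=50, mut=0.2):
    """Toy genetic search for k dynamic-charger sites: an individual is a
    set of k road segments; fitness sums a per-segment score."""
    def fitness(sol):
        return sum(score[s] for s in sol)

    popn = [set(random.sample(segments, k)) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        keep = popn[: pop // 2]                            # elitist selection
        children = []
        while len(keep) + len(children) < pop:
            a, b = random.sample(keep, 2)
            child = set(random.sample(sorted(a | b), k))   # crossover
            if random.random() < mut:                      # mutation
                child.pop()
                child.add(random.choice([s for s in segments
                                         if s not in child]))
            children.append(child)
        popn = keep + children
    return max(popn, key=fitness)

# toy usage: 20 segments with random benefit-minus-cost scores
segs = list(range(20))
scores = {s: random.uniform(-1, 5) for s in segs}
best = ga_charger_sites(segs, scores, k=3)
```
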
APA, Harvard, Vancouver, ISO, and other styles
14

Hanzaz, Zakaria [Verfasser]. "Optimization of the Link-to-System Interface for the System Level Evaluation based on the LTE System / Zakaria Hanzaz". München : Verlag Dr. Hut, 2017. http://d-nb.info/1135596824/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Raihana, Nishat. "Optimization of Operational Overhead based on the Evaluation of Current Snow Maintenance System : A Case Study of Borlänge, Sweden". Thesis, Högskolan Dalarna, Mikrodataanalys, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:du-30548.

Full text
Abstract:
This study analyzes snow maintenance data from the Borlänge municipality of Sweden, based on data from 2017 to 2018. The goal is to reduce the operational overhead of snow maintenance, for example the fuel and time consumption of the snow maintenance vehicles and the working hours of dedicated personnel. Borlänge Energy equipped the snow maintenance vehicles with GPS devices that recorded the snow maintenance activities. The initial part of this study obtains insights from the GPS data using spatiotemporal data analysis. The different snow maintenance treatments (plowing, sanding, and salting) are derived, and the efficiency of the subcontractors (companies responsible for snow maintenance) and inspectors (personnel responsible for calling the subcontractors when they judge it is time for snow maintenance) is measured so that their performance can be compared. The latter part of the study presents a simulated-annealing-based heuristic for finding optimal locations from which to dispatch snow maintenance vehicles. In the existing system, drivers decide where to start maintenance work based on experience and intuition, which may vary from driver to driver. The dispatch locations are calculated based on the number of available vehicles: for example, if a subcontractor has three vehicles for a specific road map, the proposed solution suggests three locations from which to dispatch them. The purpose of finding optimal dispatch locations is to reduce the total travel distance of the maintenance vehicles, which yields lower fuel and time consumption. The results show the average travel distance for 1, 3, and 5 vehicles on 15 road networks; the proposed solution yields 18% less travel than the existing system.
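
A simulated-annealing search of this kind repeatedly perturbs the current set of dispatch locations and accepts worse candidates with a temperature-dependent probability. A minimal sketch, assuming a precomputed node-to-node distance lookup; the thesis's cost model and neighborhood may differ:

```python
import math, random

def anneal_dispatch(nodes, dist, k, T=1.0, cooling=0.995, steps=5000):
    """Simulated-annealing sketch for choosing k dispatch locations:
    cost is the total distance from every road node to its nearest
    dispatch point (dist is a precomputed lookup)."""
    def cost(sol):
        return sum(min(dist[n][s] for s in sol) for n in nodes)

    current = random.sample(nodes, k)
    best = list(current)
    for _ in range(steps):
        cand = list(current)
        cand[random.randrange(k)] = random.choice(nodes)  # move one depot
        delta = cost(cand) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / T):
            current = cand                                # accept move
            if cost(current) < cost(best):
                best = list(current)
        T *= cooling                                      # cool the search
    return best

# toy usage: 6 nodes on a line, unit spacing
nodes = list(range(6))
dist = {a: {b: abs(a - b) for b in nodes} for a in nodes}
depots = anneal_dispatch(nodes, dist, k=2)
```
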
APA, Harvard, Vancouver, ISO, and other styles
16

Liefferinckx, Claire. "Evaluation of disease severity in inflammatory bowel diseases: From predictive diagnostic gene markers to treatment optimization based on pharmacokinetics". Doctoral thesis, Université Libre de Bruxelles, 2019. https://dipot.ulb.ac.be/dspace/bitstream/2013/286479/3/table.docx.

Full text
Abstract:
Inflammatory bowel diseases (IBD), including Crohn's disease (CD) and ulcerative colitis (UC), are chronic inflammatory immune-mediated diseases of the gastrointestinal tract. Two-thirds of IBD patients will develop severe disease, with complications that require frequent surgeries and hospital admissions and seriously impair their quality of life. The ultimate clinical challenge of precision medicine in IBD is to find predictive markers to anticipate the development of severe disease and to monitor treatment in these patients. In the first part of my PhD thesis, we carried out several studies monitoring the biologics used in IBD patients with severe disease. We evaluated the pharmacokinetics of the following biologics used in IBD patients: infliximab, vedolizumab, and ustekinumab. We focused on measuring trough levels (TLs) (defined as the serum drug level measured just before the next drug administration) early after initiating biologic treatment to predict patient outcomes, including long-term responses in patients treated with infliximab and vedolizumab. In addition, we are currently conducting a prospective multicentric study that aims to design a pharmacokinetic model of infliximab at induction in IBD patients (EudraCT: CT 2015-004618-10) (end of study expected by December 2019, but an interim analysis is available in the present work). Moreover, we have reported on the efficacy of ustekinumab in a large national cohort of highly refractory CD patients and have also examined the benefit of early measurement of ustekinumab TLs in these patients. Finally, we have reported novel findings on the impact of different wash-out periods (defined as the time frame between the discontinuation of one biologic and the initiation of a second) on the pharmacokinetics of the second-line biologic. Altogether, over the past 3 years, our data suggest the importance of measuring TLs early during induction to predict long-term response to biologics during maintenance therapy. In the second part of my PhD thesis, we analysed the inter-variability of the immune response in healthy subjects. Inflammation is the key driver and underlying mechanism of disease severity in IBD; therefore, the magnitude of inflammation must help define the phenotype of mild to severe disease. Delineating the inter-variability of the immune response in a healthy cohort constitutes a fundamental step towards uncovering the genetic factors underlying this variability. We performed whole blood cell cultures in a highly selected population of more than 400 healthy subjects stimulated with several Toll-like receptor (TLR) agonists and a T-cell receptor (TCR) antagonist. We found that the magnitude of the immune response (the high- or low-cytokine-producer phenotype) was independent of the cytokine measured and the TLR agonists used. Thus, a donor exhibits a specific immune (cytokine) response or "immunotype" across cytokines released and TLR stimulation. Importantly, the high- or low-cytokine-producer phenotype was different and did not overlap between the TLR and TCR stimulation conditions. In other words, a donor who is a high-cytokine producer following TLR stimulation will not be a high-cytokine producer following TCR stimulation (or the inverse). Therefore, we defined TLR- or TCR-related immunotypes (IT) as "a grading classification of the magnitude of the cytokine immune response", with IT1, IT2, and IT3 as the low, intermediate, and high immunotypes. This means that two independent TLR and TCR ITs (TLR IT1 and TCR IT3) can co-exist in the same subject. We are currently evaluating the genetic markers underlying these ITs before validating them in a large cohort of IBD patients with mild-to-moderate and severe disease. This PhD thesis provides data suggesting that assessing the pharmacokinetics of biologics early, at the initiation of treatment, could help predict how the patient will respond in the long run. In parallel, it provides advances in the understanding of the inter-variability of the immune response, a fundamental step before the identification of potential genetic markers underlying the inter-variability of inflammation and, hence, the severity of disease in IBD.
Doctorate in Biomedical and Pharmaceutical Sciences (Medicine)
APA, Harvard, Vancouver, ISO, and other styles
17

Wang, Qing. "Performance Evaluation and Integrated Management of Airport Surface Operations". Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5609.

Full text
Abstract:
The demand for aviation has been steadily growing over the past few decades and will keep increasing in the future. The anticipated growth in traffic demand will cause the current airspace system, already burdened by heavy operations and inefficient usage, to become even more congested. Because busy airports in the United States (U.S.) are becoming "bottlenecks" of the National Airspace System (NAS), it is of great importance to discover the most efficient means of using existing facilities to improve airport operations. This dissertation aims at designing an efficient airport surface operations management system that substantially contributes to the modernized NAS. First, a global comparison is conducted of the major airports in the U.S. and Europe in order to understand, compare, and explore the differences in surface operational efficiency between the two systems. The comparison results are presented for each airport pair with respect to various operational performance metrics, as well as airport capacity and different demand patterns. A detailed summary of the associated Air Traffic Management (ATM) strategies implemented in the U.S. and Europe can be found towards the end of this work. These strategies include: a single Air Navigation Service Provider (ANSP) in the U.S. versus multiple ANSPs in Europe, airline scheduling and demand management differences, mixed usage of Instrument Flight Rules (IFR) and Visual Flight Rules (VFR) operations in the U.S., and varying gate management policies in the two regions. For the global comparison, unimpeded taxi time is the reference time used for measuring taxi performance. It has been noted that different methodologies are currently used to benchmark taxi times by the performance analysis groups in the U.S. and Europe, namely the Federal Aviation Administration (FAA) and EUROCONTROL. A consistent methodology to measure taxi efficiency is needed to facilitate global benchmarking. Therefore, after an in-depth factual comparison of the two methodologies, new methods to measure unimpeded taxi times are explored through various tools, including simulation software and projection of historical surveillance data. Moreover, a sophisticated statistical model is proposed as a state-of-the-art method to measure taxi efficiency while quantifying the impact of various factors on taxi inefficiency and supporting decision-makers with reliable measurements to improve operational performance. Lastly, a real-time integrated airport surface operations management (RTI-ASOM) is presented to fulfil the third objective of this dissertation. It provides optimal trajectories for each aircraft between gates and runways with the objective of minimizing taxi delay and maximizing runway throughput. The use of a Mixed Integer Linear Programming (MIP) formulation, Dynamic Programming for decomposition, and CPLEX optimization permits an efficient solution algorithm that can solve the large-scale optimization problem almost instantly. Examples are shown based on one day of track data at LaGuardia Airport (LGA) in New York City. In addition to base scenarios with historical data, a simulation in MATLAB is constructed to provide further comparable scenarios, which demonstrate a significant reduction in taxi times and improvement in runway utilization under RTI-ASOM.
By strategically holding departures at gates, the application of RTI-ASOM also reduces excess delay on the airport surface, decreases fuel consumption at airports, and mitigates the consequential environmental impacts.
APA, Harvard, Vancouver, ISO, and other styles
18

Mach, Rufí Ferran. "Optimization analysis of the number and location of holding control steps : A simulation-based evaluation of line number 1, Stockholm". Thesis, KTH, Trafik och logistik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-34871.

Full text
Abstract:
The growing congestion problems in big cities result in a growing need for public transport services. In order to attract new users, public transport operators are looking for methods to improve their performance and level of service. Service reliability is one of the main objectives of public transport operators. Various sources of service uncertainty can cause bus bunching: buses from the same line tend to bunch together due to a positive feedback loop, unless control measures are implemented. The most commonly used strategy for preventing service irregularity is to define holding points along the bus route. The design of the holding strategy involves the determination of the optimal number and location of holding points, as well as the holding criteria. These strategies are classified as schedule- or headway-based. Previous studies showed that headway-based strategies have the potential to improve transit performance from both the passengers' and the operators' perspectives. This thesis analyzes the performance of optimization algorithms when solving the holding problem. The optimization process involves the determination of time point locations for a given headway-based strategy. The evaluation of candidate solutions is based on a mesoscopic transit simulation. The input data for the simulation corresponds to bus line number 1 in Stockholm. The objective function is made up of the weighted sum of all time components that passengers experience: in-vehicle riding time, dwell time, waiting time at stops, and on-board holding time. The optimization was carried out by greedy and genetic algorithms. In addition, a multi-objective function that incorporates performance from the operator's perspective was solved using a multi-objective genetic algorithm. The results demonstrate the potential benefits of optimizing the location of time point stops. The best solution results in an improvement of around 11% in the objective function value. Interestingly, the results indicate that wrongly chosen time point stops can yield transit performance that is worse than having no holding control.
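
The objective function described above is a plain weighted sum of passenger time components. A one-function sketch with illustrative weights, not the values calibrated in the thesis:

```python
def passenger_cost(ride, dwell, wait, hold,
                   w_ride=1.0, w_dwell=1.0, w_wait=2.0, w_hold=1.5):
    """Weighted sum of passenger time components for one candidate set of
    time-point stops; lower is better. Weights are illustrative: waiting
    time is commonly weighted above in-vehicle time in transit evaluation."""
    return (w_ride * ride + w_dwell * dwell
            + w_wait * wait + w_hold * hold)

# usage: aggregate component times (e.g., passenger-hours) per scenario
baseline = passenger_cost(ride=1200, dwell=150, wait=400, hold=0)
candidate = passenger_cost(ride=1210, dwell=150, wait=330, hold=45)
improved = candidate < baseline
```
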
APA, Harvard, Vancouver, ISO, and other styles
19

Wicaksono, Hendro [Verfasser], and Jivka [Akademischer Betreuer] Ovtcharova. "An Integrated Method for Information and Communication Technology (ICT) Supported Energy Efficiency Evaluation and Optimization in Manufacturing: Knowledge-based Approach and Energy Performance Indicators (EnPI) to Support Evaluation and Optimization of Energy Efficiency / Hendro Wicaksono ; Betreuer: J. Ovtcharova". Karlsruhe : KIT-Bibliothek, 2016. http://d-nb.info/1117701913/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Aitzetmüller, Matthias [Verfasser], Hans-Günther [Akademischer Betreuer] Machens, Fabian J. [Gutachter] Theis, and Martijn [Gutachter] van Griensven. "Evaluation and Optimization of Mesenchymal Stromal Cell based therapies / Matthias Aitzetmüller ; Gutachter: Fabian J. Theis, Martijn van Griensven ; Betreuer: Hans-Günther Machens". München : Universitätsbibliothek der TU München, 2020. http://d-nb.info/1220321206/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Hamdy, Sarah [Verfasser], Tetyana [Akademischer Betreuer] Morozyuk, Tetyana [Gutachter] Morozyuk, Aaron [Gutachter] Praktiknjo, and George [Gutachter] Tsatsaronis. "Cryogenic energy storage systems : an exergy-based evaluation and optimization / Sarah Hamdy ; Gutachter: Tetyana Morozyuk, Aaron Praktiknjo, George Tsatsaronis ; Betreuer: Tetyana Morozyuk". Berlin : Technische Universität Berlin, 2019. http://d-nb.info/1197124853/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Al-Naeem, Tariq Abdullah, Computer Science & Engineering, Faculty of Engineering, UNSW. "A quality-driven decision-support framework for architecting e-business applications". Awarded by: University of New South Wales, Computer Science and Engineering, 2006. http://handle.unsw.edu.au/1959.4/23419.

Full text
Abstract:
Architecting e-business applications is a complex design activity. This is mainly due to the numerous architectural decisions to be made, including the selection of alternative technologies, software components, design strategies, patterns, standards, protocols, and platforms. Further complexity arises from the fact that these alternatives often vary considerably in their support for different quality attributes. Moreover, there are often different groups of stakeholders involved, each with their own quality goals and criteria. Furthermore, different architectural decisions often include interdependent alternatives, where the selection of an alternative for one decision impacts the selections to be made for alternatives in other decisions. There have been several research efforts aiming to provide mechanisms and tools for facilitating the architectural evaluation and design process. These approaches, however, address architectural decisions in isolation, focusing on evaluating a limited set of alternatives belonging to one architectural decision. This has been the primary motivation behind the development of the Architectural DEcision-Making Support (ADEMS) framework, which aims at supporting stakeholders and architects during the architectural decision-making process by helping them determine a suitable combination of architectural alternatives. The ADEMS framework is an iterative process that leverages rigorous quantitative decision-making techniques from the Management Science literature, particularly Multiple Attribute Decision-Making (MADM) methods and Integer Programming (IP). Furthermore, due to the number of architectural decisions involved as well as the variety of available alternatives, the architecture design space is expected to be huge. For this purpose, a query language known as the Architecture Query Language (AQL) has been developed to aid architects in exploring and analyzing the design space in further depth, and in examining different "what-if" architectural scenarios. In addition, to support leveraging the ADEMS framework, a support tool has been implemented for carrying out the sophisticated set of mathematical computations and comparisons of the large number of architectural combinations, which would otherwise be hard to conduct using manual techniques. The primary contribution of the tool is its help in identifying, evaluating, and ranking all potential combinations of alternatives based on how well they satisfy the quality preferences provided by the different stakeholders. Finally, to assess the feasibility of ADEMS, three case studies were conducted on the architectural evaluation of different e-business and enterprise applications. The results obtained for the three case studies were quite positive, as they showed an acceptable accuracy level for the decisions recommended by ADEMS, at a reasonable time and effort cost for the different system stakeholders.
APA, Harvard, Vancouver, ISO, and other styles
23

Al-Qargholi, Basim [Verfasser]. "Development, Evaluation and Optimization of Fabrication Processes for Daylight Guiding Systems Based on Micromirror Arrays : Entwicklung, Evaluierung und Optimierung von Herstellungsprozessen für mikrospiegelbasierte Lichtlenksysteme / Basim Al-Qargholi". Kassel : Universitätsbibliothek Kassel, 2017. http://d-nb.info/1125981156/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Primabudi, Eko [Verfasser], Tetyana [Akademischer Betreuer] Morozyuk, Tetyana [Gutachter] Morozyuk, and Vittorio [Gutachter] Verda. "Evaluation and optimization of natural gas liquefaction process with exergy-based methods: a case study for C3MR / Eko Primabudi ; Gutachter: Tetyana Morozyuk, Vittorio Verda ; Betreuer: Tetyana Morozyuk". Berlin : Technische Universität Berlin, 2019. http://d-nb.info/1188313703/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Grandicki, Andreas, and Mattias Lokgård. "Parametric CAD Modeling to aid Simulation-Driven Design : An evaluation and improvement of methods used at Scania". Thesis, Linköpings universitet, Maskinkonstruktion, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138121.

Full text
Abstract:
This report documents a thesis conducted at Scania CV AB in Södertälje, Sweden. The main purpose of the thesis has been to examine and improve upon current practices of parametric CAD-modeling at Scania, with the ultimate goal of increased design automation and simulation-driven design. The thesis was initiated with a literature study, mainly covering the fields of parametric CAD-modeling, design automation and knowledge-based engineering. Furthermore, a questionnaire and multiple interviews were conducted to assess the awareness and mind-set of the employees. Finally, a case-study was carried out to follow current methodologies, and address any deficiencies found. Some of the most important findings were that while parametric modeling has considerable potential in enabling design automation, it is crucial, and most beneficial in terms of automation efficiency, to start with the fundamentals, namely achieving a uniform modeling practice. With these findings, a new proposed methodology has been introduced, as well as a recommended plan for a widespread implementation of parametric modeling at Scania. Such implementation would allow for shorter lead-times, faster adaptation to changing conditions, and reduced development expenditures.
APA, Harvard, Vancouver, ISO, and other styles
26

Khalid, Adeel S. "Development and Implementation of Rotorcraft Preliminary Design Methodology using Multidisciplinary Design Optimization". Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14013.

Full text
Abstract:
A formal framework is developed and implemented in this research for preliminary rotorcraft design using the IPPD methodology. All technical aspects of design are considered, including vehicle engineering, dynamic analysis, stability and control, aerodynamic performance, propulsion, transmission design, weight and balance, noise analysis, and economic analysis. The design loop starts with a detailed analysis of requirements. A baseline is selected and upgrade targets are identified depending on the mission requirements. An Overall Evaluation Criterion (OEC) is developed and used to measure the goodness of the design and to compare the design with competitors. The requirements analysis and baseline upgrade targets lead to the initial sizing and performance estimation of the new design. The digital information is then passed to disciplinary experts, where the detailed disciplinary analyses are performed. Information is transferred from one discipline to another as the design loop is iterated. To coordinate all the disciplines in the product development cycle, Multidisciplinary Design Optimization (MDO) techniques, e.g., All At Once (AAO) and Collaborative Optimization (CO), are suggested. The methodology is implemented on a Light Turbine Training Helicopter (LTTH) design. Detailed disciplinary analyses are integrated through a common platform for efficient and centralized transfer of design information from one discipline to another in a collaborative manner. Several disciplinary and system-level optimization problems are solved. After all the constraints of the multidisciplinary problem have been satisfied and an optimal design has been obtained, it is compared with the initial baseline, using the previously developed OEC, to measure the level of improvement achieved. Finally, a digital preliminary design is proposed. The proposed design methodology provides an automated design framework, facilitates parallel design by removing disciplinary interdependencies, makes current and updated information available to all disciplines at all times through a central collaborative repository, reduces overall design time, and yields an optimized design.
APA, Harvard, Vancouver, ISO, and other styles
27

Yun, Changgeun. "THREE ESSAYS ON PUBLIC ORGANIZATIONS". UKnowledge, 2015. http://uknowledge.uky.edu/msppa_etds/15.

Full text
Abstract:
Organizations play key roles in modern societies, and their importance requires an understanding of how organizational settings affect the subjects of organizations and organizing. Although public and private organizations interrelate with each other, the two types are not identical. In this dissertation, I attempt to describe public organizations in their own setting by discussing three important topics in public organization theory: (1) innovation adoption in the public sector; (2) representative bureaucracy; and (3) the decline and death of public organizations. In Chapter II, I scrutinize early adoption of innovations at the organizational level and explore which public organizations become early adopters in the diffusion process. The adoption of an innovation is directly related to the motivation to innovate: organizations performing poorly will have a motivation to seek new solutions. I estimate the strength of this motivation by observing prior performance. The main finding of the second chapter is that performance-based motivation has a twofold impact on early innovation adoption: negative for organizations with low performance, but positive for those with very high performance. The study estimates the top 3.8% as the turning point defining which organizations attain outstanding performance and shows the positive relationship between performance and innovation adoption. In Chapter III, I develop a theoretical framework for predicting and explaining active representation in bureaucracy and test two hypotheses from the framework to assess its validity. First, active representation requires the loss of organizational rewards. Second, a minority group mobilizes external support to minimize the cost of active representation. These findings support the view that active representation is a political activity in which bargaining between formal and informal roles occurs. In addition, I add evidence to the literature demonstrating that two prerequisites (policy discretion and a critical mass) must be satisfied for active representation to occur. In Chapter IV, I argue that organizational change is the result of the relationship between an organization and its environment, and I advance the theory of organizational ecology for examining environmental effects on organizational decline and death. The theory has been extensively studied in the business sector, so I extend it to be applicable to the public sector. First, I add political variables, such as changes in the executive branch and the legislature and unified government, and hypothesize that (1) an organization established by a party other than the one in the executive branch in any given year will be more likely to be terminated or to decline; (2) an organization established by a party other than the one in the legislature in any given year will be more likely to be terminated or to decline; and (3) if an unfriendly party controls both the executive branch and the legislature, organizations established by other parties are more likely to be terminated or to decline. Second, the effect of the economic environment on the life cycle of public organizations is not as straightforward and simple as its effect on business firms.
APA, Harvard, Vancouver, ISO, and other styles
28

Mogwaneng, Pheladi Junior. "Optimization of rolling mill oils evaluation using FT-IR spectroscopy". Diss., Pretoria : [s.n.], 2004. http://upetd.up.ac.za/thesis/available/etd-08132008-103521.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Sathi, Veer Reddy, e Jai Simha Ramanujapura. "A Quality Criteria Based Evaluation of Topic Models". Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13274.

Full text
Abstract:
Context. Software testing is the process of executing a software product or system in order to find bugs or issues that might otherwise degrade its performance. Testing is usually based on pre-defined test cases: a test case is a set of conditions used by testers to determine whether the system under test operates as intended. In many situations, however, there are so many test cases that executing every one of them is practically impossible, and testers must prioritize the functions to be tested. This is where topic models can be exploited. Topic models are unsupervised machine learning algorithms that explore large corpora of data and classify them by identifying their hidden thematic structure; using them for test case prioritization can save considerable time and resources. Objectives. Our study provides an overview of the research that has been done on topic models. We want to uncover the quality criteria, evaluation methods, and metrics that can be used to evaluate topic models. Furthermore, we compare the performance of two topic models, optimized for different quality criteria, on a particular interpretability task, and thereby determine which model produces the best results for that task. Methods. A systematic mapping study was performed to gain an overview of previous research on the evaluation of topic models, focusing on the quality criteria, evaluation methods, and metrics that have been used. The results of the mapping study were used to identify the most used quality criteria, and the evaluation methods related to those criteria were used to generate two optimized topic models. An experiment was then conducted in which the topics generated by the two models were shown to a group of 20 subjects, with the task designed to evaluate the interpretability of the generated topics. The performance of the two topic models was compared using Precision, Recall, and F-measure. Results. Based on the mapping study, Latent Dirichlet Allocation (LDA) was found to be the most widely used topic model. Two LDA topic models were created, one optimized for the quality criterion Generalizability (TG) and one for Interpretability (TI), using the Perplexity and Point-wise Mutual Information (PMI) measures respectively. For the selected metrics, TI showed better performance than TG in Precision and F-measure, while the two models were comparable in Recall. The total run time of TI was also significantly higher than that of TG: 46 hours and 35 minutes for TI versus 3 hours and 30 minutes for TG. Conclusions. Judging by the F-measure, the interpretability-optimized model (TI) performs better than the generalizability-optimized model (TG): TI performed better in Precision, while Recall was comparable. The computational cost of creating TI, however, is significantly higher than for TG.
Hence, we conclude that the choice of topic model optimization should be based on the aim of the task the model is used for. If the task requires high interpretability and precision is important, as in content-based test case prioritization, then TI is the right choice, provided time is not a limiting factor. If the task only needs topics that give a basic understanding of the concepts (i.e., interpretability is not a high priority), then TG is the more suitable choice, which also makes it the better option for time-critical tasks.
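Since the comparison above hinges on Precision, Recall, and F-measure, a minimal sketch of how these metrics are computed may be useful; the counts below are illustrative only, not the thesis data.

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Standard retrieval metrics from true/false positive and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts only -- not the thesis data.
for name, counts in {"TI": (42, 8, 10), "TG": (35, 15, 11)}.items():
    p, r, f1 = precision_recall_f1(*counts)
    print(f"{name}: precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```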
Gli stili APA, Harvard, Vancouver, ISO e altri
31

Furuhashi, Takeshi, Tomohiro Yoshikawa e Shun Otake. "A Study on Aggregation of Objective Functions in MaOPs Based on Evaluation Criteria". 日本知能情報ファジィ学会, 2010. http://hdl.handle.net/2237/20688.

Testo completo
Abstract (sommario):
Session ID: TH-E1-4
SCIS & ISIS 2010, Joint 5th International Conference on Soft Computing and Intelligent Systems and 11th International Symposium on Advanced Intelligent Systems. December 8-12, 2010, Okayama Convention Center, Okayama, Japan
Gli stili APA, Harvard, Vancouver, ISO e altri
32

Poole, Benjamin Hancock. "A methodology for the robustness-based evaluation of systems-of-systems alternatives using regret analysis". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24648.

Testo completo
Abstract (sommario):
Thesis (Ph.D.)--Aerospace Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Mavris, Dimitri; Committee Member: Bishop, Carlee; Committee Member: McMichael, James; Committee Member: Nixon, Janel; Committee Member: Schrage, Daniel
Gli stili APA, Harvard, Vancouver, ISO e altri
33

Wallin, Erik Oskar. "Individual Information Adaptation Based on Content Description". Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3710.

Testo completo
Abstract (sommario):

Today’s increasing information supply raises the need for more effective and automated information processing, where individual information adaptation (personalization) is one possible solution. Earlier computer systems for personalization lacked the ability to easily define and measure the effectiveness of personalization efforts. Numerous projects failed to live up to their expectations, and the demand for evaluation increased.

This thesis presents some underlying concepts and methods for implementing personalization in order to increase stated business objectives. A personalization system was developed that utilizes descriptions of information characteristics (metadata) to perform content-based filtering in a non-intrusive way.

Most of the measurement methods for personalization described in the literature focus on improving the utility for the customer. The evaluation function of the personalization system described in this thesis takes the business operator’s standpoint and pragmatically focuses on one or a few measurable business objectives. In order to verify the operation of the personalization system, a function called bifurcation was created. The bifurcation function divides the customers stochastically into two or more controlled groups with different personalization configurations. By giving one of the controlled groups a personalization configuration that deactivates the personalization, a reference group is created. The reference group is used to quantitatively measure the objectives by comparison with the groups with active personalization.
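As a rough illustration of the bifurcation idea described above (the hash-based stochastic split is an assumption here, not necessarily the thesis mechanism):

```python
import hashlib

def assign_group(customer_id: str,
                 weights=(("reference", 0.5), ("personalized", 0.5))):
    """Deterministically map a customer to a controlled group.

    Hashing the id gives a stable pseudo-random draw, so repeat visits land
    in the same group; the reference group has personalization deactivated
    and serves as the baseline for measuring the business objectives.
    """
    h = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16)
    draw = (h % 10_000) / 10_000.0
    cumulative = 0.0
    for group, weight in weights:
        cumulative += weight
        if draw < cumulative:
            return group
    return weights[-1][0]

print(assign_group("customer-1234"))
```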

Two different companies had their websites personalized and evaluated: one of Sweden’s largest recruitment services and the second largest Swedish daily newspaper. The purpose of the implementations was to define, measure, and increase the business objectives. The results of the two case studies show that, under propitious conditions, personalization can be made to increase stated business objectives.

Keywords: metadata, semantic web, personalization, information adaptation, one-to-one marketing, evaluation, optimization, personification, customization, individualization, internet, content filtering, automation.

Gli stili APA, Harvard, Vancouver, ISO e altri
34

Ewertowska, Anna. "Systematic tools based on data envelopment analysis for the life cycle sustainability evaluation of technologies". Doctoral thesis, Universitat Rovira i Virgili, 2017. http://hdl.handle.net/10803/457128.

Testo completo
Abstract (sommario):
Moving towards a more sustainable energy system has been a main aim of modern societies during the last decades. Minimizing dependence on fossil fuels in order to improve sustainability has become a goal for researchers and policy makers, jointly with the development of new strategies and policies. This thesis introduces the combined use of data envelopment analysis (DEA) and life cycle assessment (LCA) as a suitable tool to meet these challenges. The environmental performance (eco-efficiency) of the electricity mix of the top European economies has been studied, and the environmentally efficient and inefficient countries have been identified; for the inefficient ones, targets have been provided to make them efficient. Moreover, the DEA+LCA method has been extended to deal with uncertainty, through the implementation of the Pedigree matrix approach and Monte Carlo simulation, to enable eco-efficiency assessment under uncertainty. Our results provide valuable insight for governments and policy makers that aim to satisfy electricity demand while minimizing the associated environmental impacts and considering the uncertainties in the data.
Gli stili APA, Harvard, Vancouver, ISO e altri
35

Raudenská, Lenka. "Metriky a kriteria pro diagnostiku sociotechnických systémů". Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-233879.

Testo completo
Abstract (sommario):
This doctoral thesis focuses on metrics and criteria for socio-technical system diagnostics, a high-profile topic for companies wanting to ensure the best product quality. More and more customers require suppliers to prove the reliability of their production and the quality of the supplied products according to given specifications, so the ability to produce quality goods corresponding to customer requirements has become a fundamental condition for remaining competitive. The thesis first lays out the basic strategies and rules that are prerequisites for a company to successfully provide quality goods at competitive costs. Next, methods and tools for planning are discussed; planning is important for its impact on budgets, time schedules, and the quantification of necessary sourcing. Risk analysis is also included, helping to define preventive actions and reduce the probability of error and potential breakdown of the entire company. The next part of the thesis deals with optimisation problems solved by swarm-based optimisation. Algorithms and their utilisation in industry are described, in particular the vehicle routing problem and the travelling salesman problem, used as tools for solving specialist problems within manufacturing corporations. The final part of the thesis deals with qualitative modelling, where solutions can be achieved with less exact quantitative information about the surveyed model. The text includes descriptions of a qualitative algebra that discerns only three possible values – positive, constant, and negative – which are sufficient to demonstrate trends; the results can also be conveniently represented using graph theory tools.
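The qualitative algebra mentioned at the end can be made concrete in a few lines; representing the ambiguous sum of opposite trends as "?" is a common convention in qualitative reasoning, assumed here rather than taken from the thesis.

```python
# Qualitative addition over the trend values {-, 0, +}: combining a positive
# and a negative trend gives an ambiguous result, written "?" here.
QADD = {
    ("+", "+"): "+", ("+", "0"): "+", ("0", "+"): "+",
    ("-", "-"): "-", ("-", "0"): "-", ("0", "-"): "-",
    ("0", "0"): "0",
    ("+", "-"): "?", ("-", "+"): "?",
}

print(QADD[("+", "0")])   # a rising trend plus a constant one stays rising
print(QADD[("+", "-")])   # opposite trends: the sign cannot be determined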
Gli stili APA, Harvard, Vancouver, ISO e altri
36

MATOS, JÚNIOR Rubens de Souza. "Identification of Availability and Performance Bottlenecks in Cloud Computing Systems: an approach based on hierarchical models and sensitivity analysis". Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18702.

Testo completo
Abstract (sommario):
CAPES
The cloud computing paradigm reduces the costs of acquiring and maintaining computer systems and enables balanced management of resources according to demand. Hierarchical and composite analytical models are suitable for describing the performance and dependability of cloud computing systems in a concise manner, dealing with the huge number of components which constitute such systems. That approach uses distinct sub-models for each system level, and the measures obtained in each sub-model are integrated to compute the measures for the whole system. Identifying bottlenecks in hierarchical models can nevertheless be difficult, due to the large number of parameters and their distribution among distinct modeling levels and formalisms. This thesis proposes methods for the evaluation and detection of bottlenecks in cloud computing systems. The methodology is based on hierarchical modeling and parametric sensitivity analysis techniques tailored for such a scenario. This research introduces methods to build unified sensitivity rankings when distinct modeling formalisms are combined. These methods are embedded in the Mercury software tool, providing an automated sensitivity analysis framework to support the process. Distinct case studies tested the methodology, encompassing hardware and software aspects of cloud systems, from the basic infrastructure level to applications hosted in private clouds. The case studies showed that the proposed approach is helpful for guiding cloud system designers and administrators in the decision-making process, especially for tune-ups and architectural improvements. The methodology can also be employed through an optimization algorithm proposed here, called Sensitive GRASP, which aims at optimizing the performance and dependability of computing systems in scenarios where exploring all architectural and configuration possibilities to find the best quality of service is infeasible. This is especially useful for cloud-hosted services and their complex underlying infrastructures.
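One simple parametric sensitivity index that unified rankings of this kind can be built on divides the swing of an output measure by its maximum as each parameter sweeps its range; the sketch below is generic and does not reproduce the Mercury tool's actual method.

```python
def sensitivity_index(model, baseline, param, values):
    """S = (max - min) / max of the output measure as one parameter sweeps its range."""
    outputs = [model({**baseline, param: v}) for v in values]
    return (max(outputs) - min(outputs)) / max(outputs)

# Toy availability model: two components in series (illustrative only).
model = lambda p: p["a1"] * p["a2"]
baseline = {"a1": 0.99, "a2": 0.999}
ranges = {"a1": [0.90, 0.95, 0.99], "a2": [0.95, 0.99, 0.999]}
ranking = sorted(((name, sensitivity_index(model, baseline, name, vals))
                  for name, vals in ranges.items()),
                 key=lambda t: t[1], reverse=True)
print(ranking)   # parameters ordered by their impact on system availability
```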
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Lien, Sheng-Hung, e 連昇鴻. "Performance Evaluation on Area-Based Image Registration Using Particle Swarm Optimization". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/wm8arj.

Testo completo
Abstract (sommario):
Master's thesis
National Taipei University of Technology
Department of Industrial Engineering and Management
ROC year 100 (2011)
Image registration is an important image processing step that aligns two or more images by means of their common information; the images may be taken from different viewpoints, at different times, or by different sensors. The procedure consists of four basic steps: feature extraction, feature matching, transformation model estimation, and image resampling. Image registration methods are classified as feature-based or area-based; the area-based methods are the focus of this thesis. Most cross-correlation and mutual information methods are developed from image intensities for the direct matching purpose. In this thesis, particle swarm optimization is utilized to solve the transformation model, and a design of experiments is conducted for particle swarm optimization, providing a parameter set that improves the registration rate. All the methods considered are tested on five categories of image sets: medical images, nature images, remote sensing images, large-translation images, and high-contrast images. The parameter settings best suited to the cross-correlation, mutual information, and normalized mutual information objective functions are investigated. Finally, the studied methods are compared in terms of success rate and number of function evaluations.
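A minimal sketch of the approach described above: particle swarm optimization searches the transformation parameters (simplified here to a 2-D translation) that maximize normalized cross-correlation between the images. The reduced model and all names are assumptions, not the thesis code.

```python
import numpy as np

rng = np.random.default_rng(0)

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def fitness(shift, reference, moving):
    dy, dx = int(round(shift[0])), int(round(shift[1]))
    return ncc(reference, np.roll(moving, (dy, dx), axis=(0, 1)))

def pso_register(reference, moving, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-10, 10, (n_particles, 2))     # candidate (dy, dx) shifts
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(p, reference, moving) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([fitness(p, reference, moving) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest

ref = rng.random((64, 64))
mov = np.roll(ref, (3, -5), axis=(0, 1))   # moving image: ref shifted by (3, -5)
print(pso_register(ref, mov).round())      # should recover approximately (-3, 5)
```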
Gli stili APA, Harvard, Vancouver, ISO e altri
38

Wu, Po-han, e 吳柏翰. "Table Size Reduction and Optimization in Multiplierless Table-Based Function Evaluation Designs". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/31297534248791286907.

Testo completo
Abstract (sommario):
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
ROC year 101 (2012)
In the hardware design of elementary function evaluation, the table-lookup-and-addition approach is a category of multiplierless methods that uses several lookup tables and a multi-operand adder to calculate the final function value. The multipartite table method [3] is a popular multiplierless function evaluation design. These multiplierless methods usually have small delay, but their table size grows very fast with respect to accuracy, so they are only used in applications with low-precision requirements. In this thesis, new table decomposition methods are proposed in order to reduce the table size of the multipartite design. The original table is decomposed into two or three smaller tables so that the total table size is efficiently reduced. Experimental results show that the proposed design can significantly reduce the bit count of the tables for different functions compared with the multipartite method in [3], the best table-and-addition method reported so far.
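As a rough illustration of the table-lookup-and-addition idea (a plain bipartite scheme, not the decomposition proposed in the thesis): the input bits are split into three fields, a table of initial values is indexed by the leading fields, a table of slope-based offsets is indexed by the outer fields, and the two lookups are simply added.

```python
import numpy as np

def build_bipartite(f, b0=4, b1=4, b2=4):
    """Bipartite tables for f on [1, 2) with a (b0+b1+b2)-bit input x0.x1.x2.

    TIV is indexed by the leading bits (x0, x1) and stores coarse samples;
    TO is indexed by (x0, x2) and stores slope(x0) * offset(x2), so the
    evaluation needs only two lookups and one addition -- no multiplier.
    """
    n = b0 + b1 + b2
    ulp = 2.0 ** -n
    tiv = np.array([f(1.0 + i * 2.0 ** -(b0 + b1)) for i in range(2 ** (b0 + b1))])
    to = np.zeros((2 ** b0, 2 ** b2))
    for x0 in range(2 ** b0):
        seg = 1.0 + x0 * 2.0 ** -b0
        slope = (f(seg + 2.0 ** -b0) - f(seg)) / 2.0 ** -b0  # per-segment slope
        for x2 in range(2 ** b2):
            to[x0, x2] = slope * x2 * ulp
    return tiv, to

def evaluate(x_bits, tiv, to, b0=4, b1=4, b2=4):
    x0 = x_bits >> (b1 + b2)          # leading b0 bits
    x01 = x_bits >> b2                # leading b0+b1 bits
    x2 = x_bits & (2 ** b2 - 1)       # trailing b2 bits
    return tiv[x01] + to[x0, x2]      # one addition, two lookups

f = np.reciprocal
tiv, to = build_bipartite(f)
x_bits = 0b0110_1011_0101                 # 12-bit fraction of x in [1, 2)
x = 1.0 + x_bits * 2.0 ** -12
print(evaluate(x_bits, tiv, to), f(x))    # approximation vs. true 1/x
```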
Gli stili APA, Harvard, Vancouver, ISO e altri
39

Kernagis, Dawn. "Evaluation and Optimization of the Translational Potential of Array-Based Molecular Diagnostics". Diss., 2012. http://hdl.handle.net/10161/5536.

Testo completo
Abstract (sommario):

The translational potential of diagnostic and prognostic platforms developed using expression microarray technology is evident. However, the majority of array-based diagnostics have yet to make their way into the clinical laboratory. Current approaches tend to focus on development of multi-gene classifiers of disease subtypes, but very few studies evaluate the translational potential of these assays. Likewise, only a handful of studies focus on development of approaches to optimize array-based tests for the ultimate goal of clinical utility. Prior to translation into the clinical setting, molecular diagnostic platforms should demonstrate a number of characteristics to ensure optimal and efficient testing and patient care. Assays should be accurate and precise, technically and biologically robust, and should take into account normal sources of biological variance that could ultimately affect test results. The overarching goal of the research presented in this dissertation is to develop methods for evaluating and optimizing the translational potential of molecular diagnostics developed using expression microarray technology.

The first research section of this dissertation focuses on our evaluation of the impact of intratumor heterogeneity on the precision of microarray-based assays in breast cancer. We conducted genome-wide expression profiling on 50 needle core biopsies from 18 breast cancer patients. Global profiles of expression were characterized using unsupervised clustering methods and variance components models. Array-based measures of estrogen receptor (ER) and progesterone receptor (PR) status were compared to immunohistochemistry. The precision of genomic predictors of ER pathway status, recurrence risk, and sensitivity to chemotherapeutics was evaluated by intraclass correlation. Results demonstrated that intratumor variation was substantially less than the total variation observed across the patient population. Nevertheless, a fraction of genes exhibited significant intratumor heterogeneity in expression. A high degree of reproducibility was observed in single-gene predictors of ER (intraclass correlation coefficient (ICC)=0.94) and PR expression (ICC=0.90), and in a multi-gene predictor of ER pathway activation (ICC=0.98) with high concordance with immunohistochemistry. Substantial agreement was also observed for multi-gene signatures of cancer recurrence (ICC=0.71) and chemotherapeutic sensitivity (ICC=0.72 and 0.64). Together, these results demonstrated that intratumor heterogeneity, although present at the level of individual gene expression, does not preclude precise microarray-based predictions of tumor behavior or clinical outcome in breast cancer patients.

Leading into the second research section, we observed that in some cancer types certain genes behave as molecular switches, with either an "on" or "off" expression state. Specifically, we observed that such molecular switch genes exist in breast cancer as robust diagnostic and prognostic markers, including ER, PR, and HER2, and that they define tumor subtypes associated with different treatments and patient survival. We hypothesized that clinically relevant molecular switch (bimodal) genes also exist in epithelial ovarian cancer, a type of cancer with no established molecular subgroups. To test this hypothesis, we applied a bimodal discovery algorithm to a publicly available ovarian cancer expression microarray dataset (GSE9891: 285 tumors; 246 malignant serous (MS), 20 endometrioid (EM), and 18 low malignant potential (LMP) ovarian carcinomas). Genes with robust bimodal expression were identified across all ovarian tumor types and within selected subtypes. Of these bimodal genes, 73 demonstrated differential expression between LMP versus MS and EM, and 22 genes distinguished MS from EM. Fourteen bimodal genes had a significant association with survival among MS tumors. When these genes were combined into a single survival score, the median survival for patients with a favorable versus unfavorable score was 65 versus 29 months (p<0.0001, HR=0.4221). Two independent datasets (n=53 high-grade, advanced-stage serous tumors and n=119 advanced-stage ovarian tumors) validated the survival score's performance. Taken together, the results of this study revealed that genes with bimodal expression patterns not only define clinically relevant molecular subtypes of ovarian carcinoma, but also provide ideal targets for translation into the clinical laboratory.

Finally, the third research section of this dissertation focuses on the development of robust blood-based molecular markers of decompression stress (DS). DS is defined as the pathophysiological response to inert gas coming out of solution in the blood and tissues when a body experiences a reduction in ambient pressure. To date, there are no established molecular markers of DS. We hypothesized that comparing gene expression before and after human decompression exposures by genome-wide expression profiling would identify candidate molecular markers of DS. Peripheral blood was collected 1 hour before and 2 hours after 93 hyperoxic, heliox experimental dives (n=54). Control arms included samples collected 1 hour before and 2 hours after high-pressure oxygen breathing (n=9) and surface exercise (n=9), and samples collected at 7am and 5pm for time of day (n=11). Pre- and post-dive expression data collected from normoxic nitrox experimental dives were utilized for independent validation. Blood samples were collected into PAXgene RNA tubes. RNA was extracted and processed for globin reduction prior to cDNA synthesis and Affymetrix U133A GeneChip hybridization. 746 genes were differentially expressed following hyperoxic, heliox decompression exposures (permutation-adjusted p-value cutoff 1.0E-4); after filtering control-significant genes, 726 genes remained. Pathway analysis demonstrated that a significant portion of these genes were associated with the innate immune response (p<0.0001). A multi-gene signature of 362 significant, covariant genes was then applied to the independent dataset and differentiated pre- from post-dive samples (p=0.0058). There was no significant correlation between the signature and venous bubble grade or bottom time in the validation study. Our results showed that expression profiling of peripheral blood following decompression exposures, while controlling for experimental and normal sources of biological variance, identifies a reproducible multi-gene signature of differentially expressed genes, primarily comprising genes associated with the innate immune response and independent of venous bubble grade or dive profile.

Taken together, the research and results presented in this dissertation represent considerable advances in the development of approaches to guide microarray-based diagnostics towards the ultimate goal of clinical translation.


Gli stili APA, Harvard, Vancouver, ISO e altri
40

Shih, Hsueh-Fu, e 施學甫. "Segmentation of wound image and optimization based on genetic algorithm and unsupervised evaluation". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/496kb4.

Testo completo
Abstract (sommario):
Master's thesis
National Taiwan University
Graduate Institute of Biomedical Electronics and Bioinformatics
ROC year 105 (2016)
After surgery, the care of the surgical wound has a great impact on the patient's prognosis. It often takes a few days, or even a few weeks, for the wound to stabilize, at a considerable cost in health care and nursing resources. Advances in image processing and machine learning have improved the accuracy of wound assessment and analysis, and some recent works have started in this field. In our tele-health scenario, we want users to be able to obtain an accurate result with their mobile devices, without using a high-end camera. In this thesis, we propose an image segmentation algorithm based on edge detection and the Hough transform, and we further develop an optimization method based on unsupervised image segmentation evaluation and a genetic algorithm. We also implemented an analysis system in cooperation with the NTUH telehealth center, which has been used on pacemaker implantation patients. Performing this segmentation algorithm on the data set provided by the NTUH Division of Cardiovascular Surgery achieves an accuracy of 75.7%, which rises to 94.3% after optimization by the genetic algorithm.
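A minimal sketch of the optimization loop outlined above: a genetic algorithm searches a segmentation parameter (here a single threshold) scored by an unsupervised criterion (here Otsu-style between-class variance). All specifics are illustrative assumptions, not the thesis method.

```python
import numpy as np

rng = np.random.default_rng(1)

def unsupervised_score(image, threshold):
    """Between-class variance: higher means the two regions separate well."""
    fg, bg = image[image >= threshold], image[image < threshold]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w_fg, w_bg = fg.size / image.size, bg.size / image.size
    return w_fg * w_bg * (fg.mean() - bg.mean()) ** 2

def ga_threshold(image, pop_size=30, generations=40, mutation=8.0):
    pop = rng.uniform(image.min(), image.max(), pop_size)
    for _ in range(generations):
        scores = np.array([unsupervised_score(image, t) for t in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # selection
        children = rng.choice(parents, pop_size - parents.size)   # clone parents
        children = children + rng.normal(0.0, mutation, children.size)  # mutation
        pop = np.concatenate([parents, children])
    scores = np.array([unsupervised_score(image, t) for t in pop])
    return float(pop[scores.argmax()])

# Synthetic "wound" image: dark background with a brighter region.
img = rng.normal(60, 10, (64, 64))
img[20:40, 20:40] += 100
print(ga_threshold(img))   # should land between the two intensity modes (~110)
```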
Gli stili APA, Harvard, Vancouver, ISO e altri
41

Yu, Bor-Yih, e 余柏毅. "Design, Optimization and Economical Evaluation of Coal-based Poly-generation Process to Produce Chemicals". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/7bjr9e.

Testo completo
Abstract (sommario):
Doctoral thesis
National Taiwan University
Graduate Institute of Chemical Engineering
ROC year 105 (2016)
In this work, the design, optimization, and economic evaluation of coal-based poly-generation processes for producing different kinds of chemicals are investigated. Taiwan lacks domestic energy sources, so over 98% of its energy supply is imported. Among the imported energy sources, coal has several advantages, such as its relatively low price, abundance, and ease of transportation. Coal has therefore been one of the most important energy sources in Taiwan, and it is expected to retain this role in the future. Gasification is the core of this kind of process and is also the starting point of this work. First, a one-dimensional gasifier model is established to investigate the gasification performance of coal or biomass under different operating conditions. The coal-to-synthetic-natural-gas (SNG) process and a coal-based poly-generation process producing SNG and ammonia follow. Because imported natural gas requires liquefaction and compression, its price in Taiwan is high (11.3–11.7 USD/MMBTU), so converting coal into SNG can be economically attractive (10.336 USD/MMBTU). If the poly-generation configuration is adopted, the economic performance can be further enhanced, although the enhancement may not be significant when one of the product flow rates is too small. The next topic is the novel methanol-to-olefin (MTO) process, whose main products are ethylene and propylene. A rigorous simulation of this process is developed, and the influences of its variables are carefully investigated. Because propylene and propane have close boiling points, separating them by distillation is not easy; four methods for this separation are therefore studied: traditional single-column separation using steam as the heat source (case 1), single-column separation using waste hot water as the heat source (case 2), distillation with a vapor recompression cycle (case 3), and extractive distillation using an acetonitrile solution as the entrainer (case 4). The results show that cases 2 and 4 may be economically attractive. The final topic is the syngas-to-ethylene-glycol (EG) process, which has two stages: in the first, CO in syngas is converted into dimethyl oxalate (DMO) as an intermediate; in the second, DMO is hydrogenated to EG. In the first stage, the most important variable is the circulation rate of methanol inside the process, mainly determined by the ratio of methanol to the combined nitric oxide and nitrogen dioxide flow rate into the packed-bed reactor. In the second stage, the most influential variable is the molar ratio of hydrogen to DMO in the combined feed. In short, several coal-based poly-generation processes for producing chemicals are rigorously studied in this work, using optimization, heat integration, and economic and energetic evaluation, to obtain a better understanding of these processes.
Gli stili APA, Harvard, Vancouver, ISO e altri
42

Chen, Yi-Lu, e 陳薏如. "Stock Evaluation Model with Noise Trader Behavior based on Genetic Algorithm and Prospect Theory Optimization". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/86334197894305050871.

Testo completo
Abstract (sommario):
Master's thesis
National Kaohsiung University of Applied Sciences
Graduate Institute of Information Management
ROC year 96 (2007)
Traditional financial theory assumes that the market is efficient, but many phenomena, such as excess volatility, cannot reasonably be explained by the Efficient Market Hypothesis, so many scholars have turned to psychology to explain investing behavior. Prospect Theory holds that when individuals face a gain and a loss of the same monetary size, the distress of the loss outweighs the satisfaction of the gain. Facing losses, individuals are influenced by negative thinking, so the market is filled with noise traders. Noise traders' misperceptions limit arbitrage, and noise is one of the important factors affecting stock prices. De Long et al.'s DSSW model assumes that misestimates are normally distributed, but according to prospect theory, optimistic and pessimistic investor sentiment should cause different levels of misestimation. In this thesis, we propose a PT-DSSW model that combines prospect theory with a genetic algorithm, implemented in two systems: GA-DSSW and GA-PT-DSSW. Our experiments show that, because noise traders' investing strategies have short horizons, the noise trader model with the shorter training period has better forecasting ability; moreover, the PT-DSSW model, which defines misestimates as a normal distribution with different variance levels in the right and left tails, has strong explanatory power in different kinds of markets.
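For reference, the asymmetry invoked above is usually captured by Kahneman and Tversky's value function, concave for gains, convex for losses, and steeper for losses (λ > 1); the sketch uses their commonly cited parameter estimates, and how PT-DSSW parameterizes its misestimates is not reproduced here.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: losses loom larger than gains."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# A gain and a loss of the same size do not offset each other:
print(prospect_value(100.0))    # ~ 57.5
print(prospect_value(-100.0))   # ~ -129.4
```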
Gli stili APA, Harvard, Vancouver, ISO e altri
43

Ullah, Zain. "Performance Evaluation and Optimization of Continuous Integration Based Automated Toolchain for Safety Related Embedded Applications Software". Master's thesis, 2016. https://monarch.qucosa.de/id/qucosa%3A20702.

Testo completo
Abstract (sommario):
Continuous Integration has become a vital part of the software development process, making development fast and reliable. A number of actors, supported by third-party tools, help make the development process effective and productive. The CI toolchain is capable of much more than compiling the software project, covering developers' daily tasks such as testing and documentation. An important part of an automated toolchain is the conversion of source code artifacts into executables with the help of the build system. Selecting a proper build system is subjective and depends on a number of factors that should be analyzed before proceeding to the selection. This thesis focuses on software rebuilding and provides practical experiments that can help developers and managers decide between two important build systems, SCons and CMake. Experiments identify the conditions and situations under which SCons performs better and the situations where it is wiser to select CMake as the build tool. First, the individual build tools are evaluated in terms of scalability, convenience, consistency, correctness, and performance (speed and targets); later, the build systems are tested in an automated workflow with growing numbers of source code artifacts, to evaluate performance with limited user interaction. The build systems are also exercised with third-party tools – Tessy for testing, Jenkins as CI server, and Polarion for requirements engineering – to show how much effort is required to integrate third-party tools with the build system in order to extend functionality. This evaluation is important because it highlights the areas where each candidate is stronger and where functional specifications are lacking. Generally speaking, SCons has the advantage of being Pythonic, letting developers specify build configurations programmatically. CMake, on the other hand, excels where there is no need to understand or care about the underlying platform and where developers want to generate native build tool solutions that can readily be exported into IDEs. The two build systems have different goals: SCons is willing to sacrifice performance to guarantee build correctness, while CMake focuses on generating native build tools from its understanding of the underlying platform. All of these situations are examined experimentally, serving as a practical guide for managers deciding among build tools. After the evaluation, the thesis first suggests general techniques for removing bottlenecks, and then discusses build-tool-specific optimizations and recommendations to speed up the development process.
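Since SCons build scripts are themselves ordinary Python, the convenience the thesis weighs can be seen in miniature; the SConstruct below is illustrative (file names are assumptions) and runs under the scons command rather than a plain Python interpreter.

```python
# SConstruct -- executed by the `scons` command, not by plain Python.
# SCons build scripts are ordinary Python, which is the "Pythonic"
# convenience weighed against CMake's generator model in the thesis.
env = Environment(CCFLAGS=["-O2", "-Wall"])   # Environment is injected by SCons
program = env.Program(target="app", source=["src/main.c", "src/util.c"])
env.Alias("all", program)                     # `scons all` builds the program
```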
Gli stili APA, Harvard, Vancouver, ISO e altri
44

Jheng, Yu-Peng, e 鄭宇芃. "Point-Spread Function Identification and Entropy Evaluation for Blind Image Deconvolution Based on Particle Swarm Optimization". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/dzrgnt.

Testo completo
Abstract (sommario):
Master's thesis
National Dong Hwa University
Department of Electrical Engineering
ROC year 96 (2007)
This thesis proposes a new algorithm that combines Particle Swarm Optimization (PSO) with the Richardson-Lucy algorithm to identify an unknown point-spread function (PSF) for blind image deconvolution. The goal of the proposed PSO-based algorithm is to recover an observed image degraded by an unknown PSF. First, each PSO particle represents a set of estimated blur-model parameters, and an estimated recovery is generated from the particle by the Richardson-Lucy algorithm. Based on entropy theory, an objective function is then created that distinguishes the characteristics of a blurred image from those of a clear image, so that an approximate PSF and an approximate recovery are obtained once the particles are optimized. Generally speaking, each kind of degradation of the observed image corresponds to a kind of PSF; according to the observed degradation, different strategies (blur models) for generating particles are utilized. After identifying the PSF, the Richardson-Lucy algorithm recovers the blurred image. The proposed algorithm can identify the PSF without prior knowledge of the PSF or the true image. Finally, the proposed algorithm is compared in extensive simulations with a state-of-the-art evolutionary algorithm; the results demonstrate its superior performance, feasibility, and validity.
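The Richardson-Lucy step the algorithm builds on has a standard multiplicative update, and histogram entropy is one way to score sharpness; a minimal sketch for a known PSF follows (the PSO layer that estimates the PSF parameters is omitted, and all specifics are illustrative).

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iters=30, eps=1e-12):
    """Multiplicative RL update: estimate *= correlate(observed / reblurred, psf)."""
    estimate = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(iters):
        reblurred = fftconvolve(estimate, psf, mode="same")
        estimate = estimate * fftconvolve(blurred / (reblurred + eps),
                                          psf_flip, mode="same")
    return estimate

def histogram_entropy(img, bins=64):
    """Shannon entropy of the intensity histogram -- one possible PSO fitness term."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
h = np.hanning(9)
psf = np.outer(h, h)
psf /= psf.sum()                              # illustrative blur kernel
sharp = rng.random((64, 64))
blurred = fftconvolve(sharp, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print(histogram_entropy(blurred), histogram_entropy(restored))
```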
Gli stili APA, Harvard, Vancouver, ISO e altri
45

"CANADA’S GRAIN HANDLING AND TRANSPORTATION SYSTEM: A GIS-BASED EVALUATION OF POLICY CHANGES". Thesis, 2014. http://hdl.handle.net/10388/ETD-2014-10-1795.

Testo completo
Abstract (sommario):
Western Canada is in a post-Canadian Wheat Board single-desk market, in which grain handlers face policy, allocation, and logistical changes in the transportation of grain. This research looks at the rail transportation problem of allocating wheat from the Prairies to port position, offering a new allocation system that fits the evolving environment of Western Canada's grain market. Optimization and analysis of wheat transport by rail is performed using geographic information system software together with spatial and historical data. The studied transportation problem seeks to minimize time costs rather than looking purely at locational costs or closest proximity to port. Through optimization, three major bottlenecks are found to constrain the transportation problem: 1) an allocation preference towards the Thunder Bay and Vancouver ports, 2) the inefficiency of small-capacity trains, and 3) a mismatched distribution of supply and demand between the Class 1 railway firms. Through analysis of counterfactual policies and a scaled sensitivity analysis of the transportation problem, the rail grain transport system is found to be dynamic and time-efficient, specifically when larger train capacities are utilized, open access to rail is offered, and supplies are more readily available. Even under the current circumstances of reduced grain movement and inefficiencies, there are policies and logistics that can be implemented to provide grain handlers in Western Canada with the transportation needed to fulfill their export demands.
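At its core, the allocation described above is a classic balanced transportation linear program; a generic sketch with scipy follows, where the origins, ports, costs, supplies, and demands are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative time-costs (e.g., hours) from two Prairie origins to two ports.
cost = np.array([[52.0, 30.0],    # origin 1 -> Thunder Bay, Vancouver
                 [40.0, 44.0]])   # origin 2 -> Thunder Bay, Vancouver
supply = [300.0, 250.0]           # grain available at each origin
demand = [200.0, 350.0]           # port export requirements (balanced totals)

m, n = cost.shape
A_eq = []
for i in range(m):                # each origin ships exactly its supply
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1; A_eq.append(row)
for j in range(n):                # each port receives exactly its demand
    row = np.zeros(m * n); row[j::n] = 1; A_eq.append(row)

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=supply + demand,
              bounds=[(0, None)] * (m * n), method="highs")
print(res.x.reshape(m, n))        # optimal shipment plan minimizing total time-cost
```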
Gli stili APA, Harvard, Vancouver, ISO e altri
46

Husmann, Kai. "Development, evaluation and application of inference-based decision support methods to meet the rising wood demands of the growing bio-economy sector". Thesis, 2017. http://hdl.handle.net/11858/00-1735-0000-0023-3F48-7.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
47

Zhang, Ti. "Integrated network-based models for evaluating and optimizing the impact of electric vehicles on the transportation system". Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-08-5960.

Testo completo
Abstract (sommario):
The adoption of plug-in electric vehicles (PEVs) requires research into models and algorithms that trace vehicle assignment with PEVs in the transportation network, so that traffic patterns can be predicted more precisely and accurately. To attain this goal, this dissertation develops new formulations for modeling the traveling behavior of electric vehicle drivers in a mixed-flow traffic network environment. Much of the work is motivated by the special features of PEVs (such as range limitation and long recharging times) and by the lack of tools for understanding PEV drivers' traveling behavior and for learning the impacts of charging infrastructure supply and policy on network traffic patterns. The essential issues addressed are: (1) modeling the spatial choice behavior of electric vehicle drivers and analyzing the impacts of charging speed and price; (2) modeling their temporal and spatial choice behavior and analyzing the impacts of electric vehicle range and penetration rate; and (3) designing optimal charging infrastructure investments and policy from a revenue-management perspective. Stochastic traffic assignment that takes charging cost and charging time into account is examined first. A quasi-dynamic stochastic user equilibrium model for the combined choices of departure time, duration of stay, and route, which integrates the nested-logit discrete choice model, is then formulated as a variational inequality problem. This equilibrium model is further extended to a network design model that determines optimal charging infrastructure capacity and pricing; the objective is to maximize revenue subject to equilibrium constraints that explicitly consider the drivers' combined choice behavior. The proposed models and algorithms are tested on small- to medium-size transportation networks, and extensive numerical experiments assess their performance. The results offer initial insights into network equilibrium models accounting for PEVs under different scenarios of charging infrastructure supply, electric vehicle characteristics, and penetration rates. The analytical tools developed in this dissertation, and the insights obtained, offer an important first step for travel demand modeling and policy making incorporating PEVs.
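The nested-logit choice structure mentioned above builds on logit choice probabilities; as a reminder of that building block (the nested structure and the variational inequality solver of the dissertation are not reproduced here), a multinomial logit sketch:

```python
import numpy as np

def logit_probabilities(utilities, theta=1.0):
    """Multinomial logit: P_i = exp(theta * V_i) / sum_j exp(theta * V_j)."""
    v = theta * np.asarray(utilities, dtype=float)
    v -= v.max()                      # stabilize the exponentials
    e = np.exp(v)
    return e / e.sum()

# Illustrative systematic utilities for three departure-time/route options.
print(logit_probabilities([-1.2, -0.8, -1.5]))
```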
Gli stili APA, Harvard, Vancouver, ISO e altri
