
Dissertations / Theses on the topic 'Optimizing parameters'


Consult the top 50 dissertations / theses for your research on the topic 'Optimizing parameters.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Kumar, Ashwani. "Optimizing Parameters for High-quality Metagenomic Assembly." Miami University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=miami1437997082.

2

Hoy, Thomas Lavelle. "Optimizing Solvent Blends for a Quinary System." University of Akron / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=akron1462199621.

3

Benslimane, Ziad. "Optimizing Hadoop Parameters Based on the Application Resource Consumption." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-200144.

Abstract:
The interest in analyzing growing amounts of data has encouraged the deployment of large-scale parallel computing frameworks such as Hadoop. In other words, data analytics is the main reason behind the success of distributed systems; this is due to the fact that data might not fit on a single disk, and that processing can be very time consuming, which makes parallel input analysis very useful. Hadoop relies on the MapReduce programming paradigm to distribute work among the machines, so a good load balance will eventually influence the execution time of those kinds of applications. This paper introduces a technique to optimize some configuration parameters using the application's CPU utilization in order to tune Hadoop; the theories stated and proved in this paper rely on the fact that the CPUs should neither be over-utilized nor under-utilized; in other words, the conclusion is a sort of equation for the parameter to be optimized in terms of the cluster infrastructure. Future research on this topic is planned to focus on tuning other Hadoop parameters and on using more accurate tools to analyze cluster performance; moreover, it is also interesting to research possible ways to optimize Hadoop parameters based on other consumption criteria, such as input/output statistics and network traffic.
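The abstract's idea of expressing an optimal parameter in terms of the cluster infrastructure can be illustrated with a small sketch. Everything below is hypothetical (the function name, the profiled CPU share, the target utilization), showing only the shape of such an equation, not the thesis's actual derivation.

```python
# A minimal sketch, assuming hypothetical profiling numbers: choose the number
# of concurrent map tasks per node so CPUs are neither over- nor under-utilized.

def map_slots_per_node(cores: int, cpu_share_per_task: float,
                       target_utilization: float = 0.9) -> int:
    """Estimate concurrent map tasks from one task's average CPU share.

    cpu_share_per_task: fraction of a single core one map task keeps busy,
    measured by profiling the application (e.g., 0.6 = 60% of a core).
    """
    slots = int(target_utilization * cores / cpu_share_per_task)
    return max(1, slots)

# Example: 8-core workers, each map task keeps ~0.6 of a core busy.
print(map_slots_per_node(cores=8, cpu_share_per_task=0.6))  # -> 12
```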
4

Shore, Patrick. "Swinging Babe's Bat: Optimizing Home Run Distance Using Ideal Parameters." Scholarship @ Claremont, 2019. https://scholarship.claremont.edu/cmc_theses/2226.

Abstract:
Significant research has been conducted on the physics of ball and bat collisions in an effort to model and understand real-world conditions. This thesis expands upon previous research to determine the maximum distance a ball can travel under ideal circumstances. Bat mass, bat speed, pitch speed and pitch spin were controlled values. These values were selected based on the highest recorded MLB values for their respective category. Specifically, these are: Babe Ruth’s largest bat, Giancarlo Stanton’s recorded swing speed and Aroldis Chapman’s fastest fastball. A model was developed for a planar collision between a bat and ball using conservation laws in order to achieve the maximum exit velocity of the ball during a head-on collision. However, this thesis is focused on home runs and long fly-balls that occur from oblique collisions rather than the line drives produced by head-on collisions. The planar collision model results were adjusted to oblique collisions based on data from previous experimental research. The ball and bat were assumed to be moving in opposite directions parallel to one another at the point of impact with the ball slightly elevated above the bat. The post-collision results for the launch angle, spin and final exit velocity of the ball were calculated as functions of the perpendicular distance from the centerline of the bat to the centerline of the ball. Trajectories of the ball were calculated using a flight model that measured the final distance of the ball based on lift and drag forces. The results indicate that the optimum pre-collision parameters described above will maximize the distance traveled by the ball well beyond the farthest recorded home run distance. Experimentally determined factors such as the drag coefficient and coefficient of restitution have a significant impact on the flight of the ball. Implications of the results are discussed.
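The flight-model portion of this abstract (gravity, drag, lift) can be sketched compactly. This is a minimal 2-D model assuming constant drag and lift coefficients; the coefficients and launch conditions are illustrative placeholders, not the thesis's values.

```python
# A minimal 2-D batted-ball flight sketch: integrate the trajectory under
# gravity, drag, and lift from backspin. All numbers below are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81             # m/s^2
m = 0.145            # baseball mass, kg
rho = 1.225          # air density, kg/m^3
A = 0.00426          # cross-sectional area, m^2
Cd, Cl = 0.35, 0.20  # assumed drag and lift coefficients

def rhs(t, s):
    x, y, vx, vy = s
    v = np.hypot(vx, vy)
    k = 0.5 * rho * A * v / m
    # Drag opposes velocity; lift from backspin is perpendicular to it.
    ax = -k * (Cd * vx + Cl * vy)
    ay = -g - k * (Cd * vy - Cl * vx)
    return [vx, vy, ax, ay]

def hit_ground(t, s):      # stop when the ball returns to the ground
    return s[1]
hit_ground.terminal, hit_ground.direction = True, -1

v0, angle = 55.0, np.radians(28)   # assumed exit speed (m/s) and launch angle
sol = solve_ivp(rhs, [0, 20], [0.0, 1.0, v0*np.cos(angle), v0*np.sin(angle)],
                events=hit_ground, max_step=0.01)
print(f"carry distance: {sol.y[0, -1]:.1f} m")
```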
5

Udomkun, Patchimaporn [Verfasser]. "Increasing nutritional value of papaya (Carica papaya L.) by optimizing pretreatment and drying parameters / Patchimaporn Udomkun." Aachen : Shaker, 2015. http://d-nb.info/1075437016/34.

6

Nguyen, Khang D. "Systematic approach to optimizing free parameters in the Goldstone-boson-exchange model of quark-quark interactions." Thesis, California State University, Long Beach, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1566292.

Abstract:

The set of parameters used in the Goldstone-boson-exchange (GBE) model of quark-quark interactions by a group from the University of Graz to calculate baryon energy spectra is not optimal. A systematic approach to optimize these free parameters for a greater collection of baryons than previously treated is presented here. The baryons considered possess a physical symmetry where their constituent quarks are either made of all identical quarks or just two identical quarks. In order to calculate the various energy states of these baryons, the Faddeev method is used under the premise that three-quark interactions are modeled by an infinitely rising confinement potential. The new parameters and resulting energy calculations obtained yield better agreement with experimental data than previously achieved. In addition to providing a stronger case for the GBE model, these newfound parameters have the potential to give further insight into how quarks interact and pave the way for more advanced work in the field of three-quark problems.

7

Berquin, Yann. "Assessing the performances and optimizing the radar sounder design parameters for the EJSM mission (Ganymede and Europa)." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENU001/document.

Abstract:
This manuscript details work performed during my PhD on planetary sounding radar. The main goal of the study is to help design the sounding radar and assess its performance. This instrument will be embarked on ESA's large-class mission JUICE to probe Jupiter's environment and Jupiter's icy moons Callisto, Ganymede and Europa. As an introduction to the problem, a study of Ganymede's surface DEM and its implications for the radar performance was performed. The results of this work highlighted issues due to a hostile environment with significant surface clutter, which eventually led to a decrease of the radar signal bandwidth to 8-10 MHz. A first section is then dedicated to the formulation of the direct problem of sounding radar, recalling how the classical volume and surface (i.e., Huygens-Fresnel) formulations are derived from the Stratton-Chu formulations, with a focus on surface formulations. This section leads to a novel algorithm for computing radar surface echoes from meshed planetary surfaces, which proves to be both efficient and accurate. A second section studies the possibility of using surface formulations to recover geophysical surface parameters from sounding radar data. For that purpose, the problem is cast in a probabilistic framework and three main families of approaches are discussed, namely (i) a linear approach, (ii) a gradient-based iterative approach and (iii) a statistical approach for estimating posterior probability densities. These techniques yield good results with different setups and are applied to synthetic data sets to illustrate their performance. Although we mainly focus on surface reflectivity, we also discuss surface topography inversion. Finally, a last section discusses the work presented in the manuscript and provides perspectives for future work.
8

Santhosh, Sandhya. "Determination of surface finish and Metal Removal Rate by varying parameters in wirecut Electrical Discharge Machining and optimizing using genetic algorithm." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Abstract:
Wire Electric Discharge Machining (WEDM) is one of the greatest innovations in the tooling and machining industry. This process has brought dramatic improvements in accuracy, quality, productivity and earnings. Before wire EDM, costly processes were often used to produce finished parts. Now, with the aid of computers and wire EDM machines, extremely complicated shapes can be cut automatically, precisely and economically, even in materials as hard as carbide. The selection of optimum machining parameters in WEDM is an important step. Improperly selected parameters may result in serious problems like short-circuiting of the wire, wire breakage and work surface damage, which impose limits on the production schedule and reduce productivity. The objective of the present work is to investigate the effects of the various wire-cut EDM process parameters on surface quality and material removal rate (MRR), and to obtain the optimal sets of process parameters so that the quality and MRR of machined parts can be optimized. Experiments were conducted on workpieces of aluminium alloy by varying the parameters. The process parameters considered are pulse-on time, pulse-off time, input power, wire feed, servo voltage and wire tension. The optimization is done using a genetic algorithm. The work was performed in English at WinWill Technical Services, located in Hyderabad, India, over a period of 4 months.
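A toy version of the genetic-algorithm step described above, assuming a made-up response model in place of the regression one would fit from the experiments; parameter names, bounds, and weights are illustrative.

```python
# A GA sketch over WEDM parameter settings: evolve candidates against a
# fitness that rewards high MRR and low surface roughness. The response
# functions are hypothetical stand-ins for fitted experimental models.
import random

BOUNDS = {"t_on": (100, 130), "t_off": (40, 60), "wire_feed": (2, 10)}

def fitness(p):
    mrr = 0.02*p["t_on"] - 0.01*p["t_off"] + 0.05*p["wire_feed"]  # removal rate
    ra  = 0.03*p["t_on"] - 0.005*p["t_off"]                       # roughness
    return mrr - 0.5*ra              # weighted objective: high MRR, low Ra

def rand_ind():
    return {k: random.uniform(*b) for k, b in BOUNDS.items()}

pop = [rand_ind() for _ in range(30)]
for _ in range(50):                  # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    children = []
    while len(children) < 20:
        a, b = random.sample(elite, 2)
        child = {k: (a[k] + b[k]) / 2 for k in BOUNDS}   # averaging crossover
        k = random.choice(list(BOUNDS))                  # single-gene mutation
        child[k] = random.uniform(*BOUNDS[k])
        children.append(child)
    pop = elite + children

print(max(pop, key=fitness))         # best parameter setting found
```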
9

Chu, Joni, and Irving Harrison. "INCREASING MONITORING CAPACITY TO KEEP PACE WITH THE WIRELESS REVOLUTION." International Foundation for Telemetering, 2000. http://hdl.handle.net/10150/608276.

Abstract:
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California
With wireless communications becoming the rule rather than the exception, satellite operators need tools to effectively monitor increasingly large and complex satellite constellations. Visual data monitoring increases the monitoring capacity of satellite operators by several orders of magnitude, enabling them to track hundreds of thousands of parameters in real-time on a single screen. With this powerful new tool, operators can proactively address potential problems before they become customer complaints.
10

Jamalabadi, Hamidreza [Verfasser], and Steffen [Akademischer Betreuer] Gais. "Optimizing parameters and algorithms of multivariate pattern classification for hypothesis testing in high-density EEG / Hamidreza Jamalabadi ; Betreuer: Steffen Gais." Tübingen : Universitätsbibliothek Tübingen, 2017. http://d-nb.info/119946936X/34.

11

MARCON, DIOGO REATO. "NUMERICAL MODELING OF THE CO2 INJECTION IN SALINE AQUIFERS: INVESTIGATION OF THE RELEVANT PARAMETERS FOR OPTIMIZING THE STORAGE IN CCS – CARBON CAPTURE AND STORAGE PROJECTS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=15354@1.

Abstract:
This work presents an analysis of the technique of carbon dioxide injection into saline aquifers for disposal and storage purposes. The goal was to determine the characteristics of the aquifer to be selected, and of the injection process, that allow a larger amount of CO2 to be stored in a shorter time. To that end, a literature review was first carried out on the available laboratory data for CO2 and salt water. Natural CO2 fields that can serve as analogues for the geological storage of this gas were also surveyed, along with the conditions considered suitable for the technique in question and previous numerical simulation studies. Based on the information obtained in the review, and after validating the fluid model against the laboratory data, the process variables to be analyzed were defined and a methodology for the study was developed. The procedure consisted of establishing the assumptions for the numerical simulation of the base case and generating the derived scenarios. Each of the following parameters was changed individually: salinity, depth, horizontal permeability, ratio between vertical and horizontal permeabilities, injection rate, porosity and residual water saturation. Finally, the criteria defined in the proposed methodology were applied to compare the simulation results, and it was concluded that the most important characteristics which, under the adopted assumptions, allow a larger amount of CO2 to be stored in a shorter time interval are, in decreasing order of importance: higher injection rate, higher horizontal permeability and shallower injection depth.
12

Herwin, Eric. "Optimizing process parameters to increase the quality of the output in a separator : An application of Deep Kernel Learning in combination with the Basin-hopping optimizer." Thesis, Linköpings universitet, Statistik och maskininlärning, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158182.

Abstract:
Achieving optimal efficiency of production in the industrial sector is a process that is continuously under development. Separators produced by Alfa Laval may be found in several industrial installations, and it is therefore of interest to make these separators operate more efficiently. The separator investigated here separates impurities and water from crude oil. The separation performance is partially affected by the settings of process parameters. This thesis investigates whether optimal or near-optimal process parameter settings, which minimize the water content in the output, can be obtained. Furthermore, it is also investigated whether these settings can be tested to draw conclusions about their suitability for the separator. The data used in this investigation originate from sensors of a factory-installed separator. They consist of five variables related to the water content in the output. Two additional variables, related to time, are created to enforce this relationship. Using these data, optimal or near-optimal process parameter settings may be found with an optimization technique. For this procedure, a Gaussian Process with the Deep Kernel Learning extension (GP-DKL) is used to model the relationship between the water content and the sensor data. Three models with different kernel functions are evaluated, and the GP-DKL with a Spectral Mixture kernel is demonstrated to be the most suitable option. This combination is used as the objective function in a Basin-hopping optimizer, resulting in settings which correspond to a lower water content. Thus, it is concluded that optimal or near-optimal settings can be obtained. Furthermore, the process parameter settings of a session can be tested by utilizing the Bayesian properties of the GP-DKL model. However, due to the large posterior variance of the model, it cannot be determined whether the process parameter settings are suitable for the separator.
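A condensed sketch of the pipeline this abstract describes: fit a surrogate of water content versus process parameters, then run Basin-hopping on it. A plain Gaussian process stands in for the GP-DKL model, and the training data are synthetic.

```python
# Surrogate-plus-Basin-hopping sketch, under stated assumptions: sklearn's
# GaussianProcessRegressor replaces the GP-DKL model, and the "sensor data"
# are synthetic with a known optimum near 0.3 in each coordinate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from scipy.optimize import basinhopping

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(80, 5))                 # 5 process parameters
y = np.sum((X - 0.3)**2, axis=1) + 0.01*rng.standard_normal(80)  # water content

gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

def objective(x):                                   # predicted water content
    return gp.predict(x.reshape(1, -1))[0]

res = basinhopping(objective, x0=np.full(5, 0.5),
                   minimizer_kwargs={"method": "L-BFGS-B",
                                     "bounds": [(0, 1)]*5},
                   niter=50, seed=1)
print(res.x, res.fun)   # settings predicted to minimize water content
```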
13

Кравченко, Сергій Сергійович. "Конвертація стаціонарного двигуна ГД100 для роботи на низькокалорійних газових паливах" [Conversion of the stationary GD100 engine for operation on low-calorie gas fuels]. Thesis, НТУ "ХПІ", 2015. http://repository.kpi.kharkov.ua/handle/KhPI-Press/20947.

Abstract:
Thesis for the degree of Candidate of Technical Sciences, specialty 05.05.03 (engines and power plants), National Technical University "Kharkiv Polytechnic Institute", Kharkiv, 2016. The thesis investigates the use of low-calorie gas fuels (LCG) in engines with pre-chamber torch ignition of the fuel-air mixture and qualitative power regulation, models the in-cylinder processes of the engine, and searches for its rational parameters. A set of mathematical models describing the in-cylinder processes of an engine with pre-chamber torch ignition was developed, implemented, and put to practical use. The calculations performed made it possible to determine the effect of LCG properties on the performance of a GD100-type gas engine. A technique is proposed for determining the optimal pre-chamber parameters on the basis of a set of performance criteria: the minimum ignition energy of the mixture, the energy of the pre-chamber torch, and the pre-chamber purge coefficient. As a result of the optimization study, rational pre-chamber parameters are proposed that ensure reliable ignition and combustion of the fuel-air mixture in the cylinder. The possibilities of structurally ensuring the rated engine power when using various low-calorie gases as fuel are analyzed. Design and adjustment parameters of the GD100 engine were obtained that ensure high technical and economic performance when operating on LCG.
14

Кравченко, Сергій Сергійович. "Конвертація стаціонарного двигуна ГД100 для роботи на низькокалорійних газових паливах" [Conversion of the stationary GD100 engine for operation on low-calorie gas fuels]. Thesis, НТУ "ХПІ", 2016. http://repository.kpi.kharkov.ua/handle/KhPI-Press/20945.

Abstract:
Thesis for the degree of Candidate of Technical Sciences, specialty 05.05.03 (engines and power plants), National Technical University "Kharkiv Polytechnic Institute", Kharkiv, 2016. The thesis investigates the use of low-calorie gas fuels (LCG) in engines with pre-chamber torch ignition of the fuel-air mixture and qualitative power regulation, models the in-cylinder processes of the engine, and searches for its rational parameters. A set of mathematical models describing the in-cylinder processes of an engine with pre-chamber torch ignition was developed, implemented, and put to practical use. The calculations performed made it possible to determine the effect of LCG properties on the performance of a GD100-type gas engine. A technique is proposed for determining the optimal pre-chamber parameters on the basis of a set of performance criteria: the minimum ignition energy of the mixture, the energy of the pre-chamber torch, and the pre-chamber purge coefficient. As a result of the optimization study, rational pre-chamber parameters are proposed that ensure reliable ignition and combustion of the fuel-air mixture in the cylinder. The possibilities of structurally ensuring the rated engine power when using various low-calorie gases as fuel are analyzed. Design and adjustment parameters of the GD100 engine were obtained that ensure high technical and economic performance when operating on LCG.
15

Weaver, Josh. "The Self-Optimizing Inverse Methodology for Material Parameter Identification and Distributed Damage Detection." University of Akron / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=akron1428316985.

16

Clausner, André. "Möglichkeiten zur Steuerung von Trust-Region Verfahren im Rahmen der Parameteridentifikation." Thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-114847.

Abstract:
Simulating technical processes requires a sufficiently accurate description of the material behavior. The phenomenological approaches frequently used for this purpose, such as the Hill yield criterion in the present case, contain material-specific parameters that cannot be measured directly. These material parameters are usually identified by minimizing a least-squares functional containing the differences between measured values and the corresponding numerically computed values. In this context, trust-region methods have proven well suited for solving this minimization problem. The task is to investigate the various options for controlling a trust-region method with regard to their suitability for the identification problem at hand. To this end, least-squares problems and their solution methods are first surveyed. Trust-region methods are then examined in more detail, restricting attention in the following to methods with positive definite approximations of the Hessian, i.e., Levenberg-Marquardt methods. Such a Levenberg-Marquardt algorithm is then implemented in several variants and tested on the identification problem at hand. The result is a well-performing combination of the sub-algorithms of the Levenberg-Marquardt algorithm with a high convergence rate, which is well suited to the present problem.
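A minimal parameter-identification sketch in the spirit of this abstract: a least-squares functional of measured minus computed values, minimized with a Levenberg-Marquardt solver. The material "model" here is a hypothetical stand-in for the finite element simulation with Hill parameters.

```python
# Levenberg-Marquardt identification sketch; model form and data are assumed.
import numpy as np
from scipy.optimize import least_squares

def model(params, strain):          # hypothetical material response
    a, b = params
    return a * (1.0 - np.exp(-b * strain))

strain = np.linspace(0.01, 0.2, 25)
measured = model([350.0, 18.0], strain) + np.random.normal(0, 2.0, 25)

def residuals(params):              # measured vs. computed differences
    return model(params, strain) - measured

fit = least_squares(residuals, x0=[200.0, 5.0], method="lm")
print(fit.x)                        # identified material parameters
```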
17

Graciano, José Eduardo Alves. "Real time optimization in chemical process: evaluation of strategies, improvements and industrial application." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/3/3137/tde-12072016-094348/.

Abstract:
The increasing economic competition drives industry to implement tools that improve process efficiency. Process automation is one of these tools, and Real Time Optimization (RTO) is an automation methodology that considers economic aspects to update the process control in accordance with market prices and disturbances. Basically, RTO uses a steady-state phenomenological model to predict the process behavior, and then optimizes an economic objective function subject to this model. Although largely implemented in industry, there is no general agreement about the benefits of implementing RTO, due to some limitations discussed in the present work: structural plant/model mismatch, identifiability issues and low frequency of set point updates. Some alternative RTO approaches have been proposed in the literature to handle the problem of structural plant/model mismatch. However, there is no comprehensive comparison evaluating the scope and limitations of these RTO approaches under different aspects. For this reason, the classical two-step method is compared to more recent derivative-based methods (Modifier Adaptation; Integrated System Optimization and Parameter Estimation; and Sufficient Conditions of Feasibility and Optimality) using a Monte Carlo methodology. The results of this comparison show that the classical RTO method is consistent, provided that the model is flexible enough to represent the process topology, that the parameter estimation method is appropriate for the measurement noise characteristics, and that a method is used to improve the quality of the sample information. At each iteration, the RTO methodology updates some key parameters of the model, where identifiability issues caused by a lack of measurements and by measurement noise can be observed, resulting in poor prediction ability. Therefore, four different parameter estimation approaches (Rotational Discrimination; Automatic Selection and Parameter Estimation; Reparametrization via Differential Geometry; and classical nonlinear Least Squares) are evaluated with respect to their prediction accuracy, robustness and speed. The results show that the Rotational Discrimination method is the most suitable for implementation in an RTO framework, since it requires less a priori information, is simple to implement and avoids the overfitting caused by the Least Squares method. The third RTO drawback discussed in the present thesis is the low frequency of set point updates, which increases the period in which the process operates at suboptimal conditions. An alternative to handle this problem is proposed in this thesis, integrating classic RTO and Self-Optimizing Control (SOC) using a new Model Predictive Control strategy. The new approach demonstrates that it is possible to reduce the problem of low set point update frequency, improving economic performance. Finally, the practical aspects of RTO implementation are examined in an industrial case study, a Vapor Recompression Distillation (VRD) process located in the Paulínia refinery of Petrobras. The conclusions of this study suggest that the model parameters are successfully estimated by the Rotational Discrimination method; that RTO is able to improve the process profit by about 3%, equivalent to 2 million dollars per year; and that the integration of SOC and RTO may be an interesting control alternative for the VRD process.
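The classical two-step RTO cycle that this thesis takes as its baseline can be sketched in a few lines: re-estimate a model parameter from plant measurements, then re-optimize an economic objective on the updated model. The plant, model, and prices below are invented stand-ins, not the thesis's process.

```python
# Two-step RTO loop sketch, under assumed plant/model/price functions.
import numpy as np
from scipy.optimize import minimize_scalar, least_squares

def plant(u):                 # "true" plant response (unknown in practice)
    return 1.2*u - 0.15*u**2

def model(u, theta):          # steady-state model with adjustable parameter
    return theta*u - 0.1*u**2

theta, u = 1.0, 1.0
for it in range(10):
    y_meas = plant(u) + np.random.normal(0, 0.01)       # noisy measurement
    # Step 1: parameter estimation at the current operating point
    theta = least_squares(lambda th: [model(u, th[0]) - y_meas], [theta]).x[0]
    # Step 2: economic optimization (profit = product value - input cost)
    res = minimize_scalar(lambda v: -(2.0*model(v, theta) - 1.0*v),
                          bounds=(0.1, 8.0), method="bounded")
    u = res.x                                            # new set point
print(f"setpoint u = {u:.3f}, theta = {theta:.3f}")
```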
18

Cioaca, Alexandru George. "A Computational Framework for Assessing and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/51795.

Abstract:
A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning redundant data, and identifying the most beneficial additional observations. These questions originate in operational data assimilation practice, and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four dimensional variational (4D-Var) - data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided. The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks, while preserving their ability to capture the essential features of the system under consideration.
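For reference, the strong-constraint 4D-Var cost function underlying this kind of work has the standard form below (symbols: background state \(x_b\) with error covariance \(B\), observations \(y_k\) with error covariances \(R_k\), observation operators \(\mathcal{H}_k\), and model propagator \(\mathcal{M}\)); the thesis's sensitivity and observation-impact machinery is built on derivatives of this functional.

```latex
J(x_0) = \tfrac{1}{2}\,(x_0 - x_b)^{\top} B^{-1} (x_0 - x_b)
       + \tfrac{1}{2}\sum_{k=0}^{N} \big(\mathcal{H}_k(x_k) - y_k\big)^{\top}
         R_k^{-1}\,\big(\mathcal{H}_k(x_k) - y_k\big),
\qquad x_k = \mathcal{M}_{0 \to k}(x_0).
```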
19

Siedhoff, Dominic [Verfasser], Heinrich [Akademischer Betreuer] Müller, and Dorit [Gutachter] Merhof. "A parameter-optimizing model-based approach to the analysis of low-SNR image sequences for biological virus detection / Dominic Siedhoff ; Gutachter: Dorit Merhof ; Betreuer: Heinrich Müller." Dortmund : Universitätsbibliothek Dortmund, 2016. http://d-nb.info/1115464019/34.

20

Galvez, ramirez Nicolas. "A Framework for Autonomous Generation of Strategies in Satisfiability Modulo Theories Improving complex SMT strategies with learning Optimizing SMT Solving Strategies by Learning with an Evolutionary Process Evolving SMT Strategies Towards Automated Strategies in Satisfiability Modulo Theory." Thesis, Angers, 2018. http://www.theses.fr/2018ANGE0026.

Abstract:
The Strategy Challenge in Satisfiability Modulo Theories (SMT) calls for building theoretical and practical tools that allow users to exert strategic control over the core heuristic aspects of high-performance SMT solvers. In this work, we focus on the Z3 theorem prover, one of the most efficient SMT solvers according to the SMT competition, SMT-COMP. In SMT solvers, the definition of a strategy relies on a set of components that can be scheduled and configured in order to guide the search for a proof of (un)satisfiability of a given instance. In this thesis, we address the Strategy Challenge in SMT by defining a framework for the autonomous generation of strategies in Z3, i.e., a practical system to generate SMT strategies automatically without the use of expert knowledge. This framework is applied through an incremental evolutionary approach, starting from basic algorithms and moving to more complex genetic constructions. It formalises strategy modification as rewriting rules, with evolutionary algorithms acting as engines to apply them. This intermediate layer allows any algorithm or operator to be applied, without needing to be structurally modified, in order to introduce new information into strategies. Validation is carried out through experiments on classic benchmarks of the SMT-COMP.
21

Cheng, Shu-Hui, and 鄭淑慧. "Optimizing Process Parameters for Furnace Process." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/00196586648080539057.

Abstract:
Master's thesis, National Chiao Tung University, Industrial Engineering and Management Group, Executive Master's Program, College of Management, ROC year 95 (2006).
In semiconductor manufacturing, the poly-doped diffusion process is designed to produce a layer of thin film. Due to complex physical and chemical reactions, the resistance of the thin film varies dynamically, so frequent adjustments of process parameters are needed. In practice, the decision on such process parameter adjustments was based on a simple linear interpolation technique, which is not very effective and leads to high variation in the film resistance. To reduce the variation of film resistance, this research used back-propagation neural networks (BPNN) and developed several predictor models for determining the process parameters for the next run. The development of these predictor models is based on a set of sampled data. Of these predictor models, the one that considers the manufacturing information of the last three runs performs with the best accuracy and is called the best-practice model. Based on a large amount of production data, we could justify that the best-practice model is more effective than the traditional linear interpolation technique in reducing the variation of film resistance.
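A compact stand-in for the kind of predictor model described above, using scikit-learn's MLP in place of a hand-rolled back-propagation network; the feature layout (three runs of four signals each) and the data are hypothetical.

```python
# BPNN-style predictor sketch: learn the next-run parameter adjustment from
# the manufacturing information of the last three runs. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
# Each row: last three runs x four signals; target: next-run adjustment.
X = rng.normal(size=(300, 12))
y = X[:, 0] - 0.5*X[:, 4] + 0.25*X[:, 8] + 0.05*rng.normal(size=300)

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                 random_state=0))
net.fit(X[:250], y[:250])
print("holdout R^2:", net.score(X[250:], y[250:]))
```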
22

Liu, Chun-Ting, and 劉鈞霆. "Automatic-Selection Scheme for Optimizing ART2 Parameters." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/36386552845878098296.

Abstract:
Master's thesis, National Cheng Kung University, Institute of Manufacturing Information and Systems, ROC year 98 (2009).
Adaptive Resonance Theory 2 (ART2) is an unsupervised neural network that solves the common stability-and-plasticity dilemma found in other clustering techniques: it updates its model quickly for new data without requiring the number of clusters to be specified. However, the network's clustering result is greatly influenced by the setting of its parameters (e.g., the vigilance parameter, ρ). Most existing research uses a trial-and-error method, which is time consuming, may fail to reach an approximately optimal combination of parameters, and cannot dynamically adjust the parameters to the real-time situation. To solve the aforementioned problems, this research introduces a New ART2 algorithm and an automatic-selection scheme for its optimal parameters. The New ART2 algorithm achieves better clustering validity, especially on datasets from the semiconductor and TFT-LCD industries. It can also be applied to the metrology data quality evaluation (DQIy) scheme of the Advanced Virtual Metrology System, which improves the effectiveness of detecting metrology data abnormalities. In addition to the cosine similarity measure of the classical ART2 algorithm, the New ART2 algorithm adds a Euclidean distance check to double-check the similarity between the input vector and the patterns. Accompanying the New ART2 algorithm is the automatic-selection scheme for the optimal ART2 parameters, which first utilizes methods such as the run test and simple weighted moving averages to predefine the patterns according to the variation and shifting of the process data, and then automatically searches for the optimal combination of parameters. In addition, the silhouette coefficient and mean square error are applied to evaluate the clustering validity of the New ART2 algorithm. Finally, the New ART2 algorithm is employed in DQIy and evaluated with real PS-height data from the TFT-LCD industry. Experimental results show that better DQIy performance is achieved with the proposed New ART2 algorithm.
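The "double check" this thesis adds to ART2 can be shown in isolation: a pattern joins a cluster only if it passes both the cosine (vigilance) test and the added Euclidean-distance test. The thresholds below are examples, not the thesis's tuned values.

```python
# The cosine-plus-Euclidean acceptance test, in isolation; rho and d_max
# are illustrative thresholds.
import numpy as np

def passes_double_check(x, prototype, rho=0.95, d_max=0.5):
    cos = np.dot(x, prototype) / (np.linalg.norm(x) * np.linalg.norm(prototype))
    dist = np.linalg.norm(x - prototype)
    return cos >= rho and dist <= d_max

proto = np.array([1.0, 0.0, 0.0])
print(passes_double_check(np.array([0.9, 0.1, 0.0]), proto))  # True: both pass
print(passes_double_check(np.array([3.0, 0.1, 0.0]), proto))  # False: cosine
                                                              # passes, distance fails
```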
23

Jheng, Yu-Chen, and 鄭育宸. "Optimizing Geometric and Cutting Parameters of Biopsy Needle." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/g8by49.

Abstract:
Master's thesis, National Cheng Kung University, Department of Mechanical Engineering, ROC year 106 (2017).
Needle biopsy technology has been widely applied in many medical fields. In the biopsy procedure, the tissue cutting force is a key factor affecting the quality of the obtained samples. The quality of tissue samples can be improved by reducing the cutting force, which leads to more accurate cancer diagnosis. To study the influence of needle geometry and parameter configuration on the cutting force, this research applies design optimization methodologies to find the optimal geometric and cutting parameters of biopsy needles that minimize the cutting force. The effect of each parameter is also investigated. Two main needle cutting methods, stationary needle insertion and rotational cutting, are considered in this study. A gelatin tissue phantom is used to mimic breast tissue. The Taguchi method is applied to optimize the geometry of needles with lancet and back-bevel tips. The relative magnitude of the cutting force is predicted from the inclination angles of these two types of needles, and the results are compared with the optimal geometric configuration that produced the largest signal-to-noise ratio (SNR) in the Taguchi method. ANOVA is also used to investigate the main effects of these geometric parameters. For the rotational needles, response surface methodology (RSM) is used to search for the optimal cutting parameters. From the response surface, the minimal cutting force is found when the axial speed and slice-push ratio (SPR) are 2.01 mm/s and 4.66, respectively. As a result, this study provides optimal geometric parameters for two types of non-rotational needles and cutting parameters for rotational needles that minimize the tissue cutting force.
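The RSM step for the rotational needle might look like the following sketch: fit a full quadratic surface of cutting force over axial speed and SPR, then minimize it. The sample data are synthetic, seeded so the fitted minimum lands near the optimum the thesis reports.

```python
# Response-surface sketch: quadratic fit, then bounded minimization.
# All data below are synthetic assumptions, not the thesis measurements.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
v   = rng.uniform(0.5, 4.0, 40)        # axial speed, mm/s
spr = rng.uniform(1.0, 8.0, 40)        # slice-push ratio
force = (0.8*(v - 2.0)**2 + 0.1*(spr - 4.7)**2 + 1.5
         + 0.05*rng.normal(size=40))   # measured cutting force, N

# Full quadratic model: 1, v, spr, v^2, spr^2, v*spr
A = np.column_stack([np.ones_like(v), v, spr, v**2, spr**2, v*spr])
beta, *_ = np.linalg.lstsq(A, force, rcond=None)

def surface(p):
    x, s = p
    return beta @ np.array([1.0, x, s, x**2, s**2, x*s])

opt = minimize(surface, x0=[2.0, 4.0], bounds=[(0.5, 4.0), (1.0, 8.0)])
print(opt.x)   # estimated force-minimizing (axial speed, SPR)
```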
24

Chen, Hong-Ming, and 陳宏銘. "Quality Improvement for Optical Elements by Optimizing Molding Parameters." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/25957523265261207433.

Abstract:
Master's thesis, National Chin-Yi University of Technology, Department of Mechanical Engineering, ROC year 99 (2010).
During the injection molding process, the processing parameters and mold-cavity status are crucial for the quality of the injection products. In this study, pressure/temperature sensors were used to monitor the status of the mold cavity to investigate the effect of that status on the surface form precision and roughness of the injection products. The Taguchi experimental planning method was used to determine the parameters, and Moldex3D was used to carry out simulation analysis to identify the significant factors and the optimized process parameters. The result indicated that the optimized combination is a melt temperature of 230°C, an injection pressure of 80 MPa, a packing pressure of 85 MPa, an injection speed of 100 mm/sec, a screw position of 6.24 mm for packing switch-over, a packing time of 5 sec, a mold temperature of 80°C and a cooling time of 15 sec. The significant factors for surface form precision are packing pressure and mold temperature. The pressure and temperature histories were taken using a multi-sensor plug installed within the mold cavity; the pressure at packing switch-over was then used to control the molding pressure. The optimized quality of the lens was achieved with a packing switch-over pressure of 6 MPa within the cavity. The surface form precision of the lens was improved from the original 8.7434 μm to 7.4881 μm, an improvement of 14.36%. The roughness of the lens was improved from the original 9.2 nm to 8.7 nm, an improvement of 5.43%.
25

CHOU, CHIH-LU, and 周志錄. "Optimizing Parameters Design of Wireless Sniffer with Taguchi Method." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/48712197494132282068.

Abstract:
Master's thesis, Yuanpei University of Medical Technology, Master's Program in Digital Innovation Management, Department of Information Management, ROC year 105 (2016).
A wireless sniffer is a very useful tool for analyzing, verifying and debugging IEEE 802.11 wireless local area networks (WLAN). A wireless sniffer can be used to capture wireless management, control and data frames. Unlike data frames, these invisible (control and management) frames carry very important information for understanding what is happening in a wireless network. Remote capture offers more flexibility for the sniffer user, which matters even more in a wireless network. While a wired network relies on a solid cable to transmit frames, the wireless medium works differently: frames are transmitted at different rates, power levels and bandwidths. However, due to some limitations, the wireless sniffer might not reflect every frame within the channels. Some packets may be lost during the process from sniffer capture and transfer to the display on the user's screen. The loss can be more severe in a wireless sniffer with remote capture. Minimizing the loss requires adequate optimization of the wireless sniffer system. When evaluating wireless sniffer performance, the throughput level is selected based on the sniffer's capture ability, to prevent packet loss due to overflow, instead of initiating a top-performance throughput. Taguchi method analysis is adopted to determine the optimized parameters.
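The Taguchi analysis mentioned here typically rates each parameter setting by a smaller-the-better signal-to-noise ratio, since packet loss should be minimized. The formula is standard; the trial data below are invented.

```python
# Smaller-the-better S/N ratio from Taguchi analysis: larger S/N = less loss.
import numpy as np

def sn_smaller_the_better(y):
    """S/N = -10 * log10(mean(y^2)); y are loss counts from repeated trials."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Packet-loss counts for two candidate parameter settings (3 trials each).
print(sn_smaller_the_better([12, 15, 9]))  # ~ -21.8 dB (worse setting)
print(sn_smaller_the_better([2, 3, 1]))    # ~  -6.7 dB (better setting)
```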
26

"Optimizing the Process Parameters for Electrochemical Reduction of Carbon Dioxide." Master's thesis, 2017. http://hdl.handle.net/2286/R.I.45481.

Abstract:
One of the major problems of this modern industrialized world is its dependence on fossil fuels for its energy needs. Burning fossil fuels generates greenhouse gases, which have adverse effects on the global climate and contribute to global warming. According to the Environmental Protection Agency (EPA), carbon dioxide makes up 80 percent of the greenhouse gases emitted in the USA. Electrochemical reduction of carbon dioxide is an approach which uses CO2 emissions to produce other useful hydrocarbons that can be used in many ways. In this study, the primary focus was on optimizing the operating conditions, determining the better catalyst material, and analyzing the reaction products of the process of electrochemical reduction of carbon dioxide (ERC). Membrane electrode assemblies (MEAs) were developed by air-brushing metal particles with a spray gun onto Nafion-212, a solid polymer electrolyte (SPE); to support the electrodes in the electrochemical reactor, gas diffusion layers (GDL) were developed using porous carbon paper. The anode was always made of the same material, platinum, but the cathode material was varied, as it is the working electrode. The membrane electrode assembly (MEA) is then placed into the electrochemical reactor along with the gas diffusion layer (GDL) to assess the performance of the catalyst material by techniques like linear sweep voltammetry and chronoamperometry. The performance of the MEA was analyzed at four different potentials, two different temperatures and for two different cathode catalyst materials. The reaction products of the process are analyzed using gas chromatography (GC), with a thermal conductivity detector (TCD) used for detecting hydrogen (H2) and carbon monoxide (CO), and a flame ionization detector (FID) used for detecting hydrocarbons. The experiments performed at 40 °C gave better results than the experiments performed at ambient temperature. The results also suggested that the copper oxide cathode catalyst has better durability than platinum-carbon. The maximum faradaic efficiency for methane was 5.3%, obtained at 2.25 V using the copper oxide catalyst. Furthermore, experiments must be carried out to make the electrochemical reactor more robust, so that it can withstand operating conditions such as higher potentials, and to make it a solar-powered reactor.
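The faradaic efficiency figure quoted above is computed as the charge that went into a product divided by the total charge passed; reducing CO2 to CH4 takes 8 electrons per molecule. The example quantities below are illustrative, not the thesis's measurements.

```python
# Faradaic efficiency: product charge over total charge passed.
F = 96485.0          # C/mol, Faraday constant

def faradaic_efficiency(n_product_mol, electrons_per_mol, total_charge_C):
    return n_product_mol * electrons_per_mol * F / total_charge_C

# Example: 1.2e-6 mol CH4 detected after passing 17.5 C of charge.
print(f"{100*faradaic_efficiency(1.2e-6, 8, 17.5):.1f} %")   # ~5.3 %
```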
Master's thesis, Mechanical Engineering, 2017.
27

沈承緯. "Optimizing the system parameters of BS-SPECT for small animals imaging." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/41302247332971434337.

Abstract:
Master's thesis, National Tsing Hua University, Institute of Nuclear Engineering and Science, ROC year 101 (2012).
Single photon emission computed tomography (SPECT) systems employing pinhole collimators are capable of generating images with sub-millimeter spatial resolution. However, the pinhole collimator is expensive, heavy and difficult to fabricate. In our previous study, a beam stopper (BS) device was employed to replace the pinhole collimator for high-resolution SPECT imaging. In the BS-SPECT system, dual scans with and without the BS are conducted for all directions, and the difference between these two sinograms yields the pinhole-equivalent projections. For optimization, a novel ray-tracing model was proposed in this study to evaluate the performance of the BS-SPECT system. By omitting repeated random sampling, ray-tracing simulations are more efficient than Monte Carlo simulations. This model assumes that the BS-SPECT system consists of a flat detector combined with a BS. By calculating the point spread functions (PSFs) of various system geometries and BS designs, the total sensitivities and resolutions can be derived. The results show that (1) a circular BS made of gold has the optimal system performance; (2) a high sensitivity-to-FWHM ratio can be achieved by using a large BS; (3) the BS should be placed as close as possible to the object for optimal sensitivity and resolution; (4) the detector can be placed away from the object to improve the resolution without sensitivity losses; (5) the highest SNR is obtained by using a 1:1 scan time ratio.
28

LI, CIN-WEI, and 李勤緯. "Research on Automatically Optimizing the Parameters of Hadoop Using Taguchi Method." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/z62wbw.

Abstract:
Master's thesis, National Pingtung University, Master's Program, Department of Computer Science, ROC year 105 (2016).
The coming of the cloud era has led to an increase in the rate of data processing; therefore, innovative ways to deal with these huge amounts of data are imperative. Hadoop is used to store and process such large data, and tasks like data exploration, index processing, and file-record handling can apply MapReduce for better efficiency and scalability. When the Hadoop system is started, the relevant parameter settings are imported first, and a task should have the most appropriate combination of parameters so that MapReduce can complete the work in the least processing time. But since the Hadoop system cannot change the parameter settings after startup, one must shut Hadoop down and restart it. Moreover, there are as many as 190 parameters, so trying all parameter combinations would be very time consuming. Related papers are mostly based on simulation experiments, or use brute-force search over a small number of parameter settings to find the best values manually, which does not scale. This thesis implements an automated way to start the Hadoop system, read the program's parameter configuration, and deliver the relevant parameters to the main program as the basis for adjustment. With the automation mechanism mentioned above, this thesis uses the industrial Taguchi method for experimental design. The experimental results show that this method can obtain an approximately optimal combination of parameter settings with a minimum number of executions. Applying the proposed method, one can first take only a part of the data as a test for tuning the parameters, use this mechanism to obtain performance results and find the best combination of parameter settings, and finally use the best settings to process the complete data set with the best performance.
APA, Harvard, Vancouver, ISO, and other styles
29

Jhang, Po-Ruei, and 張珀瑞. "Research and Experiment Verification of Optimizing Flat Plate Collector Process Parameters." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/57095642033422705554.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Graduate Institute of Automation and Control
ROC year 98
Process parameters are critical to flat plate collector performance; the key process parameters for designing and manufacturing a flat plate collector include the collector material, absorber material, number of collectors, collector tube diameter, absorber film type, and understructure insulation material thickness. The quality characteristics are the efficiency coefficient and the heat loss coefficient. This study therefore examined the effect of various levels of the key process parameters on flat plate collector quality. A Taguchi orthogonal array was used to design the experiment. Main effect analysis and analysis of variance were conducted on the quality data obtained from the experiment in order to determine the optimum parameters for each single quality characteristic. The quality data were then preprocessed by a grey relational generating operation, and grey relational theory, coupled with entropy measurement, was employed to determine the optimum process parameter-level combination. Finally, Taguchi verification was carried out to check the experimental and computational confidence intervals against the experimental results. In addition, this study applied a back-propagation neural network with the Levenberg-Marquardt algorithm to build a flat plate collector process parameter prediction system, setting the control factors as network inputs and the quality characteristics as outputs for network training. The prediction error rate was within 5%, showing that the prediction system established in this study has excellent prediction capability.
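A hedged sketch of the prediction-system idea follows: a small feedforward network maps six coded control factors to the two quality characteristics. scikit-learn's 'lbfgs' solver stands in for the Levenberg-Marquardt training used in the thesis, and all factor/response data are synthetic:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(60, 6))      # 6 coded control factors
# Synthetic targets: [efficiency coefficient, heat-loss coefficient]
y = np.column_stack([0.7 + 0.1*X[:, 0] - 0.05*X[:, 5],
                     4.0 - 0.8*X[:, 5] + 0.3*X[:, 2]])
y += rng.normal(0.0, 0.01, y.shape)

scaler = StandardScaler().fit(X)
net = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                   max_iter=2000, random_state=0)
net.fit(scaler.transform(X), y)              # train factor -> quality map

x_new = scaler.transform([[0.5]*6])
print("predicted [efficiency, heat loss]:", net.predict(x_new))
```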
APA, Harvard, Vancouver, ISO, and other styles
30

KUO, En-ming, and 郭恩銘. "Application of Grey Relativity and Neural Network on Optimizing the Injection Parameters." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/35825873423673350437.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Fiber and Polymer Engineering
ROC year 90
During production, it is imperative to find an appropriate set of processing parameters for injection molding in order to obtain better product quality. Conventionally, most parameters are adjusted by trial and error, which usually wastes considerable time and production cost, so a more suitable method is needed. In this thesis, polypropylene is used as the processing material. The injection processing conditions studied include processing temperature, screw pitch, injection pressure, injection speed, screw RPM, holding pressure, holding time, and cooling time. The Taguchi quality design method, in conjunction with grey relational analysis, is used to obtain the optimal processing parameters for multiple quality characteristics; verification tests show that the error between the multi-quality optimization result and the target value is within 1%. Finally, a neural network is applied to build a processing parameter prediction system, achieving a prediction error rate of less than 5%. The prediction results demonstrate excellent performance in helping users determine optimal processing parameters.
APA, Harvard, Vancouver, ISO, and other styles
31

Te-Wei, Kao, and 高德偉. "Applying Intelligent Computational Methods for Optimizing the Parameters of Plating Film Process." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/58388610875426169510.

Full text
Abstract:
Master's thesis
I-Shou University
In-service Master's Program, Department of Electrical Engineering
ROC year 101
The plating film process of plasma-enhanced chemical vapor deposition (PECVD) involves complex, nonlinear chemical and physical reactions and is prone to variation and lack of robustness. These conditions largely determine the success of the whole manufacturing process. Thus, precisely adjusting the control parameters of the process, improving product yield, and reducing the number of parameter adjustments have become essential tasks for an engineer. It is well known that many control parameters must be considered and set in the plating film process, and the defect rate will increase if all control parameters are set based only on a technician's experience. The objective of this study is therefore to develop a systematic way to optimize the control parameters of the plating film process. First, the significant influencing factors of the process are selected based on the knowledge of experienced engineers. The Taguchi method is employed to find the optimal combination of parameters. Then, a back-propagation neural network (BPN), a desirability function, and a genetic algorithm (GA) are utilized to obtain the optimal parameter set. The results show that the proposed method is able to find the best parameters for the plating film process; in other words, it has real potential for industrial application.
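To illustrate the desirability/GA stage in miniature, the sketch below runs a simple real-coded genetic algorithm that maximizes the geometric mean of two desirability functions over a made-up surrogate model; none of the response models, bounds, or GA settings come from the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)

def responses(x):
    # Hypothetical surrogate for two film qualities (e.g. thickness
    # uniformity and deposition rate) as functions of 3 coded parameters.
    uniformity = 1.0 - (x[0] - 0.6)**2 - 0.5*(x[1] - 0.4)**2
    rate = 0.5 + 0.4*x[2] - 0.2*x[0]*x[2]
    return uniformity, rate

def desirability(x):
    u, r = responses(x)
    d1 = np.clip((u - 0.5) / 0.5, 0, 1)      # larger-the-better mapping
    d2 = np.clip((r - 0.3) / 0.6, 0, 1)
    return np.sqrt(d1 * d2)                  # composite desirability

pop = rng.uniform(0, 1, size=(40, 3))        # initial population
for gen in range(100):
    fit = np.array([desirability(x) for x in pop])
    parents = pop[np.argsort(fit)[-20:]]                 # truncation selection
    pairs = rng.integers(0, 20, (40, 2))
    alpha = rng.uniform(0, 1, (40, 1))
    pop = alpha*parents[pairs[:, 0]] + (1-alpha)*parents[pairs[:, 1]]  # blend crossover
    pop = np.clip(pop + rng.normal(0, 0.05, pop.shape), 0, 1)          # mutation

best = max(pop, key=desirability)
print("best coded parameters:", best, "D =", desirability(best))
```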
APA, Harvard, Vancouver, ISO, and other styles
32

Tsai, Chiu-Tung, and 蔡秋桐. "Optimizing the Jiles–Atherton Model Parameters of Current Transformer by Genetic Algorithm." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/06388423769541583649.

Full text
Abstract:
Master's thesis
Feng Chia University
Graduate Institute of Electrical Engineering
ROC year 93
When a conventional current transformer is used in the coupling coil assembly of an electric power system, the iron core inside the coupling coil is usually made of silicon steel in a layered structure to reduce the eddy-current loss caused by the alternating magnetic field. Since the layered iron-core structure changes the original hysteresis characteristics, the hysteresis loops of the simulated current transformer core require an accurate model, which can then be used to analyze the protective relays of the power system or the expected errors of the measuring and monitoring process. The optimal parameters of the simulated hysteresis are therefore important reference indices, and fitting magnetic-material models is well suited to optimization by genetic algorithm. This thesis used the Jiles-Atherton (J-A) hysteresis model to establish a first-order nonlinear differential equation and performed simulations using the genetic algorithm. The number of iterations, population size, chromosome genes, crossover ratio, and mutation ratio were set and substituted into the simulation to obtain the hysteresis loops with optimal parameters. The results were compared with the measured curves to observe how the hysteresis loops change with frequency. This study used Matlab to implement a genetic algorithm with a penalty function to obtain the optimal parameter solution, and compared normal operating conditions with fault conditions to examine how the current transformer hysteresis loop parameters change during power system malfunction. The results are provided as a reference for protective relaying and for estimating errors in measuring and monitoring.
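A minimal sketch of evaluating one candidate J-A parameter set [Ms, a, alpha, k, c] follows: a simplified variant of the hysteresis differential equation is integrated by explicit Euler over a field cycle in normalized units, and a penalty-function fitness compares the simulated loop with a "measured" one (here synthesized). This is illustrative only, not the thesis's Matlab formulation; a GA such as the one described would minimize this fitness:

```python
import numpy as np

def ja_loop(Ms, a, alpha, k, c, H):
    """Explicit-Euler sweep of a simplified Jiles-Atherton dM/dH."""
    M = np.zeros_like(H)
    Man_prev = 0.0
    for i in range(1, len(H)):
        dH = H[i] - H[i-1]
        delta = 1.0 if dH >= 0 else -1.0              # field direction
        He = H[i] + alpha * M[i-1]                    # effective field
        # Anhysteretic magnetization (Langevin function), guarded near 0
        Man = Ms*(1.0/np.tanh(He/a) - a/He) if abs(He) > 1e-9 else 0.0
        dMirr = (Man - M[i-1]) / (k*delta - alpha*(Man - M[i-1]))
        dMdH = (1.0 - c)*dMirr + c*(Man - Man_prev)/dH
        M[i] = M[i-1] + dMdH*dH
        Man_prev = Man
    return M

# One saturating field cycle (duplicate junction points dropped)
H = np.concatenate([np.linspace(0.0, 0.5, 400),
                    np.linspace(0.5, -0.5, 800)[1:],
                    np.linspace(-0.5, 0.5, 800)[1:]])

true_params = (1.0, 0.02, 1e-3, 0.03, 0.2)            # synthetic "measurement"
M_meas = ja_loop(*true_params, H)

def fitness(p):
    Ms, a, alpha, k, c = p
    if min(Ms, a, k) <= 0.0 or not (0.0 <= c <= 1.0):
        return 1e9                                    # penalty-function term
    return float(np.mean((ja_loop(Ms, a, alpha, k, c, H) - M_meas)**2))

print(fitness((0.9, 0.025, 9e-4, 0.04, 0.25)))        # a GA minimizes this
```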
APA, Harvard, Vancouver, ISO, and other styles
33

Liu, Geng-Liang, and 劉耿良. "Improving Feeding Mechanism and Optimizing Laser scan parameters of Rapid Prototyping Process." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/x5dvgc.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Graduate Institute of Manufacturing Technology
ROC year 99
The primary purpose of this thesis is to design a constant-pressure slurry feeding mechanism. To eliminate the disadvantage of the constant-volume feeding mechanism, which caused slurry to heap in front of the scraper, a mechanism that generates constant pressure from a constant weight was developed. A syringe, pressurized by a 1 kg weight, delivered slurry to a slit scraper for casting layers at speeds of 30 mm/s and 20 mm/s, and no heap of slurry was found in front of the scraper. At the front end of the feeding mechanism, a scraping surface was kept to automatically adjust the flow rate when the layer-casting condition changed. Because the ceramic particles contained in the slurry are hard, a rolling diaphragm made of silicone film was used to replace the syringe, avoiding damage caused by friction between the cylinder wall and the piston. The new process is intended to fabricate zirconia workpieces with a strength of 900 MPa. The process uses a laser to scan the contour of the workpiece and burn out the binder without ablation; because the ceramic powder along the scanning line is not bound, the green workpiece is easy to separate from the green block. Optimizing the CO2-laser burn-out parameters was therefore the second purpose of the current study. Zirconia ceramic powder of 0.2 μm was the structural material, polyvinyl alcohol was the organic binder, and glutaraldehyde (25% concentration) was used as a cross-linking agent to strengthen the green body. Each green layer had to be scanned twice. The optimal parameter combination for binder burn-out was a power of 7 W and scanning speed of 40 mm/s for the first scan, and a power of 12 W and scanning speed of 80 mm/s for the second scan. The depth of material removal was 144 μm.
APA, Harvard, Vancouver, ISO, and other styles
34

Yu, Mu-Kai, and 俞木凱. "Parameters Optimizing and Investigation of Thermal Separation in Vortex Tube Using CFD Method." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/m3v82b.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Department of Vehicle Engineering
ROC year 102
The vortex tube is a simple mechanical device for energy separation: supplied with compressed air, it produces two separate streams at different temperatures, a cold outlet and a hot outlet, without any additional medium or power drive, which makes it environmentally friendly. By changing the geometry, the minimum temperature can be dropped to forty degrees below zero. In this research, CFD is used to study how geometry changes affect the vortex tube. Two types of vortex tube exist: the counter-flow vortex tube and the uni-flow (parallel-flow) vortex tube. Since the counter-flow type is more efficient than the parallel-flow type, this study adopts the counter-flow tube, also called the normal-type vortex tube. When compressed air enters the vortex tube, one vortex is created along the tube wall, called the forced vortex, and another is created at the center of the tube, called the free vortex. Because of the high-speed spin in the tube, the compressible flow is influenced by gravity, fluid viscosity, and density gradients, and temperature separation occurs. The stagnation point of the flow can be observed once the system reaches steady state. The results show that the inlet pressure and the cold fraction are the important parameters influencing performance: with a nozzle angle of 5 degrees, the minimum temperature appears at a cold fraction of CF = 0.2, which gives the best performance.
APA, Harvard, Vancouver, ISO, and other styles
35

Fang, Teng-Chin, and 方登進. "The Study of Optimizing manufacturing parameters for Seal process using Six Sigma methodology." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/96171085665084614807.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Industrial and Information Management (in-service program)
ROC year 94
The cell process is one of the TFT-LCD processes that mainly affects product yield in the thin-film-transistor liquid-crystal display industry. In particular, controlling the accuracy of seal width in the cell sealant dispensing process is one of the most important topics for a TFT-LCD process engineer, and doing so is a genuine challenge. In order to improve the accuracy of seal width and the yield of the sealant dispensing step in the cell process, this thesis applies the Six Sigma DMAIC methodology to find the optimal parameters of the sealant dispensing process. It further investigates the Taguchi method combined with response surface methodology and a 2^k factorial design combined with response surface methodology, with the aim of finding the optimal process parameters more efficiently. The experimental results show that the Taguchi method combined with response surface methodology outperforms the 2^k factorial design combined with response surface methodology in finding the optimal process parameters and forecasting yield. The Six Sigma DMAIC methodology with the Taguchi method and response surface methodology therefore provides an effective way to find the optimal manufacturing parameters.
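For illustration, here is a hedged sketch of the response-surface step that follows screening: a second-order model of seal width in two coded factors is fitted by least squares and the setting closest to the target width is located on a grid; the factor names, target value, and data are invented:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(30, 2))          # e.g. coded pressure, nozzle speed
target = 0.60                                  # desired seal width (mm)
y = (0.60 + 0.05*X[:, 0] - 0.04*X[:, 1] + 0.03*X[:, 0]**2
     + rng.normal(0, 0.005, 30))

# Fit a full quadratic response surface
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

# Search the coded design space for the setting nearest the target width
grid = np.array(np.meshgrid(np.linspace(-1, 1, 101),
                            np.linspace(-1, 1, 101))).reshape(2, -1).T
pred = model.predict(poly.transform(grid))
best = grid[np.argmin(np.abs(pred - target))]
print("coded setting closest to target width:", best)
```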
APA, Harvard, Vancouver, ISO, and other styles
36

Chen, Jiann-Jyh, and 陳建志. "The Study of Optimizing Saw Wire Machine Parameters for Solar Process Using Experimental Methodology." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/57686895947767609499.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Hung, M. H., and 洪敏雄. "Optimizing simulated annealing control parameters for an improved automated gamma knife treatment planning procedure." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/41294680600292206374.

Full text
Abstract:
Master's thesis
National Yang-Ming University
Institute of Biomedical Engineering
ROC year 88
Gamma Knife Radiosurgery (GKR) is a treatment method used to destroy intracranial lesions that are inaccessible or not easily removed by traditional surgery. Owing to its advantages of being non-invasive, highly accurate, energy-concentrated, low in infection risk, and requiring only short hospitalization, GKR has become a mainstream radiosurgery treatment and is gaining popularity. Traditional radiosurgery treatment planning is based on the physician's understanding of the patient's lesion; it is a manual procedure that is both complicated and time-consuming, so automating the treatment planning procedure would be a great breakthrough. Some years ago our laboratory proposed an automated Gamma Knife radiosurgery treatment planning procedure using the simulated annealing algorithm and obtained satisfactory results. However, difficulties remain in practical use because of the massive calculation time required by simulated annealing. We believe that by controlling the simulated annealing parameters, such as the initial temperature, stopping criteria, cooling schedule, and ranges of motion, according to the characteristics of the GKR variables, the efficiency of the method can be improved. Our results show that, with proper control of the simulated annealing parameters, we not only sped up the overall processing time but also improved the optimization procedure. This study further validates the feasibility of computer-assisted treatment planning in Gamma Knife radiosurgery.
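A minimal sketch of the control parameters discussed in this thesis, initial temperature, geometric cooling schedule, stopping criterion, and move range, applied to a toy one-dimensional stand-in for the treatment-plan cost (the objective and all settings below are invented):

```python
import math, random

def cost(x):
    # Hypothetical plan cost: a 1-D toy stand-in for dose mismatch
    return (x - 3.7)**2 + 0.1*math.sin(8*x)

def simulated_annealing(t0=10.0, cooling=0.95, t_stop=1e-3, step=1.0):
    x = random.uniform(0, 10)
    t, best = t0, x
    while t > t_stop:                              # stopping criterion
        x_new = x + random.uniform(-step, step)    # range of motion
        d = cost(x_new) - cost(x)
        if d < 0 or random.random() < math.exp(-d / t):
            x = x_new                              # Metropolis acceptance
            if cost(x) < cost(best):
                best = x
        t *= cooling                               # cooling schedule
        step = max(0.05, step * 0.999)             # shrink moves as we cool
    return best

random.seed(0)
print("best x:", simulated_annealing())
```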
APA, Harvard, Vancouver, ISO, and other styles
38

Lee, Hsiao-Yun, and 李筱筠. "Study on Optimizing Parameters of Hot-embossing Process for Medical Negative Pressure Preservation Adaptor." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/44haj7.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Industrial Engineering and Management
ROC year 105
With an aging population and increasing rates of cancer and chronic disease, demand for medical equipment has continuously increased. For medical devices used on external wounds or implanted in the body long-term, manufacturers must maintain a strict management system that applies total quality management to product design, development, production, installation, and service, so that the safety, efficacy, and quality consistency of medical equipment can be ensured. This study investigates a product of a case company. The DMAIC process was applied to improve the hot-embossing process, with the Taguchi method used in the improvement phase for its efficiency and low cost. Optimal parameters for the critical-to-quality characteristics were found, manufacturing capability was increased by implementing the quality control strategy, and a quality control process was established to raise product quality and avoid scrap cost. To find the optimal hot-embossing parameters that minimize the appearance defect rate, this research investigates three three-level control factors (temperature, time, and weight) and one two-level noise factor (work shift), performing L9 orthogonal-array experiments. Confirmation experiments were carried out under the optimal parameters. The results show that the appearance defect rate was reduced from 25.55% to 4.44%, confirming that process capability can be improved and cost reduced by the proposed method.
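As a worked illustration of the analysis behind such an L9 experiment, the sketch below folds each run's defect rates under the two work shifts (noise levels) into a smaller-the-better S/N ratio and picks the level with the highest mean S/N for each factor; the defect-rate numbers are invented, not the thesis's data:

```python
import numpy as np

# L9(3^3) array for the three control factors (levels coded 0..2)
L9 = np.array([[0,0,0],[0,1,1],[0,2,2],
               [1,0,1],[1,1,2],[1,2,0],
               [2,0,2],[2,1,0],[2,2,1]])   # temperature, time, weight

# Defect rates (%) for each run under noise levels N1, N2 (work shifts)
y = np.array([[25.5, 24.1],[18.2, 20.0],[12.4, 11.9],
              [15.0, 16.3],[ 9.8, 10.5],[ 7.1,  6.4],
              [ 8.8,  9.9],[ 5.2,  4.7],[ 4.6,  4.3]])

# Smaller-the-better S/N ratio: -10 log10(mean of squared responses)
sn = -10*np.log10(np.mean(y**2, axis=1))

for f, name in enumerate(["temperature", "time", "weight"]):
    means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(f"{name}: level S/N means {np.round(means, 2)} "
          f"-> pick level {int(np.argmax(means)) + 1}")
```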
APA, Harvard, Vancouver, ISO, and other styles
39

Franscasmara, Raviqul Haidir, and 法蘭馬拉. "Optimizing Quality of Service (QoS) Parameters of Topics for Data Distribution Service (DDS) Systems." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/be3b5z.

Full text
Abstract:
Master's thesis
National Central University
Department of Computer Science and Information Engineering
ROC year 107
Recent trends in net-centric systems motivate the development of information management capabilities that ensure the right information is delivered to the right place efficiently, within a specified time range, to satisfy quality of service (QoS) requirements in many different environments. Data Distribution Service (DDS) middleware offers a solution through QoS policies, a set of characteristics that drive the behavior of the service so that it can fulfill those requirements. QoS policies have a wide range of attributes that can be applied to the entity objects interacting within the system, such as publishers, subscribers, and topics; for example, a total of 11 QoS policies are applicable to the Topic entity. However, it is difficult to find an optimal combination of these policies and their appropriate values. In this thesis we propose a way to optimize the performance (loss rate and latency) of a particular system design by using only the QoS policies applicable to the Topic entity. We use correlation analysis, together with the QoS policy specification, on the data set collected from our experiments to find the impact of each QoS policy on performance. The final results show that, for the system design studied, the combination of the Reliability and Durability QoS policies optimizes loss rate, while the Deadline QoS policy improves latency.
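A hedged sketch of the correlation-analysis step: coded Topic-level QoS settings from repeated runs are correlated against measured loss rate and latency with pandas. The column names, policy codings, and data below are hypothetical, not the thesis's dataset:

```python
import pandas as pd

df = pd.DataFrame({
    "reliability": [0, 1, 1, 0, 1, 0, 1, 1],   # 0=BEST_EFFORT, 1=RELIABLE
    "durability":  [0, 0, 1, 1, 1, 0, 1, 0],   # 0=VOLATILE, 1=TRANSIENT_LOCAL
    "deadline_ms": [50, 20, 20, 50, 10, 40, 10, 30],
    "loss_rate":   [0.08, 0.02, 0.01, 0.07, 0.01, 0.09, 0.00, 0.02],
    "latency_ms":  [4.1, 6.0, 6.3, 4.5, 7.9, 4.0, 8.2, 5.5],
})

# Pearson correlation of each policy setting with each performance metric
print(df.corr().loc[["reliability", "durability", "deadline_ms"],
                    ["loss_rate", "latency_ms"]])
```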
APA, Harvard, Vancouver, ISO, and other styles
40

Su, Jun-Hao, and 蘇俊豪. "Optimizing parameters of Photomask Rapid Prototyping by using Taguchi method and Grey relational analysis." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/37800980057219122254.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Graduate Institute of Automation and Control
ROC year 97
In the past, rapid prototyping research focused on improving single quality characteristics, yet several different quality characteristics must always be evaluated in the process; the optimal parameter group is usually determined by engineering judgment, which is not an objective method grounded in the experimental results. Therefore, many researchers have turned to multiple-quality-characteristic analysis in recent years. This study selects three quality characteristics to evaluate: dimensional accuracy, surface roughness, and making time. The Taguchi method is used to design the orthogonal arrays, and from the experimental results the S/N ratios and ANOVA are computed. A back-propagation network (BPN) is trained on the Taguchi experimental results to build a BPN prediction model. The grey relational analysis has four steps: (1) grey generating; (2) grey relational coefficient; (3) grey relational grade; (4) grey relational ordinal. The weighting in the grey relational grade uses entropy weighting. Through grey relational analysis, the optimal experimental parameters are found to be: light source (blue); single-layer illumination time (10 sec); reactive oligomer (100 phr); deep-dip time (18 sec); layer thickness (0.1 mm); photo-initiator (2 phr); luminous flux (6000 lux).
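The four GRA steps listed in this abstract can be illustrated numerically as follows; the three responses per run are invented, and the entropy weighting shown is one common variant computed from the normalized data, not necessarily the thesis's exact formulation:

```python
import numpy as np

#              dim. error (mm), roughness (um), making time (min)
Y = np.array([[0.12, 3.2, 40.0],
              [0.08, 2.9, 55.0],
              [0.15, 2.4, 35.0],
              [0.06, 3.5, 60.0]])

# (1) grey generating: normalize smaller-the-better responses to [0, 1]
N = (Y.max(axis=0) - Y) / (Y.max(axis=0) - Y.min(axis=0))

# (2) grey relational coefficient against the ideal sequence (all ones)
zeta = 0.5
d = np.abs(1.0 - N)
xi = (d.min() + zeta*d.max()) / (d + zeta*d.max())

# (3) grey relational grade, here with entropy-derived weights
p = N / N.sum(axis=0)
e = -np.nansum(p*np.log(p + 1e-12), axis=0) / np.log(len(Y))
w = (1 - e) / (1 - e).sum()
grade = xi @ w

# (4) grey relational ordinal: rank runs by grade (best first)
print("run ranking:", np.argsort(grade)[::-1] + 1)
```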
APA, Harvard, Vancouver, ISO, and other styles
41

Lin, Rong-De, and 林榮德. "Using of Taguchi Method for Optimizing Manufacturing Parameters of Angle Aluminum Parts of Aircrafts." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/45195762337812421097.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Mechanical Engineering
ROC year 95
The purpose of this thesis is to investigate methods for improving the automated machining process of aircraft angled aluminum parts, specifically parts with material stock lengths between 0.5" and 120", thicknesses of 0.04" to 0.25", and a variety of types and shapes with JOGs (inclined surfaces) or curved fitting profiles. The objective is to improve the machining plan after flattening, the design of clamps and spindle chucks, and the control of deformation after machining, so as to establish the relationship between modifications of the machining process and improvements in surface roughness. Beginning with practical considerations and then applying the Taguchi method, an optimal combination of parameter values is sought. The factors considered include the machining process plan, cutting force and clamping type, clamp and spindle chuck design, cutting tool selection, cutting parameters, and cutting path arrangement. First, fishbone diagrams and empirical machining data are used to screen the machining conditions, which include spindle speed, feed rate, cutting depth, tool radius, tool characteristics, flute number, stock thickness, material property, cutting direction, clamping length, and spinning position. Then the L12(2^11) orthogonal array is used to arrange 12 experiments, which are conducted to produce the angled parts. The surface roughness of each part is measured to establish an optimal combination of parameter values for the machining process. This research provides a locally optimized machining process plan for angled aluminum parts. The machining parameters obtained for optimal surface roughness are: high spindle speed; high feed rate with low cutting depth, or low feed rate with high cutting depth; a tungsten carbide tool with diameter 3/8" and length 3.0"; a clamping length of 6" as a division point; fixed down milling; and appropriate retraction and engagement procedures. It is hoped that the methods and parameters developed in this research can help improve the surface roughness obtained from automated machining of aircraft angled aluminum parts.
APA, Harvard, Vancouver, ISO, and other styles
42

Chen, Zi-An, and 陳子安. "The Study of Optimizing Parameters in Thin wafer Sawing - A Case Study of Schottky Diodes." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/72941723917119527479.

Full text
Abstract:
Master's thesis
Minghsin University of Science and Technology
Graduate Institute of Business Administration
ROC year 98
Electronic products have profoundly enriched people's lives, and in a high-tech era the ever-increasing demands of consumers propel ongoing development in the technology industry. The research and design of integrated circuits (ICs) have advanced human life further than ever, and ICs now exist in most electronic products. With the trend toward lightness, slimness, and high density, and to maintain high yield and production efficiency, wafer slicing technology, which varies with the materials and thicknesses of different ICs, has become one of the key factors ensuring high quality in the packaging process. However, slicing parameter settings determined by the traditional trial-and-error method and engineers' experience are not guaranteed to be truly optimal. Therefore, this study proposes a systematic procedure for optimizing wafer slicing quality using experimental design, genetic programming, and genetic algorithms/artificial immune systems. The feasibility and effectiveness of the proposed procedure are verified by a case study that improves the slicing process of thin Schottky diode wafers, and the procedure is also compared with the traditional Taguchi method. The results indicate that the cutting quality of thin wafers closely approaches the ideal value and fully meets the specifications. The proposed integrated procedure can thus be considered an effective tool for solving a parameter design problem with a single quality characteristic.
APA, Harvard, Vancouver, ISO, and other styles
43

Cheng, Hsin-Chen, and 鄭欣承. "Optimizing Process Parameters for Clean Process Before Gate-OX – A Case Study on Company T." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/25921215607670773960.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Program of Industrial Engineering and Management, College of Management
ROC year 100
As technology changes rapidly, consumer electronic products have become light, thin, short, and small, and the chips inside them are increasingly miniaturized, which is why the chip sizes manufactured by the semiconductor foundry industry keep shrinking. Existing 8-inch wafer fabs now face the challenge of using older equipment to produce chips with smaller critical dimensions. This case study examines an 8-inch wafer fab that found several defect cases at the poly-etching stage when introducing a new gate-oxide (Gate-OX) process; the defects reduced yield by 6% at the packaging and testing plant. After analyzing and excluding a number of possible causes of the abnormality, the fab suspected that the problem stemmed from the wet-cleaning equipment. To help company T clarify the cause, this study uses design of experiments (DOE) to analyze the main causes of the defects produced by the wafer-cleaning machine, to propose an improvement program, and to find the optimal process parameters.
APA, Harvard, Vancouver, ISO, and other styles
44

Kuo, Chen-Nan, and 郭振南. "Multiple Quality Characteristics on Optimizing Design Parameters of Permanent Magnet Transverse Flux Linear Synchronous Motor." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/7vrb63.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Department of Electrical Engineering
ROC year 102
Structural parameters of a linear motor, such as the pole pitch and tooth width of the translator and the width and thickness of the magnets, influence the thrust and cogging force. To promote the performance of the linear motor, all parameters of the proposed structure are adjusted in this thesis, and theoretical analysis and simulation are used to characterize how these parameters influence the thrust and cogging force. In order to select the parameters that affect motor operation and obtain the multiple quality characteristics of larger thrust and lower cogging force, the Taguchi method and grey system theory are applied to a permanent-magnet-excited transverse flux linear synchronous motor. To further improve performance, analysis of variance is applied to compare the contribution of each parameter, and the parameters with higher contributions are adjusted in subsequent experiments. Using AutoCAD to construct a 2D model of the motor and COMSOL Multiphysics for simulation, the thrust and cogging force are computed to carry out the optimization.
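To illustrate the variance-contribution step, the sketch below expresses each structural parameter's sum of squares across its levels as a share of the total for an invented L9 thrust experiment; the factor names and thrust values are illustrative only:

```python
import numpy as np

# L9(3^3) array for three structural factors (levels coded 0..2)
L9 = np.array([[0,0,0],[0,1,1],[0,2,2],
               [1,0,1],[1,1,2],[1,2,0],
               [2,0,2],[2,1,0],[2,2,1]])
thrust = np.array([81., 85., 88., 90., 95., 84., 92., 86., 97.])  # N

total_ss = np.sum((thrust - thrust.mean())**2)
for f, name in enumerate(["pole pitch", "tooth width", "magnet thickness"]):
    level_means = np.array([thrust[L9[:, f] == l].mean() for l in range(3)])
    ss = 3 * np.sum((level_means - thrust.mean())**2)   # 3 runs per level
    print(f"{name}: contribution {100*ss/total_ss:.1f}% of total variation")
```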
APA, Harvard, Vancouver, ISO, and other styles
45

Wei, Chiu Hao, and 邱浩煒. "Optimizing Process Parameters for Lamination Process of Solar Module-A Case Study of W Company." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/36753431538282023786.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Program of Industrial Engineering and Management, College of Management
ROC year 101
As energy shortages, rising oil prices, environmental pollution, and global warming worsen, developing alternative energy is a necessity for every nation, and the solar industry is among the most promising choices. The topic of this research is improving solar module yield. The lamination process is both one of the most important steps and the bottleneck of a typical solar module plant, having the longest operation time in the entire solar module production flow. The lamination process also completes the EVA (ethylene-vinyl acetate copolymer) cross-linking, and potential failures such as bubbles or incomplete EVA curing inside the EVA layer can be detected at the following inspection station. These failures cannot be repaired, and the affected modules must be scrapped. Furthermore, solar modules generally carry market warranties of up to 20 years, and past customer complaint cases often involve reliability failures that stem from latent problems in the lamination process. Therefore, this research identifies the defects related to the lamination process of a solar factory and uses DOE to find the optimal process parameters, improving production yield and increasing the company's competitiveness.
APA, Harvard, Vancouver, ISO, and other styles
46

Guo, Jin-Ting, and 郭晉廷. "Applying solution-processable electron transport layer on all-inorganic perovskite light-emitting diode and process parameters optimizing." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/hs4rn5.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Imaging and Biomedical Photonics
ROC year 107
All-inorganic perovskites such as CsPbX3 (X = Cl, Br, I) are less susceptible to moisture and oxygen than organic-inorganic hybrid perovskites. With such chemical stability and excellent optical properties, e.g. narrow spectral width and adjustable emission wavelength, they have attracted much attention for optoelectronic applications. In this thesis, we focus on the CsPbBr3 light-emitting diode and try to simplify the fabrication into a nearly all-coating process. More specifically, ZnO nanoparticles dispersed in propylene glycol methyl ether acetate (PGMEA, obtained from TWNC) were spin-coated on top of the CsPbBr3 emission layer, serving as an electron transport layer. The experimental results show that this method does not damage the CsPbBr3 film surface, thereby improving device lifetime and reducing the current. Finally, by adjusting the CsPbBr3 concentration and dispersion solvent and the ZnO nanoparticle spin-coating speed to optimize the fabrication parameters, devices with relatively higher external quantum efficiency were obtained.
APA, Harvard, Vancouver, ISO, and other styles
47

Marrey, Mallikharjun. "A Framework for Optimizing Process Parameters in Powder Bed Fusion (PBF) Process using Artificial Neural Network (ANN)." Thesis, 2019. http://hdl.handle.net/1805/19990.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Powder bed fusion (PBF) is a metal additive manufacturing process that can build parts of any complexity from a wide range of metallic materials. Research on the PBF process predominantly focuses on the impact of a few parameters on the ultimate properties of the printed part. The lack of a systematic approach to optimizing the process parameters for better performance of a given material results in a sub-optimal process that limits the potential of the application. The process needs a comprehensive study of all the influential parameters and their impact on the mechanical and microstructural properties of a fabricated part. Furthermore, there is a need to develop a quantitative system for mapping the material properties and process parameters to the ultimate quality of the fabricated part, to improve both the manufacturing cycle and the quality of the final part produced by the PBF process. To address these challenges, this research proposes a framework to optimize the process for 316L stainless steel. The framework characterizes the influence of process parameters on the microstructure and mechanical properties of the fabricated part using a series of experiments. These experiments study the significance of the process parameters and their variance, as well as the microstructure and mechanical properties of fabricated parts, through tensile, impact, hardness, surface roughness, and densification tests, ultimately obtaining the optimum parameter ranges. This yields a more complete understanding of the correlation between process parameters and part quality. Furthermore, the data acquired from the experiments are employed to develop an intelligent parameter-suggestion multi-layer feedforward (FF) backpropagation (BP) artificial neural network (ANN). This network estimates the fabrication time and suggests parameter settings according to the user's or manufacturer's desired characteristics of the end product. Further research is in progress to evaluate the framework for assemblies and complex part designs and to incorporate the results into the network to achieve process repeatability and consistency.
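The parameter-suggestion idea can be sketched as follows: a feedforward network learns a forward map from three hypothetical PBF parameters to density and build time on synthetic data, and the suggestion step then searches a parameter grid for the setting whose predicted outputs best match a desired target. Nothing here reproduces the thesis's actual network architecture or experimental data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Hypothetical parameters: laser power (W), scan speed (mm/s), hatch (mm)
X = rng.uniform([150, 500, 0.08], [300, 1200, 0.14], size=(200, 3))
density = 99.0 + 0.004*(X[:, 0] - 150) - ((X[:, 1] - 800)/400)**2
t_build = 60.0 * (0.10/X[:, 2]) * (800.0/X[:, 1])
Y = np.column_stack([density, t_build]) + rng.normal(0, 0.05, (200, 2))

sc = StandardScaler().fit(X)
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(sc.transform(X), Y)

# Suggestion step: grid search for the setting whose predicted outputs
# best match the user's desired density and build time.
grid = np.stack(np.meshgrid(np.linspace(150, 300, 16),
                            np.linspace(500, 1200, 16),
                            np.linspace(0.08, 0.14, 8)), -1).reshape(-1, 3)
desired = np.array([99.5, 55.0])                 # % density, minutes
err = np.linalg.norm(net.predict(sc.transform(grid)) - desired, axis=1)
print("suggested [power, speed, hatch]:", grid[np.argmin(err)])
```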
APA, Harvard, Vancouver, ISO, and other styles
48

(7037645), Mallikharjun Marrey. "A FRAMEWORK FOR OPTIMIZING PROCESS PARAMETERS IN POWDER BED FUSION (PBF) PROCESS USING ARTIFICIAL NEURAL NETWORK (ANN)." Thesis, 2019.

Find full text
Abstract:

Powder bed fusion (PBF) is a metal additive manufacturing process that can build parts of any complexity from a wide range of metallic materials. Research on the PBF process predominantly focuses on the impact of a few parameters on the ultimate properties of the printed part. The lack of a systematic approach to optimizing the process parameters for better performance of a given material results in a sub-optimal process that limits the potential of the application. The process needs a comprehensive study of all the influential parameters and their impact on the mechanical and microstructural properties of a fabricated part. Furthermore, there is a need to develop a quantitative system for mapping the material properties and process parameters to the ultimate quality of the fabricated part, to improve both the manufacturing cycle and the quality of the final part produced by the PBF process. To address these challenges, this research proposes a framework to optimize the process for 316L stainless steel. The framework characterizes the influence of process parameters on the microstructure and mechanical properties of the fabricated part using a series of experiments. These experiments study the significance of the process parameters and their variance, as well as the microstructure and mechanical properties of fabricated parts, through tensile, impact, hardness, surface roughness, and densification tests, ultimately obtaining the optimum parameter ranges. This yields a more complete understanding of the correlation between process parameters and part quality. Furthermore, the data acquired from the experiments are employed to develop an intelligent parameter-suggestion multi-layer feedforward (FF) backpropagation (BP) artificial neural network (ANN). This network estimates the fabrication time and suggests parameter settings according to the user's or manufacturer's desired characteristics of the end product. Further research is in progress to evaluate the framework for assemblies and complex part designs and to incorporate the results into the network to achieve process repeatability and consistency.


APA, Harvard, Vancouver, ISO, and other styles
49

Wu, Shih-Hsien, and 吳仕賢. "Investigation on optimizing fabrication parameters of ZnO:Ga transparent conductive films by sol-gel method and infrared heating." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/50091727119089138633.

Full text
Abstract:
Master's thesis
Southern Taiwan University of Science and Technology
Department of Mechanical Engineering
ROC year 98
In this study, ZnO:Ga (and ZnO:Al) thin films were deposited by the sol-gel method. Using RTA for the pre-annealing treatment and a TF furnace for the post-annealing treatment, we sought the optimum parameters. To obtain better zinc oxide films and improve their electrical properties, we varied the heating rate and temperature of the RTA furnace, as well as the post-annealing temperature and time in the TF furnace. In our experiments, zinc acetate dihydrate was added into methanol and ethanol; gallium (or aluminum) was then added in the form of gallium trichloride (or aluminum nitrate), with MEA added as a stabilizer. The GZO (AZO) films were deposited by spin coating on Eagle 2000 substrates, pre-heated in an RTA furnace, and finally post-heated in vacuum. The thicknesses of the films were measured by FE-SEM, UV-Vis, and multi-angle SE and compared. XRD patterns demonstrated that the preferential orientation of the GZO films is the (002) direction. The average transmittance of the samples was over 80% in the visible range. The electrical properties of the films were measured by Hall measurement and examined by a four-point probe station. XPS depth profiles were mainly used to analyze the chemical states of oxygen in the films, and the crystal quality of the films was investigated by PL. Experimental results show that the lowest resistivity, 2.06×10-3 (2.99×10-3) Ωcm, was obtained for the GZO (AZO) films pre-heated at 500°C with a heating rate of 5°C/sec and post-heated at 600°C for 15 min.
APA, Harvard, Vancouver, ISO, and other styles
50

Wu, Pin-Xian, and 吳品賢. "Research on Optimizing Surface Roughness Process Parameters of SU-8 Inclined Micro-Mirror and its Physical Characteristics." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/40136278815461913274.

Full text
Abstract:
Master's thesis
Ming Chi University of Technology
Graduate Institute of Mechatronic Engineering
ROC year 98
The purpose of this thesis is to apply a special inclined-exposure mechanism and optimized process parameters to fabricate micro-mirrors with optical-level surface roughness in batches. The UV-curable material SU-8 is used for the micro-mirror structure. The process parameters to be optimized include soft-bake temperature, soft-bake time, exposure dosage, post-exposure-bake (PEB) temperature, and PEB time. The research methods include testing the weight-loss percentage of the solvent in SU-8 and using an interferometer to measure the surface roughness in order to optimize the soft-bake and PEB temperature and time. We also observe the fringe pattern on the SU-8 mirror after development as a reference for correcting the exposure dose. In addition, this thesis discusses the influence of reflow effects on the surface roughness of the inclined microstructure. After fine-tuning the soft-bake temperature and time, unit exposure dose, and PEB temperature and time, the surface roughness of the paired micro-mirrors is lower than 40 nm within an area of 300 μm × 300 μm and reaches 207.40 nm within 605 μm × 453 μm. Compared with other studies, this thesis achieves a larger measured area and lower surface roughness. The developed micro-optical structure could therefore be applied in micro optical pickup heads thanks to its low cost, paired 45° mirrors, and wafer-level batch process, and the technology also resolves the problem of manual assembly and alignment. If the technology for fabricating paired 45° micro-mirrors can be developed successfully, it could also avoid the excessive lapping and polishing steps of traditional mechanical mirror processing, reduce the manufacturing cost of inclined micro-optical mirrors, and improve the assembly steps of the optical pickup device in Blu-ray DVD systems.
APA, Harvard, Vancouver, ISO, and other styles