Theses on the topic "Selective parameter"

To see other types of publications on this topic, follow this link: Selective parameter.



Consult the top 50 theses for your research on the topic "Selective parameter".

Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online when this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Sandgren, Niclas. « Parametric methods for frequency-selective MR spectroscopy / ». Uppsala : Dept. of Information Technology, Uppsala Univ., 2004. http://www.it.uu.se/research/reports/lic/2004-001/.

Full text
2

Pel, Joel. « A novel electrophoretic mechanism and separation parameter for selective nucleic acid concentration based on synchronous coefficient of drag alteration (SCODA) ». Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/13402.

Full text
Abstract:
Molecular manipulation and separation techniques form the building blocks for much of fundamental science, yet many separation challenges still remain, in fields as diverse as forensics and metagenomics. This thesis presents SCODA (Synchronous Coefficient of Drag Alteration), a novel and general molecular separation and concentration technique aimed at addressing such challenges. SCODA takes advantage of physical molecular properties associated with the non‐linear response of long, charged polymers to electrophoretic fields, which define a novel parameter for DNA separation. The SCODA method is based on superposition of synchronous, time-varying electrophoretic fields, which can generate net drift of charged molecules even when the time-averaged molecule displacement generated by each field individually is zero. Such drift can only occur for molecules, such as DNA, whose motive response to electrophoretic fields is non-linear. This thesis presents the development of SCODA for extraction of DNA, and outlines the design of the instrumentation required to achieve the SCODA effect. We then demonstrate the selectivity, efficiency, and sensitivity of the technique. Contaminant rejection is also quantified for humic acids and proteins, with SCODA displaying excellent performance compared to existing technologies. Additionally, the ability of this technology to extract high molecular weight DNA is demonstrated, as is its inherent fragment length selection capability. Finally, we demonstrate two applications of this method to metagenomics projects where existing technologies performed poorly or failed altogether. The first is the extraction of high molecular weight DNA from soil, which is limited in length to fragments smaller than 50 kb with current direct extraction methods. SCODA was able to recover DNA an order of magnitude larger than this. 
The second application is DNA extraction from highly contaminated samples originating in the Athabasca tar sands, where existing technology had failed to recover any usable DNA. SCODA was able to recover sufficient DNA to enable the discovery of 200 putatively novel organisms.
3

Suchý, Jan. « Zpracování vysokopevnostní hliníkové slitiny AlSi9Cu3 technologií selective laser melting ». Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-319259.

Full text
Abstract:
Selective laser melting is an additive (3D printing) method for producing metal parts. This diploma thesis examines the influence of process parameters on the processability of the high-strength aluminum alloy AlSi9Cu3 by selective laser melting. The theoretical part discusses the relations between process parameters and identifies the phenomena that occur when metals are processed by this technology; it also covers conventionally manufactured AlSi9Cu3 aluminum alloy. The experimental part comprises single-track tests, porosity tests with different process parameters, and mechanical testing. Trends in porosity are presented as functions of scanning speed, laser power, laser stop distance, volumetric energy, and powder quality. The processability of the material is judged by the relative density achieved, and the mechanical properties obtained with the selected process parameters are also reported. The data obtained are analyzed and compared with the literature.
4

Tan, Matthias H. Y. « Contributions to quality improvement methodologies and computer experiments ». Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/48936.

Full text
Abstract:
This dissertation presents novel methodologies for five problem areas in modern quality improvement and computer experiments, i.e., selective assembly, robust design with computer experiments, multivariate quality control, model selection for split plot experiments, and construction of minimax designs. Selective assembly has traditionally been used to achieve tight specifications on the clearance of two mating parts. Chapter 1 proposes generalizations of the selective assembly method to assemblies with any number of components and any assembly response function, called generalized selective assembly (GSA). Two variants of GSA are considered: direct selective assembly (DSA) and fixed bin selective assembly (FBSA). In DSA and FBSA, the problem of matching a batch of N components of each type to give N assemblies that minimize quality cost is formulated as axial multi-index assignment and transportation problems respectively. Realistic examples are given to show that GSA can significantly improve the quality of assemblies. Chapter 2 proposes methods for robust design optimization with time consuming computer simulations. Gaussian process models are widely employed for modeling responses as a function of control and noise factors in computer experiments. In these experiments, robust design optimization is often based on average quadratic loss computed as if the posterior mean were the true response function, which can give misleading results. We propose optimization criteria derived by taking expectation of the average quadratic loss with respect to the posterior predictive process, and methods based on the Lugannani-Rice saddlepoint approximation for constructing accurate credible intervals for the average loss. These quantities allow response surface uncertainty to be taken into account in the optimization process. Chapter 3 proposes a Bayesian method for identifying mean shifts in multivariate normally distributed quality characteristics. 
Multivariate quality characteristics are often monitored using a few summary statistics. However, to determine the causes of an out-of-control signal, information about which means shifted and the directions of the shifts is often needed. We propose a Bayesian approach that gives this information. For each mean, an indicator variable that indicates whether the mean shifted upwards, shifted downwards, or remained unchanged is introduced. Default prior distributions are proposed. Mean shift identification is based on the modes of the posterior distributions of the indicators, which are determined via Gibbs sampling. Chapter 4 proposes a Bayesian method for model selection in fractionated split plot experiments. We employ a Bayesian hierarchical model that takes into account the split plot error structure. Expressions for computing the posterior model probability and other important posterior quantities that require evaluation of at most two uni-dimensional integrals are derived. A novel algorithm called combined global and local search is proposed to find models with high posterior probabilities and to estimate posterior model probabilities. The proposed method is illustrated with the analysis of three real robust design experiments. Simulation studies demonstrate that the method has good performance. The problem of choosing a design that is representative of a finite candidate set is an important problem in computer experiments. The minimax criterion measures the degree of representativeness because it is the maximum distance of a candidate point to the design. Chapter 5 proposes algorithms for finding minimax designs for finite design regions. We establish the relationship between minimax designs and the classical set covering location problem in operations research, which is a binary linear program. 
We prove that the set of minimax distances is the set of discontinuities of the function that maps the covering radius to the optimal objective function value, and optimal solutions at the discontinuities are minimax designs. These results are employed to design efficient procedures for finding globally optimal minimax and near-minimax designs.
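The minimax criterion described in this abstract can be made concrete with a small numerical sketch (not from the dissertation; the grid, design size, and helper names are illustrative). The criterion scores a design by the maximum distance from any candidate point to its nearest design point, and for small candidate sets a brute-force search over all k-subsets finds the globally optimal minimax design directly:

```python
import itertools

import numpy as np

def minimax_distance(design, candidates):
    """Maximum distance from any candidate point to its nearest design point."""
    d = np.linalg.norm(candidates[:, None, :] - design[None, :, :], axis=2)
    return d.min(axis=1).max()

def brute_force_minimax_design(candidates, k):
    """Exhaustively search all k-subsets of the candidate set for the design
    minimizing the minimax (covering) distance. Feasible only for small sets."""
    best_idx, best_val = None, np.inf
    for idx in itertools.combinations(range(len(candidates)), k):
        val = minimax_distance(candidates[list(idx)], candidates)
        if val < best_val:
            best_idx, best_val = idx, val
    return np.array(best_idx), best_val

# Small demo: a 5x5 grid of candidate points, designs of size 3.
xs = np.linspace(0.0, 1.0, 5)
candidates = np.array(list(itertools.product(xs, xs)))
design_idx, cover_radius = brute_force_minimax_design(candidates, k=3)
```

The dissertation's contribution is precisely to avoid this exponential enumeration by reformulating the search as a set covering binary linear program; the sketch only illustrates what is being optimized.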
5

Residori, Sara. « FABRICATION AND CHARACTERIZATION OF 3D PRINTED METALLIC OR NON-METALLIC GRAPHENE COMPOSITES ». Doctoral thesis, Università degli studi di Trento, 2022. https://hdl.handle.net/11572/355324.

Full text
Abstract:
Nature develops several materials with remarkable functional properties from comparatively simple base substances. Biological materials are often composites whose conformation is optimized for their function. Synthetic materials, on the other hand, are designed a priori, structured according to the performance to be achieved. 3D printing is the most direct manufacturing method for specific components and endows the sample with material and geometry designed ad hoc for a defined purpose, starting from a biomimetic approach to functional structures. The technique has the advantage of being quick and accurate, with a limited waste of materials. The sample is printed by depositing material layer by layer. Furthermore, the material is often a composite, which combines the characteristics of components with different geometries and properties to achieve better mechanical and physical performance. This thesis analyses the mechanics of natural and custom-made composites: the spider body, and the manufacturing of metallic and non-metallic graphene composites. The spider body is investigated in different sections of the exoskeleton, and specifically the fangs. The study involves the mechanical characterization of the single components by nanoindentation, with a special focus on hardness and Young's modulus. The experimental results were mapped in order to present an accurate comparison of the mechanical properties of the spider body. The different stiffness of the components is due to the tuning of the same basic material (the cuticle, mainly composed of chitin) to achieve different mechanical functions, which has improved the animal's adaptation to specific evolutionary requirements. The synthetic composites, suitable for 3D printing fabrication, are metallic and non-metallic matrices combined with carbon-based fillers. Non-metallic graphene composites are multiscale compounds.
Specifically, the material is a blend of an acrylonitrile-butadiene-styrene (ABS) matrix and different percentages of micro-carbon fibers (MCF). In a second step, nanoscale fillers of carbon nanotubes (CNT) or graphene nanoplatelets (GNP) are added to the base mixture. The production process of the composite materials followed a specific protocol for the optimal procedure and machine parameters, as also reported in the literature. This method allowed control over the percentages of the different materials and ensured a homogeneous distribution of fillers in the plastic matrix. The multiscale compounds provide the base materials for the extrusion of fused filaments, suitable for 3D printing of the samples. The composites were tested as compression-moulded sheets, serving as reference tests, and also as the corresponding 3D printed specimens. The addition of the micro-filler to the ABS matrix caused a notable increase in stiffness and a slight increase in strength, with a significant reduction in deformation at break. Concurrently, the addition of nanofillers was very effective in improving electrical conductivity compared with pure ABS and the micro-composites, even at the lowest filler content. Composites with GNP as a nano-filler improved the stiffness of the materials, while the electrical conductivity of the composites is favoured by the presence of CNTs. Moreover, the extrusion of the filament and printing by fused filament fabrication led to the creation of voids within the structure, causing a significant loss of mechanical properties and a slight improvement in electrical conductivity relative to the multiscale moulded composites. The final aim of this work is the identification of 3D-printed multiscale composites achieving the best match of mechanical and electrical properties among the different compounds proposed.
Since structures with a metallic matrix and high mechanical performance are suitable for aerospace and automotive applications, metallic graphene composites are studied in the additive manufacturing sector. A comprehensive study of the mechanical and electrical properties of an innovative copper-graphene oxide composite (Cu-GO) was developed in collaboration with Fondazione E. Amaldi, in Rome. An extensive survey of the working conditions was carried out, leading to an optimal protocol of printing parameters for obtaining samples with the highest density. The composite powders were prepared following two different routes to disperse the nanofiller into the Cu matrix and were afterwards processed by the selective laser melting (SLM) technique. Analyses of the morphology, macroscopic and microscopic structure, and degree of oxidation of the printed samples were performed. Samples prepared following the mechanical mixing procedure showed a better response to the 3D printing process in all tests. The mechanical characterization, however, showed a clear increase in the resistance of the material prepared with the ultrasonic bath method, despite the greater porosity of those specimens. The comparison between samples from the different routes highlights the influence of powder preparation and working conditions on the printing results. We hope that this research will be useful for investigating in detail the potential applications of such composites in different technological fields and will stimulate further comparative analysis.
6

Lascelles, Kristy Rebecca Rowe. « Evaluative conditioning : experimental parameters and selective associations ». Thesis, University of Sussex, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.666768.

Full text
7

Amezziane, Mohamed. « SMOOTHING PARAMETER SELECTION IN NONPARAMETRIC FUNCTIONAL ESTIMATION ». Doctoral diss., University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3488.

Full text
Abstract:
This study develops new techniques for obtaining completely data-driven choices of the smoothing parameter in functional estimation, under minimal assumptions. The focus is on the estimation of the distribution function, the density function, and their multivariate extensions, along with some of their functionals such as the location and the integrated squared derivatives.
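As a hedged illustration of what a data-driven smoothing parameter choice looks like in density estimation (a standard least-squares cross-validation recipe for a Gaussian kernel density estimate; this is not the specific technique developed in the dissertation, and all names and grid choices below are illustrative):

```python
import numpy as np

def lscv_score(h, x):
    """Least-squares cross-validation criterion for a Gaussian KDE:
    estimates (up to a constant) the integrated squared error at bandwidth h."""
    n = len(x)
    d = x[:, None] - x[None, :]                      # pairwise differences
    phi = lambda t, s: np.exp(-t**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
    term1 = phi(d, h * np.sqrt(2)).sum() / n**2      # integral of fhat squared
    off = phi(d, h)
    np.fill_diagonal(off, 0.0)                       # leave-one-out: drop i == j
    term2 = 2.0 * off.sum() / (n * (n - 1))
    return term1 - term2

rng = np.random.default_rng(0)
x = rng.normal(size=300)
grid = np.linspace(0.05, 1.5, 60)
h_star = grid[np.argmin([lscv_score(h, x) for h in grid])]  # data-driven choice
```

The chosen bandwidth minimizes an unbiased risk estimate computed entirely from the sample, with no user-supplied tuning.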
Ph.D.
Department of Mathematics
Arts and Sciences
Mathematics
8

COSTA, HELDER GOMES. « PARAMETER SELECTION FOR MACHINING : A MULTICRITERIA APPROACH ». PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1994. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=19739@1.

Full text
Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
A presente pesquisa apresenta uma inédita abordagem multi-critério aplicada a seleção de parâmetros para a usina mecânica. O modelo proposto aborda a seleção de ferramentas de corte e a otimização das condições de usinagem em múltiplos estágios (mais especificamente a seleção do avanço, da velocidade de corte e da composição de estágios de usinagem). Apresenta-se, também, um sistema computacional (SIAD_T) desenvolvido em pararelo à presente pesquisa, com o objetivo de simular a metodologia aqui proposta. Este sistema, aplicável à seleção de condições de usinagem em operações de carreamento em tornos mecânicos, é dotado das seguintes ferramentas: (i) ferramenta gráfica para entrada das características geométricas da peça a ser usinada; (ii) ferramenta gráfica para definição dos volumes unitários (estágios de usinagem); (iii) código numérico para implementação do modelo multi-critério no processo de seleção de variáveis para a usinagem dos volumes unitários; e, (v) código numérico para implementação do modelo multi-critério no processo de seleção das famílias de volumes unitários (conjuntos de volumes unitários). Além destes pontos, o presente texto apresenta uma inédita análise de sensibilidade, na qual se avalia e questiona o grau de relevância da aplicação de uma abordagem multi-critério ao processo de tomada de decisões no contexto da usinagem.
This work describes a original multicriteria approach to the machining parametes selection problem. The proposed model comprises the cutting tools selection problem and optimal parameters selection for multiple estages machining. In addition, a computacional system (SIAD_T) is developed to simulate the proposed methology implementation. This system, applicable to turning operations, has the following characteristica: (i) a graphical tool to input the geotric features of the workpiece,(ii) a graphical tool to input the unitary volumes, (iii) a numerical code to simulate the application o the multicriteria approach to the cutting tools selection process,(iv) a numerical code to simulate the application of the multicriteria approach on the machining parameters selection process, and (v) a numerical code to simulate the application of the multicriteria approach to the selection of machinable volumes (or selection of a set of unitary volumes). Finally, this work presents an analysis of the revelance of applying a multicriteria approach to decision making process concerning the machining context.
9

Park, Yonggi. « PARAMETER SELECTION RULES FOR ILL-POSED PROBLEMS ». Kent State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=kent1574079328985475.

Full text
10

Zvoníček, Josef. « Vývoj procesních parametrů pro zpracování hliníkové slitiny AlSi7 technologií Selective Laser Melting ». Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2018. http://www.nusl.cz/ntk/nusl-444404.

Full text
Abstract:
The diploma thesis studies the influence of process parameters on the processing of AlSi7Mg0.6 aluminum alloy by the additive technology Selective Laser Melting. The main objective is to clarify the influence of the individual process parameters on the resulting porosity of the material and on its mechanical properties. The thesis reviews the current state of aluminum alloy processing by this technology. The material research is carried out in successive experiments, from single-track (welding) tests to volume tests, with subsequent verification of the mechanical properties of the material. Throughout the work, the material is evaluated in terms of porosity, the stability of individual welds, hardness, and mechanical properties. The results are compared with the literature.
11

Meise, Monika. « Residual based selection of smoothing parameters ». [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=974404551.

Full text
12

Jungwirth, Craig A. (Craig Allen). « Vision system parameter selection for flexible materials handling ». Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/102709.

Full text
Abstract:
Thesis (B.S.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1988.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references.
by Craig A. Jungwirth.
B.S.
13

Shi, Shujing. « Tuning Parameter Selection in L1 Regularized Logistic Regression ». VCU Scholars Compass, 2012. http://scholarscompass.vcu.edu/etd/2940.

Full text
Abstract:
Variable selection is an important topic in regression analysis and is intended to select the best subset of predictors. The least absolute shrinkage and selection operator (Lasso) was introduced by Tibshirani in 1996. This method can serve as a tool for variable selection because it shrinks some coefficients to exactly zero through a constraint on the sum of the absolute values of the regression coefficients. For logistic regression, the Lasso modifies the traditional parameter estimation method, maximum log-likelihood, by adding the L1 norm of the parameters to the negative log-likelihood function, turning a maximization problem into a minimization one. To solve this problem, we first need to set the value of the coefficient of the L1 norm, called the tuning parameter. Since the tuning parameter affects both coefficient estimation and variable selection, we want to find its optimal value so as to obtain the most accurate coefficient estimates and the best subset of predictors in the L1-regularized regression model. There are two popular methods for selecting the value of the tuning parameter that results in the best subset of predictors: the Bayesian information criterion (BIC) and cross-validation (CV). The objective of this paper is to evaluate and compare these two methods for selecting the optimal value of the tuning parameter, in terms of coefficient estimation accuracy and variable selection, through simulation studies.
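The BIC route described in this abstract can be sketched in a few lines (not code from the thesis; synthetic data, an illustrative grid, and scikit-learn's L1-penalized logistic regression, where `C` is the inverse of the penalty strength):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.5, 1.0] + [0.0] * (p - 3))   # sparse ground truth
y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta))).astype(int)

def bic(model, X, y):
    """BIC = -2 * log-likelihood + (number of nonzero parameters) * log(n)."""
    prob = np.clip(model.predict_proba(X)[:, 1], 1e-12, 1 - 1e-12)
    loglik = np.sum(y * np.log(prob) + (1 - y) * np.log(1 - prob))
    k = np.count_nonzero(model.coef_) + 1              # nonzero betas + intercept
    return -2 * loglik + k * np.log(len(y))

C_grid = np.logspace(-2, 1, 20)                        # C = 1 / tuning parameter
scores = []
for C in C_grid:
    m = LogisticRegression(penalty="l1", C=C, solver="liblinear").fit(X, y)
    scores.append(bic(m, X, y))
best_C = C_grid[int(np.argmin(scores))]                # BIC-selected value
```

Cross-validation would replace the BIC score with held-out log-likelihood or accuracy; the thesis compares exactly these two selection rules by simulation.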
14

Malý, Martin. « Experimentální komora pro testování speciálních materiálů technologií SLM ». Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-318407.

Full text
Abstract:
The thesis deals with the influence of process temperature and pressure on 3D printing by Selective Laser Melting. The aim of the thesis is the design, manufacture and testing of an experimental chamber for the SLM 280HL machine from SLM Solutions. The main task of the experimental chamber is to increase the preheating temperature of the powder bed from the original 200 °C to at least 400 °C. The device will be used to investigate the influence of high process temperature on the properties of printed materials. The thesis also covers the design of a powder applicator for elevated temperatures.
15

May, Michael. « Data analytics and methods for improved feature selection and matching ». Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/data-analytics-and-methods-for-improved-feature-selection-and-matching(965ded10-e3a0-4ed5-8145-2af7a8b5e35d).html.

Full text
Abstract:
This work focuses on analysing and improving feature detection and matching. After creating an initial framework of study, four main areas of work are researched. These areas make up the main chapters within this thesis and focus on using the Scale Invariant Feature Transform (SIFT). The preliminary analysis of the SIFT investigates how this algorithm functions. Included is an analysis of the SIFT feature descriptor space and an investigation into the noise properties of the SIFT. It introduces a novel use of the a contrario methodology and shows the success of this method as a way of discriminating images which are likely to contain corresponding regions from images which do not. Parameter analysis of the SIFT uses both parameter sweeps and genetic algorithms as an intelligent means of setting the SIFT parameters for different image types, utilising a GPGPU implementation of SIFT. The results have demonstrated which parameters are more important when optimising the algorithm and the areas within the parameter space to focus on when tuning the values. A multi-exposure, High Dynamic Range (HDR), fusion features process has been developed in which SIFT image features are matched within high-contrast scenes. Bracketed exposure images are analysed and features are extracted and combined from different images to create a set of features which describe a larger dynamic range. They are shown to reduce the effects of noise and artefacts that are introduced when extracting features from HDR images directly, and have a superior image matching performance. The final area is the development of a novel, 3D-based, SIFT weighting technique which utilises the 3D data from a pair of stereo images to cluster and classify matched SIFT features. Weightings are applied to the matches based on the 3D properties of the features and how they cluster, in order to attempt to discriminate between correct and incorrect matches using the a contrario methodology.
The results show that the technique provides a method for discriminating between correct and incorrect matches and that the a contrario methodology has potential for future investigation as a method for correct feature match prediction.
16

Warwick, Jane. « Selecting tuning parameters in minimum distance estimators ». Thesis, Open University, 2002. http://oro.open.ac.uk/19918/.

Full text
Abstract:
Many minimum distance estimators have the potential to provide parameter estimates which are both robust and efficient and yet, despite these highly desirable theoretical properties, they are rarely used in practice. This is because the performance of these estimators is rarely guaranteed per se but obtained by placing a suitable value on some tuning parameter. Hence there is a risk involved in implementing these methods because if the value chosen for the tuning parameter is inappropriate for the data to which the method is applied, the resulting estimators may not have the desired theoretical properties and could even perform less well than one of the simpler, more widely used alternatives. There are currently no data-based methods available for deciding what value one should place on these tuning parameters hence the primary aim of this research is to develop an objective way of selecting values for the tuning parameters in minimum distance estimators so that the full potential of these estimators might be realised. This new method was initially developed to optimise the performance of the density power divergence estimator, which was proposed by Basu, Harris, Hjort and Jones [3]. The results were very promising so the method was then applied to two other minimum distance estimators and the results compared.
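The density power divergence estimator mentioned in this abstract can be illustrated for a simple normal location model (a minimal sketch, not the tuning-parameter selection method developed in the thesis; the contamination level and the tuning value α = 0.5 are arbitrary choices for demonstration):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def dpd_objective(theta, x, alpha):
    """Density power divergence objective for a N(theta, 1) location model.
    alpha is the tuning parameter: alpha -> 0 recovers maximum likelihood,
    larger alpha trades efficiency for robustness."""
    f = np.exp(-(x - theta) ** 2 / 2) / np.sqrt(2 * np.pi)
    # Closed form for the integral of f^(1 + alpha) for a unit-variance normal.
    integral = (2 * np.pi) ** (-alpha / 2) / np.sqrt(1 + alpha)
    return integral - (1 + 1 / alpha) * np.mean(f ** alpha)

rng = np.random.default_rng(2)
# 90% clean N(0, 1) observations plus 10% gross outliers at 8.
x = np.concatenate([rng.normal(0.0, 1.0, 180), np.full(20, 8.0)])

theta_mle = x.mean()                                  # pulled toward the outliers
res = minimize_scalar(dpd_objective, bounds=(-5, 5), args=(x, 0.5),
                      method="bounded")
theta_dpd = res.x                                     # stays near the clean bulk
```

The sketch shows why the tuning parameter matters: the estimator's robustness comes entirely from the chosen α, which is exactly the quantity the thesis seeks to select from data.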
17

Warwick, Jane. « Selecting tuning parameters in minimum distance estimators ». n.p, 2001. http://ethos.bl.uk/.

Full text
18

Callister, Ross. « Automatically Selecting Parameters for Graph-Based Clustering ». Thesis, Curtin University, 2020. http://hdl.handle.net/20.500.11937/80407.

Full text
Abstract:
Data streams present a number of challenges caused by change in stream concepts over time. In this thesis we present a novel method for detecting concept drift within data streams by analysing geometric features of the clustering algorithm RepStream. Further, we present novel methods for automatically adjusting critical input parameters over time and for generating self-organising nearest-neighbour graphs, improving robustness and decreasing the need for domain-specific knowledge in the face of stream evolution.
19

Papadopoulos, George. « Feature Selection and Parameter Estimation in Biochemical Signaling Pathways ». Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.503053.

Full text
20

DALCANAL, PAOLA REGINA. « SOME COMMENTS ON SASSI 2000 FREE FIELD PARAMETER SELECTION ». PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2004. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=5185@1.

Full text
Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
ELETROBRAS TERMONUCLEAR S.A - ELETRONUCLEAR
A series of SASSI2000 runs is examined to develop insight into the behavior of the program's general solution for soil-structure interaction problems in the frequency domain under seismic excitation, in order to propose appropriate analyst choices of the analysis frequencies as well as of the free-field wave composition and body-wave incidence angle. The options for control motion point location and control motion direction are also considered. A superficial structure model with five degrees of freedom is used, founded on a horizontally layered site overlying a rock halfspace. The study is based on a large set of acceleration transfer functions at the single interaction node, obtained by varying the parameters mentioned above: the nature and incidence angle of the waves, the topographic and constitutive properties of the site, the direction of the control motion, and the position of the control point relative to the interaction node. Recommendations are proposed to clarify and extend the user manual chapters on the determination of the analysis frequencies and on the free-field wave composition. The efficiency of this proposal is verified on examples based on real structural models of the Angra 1 and Angra 3 nuclear power plants.
21

Dagiloke, I. F. « Computer aided process parameter selection for high speed machining ». Thesis, Liverpool John Moores University, 1995. http://researchonline.ljmu.ac.uk/4990/.

Full text
22

Palmberger, Anna. « Regularization parameter selection methods for an inverse dispersion problem ». Thesis, Umeå universitet, Institutionen för fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184296.

Texte intégral
Résumé :
There are many regularization parameter selection methods that can be used when solving inverse problems, but it is not clear which one is best suited for the inverse dispersion problem. The suitability of three different methods for solving the inverse dispersion problem is evaluated here, in order to pick a suitable method for these kinds of problems in the future. The regularization parameter selection methods are used to solve the separable non-linear inverse dispersion problem, which is adjusted and solved as a linear inverse problem. It is solved with Tikhonov regularization, and the model is a time-integrated Gaussian puff model. The dispersion problem is solved with the three methods under different settings. The three methods are generalized cross-validation, the L-curve method, and the quasi-optimality criterion. They produce rather different solutions, and the results show that generalized cross-validation is the best choice. The other methods are less stable, and their errors are sometimes orders of magnitude larger than those from generalized cross-validation.
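The three selection rules compared in this thesis all operate on the same Tikhonov machinery. As a hedged illustration of one of them, the sketch below applies generalized cross-validation to a toy linear Tikhonov problem; the matrix, noise level, and parameter grid are invented for illustration and are not taken from the thesis.

```python
import numpy as np

def tikhonov_solve(A, y, lam):
    # x_lam = (A^T A + lam I)^{-1} A^T y
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def gcv_score(A, y, lam):
    # GCV(lam) = m * ||(I - H) y||^2 / tr(I - H)^2, with H the influence matrix
    m, n = A.shape
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)
    r = y - H @ y
    return m * float(r @ r) / (m - np.trace(H)) ** 2

# toy ill-conditioned problem (all values illustrative)
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 50), 8)
x_true = np.ones(8)
y = A @ x_true + 0.01 * rng.standard_normal(50)

grid = np.logspace(-8, 0, 30)
lam_best = min(grid, key=lambda l: gcv_score(A, y, l))
x_hat = tikhonov_solve(A, y, lam_best)
```

The L-curve and quasi-optimality rules would reuse `tikhonov_solve` unchanged and differ only in the score being minimized over the grid.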
Styles APA, Harvard, Vancouver, ISO, etc.
23

Ljung, Carl. « Copula selection and parameter estimation in market risk models ». Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-204420.

Texte intégral
Résumé :
In this thesis, the literature is reviewed for theory regarding elliptical copulas (Gaussian, Student's t, and grouped t) and methods for calibrating parametric copulas to sets of observations. Theory regarding model diagnostics is also summarized. Historical data on equity indices and government bond rates from several geographical regions, along with U.S. corporate bond indices, are used as proxies for the most significant stochastic variables in the investment portfolio of If P&C. These historical observations are transformed into pseudo-uniform observations, pseudo-observations, using parametric and non-parametric univariate models. The parametric models are fitted using both maximum likelihood and least squares of the quantile function. Elliptical copulas are then calibrated to the pseudo-observations using the well-known methods Inference Function for Margins (IFM) and Semi-Parametric (SP), as well as compositions of these methods and a non-parametric estimator of Kendall's tau. The goodness-of-fit of the calibrated multivariate models is assessed in terms of general dependence, tail dependence, and mean squared error, as well as by universal measures such as the Akaike and Bayesian information criteria, AIC and BIC. The mean squared error is computed using both the empirical joint distribution and the empirical Kendall distribution function. General dependence is measured using the scale-invariant measures Kendall's tau, Spearman's rho, and Blomqvist's beta, while tail dependence is assessed using Krupskii's tail-weighted measures of dependence (see [16]). Monte Carlo simulation is used to estimate these measures for copulas where analytical calculation is not feasible. Gaussian copulas scored lower than Student's t and grouped t copulas in every test conducted, although not all tests produced conclusive results.
Further, the obtained values of the tail-weighted measures of dependence imply a systematically lower tail dependence of Gaussian copulas compared to historical observations.
This thesis reviews theory on elliptical copulas (Gaussian, Student's t, and so-called grouped t) and methods for calibrating parametric copulas to samples of observations. It also summarizes theory on methods for analyzing and comparing copula models. Historical data on equity indices and government bonds from several geographical regions, together with U.S. corporate bond indices, are used to model the main stochastic driving variables in the investment portfolio of If P&C. These historical observations are transformed, using parametric and non-parametric univariate models, into pseudo-uniform observations, pseudo-observations. The parametric models are fitted to data both by maximum likelihood and by least-squares fitting of the quantile function. Elliptical copulas are then calibrated to the pseudo-observations with the well-known methods Inference Function for Margins (IFM) and Semi-Parametric (SP), as well as with mixtures of these two methods and the non-parametric estimator of Kendall's tau. How well the calibrated multivariate models fit the historical data is evaluated in terms of general dependence, tail dependence, and root-mean-square deviation, and by using more general measures such as the Akaike and Bayesian information criteria, AIC and BIC. The root-mean-square deviation is computed both from the empirical joint distribution and from the empirical Kendall distribution function. General dependence is measured with the scale-invariant measures Kendall's tau, Spearman's rho, and Blomqvist's beta, while tail dependence is evaluated with Krupskii's tail-weighted dependence measures (see [16]). Where analytical computation is not feasible for a copula, Monte Carlo simulation is used to estimate these measures. The Gaussian copulas performed worse than the Student's t and grouped t copulas in every single test conducted, although not all test results can be considered definitive.
Furthermore, the values obtained from the tail-weighted dependence measures suggest that modeling with a Gaussian copula yields systematically lower tail dependence in the model than in the historical observations.
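For elliptical copulas, the non-parametric Kendall's tau estimator mentioned above has a closed-form inversion to the correlation parameter, ρ = sin(πτ/2). The sketch below checks this numerically; the sample size, seed, and true correlation are invented for illustration.

```python
import numpy as np

def kendall_tau(u, v):
    # O(n^2) concordance count; adequate for small samples
    n = len(u)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign((u[i] - u[j]) * (v[i] - v[j]))
    return 2.0 * s / (n * (n - 1))

def elliptical_rho_from_tau(tau):
    # moment inversion rho = sin(pi * tau / 2), valid for elliptical copulas
    return float(np.sin(np.pi * tau / 2))

# simulate from a bivariate Gaussian with rho = 0.6 and recover rho via tau
rng = np.random.default_rng(1)
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=400)
rho_hat = elliptical_rho_from_tau(kendall_tau(z[:, 0], z[:, 1]))
```

Because tau is rank-based, the same estimate is obtained from the pseudo-observations, which is what makes this route attractive for copula calibration.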
Styles APA, Harvard, Vancouver, ISO, etc.
24

Reed, Craig. « Bayesian parameter estimation and variable selection for quantile regression ». Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/6118.

Texte intégral
Résumé :
The principal goal of this work is to provide efficient algorithms for implementing the Bayesian approach to quantile regression. There are two major obstacles to overcome in order to achieve this. Firstly, it is necessary to specify a suitable likelihood, given that the frequentist approach generally avoids such specifications. Secondly, sampling methods are usually required, as analytical expressions for posterior summaries are generally unavailable in closed form regardless of the prior used. The asymmetric Laplace (AL) likelihood is a popular choice and has a direct link to the frequentist procedure of minimising a weighted absolute value loss function that is known to yield the conditional quantile estimates. For any given prior, the Metropolis-Hastings algorithm is always available to sample the posterior distribution. However, it requires the specification of a suitable proposal density, limiting its potential to be used more widely in applications. It is shown that the Bayesian quantile regression model with the AL likelihood can be converted into a normal regression model conditional on latent parameters. This makes it possible to use a Gibbs sampler on the augmented parameter space and thus avoids the need to choose proposal densities. This approach of introducing latent variables allows more complex Bayesian quantile regression models to be treated in much the same way. This is illustrated with examples varying from using robust priors and nonparametric regression using splines to allowing model uncertainty in parameter estimation. This work is applied to comparing various measures of smoking and determining which measure is most suited to predicting low-birthweight infants. This thesis also offers a short tutorial on the R functions that are used to produce the analysis.
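The link between the AL likelihood and the weighted absolute value loss can be checked numerically: minimising the "check" (pinball) loss over candidate values recovers the sample quantile. The sketch below is a generic illustration, not the thesis's code; the data and the search grid are invented.

```python
import numpy as np

def check_loss(u, tau):
    # asymmetric absolute loss; its minimiser is the tau-th quantile
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

rng = np.random.default_rng(2)
x = rng.standard_normal(10_000)
tau = 0.25
grid = np.linspace(-3.0, 3.0, 601)
risk = [check_loss(x - q, tau).mean() for q in grid]
q_hat = grid[int(np.argmin(risk))]
# q_hat should sit close to the empirical 0.25-quantile of x
```

Maximising the AL likelihood is equivalent to minimising exactly this loss, which is why the AL choice reproduces frequentist quantile estimates at the posterior mode.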
Styles APA, Harvard, Vancouver, ISO, etc.
25

Fletcher, Michael Crawford. « Estimating the kinetic parameters of DNA sequence-selective drugs by footprinting ». Thesis, University of Southampton, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295695.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
26

Gopalakrishnan, Pradeep. « Influence of Laser Parameters on Selective Retinal Photocoagulation for Macular Diseases ». University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1121436799.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
27

Těšický, Lukáš. « Optimalizace parametrů SLM procesu při zpracování hliníkové slitiny AW2618 ». Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-318646.

Texte intégral
Résumé :
The diploma thesis deals with possibilities for processing the aluminium alloy EN AW 2618 using selective laser melting (SLM). The theoretical part contains basic knowledge about production by this technology and possibilities for evaluating the relative density of samples. It also contains an overview of the current state of knowledge about the processing of aluminium alloys by SLM technology, above all the 2000-series aluminium alloys, where the main alloying element is copper. In the experimental part, test samples were designed based on this research. These samples can be divided into three groups: single-track specimens, volume samples, and samples for tensile testing. The single-track and volume samples were used to find processing parameters that achieve a relative density close to the full volume of the material. For this purpose, the effect of different scanning strategies on the relative density of the sample was examined. The limiting factor was the occurrence of small cracks across the broad range of parameters studied. The mechanical properties of samples produced by SLM were compared with extruded material. It was found that the material processed by SLM achieves only half the yield strength and tensile strength of the extruded material, mainly due to the occurrence of small cracks and other defects in the structure of the material.
Styles APA, Harvard, Vancouver, ISO, etc.
28

Henning, Peter Allen. « Computational Parameter Selection and Simulation of Complex Sphingolipid Pathway Metabolism ». Thesis, Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/16202.

Texte intégral
Résumé :
Systems biology is an emerging field of study that seeks to provide systems-level understanding of biological systems through the integration of high-throughput biological data into predictive computational models. The integrative nature of this field stands in sharp contrast to the reductionist methods that have been employed since the advent of molecular biology. Systems biology investigates not only the individual components of the biological system, such as metabolic pathways, organelles, and signaling cascades, but also considers the relationships and interactions between the components, in the hope that an understandable model of the entire system can eventually be developed. This field of study is being hailed by experts as a potentially vital technology for revolutionizing the pharmaceutical development process in the post-genomic era. This work provides a systems biology investigation into principles governing de novo sphingolipid metabolism, as well as the various computational obstacles that arise in converting high-throughput data into an insightful model.
Styles APA, Harvard, Vancouver, ISO, etc.
29

Fraleigh, Lisa Marie. « Optimal sensor selection and parameter estimation for real-time optimization ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ40050.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
30

Sandink, Christopher Albert. « Screening diagnostics for parameter selection in extended Kalman filter implementations ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0006/MQ45297.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
31

Goldes, John. « REGULARIZATION PARAMETER SELECTION METHODS FOR ILL POSED POISSON IMAGING PROBLEMS ». The University of Montana, 2010. http://etd.lib.umt.edu/theses/available/etd-07072010-124233/.

Texte intégral
Résumé :
A common problem in imaging science is to estimate some underlying true image given noisy measurements of image intensity. When image intensity is measured by the counting of incident photons emitted by the object of interest, the data-noise is accurately modeled by a Poisson distribution, which motivates the use of Poisson maximum likelihood estimation. When the underlying model equation is ill-posed, regularization must be employed. I will present a computational framework for solving such problems, including statistically motivated methods for choosing the regularization parameter. Numerical examples will be included.
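As a generic illustration of the kind of problem described (not the thesis's framework), a Tikhonov-penalised Poisson maximum-likelihood estimate can be computed by projected gradient descent on the negative log-likelihood; the toy forward operator, step size, and penalty weight below are all invented.

```python
import numpy as np

def poisson_nll(A, y, x, alpha):
    # penalised Poisson negative log-likelihood (up to a constant in y)
    Ax = np.maximum(A @ x, 1e-12)
    return float(np.sum(Ax - y * np.log(Ax)) + 0.5 * alpha * x @ x)

def poisson_tikhonov(A, y, alpha, iters=400, step=1e-3):
    # projected gradient descent; the projection keeps intensities nonnegative
    x = np.ones(A.shape[1])
    for _ in range(iters):
        Ax = np.maximum(A @ x, 1e-12)
        grad = A.T @ (1.0 - y / Ax) + alpha * x
        x = np.maximum(x - step * grad, 0.0)
    return x

rng = np.random.default_rng(3)
A = rng.uniform(0.0, 1.0, size=(40, 8))     # toy forward operator
x_true = np.full(8, 5.0)
y = rng.poisson(A @ x_true).astype(float)   # photon-count data
x_hat = poisson_tikhonov(A, y, alpha=0.1)
```

The regularization parameter `alpha` plays the role discussed in the abstract: the statistically motivated selection rules choose it from the data rather than by hand.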
Styles APA, Harvard, Vancouver, ISO, etc.
32

Korchaiyapruk, Attasit 1977. « Development of framework for soil model input parameter selection procedures ». Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/80944.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
33

Alshammari, Mashaan. « Graph Filtering and Automatic Parameter Selection for Efficient Spectral Clustering ». Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/24091.

Texte intégral
Résumé :
Spectral clustering is usually used to detect non-convex clusters. Despite being effective at detecting this type of cluster, spectral clustering has two deficiencies that have made it less attractive to the pattern recognition community. First, the graph Laplacian has to pass through eigen-decomposition to find the embedding space, which has been proved to be a computationally expensive process when the number of points is large. Second, spectral clustering uses parameters that highly influence its outcome, and tuning these parameters manually would be a tedious process when examining different datasets. This thesis introduces solutions to these two problems. For computational efficiency, we proposed approximated graphs with a reduced number of graph vertices; consequently, eigen-decomposition is performed on a matrix of reduced size, which makes it faster. Unfortunately, reducing the graph vertices could lead to a loss of local information that affects clustering accuracy. Thus, we proposed another graph where the number of edges is reduced significantly while keeping the same number of vertices to maintain local information. This reduces the matrix size, making the method computationally efficient while maintaining good clustering accuracy. Regarding influential parameters, we proposed cost functions that test a range of values and decide on the optimum value. Cost functions were used to estimate the number of embedding space dimensions and the number of clusters. We also observed in the literature that the graph reduction step requires manual tuning of parameters. Therefore, we developed a graph reduction framework that does not require any parameters.
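The pipeline being accelerated here can be sketched in a few lines. The following is a generic two-cluster illustration (the data, bandwidth `sigma`, and sizes are invented), splitting on the sign of the Fiedler vector rather than running full k-means; the `eigh` call is the expensive step the thesis's approximated graphs aim to shrink.

```python
import numpy as np

def spectral_bipartition(X, sigma=1.5):
    # Gaussian affinity -> unnormalised Laplacian -> sign of the Fiedler vector
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)            # eigen-decomposition: the costly step
    return (vecs[:, 1] > 0).astype(int)    # second-smallest eigenvector

rng = np.random.default_rng(4)
blob_a = rng.normal([0.0, 0.0], 0.5, size=(30, 2))
blob_b = rng.normal([5.0, 5.0], 0.5, size=(30, 2))
labels = spectral_bipartition(np.vstack([blob_a, blob_b]))
```

Reducing the number of vertices shrinks `L` directly; reducing only edges makes `W` sparse while keeping every point's label, which is the trade-off the abstract describes.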
Styles APA, Harvard, Vancouver, ISO, etc.
34

Kubrický, Jakub. « Optimalizace SLM procesu pro výrobu úsťového zařízení útočné pušky ». Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-318753.

Texte intégral
Résumé :
The thesis deals with optimization of the manufacturing process of a muzzle device designed for an assault rifle. The most common titanium alloy, Ti-6Al-4V, was chosen for this task. The introduction summarizes the existing types of muzzle devices and further describes the SLM technology with a special focus on titanium alloy processing. The optimization methods and their follow-up testing were designed according to the theoretical knowledge summarized in the theoretical part of this work. Firstly, the aim was to optimize the manufacturing process with attention to preserving the relative density of the parts. Secondly, the mechanical properties of parts that underwent different heat treatments were tested. The obtained data were then used to design and manufacture a muzzle device that subsequently underwent further testing in real conditions.
Styles APA, Harvard, Vancouver, ISO, etc.
35

Nik, Abdul Rahim H. « The effects of selective logging methods on hydrological parameters in Peninsular Malaysia ». Thesis, Bangor University, 1990. https://research.bangor.ac.uk/portal/en/theses/the-effects-of-selective-logging-methods-on-hydrological-parameters-in-peninsular-malaysia(9ed5e3d1-33ab-4cb1-91b0-7c043891921f).html.

Texte intégral
Résumé :
An experimental forest watershed, consisting of three small catchments at Berembun, Negeri Sembilan, in Peninsular Malaysia, was monitored from 1979 to 1987. Adequate instruments were installed for continuous collection of hydrologic and climatic data. The calibration and post-treatment phases lasted three and four years respectively. Two types of treatment were imposed, namely commercial selective logging in catchment 1 and supervised selective logging in catchment 3 (C3), whilst catchment 2 remained as a control. Pertinent logging guidelines were prescribed and assessed in C3 in terms of hydrological responses. Significant water yield increases were observed after forest treatment in both catchments, amounting to 165 mm (70%) and 87 mm (37%) respectively in the first year; the increases persisted to the fourth year after treatment. The magnitude and rate of water yield increase depended primarily on the amount of forest removed and the prevailing rainfall regime, and the increase was largely associated with baseflow augmentation. Interestingly, both types of selective logging produced no significant effect on peak discharge, while the commercial logging resulted in a significant increase in stormflow volume and initial discharge. Such responses can be explained by the extensive nature of selective logging, which normally left a substantial area of forest intact with minimal disturbance to flow channels. Thus, the conservation measures introduced in this study (the use of buffer strips, cross drains, and an appropriate percentage for the forest road network) were found to be effective and beneficial in ameliorating the hydrological impacts.
Styles APA, Harvard, Vancouver, ISO, etc.
36

Häggström, Jenny. « Selection of smoothing parameters with application in causal inference ». Doctoral thesis, Umeå universitet, Statistiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-39614.

Texte intégral
Résumé :
This thesis is a contribution to the research area concerned with selection of smoothing parameters in the framework of nonparametric and semiparametric regression. Selection of smoothing parameters is one of the most important issues in this framework, and the choice can heavily influence subsequent results. A nonparametric or semiparametric approach is often desirable when large datasets are available, since this allows us to make fewer and weaker assumptions than are needed in a parametric approach. In the first paper we consider smoothing parameter selection in nonparametric regression when the purpose is to accurately predict future or unobserved data. We study the use of accumulated prediction errors and make comparisons to leave-one-out cross-validation, which is widely used by practitioners. In the second paper a general semiparametric additive model is considered, and the focus is on selection of smoothing parameters when optimal estimation of some specific parameter is of interest. We introduce a double smoothing estimator of a mean squared error and propose to select smoothing parameters by minimizing this estimator. Our approach is compared with existing methods. The third paper is concerned with the selection of smoothing parameters optimal for estimating average treatment effects defined within the potential outcome framework. For this estimation problem we propose double smoothing methods similar to the method proposed in the second paper. Theoretical properties of the proposed methods are derived, and comparisons with existing methods are made by simulations. In the last paper we apply our results from the third paper by using a double smoothing method for selecting smoothing parameters when estimating average treatment effects on the treated. We estimate the effect of divorcing in middle age on BMI. Rich data on socioeconomic conditions, health, and lifestyle from Swedish longitudinal registers are used.
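Leave-one-out cross-validation, the baseline the first paper compares against, can be written directly for a Nadaraya-Watson smoother. The sketch below is a hedged illustration: the data-generating curve, kernel, and bandwidth grid are invented, not taken from the thesis.

```python
import numpy as np

def nw_estimate(x0, x, y, h, exclude=None):
    # Nadaraya-Watson estimate at x0 with a Gaussian kernel of bandwidth h
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    if exclude is not None:
        w[exclude] = 0.0        # leave the point itself out
    return float(w @ y) / float(w.sum())

def loo_cv_score(x, y, h):
    # leave-one-out cross-validated squared prediction error for bandwidth h
    errs = [(y[i] - nw_estimate(x[i], x, y, h, exclude=i)) ** 2
            for i in range(len(x))]
    return float(np.mean(errs))

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0.0, 1.0, 100))
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(100)
grid = [0.01, 0.03, 0.1, 0.3, 1.0]
h_best = min(grid, key=lambda h: loo_cv_score(x, y, h))
```

Accumulated prediction errors replace the leave-one-out residuals with sequential one-step-ahead residuals, but the selection step, minimizing a score over a bandwidth grid, has the same shape.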
Styles APA, Harvard, Vancouver, ISO, etc.
37

Rosenbloom, E. S. « Selecting the best of k multinomial parameter estimation procedures using SPRT ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0005/MQ45119.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
38

Mekarapiruk, Wichaya. « Simultaneous optimal parameter selection and dynamic optimization using iterative dynamic programming ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ58926.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
39

Alamedine, Dima. « Selection of EHG parameter characteristics for the classification of uterine contractions ». Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2201/document.

Texte intégral
Résumé :
One of the most promising biophysical markers for the detection of preterm labor is the electrical activity of the uterus, recorded on the abdomen of pregnant women: the electrohysterogram (EHG). Several signal processing tools (linear and nonlinear) have already been used to analyze the excitability and propagation of the EHG, in order to differentiate pregnancy contractions, which are ineffective, from effective labor contractions, which could lead to preterm birth. In these numerous studies, the parameters are computed on different signal databases obtained with different recording protocols, so it is difficult to compare the results in order to choose the "best" parameters for preterm labor detection. Moreover, this large number of parameters increases the computational complexity for diagnostic purposes. The main objective of this thesis is therefore to test, on a given population of women, which EHG signal processing tools allow discrimination between the two types of contractions (pregnancy/labor). To this end, several feature selection methods are tested in order to select the most discriminant parameters. The first method, developed in this thesis, is based on measuring the distance between the histograms of the parameters for the different classes (pregnancy and labor) using the Jeffrey divergence (JD). The others are existing data mining methods from the literature. The EHG signals were recorded using a multichannel system placed on the abdomen of the pregnant woman, allowing the simultaneous recording of 16 EHG channels. A monovariate approach (characterization of a single channel) and a bivariate approach (coupling between two channels) are used in this work.
Using all channels in the monovariate analysis, or all channel combinations in the bivariate analysis, leads to a high parameter dimension. Therefore, another objective of this thesis is the selection of the channels, or channel combinations, that provide the most useful information for distinguishing between pregnancy and labor contractions. This channel selection step is followed by feature selection on the selected channels or channel combinations. In addition, this approach was developed using both monopolar and bipolar signals. The results of this work highlight, for EHG processing, the parameters and channels that give the best discrimination between pregnancy and labor contractions. These results can then be used for the detection of threatened preterm labor.
One of the most promising biophysical markers of preterm labor is the electrical activity of the uterus, picked up on the woman's abdomen: the electrohysterogram (EHG). Several processing tools for the EHG signal (linear and nonlinear) allow the analysis of both excitability and propagation of the uterine electrical activity, in order to differentiate pregnancy contractions, which are ineffective, from effective labor contractions that might cause preterm birth. Since, across these multiple studies, the parameters are computed from different signal databases obtained with different recording protocols, it is sometimes difficult to compare their results in order to choose the "best" parameter for preterm labor detection. Additionally, this large number of parameters increases the computational complexity for diagnostic purposes. The main objective of this thesis is therefore to select, among all the features of interest extracted from multiple studies, the most pertinent feature subsets for discriminating, on a given population, between pregnancy and labor contractions. For this purpose, several methods for feature selection are tested. The first one, developed in this work, is based on measuring the Jeffrey divergence (JD) distance between the histograms of the parameters of the two classes, pregnancy and labor. The others are "filter" and "wrapper" data mining methods taken from the literature. In our work, monovariate analysis (in one given EHG channel) and bivariate analysis (propagation of the EHG, measured as the coupling between channels) are used. The EHG signals are recorded using a multichannel system positioned on the woman's abdomen for the simultaneous recording of 16 EHG channels. Using all channels for the monovariate analysis, or all combinations of channels for the bivariate analysis, leads to a large parameter dimension for each contraction.
Therefore, another objective of our thesis is the selection of the best channels, for the monovariate analysis, or the best channel combinations, for the bivariate analysis, that provide the most useful information to discriminate between the pregnancy and labor classes. This channel selection step is then followed by feature selection on the selected channels or channel combinations. Additionally, we tested all our work using both monopolar and bipolar signals. The results of this thesis allow us to identify, when processing the EHG, which channels and features can be used with the best chance of success as inputs to a diagnostic system for discrimination between pregnancy and labor contractions. This could further be used for preterm labor diagnosis.
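The histogram-distance criterion can be sketched as a symmetrised Kullback-Leibler distance between two feature samples. This is one common definition of the Jeffrey divergence, not necessarily the exact variant used in the thesis, and the samples, bin count, and smoothing constant below are invented for illustration.

```python
import numpy as np

def jeffrey_divergence(x1, x2, bins=20, eps=1e-9):
    # symmetrised KL distance between the histograms of two feature samples
    lo = min(x1.min(), x2.min())
    hi = max(x1.max(), x2.max())
    p, _ = np.histogram(x1, bins=bins, range=(lo, hi))
    q, _ = np.histogram(x2, bins=bins, range=(lo, hi))
    p = (p + eps) / (p.sum() + bins * eps)   # smooth to avoid log(0)
    q = (q + eps) / (q.sum() + bins * eps)
    return float(np.sum((p - q) * np.log(p / q)))

# a feature whose distribution shifts between classes scores higher
rng = np.random.default_rng(5)
pregnancy = rng.normal(0.0, 1.0, 5000)
labor_like = rng.normal(0.0, 1.0, 5000)    # same distribution as pregnancy
labor_shift = rng.normal(2.0, 1.0, 5000)   # shifted distribution
```

Ranking features (or channels) by this score and keeping the top scorers is the selection step the abstract describes.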
Styles APA, Harvard, Vancouver, ISO, etc.
40

吳焯基 et Cheuk-key Allen Ng. « Multiple comparison and selection of location parameters of exponential populations ». Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1990. http://hub.hku.hk/bib/B31231949.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
41

Ng, Cheuk-key Allen. « Multiple comparison and selection of location parameters of exponential populations / ». [Hong Kong] : University of Hong Kong, 1990. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12792421.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
42

Becker, Markus. « Active learning : an explicit treatment of unreliable parameters ». Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/2219.

Texte intégral
Résumé :
Active learning reduces annotation costs for supervised learning by concentrating labelling efforts on the most informative data. Most active learning methods assume that the model structure is fixed in advance and focus upon improving parameters within that structure. However, this is not appropriate for natural language processing, where the model structure and associated parameters are determined using labelled data. Applying traditional active learning methods to natural language processing can fail to produce the expected reductions in annotation cost. We show that one of the reasons for this problem is that active learning can only select examples which are already covered by the model. In this thesis, we better tailor active learning to the needs of natural language processing as follows. We formulate the Unreliable Parameter Principle: active learning should explicitly and additionally address unreliably trained model parameters in order to optimally reduce classification error. In order to do so, we should target both missing events and infrequent events. We demonstrate the effectiveness of such an approach for a range of natural language processing tasks: prepositional phrase attachment, sequence labelling, and syntactic parsing. For prepositional phrase attachment, the explicit selection of unknown prepositions significantly improves coverage and classification performance for all examined active learning methods. For sequence labelling, we introduce a novel active learning method which explicitly targets unreliable parameters by selecting sentences with many unknown words and a large number of unobserved transition probabilities. For parsing, targeting unparseable sentences significantly improves coverage and f-measure in active learning.
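The sentence-selection idea for sequence labelling can be sketched as a scoring rule over an unlabelled pool. Everything below (the pool, the vocabulary, and the scorer itself) is an invented toy illustrating the "many unknown words" criterion, not the thesis's system.

```python
def oov_count(sentence, vocab):
    # number of words not covered by the current model's vocabulary
    return sum(1 for w in sentence.split() if w not in vocab)

def select_batch(pool, vocab, k=2):
    # pick the k sentences with the most unknown words for annotation
    return sorted(pool, key=lambda s: oov_count(s, vocab), reverse=True)[:k]

vocab = {"the", "cat", "sat", "on", "mat"}
pool = [
    "the cat sat on the mat",
    "the quantum flux capacitor hummed",
    "the cat sat on the quantum mat",
]
batch = select_batch(pool, vocab)
```

A fuller version would also score unobserved transition probabilities, but the shape is the same: rank the pool by how many unreliable parameters each example would touch, then annotate the top of the ranking.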
Styles APA, Harvard, Vancouver, ISO, etc.
43

Almeida, Filho Valdez Arag?o de. « Aplica??o de superf?cies seletivas em frequ?ncia para melhoria de resposta de arranjos de antenas planares ». Universidade Federal do Rio Grande do Norte, 2014. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15252.

Texte intégral
Résumé :
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This work aims to show how the application of frequency selective surfaces (FSS) to planar antenna arrays becomes an alternative for obtaining desired radiation characteristics through changes in radiation parameters of the arrays, such as bandwidth, gain, and directivity. In addition to analyzing these parameters, a study of the mutual coupling between the elements of the array is also made. To carry out this study, a microstrip antenna array with two patch elements, fed by a feed network, was designed. Another modification made to the array was the use of a truncated ground plane, with the objective of increasing the bandwidth and miniaturizing the elements of the array. In order to study the behavior of frequency selective surfaces applied to antenna arrays, three different layouts were proposed. The first layout uses the FSS as a superstrate (above the array). The second layout uses the FSS as a reflector element (below the array). In the third layout, the array is placed between two FSS layers. Numerical and experimental results for each of the proposed configurations are presented in order to validate the research.
This work aims to present how the application of frequency selective surfaces (FSS) to planar antenna arrays becomes an interesting alternative for obtaining desired radiation characteristics, through changes in radiation parameters of the arrays such as bandwidth, gain, and directivity. In addition to analyzing these parameters, the mutual coupling between the elements of the array is also studied. To perform this study, a microstrip antenna array with two patch-type elements, fed by a feed network, was designed. Another modification made to the array was the use of a truncated ground plane, with the objective of increasing the bandwidth and miniaturizing the elements of the array. In order to study the behavior of the frequency selective surfaces applied to antenna arrays, three different layouts were proposed. The first layout consists of using the FSS as a superstrate (above the array). The second consists of using the FSS as a reflector element (below the array). The third consists of placing the array between two FSS layers, one above and one below. Numerical and experimental results for each of the proposed configurations are presented.
Styles APA, Harvard, Vancouver, ISO, etc.
44

Maaß, Peter, Sergei V. Pereverzev, Ronny Ramlau et Sergei G. Solodky. « An adaptive discretization for Tikhonov-Phillips regularization with a posteriori parameter selection ». Universität Potsdam, 1998. http://opus.kobv.de/ubp/volltexte/2007/1473/.

Texte intégral
Résumé :
The aim of this paper is to describe an efficient strategy for discretizing ill-posed linear operator equations of the first kind: we consider Tikhonov-Phillips regularization x_α^δ = (A*A + αI)^(-1) A*y^δ with a finite-dimensional approximation A_n instead of A. We propose a sparse matrix structure which still leads to optimal convergence rates but requires substantially fewer scalar products for computing A_n compared with standard methods.
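The regularized solution in the abstract can be sketched numerically. The following is a minimal illustration only (it does not implement the paper's adaptive sparse discretization); the toy 2×2 Hilbert matrix, noise level and regularization parameter are assumptions chosen for demonstration:

```python
import numpy as np

def tikhonov(A, y, alpha):
    # Tikhonov-Phillips regularized solution:
    # x_alpha^delta = (A*A + alpha I)^{-1} A* y^delta
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

A = np.array([[1.0, 1/2], [1/2, 1/3]])          # 2x2 Hilbert matrix, ill-conditioned
x_true = np.array([1.0, 2.0])
y_delta = A @ x_true + np.array([1e-4, -1e-4])  # noisy right-hand side y^delta
x_alpha = tikhonov(A, y_delta, alpha=1e-6)      # stays close to x_true despite noise
```

The regularization term αI keeps the normal-equation matrix well conditioned, trading a small bias for stability against the data noise.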
Styles APA, Harvard, Vancouver, ISO, etc.
45

MacDonald, Francis J. (Francis Joseph). « An integrative approach to process parameter selection in fused-silica capillary manufacturing ». Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/12817.

Texte intégral
Résumé :
Thesis (M.S.)--Massachusetts Institute of Technology, Sloan School of Management, 1992 and Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering, 1992.
Includes bibliographical references (leaf 110).
by Francis J. MacDonald, Jr.
M.S.
Styles APA, Harvard, Vancouver, ISO, etc.
46

Hamzah, Omar Adel Hamzah. « Parameter selection the powerplant with recovery system Off-gas in the refinery ». Thesis, NTU "KhPI", 2017. http://repository.kpi.kharkov.ua/handle/KhPI-Press/29869.

Texte intégral
Résumé :
Dissertation for the candidate degree in specialty 05.05.03 "Engines and Power Plants". - National Technical University "Kharkiv Polytechnic Institute", Kharkiv, 2017. The thesis addresses a topical problem: the selection of the scheme and power parameters of a power plant for the utilization of associated gas. The problem of associated gas flaring is continually raised at international conferences on environmental protection held under the auspices of the UN and the World Bank. In particular, at the World Climate Conference in Paris (COP21) in 2015, a global initiative was put forward to eradicate the practice of flaring associated gas in the oil industry. Worldwide it was supported by 45 oil companies, governments and other parties, through which CO2 emissions can be reduced by 100 million tons per year. The adopted programme "Zero Routine Flaring by 2030" calls for an end to the practice of burning associated gas by 2030. This initiative was also supported by Iraq, which in 2015 ranked second in the world in the flaring of associated gas. Associated petroleum gas amounts to 2% of the product yield of refineries in Iraq. Given the number of refineries and their capacities, flaring this gas daily wastes a large amount of energy and significantly pollutes the surrounding territory, not only through chemical emissions but also through the heat released during combustion. The work uses a comprehensive approach to the selection of the scheme and power parameters of a power plant for the utilization of associated gas. Possible utilization options are considered; among them, two power plants are chosen: the first based on a gas turbine engine, and the second based on a gas turbine engine combined with a piston engine.
The question is examined in terms of the exergy-anergy balance of the installation and the attainment of the best technical and economic performance, taking into account the climatic characteristics of Iraq. The features of the physical and chemical composition of associated petroleum gas at the refinery in Iraq were considered; in particular, the methane number was determined using the Caterpillar method. The methane number of the gas fuel affects the choice of the piston power plant. The temperature characteristics of the region have a significant impact on the choice of the associated-gas utilization scheme; to account for them, the average temperature for the region was determined. The thermal calculations performed made it possible to analyse the impact of ambient temperature on the performance of the power plants and to carry out a feasibility study for selecting the best installation scheme. Carrying out the exergy-anergy balance for the proposed power plant schemes confirmed a significant reduction in thermal pollution and identified the most attractive scheme from that perspective. Economic calculations determined the payback periods of the proposed projects and proved the economic feasibility of their construction; the most economically attractive project was identified. An analysis of the economic risks of sensitivity to changes in electricity prices and in ambient temperature was carried out; similar sensitivity calculations were performed for both power plant variants. Based on this analysis, and commissioned by the Iraqi side, the basis of a business project was developed for energy-generating capacity built on the utilization units. The results of the study will not only provide electrical energy that can be used at the enterprise, but will also improve the state of the environment in accordance with international agreements. The results of the research will be used in the construction of new units at refineries in Iraq, according to a letter from the Ministry of Industry and Minerals.
Styles APA, Harvard, Vancouver, ISO, etc.
47

Toni, Tina. « Approximate Bayesian computation for parameter inference and model selection in systems biology ». Thesis, Imperial College London, 2010. http://hdl.handle.net/10044/1/11481.

Texte intégral
Résumé :
In this thesis we present a novel algorithm for parameter estimation and model selection of dynamical systems. The algorithm belongs to the class of approximate Bayesian computation (ABC) methods, which can evaluate posterior distributions without having to calculate likelihoods. It is based on a sequential Monte Carlo framework, which gives our method a computational advantage over other existing ABC methods. The algorithm is applied to a wide variety of biological systems such as prokaryotic and eukaryotic signalling and stress response pathways, gene regulatory networks, and infectious diseases. We illustrate its applicability to deterministic and stochastic models, and draw inferences from different data frameworks. Posterior parameter distributions are analysed in order to gain further insight into parameter sensitivity and sloppiness. The comprehensive analysis provided in this thesis illustrates the flexibility of our new ABC SMC approach. The algorithm has proven useful for efficient parameter inference, systematic model selection and inference-based modelling, and is a novel and useful addition to the systems biology toolbox.
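The likelihood-free idea the abstract describes can be illustrated with plain rejection ABC (not the thesis's sequential Monte Carlo scheme); the Gaussian toy model, uniform prior, summary statistic and tolerance below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: observed data from N(theta_true, 1); we pretend the
# likelihood is unavailable and compare summary statistics instead.
theta_true = 2.0
data = rng.normal(theta_true, 1.0, size=20)

def abc_rejection(data, n_samples=200, eps=0.3):
    # Accept theta when the simulated summary (sample mean) lies
    # within eps of the observed summary -- no likelihood evaluation.
    obs = data.mean()
    accepted = []
    while len(accepted) < n_samples:
        theta = rng.uniform(-5.0, 5.0)                # draw from the prior
        sim = rng.normal(theta, 1.0, size=len(data))  # simulate the model
        if abs(sim.mean() - obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

posterior = abc_rejection(data)  # approximate posterior samples for theta
```

Sequential ABC methods such as ABC SMC improve on this by shrinking eps over a sequence of weighted populations, avoiding the low acceptance rates that rejection ABC suffers for small tolerances.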
Styles APA, Harvard, Vancouver, ISO, etc.
48

Tran, The Truyen. « On conditional random fields : applications, feature selection, parameter estimation and hierarchical modelling ». Thesis, Curtin University, 2008. http://hdl.handle.net/20.500.11937/436.

Texte intégral
Résumé :
There has been a growing interest in stochastic modelling and learning with complex data, whose elements are structured and interdependent. One of the most successful methods to model data dependencies is graphical models, which combine graph theory and probability theory. This thesis focuses on a special type of graphical model known as Conditional Random Fields (CRFs) (Lafferty et al., 2001), in which the output state spaces, when conditioned on some observational input data, are represented by undirected graphical models. The contributions of this thesis involve both (a) broadening the current applicability of CRFs in the real world and (b) deepening the understanding of theoretical aspects of CRFs. On the application side, we empirically investigate the applications of CRFs in two real-world settings. The first application is on a novel domain of Vietnamese accent restoration, in which we need to restore the accents of an accent-less Vietnamese sentence. Experiments on half a million sentences of news articles show that the CRF-based approach is highly accurate. In the second application, we develop a new CRF-based movie recommendation system called Preference Network (PN). The PN jointly integrates various sources of domain knowledge into a large and densely connected Markov network. We obtained competitive results against well-established methods in the recommendation field. On the theory side, the thesis addresses three important theoretical issues of CRFs: feature selection, parameter estimation and modelling recursive sequential data. These issues are all addressed under a general setting of partial supervision, in that training labels are not fully available. For feature selection, we introduce a novel learning algorithm called AdaBoost.CRF that incrementally selects features out of a large feature pool as learning proceeds. AdaBoost.CRF is an extension of the standard boosting methodology to structured and partially observed data.
We demonstrate that AdaBoost.CRF is able to eliminate irrelevant features and, as a result, returns a very compact feature set without significant loss of accuracy. Parameter estimation of CRFs is generally intractable in arbitrary network structures. This thesis contributes to this area by proposing a learning method called AdaBoost.MRF (which stands for AdaBoosted Markov Random Forests). As learning proceeds, AdaBoost.MRF incrementally builds a tree ensemble (a forest) that covers the original network by selecting the best spanning tree at a time. As a result, we can approximately learn many rich classes of CRFs in linear time. The third theoretical work is on modelling recursive, sequential data, in which each level of resolution is a Markov sequence and each state in the sequence is also a Markov sequence at the finer grain. One of the key contributions of this thesis is Hierarchical Conditional Random Fields (HCRF), an extension to the currently popular sequential CRF and the recent semi-Markov CRF (Sarawagi and Cohen, 2004). Unlike previous CRF work, the HCRF does not assume any fixed graphical structures. Rather, it treats structure as an uncertain aspect and can estimate the structure automatically from the data. The HCRF is motivated by the Hierarchical Hidden Markov Model (HHMM) (Fine et al., 1998). Importantly, the thesis shows that the HHMM is a special case of the HCRF with slight modification, and that the semi-Markov CRF is essentially a flat version of the HCRF. Central to our contribution in HCRF is a polynomial-time algorithm based on the Asymmetric Inside Outside (AIO) family developed in (Bui et al., 2004) for learning and inference. Another important contribution is to extend the AIO family to address learning with missing data and inference under partially observed labels. We also derive methods to deal with practical concerns associated with the AIO family, including numerical overflow and cubic-time complexity.
Finally, we demonstrate good performance of HCRF against rivals on two applications: indoor video surveillance and noun-phrase chunking.
Styles APA, Harvard, Vancouver, ISO, etc.
49

Tran, The Truyen. « On conditional random fields : applications, feature selection, parameter estimation and hierarchical modelling ». Curtin University of Technology, Dept. of Computing, 2008. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=18614.

Texte intégral
Résumé :
There has been a growing interest in stochastic modelling and learning with complex data, whose elements are structured and interdependent. One of the most successful methods to model data dependencies is graphical models, which combine graph theory and probability theory. This thesis focuses on a special type of graphical model known as Conditional Random Fields (CRFs) (Lafferty et al., 2001), in which the output state spaces, when conditioned on some observational input data, are represented by undirected graphical models. The contributions of this thesis involve both (a) broadening the current applicability of CRFs in the real world and (b) deepening the understanding of theoretical aspects of CRFs. On the application side, we empirically investigate the applications of CRFs in two real-world settings. The first application is on a novel domain of Vietnamese accent restoration, in which we need to restore the accents of an accent-less Vietnamese sentence. Experiments on half a million sentences of news articles show that the CRF-based approach is highly accurate. In the second application, we develop a new CRF-based movie recommendation system called Preference Network (PN). The PN jointly integrates various sources of domain knowledge into a large and densely connected Markov network. We obtained competitive results against well-established methods in the recommendation field.
On the theory side, the thesis addresses three important theoretical issues of CRFs: feature selection, parameter estimation and modelling recursive sequential data. These issues are all addressed under a general setting of partial supervision, in that training labels are not fully available. For feature selection, we introduce a novel learning algorithm called AdaBoost.CRF that incrementally selects features out of a large feature pool as learning proceeds. AdaBoost.CRF is an extension of the standard boosting methodology to structured and partially observed data. We demonstrate that AdaBoost.CRF is able to eliminate irrelevant features and, as a result, returns a very compact feature set without significant loss of accuracy. Parameter estimation of CRFs is generally intractable in arbitrary network structures. This thesis contributes to this area by proposing a learning method called AdaBoost.MRF (which stands for AdaBoosted Markov Random Forests). As learning proceeds, AdaBoost.MRF incrementally builds a tree ensemble (a forest) that covers the original network by selecting the best spanning tree at a time. As a result, we can approximately learn many rich classes of CRFs in linear time. The third theoretical work is on modelling recursive, sequential data, in which each level of resolution is a Markov sequence and each state in the sequence is also a Markov sequence at the finer grain. One of the key contributions of this thesis is Hierarchical Conditional Random Fields (HCRF), an extension to the currently popular sequential CRF and the recent semi-Markov CRF (Sarawagi and Cohen, 2004). Unlike previous CRF work, the HCRF does not assume any fixed graphical structures.
Rather, it treats structure as an uncertain aspect and it can estimate the structure automatically from the data. The HCRF is motivated by Hierarchical Hidden Markov Model (HHMM) (Fine et al., 1998). Importantly, the thesis shows that the HHMM is a special case of HCRF with slight modification, and the semi-Markov CRF is essentially a flat version of the HCRF. Central to our contribution in HCRF is a polynomial-time algorithm based on the Asymmetric Inside Outside (AIO) family developed in (Bui et al., 2004) for learning and inference. Another important contribution is to extend the AIO family to address learning with missing data and inference under partially observed labels. We also derive methods to deal with practical concerns associated with the AIO family, including numerical overflow and cubic-time complexity. Finally, we demonstrate good performance of HCRF against rivals on two applications: indoor video surveillance and noun-phrase chunking.
Styles APA, Harvard, Vancouver, ISO, etc.
50

Östlund, Rasmus. « PoGO+ Detector Cell Characterisation and Optimisation of Waveform Selection Parameters ». Thesis, KTH, Fysik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-192842.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.