Academic literature on the topic 'Model selection curves'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Model selection curves.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Model selection curves"

1

Müller, Samuel, and Alan H. Welsh. "On Model Selection Curves." International Statistical Review 78, no. 2 (May 5, 2010): 240–56. http://dx.doi.org/10.1111/j.1751-5823.2010.00108.x.

Full text
2

Gonzales Martínez, Rolando. "The Wage Curve, Once More with Feeling: Bayesian Model Averaging of Heckit Models." Econometric Research in Finance 3, no. 2 (October 15, 2018): 79–92. http://dx.doi.org/10.33119/erfin.2018.3.2.1.

Full text
Abstract:
The sensitivity of the wage curve to sample-selection and model uncertainty was evaluated with Bayesian methods. More than 8000 Heckit wage curves were estimated using data from the 2017 household survey of Bolivia. After averaging the estimates with the posterior probability of each model being true, the wage curve elasticity in Bolivia is close to -0.01. This result suggests that in this country the wage curve is inelastic and does not follow the international statistical regularity of wage curves.
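To make the averaging step concrete, here is a minimal illustrative sketch (toy numbers, not the paper's 8000+ Heckit models): each candidate model's elasticity estimate is weighted by the posterior probability of that model being true.

```python
# Toy Bayesian model averaging of wage curve elasticities (invented values).
import numpy as np

elasticities = np.array([-0.005, -0.012, -0.030, 0.002])  # estimate per candidate model
posterior = np.array([0.40, 0.35, 0.15, 0.10])            # P(model | data), sums to 1

bma_elasticity = float(elasticities @ posterior)          # posterior-weighted average
print(f"BMA wage curve elasticity: {bma_elasticity:.4f}")
```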
3

Emmert-Streib, Frank, and Matthias Dehmer. "Evaluation of Regression Models: Model Assessment, Model Selection and Generalization Error." Machine Learning and Knowledge Extraction 1, no. 1 (March 22, 2019): 521–51. http://dx.doi.org/10.3390/make1010032.

Full text
Abstract:
When performing a regression or classification analysis, one needs to specify a statistical model. This model should avoid the overfitting and underfitting of data, and achieve a low generalization error that characterizes its prediction performance. In order to identify such a model, one needs to decide which model to select from candidate model families based on performance evaluations. In this paper, we review the theoretical framework of model selection and model assessment, including error-complexity curves, the bias-variance tradeoff, and learning curves for evaluating statistical models. We discuss criterion-based, step-wise selection procedures and resampling methods for model selection, whereas cross-validation provides the most simple and generic means for computationally estimating all required entities. To make the theoretical concepts transparent, we present worked examples for linear regression models. However, our conceptual presentation is extensible to more general models, as well as classification problems.
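The error-complexity curve the abstract mentions is easy to reproduce; below is a minimal sketch (synthetic data, plain NumPy, not the authors' code) that traces training and cross-validated errors over polynomial degree and selects the degree by 5-fold cross-validation, the generic approach the paper highlights.

```python
# Error-complexity curve: training vs cross-validated MSE over polynomial degree.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 60)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)  # noisy synthetic target

folds = np.array_split(rng.permutation(x.size), 5)      # fixed 5-fold split

def cv_mse(degree):
    """Cross-validated mean squared error of a degree-d polynomial fit."""
    errs = []
    for fold in folds:
        train = np.setdiff1d(np.arange(x.size), fold)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((y[fold] - np.polyval(coef, x[fold])) ** 2))
    return float(np.mean(errs))

for d in range(1, 10):
    train_mse = np.mean((y - np.polyval(np.polyfit(x, y, d), x)) ** 2)
    print(f"degree {d}: train MSE {train_mse:.3f}, CV MSE {cv_mse(d):.3f}")

# training error falls monotonically; CV error penalises overfitting
print("selected degree:", min(range(1, 10), key=cv_mse))
```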
4

Young, Peg, and J. Keith Ord. "Model selection and estimation for technological growth curves." International Journal of Forecasting 5, no. 4 (January 1989): 501–13. http://dx.doi.org/10.1016/0169-2070(89)90005-8.

Full text
5

Fryer, R. J., A. F. Zuur, and N. Graham. "Using mixed models to combine smooth size-selection and catch-comparison curves over hauls." Canadian Journal of Fisheries and Aquatic Sciences 60, no. 4 (April 1, 2003): 448–59. http://dx.doi.org/10.1139/f03-029.

Full text
Abstract:
Parametric size-selection curves are often combined over hauls to estimate a mean selection curve using a mixed model in which between-haul variation in selection is treated as a random effect. This paper shows how the mixed model can be extended to estimate a mean selection curve when smooth nonparametric size-selection curves are used. The method also estimates the between-haul variation in selection at each length and can model fixed effects in the form of the different levels of a categorical variable. Data obtained to estimate the size-selection of dab by a Nordmøre grid are used for illustration. The method can also be used to provide a length-based analysis of catch-comparison data, either to compare a test net with a standard net or to calibrate two research survey vessels. Haddock data from an intercalibration exercise are used for illustration.
6

Wang, Ziming, Zexi Yang, Xiuzhen Wang, Qiang Yue, Zhendong Xia, and Hong Xiao. "Residence Time Distribution (RTD) Applications in Continuous Casting Tundish: A Review and New Perspectives." Metals 12, no. 8 (August 17, 2022): 1366. http://dx.doi.org/10.3390/met12081366.

Full text
Abstract:
The continuous casting tundish is a very important metallurgical reactor in continuous casting production. The flow characteristics of tundishes are usually evaluated by residence time distribution (RTD) curves. At present, the analysis model of RTD curves still has limitations. In this study, we reviewed RTD curve analysis models of the single flow and multi-flow tundish. We compared the mixing model and modified combination model for RTD curves of single flow tundish. At the same time, multi-strand tundish flow characteristics analysis models for RTD curves were analyzed. Based on the RTD curves obtained from a tundish water experiment, the applicability of various models is discussed, providing a reference for the selection of RTD analysis models. Finally, we proposed a flow characteristics analysis of multi-strand tundish based on a cumulative time distribution curve (F-curve). The F-curve and intensity curve can be used to analyze and compare the flow characteristics of multi-strand tundishes. The modified dead zone calculation method is also more reasonable. This method provides a new perspective for the study of multi-strand tundishes or other reactor flow characteristics analysis models.
7

Kulasekera, K. B., and Javier Olaya. "Variable Selection in Nonparametric Regression Model." International Journal of Reliability, Quality and Safety Engineering 11, no. 2 (June 2004): 141–61. http://dx.doi.org/10.1142/s0218539304001415.

Full text
Abstract:
A new procedure is proposed for deciding whether a candidate variable is significant in a general nonparametric regression model with independent covariates. A forward selection process is conducted using a formal test of equality of regression curves at each stage. The proposed procedure does not require multidimensional smoothing at any intermediate step. Asymptotic properties are derived, and simulation results and a real application are presented.
8

Ralston, Stephen. "Size Selection of Snappers (Lutjanidae) by Hook and Line Gear." Canadian Journal of Fisheries and Aquatic Sciences 47, no. 4 (April 1, 1990): 696–700. http://dx.doi.org/10.1139/f90-078.

Full text
Abstract:
The most commonly used theoretical models of gear selection have been the logistic and normal curves. These are usually applied to trawls and gillnets, respectively. In contrast, little critical work has been completed concerning the selective properties of fish hooks, although both types of selectivity curves have arbitrarily been applied to hook catch data in the literature. No study has clearly demonstrated the actual form of a selection curve for hooks. To determine which type of curve (logistic or normal) best describes the selective sampling characteristics of fish hooks, an experiment was conducted in the Marianas Islands during 1982–84. During all fishing operations two different sizes (#20 and #28) of circle fish hooks were fished simultaneously and in equal number. Under these conditions, the length specific ratios of snapper (Lutjanidae) catches taken by the two hook sizes provided a basis for distinguishing which model was most appropriate. Results showed that neither model in its simplest form depicted hook selectivity well. While small hooks caught substantially more small fish, large hooks were somewhat more effective in capturing the larger size classes.
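The logistic-versus-normal comparison described here can be sketched in a few lines; the following is a hypothetical reconstruction with invented data (not Ralston's), fitting both curve forms to length-specific catch proportions and comparing them with a least-squares AIC.

```python
# Compare logistic and normal (dome-shaped) hook-selection curves by AIC.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
length = np.arange(20.0, 60.0, 2.0)                      # length classes (cm)
p_obs = 1 / (1 + np.exp(-0.3 * (length - 35)))           # synthetic "catch proportion"
p_obs = np.clip(p_obs + rng.normal(0, 0.05, length.size), 0, 1)

def logistic(L, a, b):
    return 1 / (1 + np.exp(-a * (L - b)))

def normal_dome(L, mu, sigma):
    return np.exp(-((L - mu) ** 2) / (2 * sigma**2))

aics = {}
for name, f, p0 in [("logistic", logistic, (0.2, 35.0)),
                    ("normal", normal_dome, (40.0, 10.0))]:
    popt, _ = curve_fit(f, length, p_obs, p0=p0, maxfev=10_000)
    rss = float(np.sum((p_obs - f(length, *popt)) ** 2))
    n, k = length.size, len(popt)
    aics[name] = n * np.log(rss / n) + 2 * k             # AIC for least squares
    print(f"{name}: AIC = {aics[name]:.1f}")

print("preferred selection curve:", min(aics, key=aics.get))
```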
9

Krenek, Sascha, Thomas U. Berendonk, and Thomas Petzoldt. "Thermal performance curves of Paramecium caudatum: A model selection approach." European Journal of Protistology 47, no. 2 (May 2011): 124–37. http://dx.doi.org/10.1016/j.ejop.2010.12.001.

Full text
10

Winters, G. H., and J. P. Wheeler. "Direct and Indirect Estimation of Gillnet Selection Curves of Atlantic Herring (Clupea harengus harengus)." Canadian Journal of Fisheries and Aquatic Sciences 47, no. 3 (March 1, 1990): 460–70. http://dx.doi.org/10.1139/f90-050.

Full text
Abstract:
Length-specific selection curves for Atlantic herring (Clupea harengus) were calculated for a series of gillnets ranging in mesh size from 50.8 to 76.2 mm (stretched measure) using Holt's (1963) model (ICNAF Spec. Publ. 5: 106–115). These curves were then compared with direct estimates of length-specific selectivity obtained from a comparison of gillnet catch length frequencies with population length composition data as determined from acoustic surveys. Selection curves calculated indirectly using the Holt model were unimodal and congruent. The empirical selection curves, however, were multimodal, and fishing power varied with mesh size. These differences in selectivities were due to the fact that herring were caught not only by wedging at the maximum girth but also at other body positions such as the gills and snout. Each of these modes of capture has different length-specific selectivity characteristics and, since the relative contributions of the different modes of capture varied both between nets and annually, the selection curve of herring for a particular mesh size is not unique. It can, however, be reasonably approximated when girth is used as the selection criterion. Direct empirical selectivities are therefore recommended when interpreting population parameters from herring gillnet catch data.

Dissertations / Theses on the topic "Model selection curves"

1

Monteiro, Andre Monteiro Dalmeida. "Non-Parametric Estimations of Interest Rate Curves: Model Selection Criterion, Performance Determinant Factors and Bid-Ask Spread." Pontifícia Universidade Católica do Rio de Janeiro, 2002. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=2684@1.

Full text
Abstract:
This thesis investigates interest rate curve estimation under a non-parametric approach. The text is divided into two parts. The first focuses on the criterion used to select the best-performing method for interpolating the Brazilian interest rate curve. A selection criterion is proposed that measures out-of-sample performance by combining leave-k-out cross-validation resampling strategies applied to the whole sample of curves, where 1 ≤ k ≤ K and K is a function of the number of contracts observed in each curve. Some particularities of the problem substantially reduce the required computational effort, making the proposed criterion feasible. The data sample is daily, from January 1997 to February 2001. The proposed criterion selected the natural cubic spline, used as a perfect-fitting estimation method. Considering trade rate precision, the spline proved unbiased. However, quantitative analysis of performance determinant factors showed the existence of out-of-sample error heteroskedasticities. From a conditional variance specification of these errors, a security interval scheme is proposed for interest rates generated by the perfect-fitting natural cubic spline. A backtest showed that the proposed security interval is consistent, accommodating the assumptions and approximations involved. The second part estimates the US dollar-Libor interest rate swap curve using the Support Vector Machine (SVM), a method derived from Statistical Learning Theory. SVM research has produced important theoretical results; however, the number of implementations on real regression problems remains low. The SVM has attractive characteristics for interest rate curve modeling: it can introduce, already in its estimation process, a priori information about the curve shape and about the liquidity and price formation aspects of the contracts that generate the curve. The latter information is quantified by the bid-ask spread (BAS) of each contract. The basic SVM formulation is changed to incorporate different bid-ask spread values without losing its properties. Great attention is given to the question of how to extract a priori information from the typical swap curve shape to be used in SVM parameter selection. The data sample is daily, from March 1997 to April 2001. The out-of-sample performances of different SVM specifications are compared with those of other estimation methods. The SVM achieved the best control of the trade-off between bias and variance of out-of-sample errors.
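As a rough illustration of the leave-k-out idea in the first part (a sketch only, with invented yields and the leave-one-out special case; the thesis works with daily Brazilian futures data), one can score perfect-fitting interpolators by deleting interior points and measuring how well each method recovers them:

```python
# Leave-one-out comparison of perfect-fit yield-curve interpolators.
import numpy as np
from scipy.interpolate import CubicSpline

maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10])          # years (invented)
rates = np.array([4.8, 5.0, 5.3, 5.6, 5.8, 6.1, 6.2, 6.3])     # % p.a. (invented)

def loo_rmse(predict):
    errs = []
    for i in range(1, maturities.size - 1):       # interior maturities only
        keep = np.delete(np.arange(maturities.size), i)
        pred = predict(maturities[keep], rates[keep], maturities[i])
        errs.append((pred - rates[i]) ** 2)
    return float(np.sqrt(np.mean(errs)))

spline = lambda x, y, xq: float(CubicSpline(x, y, bc_type="natural")(xq))
linear = lambda x, y, xq: float(np.interp(xq, x, y))

print("natural cubic spline LOO-RMSE:", round(loo_rmse(spline), 4))
print("linear interpolation LOO-RMSE:", round(loo_rmse(linear), 4))
```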
2

Chia, Yan Wah. "Radiation from curved (conical) frequency selective surfaces." Thesis, Loughborough University, 1993. https://dspace.lboro.ac.uk/2134/7200.

Full text
Abstract:
The thesis deals with the analysis of a microwave Frequency Selective Surface (FSS) on a conical dielectric radome illuminated by a feed horn located at the base. Two approaches have been adopted to solve this problem. The first approach is to calculate the element currents under the assumption that the surface is locally flat. Consequently, the element current at that locality can be determined by employing Floquet modal analysis. The local incidence has been modelled from the radiation pattern of the source or the aperture fields of the feed. Three types of feed model were used to account for the field illumination on the radome. The transmitted fields from the curved surface are obtained from the sum of the radiated fields due to the equivalent magnetic and electric current sources distributed in each local unit cell of the conical surface. This method treats the interaction of neighbouring FSS elements only. In the second approach the curvature is taken into account by dividing each element into segments which conform to the curved surface. An integral formulation is used to take into account the interaction of all the elements. The current source in each FSS element from the formulation is solved using the method of moments (MoM) technique. A linear system of simultaneous equations is obtained from the MoM and has been solved using an elimination method and an iterative method which employs conjugate gradients. The performance of both methods has been compared with regard to the speed of computation and the memory storage requirements. New formulations using quasi-static approximations have been derived to account for thin dielectric backing in the curved aperture FSS analysis. Computer models have been developed to predict the radiation performance of the curved (conical) FSS. Experiments were performed in an anechoic chamber where the FSS cone was mounted on a jig resting on a turntable. The measuring setup contained a sweep oscillator that supplied power to a transmitting feed placed at the base of the cone. Amplitude and phase values of the far-field radiation pattern of the cone were measured with the aid of a vector network analyser. Cones with different dimensions and FSS element geometries were constructed and the measured transmission losses and radiation patterns compared with predictions.
3

Paterson, Chay Giles Blair. "Minimal models of invasion and clonal selection in cancer." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28986.

Full text
Abstract:
One of the defining features of cancer is cell migration: the tendency of malignant cells to become motile and move significant distances through intervening tissue. This is a necessary precondition for metastasis, the ability of cancers to spread, which once underway permits more rapid growth and complicates effective treatment. In addition, the emergence and development of cancer is currently believed to be an evolutionary process, in which the emergence of cancerous cell lines and the subsequent appearance of resistant clones is driven by selection. In this thesis we develop minimal models of the relationship between motility, growth, and evolution of cancer cells. These should be simple enough to be easily understood and analysed, but remain realistic in their biologically relevant assumptions. We utilise simple simulations of a population of individual cells in space to examine how changes in mechanical properties of invasive cells and their surroundings can affect the speed of cell migration. We similarly examine how differences in the speed of migration can affect the growth of tumours. From this we conclude that cells with a higher elastic stiffness experience stronger resistance to their movement through tissue, but this resistance is limited by the elasticity of the surrounding tissue. We also find that the growth rate of large lesions depends weakly on the migration speed of escaping cells, and has stronger and more complex dependencies on the rates of other stochastic processes in the model, namely the rate at which cells transition to being motile and the reverse rate at which cells cease to be motile. To examine how the rates of growth and evolution of an ensemble of cancerous lesions depends on their geometry and underlying fitness landscape, we develop an analytical framework in which the spatial structure is coarse grained and the cancer treated as a continuously growing system with stochastic migration events. Both the fully stochastic realisations of the system and deterministic population transport approaches are studied. Both approaches conclude that the whole ensemble can undergo migration-driven exponential growth regardless of the dependence of size on time of individual lesions, and that the relationship between growth rate and rate of migration is determined by the geometrical constraints of individual lesions. We also find that linear fitness landscapes result in faster-than-exponential growth of the ensemble, and we can determine the expected number of driver mutations present in several important cases of the model. Finally, we study data from a clinical study of the effectiveness of a new low-dose combined chemotherapy. This enables us to test some important hypotheses about the growth rate of pancreatic cancers and the speed with which evolution occurs in reality. We test a moderately successful simple model of the observed growth curves, and use it to infer how frequently drug resistant mutants appear in this clinical trial. We conclude that the main shortcomings of the model are the difficulty of avoiding over-interpretation in the face of noise and small datasets. Despite this, we find that the frequency of resistant mutants is far too high to be explained without resorting to novel mechanisms of cross-resistance to multiple drugs. We outline some speculative explanations and attempt to provide possible experimental tests.
4

Wang, Wen-Chyi. "Regularized variable selection in proportional hazards model using area under receiver operating characteristic curve criterion." College Park, Md.: University of Maryland, 2009. http://hdl.handle.net/1903/9972.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2009.
Thesis research directed by: Dept. of Mathematics. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
5

Flake, Darl D. II. "Separation of Points and Interval Estimation in Mixed Dose-Response Curves with Selective Component Labeling." DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/4697.

Full text
Abstract:
This dissertation develops, applies, and investigates new methods to improve the analysis of logistic regression mixture models. An interesting dose-response experiment was previously carried out on a mixed population, in which the class membership of only a subset of subjects (survivors) were subsequently labeled. In early analyses of the dataset, challenges with separation of points and asymmetric confidence intervals were encountered. This dissertation extends the previous analyses by characterizing the model in terms of a mixture of penalized (Firth) logistic regressions and developing methods for constructing profile likelihood-based confidence and inverse intervals, and confidence bands in the context of such a model. The proposed methods are applied to the motivating dataset and another related dataset, resulting in improved inference on model parameters. Additionally, a simulation experiment is carried out to further illustrate the benefits of the proposed methods and to begin to explore better designs for future studies. The penalized model is shown to be less biased than the traditional model and profile likelihood-based intervals are shown to have better coverage probability than Wald-type intervals. Some limitations, extensions, and alternatives to the proposed methods are discussed.
6

Boruvka, Audrey. "Data-driven estimation for Aalen's additive risk model." Thesis, Kingston, Ont. : [s.n.], 2007. http://hdl.handle.net/1974/489.

Full text
7

Lee, Kyeong Eun. "Bayesian models for DNA microarray data analysis." Diss., Texas A&M University, 2005. http://hdl.handle.net/1969.1/2465.

Full text
Abstract:
Selection of significant genes via expression patterns is important in a microarray problem. Owing to small sample size and large number of variables (genes), the selection process can be unstable. This research proposes a hierarchical Bayesian model for gene (variable) selection. We employ latent variables in a regression setting and use a Bayesian mixture prior to perform the variable selection. Due to the binary nature of the data, the posterior distributions of the parameters are not in explicit form, and we need to use a combination of truncated sampling and Markov Chain Monte Carlo (MCMC) based computation techniques to simulate the posterior distributions. The Bayesian model is flexible enough to identify the significant genes as well as to perform future predictions. The method is applied to cancer classification via cDNA microarrays. In particular, the genes BRCA1 and BRCA2 are associated with a hereditary disposition to breast cancer, and the method is used to identify the set of significant genes to classify BRCA1 and others. Microarray data can also be applied to survival models. We address the issue of how to reduce the dimension in building the model by selecting significant genes as well as assessing the estimated survival curves. Additionally, we consider the well-known Weibull regression and semiparametric proportional hazards (PH) models for survival analysis. With microarray data, we need to consider the case where the number of covariates p exceeds the number of samples n. Specifically, for a given vector of response values, which are times to event (death or censored times) and p gene expressions (covariates), we address the issue of how to reduce the dimension by selecting the responsible genes, which are controlling the survival time. This approach enables us to estimate the survival curve when n << p. In our approach, rather than fixing the number of selected genes, we will assign a prior distribution to this number. The approach creates additional flexibility by allowing the imposition of constraints, such as bounding the dimension via a prior, which in effect works as a penalty. To implement our methodology, we use a Markov Chain Monte Carlo (MCMC) method. We demonstrate the use of the methodology with (a) diffuse large B-cell lymphoma (DLBCL) complementary DNA (cDNA) data and (b) Breast Carcinoma data. Lastly, we propose a mixture of Dirichlet process models using discrete wavelet transform for curve clustering. In order to characterize these time-course gene expressions, we consider them as trajectory functions of time and gene-specific parameters and obtain their wavelet coefficients by a discrete wavelet transform. We then build cluster curves using a mixture of Dirichlet process priors.
8

Plašil, Miroslav. "Empirické ověření nové Keynesiánské Philipsovy křivky v ČR [Empirical verification of the New Keynesian Phillips curve in the Czech Republic]." Doctoral thesis, Vysoká škola ekonomická v Praze, 2003. http://www.nusl.cz/ntk/nusl-77088.

Full text
Abstract:
The New Keynesian Phillips curve (NKPC) has become a central model for studying the relation between inflation and real economic activity, notably in the framework of optimal monetary policy design. However, some recent evidence suggests that empirical data are usually at odds with the underlying theory. The model, due to its inherent structure, represents a statistical challenge in its own right. Since Galí and Gertler (1999) published their seminal paper introducing estimation via GMM techniques, a heated debate on its empirical relevance has been under way. Their approach has been heavily criticised by later authors, mainly on the grounds of the questionable behaviour of the GMM estimator in the NKPC context and/or its small sample properties. The common criticism includes sensitivity to the choice of instrument set, weak identification and small sample bias. In this thesis I propose a new estimation strategy that provides a remedy to the above-mentioned shortcomings and allows reliable estimates to be obtained. The procedure exploits recent advances in GMM theory as well as in other fields of statistics, in particular in the areas of time series factor analysis and the bootstrap. The proposed estimation strategy consists of several consecutive steps: first, to reduce the small sample bias resulting from excessive use of instruments, I summarize all available information by employing factor analysis and include the estimated factors in the information set. In the second step I use statistical information criteria to select optimal instruments, and eventually I obtain confidence intervals on parameters using the bootstrap method. In the NKPC context all these methods were used for the first time, and they can also be used independently. Their combination, however, provides a synergistic effect that helps to improve the properties of the estimates and to check the efficiency of the given steps. The obtained results suggest that the NKPC model can explain Czech inflation dynamics fairly well and provide some support for the underlying theory. Among other things, the results imply that the policy of disinflation may not be as costly with respect to a loss in aggregate product as earlier versions of the Phillips curve would indicate. However, finding a good proxy for real economic activity has proved to be a difficult task. In particular, we demonstrated that results are conditional on how the measure is calculated; some measures even showed countercyclical behaviour. This issue -- discussed in the thesis only in passing -- is a subject of future research. In addition to the proposed strategy and the provided parameter estimates, the thesis offers some partial simulation-based findings. The simulations elaborate on earlier literature on the naive bootstrap in the GMM context and study the performance of bootstrap modifications of unit root and KPSS tests.
9

Rückert, Nadja. "Studies on two specific inverse problems from imaging and finance." Doctoral thesis, Universitätsbibliothek Chemnitz, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-91587.

Full text
Abstract:
This thesis deals with regularization parameter selection methods in the context of Tikhonov-type regularization with Poisson distributed data, in particular the reconstruction of images, as well as with the identification of the volatility surface from observed option prices. In Part I we examine the choice of the regularization parameter when reconstructing an image, which is disturbed by Poisson noise, with Tikhonov-type regularization. This type of regularization is a generalization of the classical Tikhonov regularization in the Banach space setting and often called variational regularization. After a general consideration of Tikhonov-type regularization for data corrupted by Poisson noise, we examine the methods for choosing the regularization parameter numerically on the basis of two test images and real PET data. In Part II we consider the estimation of the volatility function from observed call option prices with the explicit formula which has been derived by Dupire using the Black-Scholes partial differential equation. The option prices are only available as discrete noisy observations so that the main difficulty is the ill-posedness of the numerical differentiation. Finite difference schemes, as regularization by discretization of the inverse and ill-posed problem, do not overcome these difficulties when they are used to evaluate the partial derivatives. Therefore we construct an alternative algorithm based on the weak formulation of the dual Black-Scholes partial differential equation and evaluate the performance of the finite difference schemes and the new algorithm for synthetic and real option prices.
10

Sun, Limei. "Probabilistic model designs and selection curves of trawl gears." 2001.

Find full text

Books on the topic "Model selection curves"

1

Weinberg, Jonathan M. Knowledge, Noise, and Curve-Fitting. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198724551.003.0016.

Full text
Abstract:
The psychology of the ‘Gettier effect’ appears robust—but complicated. Contrary to initial reports, more recent and thorough work by several groups of researchers indicates strongly that it is in fact found widely across cultures. Nonetheless, I argue that the pattern of psychological results should not at all be taken to settle the epistemological questions about the nature of knowledge. For the Gettier effect occurs both intermittently and with sensitivity to epistemically irrelevant factors. In short, the effect is noisy. And good principles of model selection indicate that, the noisier one’s data, the more one should prefer simpler curves over those that may be more complicated yet hew closer to the data. While we should not endorse K=JTB at this time, nonetheless the question ‘Is knowledge really just justified true belief?’ ought to be treated as once again in play.
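Weinberg's point about noise and curve complexity can be seen in a toy calculation (mine, not the chapter's): under an information criterion such as BIC, the same underlying signal tends to support a more complex curve when observations are quiet and a simpler one when they are noisy.

```python
# BIC choice between linear and cubic fits at two noise levels.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 1, 40)
signal = 1.0 + 2.0 * x + 0.3 * np.sin(6 * x)     # mildly nonlinear truth

def bic(y, degree):
    coef = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coef, x)) ** 2)
    return x.size * np.log(rss / x.size) + (degree + 1) * np.log(x.size)

for noise_sd in (0.05, 0.5):                     # quiet vs noisy observations
    y = signal + rng.normal(0, noise_sd, x.size)
    scores = {d: bic(y, d) for d in (1, 3)}
    pick = min(scores, key=scores.get)
    print(f"noise sd {noise_sd}: BIC(linear)={scores[1]:.1f}, "
          f"BIC(cubic)={scores[3]:.1f} -> degree {pick}")
```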
2

Back, Kerry E. Information, Strategic Trading, and Liquidity. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780190241148.003.0024.

Full text
Abstract:
The chapter describes some asymmetric information models of liquidity. In these models, trades move prices because of the possibility that the trades are based on information not known to the market. A strategic trader is one who takes into consideration that her trades move prices. The chapter describes the Glosten-Milgrom model of the bid-ask spread, the Kyle model of market depth, the Glosten model of limit-order markets, and models of auctions. Except for the auction models, prices are set in these models by uninformed market makers who face adverse selection from informed traders. In the auction models, prices are set by informed individuals bidding against one another. The winner's curse, the revenue equivalence theorem, and related aspects of auctions are explained.
3

Crespo Miguel, Mario. Automatic corpus-based translation of a Spanish FrameNet medical glossary. Editorial Universidad de Sevilla, 2020. http://dx.doi.org/10.12795/9788447230051.

Full text
Abstract:
Computational linguistics is the scientific study of language from a computational perspective. Its aim is to provide computational models of natural language processing (NLP) and incorporate them into practical applications such as speech synthesis, speech recognition, automatic translation and many others where automatic processing of language is required. The use of good linguistic resources is crucial for the development of computational linguistics systems. Real-world applications need resources which systematize the way linguistic information is structured in a certain language. There is a continuous effort to increase the number of linguistic resources available for the linguistic and NLP community. Most of the existing linguistic resources have been created for English, mainly because most modern approaches to computational lexical semantics emerged in the United States. This situation is changing over time and some of these projects have subsequently been extended to other languages; however, in all cases, much time and effort need to be invested in creating such resources. Because of this, one of the main purposes of this work is to investigate the possibility of extending these resources to other languages such as Spanish. In this work, we introduce some of the most important resources devoted to lexical semantics, such as WordNet or FrameNet, and those focusing on Spanish such as 3LB-LEX or Adesse. Of these, this project focuses on FrameNet. The project aims to document the range of semantic and syntactic combinatory possibilities of words in English. Words are grouped according to the different frames or situations evoked by their meaning. If we focus on a particular topic domain like medicine and try to describe it in terms of FrameNet, we would probably obtain frames representing it like CURE, formed by words like cure.v, heal.v or palliative.a, or MEDICAL CONDITIONS, with lexical units such as arthritis.n, asphyxia.n or asthma.n. The purpose of this work is to develop an automatic means of selecting frames from a particular domain and to translate them into Spanish. As we have stated, we will focus on medicine. The selection of the medical frames will be corpus-based, that is, we will extract all the frames that are statistically significant from a representative corpus. We will discuss why using a corpus-based approach is a reliable and unbiased way of dealing with this task. We will present an automatic method for the selection of FrameNet frames and, in order to make sure that the results obtained are coherent, we will contrast them with a previous manual selection or benchmark. Outcomes will be analysed using the F-score, a measure widely used in this type of application. We obtained a 0.87 F-score according to our benchmark, which demonstrates the applicability of this type of automatic approach. The second part of the book is devoted to the translation of this selection into Spanish. The translation will be made using EuroWordNet, an extension of the Princeton WordNet for some European languages. We will explore different ways to link the different units of our medical FrameNet selection to a certain WordNet synset or set of words that have similar meanings. Matching the frame units to a specific synset in EuroWordNet allows us both to translate them into Spanish and to add new terms provided by WordNet into FrameNet. The results show that translation can be done quite accurately (95.6%).
We hope this work can add new insight into the field of natural language processing.
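For reference, the F-score used above is simply the harmonic mean of precision and recall against the manual benchmark; a tiny sketch with hypothetical frame-selection counts:

```python
# F-score from true positives, false positives and false negatives.
def f_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(round(f_score(tp=48, fp=8, fn=6), 2))  # counts invented for illustration
```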
4

Aguilera-Cobos, Lorena, Rebeca Isabel-Gómez, and Juan Antonio Blasco-Amaro. Efectividad de la limitación de la movilidad en la evolución de la pandemia por Covid-19. AETSA Área de Evaluación de Tecnologías Sanitarias de Andalucía, Fundación Progreso y salud. Consejería de Salud y Familias. Junta de Andalucía, 2022. http://dx.doi.org/10.52766/pyui7071.

Full text
Abstract:
Introduction
During the Covid-19 pandemic, non-pharmacological interventions (NPIs) aimed to minimise the spread of the virus as much as possible to avoid the most severe cases and the collapse of health systems. These measures included mobility restrictions in several countries, including Spain.
Objective
To assess the impact of mobility constraints on incidence, transmission, severe cases and mortality in the evolution of the Covid-19 pandemic. These constraints include:
• Mandatory home confinement.
• Recommendation to stay at home.
• Perimeter closures for entry and/or exit from established areas.
• Restriction of night-time mobility (curfew).
Methodology
Systematic literature review, including documents from official bodies, systematic reviews and meta-analyses. The following reference databases were consulted until October 2021 (free and controlled language): Medline, EMBASE, Cochrane Library, TripDB, Epistemonikos, Royal College of London, COVID-END, COVID-19 Evidence Reviews, WHO, ECDC and CDC. Study selection and quality analysis were performed by two independent researchers. References were filtered firstly by title and abstract and secondly by full text in the Covidence tool using a priori inclusion and exclusion criteria. Synthesis of the results was done qualitatively. The quality of the included studies was assessed using the AMSTAR-II tool.
Results
The literature search identified 642 studies, of which 38 were excluded as duplicates. Of the 604 potentially relevant studies, 12 studies (10 systematic reviews and 2 official agency papers) were included in the analysis after filtering. One of the official agency papers was from the European Centre for Disease Prevention and Control (ECDC) and the other was from the Ontario Agency for Health Promotion and Protection (OHP). The result of the quality assessment of the included systematic reviews with the AMSTAR-II tool was: 3 reviews of moderate quality, 6 reviews of low quality and 1 review of critically low quality. The interventions analysed in the included studies were divided into 2 categories: the first category comprised mandatory home confinement, recommendation to stay at home and curfew, and the second comprised perimeter blocking of entry and/or exit (local, cross-community, national or international). This division was made because the included reviews analysed the measures of mandatory home confinement, advice to stay at home and curfew together, without being able to carry out a disaggregated analysis. The systematic reviews included for the evaluation of home confinement, stay-at-home advice and curfew report a decrease in incidence levels, transmission and severe cases following the implementation of mobility limitation interventions compared with the no-measure comparator. These conclusions are supported by the quantitative or qualitative results of the studies they include. All reviews also emphasise that, to increase the effectiveness of these restrictions, it is necessary to combine them with other public health measures. In the systematic reviews included for the assessment of entry and/or exit perimeter closure, most of the included studies were found to be modelling studies based on mathematical models. All systematic reviews report a decrease in incidence, transmission and severe case levels following the implementation of travel restriction interventions. The great heterogeneity of the travel restrictions applied, such as travel bans, border closures, passenger testing or screening, mandatory quarantine of travellers or optional recommendations for travellers to stay at home, makes data analysis and evaluation of interventions difficult.
Conclusions
Mobility restrictions were one of the main NPI measures implemented during the Covid-19 pandemic. It can be concluded from the review that there is evidence for a positive impact of NPIs on the development of the COVID-19 pandemic. The heterogeneity of the data from the included studies and their low quality make it difficult to assess the effectiveness of mobility limitations in a disaggregated manner. Despite this, all the included reviews show a decrease in incidence, transmission, hospitalisations and deaths following the application of the measures under study. These measures are more effective when the restrictions are implemented earlier in the pandemic, applied for a longer period and more rigorous in their application.

Book chapters on the topic "Model selection curves"

1

Dasgupta, Ratan. "Model Selection and Validation in Agricultural Context: Extended Uniform Distribution and Some Characterization Theorems." In Growth Curve Models and Applications, 183–97. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63886-7_9.

Full text
2

Bab-Hadiashar, Alireza, and Niloofar Gheissari. "Model Selection for Range Segmentation of Curved Objects." In Lecture Notes in Computer Science, 83–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24670-1_7.

Full text
3

Lu, Zhenqiu, Zhiyong Zhang, and Allan Cohen. "Bayesian Methods and Model Selection for Latent Growth Curve Models with Missing Data." In Springer Proceedings in Mathematics & Statistics, 275–304. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-9348-8_18.

Full text
4

Towill, D. R. "Selecting Learning Curve Models for Human Operator Performance." In Applications of Human Performance Models to System Design, 403–17. Boston, MA: Springer US, 1989. http://dx.doi.org/10.1007/978-1-4757-9244-7_29.

Full text
5

Fujikoshi, Yasunori. "Model Selection Criteria for Growth Curve Model with Hierarchical Within-Individual Design Matrices." In New Developments in Psychometrics, 433–41. Tokyo: Springer Japan, 2003. http://dx.doi.org/10.1007/978-4-431-66996-8_49.

Full text
6

Hammes, Gabriel Almeida, Paula Medina Maçaira, and Fernando Luiz Cyrino Oliveira. "Data Analytics for the Selection of Wind Turbine Power Curve Models." In Operations Management for Social Good, 37–44. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-23816-2_4.

Full text
7

Oung, O., A. Bezuijen, and F. A. Weststrate. "Development of Selective Pore Pressure Transducers to Measure In Situ Pc-S Curves During Model Tests." In Field Screening Europe 2001, 85–90. Dordrecht: Springer Netherlands, 2002. http://dx.doi.org/10.1007/978-94-010-0564-7_14.

Full text
8

Zhu, Hongxiao, and Dennis D. Cox. "A Functional Generalized Linear Model with Curve Selection in Cervical Pre-cancer Diagnosis Using Fluorescence Spectroscopy." In Institute of Mathematical Statistics Lecture Notes - Monograph Series, 173–89. Beachwood, Ohio, USA: Institute of Mathematical Statistics, 2009. http://dx.doi.org/10.1214/09-lnms5711.

Full text
9

Anderson, Raymond A. "Model Training." In Credit Intelligence & Modelling, 741–70. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780192844194.003.0024.

Full text
Abstract:
The chapter provides an approach and issues for model training using Logistic Regression. (1) Regression—key model qualities plus i) options and settings, and ii) outputs to be expected/demanded. (2) Variable selection—i) criteria; ii) automation; iii) stepwise review; iv) constraining betas, where coefficients do not make sense; v) stepping by Gini, model pruning. (3) Correlation checks—i) multicollinearity—checks of variance inflation factors; ii) correlations—further checks to guard against the inclusion of highly correlated variables. (4) Blockwise variable selection—treatment in groups: i) variable reduction; ii) staged, or hierarchical, regression; iii) embedded, model outputs as predictors; iv) ensemble, using outputs of other models. (5) Multi-model comparisons—Lorenz curves and strategy curves, should choices not be clear. (6) Calibration—i) simple adjustment by a constant; ii) piecewise, varying adjustments by the prediction; iii) score and points—adjusting the final score or constituent points; iv) MAPA, for more complex situations.
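For the multi-model comparison step (5), a Lorenz-type capture curve and a Gini coefficient can be computed directly; the sketch below uses invented scores and a simple approximation of twice the area between the capture curve and the diagonal, not the book's own code.

```python
# Gini coefficients of two candidate scorecards via cumulative bad capture.
import numpy as np

rng = np.random.default_rng(7)
bad = rng.integers(0, 2, 1000)                    # 1 = default, 0 = good
score_a = -1.0 * bad + rng.normal(0, 1, 1000)     # stronger scorecard
score_b = -0.4 * bad + rng.normal(0, 1, 1000)     # weaker scorecard

def gini(score, bad):
    order = np.argsort(score)                     # ascending: riskiest first
    cum_bad = np.cumsum(bad[order]) / bad.sum()   # Lorenz-type capture curve
    cum_all = np.arange(1, bad.size + 1) / bad.size
    # approx. 2 x area between capture curve and diagonal (unnormalised Gini)
    return float(2 * np.mean(cum_bad - cum_all))

print("Gini, model A:", round(gini(score_a, bad), 3))
print("Gini, model B:", round(gini(score_b, bad), 3))
```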
10

Musonge, P. "A Statistical Approach to Model Selection for Dynamic Adsorption Columns." In Advances in Wastewater Treatment II, 128–67. Materials Research Forum LLC, 2021. http://dx.doi.org/10.21741/9781644901397-5.

Full text
Abstract:
A variety of models have been used to describe and predict breakthrough curves for dynamic adsorption systems, in order to scale up laboratory and pilot plant systems. There are, however, limitations in the applicability of existing models. The study is aimed at providing unambiguous approaches to selecting the best-performing model among the Thomas, Yoon-Nelson and Bohart-Adams (B-A) models for three dynamic adsorption systems. Three approaches were implemented in this study using published experimental data from three adsorption systems. The first approach was the application of statistical analysis between actual and predicted breakthrough curves without modifying the models. The second and third approaches were the application of local mean values (LMV) and global mean values (GMV) of empirical constants to predict breakthrough curves. Predictive and generalization performances of the three models were evaluated using the statistical criteria of Mean Absolute Error (MAE), Root Mean Squared Error (RMSE) and Correlation Coefficient (R2).
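As a sketch of the first approach (synthetic data; a reduced two-parameter logistic form of the Thomas model is used here for brevity, not necessarily the chapter's notation), one can fit a candidate model to a breakthrough curve and score it with the same three criteria:

```python
# Fit a reduced Thomas-type breakthrough model and score it with MAE/RMSE/R2.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
t = np.linspace(0, 300, 30)                       # time (min)
ct_c0 = 1 / (1 + np.exp(0.05 * (150 - t)))        # synthetic breakthrough data
ct_c0 = ct_c0 + rng.normal(0, 0.02, t.size)

def thomas(t, k, tau):
    # reduced Thomas form: C/C0 = 1 / (1 + exp(k * (tau - t)))
    return 1 / (1 + np.exp(k * (tau - t)))

popt, _ = curve_fit(thomas, t, ct_c0, p0=(0.01, 100.0))
pred = thomas(t, *popt)

mae = float(np.mean(np.abs(ct_c0 - pred)))
rmse = float(np.sqrt(np.mean((ct_c0 - pred) ** 2)))
r2 = 1 - np.sum((ct_c0 - pred) ** 2) / np.sum((ct_c0 - ct_c0.mean()) ** 2)
print(f"MAE={mae:.4f}  RMSE={rmse:.4f}  R2={r2:.4f}")
```

The same scoring loop, repeated over the Thomas, Yoon-Nelson and Bohart-Adams forms, gives the kind of selection table the chapter describes.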

Conference papers on the topic "Model selection curves"

1

Arkalgud, Ravi, Andrew McDonald, and Ross Brackenridge. "AUTOMATED SELECTION OF INPUTS FOR LOG PREDICTION MODELS USING A NEW FEATURE SELECTION METHOD." In 2021 SPWLA 62nd Annual Logging Symposium Online. Society of Petrophysicists and Well Log Analysts, 2021. http://dx.doi.org/10.30632/spwla-2021-0091.

Full text
Abstract:
Automation is becoming an integral part of our daily lives as technology and techniques rapidly develop. Many automation workflows are now routinely being applied within the geoscience domain. The success of automated modelling fundamentally hinges on the appropriate choice of parameters and the speed of processing. The entire process demands that the data being fed into any machine learning model is of good quality. The technological advances in well logging technology over decades have enabled the collection of vast amounts of data across wells and fields. This poses a major issue in automating petrophysical workflows: it is necessary to ensure that the data being fed in is appropriate and fit for purpose. The selection of features (logging curves) and parameters for machine learning algorithms has therefore become a topic at the forefront of related research. Inappropriate feature selections can lead to erroneous results and reduced precision, and have proved to be computationally expensive. Experienced Eye (EE) is a novel methodology, derived from Domain Transfer Analysis (DTA), which seeks to identify and elicit the optimum input curves for modelling. During the EE solution process, relationships between the input variables and target variables are developed based on characteristics and attributes of the inputs instead of statistical averages. The relationships so developed can then be ranked appropriately and selected for the modelling process. This paper focuses on three distinct petrophysical data scenarios where inputs are ranked prior to modelling: prediction of continuous permeability from discrete core measurements, porosity from multiple logging measurements, and key geomechanical properties. Each input curve is ranked against a target feature. For each case study, the best-ranked features were carried forward to the modelling stage, and the results are validated alongside conventional interpretation methods. Ranked features were also compared between different machine learning algorithms: DTA, Neural Networks and Multiple Linear Regression. Results are compared with the available data for the various case studies. The new feature selection method has been shown to improve the accuracy and precision of prediction results from multiple modelling algorithms.
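Experienced Eye and DTA are proprietary, so the sketch below is only a generic stand-in: it ranks candidate input logs against a target curve by absolute Spearman correlation (synthetic curves, invented threshold) and carries the top-ranked features forward to modelling.

```python
# Rank candidate well-log curves against a target before modelling.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(11)
n = 500
logs = {                                           # synthetic logging curves
    "GR":   rng.normal(80, 20, n),                 # gamma ray
    "RHOB": rng.normal(2.45, 0.10, n),             # bulk density
    "NPHI": rng.normal(0.25, 0.05, n),             # neutron porosity
}
target = 0.4 - 1.2 * (logs["RHOB"] - 2.45) + rng.normal(0, 0.02, n)  # "porosity"

ranking = sorted(
    ((name, abs(spearmanr(curve, target)[0])) for name, curve in logs.items()),
    key=lambda item: item[1], reverse=True,
)
for name, rho in ranking:
    print(f"{name}: |rho| = {rho:.3f}")

selected = [name for name, rho in ranking if rho > 0.3]   # arbitrary cut-off
print("inputs carried to modelling:", selected)
```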
2

Xu, Zhi Gang, and JianFeng Li. "Human Head Aesthetic Design Using Hybrid Genetic Algorithm." In ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/detc2002/dac-34039.

Full text
Abstract:
In this paper a paradigm of human head aesthetic design based on a hybrid genetic algorithm is proposed. Hybrid means the integration of interactive and natural selection of the population in the genetic algorithm during the evolution process. First, a sketching paradigm is proposed to sketch the head profile by mouse input; the profile is then parameterized and modified, and many more profiles are generated by the GA optimizer, from which the user can select the best one. After the 2D profile design, a 3D reconstitution algorithm is developed to construct the 3D head as a NURBS model by analyzing four input sketch curves, i.e., one longitudinal curve and three side profile curves. The 3D head mesh model is easy to modify and manipulate. The prototype CAD system is developed on the ACIS 3D modeling kernel, and a design result is illustrated to show the effectiveness of this paradigm.
3

Lu, Xiaohong, Haixing Zhang, Zhenyuan Jia, Yixuan Feng, and Steven Y. Liang. "A New Method for the Prediction of Micro-Milling Tool Breakage." In ASME 2017 12th International Manufacturing Science and Engineering Conference collocated with the JSME/ASME 2017 6th International Conference on Materials and Processing. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/msec2017-2999.

Full text
Abstract:
Micro-milling tool breakage has become a bottleneck for the development of micro-milling technology. A new method to predict micro-milling tool breakage based on a theoretical model is presented in this paper. Based on the previously built micro-milling force model, the bending stress of the micro-milling cutter caused by the distributed load along the spiral cutting edge is calculated; then, the ultimate stress of the carbide micro-milling tool is obtained by experiments; finally, the bending stress at the dangerous part of the micro-milling tool is compared with the ultimate stress. Tool breakage curves are drawn with feed per tooth and axial cutting depth as the horizontal and vertical axes, respectively. The area above the curve is the tool breakage zone, and the area below the curve is the safety zone. The research provides a new method for the prediction of micro-milling tool breakage and therefore guides cutting parameter selection in micro-milling.
4

Andertova´, Jana, and Frantisˇek Rieger. "Rheology and Rotational Rheometry of Concentrated Clay Based Ceramic Suspensions: Steps From Measured to Relevant Data." In ASME 2010 International Mechanical Engineering Congress and Exposition. ASMEDC, 2010. http://dx.doi.org/10.1115/imece2010-38487.

Full text
Abstract:
The rheological behavior of ceramic suspensions significantly affects wet ceramic processing. On the basis of the measured rheological parameters, the technological parameters of various processes (mixing, batching, spray drying, slip casting) can be set; before measurement, the selection of proper geometry and sensors must be made. From the measured data the flow curves must be constructed and the parameters of an appropriate rheological model calculated. The power law is the simplest model most often used to describe the rheological behavior of non-Newtonian fluids. Using this model, the dependence of shear stress on shear rate can be expressed. The aim of this paper is to show how the flow curves needed to evaluate the parameters of the rheological model can be obtained from primary experimental data measured on a rotational viscometer. Two arrangements of the rotational viscometer method were used in the rheological measurements. The procedure for processing experimental data to obtain the parameters K (consistency coefficient) and n (flow behavior index) is presented.
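The K and n evaluation the paper describes reduces, for the power-law model, to a straight-line fit in log-log coordinates; a minimal sketch with invented viscometer readings:

```python
# Power-law fit tau = K * gamma_dot**n from rotational viscometer data.
import numpy as np

rng = np.random.default_rng(5)
shear_rate = np.array([1, 2, 5, 10, 20, 50, 100.0])        # 1/s (invented)
shear_stress = 8.0 * shear_rate**0.45                      # Pa, true K=8, n=0.45
shear_stress = shear_stress * (1 + rng.normal(0, 0.02, shear_rate.size))

# log tau = log K + n * log gamma_dot  ->  slope is n, intercept is log K
n, logK = np.polyfit(np.log(shear_rate), np.log(shear_stress), 1)
print(f"flow behavior index n = {n:.3f}, consistency K = {np.exp(logK):.2f} Pa.s^n")
```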
5

Cho, Heejin, Rogelio Luck, and Louay M. Chamra. "Power Generation and Heat Recovery Model of Reciprocating Internal Combustion Engines for CHP Applications." In ASME 2009 International Mechanical Engineering Congress and Exposition. ASMEDC, 2009. http://dx.doi.org/10.1115/imece2009-11634.

Full text
Abstract:
This paper presents a power generation and heat recovery model for reciprocating internal combustion engines (ICEs) that can be effectively used in simulations of combined heat and power (CHP) systems. Reciprocating engines are among the most common types of power generation units in CHP systems. In the literature, constant engine efficiencies or empirical efficiency curves are commonly used in the simulations for CHP performance evaluation. These methods do not provide realistic results for the design and component selection processes. The main advantage of this model is that it provides estimates of performance/efficiency maps for both electrical power output and useful thermal output for various capacities of engines without experimental data. The intent of this model is to provide performance/efficiency maps during a preliminary CHP design/simulation process. An example of model calibration to a specific CHP application is presented to demonstrate the capability and benefit of this model. The simulation results are validated with manufacturer’s technical data.
APA, Harvard, Vancouver, ISO, and other styles
6

Quinci, Gianluca, Nam Hoang Phan, and Fabrizio Paolacci. "On the Use of Artificial Neural Network Technique for Seismic Fragility Analysis of a Three-Dimensional Industrial Frame." In ASME 2022 Pressure Vessels & Piping Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/pvp2022-83874.

Full text
Abstract:
Fragility functions, which define the probability of exceeding a damage state given a ground motion intensity measure (IM), are an essential ingredient of modern approaches to seismic engineering such as the performance-based earthquake engineering methodology. Epistemic and aleatory uncertainties associated with seismic loads and structural behavior are usually taken into account to develop such curves analytically. However, the required structural analyses are time-consuming and generally demand a high computational effort. Moreover, the conditional probability of failure is usually computed by regression analysis assuming a predefined probability function, such as the log-normal distribution, without prior information on the true distribution. To overcome these problems, the artificial neural network (ANN) technique is used to develop structural seismic fragility curves that account for record-to-record variability and structural parameter uncertainties. The following aspects are addressed in this paper: (a) implementation of an efficient algorithm to select the most relevant IMs as ANN inputs; (b) derivation of surrogate models using the ANN technique; and (c) computation of fragility curves by the Monte Carlo simulation method and verification of their validity. These methods treat uncertainty in ground motion intensity, structural properties, or both implicitly, without any prior assumption about the probability function. The methodology is applied to estimate the probability of failure of a non-structural component (NSC), a vertical tank, located on a typical three-dimensional industrial frame. First, an extensive sensitivity analysis of the ANN input parameters (feature selection) identifies the type and number of seismic intensity measures (amplitude-based, frequency-based, and time-based IMs). Different surrogate models are then derived by varying the number of hidden layers and parameters. A multiple stripe analysis is performed on a nonlinear model of the structure to generate the data set for the ANN, with different training and test subsets used to derive the surrogate model. Finally, a Monte Carlo simulation produces the fragility curves for the considered limit state, and the risk assessment follows by evaluating the mean annual rate of failure of the NSC.
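The sketch below illustrates the Monte Carlo step described above: once a surrogate maps intensity measure and uncertain parameters to demand, the fragility curve is simply the empirical exceedance probability at each IM level, with no lognormal assumption. The linear "surrogate" function and the capacity value are stand-ins for the trained ANN and the actual limit state.

```python
# Sketch of the Monte Carlo step: with a surrogate mapping intensity measure
# and uncertain parameters to demand, the fragility curve is the empirical
# exceedance probability at each IM level, with no lognormal assumption.
# The linear "surrogate" and the capacity value are stand-ins for the ANN.
import numpy as np

rng = np.random.default_rng(0)

def surrogate_demand(im, theta):
    # placeholder for the trained ANN surrogate
    return 0.8 * im * np.exp(theta)

capacity = 1.0                              # assumed damage-state capacity
for im in np.linspace(0.25, 2.0, 8):        # intensity measure levels
    theta = rng.normal(0.0, 0.3, 10_000)    # sampled record/parameter uncertainty
    pf = np.mean(surrogate_demand(im, theta) > capacity)
    print(f"IM = {im:4.2f} -> P(exceedance) = {pf:.3f}")
```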
APA, Harvard, Vancouver, ISO, and other styles
7

Buenrostro, Javier, Hyonny Kim, Robert K. Goldberg, and Trenton M. Ricks. "Hybrid Experimental and Numerical Characterization of the 3D Response of Woven Polymer Matrix Composites." In Thirty-Sixth Technical Conference. Destech Publications, Inc., 2021. http://dx.doi.org/10.12783/asc36/35940.

Full text
Abstract:
The need for advanced material models to simulate the deformation, damage, and failure of polymer matrix composites under impact conditions is becoming critical as these materials gain increased usage in the aerospace and automotive industries. The purpose of this work is to characterize carbon-epoxy fabrics for composite material models that rely on a large number of input parameters to define their nonlinear, three-dimensional response, e.g., elastic continuum damage mechanics models or plasticity damage models [1, 2]. Because it is challenging to obtain large sets of experimental stress-strain curves, physical experiments that exhibit nonlinear behavior are carefully selected to significantly reduce the cost of generating three-dimensional material databases. For this work, plain-weave carbon fabrics with 3k and 12k tows are manufactured by VARTM. Testing is done using MTS hydraulic test frames and 2D digital image correlation (DIC) to obtain experimental stress-strain curves for in-plane tension and shear as well as transverse shear. Where experimental data are either unavailable or difficult to obtain, the required model input is virtually generated using the NASA Glenn-developed Micromechanics Analysis Code/Generalized Method of Cells (MAC/GMC). A viscoplastic polymer model is calibrated and used to model the matrix constituent within a repeating unit cell (RUC) of a plain-weave carbon fiber fabric. Verification and validation of this approach are carried out using MAT213, a tabulated orthotropic material model in the finite element code LS-DYNA that relies on 12 input stress-strain curves in various coordinate directions [2]. Based on the model input generated by the micromechanics analyses, in combination with available experimental data, a series of coupon-level verification and validation analyses are carried out using the MAT213 composite model.
APA, Harvard, Vancouver, ISO, and other styles
8

Tambat, Abhishek, Hung-Yun Lin, Ian Claydon, Ganesh Subbarayan, Dae-Young Jung, and Bahgat Sammakia. "Modeling Fracture in Dielectric Stacks due to Chip-Package Interaction: Impact of Dielectric Material Selection." In ASME 2011 Pacific Rim Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Systems. ASMEDC, 2011. http://dx.doi.org/10.1115/ipack2011-52237.

Full text
Abstract:
The trend toward decreasing dielectric constant of interlayer dielectric (ILD) materials has required a significant trade-off between the electrical performance and the mechanical integrity of the die stack. Fracture caused by thermal stresses, due to the large coefficient of thermal expansion (CTE) mismatch between these materials arising during fabrication or testing, is often the main driving force for failure. In this paper, we use CAD-inspired hierarchical field compositions [1] to carry out isogeometric (meshfree) fracture simulations. We model cracks as arbitrary curves/surfaces, and the crack propagation criterion is based on the evolving energy release rate (ERR) of the system. We simulate the solder reflow process to assess the impact of chip-package interaction on the reliability of ILD stacks, using multi-level modeling to extract displacement boundary conditions for the local model of the ILD stack. Eight layers of metallization are considered in the ILD stack. We study the relative risks of replacing a stronger dielectric (SiO2) with weaker dielectrics (SiCOH, ULK) on the criticality of preexisting flaws in the structure. Further, we study the impact of varying interfacial toughness values on crack growth patterns in ILD stacks: the crack patterns show an increasing propensity toward predominantly bulk failure as interfacial toughness increases.
APA, Harvard, Vancouver, ISO, and other styles
9

Van Valkenburgh, Owen F., Thomas C. Ekstrom, Erica M. Goodman, Cameryn C. Leborte, Kevin M. Haaland, Nathan K. Yasuda, and Frank J. Shih. "Energy Absorption Characteristics of a Nested Curved Column Reinforced Elastomer Composite." In ASME 2019 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/imece2019-12096.

Full text
Abstract:
Advances in 3D printing technology and its increasing availability have allowed the design of novel structures. For material systems and structures that crumple to absorb energy from low-velocity impact events, the wide variability in material selection and geometry presents an attractive opportunity to optimize performance. In the present study, 3D-printed structures made of a high-stiffness polymer were produced with a UV stereolithographic process. The structures utilized curved cylindrical columns of varying diameters, whose axial geometries were prescribed by a normal distribution equation, allowing adjustments to the load absorption profile. The columns were arranged in a circular pattern in four layers of reinforcement with slightly different heights to create cascading failure events. A polyurethane matrix holds the structures together during impact. Systems with varying rod geometry, curvature, and diameter were tested in both quasi-static compression and dynamic impact, and the preliminary results are presented. Force-displacement curves were measured to determine the optimal design yielding the best failure characteristics. Finite element analysis in ANSYS was used to model the failure characteristics and inform the design of the physical system.
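For context on how such force-displacement curves are reduced to a single energy-absorption figure, the sketch below integrates a synthetic crush curve with an explicit trapezoidal rule; the curve is a stand-in, not test data.

```python
# Sketch: absorbed energy is the area under the measured force-displacement
# curve, here computed with an explicit trapezoidal rule. The curve below is
# a synthetic stand-in for test data.
import numpy as np

disp = np.linspace(0.0, 20.0, 200)                   # displacement (mm)
force = 500.0 * np.exp(-((disp - 8.0) / 5.0) ** 2)   # synthetic crush force (N)

energy = np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(disp))  # N*mm
print(f"absorbed energy ~ {energy:.0f} N*mm")
```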
APA, Harvard, Vancouver, ISO, and other styles
10

Neydorf, Rudolf. "“Cut-Glue” Approximation in Problems on Static and Dynamic Mathematical Model Development." In ASME 2014 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/imece2014-37236.

Full text
Abstract:
The paper offers a solution to the problem of building mathematical models of technical objects by approximating various experimental dependences. This approach is especially relevant for modeling aircraft, because the aerodynamic coefficients of their models can be obtained only by full-scale study or by computer simulation. Currently, experimental data are modeled either by regression analysis (RGA) methods or by spline approximation. However, RGA has a significant disadvantage, namely poor approximability of piecewise and multi-extremal dependencies: it gives only a rough approximation of the experimental data for such curves. Spline approximation is free of this disadvantage, but its high degree of discretization, strict binding to the number of spline points, and large number of equations make it inconvenient when a compact model and analytic transformations are required. The paper proposes a solution that combines the advantages of both approaches while avoiding their drawbacks. It is based on the regression construction of mathematical models of the dependence fragments, the multiplicative excision of these fragments in local functional form, and the additive combination of these local functions into a single analytic expression. The effect is achieved by using special "selection" functions that multiplicatively limit each approximating function to a nonzero definition domain. The method is named "cut-glue" by physical analogy with the approximation technique. The order and structure of the approximating function for each segment can be arbitrary. A significant advantage of cut-glue approximation is that the whole piecewise function is captured in a single analytic expression instead of a cumbersome system of equations. Analytical and numerical studies of the method's properties and operational experience with it are reported.
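The sketch below illustrates the cut-glue idea in miniature: each fragment gets its own regression model, each model is multiplied by a smooth "selection" window that is approximately 1 on its fragment and approximately 0 elsewhere, and the products are summed into one analytic expression. The two linear fragments, the breakpoint, and the logistic window shape are illustrative choices, not the paper's functions.

```python
# Minimal sketch of the cut-glue idea: local regression models, multiplied by
# smooth "selection" windows and summed into a single analytic expression.
# The two linear fragments and the breakpoint are illustrative choices.
import numpy as np

def window(x, lo, hi, eps=0.05):
    """Smooth multiplicative indicator of [lo, hi] built from logistic sigmoids."""
    s = lambda t: 1.0 / (1.0 + np.exp(-t / eps))
    return s(x - lo) * s(hi - x)

f1 = lambda x: 1.0 + 2.0 * x   # local regression model on [0, 1]
f2 = lambda x: 5.0 - 2.0 * x   # local regression model on [1, 2]

glued = lambda x: f1(x) * window(x, 0.0, 1.0) + f2(x) * window(x, 1.0, 2.0)

for x in (0.2, 0.8, 1.0, 1.5):
    print(f"x = {x:3.1f} -> glued(x) = {glued(x):.3f}")
```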
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Model selection curves"

1

Ungar, Eugene D., Montague W. Demment, Uri M. Peiper, Emilio A. Laca, and Mario Gutman. The Prediction of Daily Intake in Grazing Cattle Using Methodologies, Models and Experiments that Integrate Pasture Structure and Ingestive Behavior. United States Department of Agriculture, July 1994. http://dx.doi.org/10.32747/1994.7568789.bard.

Full text
Abstract:
This project addressed the prediction of daily intake in grazing cattle using methodologies, models, and experiments that integrate pasture structure and ingestive behavior. The broad objective was to develop concepts of optimal foraging that predict ingestive behavior and instantaneous intake rate in single- and multi-patch environments, and to extend them to the greater scales of time and space required to predict daily intake. Specific objectives included: determining how sward structure affects the shape of patch depletion curves, determining whether the basic components of ingestive behavior of animals in groups differ from those of animals alone, and evaluating and modifying our existing models of foraging behavior and heterogeneity to incorporate larger scales of time and space. Patch depletion was found to proceed predominantly by horizon, with a significant decline in bite weight during horizon depletion. This decline derives from bite overlap and is more pronounced on taller swards. These results were successfully predicted by a simple bite placement simulator. At greater spatial scales, patch selection aimed at maximizing daily digestible intake, with a non-random between-patch search pattern. The processes of selecting a feeding station and of foraging at a feeding station are fundamentally different, and the marginal value theorem may not be the most appropriate paradigm for predicting residence time at a feeding station. The basic components of ingestive behavior were unaffected by the presence of other animals. Our results contribute to animal production systems by improving understanding of the foraging process, identifying the key sward parameters that determine intake rate, and improving existing conceptual and quantitative models of foraging behavior across spatial and temporal scales.
APA, Harvard, Vancouver, ISO, and other styles
2

Tayeb, Shahab. Taming the Data in the Internet of Vehicles. Mineta Transportation Institute, January 2022. http://dx.doi.org/10.31979/mti.2022.2014.

Full text
Abstract:
As an emerging field, the Internet of Vehicles (IoV) has a myriad of security vulnerabilities that must be addressed to protect system integrity. To stay ahead of novel attacks, cybersecurity professionals are developing new software and systems using machine learning techniques. Neural network architectures improve such systems, including Intrusion Detection Systems (IDSs), by implementing anomaly detection, which differentiates benign data packets from malicious ones. For an IDS to best predict anomalies, the model is trained on data that is typically pre-processed through normalization and feature selection/reduction. These pre-processing techniques play an important role in training a neural network to optimize its performance. This research studies the impact of applying normalization techniques as a pre-processing step to learning, as used by IDSs. The report proposes a Deep Neural Network (DNN) model with two hidden layers for the IDS architecture and compares two commonly used normalization pre-processing techniques. The findings are evaluated using accuracy, Area Under the Curve (AUC), Receiver Operating Characteristic (ROC), F1 score, and loss. The experiments demonstrate that Z-score normalization outperforms both no normalization and Min-Max normalization.
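For reference, the two pre-processing schemes compared in the report reduce to the following per-feature transformations; the tiny feature matrix is placeholder data.

```python
# Sketch of the two pre-processing schemes the report compares: Z-score
# standardization versus Min-Max scaling, applied per feature before training.
# The tiny feature matrix is placeholder data.
import numpy as np

X = np.array([[10.0, 0.1],
              [20.0, 0.4],
              [40.0, 0.2]])

z_score = (X - X.mean(axis=0)) / X.std(axis=0)                   # zero mean, unit variance
min_max = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))  # rescaled to [0, 1]

print("Z-score:\n", z_score)
print("Min-Max:\n", min_max)
```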
APA, Harvard, Vancouver, ISO, and other styles
3

COMPARATIVE STUDY ON STABILITY OF WELDED AND HOT-ROLLED Q420 L300×30 COLUMNS. The Hong Kong Institute of Steel Construction, August 2022. http://dx.doi.org/10.18057/icass2020.p.255.

Full text
Abstract:
More heavy latticed steel transmission towers are being proposed and constructed in response to the increasing power demand of rapid development in China, and new steel sections, Q420 L300×30 equal-leg angles, both welded and hot-rolled, are now available on the market. However, previous studies have paid little attention to these large angles and their stability, and the corresponding design methods are insufficient. This research therefore aims to reveal the residual stress distributions and axially loaded behavior of L300×30 columns, using the sectioning method and FEM analysis. The residual stress distribution models of the welded L300×30 (WL30) and hot-rolled L300×30 (RL30) sections were compared. Several φ-λ curves about the strong and weak axes were calculated with ABAQUS and compared with the Chinese and European design codes. Under axial load, the WL30 columns, which have tensile residual stress at the heel and the toes, are found to be stronger about both axes than the RL30 columns; recommendations are made for selecting the design column curves.
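For readers unfamiliar with φ-λ design curves, the European buckling curves referenced above have a closed form in EN 1993-1-1; the sketch below evaluates it for one tabulated imperfection factor. Which curve applies to the WL30/RL30 sections is a design decision the paper addresses, not something asserted here.

```python
# Sketch of the EN 1993-1-1 buckling-reduction curve chi(lambda):
# phi = 0.5*(1 + alpha*(lam - 0.2) + lam**2), chi = 1/(phi + sqrt(phi**2 - lam**2)) <= 1.
# alpha = 0.34 is the tabulated imperfection factor for curve "b" (illustrative choice).
import numpy as np

def chi_en1993(lam, alpha=0.34):
    phi = 0.5 * (1.0 + alpha * (lam - 0.2) + lam**2)
    return min(1.0, 1.0 / (phi + np.sqrt(phi**2 - lam**2)))

for lam in (0.2, 0.5, 1.0, 1.5, 2.0):
    print(f"lambda = {lam:3.1f} -> chi = {chi_en1993(lam):.3f}")
```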
APA, Harvard, Vancouver, ISO, and other styles