Academic literature on the topic 'Generalized Information Criterion (GIC)'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Generalized Information Criterion (GIC).'


Journal articles on the topic "Generalized Information Criterion (GIC)"

1. Xie, Qichang, and Meng Du. "The Optimal Selection for Restricted Linear Models with Average Estimator." Abstract and Applied Analysis 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/692472.

Abstract:
The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error from the model average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has comparable efficiency to some alternative model selection techniques.
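The k-GIC above is specific to the model-averaging setting, but the general recipe behind any GIC-type criterion, a goodness-of-fit term plus a complexity penalty scaled by a tuning constant, can be sketched for ordinary least squares. This is a generic illustration, not the paper's estimator; the data and the candidate-model layout are invented for the example.

```python
import numpy as np

def gic(y, X, kappa=2.0):
    """Generic GIC for an OLS fit: n*log(RSS/n) + kappa * (number of
    parameters). kappa=2 recovers AIC; kappa=log(n) recovers BIC."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + kappa * k

# Toy data: y depends on the first two columns only.
rng = np.random.default_rng(0)
n = 200
X_full = rng.normal(size=(n, 5))
y = 1.5 * X_full[:, 0] - 2.0 * X_full[:, 1] + rng.normal(size=n)

# Score nested candidate models and keep the GIC-minimising one.
scores = {k: gic(y, X_full[:, :k], kappa=np.log(n)) for k in range(1, 6)}
best = min(scores, key=scores.get)
print(best)  # the 2-column model should win
```

Larger values of kappa penalize complexity more heavily, which is the lever that distinguishes the criteria in this literature.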
2. Wu, Lei Lei, and Nai Ping Cheng. "A Novel Channel Estimation Algorithm for Underwater Acoustic Systems." Applied Mechanics and Materials 416-417 (September 2013): 1309–13. http://dx.doi.org/10.4028/www.scientific.net/amm.416-417.1309.

Abstract:
To decrease the computational complexity and improve the performance of channel estimation for underwater acoustic (UWA) sparse multipath channels, a sparse least squares (SLS) channel estimation algorithm is proposed. The proposed algorithm combines the advantages of both generalized Akaike information criterion (GAIC) estimation and estimation using the effective order of the channel impulse response (CIR). Known pilot data are used to estimate the effective order of the CIR, and then the positions of the CIR taps are estimated. To adapt to low-SNR environments and reduce the dimension of the signal space, an adaptive threshold is also used. Simulation results indicate that the proposed method achieves good channel estimation performance with low complexity.
3. Elhassan, Tomader. "Impact of Covid-19 pandemic on stock market returns volatility of Gulf Cooperation Council countries." Investment Management and Financial Innovations 18, no. 4 (October 13, 2021): 45–56. http://dx.doi.org/10.21511/imfi.18(4).2021.05.

Abstract:
This study examined the asymmetric impact of the COVID-19 pandemic on Gulf Cooperation Council (GCC) stock market return volatility. The data comprised daily closing prices of the GCC stock markets from the day the first case of COVID-19 was acknowledged in each country to March 6, 2021. The study employed generalized autoregressive conditional heteroscedasticity (GARCH) family models. According to the Akaike information criterion, GARCH and exponential GARCH (EGARCH) were the most accurate models. The findings of the GARCH model indicate that the COVID-19 pandemic affected the GCC stock markets. The EGARCH model likewise confirmed that COVID-19 negatively affected GCC stock market returns, and this volatility persisted over a long period. This study has potential implications for investors and policymakers in diversifying investment portfolios and adopting strategies to maintain investor confidence during such crises. Moreover, mechanisms must be developed to reduce risks in financial markets in times of crisis, and central banks should take financial measures to mitigate risks to capital markets.
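The Akaike-criterion comparison described in this abstract can be illustrated generically: simulate GARCH(1,1) returns, then compare the AIC of a constant-variance Gaussian model against the GARCH likelihood. This is a minimal sketch, not the paper's GCC data or estimation code; for simplicity the GARCH likelihood is evaluated at the assumed true parameters rather than at maximum-likelihood estimates.

```python
import numpy as np

def garch11_loglik(r, omega, alpha, beta):
    # Gaussian log-likelihood of returns r under a GARCH(1,1) variance recursion.
    s2 = np.empty_like(r)
    s2[0] = np.var(r)
    for t in range(1, len(r)):
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * s2) + r ** 2 / s2)

def aic(loglik, k):
    return 2 * k - 2 * loglik

# Simulate GARCH(1,1) returns with assumed parameters.
rng = np.random.default_rng(1)
T, omega, alpha, beta = 2000, 0.05, 0.1, 0.85
r = np.empty(T)
s2 = omega / (1 - alpha - beta)  # start at the unconditional variance
for t in range(T):
    r[t] = np.sqrt(s2) * rng.normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2

# Constant-variance Gaussian benchmark (1 parameter) vs GARCH(1,1) (3 parameters).
ll_const = -0.5 * np.sum(np.log(2 * np.pi * np.var(r)) + r ** 2 / np.var(r))
aic_const = aic(ll_const, k=1)
aic_garch = aic(garch11_loglik(r, omega, alpha, beta), k=3)
print(aic_garch < aic_const)  # the GARCH model should be preferred
```

The lower-AIC model wins despite its larger parameter count, which is exactly the trade-off the criterion formalizes.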
4. Taniguchi, Masanobu, and Junichi Hirukawa. "Generalized information criterion." Journal of Time Series Analysis 33, no. 2 (September 1, 2011): 287–97. http://dx.doi.org/10.1111/j.1467-9892.2011.00759.x.

5. Wang, L. "Wilcoxon-type generalized Bayesian information criterion." Biometrika 96, no. 1 (January 24, 2009): 163–73. http://dx.doi.org/10.1093/biomet/asn060.

6. Yingbin, Gao, Kong Xiangyu, Hu Changhua, Li Hongzeng, and Hou Li'an. "A Generalized Information Criterion for Generalized Minor Component Extraction." IEEE Transactions on Signal Processing 65, no. 4 (February 15, 2017): 947–59. http://dx.doi.org/10.1109/tsp.2016.2631444.

7. Zhang, Yiyun, Runze Li, and Chih-Ling Tsai. "Regularization Parameter Selections via Generalized Information Criterion." Journal of the American Statistical Association 105, no. 489 (March 1, 2010): 312–23. http://dx.doi.org/10.1198/jasa.2009.tm08013.

8. Pan, Wei. "Akaike's Information Criterion in Generalized Estimating Equations." Biometrics 57, no. 1 (March 2001): 120–25. http://dx.doi.org/10.1111/j.0006-341x.2001.00120.x.

9. Shao, Jun. "Convergence rates of the generalized information criterion." Journal of Nonparametric Statistics 9, no. 3 (January 1998): 217–25. http://dx.doi.org/10.1080/10485259808832743.

10. Na, Okyoung. "Generalized information criterion for the AR model." Journal of the Korean Statistical Society 46, no. 1 (March 2017): 146–60. http://dx.doi.org/10.1016/j.jkss.2016.12.002.


Dissertations / Theses on the topic "Generalized Information Criterion (GIC)"

1. Alghamdi, Amani Saeed. "Study of Generalized Lomax Distribution and Change Point Problem." Bowling Green State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1526387579759835.

2. Du Toit, Jan Valentine. "Automated construction of generalized additive neural networks for predictive data mining / Jan Valentine du Toit." Thesis, North-West University, 2006. http://hdl.handle.net/10394/128.

Abstract:
In this thesis Generalized Additive Neural Networks (GANNs) are studied in the context of predictive Data Mining. A GANN is a novel neural network implementation of a Generalized Additive Model. Originally GANNs were constructed interactively by considering partial residual plots. This methodology involves subjective human judgment, is time consuming, and can result in suboptimal results. The newly developed automated construction algorithm solves these difficulties by performing model selection based on an objective model selection criterion. Partial residual plots are only utilized after the best model is found to gain insight into the relationships between inputs and the target. Models are organized in a search tree with a greedy search procedure that identifies good models in a relatively short time. The automated construction algorithm, implemented in the powerful SAS® language, is nontrivial, effective, and comparable to other model selection methodologies found in the literature. This implementation, which is called AutoGANN, has a simple, intuitive, and user-friendly interface. The AutoGANN system is further extended with an approximation to Bayesian Model Averaging. This technique accounts for uncertainty about the variables that must be included in the model and uncertainty about the model structure. Model averaging utilizes in-sample model selection criteria and creates a combined model with better predictive ability than using any single model. In the field of Credit Scoring, the standard theory of scorecard building is not tampered with, but a pre-processing step is introduced to arrive at a more accurate scorecard that discriminates better between good and bad applicants. The pre-processing step exploits GANN models to achieve significant reductions in marginal and cumulative bad rates. The time it takes to develop a scorecard may be reduced by utilizing the automated construction algorithm.
Thesis (Ph.D. (Computer Science))--North-West University, Potchefstroom Campus, 2006.
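The greedy, criterion-driven search described in this abstract can be sketched in miniature as forward selection under the Schwarz Bayesian Criterion (SBC, also called BIC) for a linear model. This is a toy stand-in for the AutoGANN search over neural network architectures, not its implementation; the data and column layout are invented for the example.

```python
import numpy as np

def sbc(y, X):
    """Schwarz Bayesian Criterion for an OLS fit: n*log(RSS/n) + k*log(n)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

def greedy_select(y, X):
    """Greedy forward search: repeatedly add the input that most improves
    SBC; stop when no addition helps. Returns selected column indices."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = np.inf
    while remaining:
        trials = {j: sbc(y, X[:, selected + [j]]) for j in remaining}
        j_best = min(trials, key=trials.get)
        if trials[j_best] >= best_score:
            break
        best_score = trials[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    return sorted(selected)

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))
y = 2.0 * X[:, 1] - 1.0 * X[:, 4] + rng.normal(size=300)
print(greedy_select(y, X))  # expect columns 1 and 4
```

The objective criterion replaces subjective inspection of partial residual plots, which is the key point of the automated construction algorithm.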
3. Tran, Thu Trung. "Bayesian model estimation and comparison for longitudinal categorical data." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/19240/1/Thu_Tran_Thesis.pdf.

Abstract:
In this thesis, we address issues of model estimation for longitudinal categorical data and of model selection for these data with missing covariates. Longitudinal survey data capture the responses of each subject repeatedly through time, allowing for the separation of variation in the measured variable of interest across time for one subject from the variation in that variable among all subjects. Questions concerning persistence, patterns of structure, interaction of events and stability of multivariate relationships can be answered through longitudinal data analysis. Longitudinal data require special statistical methods because they must take into account the correlation between observations recorded on one subject. A further complication in analysing longitudinal data is accounting for the non-response or drop-out process. Potentially, the missing values are correlated with variables under study and hence cannot be totally excluded. Firstly, we investigate a Bayesian hierarchical model for the analysis of categorical longitudinal data from the Longitudinal Survey of Immigrants to Australia. Data for each subject is observed on three separate occasions, or waves, of the survey. One of the features of the data set is that observations for some variables are missing for at least one wave. A model for the employment status of immigrants is developed by introducing, at the first stage of a hierarchical model, a multinomial model for the response and then subsequent terms are introduced to explain wave and subject effects. To estimate the model, we use the Gibbs sampler, which allows missing data for both the response and explanatory variables to be imputed at each iteration of the algorithm, given some appropriate prior distributions. After accounting for significant covariate effects in the model, results show that the relative probability of remaining unemployed diminished with time following arrival in Australia.
Secondly, we examine the Bayesian model selection techniques of the Bayes factor and Deviance Information Criterion for our regression models with missing covariates. Computing Bayes factors involves computing the often complex marginal likelihood p(y|model), and various authors have presented methods to estimate this quantity. Here, we take the approach of path sampling via power posteriors (Friel and Pettitt, 2006). The appeal of this method is that for hierarchical regression models with missing covariates, a common occurrence in longitudinal data analysis, it is straightforward to calculate and interpret since integration over all parameters, including the imputed missing covariates and the random effects, is carried out automatically with minimal added complexities of modelling or computation. We apply this technique to compare models for the employment status of immigrants to Australia. Finally, we also develop a model choice criterion based on the Deviance Information Criterion (DIC), similar to Celeux et al. (2006), but which is suitable for use with generalized linear models (GLMs) when covariates are missing at random. We define three different DICs: the marginal, where the missing data are averaged out of the likelihood; the complete, where the joint likelihood for response and covariates is considered; and the naive, where the likelihood is found assuming the missing values are parameters. These three versions have different computational complexities. We investigate through simulation the performance of these three different DICs for GLMs consisting of normally, binomially and multinomially distributed data with missing covariates having a normal distribution. We find that the marginal DIC and the estimate of the effective number of parameters, pD, have desirable properties appropriately indicating the true model for the response under differing amounts of missingness of the covariates.
We find that the complete DIC is generally inappropriate in this context as it is extremely sensitive to the degree of missingness of the covariate model. Our new methodology is illustrated by analysing the results of a community survey.
4. Tran, Thu Trung. "Bayesian model estimation and comparison for longitudinal categorical data." Queensland University of Technology, 2008. http://eprints.qut.edu.au/19240/.
5. Akanda, Md Abdus Salam. "A generalized estimating equations approach to capture-recapture closed population models: methods." Doctoral thesis, Universidade de Évora, 2014. http://hdl.handle.net/10174/18297.

Abstract:
Wildlife population parameters, such as capture or detection probabilities, and density or population size, can be estimated from capture-recapture data. These estimates are of particular interest to ecologists and biologists who rely on accurate inferences for management and conservation of the population of interest. However, researchers face many challenges in making accurate inferences on population parameters. For instance, capture-recapture data can be considered as binary longitudinal observations since repeated measurements are collected on the same individuals across successive points in time, and these observations are often correlated over time. If these correlations are not taken into account when estimating capture probabilities, then parameter estimates will be biased, possibly producing misleading results. Also, an estimator of population size is generally biased under the presence of heterogeneity in capture probabilities. The use of covariates (or auxiliary variables), when available, has been proposed as an alternative way to cope with the problem of heterogeneous capture probabilities. In this dissertation, we are interested in tackling these two main problems, (i) when capture probabilities are dependent among capture occasions in closed population capture-recapture models, and (ii) when capture probabilities are heterogeneous among individuals. Hence, the capture-recapture literature can be improved if we can propose an approach to jointly account for these problems.
In summary, this dissertation proposes: (i) a generalized estimating equations (GEE) approach to model possible effects in capture-recapture closed population studies due to correlation over time and individual heterogeneity; (ii) the corresponding estimating equations for each closed population capture-recapture model; (iii) a comprehensive analysis on various real capture-recapture data sets using classical, GEE and generalized linear mixed models (GLMM); (iv) an evaluation of the effect of accounting for correlation structures on capture-recapture model selection based on the 'Quasi-likelihood Information Criterion (QIC)'; (v) a comparison of the performance of population size estimators using GEE and GLMM approaches in the analysis of capture-recapture data. The performance of these approaches is evaluated by Monte Carlo (MC) simulation studies resembling real capture-recapture data. The proposed GEE approach provides a useful inference procedure for estimating population parameters, particularly when a large proportion of individuals are captured. For a low capture proportion, it is difficult to obtain reliable estimates for all approaches, but the GEE approach outperforms the other methods. Simulation results show that quasi-likelihood GEE provides lower standard errors than partial likelihood based on generalized linear modelling (GLM) and GLMM approaches. The estimated population sizes vary with the nature of the existing correlation among capture occasions.
6. Goosen, Johannes Christiaan. "Comparing generalized additive neural networks with multilayer perceptrons / Johannes Christiaan Goosen." Thesis, North-West University, 2011. http://hdl.handle.net/10394/5552.

Abstract:
In this dissertation, generalized additive neural networks (GANNs) and multilayer perceptrons (MLPs) are studied and compared as prediction techniques. MLPs are the most widely used type of artificial neural network (ANN), but are considered black boxes with regard to interpretability. There is currently no simple a priori method to determine the number of hidden neurons in each of the hidden layers of ANNs. Guidelines exist that are either heuristic or based on simulations derived from limited experiments. A modified version of the neural network construction with cross-validation samples (N2C2S) algorithm is therefore implemented and utilized to construct good MLP models. This algorithm enables the comparison with GANN models. GANNs are a relatively new type of ANN, based on the generalized additive model. The architecture of a GANN is less complex compared to MLPs and results can be interpreted with a graphical method, called the partial residual plot. A GANN consists of an input layer where each of the input nodes has its own MLP with one hidden layer. Originally, GANNs were constructed by interpreting partial residual plots. This method is time consuming and subjective, which may lead to the creation of suboptimal models. Consequently, an automated construction algorithm for GANNs was created and implemented in the SAS® statistical language. This system was called AutoGANN and is used to create good GANN models. A number of experiments are conducted on five publicly available data sets to gain insight into the similarities and differences between GANN and MLP models. The data sets include regression and classification tasks. In-sample model selection with the SBC model selection criterion and out-of-sample model selection with the average validation error as model selection criterion are performed. The models created are compared in terms of predictive accuracy, model complexity, comprehensibility, ease of construction and utility.
The results show that the choice of model is highly dependent on the problem, as no single model always outperforms the other in terms of predictive accuracy. GANNs may be suggested for problems where interpretability of the results is important. The time taken to construct good MLP models by the modified N2C2S algorithm may be shorter than the time to build good GANN models by the automated construction algorithm.
Thesis (M.Sc. (Computer Science))--North-West University, Potchefstroom Campus, 2011.
7. Lehmann, Rüdiger. "Observation error model selection by information criteria vs. normality testing." Hochschule für Technik und Wirtschaft Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:520-qucosa-211721.

Abstract:
To extract the best possible information from geodetic and geophysical observations, it is necessary to select a model of the observation errors, mostly from the family of Gaussian normal distributions. However, there are alternatives, typically chosen in the framework of robust M-estimation. We give a synopsis of well-known and less well-known models for observation errors and propose to select a model based on information criteria. In this contribution we compare the Akaike information criterion (AIC) and the Anderson-Darling (AD) test and apply them to the test problem of fitting a straight line. The comparison is facilitated by a Monte Carlo approach. It turns out that model selection by AIC has some advantages over the AD test.
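The two routes compared in this abstract, information-criterion selection versus a normality test, can be illustrated on simulated residuals. This is a minimal sketch using SciPy, not the paper's geodetic setup; the heavy-tailed Laplace error model stands in for the robust alternatives it discusses.

```python
import numpy as np
from scipy import stats

# Residuals drawn from a Laplace (heavy-tailed) error model.
rng = np.random.default_rng(3)
resid = rng.laplace(scale=1.0, size=500)

# Route 1, AIC: fit normal and Laplace error models by maximum likelihood
# and compare 2k - 2*loglik (both models have k=2 parameters).
candidates = {"normal": stats.norm, "laplace": stats.laplace}
aics = {}
for name, dist in candidates.items():
    params = dist.fit(resid)
    loglik = np.sum(dist.logpdf(resid, *params))
    aics[name] = 2 * len(params) - 2 * loglik
best = min(aics, key=aics.get)
print(best)  # the Laplace model should be preferred

# Route 2, normality testing: Anderson-Darling test of the normal hypothesis.
ad = stats.anderson(resid, dist="norm")
reject = ad.statistic > ad.critical_values[2]  # 5% significance level
print(reject)
```

Note the asymmetry the paper exploits: the AD test can only reject normality, while the information criterion directly ranks the candidate error models.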
8. Fearer, Todd Matthew. "Evaluating Population-Habitat Relationships of Forest Breeding Birds at Multiple Spatial and Temporal Scales Using Forest Inventory and Analysis Data." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/29243.

Abstract:
Multiple studies have documented declines of forest breeding birds in the eastern United States, but the temporal and spatial scales of most studies limit inference regarding large scale bird-habitat trends. A potential solution to this challenge is integrating existing long-term datasets such as the U.S. Forest Service Forest Inventory and Analysis (FIA) program and U.S. Geological Survey Breeding Bird Survey (BBS) that span large geographic regions. The purposes of this study were to determine if FIA metrics can be related to BBS population indices at multiple spatial and temporal scales and to develop predictive models from these relationships that identify forest conditions favorable to forest songbirds. I accumulated annual route-level BBS data for 4 species guilds (canopy nesting, ground and shrub nesting, cavity nesting, early successional), each containing a minimum of five bird species, from 1966-2004. I developed 41 forest variables describing forest structure at the county level using FIA data from the 2000 inventory cycle within 5 physiographic regions in 14 states (AL, GA, IL, IN, KY, MD, NC, NY, OH, PA, SC, TN, VA, and WV). I examined spatial relationships between the BBS and FIA data at 3 hierarchical scales: 1) individual BBS routes, 2) FIA units, and 3) physiographic sections. At the BBS route scale, I buffered each BBS route with a 100m, 1km, and 10km buffer, intersected these buffers with the county boundaries, and developed a weighted average for each forest variable within each buffer, with the weight being a function of the percent of area each county had within a given buffer. I calculated 28 variables describing landscape structure from 1992 NLCD imagery using Fragstats within each buffer size. I developed predictive models relating spatial variations in bird occupancy and abundance to changes in forest and landscape structure using logistic regression and classification and regression trees (CART).
Models were developed for each of the 3 buffer sizes, and I pooled the variables selected for the individual models and used them to develop multiscale models with the BBS route still serving as the sample unit. At the FIA unit and physiographic section scales I calculated average abundance/route for each bird species within each FIA unit and physiographic section and extrapolated the plot-level FIA variables to the FIA unit and physiographic section levels. Landscape variables were recalculated within each unit and section using NLCD imagery resampled to a 400 m pixel size. I used regression trees (FIA unit scale) and general linear models (GLM, physiographic section scale) to relate spatial variations in bird abundance to the forest and landscape variables. I examined temporal relationships between the BBS and FIA data between 1966 and 2000. I developed 13 forest variables from statistical summary reports for 4 FIA inventory cycles (1965, 1975, 1989, and 2000) within NY, PA, MD, and WV. I used linear interpolation to estimate annual values of each FIA variable between successive inventory cycles and GLMs to relate annual variations in bird abundance to the forest variables. At the BBS route scale, the CART models accounted for > 50% of the variation in bird presence-absence and abundance. The logistic regression models had sensitivity and specificity rates > 0.50. By incorporating the variables selected for the models developed within each buffer (100m, 1km, and 10km) around the BBS routes into a multiscale model, I was able to further improve the performance of many of the models and gain additional insight regarding the contribution of multiscale influences on bird-habitat relationships. The majority of the best CART models tended to be the multiscale models, and many of the multiscale logistic models had greater sensitivity and specificity than their single-scale counterparts.
The relatively fine resolution and extensive coverage of the BBS, FIA, and NLCD datasets coupled with the overlapping multiscale approach of these analyses allowed me to incorporate levels of variation in both habitat and bird occurrence and abundance into my models that likely represented a more comprehensive range of ecological variability in the bird-habitat relationships relative to studies conducted at smaller scales and/or using data at coarser resolutions. At the FIA unit and physiographic section scales, the regression trees accounted for an average of 54.1% of the variability in bird abundance among FIA units, and the GLMs accounted for an average of 66.3% of the variability among physiographic sections. However, increasing the observational and analytical scale to the FIA unit and physiographic section decreased the measurement resolution of the bird abundance and landscape variables. This limits the applicability and interpretive strength of the models developed at these scales, but they may serve as indices to those habitat components exerting the greatest influences on bird abundance at these broader scales. The GLMs relating average annual bird abundance to annual estimates of forest variables developed using statistical report data from the 1965, 1975, 1989, and 2000 FIA inventories explained an average of 62.0% of the variability in annual bird abundance estimates. However, these relationships were a function of both the general habitat characteristics and the trends in bird abundance specific to the 4-state region (MD, NY, PA, and WV) used for these analyses and may not be applicable to other states or regions. The small suite of variables available from the FIA statistical reports and multicollinearity among all forest variables further limited the applicability of these models. 
As with those developed at the FIA unit and physiographic section scales, these models may serve as general indices to the habitat components exerting the greatest influences on bird abundance trends through time at regional scales. These results demonstrate that forest variables developed from the FIA, in conjunction with landscape variables, can explain variations in occupancy and abundance estimated from BBS data for forest bird species with a variety of habitat requirements across spatial and temporal scales.
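The county-area weighting used for the BBS route buffers can be sketched as follows; this is a minimal illustration with hypothetical county names and values, not actual FIA data:

```python
def buffer_weighted_average(county_values, county_areas_in_buffer):
    """Weight each county's forest variable by its share of the buffer area."""
    total_area = sum(county_areas_in_buffer.values())
    return sum(county_values[c] * county_areas_in_buffer[c] / total_area
               for c in county_values)

# Hypothetical example: two counties intersect a 1-km route buffer.
basal_area = {"county_A": 10.0, "county_B": 20.0}   # illustrative values
areas = {"county_A": 25.0, "county_B": 75.0}        # area of each county inside the buffer
print(buffer_weighted_average(basal_area, areas))   # 17.5
```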
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
9

Lehmann, Rüdiger. "Observation error model selection by information criteria vs. normality testing." Hochschule für Technik und Wirtschaft Dresden, 2015. https://htw-dresden.qucosa.de/id/qucosa%3A23301.

Full text
Abstract:
To extract the best possible information from geodetic and geophysical observations, it is necessary to select a model of the observation errors, mostly the family of Gaussian normal distributions. However, there are alternatives, typically chosen in the framework of robust M-estimation. We give a synopsis of well-known and less well-known models for observation errors and propose to select a model based on information criteria. In this contribution we compare the Akaike information criterion (AIC) and the Anderson-Darling (AD) test and apply them to the test problem of fitting a straight line. The comparison is facilitated by a Monte Carlo approach. It turns out that the model selection by AIC has some advantages over the AD test.
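The AIC comparison for the straight-line test problem can be sketched as follows. This is an illustrative reconstruction under simplifying assumptions (both error models reuse the OLS fit, which slightly understates the Laplace likelihood), not the authors' Monte Carlo setup:

```python
import math
import random

rng = random.Random(0)
n = 1000
# Straight-line data with genuinely Gaussian errors (an assumption of this sketch).
x = [i / n for i in range(n)]
y = [2.0 + 3.0 * xi + rng.gauss(0.0, 1.0) for xi in x]

# Ordinary least-squares fit of y = a + b*x.
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
res = [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Maximized log-likelihoods of two error models, each with k = 3 parameters
# (intercept, slope, scale).
sigma2 = sum(r * r for r in res) / n              # Gaussian MLE of the variance
ll_norm = -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)
b_lap = sum(abs(r) for r in res) / n              # Laplace MLE of the scale
ll_lap = -n * (math.log(2 * b_lap) + 1)

aic_norm, aic_lap = 6 - 2 * ll_norm, 6 - 2 * ll_lap
print(aic_norm < aic_lap)   # the Gaussian error model wins on Gaussian data
```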
APA, Harvard, Vancouver, ISO, and other styles
10

Saigiridharan, Lakshidaa. "Dynamic prediction of repair costs in heavy-duty trucks." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166133.

Full text
Abstract:
Pricing of repair and maintenance (R&M) contracts is one of the most important processes carried out at Scania. Predictions of repair costs at Scania are carried out using experience-based prediction methods which do not involve statistical methods for the computation of average repair costs for contracts terminated in the recent past. This method is difficult to apply for a reference population of rigid Scania trucks. Hence, the purpose of this study is to perform suitable statistical modelling to predict repair costs of four variants of rigid Scania trucks. The study gathers repair data from multiple sources and performs feature selection using the Akaike Information Criterion (AIC) to extract the most significant features that influence repair costs corresponding to each truck variant. The study showed that the inclusion of operational features as a factor could further influence the pricing of contracts. The hurdle Gamma model, which is widely used to handle zero inflation in Generalized Linear Models (GLMs), is used to train the data, which consist of numerous zero and non-zero values. Due to the inherent hierarchical structure within the data expressed by individual chassis, a hierarchical hurdle Gamma model is also implemented. These two statistical models are found to perform much better than the experience-based prediction method. This evaluation is done using the mean absolute error (MAE) and root mean square error (RMSE) statistics. A final model comparison is conducted using the AIC to draw conclusions based on the goodness of fit and predictive performance of the two statistical models. On assessing the models using these statistics, the hierarchical hurdle Gamma model was found to perform the best.
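The way a hurdle Gamma model combines its two parts can be sketched as follows; the coefficients here are purely illustrative stand-ins, not values estimated from Scania's data:

```python
import math

def hurdle_expected_cost(x, beta_zero, beta_gamma):
    """Combine the two parts of a fitted hurdle Gamma model:

    E[cost] = P(cost > 0) * E[cost | cost > 0]

    P(cost > 0) comes from a logistic part, E[cost | cost > 0] from a
    log-link Gamma part. Coefficients are hypothetical (intercept, slope).
    """
    eta_z = beta_zero[0] + beta_zero[1] * x
    p_nonzero = 1.0 / (1.0 + math.exp(-eta_z))            # logistic link
    mu_positive = math.exp(beta_gamma[0] + beta_gamma[1] * x)  # log link
    return p_nonzero * mu_positive

# Hypothetical contract with a single covariate x = 1.0:
# P(repair) = 0.5, mean cost given a repair = 100 -> expected cost 50.
print(hurdle_expected_cost(1.0, (0.0, 0.0), (math.log(100.0), 0.0)))
```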
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Generalized Information Criterion (GIC)"

1

Mielniczuk, Jan, and Hubert Szymanowski. "Selection Consistency of Generalized Information Criterion for Sparse Logistic Model." In Springer Proceedings in Mathematics & Statistics, 111–19. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-13881-7_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tanaka, Toshihisa, and Mitsuaki Shiono. "Acoustic Beamforming with Maximum SNR Criterion and Efficient Generalized Eigenvector Tracking." In Advances in Multimedia Information Processing – PCM 2014, 373–82. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13168-9_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

"Generalized Information Criterion (GIC)." In Springer Series in Statistics, 107–38. New York, NY: Springer New York, 2008. http://dx.doi.org/10.1007/978-0-387-71887-3_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Martínez-Abad, Fernando, Patricia Torrijos-Fincias, Adriana Gamazo, and María José Rodríguez Conde. "Assessment of Information Literacy and Its Relationship With Learning Outcomes." In Global Implications of Emerging Technology Trends, 1–18. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-4944-4.ch001.

Full text
Abstract:
The global integration of competence-based education and training systems and the search for a generalized common framework for the incorporation of key competences in the curriculums of national education systems have generated a growing need for information literacy as a way of advancing to the awaited knowledge society. Large-scale assessments of student performance present criterion variables such as language, mathematics, or science, but it is noticeable how these assessments leave aside contents from other key competences such as information literacy. This chapter shows a theoretical approach to the subject and an example of an empirical study that aims to shed some light to the topic of information literacy by analysing the relationship between the level of information literacy shown by a student and their academic performance in subjects such as language and mathematics. The results suggest that it is possible to develop an instrument for the assessment of the complex information literacy competence, and which is also easy to administer in the classroom.
APA, Harvard, Vancouver, ISO, and other styles
5

Poliarus, Oleksandr, and Yevhen Poliakov. "Detection of Landmarks by Mobile Autonomous Robots Based on Estimating the Color Parameters of the Surrounding Area." In Examining Optoelectronics in Machine Vision and Applications in Industry 4.0, 224–57. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-6522-3.ch008.

Full text
Abstract:
Remote detection of landmarks for navigation of mobile autonomous robots in the absence of GPS is carried out by low-power radars, ultrasonic and laser rangefinders, night vision devices, and also by video cameras. The aim of the chapter is to develop a method for landmark detection using the color parameters of images. For this purpose, an optimal system of stochastic differential equations was synthesized according to the criterion of minimum generalized variance, which allows the color intensities (red, green, blue) to be estimated using a priori information and current measurements. The analysis of classical and nonparametric methods of landmark detection, as well as of the method of optimal estimation of jumps in the color parameters, is carried out. It is shown that high efficiency of landmark detection is achieved by nonparametric estimation of the first Hilbert-Huang modes of the decomposition of the color parameter distribution.
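A much-simplified discrete analogue of the jump-estimation idea can be sketched as a recursive filter that flags large innovations in a color channel; the chapter's synthesized optimal stochastic differential equations are not reproduced here, and all values are synthetic:

```python
def detect_jumps(channel, gain=0.2, threshold=30.0):
    """Track a color channel with a simple recursive (exponential) filter
    and flag samples whose innovation exceeds a threshold -- a crude
    stand-in for optimal estimation of jumps in the color parameters."""
    estimate = channel[0]
    jumps = []
    for i, z in enumerate(channel[1:], start=1):
        innovation = z - estimate
        if abs(innovation) > threshold:
            jumps.append(i)
            estimate = z                  # re-initialize after a detected jump
        else:
            estimate += gain * innovation
    return jumps

# Synthetic red-channel trace: flat background, a landmark begins at index 50.
red = [100.0] * 50 + [180.0] * 50
print(detect_jumps(red))   # [50]
```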
APA, Harvard, Vancouver, ISO, and other styles
6

Pashayeva, Kamala, and Namiq Abdullayev. "Methods and Tools for Assessing Muscle Asymmetry in the Analysis of Electromyographic Signals." In Complementary Therapies [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.103061.

Full text
Abstract:
Generalized information about the possibilities of assessing muscle asymmetry and the prospects of research tools is presented. The choice among different methods for processing electromyographic signals plays an important role, since their results can be considered an objective criterion for assessing the asymmetry of the limb muscles. One such measure is the asymmetry coefficient, a parameter widely used in statistical analysis that characterizes the asymmetry of a statistical distribution. Also applied is the segmental method of studying the body, which yields estimates of the composition of and differences between individual body segments. The isokinetic test method, which makes it possible to assess asymmetry when measuring muscle strength, relies on the randomness of the dynamic processes of the biological system. The use of nonlinear dynamics, the theory of dynamic chaos, and fractal analysis allows the fractal properties of biosignals to be determined; among classical methods, correlation analysis is used.
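The asymmetry coefficient mentioned here is the standard moment coefficient of skewness, which can be computed directly from a signal's central moments:

```python
def asymmetry_coefficient(signal):
    """Fisher-Pearson moment coefficient of skewness: m3 / m2**1.5.
    Zero for a symmetric sample, positive for a right-skewed one."""
    n = len(signal)
    mean = sum(signal) / n
    m2 = sum((s - mean) ** 2 for s in signal) / n   # second central moment
    m3 = sum((s - mean) ** 3 for s in signal) / n   # third central moment
    return m3 / m2 ** 1.5

print(asymmetry_coefficient([1, 2, 3, 4, 5]))   # 0.0 for a symmetric sample
```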
APA, Harvard, Vancouver, ISO, and other styles
7

Katrichenko, Ksenia, Svitlana Kryvuts, and Olena Vasina. "MEANS OF FORMING INFORMATION AND COMMUNICATION SYSTEMS IN THE DESIGN OF THE INCLUSIVE SPACE OF THE SCHOOL." In European vector of development of the modern scientific researches. Publishing House “Baltija Publishing”, 2021. http://dx.doi.org/10.30525/978-9934-26-077-3-9.

Full text
Abstract:
The article is devoted to the analysis of means of formation of information and communication systems taking into account inclusive design. The constant development of innovative project developments in this direction indicates a change in conceptual approaches to their design and the urgency of changing the paradigm of inclusive education, which takes into account accessibility and safety for all students without exception. The system approach allowed to establish connections between the ergonomic component of the formation of the design of information and communication systems, their functionality and aesthetic expressiveness. The method of abstraction helped separate from certain properties and relations of the object and at the same time focus on those properties that are the direct object of scientific research; the method of generalization contributed to the logical completion of abstraction; the method of classification allowed to determine the specific characteristics in solving the problems of the best examples of project activities with the possibility of their theoretical justification. The social significance and relevance of the chosen research topic lies in the analysis and identification of fundamentally new design solutions for the educational space of secondary schools, which have been implemented in foreign countries. Characteristic features of their solution are taking into account the principles of universal (inclusive) design based on the use of technological innovations, availability of spatial planning, design and artistic solutions, which significantly improve the implementation of information and communication systems. Functional comfort, in this case, is considered as a generalized criterion for optimizing the system "human-object or process-environment". 
In addition, the inclusive approach takes into account the comfortable and aesthetic conditions of students' adaptation to the new modern standards of education and testifies to its practical significance. It is a special synthesis of ergonomics and design in the educational environment, and also allows you to create new "scenarios" of educational activities of modern students. Analysis of aspects of the developed design model of an inclusive approach in solving information and communication systems will help initiate its implementation in the educational space of secondary schools of Ukraine.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Generalized Information Criterion (GIC)"

1

Karpovich, Dzmitry, and Svjatlana Latushkina. "Management of distributed object using the criterion of generalized error." In 2016 Open Conference of Electrical, Electronic and Information Sciences (eStream). IEEE, 2016. http://dx.doi.org/10.1109/estream39242.2016.7485926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yu, Dalei, Kelvin K. W. Yau, and Chang Ding. "Information Based Model Selection Criterion for Binary Response Generalized Linear Mixed Models." In 2012 Fifth International Joint Conference on Computational Sciences and Optimization (CSO). IEEE, 2012. http://dx.doi.org/10.1109/cso.2012.21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhi, Xiyang, and Feng Xue. "A novel approach to blind deconvolution based on generalized Akaike’s information criterion." In Applied Optics and Photonics China (AOPC2015), edited by Chunhua Shen, Weiping Yang, and Honghai Liu. SPIE, 2015. http://dx.doi.org/10.1117/12.2197295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Guijie, Yunlong Cai, Ibo Ngebani, and Minjian Zhao. "Reduced-rank interference suppression algorithm based on generalized MBER criterion for large-scale multiuser MIMO systems." In 2015 IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP). IEEE, 2015. http://dx.doi.org/10.1109/chinasip.2015.7230407.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Dold, Edward J., and Philip A. Voglewede. "Potential Energy as Design Criterion in Planar Multistable Mechanisms." In ASME 2021 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/detc2021-68550.

Full text
Abstract:
Abstract Toggle mechanisms are used throughout engineering to accomplish various tasks, for example residential electrical switching. The design of toggle mechanisms can be broken into three categories: determination of a topology, geometric parameterization, and optimization. While topological determination and optimization have well established processes for use in design, geometric parameterization which includes defining link lengths and spring stiffness has largely been left to engineering judgement. This paper presents a design methodology using potential energy graphs which informs the engineering decisions made in choosing mechanism parameters, giving designers higher confidence in the design. A kinematic analysis coupled with Lagrange’s equation determines the relationship between the mechanism parameters and the potential energy curve. Plotting the potential energy with respect to the generalized coordinate yields a graph with a slope that is the generalized force or moment. The relationships between parameters and their effects on the mechanism are difficult to observe in the equations of motion, but potential energy plots readily provide information pertinent to the design of toggle mechanisms and decouple their effects. The plots also allow design by position rather than time which makes the design process faster. The design process is applied to three examples: a simple toggle mechanism, a compliant mechanism, and a reconfigurable mechanism to show the nuances of the approach.
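The design idea (reading stable positions off local minima of the potential energy curve) can be sketched with a hypothetical bistable spring toggle; the geometry and stiffness below are illustrative, not from the paper:

```python
import math

def potential_energy(x, k=100.0, h=1.0, L0=2.0):
    """Spring of stiffness k anchored at (0, h), free end sliding along
    the x-axis at (x, 0). V = 0.5*k*(stretch)^2; the mechanism is
    bistable when the natural length L0 exceeds h."""
    length = math.sqrt(x * x + h * h)
    return 0.5 * k * (length - L0) ** 2

# Sample V along the generalized coordinate and locate stable equilibria
# as local minima of the potential energy curve (slope = generalized force).
xs = [-3.0 + 0.01 * i for i in range(601)]
V = [potential_energy(x) for x in xs]
minima = [xs[i] for i in range(1, 600) if V[i] < V[i - 1] and V[i] < V[i + 1]]
print(minima)   # two stable positions near +/- sqrt(L0**2 - h**2) = +/- 1.732
```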
APA, Harvard, Vancouver, ISO, and other styles
6

Donmez, Ata, and Ahmet Kahraman. "A Generalized Torsional Dynamics Formulation for Multi-Mesh Gear Trains With Clearances and Torque Fluctuations." In ASME 2022 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/detc2022-88988.

Full text
Abstract:
Abstract Vibro-impacts of gears are common in various automotive geared drivetrains used in engines and transmissions, causing excessive noise issues coined as gear rattle. External torque fluctuations lead to contact loss at gear mesh interfaces of such systems to result in sequences of impacts. Drivetrain architectures used in these applications vary widely, especially in diesel engine timing gear systems, requiring specific models for each drivetrain configuration considered. This makes dynamics evaluations of different design concepts a difficult task. In this paper, a generalized unified formulation will be presented that is capable of deriving equations of motion of any drivetrain system. This generalized formulation allows a systematic and coding-friendly derivation of system matrices for rattle simulation of any multi-mesh system. The proposed methodology will be applied to example 3-axis idler and torque split configurations to demonstrate its effectiveness. A criterion will be devised to establish boundaries between no-impact and impact motions. No-rattle boundaries will be plotted on parameter maps for example 3-axis meshing configurations. A piecewise linear solution technique will be used to solve governing equations to study the coupling of vibro-impacts at gear contact interfaces. Finally, the multi-mesh discrete model of this configuration will be verified through comparisons to its deformable-body counterpart.
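The clearance nonlinearity that underlies such piecewise-linear rattle models can be sketched as follows; the stiffness and backlash values are illustrative only:

```python
def mesh_force(x, k=1000.0, b=0.5):
    """Piecewise-linear gear mesh force with backlash 2*b: zero force
    inside the clearance (contact loss), linear stiffness k outside it."""
    if x > b:
        return k * (x - b)       # drive-side contact
    if x < -b:
        return k * (x + b)       # back-side contact (impact on reversal)
    return 0.0                   # teeth separated: no mesh force

print(mesh_force(0.2), mesh_force(1.5))   # 0.0 1000.0
```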
APA, Harvard, Vancouver, ISO, and other styles
7

Cetintas, Gulten, and Serdar Ethem Hamamci. "A Graphical Stability Analysis Method for Cascade Conjugate Order Systems." In ASME 2021 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/detc2021-71143.

Full text
Abstract:
Abstract The theory and applications of complex fractional analysis have recently become a hot topic in the fields of mathematics and engineering. Therefore, studies on the complex order systems and their subset called the complex conjugate order systems began to appear in the control community. On the other hand, the concept of stability has always been an important issue, especially in the analysis and control of dynamical systems. In this paper, a graphical method for stability analysis of the complex conjugate order systems is presented. Since the proposed method is based on the Mikhailov stability criterion known from the stability theory of integer order systems, it is named the generalized modified Mikhailov stability criterion. This method gives stability information about the higher order complex conjugate order systems, i.e. cascade conjugate order systems, according to whether it encloses the origin in the complex plane or not. Three simulation examples for the cascade conjugate order systems are given to show the effectiveness and reliability of the method presented. The results are verified by the poles on the first sheet of Riemann surface and also time responses of the systems, which are calculated analytically in a very complex way.
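The underlying Mikhailov test for integer-order systems can be sketched numerically: trace the hodograph D(jω) and accumulate its argument, which for a stable degree-n polynomial tends to nπ/2. This illustrates only the classical criterion, not its generalization to cascade conjugate-order systems:

```python
import math

def mikhailov_arg_change(coeffs, omega_max=200.0, steps=20000):
    """Unwrapped argument change of D(j*omega) from omega = 0 to omega_max,
    for a polynomial with coefficients [a_n, ..., a_1, a_0]."""
    def D(omega):
        s = complex(0.0, omega)
        val = 0 + 0j
        for c in coeffs:          # Horner evaluation
            val = val * s + c
        return val

    total = 0.0
    prev = math.atan2(D(0.0).imag, D(0.0).real)
    for i in range(1, steps + 1):
        w = omega_max * i / steps
        ang = math.atan2(D(w).imag, D(w).real)
        d = ang - prev
        if d > math.pi:           # unwrap principal-value jumps
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = ang
    return total

# (s + 1)^3 = s^3 + 3s^2 + 3s + 1 is stable and of degree 3,
# so the argument change approaches 3*pi/2.
total = mikhailov_arg_change([1.0, 3.0, 3.0, 1.0])
print(total)
```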
APA, Harvard, Vancouver, ISO, and other styles
8

Lee, Chung-Ching, Jeng-Hsiung Lee, and Po-Chih Lee. "Dimensional Mobility Criteria of Planar 6T-9R Paradoxical Chains." In ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/detc2012-70916.

Full text
Abstract:
Focusing on the structural type of 6 ternary links and 9 revolute pairs (denoted 6T-9R), which stems from the commercialized Expanda-Triangles construction toy, we conduct an in-depth study of the mobility constraints of the generalized planar 6T-9R paradoxical chains. According to the Grübler criterion, the degree of freedom (DoF) of chains of this type is minus 3, and they should have no constrained motion. However, due to special dimensional constraints (Euclidean metric), some such paradoxical mechanisms can still have full-cycle mobility. To identify the dimensional constraints of mobility algebraically, we adopt a simple direct algebraic elimination approach to attain an eliminant of sixth-order polynomial in one variable only. The identity of the polynomial, whose coefficients are merely functions of link lengths and structural angles, produces six necessary dimensional constraints for a movable 6T-9R paradoxical chain. One new and two existing paradoxical chains are revealed, and their numerical examples are illustrated to show the correctness and validity of the proposed dimensional constraints. A novel generalized form of planar 6-bar paradoxical chain is disclosed too.
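The Grübler count quoted in the abstract is easy to reproduce:

```python
def grubler_dof(n_links, n_single_dof_joints):
    """Planar Grubler criterion: F = 3*(n - 1) - 2*j1, where n is the
    number of links (including the frame) and j1 the number of
    single-degree-of-freedom joints (here, revolute pairs)."""
    return 3 * (n_links - 1) - 2 * n_single_dof_joints

# 6 ternary links, 9 revolute pairs -> F = -3: nominally overconstrained,
# hence "paradoxical" when special dimensions still permit motion.
print(grubler_dof(6, 9))   # -3
```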
APA, Harvard, Vancouver, ISO, and other styles
9

Koganti, Prasanth B., and Firdaus E. Udwadia. "Synthesis of Stable Optimal Controls From Lyapunov Based Constraints." In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-12395.

Full text
Abstract:
This paper presents a simple approach to synthesize optimal controls from user prescribed positive definite functions. The synthesized controls minimize a user specified objective function which is a quadratic function of the control variables. The approach followed in this paper is inspired by recent developments in analytical dynamics and the observation that the Lyapunov criterion for stability of dynamical systems can be recast as a constraint to be imposed on the system. The approach used to obtain the fundamental equation of motion for constrained mechanical systems is extended to find control forces that minimize any user-specified quadratic function of the control forces while simultaneously enforcing the stability requirement as an additional constraint. This method can also be generalized to nonautonomous dynamical system described by first order nonlinear differential equations. We further explore the possibility of extending the approach to systems in which we do not have full state control.
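The core idea (imposing the Lyapunov decrease condition as a constraint and solving for a minimum-norm control) can be sketched on a scalar system; this is a toy illustration, not the paper's general fundamental-equation formulation:

```python
def min_norm_control(x, k=2.0):
    """For xdot = x + u with V = x**2 / 2, impose the Lyapunov constraint
    Vdot = -k*V, i.e. x*(x + u) = -k*x**2/2, and solve for the control:
    u = -x*(1 + k/2). For this scalar system the constraint fixes u
    uniquely, so it is trivially the minimum-norm solution."""
    return -x * (1.0 + k / 2.0)

# Euler simulation: the constrained control drives the state to the origin.
x, dt = 1.0, 0.001
for _ in range(5000):
    x += dt * (x + min_norm_control(x))
print(abs(x) < 0.05)   # True: closed loop is xdot = -(k/2)*x, so x decays
```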
APA, Harvard, Vancouver, ISO, and other styles
10

Ahmad, Aftab, Kjell Andersson, and Ulf Sellgren. "A Comparative Study of Friction Estimation and Compensation Using Extended, Iterated, Hybrid, and Unscented Kalman Filters." In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-12997.

Full text
Abstract:
Transparency is a key performance evaluation criterion for haptic devices, which describes how realistically the haptic force/torque feedback is mimicked from a virtual environment or, in the case of a master-slave haptic device, from the remote environment. Transparency in haptic devices is affected by disturbance forces like friction between moving parts. An accurate estimate of friction forces for observer-based compensation requires estimation techniques which are computationally efficient and give reduced error between measured and estimated friction. In this work different estimation techniques based on the Kalman filter, such as the Extended Kalman filter (EKF), Iterated Extended Kalman filter (IEKF), Hybrid extended Kalman filter (HEKF) and Unscented Kalman filter (UKF), are investigated with the purpose of finding which estimation technique gives the most efficient and realistic compensation using online estimation. The friction observer is based on a newly developed smooth generalized Maxwell-slip (S-GMS) friction model. Each studied estimation technique is demonstrated by numerical and experimental simulation of sinusoidal position tracking experiments. The performance of the system is quantified with the normalized root mean-square error (NRMSE) and the computation time. The results from the comparative analyses suggest that friction estimation and compensation based on the Iterated Extended Kalman filter gives both a reduced tracking error and computational advantages compared to the EKF, HEKF, and UKF, as well as to no friction compensation.
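The estimation principle can be sketched with a much simpler linear disturbance observer; this is a stand-in for the EKF/IEKF/HEKF/UKF variants compared in the paper, with illustrative gains and a noise-free constant-friction plant:

```python
def estimate_friction(f_true=0.3, u=1.0, m=1.0, dt=0.01, steps=2000,
                      l1=0.5, l2=2.0):
    """Linear observer estimating a constant friction force f in the
    velocity dynamics v_dot = (u - f) / m from velocity measurements."""
    v, v_hat, f_hat = 0.0, 0.0, 0.0
    for _ in range(steps):
        v += dt * (u - f_true) / m      # plant (noise-free measurement here)
        v_hat += dt * (u - f_hat) / m   # observer prediction
        e = v - v_hat                   # innovation
        v_hat += l1 * e                 # velocity correction
        f_hat -= l2 * e                 # measured v above prediction means
                                        # the friction estimate is too large
    return f_hat

f_hat = estimate_friction()
print(round(f_hat, 3))   # converges to the true value 0.3
```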
APA, Harvard, Vancouver, ISO, and other styles
