
Journal articles on the topic 'Interpretable coefficients'


Consult the top 50 journal articles for your research on the topic 'Interpretable coefficients.'


Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Lubiński, Wojciech, and Tomasz Gólczewski. "Physiologically interpretable prediction equations for spirometric indexes." Journal of Applied Physiology 108, no. 5 (May 2010): 1440–46. http://dx.doi.org/10.1152/japplphysiol.01211.2009.

Abstract:
The need for ethnic-specific reference values of lung function variables (LFs) is acknowledged. Their estimation requires expensive and laborious examinations, so additional use of the results in physiology and epidemiology would be profitable. To this end, we proposed a form of prediction equations with physiologically interpretable coefficients: a baseline, the onset age (A0) and rate (S) of LF decline, and a height coefficient. The form was tested with data from healthy, nonsmoking Poles aged 18–85 yr (1,120 men, 1,625 women) who performed spirometry maneuvers according to American Thoracic Society criteria. The values of all the coefficients (including A0) for several LFs were determined by regression of LF on the patient's age and the deviation of the patient's height from the mean height of that patient's year group. S values for forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1), peak expiratory flow, and maximal expiratory flow at 75% of FVC (MEF75) were very similar in both sexes (1.03 ± 0.07%/yr), whereas FEV1/FVC declined four to five times more slowly. S for MEF25 appeared age dependent. A0 was smallest (28–32 yr) for MEF25 and FEV1. About 50% of each age subgroup (18–40, 41–60, 61–85 yr) exhibited LFs below the mean and 4–6% were below the 5th percentile lower limits of normal, so the form of equations proposed in the paper appeared appropriate for spirometry. Additionally, if this form is accepted, epidemiological and physiological comparison of different LFs and populations will be possible through direct comparison of the equation coefficients.
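A plausible rendering of the equation form just described, with notation assumed here for illustration (the paper gives the exact form): a baseline level, a decline of S %/yr starting at onset age A0, and a height correction relative to the mean height of the subject's year group,

    \mathrm{LF}(A, H) = \mathrm{LF}_{0}\left[ 1 - \frac{S}{100}\,\max(A - A_{0},\, 0) \right] + c_{H}\,\bigl(H - \bar{H}_{A}\bigr),

where A is age, H is height, and c_H is the height coefficient. Comparing fitted values of LF_0, A_0, S, and c_H across populations or across lung function indexes is what makes such coefficients directly interpretable.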
2

LIPOVETSKY, STAN. "MEANINGFUL REGRESSION COEFFICIENTS BUILT BY DATA GRADIENTS." Advances in Adaptive Data Analysis 02, no. 04 (October 2010): 451–62. http://dx.doi.org/10.1142/s1793536910000574.

Abstract:
Multiple regression coefficients define the change in the dependent variable due to a change in one predictor while all other predictors are held constant. Rearranging the data into paired differences of observations and keeping only the largest changes yields a matrix of essentially single-variable changes, which is close to an orthogonal design, so multicollinearity has no impact on the regression. A similar approach is used to obtain meaningful coefficients for nonlinear regressions, with coefficients of half-elasticity, elasticity, and odds elasticity derived from the gradients in each predictor. In contrast to regular linear and nonlinear regressions, the suggested technique produces interpretable coefficients that are not prone to multicollinearity effects.
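A minimal sketch of the paired-difference idea as described above (the selection rule and the dominance threshold below are illustrative assumptions, not the paper's exact procedure):

    import numpy as np

    def gradient_regression(X, y, dominance=0.9):
        """Regress response differences on predictor differences, keeping only
        pairs of observations where a single predictor dominates the change."""
        n, p = X.shape
        rows, resp = [], []
        for i in range(n):
            for j in range(i + 1, n):
                dx = X[j] - X[i]
                total = np.sum(np.abs(dx))
                if total == 0:
                    continue
                # keep the pair only if one predictor accounts for most of the change
                if np.max(np.abs(dx)) / total >= dominance:
                    rows.append(dx)
                    resp.append(y[j] - y[i])
        D, dy = np.array(rows), np.array(resp)
        # near-orthogonal design: OLS without intercept on the differences
        beta, *_ = np.linalg.lstsq(D, dy, rcond=None)
        return beta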
3

Lawless, Connor, Jayant Kalagnanam, Lam M. Nguyen, Dzung Phan, and Chandra Reddy. "Interpretable Clustering via Multi-Polytope Machines." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7309–16. http://dx.doi.org/10.1609/aaai.v36i7.20693.

Abstract:
Clustering is a popular unsupervised learning tool often used to discover groups within a larger population, such as customer segments or patient subtypes. However, despite its use as a tool for subgroup discovery and description, few state-of-the-art algorithms provide any rationale or description behind the clusters found. We propose a novel approach for interpretable clustering that both clusters data points and constructs polytopes around the discovered clusters to explain them. Our framework allows for additional constraints on the polytopes, including ensuring that the hyperplanes constructing the polytope are axis-parallel or sparse with integer coefficients. We formulate the problem of constructing clusters via polytopes as a Mixed-Integer Non-Linear Program (MINLP). To solve our formulation, we propose a two-phase approach where we first initialize clusters and polytopes using alternating minimization, and then use coordinate descent to boost clustering performance. We benchmark our approach on a suite of synthetic and real-world clustering problems, where our algorithm outperforms state-of-the-art interpretable and non-interpretable clustering algorithms.
4

Eshima, Nobuoki, Claudio Giovanni Borroni, Minoru Tabata, and Takeshi Kurosawa. "An Entropy-Based Tool to Help the Interpretation of Common-Factor Spaces in Factor Analysis." Entropy 23, no. 2 (January 24, 2021): 140. http://dx.doi.org/10.3390/e23020140.

Abstract:
This paper proposes a method for deriving interpretable common factors based on canonical correlation analysis applied to the vectors of common factors and manifest variables in the factor analysis model. First, an entropy-based method for measuring factor contributions is reviewed. Second, the entropy-based contribution measure of the common-factor vector is decomposed into those of canonical common factors, and it is also shown that the importance order of factors is that of their canonical correlation coefficients. Third, the method is applied to derive interpretable common factors. Numerical examples are provided to demonstrate the usefulness of the present approach.
5

Liu, Jin, Robert A. Perera, Le Kang, Roy T. Sabo, and Robert M. Kirkpatrick. "Obtaining Interpretable Parameters From Reparameterized Longitudinal Models: Transformation Matrices Between Growth Factors in Two Parameter Spaces." Journal of Educational and Behavioral Statistics 47, no. 2 (December 1, 2021): 167–201. http://dx.doi.org/10.3102/10769986211052009.

Abstract:
This study proposes transformation functions and matrices between coefficients in the original and reparameterized parameter spaces for an existing linear-linear piecewise model to derive the interpretable coefficients directly related to the underlying change pattern. Additionally, the study extends the existing model to allow individual measurement occasions and investigates predictors for individual differences in change patterns. We present the proposed methods with simulation studies and a real-world data analysis. Our simulation study demonstrates that the method can generally provide an unbiased and accurate point estimate and appropriate confidence interval coverage for each parameter. The empirical analysis shows that the model can estimate the growth factor coefficients and path coefficients directly related to the underlying developmental process, thereby providing meaningful interpretation.
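To illustrate the kind of transformation involved, consider a generic linear-linear (bilinear) trajectory written in a smooth reparameterized form (the notation here is assumed for illustration, not taken from the article):

    y(t) = a_0 + a_1 t + a_2 \lvert t - \gamma \rvert

Mapping back to the directly interpretable coefficients gives the slope before the knot, the slope after the knot, and the level at the change point:

    \text{slope}_1 = a_1 - a_2, \qquad \text{slope}_2 = a_1 + a_2, \qquad y(\gamma) = a_0 + a_1 \gamma .

The transformation functions and matrices described above play this role for the growth factors of the piecewise model, in both directions between the two parameter spaces.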
6

Takada, Masaaki, Taiji Suzuki, and Hironori Fujisawa. "Independently Interpretable Lasso for Generalized Linear Models." Neural Computation 32, no. 6 (June 2020): 1168–221. http://dx.doi.org/10.1162/neco_a_01279.

Abstract:
Sparse regularization such as ℓ1 regularization is a quite powerful and widely used strategy for high-dimensional learning problems. The effectiveness of sparse regularization has been supported practically and theoretically by several studies. However, one of the biggest issues in sparse regularization is that its performance is quite sensitive to correlations between features. Ordinary ℓ1 regularization selects variables correlated with each other under weak regularizations, which results in deterioration of not only its estimation error but also interpretability. In this letter, we propose a new regularization method, independently interpretable lasso (IILasso), for generalized linear models. Our proposed regularizer suppresses selecting correlated variables, so that each active variable affects the response independently in the model. Hence, we can interpret regression coefficients intuitively, and the performance is also improved by avoiding overfitting. We analyze the theoretical property of the IILasso and show that the proposed method is advantageous for its sign recovery and achieves almost minimax optimal convergence rate. Synthetic and real data analyses also indicate the effectiveness of the IILasso.
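A sketch of the kind of objective involved (the exact weighting used by IILasso is given in the paper; the form below is a hedged paraphrase): an ℓ1 penalty is augmented with cross terms that discourage correlated variables from being active simultaneously,

    \hat\beta = \arg\min_\beta \; \frac{1}{2n}\lVert y - X\beta\rVert_2^2
      + \lambda \Bigl( \lVert\beta\rVert_1 + \frac{\alpha}{2} \sum_{j \ne k} R_{jk}\,|\beta_j|\,|\beta_k| \Bigr),

where R_{jk} ≥ 0 grows with the absolute correlation between features j and k, so two highly correlated variables are penalized for entering the model together.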
7

Bazilevskiy, Mikhail Pavlovich. "Program for Constructing Quite Interpretable Elementary and Non-elementary Quasi-linear Regression Models." Proceedings of the Institute for System Programming of the RAS 35, no. 4 (2023): 129–44. http://dx.doi.org/10.15514/ispras-2023-35(4)-7.

Abstract:
A quite interpretable linear regression satisfies the following conditions: the signs of its coefficients correspond to the substantive meaning of the factors; multicollinearity is negligible; the coefficients are significant; and the quality of the model approximation is high. Previously, the QInter-1 program was developed to construct such models, estimated using ordinary least squares. In it, according to the given initial parameters, a mixed integer 0-1 linear programming problem is automatically generated, as a result of which the most informative regressors are selected. The mathematical apparatus underlying this program was significantly expanded over time: non-elementary linear regressions were developed, linear restrictions on the absolute values of intercorrelations were proposed to control multicollinearity, and the possibility of constructing not only linear but also quasi-linear regressions was suggested. This article describes the second version of the program for constructing quite interpretable regressions, QInter-2. Depending on the initial parameters selected by the user, the QInter-2 program automatically formulates, for the LPSolve solver, the mixed integer 0-1 linear programming problem for constructing both elementary and non-elementary quite interpretable quasi-linear regressions. It is possible to specify up to nine elementary functions and to control such parameters as the number of regressors in the model, the number of digits after the decimal point in real numbers, the absolute contributions of variables to the overall determination, the number of occurrences of explanatory variables in the model, and the magnitude of intercorrelations. While working with the program, the user can also control the number of elementary and non-elementarily transformed variables, which affects the speed of solving the mixed integer 0-1 linear programming problem. The QInter-2 program is universal and can be used to construct quite interpretable mathematical dependencies in various subject areas.
8

Yeung, Michael. "Attention U-Net ensemble for interpretable polyp and instrument segmentation." Nordic Machine Intelligence 1, no. 1 (November 1, 2021): 47–49. http://dx.doi.org/10.5617/nmi.9157.

Abstract:
The difficulty associated with screening and treating colorectal polyps alongside other gastrointestinal pathology presents an opportunity to incorporate computer-aided systems. This paper develops a deep learning pipeline that accurately segments colorectal polyps and various instruments used during endoscopic procedures. To improve transparency, we leverage the Attention U-Net architecture, enabling visualisation of the attention coefficients to identify salient regions. Moreover, we improve performance by incorporating transfer learning using a pre-trained encoder, together with test-time augmentation, softmax averaging, softmax thresholding and connected component labeling to further refine predictions.
9

Barnett, Tim, and Patricia A. Lanier. "Comparison of Alternative Response Formats for an Abbreviated Version of Rotter's Locus of Control Scale." Psychological Reports 77, no. 1 (August 1995): 259–64. http://dx.doi.org/10.2466/pr0.1995.77.1.259.

Abstract:
The present study analyzed the factor structure of an abbreviated version of Rotter's (1966) locus of control scale. The 11-item scale was administered in both the original forced-choice format and a 4-point rating format. The data were derived from administration of the scale as part of the National Longitudinal Survey (N = 7,407). Maximum likelihood factor analysis with oblique rotation gave a three-factor solution for both the forced-choice and rating formats, but the resulting factors were not easily interpretable, and the subscales had high intercorrelations and unacceptably low reliability coefficients. Subsequent analyses suggested that a single-factor solution was more appropriate. The 4-point rating format appeared to be more interpretable and had the highest reliability coefficient.
10

Zheng, Fanglan, Erihe, Kun Li, Jiang Tian, and Xiaojia Xiang. "A federated interpretable scorecard and its application in credit scoring." International Journal of Financial Engineering 08, no. 03 (August 6, 2021): 2142009. http://dx.doi.org/10.1142/s2424786321420093.

Abstract:
In this paper, we propose a vertical federated learning (VFL) structure for logistic regression with a bounded constraint for the traditional scorecard, namely FL-LRBC. Under the premise of data privacy protection, FL-LRBC enables multiple agencies to jointly obtain an optimized scorecard model in a single training session. This leads to a scorecard model with positive coefficients, guaranteeing its desirable characteristics (e.g., interpretability and robustness), while the time-consuming parameter-tuning process can be avoided. Moreover, model performance in terms of both AUC and the Kolmogorov–Smirnov (KS) statistic is significantly improved by FL-LRBC, owing to the feature enrichment in our algorithm architecture. FL-LRBC has already been applied to credit business in a nationwide financial holdings group in China.
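A minimal single-party sketch of the bounded-coefficient idea (the federated protocol itself is not shown; the specific bounds below are illustrative assumptions): logistic regression fitted under box constraints that force the feature coefficients to be non-negative, as is common for scorecard-style models.

    import numpy as np
    from scipy.optimize import minimize

    def fit_bounded_logreg(X, y):
        """Logistic regression with non-negative feature coefficients (intercept unbounded)."""
        n, p = X.shape
        Xb = np.hstack([np.ones((n, 1)), X])           # prepend intercept column

        def neg_log_likelihood(w):
            z = Xb @ w
            # sum of log(1 + exp(z)) - y*z, computed stably
            return np.sum(np.logaddexp(0.0, z) - y * z)

        w0 = np.zeros(p + 1)
        bounds = [(None, None)] + [(0.0, None)] * p    # intercept free, others >= 0
        res = minimize(neg_log_likelihood, w0, method="L-BFGS-B", bounds=bounds)
        return res.x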
11

Deng, Jiale, and Yanyan Shen. "Self-Interpretable Graph Learning with Sufficient and Necessary Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11749–56. http://dx.doi.org/10.1609/aaai.v38i10.29059.

Abstract:
Self-interpretable graph learning methods provide insights to unveil the black-box nature of GNNs by providing predictions with built-in explanations. However, current works suffer from performance degradation compared to GNNs trained without built-in explanations. We argue the main reason is that they fail to generate explanations satisfying both sufficiency and necessity, and the biased explanations further hurt GNNs' performance. In this work, we propose a novel framework for generating SUfficient aNd NecessarY explanations (SUNNY-GNN for short) that benefit GNNs' predictions. The key idea is to conduct augmentations by structurally perturbing given explanations and employ a contrastive loss to guide the learning of explanations toward sufficiency and necessity directions. SUNNY-GNN introduces two coefficients to generate hard and reliable contrastive samples. We further extend SUNNY-GNN to heterogeneous graphs. Empirical results on various GNNs and real-world graphs show that SUNNY-GNN yields accurate predictions and faithful explanations, outperforming the state-of-the-art methods by improving 3.5% prediction accuracy and 13.1% explainability fidelity on average. Our code and data are available at https://github.com/SJTU-Quant/SUNNY-GNN.
12

Yin, Hao, Austin R. Benson, and Johan Ugander. "Measuring directed triadic closure with closure coefficients." Network Science 8, no. 4 (June 1, 2020): 551–73. http://dx.doi.org/10.1017/nws.2020.20.

Abstract:
Recent work studying triadic closure in undirected graphs has drawn attention to the distinction between measures that focus on the “center” node of a wedge (i.e., length-2 path) versus measures that focus on the “initiator,” a distinction with considerable consequences. Existing measures in directed graphs, meanwhile, have all been center-focused. In this work, we propose a family of eight directed closure coefficients that measure the frequency of triadic closure in directed graphs from the perspective of the node initiating closure. The eight coefficients correspond to different labeled wedges, where the initiator and center nodes are labeled, and we observe dramatic empirical variation in these coefficients on real-world networks, even in cases when the induced directed triangles are isomorphic. To understand this phenomenon, we examine the theoretical behavior of our closure coefficients under a directed configuration model. Our analysis illustrates an underlying connection between the closure coefficients and moments of the joint in- and out-degree distributions of the network, offering an explanation of the observed asymmetries. We also use our directed closure coefficients as predictors in two machine learning tasks. We find interpretable models with AUC scores above 0.92 in class-balanced binary prediction, substantially outperforming models that use traditional center-focused measures.
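A sketch of one initiator-based variant (which of the eight coefficients this corresponds to, and the exact normalization, are assumptions for illustration): the fraction of outgoing length-2 paths u→v→w that are closed by an edge u→w.

    import networkx as nx

    def out_out_closure(G: nx.DiGraph, u):
        """Fraction of directed wedges u->v->w (w != u) closed by an edge u->w."""
        wedges = closed = 0
        for v in G.successors(u):
            for w in G.successors(v):
                if w == u:
                    continue
                wedges += 1
                if G.has_edge(u, w):
                    closed += 1
        return closed / wedges if wedges else float("nan")

    # Example usage:
    # G = nx.gnp_random_graph(200, 0.05, directed=True)
    # scores = {u: out_out_closure(G, u) for u in G}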
13

Shapiro, Alexander A. "Thermodynamic Theory of Diffusion and Thermodiffusion Coefficients in Multicomponent Mixtures." Journal of Non-Equilibrium Thermodynamics 45, no. 4 (October 25, 2020): 343–72. http://dx.doi.org/10.1515/jnet-2020-0006.

Abstract:
Transport coefficients (like diffusion and thermodiffusion) are the key parameters to be studied in non-equilibrium thermodynamics. For practical applications, it is important to predict them based on the thermodynamic parameters of a mixture under study: pressure, temperature, composition, and thermodynamic functions, like enthalpies or chemical potentials. The current study develops a thermodynamic framework for such prediction. The theory is based on a system of physically interpretable postulates; in this respect, it is better grounded theoretically than the previously suggested models for diffusion and thermodiffusion coefficients. In fact, it translates onto the thermodynamic language of the previously developed model for the transport properties based on the statistical fluctuation theory. Many statements of the previously developed model are simplified and amplified, and the derivation is made transparent and ready for further applications. The n(n+1)/2 independent Onsager coefficients are reduced to 2n+1 determining parameters: the emission functions and the penetration lengths. The transport coefficients are expressed in terms of these parameters. These expressions are much simplified based on the Onsager symmetry property for the phenomenological coefficients. The model is verified by comparison with the known expressions for the diffusion coefficients that were previously considered in the literature.
14

Munkhdalai, Lkhagvadorj, Tsendsuren Munkhdalai, Pham Van Van Huy, Jang-Eui Hong, Keun Ho Ryu, and Nipon Theera-Umpon. "Neural Network-Augmented Locally Adaptive Linear Regression Model for Tabular Data." Sustainability 14, no. 22 (November 17, 2022): 15273. http://dx.doi.org/10.3390/su142215273.

Abstract:
Creating an interpretable model with high predictive performance is crucial in the eXplainable AI (XAI) field. In this study, we introduce an interpretable neural network-based regression model for tabular data. Our proposed model uses ordinary least squares (OLS) regression as a base-learner, and the parameters of the base-learner are re-updated by a neural network, which acts as a meta-learner in our proposed model. The meta-learner updates the regression coefficients using the confidence interval formula. We extensively compared our proposed model to other benchmark approaches on public datasets for regression tasks. The results showed that our proposed neural network-based interpretable model outperformed the benchmark models. We also applied our proposed model to synthetic data to measure model interpretability, and we showed that it can explain the relationship between input and output variables by approximating a local linear function for each point. In addition, we trained our model on economic data to discover the relationship between the central bank policy rate and inflation over time. The results indicate that the effect of central bank policy rates on inflation tends to strengthen during a recession and weaken during an expansion. We also analyzed CO2 emission data, and our model discovered some interesting relationships between input and target variables, such as a parabolic relationship between CO2 emissions and gross national product (GNP). Finally, these experiments showed that our proposed neural network-based interpretable model is applicable to many real-world applications where the data are tabular and explainable models are required.
15

Kim, Ho Heon, Youngin Kim, and Yu Rang Park. "Interpretable Conditional Recurrent Neural Network for Weight Change Prediction: Algorithm Development and Validation Study." JMIR mHealth and uHealth 9, no. 3 (March 29, 2021): e22183. http://dx.doi.org/10.2196/22183.

Abstract:
Background In recent years, mobile-based interventions have received more attention as an alternative to on-site obesity management. Despite increased mobile interventions for obesity, there are lost opportunities to achieve better outcomes due to the lack of a predictive model using current existing longitudinal and cross-sectional health data. Noom (Noom Inc) is a mobile app that provides various lifestyle-related logs including food logging, exercise logging, and weight logging. Objective The aim of this study was to develop a weight change predictive model using an interpretable artificial intelligence algorithm for mobile-based interventions and to explore contributing factors to weight loss. Methods Lifelog mobile app (Noom) user data of individuals who used the weight loss program for 16 weeks in the United States were used to develop an interpretable recurrent neural network algorithm for weight prediction that considers both time-variant and time-fixed variables. From a total of 93,696 users in the coaching program, we excluded users who did not take part in the 16-week weight loss program or who were not overweight or obese or had not entered weight or meal records for the entire 16-week program. This interpretable model was trained and validated with 5-fold cross-validation (training set: 70%; testing: 30%) using the lifelog data. Mean absolute percentage error between actual weight loss and predicted weight was used to measure model performance. To better understand the behavior factors contributing to weight loss or gain, we calculated contribution coefficients in test sets. Results A total of 17,867 users’ data were included in the analysis. The overall mean absolute percentage error of the model was 3.50%, and the error of the model declined from 3.78% to 3.45% by the end of the program. The time-level attention weighting was shown to be equally distributed at 0.0625 each week, but this gradually decreased (from 0.0626 to 0.0624) as it approached 16 weeks. Factors such as usage pattern, weight input frequency, meal input adherence, exercise, and sharp decreases in weight trajectories had negative contribution coefficients of –0.021, –0.032, –0.015, and –0.066, respectively. For time-fixed variables, being male had a contribution coefficient of –0.091. Conclusions An interpretable algorithm, with both time-variant and time-fixed data, was used to precisely predict weight loss while preserving model transparency. This week-to-week prediction model is expected to improve weight loss and provide a global explanation of contributing factors, leading to better outcomes.
16

Huang, Yuehua, Wenfen Liu, Song Li, Ying Guo, and Wen Chen. "Interpretable Single-dimension Outlier Detection (ISOD): An Unsupervised Outlier Detection Method Based on Quantiles and Skewness Coefficients." Applied Sciences 14, no. 1 (December 22, 2023): 136. http://dx.doi.org/10.3390/app14010136.

Abstract:
A crucial area of study in data mining is outlier detection, particularly in the areas of network security, credit card fraud detection, industrial flaw detection, etc. Existing outlier detection algorithms, which can be divided into supervised, semi-supervised, and unsupervised methods, suffer from missing labeled data, the curse of dimensionality, low interpretability, etc. To address these issues, in this paper we present an unsupervised outlier detection method based on quantiles and skewness coefficients, called ISOD (Interpretable Single-dimension Outlier Detection). ISOD first constructs the empirical cumulative distribution function and then computes the quantile and skewness coefficients of each dimension; finally, it outputs the outlier score. This paper's contributions are as follows: (1) we propose an unsupervised outlier detection algorithm, ISOD, with high interpretability and scalability; (2) extensive experiments on benchmark datasets demonstrated the superior performance of ISOD compared with state-of-the-art baselines in terms of ROC and AP.
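An illustrative per-dimension score in this spirit (not the exact ISOD formula; the tail rule and the aggregation below are assumptions): each dimension contributes a score based on how far a value lies beyond a quantile band, with the skewness coefficient deciding which tail to emphasize.

    import numpy as np
    from scipy.stats import skew

    def quantile_skew_scores(X, q=0.25):
        """Per-point outlier scores aggregated over single dimensions."""
        X = np.asarray(X, dtype=float)
        scores = np.zeros(X.shape[0])
        for j in range(X.shape[1]):
            col = X[:, j]
            q1, q3 = np.quantile(col, [q, 1 - q])
            iqr = max(q3 - q1, 1e-12)
            g = skew(col)
            # distance beyond the quantile band, scaled by its width;
            # weight the tail on the skewed side more heavily
            upper = np.clip(col - q3, 0, None) / iqr * (1 + max(g, 0))
            lower = np.clip(q1 - col, 0, None) / iqr * (1 + max(-g, 0))
            scores += upper + lower
        return scores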
17

Ranasinghe, Nisal, Damith Senanayake, Sachith Seneviratne, Malin Premaratne, and Saman Halgamuge. "GINN-LP: A Growing Interpretable Neural Network for Discovering Multivariate Laurent Polynomial Equations." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14776–84. http://dx.doi.org/10.1609/aaai.v38i13.29396.

Abstract:
Traditional machine learning is generally treated as a black-box optimization problem and does not typically produce interpretable functions that connect inputs and outputs. However, the ability to discover such interpretable functions is desirable. In this work, we propose GINN-LP, an interpretable neural network to discover the form and coefficients of the underlying equation of a dataset, when the equation is assumed to take the form of a multivariate Laurent Polynomial. This is facilitated by a new type of interpretable neural network block, named the “power-term approximator block”, consisting of logarithmic and exponential activation functions. GINN-LP is end-to-end differentiable, making it possible to use backpropagation for training. We propose a neural network growth strategy that will enable finding the suitable number of terms in the Laurent polynomial that represents the data, along with sparsity regularization to promote the discovery of concise equations. To the best of our knowledge, this is the first model that can discover arbitrary multivariate Laurent polynomial terms without any prior information on the order. Our approach is first evaluated on a subset of data used in SRBench, a benchmark for symbolic regression. We first show that GINN-LP outperforms the state-of-the-art symbolic regression methods on datasets generated using 48 real-world equations in the form of multivariate Laurent polynomials. Next, we propose an ensemble method that combines our method with a high-performing symbolic regression method, enabling us to discover non-Laurent polynomial equations. We achieve state-of-the-art results in equation discovery, showing an absolute improvement of 7.1% over the best contender, by applying this ensemble method to 113 datasets within SRBench with known ground-truth equations.
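A minimal sketch of the "power-term approximator" idea described above (the weights and wiring are illustrative, not the paper's architecture): taking logarithms of the inputs, applying a linear layer, and exponentiating yields products of powers, i.e., multivariate Laurent monomials; a final linear layer combines the monomials with learned coefficients.

    import numpy as np

    def laurent_terms(X, W):
        """Each row of W gives one power term: exp(W @ log x) = prod_j x_j**W[i, j]."""
        return np.exp(np.log(X) @ W.T)           # inputs X must be positive here

    # y_hat = laurent_terms(X, W) @ c combines the terms with coefficients c.
    # Example: W = [[2, -1]] turns (x1, x2) into the single term x1**2 / x2.
    X = np.array([[2.0, 4.0]])
    W = np.array([[2.0, -1.0]])
    print(laurent_terms(X, W))                   # [[1.]] since 2**2 / 4 = 1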
18

Sedighi-Maman, Zahra, and Jonathan J. Heath. "An Interpretable Two-Phase Modeling Approach for Lung Cancer Survivability Prediction." Sensors 22, no. 18 (September 8, 2022): 6783. http://dx.doi.org/10.3390/s22186783.

Abstract:
Although lung cancer survival status and survival length predictions have primarily been studied individually, a scheme that leverages both fields in an interpretable way for physicians remains elusive. We propose a two-phase data analytic framework that is capable of classifying survival status for 0.5-, 1-, 1.5-, 2-, 2.5-, and 3-year time-points (phase I) and predicting the number of survival months within 3 years (phase II) using recent Surveillance, Epidemiology, and End Results data from 2010 to 2017. In this study, we employ three analytical models (general linear model, extreme gradient boosting, and artificial neural networks), five data balancing techniques (synthetic minority oversampling technique (SMOTE), relocating safe level SMOTE, borderline SMOTE, adaptive synthetic sampling, and majority weighted minority oversampling technique), two feature selection methods (least absolute shrinkage and selection operator (LASSO) and random forest), and the one-hot encoding approach. By implementing a comprehensive data preparation phase, we demonstrate that a computationally efficient and interpretable method such as GLM performs comparably to more complex models. Moreover, we quantify the effects of individual features in phase I and II by exploiting GLM coefficients. To the best of our knowledge, this study is the first to (a) implement a comprehensive data processing approach to develop performant, computationally efficient, and interpretable methods in comparison to black-box models, (b) visualize top factors impacting survival odds by utilizing the change in odds ratio, and (c) comprehensively explore short-term lung cancer survival using a two-phase approach.
19

Heinemann, Lothar A. J. "How to Measure “Short-Term Hormonal Effects”?" Obstetrics and Gynecology International 2009 (2009): 1–5. http://dx.doi.org/10.1155/2009/459485.

Abstract:
Background. Interest in assessing the short-term benefits and risks of sex-steroid hormone use (OC or HRT) has existed for years. However, no validated scale is available to evaluate the broad array of described effects of short-term hormone use. Methods. A raw scale consisting of 43 specific items and 47 general data items was developed. Surveys in Italy, Germany, and Austria were performed and the data analyzed by factor analyses. The resulting new scale with 15 items underwent reliability and validity investigations. Results. The new scale consists of 15 items in 5 domains. Internal consistency reliability coefficients were satisfactory, as were test-retest reliability coefficients. Content and concurrent validity were promising. Conclusion. The psychometric properties of the new scale suggest good characteristics for measuring short-term effects of sex-steroid hormones in women. The scale appears appropriate, feasible, interpretable, reliable, and valid for application as a PRO scale.
20

Westö, Johan, and Patrick J. C. May. "Describing complex cells in primary visual cortex: a comparison of context and multifilter LN models." Journal of Neurophysiology 120, no. 2 (August 1, 2018): 703–19. http://dx.doi.org/10.1152/jn.00916.2017.

Abstract:
Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multifilter linear-nonlinear (LN) models and context models. Models are, however, never correct, and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: 1) we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions, and 2) we evaluate context models and multifilter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multifilter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multifilter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantizations of neural behavior. NEW & NOTEWORTHY We used data from complex cells in primary visual cortex to estimate a wide variety of receptive field models from two frameworks that have previously not been compared with each other. The models included traditionally used multifilter linear-nonlinear models and novel variants of context models. Using mutual information and correlation coefficients as performance measures, we showed that context models are superior for describing complex cells and that the novel context models performed the best.
21

Cheng (程思浩), Sihao, Yuan-Sen Ting (丁源森), Brice Ménard, and Joan Bruna. "A new approach to observational cosmology using the scattering transform." Monthly Notices of the Royal Astronomical Society 499, no. 4 (October 15, 2020): 5902–14. http://dx.doi.org/10.1093/mnras/staa3165.

Abstract:
Parameter estimation with non-Gaussian stochastic fields is a common challenge in astrophysics and cosmology. In this paper, we advocate performing this task using the scattering transform, a statistical tool sharing ideas with convolutional neural networks (CNNs) but requiring neither training nor tuning. It generates a compact set of coefficients, which can be used as robust summary statistics for non-Gaussian information. It is especially suited for fields presenting localized structures and hierarchical clustering, such as the cosmological density field. To demonstrate its power, we apply this estimator to a cosmological parameter inference problem in the context of weak lensing. On simulated convergence maps with realistic noise, the scattering transform outperforms classic estimators and is on a par with the state-of-the-art CNN. It retains the advantages of traditional statistical descriptors, has provable stability properties, allows one to check for systematics, and, importantly, the scattering coefficients are interpretable. It is a powerful and attractive estimator for observational cosmology and the study of physical fields in general.
22

KHOSHGOFTAAR, TAGHI M., and EDWARD B. ALLEN. "LOGISTIC REGRESSION MODELING OF SOFTWARE QUALITY." International Journal of Reliability, Quality and Safety Engineering 06, no. 04 (December 1999): 303–17. http://dx.doi.org/10.1142/s0218539399000292.

Abstract:
Reliable software is mandatory for complex mission-critical systems. Classifying modules as fault-prone, or not, is a valuable technique for guiding development processes, so that resources can be focused on those parts of a system that are most likely to have faults. Logistic regression offers advantages over other classification modeling techniques, such as interpretable coefficients. There are few prior applications of logistic regression to software quality models in the literature, and none that we know of account for prior probabilities and costs of misclassification. A contribution of this paper is the application of prior probabilities and costs of misclassification to a logistic regression-based classification rule for a software quality model. This paper also contributes an integrated method for using logistic regression in software quality modeling, including examples of how to interpret coefficients, how to use prior probabilities, and how to use costs of misclassifications. A case study of a major subsystem of a military, real-time system illustrates the techniques.
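A small illustration of the kind of cost-aware classification rule discussed above (this is the standard minimum-expected-cost threshold, stated generically rather than as the paper's exact formula): given an estimated probability p that a module is fault-prone, a cost for missing a faulty module, and a cost for a false alarm, flag the module when flagging has the lower expected cost.

    def classify_fault_prone(p, cost_missed_fault=10.0, cost_false_alarm=1.0):
        """Minimum-expected-cost rule applied to a fitted logistic regression probability p."""
        threshold = cost_false_alarm / (cost_false_alarm + cost_missed_fault)
        return p >= threshold

    # With a 10:1 cost ratio, modules are flagged once p >= 1/11 (about 0.09) rather than 0.5.
    print(classify_fault_prone(0.12))   # True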
23

Ventrucci, Massimo, and Håvard Rue. "Penalized complexity priors for degrees of freedom in Bayesian P-splines." Statistical Modelling 16, no. 6 (September 20, 2016): 429–53. http://dx.doi.org/10.1177/1471082x16659154.

Abstract:
Bayesian penalized splines (P-splines) assume an intrinsic Gaussian Markov random field prior on the spline coefficients, conditional on a precision hyperparameter τ. Prior elicitation of τ is difficult. To overcome this issue, we aim to build priors on an interpretable property of the model, indicating the complexity of the smooth function to be estimated. Following this idea, we propose penalized complexity (PC) priors for the number of effective degrees of freedom. We present the general ideas behind the construction of these new PC priors, describe their properties, and show how to implement them in P-splines for Gaussian data.
24

Kirkland, Angus I., and Rüdiger R. Meyer. "“Indirect” High-Resolution Transmission Electron Microscopy: Aberration Measurement and Wavefunction Reconstruction." Microscopy and Microanalysis 10, no. 4 (August 2004): 401–13. http://dx.doi.org/10.1017/s1431927604040437.

Abstract:
Improvements in instrumentation and image processing techniques mean that methods involving reconstruction of focal or beam-tilt series of images are now realizing the promise they have long offered. This indirect approach recovers both the phase and the modulus of the specimen exit plane wave function and can extend the interpretable resolution. However, such reconstructions require the a posteriori determination of the objective lens aberrations, including the actual beam tilt, defocus, and twofold and threefold astigmatism. In this review, we outline the theory behind exit plane wavefunction reconstruction and describe methods for the accurate and automated determination of the required coefficients of the wave aberration function. Finally, recent applications of indirect reconstruction in the structural analysis of complex oxides are presented.
25

LIU, JIAN, BIN MA, and MING LI. "PRIMA: PEPTIDE ROBUST IDENTIFICATION FROM MS/MS SPECTRA." Journal of Bioinformatics and Computational Biology 04, no. 01 (February 2006): 125–38. http://dx.doi.org/10.1142/s0219720006001746.

Abstract:
In proteomics, tandem mass spectrometry is the key technology for peptide sequencing. However, partially due to the deficiency of peptide identification software, a large portion of tandem mass spectra are discarded in almost all proteomics centers because they are not interpretable. The problem is more acute with lower-quality data from low-end but more popular devices such as ion trap instruments. In order to deal with noisy, low-quality data, this paper develops a systematic machine learning approach to construct a robust linear scoring function, whose coefficients are determined by linear programming. A prototype, PRIMA, was implemented. When tested on large benchmarks of varying quality, PRIMA consistently achieved higher accuracy than the commonly used software MASCOT, SEQUEST, and X! Tandem.
26

Lowther, Aaron P., Paul Fearnhead, Matthew A. Nunes, and Kjeld Jensen. "Semi-automated simultaneous predictor selection for regression-SARIMA models." Statistics and Computing 30, no. 6 (September 4, 2020): 1759–78. http://dx.doi.org/10.1007/s11222-020-09970-6.

Abstract:
Deciding which predictors to use plays an integral role in deriving statistical models in a wide range of applications. Motivated by the challenges of predicting events across a telecommunications network, we propose a semi-automated, joint model-fitting and predictor selection procedure for linear regression models. Our approach can model and account for serial correlation in the regression residuals, produces sparse and interpretable models and can be used to jointly select models for a group of related responses. This is achieved through fitting linear models under constraints on the number of nonzero coefficients using a generalisation of a recently developed mixed integer quadratic optimisation approach. The resultant models from our approach achieve better predictive performance on the motivating telecommunications data than methods currently used by industry.
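For context, a generic cardinality-constrained (best-subset) formulation of the kind this approach generalises (a standard textbook form, not the paper's exact program; M is a sufficiently large constant):

    \min_{\beta, z} \;\lVert y - X\beta\rVert_2^2
    \quad \text{s.t.} \quad -M z_j \le \beta_j \le M z_j, \;\;
    z_j \in \{0,1\}, \;\; \sum_{j=1}^{p} z_j \le k,

so the binary variables z_j switch coefficients in and out of the model, and at most k of them can be nonzero.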
27

Angelis, Dimitrios, Filippos Sofos, Konstantinos Papastamatiou, and Theodoros E. Karakasidis. "Fluid Properties Extraction in Confined Nanochannels with Molecular Dynamics and Symbolic Regression Methods." Micromachines 14, no. 7 (July 19, 2023): 1446. http://dx.doi.org/10.3390/mi14071446.

Abstract:
In this paper, we propose an alternative road to calculate the transport coefficients of fluids and the slip length inside nano-conduits in a Poiseuille-like geometry. These are all computationally demanding properties that depend on dynamic, thermal, and geometrical characteristics of the implied fluid and the wall material. By introducing the genetic programming-based method of symbolic regression, we are able to derive interpretable data-based mathematical expressions based on previous molecular dynamics simulation data. Emphasis is placed on the physical interpretability of the symbolic expressions. The outcome is a set of mathematical equations, with reduced complexity and increased accuracy, that adhere to existing domain knowledge and can be exploited in fluid property interpolation and extrapolation, bypassing timely simulations when possible.
28

Kume, Kenji, and Naoko Nose-Togawa. "An Adaptive Orthogonal SSA Decomposition Algorithm for a Time Series." Advances in Data Science and Adaptive Analysis 10, no. 01 (January 2018): 1850002. http://dx.doi.org/10.1142/s2424922x1850002x.

Abstract:
Singular spectrum analysis (SSA) is a nonparametric spectral decomposition of a time series into an arbitrary number of interpretable components. It involves a single parameter, the window length L, which can be adjusted for the specific purpose of the analysis. After the decomposition of a time series, similar series are grouped to obtain the interpretable components by consulting the w-correlation matrix. To achieve better resolution of the frequency spectrum, a larger window length L is preferable and, in this case, proper grouping is crucial for the SSA decomposition. When the w-correlation matrix does not have block-diagonal form, however, it is hard to carry out the grouping adequately. To avoid this, we propose a novel algorithm for the adaptive orthogonal decomposition of the time series based on the SSA scheme. The SSA decomposition sequences of the time series are recombined and the linear coefficients are determined so as to maximize the squared norm. This results in an eigenvalue problem for the Gram matrix, and we obtain the orthonormal basis vectors for the L-dimensional subspace. By the orthogonal projection of the original time series onto these basis vectors, we obtain an adaptive orthogonal decomposition of the time series without the redundancy of the original SSA decomposition.
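A minimal sketch of the basic SSA decomposition that this adaptive scheme builds on (the window length L and the anti-diagonal averaging step are standard; the adaptive recombination itself is not shown):

    import numpy as np

    def ssa_components(x, L):
        """Basic SSA: embed the series, take the SVD, and reconstruct one
        component series per eigentriple by anti-diagonal averaging."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        K = N - L + 1
        X = np.column_stack([x[i:i + L] for i in range(K)])   # L x K trajectory matrix
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        comps = []
        for i in range(len(s)):
            Xi = s[i] * np.outer(U[:, i], Vt[i])              # rank-1 piece of X
            # average over anti-diagonals (constant time index) to get a series
            comp = [Xi[::-1, :].diagonal(k).mean() for k in range(-(L - 1), K)]
            comps.append(comp)
        return np.array(comps)                                # rows sum back to x

    # Example: a noisy sine decomposed with window length 30
    # t = np.arange(200); x = np.sin(0.2 * t) + 0.1 * np.random.randn(200)
    # parts = ssa_components(x, L=30)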
29

Reivan-Ortiz, Geovanny Genaro, Gisela Pineda-Garcia, Bello León Parias, Patricia Natali Reivan Ortiz, Patricia Elizabeth Ortiz-Rodas, Andrés Alexis Ramírez Coronel, and Pedro Carlos Martinez Suarez. "Adaptación y validación ecuatoriana de la Escala de Factores de Riesgo Asociados a los Trastornos de la Conducta Alimentaria (EFRATA)." Anales de Psicología 38, no. 2 (April 19, 2022): 232–38. http://dx.doi.org/10.6018/analesps.475061.

Abstract:
The objective of this study was to adapt the EFRATA Scale of Risk Factors Associated with Eating Disorders and to examine its factor structure and reliability in the Ecuadorian population. A non-probabilistic sample of 1172 participants was used (age: M = 21.99; SD = 2.49; 58.6% women and 41.4% men). The first study, using parallel analysis, identified seven interpretable factors that explain 50% of the variance. The second study, a confirmatory factor analysis, indicated an acceptable fit (GFI = 0.96; AGFI = 0.95; NFI = 0.94; RMR = 0.08). The reliability coefficients, Cronbach's alpha and McDonald's omega, were 0.89 and 0.90, respectively. The Ecuadorian version of the EFRATA shows good psychometric properties and is adapted to the cultural context of this country.
30

Wu, Yuanyuan, Linfei Zhang, Uzair Aslam Bhatti, and Mengxing Huang. "Interpretable Machine Learning for Personalized Medical Recommendations: A LIME-Based Approach." Diagnostics 13, no. 16 (August 15, 2023): 2681. http://dx.doi.org/10.3390/diagnostics13162681.

Abstract:
Chronic diseases are increasingly major threats to older persons, seriously affecting their physical health and well-being. Hospitals have accumulated a wealth of health-related data, including patients' test reports, treatment histories, and diagnostic records, to better understand patients' health, safety, and disease progression. Extracting relevant information from this data enables physicians to provide personalized patient-treatment recommendations. While collaborative filtering techniques and classical algorithms such as naive Bayes, logistic regression, and decision trees have had notable success in health-recommendation systems, most current systems primarily inform users of their likely preferences without providing explanations. This paper proposes an approach of deep learning with a local interpretable model-agnostic explanations (LIME)-based interpretable recommendation system to solve this problem. Specifically, we apply the proposed approach to two chronic diseases common in older adults: heart disease and diabetes. After data preprocessing, we use six deep-learning algorithms to form interpretations. In the heart-disease data set, the actual model recommendation of the multi-layer perceptron and gradient-boosting algorithms differs from the local model's recommendation of LIME, which can be used as its approximate prediction. From the feature importance of these two algorithms, it can be seen that the CholCheck, GenHith, and HighBP features are the most important for predicting heart disease. In the diabetes data set, the actual model predictions of the multi-layer perceptron and logistic-regression algorithms were little different from the local model's prediction of LIME, which can be used as its approximate recommendation. Moreover, from the feature importance of the two algorithms, it can be seen that the three features of glucose, BMI, and age were the most important for predicting diabetes. Next, LIME is used to determine the importance of each feature that affected the results of the calculated model. Subsequently, we present the contribution coefficients of these features to the final recommendation. By analyzing the impact of different patient characteristics on the recommendations, our proposed system elucidates the underlying reasons behind these recommendations and enhances patient trust. This approach has important implications for medical recommendation systems and encourages informed decision-making in healthcare.
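A minimal sketch of generating a LIME explanation for one record with the lime package (the synthetic data, model choice, and feature names are placeholders, not the paper's setup):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from lime.lime_tabular import LimeTabularExplainer

    # Synthetic stand-in for a tabular health-record data set (illustrative only)
    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    feature_names = [f"feat_{i}" for i in range(X.shape[1])]

    model = GradientBoostingClassifier().fit(X, y)

    explainer = LimeTabularExplainer(
        X, feature_names=feature_names,
        class_names=["no disease", "disease"], mode="classification")
    exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
    print(exp.as_list())   # local contribution coefficients, one per feature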
31

Smedema, Susan Miller, and René Marie Talbot. "Psychometric Validation of the Job Satisfaction of Persons With Disabilities Scale." Rehabilitation Research, Policy, and Education 34, no. 3 (September 1, 2020): 176–89. http://dx.doi.org/10.1891/re-19-22.

Abstract:
Objective: To evaluate the measurement structure of the Job Satisfaction of Persons with Disabilities Scale (JSPDS) in a sample of employed U.S. Americans with disabilities. Design: A quantitative descriptive design using exploratory factor analysis (EFA) and correlational analysis. Participants: Two hundred and fifty-nine individuals with disabilities who were employed at least 10 hours per week. Results: The EFA indicated a two-factor structure accounting for 42.99% of the total variance. The internal consistency reliability coefficients for the Integrated Work Environment and Job Quality factors were .87 and .74, respectively. Both factors correlated with selected employment and well-being variables in logical directions. Conclusion: The two-factor measurement structure of the JSPDS appears to be valid and interpretable, and can be used in research and clinical settings in order to develop effective strategies for long-term employment success of people with disabilities.
32

Smedema, Susan Miller, Fong Chan, Ming-Hung Wang, Emre Umucu, Naoko Yura Yasui, Wei-Mo Tu, Nicole Ditchman, and Chia-Chiang Wang. "Psychometric Validation of the Taiwanese Version of theJob Satisfaction of Persons with Disabilities Scalein a Sample of Individuals with Poliomyelitis." Australian Journal of Rehabilitation Counselling 22, no. 1 (April 13, 2016): 27–39. http://dx.doi.org/10.1017/jrc.2016.1.

Abstract:
Objective: To evaluate the measurement structure of the Taiwanese version of the Job Satisfaction of Persons with Disabilities Scale (JSPDS). Design: A quantitative descriptive research design using exploratory factor analysis (EFA). Participants: One hundred and thirty-two gainfully employed individuals from Taiwan with poliomyelitis participated in this study. Results: EFA results indicated a three-factor structure accounting for 54.1 per cent of the total variance. The internal consistency reliability coefficients for the integrated work environment, job quality, and alienation factors were 0.91, 0.77, and 0.59, respectively. Only the integrated work environment and job quality factors showed positive correlations with life satisfaction. People with higher educational attainment also reported higher levels of job satisfaction than people with lower educational attainment. Conclusion: The three-factor measurement structure of the JSPDS appears to be parsimonious, psychologically meaningful, and interpretable, and can be used to improve the comprehensiveness of vocational rehabilitation outcome evaluation.
33

Lei, Minjie, and S. E. Clark. "Probing the Cold Neutral Medium through H I Emission Morphology with the Scattering Transform." Astrophysical Journal 947, no. 2 (April 1, 2023): 74. http://dx.doi.org/10.3847/1538-4357/acc02a.

Abstract:
Neutral hydrogen (H I) emission exhibits complex morphology that encodes rich information about the physics of the interstellar medium. We apply the scattering transform (ST) to characterize the H I emission structure via a set of compact and interpretable coefficients, and find a connection between the H I emission morphology and H I cold neutral medium (CNM) phase content. Where H I absorption measurements are unavailable, the H I phase structure is typically estimated from the emission via spectral line decomposition. Here, we present a new probe of the CNM content using measures that are solely derived from H I emission spatial information. We apply the ST to GALFA-H I data at high Galactic latitudes (b > 30°), and compare the resulting coefficients to CNM fraction measurements derived from archival H I emission and absorption spectra. We quantify the correlation between the ST coefficients and the measured CNM fraction (f_CNM), finding that the H I emission morphology encodes substantial f_CNM-correlating information and that ST-based metrics for small-scale linearity are particularly predictive of f_CNM. This is further corroborated by the enhancement of the I_857/N_HI ratio with larger ST measures of small-scale linearity. These results are consistent with the picture of regions with higher CNM content being more populated by small-scale filamentary H I structures. Our work illustrates a physical connection between the H I morphology and phase content, and suggests that future phase decomposition methods can be improved by making use of both H I spectral and spatial information.
34

Quyen. "STORM SURGE FORECAST MODEL USING GENETIC PROGRAMMING." Journal of Military Science and Technology, no. 69A (November 16, 2020): 75–89. http://dx.doi.org/10.54939/1859-1043.j.mst.69a.2020.75-89.

Abstract:
Storm surge is a typical natural disaster originating from the ocean. Therefore, accurate forecasting of surges is a vital task for avoiding property losses and reducing the risk posed by tropical storm surges. Genetic Programming (GP) is an evolution-based model learning technique that can simultaneously find the functional form and the numeric coefficients of a model. Moreover, GP has been widely applied to build models for predictive problems. However, GP has seldom been applied to the problem of storm surge forecasting. In this paper, a new method of using GP to evolve models for storm surge forecasting is proposed. Experimental results on data sets collected from the Tottori coast of Japan show that GP can produce more accurate storm surge forecasting models than other standard machine learning methods. Moreover, GP can automatically select relevant features when evolving storm surge forecasting models, and the models developed by GP are interpretable.
35

Zheng, Ervine, Qi Yu, and Zhi Zheng. "Sparse Maximum Margin Learning from Multimodal Human Behavioral Patterns." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 5437–45. http://dx.doi.org/10.1609/aaai.v37i4.25676.

Abstract:
We propose a multimodal data fusion framework to systematically analyze human behavioral data from specialized domains that are inherently dynamic, sparse, and heterogeneous. We develop a two-tier architecture of probabilistic mixtures, where the lower tier leverages parametric distributions from the exponential family to extract significant behavioral patterns from each data modality. These patterns are then organized into a dynamic latent state space at the higher tier to fuse patterns from different modalities. In addition, our framework jointly performs pattern discovery and maximum-margin learning for downstream classification tasks by using a group-wise sparse prior that regularizes the coefficients of the maximum-margin classifier. Therefore, the discovered patterns are highly interpretable and discriminative to support downstream classification tasks. Experiments on real-world behavioral data from medical and psychological domains demonstrate that our framework discovers meaningful multimodal behavioral patterns with improved interpretability and prediction performance.
APA, Harvard, Vancouver, ISO, and other styles
36

Balogh, Vanda, Gábor Berend, Dimitrios I. Diochnos, and György Turán. "Understanding the Semantic Content of Sparse Word Embeddings Using a Commonsense Knowledge Base." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7399–406. http://dx.doi.org/10.1609/aaai.v34i05.6235.

Full text
Abstract:
Word embeddings have developed into a major NLP tool with broad applicability. Understanding the semantic content of word embeddings remains an important challenge for additional applications. One aspect of this issue is to explore the interpretability of word embeddings. Sparse word embeddings have been proposed as models with improved interpretability. Continuing this line of research, we investigate the extent to which human interpretable semantic concepts emerge along the bases of sparse word representations. In order to have a broad framework for evaluation, we consider three general approaches for constructing sparse word representations, which are then evaluated in multiple ways. We propose a novel methodology to evaluate the semantic content of word embeddings using a commonsense knowledge base, applied here to the sparse case. This methodology is illustrated by two techniques using the ConceptNet knowledge base. The first approach assigns a commonsense concept label to the individual dimensions of the embedding space. The second approach uses a metric, derived by spreading activation, to quantify the coherence of coordinates along the individual axes. We also provide results on the relationship between the two approaches. The results show, for example, that in the individual dimensions of sparse word embeddings, words having high coefficients are more semantically related in terms of path lengths in the knowledge base than the ones having zero coefficients.
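Editor's note: a toy sketch of the reported effect that, along one basis of a sparse embedding, high-coefficient words are closer in the knowledge graph than zero-coefficient words; the vocabulary, embedding matrix, and graph edges are hypothetical placeholders, not ConceptNet data or the paper's metrics.

```python
import numpy as np
import networkx as nx
from itertools import combinations

vocab = ["dog", "cat", "puppy", "piano", "violin", "sonata"]
emb = np.array([[0.9, 0.0], [0.8, 0.0], [0.7, 0.1],    # sparse 2-dimensional embeddings
                [0.0, 0.9], [0.0, 0.8], [0.1, 0.7]])

kb = nx.Graph([("dog", "cat"), ("dog", "puppy"), ("cat", "puppy"),
               ("piano", "violin"), ("violin", "sonata"), ("piano", "sonata"),
               ("puppy", "piano")])                     # weak bridge between clusters

dim = 0
high = [w for w, c in zip(vocab, emb[:, dim]) if c > 0.5]   # high-coefficient words
zero = [w for w, c in zip(vocab, emb[:, dim]) if c == 0.0]  # zero-coefficient words

within = np.mean([nx.shortest_path_length(kb, a, b) for a, b in combinations(high, 2)])
across = np.mean([nx.shortest_path_length(kb, a, b) for a in high for b in zero])
print(f"mean path length among high-coefficient words: {within:.2f}")
print(f"mean path length from high- to zero-coefficient words: {across:.2f}")
```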
APA, Harvard, Vancouver, ISO, and other styles
37

CARVALHO, Lucas de Francisco, and Catarina Possenti SETTE. "Revision of the Criticism Avoidance dimension of the Dimensional Clinical Personality Inventory." Estudos de Psicologia (Campinas) 34, no. 2 (June 2017): 219–31. http://dx.doi.org/10.1590/1982-02752017000200004.

Full text
Abstract:
The aim of this study was to revise the Criticism Avoidance dimension of the Dimensional Clinical Personality Inventory and to investigate its psychometric properties. The participants included 213 subjects aged 18 to 69 years (Mean = 25.56; Standard Deviation = 8.70), mostly females (N = 159; 74.3%). All participants answered the Dimensional Clinical Personality Inventory and the Brazilian versions of the Revised NEO Personality Inventory and the Personality Inventory for DSM-5. A total of 470 new items were developed and selected using content analysis, and 39 items composed the final version. Based on the parallel analysis and factor analysis, three interpretable factors were found. The internal consistency coefficients showed adequate levels of reliability, ranging between 0.80 and 0.91 for the factors. Additionally, expected correlations were found between the Dimensional Clinical Personality Inventory and the other tests. The present study demonstrated the adequacy of the revised dimension for assessing pathological characteristics of avoidant personality functioning.
APA, Harvard, Vancouver, ISO, and other styles
38

Li, Yang, Le Qi, Yichen Qin, Cunjie Lin, and Yuhong Yang. "Block Weighted Least Squares Estimation for Nonlinear Cost-based Split Questionnaire Design." Journal of Official Statistics 39, no. 4 (December 1, 2023): 459–87. http://dx.doi.org/10.2478/jos-2023-0022.

Full text
Abstract:
In this study, we advocate a two-stage framework to deal with the issues encountered in surveys with long questionnaires. In Stage I, we propose a split questionnaire design (SQD) developed by minimizing a quadratic cost function while achieving reliability constraints on estimates of means, which effectively reduces the survey cost, alleviates the burden on the respondents, and potentially improves data quality. In Stage II, we develop a block weighted least squares (BWLS) estimator of linear regression coefficients that can be used with data from the SQD obtained in Stage I. Numerical studies comparing existing methods strongly favor the proposed estimator in terms of prediction and estimation accuracy. Using the European Social Survey (ESS) data, we demonstrate that the proposed SQD can substantially reduce the survey cost and the number of questions answered by each respondent, and that the proposed estimator is much more interpretable and efficient than present alternatives for the SQD data.
APA, Harvard, Vancouver, ISO, and other styles
39

Hasanov, Fakhri J., Lester C. Hunt, and Jeyhun I. Mikayilov. "Estimating different order polynomial logarithmic environmental Kuznets curves." Environmental Science and Pollution Research 28, no. 31 (April 1, 2021): 41965–87. http://dx.doi.org/10.1007/s11356-021-13463-y.

Full text
Abstract:
This paper contributes to the environmental literature by (i) demonstrating that the estimated coefficients and the statistical significance of the non-leading terms in quadratic, cubic, and quartic logarithmic environmental Kuznets curve (EKC) specifications are arbitrary and should therefore not be used to choose the preferred specification and (ii) detailing a proposed general-to-specific type methodology for choosing the appropriate specifications when attempting to estimate higher-order polynomials such as cubic and quartic logarithmic EKC relationships. Testing for the existence and shape of the well-known EKC phenomenon is a hot topic in the environmental economics literature. The conventional approach widely employs quadratic and cubic specifications and more recently also the quartic specification, where the variables are in logarithmic form. However, it is important that researchers understand whether the estimated EKC coefficients, turning points, and elasticities are statistically acceptable, economically interpretable, and comparable. In addition, it is vital that researchers have a clear, structured, non-arbitrary methodology for determining the preferred specification and hence shape of the estimated EKC. We therefore show mathematically and empirically the arbitrary nature of estimated non-leading coefficients in quadratic, cubic, and quartic logarithmic EKC specifications, being dependent upon the units of measurement chosen for the independent variables (e.g. dependent upon a rescaling of the variables such as moving from $m to $bn). Consequently, the practice followed in many previous papers, whereby the estimates of the non-leading terms are used in the decision to choose the preferred specification of an estimated EKC relationship, is incorrect and should not be followed since it could potentially lead to misleading conclusions. Instead, the choice should be based upon the sign and statistical significance of the estimated coefficients of the leading terms, the location of turning point(s), and the sign and statistical significance of the estimated elasticities. Furthermore, we suggest that researchers should follow a proposed general-to-specific type methodology for choosing the appropriate order of polynomials when attempting to estimate higher-order polynomial logarithmic EKCs.
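Editor's note: a worked illustration of the rescaling argument, in our own notation rather than the paper's, for the quadratic logarithmic EKC.

```latex
% Quadratic logarithmic EKC, income x measured in $m:
y = \beta_0 + \beta_1 \ln x + \beta_2 (\ln x)^2 .
% Re-expressing income in $bn, \tilde{x} = x / c with c = 1000, gives
% \ln x = \ln\tilde{x} + \ln c, and hence
y = \bigl(\beta_0 + \beta_1 \ln c + \beta_2 (\ln c)^2\bigr)
  + \bigl(\beta_1 + 2\beta_2 \ln c\bigr)\,\ln\tilde{x}
  + \beta_2 (\ln\tilde{x})^2 .
```

Only the leading coefficient (and therefore the turning point and the elasticities) is invariant to the unit choice c; the intercept and the non-leading slope absorb arbitrary ln c terms, which is why their signs and significance cannot guide the choice of specification.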
APA, Harvard, Vancouver, ISO, and other styles
40

Chang, Ted L., Hongjing Xia, Sonya Mahajan, Rohit Mahajan, Joe Maisog, Shashaank Vattikuti, Carson C. Chow, and Joshua C. Chang. "Interpretable (not just posthoc-explainable) medical claims modeling for discharge placement to reduce preventable all-cause readmissions or death." PLOS ONE 19, no. 5 (May 9, 2024): e0302871. http://dx.doi.org/10.1371/journal.pone.0302871.

Full text
Abstract:
We developed an inherently interpretable multilevel Bayesian framework for representing variation in regression coefficients that mimics the piecewise linearity of ReLU-activated deep neural networks. We used the framework to formulate a survival model for using medical claims to predict hospital readmission and death that focuses on discharge placement, adjusting for confounding in estimating causal local average treatment effects. We trained the model on a 5% sample of Medicare beneficiaries from 2008 and 2011, based on their 2009–2011 inpatient episodes (approximately 1.2 million), and then tested the model on 2012 episodes (approximately 400 thousand). The model scored an out-of-sample AUROC of approximately 0.75 on predicting all-cause readmissions—defined using official Centers for Medicare and Medicaid Services (CMS) methodology—or death within 30 days of discharge, being competitive against XGBoost and a Bayesian deep neural network, demonstrating that one need not sacrifice interpretability for accuracy. Crucially, as a regression model, it provides what black boxes cannot—its exact gold-standard global interpretation, explicitly defining how the model performs its internal “reasoning” for mapping the input data features to predictions. In doing so, we identify relative risk factors and quantify the effect of discharge placement. We also show that the posthoc explainer SHAP provides explanations that are inconsistent with the ground truth model reasoning that our model readily admits.
APA, Harvard, Vancouver, ISO, and other styles
41

Brown, Kristen C., Kiran D. Bhattacharyya, Sue Kulason, Aneeq Zia, and Anthony Jarc. "How to Bring Surgery to the Next Level: Interpretable Skills Assessment in Robotic-Assisted Surgery." Visceral Medicine 36, no. 6 (2020): 463–70. http://dx.doi.org/10.1159/000512437.

Full text
Abstract:
Introduction: A surgeon’s technical skills are an important factor in delivering optimal patient care. Most existing methods to estimate technical skills remain subjective and resource intensive. Robotic-assisted surgery (RAS) provides a unique opportunity to develop objective metrics using key elements of intraoperative surgeon behavior which can be captured unobtrusively, such as instrument positions and button presses. Recent studies have shown that objective metrics based on these data (referred to as objective performance indicators [OPIs]) correlate to select clinical outcomes during robotic-assisted radical prostatectomy. However, the current OPIs remain difficult to interpret directly and, therefore, to use within structured feedback to improve surgical efficiencies. Methods: We analyzed kinematic and event data from da Vinci surgical systems (Intuitive Surgical, Inc., Sunnyvale, CA, USA) to calculate values that can summarize the use of robotic instruments, referred to as OPIs. These indicators were mapped to broader technical skill categories of established training protocols. A data-driven approach was then applied to further sub-select OPIs that distinguish skill for each technical skill category within each training task. This subset of OPIs was used to build a set of logistic regression classifiers that predict the probability of expertise in that skill to identify targeted improvement and practice. The final, proposed feedback using OPIs was based on the coefficients of the logistic regression model to highlight specific actions that can be taken to improve. Results: We determine that for the majority of skills, only a small subset of OPIs (2–10) are required to achieve the highest model accuracies (80–95%) for estimating technical skills within clinical-like tasks on a porcine model. The majority of the skill models have similar accuracy as models predicting overall expertise for a task (80–98%). Skill models can divide a prediction into interpretable categories for simpler, targeted feedback. Conclusion: We define and validate a methodology to create interpretable metrics for key technical skills during clinical-like tasks when performing RAS. Using this framework for evaluating technical skills, we believe that surgical trainees can better understand both what can be improved and how to improve.
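Editor's note: a minimal sketch of the kind of skill classifier described above, a logistic regression over objective performance indicators (OPIs) whose coefficients indicate which behaviors to change; the feature names and data are hypothetical placeholders, not the actual OPI definitions used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
opi_names = ["camera_moves_per_min", "idle_time_frac", "wrist_articulation",
             "instrument_path_length", "clutch_presses_per_min"]
X = rng.normal(size=(200, len(opi_names)))          # OPIs per task execution
y = (X[:, 2] - X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)  # 1 = expert-like

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Coefficients on standardized OPIs give the direction and size of suggested change.
coefs = model.named_steps["logisticregression"].coef_.ravel()
for name, c in sorted(zip(opi_names, coefs), key=lambda t: -abs(t[1])):
    print(f"{name:26s} {c:+.2f}")
print("P(expert-like skill):", model.predict_proba(X[:1])[0, 1])
```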
APA, Harvard, Vancouver, ISO, and other styles
42

Mittal, Anshika, Ritu Arora, and Rita Kakkar. "Pharmacophore modeling, 3D-QSAR and molecular docking studies of quinazolines and aminopyridines as selective inhibitors of inducible nitric oxide synthase." Journal of Theoretical and Computational Chemistry 18, no. 01 (February 2019): 1950002. http://dx.doi.org/10.1142/s0219633619500020.

Full text
Abstract:
Pharmacophore modeling and 3D-Quantitative Structure Activity Relationship (3D-QSAR) studies have been performed on a dataset of thirty-two quinazoline and aminopyridine derivatives to get an insight into the important structural features required for binding to inducible nitric oxide synthase (iNOS). A four-point CPH (Common Pharmacophore Hypothesis), AHPR.29, with a hydrogen bond acceptor, hydrophobic group, positively charged ionizable group and an aromatic ring, has been obtained as the best pharmacophore model. Satisfactory statistical parameters of the correlation (r2) and cross-validated (q2) correlation coefficients, 0.9288 and 0.6353, respectively, show high robustness and good predictive ability of our selected model. The contour maps have been developed from this model and the analysis has provided an interpretable explanation of the effect that various features and substituents have on the potency and selectivity of inhibitors towards iNOS. Docking studies have also been performed in order to analyze the interactions between the enzyme and the inhibitors. Our proposed model can thus be further used for screening a large database of compounds and design new iNOS inhibitors.
APA, Harvard, Vancouver, ISO, and other styles
43

Gao, Xuan, Chenglai Zhong, Jun Xiang, Yang Hong, Yudong Guo, and Juyong Zhang. "Reconstructing Personalized Semantic Facial NeRF Models from Monocular Video." ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–12. http://dx.doi.org/10.1145/3550454.3555501.

Full text
Abstract:
We present a novel semantic model of the human head defined with a neural radiance field. The 3D-consistent head model consists of a set of disentangled and interpretable bases, and can be driven by low-dimensional expression coefficients. Thanks to the powerful representation ability of the neural radiance field, the constructed model can represent complex facial attributes, including hair and worn accessories, which cannot be represented by traditional mesh blendshapes. To construct the personalized semantic facial model, we propose to define the bases as several multi-level voxel fields. With a short monocular RGB video as input, our method can construct the subject's semantic facial NeRF model in only ten to twenty minutes, and can render a photorealistic human head image in tens of milliseconds given an expression coefficient and view direction. With this novel representation, we apply it to many tasks like facial retargeting and expression editing. Experimental results demonstrate its strong representation ability and training/inference speed. Demo videos and released code are provided on our project page: https://ustc3dv.github.io/NeRFBlendShape/
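Editor's note: a schematic of the basis-plus-coefficient structure described above, in our notation rather than the paper's exact formulation; the field queried at a 3D point is driven by a low-dimensional expression code that linearly combines the learned multi-level voxel-field bases before decoding.

```latex
F_{\theta}(\mathbf{x}, \mathbf{d};\, \mathbf{c}) \;=\;
\mathrm{Decoder}_{\theta}\!\Bigl(\textstyle\sum_{i=1}^{K} c_i\, B_i(\mathbf{x}),\;\mathbf{d}\Bigr),
\qquad \mathbf{c} = (c_1, \dots, c_K)\ \text{expression coefficients},
```

where B_i are the multi-level voxel-field bases, x is the query point, and d is the view direction.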
APA, Harvard, Vancouver, ISO, and other styles
44

Xue, Jingteng, Jingtao Huang, Mingwei Li, Jiaying Chen, Zongfan Wei, Yuan Cheng, Zhonghong Lai, Nan Qu, Yong Liu, and Jingchuan Zhu. "Explanatory Machine Learning Accelerates the Design of Graphene-Reinforced Aluminium Matrix Composites with Superior Performance." Metals 13, no. 10 (October 4, 2023): 1690. http://dx.doi.org/10.3390/met13101690.

Full text
Abstract:
Addressing the exceptional properties of aluminium alloy composites reinforced with graphene, this study presents an interpretable machine learning approach to aid in the rapid and efficient design of such materials. Initially, data on these composites were gathered and optimised in order to create a dataset of composition/process-property. Several machine learning algorithms were used to train various models. The SHAP method was used to interpret and select the best performing model, which happened to be the CatBoost model. The model achieved accurate predictions of hardness and tensile strength, with coefficients of determination of 0.9597 and 0.9882, respectively, and average relative errors of 6.02% and 5.01%, respectively. The results obtained from the SHAP method unveiled the correlation between the composition, process and properties of aluminium alloy composites reinforced with graphene. By comparing the predicted and experimental data in this study, all machine learning models exhibited prediction errors within 10%, confirming their ability to generalise. This study offers valuable insights and support for designing high-performance aluminium matrix composites reinforced with graphene and showcases the implementation of machine learning in materials science.
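Editor's note: a minimal sketch of the workflow described above, training a CatBoost regressor on composition/process features and interpreting it with SHAP; the feature names, data, and hyperparameters are hypothetical placeholders, not the study's dataset.

```python
import numpy as np
import shap
from catboost import CatBoostRegressor

rng = np.random.default_rng(3)
features = ["graphene_wt_pct", "milling_time_h", "sintering_temp_C", "aging_time_h"]
X = rng.uniform(size=(300, len(features)))
y = 60 + 40 * X[:, 0] - 25 * X[:, 0] ** 2 + 10 * X[:, 2] + rng.normal(scale=2, size=300)

model = CatBoostRegressor(iterations=300, depth=4, verbose=False)
model.fit(X, y)
pred = model.predict(X)
print("training R^2:", 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2))

# SHAP attributes each prediction to the input features, exposing the
# composition/process-property trends the model has learned.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
mean_abs = np.abs(shap_values).mean(axis=0)
for name, v in sorted(zip(features, mean_abs), key=lambda t: -t[1]):
    print(f"{name:18s} mean |SHAP| = {v:.2f}")
```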
APA, Harvard, Vancouver, ISO, and other styles
45

Shen, Yuning, Abe Pressman, Evan Janzen, and Irene A. Chen. "Kinetic sequencing (k-Seq) as a massively parallel assay for ribozyme kinetics: utility and critical parameters." Nucleic Acids Research 49, no. 12 (March 27, 2021): e67-e67. http://dx.doi.org/10.1093/nar/gkab199.

Full text
Abstract:
Characterizing genotype-phenotype relationships of biomolecules (e.g. ribozymes) requires accurate ways to measure activity for a large set of molecules. Kinetic measurement using high-throughput sequencing (e.g. k-Seq) is an emerging assay applicable in various domains that potentially scales up measurement throughput to over 10^6 unique nucleic acid sequences. However, maximizing the return of such assays requires understanding the technical challenges introduced by sequence heterogeneity and DNA sequencing. We characterized the k-Seq method in terms of model identifiability, effects of sequencing error, accuracy and precision using simulated datasets and experimental data from a variant pool constructed from previously identified ribozymes. Relative abundance, kinetic coefficients, and measurement noise were found to affect the measurement of each sequence. We introduced bootstrapping to robustly quantify the uncertainty in estimating model parameters and proposed interpretable metrics to quantify model identifiability. These efforts enabled the rigorous reporting of data quality for individual sequences in k-Seq experiments. Here we present detailed protocols, define critical experimental factors, and identify general guidelines to maximize the number of sequences and their measurement accuracy from k-Seq data. Analogous practices could be applied to improve the rigor of other sequencing-based assays.
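Editor's note: a minimal sketch of fitting a generic pseudo-first-order kinetic model to per-sequence data and bootstrapping the fit to quantify parameter uncertainty, in the spirit of the analysis above; the model form, concentrations, and data are illustrative assumptions, not the paper's protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def reacted_fraction(c, A, k, t=90.0):
    # fraction reacted after time t at substrate concentration c
    return A * (1.0 - np.exp(-k * c * t))

rng = np.random.default_rng(4)
conc = np.array([2e-6, 1e-5, 5e-5, 2.5e-4])            # substrate concentrations (M)
true_A, true_k = 0.6, 200.0
y = reacted_fraction(conc, true_A, true_k) * rng.normal(1.0, 0.1, size=(30, conc.size))

boot_params = []
for _ in range(500):                                    # bootstrap over replicates
    sample = y[rng.integers(0, y.shape[0], y.shape[0])].mean(axis=0)
    popt, _ = curve_fit(reacted_fraction, conc, sample, p0=[0.5, 100.0],
                        bounds=([0, 0], [1, 1e6]))
    boot_params.append(popt)

A_hat, k_hat = np.median(boot_params, axis=0)
A_lo, k_lo = np.percentile(boot_params, 2.5, axis=0)
A_hi, k_hi = np.percentile(boot_params, 97.5, axis=0)
print(f"A = {A_hat:.2f} [{A_lo:.2f}, {A_hi:.2f}], k = {k_hat:.0f} [{k_lo:.0f}, {k_hi:.0f}]")
```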
APA, Harvard, Vancouver, ISO, and other styles
46

Tyrsin, A. N. "Scalar measure of the interdependence between random vectors." Industrial laboratory. Diagnostics of materials 84, no. 7 (August 8, 2018): 76–82. http://dx.doi.org/10.26896/1028-6861-2018-84-7-76-82.

Full text
Abstract:
The problem of assessing tightness of the interdependence between random vectors of different dimensionality is considered. These random vectors can obey arbitrary multidimensional continuous distribution laws. An analytical expression is derived for the coefficient of tightness of the interdependence between random vectors. It is expressed in terms of the coefficients of determination of conditional regressions between the components of random vectors. For the case of Gaussian random vectors, a simpler formula is obtained, expressed through the determinants of each of the random vectors and determinant of their association. It is shown that the introduced coefficient meets all the basic requirements imposed on the degree of tightness of the interdependence between random vectors. This approach is more preferable compared to the method of canonical correlations providing determination of the actual tightness of the interdependence between random vectors. Moreover, it can also be used in case of non-linear correlation dependence between the components of random vectors. The measure thus introduced is rather simply interpretable and can be applied in practice to real data samplings. Examples of calculating the tightness of the interdependence between Gaussian random vectors of different dimensionality are given.
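Editor's note: for the Gaussian case described above, a determinant-based index of this kind can be written as follows; this is our schematic notation, closely related to Wilks' lambda, and the exact coefficient defined in the paper may differ.

```latex
\Sigma =
\begin{pmatrix}
\Sigma_{XX} & \Sigma_{XY}\\
\Sigma_{YX} & \Sigma_{YY}
\end{pmatrix},
\qquad
G(X, Y) \;=\; 1 - \frac{\det \Sigma}{\det \Sigma_{XX}\,\det \Sigma_{YY}} .
```

Here G = 0 exactly when the cross-covariance block vanishes (independence in the Gaussian case), and G approaches 1 as the components of the two vectors approach an exact linear relationship.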
APA, Harvard, Vancouver, ISO, and other styles
47

De Francisco Carvalho, Lucas. "Review Study of the Impulsiveness Dimension of the Dimensional Clinical Personality Inventory." Universitas Psychologica 17, no. 1 (March 15, 2018): 1–11. http://dx.doi.org/10.11144/javeriana.upsy17-1.rsid.

Full text
Abstract:
The present study aimed to review the Impulsivity dimension from Dimensional Clinical Personality Inventory (IDCP) as well as to verify its psychometric properties in a non-clinical sample. The procedures were in a 2-stages shape. Step 1 was directed to the development of new items and Step 2 intended for testing the psychometric properties of the revised version. As result of the first step, we selected a set of 38 items. In the second step, the items were tested in a sample of 225 subjects (70.1% females), aging between 18 and 66 years (M = 26.2, SD = 8.1), mostly undergraduate students (58.9%). All subjects answered the IDCP, and the Brazilian versions of both, the Revised NEO Personality Inventory (NEO-PI-R) and the Personality Inventory for DSM-5 (PID-5). As result, we obtained a set of 18 items in three interpretable factors, Inconsequence, Risk Taking and Deceitfulness, with internal consistency coefficients (Cronbach’s α) of .89 for the total score. The correlations of the Impulsivity factors with NEO-PI-R and PID-5 revealed consistent and expected relations. The data reveal the adequacy of the new Impulsivity dimension of IDCP.
APA, Harvard, Vancouver, ISO, and other styles
48

Ketterlinus, Robert D., Fred L. Bookstein, Paul D. Sampson, and Michael E. Lamb. "Partial least squares analysis in developmental psychopathology." Development and Psychopathology 1, no. 4 (October 1989): 351–71. http://dx.doi.org/10.1017/s0954579400000523.

Full text
Abstract:
Despite extensive theoretical and empirical advances in the last two decades, little attention has been paid to the development of statistical techniques suited for the analysis of data gathered in studies of developmental psychopathology. As in most other studies of developmental processes, research in this area often involves complex constructs, such as intelligence and antisocial behavior, measured indirectly using multiple observed indicators. Relations between pairs of such constructs are sometimes reported in terms of latent variables (LVs): linear combinations of the indicators of each construct. We introduce the assumptions and procedures associated with one method for exploring these relations: partial least squares (PLS) analysis, which maximizes covariances between predictor and outcome LVs; its coefficients are correlations between observed variables and LVs, and its LVs are sums of observable variables weighted by these correlations. In the least squares logic of PLS, familiar notions about simple regressions and principal component analyses may be reinterpreted as rules for including or excluding particular blocks in a model and for “splitting” blocks into multiple dimensions. Guidelines for conducting PLS analyses and interpreting their results are provided using data from the Goteborg Daycare Study and the Seattle Longitudinal Prospective Study on Alcohol and Pregnancy. The major advantages of PLS analysis are that it (1) concisely summarizes the intercorrelations among a large number of variables regardless of sample size, (2) yields coefficients that are readily interpretable, and (3) provides straightforward decision rules about modeling. These advantages make PLS a highly desirable technique for use in longitudinal research on developmental psychopathology. The primer is written primarily for the nonstatistician, although formal mathematical details are provided in Appendix 1.
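Editor's note: a minimal sketch of the PLS logic described above, using one common SVD-based variant in which latent variables are weighted sums of observed indicators and the weights maximize the between-block covariance; the predictor and outcome blocks are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 6))                             # predictor block indicators
Y = 0.5 * X[:, :3] + rng.normal(scale=1.0, size=(120, 3)) # outcome block indicators

Xs = (X - X.mean(0)) / X.std(0)                           # standardize both blocks
Ys = (Y - Y.mean(0)) / Y.std(0)

R = Xs.T @ Ys / len(Xs)                                   # cross-correlation matrix
u, s, vt = np.linalg.svd(R)                               # first singular pair maximizes covariance
x_weights, y_weights = u[:, 0], vt[0]

lv_x = Xs @ x_weights                                     # predictor latent variable
lv_y = Ys @ y_weights                                     # outcome latent variable
print("latent-variable correlation:", np.corrcoef(lv_x, lv_y)[0, 1])
# saliences: correlations of each observed variable with the opposite block's LV
print("X saliences:", np.round([np.corrcoef(Xs[:, j], lv_y)[0, 1] for j in range(6)], 2))
```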
APA, Harvard, Vancouver, ISO, and other styles
49

Liu, Yiran, Jian Wang, Cheng Yang, Yu Zheng, and Haipeng Fu. "A Machine Learning-Based Method for Modeling TEC Regional Temporal-Spatial Map." Remote Sensing 14, no. 21 (November 4, 2022): 5579. http://dx.doi.org/10.3390/rs14215579.

Full text
Abstract:
In order to achieve high-accuracy prediction of the total electron content (TEC) of the regional ionosphere for supporting applications in satellite navigation, positioning, measurement, and control, we propose a modeling method based on machine learning (ML) and use it to establish an empirical prediction model of TEC for parts of Europe. The model has three main characteristics: (1) principal component analysis (PCA) is used to separate TEC's temporal and spatial variation characteristics and to establish its corresponding map, (2) the solar activity parameters of the 12-month mean flux of the solar radio waves at 10.7 cm (F10.7_12) and the 12-month mean sunspot number (R12) are introduced into the temporal map as independent variables to reflect the temporal variation characteristics of TEC, and (3) a modified Kriging spatial interpolation method is used to achieve the spatial reconstruction of TEC. Finally, the regression learning method is used to determine the coefficients and harmonic numbers of the model by using the root mean square error (RMSE) and its relative value (RRMSE) as the evaluation standard. Notably, the modeling process is easy to understand, and the determined model parameters are interpretable. The statistical results show that the monthly mean values of TEC predicted by the proposed model are highly consistent with the curve of observed TEC values, and the RRMSE of the predicted results is 12.76%. Furthermore, comparing the proposed model with the IRI model, the prediction accuracy of TEC by the proposed model is much higher than that of the IRI model with either CCIR or URSI coefficients, and the improvement is 38.63% and 35.79%, respectively.
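Editor's note: a minimal sketch of the PCA step described above, separating a stack of TEC maps into spatial patterns and temporal coefficients (an EOF-style decomposition); grid size, number of maps, and data are placeholders, and the paper's model additionally regresses the temporal coefficients on F10.7_12 and R12 and reconstructs space with modified Kriging.

```python
import numpy as np

rng = np.random.default_rng(6)
n_maps, n_lat, n_lon = 240, 20, 30                  # e.g. monthly TEC maps over Europe
tec = rng.random((n_maps, n_lat, n_lon))

flat = tec.reshape(n_maps, -1)
mean_map = flat.mean(axis=0)
anom = flat - mean_map                              # remove the mean TEC map

# SVD of the anomalies: rows of vt are spatial modes, u*s are temporal coefficients
u, s, vt = np.linalg.svd(anom, full_matrices=False)
temporal_coeffs = u * s                             # (n_maps, n_modes)
spatial_modes = vt.reshape(-1, n_lat, n_lon)        # (n_modes, n_lat, n_lon)

explained = s**2 / np.sum(s**2)
print("variance explained by first 3 modes:", np.round(explained[:3], 3))

# Reconstruction with the leading k modes
k = 3
recon = mean_map + temporal_coeffs[:, :k] @ vt[:k]
print("RMSE of rank-3 reconstruction:", np.sqrt(np.mean((recon - flat) ** 2)))
```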
APA, Harvard, Vancouver, ISO, and other styles
50

Duveneck, Eric, Michael Kiehn, Anu Chandran, and Thomas Kühnel. "Reflection angle/azimuth-dependent least-squares reverse time migration." GEOPHYSICS 86, no. 5 (August 30, 2021): S325—S338. http://dx.doi.org/10.1190/geo2020-0701.1.

Full text
Abstract:
Seismic images under complex overburdens such as salt are strongly affected by illumination variations due to overburden velocity variations and imperfect acquisition geometries, making it difficult to obtain reliable image amplitudes. Least-squares reverse time migration (LSRTM) addresses these issues by formulating full wave-equation imaging as a linear inverse problem and solving for a reflectivity model that explains the recorded seismic data. Because subsurface reflection coefficients depend on the incident angle, and possibly on the azimuth, quantitative interpretation under complex overburdens requires LSRTM with output in terms of image gathers, e.g., as a function of the reflection angle or angle and azimuth. We have developed a reflection angle- or angle/azimuth-dependent LSRTM method aimed at obtaining physically meaningful image amplitudes interpretable in terms of angle- or angle/azimuth-dependent reflection coefficients. The method is formulated as a linear inverse problem solved iteratively with the conjugate gradient method. It requires an adjoint pair of linear operators for reflection angle/azimuth-dependent migration and demigration based on full wave-equation propagation. We implement these operators in an efficient way by using a mapping approach between migrated shot gathers and subsurface reflection angle/azimuth gathers. To accelerate convergence of the iterative inversion, we apply image-domain preconditioning operators computed from a single de-remigration step. An angle continuity constraint and a structural dip constraint, implemented via shaping regularization, are used to stabilize the solution in the presence of limited illumination and to control the effects of coherent noise. We examine the method on a synthetic data example and on a wide-azimuth streamer data set from the Gulf of Mexico, where we find that angle/azimuth-dependent LSRTM can achieve significant uplift in subsalt image quality, with overburden- and acquisition-related illumination variation effects on angle/azimuth-dependent image amplitudes largely removed.
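Editor's note: a schematic sketch of the least-squares imaging formulation described above; given an adjoint pair of demigration/migration operators, the reflectivity is found by iteratively minimizing the data misfit. The tiny dense operator below is a placeholder for the actual wave-equation migration/demigration pair, and LSQR stands in for the paper's conjugate gradient solver.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(7)
n_model, n_data = 50, 80
A = rng.normal(size=(n_data, n_model))              # stand-in for the demigration operator

demig = lambda m: A @ m                             # forward: reflectivity -> data
mig = lambda d: A.T @ d                             # adjoint: data -> image
L = LinearOperator((n_data, n_model), matvec=demig, rmatvec=mig)

m_true = np.zeros(n_model)
m_true[[10, 25, 40]] = [1.0, -0.5, 0.8]             # a few "reflectors"
d = L.matvec(m_true) + 0.01 * rng.normal(size=n_data)

m_ls, *_ = lsqr(L, d, iter_lim=50)                  # iterative least-squares solve
print("relative model error:", np.linalg.norm(m_ls - m_true) / np.linalg.norm(m_true))
```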
APA, Harvard, Vancouver, ISO, and other styles
