A ready-made bibliography on the topic "Uncertain imputation"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Uncertain imputation".

The "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication as a .pdf file and read the work's abstract online, provided the relevant details are available in the metadata.

Journal articles on the topic "Uncertain imputation"

1. Suresh, G.V., and E.V. Srinivasa Reddy. "Uncertain Data Analysis with Regularized XGBoost". Webology 19, no. 1 (20.01.2022): 3722–40. http://dx.doi.org/10.14704/web/v19i1/web19245.

Abstract:
Uncertainty is a ubiquitous element in available knowledge about the real world. Data sampling error, obsolete sources, network latency, and transmission error are all factors that contribute to the uncertainty. These kinds of uncertainty have to be handled cautiously, or else the classification results could be unreliable or even erroneous. There are numerous methodologies developed to comprehend and control uncertainty in data. Uncertainty has many faces, i.e., inconsistency, imprecision, ambiguity, incompleteness, vagueness, unpredictability, noise, and unreliability. Missing information is inevitable in real-world data sets. While some conventional multiple imputation approaches are well studied and have shown empirical validity, they entail limitations in processing large datasets with complex data structures. In addition, these standard approaches tend to be computationally inefficient for medium and large datasets. In this paper, we propose a scalable multiple imputation framework based on XGBoost, bootstrapping and regularization. XGBoost, one of the fastest implementations of gradient boosted trees, is able to automatically retain interactions and non-linear relations in a dataset while achieving high computational efficiency with the aid of bootstrapping and regularized methods. In the context of high-dimensional data, this methodology provides less biased estimates and reflects imputation variability better than previous regression approaches. We validate our adaptive imputation approaches against standard methods on numerical and real data sets and show promising results.
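As a rough illustration of the idea described above (not the authors' implementation), the sketch below draws bootstrap resamples of the observed rows and fits a regularized XGBoost regressor on each, producing several imputed copies of one column; the column name `target`, the hyperparameter values, and the assumption that predictor columns are numeric are all placeholders.

```python
# Minimal sketch of bootstrapped, regularized XGBoost multiple imputation
# for one numeric column (illustrative only, not the paper's code).
import numpy as np
import pandas as pd
import xgboost as xgb

def xgb_multiple_impute(df: pd.DataFrame, target: str, m: int = 5, seed: int = 0):
    """Return m imputed copies of df[target]; other (numeric) columns are predictors."""
    rng = np.random.default_rng(seed)
    obs = df[df[target].notna()]
    mis = df[df[target].isna()]
    X_obs, y_obs = obs.drop(columns=[target]), obs[target]
    X_mis = mis.drop(columns=[target])
    imputations = []
    for _ in range(m):
        boot = rng.integers(0, len(obs), len(obs))           # bootstrap resample of observed rows
        model = xgb.XGBRegressor(
            n_estimators=300, learning_rate=0.05,
            reg_lambda=1.0, reg_alpha=0.5,                   # L2/L1 regularization
            subsample=0.8, colsample_bytree=0.8,
            random_state=int(rng.integers(0, 10**6)),
        )
        model.fit(X_obs.iloc[boot], y_obs.iloc[boot])
        filled = df[target].copy()
        filled.loc[mis.index] = model.predict(X_mis)
        imputations.append(filled)
    return imputations  # the spread across the m copies reflects imputation uncertainty
```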
2. Wang, Jianwei, Ying Zhang, Kai Wang, Xuemin Lin and Wenjie Zhang. "Missing Data Imputation with Uncertainty-Driven Network". Proceedings of the ACM on Management of Data 2, no. 3 (29.05.2024): 1–25. http://dx.doi.org/10.1145/3654920.

Abstract:
We study the problem of missing data imputation, which is a fundamental task in the area of data quality that aims to impute the missing data to achieve the completeness of datasets. Though the recent distribution-modeling-based techniques (e.g., distribution generation and distribution matching) can achieve state-of-the-art performance in terms of imputation accuracy, we notice that (1) they deploy a sophisticated deep learning model that tends to be overfitting for missing data imputation; (2) they directly rely on a global data distribution while overlooking the local information. Driven by the inherent variability in both missing data and missing mechanisms, in this paper, we explore the uncertain nature of this task and aim to address the limitations of existing works by proposing an uNcertainty-driven netwOrk for Missing data Imputation, termed NOMI. NOMI has three key components, i.e., the retrieval module, the neural network Gaussian process imputator (NNGPI) and the uncertainty-based calibration module. NOMI runs these components sequentially and in an iterative manner to achieve a better imputation performance. Specifically, in the retrieval module, NOMI retrieves local neighbors of the incomplete data samples based on the pre-defined similarity metric. Subsequently, we design NNGPI that merges the advantages of both the Gaussian Process and the universal approximation capacity of neural networks. NNGPI models the uncertainty by learning the posterior distribution over the data to impute missing values while alleviating the overfitting issue. Moreover, we further propose an uncertainty-based calibration module that utilizes the uncertainty of the imputator on its prediction to help the retrieval module obtain more reliable local information, thereby further enhancing the imputation performance. We also demonstrate that our NOMI can be reformulated as an instance of the well-known Expectation Maximization (EM) algorithm, highlighting the strong theoretical foundation of our proposed methods. Extensive experiments are conducted over 12 real-world datasets. The results demonstrate the excellent performance of NOMI in terms of both accuracy and efficiency.
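NOMI couples a neural-network Gaussian-process imputator with uncertainty-based calibration; the toy sketch below is not the paper's model and only illustrates the surrounding loop the abstract describes: retrieve neighbours of an incomplete sample, impute from them, and turn the local variance into a confidence weight that guides the next retrieval round. The initial 0.1 confidence and the k and iteration counts are arbitrary.

```python
# Toy retrieve/impute/reweight loop (illustration of the workflow, not NOMI itself).
import numpy as np

def retrieve_impute(X, k=5, n_iter=3):
    X = X.astype(float).copy()
    mask = np.isnan(X)
    X_imp = np.where(mask, np.nanmean(X, axis=0), X)       # crude initial fill
    conf = np.where(mask, 0.1, 1.0)                        # confidence per entry
    for _ in range(n_iter):
        for i in range(X.shape[0]):
            if not mask[i].any():
                continue
            w = conf * conf[i]                              # trust confident entries more
            d = np.sqrt((w * (X_imp - X_imp[i]) ** 2).sum(axis=1))
            d[i] = np.inf
            nb = np.argsort(d)[:k]                          # "retrieval module"
            est = X_imp[nb].mean(axis=0)
            var = X_imp[nb].var(axis=0) + 1e-9
            X_imp[i, mask[i]] = est[mask[i]]
            conf[i, mask[i]] = 1.0 / (1.0 + var[mask[i]])   # uncertainty-based weight
    return X_imp, conf
```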
3. Elimam, Rayane, Nicolas Sutton-Charani, Stéphane Perrey and Jacky Montmain. "Uncertain imputation for time-series forecasting: Application to COVID-19 daily mortality prediction". PLOS Digital Health 1, no. 10 (25.10.2022): e0000115. http://dx.doi.org/10.1371/journal.pdig.0000115.

Abstract:
The object of this study is to put forward uncertainty modeling associated with missing time series data imputation in a predictive context. We propose three imputation methods associated with uncertainty modeling. These methods are evaluated on a COVID-19 dataset out of which some values have been randomly removed. The dataset contains the numbers of daily COVID-19 confirmed diagnoses (“new cases”) and daily deaths (“new deaths”) recorded since the start of the pandemic up to July 2021. The considered task is to predict the number of new deaths 7 days in advance. The more values are missing, the higher the imputation impact is on the predictive performances. The Evidential K-Nearest Neighbors (EKNN) algorithm is used for its ability to take into account labels uncertainty. Experiments are provided to measure the benefits of the label uncertainty models. Results show the positive impact of uncertainty models on imputation performances, especially in a noisy context where the number of missing values is high.
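The paper's pipeline relies on Evidential K-Nearest Neighbors and belief functions; as a much simpler stand-in for the general recipe (impute the gaps, remember which days were imputed, and down-weight them when fitting the 7-day-ahead predictor), one could do something like the sketch below. The column names, the lag window, and the 0.3 weight are arbitrary choices, not values from the paper.

```python
# Simplified stand-in for uncertainty-aware imputation before 7-day-ahead forecasting.
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge

def seven_day_forecaster(df: pd.DataFrame):
    """df has daily columns 'new_cases' and 'new_deaths' with possible NaN gaps."""
    was_missing = df["new_cases"].isna() | df["new_deaths"].isna()
    filled = df.interpolate(limit_direction="both")          # simple gap imputation
    weight = np.where(was_missing, 0.3, 1.0)                 # distrust imputed days
    X = pd.concat({f"lag{l}": filled.shift(l) for l in range(0, 14)}, axis=1)
    y = filled["new_deaths"].shift(-7)                        # 7-day-ahead target
    ok = X.notna().all(axis=1) & y.notna()
    model = Ridge(alpha=1.0)
    model.fit(X[ok], y[ok], sample_weight=weight[ok])
    return model
```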
4. Liang, Pei, Junhua Hu, Yongmei Liu and Xiaohong Chen. "Public resources allocation using an uncertain cooperative game among vulnerable groups". Kybernetes 48, no. 8 (2.09.2019): 1606–25. http://dx.doi.org/10.1108/k-03-2018-0146.

Abstract:
Purpose: This paper aims to solve the problem of public resource allocation among vulnerable groups by proposing a new method called the uncertain α-coordination value, based on an uncertain cooperative game. Design/methodology/approach: First, explicit forms of the uncertain Shapley value with Choquet integral form and the uncertain centre-of-gravity of imputation-set (CIS) value are defined separately on the basis of uncertainty theory and cooperative game. Then, a convex combination of the two values above, called the uncertain α-coordination value, is used as the best solution. This study proves that the proposed methods meet the basic properties of cooperative game. Findings: The uncertain α-coordination value is used to solve a public medical resource allocation problem with fuzzy coalitions and uncertain payoffs. Compared with other methods, the α-coordination value can solve such problems effectively because it balances the worries of vulnerable groups' further development and group fairness. Originality/value: In this paper, an extension of the classical cooperative game called the uncertain cooperative game is proposed, in which players choose any level of participation in a game and relate uncertainty with the value of the game. A new function called the uncertain α-coordination value is proposed to allocate public resources amongst vulnerable groups in an uncertain environment, a topic that has not been explored yet. The definitions of the uncertain Shapley value with Choquet integral form and the uncertain CIS value are proposed separately to establish the uncertain α-coordination value.
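For readers unfamiliar with the two building blocks, the sketch below computes the classical (crisp) Shapley value and CIS value for a toy three-player game and blends them with a coefficient α; the uncertain-payoff and Choquet-integral machinery of the paper is not reproduced here, and the example game is invented.

```python
# Crisp-game illustration of the alpha-coordination idea: a convex combination
# of the Shapley value and the CIS (centre-of-gravity of the imputation set) value.
from itertools import combinations
from math import factorial

def shapley(v, players):
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phi[i] = total
    return phi

def cis(v, players):
    n = len(players)
    surplus = v(frozenset(players)) - sum(v(frozenset({i})) for i in players)
    return {i: v(frozenset({i})) + surplus / n for i in players}

def alpha_coordination(v, players, alpha=0.5):
    sh, ci = shapley(v, players), cis(v, players)
    return {i: alpha * sh[i] + (1 - alpha) * ci[i] for i in players}

# toy symmetric 3-player game: coalition value depends only on coalition size
v = lambda S: {0: 0, 1: 1, 2: 3, 3: 6}[len(S)]
print(alpha_coordination(v, [1, 2, 3], alpha=0.4))   # every player receives 2.0 here
```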
5. Bleidorn, Michel Trarbach, Wanderson de Paula Pinto, Isamara Maria Schmidt, Antonio Sergio Ferreira Mendonça and José Antonio Tosta dos Reis. "Methodological approaches for imputing missing data into monthly flows series". Ambiente e Agua - An Interdisciplinary Journal of Applied Science 17, no. 2 (5.04.2022): 1–27. http://dx.doi.org/10.4136/ambi-agua.2795.

Abstract:
Missing data is one of the main difficulties in working with fluviometric records. Database gaps may result from problems with fluviometric station components, monitoring interruptions and lack of observers. Incomplete series analysis generates uncertain results, negatively impacting water resources management. Thus, proper missing data consideration is very important to ensure better information quality. This work aims to analyze, comparatively, missing data imputation methodologies in monthly river-flow time series, considering, as a case study, the Doce River, located in Southeast Brazil. Missing data were simulated in 5%, 10%, 15%, 25% and 40% proportions following a random distribution pattern, ignoring the missing data generation mechanisms. Ten missing data imputation methodologies were used: arithmetic mean, median, simple and multiple linear regression, regional weighting, spline and Stineman interpolation, Kalman smoothing, multiple imputation and maximum likelihood. Their performances were compared through bias, root mean square error, absolute mean percentage error, determination coefficient and concordance index. Results indicate that for 5% missing data, any methodology for imputing can be considered, recommending caution for arithmetic mean method application. However, as the missing data proportion increases, it is recommended to use multiple imputation and maximum likelihood methodologies when there are support stations for imputation, and the Stineman interpolation and Kalman smoothing methods when only the studied series is available. Keywords: Doce river, imputation, missing data.
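A stripped-down version of such a comparison can be scripted directly: remove the stated fractions of a complete series at random, impute with a few baseline methods, and score the imputed values against the withheld truth. The methods and metrics below are only a small subset of those used in the paper, and strictly positive flows are assumed so that the percentage error is defined.

```python
# Illustrative imputation benchmark under simulated 5-40% missingness.
import numpy as np
import pandas as pd

def benchmark(series: pd.Series, fractions=(0.05, 0.10, 0.15, 0.25, 0.40), seed=1):
    rng = np.random.default_rng(seed)
    methods = {
        "mean":   lambda s: s.fillna(s.mean()),
        "median": lambda s: s.fillna(s.median()),
        "linear": lambda s: s.interpolate(limit_direction="both"),
    }
    rows = []
    for frac in fractions:
        miss = rng.choice(len(series), size=int(frac * len(series)), replace=False)
        gapped = series.copy()
        gapped.iloc[miss] = np.nan
        for name, impute in methods.items():
            est = impute(gapped).iloc[miss]
            true = series.iloc[miss]
            rmse = float(np.sqrt(((est - true) ** 2).mean()))
            mape = float((abs(est - true) / abs(true)).mean() * 100)
            rows.append({"missing": frac, "method": name, "RMSE": rmse, "MAPE_%": mape})
    return pd.DataFrame(rows)
```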
6. Gromova, Ekaterina, Anastasiya Malakhova and Arsen Palestini. "Payoff Distribution in a Multi-Company Extraction Game with Uncertain Duration". Mathematics 6, no. 9 (11.09.2018): 165. http://dx.doi.org/10.3390/math6090165.

Abstract:
A nonrenewable resource extraction game model is analyzed in a differential game theory framework with random duration. If the cumulative distribution function (c.d.f.) of the final time is discontinuous, the related subgames are differentiated based on the position of the initial instant with respect to the jump. We investigate properties of optimal trajectories and of imputation distribution procedures if the game is played cooperatively.
7. Lee, Jung Yeon, Myeong-Kyu Kim and Wonkuk Kim. "Robust Linear Trend Test for Low-Coverage Next-Generation Sequence Data Controlling for Covariates". Mathematics 8, no. 2 (8.02.2020): 217. http://dx.doi.org/10.3390/math8020217.

Abstract:
Low-coverage next-generation sequencing experiments assisted by statistical methods are popular in genetic association studies. Next-generation sequencing experiments produce genotype data that include allele read counts and read depths. For low sequencing depths, the genotypes tend to be highly uncertain; therefore, the uncertain genotypes are usually removed or imputed before performing a statistical analysis. This may result in an inflated type I error rate and a loss of statistical power. In this paper, we propose a mixture-based penalized score association test adjusting for non-genetic covariates. The proposed score test statistic is based on a sandwich variance estimator so that it is robust under model misspecification between the covariates and the latent genotypes. The proposed method takes advantage of not requiring either external imputation or elimination of uncertain genotypes. The results of our simulation study show that the type I error rates are well controlled and the proposed association test has reasonable statistical power. As an illustration, we apply our statistic to pharmacogenomics data for drug responsiveness among 400 epilepsy patients.
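The penalized score test itself does not compress into a few lines, but the upstream idea of retaining genotype uncertainty can be illustrated: compute posterior genotype probabilities from allele read counts under a simple binomial read model with a Hardy-Weinberg prior, and carry the expected dosage forward instead of a hard call. The allele frequency and sequencing error rate below are invented for the example, and this is not the authors' test statistic.

```python
# Posterior genotype probabilities and expected dosage from low-coverage read counts.
import numpy as np
from scipy.stats import binom

def genotype_posteriors(alt_reads, depth, maf=0.3, error=0.01):
    """Posterior P(genotype g | reads) for g = 0, 1, 2 copies of the alternate allele."""
    p_alt = np.array([error, 0.5, 1.0 - error])              # expected alt-read fraction per genotype
    prior = np.array([(1 - maf)**2, 2 * maf * (1 - maf), maf**2])  # Hardy-Weinberg prior
    lik = binom.pmf(alt_reads, depth, p_alt)                  # read-count likelihoods
    post = prior * lik
    return post / post.sum()

post = genotype_posteriors(alt_reads=3, depth=4)
dosage = float((post * np.array([0, 1, 2])).sum())            # expected alt-allele count
print(post, dosage)   # the soft dosage, not a hard call, would enter the trend test
```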
8. Griffin, James M., Jino Mathew, Antal Gasparics, Gábor Vértesy, Inge Uytdenhouwen, Rachid Chaouadi and Michael E. Fitzpatrick. "Machine-Learning Approach to Determine Surface Quality on a Reactor Pressure Vessel (RPV) Steel". Applied Sciences 12, no. 8 (7.04.2022): 3721. http://dx.doi.org/10.3390/app12083721.

Abstract:
Surface quality measures such as roughness, and especially its uncertain character, affect most magnetic non-destructive testing methods and limit their performance in terms of an achievable signal-to-noise ratio and reliability. This paper is primarily focused on an experimental study targeting nuclear reactor materials manufactured from the milling process with various machining parameters to produce varying surface quality conditions to mimic the varying material surface qualities of in-field conditions. By energising a local area electromagnetically, a receiver coil is used to obtain the emitted Barkhausen noise, from which the condition of the material surface can be inspected. Investigations were carried out with the support of machine-learning algorithms, such as Neural Networks (NN) and Classification and Regression Trees (CART), to identify the differences in surface quality. Another challenge often faced is undertaking an analysis with limited experimental data. Other non-destructive methods such as Magnetic Adaptive Testing (MAT) were used to provide data imputation for missing data using other intelligent algorithms. For data reinforcement, data augmentation was used. With more data, the problem of 'the curse of data dimensionality' is addressed. The study demonstrated how both data imputation and augmentation can improve measurement datasets.
9

FLÅM, S. D., i Y. M. ERMOLIEV. "Investment, uncertainty, and production games". Environment and Development Economics 14, nr 1 (luty 2009): 51–66. http://dx.doi.org/10.1017/s1355770x08004579.

Abstract:
This paper explores a few cooperative aspects of investments in uncertain, real options. By hypothesis some production commitments, factors, or quotas are transferable. Cases in point include energy supply, emission of pollutants, and harvest of renewable resources. Of particular interest are technologies or projects that provide anti-correlated returns. Any such project stabilizes the aggregate proceeds. Therefore, given widespread risk aversion, a project of this sort merits a bonus. The setting is formalized as a two-stage, stochastic, production game. Absent economies of scale, such games are quite tractable in analysis, computation, and realization. A core imputation comes in terms of shadow prices that equilibrate competitive, endogenous markets. Such prices emerge as optimal dual solutions to coordinated production programs, featuring pooled commitments, or resources. Alternatively, the prices could result from repeated exchange.
10. Le, H., S. Batterman, K. Dombrowski, R. Wahl, J. Wirth, E. Wasilevich and M. Depa. "A Comparison of Multiple Imputation and Optimal Estimation for Missing and Uncertain Urban Air Toxics Data". Epidemiology 17, Suppl (November 2006): S242. http://dx.doi.org/10.1097/00001648-200611001-00624.


Doctoral dissertations on the topic "Uncertain imputation"

1. Elimam, Rayane. "Apprentissage automatique pour la prédiction de performances : du sport à la santé". Electronic Thesis or Diss., IMT Mines Alès, 2024. https://theses.hal.science/tel-04805708.

Abstract:
Numerous performance indicators exist in sport and health (recovery, rehabilitation, etc.) that characterize different sporting and therapeutic criteria. These different types of performance generally depend on the workload (or rehabilitation load) undergone by athletes or patients. In recent years, many applications of machine learning to sport and health have been proposed. Predicting, or even explaining, performance from workload data could help optimize training or therapy. In this context, the management of missing data and the articulation between load types and the various performance indicators considered are the two issues addressed in this manuscript through four applications. The first two concern the management of missing data through uncertain modeling performed on (i) highly incomplete professional soccer data and (ii) artificially noised COVID-19 data. For these two contributions, we combined credibilistic uncertainty models, i.e. models based on the theory of belief functions, with various imputation methods adapted to the chronological context of training sessions/matches and therapies. Once the missing data had been imputed in the form of belief functions, the credibilistic k-nearest-neighbor model adapted to regression was used to take advantage of the uncertainty models associated with the missing data. In the context of predicting performance in handball matches as a function of past workloads, multi-output regression models are used to simultaneously predict seven athletic and technical performance indicators. The final application concerns the rehabilitation of post-stroke patients who have partially lost the use of one arm. In order to detect patients not responding to therapy, the problem of predicting different rehabilitation criteria made it possible to reapply the various contributions of this manuscript (credibilistic imputation of missing data and multi-output regression for the simultaneous prediction of different performance indicators).
2. Bodine, Andrew James. "The Effect of Item Parameter Uncertainty on Test Reliability". The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1343316705.

3. Huang, Shiping. "Exploratory visualization of data with variable quality". Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-01115-225546/.


Books on the topic "Uncertain imputation"

1. Analysis of Integrated Data. Taylor & Francis Group, 2019.

2. Chambers, Raymond L., and Li-Chun Zhang. Analysis of Integrated Data. Taylor & Francis Group, 2019.

3. Chambers, Raymond L., and Lichun Zhang. Analysis of Integrated Data. Taylor & Francis Group, 2021.


Book chapters on the topic "Uncertain imputation"

1. Little, Roderick J. A., and Donald B. Rubin. "Estimation of Imputation Uncertainty". In Statistical Analysis with Missing Data, 75–93. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781119013563.ch5.

2. Ranvier, Thomas, Haytham Elghazel, Emmanuel Coquery and Khalid Benabdeslem. "Accounting for Imputation Uncertainty During Neural Network Training". In Big Data Analytics and Knowledge Discovery, 265–80. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-39831-5_24.

3. Shi, Xingjie, Can Yang and Jin Liu. "Using Collaborative Mixed Models to Account for Imputation Uncertainty in Transcriptome-Wide Association Studies". In Methods in Molecular Biology, 93–103. New York, NY: Springer US, 2021. http://dx.doi.org/10.1007/978-1-0716-0947-7_7.

4. Erdogan Erten, Gamze, Camilla Zacche da Silva and Jeff Boisvert. "Decorrelation and Imputation Methods for Multivariate Modeling". In Applied Spatiotemporal Data Analytics and Machine Learning [Working Title]. IntechOpen, 2024. http://dx.doi.org/10.5772/intechopen.115069.

Abstract:
In most mining projects, multivariate modeling of regionalized variables has a critical impact on the final model due to complex multivariate relationships between correlated variables. In geostatistical modeling, multivariate transformations are commonly employed to model complex data relationships. This decorrelates or makes the variables independent, which enables the generation of independent models for each variable while maintaining the ability to restore multivariate relationships through a back-transformation. There are a myriad of transformation methods; however, this chapter discusses the methods most applied in geostatistical procedures. These include principal component analysis (PCA), minimum/maximum autocorrelation factors (MAF), stepwise conditional transform (SCT), and projection pursuit multivariate transform (PPMT). All these transforms require equally sampled data. In the case of unequal sampling, it is common practice to either exclude the incomplete samples or impute the missing values. Data imputation is recommended in many scientific fields, as removing incomplete samples usually removes valuable information from modeling workflows. Three common imputation methods are discussed in this chapter: single imputation (SI), maximum likelihood estimation (MLE), and multiple imputation (MI). Bayesian updating (BU) is also discussed as an adaptation of MI to geostatistical analysis. MI methods are preferred in geostatistical analysis because they reproduce the variability of variables and reflect the uncertainty of missing values.
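The decorrelate, model independently, back-transform pattern described here can be shown in a few lines with PCA (the simplest of the listed transforms); the decorrelated `scores` columns could be imputed or simulated independently before the back-transformation restores the original correlations. This is a generic sketch with synthetic data, not code from the chapter.

```python
# Decorrelation via PCA and exact back-transformation.
import numpy as np

def pca_decorrelate(X):
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)            # eigen-decomposition of the covariance
    scores = (X - mean) @ vecs                  # decorrelated factors
    return scores, vecs, mean

def pca_back_transform(scores, vecs, mean):
    return scores @ vecs.T + mean               # restores the original correlations

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=500)
scores, vecs, mean = pca_decorrelate(X)
# ... each column of `scores` could now be modelled or imputed independently ...
X_back = pca_back_transform(scores, vecs, mean)
assert np.allclose(X, X_back)                   # lossless round trip
```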
5. Lajeunesse, Marc J. "Recovering Missing or Partial Data from Studies: a Survey of Conversions and Imputations for Meta-analysis". In Handbook of Meta-analysis in Ecology and Evolution. Princeton University Press, 2013. http://dx.doi.org/10.23943/princeton/9780691137285.003.0013.

Abstract:
This chapter discusses possible solutions for dealing with partial information and missing data from published studies. These solutions can improve the amount of information extracted from individual studies, and increase the representation of data for meta-analysis. It begins with a description of the mechanisms that generate missing information within studies, followed by a discussion of how gaps of information can influence meta-analysis and the way studies are quantitatively reviewed. It then suggests some practical solutions to recovering missing statistics from published studies. These include statistical acrobatics to convert available information (e.g., a t-test) into statistics that are more useful for computing effect sizes, as well as heuristic approaches that impute (fill gaps in) missing information when pooling effect sizes. Finally, the chapter discusses multiple-imputation methods that account for the uncertainty associated with filling gaps of information when performing meta-analysis.
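A typical example of such a conversion is recovering a standardized mean difference from a reported t statistic when group means and standard deviations are not published; a minimal version, using one common variance approximation, looks like this (the input values are invented).

```python
# Cohen's d (with Hedges' small-sample correction) from an independent-samples t statistic.
import math

def d_from_t(t, n1, n2):
    d = t * math.sqrt(1 / n1 + 1 / n2)               # Cohen's d from the t statistic
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)                         # Hedges' correction factor
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))  # one common variance approximation
    return j * d, j**2 * var_d                       # Hedges' g and its variance

print(d_from_t(t=2.4, n1=12, n2=14))
```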

Conference papers on the topic "Uncertain imputation"

1. Mai, Lihao, Haoran Li and Yang Weng. "Data Imputation with Uncertainty Using Stochastic Physics-Informed Learning". In 2024 IEEE Power & Energy Society General Meeting (PESGM), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/pesgm51994.2024.10688419.

2. Zhang, Shunyang, Senzhang Wang, Xianzhen Tan, Renzhi Wang, Ruochen Liu, Jian Zhang and Jianxin Wang. "SaSDim: Self-Adaptive Noise Scaling Diffusion Model for Spatial Time Series Imputation". In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/283.

Abstract:
Spatial time series imputation is of great importance to various real-world applications. As the state-of-the-art generative models, diffusion models (e.g. CSDI) have outperformed statistical and autoregressive based models in time series imputation. However, diffusion models may introduce unstable noise owing to the inherent uncertainty in sampling, leading to the generated noise deviating from the intended Gaussian distribution. Consequently, the imputed data may deviate from the real data. To this end, we propose a Self-adaptive noise Scaling Diffusion Model named SaSDim for spatial time series imputation. Specifically, we introduce a novel Probabilistic High-Order SDE Solver Module to scale the noise following the standard Gaussian distribution. The noise scaling operation helps the noise prediction module of the diffusion model to more accurately estimate the variance of noise. To effectively learn the spatial and temporal features, a Spatial guided Global Convolution Module (SgGConv) for multi-periodic temporal dependencies learning with the Fast Fourier Transformation and dynamic spatial dependencies learning with dynamic graph convolution is also proposed. Extensive experiments conducted on three real-world spatial time series datasets verify the effectiveness of SaSDim.
3. Azarkhail, M., and P. Woytowitz. "Uncertainty management in model-based imputation for missing data". In 2013 Annual Reliability and Maintainability Symposium (RAMS). IEEE, 2013. http://dx.doi.org/10.1109/rams.2013.6517697.

4. Zhao, Qilong, Yifei Zhang, Mengdan Zhu, Siyi Gu, Yuyang Gao, Xiaofeng Yang and Liang Zhao. "DUE: Dynamic Uncertainty-Aware Explanation Supervision via 3D Imputation". In KDD '24: The 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 6335–43. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3637528.3671641.

5. Jun, Eunji, Ahmad Wisnu Mulyadi and Heung-Il Suk. "Stochastic Imputation and Uncertainty-Aware Attention to EHR for Mortality Prediction". In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852132.

6. Saeidi, Rahim, and Paavo Alku. "Accounting for uncertainty of i-vectors in speaker recognition using uncertainty propagation and modified imputation". In Interspeech 2015. ISCA: ISCA, 2015. http://dx.doi.org/10.21437/interspeech.2015-703.

7. Hwang, Sunghyun, and Dong-Kyu Chae. "An Uncertainty-Aware Imputation Framework for Alleviating the Sparsity Problem in Collaborative Filtering". In CIKM '22: The 31st ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3511808.3557236.

8. Andrews, Mark, Gavin Jones, Brian Leyde, Lie Xiong, Max Xu and Peter Chien. "A Statistical Imputation Method for Handling Missing Values in Generalized Polynomial Chaos Expansions". In ASME Turbo Expo 2019: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/gt2019-91035.

Abstract:
Generalized Polynomial Chaos Expansion (gPCE) is widely used in uncertainty quantification and sensitivity analysis for applications in the aerospace industry. gPCE uses the spectrum projection to fit a polynomial model, the gPCE model, to a sparse grid Design of Experiments (DOEs). The gPCE model can be used to make predictions, analytically determine uncertainties, and calculate sensitivity indices. However, the model's accuracy is very dependent on having complete DOEs. When a sampling point is missing from the sparse grid DOE, this severely impacts the accuracy of the gPCE analysis and often necessitates running a new DOE. Missing data points are a common occurrence in engineering testing and simulation. This problem complicates the use of the gPCE analysis. In this paper, we present a statistical imputation method for addressing this missing data problem. This methodology allows gPCE modeling to handle missing values in the sparse grid DOE. Using a series of numerical results, the study demonstrates the convergence characteristics of the methodology with respect to reaching steady state values for the missing points. The article concludes with a discussion of the convergence rate, advantages, and feasibility of using the proposed methodology.
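The iterative flavour of such an approach can be illustrated on a one-dimensional stand-in: seed the missing response with a neighbour average, fit the polynomial expansion, re-predict the missing point, and repeat until it stops changing. This is only a sketch of the general idea, not the paper's algorithm; it uses an ordinary least-squares Legendre fit rather than a true sparse-grid construction, and the test function and design are invented.

```python
# Iterative imputation of one missing response in a small polynomial-expansion fit.
import numpy as np
from numpy.polynomial import legendre as L

def impute_missing_doe_point(x, y, missing_idx, deg=4, tol=1e-8, max_iter=100):
    y = y.astype(float).copy()
    y[missing_idx] = np.nanmean(np.delete(y, missing_idx))   # initial guess from the other runs
    for _ in range(max_iter):
        coefs = L.legfit(x, y, deg)                           # fit the Legendre expansion
        new_val = L.legval(x[missing_idx], coefs)             # re-predict the missing point
        if abs(new_val - y[missing_idx]) < tol:               # steady state reached
            break
        y[missing_idx] = new_val
    return y, coefs

x = np.linspace(-1, 1, 9)                                     # small 1D design
y = np.exp(x) + 0.01 * np.sin(5 * x)
y_obs = y.copy(); y_obs[4] = np.nan                           # one lost run
y_imp, coefs = impute_missing_doe_point(x, y_obs, missing_idx=4)
```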
9. Moreira, Rafael Peralta, Thiago da Silva Piedade and Marcelo Victor Tomaz De Matos. "Credibility Assessment of Annular Casing Cement for P&A Campaigns: A Case Study in Campos Basin Offshore Brazil". In Offshore Technology Conference. OTC, 2023. http://dx.doi.org/10.4043/32625-ms.

Abstract:
The assessment of annulus cement barriers is critical in well Plug and Abandonment (P&A) planning and execution. For wells with 10 to 30+ producing years, the data from well construction may be unavailable, incomplete, or not fully compliant with current industry good cementing practices. This case study presents a methodology for assessing the qualification and credibility of annular hydraulic isolation, highlighting the challenges involved in the process. The engineering workflow starts with data mining from well construction regarding casing and cementing operations, drilling fluids, pumping schedule, wiper plug events, casing centralization, washer, spacer, and cement slurry design, in addition to anomalous events that occurred. A mapping of the cement quality is then performed in representative wells that have data available, using uniform criteria to establish key performance metrics. Finally, a novel statistical imputation methodology is performed to overcome missing data, followed by modeling and simulations and a credibility analysis resulting in the qualification degree of the annulus cement: qualified permanent barrier, unqualified, or failed. The methodology was applied to subsea wells in P&A campaigns in Campos Basin offshore Brazil, using this credibility and criticality assessment of the B-annulus hydraulic isolation as an input for the P&A design. It resulted in increased applicability of the Through Tubing scope (without the necessity to remove the production tubing) and eliminated the need for additional evaluation of the production casing cement through logging tools or pressure testing. Consequently, a 5-day average reduction in P&A intervention time was obtained. The analysis showed that the cementing strategies performed at the time of well construction, despite differing from current practices, provide sufficient cement quality in many scenarios. The studies conducted also show the correlation between cement evaluation logs and modeling of existing cement jobs with data imputation techniques to compensate for missing cement-job data. New methodologies and technologies for the assessment of annular isolation in wells to be plugged and abandoned are of relevant interest for the industry, and taking into consideration the uncertainties and field experience in the qualification process of existing barriers is a challenge. This paper provides insight on how uncertainty levels may be reduced to still provide quantitative results and allow the selection of an optimized P&A design.
10. Wang, Zepu, Dingyi Zhuang, Yankai Li, Jinhua Zhao, Peng Sun, Shenhao Wang and Yulin Hu. "ST-GIN: An Uncertainty Quantification Approach in Traffic Data Imputation with Spatio-Temporal Graph Attention and Bidirectional Recurrent United Neural Networks". In 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2023. http://dx.doi.org/10.1109/itsc57777.2023.10422526.
