Selection of scientific literature on the topic "Estimation scalable de l'incertitude"

Cite a source in APA, MLA, Chicago, Harvard, or another citation style

Select a type of source:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Estimation scalable de l'incertitude".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the item's metadata.

Journal articles on the topic "Estimation scalable de l'incertitude"

1

Perret, Christian, P. Marchand, Arnaud Belleville, Rémy Garcon, Damien Sevrez, Stéphanie Poligot-Pitsch, Rachel Puechberty, and Gwen Glaziou. "La variabilité en fonction du temps des relations hauteur débit. Sa prise en compte dans l'estimation des incertitudes des données hydrométriques par une méthode tabulée". La Houille Blanche, no. 4 (August 2018): 65–72. http://dx.doi.org/10.1051/lhb/2018043.

Full text of the source
Annotation:
The effort undertaken over several years by the hydrometry community to quantify the uncertainties associated with the production of discharge data deserves to find operational applications. The present study encourages the managers of hydrometric stations to make better use of the gaugings performed, by systematizing the practice of specifying the precision of the rating-curve model. The authors then propose a simplified approach for quantifying the uncertainty associated with a discharge value predicted by a rating curve, based on the standard deviation of the percentage deviations of the gaugings from the rating curve and on a tabulation as a function of the quantiles of observed discharges. It relies on a classical approach of identifying the sources of uncertainty and propagating them. This method makes it possible to estimate the average uncertainty observed across French stations for the median of observed discharges. The following formulation is proposed: for 45 to 55% of French stations, the most probable value of the uncertainty at the 95% confidence level for the 50% quantile of discharges observed in France is below 22%.
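The tabulated method summarized above lends itself to a short sketch. The following is a hypothetical illustration, not the authors' code (the function name, the binning choice, and the k = 2 expansion factor are my assumptions): it computes the standard deviation of the percentage deviations of gaugings from the rating curve, per observed-discharge quantile bin.

```python
import numpy as np

def tabulated_uncertainty(q_gauged, q_rated, n_bins=4):
    """Sketch of a tabulated uncertainty estimate for a rating curve.

    q_gauged: discharges measured by gaugings
    q_rated:  discharges predicted by the rating curve at the same stages
    Returns, per discharge-quantile bin, an expanded uncertainty
    (k = 2, ~95% confidence) as a percentage of discharge.
    """
    q_gauged = np.asarray(q_gauged, dtype=float)
    q_rated = np.asarray(q_rated, dtype=float)
    # Percentage deviation of each gauging from the rating curve
    dev_pct = 100.0 * (q_gauged - q_rated) / q_rated
    # Bin the gaugings by quantile of discharge
    edges = np.quantile(q_rated, np.linspace(0.0, 1.0, n_bins + 1))
    table = {}
    for i in range(n_bins):
        mask = (q_rated >= edges[i]) & (q_rated <= edges[i + 1])
        if mask.sum() >= 2:
            # Expanded uncertainty: 2 standard deviations (~95%)
            table[i] = 2.0 * dev_pct[mask].std(ddof=1)
    return table
```

With gaugings scattered around the curve with a 5% relative spread, each bin's entry comes out near 10%, i.e. a ±10% interval at the 95% level.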
APA, Harvard, Vancouver, ISO, and other citation styles
2

Jiang, Zhuqing, Likuo Wei, Ganmin Zeng, Shuwen Qi, Haiying Wang, Aidong Men, and Yun Zhou. "Bitrate Estimation for Spatial Scalable Videos". IEEE Transactions on Broadcasting 67, no. 2 (June 2021): 549–55. http://dx.doi.org/10.1109/tbc.2021.3064278.

3

Wang, Xianglu. "Gaussian graphical model estimation with measurement error". JUSTC 53, no. 11 (2023): 1105. http://dx.doi.org/10.52396/justc-2022-0108.

Annotation:
It is well known that regression methods designed for clean data will lead to erroneous results if directly applied to corrupted data. Despite the recent methodological and algorithmic advances in Gaussian graphical model estimation, how to achieve efficient and scalable estimation under contaminated covariates is unclear. Here a new methodology called convex conditioned innovative scalable efficient estimation (COCOISEE) for Gaussian graphical model under both additive and multiplicative measurement errors is developed. It combines the strengths of the innovative scalable efficient estimation in Gaussian graphical model and the nearest positive semi-definite matrix projection, thus enjoying stepwise convexity and scalability. Comprehensive theoretical guarantees are provided and the effectiveness of the proposed methodology is demonstrated through numerical studies.
4

Cicala, Marco, Egidio D'Amato, Immacolata Notaro, and Massimiliano Mattei. "Scalable Distributed State Estimation in UTM Context". Sensors 20, no. 9 (May 8, 2020): 2682. http://dx.doi.org/10.3390/s20092682.

Annotation:
This article proposes a novel approach to the Distributed State Estimation (DSE) problem for a set of cooperating UAVs equipped with heterogeneous on-board sensors, capable of exploiting certain characteristics typical of the UAS Traffic Management (UTM) context, such as high traffic density and the presence of limited-range, vehicle-to-vehicle communication devices. The proposed algorithm is based on a scalable decentralized Kalman filter derived from the Internodal Transformation Theory, enhanced on the basis of Consensus Theory. The general benefit of the proposed algorithm consists, on the one hand, in reducing the estimation problem to smaller local sub-problems, through a self-organization process of the local estimating nodes in response to the time-varying communication topology; and, on the other hand, in exploiting measurements carried out nearby in order to improve the accuracy of the local estimates. In the UTM context, this enables each vehicle to estimate both its own position and velocity and those of the neighboring vehicles, using both on-board measurements and information transmitted by neighboring vehicles. A numerical simulation in a simplified UTM scenario is presented in order to illustrate the salient aspects of the proposed algorithm.
5

Li, Cheng, Sanvesh Srivastava, and David B. Dunson. "Simple, scalable and accurate posterior interval estimation". Biometrika 104, no. 3 (June 25, 2017): 665–80. http://dx.doi.org/10.1093/biomet/asx033.

Annotation:
Standard posterior sampling algorithms, such as Markov chain Monte Carlo procedures, face major challenges in scaling up to massive datasets. We propose a simple and general posterior interval estimation algorithm to rapidly and accurately estimate quantiles of the posterior distributions for one-dimensional functionals. Our algorithm runs Markov chain Monte Carlo in parallel for subsets of the data, and then averages the quantiles estimated from each subset. We provide strong theoretical guarantees and show that the credible intervals from our algorithm asymptotically approximate those from the full posterior in the leading parametric order. Our algorithm has a better balance of accuracy and efficiency than its competitors across a variety of simulations and a real-data example.
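The quantile-averaging step described in this abstract can be sketched in a few lines. This is a toy illustration under assumed inputs (one array of posterior draws per data subset, e.g. from per-subset MCMC runs), not the authors' implementation:

```python
import numpy as np

def averaged_credible_interval(subset_draws, alpha=0.05):
    """Combine per-subset posterior draws of a one-dimensional functional
    by averaging the subset quantiles (toy version of quantile averaging).

    subset_draws: list of 1-D arrays, MCMC draws from each data subset
    Returns an approximate (1 - alpha) credible interval.
    """
    lo = np.mean([np.quantile(d, alpha / 2) for d in subset_draws])
    hi = np.mean([np.quantile(d, 1 - alpha / 2) for d in subset_draws])
    return lo, hi
```

The appeal is that the per-subset chains run embarrassingly in parallel; only two scalars per subset need to be combined at the end.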
6

Emerson, Joseph, Robert Alicki, and Karol Życzkowski. "Scalable noise estimation with random unitary operators". Journal of Optics B: Quantum and Semiclassical Optics 7, no. 10 (September 21, 2005): S347–S352. http://dx.doi.org/10.1088/1464-4266/7/10/021.

7

Chen, Dong, Hua You Su, Wen Mei, Li Xuan Wang, and Chun Yuan Zhang. "Scalable Parallel Motion Estimation on Muti-GPU System". Applied Mechanics and Materials 347-350 (August 2013): 3708–14. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.3708.

Annotation:
With NVIDIA's parallel computing architecture CUDA, using GPUs to speed up compute-intensive applications has become a research focus in recent years. In this paper, we propose a scalable method for multi-GPU systems to accelerate motion estimation, the most time-consuming process in video encoding. Based on an analysis of the data dependencies and the multi-GPU architecture, a parallel computing model and a communication model are designed. We tested our parallel algorithm and analyzed its performance on 10 standard video sequences at different resolutions using 4 NVIDIA GTX460 GPUs, and calculated the overall speedup. Our results show a speedup of 36.1 times using 1 GPU and of more than 120 times using 4 GPUs on 1920x1080 sequences. Further, our parallel algorithm demonstrates the potential of nearly linear speedup in the number of GPUs in the system.
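The reported figures do imply near-linear scaling, which is easy to check (the helper below is mine, not from the paper):

```python
def scaling_efficiency(speedup_1gpu, speedup_ngpu, n):
    """Fraction of ideal linear scaling achieved going from 1 to n GPUs."""
    return (speedup_ngpu / speedup_1gpu) / n

# 120x on 4 GPUs vs 36.1x on 1 GPU -> about 83% of ideal 4x scaling
eff = scaling_efficiency(36.1, 120.0, 4)
```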
8

Hassan, Beenish, Sobia Baig, and Saad Aslam. "On Scalability of FDD-Based Cell-Free Massive MIMO Framework". Sensors 23, no. 15 (August 7, 2023): 6991. http://dx.doi.org/10.3390/s23156991.

Annotation:
Cell-free massive multiple-input multiple-output (MIMO) systems have the potential to provide joint services, including joint initial access, efficient clustering of access points (APs), and pilot allocation to user equipment (UEs), over large coverage areas with reduced interference. In cell-free massive MIMO, a large coverage area corresponds to provisioning and maintaining scalable quality-of-service requirements for an arbitrarily large number of UEs. Research in cell-free massive MIMO mostly focuses on time-division duplex mode because channel reciprocity is available there, which helps avoid feedback overhead. However, the frequency-division duplex (FDD) protocol still dominates current wireless standards, and angle reciprocity helps reduce this overhead. Providing a scalable cell-free massive MIMO system in an FDD setting is also challenging, since the computational complexity of signal-processing tasks such as channel estimation, precoding/combining, and power allocation becomes prohibitively high as the number of UEs grows. In this work, we consider an FDD-based scalable cell-free network with angular reciprocity and a dynamic cooperation clustering approach. We propose a scalable scheme for our FDD cell-free network and perform a comparative analysis with respect to channel estimation, power allocation, and precoding/combining techniques. We present expressions for scalable spectral efficiency and angle-based precoding/combining schemes, and compare the overhead of conventional and scalable angle-based estimation and combining schemes. Simulations confirm that the proposed scalable cell-free network based on an FDD scheme outperforms the conventional matched-filtering scheme based on scalable precoding/combining. The angle-based LP-MMSE in the FDD cell-free network provides a 14.3% improvement in spectral efficiency and an 11.11% improvement in energy efficiency compared to the scalable MF scheme.
9

Ju, Cheng, Susan Gruber, Samuel D. Lendle, Antoine Chambaz, Jessica M. Franklin, Richard Wyss, Sebastian Schneeweiss, and Mark J. van der Laan. "Scalable collaborative targeted learning for high-dimensional data". Statistical Methods in Medical Research 28, no. 2 (September 22, 2017): 532–54. http://dx.doi.org/10.1177/0962280217729845.

Annotation:
Robust inference of a low-dimensional parameter in a large semi-parametric model relies on external estimators of infinite-dimensional features of the distribution of the data. Typically, only one of the latter is optimized for the sake of constructing a well-behaved estimator of the low-dimensional parameter of interest. Optimizing more than one of them to achieve a better bias-variance trade-off in the estimation of the parameter of interest is the core idea driving the general template of the collaborative targeted minimum loss-based estimation (C-TMLE) procedure. The original instantiation of the C-TMLE template can be presented as a greedy forward stepwise algorithm. It does not scale well when the number p of covariates increases drastically. This motivates the introduction of a novel instantiation of the C-TMLE template in which the covariates are pre-ordered. Its time complexity is [Formula: see text] as opposed to the original [Formula: see text], a remarkable gain. We propose two pre-ordering strategies and suggest a rule of thumb for developing other meaningful strategies. Because it is usually unclear a priori which pre-ordering strategy to choose, we also introduce another instantiation, the SL-C-TMLE algorithm, which enables a data-driven choice of the better pre-ordering strategy for the problem at hand. Its time complexity is [Formula: see text] as well. The computational burden and relative performance of these algorithms were compared in simulation studies involving fully synthetic data or partially synthetic data based on a large real-world electronic health database, and in analyses of three real, large electronic health databases. In all analyses involving electronic health databases, the greedy C-TMLE algorithm is unacceptably slow. Simulation studies indicate that our scalable C-TMLE and SL-C-TMLE algorithms work well. All C-TMLEs are publicly available in a Julia software package.
10

ADACHI, Ryosuke, Yuh YAMASHITA, and Koichi KOBAYASHI. "Distributed Estimation over Delayed Sensor Network with Scalable Communication". IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E102.A, no. 5 (May 1, 2019): 712–20. http://dx.doi.org/10.1587/transfun.e102.a.712.


Dissertations on the topic "Estimation scalable de l'incertitude"

1

Candela, Rosa. "Robust and scalable probabilistic machine learning methods with applications to the airline industry". Electronic thesis or dissertation, Sorbonne université, 2021. http://www.theses.fr/2021SORUS078.

Annotation:
In the airline industry, price prediction plays a significant role both for customers and for travel companies. The former are interested in knowing the price evolution in order to get the cheapest ticket; the latter want to offer attractive tour packages and maximize their revenue margin. In this work we introduce some practical approaches to help travelers deal with uncertainty in ticket price evolution, and we propose a data-driven framework to monitor the performance of time-series forecasting models. Stochastic gradient descent (SGD) is the workhorse optimization method in machine learning, and this is also true for distributed systems, which in recent years have increasingly been used for complex models trained on massive datasets. In asynchronous systems, workers can use stale versions of the parameters, which slows SGD convergence. In this thesis we fill a gap in the literature and study sparsification methods in asynchronous settings. We provide a concise convergence-rate analysis that accounts for the joint effects of sparsification and asynchrony, and show that sparsified SGD converges at the same rate as standard SGD. Recently, SGD has also played an important role as a way to perform approximate Bayesian inference: stochastic-gradient MCMC algorithms use SGD with a constant learning rate to obtain samples from the posterior distribution. Despite some promising results restricted to simple models, most existing works fall short of easily dealing with the complexity of the loss landscape of deep models. In this thesis we introduce a practical approach to posterior sampling that requires weaker assumptions than existing algorithms.
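Gradient sparsification of the kind analyzed in the thesis can be illustrated with a simple top-k operator. The sketch below is hypothetical (the top-k rule, the quadratic toy objective, and all names are my assumptions; it omits the asynchronous workers and any error-feedback machinery studied in practice):

```python
import numpy as np

def top_k_sparsify(grad, k):
    """Keep only the k largest-magnitude gradient components."""
    sparse = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]
    sparse[idx] = grad[idx]
    return sparse

def sparsified_sgd(x0, grad_fn, lr=0.1, k=2, steps=200):
    """SGD where each step applies only a sparsified gradient,
    as a worker would after transmitting a compressed update."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * top_k_sparsify(grad_fn(x), k)
    return x

# Toy quadratic objective f(x) = 0.5 * ||x - target||^2
target = np.array([1.0, -2.0, 3.0, 0.5])
x_star = sparsified_sgd(np.zeros(4), lambda x: x - target, k=2)
```

Even though only half of the coordinates are updated per step, the iterate still converges to the minimizer, consistent with the thesis's claim that sparsified SGD matches the rate of standard SGD up to constants.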
2

Rossi, Simone. "Improving Scalability and Inference in Probabilistic Deep Models". Electronic thesis or dissertation, Sorbonne université, 2022. http://www.theses.fr/2022SORUS042.

Annotation:
Throughout the last decade, deep learning has reached a sufficient level of maturity to become the preferred choice for solving machine-learning problems or aiding decision-making processes. At the same time, deep learning is generally not equipped to accurately quantify the uncertainty of its predictions, which makes these models less suitable for risk-critical applications. A possible solution is to employ a Bayesian formulation; however, while this offers an elegant treatment, it is analytically intractable and requires approximations. Despite the huge advances of the last few years, there is still a long way to go before these approaches are widely applicable. In this thesis, we address some of the challenges of modern Bayesian deep learning by proposing and studying solutions to improve the scalability and inference of these models. The first part of the thesis is dedicated to deep models where inference is carried out using variational inference (VI). Specifically, we study the role of the initialization of the variational parameters and show how careful initialization strategies can make VI deliver good performance even in large-scale models. In this part we also study the over-regularization effect of the variational objective on over-parameterized models. To tackle this problem, we propose a novel parameterization based on the Walsh-Hadamard transform; not only does this resolve the over-regularization effect of VI, it also allows us to model non-factorized posteriors while keeping time and space complexity under control. The second part of the thesis is dedicated to a study of the role of priors. While priors are an essential building block of Bayes' rule, picking good priors for deep learning models is generally hard. For this reason, we propose two different strategies, based (i) on the functional interpretation of neural networks and (ii) on a scalable procedure for performing model selection on the prior hyper-parameters, akin to maximizing the marginal likelihood. To conclude this part, we analyze a different kind of Bayesian model (the Gaussian process) and study the effect of placing a prior on all the hyper-parameters of these models, including the additional variables required by inducing-point approximations. We also show how it is possible to infer free-form posteriors on these variables, which conventionally would otherwise have been point-estimated.
3

Pinson, Pierre. "Estimation de l'incertitude des prédictions de production éolienne". PhD thesis, École Nationale Supérieure des Mines de Paris, 2006. http://pastel.archives-ouvertes.fr/pastel-00002187.

Annotation:
Wind energy is developing considerably in Europe. Yet the intermittent character of this renewable energy introduces difficulties for the management of the electricity grid. Moreover, with the deregulation of electricity markets, wind energy is penalized compared with controllable means of production. Forecasting wind power production at horizons of 2-3 days aids the integration of this energy. These forecasts consist of a single value per horizon, corresponding to the most probable production. This information is not sufficient to define optimal trading or management strategies, which is why our work concentrates on the uncertainty of wind power forecasts. The characteristics of this uncertainty are described through an analysis of the performance of some state-of-the-art models, and by highlighting the influence of certain variables on the moments of the distributions of forecast errors. We then describe a generic method for estimating prediction intervals: a nonparametric statistical method that uses concepts from fuzzy logic to integrate the expertise acquired about the characteristics of this uncertainty. By estimating several intervals at once, one obtains probabilistic forecasts in the form of a probability density of wind power production for each horizon. The method is evaluated in terms of reliability, sharpness, and resolution. In parallel, we explore the possibility of using ensemble forecasts to provide "error forecasts". These ensemble forecasts are obtained either by converting ensemble weather forecasts (provided by ECMWF or NCEP) or by applying a temporal-lag approach. We propose a definition of risk indices, which reflect the dispersion of the ensembles for one or several consecutive horizons. A probabilistic relation between these risk indices and the level of forecast error is established. In a final part, we consider the participation of wind energy in electricity markets in order to demonstrate the value of the "uncertainty" information. We explain how to define strategies for participating in electricity exchanges with deterministic or probabilistic forecasts. The benefits resulting from an estimation of the uncertainty of wind power forecasts are clearly demonstrated.
4

Lu, Ruijin. "Scalable Estimation and Testing for Complex, High-Dimensional Data". PhD dissertation, Virginia Tech, 2019. http://hdl.handle.net/10919/93223.

Annotation:
With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, etc. These data provide a rich source of information on disease development, cell evolvement, engineering systems, and many other scientific phenomena. To achieve a clearer understanding of the underlying mechanism, one needs a fast and reliable analytical approach to extract useful information from the wealth of data. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex data, powerful testing of functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a wavelet-based approximate Bayesian computation approach that is likelihood-free and computationally scalable. This approach will be applied to two applications: estimating mutation rates of a generalized birth-death process based on fluctuation experimental data and estimating the parameters of targets based on foliage echoes. The second part focuses on functional testing. We consider using multiple testing in basis-space via p-value guided compression. Our theoretical results demonstrate that, under regularity conditions, the Westfall-Young randomization test in basis space achieves strong control of family-wise error rate and asymptotic optimality. Furthermore, appropriate compression in basis space leads to improved power as compared to point-wise testing in data domain or basis-space testing without compression. The effectiveness of the proposed procedure is demonstrated through two applications: the detection of regions of spectral curves associated with pre-cancer using 1-dimensional fluorescence spectroscopy data and the detection of disease-related regions using 3-dimensional Alzheimer's Disease neuroimaging data. The third part focuses on analyzing data measured on the cortical surfaces of monkeys' brains during their early development, and subjects are measured on misaligned time markers. In this analysis, we examine the asymmetric patterns and increase/decrease trend in the monkeys' brains across time.
Doctor of Philosophy
With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, and biological measurements. These data provide a rich source of information on disease development, engineering systems, and many other scientific phenomena. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex biological and engineering data, powerful testing of high-dimensional functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a computation-based statistical approach that achieves efficient parameter estimation scalable to high-dimensional functional data. The second part focuses on developing a powerful testing method for functional data that can be used to detect important regions. We will show nice properties of our approach. The effectiveness of this testing approach will be demonstrated using two applications: the detection of regions of the spectrum that are related to pre-cancer using fluorescence spectroscopy data and the detection of disease-related regions using brain image data. The third part focuses on analyzing brain cortical thickness data, measured on the cortical surfaces of monkeys’ brains during early development. Subjects are measured on misaligned time-markers. By using functional data estimation and testing approach, we are able to: (1) identify asymmetric regions between their right and left brains across time, and (2) identify spatial regions on the cortical surface that reflect increase or decrease in cortical measurements over time.
5

Blier, Mylène. "Estimation temporelle avec interruption: les effets de localisation et de durée d'interruptions sont-ils sensibles à l'incertitude ?" Thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/26367/26367.pdf.

6

Blier, Mylène. "Estimation temporelle avec interruption : les effets de localisation et de durée d'interruption sont-ils sensibles à l'incertitude ?" Doctoral thesis, Université Laval, 2009. http://hdl.handle.net/20.500.11794/21201.

Annotation:
In studies of the production of time intervals with an interruption, two main effects are generally found when the location and the duration of the interruption vary: temporal productions are shorter (1) when the interruption arrives early in the time interval and (2) when the interruption lasts longer. The location effect is explained mainly by attentional sharing during the period preceding the interruption, but may also partly reflect the effect of preparatory processes preceding the interruption. The interruption-duration effect is explained by the preparation processes that take place during the interruption. One way to reduce preparatory effects is to increase the uncertainty about the moment of arrival of the signal to which one must react. The main objective of this doctoral dissertation is to test the sensitivity of the location and interruption-duration effects to uncertainty in three experiments. The first two experiments concern the production of time intervals with an interruption. The last experiment examines the effect of uncertainty on preparation in a reaction-time paradigm. In each experiment, two conditions are used: a group in which uncertainty is low and another in which it is high. In the first experiment, the location effect is stronger in the low-uncertainty group. No significant effect is found in the experiment in which the duration of the interruption is manipulated, which is explained mainly by the size of the interruption-duration effect. In the reaction-time task, the effect of the preparatory period is more pronounced in the low-uncertainty group than in the high-uncertainty group. The results lead to the conclusion that uncertainty affects attentional sharing in the location effect, but above all affects the preparation effect that would be involved in all three experiments. These results are explained by the fact that participants show better preparation when uncertainty is low.
7. Kang, Seong-Ryong. "Performance analysis and network path characterization for scalable internet streaming." Texas A&M University, 2008. http://hdl.handle.net/1969.1/85912.

Abstract:
Delivering high-quality video to end users over the best-effort Internet is a challenging task, since the quality of streaming video is highly subject to network conditions. A fundamental issue in this area is how real-time applications cope with network dynamics and adapt their operational behavior to offer a favorable streaming environment to end users. As an effort toward providing such a streaming environment, the first half of this work focuses on analyzing the performance of video streaming in best-effort networks and developing a new streaming framework that effectively exploits the unequal importance of video packets in rate control and achieves near-optimal performance for a given network packet loss rate. In addition, we study error-control methods such as FEC (Forward Error Correction), which is often used to protect multimedia data over lossy network channels. We investigate the impact of FEC on video quality and develop models that provide insight into how the inclusion of FEC affects streaming performance and its optimality and resilience characteristics under dynamically changing network conditions. In the second part of this thesis, we focus on measuring the bandwidth of network paths, which plays an important role in characterizing Internet paths and can benefit many applications, including multimedia streaming. We conduct a stochastic analysis of an end-to-end path and develop novel bandwidth-sampling techniques that can produce asymptotically accurate estimates of the capacity and available bandwidth of the path under non-trivial cross-traffic conditions. In addition, we conduct a comparative performance study of existing bandwidth estimation tools in non-simulated networks where various timing irregularities affect delay measurements. We find that when high-precision packet timing is not available due to hardware interrupt moderation, the majority of existing algorithms are not robust enough to measure end-to-end paths with high accuracy.
We overcome this problem by using signal de-noising techniques in bandwidth measurement. We also develop a new measurement tool, called PRC-MT, based on theoretical models, that simultaneously measures the capacity and available bandwidth of the tight link with asymptotic accuracy.
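The FEC trade-off this abstract refers to can be illustrated with a toy model (an assumption for illustration only, not one of the models developed in the thesis): under a systematic (k+r, k) block code with independent packet losses at rate p, a block is recoverable whenever at most r of its k+r packets are lost.

```python
from math import comb

def fec_block_recovery_prob(k, r, p):
    """Probability that a block of k data packets protected by r parity
    packets (systematic (k+r, k) erasure code) is recoverable, i.e. that
    at most r of the k+r packets are lost, assuming independent losses
    with per-packet loss rate p."""
    n = k + r
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(r + 1))
```

With no parity (r = 0) the block survives only if every packet arrives, while each added parity packet raises the recovery probability at the cost of extra rate, which is the trade-off the models in the thesis quantify.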
8. Rahmani, Mahmood. "Urban Travel Time Estimation from Sparse GPS Data: An Efficient and Scalable Approach." Doctoral thesis, KTH, Transportplanering, ekonomi och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-167798.

Abstract:
The use of GPS probes in traffic management is growing rapidly as the required data collection infrastructure is increasingly in place, with a significant number of mobile sensors moving around and covering expansive areas of the road network. Many travelers carry at least one device with a built-in GPS receiver, and vehicles are becoming more and more location-aware: vehicles in commercial fleets are now routinely equipped with GPS. Travel time is important information for various actors of a transport system, from city planning to day-to-day traffic management to individual travelers, all of whom make decisions based on, among other factors, average travel time or its variability. AVI (Automatic Vehicle Identification) systems have commonly been used for collecting point-to-point travel time data. Floating car data (FCD), i.e. timestamped locations of moving vehicles, have shown potential for travel time estimation. Advantages of FCD over stationary AVI systems are that they have no single point of failure and offer better network coverage; furthermore, the availability of opportunistic sensors, such as GPS, makes the data collection infrastructure relatively convenient to deploy. Currently, systems that collect FCD are designed to transmit data in a limited form and relatively infrequently because of the cost of data transmission. Reported locations are therefore far apart in time and space, for example with 2-minute gaps. For sparse FCD to be useful for transport applications, the corresponding probes must be matched to the underlying digital road network, which is challenging. This thesis makes the following contributions: (i) a map-matching and path inference algorithm, (ii) a method for route travel time estimation, (iii) a fixed-point approach for joint path inference and travel time estimation, and (iv) a method for fusion of FCD with data from automatic number plate recognition.
In all methods, scalability and overall computational efficiency are among the design requirements. Throughout the thesis, the methods are used to process FCD from 1500 taxis in Stockholm City. Prior to this work, the data had been ignored because of its low frequency and minimal information; the proposed methods prove that it can be processed and transformed into useful traffic information. Finally, the thesis implements the main components of an experimental ITS laboratory, called iMobility Lab, designed to explore GPS and other emerging data sources for traffic monitoring and control. Its processes are developed to be computationally efficient, scalable, and to support real-time applications with large data sets through a proposed distributed implementation.
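The sparse-FCD problem can be sketched with a deliberately naive simplification (for illustration only; the thesis develops a joint fixed-point method rather than this heuristic): once a probe's two consecutive waypoints have been matched to a path, the elapsed time between them can be split over the path's links in proportion to link length.

```python
def allocate_travel_time(link_lengths_m, elapsed_s):
    """Split the travel time observed between two sparse GPS waypoints
    over the links of the inferred path, proportionally to link length.
    This naive allocation ignores speed differences between links."""
    total = sum(link_lengths_m)
    return [elapsed_s * length / total for length in link_lengths_m]
```

For a two-link path of 100 m and 300 m traversed in 40 s, the split assigns 10 s and 30 s; more elaborate methods replace the proportionality assumption with estimated link speeds.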


9. Shriram, Alok. "Efficient techniques for end-to-end bandwidth estimation: performance evaluations and scalable deployment." Chapel Hill, N.C.: University of North Carolina at Chapel Hill, 2009. http://dc.lib.unc.edu/u?/etd,2248.

Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2009.
Title from electronic title page (viewed Jun. 26, 2009). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science." Discipline: Computer Science; Department/School: Computer Science.
10. Simsa, Jiri. "Systematic and Scalable Testing of Concurrent Programs." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/285.

Abstract:
The challenge this thesis addresses is to speed up the development of concurrent programs by increasing the efficiency with which concurrent programs can be tested and consequently evolved. The goal of this thesis is to generate methods and tools that help software engineers increase confidence in the correct operation of their programs. To achieve this goal, this thesis advocates testing of concurrent software using a systematic approach capable of enumerating possible executions of a concurrent program. The practicality of the systematic testing approach is demonstrated by presenting a novel software infrastructure that repeatedly executes a program test, controlling the order in which concurrent events happen so that different behaviors can be explored across different test executions. By doing so, systematic testing circumvents the limitations of traditional ad-hoc testing, which relies on chance to discover concurrency errors. However, the idea of systematic testing alone does not quite solve the problem of concurrent software testing. The combinatorial nature of the number of ways in which concurrent events of a program can execute causes an explosion of the number of possible interleavings of these events, a problem referred to as state space explosion. To address the state space explosion problem, this thesis studies techniques for quantifying the extent of state space explosion and explores several directions for mitigating state space explosion: parallel state space exploration, restricted runtime scheduling, and abstraction reduction. In the course of its research exploration, this thesis pushes the practical limits of systematic testing by orders of magnitude, scaling systematic testing to real-world programs of unprecedented complexity.
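The state space explosion the abstract describes is easy to see in a minimal sketch (illustrative only, not the thesis's testing infrastructure): enumerating every ordering of two threads' events while preserving each thread's program order already yields C(m+n, m) distinct executions.

```python
def interleavings(a, b):
    """Enumerate all interleavings of two threads' event sequences that
    preserve each thread's internal order -- the executions a systematic
    tester would have to explore for two independent threads."""
    if not a or not b:
        # Only one thread has events left: a single ordering remains.
        return [list(a) + list(b)]
    # Either thread may take the next step; recurse on both choices.
    return ([[a[0]] + rest for rest in interleavings(a[1:], b)]
            + [[b[0]] + rest for rest in interleavings(a, b[1:])])
```

Two threads with two events each already give C(4, 2) = 6 executions, and the count grows combinatorially with the number of events and threads, which is why the thesis studies parallel exploration, restricted scheduling, and abstraction reduction.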

Book chapters on the topic "Estimation scalable de l'incertitude"

1. Zhang, Ying-Jun Angela, Congmin Fan, and Xiaojun Yuan. "Scalable Channel Estimation." In SpringerBriefs in Electrical and Computer Engineering, 23–47. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15884-2_3.

2. Pelikan, Martin, Kumara Sastry, and David E. Goldberg. "Multiobjective Estimation of Distribution Algorithms." In Scalable Optimization via Probabilistic Modeling, 223–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/978-3-540-34954-9_10.

3. Izumi, Taisuke, and Hironobu Kanzaki. "Scalable Estimation of Network Average Degree." In Lecture Notes in Computer Science, 367–69. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-03089-0_32.

4. Sastry, Kumara, Martin Pelikan, and David E. Goldberg. "Efficiency Enhancement of Estimation of Distribution Algorithms." In Scalable Optimization via Probabilistic Modeling, 161–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/978-3-540-34954-9_7.

5. Ocenasek, Jiri, Erick Cantú-Paz, Martin Pelikan, and Josef Schwarz. "Design of Parallel Estimation of Distribution Algorithms." In Scalable Optimization via Probabilistic Modeling, 187–203. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/978-3-540-34954-9_8.

6. Chakrabarti, Indrajit, Kota Naga Srinivasarao Batta, and Sumit Kumar Chatterjee. "Introduction to Scalable Image and Video Coding." In Motion Estimation for Video Coding, 85–108. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-14376-7_7.

7. Bosman, Peter A. N., and Dirk Thierens. "Numerical Optimization with Real-Valued Estimation-of-Distribution Algorithms." In Scalable Optimization via Probabilistic Modeling, 91–120. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/978-3-540-34954-9_5.

8. Baglietto, P., M. Maresca, A. Migliaro, and M. Migliardi. "A VLSI scalable processor array for motion estimation." In Image Analysis and Processing, 127–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/3-540-60298-4_247.

9. Lee, Seongsoo. "Energy-Scalable Motion Estimation for Low-Power Multimedia Applications." In Interactive Multimedia on Next Generation Networks, 400–409. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-40012-7_33.

10. Corona, Julio Camejo, Hector Gonzalez, and Carlos Morell. "Scalable Generalized Multitarget Linear Regression With Output Dependence Estimation." In Progress in Artificial Intelligence and Pattern Recognition, 60–68. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89691-1_7.

Conference papers on the topic "Estimation scalable de l'incertitude"

1. Braspenning, Ralph A. C., Gerard de Haan, and Christian Hentschel. "Complexity scalable motion estimation." In Electronic Imaging 2002, edited by C. C. Jay Kuo. SPIE, 2002. http://dx.doi.org/10.1117/12.453085.

2. Zhai, Guangtao, Qian Chen, Xiaokang Yang, and Wenjun Zhang. "Scalable visual sensitivity profile estimation." In ICASSP 2008 - 2008 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2008. http://dx.doi.org/10.1109/icassp.2008.4517749.

3. Cohen, Edith, Daniel Delling, Fabian Fuchs, Andrew V. Goldberg, Moises Goldszmidt, and Renato F. Werneck. "Scalable similarity estimation in social networks." In the first ACM conference. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2512938.2512944.

4. Looks, Moshe. "Scalable estimation-of-distribution program evolution." In the 9th annual conference. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1276958.1277072.

5. Konieczny, Jacek, and Adam Luczak. "Motion estimation algorithm for scalable hardware implementation." In 2009 Picture Coding Symposium (PCS). IEEE, 2009. http://dx.doi.org/10.1109/pcs.2009.5167455.

6. Sun, Chuxiong, Rui Wang, Ruiying Li, Jiao Wu, and Xiaohui Hu. "Efficient and Scalable Exploration via Estimation-Error." In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852234.

7. Simko, Michal, Christian Mehlfuhrer, Martin Wrulich, and Markus Rupp. "Doubly dispersive channel estimation with scalable complexity." In 2010 International ITG Workshop on Smart Antennas (WSA 2010). IEEE, 2010. http://dx.doi.org/10.1109/wsa.2010.5456443.

8. Lengwehasatit, Krisda, Antonio Ortega, Andrea Basso, and Amy R. Reibman. "Novel computationally scalable algorithm for motion estimation." In Photonics West '98 Electronic Imaging, edited by Sarah A. Rajala and Majid Rabbani. SPIE, 1998. http://dx.doi.org/10.1117/12.298382.

9. Noshad, Morteza, and Alfred O. Hero. "Scalable Hash-Based Estimation of Divergence Measures." In 2018 Information Theory and Applications Workshop (ITA). IEEE, 2018. http://dx.doi.org/10.1109/ita.2018.8503092.

10. Noshad, Morteza, Yu Zeng, and Alfred O. Hero. "Scalable Mutual Information Estimation Using Dependence Graphs." In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. http://dx.doi.org/10.1109/icassp.2019.8683351.

Organizational reports on the topic "Estimation scalable de l'incertitude"

1. Hunter, Margaret, Jijo K. Mathew, Ed Cox, Matthew Blackwell, and Darcy M. Bullock. Estimation of Connected Vehicle Penetration Rate on Indiana Roadways. Purdue University, 2021. http://dx.doi.org/10.5703/1288284317343.

Abstract:
Over 400 billion passenger-vehicle trajectory waypoints are collected each month in the United States. This data creates many new opportunities for agencies to assess operational characteristics of roadways for more agile management of resources. This study compared traffic counts obtained from 24 Indiana Department of Transportation traffic count stations with counts derived from vehicle trajectories during the same periods. These stations were geographically distributed throughout Indiana, with 13 locations on interstates and 11 locations on state or US roads. A Wednesday and a Saturday in January, August, and September 2020 are analyzed. The results show that the analyzed interstates had an average penetration of 4.3% with a standard deviation of 1.0, while the non-interstate roads had an average penetration of 5.0% with a standard deviation of 1.36. These penetration levels suggest that connected vehicle data can provide a valuable data source for developing scalable roadway performance measures. Since all agencies currently have a highway monitoring system using fixed infrastructure, this paper concludes by recommending that agencies integrate a connected vehicle penetration monitoring program into their traditional highway count station program to monitor the growing penetration of connected cars and trucks.
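The penetration figures in this abstract are, in essence, per-station ratios of trajectory-derived counts to fixed-station counts, summarized by mean and standard deviation. A minimal sketch of that computation (with hypothetical numbers, not the study's data):

```python
from statistics import mean, stdev

def penetration_stats(trajectory_counts, station_counts):
    """Per-station connected-vehicle penetration (%) computed as
    trajectory-derived vehicle counts over ground-truth counts from
    fixed count stations, summarized by mean and sample std. dev."""
    rates = [100.0 * t / s for t, s in zip(trajectory_counts, station_counts)]
    return mean(rates), stdev(rates)
```

For example, stations that counted 1000 vehicles each while 40 and 60 trajectories were matched to them would yield penetrations of 4% and 6%, i.e. a mean of 5.0%.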
2. McMartin, I., M. S. Gauthier, and A. V. Page. Updated post-glacial marine limits along western Hudson Bay, central mainland Nunavut and northern Manitoba. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/330940.

Abstract:
A digital compilation of updated postglacial marine limits was completed in the coastal regions of central mainland Nunavut and northern Manitoba between Churchill and Queen Maud Gulf. The compilation builds on and updates previous mapping of the marine limits at an unprecedented scale, making use of high-resolution digital elevation models, new field-based observations of the marine limit, and digital compilations of supporting datasets (i.e. marine deltas and marine sediments). The updated mapping also permits a first-hand, knowledge-driven interpolation of a continuous limit of marine inundation linking the Tyrrell Sea to Arctic Ocean seawaters. The publication includes a detailed description of the mapping methods, a preliminary interpretation of the results, and a GIS scalable layout map for easy access to the various layers. These datasets and outputs provide robust constraints for reconstructing the patterns of ice retreat and for glacio-isostatic rebound models, which are important for the estimation of relative sea-level changes and their impacts on the construction of nearshore sea-transport infrastructure. They can also be used to evaluate the maximum extent of marine sediments and associated permafrost conditions that can affect land-based infrastructure, as well as potential secondary processes related to marine action in the surficial environment, and can therefore enhance the interpretation of geochemical anomalies in glacial drift exploration methods. A generalized map of the maximum limit of postglacial marine inundation, produced for map representation and readability, also constitutes an accessible output relevant to Northerners and other users of geoscience data.