Selected scholarly literature on the topic "Probabilities – Data processing"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Probabilities – Data processing".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf and read the abstract of the work online, if it is included in the metadata.

Journal articles on the topic "Probabilities – Data processing"

1

Vaidogas, Egidijus Rytas. "Bayesian Processing of Data on Bursts of Pressure Vessels". Information Technology and Control 50, no. 4 (December 16, 2021): 607–26. http://dx.doi.org/10.5755/j01.itc.50.4.29690.

Abstract:
Two alternative Bayesian approaches are proposed for the prediction of fragmentation of pressure vessels triggered off by accidental explosions (bursts) of these containment structures. It is shown how to carry out this prediction with post-mortem data on fragment numbers counted after past explosion accidents. Results of the prediction are estimates of probabilities of individual fragment numbers. These estimates are expressed by means of Bayesian prior or posterior distributions. It is demonstrated how to elicit the prior distributions from relatively scarce post-mortem data on vessel fragmentations. Specifically, it is suggested to develop priors with two Bayesian models known as compound Poisson-gamma and multinomial-Dirichlet probability distributions. The available data is used to specify non-informative prior for Poisson parameter that is subsequently transformed into priors of individual fragment number probabilities. Alternatively, the data is applied to a specification of Dirichlet concentration parameters. The latter priors directly express epistemic uncertainty in the fragment number probabilities. Example calculations presented in the study demonstrate that the suggested non-informative prior distributions are responsive to updates with scarce data on vessel explosions. It is shown that priors specified with Poisson-gamma and multinomial-Dirichlet models differ tangibly; however, this difference decreases with increasing amount of new data. For the sake of brevity and concreteness, the study was limited to fire induced vessel bursts known as boiling liquid expanding vapour explosions (BLEVEs).
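For readers who want to experiment with the compound Poisson-gamma idea sketched in this abstract, the snippet below is a minimal illustration, not the paper's elicitation procedure: a gamma prior on a Poisson fragment rate is conjugately updated with hypothetical post-mortem counts, and the posterior predictive probabilities of individual fragment numbers follow as a negative binomial. All hyperparameters and counts are invented.

```python
# Illustrative sketch only: a gamma prior on a Poisson rate updated with
# observed fragment counts, then the posterior predictive probabilities of
# individual fragment numbers (a negative-binomial distribution). The prior
# hyperparameters and the data below are made up for demonstration.
import numpy as np
from scipy import stats

a0, b0 = 0.5, 0.1                # assumed weakly informative gamma prior: shape, rate
counts = np.array([3, 7, 2, 5])  # hypothetical fragment counts from past bursts

# Conjugate update: gamma(a0 + sum(x), b0 + n)
a_post = a0 + counts.sum()
b_post = b0 + len(counts)

# Posterior predictive P(N = k) is negative binomial with n = a_post, p = b_post/(b_post+1)
k = np.arange(0, 15)
pred = stats.nbinom.pmf(k, a_post, b_post / (b_post + 1.0))
for ki, pi in zip(k, pred):
    print(f"P(fragments = {ki}) ≈ {pi:.3f}")
```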
2

Ivanov, A. I., E. N. Kuprianov, and S. V. Tureev. "Neural network integration of classical statistical tests for processing small samples of biometrics data". Dependability 19, no. 2 (June 16, 2019): 22–27. http://dx.doi.org/10.21683/1729-2646-2019-19-2-22-27.

Abstract:
The aim of this paper is to increase the power of statistical tests through their joint application to reduce the requirement for the size of the test sample. Methods. It is proposed to combine classical statistical tests, i.e. chi-square, Cramér-von Mises and Shapiro-Wilk by means of using equivalent artificial neurons. Each neuron compares the input statistics with a precomputed threshold and has two output states. That allows obtaining three bits of binary output code of a network of three artificial neurons. Results. It is shown that each of such criteria on small samples of biometric data produces high values of errors of the first and second kind in the process of normality hypothesis testing. Neural network integration of three tests under consideration enables a significant reduction of the probabilities of errors of the first and second kind. The paper sets forth the results of neural network integration of pairs, as well as triples of statistical tests under consideration. Conclusions. Expected probabilities of errors of the first and second kind are predicted for neural network integrations of 10 and 30 classical statistical tests for small samples that contain 21 tests. An important element of the prediction process is the symmetrization of the problem, when the probabilities of errors of the first and second kind are made identical and averaged out. Coefficient modules of pair correlation of output states are averaged out as well by means of artificial neuron adders. Only in this case the connection between the number of integrated tests and the expected probabilities of errors of the first and second kind becomes linear in logarithmic coordinates.
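A rough sketch of the "three tests, three bits" idea described above, assuming scipy's implementations of the tests and using p-value thresholds at 0.05 in place of the paper's precomputed statistic thresholds; the small sample is synthetic.

```python
# Each classical normality test is reduced to a binary decision and the
# decisions are concatenated into a 3-bit code. Thresholding p-values at
# 0.05 stands in for the paper's precomputed statistic thresholds.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=21)   # a small biometric-style sample
mu, sigma = x.mean(), x.std(ddof=1)

# Bit 1: chi-square goodness of fit on bins that are equiprobable under the fitted normal
edges = stats.norm.ppf([0.2, 0.4, 0.6, 0.8], mu, sigma)
bins = np.concatenate(([x.min() - 1.0], edges, [x.max() + 1.0]))
observed = np.histogram(x, bins=bins)[0]
chi2_p = stats.chisquare(observed).pvalue     # equal expected counts by construction

# Bit 2: Cramér-von Mises against the fitted normal (parameters estimated
# from the same data, so this p-value is only approximate)
cvm_p = stats.cramervonmises(x, 'norm', args=(mu, sigma)).pvalue

# Bit 3: Shapiro-Wilk
sw_p = stats.shapiro(x).pvalue

bits = [int(p >= 0.05) for p in (chi2_p, cvm_p, sw_p)]  # 1 = "looks normal"
print("binary code:", bits)
```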
3

Romansky, Radi. "Mathematical Model Investigation of a Technological Structure for Personal Data Protection". Axioms 12, no. 2 (January 18, 2023): 102. http://dx.doi.org/10.3390/axioms12020102.

Abstract:
The contemporary digital age is characterized by the massive use of different information technologies and services in the cloud. This raises the following question: “Are personal data processed correctly in global environments?” It is known that there are many requirements that the Data Controller must perform. For this reason, this article presents a point of view for transferring some activities for personal data processing from a traditional system to a cloud environment. The main goal is to investigate the differences between the two versions of data processing. To achieve this goal, a preliminary deterministic formalization of the two cases using a Data Flow Diagram is made. The second phase is the organization of a mathematical (stochastic) model investigation on the basis of a Markov chain apparatus. Analytical models are designed, and their solutions are determined. The final probabilities for important states are determined based on an analytical calculation, and the higher values for the traditional version are defined for data processing in registers (“2”: access for write/read −0.353; “3”: personal data updating −0.212). The investigation of the situations based on cloud computing determines the increasing probability to be “2”. Discussion of the obtained assessment based on a graphical presentation of the analytical results is presented, which permits us to show the differences between the final probabilities for the states in the two versions of personal data processing.
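The final-state probabilities mentioned in the abstract are the stationary distribution of a Markov chain. A minimal sketch follows, with an invented 4-state transition matrix standing in for the paper's model.

```python
# Minimal sketch: final (stationary) probabilities of a discrete-time Markov
# chain, obtained as the left eigenvector of the transition matrix for
# eigenvalue 1. The 4-state matrix below is invented for illustration and
# does not reproduce the paper's model.
import numpy as np

P = np.array([
    [0.1, 0.5, 0.3, 0.1],   # state 1: request received
    [0.0, 0.2, 0.6, 0.2],   # state 2: write/read access
    [0.3, 0.3, 0.2, 0.2],   # state 3: personal data updating
    [0.5, 0.2, 0.2, 0.1],   # state 4: response / exit
])

eigvals, eigvecs = np.linalg.eig(P.T)
stationary = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
stationary = stationary / stationary.sum()
print("final state probabilities:", np.round(stationary, 3))
```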
4

Tkachenko, Kirill. "Providing a Dependable Operation of the Data Processing System with Interval Changes in the Flow Characteristics Based on Analytical Simulations". Automation and modeling in design and management 2021, no. 3-4 (December 30, 2021): 25–30. http://dx.doi.org/10.30987/2658-6436-2021-3-4-25-30.

Abstract:
The article proposes a new approach for adjusting the parameters of computing nodes being a part of a data processing system based on analytical simulation of a queuing system with subsequent estimation of probabilities of hypotheses regarding the computing node state. Methods of analytical modeling of queuing systems and mathematical statistics are used. The result of the study is a mathematical model for assessing the information situation for a computing node, which differs from the previously published system model used. Estimation of conditional probabilities of hypotheses concerning adequate data processing by a computing node allows making a decision on the need of adjusting the parameters of a computing node. This adjustment makes it possible to improve the efficiency of working with tasks on the computing node of the data processing system. The implementation of the proposed model for adjusting the parameters of the computer node of the data processing system increases both the efficiency of process applications on the node and, in general, the efficiency of its operation. The application of the approach to all computing nodes of the data processing system increases the dependability of the system as a whole.
5

Groot, Perry, Christian Gilissen, and Michael Egmont-Petersen. "Error probabilities for local extrema in gene expression data". Pattern Recognition Letters 28, no. 15 (November 2007): 2133–42. http://dx.doi.org/10.1016/j.patrec.2007.06.017.

6

Čajka, Radim, and Martin Krejsa. "Measured Data Processing in Civil Structure Using the DOProC Method". Advanced Materials Research 859 (December 2013): 114–21. http://dx.doi.org/10.4028/www.scientific.net/amr.859.114.

Abstract:
This paper describes the use of measured values in probabilistic tasks by means of a new method currently under development - Direct Optimized Probabilistic Calculation (DOProC). This method has been used to solve a number of probabilistic tasks. DOProC has been applied in ProbCalc; a part of this software is a module for entering and assessing the measured data. The software can read values saved in a text file and can create histograms with a non-parametric (empirical) distribution of the probabilities. In the case of a parametric distribution, it is possible to select from among 24 defined types and to identify the best choice using the coefficient of determination. This approach has been used, for instance, for modelling and experimental validation of the reliability of an additionally prestressed masonry construction.
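The histogram-plus-best-parametric-fit workflow described here can be imitated in a few lines; the sketch below uses three scipy distribution families in place of ProbCalc's 24 built-in types, and the measured data are synthetic.

```python
# Sketch of the described workflow: histogram the measured data, fit a few
# candidate parametric distributions, and keep the one with the highest
# coefficient of determination (R^2) against the empirical histogram.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.gumbel(loc=10.0, scale=2.0, size=500)     # stand-in "measured" values

hist, edges = np.histogram(data, bins=30, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

candidates = {"normal": stats.norm, "lognormal": stats.lognorm, "gumbel": stats.gumbel_r}

best_name, best_r2 = None, -np.inf
for name, dist in candidates.items():
    params = dist.fit(data)
    pdf = dist.pdf(centers, *params)
    ss_res = np.sum((hist - pdf) ** 2)
    ss_tot = np.sum((hist - hist.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    if r2 > best_r2:
        best_name, best_r2 = name, r2
    print(f"{name}: R^2 = {r2:.3f}")
print("selected distribution:", best_name)
```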
7

Chervyakov, N. I., P. A. Lyakhov, and A. R. Orazaev. "3D-generalization of impulse noise removal method for video data processing". Computer Optics 44, no. 1 (February 2020): 92–100. http://dx.doi.org/10.18287/2412-6179-co-577.

Abstract:
The paper proposes a generalized method of adaptive median impulse noise filtering for video data processing. The method is based on the combined use of iterative processing and transformation of the result of median filtering based on the Lorentz distribution. Four different combinations of algorithmic blocks of the method are proposed. The experimental part of the paper presents the results of comparing the quality of the proposed method with known analogues. Video distorted by impulse noise with pixel distortion probabilities from 1% to 99% inclusive was used for the simulation. Numerical assessment of the quality of cleaning video data from noise based on the mean square error (MSE) and structural similarity (SSIM) showed that the proposed method shows the best result of processing in all the considered cases, compared with the known approaches. The results obtained in the paper can be used in practical applications of digital video processing, for example, in systems of video surveillance, identification systems and control of industrial processes.
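As a simple point of reference for the filtering task described above, the following sketch applies a plain (non-adaptive) 3x3 median filter to a frame corrupted by impulse noise and reports the MSE; the paper's iterative, Lorentz-based adaptive method is not reproduced.

```python
# Baseline sketch only: plain 3x3 median filtering of a synthetic frame
# corrupted by salt-and-pepper (impulse) noise, scored with MSE.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(2)
clean = np.tile(np.linspace(0.0, 255.0, 128), (128, 1))   # smooth stand-in frame

noisy = clean.copy()
p = 0.10                                    # pixel distortion probability
mask = rng.random(clean.shape) < p
noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())   # impulses

restored = median_filter(noisy, size=3)

mse = lambda a, b: np.mean((a - b) ** 2)
print("MSE noisy   :", round(mse(clean, noisy), 1))
print("MSE restored:", round(mse(clean, restored), 1))
```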
8

Li, Qiude, Qingyu Xiong, Shengfen Ji, Junhao Wen, Min Gao, Yang Yu, and Rui Xu. "Using fine-tuned conditional probabilities for data transformation of nominal attributes". Pattern Recognition Letters 128 (December 2019): 107–14. http://dx.doi.org/10.1016/j.patrec.2019.08.024.

9

Jain, Kirti. "Sentiment Analysis on Twitter Airline Data". International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 30, 2021): 3767–70. http://dx.doi.org/10.22214/ijraset.2021.35807.

Abstract:
Sentiment analysis, also known as sentiment mining, is a submachine learning task where we want to determine the overall sentiment of a particular document. With machine learning and natural language processing (NLP), we can extract the information of a text and try to classify it as positive, neutral, or negative according to its polarity. In this project, We are trying to classify Twitter tweets into positive, negative, and neutral sentiments by building a model based on probabilities. Twitter is a blogging website where people can quickly and spontaneously share their feelings by sending tweets limited to 140 characters. Because of its use of Twitter, it is a perfect source of data to get the latest general opinion on anything.
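A toy version of the probability-based classifier described in the abstract, assuming a bag-of-words representation and a multinomial Naive Bayes model from scikit-learn; the example tweets are invented rather than taken from the airline dataset.

```python
# Toy sketch of a probability-based tweet classifier: bag-of-words features
# with multinomial Naive Bayes. The handful of "tweets" below are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

tweets = [
    "great flight and friendly crew",
    "my flight was delayed again, terrible service",
    "the plane landed on time",
    "lost my luggage, worst airline ever",
    "smooth boarding and a comfortable seat",
    "rude staff and no help at the gate",
]
labels = ["positive", "negative", "neutral", "negative", "positive", "negative"]

vec = CountVectorizer()
X = vec.fit_transform(tweets)
clf = MultinomialNB().fit(X, labels)

test = vec.transform(["the crew was friendly but the flight was delayed"])
print(dict(zip(clf.classes_, clf.predict_proba(test)[0].round(3))))
```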
10

Buhmann, Joachim, and Hans Kühnel. "Complexity Optimized Data Clustering by Competitive Neural Networks". Neural Computation 5, no. 1 (January 1993): 75–88. http://dx.doi.org/10.1162/neco.1993.5.1.75.

Abstract:
Data clustering is a complex optimization problem with applications ranging from vision and speech processing to data transmission and data storage in technical as well as in biological systems. We discuss a clustering strategy that explicitly reflects the tradeoff between simplicity and precision of a data representation. The resulting clustering algorithm jointly optimizes distortion errors and complexity costs. A maximum entropy estimation of the clustering cost function yields an optimal number of clusters, their positions, and their cluster probabilities. Our approach establishes a unifying framework for different clustering methods like K-means clustering, fuzzy clustering, entropy constrained vector quantization, or topological feature maps and competitive neural networks.
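A minimal sketch of the maximum-entropy ("soft") clustering idea: assignment probabilities are Gibbs weights of squared distances at a fixed temperature, alternated with weighted centroid updates. The paper's complexity-cost term is omitted and the data are synthetic.

```python
# Minimal maximum-entropy clustering sketch: P(cluster k | x) are Gibbs
# weights exp(-||x - y_k||^2 / T), alternated with weighted centroid updates.
import numpy as np

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])

K, T = 2, 0.5
Y = X[rng.choice(len(X), K, replace=False)]              # initial cluster centers

for _ in range(50):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared distances
    logw = -d2 / T
    P = np.exp(logw - logw.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)                    # cluster probabilities
    Y = (P.T @ X) / P.sum(axis=0)[:, None]               # re-estimate centers

print("centers:\n", Y.round(2))
print("cluster probabilities of first point:", P[0].round(3))
```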

Theses / dissertations on the topic "Probabilities – Data processing"

1

Sun, Liwen, and 孙理文. "Mining uncertain data with probabilistic guarantees". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B45705392.

2

Navas, Portella Víctor. "Statistical modelling of avalanche observables: criticality and universality". Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/670764.

Abstract:
Complex systems can be understood as an entity composed by a large number of interactive elements whose emergent global behaviour cannot be derived from the local laws characterizing their constituents. The observables characterizing these systems can be observed at different scales and they often exhibit interesting properties such as lack of characteristic scales and self-similarity. In this context, power-law type functions take an important role in the description of these observables. The presence of power-law functions resembles to the situation of thermodynamic quantities close to a critical point in equilibrium critical phenomena. Different complex systems can be grouped into the same universality class when the power-law functions characterizing their observables have the same exponents. The response of some complex systems proceeds by the so called avalanche process: a collective response of the system characterized by following an intermittent dynamics, with sudden bursts of activity separated by periods of silence. This kind of out-of-equilibrium systems can be found in different disciplines such as seismology, astrophysics, ecology, finance or epidemiology, just to mention a few of them. Avalanches are characterized by a set of observables such as the size, the duration or the energy. When avalanche observables exhibit lack of characteristic scales, their probability distributions can be statistically modelled by power-law-type distributions. Avalanche criticality occurs when avalanche observables can be characterized by this kind of distributions. In this sense, the concepts of criticality and universality, which are well defined in equilibrium phenomena, can be also extended for the probability distributions describing avalanche observables in out-of-equilibrium systems. The main goal of this PhD thesis relies on providing robust statistical methods in order to characterize avalanche criticality and universality in empirical datasets. Due to limitations in data acquisition, empirical datasets often only cover a narrow range of observation, making it difficult to establish power-law behaviour unambiguously. With the aim of discussing the concepts of avalanche criticality and universality, two different systems are going to be considered: earthquakes and acoustic emission events generated during compression experiments of porous materials in the laboratory (labquakes). The techniques developed in this PhD thesis are mainly focused on the distribution of earthquake and labquake sizes, which is known as the Gutenberg-Richter law. However, the methods are much more general and can be applied to any other avalanche observable. The statistical techniques provided in this work can also be helpful for earthquake forecasting. Coulomb-stress theory has been used for years in seismology to understand how earthquakes trigger each other. Earthquake models that relate earthquake rates and Coulomb stress after a main event, such as the rate-and-state model, assume that the magnitude distribution of earthquakes is not affected by the change in the Coulomb stress. Several statistical analyses are performed to test whether the distribution of magnitudes is sensitive to the sign of the Coulomb-stress increase. The use of advanced statistical techniques for the analysis of complex systems has been found to be necessary and very helpful in order to provide rigour to the empirical results, particularly, to those problems regarding hazard analysis.
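The power-law fits at the heart of this thesis can be illustrated with the standard maximum-likelihood estimator for a continuous power-law exponent above a cutoff; the sketch below uses synthetic avalanche sizes and omits the thesis's goodness-of-fit and universality machinery.

```python
# Maximum-likelihood estimate of a continuous power-law exponent for
# avalanche sizes above a cutoff x_min, with a rough standard error.
# The data are synthetic; x_min selection and goodness-of-fit tests are omitted.
import numpy as np

rng = np.random.default_rng(4)
alpha_true, x_min = 2.5, 1.0
u = rng.random(5000)
sizes = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))   # power-law samples

tail = sizes[sizes >= x_min]
n = len(tail)
alpha_hat = 1.0 + n / np.sum(np.log(tail / x_min))          # Hill-type MLE
std_err = (alpha_hat - 1.0) / np.sqrt(n)

print(f"alpha_hat = {alpha_hat:.3f} +/- {std_err:.3f} (true {alpha_true})")
```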
3

Franco, Samuel. "Searching for long transient gravitational waves in the LIGO-Virgo data". PhD thesis, Université Paris Sud - Paris XI, 2014. http://tel.archives-ouvertes.fr/tel-01062708.

Abstract:
This thesis presents the results of the STAMPAS all-sky search for long transient gravitational waves in the 2005-2007 LIGO-Virgo data. Gravitational waves are perturbations of the space-time metric. The Virgo and LIGO experiments are designed to detect such waves. They are Michelson interferometers with 3 km and 4 km long arms, whose light output is altered during the passage of a gravitational wave. Until very recently, transient gravitational wave search pipelines were focused on short transients, lasting less than 1 second, and on binary coalescence signals. STAMPAS is one of the very first pipelines entirely dedicated to the search of long transient gravitational wave signals, lasting from 1s to O(100s). These signals originate, among other sources, from instabilities in protoneutron stars as a result of their violent birth. The standing accretion shock instability in core collapse supernovae or instabilities in accretion disks are also possible mechanisms for gravitational wave long transients. Eccentric black hole binary coalescences are also expected to emit powerful gravitational waves for several seconds before the final plunge. STAMPAS is based on the correlation of data from two interferometers. Time-frequency maps of the data are extracted, and significant pixels are clustered to form triggers. No assumption on the direction, the time or the form of the signals is made. The first STAMPAS search has been performed on the data from the two LIGO detectors, between 2005 and 2007. After a rigorous trigger selection, the analysis revealed that their rate is close to the Gaussian noise expectation, which is a significant achievement. No gravitational wave candidate has been detected, and upper limits on the astrophysical rates of several models of accretion disk instability sources and eccentric black hole binary coalescences have been set. The STAMPAS pipeline demonstrated its capabilities to search for any long transient gravitational wave signals during the advanced detector era. Keywords: Gravitational waves, Interferometry, Long transients, Signal Processing, Accretion Disk Instabilities, Eccentric Black Hole Binaries.
4

Antelo, Junior Ernesto Willams Molina. "Estimação conjunta de atraso de tempo subamostral e eco de referência para sinais de ultrassom". Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2616.

Abstract:
In non-destructive testing (NDT) with ultrasound, the signal obtained from a real data acquisition system may be contaminated by noise and the echoes may have sub-sample time delays. In some cases, these aspects may compromise the information obtained from a signal by an acquisition system. To deal with these situations, Time Delay Estimation (TDE) techniques and signal reconstruction techniques can be used to perform approximations and also to obtain more information about the data set. TDE techniques can be used for a number of purposes in the defectoscopy, for example, for accurate location of defects in parts, monitoring the corrosion rate in pieces, measuring the thickness of a given material, and so on. Data reconstruction methods have a wide range of applications, such as NDT, medical imaging, telecommunications and so on. In general, most time delay estimation techniques require a high precision signal model, otherwise the location of this estimate may have reduced quality. In this work, an alternative scheme is proposed that jointly estimates an echo model and time delays for several echoes from noisy measurements. In addition, by reinterpreting the utilized techniques from a probabilistic perspective, its functionalities are extended through a joint application of a maximum likelihood estimator (MLE) and a maximum a posteriori (MAP) estimator. Finally, through simulations, results are presented to demonstrate the superiority of the proposed method over conventional methods.
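A common baseline for the sub-sample time delay problem described above is cross-correlation with parabolic peak interpolation; the sketch below shows only that baseline on a synthetic pulse (the sampling rate and pulse parameters are assumptions), not the thesis's joint MLE/MAP estimation of the reference echo and the delays.

```python
# Baseline sketch for sub-sample time delay estimation: cross-correlate a
# noisy echo with a reference pulse and refine the correlation peak with a
# three-point parabolic fit. Signals and parameters are synthetic.
import numpy as np

fs = 100e6                                    # sampling rate [Hz], assumed
t = np.arange(0, 2e-6, 1.0 / fs)
f0, bw = 5e6, 0.5e6
ref = np.exp(-((t - 0.5e-6) ** 2) * (bw * 2 * np.pi) ** 2) * np.cos(2 * np.pi * f0 * t)

true_delay = 37.4 / fs                        # 37.4 samples: sub-sample part is 0.4
rng = np.random.default_rng(5)
echo = np.interp(t - true_delay, t, ref, left=0.0, right=0.0)
echo += 0.05 * rng.standard_normal(len(t))

corr = np.correlate(echo, ref, mode="full")
k = np.argmax(corr)
y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)   # parabolic peak offset, in samples
delay_samples = (k - (len(ref) - 1)) + frac

print(f"estimated delay: {delay_samples:.2f} samples (true {true_delay * fs:.2f})")
```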
5

Malmgren, Henrik. "Revision of an artificial neural network enabling industrial sorting". Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-392690.

Abstract:
Convolutional artificial neural networks can be applied for image-based object classification to inform automated actions, such as handling of objects on a production line. The present thesis describes theoretical background for creating a classifier and explores the effects of introducing a set of relatively recent techniques to an existing ensemble of classifiers in use for an industrial sorting system. The findings indicate that it's important to use spatial variety dropout regularization for high resolution image inputs, and use an optimizer configuration with good convergence properties. The findings also demonstrate examples of ensemble classifiers being effectively consolidated into unified models using the distillation technique. An analogue arrangement with optimization against multiple output targets, incorporating additional information, showed accuracy gains comparable to ensembling. For use of the classifier on test data with statistics different than those of the dataset, results indicate that augmentation of the input data during classifier creation helps performance, but would, in the current case, likely need to be guided by information about the distribution shift to have sufficiently positive impact to enable a practical application. I suggest, for future development, updated architectures, automated hyperparameter search and leveraging the bountiful unlabeled data potentially available from production lines.
6

Jiang, Bin (Computer Science & Engineering, Faculty of Engineering, UNSW). "Probabilistic skylines on uncertain data". 2007. http://handle.unsw.edu.au/1959.4/40712.

Abstract:
Skyline analysis is important for multi-criteria decision making applications. The data in some of these applications are inherently uncertain due to various factors. Although a considerable amount of research has been dedicated separately to efficient skyline computation, as well as modeling uncertain data and answering some types of queries on uncertain data, how to conduct skyline analysis on uncertain data remains an open problem at large. In this thesis, we tackle the problem of skyline analysis on uncertain data. We propose a novel probabilistic skyline model where an uncertain object may take a probability to be in the skyline, and a p-skyline contains all the objects whose skyline probabilities are at least p. Computing probabilistic skylines on large uncertain data sets is challenging. An uncertain object is conceptually described by a probability density function (PDF) in the continuous case, or in the discrete case a set of instances (points) such that each instance has a probability to appear. We develop two efficient algorithms, the bottom-up and top-down algorithms, of computing p-skyline of a set of uncertain objects in the discrete case. We also discuss that our techniques can be applied to the continuous case as well. The bottom-up algorithm computes the skyline probabilities of some selected instances of uncertain objects, and uses those instances to prune other instances and uncertain objects effectively. The top-down algorithm recursively partitions the instances of uncertain objects into subsets, and prunes subsets and objects aggressively. Our experimental results on both the real NBA player data set and the benchmark synthetic data sets show that probabilistic skylines are interesting and useful, and our two algorithms are efficient on large data sets, and complementary to each other in performance.
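The probabilistic skyline definition in this abstract can be computed brute force on small discrete examples, as in the sketch below; the thesis's bottom-up and top-down pruning algorithms are not reproduced, and the objects and probabilities are invented.

```python
# Brute-force probabilistic skylines on discrete uncertain objects (no pruning):
# an instance's skyline probability is its own probability times, for every
# other object, the probability that none of that object's instances dominates
# it; the object's skyline probability is the sum over its instances.
# Smaller values are better in both dimensions.
import numpy as np

def dominates(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

# each object: list of (instance_point, instance_probability), probabilities sum to 1
objects = {
    "A": [((1.0, 4.0), 0.5), ((3.0, 3.0), 0.5)],
    "B": [((2.0, 2.0), 0.6), ((5.0, 5.0), 0.4)],
    "C": [((4.0, 1.0), 1.0)],
}

def skyline_probability(name):
    total = 0.0
    for point, prob in objects[name]:
        p_not_dominated = 1.0
        for other, instances in objects.items():
            if other == name:
                continue
            p_dom = sum(q for pt, q in instances if dominates(pt, point))
            p_not_dominated *= (1.0 - p_dom)
        total += prob * p_not_dominated
    return total

for name in objects:
    print(name, "skyline probability:", round(skyline_probability(name), 3))
```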
7

Murison, Robert. "Problems in density estimation for independent and dependent data". PhD thesis, 1993. http://hdl.handle.net/1885/136654.

Abstract:
The aim of this thesis is to provide two extensions to the theory of nonparametric kernel density estimation that increase the scope of the technique. The basic ideas of kernel density estimation are not new, having been proposed by Rosenblatt [20] and extended by Parzen [17]. The objective is that for a given set of data, estimates of functions of the distribution of the data such as probability densities are derived without recourse to rigid parametric assumptions and allow the data themselves to be more expressive in the statistical outcome. Thus kernel estimation has captured the imagination of statisticians searching for more flexibility and eager to utilise the computing revolution. The abundance of data and computing power have revealed distributional shapes that are difficult to model by traditional parametric approaches and in this era, the computer intensive technique of kernel estimation can be performed routinely. Also we are aware that computing power can be harnessed to give improved statistical analyses. Thus a lot of modern statistical research involves kernel estimation from complex data sets and our research is concordant with that momentum. The thesis contains three chapters. In Chapter 1 we provide an introduction to kernel density estimation and we give an outline to our two research topics. Our first extension to the theory is given in Chapter 2 where we investigate density estimation from independent data, using high order kernel functions. These kernel functions are designed for bias reduction but they have the penalty of yielding negative density estimates where data are sparse. In common practice, the negative estimates would arise in the tails of the density and we provide four ways of correcting this negativity to give bona fide estimates of the probability density. Our theory shows that the effects of these corrections are asymptotically negligible and thus opens the way for the regular use of bias reducing, high order kernel functions. We also consider density estimation of continuous stationary stochastic processes and this is the content of Chapter 3. With this problem, the dependent nature of the data influences the accuracy of the kernel density estimator and we provide theory regarding the convergence of the kernel estimators of the density and its derivatives to the true functions. An important result from this study is that nonparametric density estimators from dependent processes can have the same rates of convergence as their parametric counterparts yet retain the flexibility of being independent of parametric assumptions. Our other results indicate that the convergence rate of the density estimator can be quite slow if there are large lag dependencies amongst the data and suggests that large samples would be required for reliable inference about such data. The flexibility of kernel density estimation for continuous and discrete data, independent and dependent observations, means that it is a useful statistical tool. The techniques given in this thesis are not restricted to the analysis of simple sets of data but may be employed in the construction of statistical models for complex data with a high degree of structure.
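The negativity issue with high-order kernels mentioned above is easy to reproduce; the sketch below builds a fourth-order (bias-reducing) kernel estimate on synthetic data and applies only the simplest correction, clipping and renormalizing, with an arbitrary bandwidth. The thesis analyses several corrections; this is not one of its specific procedures.

```python
# Fourth-order Gaussian-based kernel density estimate, which can go negative
# where data are sparse, followed by a naive clip-and-renormalize correction.
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=200)                        # sample
grid = np.linspace(-5, 5, 1001)
h = 0.6                                         # bandwidth (assumed, not optimized)

phi = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
K4 = lambda u: 0.5 * (3.0 - u ** 2) * phi(u)    # fourth-order kernel

u = (grid[:, None] - x[None, :]) / h
f_hat = K4(u).mean(axis=1) / h                  # raw estimate, may dip negative

dx = grid[1] - grid[0]
f_pos = np.clip(f_hat, 0.0, None)
f_corr = f_pos / (f_pos.sum() * dx)             # clip and renormalize on the grid

print("min of raw estimate      :", f_hat.min())
print("integral after correction:", round(f_corr.sum() * dx, 4))
```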
8

Kanetsi, Khahiso. "Annual peak rainfall data augmentation - A Bayesian joint probability approach for catchments in Lesotho". Thesis, 2017. https://hdl.handle.net/10539/25567.

Abstract:
A research report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering, 2017
The main problem to be investigated is how short duration data records can be augmented using existing data from nearby catchments with data with long periods of record. The purpose of the investigation is to establish a method of improving hydrological data using data from a gauged catchment to improve data from an ungauged catchment. The investigation is undertaken using rainfall data for catchments in Lesotho. Marginal distributions describing the annual maximum rainfall for the catchments, and a joint distribution of pairs of catchments were established. The parameters of these distributions were estimated using the Bayesian – Markov Chain Monte Carlo approach, and using both the single-site (univariate) estimation and the two-site (bivariate) estimations. The results of the analyses show that for catchments with data with short periods of record, the precision of the estimated location and scale parameters improved when the estimates were carried out using the two-site (bivariate) method. Rainfall events predicted using bivariate analyses parameters were generally higher than the univariate analyses parameters. From the results, it can be concluded that the two-site approach can be used to improve the precision of the rainfall predictions for catchments with data with short periods of record. This method can be used in practice by hydrologists and design engineers to enhance available data for use in designs and assessments.
9

Wang, Haiou. "Logic sampling, likelihood weighting and AIS-BN : an exploration of importance sampling". Thesis, 2001. http://hdl.handle.net/1957/28769.

Abstract:
Logic Sampling, Likelihood Weighting and AIS-BN are three variants of stochastic sampling, one class of approximate inference for Bayesian networks. We summarize the ideas underlying each algorithm and the relationship among them. The results from a set of empirical experiments comparing Logic Sampling, Likelihood Weighting and AIS-BN are presented. We also test the impact of each of the proposed heuristics and learning method separately and in combination in order to give a deeper look into AIS-BN, and see how the heuristics and learning method contribute to the power of the algorithm. Key words: belief network, probability inference, Logic Sampling, Likelihood Weighting, Importance Sampling, Adaptive Importance Sampling Algorithm for Evidential Reasoning in Large Bayesian Networks(AIS-BN), Mean Percentage Error (MPE), Mean Square Error (MSE), Convergence Rate, heuristic, learning method.
Graduation date: 2002
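A minimal likelihood-weighting example in the spirit of the thesis, on a two-node network Rain -> WetGrass with invented conditional probabilities, estimating P(Rain | WetGrass = true) and checking the result against Bayes' rule.

```python
# Likelihood weighting on a tiny Bayesian network: evidence variables are
# clamped and contribute their likelihood to the sample weight; the other
# variables are sampled from their conditionals. The CPT numbers are invented.
import numpy as np

rng = np.random.default_rng(7)
P_rain = 0.2
P_wet_given_rain = {True: 0.9, False: 0.1}

def likelihood_weighting(n_samples=100_000, evidence_wet=True):
    num = den = 0.0
    for _ in range(n_samples):
        rain = bool(rng.random() < P_rain)           # sample the non-evidence node
        w = P_wet_given_rain[rain] if evidence_wet else 1 - P_wet_given_rain[rain]
        num += w * rain
        den += w
    return num / den

est = likelihood_weighting()
exact = (0.9 * 0.2) / (0.9 * 0.2 + 0.1 * 0.8)        # Bayes' rule check
print(f"likelihood weighting: {est:.3f}   exact: {exact:.3f}")
```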
10

Fazelnia, Ghazal. "Optimization for Probabilistic Machine Learning". Thesis, 2019. https://doi.org/10.7916/d8-jm7k-2k98.

Abstract:
We have access to great variety of datasets more than any time in the history. Everyday, more data is collected from various natural resources and digital platforms. Great advances in the area of machine learning research in the past few decades have relied strongly on availability of these datasets. However, analyzing them imposes significant challenges that are mainly due to two factors. First, the datasets have complex structures with hidden interdependencies. Second, most of the valuable datasets are high dimensional and are largely scaled. The main goal of a machine learning framework is to design a model that is a valid representative of the observations and develop a learning algorithm to make inference about unobserved or latent data based on the observations. Discovering hidden patterns and inferring latent characteristics in such datasets is one of the greatest challenges in the area of machine learning research. In this dissertation, I will investigate some of the challenges in modeling and algorithm design, and present my research results on how to overcome these obstacles. Analyzing data generally involves two main stages. The first stage is designing a model that is flexible enough to capture complex variation and latent structures in data and is robust enough to generalize well to the unseen data. Designing an expressive and interpretable model is one of crucial objectives in this stage. The second stage involves training learning algorithm on the observed data and measuring the accuracy of model and learning algorithm. This stage usually involves an optimization problem whose objective is to tune the model to the training data and learn the model parameters. Finding global optimal or sufficiently good local optimal solution is one of the main challenges in this step. Probabilistic models are one of the best known models for capturing data generating process and quantifying uncertainties in data using random variables and probability distributions. They are powerful models that are shown to be adaptive and robust and can scale well to large datasets. However, most probabilistic models have a complex structure. Training them could become challenging commonly due to the presence of intractable integrals in the calculation. To remedy this, they require approximate inference strategies that often results in non-convex optimization problems. The optimization part ensures that the model is the best representative of data or data generating process. The non-convexity of an optimization problem take away the general guarantee on finding a global optimal solution. It will be shown later in this dissertation that inference for a significant number of probabilistic models require solving a non-convex optimization problem. One of the well-known methods for approximate inference in probabilistic modeling is variational inference. In the Bayesian setting, the target is to learn the true posterior distribution for model parameters given the observations and prior distributions. The main challenge involves marginalization of all the other variables in the model except for the variable of interest. This high-dimensional integral is generally computationally hard, and for many models there is no known polynomial time algorithm for calculating them exactly. Variational inference deals with finding an approximate posterior distribution for Bayesian models where finding the true posterior distribution is analytically or numerically impossible. 
It assumes a family of distribution for the estimation, and finds the closest member of that family to the true posterior distribution using a distance measure. For many models though, this technique requires solving a non-convex optimization problem that has no general guarantee on reaching a global optimal solution. This dissertation presents a convex relaxation technique for dealing with hardness of the optimization involved in the inference. The proposed convex relaxation technique is based on semidefinite optimization that has a general applicability to polynomial optimization problem. I will present theoretical foundations and in-depth details of this relaxation in this work. Linear dynamical systems represent the functionality of many real-world physical systems. They can describe the dynamics of a linear time-varying observation which is controlled by a controller unit with quadratic cost function objectives. Designing distributed and decentralized controllers is the goal of many of these systems, which computationally, results in a non-convex optimization problem. In this dissertation, I will further investigate the issues arising in this area and develop a convex relaxation framework to deal with the optimization challenges. Setting the correct number of model parameters is an important aspect for a good probabilistic model. If there are only a few parameters, model may lack capturing all the essential relations and components in the observations while too many parameters may cause significant complications in learning or overfit to the observations. Non-parametric models are suitable techniques to deal with this issue. They allow the model to learn the appropriate number of parameters to describe the data and make predictions. In this dissertation, I will present my work on designing Bayesian non-parametric models as powerful tools for learning representations of data. Moreover, I will describe the algorithm that we derived to efficiently train the model on the observations and learn the number of model parameters. Later in this dissertation, I will present my works on designing probabilistic models in combination with deep learning methods for representing sequential data. Sequential datasets comprise a significant portion of resources in the area of machine learning research. Designing models to capture dependencies in sequential datasets are of great interest and have a wide variety of applications in engineering, medicine and statistics. Recent advances in deep learning research has shown exceptional promises in this area. However, they lack interpretability in their general form. To remedy this, I will present my work on mixing probabilistic models with neural network models that results in better performance and expressiveness of the results.

Books on the topic "Probabilities – Data processing"

1

Kelly, Brendan. Data management & probability module. Toronto, ON: Ontario Ministry of Education and Training, 1998.

2

Basic probability using MATLAB. Boston: PWS Pub. Co., 1995.

3

Petrushin, V. N. Informat︠s︡ionnai︠a︡ chuvstvitelʹnostʹ kompʹi︠u︡ternykh algoritmov. Moskva: FIZMATLIT, 2010.

4

Aizaki, Hideo. Stated preference methods using R. Boca Raton: CRC Press, Taylor & Francis Group, 2015.

5

Callender, J. T., ed. Exploring probability and statistics with spreadsheets. London: Prentice Hall, 1995.

6

Probability and random processes: Using MATLAB with applications to continuous and discrete time systems. Chicago: Irwin, 1997.

7

Rozhkov, V. A. Metody i sredstva statisticheskoĭ obrabotki i analiza informat︠s︡ii ob obstanovke v mirovom okeane na primere gidrometeorologii. Obninsk: VNIIGMI-MT︠S︡D, 2009.

8

Andrews, D. F. Calculations with random variables using mathematica. Toronto: University of Toronto, Dept. of Statistics, 1990.

9

Petersen, E. R. PROPS+: Probabilistic and optimization spreadsheets plus what-if-solver. Reading, MA: Addison-Wesley, 1994.


Book chapters on the topic "Probabilities – Data processing"

1

Pegoraro, Marco, Bianka Bakullari, Merih Seran Uysal, and Wil M. P. van der Aalst. "Probability Estimation of Uncertain Process Trace Realizations". In Lecture Notes in Business Information Processing, 21–33. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_2.

Abstract:
Process mining is a scientific discipline that analyzes event data, often collected in databases called event logs. Recently, uncertain event logs have become of interest, which contain non-deterministic and stochastic event attributes that may represent many possible real-life scenarios. In this paper, we present a method to reliably estimate the probability of each of such scenarios, allowing their analysis. Experiments show that the probabilities calculated with our method closely match the true chances of occurrence of specific outcomes, enabling more trustworthy analyses on uncertain data.
2

Kosheleva, Olga, and Vladik Kreinovich. "Beyond p-Boxes and Interval-Valued Moments: Natural Next Approximations to General Imprecise Probabilities". In Statistical and Fuzzy Approaches to Data Processing, with Applications to Econometrics and Other Areas, 133–43. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45619-1_11.

3

Kukar, Matjaž, Igor Kononenko, and Ciril Grošelj. "Automated Diagnostics of Coronary Artery Disease". In Data Mining, 1043–63. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2455-9.ch053.

Abstract:
The authors present results and the latest advancement in their long-term study on using image processing and data mining methods in medical image analysis in general, and in clinical diagnostics of coronary artery disease in particular. Since the evaluation of modern medical images is often difficult and time-consuming, authors integrate advanced analytical and decision support tools in diagnostic process. Partial diagnostic results, frequently obtained from tests with substantial imperfections, can be thus integrated in ultimate diagnostic conclusion about the probability of disease for a given patient. Authors study various topics, such as improving the predictive power of clinical tests by utilizing pre-test and post-test probabilities, texture representation, multi-resolution feature extraction, feature construction and data mining algorithms that significantly outperform the medical practice. During their long-term study (1995-2011) authors achieved, among other minor results, two really significant milestones. The first was achieved by using machine learning to significantly increase post-test diagnostic probabilities with respect to expert physicians. The second, even more significant result utilizes various advanced data analysis techniques, such as automatic multi-resolution image parameterization combined with feature extraction and machine learning methods to significantly improve on all aspects of diagnostic performance. With the proposed approach clinical results are significantly as well as fully automatically, improved throughout the study. Overall, the most significant result of the work is an improvement in the diagnostic power of the whole diagnostic process. The approach supports, but does not replace, physicians’ diagnostic process, and can assist in decisions on the cost-effectiveness of diagnostic tests.
4

Kukar, Matjaž, Igor Kononenko, and Ciril Grošelj. "Automated Diagnostics of Coronary Artery Disease". In Medical Applications of Intelligent Data Analysis, 91–112. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-1803-9.ch006.

Abstract:
The authors present results and the latest advancement in their long-term study on using image processing and data mining methods in medical image analysis in general, and in clinical diagnostics of coronary artery disease in particular. Since the evaluation of modern medical images is often difficult and time-consuming, authors integrate advanced analytical and decision support tools in diagnostic process. Partial diagnostic results, frequently obtained from tests with substantial imperfections, can be thus integrated in ultimate diagnostic conclusion about the probability of disease for a given patient. Authors study various topics, such as improving the predictive power of clinical tests by utilizing pre-test and post-test probabilities, texture representation, multi-resolution feature extraction, feature construction and data mining algorithms that significantly outperform the medical practice. During their long-term study (1995-2011) authors achieved, among other minor results, two really significant milestones. The first was achieved by using machine learning to significantly increase post-test diagnostic probabilities with respect to expert physicians. The second, even more significant result utilizes various advanced data analysis techniques, such as automatic multi-resolution image parameterization combined with feature extraction and machine learning methods to significantly improve on all aspects of diagnostic performance. With the proposed approach clinical results are significantly as well as fully automatically, improved throughout the study. Overall, the most significant result of the work is an improvement in the diagnostic power of the whole diagnostic process. The approach supports, but does not replace, physicians’ diagnostic process, and can assist in decisions on the cost-effectiveness of diagnostic tests.
5

Chiverton, John, and Kevin Wells. "PV Modeling of Medical Imaging Systems". In Benford's Law. Princeton University Press, 2015. http://dx.doi.org/10.23943/princeton/9780691147611.003.0018.

Abstract:
This chapter applies a Bayesian formulation of the Partial Volume (PV) effect, based on the Benford distribution, to the statistical classification of nuclear medicine imaging data: specifically Positron Emission Tomography (PET) acquired as part of a PET-CT phantom imaging procedure. The Benford distribution is a discrete probability distribution of great interest for medical imaging, because it describes the probabilities of occurrence of single digits in many sources of data. The chapter thus describes the PET-CT imaging and post-processing process to derive a gold standard. Moreover, this chapter uses it as a ground truth for the assessment of a Benford classifier formulation. The use of this gold standard shows that the classification of both the simulated and real phantom imaging data is well described by the Benford distribution.
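The Benford first-digit distribution used in the chapter is P(d) = log10(1 + 1/d); the sketch below compares it with the leading digits of a synthetic positive-valued dataset standing in for PET intensities.

```python
# Benford first-digit probabilities, P(d) = log10(1 + 1/d), versus the
# empirical leading digits of synthetic log-normal "intensities".
import numpy as np

digits = np.arange(1, 10)
benford = np.log10(1.0 + 1.0 / digits)

rng = np.random.default_rng(8)
values = rng.lognormal(mean=3.0, sigma=1.5, size=100_000)
leading = (values / 10.0 ** np.floor(np.log10(values))).astype(int)
empirical = np.array([(leading == d).mean() for d in digits])

for d, b, e in zip(digits, benford, empirical):
    print(f"digit {d}: Benford {b:.3f}   empirical {e:.3f}")
```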
6

Harff, J. E., and R. A. Olea. "From Multivariate Sampling To Thematic Maps With An Application To Marine Geochemistry". In Computers in Geology - 25 Years of Progress. Oxford University Press, 1994. http://dx.doi.org/10.1093/oso/9780195085938.003.0027.

Abstract:
Integration of mapped data is one of the main problems in geological information processing. Structural, compositional, and genetic features of the Earth's crust may be apparent only if variables that were mapped separately are studied simultaneously. Geologists traditionally solve this problem by the "light table method." Mathematical geologists, in particular, D.F. Merriam, have applied multivariate techniques to data integration (Merriam and Sneath, 1966; Harbaugh and Merriam, 1968; Merriam and Jewett, 1988; Merriam and Sondergard, 1988; Herzfeld and Merriam, 1990; Brower and Merriam, 1990). In this article a regionalization concept based on the interpolation of Bayes' probabilities of class memberships is described using a geostatistical model called "classification probability kriging." The problem of interpolation between data points has not been considered in most of the publications on multivariate techniques mentioned above. An attempt at data integration—including interpolation of multivariate data vectors—was made by Harff and Davis (1990) using the concept of regionalized classification. This concept combines the theory of classification of geological objects (Rodionov, 1981) with the theory of regionalized variables (Matheron, 1970; Journel and Huijbregts, 1978). The method is based on the transformation of the original multivariate space of observed variables into a univariate space of rock types or rock classes. Distances between multivariate class centers and measurement vectors within the feature space are needed for this transformation. Such distances can be interpolated between the data points using kriging. Because of the assumptions of multinormality and the fact that Mahalanobis' distances tend to follow a χ² distribution, the distances must be normalized before kriging (Harff, Davis and Olea, 1991). From the resulting normalized distance vectors at each node of a spatial grid, the Bayes' probability of class membership can be calculated for each class. The corresponding grid nodes will be assigned to the classes with the greatest membership probabilities. The result is a regionalization scheme covering the area under investigation. Let X(r) denote the multivariate field of features, modeled as a regionalized variable (RV).
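The pre-kriging step described here, turning Mahalanobis distances from class centres into Bayes class-membership probabilities, can be sketched as follows for two Gaussian classes with equal priors and invented parameters; the kriging interpolation itself is not shown.

```python
# Mahalanobis distances of an observation from two multivariate class centres,
# converted into Bayes posterior probabilities of class membership under
# Gaussian class models with equal priors. All numbers are invented.
import numpy as np
from scipy.stats import multivariate_normal

mu = {"class_1": np.array([1.0, 2.0]), "class_2": np.array([3.0, 1.0])}
cov = {"class_1": np.array([[1.0, 0.2], [0.2, 0.5]]),
       "class_2": np.array([[0.8, -0.1], [-0.1, 0.6]])}

x = np.array([2.0, 1.5])                       # observed geochemical vector

d2, dens = {}, {}
for c in mu:
    diff = x - mu[c]
    d2[c] = float(diff @ np.linalg.inv(cov[c]) @ diff)            # squared Mahalanobis distance
    dens[c] = multivariate_normal.pdf(x, mean=mu[c], cov=cov[c])  # class density at x

total = sum(dens.values())
for c in mu:
    print(f"{c}: Mahalanobis^2 = {d2[c]:.2f}, P(class | x) = {dens[c] / total:.3f}")
```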

Conference papers on the topic "Probabilities – Data processing"

1

Lokse, Sigurd, and Robert Jenssen. "Ranking Using Transition Probabilities Learned from Multi-Attribute Data". In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8462132.

2

Reznik, A. L., A. A. Soloviev e A. V. Torgov. "On the statistics of anomalous clumps in random point images". In Spatial Data Processing for Monitoring of Natural and Anthropogenic Processes 2021. Crossref, 2021. http://dx.doi.org/10.25743/sdm.2021.11.90.030.

Abstract:
New algorithms for calculating exact analytical formulas describing two related probabilities are proposed, substantiated and software implemented: 1) the probability of the formation of anomalously large local groups in a random point image; 2) the probability of the absence of significant local groupings in a random point image.
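A quick Monte Carlo check of the kind of probability the paper treats analytically: for n points thrown uniformly on the unit square, the chance that some cell of an m x m grid holds at least k points; the values of n, m and k below are arbitrary, and the paper's exact formulas are not reproduced.

```python
# Monte Carlo estimate of the probability that a random point image contains
# an "anomalously large" local group, here defined as a grid cell with >= k points.
import numpy as np

rng = np.random.default_rng(9)
n, m, k, trials = 100, 10, 5, 20_000

hits = 0
for _ in range(trials):
    pts = rng.random((n, 2))
    cells = (pts * m).astype(int)                 # grid cell of each point
    idx = cells[:, 0] * m + cells[:, 1]
    counts = np.bincount(idx, minlength=m * m)
    hits += counts.max() >= k

print("P(some cell holds >=", k, "points) ≈", hits / trials)
```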
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
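The two probabilities named in the abstract can be illustrated with a brute-force Monte Carlo check. The paper derives exact analytical formulas; the sketch below, with hypothetical values for the number of points, the window size, and the clump threshold, only shows what the quantities refer to.

```python
import numpy as np

# Monte Carlo illustration of the two quantities studied in the paper:
# p_clump: probability that a random point image contains a local group of
#          at least K points inside some window of side H;
# 1 - p_clump: probability that no such grouping occurs.
rng = np.random.default_rng(1)
N, K, H, TRIALS = 50, 4, 0.1, 2000   # hypothetical parameters

def has_clump(points, k, h):
    # A clump exists if some axis-aligned window of side h anchored at a point
    # contains at least k points (a simplified notion of "local group").
    for x, y in points:
        inside = ((points[:, 0] >= x) & (points[:, 0] <= x + h) &
                  (points[:, 1] >= y) & (points[:, 1] <= y + h)).sum()
        if inside >= k:
            return True
    return False

hits = sum(has_clump(rng.uniform(0, 1, size=(N, 2)), K, H) for _ in range(TRIALS))
print(f"P(anomalous clump) ~ {hits / TRIALS:.3f}, P(no clump) ~ {1 - hits / TRIALS:.3f}")
```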
3

Ko, Hsiao-Han, Kuo-Jin Tseng, Li-Min Wei e Meng-Hsiun Tsai. "Possible Disease-Link Genetic Pathways Constructed by Hierarchical Clustering and Conditional Probabilities of Ovarian Carcinoma Microarray Data". In 2010 Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP). IEEE, 2010. http://dx.doi.org/10.1109/iihmsp.2010.8.

Full text source
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
4

Zhang, Xiaodong, Ying Min Low e Chan Ghee Koh. "Prediction of Low Failure Probabilities With Application to Marine Risers". In ASME 2017 36th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/omae2017-61574.

Full text source
Abstract:
Offshore riser systems are subjected to wind, wave and current loadings, which are random in nature. Nevertheless, the current deterministic-based design and analysis practice cannot quantitatively evaluate the safety of structures taking random environmental loadings into consideration, due to high computational costs. The structural reliability method, as an analysis tool to quantify the probability of failure of components or systems, can account for uncertainties in environmental conditions and system parameters. It is particularly useful in cases where limited experience exists or a risk-based evaluation of design is required. The Monte Carlo Simulation (MCS) method is the most widely accepted method and is usually used to benchmark other proposed reliability methods. However, MCS is computationally demanding for predicting low failure probabilities, especially for offshore dynamic problems involving many types of uncertainties. Innovative structural reliability methods are desired to perform reliability analysis, so as to predict the low failure probabilities associated with extreme values. A variety of structural reliability methods has been proposed in the literature to reduce the computational burden of MCS. The post-processing methods, which recover the PDF or tail distribution of a random variable from sample data to perform structural reliability analysis, have great advantages over methods from other categories in solving engineering problems. Thus the main focus of our study is on post-processing structural reliability methods. In this paper, four post-processing reliability methods are compared on the prediction of low failure probabilities with applications to a drilling riser system and a steel catenary riser (SCR) system: Enhanced Monte Carlo Simulation (EMCS) assumes that the failure probability follows asymptotic behavior and uses high failure probabilities to predict low failure probabilities; the Multi-Gaussian Maximum Entropy Method (MGMEM) assumes the probability density function (PDF) is a summation of Gaussian density functions and adopts maximum entropy methods to obtain the model parameters; the Shifted Generalized Lognormal Distribution (SGLD) method proposes a distribution that specializes to the normal distribution for zero skewness and is able to assume any finite value of skewness for versatility; and the Generalized Extreme-Value Distribution (GEV) method comprises three distribution families: the Gumbel-type, Fréchet-type and Weibull-type distributions. The study compares the bias errors (the difference between the predicted values and the exact values) and variance errors (the variability of the predicted values) of these methods on the prediction of low failure probabilities with applications to two riser systems. This study could provide offshore engineers and researchers feasible options for marine riser system structural reliability analysis.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
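Of the four post-processing methods compared, the GEV approach is the most straightforward to sketch. The snippet below fits a Generalized Extreme-Value distribution to synthetic block maxima with SciPy and extrapolates a small exceedance probability; the response model and threshold are placeholders, not riser data.

```python
import numpy as np
from scipy.stats import genextreme

# Fit a GEV distribution to simulated extreme responses and extrapolate a low
# failure probability.  The Gumbel-distributed "maxima" and the capacity
# threshold below are purely synthetic stand-ins.
rng = np.random.default_rng(2)
block_maxima = rng.gumbel(loc=10.0, scale=1.5, size=200)

shape, loc, scale = genextreme.fit(block_maxima)
threshold = 18.0                                   # hypothetical capacity
p_fail = genextreme.sf(threshold, shape, loc=loc, scale=scale)
print(f"Estimated failure probability per block: {p_fail:.2e}")
```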
5

Jimeno Yepes, Antonio, Jianbin Tang e Benjamin Scott Mashford. "Improving Classification Accuracy of Feedforward Neural Networks for Spiking Neuromorphic Chips". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/274.

Full text source
Abstract:
Deep Neural Networks (DNNs) achieve human-level performance in many image analytics tasks, but DNNs are mostly deployed to GPU platforms that consume a considerable amount of power. New hardware platforms using lower-precision arithmetic achieve drastic reductions in power consumption. More recently, brain-inspired spiking neuromorphic chips have achieved even lower power consumption, on the order of milliwatts, while still offering real-time processing. However, to deploy DNNs to energy-efficient neuromorphic chips, the incompatibility between the continuous neurons and synaptic weights of traditional DNNs and the discrete spiking neurons and synapses of neuromorphic chips needs to be overcome. Previous work has achieved this by training a network to learn continuous probabilities before it is deployed to a neuromorphic architecture, such as the IBM TrueNorth Neurosynaptic System, by randomly sampling these probabilities. The main contribution of this paper is a new learning algorithm that learns a TrueNorth configuration ready for deployment. We achieve this by directly training a binary hardware crossbar that accommodates the TrueNorth axon configuration constraints, and we propose a different neuron model. Results of our approach trained on electroencephalogram (EEG) data show a significant improvement over previous work (76% vs 86% accuracy) while maintaining state-of-the-art performance on the MNIST handwritten digit data set.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
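The earlier approach mentioned in the abstract, training continuous connection probabilities and then sampling a binary crossbar from them, can be sketched in a few lines; the paper's own TrueNorth-ready learning algorithm is not reproduced here, and all sizes and thresholds below are hypothetical.

```python
import numpy as np

# Sample a binary crossbar from learned connection probabilities, then run one
# integrate-and-fire step over a binary spike input.
rng = np.random.default_rng(3)

p = rng.uniform(0.0, 1.0, size=(256, 256))            # learned connection probabilities
binary_crossbar = (rng.uniform(size=p.shape) < p).astype(int)

x = (rng.uniform(size=256) < 0.2).astype(int)          # binary spike input
membrane = binary_crossbar.T @ x                       # integrate incoming spikes
spikes_out = (membrane >= 10).astype(int)              # fire above a fixed threshold
print(spikes_out.sum(), "output neurons spike")
```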
6

Wang, Yan. "System Resilience Quantification for Probabilistic Design of Internet-of-Things Architecture". In ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/detc2016-59426.

Full text source
Abstract:
The objects in the Internet of Things (IoT) form a virtual space of information gathering and sharing through the networks. Designing IoT-compatible products that have the capabilities of data collection, processing, and communication requires an open and resilient architecture with flexibility and adaptability for dynamically evolving networks. Design for connectivity becomes an important subject in designing such products. To enable a resilience engineering approach for IoT systems design, quantitative measures of resilience are needed for analysis and optimization. In this paper, an approach for probabilistic design of IoT system architecture is proposed, where resilience is quantified with entropy and mutual information associated with the probabilities of detection, prediction, and communication among IoT-compatible products. Information fusion rules and sensitivities are also studied.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
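Entropy and mutual information, the measures used in the paper to quantify resilience, can be computed from a joint probability table as in the sketch below; the table values are illustrative, not taken from the paper.

```python
import numpy as np

# Joint probability table for two binary events, e.g. a "detection" at one IoT
# node (rows) and a "prediction" at another (columns).  Values are illustrative.
p_xy = np.array([[0.40, 0.10],
                 [0.05, 0.45]])

p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

mutual_info = entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())
print(f"H(X)={entropy(p_x):.3f} bits, H(Y)={entropy(p_y):.3f} bits, I(X;Y)={mutual_info:.3f} bits")
```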
7

Kyriazis, A., A. Tsalavoutas, K. Mathioudakis, M. Bauer e O. Johanssen. "Gas Turbine Fault Identification by Fusing Vibration Trending and Gas Path Analysis". In ASME Turbo Expo 2009: Power for Land, Sea, and Air. ASMEDC, 2009. http://dx.doi.org/10.1115/gt2009-59942.

Full text source
Abstract:
A fusion method that utilizes performance data and vibration measurements for gas turbine component fault identification is presented. The proposed method operates during the diagnostic processing of available data (process level) and adopts the principles of certainty factors theory. Both performance and vibration measurements are analyzed separately in a first step, and their results are transformed into a common form of probabilities. These forms are interwoven in order to derive a set of possible faulty components prior to reaching a final diagnostic decision. Then, in the second step, a new diagnostic problem is formulated and a final set of faulty health parameters is defined with higher confidence. In the proposed method, non-linear gas path analysis is the core diagnostic method, while information provided by vibration measurement trends is used to narrow the domain of unknown health parameters and lead to a well-defined solution. It is shown that the presented technique effectively combines different sources of information by interpreting them in a common form, and may lead to improved and safer diagnosis.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
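The abstract states that the fusion follows the principles of certainty factors theory without giving the combination rule. The sketch below uses the classic MYCIN-style rule only as a representative example, with hypothetical certainty values for the two diagnostic channels.

```python
def combine_cf(cf1: float, cf2: float) -> float:
    """Classic MYCIN-style combination of two certainty factors in [-1, 1].
    Shown as a representative rule; the exact rule used in the paper is not
    given in the abstract."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Hypothetical evidence for a compressor fault from the two diagnostic channels.
cf_gas_path = 0.6    # from performance (gas path) analysis
cf_vibration = 0.5   # from vibration trending
print(f"Combined certainty: {combine_cf(cf_gas_path, cf_vibration):.2f}")
```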
8

Galkin, Andrii, Iryna Polchaninova, Olena Galkina e Iryna Balandina. "Retail trade area analysis using multiple variables modeling at residential zone". In Contemporary Issues in Business, Management and Economics Engineering. Vilnius Gediminas Technical University, 2019. http://dx.doi.org/10.3846/cibmee.2019.041.

Full text source
Abstract:
Purpose – the purpose of the paper is to set up a method of retail trade area analysis using multiple-variable modeling at a residential zone. Research methodology – system analysis; regression analysis; correlation analysis; simulation; urban characteristics analysis. Findings – retail trade area analysis using multiple-variable modeling at a residential zone based on the proposed method is performed by directly processing and analysing data in a separate zone. Research limitations – the obtained results can be used within the data variation range of the conducted experiment. Practical implications – the proposed method makes some adjustments in estimating the limits of the trade area, specifying it with the help of a non-linearity factor and area slope, thereby changing the shape of the circle into a complex figure depending on the geographical landscape and the structure of the roads. Implementation is made for one of the consumer zones in Kharkiv, Ukraine. The results allowed the trade area to be adjusted, which in fact reduced it by half. Originality/Value – the probabilities of visiting retailers have been calculated according to the developed model, considering surrounding limitations. In such conditions, the analysis of the consumer market and identification of the trade area of the retailers is one of the ways to improve the efficiency of retailer functioning.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
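The developed model for the probabilities of visiting retailers is not specified in the abstract. As a generic stand-in, the sketch below computes Huff-type visit probabilities from hypothetical store sizes and travel times; the paper's non-linearity factor and area slope are not modeled here.

```python
# Huff-type gravity model: visit probability proportional to attractiveness
# (floor area) divided by a power of travel time.  All values are hypothetical.
stores = {"A": {"size_m2": 1200, "time_min": 5},
          "B": {"size_m2": 800,  "time_min": 3},
          "C": {"size_m2": 2500, "time_min": 12}}
alpha, beta = 1.0, 2.0   # assumed attractiveness and distance-decay exponents

utility = {k: s["size_m2"] ** alpha / s["time_min"] ** beta for k, s in stores.items()}
total = sum(utility.values())
for name, u in utility.items():
    print(f"P(visit {name}) = {u / total:.2f}")
```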
9

Harlow, D. Gary. "Lower Tail Estimation of Fatigue Life". In ASME 2019 Pressure Vessels & Piping Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/pvp2019-93104.

Full text source
Abstract:
Uncertainty in the prediction of lower-tail fatigue life behavior is a combination of many causes, some of which are aleatoric and some of which are systemic. The error cannot be entirely eliminated or quantified, due to microstructural variability, manufacturing processing, approximate scientific modeling, and experimental inconsistencies. The effect of uncertainty is exacerbated for extreme value estimation for fatigue life distributions because, by necessity, those events are rare. In addition, there is typically a sparsity of data in the region of smaller stress levels in stress–life testing, where the lives are considerably longer, extending to gigacycles for some applications. Furthermore, there is often over an order of magnitude difference in the fatigue lives in that region of the stress–life graph. Consequently, extreme value estimation is problematic using traditional analyses. Thus, uncertainty must be statistically characterized and appropriately managed. The primary purpose of this paper is to propose an empirically based methodology for estimating the lower tail behavior of fatigue life cumulative distribution functions, given the applied stress. The methodology incorporates available fatigue life data using a statistical transformation to estimate lower tail behavior at much smaller probabilities than can be estimated by traditional approaches. To assess the validity of the proposed methodology, confidence bounds will be estimated for the stress–life data. The development of the methodology and its subsequent validation will be illustrated using extensive fatigue life data for 2024–T4 aluminum alloy specimens readily available in the open literature.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
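The paper's empirically based transformation is not detailed in the abstract. The sketch below shows a generic lower-tail extrapolation under an assumed lognormal life model with synthetic stress–life data, only to illustrate the kind of small-probability quantile estimate involved.

```python
import numpy as np
from scipy.stats import norm

# Pool standardized log-lives across stress levels, fit a normal distribution,
# and read off a small lower-tail quantile for one stress level.  The data are
# synthetic and the lognormal assumption is a simplifying stand-in.
rng = np.random.default_rng(4)
stress_levels = {200: rng.lognormal(12.0, 0.4, 15),   # MPa -> cycles to failure
                 250: rng.lognormal(10.5, 0.5, 15),
                 300: rng.lognormal(9.0, 0.6, 15)}

pooled = np.concatenate([(np.log(lives) - np.log(lives).mean()) / np.log(lives).std(ddof=1)
                         for lives in stress_levels.values()])
mu, sigma = pooled.mean(), pooled.std(ddof=1)

# Life at a 1e-4 lower-tail probability for the 250 MPa level.
log_lives = np.log(stress_levels[250])
z = norm.ppf(1e-4, loc=mu, scale=sigma)
life_q = np.exp(log_lives.mean() + z * log_lives.std(ddof=1))
print(f"Estimated 1e-4 quantile of life at 250 MPa: {life_q:.3e} cycles")
```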
10

Borozdin, Sergey Olegovich, Anatoly Nikolaevich Dmitrievsky, Nikolai Alexandrovich Eremin, Alexey Igorevich Arkhipov, Alexander Georgievich Sboev, Olga Kimovna Chashchina-Semenova e Leonid Konstantinovich Fitzner. "Drilling Problems Forecast Based on Neural Network". In Offshore Technology Conference. OTC, 2021. http://dx.doi.org/10.4043/30984-ms.

Full text source
Abstract:
This paper poses and solves the problem of using artificial intelligence methods for processing big volumes of geodata from geological and technological measurement stations in order to identify and predict complications during well drilling. The volumes of geodata from the stations of geological and technological measurements during drilling ranged from single to tens of terabytes. Digital modernization of the life cycle of well construction using machine learning methods contributes to improving the efficiency of drilling oil and gas wells. The clustering of big volumes of geodata from the various sources and types of sensors used to measure parameters during drilling has been carried out. In the process of creating, training and applying software components with artificial neural networks, the specified accuracy of calculations was achieved, and hidden, non-obvious patterns were revealed in big volumes of geological, geophysical, technical and technological parameters. To predict the operational results of drilling wells, classification models were developed using artificial intelligence methods. The use of a high-performance computing cluster significantly reduced the time spent on assessing the probability of complications and on predicting these probabilities 7-10 minutes ahead. A hierarchical distributed data warehouse has been formed, containing real-time drilling data in WITSML format using SQL Server (Microsoft). The module for preprocessing and uploading geodata to the WITSML repository uses the Energistics Standards DevKit API and Energistics data objects to work with geodata in the WITSML format. The drilling problems forecast accuracy reached with the developed system may significantly reduce the non-productive time spent on eliminating stuck pipe, mud loss, and oil and gas influx events.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
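As a toy counterpart to the system described, the sketch below trains a small feed-forward classifier on synthetic drilling-parameter windows and reports predicted complication probabilities; the feature set, labels, and thresholds are hypothetical placeholders, not the paper's model or WITSML pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic windows of drilling parameters (e.g., hook load, standpipe
# pressure, flow rate, torque) with a made-up rule generating the labels.
rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 4))                       # [hook_load, spp, flow, torque]
risk = X[:, 1] - 0.8 * X[:, 2] + 0.3 * rng.normal(size=2000)
y = (risk > 1.0).astype(int)                         # 1 = complication expected soon

model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
model.fit(X[:1500], y[:1500])

# Predicted probability of a complication for each held-out monitoring window.
proba = model.predict_proba(X[1500:])[:, 1]
print(f"Flagged windows: {(proba > 0.5).sum()} of {len(proba)}")
```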