Theses / dissertations on the topic "Data approximation"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the 50 best theses / dissertations for your research on the topic "Data approximation".
Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.
Browse theses / dissertations from a wide variety of scientific fields and compile a correct bibliography.
Ross, Colin. "Applications of data fusion in data approximation". Thesis, University of Huddersfield, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.247372.
Deligiannakis, Antonios. "Accurate data approximation in constrained environments". College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2681.
Thesis research directed by: Computer Science. Title from abstract of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
Tomek, Peter. "Approximation of Terrain Data Utilizing Splines". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236488.
Cao, Phuong Thao. "Approximation of OLAP queries on data warehouses". PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00905292.
Lehman, Eric (Eric Allen) 1970. "Approximation algorithms for grammar-based data compression". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87172.
Includes bibliographical references (p. 109-113).
This thesis considers the smallest grammar problem: find the smallest context-free grammar that generates exactly one given string. We show that this problem is intractable, and so our objective is to find approximation algorithms. This simple question is connected to many areas of research. Most importantly, there is a link to data compression; instead of storing a long string, one can store a small grammar that generates it. A small grammar for a string also naturally brings out underlying patterns, a fact that is useful, for example, in DNA analysis. Moreover, the size of the smallest context-free grammar generating a string can be regarded as a computable relaxation of Kolmogorov complexity. Finally, work on the smallest grammar problem qualitatively extends the study of approximation algorithms to hierarchically-structured objects. In this thesis, we establish hardness results, evaluate several previously proposed algorithms, and then present new procedures with much stronger approximation guarantees.
by Eric Lehman.
Ph.D.
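As a quick illustration of the idea in the abstract above, storing a small grammar instead of a long string, here is a minimal sketch of grammar construction by repeated digram replacement (a RePair-style heuristic). It is not one of the algorithms studied in the thesis; the function name and the toy string are invented for illustration.

```python
# Sketch of grammar-based compression by repeated digram replacement
# (a RePair-style heuristic, shown only to illustrate storing a small
# grammar instead of the string; not the algorithms of the thesis).
from collections import Counter

def digram_grammar(s: str):
    """Greedily replace the most frequent adjacent pair with a new nonterminal."""
    rules = {}                      # nonterminal -> the pair it expands to
    seq = list(s)
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:               # no pair repeats: stop
            break
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)      # rewrite the occurrence with the nonterminal
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

start, rules = digram_grammar("abcabcabcabc")
print(start, rules)   # a short start sequence plus a few binary rules
```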
Hou, Jun. "Function Approximation and Classification with Perturbed Data". The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1618266875924225.
Zaman, Muhammad Adib Uz. "Bicubic L1 Spline Fits for 3D Data Approximation". Thesis, Northern Illinois University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10751900.
Univariate cubic L1 spline fits have been successful in preserving the shapes of 2D data with abrupt changes, because the L1 norm is minimized, as opposed to the L2 norm. While univariate L1 spline fits for 2D data have been discussed by many, bivariate L1 spline fits for 3D data are yet to be fully explored. This thesis aims to develop bicubic L1 spline fits for 3D data approximation. This can be achieved by solving a bi-level optimization problem: one level is bivariate cubic spline interpolation and the other level is L1 error minimization. In the first level, a bicubic interpolating spline surface will be constructed on a rectangular grid, with the necessary first- and second-order derivative values estimated using a 5-point window algorithm for univariate L1 interpolation. In the second level, the absolute error (i.e. the L1 norm) will be minimized using an iterative gradient search. This study may be extended to research on higher-dimensional cubic L1 spline fits.
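To make the L1-versus-L2 point concrete, the following hedged sketch fits a univariate cubic spline whose knot values are chosen to minimize the sum of absolute residuals. It uses SciPy's generic optimizer rather than the bi-level bicubic algorithm developed in the thesis, and the knot count, data and parameters are illustrative assumptions.

```python
# Minimal sketch of an L1-criterion spline fit in one variable: the knot
# values are chosen so that the interpolating cubic spline minimizes the
# sum of absolute residuals (L1 norm) on noisy data with an abrupt step.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
y = np.where(x < 0.5, 0.0, 1.0) + 0.05 * rng.standard_normal(x.size)  # step + noise

knots = np.linspace(0.0, 1.0, 9)

def l1_loss(values):
    spline = CubicSpline(knots, values)      # spline interpolating the knot values
    return np.abs(spline(x) - y).sum()       # L1 error on the data

res = minimize(l1_loss, x0=np.interp(knots, x, y), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-4, "fatol": 1e-4})
fit = CubicSpline(knots, res.x)
print("L1 error of the fit:", l1_loss(res.x))
```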
Cooper, Philip. "Rational approximation of discrete data with asymptotic behaviour". Thesis, University of Huddersfield, 2007. http://eprints.hud.ac.uk/id/eprint/2026/.
Schmid, Dominik. "Scattered data approximation on the rotation group and generalizations". Aachen Shaker, 2009. http://d-nb.info/995021562/04.
McQuarrie, Shane Alexander. "Data Assimilation in the Boussinesq Approximation for Mantle Convection". BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/6951.
Măndoiu, Ion I. "Approximation algorithms for VLSI routing". Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/9128.
Santin, Gabriele. "Approximation in kernel-based spaces, optimal subspaces and approximation of eigenfunctions". Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424498.
Kernel methods provide best-approximation procedures in the native Hilbert spaces, i.e., the spaces in which these kernels are reproducing kernels. In the notable case of continuous and strictly positive definite kernels on compact sets, a decomposition into a series given by the eigenfunctions (orthonormal in L2) of a particular integral operator is known to exist. Interest in this expansion is motivated by two reasons. On the one hand, the subspaces spanned by the eigenfunctions, or elements of the eigenbasis, are the L2-optimal trial spaces in the sense of widths. On the other hand, this expansion is the fundamental tool underlying some of the reference algorithms used in kernel-based approximation. Although these reasons strongly motivate interest in the eigenbasis, this decomposition is generally unknown. In light of these motivations, the thesis addresses the problem of approximating the eigenbasis for generic continuous and strictly positive definite kernels on generic compact subsets of Euclidean space, in any dimension. We begin by defining a new type of optimality based on the error measure typical of standard kernel interpolation. This new notion of width is analyzed, its value is computed, and the corresponding optimal subspaces, which are spanned by the eigenbasis, are characterized. Moreover, this optimality result turns out to be suitable for restriction to certain particular subspaces of the native space. This restriction allows us to prove new results on the construction of optimal trial spaces that are actually computable. This setting also includes the case of kernel interpolation based on point evaluations, and it provides algorithms to approximate the eigenfunctions by means of standard kernel methods. We also provide asymptotic convergence estimates for the method based on the new theoretical results. The methods presented are implemented as numerical algorithms, and their behavior in the approximation of the eigenspaces is tested. Finally, we analyze the application of kernel methods to two different approximation problems.
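A standard way to approximate the eigenfunctions mentioned in the abstract is the Nyström method: eigendecompose a weighted kernel matrix on quadrature points and extend the eigenvectors with the kernel. The sketch below assumes a Gaussian kernel on [0, 1] and equal quadrature weights; it illustrates the general idea only, not the algorithms proposed in the thesis.

```python
# Hedged sketch: approximating the L2-orthonormal eigenfunctions of a kernel's
# integral operator with the Nyström method (eigendecomposition of a weighted
# Gram matrix on quadrature points).
import numpy as np

def gaussian_kernel(X, Y, eps=2.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2)

n = 200
X = np.random.default_rng(1).uniform(0.0, 1.0, size=(n, 1))   # quadrature points on [0, 1]
w = np.full(n, 1.0 / n)                                        # equal quadrature weights
K = gaussian_kernel(X, X)

# Symmetrized eigenproblem: W^(1/2) K W^(1/2) u = lam u
Ws = np.diag(np.sqrt(w))
lam, U = np.linalg.eigh(Ws @ K @ Ws)
lam, U = lam[::-1], U[:, ::-1]                                 # sort eigenvalues descending

def eigenfunction(j, Xnew):
    """Nyström extension of the j-th approximate eigenfunction to new points."""
    coeff = (np.sqrt(w) * U[:, j]) / lam[j]
    return gaussian_kernel(Xnew, X) @ coeff

print("Largest approximate eigenvalues:", lam[:5])
```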
Koufogiannakis, Christos. "Approximation algorithms for covering problems". Diss., [Riverside, Calif.] : University of California, Riverside, 2009. http://proquest.umi.com/pqdweb?index=0&did=1957320821&SrchMode=2&sid=1&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1268338860&clientId=48051.
Includes abstract. Title from first page of PDF file (viewed March 11, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 70-77). Also issued in print.
Wiley, David F. "Approximation and visualization of scientific data using higher-order elements /". For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2003. http://uclibs.org/PID/11984.
Grishin, Denis. "Fast and efficient methods for multi-dimensional scattered data approximation /". For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2004. http://uclibs.org/PID/11984.
Fung, Ping-yuen, and 馮秉遠. "Approximation for minimum triangulations of convex polyhedra". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B29809964.
Lewis, Cannada Andrew. "The Unreasonable Usefulness of Approximation by Linear Combination". Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/83866.
Ph. D.
Thomas, A. "Data structures, methods of approximation and optimal computation for pedigree analysis". Thesis, University of Cambridge, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.372922.
Turner, David Andrew. "The approximation of Cartesian coordinate data by parametric orthogonal distance regression". Thesis, University of Huddersfield, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323778.
Schmid, Dominik [Verfasser]. "Scattered Data Approximation on the Rotation Group and Generalizations / Dominik Schmid". Aachen : Shaker, 2009. http://d-nb.info/1161303006/34.
Grimm, Alexander Rudolf. "Parametric Dynamical Systems: Transient Analysis and Data Driven Modeling". Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/83840.
Ph. D.
Swingler, Kevin. "Mixed order hyper-networks for function approximation and optimisation". Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/25349.
Kotsakis, Christophoros. "Multiresolution aspects of linear approximation methods in Hilbert spaces using gridded data". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0016/NQ54794.pdf.
Fu, Shuting. "Bayesian Logistic Regression Model with Integrated Multivariate Normal Approximation for Big Data". Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/451.
Pötzelberger, Klaus, and Klaus Felsenstein. "On the Fisher Information of Discretized Data". Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1991. http://epub.wu.ac.at/1700/1/document.pdf.
Series: Forschungsberichte / Institut für Statistik
Lee, Dong-Wook. "Extracting multiple frequencies from phase-only data". Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/15031.
Hakimi, Sibooni J. "Application of extreme value theory". Thesis, University of Bradford, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.384263.
Furuhashi, Takeshi, Tomohiro Yoshikawa, Kanta Tachibana and Minh Tuan Pham. "A Clustering Method for Geometric Data based on Approximation using Conformal Geometric Algebra". IEEE, 2011. http://hdl.handle.net/2237/20706.
Qin, Hanzhang S. M. Massachusetts Institute of Technology. "Near-optimal data-driven approximation schemes for joint pricing and inventory control models". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119336.
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 95-96).
The thesis studies the classical multi-period joint pricing and inventory control problem in a data-driven setting. In the problem, a retailer makes periodic decisions on the prices and inventory levels of an item that the retailer wishes to sell. The objective is to match the inventory level with a random demand that depends on the price in each period, while maximizing the expected profit over a finite horizon. In reality, the demand functions or the distribution of the random noise are usually unavailable, whereas past demand data are relatively easy to collect. A novel data-driven nonparametric algorithm is proposed, which uses the past demand data to solve the joint pricing and inventory control problem, without assuming that the parameters of the demand functions and the noise distributions are known. Explicit sample complexity bounds are given on the number of data samples needed to guarantee a near-optimal profit. A simulation study suggests that the algorithm is efficient in practice.
by Hanzhang Qin.
S.M. in Transportation
S.M.
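The data-driven flavor of the thesis can be illustrated in a drastically simplified single-period setting by the sample-average-approximation newsvendor: with past demand samples at a fixed price, the profit-maximizing order quantity is an empirical quantile. The sketch below is only that simplified cousin of the problem studied in the thesis; the demand numbers, price and cost are invented.

```python
# Hedged sketch of the data-driven idea in a much simplified, single-period
# setting: the order quantity of a newsvendor is chosen as an empirical
# quantile of past demand samples (sample average approximation).
import numpy as np

def newsvendor_order(demand_samples, price, cost):
    """Sample-average-approximation order quantity for a single period."""
    critical_ratio = (price - cost) / price          # underage vs. overage trade-off
    return float(np.quantile(np.asarray(demand_samples), critical_ratio))

past_demand = [23, 31, 27, 40, 35, 29, 33, 26, 38, 30]   # hypothetical demand data
print(newsvendor_order(past_demand, price=10.0, cost=6.0))
```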
Bingham, Jonathan D. "Comparison of Data Collection and Methods For the Approximation of Streambed Thermal Properties". DigitalCommons@USU, 2009. https://digitalcommons.usu.edu/etd/456.
Kim, Jung Hoon. "Performance Analysis and Sampled-Data Controller Synthesis for Bounded Persistent Disturbances". 京都大学 (Kyoto University), 2015. http://hdl.handle.net/2433/199317.
Wang, Hongyan. "Analysis of statistical learning algorithms in data dependent function spaces /". access full-text access abstract and table of contents, 2009. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?phd-ma-b23750534f.pdf.
"Submitted to Department of Mathematics in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves [87]-100)
Mehl, Craig. "Developing a sorting code for Coulomb excitation data analysis". University of the Western Cape, 2015. http://hdl.handle.net/11394/4871.
This thesis aims at developing a sorting code for Coulomb excitation studies at iThemba LABS. In Coulomb excitation reactions, the inelastic scattering of the projectile transfers energy to the partner nucleus (and vice-versa) through a time-dependent electromagnetic field. At energies well below the Coulomb barrier, the particles interact solely through the well-known electromagnetic interaction, thereby excluding nuclear excitations from the process. The data can therefore be analyzed using a semiclassical approximation. The sorting code was used to process and analyze data acquired from the Coulomb excitation of 20Ne beams at 73 and 96 MeV on a 194Pt target. The detection of gamma rays was done using the AFRODITE HPGe clover detector array, which consists of nine clover detectors, in coincidence with the 20Ne particles detected with an S3 double-sided silicon detector. The new sorting code includes Doppler-correction effects, charge-sharing, energy and time conditions, kinematics and stopping powers, among others, and can be used for any particle-γ coincidence measurements at iThemba LABS. Results from other Coulomb excitation measurements at iThemba LABS will also be presented.
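One ingredient named in the abstract, the Doppler correction, has a compact closed form; the sketch below shows it for a gamma ray emitted in flight, assuming the ion velocity and the emission angle relative to the ion direction are known. It is an illustration only, not code from the iThemba LABS sorting framework, and the example numbers are arbitrary.

```python
# Hedged sketch of the Doppler correction applied in particle-gamma coincidence
# sorting: a gamma ray emitted in flight is shifted back to the emitter's rest
# frame using the ion velocity (beta) and the gamma emission angle.
import math

def doppler_correct(e_lab_keV: float, beta: float, theta_rad: float) -> float:
    """Return the rest-frame gamma energy from the lab-frame energy.

    e_lab_keV : gamma energy measured in the laboratory frame (keV)
    beta      : ion velocity as a fraction of c at the moment of emission
    theta_rad : angle between the ion direction and the detected gamma ray
    """
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return e_lab_keV * gamma * (1.0 - beta * math.cos(theta_rad))

# Example: a gamma ray detected at 135 degrees from an ion moving at beta ~ 0.06
print(doppler_correct(1564.0, 0.06, math.radians(135.0)))
```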
Bennell, Robert Paul. "Continuous approximation methods for data smoothing and Fredholm integral equations of the first kind when the data are noisy". Thesis, Cranfield University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296023.
Levy, Eythan. "Approximation algorithms for covering problems in dense graphs". Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210359.
Finally, we look at the CONNECTED VERTEX COVER (CVC) problem, for which we proposed new approximation results in dense graphs. We first analyze Carla Savage's algorithm, then a new variant of the Karpinski-Zelikovsky algorithm. Our results show that these algorithms provide the same approximation ratios for CVC as the maximal matching heuristic and the Karpinski-Zelikovsky algorithm did for VC. We provide tight examples for the ratios guaranteed by both algorithms. We also introduce a new invariant, the "price of connectivity of VC", defined as the ratio between the optimal solutions of CVC and VC, and show a nearly tight upper bound on its value as a function of the weak density. Our last chapter discusses software aspects, and presents the use of the GRAPHEDRON software in the framework of approximation algorithms, as well as our contributions to the development of this system.
We present a set of approximation results for several covering problems in dense graphs. These results show that for several problems, classical constant-factor approximation algorithms can be analyzed more finely and guarantee better approximation constants under certain density constraints. In particular, we show that the maximal matching heuristic approximates the VERTEX COVER (VC) and MINIMUM MAXIMAL MATCHING (MMM) problems with a constant factor smaller than 2 when the proportion of edges present in the graph (weak density) is greater than 3/4, or when the normalized minimum degree (strong density) is greater than 1/2. We also show that this result can be improved by a GREEDY-type algorithm, which provides a constant factor smaller than 2 for weak densities greater than 1/2. We also give families of extremal graphs for our approximation factors. We then turned our attention to several algorithms from the literature for the VC and SET COVER (SC) problems. We presented a unified and critical view of the Karpinski-Zelikovsky, Imamura-Iwama, and Bar-Yehuda-Kehat algorithms, identifying a general scheme into which these algorithms fit.
Doctorate in Sciences
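The maximal matching heuristic analyzed in this thesis is easy to state in code: greedily build a maximal matching and take both endpoints of every matched edge, which always yields a vertex cover at most twice the optimum. The sketch below is the textbook version on a toy graph; the sharper dense-graph ratios are the subject of the thesis itself, and the graph data is invented.

```python
# Hedged sketch of the maximal matching heuristic for VERTEX COVER:
# take both endpoints of a greedily built maximal matching.
from typing import Dict, List, Set

def matching_vertex_cover(adj: Dict[int, List[int]]) -> Set[int]:
    cover: Set[int] = set()
    for u, neighbours in adj.items():
        if u in cover:
            continue
        for v in neighbours:
            if v not in cover:
                # edge (u, v) extends the matching; add both endpoints to the cover
                cover.update((u, v))
                break
    return cover

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(matching_vertex_cover(graph))   # endpoints of the matching {(0, 1), (2, 3)}
```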
Basna, Rani. "Edgeworth Expansion and Saddle Point Approximation for Discrete Data with Application to Chance Games". Thesis, Linnaeus University, School of Computer Science, Physics and Mathematics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-8681.
We investigate mathematical tools, the Edgeworth series expansion and the saddle point method, which are approximation techniques that help us to estimate the distribution function of the standardized mean of independent, identically distributed random variables, where we also take the lattice case into consideration. Later on, we describe one important application of these mathematical tools, where game-developing companies can use them to reduce the amount of time needed to satisfy their standard requests before they approve any game.
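For reference, the first Edgeworth correction to the normal approximation of a standardized sample mean adds a skewness term to the Gaussian CDF. The sketch below implements that one-term correction for the continuous case; the lattice case treated in the thesis requires additional continuity-correction terms, which are omitted here.

```python
# Hedged sketch of a one-term Edgeworth correction to the normal approximation
# for the CDF of a standardized sample mean (continuous case only).
import math

def edgeworth_cdf(x: float, n: int, skewness: float) -> float:
    """Normal CDF plus the first Edgeworth correction term."""
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)      # standard normal pdf
    Phi = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))             # standard normal cdf
    return Phi - phi * skewness * (x * x - 1.0) / (6.0 * math.sqrt(n))

# Example: exponential(1) summands have skewness 2
print(edgeworth_cdf(1.0, n=20, skewness=2.0))
```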
Folia, Maria Myrto. "Inference in stochastic systems with temporally aggregated data". Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/inference-in-stochastic-systems-with-temporally-aggregated-data(17940c86-e6b3-4f7d-8a43-884bbf72b39e).html.
Karimi, Belhal. "Non-Convex Optimization for Latent Data Models : Algorithms, Analysis and Applications". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX040/document.
Many problems in machine learning pertain to tackling the minimization of a possibly non-convex and non-smooth function defined on a Euclidean space. Examples include topic models, neural networks and sparse logistic regression. Optimization methods used to solve those problems have been widely studied in the literature for convex objective functions and are extensively used in practice. However, recent breakthroughs in statistical modeling, such as deep learning, coupled with an explosion of data samples, require improvements of non-convex optimization procedures for large datasets. This thesis is an attempt to address those two challenges by developing algorithms with cheaper updates, ideally independent of the number of samples, and by improving the theoretical understanding of non-convex optimization, which remains rather limited. In this manuscript, we are interested in the minimization of such objective functions for latent data models, i.e., when the data is partially observed, which includes the conventional sense of missing data but is much broader than that. In the first part, we consider the minimization of a (possibly) non-convex and non-smooth objective function using incremental and online updates. To that end, we propose several algorithms exploiting the latent structure to efficiently optimize the objective and illustrate our findings with numerous applications. In the second part, we focus on the maximization of a non-convex likelihood using the EM algorithm and its stochastic variants. We analyze several faster and cheaper algorithms and propose two new variants aiming at speeding up the convergence of the estimated parameters.
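The EM algorithm referred to in the second part of the abstract can be illustrated with its textbook instance, a two-component Gaussian mixture in one dimension. The sketch below is that baseline batch EM with a crude initialization, not the incremental or online variants proposed in the thesis.

```python
# Hedged sketch: classical EM for a two-component 1-D Gaussian mixture,
# the standard example of maximizing a non-convex likelihood with latent data.
import numpy as np

def em_gmm(x: np.ndarray, iters: int = 50):
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialization
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibilities of the latent component labels
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return pi, mu, sigma

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
print(em_gmm(data))
```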
Choudhury, Salimur Rashid, and University of Lethbridge Faculty of Arts and Science. "Approximation algorithms for a graph-cut problem with applications to a clustering problem in bioinformatics". Thesis, Lethbridge, Alta. : University of Lethbridge, Department of Mathematics and Computer Science, 2008, 2008. http://hdl.handle.net/10133/774.
xiii, 71 leaves : ill. ; 29 cm.
Lundberg, Oscar, Oskar Bjersing and Martin Eriksson. "Approximation of ab initio potentials of carbon nanomaterials with machine learning". Thesis, Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-62568.
Supervisors: Daniel Hedman and Fredrik Sandin
F7042T - Project in Engineering Physics
Eremic, John C. "Iterative methods for estimation of 2-D AR parameters using a data-adaptive Toeplitz approximation algorithm". Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/28321.
Agarwal, Khushbu. "A partition based approach to approximate tree mining : a memory hierarchy perspective". The Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=osu1196284256.
Singhal, Kritika. "Geometric Methods for Simplification and Comparison of Data Sets". The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587253879303425.
Essegbey, John W. "Piece-wise Linear Approximation for Improved Detection in Structural Health Monitoring". University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1342729241.
Duan, Xiuwen. "Revisiting Empirical Bayes Methods and Applications to Special Types of Data". Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42340.
Qin, Xiao. "A Data-Driven Approach for System Approximation and Set Point Optimization, with a Focus in HVAC Systems". Diss., The University of Arizona, 2014. http://hdl.handle.net/10150/318828.
Morel, Jules. "Surface reconstruction based on forest terrestrial LiDAR data". Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0039/document.
In recent years, the capacity of LiDAR technology to capture detailed information about forest structure has attracted increasing attention in the field of forest science. In particular, terrestrial LiDAR has emerged as a promising tool to retrieve geometrical characteristics of trees at a millimeter level. This thesis studies the surface reconstruction problem from scattered and unorganized point clouds captured in forested environments by a terrestrial LiDAR. We propose a sequence of algorithms dedicated to the reconstruction of models of forest plot attributes: the ground and the woody structure of trees (i.e., the trunk and the main branches). In practice, our approaches model the surface with implicit functions built with radial basis functions, to manage the homogeneity and handle the noise of the sampled data points.
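The radial-basis-function machinery mentioned in the abstract can be sketched as follows: constrain an RBF interpolant to zero on the sample points and to small off-surface values along the normals, then take its zero level set as the surface. The kernel choice, offsets and the toy sphere data below are assumptions for illustration, not the reconstruction pipeline of the thesis.

```python
# Hedged sketch of implicit surface fitting with radial basis functions: the
# surface is the zero level set of an RBF interpolant constrained to 0 on the
# sample points and to +/-eps at points offset along the normals.
import numpy as np

def rbf_implicit(points, normals, eps=0.05):
    centers = np.vstack([points, points + eps * normals, points - eps * normals])
    values = np.concatenate([np.zeros(len(points)), np.full(len(points), eps),
                             np.full(len(points), -eps)])
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    A = r ** 3                                   # triharmonic kernel phi(r) = r^3
    weights = np.linalg.solve(A + 1e-9 * np.eye(len(A)), values)  # small ridge for stability

    def f(x):
        d = np.linalg.norm(x[None, :] - centers, axis=-1)
        return (d ** 3) @ weights                # implicit value: ~0 on the surface
    return f

# Toy example: points on a unit sphere with outward normals
rng = np.random.default_rng(0)
pts = rng.standard_normal((80, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
f = rbf_implicit(pts, pts.copy())
print(f(np.array([0.0, 0.0, 1.0])), f(np.array([0.0, 0.0, 1.5])))  # expected: small, then positive
```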
Morvan, Anne. "Contributions to unsupervised learning from massive high-dimensional data streams : structuring, hashing and clustering". Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLED033/document.
This thesis focuses on how to efficiently perform unsupervised machine learning tasks, such as the fundamentally linked nearest neighbor search and clustering, under time and space constraints for high-dimensional datasets. First, a new theoretical framework reduces the space cost and increases the rate of flow of data-independent Cross-polytope LSH for approximate nearest neighbor search with almost no loss of accuracy. Second, a novel streaming data-dependent method is designed to learn compact binary codes from high-dimensional data points in only one pass. Besides some theoretical guarantees, the quality of the obtained embeddings is assessed on the approximate nearest neighbors search task. Finally, a space-efficient parameter-free clustering algorithm is conceived, based on the recovery of an approximate Minimum Spanning Tree of the sketched data dissimilarity graph, on which suitable cuts are performed.
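As a pared-down illustration of hashing to compact binary codes, the sketch below uses random hyperplane (sign) projections and Hamming distances. The thesis itself works with cross-polytope LSH and data-dependent codes learned in one pass, which this toy example does not implement; dimensions and bit counts are arbitrary.

```python
# Hedged sketch of hashing points to compact binary codes with random
# hyperplane projections (sign LSH), plus Hamming distance between codes.
import numpy as np

rng = np.random.default_rng(0)

def make_hasher(dim: int, n_bits: int):
    planes = rng.standard_normal((n_bits, dim))   # one random hyperplane per bit
    def hash_point(x: np.ndarray) -> np.ndarray:
        return (planes @ x > 0).astype(np.uint8)  # sign pattern = binary code
    return hash_point

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

h = make_hasher(dim=128, n_bits=32)
x = rng.standard_normal(128)
y = x + 0.1 * rng.standard_normal(128)            # a near neighbour of x
z = rng.standard_normal(128)                      # an unrelated point
print(hamming(h(x), h(y)), hamming(h(x), h(z)))   # the first distance is typically smaller
```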
Gorman, Joe, Glenn Takata, Subhash Patel and Dan Grecu. "A Constraint-Based Approach to Predictive Maintenance Model Development". International Foundation for Telemetering, 2008. http://hdl.handle.net/10150/606187.
Predictive maintenance is the combination of inspection and data analysis to perform maintenance when the need is indicated by unit performance. Significant cost savings are possible while preserving a high level of system performance and readiness. Identifying predictors of maintenance conditions requires expert knowledge and the ability to process large data sets. This paper describes a novel use of constraint-based data mining to model exceedance conditions. The approach extends the extract, transform, and load process with domain aggregate approximation to encode expert knowledge. A data-mining workbench enables an expert to pose hypotheses that constrain a multivariate data-mining process.
Hildebrandt, Filip, and Leonard Halling. "Identifiering av tendenser i data för prediktiv analys hos Flygresor.se". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209646.
With digitization, society changes faster than ever, and it is important for companies to stay up to date in order to adapt their business to a constantly changing market. There exist many models in business intelligence, and predictive analytics is an important one. This study investigates to what extent three different methods of predictive analytics are suitable for a specific assignment regarding monthly forecasts based on click data from Flygresor.se. The purpose of the report is to present which of the methods produces the most precise forecasts for the given data and which trends in the data contribute to this result. We use the predictive analytics models Holt-Winters and ARIMA, as well as an expanded linear approximation, on historical click data, and describe the work process as well as the consequences the data from Flygresor.se brought with it.
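Of the three forecasting methods compared in the study, Holt-Winters is the easiest to sketch: additive triple exponential smoothing with level, trend and seasonal components. The implementation below uses fixed smoothing constants and a naive initialization on an invented monthly series; it is illustrative only, not the code used in the study.

```python
# Hedged sketch of additive Holt-Winters (triple exponential smoothing),
# with fixed smoothing parameters and a naive initialization.
def holt_winters_forecast(y, season_len, horizon, alpha=0.3, beta=0.05, gamma=0.2):
    level = sum(y[:season_len]) / season_len
    trend = (sum(y[season_len:2 * season_len]) - sum(y[:season_len])) / season_len ** 2
    season = [y[i] - level for i in range(season_len)]

    for t in range(season_len, len(y)):
        s = season[t % season_len]
        prev_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)      # smooth the level
        trend = beta * (level - prev_level) + (1 - beta) * trend        # smooth the trend
        season[t % season_len] = gamma * (y[t] - level) + (1 - gamma) * s  # update seasonality

    return [level + (h + 1) * trend + season[(len(y) + h) % season_len]
            for h in range(horizon)]

# Toy monthly series with a yearly pattern: forecast the next 3 "months"
history = [10, 12, 14, 30, 28, 22, 18, 16, 15, 13, 12, 11] * 3
print(holt_winters_forecast(history, season_len=12, horizon=3))
```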