Dissertations / Theses on the topic 'Cubic Spline'

Consult the top 50 dissertations / theses for your research on the topic 'Cubic Spline.'

1

Hassan, Mosavverul. "Constructing cubic splines on the sphere." Auburn, Ala., 2009. http://hdl.handle.net/10415/1790. (Advisor: Amnon J. Meir.)

2

Gordon, Fabiana. "Forecasting daily load data using structural models and cubic spline." Pontifícia Universidade Católica do Rio de Janeiro, 1996. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8325@1.

Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
This thesis proposes a model for the treatment of daily observations and applies it to the problem of forecasting daily electricity demand. The approach is based on a structural time series model in which the annual seasonal pattern (periodic movements within the year) is modelled by a periodic spline; the same technique is used to estimate the non-linear effect of an explanatory variable. The spline approach was first used by Harvey and Koopman (1993) to analyse hourly load observations, with temperature entering as an explanatory variable that is also modelled by a spline. The main contributions of this thesis are the treatment of holidays, given their strong influence on electricity consumption, and a temperature response modelled by a spline that allows for possible variations in the effect of temperature on electricity demand within the year. The methodology is applied to three companies of the Brazilian interconnected system: LIGHT (State of Rio de Janeiro), CEMIG (State of Minas Gerais) and COPEL (State of Paraná). Estimation is carried out using the STAMP software together with modules developed in MATLAB.
3

Tsao, Su-Ching. "Evaluation of drug absorption by cubic spline and numerical deconvolution." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/276954.

Abstract:
A novel approach using smoothing cubic splines and point-area deconvolution to estimate the absorption kinetics of linear systems has been investigated. A smoothing cubic spline is employed as the interpolation function, since it is superior in several respects to polynomials and other functions commonly used to represent empirical data. An advantage of the method is that results obtained from the same data set are more consistent, irrespective of who runs the program or how many times it is run. In addition, no initial estimates are needed to run the program. Identical sampling times, or equally spaced measurements, of the unit impulse response and the response of interest are not required. The method is compared with another method using simulated data containing various degrees of random noise.
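The two ingredients of this approach map directly onto modern tools. The following minimal Python sketch is not from the thesis: the data, the mono-exponential unit impulse response and all parameters are purely illustrative. It fits a smoothing cubic spline to noisy oral-dose data and then performs a point-area style deconvolution by solving the discretised convolution system.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Noisy plasma-concentration samples after an oral dose (illustrative data).
t = np.linspace(0.25, 12.0, 20)
c_oral = 8.0 * (np.exp(-0.3 * t) - np.exp(-1.5 * t)) \
         + np.random.default_rng(0).normal(0, 0.05, t.size)

# Smoothing cubic spline (k=3); `s` controls the smoothing/fidelity trade-off.
spline = UnivariateSpline(t, c_oral, k=3, s=0.1)

# Unit impulse response (e.g., from IV bolus data) on a uniform grid.
dt = 0.25
grid = np.arange(dt, 12.0 + dt, dt)
uir = np.exp(-0.3 * grid)        # illustrative mono-exponential UIR
resp = spline(grid)              # smoothed oral response on the same grid

# Point-area deconvolution: response = convolution of UIR with input rate,
# discretised as a lower-triangular system  resp = A @ rate.
A = np.zeros((grid.size, grid.size))
for i in range(grid.size):
    A[i, : i + 1] = uir[i::-1] * dt
rate, *_ = np.linalg.lstsq(A, resp, rcond=None)

print("estimated absorption rate (first 5 points):", rate[:5])
```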
4

Kaya, Hikmet Emre. "A comparative study between the cubic spline and b-spline interpolation methods in free energy calculations." Master's thesis, Faculty of Science, 2020. http://hdl.handle.net/11427/32228.

Abstract:
Numerical methods are essential in computational science, as analytic calculations for large datasets are impractical. Using numerical methods, one can approximate a problem and solve it with basic arithmetic operations. Interpolation is a commonly used method for, inter alia, constructing the value of new data points within an interval of known data points. Furthermore, polynomial interpolation of sufficiently high degree can make the data set differentiable. One consequence of using high-degree polynomials is oscillatory behaviour towards the endpoints, also known as Runge's Phenomenon. Spline interpolation overcomes this obstacle by connecting the data points in a piecewise fashion. However, its complex formulation requires nested iterations in higher dimensions, which is time-consuming. In addition, the calculations have to be repeated to compute each partial derivative at a data point, leading to further slowdown. The B-spline interpolation is an alternative representation of the cubic spline method, in which the spline interpolant at a point is expressed as a linear combination of piecewise basis functions. It has been proposed that this formulation can accelerate many scientific computing operations involving interpolation. Nevertheless, there is a lack of detailed comparison to back up this hypothesis, especially when it comes to computing the partial derivatives.
Among many scientific research fields, free energy calculations particularly stand out for their use of interpolation methods. Numerical interpolation has been implemented in free energy methods for many purposes, from calculating intermediate energy states to deriving forces from free energy surfaces. The results of these calculations can provide insight into reaction mechanisms and their thermodynamic properties. The free energy methods include biased flat histogram methods, which are especially promising due to their ability to accurately construct free energy profiles in the rarely visited regions of reaction spaces. Free Energies from Adaptive Reaction Coordinates (FEARCF), developed by Professor Kevin J. Naidoo, has many advantages over the other flat histogram methods. Because of its treatment of the atoms in reactions, FEARCF makes it easier to apply interpolation methods. It implements cubic spline interpolation to derive biasing forces from the free energy surface, driving the reaction towards regions of higher energy. A major drawback of the method is the slowdown experienced in higher dimensions due to the complicated nature of the cubic spline routine. If the routine is replaced by a more straightforward B-spline interpolation, sampling and generating free energy surfaces can be accelerated.
This dissertation performs a comparative study between the cubic spline and B-spline interpolation methods. At first, data sets of analytic functions were used instead of numerical data, so that the accuracy and percentage errors of both methods could be computed with the functions themselves as reference. These functions were used to evaluate the performance of the two methods at the endpoints, inflection points and regions with a steep gradient. Both interpolation methods generated identically approximated values with a percentage error below the threshold of 1%, although both performed poorly at the endpoints and the points of inflection. Increasing the number of interpolation knots reduced the errors; however, it caused overfitting in the other regions. Although no significant speed-up was observed for univariate interpolation, cubic spline interpolation suffered a drastic slowdown in higher dimensions, by factors of up to 10³ in 3D and 10⁵ in 4D. The same held for classical molecular dynamics simulations with FEARCF, with a speed-up of up to 10³ when B-spline interpolation was implemented. To conclude, the B-spline interpolation method can enhance the efficiency of free energy calculations where cubic spline interpolation has been the method in use.
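The headline claim, that the two representations agree numerically while differing in cost, can be checked with standard SciPy routines. The sketch below is illustrative and unrelated to the FEARCF code; it interpolates Runge's function, the classic test case behind the endpoint and inflection-point discussion above.

```python
import numpy as np
from scipy.interpolate import CubicSpline, make_interp_spline

x = np.linspace(-1.0, 1.0, 11)
y = 1.0 / (1.0 + 25.0 * x**2)        # Runge's function

cs = CubicSpline(x, y, bc_type="natural")              # piecewise-polynomial form
bs = make_interp_spline(x, y, k=3, bc_type="natural")  # same spline, B-spline basis

xs = np.linspace(-1.0, 1.0, 2001)
assert np.allclose(cs(xs), bs(xs))                  # identical interpolants
assert np.allclose(cs(xs, 1), bs.derivative()(xs))  # identical first derivatives

# Percentage error against the analytic function, as in the thesis comparison.
f = 1.0 / (1.0 + 25.0 * xs**2)
print("max % error:", 100 * np.max(np.abs(cs(xs) - f) / np.abs(f)))
```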
5

Matějka, Martin. "Analýza metod vyrovnání výnosových křivek." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-165093.

Abstract:
The thesis focuses on finding the most appropriate method for constructing a yield curve that meets the criteria of Solvency II as well as selected evaluation criteria. Comparing the methods provides an overview of the advantages of each. Yield curves are constructed using Czech interest rate swap data from 2007 to 2013. The selection of the evaluated methods respects their public availability and their practical application in life insurance or central banks. The thesis is divided into two parts. The first part describes the theoretical background necessary to understand the examined issues. In the second part, the selected methods are analysed and evaluated in detail.
6

Chen, Eva T. "Estimation of the term structure of interest rates via cubic exponential spline functions." The Ohio State University, 1987. http://rave.ohiolink.edu/etdc/view?acc_num=osu1279824799.

7

Mawk, Russell Lynn. "A survey of applications of spline functions to statistics." [Johnson City, Tenn. : East Tennessee State University], 2001. http://etd-submit.etsu.edu/etd/theses/available/etd-0714101-104229/restricted/mawksr0809.pdf.

8

Negron, Luis G. "Initial-value technique for singularly perturbed two point boundary value problems via cubic spline." Master's thesis, University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4597.

Abstract:
A recent method for solving singular perturbation problems is examined. It is designed for the applied mathematician or engineer who needs a convenient, useful tool that requires little preparation and can be readily implemented using little more than an industry-standard software package for spreadsheets. In this paper, we shall examine singularly perturbed two point boundary value problems with the boundary layer at one end point. An initial-value technique is used for its solution by replacing the problem with an asymptotically equivalent first order problem, which is, in turn, solved as an initial value problem by using cubic splines. Numerical examples are provided to show that the method presented provides a fine approximation of the exact solution. The first chapter provides some background material to the cubic spline and boundary value problems. The works of several authors and a comparison of different solution methods are also discussed. Finally, some background into the specific singularly perturbed boundary value problems is introduced. The second chapter contains calculations and derivations necessary for the cubic spline and the initial value technique which are used in the solutions to the boundary value problems. The third chapter contains some worked numerical examples and the numerical data obtained along with most of the tables and figures that describe the solutions. The thesis concludes with some reflections on the results obtained and some discussion of the error bounds on the calculated approximations to the exact solutions for the numeric examples discussed.
ID: 029051011; Thesis (M.S.)--University of Central Florida, 2010; includes bibliographical references (p. 48-50).
M.S., Department of Mathematics, College of Sciences
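As a rough illustration of the initial-value technique described in the abstract above (not the author's exact scheme), take the model problem εy″ + y′ = 0 on [0, 1] with y(0) = 0 and y(1) = 1: the reduced first-order problem is marched from the right boundary as an initial value problem, a boundary-layer correction is added at x = 0, and a cubic spline carries the composite solution. All choices below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import CubicSpline

eps = 0.01

# Singularly perturbed problem: eps*y'' + y' = 0, y(0)=0, y(1)=1 (layer at x=0).
# Reduced (outer) first-order problem: u' = 0, marched from the right boundary.
sol = solve_ivp(lambda x, u: [0.0 * u[0]], (1.0, 0.0), [1.0], dense_output=True)

x = np.linspace(0.0, 1.0, 201)
u = sol.sol(x)[0]

# Boundary-layer correction at x = 0 restores the left boundary condition.
y_approx = u + (0.0 - sol.sol(0.0)[0]) * np.exp(-x / eps)

# Represent the composite solution as a cubic spline, as in the technique.
spl = CubicSpline(x, y_approx)

y_exact = (1 - np.exp(-x / eps)) / (1 - np.exp(-1 / eps))
print("max abs error:", np.max(np.abs(spl(x) - y_exact)))
```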
9

Arise, Pavan Kumar. "A DEVELOPMENT OF A COMPUTER AIDED GRAPHIC USER INTERFACE POSTPROCESSOR FOR ROTOR BEARING SYSTEMS." UKnowledge, 2004. http://uknowledge.uky.edu/gradschool_theses/326.

Abstract:
Rotor dynamic analysis, which requires an extensive amount of data and rigorous analytical processing, has been eased by the advent of powerful and affordable digital computers. By incorporating the processor and a graphical post-processor in a single setup, this program offers a consistent and efficient approach to rotor dynamic analysis. The graphic user interface presented in this program effectively addresses the inherent complexities of rotor dynamic analyses by linking the required computational algorithms together into a comprehensive program in which input data and results are exchanged, analyzed and graphically plotted with minimal effort by the user. Just by selecting an input file and the appropriate options, the user can carry out a comprehensive rotor dynamic analysis (synchronous response, stability analysis, critical speed analysis with undamped map) of a particular design and view the results, with several options to save the plots for further verification. This approach helps the user modify a turbomachinery design quickly, until an efficient design is reached, with minimal compromise in all aspects.
10

Liu, Chunyan. "A comparison of statistics for selecting smoothing parameters for loglinear presmoothing and cubic spline postsmoothing under a random groups design." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/1013.

Abstract:
Smoothing techniques are designed to improve the accuracy of equating functions. The main purpose of this dissertation was to propose a new statistic (CS) and compare it to existing model selection strategies in selecting smoothing parameters for polynomial loglinear presmoothing (C) and cubic spline postsmoothing (S) for mixed-format tests under a random groups design. For polynomial loglinear presmoothing, CS was compared to seven existing model selection strategies in selecting the C parameters: likelihood ratio chi-square test (G2), Pearson chi-square test (PC), likelihood ratio chi-square difference test (G2diff), Pearson chi-square difference test (PCdiff), Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and Consistent Akaike Information Criterion (CAIC). For cubic spline postsmoothing, CS was compared to the ± 1 standard error of equating (± 1 SEE) rule. In this dissertation, both the pseudo-test data, Biology long and short, and Environmental Science long and short, and the simulated data were used to evaluate the performance of the CS statistic and the existing model selection strategies. For both types of data, sample sizes of 500, 1000, 2000, and 3000 were investigated. In addition, No Equating Needed conditions and Equating Needed conditions were investigated for the simulated data. For polynomial loglinear presmoothing, mean absolute difference (MAD), average squared bias (ASB), average squared error (ASE), and mean squared errors (MSE) were computed to evaluate the performance of all model selection strategies based on three sets of criteria: cumulative relative frequency distribution (CRFD), relative frequency distribution (RFD), and the equipercentile equating relationship. For cubic spline postsmoothing, the evaluation of different model selection procedures was only based on the MAD, ASB, ASE, and MSE of equipercentile equating. The main findings based on the pseudo-test data and simulated data were as follows: (1) As sample sizes increased, the average C values increased and the average S values decreased for all model selection strategies. (2) For polynomial loglinear presmoothing, compared to the results without smoothing, all model selection strategies always introduced bias of RFD and significantly reduced the standard errors and mean squared errors of RFD; only AIC reduced the MSE of CRFD and MSE of equipercentile equating across all sample sizes and all test forms; the best CS procedure tended to yield an equivalent or smaller MSE of equipercentile equating than the AIC and G2diff statistics. (3) For cubic spline postsmoothing, both the ± 1 SEE rule and the CS procedure tended to perform reasonably well in reducing the ASE and MSE of equipercentile equating. (4) Among all existing model selection strategies, the ±1 SEE rule in postsmoothing tended to perform better than any of the seven existing model selection strategies in presmoothing in terms of the reduction of random error and total error; (5) pseudo-test data and the simulated data tended to yield similar results. The limitations of the study and possible future research are discussed in the dissertation.
11

Wang, Yijie Dylan. "Hidden Markov model with application in cell adhesion experiment and Bayesian cubic splines in computer experiments." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49061.

Abstract:
Estimation of the number of hidden states is challenging in hidden Markov models. Motivated by the analysis of a specific type of cell adhesion experiment, a new framework based on a hidden Markov model and double penalized order selection is proposed. The order selection procedure is shown to be consistent in estimating the number of states. A modified Expectation-Maximization algorithm is introduced to efficiently estimate parameters in the model. Simulations show that the proposed framework outperforms existing methods. Applications of the proposed methodology to real data demonstrate the accuracy of estimating receptor-ligand bond lifetimes and waiting times, which are essential in kinetic parameter estimation. The second part of the thesis is concerned with prediction of a deterministic response function y at some untried sites, given values of y at a chosen set of design sites. The intended application is to computer experiments in which y is the output from a computer simulation and each design site represents a particular configuration of the input variables. A Bayesian version of the cubic spline method commonly used in numerical analysis is proposed, in which the random function that represents prior uncertainty about y is taken to be a specific stationary Gaussian process. An MCMC procedure is given for updating the prior given the observed y values. Simulation examples and a real data application are given to compare the performance of the Bayesian cubic spline with that of two existing methods.
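The Bayesian reading of spline prediction, the posterior mean of a Gaussian process conditioned on the design-site outputs, can be sketched in a few lines. The squared-exponential kernel below is a stand-in for the specific stationary covariance used in the thesis; the data and length-scale are illustrative.

```python
import numpy as np

def k(a, b, ell=0.3):
    """Stationary (squared-exponential) covariance, a stand-in kernel."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0, 1, 8))        # design sites (simulation inputs)
y = np.sin(2 * np.pi * X)                # deterministic simulator output

Xs = np.linspace(0, 1, 200)              # untried sites
K = k(X, X) + 1e-10 * np.eye(X.size)     # jitter for numerical stability
alpha = np.linalg.solve(K, y)

mean = k(Xs, X) @ alpha                  # posterior mean prediction
cov = k(Xs, Xs) - k(Xs, X) @ np.linalg.solve(K, k(X, Xs))
sd = np.sqrt(np.clip(np.diag(cov), 0, None))
print("prediction at x=0.5:", mean[np.argmin(np.abs(Xs - 0.5))])
```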
12

Rodolfo, Karl. "A Comparative Study of American Option Valuation and Computation." Science. School of Mathematics and Statistics, 2007. http://hdl.handle.net/2123/2063.

Abstract:
Doctor of Philosophy (PhD)
For many practitioners and market participants, the valuation of financial derivatives is of very high importance, as its uses range from risk management to speculative investment strategies and capital enhancement. A developing market requires efficient yet accurate methods for valuing financial derivatives such as American options. A closed-form analytical solution for American options has been very difficult to obtain due to the different boundary conditions imposed on the valuation problem. Following the method of solving the American option as a free boundary problem in the spirit of the "no-arbitrage" pricing framework of Black-Scholes, the option price and hedging parameters can be represented as an integral equation consisting of the European option value and an early exercise value dependent upon the optimal free boundary. Such methods exist in the literature and, along with risk-neutral pricing methods, have been implemented in practice. Yet existing methods are either accurate but inefficient, or have traded accuracy for computational speed. A new numerical approach to the valuation of American options by cubic splines is proposed, and is shown to be accurate and efficient when compared to existing option pricing methods. A further comparison is made with the behaviour of the American option's early exercise boundary under other pricing models.
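The thesis develops an integral-equation formulation; as a simpler illustration of cubic splines inside an American-option pricer, the sketch below carries the continuation value on a price grid with a cubic spline during backward induction. All parameters and design choices here are illustrative, not the thesis's scheme.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# American put, illustrative parameters.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_t, n_s = 50, 200
dt = T / n_t

s = np.linspace(1.0, 3.0 * K, n_s)            # price grid
payoff = np.maximum(K - s, 0.0)
V = payoff.copy()                              # value at maturity

# Gauss-Hermite nodes for the lognormal increment of the risk-neutral walk.
z, w = np.polynomial.hermite_e.hermegauss(15)
w = w / w.sum()

for _ in range(n_t):
    spline = CubicSpline(s, V)                 # continuation value as a spline
    s_next = s[:, None] * np.exp((r - 0.5 * sigma**2) * dt
                                 + sigma * np.sqrt(dt) * z[None, :])
    cont = np.exp(-r * dt) * (spline(np.clip(s_next, s[0], s[-1])) @ w)
    V = np.maximum(payoff, cont)               # early-exercise decision

print("American put value:", CubicSpline(s, V)(S0))
```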
13

Forsman, Daniel. "Bangenerering för industrirobot med 6 frihetsgrader." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2376.

Abstract:

This thesis studies path generation for industrial robots of six degrees of freedom. A path is defined by connection of simple geometrical objects like arcs and straight lines. About each point at which the objects connect, a region, henceforth called a zone, is defined in which deviation from the defined path is permitted. The zone allows the robot to follow the path at a constant speed, but the acceleration needed may vary.

Some means of calculating the zone path so as to make the acceleration continuous will be presented. In joint space the path is described by cubic splines. The transformation of the Cartesian path to paths in joint space will be examined. Discontinuities in the second order derivatives appear between the splines.

A few examples of different zone path calculations are presented, in which the resulting spline functions are compared with respect to their first and second order derivatives. The number of spline functions needed, given an upper limit on the deviation, is also investigated for the transformation back to Cartesian coordinates.
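A minimal sketch of the continuity issue (illustrative, not the thesis code): two cubic splines fitted separately over adjacent path segments can be made to match in value and slope at the junction, yet their second derivatives generally still jump there.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Joint-angle samples along two adjacent path segments (illustrative).
t1, t2 = np.linspace(0.0, 1.0, 6), np.linspace(1.0, 2.0, 6)
q1, q2 = np.sin(t1), np.sin(t2) + 0.02 * (t2 - 1.0) ** 2

# Fit each segment separately, clamping the slope at the junction so the
# composite path is C1 there.
slope = np.cos(1.0)
s1 = CubicSpline(t1, q1, bc_type=((2, 0.0), (1, slope)))
s2 = CubicSpline(t2, q2, bc_type=((1, slope), (2, 0.0)))

# Value and first derivative agree at t=1; the second derivative jumps.
print("value jump :", s2(1.0) - s1(1.0))
print("slope jump :", s2(1.0, 1) - s1(1.0, 1))
print("accel jump :", s2(1.0, 2) - s1(1.0, 2))
```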

14

Popiel, Tomasz. "Geometrically-defined curves in Riemannian manifolds." University of Western Australia. School of Mathematics and Statistics, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0119.

Abstract:
[Truncated abstract] This thesis is concerned with geometrically-defined curves that can be used for interpolation in Riemannian or, more generally, semi-Riemannian manifolds. As in much of the existing literature on such curves, emphasis is placed on manifolds which are important in computer graphics and engineering applications, namely the unit 3-sphere S³ and the closely related rotation group SO(3), as well as other Lie groups and spheres of arbitrary dimension. All geometrically-defined curves investigated in the thesis are either higher order variational curves, namely critical points of cost functionals depending on (covariant) derivatives of order greater than 1, or defined by geometrical algorithms, namely generalisations to manifolds of algorithms from the field of computer aided geometric design. Such curves are needed, especially in the aforementioned applications, since interpolation methods based on applying techniques of classical approximation theory in coordinate charts often produce unnatural interpolants. However, mathematical properties of higher order variational curves and curves defined by geometrical algorithms are in need of substantial further investigation: higher order variational curves are solutions of complicated nonlinear differential equations whose properties are not well understood; it is usually unclear how to impose endpoint derivative conditions on, or smoothly piece together, curves defined by geometrical algorithms. This thesis addresses these difficulties for several classes of curves. ... The geometrical algorithms investigated in this thesis are generalisations of the de Casteljau and Cox-de Boor algorithms, which define, respectively, polynomial Bézier and piecewise-polynomial B-spline curves by dividing, in certain ratios and for a finite number of iterations, piecewise-linear control polygons corresponding to finite sequences of control points. We show how the control points of curves produced by the generalised de Casteljau algorithm in an (almost) arbitrary connected finite-dimensional Riemannian manifold M should be chosen in order to impose desired endpoint velocities and (covariant) accelerations and, thereby, piece the curves together in a C² fashion. A special case of the latter construction simplifies when M is a symmetric space. For the generalised Cox-de Boor algorithm, we analyse in detail the failure of a fundamental property of B-spline curves, namely C² continuity at (certain) knots, to carry over to M.
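On the unit sphere, the geodesic step of the generalised de Casteljau algorithm is simply spherical linear interpolation (slerp), so the whole algorithm fits in a few lines. The sketch below is a generic illustration, not the thesis's construction.

```python
import numpy as np

def slerp(p, q, t):
    """Geodesic interpolation between unit vectors p and q on the sphere."""
    omega = np.arccos(np.clip(p @ q, -1.0, 1.0))
    if omega < 1e-12:
        return p
    return (np.sin((1 - t) * omega) * p + np.sin(t * omega) * q) / np.sin(omega)

def de_casteljau_sphere(points, t):
    """Generalised de Casteljau: repeatedly slerp adjacent control points."""
    pts = [p / np.linalg.norm(p) for p in points]
    while len(pts) > 1:
        pts = [slerp(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
    return pts[0]

# A cubic (4-control-point) spherical Bezier-like curve on the 2-sphere.
ctrl = [np.array([1.0, 0, 0]), np.array([1.0, 1, 0]),
        np.array([0, 1.0, 1]), np.array([0, 0, 1.0])]
curve = np.array([de_casteljau_sphere(ctrl, t) for t in np.linspace(0, 1, 50)])
print("on sphere:", np.allclose(np.linalg.norm(curve, axis=1), 1.0))
```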
15

Ataser, Zafer. "Variable Shaped Detector: A Negative Selection Algorithm." PhD thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615629/index.pdf.

Abstract:
Artificial Immune Systems (AIS) are a class of computational intelligence methods developed on the principles and processes of the biological immune system. AIS methods are categorized into four main types according to the immune principles and processes they draw on: clonal selection, negative selection, immune network and danger theory. The negative selection algorithm (NSA) is one of the major AIS models. NSA is a supervised learning algorithm based on imitation of the T-cell maturation process in the thymus. In this imitation, detectors are used to mimic the cells, and the process of T-cell maturation is simulated to generate detectors. NSA then classifies the specified data either as normal (self) data or as anomalous (non-self) data. In this classification task, NSA methods can make two kinds of classification errors: a self datum is classified as anomalous, or a non-self datum is classified as normal. In this thesis, a novel negative selection method, the variable shaped detector (V-shaped detector), is proposed to increase classification accuracy, in other words to decrease classification errors. In the V-shaped detector, new approaches are introduced to define self and to represent detectors. The V-shaped detector uses the combination of the Local Outlier Factor (LOF) and the kth nearest neighbor (k-NN) to determine a different radius for each self sample, making it possible to model the self space using the self samples and their radii. In addition, the cubic B-spline is proposed to generate a variable shaped detector. In detector representation, the application of the cubic spline is meaningful when the edge points are used. Hence, an Edge Detection (ED) algorithm is developed to find the edge points of the given self samples. The V-shaped detector was tested using different data sets and compared with the well-known one-class classification method SVM and with the similar, popular negative selection method using variable-sized detectors, termed V-detector. The experiments show that the proposed method generates reasonable and comparable results.
16

Naz, Saima. "Forecasting daily maximum temperature of Umeå." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-112404.

Abstract:
The aim of this study is to find an approach that can help improve predictions of the daily temperature of Umeå. Weather forecasts are available through various sources nowadays, and various software packages and methods are available for time series forecasting. Our aim is to investigate the daily maximum temperatures of Umeå and compare the performance of some methods in forecasting these temperatures. We analyse the daily maximum temperature data and produce predictions for a local period using autoregressive integrated moving average (ARIMA), exponential smoothing (ETS), and cubic splines. The forecast package in R is used for this purpose, and the automatic forecasting methods available in the package are applied for modelling with ARIMA, ETS, and cubic splines. The thesis begins with some initial modelling on the univariate time series of daily maximum temperatures. Daily maximum temperatures of Umeå from 2008 to 2013 are used to compare the methods over various lengths of training period, and the best method is chosen on the basis of accuracy measures. Keeping in mind that various factors can cause variability in daily temperature, we then try to improve the forecasts by applying a multivariate forecasting method to the series of maximum temperatures together with some other variables, using a vector autoregressive (VAR) model from the vars package in R. Results: ARIMA is selected as the best method, in comparison with ETS and cubic smoothing splines, for forecasting the one-step-ahead daily maximum temperature of Umeå, with a training period of one year. ARIMA also provides better forecasts of daily temperatures for the next two or three days. On the basis of this study, VAR (for multivariate time series) does not improve the forecasts significantly. The proposed ARIMA model with a one-year training period is compatible with the forecasts of daily maximum temperature of Umeå obtained from the Swedish Meteorological and Hydrological Institute (SMHI).
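The study is built on R's forecast package; a rough Python analogue of its core experiment (one-step-ahead forecasts from an ARIMA model refit on a one-year rolling window) is sketched below, with synthetic data standing in for the Umeå series and a fixed ARIMA order in place of the automatic order selection used in the thesis.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for daily maximum temperatures.
rng = np.random.default_rng(0)
days = pd.date_range("2008-01-01", periods=3 * 365, freq="D")
temp = 10 * np.sin(2 * np.pi * days.dayofyear / 365.25) \
       + rng.normal(0, 2, days.size)
series = pd.Series(temp, index=days)

train_len = 365                      # one-year training window
errors = []
for end in range(train_len, train_len + 60):        # 60 one-step forecasts
    train = series.iloc[end - train_len:end]
    model = ARIMA(train, order=(2, 0, 1)).fit()     # fixed order for the sketch
    fc = model.forecast(steps=1).iloc[0]
    errors.append(series.iloc[end] - fc)

print("one-step RMSE:", float(np.sqrt(np.mean(np.square(errors)))))
```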
17

Rosa, Francisco Eduardo Lopes Sousa. "Risk neutral probability density for currency options." Master's thesis, Instituto Superior de Economia e Gestão, 2019. http://hdl.handle.net/10400.5/20601.

Abstract:
Master's in Finance
This work aims to ease forecasting for financial market investors. Although the approach can be applied to equities and oil futures, the main focus is the foreign exchange market, more specifically currency options, from which the risk-neutral probability density function is extracted using a parametric and a nonparametric approach. The method is applied to a very recent case, in 2019, between the United States dollar and the United Kingdom pound, making the behaviour of the density particularly interesting to assess, especially in connection with the withdrawal of the United Kingdom from the European Union.
18

Fantoni, Anna. "Long-period oscillations in GPS Up time series. Study over the European/Mediterranean area." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25586/.

Abstract:
The surface of the Earth is subjected to vertical deformations caused by geophysical and geological processes, which can be monitored by Global Positioning System (GPS) observations. The purpose of this work is to investigate GPS height time series to identify interannual signals affecting the Earth's surface over the European and Mediterranean area during the period 2001-2019. Thirty-six homogeneously distributed GPS stations were selected from the online dataset made available by the Nevada Geodetic Laboratory (NGL) on the basis of the length and quality of the data series. Principal Component Analysis (PCA) is the technique applied to extract the main patterns of the spatial and temporal variability of the GPS Up coordinate. The time series were studied by means of a frequency analysis using a periodogram and the real-valued Morlet wavelet. The periodogram is used to identify the dominant frequencies and the spectral density of the investigated signals; the wavelet is applied to identify the signals in the time domain and the relevant periodicities. This study has identified, over the European and Mediterranean area, the presence of interannual non-linear signals with a period of 2 to 4 years, possibly related to atmospheric and hydrological loading displacements and to climate phenomena such as the El Niño Southern Oscillation (ENSO). A clear signal with a period of about six years is present in the vertical component of the GPS time series, likely explainable by the gravitational coupling between the Earth's mantle and the inner core. Moreover, signals with a period on the order of 8-9 years, which might be explained by mantle-inner core gravity coupling and the cycle of the lunar perigee, and a signal of 18.6 years, likely associated with the lunar nodal cycle, were identified through the wavelet spectrum. However, these last two signals need further confirmation, because the present length of the GPS time series is still too short compared to the periods involved.
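The periodogram step is straightforward to reproduce; the sketch below applies scipy.signal.periodogram to a synthetic Up series containing a six-year oscillation. The data, sampling rate and amplitudes are illustrative, not the NGL series.

```python
import numpy as np
from scipy.signal import periodogram

# Synthetic GPS Up series, weekly sampling over ~19 years (illustrative).
fs = 52.0                                    # samples per year
t = np.arange(0, 19, 1 / fs)
rng = np.random.default_rng(6)
up = (2.0 * np.sin(2 * np.pi * t / 6.0)      # ~6-year signal (mm)
      + 1.0 * np.sin(2 * np.pi * t / 2.5)    # interannual 2-4 year band
      + rng.normal(0, 1.0, t.size))

f, pxx = periodogram(up, fs=fs)              # f in cycles per year
dominant = f[np.argmax(pxx[1:]) + 1]         # skip the zero-frequency bin
print(f"dominant period: {1 / dominant:.1f} years")
```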
19

WOLFE, GLENN A. "PERFORMANCE MACRO-MODELING TECHNIQUES FOR FAST ANALOG CIRCUIT SYNTHESIS." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1100027722.

20

Dingus, Thomas Holden. "Flickering Analysis of CH Cygni Using Kepler Data." Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/honors/357.

Abstract:
Utilizing data from the Kepler Mission, we analyze a flickering phenomenon in the symbiotic variable star CH Cygni. We perform a spline interpolation of an averaged lightcurve and subtract the spline to obtain residual data. This allows us to analyze the deviations that are not caused by the red giant's semi-regular periodic variations. We then histogram the residuals and compute moments (variance, skewness, and kurtosis) to determine the nature of the flickering. Our analysis shows flickering on a much smaller scale than reported in the previous literature, on the order of fractions of a percent of the luminosity. From our analysis, we are also confident that the flickering is a product of the accretion disc of the white dwarf.
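A condensed sketch of that pipeline (spline fit, residual subtraction, then moment statistics) might look as follows, with a synthetic lightcurve standing in for the Kepler data:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(2)
t = np.linspace(0.0, 100.0, 2000)
# Slow semi-regular variation (red giant) plus small-amplitude flickering.
flux = 1.0 + 0.05 * np.sin(2 * np.pi * t / 40.0) + rng.normal(0, 0.002, t.size)

# Smoothing cubic spline tracks the slow variation; `s` sets the smoothness.
spl = UnivariateSpline(t, flux, k=3, s=t.size * 0.002**2)
residuals = flux - spl(t)

# Moments of the residual distribution characterise the flickering.
print("std dev :", residuals.std())
print("skewness:", skew(residuals))
print("kurtosis:", kurtosis(residuals))      # excess kurtosis
```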
21

González, Cindy. "Les courbes algébriques trigonométriques à hodographe pythagorien pour résoudre des problèmes d'interpolation deux et trois-dimensionnels et leur utilisation pour visualiser les informations dentaires dans des volumes tomographiques 3D." Thesis, Valenciennes, 2018. http://www.theses.fr/2018VALE0001/document.

Abstract:
Interpolation problems have been widely studied in Computer Aided Geometric Design (CAGD). They consist in the construction of curves and surfaces that pass exactly through a given data set, such as point clouds, tangents, curvatures, lines/planes, etc. In general, these curves and surfaces are represented in parametrized form. This representation is independent of the coordinate system, adapts well to geometric transformations, and the differential geometric properties of curves and surfaces are invariant under reparametrization. In this context, the main goal of this thesis is to present 2D and 3D data interpolation schemes by means of Algebraic-Trigonometric Pythagorean-Hodograph (ATPH) curves. The latter are parametric curves defined in a mixed algebraic-trigonometric space whose hodograph satisfies a Pythagorean condition. This representation allows the curve's arc length and the rational-trigonometric parametrization of the offset curves to be calculated analytically. These properties are usable for the design of geometric models in many applications, including manufacturing, architectural design, shipbuilding, computer graphics, and many more. In particular, we are interested in the geometric modeling of odontological objects. To this end, we use spatial ATPH curves for the construction of developable patches within 3D odontological volumes, which may be a useful tool for extracting information of interest along dental structures. We give an overview of how some similar interpolation problems have been addressed by the scientific community. Then, in chapter 2, we consider the construction of planar C² ATPH spline curves that interpolate an ordered sequence of points. This problem has many solutions, whose number depends on the number of interpolating points; we therefore employ two methods to find them. Firstly, we calculate all solutions by a homotopy method. However, it is empirically observed that only one solution is free of self-intersections. Hence, the Newton-Raphson iteration method is used to directly compute this "good" solution. Note that C² ATPH spline curves depend on several free parameters, which allow a diversity of interpolants to be obtained. Thanks to these shape parameters, the ATPH curves prove to be more flexible and versatile than their polynomial counterpart, the well-known Pythagorean-Hodograph (PH) quintic curves, and polynomial curves in general. These parameters are chosen optimally through a minimization process of fairness measures. We design ATPH curves that closely agree with well-known trigonometric curves by adjusting the shape parameters. We extend the planar ATPH curves to the case of spatial ATPH curves in chapter 3. This characterization is given in terms of quaternions, because this allows their properties to be analyzed properly and simplifies the calculations. We employ spatial ATPH curves to solve the first-order Hermite interpolation problem. The obtained ATPH interpolants depend on three free angular values. As in the planar case, we choose these parameters optimally by minimizing integral shape measures. This process is also used to calculate the C¹ interpolating ATPH curves that closely approximate well-known 3D parametric curves; to illustrate this performance, we present the process for some kinds of helices.
In chapter 4 we then use these C¹ ATPH splines to guide developable surface patches, which are deployed within odontological computed tomography (CT) volumes in order to visualize information of interest to the medical professional. In particular, we construct piecewise conical surfaces along smooth ATPH curves to display information related to the anatomical structure of human jawbones. This information may be useful in clinical assessment, diagnosis and/or treatment planning. Finally, the obtained results are analyzed and conclusions are drawn in chapter 5.
22

Subramanian, Harshavardhan. "Combining scientific computing and machine learning techniques to model longitudinal outcomes in clinical trials." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176427.

Abstract:
Scientific machine learning (SciML) is a new branch of AI research at the intersection of scientific computing (Sci) and machine learning (ML). It deals with the efficient amalgamation of data-driven algorithms and scientific computing to discover the dynamics of a time-evolving process. The output of such algorithms is a governing equation (e.g., an ordinary differential equation, ODE), which one can then solve for any time point and thus obtain a rigorous prediction. In this thesis, we present a methodology for incorporating the SciML approach in the context of clinical trials to predict IPF disease progression in the form of a governing equation. Our proposed methodology also quantifies the uncertainties associated with the model by fitting a 95% high density interval (HDI) for the ODE parameters and a 95% posterior prediction interval for posterior predicted samples. We have also investigated the possibility of predicting later outcomes using the observations collected in the early phase of the study. We successfully combined ML techniques, statistical methodologies and scientific computing tools such as bootstrap sampling, cubic spline interpolation, Bayesian inference and sparse identification of nonlinear dynamics (SINDy) to discover the dynamics behind the efficacy outcome, and to quantify the uncertainty of the parameters of the governing equation in the form of 95% HDI intervals. We compared the resulting model with the existing disease progression model described by the Weibull function. Based on the mean squared error (MSE) criterion between our ODE-approximated values and the population means of the respective datasets, we achieved MSEs of 0.133, 0.089, 0.213 and 0.057. Comparing these with the MSE values obtained using the Weibull function, our ODE model reduced the error relative to the Weibull baseline model by 7.5% and 8.1% for the third dataset and the pooled dataset, respectively, whereas for the first and second datasets the Weibull model reduced the errors by 1.5% and 1.2%, respectively. Comparing overall performance in terms of MSE, our proposed model approximates the population means better in all cases except the first and second datasets, where the error margin is very small. In terms of interpretation, our dynamical system model contains the mechanistic elements that can explain the decay/acceleration rate of the efficacy endpoint, which are missing in the Weibull model. However, our approach was limited in predicting final outcomes with good accuracy from a model derived from 24-, 36- and 48-week observations, whereas the Weibull model possesses no predictive capability at all. The extrapolated trend based on 60 weeks of data was found to be close to the population mean and to the ODE model built on 72 weeks of data. Finally, we highlight potential questions for future work.
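A toy version of the pipeline's central step, cubic-spline interpolation of sparse longitudinal observations followed by SINDy-style sparse regression, can be written directly in NumPy. This generic sketch recovers a simple linear ODE and is not the clinical-trial model; the dynamics, library and threshold are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sparse noisy observations of y' = -0.5*y + 0.1 (illustrative dynamics).
t_obs = np.linspace(0, 10, 12)
y_obs = 0.2 + 0.8 * np.exp(-0.5 * t_obs) \
        + np.random.default_rng(4).normal(0, 1e-3, 12)

# Cubic spline interpolation gives dense states and derivatives.
spl = CubicSpline(t_obs, y_obs)
t = np.linspace(0, 10, 400)
y, dy = spl(t), spl(t, 1)

# Candidate library [1, y, y^2]; sequentially thresholded least squares.
theta = np.column_stack([np.ones_like(y), y, y**2])
xi = np.linalg.lstsq(theta, dy, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.05                    # sparsity threshold
    xi[small] = 0.0
    big = ~small
    xi[big] = np.linalg.lstsq(theta[:, big], dy, rcond=None)[0]

print("discovered: y' =", f"{xi[0]:+.3f} {xi[1]:+.3f}*y {xi[2]:+.3f}*y^2")
```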
23

Olaleye, Peter Damilare. "Mortality investigation : does life table PA90 model annuitants mortality in Nigeria?" Master's thesis, Instituto Superior de Economia e Gestão, 2018. http://hdl.handle.net/10400.5/17306.

Abstract:
Master's in Actuarial Science
This study investigates whether the UK PA90 table is a suitable proxy for annuitant mortality in Nigeria. Annuities have grown rapidly across the globe owing to reforms and regulation of public social security systems regarding post-retirement plans, and the Nigerian annuity market is no exception, with annuity products gaining momentum by the day. The primary focus of this dissertation is to compare PA90 with crude rates estimated from the available national data, an important topic in Nigeria nowadays. A literature review is provided, covering what a life annuity is, mortality investigations in the UK and Africa, and some of the reasons why mortality rates are continually being assessed. The data and methodology required to accomplish the objective are also thoroughly discussed and applied. Two smoothing techniques, the natural cubic spline (NCS) and the penalised spline, were applied to the training set to obtain smoothed mortality rates. The estimated rates are then compared with the PA90 rates, to see whether this life table should continue to be used as a proxy for the mortality of Nigerian annuitants or whether an independent study should be carried out.
24

Al-alam, Wagner Guimarães. "SisA3: Sistema Automatizado de Auditoria de Armazéns de Granéis." Universidade Católica de Pelotas, 2010. http://tede.ucpel.edu.br:8080/jspui/handle/tede/113.

Abstract:
Companies working with bulk materials have appropriate locations, known as warehouses or silos, for storage during production and for storing the final product. The values of stocks need to be validated periodically by comparing the fiscal records (receipts) with the physical situation (a survey of the volume stored at the company). In this context, the calculation of the physical inventory, i.e. the volume of bulk material present in the warehouses, is usually done manually, with low credibility and a propensity for errors. Current auditing procedures for the contents of warehouses involve inaccurate estimates and often require emptying the warehouse. Considering technologies that enable the electronic measurement of distances and angles, together with automatic control of actuators enabling mechanical movements of the supporting structures, we sought to develop a system capable of providing both computational and technological solutions to the problem of calculating the volume of irregular reliefs (the products stocked in warehouses). The Automated Warehouse Auditing System (SisA3) intends to make this process automatic, fast and precise, without the need to empty the warehouses or to touch the products. To achieve this goal, an integrated system was developed, composed of: (i) a scanner, consolidating a hybrid hardware-and-software prototype called Dig-SisA3, for measuring points of the non-uniform relief formed by the products in stock; and (ii) a volume-calculation method, iCone, which combines techniques of scientific visualization, numerical interpolation of points and iterative volume calculation. A parallelization of the iCone prototype was also developed to address the speed and performance requirements of the iCone method in the auditing process. The development for multiprocessor, multi-core and distributed architectures was done over the D-GM (Distributed Geometric Machine) environment, which provides the formalisms to guarantee the creation, management and processing of parallel and/or distributed scientific computing applications, with emphasis on the exploitation of data parallelism and synchronization steps. The iCone software prototype was functionally validated, including an analysis of the error in the method, and the performance analysis of the p-iCone prototype showed satisfactory results. This work consolidates the SisA3 system, enabling automatic and reliable measurement of inventories, with broad market application.
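The heart of iCone, interpolating scattered surface measurements and integrating under the interpolant, can be illustrated with generic SciPy tools. This is a stand-in sketch with invented pile geometry, not the iCone implementation.

```python
import numpy as np
from scipy.interpolate import griddata

# Scattered scanner samples of a grain-pile surface (illustrative).
rng = np.random.default_rng(7)
pts = rng.uniform(0, 20, size=(400, 2))                    # x, y in metres
height = np.clip(6 - 0.05 * ((pts[:, 0] - 10)**2
                             + (pts[:, 1] - 10)**2), 0, None)

# Interpolate heights onto a regular grid, then integrate for the volume.
gx, gy = np.meshgrid(np.linspace(0, 20, 201), np.linspace(0, 20, 201))
gz = griddata(pts, height, (gx, gy), method="cubic", fill_value=0.0)
gz = np.clip(gz, 0.0, None)

cell = (20 / 200) ** 2                                     # grid-cell area, m^2
print("estimated volume (m^3):", gz.sum() * cell)
```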
25

Faria, Matheus Nascif. "Análise e extração das expectativas dos agentes de mercado em torno da data do COPOM." Repositório Institucional do FGV, 2014. http://hdl.handle.net/10438/12044.

Abstract:
This work explores an important concept developed by Breeden and Litzenberger to extract the information contained in interest rate options in the Brazilian market, namely options on the IDI index traded on the Securities, Commodities and Futures Exchange (BM&FBOVESPA), in the days before and after the COPOM decisions on the Selic rate. The method consists of determining the probability distribution from option prices, after computing the implied volatility surface of the IDI options, using two techniques widespread in the market: cubic spline interpolation and the Black (1976) model. The first four moments of the distribution (expected value, variance, skewness and kurtosis) are analysed, along with their respective variations.
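Breeden and Litzenberger's result states that the risk-neutral density is the discounted second derivative of the call price with respect to strike, q(K) = e^(rT) ∂²C/∂K², and a cubic spline fitted through quoted prices provides that derivative directly. A minimal sketch on synthetic Black (1976) prices follows; the parameters and smile are illustrative, not BM&FBOVESPA data.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.stats import norm

# Black (1976) call prices on a futures price F, used as synthetic "quotes".
F, r, T = 100.0, 0.10, 0.5
strikes = np.linspace(60.0, 140.0, 17)
vols = 0.20 + 0.001 * (strikes - F) ** 2 / F   # a mild smile, illustrative

d1 = (np.log(F / strikes) + 0.5 * vols**2 * T) / (vols * np.sqrt(T))
d2 = d1 - vols * np.sqrt(T)
calls = np.exp(-r * T) * (F * norm.cdf(d1) - strikes * norm.cdf(d2))

# Breeden-Litzenberger: density = e^{rT} * d2C/dK2, via a cubic spline.
spline = CubicSpline(strikes, calls)
K = np.linspace(65.0, 135.0, 300)
density = np.exp(r * T) * spline(K, 2)

print("density integrates to ~1:", np.trapz(density, K))
```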
26

Lee, Won Hee. "Bundle block adjustment using 3D natural cubic splines." Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1211476222.

27

Leitch, Sean. "Rasterisation of cubic splines applied to font design." Thesis, Queen's University Belfast, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.484247.

28

Khurana, Saheba. "Numerical method for solving the Boltzmann equation using cubic B-splines." Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/46532.

Abstract:
A numerical method for solving a one-dimensional linear Boltzmann equation is developed using cubic B-splines. Collision kernels are derived for smooth and rough hard spheres. A complete velocity kernel for spherical particles is shown that is reduced to the smooth, rigid sphere case. Similarly, a collision kernel for the rough hard sphere is derived that depends upon velocity and angular velocity. The exact expression for the rough sphere collision kernel is reduced to an approximate expression that averages over the rotational degrees of freedom in the system. The rough sphere collision kernel tends to the smooth sphere collision kernel in the limit when translational-rotational energy exchange is attenuated. Comparisons between the smooth sphere and approximate rough sphere kernels are made. Four different representations for the distribution function are presented. The eigenvalues and eigenfunctions of the collision matrix are obtained for various mass ratios and compared with known values. The distribution functions and their first and second moments are also evaluated for different mass and temperature ratios. This is done to validate the numerical method, and it is shown that this method is accurate and well-behaved. In addition to smooth and rough hard spheres, the collision kernels are used to model the Maxwell molecule. Here, a variety of mass ratios and initial energies are used to test the capability of the numerical method. Massive tracers are set to high initial energies, representing kinetic energy loss experiments with biomolecules in experimental mass spectrometry. The validity of the Fokker-Planck expression for the Rayleigh gas is also tested. Drag coefficients are calculated and compared to analytic expressions. It is shown that these values are well predicted for massive tracers but show a more complex behaviour for small mass ratios, especially at higher energies. The numerical method produced well converged values, even when the tracers were initialized far from equilibrium. In general, this numerical method produces sparse matrices and can easily be generalized to higher dimensions, where it can be cast into efficient parallel algorithms. Future work has been planned that involves the use of this numerical method for a multi-dimensional linear Boltzmann equation.
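The discretisation step, expanding the distribution function in a cubic B-spline basis and reducing operator equations to banded matrix problems via quadrature, can be sketched as follows. The overlap matrix below stands in for the far more involved collision matrices of the thesis, and the velocity interval and knot layout are illustrative.

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic B-spline basis on a velocity interval [0, 5].
k = 3
interior = np.linspace(0.0, 5.0, 12)
t = np.r_[[interior[0]] * k, interior, [interior[-1]] * k]  # clamped knots
n_basis = len(t) - k - 1
basis = [BSpline.basis_element(t[i:i + k + 2]) for i in range(n_basis)]

def phi(i, x):
    """i-th basis function, forced to zero outside its local support."""
    lo, hi = t[i], t[i + k + 1]
    return np.where((x >= lo) & (x < hi), basis[i](x), 0.0)

# Overlap ("mass") matrix S_ij = integral of phi_i*phi_j by Gauss quadrature;
# a collision matrix would be assembled the same way with a kernel inserted.
x, w = np.polynomial.legendre.leggauss(200)
x = 2.5 * (x + 1.0)          # map [-1, 1] -> [0, 5]
w = 2.5 * w
S = np.array([[np.sum(w * phi(i, x) * phi(j, x)) for j in range(n_basis)]
              for i in range(n_basis)])
print("mass matrix is banded:", np.count_nonzero(S), "of", S.size, "entries")
```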
APA, Harvard, Vancouver, ISO, and other styles
29

Crétois, Emmanuelle. "Utilisation de la méthode des pas aléatoires en estimation dans les processus ponctuels." Rouen, 1994. http://www.theses.fr/1994ROUE5022.

Full text
Abstract:
In this work, we estimate the mean density of a Poisson point process by means of a random partition, that is, a partition into intervals whose endpoints are certain points of the sample. We successively use the random-step histogram method, random-knot cubic splines, the random moving window, and the random-bandwidth kernel. We also consider the case of the mixed Poisson process, a particular case of the mixed Poisson process, the negative binomial process, and the case where the mean density of the process is discontinuous. Finally, we extend the random-partition method to the estimation of the thinning function of a thinned Poisson process and of the reduced Palm measures of a Cox process.
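A minimal sketch of the random-step histogram idea, assuming cell boundaries taken at every m-th sample point of a homogeneous Poisson process; all parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch: random-partition histogram estimate of the intensity
# of a homogeneous Poisson process observed on [0, T].
T, lam, m = 10.0, 5.0, 8            # horizon, true intensity, points per cell
n = rng.poisson(lam * T)
pts = np.sort(rng.uniform(0, T, n))

# Cell boundaries are every m-th sample point ("random steps").
edges = np.r_[0.0, pts[m - 1 :: m], T]
counts, _ = np.histogram(pts, bins=edges)
widths = np.diff(edges)
intensity_hat = counts / widths     # piecewise-constant estimate of lambda
```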
APA, Harvard, Vancouver, ISO, and other styles
30

Sarfraz, Muhammad. "The representation of curves and surfaces in computer aided geometric design using rational cubic splines." Thesis, Brunel University, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.257511.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Franěk, Pavel. "Analýza variability srdečního rytmu pomocí rekurentního diagramu." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-220048.

Full text
Abstract:
The aim of this thesis is to describe heart rate variability and to introduce methods for its analysis, i.e., monitoring changes in heart rhythm from electrogram signal recordings using time-domain methods and the recurrence plot. The work describes the quantification methods and the possibilities of their quantifiers in evaluating heart rate variability. It also describes the clinical significance of heart rate variability and the diagnostic potential of changes in heart rate variability caused by ischaemic heart disease. The practical part describes an application built in Matlab to compute the quantifiers of heart rate variability analysis in the time domain using the recurrence plot. The calculation was made from the positions of R waves in electrogram signals of isolated rabbit hearts. The calculated quantifier values of both methods were statistically evaluated and discussed.
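A minimal sketch of the underlying construction, assuming a threshold recurrence matrix built from RR intervals and the recurrence rate as the simplest quantifier; the data are synthetic, not the thesis's Matlab application.

```python
import numpy as np

# Hypothetical sketch of a recurrence plot from a series of RR intervals.
def recurrence_matrix(x, eps):
    """R[i, j] = 1 where |x_i - x_j| < eps (threshold recurrence)."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent points, a basic RQA quantifier."""
    return R.sum() / R.size

rr = 0.25 + 0.02 * np.sin(np.linspace(0, 8 * np.pi, 200))  # synthetic RR data
R = recurrence_matrix(rr, eps=0.01)
print(recurrence_rate(R))
```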
APA, Harvard, Vancouver, ISO, and other styles
32

Hladíková, Hana. "Metody konstrukce výnosové křivky státních dluhopisů na českém dluhopisovém trhu." Doctoral thesis, Vysoká škola ekonomická v Praze, 2008. http://www.nusl.cz/ntk/nusl-71673.

Full text
Abstract:
The zero-coupon yield curve is one of the most fundamental tools in finance and is essential in the pricing of various fixed-income securities. Zero-coupon rates are not observable in the market for a full range of maturities; therefore, an estimation methodology is required to derive zero-coupon yield curves from observable data. When approximating empirical data to create yield curves it is necessary to choose suitable mathematical functions. We discuss the following methods: methods based on cubic spline functions, methods employing linear combinations of Fourier or exponential basis functions, and the parametric model of Nelson and Siegel. The current mathematical apparatus employed for this kind of approximation is outlined. To find the parameters of the models we employ least-squares minimization of computed and observed prices. The theoretical background is applied to the estimation of zero-coupon yield curves derived from the Czech coupon bond market. Application of proper smoothing functions and bond weights is crucial for selecting the method that performs best according to given criteria. The best performance is obtained for B-spline models with smoothing.
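As an illustration of the parametric branch, here is a minimal sketch of fitting the Nelson-Siegel curve by least squares; in the thesis the objective is the price error of coupon bonds, whereas this toy fits made-up zero rates directly.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical sketch: fit the Nelson-Siegel zero-coupon curve
#   y(t) = b0 + b1*(1-exp(-t/tau))/(t/tau)
#           + b2*((1-exp(-t/tau))/(t/tau) - exp(-t/tau))
# to observed zero rates; all data below are made up.
mat = np.array([0.5, 1, 2, 3, 5, 7, 10.0])        # maturities in years
obs = np.array([2.1, 2.3, 2.8, 3.1, 3.5, 3.7, 3.8]) / 100

def nelson_siegel(params, t):
    b0, b1, b2, tau = params
    x = t / tau
    slope = (1 - np.exp(-x)) / x
    return b0 + b1 * slope + b2 * (slope - np.exp(-x))

fit = least_squares(lambda p: nelson_siegel(p, mat) - obs,
                    x0=[0.04, -0.02, 0.01, 1.5])
print(fit.x)   # estimated (beta0, beta1, beta2, tau)
```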
APA, Harvard, Vancouver, ISO, and other styles
33

Ernhagen, Larsson Manfred, and Hampus Swensson. "Procedurell generering av grottsystem med hjälp av kubiska Bézier-splines." Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20186.

Full text
Abstract:
This thesis presents a tool to assist the creation of game levels in a dungeon environment. Creating such levels is often resource-intensive during production and strongly design-driven. To preserve the design aspect while easing the work, the tool uses accessible parameters to produce cave passages for such levels. With user tests, we examine how the tool can make an established method for creating game levels more efficient while still delivering the desired quality. In doing so we aim not only to produce an effective tool, but also to demonstrate a way of using procedural content generation for a new purpose in game design.
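A minimal sketch of the kind of building block involved, assuming a cave centreline drawn as a chain of cubic Bézier segments with randomly perturbed inner control points; the waypoints, jitter and segment count are illustrative, not the tool's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch: a cave centreline as a chain of cubic Bezier segments.
def bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameters t (vectorised)."""
    t = t[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

anchors = np.cumsum(rng.uniform(1, 3, size=(5, 2)), axis=0)  # waypoints
t = np.linspace(0, 1, 32)
path = []
for a, b in zip(anchors[:-1], anchors[1:]):
    d = (b - a) / 3
    jitter = rng.normal(scale=0.4, size=(2, 2))              # cave "wobble"
    path.append(bezier(a, a + d + jitter[0], b - d + jitter[1], b, t))
centreline = np.vstack(path)
```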
APA, Harvard, Vancouver, ISO, and other styles
34

Soares, M. J. "A posteriori corrections for cubic and quintic interpolating splines with applications to the solution of two-point boundary value problems." Thesis, Brunel University, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.373087.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Enge, Leo. "Periodic and Non-Periodic Filter Structures in Lasers." Thesis, KTH, Matematik (Avd.), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288181.

Full text
Abstract:
Communication using fiber optics is an integral part of modern societies, and one of the most important components is the grating filter of a laser. In this report we introduce both the periodic and the non-periodic grating filter and discuss how resonance can arise in these structures. We then provide an exact method for calculating the spectrum of these grating filters and study three different methods for calculating it approximately. The first is the Fourier approximation, which is very simple; for the studied filters the fundamental form of its results is correct, even though the details are not. The second method consists of calculating the spectrum exactly for some values and then using spline interpolation; this method gives satisfactory results for the types of gratings analysed. Finally, a perturbation method is provided for the periodic grating filter, together with an outline of how it can be extended to the non-periodic grating filter. For the studied filters the results of this method are very promising. The perturbation method may also give a deeper understanding of how a filter works, and we therefore conclude that it would be of interest to study it further, while all the studied methods can be useful for computing the spectrum, depending on the required precision.
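A minimal sketch of the second method, assuming a cheap stand-in for the exact spectrum solver: evaluate on a coarse grid, cubic-spline the result, and check the error on a fine grid.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical sketch: evaluate an expensive spectrum on a coarse grid,
# then interpolate with a cubic spline instead of computing every point.
def expensive_spectrum(w):
    return 1.0 / (1.0 + (w - 5.0) ** 2)      # stand-in for the exact solver

coarse = np.linspace(0, 10, 21)
fine = np.linspace(0, 10, 1001)
approx = CubicSpline(coarse, expensive_spectrum(coarse))(fine)
err = np.max(np.abs(approx - expensive_spectrum(fine)))
print(err)
```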
APA, Harvard, Vancouver, ISO, and other styles
36

Malina, Jakub. "Vytvoření interaktivních pomůcek z oblasti 2D počítačové grafiky." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-219924.

Full text
Abstract:
In this master's thesis we focus on the basic properties of computer curves and their practical applicability. We explain how a curve can be understood in general, what polynomial curves are, and how they can be composed. We then focus on Bézier curves, especially the Bézier cubic, discuss in more detail some of the fundamental algorithms used for modelling these curves on computers, and show their practical interpretation. We then explain non-uniform rational B-spline (NURBS) curves and the De Boor algorithm. Finally, we discuss the rasterization of line segments, thick lines, circles and ellipses. The aim of the thesis is the creation of a set of interactive applets simulating some of the methods and algorithms discussed in the theoretical part. These applets facilitate understanding and make teaching more effective.
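A minimal sketch of one such fundamental algorithm, De Casteljau's evaluation of a Bézier curve by repeated linear interpolation; the control points are chosen arbitrarily.

```python
import numpy as np

# Hypothetical sketch of De Casteljau's algorithm, the classic way to
# evaluate a Bezier cubic by repeated linear interpolation.
def de_casteljau(control, t):
    """Evaluate a Bezier curve with the given control points at t."""
    pts = np.asarray(control, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]   # one round of lerps
    return pts[0]

cubic = [(0, 0), (1, 2), (3, 2), (4, 0)]         # four control points
print(de_casteljau(cubic, 0.5))                  # point at the midpoint
```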
APA, Harvard, Vancouver, ISO, and other styles
37

Noda, Gleyce Rocha. "Análise de diagnóstico em modelos semiparamétricos normais." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-24062013-075143/.

Full text
Abstract:
In this master's dissertation we present diagnostic methods for semiparametric models under normal errors, especially semiparametric models with one nonparametric explanatory variable, also known as partial linear models. We use cubic splines for the nonparametric fit, and penalized likelihood functions are applied to obtain maximum likelihood estimators with their respective approximate standard errors. The properties of the hat matrix are also derived for this kind of model, with the aim of using it as a tool for diagnostic analysis. Normal probability plots with simulated envelopes were also adapted to evaluate model suitability. Finally, two illustrative examples are presented in which the fits are compared with usual normal linear models, in the context of both the simple normal additive model and the partial linear model.
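A minimal sketch of the hat-matrix idea for a linear smoother, assuming SciPy's cubic smoothing spline and a fixed penalty; the columns of the smoother matrix are obtained by smoothing unit vectors, and its trace gives the effective degrees of freedom used in diagnostics.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(2)

# Hypothetical sketch: the hat (smoother) matrix of a cubic smoothing spline.
x = np.sort(rng.uniform(0, 1, 40))
lam = 1e-4
H = np.column_stack([make_smoothing_spline(x, e, lam=lam)(x)
                     for e in np.eye(len(x))])
edf = np.trace(H)          # effective number of parameters
leverage = np.diag(H)      # pointwise leverage, as in diagnostic analysis
```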
APA, Harvard, Vancouver, ISO, and other styles
38

Carter, Drew Davis. "Characterisation of cardiac signals using level crossing representations." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/130760/1/Drew_Carter_Thesis.pdf.

Full text
Abstract:
This study examines a type of event-based sampling known as Level Crossing - its behaviour when applied to noisy signals, and an application to cardiac arrhythmia detection. Using a probabilistic approach, it presents a mathematical description of events sampled from noisy signals, and uses the model to estimate characteristics of the underlying clean signal. It evaluates the use of segments of polynomials, calculated from the Level Crossing samples of real cardiac signals, as features for machine learning algorithms to identify various types of arrhythmia.
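A minimal sketch of level-crossing sampling itself, assuming linear interpolation between samples to locate crossing times; the levels and the signal are made up.

```python
import numpy as np

# Hypothetical sketch of level-crossing sampling: record an event whenever
# the signal crosses one of a fixed set of levels.
def level_crossings(t, x, levels):
    events = []
    for lo, hi, t0, t1 in zip(x[:-1], x[1:], t[:-1], t[1:]):
        for L in levels:
            if (lo - L) * (hi - L) < 0:                     # crossed level L
                tc = t0 + (L - lo) * (t1 - t0) / (hi - lo)  # linear interp
                events.append((tc, L))
    return sorted(events)

t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 3 * t)
print(len(level_crossings(t, x, levels=[-0.5, 0.0, 0.5])))
```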
APA, Harvard, Vancouver, ISO, and other styles
39

Oki, Fabio Hideto. "Modelos semiparamétricos com resposta binomial negativa." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-26082015-161139/.

Full text
Abstract:
The main objective of this work is to discuss estimation and diagnostics in semiparametric models with negative binomial response, more specifically, regression models with negative binomial response in which one of the continuous explanatory variables is modeled nonparametrically. We begin with an illustrative example, analyzed under parametric negative binomial regression models. The semiparametric models are then introduced, and aspects of estimation, inference and model selection are discussed. A chapter is dedicated to diagnostic procedures, such as the development of leverage and influence measures under the case-deletion and local influence approaches, as well as residual analysis. The illustrative example is reanalyzed under the semiparametric viewpoint and some conclusions are presented.
APA, Harvard, Vancouver, ISO, and other styles
40

Netto, Junior José Luis da Silva. "Desigualdade regional de renda e migrações : mobilidade intergeracional educacional e intrageracional de renda no Brasil." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/14711.

Full text
Abstract:
This thesis analyzes the relationship between educational variables and income inequality in Brazil, and its repercussions for intergenerational educational mobility and intragenerational income mobility. The specific goal is to verify how educational and income mobility differ across regions and between migrants and non-migrants. The results suggest a positive, non-linear relationship between income inequality and human capital inequality. In areas where human capital inequality is higher, the influence of parents in the lowest educational strata is large compared with regions where educational inequality is lower. In general, in the poorer regions and states, less qualified parents have greater influence over the educational trajectory of their children. In parallel, the region whose states have the highest indicators of educational inequality shows the lowest income mobility among the regions analyzed; income mobility is higher in the Center and Southeast regions and lower in the Northeast. Migrant parents with low schooling have less influence over their children's education than their counterparts in the areas of origin. Finally, migrants show higher income mobility than the population of their areas of origin, which suggests positive selectivity among them.
APA, Harvard, Vancouver, ISO, and other styles
41

Daniel, Jérémie. "Trajectory generation and data fusion for control-oriented advanced driver assistance systems." Phd thesis, Université de Haute Alsace - Mulhouse, 2010. http://tel.archives-ouvertes.fr/tel-00608549.

Full text
Abstract:
Since the origin of the automobile at the end of the 19th century, traffic flow has been subject to constant growth and, unfortunately, a constant increase in road accidents. Research studies, such as the one performed by the World Health Organization, show alarming figures for the injuries and fatalities due to these accidents. To reduce these figures, one solution lies in the development of Advanced Driver Assistance Systems (ADAS), whose purpose is to help the driver in the driving task. This research topic has proved very dynamic and productive during the last decades. Indeed, several systems such as the Anti-lock Braking System (ABS), Electronic Stability Program (ESP), Adaptive Cruise Control (ACC), Parking Manoeuvre Assistant (PMA) and Dynamic Bending Light (DBL) are already available on the market, and their benefits are now recognized by most drivers. This first generation of ADAS is usually designed to perform a specific task in the Controller/Vehicle/Environment framework and thus requires only microscopic information, i.e. sensors giving only local information about an element of the vehicle or of its environment. By contrast, the next ADAS generation will have to consider more aspects, i.e. information and constraints about the vehicle and its environment. Indeed, as they are designed to perform more complex tasks, they need a global view of the road context and the vehicle configuration. For example, longitudinal control requires information about the road configuration (straight line, bend, etc.) and about the eventual presence of other road users (vehicles, trucks, etc.) to determine the best reference speed. [...]
APA, Harvard, Vancouver, ISO, and other styles
42

Sun, Peng. "Semiparametric Bayesian Approach using Weighted Dirichlet Process Mixture For Finance Statistical Models." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/78189.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Feng, Zijie. "Machine learning methods for seasonal allergic rhinitis studies." Thesis, Linköpings universitet, Statistik och maskininlärning, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-173090.

Full text
Abstract:
Seasonal allergic rhinitis (SAR) is a disease driven by both environmental allergens and genetic factors. Some researchers have studied SAR with traditional genetic methodologies. As technology develops, a new technique called single-cell RNA sequencing (scRNA-seq) has emerged, which generates high-dimensional data. We apply two machine learning (ML) algorithms, random forest (RF) and partial least squares discriminant analysis (PLS-DA), for cell source classification and gene selection on SAR scRNA-seq time-series data from three allergic patients and four healthy controls, denoised by single-cell variational inference (scVI). We additionally propose a new fitting method, combining the bootstrap with cubic smoothing splines, to fit the averaged gene expression per cell from different populations. In summary, we find that both RF and PLS-DA provide high classification accuracy, with RF preferable given its stable performance and strong gene-selection ability. Based on our analysis, 10 genes have discriminatory power to classify cells of allergic patients and healthy controls at all time points. Although no literature was found showing direct connections between these 10 genes and SAR, potential associations are indirectly supported by some studies. This suggests the possibility of alerting allergic patients before a disease outbreak based on their genetic information. Meanwhile, our experimental results indicate that ML algorithms may discover relationships between genes and SAR that traditional techniques miss, which needs further genetic analysis in the future.
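A minimal sketch of the proposed fitting idea, assuming bootstrap resamples smoothed with SciPy's cubic smoothing spline and summarised by a pointwise band; the time grid and expression profile are synthetic.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(3)

# Hypothetical sketch of bootstrap + cubic smoothing splines.
t = np.linspace(0, 10, 30)                          # time points (made up)
expr = np.sin(t / 2) + rng.normal(0, 0.3, t.size)   # synthetic expression

grid = np.linspace(0, 10, 200)
fits = []
for _ in range(200):
    idx = np.sort(rng.integers(0, t.size, t.size))  # bootstrap resample
    tb, eb = t[idx], expr[idx]
    keep = np.r_[True, np.diff(tb) > 0]             # spline needs strict order
    fits.append(make_smoothing_spline(tb[keep], eb[keep])(grid))

fits = np.array(fits)
mean_curve = fits.mean(axis=0)
band = np.percentile(fits, [2.5, 97.5], axis=0)     # pointwise 95% band
```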
APA, Harvard, Vancouver, ISO, and other styles
44

Lacourt, Aude. "Mésothéliome : étiologie professionnelle à partir d’enquêtes cas-témoins françaises." Thesis, Bordeaux 2, 2010. http://www.theses.fr/2010BOR21738/document.

Full text
Abstract:
Pleural mesothelioma is considered highly specific to asbestos exposure; however, some aspects of the etiology of this disease have not been well characterized. The objectives of this study are: (i) to estimate the dose-response relationship between occupational exposure to asbestos fibres and the occurrence of pleural mesothelioma according to different temporal exposure indicators; (ii) to study the effect of occupational exposure to mineral wool and to alveolar dust of free crystalline silica on the risk of pleural mesothelioma; and (iii) to identify occupations and industries at risk of pleural mesothelioma from data collected over a twenty-year period. Cases came from a French case-control study conducted in 1987-1993 and from the French National Mesothelioma Surveillance Program in 1998-2006 (1,199 males). Population controls were frequency-matched by sex and year of birth (2,378 males). Occupational exposure to asbestos, mineral wool and silica was evaluated with job-exposure matrices. Dose-response relationships were estimated using logistic regression models, and their form was obtained using restricted cubic spline functions. While the dose-response relationship with asbestos is well confirmed (particularly at the lowest doses), this study provides new results on time-effect relationships (the role of time since last exposure, or the effect of age at first exposure). It also opens up new prospects on the role of co-exposures (mineral wool) and identifies new at-risk activities for pleural mesothelioma, such as motor vehicle mechanics.
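A minimal sketch of the restricted cubic spline basis (in Harrell's linear-tail form) that such dose-response analyses feed into a logistic regression; the knots and doses are made up.

```python
import numpy as np

# Hypothetical sketch of a restricted cubic spline basis (Harrell's form),
# as used to model a dose-response curve inside a logistic regression.
def rcs_basis(x, knots):
    """Linear-tail restricted cubic spline basis: x plus k-2 nonlinear terms."""
    x = np.asarray(x, dtype=float)
    k = knots
    p = lambda u: np.maximum(u, 0.0) ** 3
    cols = [x]
    for j in range(len(k) - 2):
        cols.append(p(x - k[j])
                    - p(x - k[-2]) * (k[-1] - k[j]) / (k[-1] - k[-2])
                    + p(x - k[-1]) * (k[-2] - k[j]) / (k[-1] - k[-2]))
    return np.column_stack(cols)

dose = np.linspace(0, 40, 100)                 # cumulative exposure (made up)
X = rcs_basis(dose, knots=[2.0, 10.0, 25.0, 38.0])
# X can now enter a logistic model (e.g. statsmodels' Logit) with an intercept.
```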
APA, Harvard, Vancouver, ISO, and other styles
45

Relvas, Carlos Eduardo Martins. "Modelos parcialmente lineares com erros simétricos autoregressivos de primeira ordem." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-28052013-182956/.

Full text
Abstract:
In this master's dissertation, we present symmetric partially linear models with AR(1) errors, which generalize normal partially linear models to autocorrelated errors following an AR(1) structure and a symmetric distribution instead of the normal one. Among the symmetric distributions we can consider distributions with heavier tails than the normal, controlling the kurtosis and down-weighting outlying observations in the estimation process. Parameter estimation is carried out by penalized likelihood using the score functions and the expected Fisher information matrix, all of which are derived in this work. The effective degrees of freedom and asymptotic results are also presented, as well as diagnostic procedures, highlighting the normal curvature of local influence under different perturbation schemes and residual analysis. An application with real data is given for illustration.
APA, Harvard, Vancouver, ISO, and other styles
46

Dhabale, Ashwin. "Impact Angle Constrained Guidance Using Cubic Splines." Thesis, 2015. http://etd.iisc.ernet.in/2005/3658.

Full text
Abstract:
In this thesis the cubic spline guidance law and its variants are derived. A detailed analysis is carried out to find the initial conditions for successful interception. The results are applied to three dimensional guidance design and for solving waypoint following problems. The basic cubic spline guidance law is derived for intercepting a stationary target at a desired impact angle in a surface-to-surface engagement scenario. The guidance law is obtained using an inverse method, from a cubic spline curve based trajectory. For overcoming the drawbacks of the basic cubic spline guidance law, it is modified by introducing an additional parameter. This modification has an interesting feature that the guidance command can be obtained using a single cubic spline polynomial even for impact angles greater than π/2, while resulting in substantial improvement in the guidance performance in terms of lateral acceleration demand and length of the trajectory. For imparting robustness to the cubic spline guidance law, in the presence of uncertainties and acceleration saturation, an explicit guidance expression is also derived. A comprehensive capturability study of the proposed guidance law is carried out. The capturability for the cubic spline guidance law is defined in terms of the set of all feasible initial conditions for successful interception. This set is analytically derived and its dependence on various factors, such as initial engagement geometry and interceptor capability, are also established. The basic cubic spline guidance and its variants are also derived for a three dimensional scenario. The novelty of the present work lies in the particular representation of the three dimensional cubic spline curve and the adoption of the analytical results available for two dimensional cubic spline guidance law. This enables selection of the boundary condition at launch for given terminal boundary condition and also in avoiding the singularities associated with the inverse method based guidance laws. For establishing the feasibility of the guidance laws in the real world, the rigid body dynamics of the interceptor is presented as a 6 degrees-of-freedom model. Further, using a simplified model, elementary autopilots are also designed. The successful interception of the target in the presence of the rigid body dynamics proves practical applicability of the cubic spline based guidance laws. Finally, the theory developed in the first part of the thesis is applied to solve the waypoint following problem. A smooth path is designed for transition of vehicle velocity from incoming to outgoing direction. The approach developed is similar to Dubins’ path, as it comprises line–cubic spline–line segments. The important feature of this method is that the cubic spline segments are fitted such that the path curvature is bounded by a pre-specified constrained value and the acceleration demand for following the smooth path obtained by this method, gradually increases to the maximum value and then decreases. This property is advantageous from a practical point of view. All the results obtained are verified with the help of numerical simulations which are included in the thesis. The proposed cubic spline guidance law is conceptually simple, does not use linearised kinematic equations, is independent of time-to-go estimates, and is also computationally inexpensive.
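A minimal sketch of the inverse-method idea behind the basic law: fit a cubic between launch and impact matching both flight-path angles, then read the commanded heading and a curvature proxy for lateral acceleration along it. All states and numbers are illustrative, not the thesis's derivation.

```python
import numpy as np

# Hypothetical sketch of the inverse method on a cubic trajectory.
x0, y0, gamma0 = 0.0, 0.0, np.deg2rad(30)     # launch state
xf, yf, gammaf = 10.0, 0.0, np.deg2rad(-80)   # impact point and impact angle

# Solve for y(x) = a3 x^3 + a2 x^2 + a1 x + a0 with the four constraints.
A = np.array([[x0**3, x0**2, x0, 1],
              [3*x0**2, 2*x0, 1, 0],
              [xf**3, xf**2, xf, 1],
              [3*xf**2, 2*xf, 1, 0]])
b = np.array([y0, np.tan(gamma0), yf, np.tan(gammaf)])
a3, a2, a1, a0 = np.linalg.solve(A, b)

x = np.linspace(x0, xf, 100)
slope = 3*a3*x**2 + 2*a2*x + a1
heading = np.arctan(slope)                    # commanded flight-path angle
curvature = np.abs(6*a3*x + 2*a2) / (1 + slope**2) ** 1.5  # ~ latax demand
```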
APA, Harvard, Vancouver, ISO, and other styles
47

Yu, Chao Ya, and 游詔雅. "A NOTE ON MONOTONE PIECEWISE CUBIC SPLINE." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/83784414626315526206.

Full text
Abstract:
Master's thesis
Feng Chia University
Institute of Applied Mathematics
82
A physical quantity is often known to have a certain behaviour, monotonic increasing (or decreasing), as a function of other quantities. Thus, there is a need for algorithms which preserve the monotonicity of monotonic data while producing physically reasonable curves and surfaces. Fritsch and Carlson [4] derived necessary and sufficient conditions for a cubic spline to be monotonic on a set of monotonic data. Those conditions may form a basis for developing a numerical method that produces a monotonic interpolant and approximately represents the physical reality.
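A minimal sketch of the issue at stake, assuming SciPy's PCHIP interpolator as a stand-in for a Fritsch-Carlson-style monotone cubic: the ordinary cubic spline can lose monotonicity on monotone data, while the monotone cubic cannot.

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Hypothetical illustration: monotone data with a sharp rise.
x = np.array([0, 1, 2, 3, 4, 5.0])
y = np.array([0, 0.05, 0.1, 3.0, 3.05, 3.1])

xs = np.linspace(0, 5, 501)
spline = CubicSpline(x, y)(xs)
pchip = PchipInterpolator(x, y)(xs)

print(np.any(np.diff(spline) < 0))   # expected True: the spline overshoots
print(np.any(np.diff(pchip) < 0))    # expected False: PCHIP stays monotone
```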
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Lung-Jen, and 王隆仁. "A Fast Cubic-Spline Interpolation and Its Applications." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/23129972138898368567.

Full text
Abstract:
Doctoral dissertation
National Sun Yat-sen University
Department of Computer Science and Engineering
89
In this dissertation, a new cubic-spline interpolation (CSI) for both one-dimensional and two-dimensional signals is developed for sub-sampling signal, image and video compression data. This new interpolation scheme, based on the least-squares method with a cubic-spline function, can be implemented by the fast Fourier transform (FFT). The result is a simpler and faster interpolation scheme than can be obtained by other conventional means. It is shown by computer simulation that the new CSI yields a very accurate algorithm for smoothing; linear interpolation, linear-spline interpolation, cubic-convolution interpolation and cubic B-spline interpolation tend to be inferior in performance. It is also shown that the CSI scheme can be performed by a fast and efficient computation. The proposed method uses a simpler technique in the decimation process and requires substantially fewer additions and multiplications than the original CSI algorithm. Moreover, a new type of overlap-save scheme is utilized to solve the boundary-condition problems that occur between two neighboring subimages in the actual image. It is further shown that a very efficient 9-point Winograd discrete Fourier transform (Winograd DFT) can be used to replace the FFT needed to implement the CSI scheme. The proposed fast CSI scheme is then used along with the Joint Photographic Experts Group (JPEG) standard to design a modified JPEG encoder-decoder for image data compression. As a consequence, for higher compression ratios the modified JPEG encoder-decoder obtains a better quality of reconstructed image and requires less computational time than both the conventional JPEG method and the America on Line (AOL) algorithm. Finally, the new fast CSI scheme is applied to the JPEG 2000, MPEG-1 and MPEG-2 algorithms. Computer simulation shows that the modified JPEG 2000 encoder-decoder speeds up encoding and decoding relative to the JPEG 2000 standard, while still obtaining reconstructed-image quality similar to the standard at high compression ratios. Additionally, video reconstructed with the modified MPEG encoder-decoder shows better quality than the conventional MPEG-1 and MPEG-2 algorithms at high compression ratios, i.e. low bit rates.
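The least-squares/FFT scheme itself is the thesis's contribution, so the following is only a generic illustration of spline-based sub-sampling and reconstruction: keep every fourth sample, then rebuild the signal with an ordinary cubic spline.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical sketch of sub-sampling followed by spline reconstruction.
n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 64) + 0.3 * np.sin(2 * np.pi * t / 17)

step = 4
t_sub, s_sub = t[::step], signal[::step]          # "compressed" samples
reconstructed = CubicSpline(t_sub, s_sub)(t)      # decoder-side interpolation
mse = np.mean((reconstructed - signal) ** 2)
print(mse)
```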
APA, Harvard, Vancouver, ISO, and other styles
49

Zeng, Zheng. "Multigrid and cubic spline collocation methods for advection equations." 2005. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=362354&T=F.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Ding, Ching-Yih, and 丁滄益. "The Least P Power Error Method Using Cubic Spline." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/57089542320512406965.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Aeronautics and Astronautics
88
This thesis develops the least p-power error method using cubic spline curves with fixed locations of the control points. The present method ignores a small fraction of large-scatter data and produces a curve with less oscillatory behaviour than the least squares method using cubic spline curves. To suppress the oscillatory behaviour, a second-order smoothing term is also added to the error-measuring function. Numerical tests show that the number and locations of the control points must be chosen properly; otherwise, the final curve may deviate significantly from the original data.
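A minimal sketch of the least p-power idea, assuming a cubic B-spline with fixed knots, an exponent p < 2 to down-weight outliers, and a crude second-difference smoothing term; all choices are illustrative, not the thesis's formulation.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Hypothetical sketch: minimise sum |y - s(x)|^p over the coefficients of a
# cubic B-spline with fixed knots, plus a small smoothing penalty.
x = np.linspace(0, 1, 80)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.05, x.size)
y[::13] += rng.choice([-1, 1], y[::13].size) * 1.5          # outliers

k = 3
interior = np.linspace(0.1, 0.9, 8)
t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]       # fixed knots
n_coef = len(t) - k - 1

def objective(c, p=1.2, mu=1e-2):
    r = y - BSpline(t, c, k)(x)
    smooth = np.sum(np.diff(c, 2) ** 2)                     # crude smoothing
    return np.sum(np.abs(r) ** p) + mu * smooth

res = minimize(objective, x0=np.zeros(n_coef), method="Powell")
fit = BSpline(t, res.x, k)(x)
```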
APA, Harvard, Vancouver, ISO, and other styles