Dissertations / Theses on the topic 'Smoothing'

To see the other types of publications on this topic, follow the link: Smoothing.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Smoothing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Lawton, Richard. "Exponential smoothing methods." Thesis, University of Bath, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.340928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Svetunkov, Ivan. "Complex exponential smoothing." Thesis, Lancaster University, 2016. http://eprints.lancs.ac.uk/82749/.

Full text
Abstract:
Exponential smoothing is one of the most popular forecasting methods in practice. It has been used and researched for more than half a century, starting as an ad-hoc forecasting method and developing into a family of state-space models. Still, all exponential smoothing methods are based on time series decomposition and the use of components such as "level", "trend", "seasonality" and "error". It is assumed that these components may vary from one time series to another and take different forms depending on data characteristics. This makes their definition arbitrary, and in fact there is no single way of identifying these components. At the same time, the introduction of different types of exponential smoothing components implies that a model selection procedure is needed: a researcher needs to select an appropriate model out of 30 different types, either manually or automatically, for every time series analysed. However, several recent studies show that an underlying statistical model may have a form completely different from the one assumed by specific exponential smoothing models. These modelling questions motivate our research. We propose a model without strictly defined "level", "trend" and "seasonality". The model greatly simplifies the selection procedure, distinguishing only between seasonal and non-seasonal time series. Although we call it "Complex Exponential Smoothing" (CES), due to the use of complex-valued functions, its usage simplifies the forecasting procedure. In this thesis we first discuss the main properties of CES and propose an underlying statistical model. We then extend it to take seasonality into account and conduct experiments on real data to compare its performance with several well-known univariate forecasting models. We proceed to discuss parameter estimation for exponential smoothing and propose a "Trace Forecast Likelihood" function that allows estimating CES components more efficiently. Finally, we show that Trace Forecast Likelihood has desirable statistical properties, is connected to shrinkage, and is generally advisable to use with any univariate model.
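For context, a minimal sketch of classical simple exponential smoothing, the starting point of the family this thesis generalises (this is not the thesis's CES method; the smoothing constant alpha is an illustrative choice):

```python
import numpy as np

def simple_exponential_smoothing(y, alpha=0.3):
    """Classical simple exponential smoothing: the level is an
    exponentially weighted average of all past observations."""
    level = np.empty(len(y))
    level[0] = y[0]                          # initialise with the first observation
    for t in range(1, len(y)):
        level[t] = alpha * y[t] + (1 - alpha) * level[t - 1]
    return level

y = np.array([10.0, 12.0, 11.0, 13.0, 12.5, 14.0])
print(simple_exponential_smoothing(y))       # last value = one-step-ahead forecast
```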
APA, Harvard, Vancouver, ISO, and other styles
3

Guevara, Prieto Ivonne Alexandra. "Quantum State Smoothing." Thesis, Griffith University, 2016. http://hdl.handle.net/10072/367795.

Full text
Abstract:
Smoothing is an estimation method whereby a classical state (probability distribution for classical variables) at a given time is conditioned on all-time (both past and future) observations. This method has been well developed in classical systems, but its application to quantum systems has only recently begun to be explored. Previous works have used the term "quantum smoothing" to mean estimating classical parameters (Phys. Rev. Lett. 102, 250403 (2009)), which is essentially classical smoothing in which the noisy observation of the classical parameters is mediated by a quantum system. In this thesis, I define a smoothed quantum state for a partially monitored open quantum system, conditioned on an all-time monitoring-derived record. I calculate the smoothed distribution for a hypothetical unobserved record Ut which, when added to the real record O, would complete the monitoring, yielding a pure-state "quantum trajectory". Averaging the pure state over this smoothed distribution yields the (mixed) smoothed quantum state S that is typically purer than the state F conditioned only on the past. I also study how the choice of actual unravelling affects the purity increment over that of the conventional (filtered) state conditioned only on the past record.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Natural Sciences
Science, Environment, Engineering and Technology
APA, Harvard, Vancouver, ISO, and other styles
4

Ni, Kai. "Tectonic smoothing and mapping." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41072.

Full text
Abstract:
Large-scale mapping has become key to numerous applications, e.g. simultaneous localization and mapping (SLAM) for autonomous robots. Despite the success of many SLAM projects, there are still some challenging scenarios in which most current algorithms cannot deliver an exact solution fast enough. One of these challenges is the size of SLAM problems, which has increased by several orders of magnitude over the last decade. Another challenge for SLAM problems is the large amount of noise baked into the measurements, which often yields poor initializations and slows down or even derails the optimization. Urban 3D reconstruction is another popular application for large-scale mapping and has received considerable attention recently from the computer vision community. High-quality 3D models are useful in various successful cartographic and architectural applications, such as Google Earth or Microsoft Live Local. At the heart of urban reconstruction problems is structure from motion (SfM). Due to the wide availability of cameras, especially on handheld devices, SfM is becoming a more and more crucial technique for handling large collections of images. In this thesis, I present a novel batch algorithm, namely Tectonic Smoothing and Mapping (TSAM). I show that the original SLAM graph can be recursively partitioned into multi-level submaps using the nested dissection algorithm, which leads to the cluster tree, a powerful graph representation. By employing nested dissection, the algorithm greatly reduces the dependencies between two subtrees, and the optimization of the original graph can be done using bottom-up inference along the corresponding cluster tree. To speed up the computation, a base node is introduced for each submap and is used to represent the rigid transformation of the submap in the global coordinate frame. As a result, the optimization moves the base nodes rather than the actual submap variables. I also show that TSAM can be applied to the SfM problem as well, in which a hypergraph representation is employed to capture the pairwise constraints between cameras. The hierarchical partitioning based on the hypergraph not only yields a cluster tree as in the SLAM problem but also forces the resulting submaps to be nonsingular. I demonstrate the TSAM algorithm using various simulation and real-world data sets.
APA, Harvard, Vancouver, ISO, and other styles
5

Yake, Bronson Thomas. "Self-Smoothing Functional Estimation." MSSTATE, 2002. http://sun.library.msstate.edu/ETD-db/theses/available/etd-09032002-090546/.

Full text
Abstract:
Analysis of measured data is often required when there is no deep understanding of the mathematics that accurately describes the process being measured. Additionally, realistic estimation of the derivative of measured data is often useful. Current techniques of accomplishing this type of data analysis are labor intensive, prone to significant error, and highly dependent on the expertise of the engineer performing the analysis. The "Self-Smoothing Functional Estimation" (SSFE) algorithm was developed to automate the analysis of measured data and to provide a reliable basis for the extraction of derivative information. In addition to the mathematical development of the SSFE algorithm, an example is included in Chapter III that illustrates several of the innovative features of the SSFE and associated algorithms. Conclusions are drawn about the usefulness of the algorithm from an engineering perspective and additional possible uses are mentioned.
APA, Harvard, Vancouver, ISO, and other styles
6

Kaess, Michael. "Incremental smoothing and mapping." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26572.

Full text
Abstract:
Thesis (Ph.D)--Computing, Georgia Institute of Technology, 2009.
Committee Chair: Dellaert, Frank; Committee Member: Bobick, Aaron; Committee Member: Christensen, Henrik; Committee Member: Leonard, John; Committee Member: Rehg, James. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
7

Saleh, Mojtaba. "Smoothing of Photovoltaic Generation." Thesis, Curtin University, 2017. http://hdl.handle.net/20.500.11937/66584.

Full text
Abstract:
This study investigates solutions for eliminating the intermittency of photovoltaic (PV) generation resources in electric grids. It addresses battery storage backup as a conventional solution for absorbing the sudden dips and rises in PV generation, which are mainly attributed to passing clouds. This approach is further improved by incorporating cloud-prediction information produced by processing real-time sky images, which proved to reduce reliance on the storage backup.
APA, Harvard, Vancouver, ISO, and other styles
8

Chakravarty, Subhashish. "Bayesian surface smoothing under anisotropy." Diss., University of Iowa, 2007. http://ir.uiowa.edu/etd/169.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Simón, Colomar David, John-Olof Nilsson, and Peter Händel. "Smoothing for ZUPT-aided INSs." KTH, Signalbehandling, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-107414.

Full text
Abstract:
Due to the recursive and integrative nature of zero-velocity-update-aided (ZUPT-aided) inertial navigation systems (INSs), the error covariance increases throughout each ZUPT-less period, followed by a drastic decrease and large state estimate corrections as soon as ZUPTs are applied. For dead-reckoning with foot-mounted inertial sensors, this gives undesirable discontinuities in the estimated trajectory at the end of each step. However, for many applications some degree of lag can be tolerated, and the information provided by the ZUPTs at the end of a step can be made available throughout the step, eliminating the discontinuities. For this purpose, we propose a smoothing algorithm for ZUPT-aided INSs. For near real-time applications, smoothing is applied to the data in a step-wise manner, requiring the suggested varying-lag segmentation rule. For complete off-line processing, full data set smoothing is examined. Finally, the consequences and impact of smoothing are analyzed and quantified based on real data.


APA, Harvard, Vancouver, ISO, and other styles
10

Eichmann, Katrin. "Smoothing stochastic bang-bang problems." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://dx.doi.org/10.18452/16799.

Full text
Abstract:
Motivated by the problem of how to optimally execute a large stock position, this thesis considers a stochastic control problem with two special properties. First, the control problem has an exponential delay in the control variable, so that the present value of the state process depends on the moving average of past control decisions. Second, the coefficients are assumed to be linear in the control variable. It is shown that a control problem with these properties generates a mathematically challenging problem: it becomes a stochastic control problem whose solution (if one exists) has a bang-bang nature. The resulting discontinuity of the optimal solution creates difficulties in proving the existence of an optimal solution and in solving the problem with numerical methods. A sequence of stochastic control problems with state processes is constructed whose diffusion matrices are invertible and approximate the original degenerate diffusion matrix. The cost functionals of the sequence of control problems are convex approximations of the original linear cost functional. To prove the convergence of the solutions, the control problems are written in the form of forward-backward stochastic differential equations (FBSDEs). It is then shown that the solutions of the FBSDEs corresponding to the constructed sequence of control problems converge in law, at least along a subsequence. By assuming convexity of the coefficients, it is then possible to construct from this limit an admissible control process which, for an appropriate reference stochastic system, is optimal for the original stochastic control problem. In addition to proving the existence of an optimal (bang-bang) solution, this yields a smooth approximation of the discontinuous optimal bang-bang solution, which can be used for the numerical solution of the problem. These results are then applied to the optimal execution problem in the form of numerical simulations.
APA, Harvard, Vancouver, ISO, and other styles
11

Chen, Yi Xin. "Customer concentration and sales smoothing." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3959239.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Engelke, Stephan. "Smoothing-type methods for linear programs." [S.l.] : [s.n.], 2001. http://www.sub.uni-hamburg.de/disse/455/smooth.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Meise, Monika. "Residual based selection of smoothing parameters." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=974404551.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Sancar, Yilmaz Aysun. "Edge Preserving Smoothing With Directional Consistency." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12608511/index.pdf.

Full text
Abstract:
Images may be degraded by random error, called noise. Noise may occur during image capture, transmission or processing, and its elimination is achieved by smoothing filters. Linear smoothing filters blur edges, but edges are important characteristics of images and must be preserved. Various edge-preserving smoothing filters have been proposed in the literature. In this thesis, the most common smoothing and edge-preserving smoothing filters are discussed, and a new method is proposed by modifying the Ambrosio-Tortorelli approximation of the Mumford-Shah model. The new method takes edge direction consistency into account and produces sharper results at comparable scales.
APA, Harvard, Vancouver, ISO, and other styles
15

Kuivi, Mawuli Foli. "Estimation of principal points using smoothing." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/mq22619.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Sneddon, William Gary. "Smoothing techniques in underdetermined linear models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0027/NQ36562.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Huang, Fuping. "Nonparametric censored regression by smoothing splines." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/NQ61977.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Titrek, Fatih. "Smoothing and differentiation of dynamic data." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611924/index.pdf.

Full text
Abstract:
Smoothing is an important part of the pre-processing step in signal processing: a signal purified of noise as much as possible is necessary to achieve our aim. There are many smoothing algorithms that give good results on stationary data, but these algorithms don't give the expected results on non-stationary data. Studying acceleration data is an effective way to see whether the smoothing is successful: even the small amount of noise present in the displacement data severely affects the acceleration data, which are obtained by taking the second derivative of the displacement data. In this thesis, some linear and non-linear smoothing algorithms are analyzed on a non-stationary data set.
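To see why double differentiation makes smoothing critical, here is a small synthetic sketch of how even tiny displacement noise is amplified in the second derivative (signal and noise level are illustrative choices, not the thesis's data):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 1.0, dt)
displacement = np.sin(2 * np.pi * t)                       # smooth "true" displacement
noisy = displacement + np.random.normal(0, 1e-3, t.size)   # tiny measurement noise

# Differentiating twice amplifies noise of size eps on the order of eps / dt**2,
# so a 1e-3 displacement error becomes an error comparable to the signal itself.
acc_true = np.gradient(np.gradient(displacement, dt), dt)
acc_noisy = np.gradient(np.gradient(noisy, dt), dt)
print("max acceleration error:", np.abs(acc_noisy - acc_true).max())
```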
APA, Harvard, Vancouver, ISO, and other styles
19

Taghavi, Ehsan, Fredrik Lindsten, Lennart Svensson, and Thomas B. Schön. "Adaptive stopping for fast particle smoothing." Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93461.

Full text
Abstract:
Particle smoothing is useful for offline state inference and parameter learning in nonlinear/non-Gaussian state-space models. However, many particle smoothers, such as the popular forward filter/backward simulator (FFBS), are plagued by a quadratic computational complexity in the number of particles. One approach to tackle this issue is to use rejection-sampling-based FFBS (RS-FFBS), which asymptotically reaches linear complexity. In practice, however, the constants can be quite large and the actual gain in computational time limited. In this contribution, we develop a hybrid method, governed by an adaptive stopping rule, in order to exploit the benefits, but avoid the drawbacks, of RS-FFBS. The resulting particle smoother is shown in a simulation study to be considerably more computationally efficient than both FFBS and RS-FFBS.
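For orientation, a minimal sketch of plain FFBS on a toy linear-Gaussian model: the backward pass below reweights all N particles at every step, which is exactly the O(N^2) cost that RS-FFBS and the paper's adaptive hybrid aim to reduce (model, particle count and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 200               # time steps and particle count (illustrative)
a, q, r = 0.9, 1.0, 1.0      # toy model: x_t = a*x_{t-1} + N(0,q), y_t = x_t + N(0,r)

x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), T)

# Forward pass: bootstrap particle filter.
X = np.zeros((T, N)); W = np.zeros((T, N))
X[0] = rng.normal(0, 1, N)
W[0] = np.exp(-0.5 * (y[0] - X[0]) ** 2 / r); W[0] /= W[0].sum()
for t in range(1, T):
    anc = rng.choice(N, size=N, p=W[t - 1])          # multinomial resampling
    X[t] = a * X[t - 1, anc] + rng.normal(0, np.sqrt(q), N)
    W[t] = np.exp(-0.5 * (y[t] - X[t]) ** 2 / r); W[t] /= W[t].sum()

# Backward simulation: each smoothed draw reweights all particles at every step.
def backward_trajectory():
    traj = np.zeros(T)
    j = rng.choice(N, p=W[-1])
    traj[-1] = X[-1, j]
    for t in range(T - 2, -1, -1):
        w = W[t] * np.exp(-0.5 * (traj[t + 1] - a * X[t]) ** 2 / q)
        w /= w.sum()
        j = rng.choice(N, p=w)
        traj[t] = X[t, j]
    return traj

paths = np.array([backward_trajectory() for _ in range(100)])
print("smoothed posterior mean at t=0:", paths[:, 0].mean())
```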
APA, Harvard, Vancouver, ISO, and other styles
20

Pötzelberger, Klaus, and Leopold Sögner. "Stochastic equilibrium. Learning by exponential smoothing." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 2000. http://epub.wu.ac.at/1514/1/document.pdf.

Full text
Abstract:
This article considers three standard asset pricing models with adaptive agents and stochastic dividends. The models differ only in the parameters to be estimated. We assume that only limited information is used to construct estimators, so the parameters are not estimated consistently. More precisely, we assume that the parameters are estimated by exponential smoothing, where past parameters are down-weighted and the weight of recent observations does not decrease with time. This situation is familiar in financial applications: even if time series of volatile stocks or bonds are available for a long period, only recent data is used in the analysis. In this situation the prices do not converge and remain a random variable, which raises the question of how to describe equilibrium behavior with stochastic prices. However, prices can exhibit properties such as ergodicity, whereby the law of the price process converges to a stationary law; this provides a natural and useful extension of the idea of equilibrium behavior of an economic system to a stochastic setup. It is this implied law of the price process that we investigate in this paper. We provide conditions for ergodicity and analyze the stationary distribution. (author's abstract)
Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
APA, Harvard, Vancouver, ISO, and other styles
21

Cunningham, Alexander G. "Scalable online decentralized smoothing and mapping." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51848.

Full text
Abstract:
Many applications for field robots can benefit from large numbers of robots, especially applications where the objective is for the robots to cover or explore a region. A key enabling technology for robust autonomy in these teams of small and cheap robots is the development of collaborative perception to account for the shortcomings of the small and cheap sensors on the robots. In this dissertation, I present DDF-SAM to address the decentralized data fusion (DDF) inference problem with a smoothing and mapping (SAM) approach to single-robot mapping that is online, scalable and consistent while supporting a variety of sensing modalities. The DDF-SAM approach performs fully decentralized simultaneous localization and mapping in which robots choose a relevant subset of variables from their local map to share with neighbors. Each robot summarizes their local map to yield a density on exactly this chosen set of variables, and then distributes this summarized map to neighboring robots, allowing map information to propagate throughout the network. Each robot fuses summarized maps it receives to yield a map solution with an extended sensor horizon. I introduce two primary variations on DDF-SAM, one that uses a batch nonlinear constrained optimization procedure to combine maps, DDF-SAM 1.0, and one that uses an incremental solving approach for substantially faster performance, DDF-SAM 2.0. I validate these systems using a combination of real-world and simulated experiments. In addition, I evaluate design trade-offs for operations within DDF-SAM, with a focus on efficient approximate map summarization to minimize communication costs.
APA, Harvard, Vancouver, ISO, and other styles
22

Moyeed, R. A. "A study of smoothing spline regression." Thesis, University of Bath, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.373383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

OLIVEIRA, LUCIANO VEREDA. "COMMODITY PRICE SMOOTHING AND MACROECONOMIC STABILIZATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2004. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=5959@1.

Full text
Abstract:
The main purpose of this work is to evaluate the advantages and disadvantages of using, in a small open economy, a mechanism that intervenes in variations of the domestic relative prices of tradable intermediate inputs. Such a mechanism can be implemented through a combination of taxes and subsidies on imports and exports of the tradable inputs, with rates that vary over time. The intervention acts as an auxiliary stabilization instrument which, when managed in coordination with the nominal interest rate and the other conventional instruments of monetary policy, can improve the economy's response to the shocks that hit it. A concrete example of this kind of mechanism is the Brazilian CIDE (Contribuição de Intervenção sobre o Domínio Econômico), one of whose original purposes was to prevent the volatility of international oil prices from being fully transmitted to the domestic prices of oil derivatives. The work therefore analyzes and answers the following questions: (i) determining the circumstances in which intervention in the international prices of inputs can complement the manipulation of the nominal interest rate in better stabilizing the economy; (ii) investigating the nature of the optimal intervention, i.e., how the auxiliary instrument and the nominal interest rate should move jointly in response to shocks; and (iii) measuring the welfare gains that arise from the availability of this auxiliary stabilization instrument.
APA, Harvard, Vancouver, ISO, and other styles
24

Eigen, Gary Fredric. "Smoothing methods for discrete die forming." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/13178.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Swartwout, Richard Michael Steuben. "Smoothing silver nanowires for optoelectronic applications." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/117813.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 113-118).
Silver nanowire meshes have shown potential to become a more robust, flexible alternative to traditional ceramic and brittle metal oxides. However, the current methods for making these films are not scalable, and the high aspect ratios limit their final use. Spray coating is a technique widely used in many industries and has proven effective in many coating applications, but it continues to be limited by drying defects when used traditionally. A simple modification, pulsed spraying, can decouple rheology and chemistry and allow for the deposition of more uniform silver nanowire mesh films. Additional processing methods, such as metal mesh compression and matrix infilling with a CVD-deposited plastic, Parylene-C, can yield a composite that solves the roughness issues present with bare films. By combining all of these methods, a silver nanowire-parylene composite can be fabricated using fully scalable techniques ready for sheet-to-sheet or roll-to-roll processing. The composite is flexible, optically transparent, surface-smooth, atmospherically stable and conductive, proving itself a potential replacement for traditional transparent electrodes.
by Richard Michael Steuben Swartwout.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
26

Hernandez, Denise Gomez. "Pension funding and smoothing of contributions." Thesis, City University London, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.522902.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Rothwell, Hughes Neil. "Fuzzy filters for depth map smoothing." Thesis, University of South Wales, 1999. https://pure.southwales.ac.uk/en/studentthesis/fuzzy-filters-for-depth-map-smoothing(31b8862d-cbb8-46f1-8572-02fb5303050f).html.

Full text
Abstract:
This thesis is concerned with the extraction of dense three-dimensional depth maps from sequences of two-dimensional greyscale images using correlation-based matching. In particular, the thesis focuses on the noise processes that occur in the depth map and the removal of that noise using nonlinear filters based on fuzzy systems. The depth-from-stereo algorithm is reviewed, and a widely used correlation-based matcher, the Sum Squared Difference (SSD) matcher, is introduced together with an established method of measuring sub-pixel disparities in stereo pairs of images. The noise in the disparity map associated with this matcher is investigated. The conjecture is made that a fuzzy inferencing system can be trained to perform a nonlinear filtering process which is more effective than conventional filters at removing the mixed impulsive and Gaussian-like noise present in the depth map. Six methods of training fuzzy systems of the Sugeno type, based on the simulated annealing algorithm, are proposed and tested by training fuzzy systems to approximate a simple function of two variables. The thesis reviews existing fuzzy logic based filters and proposes a taxonomy for such systems, distinguishing between direct and indirect acting fuzzy filters. An indirect acting fuzzy filter is applied to the task of smoothing a disparity map. The first order Sugeno fuzzy system is then proposed as an architecture suitable as the basis for a direct acting fuzzy filter. This architecture is then applied to the task of smoothing depth maps derived from real and simulated data. The main contributions of the thesis are the identification of the Sugeno fuzzy system as a form of filter, the proposed training techniques, and the application of fuzzy filters to depth map smoothing.
APA, Harvard, Vancouver, ISO, and other styles
28

Clur, John-Craig. "Nonparametric smoothing in extreme value theory." Master's thesis, University of Cape Town, 2010. http://hdl.handle.net/11427/10285.

Full text
Abstract:
Includes bibliographical references (leaves 137-138).
This work investigates the modelling of non-stationary sample extremes using a roughness penalty approach, in which smoothed natural cubic splines are fitted to the location and scale parameters of the generalized extreme value distribution and the distribution of the r largest order statistics. Estimation is performed by implementing a Fisher scoring algorithm to maximize the penalized log-likelihood function. The approach provides a flexible framework for exploring smooth trends in sample extremes, with the benefit of balancing the trade-off between 'smoothness' and adherence to the underlying data by simply changing the smoothing parameter. To evaluate the overall performance of the extreme value theory methodology in smoothing extremes, a simulation study was performed.
APA, Harvard, Vancouver, ISO, and other styles
29

Löwendick, Martin. "On smoothing under bounds and geometric contraints." [S.l. : s.n.], 1999. http://deposit.ddb.de/cgi-bin/dokserv?idn=959981020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Storve, Sigurd. "Kalman Smoothing Techniques in Medical Image Segmentation." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18823.

Full text
Abstract:
An existing C++ library for efficient segmentation of ultrasound recordings by means of Kalman filtering, the real-time contour tracking library (RCTL), is used as a building block to implement and assess the performance of different Kalman smoothing techniques: fixed-point, fixed-lag, and fixed-interval smoothing. An experimental smoothing technique based on fusing tracking results with learned mean state estimates at different positions in the heart cycle is proposed. A set of 29 recordings with ground-truth left ventricle segmentations provided by a trained medical doctor is used for the performance evaluation. The clinical motivation is to improve the accuracy of automatic left-ventricle tracking, which can be applied to improve the automatic measurement of clinically important parameters such as the ejection fraction. The evaluation shows that none of the smoothing techniques offer significant improvements over regular Kalman filtering. For the Kalman smoothing algorithms, this is argued to be a consequence of the way edge-detection measurements are performed internally in the library. The statistical smoother's lack of improvement is explained by interpersonal variations that are too large: the mean left-ventricular deformation pattern does not generalize well to individual cases.
APA, Harvard, Vancouver, ISO, and other styles
31

Colomar, Simón David. "Step-wise smoothing of ZUPT-aided INS." Thesis, KTH, Signalbehandling, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-99202.

Full text
Abstract:
Due to the recursive nature of most foot-mounted zero-velocity-update-aided (ZUPT-aided) inertial navigation systems (INSs), the error covariance increases throughout each step and "collapses" at the end of the step, where the ZUPT correction is done. This gives sharp corrections and discontinuities in the estimated trajectory. For applications with tight real-time constraints this behavior is unavoidable, since every estimate corresponds to the best estimate given all the information up until that time instant. However, for many applications some degree of lag (non-causality) can be tolerated, and the information provided by the ZUPTs at the end of a step, giving the sharp correction, can be made available throughout the step. Consequently, to eliminate the sharp corrections and the unsymmetrical covariance over the steps, the implementation of a smoothing filter for a ZUPT-aided INS is considered in this thesis. To our knowledge, no formal treatment of smoothing for such systems has previously been presented, even though an extensive literature on the general subject exists. Owing to the customary closed-loop complementary filtering used for aided INSs, standard smoothing techniques cannot be applied directly. Also, since the measurements (the ZUPTs) are irregularly spaced and appear in clusters, some varying-lag smoothing rule is necessary. Therefore, a method based on mixed open-closed-loop complementary filtering combined with Rauch-Tung-Striebel (RTS) smoothing is suggested in this thesis. Different types of varying-lag smoothing rules are examined. For near real-time applications, smoothing is applied to the data in a step-wise manner; the intervals (steps) for the smoothing are determined based on measurement availability and covariance and timing thresholds. For complete off-line processing, full data set smoothing is examined. Finally, the consequences of the smoothing and the open-closed-loop filtering are quantified based on real data, and the impact of the smoothing throughout the steps is illustrated and analyzed.
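For reference, a minimal scalar Rauch-Tung-Striebel smoother of the kind named above, applied to a toy random-walk model rather than the ZUPT-aided INS error state (all parameters are illustrative):

```python
import numpy as np

def rts_smooth(y, a=1.0, q=0.01, r=1.0, m0=0.0, p0=1.0):
    """Scalar Kalman filter followed by a Rauch-Tung-Striebel backward pass."""
    n = len(y)
    m_f = np.zeros(n); p_f = np.zeros(n)      # filtered mean / variance
    m_p = np.zeros(n); p_p = np.zeros(n)      # predicted mean / variance
    m, p = m0, p0
    for t in range(n):
        m_p[t], p_p[t] = a * m, a * a * p + q # predict
        k = p_p[t] / (p_p[t] + r)             # Kalman gain
        m = m_p[t] + k * (y[t] - m_p[t])      # measurement update
        p = (1 - k) * p_p[t]
        m_f[t], p_f[t] = m, p
    m_s = m_f.copy(); p_s = p_f.copy()
    for t in range(n - 2, -1, -1):            # backward (smoothing) pass
        g = p_f[t] * a / p_p[t + 1]
        m_s[t] = m_f[t] + g * (m_s[t + 1] - m_p[t + 1])
        p_s[t] = p_f[t] + g * g * (p_s[t + 1] - p_p[t + 1])
    return m_s

y = np.cumsum(np.random.normal(0, 0.1, 100)) + np.random.normal(0, 1, 100)
print(rts_smooth(y)[:5])
```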
APA, Harvard, Vancouver, ISO, and other styles
32

Lee, Chang-Kye. "Gradient smoothing in finite elasticity : near-incompressibility." Thesis, Cardiff University, 2016. http://orca.cf.ac.uk/94491/.

Full text
Abstract:
This thesis presents the extension of the gradient smoothing technique for finite element approximation (the so-called Smoothed Finite Element Method (S-FEM)) and its bubble-enhanced version to non-linear problems involving large deformations in nearly incompressible and incompressible hyperelastic materials. The Finite Element Method (FEM) presents numerous challenges for soft matter applications, such as incompressibility, complex geometries and mesh distortion from large deformation. S-FEM was introduced to overcome these challenges. The smoothed strains and the smoothed deformation gradients are evaluated on smoothing domains selected by either edge, nodal or face information. This thesis aims to extend S-FEM to finite elasticity as a means of alleviating locking and avoiding mesh distortion. The method employs a "cubic" bubble enhancement of the element shape functions with edge-based and face-based S-FEMs, adding a linear displacement field at the centre of the element; bubble-enhanced S-FEM thereby affords a simple and efficient implementation. The thesis reports the properties and performance of the proposed method for quasi-incompressible hyperelastic materials. Benchmark tests show that the method is well suited to soft matter simulation, overcoming the deleterious locking phenomenon and maintaining accuracy with distorted meshes.
APA, Harvard, Vancouver, ISO, and other styles
33

Cullinan, M. P. "Data smoothing by adjustment of divided differences." Thesis, University of Cambridge, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.373653.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Cai, Li-Dong. "Scale-based surface understanding using diffusion smoothing." Thesis, University of Edinburgh, 1991. http://hdl.handle.net/1842/6587.

Full text
Abstract:
The research discussed in this thesis is concerned with surface understanding from the viewpoint of recognition-oriented, scale-related processing based on surface curvatures and diffusion smoothing. Four problems below high-level visual processing are investigated: 1) three-dimensional data smoothing using a diffusion process; 2) the behaviour of shape features across multiple scales; 3) surface segmentation over multiple scales; and 4) symbolic description of surface features at multiple scales. In this thesis, the noisy data smoothing problem is treated mathematically as a boundary value problem of the diffusion equation, instead of the well-known Gaussian convolution. This provides a theoretical basis for uniformly interpreting the interrelationships amongst diffusion smoothing, Gaussian smoothing, repeated averaging and spline smoothing. It also leads to solving the problem with a numerical scheme of unconditional stability, which efficiently reduces the computational complexity and preserves the signs of curvatures along the surface boundaries. Surface shapes are classified into eight types using combinations of the signs of the Gaussian curvature K and mean curvature H, both of which change at different scale levels. The behaviour of surface shape features over multiple scale levels is discussed in terms of the stability of large shape features; the creation, persistence and fading of small shape features; the interaction between large and small features; and the structure of the behaviour of nested shape features in the KH sign image. This provides guidance for tracking the movement of shape features from fine to large scales and for setting up a surface shape description accordingly. A smoothed surface is partitioned into a set of regions based on curvature sign homogeneity. Surface segmentation is posed as the problem of approximating a surface up to the degree of Gaussian and mean curvature signs using the depth data alone. How to obtain feasible solutions of this under-determined problem is discussed, including surface curvature sign preservation, the reason that a sculptured surface can be segmented with the KH sign image alone, and the selection of basis functions of surface fitting for obtaining the KH sign image or for region growing. A symbolic description of the segmented surface is set up at each scale level. It is composed of a dual graph and a geometrical property list for the segmented surface. The graph describes the adjacency and connectivity among different patches as topological-invariant properties that allow some object flexibility, whilst the geometrical property list is added to the graph as constraints that reduce uncertainty. With this organisation, a tower-like surface representation is obtained by tracking the movement of significant features of the segmented surface through different scale levels, from which a stable description can be extracted for inexact matching during object recognition.
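A minimal sketch of the diffusion-smoothing idea on gridded depth data: explicit time-stepping of the heat equation (the thesis argues for an unconditionally stable implicit scheme instead; the explicit step below must respect the usual stability bound, and the periodic boundary handling is an illustrative simplification):

```python
import numpy as np

def diffuse(z, steps=10, lam=0.2):
    """Explicit heat-equation smoothing of a depth map z.
    Stability of this explicit scheme requires lam <= 0.25 in 2D."""
    z = z.astype(float).copy()
    for _ in range(steps):
        lap = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
               np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)  # 5-point Laplacian
        z += lam * lap
    return z

depth = np.random.rand(64, 64)          # noisy synthetic depth data
smoothed = diffuse(depth, steps=50)
print(depth.std(), smoothed.std())      # variance shrinks as fine scales diffuse away
```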
APA, Harvard, Vancouver, ISO, and other styles
35

何添賢 and Tim Yin Timothy Ho. "Forecasting with smoothing techniques for inventory control." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B42574286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Amezziane, Mohamed. "SMOOTHING PARAMETER SELECTION IN NONPARAMETRIC FUNCTIONAL ESTIMATION." Doctoral diss., University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3488.

Full text
Abstract:
This study develops new techniques for obtaining completely data-driven choices of the smoothing parameter in functional estimation, within the confines of minimal assumptions. The focus of the study is the estimation of the distribution function, the density function and their multivariable extensions, along with some of their functionals such as the location and the integrated squared derivatives.
Ph.D.
Department of Mathematics
Arts and Sciences
Mathematics
APA, Harvard, Vancouver, ISO, and other styles
37

Zerkle, Michael L. "Reactivity estimation using delayed neutron precursor smoothing." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/14473.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Jones, Thouis Raymond 1971. "Feature preserving smoothing of 3D surface scans." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/30105.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2004.
Includes bibliographical references (p. 63-70).
With the increasing use of geometry scanners to create 3D models, there is a rising need for effective denoising of data captured with these devices. This thesis presents new methods for smoothing scanned data, based on extensions of the bilateral filter to 3D. The bilateral filter is a non-linear, edge-preserving image filter; its extension to 3D leads to an efficient, feature preserving filter for a wide class of surface representations, including points and "polygon soups."
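The bilateral filter that the thesis extends to surfaces, sketched here in its simpler 1D signal form (parameters are illustrative): each sample is replaced by an average weighted both by spatial closeness and by value closeness, so noise is smoothed while large jumps survive.

```python
import numpy as np

def bilateral_1d(x, sigma_s=3.0, sigma_r=0.1, radius=6):
    """Edge-preserving bilateral filter on a 1D signal."""
    out = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        d = np.arange(lo, hi) - i
        w = (np.exp(-0.5 * (d / sigma_s) ** 2) *                  # spatial closeness
             np.exp(-0.5 * ((x[lo:hi] - x[i]) / sigma_r) ** 2))   # value closeness
        out[i] = np.sum(w * x[lo:hi]) / np.sum(w)
    return out

step = np.concatenate([np.zeros(50), np.ones(50)]) + np.random.normal(0, 0.05, 100)
print(bilateral_1d(step)[48:52])   # noise is smoothed while the step edge survives
```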
by Thouis Raymond Jones.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
39

Rathje, Jason M. "Human-automation collaboration in occluded trajectory smoothing." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/59690.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 133-134).
Deciding whether and which objects should be engaged in a Ballistic Missile Defense System (BMDS) scenario involves a number of complex issues. The system is large and the timelines may be on the order of a few minutes, which drives designers to highly automate these systems. On the other hand, the critical nature of BMD engagement decisions suggests exploring a human-in-the-loop (HIL) approach to allow for judgment and knowledge-based decisions, including potential overrides of automated system decisions. This BMDS problem reflects the role allocation conundrum faced in many supervisory control systems: how to determine which functions should be mutually exclusive and which should be collaborative. Clearly some tasks are too computationally intensive for human assistance, while other tasks may be completed without automation. Between the extremes are a number of cases in which degrees of collaboration between the human and computer are possible. This thesis motivates and outlines two experiments that quantitatively investigate human/automation tradeoffs in the specific domain of tracking problems. Human participants in both experiments were tested on their ability to smooth trajectories in different scenarios. In the first experiment, they clearly demonstrated an ability to assist the algorithm in more difficult, shorter-timeline scenarios. The second experiment combined the strengths of both human and automation to create a human-augmented system. Comparison of the augmented system to the algorithm showed that adjusting the criterion for having human participation could significantly alter the solution. The appropriate criterion would be specific to each application of this augmented system. Future work should focus on further examination of appropriate criteria.
by Jason M. Rathje.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
40

Demetriou, I. C. "Data smoothing by piecewise monotonic divided differences." Thesis, University of Cambridge, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.354670.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Touchette, Sébastien. "Recovering Cholesky Factor in Smoothing and Mapping." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37935.

Full text
Abstract:
Autonomous vehicles, from self-driving cars to small unmanned aircraft, form a hotly contested market experiencing significant growth. As a result, fundamental concepts of autonomous vehicle navigation, such as simultaneous localisation and mapping (SLAM), are very active fields of research garnering significant interest in the drive to improve effectiveness. Traditionally, SLAM has been performed by filtering methods, but several improvements have brought smoothing and mapping (SAM) based methods to the forefront of SLAM research. Although recent works have made such methods incremental, they retain some batch functionalities from their bundle-adjustment origins. More specifically, re-linearisation and column reordering still require full re-computation of the solution. In this thesis, the problem of re-computation after column reordering is addressed. A novel method to reflect changes in ordering directly on the Cholesky factor, called Factor Recovery, is proposed. Under the assumption that changes to the ordering are small and localised, the proposed method can be executed faster than re-computation of the Cholesky factor. To define each method's optimal region of operation, a function estimating the computational cost of Factor Recovery is derived and compared with the known cost of Cholesky factorisation obtained using experimental data. Combining Factor Recovery and traditional Cholesky decomposition, the Hybrid Cholesky decomposition algorithm is proposed. This novel algorithm attempts to select the most efficient way to compute the Cholesky factor based on an estimate of the work required. To obtain experimental results, the Hybrid Cholesky decomposition algorithm was integrated into the SLAM++ software and executed on popular datasets from the literature. The proposed method yields an average reduction of 1.9 % in total execution time, with reductions of up to 31 % obtained in certain situations. When considering only the time spent performing reordering and factorisation for batch steps, reductions of 18 % on average, and up to 78 % in certain situations, are observed.
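To see why column ordering matters for the Cholesky factor in SAM back-ends, here is a small dense illustration of fill-in using a classic arrowhead matrix (this demonstrates the motivation only, not the thesis's Factor Recovery method):

```python
import numpy as np

n = 8
A = np.eye(n) * n
A[0, :] = 1.0                    # "arrowhead": one variable coupled to all others
A[:, 0] = 1.0
A[0, 0] = n

L_bad = np.linalg.cholesky(A)    # eliminating the dense variable first fills in L
perm = np.r_[1:n, 0]             # reorder so the dense variable is eliminated last
L_good = np.linalg.cholesky(A[np.ix_(perm, perm)])

nz = lambda L: np.sum(np.abs(L) > 1e-12)
print("nonzeros, bad ordering: ", nz(L_bad))   # dense lower triangle
print("nonzeros, good ordering:", nz(L_good))  # no fill-in beyond A's pattern
```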
APA, Harvard, Vancouver, ISO, and other styles
42

Vrahimis, Andreas. "Smoothing methodology with applications to nonparametric statistics." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/smoothing-methodology-with-applications-to-nonparametric-statistics(6d6567f2-1bfa-4e77-8dbb-71fea7564185).html.

Full text
Abstract:
The work in this thesis is based on kernel smoothing techniques with applications to nonparametric statistical methods, especially kernel density estimation and nonparametric regression. We examine a bootstrap iterative method of choosing the smoothing parameter in univariate kernel density estimation and propose an empirical smoothness correction that generally improves the method for the small-to-medium sample sizes tested. In a simulation study, the corrected bootstrap iterative method shows consistent overall performance and can compete with other popular, widely used methods. The theoretical asymptotic properties of the smoothed bootstrap method in univariate kernel density estimation are examined, and an adaptive data-based choice of fixed pilot smoothing parameter is formed that provides a good performance trade-off among distributions of various shapes, with a fast relative rate of convergence to the optimal. The asymptotic and practical differences of the smoothed bootstrap method when the diagonal terms of the error criterion are included or omitted are also examined. The exclusion of the diagonal terms yields faster relative rates of convergence of the smoothing parameter to the optimal, but a simulation study shows that for smaller sample sizes including the diagonal terms can be favourable. In a real data set application, both methods produced similar smoothing parameters, and the resulting kernel density estimates were of reasonable smoothness. Existing methods of kernel density estimation in two dimensions are discussed, and the corrected bootstrap iterative method is adapted to work in bivariate kernel density estimation, with considerable success. Additionally, the theoretical asymptotic properties of the smoothed bootstrap method in bivariate kernel density estimation are examined, and adaptive data-based choices for the fixed pilot smoothing parameters are formed that provide fast relative rates of convergence to the optimal, compared to other popular methods. The smoothed bootstrap method with the diagonal terms of the error criterion omitted exhibits slightly faster relative rates of convergence than the method which includes the diagonal terms, and in a simulation study both performed considerably well compared to other methods. We also discover that a scaling transformation of the data before applying the method leads to poor results for distributions of various shapes, and should generally be avoided. In an application using the iris flowers data set, both smoothed bootstrap versions produce reasonable kernel density estimates. We also look at various methods of estimating the variance of the errors in nonparametric regression and suggest a simple robust method of estimating the error variance for the homoscedastic fixed design. The method is based on a multiplicative correction of the variance of the residuals, and a comparison with popular difference-based methods shows favourable results, especially when the local linear estimator is employed.
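For context, a minimal Gaussian kernel density estimator with the simple rule-of-thumb bandwidth that data-driven selectors such as the corrected bootstrap iterative method aim to improve on (the bootstrap selector itself is not reproduced here):

```python
import numpy as np

def gaussian_kde(x, data, h=None):
    """Gaussian kernel density estimate; h defaults to Silverman's rule of thumb."""
    n = data.size
    if h is None:
        h = 1.06 * data.std(ddof=1) * n ** (-1 / 5)   # rule-of-thumb bandwidth
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

data = np.random.normal(0, 1, 500)
grid = np.linspace(-4, 4, 9)
print(gaussian_kde(grid, data))
```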
APA, Harvard, Vancouver, ISO, and other styles
43

Cho, Jaerin. "Optimal design in regression and spline smoothing." Thesis, Kingston, Ont. : [s.n.], 2007. http://hdl.handle.net/1974/448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ho, Tim Yin Timothy. "Forecasting with smoothing techniques for inventory control." Click to view the E-thesis via HKUTO, 1994. http://sunzi.lib.hku.hk/hkuto/record/B42574286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Zheng, Qi. "Local adaptive smoothing in kernel regression estimation." Connect to this title online, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
46

Houben, Dirk. "Return Smoothing in Discrete and Continuous Time." Thesis, The University of Sydney, 2020. https://hdl.handle.net/2123/24564.

Full text
Abstract:
In this thesis we propose four novel continuous-time return smoothing models. Borrowing from the private commercial real estate literature, we start with a benchmark model for return smoothing in discrete time. The benchmark model is translated into continuous time, taking us from an autoregressive moving average (ARMA) specification for the smoothed holding period return to an Ornstein-Uhlenbeck (OU) specification for the instantaneous continuously compounding rate of return on the smoothed asset price. The model is then extended to allow for the possibility of predictability in the underlying “true” return. In a second line of investigation, we propose an alternative continuous-time return smoothing model in which we keep the OU smoothing mechanism, but replace the instantaneous smoothed return with the smoothed detrended log price. This model leads to unrealistic autocorrelations in the smoothed return, and we address this with an extension that introduces a higher-order smoothing equation described by a continuous-time autoregressive moving average (CARMA) process—the continuous-time analogue of the ARMA process. We show that each of our four models belongs to a general framework for linear return smoothing in continuous time in which a CARMA process governing the underlying “true” asset price is overlaid with a CARMA-type smoothing equation that summarises the market mechanism whereby the “true” price is transformed into a market-observed, smoothed price. In each model, as with the general framework, the noise in the “true” price is represented by a Lévy process, allowing for non-normality and sample path discontinuities. To quantify the effect the smoothing models have on holding period returns, we develop a common set of smoothing metrics. These metrics are then computed for each of the models (including the discrete-time benchmark model), and form a basis on which the models can be compared. We also rely on the autocorrelation function in appraising the impact of return smoothing. We comment on the ability of the continuous-time smoothing models to reproduce stylised statistical properties commonly associated with smoothed returns, such as a reduction in return variance and an increase in return autocorrelation. Additionally, we develop the theory needed to operationalise the smoothing metrics for the smoothing framework in general.
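A minimal sketch of the kind of OU smoothing mechanism described above, with the smoothed detrended log price relaxing toward the underlying one (simulated by a simple Euler scheme; all parameters are illustrative). It reproduces the stylised facts the abstract mentions: reduced return variance and raised return autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1 / 252, 4 * 252                 # daily steps over four years
kappa, sigma = 10.0, 0.2                 # smoothing speed and "true" volatility

p = np.zeros(T); s = np.zeros(T)         # "true" and smoothed detrended log prices
for t in range(1, T):
    p[t] = p[t - 1] + sigma * np.sqrt(dt) * rng.normal()   # driftless "true" price
    s[t] = s[t - 1] + kappa * (p[t - 1] - s[t - 1]) * dt   # OU pull toward the truth

r_true, r_smooth = np.diff(p), np.diff(s)
print("variance ratio (smoothed/true):", r_smooth.var() / r_true.var())
print("lag-1 autocorr of smoothed returns:",
      np.corrcoef(r_smooth[:-1], r_smooth[1:])[0, 1])
```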
APA, Harvard, Vancouver, ISO, and other styles
47

Al, Nuaimat Alia. "Hyperbolic smoothing in nonsmooth optimization and applications." Thesis, Federation University Australia, 2014. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/79184.

Full text
Abstract:
Nonsmooth nonconvex optimization problems arise in many applications including economics, business and data mining. In these applications objective functions are not necessarily differentiable or convex. Many algorithms have been proposed over the past three decades to solve such problems. In spite of the significant growth in this field, the development of efficient algorithms for solving this kind of problem is still a challenging task. The subgradient method is one of the simplest methods developed for solving these problems. Its convergence was proved only for convex objective functions. This method does not involve any subproblems, neither for finding search directions nor for computation of step lengths, which are fixed ahead of time. Bundle methods and their various modifications are among the most efficient methods for solving nonsmooth optimization problems. These methods involve a quadratic programming subproblem to find search directions. The size of the subproblem may increase significantly with the number of variables, which makes the bundle-type methods unsuitable for large scale nonsmooth optimization problems. The implementation of bundle-type methods, which require the use of the quadratic programming solvers, is not as easy as the implementation of the subgradient methods. Therefore it is beneficial to develop algorithms for nonsmooth nonconvex optimization which are easy to implement and more efficient than the subgradient methods. In this thesis, we develop two new algorithms for solving nonsmooth nonconvex optimization problems based on the use of the hyperbolic smoothing technique and apply them to solve the pumping cost minimization problem in water distribution. Both algorithms use smoothing techniques. The first algorithm is designed for solving finite minimax problems. In order to apply the hyperbolic smoothing we reformulate the objective function in the minimax problem and study the relationship between the original minimax and reformulated problems. We also study the main properties of the hyperbolic smoothing function. Based on these results an algorithm for solving the finite minimax problem is proposed and this algorithm is implemented in GAMS. We present preliminary results of numerical experiments with well-known nonsmooth optimization test problems. We also compare the proposed algorithm with the algorithm that uses the exponential smoothing function as well as with the algorithm based on nonlinear programming reformulation of the finite minimax problem. The second nonsmooth optimization algorithm we developed was used to demonstrate how smooth optimization methods can be applied to solve general nonsmooth (nonconvex) optimization problems. In order to do so we compute subgradients from some neighborhood of the current point and define a system of linear inequalities using these subgradients. Search directions are computed by solving this system. This system is solved by reducing it to the minimization of the convex piecewise linear function over the unit ball. Then the hyperbolic smoothing function is applied to approximate this minimization problem by a sequence of smooth problems which are solved by smooth optimization methods. Such an approach allows one to apply powerful smooth optimization algorithms for solving nonsmooth optimization problems and extend smoothing techniques for solving general nonsmooth nonconvex optimization problems. The convergence of the algorithm based on this approach is studied. 
The proposed algorithm was implemented in Fortran 95. Preliminary results of numerical experiments are reported, and the proposed algorithm is compared with five other nonsmooth optimization algorithms. We also implement the algorithm in GAMS and compare it with GAMS solvers using results of numerical experiments.
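As a pointer to the core device, a hyperbolic smoothing of the nonsmooth function max(0, x): a standard construction that converges to the exact max as the parameter tau goes to zero (the thesis's minimax reformulation and algorithms are not reproduced here):

```python
import numpy as np

def hyper_max0(x, tau):
    """Smooth approximation of max(0, x); exact in the limit tau -> 0,
    with maximum error tau / 2 attained at x = 0."""
    return 0.5 * (x + np.sqrt(x * x + tau * tau))

x = np.linspace(-2, 2, 5)
for tau in (1.0, 0.1, 0.01):
    print(tau, np.abs(hyper_max0(x, tau) - np.maximum(0, x)).max())
```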
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
48

Gentile, Maria <1981&gt. "Local Trigonometric Methods for Time Series Smoothing." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6494/1/Maria_Gentile_PhD_Thesis.pdf.

Full text
Abstract:
The thesis is concerned with local trigonometric regression methods. The aim was to develop a method for the extraction of cyclical components in time series. The main results of the thesis are the following. First, a generalization of the filter proposed by Christiano and Fitzgerald is furnished for the smoothing of ARIMA(p,d,q) processes. Second, a local trigonometric filter is built, with its statistical properties. Third, the convergence properties of trigonometric estimators are discussed, along with the problem of choosing the order of the model. A large-scale simulation experiment has been designed in order to assess the performance of the proposed models and methods. The results show that local trigonometric regression may be a useful tool for periodic time series analysis.
APA, Harvard, Vancouver, ISO, and other styles
49

Gentile, Maria <1981&gt. "Local Trigonometric Methods for Time Series Smoothing." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6494/.

Full text
Abstract:
The thesis is concerned with local trigonometric regression methods. The aim was to develop a method for the extraction of cyclical components in time series. The main results of the thesis are the following. First, a generalization of the filter proposed by Christiano and Fitzgerald is furnished for the smoothing of ARIMA(p,d,q) processes. Second, a local trigonometric filter is built, with its statistical properties. Third, the convergence properties of trigonometric estimators are discussed, along with the problem of choosing the order of the model. A large-scale simulation experiment has been designed in order to assess the performance of the proposed models and methods. The results show that local trigonometric regression may be a useful tool for periodic time series analysis.
APA, Harvard, Vancouver, ISO, and other styles
50

Garg, Deepak. "Smoothing Wavelet Reconstruction." Thesis, 2013. http://hdl.handle.net/1969.1/149509.

Full text
Abstract:
This thesis presents a new algorithm for creating high-quality surfaces from large data sets of oriented points, sampled using a laser range scanner. The method works in two phases. In the first phase, using a wavelet surface reconstruction method, we calculate a rough estimate of the surface in the form of Haar wavelet coefficients, stored in an octree. In the second phase, we modify these coefficients to obtain a higher-quality surface. We cast this step as a gradient minimization problem in the wavelet domain and show that its solution is a sparse linear system with dimensionality roughly proportional to the surface of the model in question. We introduce a fast in-place method, which uses various properties of Haar wavelets, to solve the linear system, and demonstrate the results of the algorithm.
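For reference, a one-level orthonormal Haar analysis/synthesis pair of the kind the octree-stored coefficients are built from, shown in 1D (the surface-reconstruction pipeline itself is not reproduced; input length is assumed even):

```python
import numpy as np

def haar_forward(x):
    """One level of the orthonormal Haar transform: smooth and detail parts."""
    x = np.asarray(x, dtype=float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # smooth (scaling) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (wavelet) coefficients
    return s, d

def haar_inverse(s, d):
    x = np.empty(2 * s.size)
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

x = np.array([2.0, 4.0, 6.0, 8.0, 9.0, 9.0, 1.0, 1.0])
s, d = haar_forward(x)
print(np.allclose(haar_inverse(s, d), x))   # True: perfect reconstruction
```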
APA, Harvard, Vancouver, ISO, and other styles