Dissertations / Theses on the topic 'Fitting technique'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 36 dissertations / theses for your research on the topic 'Fitting technique.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Ferreira, Ronaldo da Silva. "Interpretation of pressuremeter tests using a curve fitting technique." Repositório Institucional da UFSC, 1992. https://repositorio.ufsc.br/handle/123456789/111234.

Full text
2

De Wet, Pierre. "Powered addition as modelling technique for flow processes." Thesis, Stellenbosch: University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/4166.

Full text
Abstract:
Thesis (MSc (Applied Mathematics))--University of Stellenbosch, 2010.
The interpretation of collected data, and the compilation of predictive equations to represent its general trend, is aided immensely by its graphical representation. Whilst, by and large, predictive equations are more accurate and convenient for use in applications than graphs, the latter are often preferable since they visually illustrate deviations in the data, thereby giving an indication of reliability and the range of validity of the equation. Combination of these two tools, a graph for demonstration and an equation for use, is desirable to ensure optimal understanding. Often, however, the functional dependencies of the dependent variable are only known for large and small values of the independent variable, solutions for intermediate quantities being obscure for various reasons (e.g. a narrow band within which the transition from one regime to the other occurs, inadequate knowledge of the physics in this area, etc.). The limiting solutions may be regarded as asymptotic, and the powered addition to a power, s, of such asymptotes, f0 and f∞, leads to a single correlating equation that is applicable over the entire domain of the independent variable. This procedure circumvents the introduction of ad hoc curve fitting measures for the different regions and the subsequent, unwanted jumps in piecewise-fitted correlative equations for the dependent variable(s). Approaches to successfully implement the technique for different combinations of asymptotic conditions are discussed. The aforementioned method of powered addition is applied to experimental data, and the semblances and discrepancies with literature and analytical models are discussed, the underlying motivation being the aspiration towards establishing a sound modelling framework for analytical and computational predictive measures. The purported procedure is revealed to be highly useful in summarising and interpreting experimental data in an elegant and simple manner.
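The powered-addition construction itself is compact enough to sketch: given the limiting solutions f0 and f∞, the single correlating equation is f(x) = (f0(x)^s + f∞(x)^s)^(1/s), with the shifting exponent s fitted to data. Below is a minimal Python illustration; the asymptotes and data are invented placeholders, not the flow-process solutions treated in the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative asymptotes (placeholders, not the thesis's flow models):
def f0(x):                  # small-x limiting solution
    return 2.0 * x

def finf(x):                # large-x limiting solution
    return np.sqrt(x)

def powered_addition(x, s):
    # Single correlating equation valid over the entire domain:
    # f(x) = (f0(x)^s + finf(x)^s)^(1/s)
    return (f0(x)**s + finf(x)**s)**(1.0 / s)

# Synthetic "experimental" data spanning both regimes
rng = np.random.default_rng(0)
x = np.logspace(-2, 2, 50)
y = powered_addition(x, -3.0) * (1.0 + 0.02 * rng.standard_normal(x.size))

# Fit the shifting exponent s by nonlinear least squares
(s_fit,), _ = curve_fit(powered_addition, x, y, p0=[-2.0])
print(f"fitted shifting exponent s = {s_fit:.2f}")
```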
3

Adjei, Seth Akonor. "Refining Learning Maps with Data Fitting Techniques." Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-theses/178.

Full text
Abstract:
Learning maps have been used to represent student knowledge for many years. These maps are usually made by hand by experts in a given domain. However, these hand-made maps have not been found to be predictive of student performance. Several methods have been proposed to find better-fitting learning maps, including the Learning Factors Analysis (LFA) model and the Rule-space method. In this thesis we report on the application of one of the operations proposed in the LFA method to a small section of a skill graph, and develop a greedy search algorithm for finding better-fitting models for this graph. We also investigate the factors that influence the search for better data-fitting models using the proposed algorithm. We further present an empirical study in which PLACEments, an adaptive testing system that employs a skill graph, is modified to test the strength of prerequisite skill links in a given learning map, and we propose a method for refining learning maps based on those findings. It was found that the proposed greedy search algorithm performs as well as the original skill graph but with a smaller set of skills in the graph. Additionally, it was found that, among other factors, the number of unnecessary skills, the number of items in the graph, and the guess and slip rates of the items tagged with skills in the graph have an impact on the search. Further, the size of the evaluation data set impacts the search: the more data there is for the search, the more predictive the learned skill graph. Additionally, PLACEments, an adaptive testing feature of ASSISTments, has been found to be useful for refining skill graphs by detecting the strengths of prerequisite links between skills in a graph.
4

Assunção, Joaquim Vinicius Carvalho. "Fitting techniques to knowledge discovery through stochastic models." Pontifícia Universidade Católica do Rio Grande do Sul, 2016. http://tede2.pucrs.br/tede2/handle/tede/7179.

Full text
Abstract:
Stochastic models can be useful for creating compact representations of non-deterministic scenarios. Furthermore, simulations applied to a compact model are faster and require fewer computational resources than data mining techniques applied over large volumes of data. The challenge is to build such models: the accuracy, as well as the time and the amount of resources used to fit them, are the key factors for their utility. We use machine learning techniques for the fitting of structures characterized by the Markov property, especially complex formalisms such as Hidden Markov Models (HMM) and Stochastic Automata Networks (SAN). Regarding accuracy, we consider the state of the art in fitting techniques and model measurements based on likelihood. Regarding creation time, we automate the data-mapping process using time series and representation techniques. Regarding computational resources, we use time series and dimensionality reduction techniques to avoid the state space explosion problem. These techniques are demonstrated in a process that embodies a set of common steps for model fitting through time series, similar to knowledge discovery in databases (KDD), yet with stochastic models as the main component.
5

LeMay, Valerie. "Comparison of fitting techniques for systems of forestry equations." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/29137.

Full text
Abstract:
In order to describe forestry problems, a system of equations is commonly used. The chosen system may be simultaneous, in that a variable which appears on the left-hand side of one equation also appears on the right-hand side of another equation in the system. Also, the error terms among equations of the system may be contemporaneously correlated, and error terms within individual equations may be non-iid, in that they may be dependent (serially correlated), not identically distributed (heteroskedastic), or both. Ideally, the fitting technique used to fit systems of equations should be simple; estimates of coefficients and their associated variances should be unbiased, or at least consistent, and efficient; small and large sample properties of the estimates should be known; and logical compatibility should be present in the fitted system. The first objective of this research was to find a fitting technique from the literature which meets the desired criteria for simultaneous, contemporaneously correlated systems of equations in which the error terms for individual equations are non-iid. This objective was not met, in that no technique was found in the literature which satisfies the desired criteria for a system of equations with this error structure. However, information from the literature was used to derive a new fitting technique as part of this research project, labelled multistage least squares (MSLS). The MSLS technique is an extension of three-stage least squares from econometrics research, and can be used to find consistent and asymptotically efficient estimates of coefficients; confidence limits can also be calculated for large sample sizes. For small sample sizes, an iterative routine labelled iterated multistage least squares (IMSLS) was derived. The second objective was to compare this technique to the commonly used techniques of applying ordinary least squares (simple or multiple linear regression and nonlinear least squares regression) to individual equations, and of substituting all of the equations into a composite model and using ordinary least squares to fit the composite model. The three techniques were applied to three forestry problems for which a system of equations is used. The criteria for comparing the results included comparing goodness-of-fit measures (Fit Index, Mean Absolute Deviation, Mean Deviation), comparing the traces of the estimated coefficient covariance matrices, and calculating a summed rank based on the presence or absence of desired properties of the estimates. The comparison indicated that OLS results in the best goodness-of-fit measures for all three forestry problems; however, estimates of coefficients are biased and inconsistent for simultaneous systems. Also, the estimated coefficient covariance matrix cannot be used to calculate confidence intervals for the true parameters, or to test hypothesis statements. Finally, compatibility among equations is not assured. The fit of the composite model was attractive for the systems tested; however, only one left-hand-side variable was estimated, and, for larger systems with more variables and more equations, this technique may not be appropriate. The MSLS technique resulted in goodness-of-fit measures which were close to the OLS goodness-of-fit measures.
Of most importance, however, is that the MSLS fit ensures compatibility among equations; estimates of coefficients and their variances are consistent; estimates are asymptotically efficient; and confidence limits can be calculated for large sample sizes using the estimated variances and probabilities from the normal distribution. Also, the number and difficulty of the steps required for the MSLS technique were similar to those of the OLS fit of individual equations. The main disadvantage of the MSLS technique is that a large amount of computer memory is required; for some forestry problems with very large sample sizes, the use of a subsample or the exclusion of the final step of the MSLS fit was suggested. This would result in some loss of efficiency, but estimated coefficients and their variances would remain consistent.
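For readers unfamiliar with the instrumental-variable machinery that MSLS builds on, the sketch below shows plain two-stage least squares on a toy simultaneous system (MSLS itself extends three-stage least squares to non-iid errors and is not reproduced here). OLS on the first equation would be biased because the regressor y2 is endogenous:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simultaneous system with correlated errors:
#   y1 = b0 + b1*y2 + e1,   y2 = g0 + g1*z + e2   (y2 endogenous in eq. 1)
z = rng.normal(size=n)                                   # exogenous instrument
e = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n)
y2 = 1.0 + 2.0 * z + e[:, 1]
y1 = 0.5 + 1.5 * y2 + e[:, 0]

# Stage 1: regress the endogenous regressor on the instruments
Z = np.column_stack([np.ones(n), z])
y2_hat = Z @ np.linalg.lstsq(Z, y2, rcond=None)[0]

# Stage 2: replace y2 by its fitted values and regress y1 on them
X_hat = np.column_stack([np.ones(n), y2_hat])
beta = np.linalg.lstsq(X_hat, y1, rcond=None)[0]
print("2SLS estimates (b0, b1):", beta)   # consistent, unlike plain OLS
```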
6

Hooli, Santosh. "Development of FPGA based low-power digital pulse height fitting." Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4963.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2006.
Title from document title page. Document formatted into pages; contains xi, 248 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 90-93).
7

Voisin, Sophie. "3D model acquisition, segmentation and reconstruction using primitive fitting." Dijon, 2008. http://www.theses.fr/2008DIJOS056.

Full text
Abstract:
The reverse engineering of a 3D object consists of identifying the main parts, or primitives, which best reconstruct its 3D point cloud. Because the success of the reconstruction process is greatly influenced by the errors generated along the reverse-engineering chain, we focused our research on improving two phases of the process. First, in order to minimize the point-cloud acquisition errors associated with the use of a structured-light projection scanner, we present a method to select the best illumination source and the best object appearance colors, depending on the characteristics of the scanner used. Second, in order to obtain a simplified representation of the object while maintaining an accurate and realistic representation, we present novel 3D reconstruction and segmentation methods. The originality of these methods is the use of genetic algorithms to obtain a representation of the model using primitives, in our case superquadrics or supershapes. The particularities of these methods lie in the flexibility provided by genetic algorithms in solving optimization problems, since they do not depend on the initialization process, and in the capabilities of the supershape representation, which allows very complex 3D shapes to be reconstructed. Despite relatively expensive computing times, we present good performance results in terms of reconstruction and segmentation of objects and scenes.
8

Mouat, Cameron Thomas. "Fast algorithms and preconditioning techniques for fitting radial basis functions." Thesis, University of Canterbury. Mathematics and Statistics, 2001. http://hdl.handle.net/10092/5598.

Full text
Abstract:
Radial basis functions are excellent interpolators for scattered data in R^d. Previously, the use of RBFs had been restricted to small or medium-sized data sets due to the high computational cost of solving the interpolation equations when using global basic functions. The construction of fast multipole methods, which reduce the cost of finding a matrix-vector product to O(N log N) or O(N) operations, has created the opportunity to dramatically reduce the cost of solving RBF equations. This thesis presents preconditioners which, in conjunction with matrix iterative methods, reduce the cost of solving these systems from O(N^3) operations to O(N log N) operations. The usual formulation of the radial basis function interpolation equations is generally badly conditioned for large N, so the accuracy of the solution is less certain. However, it is not the problem that is badly conditioned, but rather the basis built from the Φ functions. The preconditioners in this thesis improve the conditioning of the system by converting to a better basis.
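A minimal numpy sketch of the underlying linear algebra, assuming a multiquadric basic function and synthetic scattered data: assembling and directly solving the dense interpolation system is the O(N^3), badly conditioned step that the thesis's preconditioned iterative methods are designed to avoid.

```python
import numpy as np

rng = np.random.default_rng(1)
centers = rng.uniform(-1, 1, size=(100, 2))          # scattered sites in R^2
values = np.sin(centers[:, 0]) * np.cos(centers[:, 1])

# Global basic function: multiquadric phi(r) = sqrt(r^2 + c^2)
def phi(r, c=0.1):
    return np.sqrt(r**2 + c**2)

# Dense interpolation matrix A_ij = phi(|x_i - x_j|); a direct solve of
# A w = f costs O(N^3) and the conditioning degrades as N grows, which
# is what preconditioning plus fast matrix-vector products address.
r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
A = phi(r)
print("condition number:", np.linalg.cond(A))
w = np.linalg.solve(A, values)

def interpolate(x):
    # Evaluate the RBF interpolant at a new point x
    return phi(np.linalg.norm(centers - x, axis=-1)) @ w

print(interpolate(np.array([0.2, -0.3])))
```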
9

Balakrishnan, Purnima Parvathy. "Studies of optimal track-fitting techniques for the DarkLight experiment." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/83813.

Full text
Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Physics, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 49).
The DarkLight experiment is searching for a dark force carrier, the A' boson, and hopes to measure its mass with a resolution of approximately 1 MeV/c². This mass calculation requires precise reconstruction to turn data, in the form of hits within the detector, into a particle track with known initial momentum. This thesis investigates the appropriateness of the Billoir optimal fit to reconstruct helical, low-energy lepton tracks while accounting for multiple scattering, using two separate track parameterizations. The first method approximates the track as a piecewise concatenation of parabolas in three dimensions, and (wrongly) assumes that the y and z components of the track are independent. When tested on simulated data, this returns a track which geometrically fits the data; however, the momentum extracted from this geometrical representation is an order of magnitude higher than the true momentum of the track. The second method approximates the track as a piecewise concatenation of helical segments. This returns a track which geometrically fits the data even better than the parabolic parameterization, but the returned momentum depends on the seeds given to the algorithm. Further work is needed to modify this fitting method so that it reliably reconstructs tracks.
10

Babu, Prabhu. "Spectral Analysis of Nonuniformly Sampled Data and Applications." Doctoral thesis, Uppsala universitet, Avdelningen för systemteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-180391.

Full text
Abstract:
Signal acquisition, signal reconstruction and analysis of the signal's spectrum are the three most important steps in signal processing, and they are found in almost all modern hardware. In most signal processing hardware, the signal of interest is sampled at uniform intervals satisfying conditions such as the Nyquist rate. In some cases, however, the privilege of having uniformly sampled data is lost due to constraints on the hardware resources. This thesis addresses the important problem of signal reconstruction and spectral analysis from nonuniformly sampled data and presents a variety of methods, which are tested via numerical experiments on both artificial and real-life data sets. The thesis starts with a brief review of the methods available in the literature for signal reconstruction and spectral analysis from nonuniformly sampled data. The methods discussed are classified into two broad categories, dense and sparse, according to the kind of spectra for which they are applicable. Among dense spectral methods, the main contribution of the thesis is a non-parametric approach named LIMES, which recovers a smooth spectrum from nonuniformly sampled data; apart from recovering the spectrum, LIMES also gives an estimate of the covariance matrix. Among sparse methods, the two main contributions are SPICE and LIKES, both user-parameter-free sparse estimation methods applicable to line spectral estimation. Other important contributions are extensions of SPICE and LIKES to multivariate time series and array processing models, and a solution to the grid selection problem in sparse estimation of spectral-line parameters. The third and final part of the thesis applies the discussed methods to radial velocity data analysis for exoplanet detection. Apart from the exoplanet application, an application based on Sudoku, which is related to sparse parameter estimation, is also discussed.
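As a point of reference for the problem setting, the classical Lomb-Scargle periodogram (a standard tool for nonuniformly sampled data, not the LIMES, SPICE or LIKES methods contributed by the thesis) can be computed with SciPy:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 100, 300))        # nonuniform sampling instants
y = np.sin(2 * np.pi * 0.17 * t) + 0.5 * rng.standard_normal(t.size)

# Trial frequencies (Hz), converted to angular frequencies for SciPy
freqs = np.linspace(0.01, 0.5, 2000)
pgram = lombscargle(t, y - y.mean(), 2 * np.pi * freqs)
print("spectral peak near %.3f Hz" % freqs[np.argmax(pgram)])
```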
11

Hodgkinson, Gerald James. "In search of C₂ and C₆₀ and improved line-profile fitting techniques." Thesis, Open University, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.367992.

Full text
12

Fu, Nicole Christina. "Physical Properties of Massive, Star-Forming Galaxies When the Universe Was Only Two Billion Years Old." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/19956.

Full text
Abstract:
Due to the finite speed of light and a vast, expanding universe, telescopes are just now receiving the light emitted by galaxies as they were forming in the very early universe. The light from these galaxies has been redshifted (stretched to longer, redder wavelengths) as a result of its journey through expanding space. Using sophisticated techniques and exceptional multi-wavelength optical and infrared data, we isolate a population of 378 galaxies in the process of formation when the Universe was only two billion years old. By matching the distinctive properties of the light spectra of these galaxies to models, the redshift, age, dust content, star formation rate and total stellar mass of each galaxy are determined. Comparing our results to similar surveys of galaxy populations at other redshifts, a picture emerges of the growth and evolution of massive, star-forming galaxies over the course of billions of years.
13

Mamic, G. J. "Representation and recognition of 3-D free-form objects incorporating statistical techniques." Thesis, Queensland University of Technology, 2002.

Find full text
14

Silva, Pedro Redol Lourenço da. "Os vitrais dos séculos XV e XVI do Mosteiro de Santa Maria da Vitória: estudo sobre o seu significado cultural e artístico, e sobre a sua conservação." Master's thesis, Universidade de Lisboa, Faculdade de Letras, 1999. http://dited.bn.pt:80/29121.

Full text
15

McPhillips, Kenneth J. "Far field shallow water horizontal wave number estimation given a linear towed array using fast maximum likelihood, matrix pencil, and subspace fitting techniques." View online; access limited to URI, 2007. http://0-digitalcommons.uri.edu.helin.uri.edu/dissertations/AAI3276997.

Full text
16

Breßler, Ingo. "Scattering techniques for nanoparticle analysis: classical curve fitting and Monte Carlo methods." Doctoral thesis, Technische Universität Berlin, 2017. Supervised and reviewed by Andreas F. Thünemann and Michael Gradzielski. http://d-nb.info/1156018390/34.

Full text
17

Galbincea, Nicholas D. "Critical Analysis of Dimensionality Reduction Techniques and Statistical Microstructural Descriptors for Mesoscale Variability Quantification." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1500642043518197.

Full text
18

McKenna, Frederick W. "Studies of cell survival curve fitting, effective doses for radiobiological evaluation in SBRT treatment techniques and the dependence of optical density growth in Gafchromic EBT film used in IMRT." Oklahoma City : [s.n.], 2009.

Find full text
19

Laranjeira Moreira, Matheus. "Visual servoing on deformable objects: an application to tether shape control." Electronic thesis or dissertation, Toulon, 2019. http://www.theses.fr/2019TOUL0007.

Full text
Abstract:
This thesis addresses the problem of tether shape control for small remotely operated underwater vehicles (mini-ROVs), which are suitable, thanks to their small size and high maneuverability, for the exploration of shallow waters and cluttered spaces. The management of the tether is, however, a hard task, since these robots do not have enough propulsion power to counterbalance the drag forces acting on the tether cable. In order to cope with this problem, we introduced the concept of a Chain of mini-ROVs, where several robots are linked to the tether cable and can, together, manage the external perturbations and control the shape of the cable. We investigated the use of the embedded cameras to regulate the shape of a portion of tether linking two successive robots, a leader and a follower. Only the follower robot deals with the tether shape regulation task; the leader is released to explore its surroundings. The tether linking both robots is assumed to be negatively buoyant and is modeled by a catenary. The tether shape parameters are estimated in real time by a nonlinear optimization procedure that fits the catenary model to the tether points detected in the image. The shape parameter regulation is then achieved through a catenary-based control scheme relating the robot motion to the tether shape variation. The proposed visual servoing control scheme has proved able to properly manage the tether shape in simulations and in real experiments in a pool.
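The catenary-fitting step lends itself to a short sketch. Assuming tether points have already been detected in the image, a nonlinear least-squares fit of illustrative catenary parameters (sag C and offsets x0, y0; this parameterization is ours, not necessarily the thesis's) might look like this:

```python
import numpy as np
from scipy.optimize import least_squares

# Catenary with sag parameter C and offsets (x0, y0):
# y = y0 + C * (cosh((x - x0) / C) - 1)
def catenary(p, x):
    C, x0, y0 = p
    return y0 + C * (np.cosh((x - x0) / C) - 1.0)

def residuals(p, x, y):
    return catenary(p, x) - y

# Synthetic "detected tether points" standing in for image detections
rng = np.random.default_rng(3)
x = np.linspace(-1.0, 1.0, 40)
y = catenary([0.8, 0.1, 0.0], x) + 0.01 * rng.standard_normal(x.size)

fit = least_squares(residuals, x0=[1.0, 0.0, 0.0], args=(x, y))
print("estimated (C, x0, y0):", fit.x)
```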
20

Atoui, Ibrahim Abdelhalim. "Data reduction techniques for wireless sensor networks using mathematical models." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCD009.

Full text
Abstract:
In this thesis, we present energy-efficient data reduction and security techniques dedicated to wireless sensor networks. First, we propose a data aggregation model based on a similarity function that helps remove redundant data. In addition, based on fitting functions, we worked on sending fewer data features, accompanied by the fitting function that expresses all the features. Second, we focus on the heterogeneity of the data while studying the correlation among these multivariate features, in order to enhance the data prediction technique based on a polynomial function, all after removing similar measures in the aggregation phase using the Euclidean distance. Finally, we provide a rigorous security framework, inherited from cryptography, that satisfies the level of exigence usually attained in tree-based WSNs. It prevents attackers from gaining any information about the sensed data by ensuring end-to-end privacy between the sensor nodes and the sink. In order to validate our proposed techniques, we implemented simulations of the first technique on real readings collected from a small Sensor Scope network deployed at the Grand-St-Bernard, while simulations of the second and third techniques were conducted on real data collected from 54 sensors deployed in the Intel Berkeley Research Lab. The performance of our techniques is evaluated according to the data reduction rate, energy consumption, data accuracy and time complexity.
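A toy sketch of the two reduction ideas, with invented signals and thresholds: similarity-based aggregation drops readings that lie within a Euclidean distance threshold of an already-kept reading, and a low-order fitting function is transmitted in place of raw samples.

```python
import numpy as np

def aggregate(readings, threshold=0.05):
    """Keep a reading only if it is at least `threshold` away
    (Euclidean distance) from every reading already kept."""
    kept = []
    for r in readings:
        if all(np.linalg.norm(r - k) >= threshold for k in kept):
            kept.append(r)
    return np.array(kept)

# Synthetic periodic sensor readings containing near-duplicates
t = np.linspace(0, 1, 200)
readings = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
kept = aggregate(readings)
print(f"{len(readings)} readings reduced to {len(kept)}")

# Instead of raw samples, send the coefficients of a fitting function
# that expresses the feature over the collection period.
coeffs = np.polyfit(t, np.sin(2 * np.pi * t), 5)
print("fitting-function coefficients:", np.round(coeffs, 3))
```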
21

Agnani, Deep Bentz Joe. "Computational simulations to study the kinetics of drug efflux via multidrug resistant membrane proteins expressed in confluent cell monolayers: a critical evaluation of different models employed, data fitting techniques and global optimization strategies." Philadelphia, Pa.: Drexel University, 2009. http://hdl.handle.net/1860/3030.

Full text
22

Kempthorne, Daryl Matthew. "The development of virtual leaf surface models for interactive agrichemical spray applications." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/84525/12/84525%28thesis%29.pdf.

Full text
Abstract:
This project constructed virtual plant leaf surfaces from digitised data sets for use in droplet spray models. Digitisation techniques for obtaining data sets for cotton, Chenopodium and wheat leaves are discussed, and novel algorithms for the reconstruction of the leaves of these three plant species are developed. The reconstructed leaf surfaces are included in agricultural droplet spray models to investigate the effect of the nozzle and spray formulation combination on the proportion of spray retained by the plant. A numerical study of the post-impaction motion of large droplets that have formed on the leaf surface is also considered.
23

Amara, Mounir. "Segmentation de tracés manuscrits. Application à l'extraction de primitives." Rouen, 1998. http://www.theses.fr/1998ROUES001.

Full text
Abstract:
This work presents the modelling of curves by line, circle and conic elements within a single formulation, using an extended Kalman filter. The application is the segmentation of handwritten traces into a succession of geometric primitives. For the fitting, we describe the different curves by Cartesian equations. These equations are subject to a normalization constraint so as to obtain a precise and robust estimation of the parameters of the geometric primitives. The two equations, written in implicit form, constitute the observation system; they are linearized by a first-order Taylor expansion. The state equation of the system is held constant to specify the description of a single shape. The recursive parameter estimation is formulated with an extended Kalman filter that minimizes a quadratic error, namely the distance from a point to a shape, weighted by the covariance of the observation noise. To reconstruct a handwritten trace, we define a model-switching strategy; in our case, detection of the switch is based either on spatio-temporal criteria, using the dynamics of the handwritten trace, or on purely geometric criteria. We apply our methodology to the segmentation of real traces consisting of handwritten digits or letters. We show that the different methods we developed can reconstruct an arbitrary handwritten trace in real time in a precise, robust, complete and fully autonomous way. These methods also allow handwritten traces to be coded: a strategy is implemented to select the most significant primitive according to an appropriate criterion, and we propose an approach that takes into account both the data reduction rate and the precision of the description. These methods have the advantage of considerably reducing the amount of initial information while keeping its informative part. They were applied to real handwritten traces for the analysis of drawings and handwriting of primary-school children.
24

顏宏添. "Compression of color image via the technique of surface fitting." Thesis, 1991. http://ndltd.ncl.edu.tw/handle/13037037203945551144.

Full text
25

Tien, Yung-Chang, and 田詠昌. "CBCT scatter correction by two dimensional imaging curve fitting technique." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/58693289123747247096.

Full text
Abstract:
Master's thesis, National Yang-Ming University, Department of Biomedical Imaging and Radiological Sciences, 2013.
Among current medical imaging modalities, cone-beam CT (CBCT) is a recent breakthrough. The difference between CBCT and clinical CT is that CBCT requires a shorter scan time and delivers a lower dose. However, CBCT has some technical shortcomings, so until recently it was used only in academic research and industrial testing. Owing to its advantages, CBCT has also been used for image assessment in the dental field. This research focused on the Compton effect produced during imaging, and we developed algorithms to correct the effect of scattered photons on the reconstructed image. The first method was image inpainting, a technique that has been used in archaeology to fill holes in damaged regions or to remove selected objects. The second method fitted the data with a polynomial model. We used the remaining scatter signal to estimate the scatter distribution and validated our methods with the Monte Carlo simulation package GATE. We used a domestic dental CT scanner to scan physical phantoms and analyzed the data to assess our algorithms. The results showed that polynomial-fit correction is better than image inpainting: it can improve the contrast-to-noise ratio (CNR) and reduce the cupping artifact. On the first physical phantom, the cupping artifact was reduced from 21.7% to 10.6% and the contrast was improved from 10.47 to 18.60. On the second physical phantom, the cupping artifact was reduced from 22.7% to 18.2% and the contrast was improved from 1.98 to 2.29.
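The polynomial-fit idea can be sketched in one dimension: sample the projection where only scatter remains (outside the object shadow), fit a low-order polynomial across the whole detector row, and subtract it. The signal model below is synthetic and purely illustrative of the mechanism.

```python
import numpy as np

# One detector row: primary signal plus a smooth, low-frequency
# scatter background (synthetic stand-in for a CBCT projection).
u = np.linspace(-1, 1, 512)                      # detector coordinate
scatter = 0.2 - 0.1 * u**2                       # smooth scatter field
primary = np.where(np.abs(u) < 0.6, 1.0 - u**2, 0.0)
measured = primary + scatter

# Fit a low-order polynomial to the signal outside the object shadow,
# where only scatter contributes, then extrapolate across the row.
mask = np.abs(u) >= 0.7
coeffs = np.polyfit(u[mask], measured[mask], 2)
scatter_fit = np.polyval(coeffs, u)

corrected = measured - scatter_fit               # scatter-corrected row
print("max abs scatter residual:", np.abs(corrected - primary).max())
```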
26

"Fitting of Hodgkin-Huxley experimental data via a new deformation kinetic based model." 2012. http://library.cuhk.edu.hk/record=b5549108.

Full text
Abstract:
The Hodgkin-Huxley (HH) model has had a profound influence on the development of electrophysiology. It is capable of modeling the transient responses of voltage-gated ion channels precisely. Nevertheless, limitations and deficiencies of the model were found as researchers conducted subsequent experiments. In this regard, a new model based on deformation kinetics is put forth to help explain the HH experimental data with a deeper level of physical insight. Under the proposed model, the famous HH equation [with formula] for the description of the potassium conductance is replaced by [with formula], and the HH sodium conductance equation [with formula] is substituted by [with formula]. Meanwhile, n(t), m(t) and h(t) are still first-order differential equations, as in the HH case. This thesis illustrates the capability of the new model in approximating HH's experimental data on squid giant axons. Detailed derivation of the new model and identification of the parametric functions are summarized in this report. A customized genetic algorithm was utilized to optimize the model parameters. After fine-tuning the new model, we are able to describe the conductance behaviour of voltage-gated ion channels closely, and to account for the Cole-Moore shift phenomenon. Under identical initial depolarizing stimuli and temperature as stated in HH's experiments, close approximations of the membrane action potential can also be obtained by the new model.
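For context, the classical HH description of the potassium conductance rise after a depolarizing step from rest is g_K(t) = g_max (1 - e^(-t/tau))^4, and fitting it to conductance data is a small exercise. The sketch below uses ordinary nonlinear least squares rather than the thesis's customized genetic algorithm, and does not reproduce the new deformation-kinetic equations:

```python
import numpy as np
from scipy.optimize import curve_fit

# Classical HH potassium conductance rise for a voltage step (n0 = 0):
# g_K(t) = g_max * (1 - exp(-t / tau))^4
def gK(t, g_max, tau):
    return g_max * (1.0 - np.exp(-t / tau))**4

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 100)                   # time in ms
data = gK(t, 20.0, 1.5) + 0.2 * rng.standard_normal(t.size)

params, _ = curve_fit(gK, t, data, p0=[10.0, 1.0])
print("fitted g_max = %.2f mS/cm^2, tau = %.2f ms" % tuple(params))
```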
27

ZHANG, MING-HUA, and 張銘樺. "Weighted Subspace Fitting Technique Based on Swarm Optimization for Carrier Frequency Offset Estimation." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/r4zgt8.

Full text
Abstract:
Master's thesis, Ling Tung University, Department of Information Technology, 2019.
This thesis deals with blind carrier frequency offset (CFO) estimation based on swarm intelligence (SI) optimization algorithms with the weighted subspace fitting (WSF) criterion for the interleaved orthogonal frequency division multiple access (OFDMA) uplink system. For the CFO estimation problem, it is well known that WSF has superior statistical characteristics and better estimation performance. However, this type of CFO estimation leads to a high-dimensional search problem: optimizing complex nonlinear multi-modal functions requires a large computational load, which makes it difficult to maximize or minimize nonlinear objective functions over large parameter spaces. Therefore, this thesis uses SI optimization algorithms to improve the estimation accuracy and reduce the computational load. The main optimization algorithms include particle swarm optimization (PSO), the gravitational search algorithm (GSA), the hybrid of PSO and GSA (PSOGSA), and the whale optimization algorithm (WOA). Meanwhile, this thesis also adds a fuzzy inference system to PSO and GSA to reduce the required number of iterations. Finally, several simulation results are provided to illustrate the effectiveness of the proposed CFO estimators.
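A bare-bones PSO loop illustrates the optimizer family employed here. The objective below is a toy multi-modal stand-in for the actual WSF cost over CFO candidates, and the inertia and acceleration coefficients are generic textbook values:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, bounds=(-1.0, 1.0)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_f)]                # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

# Toy multi-modal surrogate for a WSF cost over CFO candidates
cost = lambda e: np.sum(e**2) - 0.3 * np.sum(np.cos(5 * np.pi * e))
best, fval = pso(cost, dim=2)
print("estimated offsets:", best, "cost:", fval)
```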
28

Lee, You-Ching, and 李侑青. "Automatic Optical Inspection Techniques Based on Non-uniform Background Fitting." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/97599602036063089551.

Full text
Abstract:
Ph.D. dissertation, National Central University, Department of Computer Science and Information Engineering, 2013.
In recent years, object detection has become increasingly popular in industrial applications, driven by advanced scanning devices and the demand to replace human visual inspection. Moreover, due to the growth of image and video data, automatic object detection is expected to substitute for human inspection, accelerate inspection speed, and increase inspection correctness. Applications include biomedical image diagnosis, deoxyribonucleic acid (DNA) electrophoresis analysis, protein electrophoresis analysis, vehicle safety monitoring, solar cell production inspection, semiconductor wafer inspection, thin-film-transistor liquid-crystal display (TFT-LCD) inspection, texture segmentation, human face detection, house surveillance, etc. In this dissertation, we discuss three of these auto-detection problems: DNA electrophoresis analysis, TFT-LCD inspection, and texture segmentation. In all three, foreground objects appear on a non-uniform background. For DNA electrophoresis analysis, backgrounds are fitted by one-dimensional curves; for TFT-LCD inspection, backgrounds are fitted by two-dimensional planes; for texture segmentation, Gabor magnitudes of texture backgrounds are fitted by hyperplanes. Before background fitting, each problem needs pre-processing suited to its image features. For DNA electrophoresis analysis, we propose a completely automatic band detection system for pulsed-field gel electrophoresis (PFGE) images. Band detection comprises lane segmentation and band assignment. The lane segmentation algorithm characterizes features of the PFGE images and uses optimal line fitting to separate lanes; the band assignment algorithm uses polynomial fitting to remove the uneven background and uses gradient features of bands to detect bands. For TFT-LCD inspection, we propose an online TFT-LCD mura defect detection method consisting of illumination calibration, multi-image accumulation, and multi-resolution background subtraction. First, an LCD on a moving product conveyer is contiguously captured in several images at different locations, and a synthesized LCD image is used to calibrate the non-uniform illumination of the images. Second, the images are aligned in position to accumulate the gray levels of the pixels that correspond to each point on the LCD. Third, multi-resolution backgrounds of the accumulated image are progressively estimated based on the discrete wavelet transform (DWT): the accumulated image is decomposed into multiple resolutions and the estimated background is refined from coarse to fine. Subtracting the estimated background from the accumulated image leaves the defect candidates, and a standard thresholding method is then used to threshold out the mura defects. For texture segmentation, we propose an unsupervised texture segmentation method using an optimal asymmetric Gabor filter (AGF) based on the active contour model. First, we formulate an asymmetric Gaussian function and multiply it by a two-dimensional (2D) complex sinusoidal function to construct a 2D AGF. Then we compute the average and the variation of the Gabor magnitudes to capture their probability distribution; the average and variation are used in the level-set energy functional to evolve the level-set contour. To obtain an AGF that is optimal for the current evolution contour, we propose a Fisher-like function that determines the optimal AGF for the processed image at every iteration. Finally, the proposed active contour algorithm is described.
Experiments demonstrate the proposed automatic object detection techniques: the band detection system, the mura detection method, and the unsupervised texture segmentation. The band detection system can automatically segment the lanes in the gel images and detect the bands in the lanes; the band detection rate is 98.42%. The mura detection method can detect mura defects with arbitrary directions, shapes, and sizes; the detection rate of mura regions is 100%. The proposed unsupervised texture segmentation method can distinguish two different textural regions without pre-selecting a suitable Gabor filter.
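The two-dimensional plane fitting used for the non-uniform LCD background reduces to ordinary least squares. A sketch on a synthetic image with an injected dim defect follows; the image size, noise level and 4-sigma threshold are illustrative choices, not the dissertation's settings.

```python
import numpy as np

rng = np.random.default_rng(5)
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]

# Synthetic LCD image: tilted illumination plane + noise + a dim mura blob
image = 100 + 0.2 * xx + 0.1 * yy + rng.normal(0, 0.5, (h, w))
image[20:28, 30:40] += 3.0                       # the injected defect

# Fit a plane a*x + b*y + c to the whole image by least squares
A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
coef, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
background = (A @ coef).reshape(h, w)

residual = image - background                    # defect candidates remain
detected = residual > 4 * residual.std()         # simple thresholding
print("defect pixels flagged:", int(detected.sum()))
```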
29

Yi-Kai, Peng. "Applying Curve Fitting Techniques to Construct the Synopsis of Data Streams." 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0016-0109200613412884.

Full text
30

Peng, Yi-Kai, and 彭義凱. "Applying Curve Fitting Techniques to Construct the Synopsis of Data Streams." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/60091574710726087517.

Full text
Abstract:
Master's thesis, National Tsing Hua University, Institute of Information Systems and Applications, 2006.
A data stream is a real-time, continuous, and ordered sequence of data items. It is a widely used data format for dealing with large amounts of dynamic data. Dynamic content and unbounded storage requirements are the two main characteristics of data streams, and both issues must be dealt with while processing them. For the dynamic-content issue, approximate answering is a widely used approach to process queries on data streams. For the unbounded-storage issue, data structures have been proposed to summarize data streams while keeping the required storage space small. A synopsis is a data structure that summarizes a data stream; using suitable algorithms, users can get approximate answers to queries on the stream from the summarized information stored in the synopsis. In this thesis, we use curve fitting to construct the synopsis of a data stream in the form of a curve expressed by a polynomial function. Algorithms for constructing the synopsis data structure and for querying the data stream are also proposed. We prove that the storage space required by the proposed method is O(log N). From the experimental results, we observe that our approach can achieve 95% accuracy on data contents for the queries.
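A minimal sketch of the idea, with the window size and polynomial degree chosen arbitrarily: keep a sliding window of the stream, maintain a fitted polynomial as the synopsis, and answer point queries by evaluating it.

```python
import numpy as np
from collections import deque
from numpy.polynomial import Polynomial

class PolySynopsis:
    """Fixed-degree polynomial summarizing the most recent window of a
    stream: bounded memory and approximate answers to point queries."""
    def __init__(self, window=64, degree=5):
        self.buf = deque(maxlen=window)
        self.degree = degree
        self.poly = None

    def insert(self, t, v):
        self.buf.append((t, v))
        if len(self.buf) > self.degree:          # enough points to fit
            ts, vs = zip(*self.buf)
            self.poly = Polynomial.fit(ts, vs, self.degree)

    def query(self, t):
        return self.poly(t)                      # approximate answer

rng = np.random.default_rng(6)
syn = PolySynopsis()
for t in range(300):
    syn.insert(t, np.sin(0.05 * t) + 0.01 * rng.standard_normal())
print("approximate value at t = 290:", syn.query(290))
```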
31

Feng, Shaw Ching. "The use of tricubic solids and surface fitting techniques in automated mold production." 1987. http://catalog.hathitrust.org/api/volumes/oclc/18427589.html.

Full text
Abstract:
Thesis (Ph. D.)--University of Wisconsin--Madison, 1987.
Typescript. Vita. Description based on print version record. Includes bibliographical references (leaves 210-220).
32

Lin, ShihHsiang, and 林士翔. "Exploring the Use of Data Fitting and Clustering Techniques for Robust Speech Recognition." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/62281383807216796339.

Full text
Abstract:
Master's thesis, National Taiwan Normal University, Department of Information and Computer Education, 2007.
Speech is the primary and most convenient means of communication between individuals. It is also expected that automatic speech recognition (ASR) will play a more active role and serve as a major human-machine interface for interaction between people and different kinds of intelligent electronic devices in the near future. Most current state-of-the-art ASR systems achieve quite high recognition performance in controlled laboratory environments. However, when the systems are moved out of the laboratory and deployed in real-world applications, their performance often degrades dramatically, because varying environmental effects lead to a mismatch between the acoustic conditions of the training and test speech data. Therefore, robustness techniques have received great attention in recent years. Robustness techniques generally fall into two categories, according to whether they operate on the feature vectors themselves or on their corresponding probability distributions; each has its own strengths and limitations. In this thesis, several attempts were made to integrate these two sources of information to improve current speech robustness methods by using a novel data-fitting scheme. First, cluster-based polynomial-fit histogram equalization (CPHEQ), based on histogram equalization and polynomial regression, is proposed to directly characterize the relationship between speech feature vectors and their corresponding probability distributions by utilizing stereo speech training data. Moreover, we extended the idea of CPHEQ with some elaborate assumptions and derived two further methods, namely polynomial-fit histogram equalization (PHEQ) and selective cluster-based polynomial-fit histogram equalization (SCPHEQ). PHEQ uses polynomial regression to efficiently approximate the inverse of the cumulative density functions of speech feature vectors for HEQ, avoiding the high computational cost and large disk storage consumption of traditional HEQ methods. SCPHEQ is based on the missing feature theory and uses polynomial regression to reconstruct unreliable feature components. All experiments were carried out on the Aurora-2 database and task. Experimental results show that, for clean-condition training, our method achieved a considerable word error rate reduction over the baseline system and also significantly outperformed the other robustness methods.
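The PHEQ idea reduces to fitting a polynomial that maps each feature value to the reference-distribution quantile of its empirical CDF rank, so that test-time equalization is a single polynomial evaluation. A sketch on synthetic features, assuming a standard-normal reference distribution and an arbitrary polynomial order:

```python
import numpy as np
from scipy.stats import norm

def pheq_transform(feats, order=7):
    """Polynomial-fit histogram equalization (sketch): map each feature
    to the standard-normal quantile of its empirical CDF rank, then fit
    a polynomial feature -> quantile so equalization becomes a cheap
    polynomial evaluation."""
    ranks = np.argsort(np.argsort(feats))
    cdf = (ranks + 0.5) / len(feats)            # empirical CDF in (0, 1)
    target = norm.ppf(cdf)                      # reference quantiles
    coeffs = np.polyfit(feats, target, order)   # polynomial approximation
    return np.polyval(coeffs, feats), coeffs

# Synthetic cepstral-like feature sequence
rng = np.random.default_rng(7)
feats = 2.0 * rng.standard_normal(1000) + 0.5
equalized, coeffs = pheq_transform(feats)
print("mean %.3f, std %.3f after equalization"
      % (equalized.mean(), equalized.std()))
```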
33

Huang, Tzu-Hsuan, and 黃子軒. "Predicting Vt Variation and Static IR Drop of Ring Oscillators Using Model-Fitting Techniques." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/67062075019998685866.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Institute of Electronics, 2016.
This thesis presents a statistical model-fitting framework to efficiently decompose the impact of device Vt variation and power-network IR drop from measured ring-oscillator frequencies, without adding any extra circuitry to the original ring oscillators. The framework applies Gaussian process regression as its core model-fitting technique, with stepwise regression as a preprocessing step to select significant predictor features. Experiments based on SPICE simulation of an industrial 28nm technology demonstrate that the framework can simultaneously predict the NMOS Vt, PMOS Vt and static IR drop of the ring oscillators from their frequencies measured at different external supply voltages. The resulting R² values of the predicted features are all above 99.93%.
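A small sketch of the model-fitting core, using scikit-learn's Gaussian process regressor on an entirely synthetic frequency-versus-Vt relationship that stands in for the 28nm SPICE data; the kernel choice is illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(8)

# Synthetic training set: RO frequencies measured at three supply
# voltages (features) versus the Vt shift to predict (invented units).
vt = rng.normal(0.0, 0.02, size=200)                     # Vt variation
freqs = np.column_stack([1.0 - 3.0 * vt,
                         1.1 - 2.5 * vt,
                         1.2 - 2.0 * vt])
freqs += rng.normal(0, 0.002, freqs.shape)               # measurement noise

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                               normalize_y=True)
gpr.fit(freqs[:150], vt[:150])                           # train on 150 ROs
print("R^2 on held-out ROs:", gpr.score(freqs[150:], vt[150:]))
```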
34

Assonitis, Alessia. "Shock-fitting techniques on 2D/3D unstructured and structured grids: algorithmic developments and advanced applications." Doctoral thesis, 2023. https://hdl.handle.net/11573/1666903.

Full text
Abstract:
Over the last decades, simulations of compressible flows featuring shocks have been one of the major drivers for developing new computational algorithms and tools able to compute complex flow configurations. Nowadays, computational fluid dynamics (CFD) solvers are mainly based on shock-capturing methods, which rely on the integral form of the governing equations and can compute all types of flows, including those with shocks, using the same discretization at all grid points. Consequently, these methods can be implemented with ease and provide physically meaningful solutions even for complex flow configurations, features that are particularly attractive to the CFD community. Although shock-capturing methods have been the subject of development and innovation for more than 40 years, they are plagued by several numerical problems arising from the shock-capture process, such as the finite width of captured discontinuities, numerical instabilities, and a reduction of the order of accuracy in the region downstream of the shock; these problems are still unsolved and may never find a solution. For this reason, there is renewed interest in shock-fitting techniques: these methods explicitly identify the discontinuities within the flow field and compute them by enforcing the Rankine-Hugoniot jump relations. Under this modelling, shocks are represented as zero-thickness discontinuities, so significant advantages can be gained in solution quality and accuracy, and this class of methods is immune to the numerical problems linked to the shock-capture process. Following this research line, this thesis proposes new developments and advanced applications of shock-fitting techniques, which prove that these methods are an effective alternative to shock-capturing methods for simulating flows with shocks, and that they can also provide a better understanding of the phenomena linked to shock waves.
35

Adelani, Titus Olufemi. "An Evaluation of Traffic Matrix Estimation Techniques for Large-Scale IP Networks." 2010. http://hdl.handle.net/1993/3869.

Full text
Abstract:
The information on the volume of traffic flowing between all possible origin and destination pairs in an IP network during a given period of time is generally referred to as the traffic matrix (TM). This information, which is very important for various traffic engineering tasks, is costly and difficult to obtain on large operational IP networks; consequently, it is often inferred from readily available link load measurements. In this thesis, we evaluated five TM estimation techniques, namely Tomogravity (TG), Entropy Maximization (EM), Quadratic Programming (QP), Linear Programming (LP) and Neural Network (NN), with gravity and worst-case bound (WCB) initial estimates. We found that the EM technique consistently performed best in most of our simulations and that the gravity model yielded better initial estimates than the WCB model. A hybrid of these techniques did not result in a considerable decrease in estimation errors. However, we achieved the most significant reduction in errors by combining iteratively proportionally-fitted estimates with the EM technique, and we therefore propose this combination as a viable approach for estimating the traffic matrix of large-scale IP networks.
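A compact sketch of the gravity prior and a tomogravity-style refinement; the real methods use weighted least squares with regularization, whereas the minimum-norm projection below is a simplification on invented traffic volumes and routing.

```python
import numpy as np

# Gravity model: spread total traffic in proportion to in/out volumes.
out_vol = np.array([120.0, 80.0, 50.0])         # traffic leaving each node
in_vol = np.array([100.0, 90.0, 60.0])          # traffic entering each node
g = (np.outer(out_vol, in_vol) / in_vol.sum()).ravel()   # vectorized prior TM

# Routing matrix A (links x OD pairs) and measured link loads b = A @ x.
rng = np.random.default_rng(9)
A = rng.integers(0, 2, size=(6, g.size)).astype(float)
x_true = g * rng.uniform(0.7, 1.3, g.size)      # "real" TM near the prior
b = A @ x_true

# Tomogravity-style refinement: minimum-norm correction of the gravity
# prior so the estimate is consistent with the link measurements.
x = g + np.linalg.pinv(A) @ (b - A @ g)
print("mean relative error: %.2f" % (np.abs(x - x_true) / x_true).mean())
```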
36

Hlavacek-Larrondo, Julie. "Analyse cinématique de l'hydrogène ionisé et étude du gaz ionisé diffus de trois galaxies du Groupe Sculpteur : NGC253, NGC300 et NGC247." Thèse, 2009. http://hdl.handle.net/1866/8048.

Full text