Dissertations / Theses on the topic 'Evaluation parameter'


Consult the top 50 dissertations and theses for your research on the topic 'Evaluation parameter.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations and theses in a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Lu. "Computerized evaluation of parameters for HEMT DC and microwave S parameter models." Ohio : Ohio University, 1995. http://www.ohiolink.edu/etd/view.cgi?ohiou1179518920.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rückert, Nadja, Robert S. Anderssen, and Bernd Hofmann. "Stable Parameter Identification Evaluation of Volatility." Universitätsbibliothek Chemnitz, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-85402.

Full text
Abstract:
Using the dual Black-Scholes partial differential equation, Dupire derived an explicit formula, involving the ratio of partial derivatives of the evolving fair value of a European call option (ECO), for recovering information about its variable volatility. Because the prices, as a function of maturity and strike, are only available as discrete noisy observations, the evaluation of Dupire’s formula reduces to being an ill-posed numerical differentiation problem, complicated by the need to take the ratio of derivatives. In order to illustrate the nature of ill-posedness, a simple finite difference scheme is first used to approximate the partial derivatives. A new method is then proposed which reformulates the determination of the volatility, from the partial differential equation defining the fair value of the ECO, as a parameter identification activity. By using the weak formulation of this equation, the problem is localized to a subregion on which the volatility surface can be approximated by a constant or a constant multiplied by some known shape function which models the local shape of the volatility function. The essential regularization is achieved through the localization, the choice of the analytic weight function, and the application of integration-by-parts to the weak formulation to transfer the differentiation of the discrete data to the differentiation of the analytic weight function.
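The ill-posed numerical differentiation described in this abstract can be illustrated with a small experiment. The sketch below is not the authors' code: it evaluates Dupire's formula, sigma^2(K, T) = 2 (dC/dT) / (K^2 d2C/dK2), with naive central finite differences on a synthetic Black-Scholes price surface of constant volatility, first on clean prices and then after adding small observation noise. The grid, the 20% volatility and the noise level are illustrative assumptions.

```python
import numpy as np
from math import erf, log, sqrt

def bs_call(S, K, T, sigma):
    """Black-Scholes call price with zero rates/dividends (synthetic 'market' data)."""
    Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    d1 = (log(S / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * Phi(d1) - K * Phi(d2)

def dupire_local_vol(C, K, T):
    """Evaluate Dupire's formula sigma^2 = 2 dC/dT / (K^2 d2C/dK2)
    with central finite differences -- the naive, ill-posed approach."""
    dC_dT = np.gradient(C, T, axis=0)
    d2C_dK2 = np.gradient(np.gradient(C, K, axis=1), K, axis=1)
    var = 2.0 * dC_dT / (K[None, :] ** 2 * d2C_dK2)
    return np.sqrt(np.clip(var, 0.0, None))

S0, true_sigma = 100.0, 0.2
K = np.linspace(80.0, 120.0, 41)   # strikes, spacing 1.0
T = np.linspace(0.5, 1.5, 21)      # maturities, spacing 0.05
C_clean = np.array([[bs_call(S0, k, t, true_sigma) for k in K] for t in T])

# Clean prices: the constant volatility is recovered well in the interior.
vol_clean = dupire_local_vol(C_clean, K, T)
err_clean = np.max(np.abs(vol_clean[5:-5, 5:-5] - true_sigma))

# Tiny observation noise (0.001 in price) is strongly amplified by d2C/dK2.
rng = np.random.default_rng(0)
C_noisy = C_clean + rng.normal(0.0, 1e-3, C_clean.shape)
vol_noisy = dupire_local_vol(C_noisy, K, T)
err_noisy = np.max(np.abs(vol_noisy[5:-5, 5:-5] - true_sigma))
```

On clean data the recovered surface stays close to the true 0.2, while the same differencing applied to slightly noisy prices produces errors orders of magnitude larger, which is exactly the instability the thesis avoids by moving to a regularized weak formulation.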
3

Clouse, Randy Wayne. "Evaluation of GLEAMS considering parameter uncertainty." Thesis, Virginia Tech, 1996. http://hdl.handle.net/10919/44516.

Full text
4

Clouse, Randy W. "Evaluation of GLEAMS considering parameter uncertainty /." This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-09042008-063009/.

Full text
5

BORGA, PAULA CECILIA. "DESIGN PARAMETER FOR EVALUATION OF PILE FOUNDATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2001. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=2037@1.

Full text
Abstract:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
Pile bearing-capacity design is based, directly or indirectly, on field test data. Because of their practicality, empirical methods are widely used; in Brazil, the methods of Decourt and Quaresma (1978, 1982) and of Aoki and Velloso (1975) stand out. This work evaluates the use of SPT and CPT data to estimate the geotechnical parameters needed to predict pile bearing capacity by theoretical methods. Empirical formulations for estimating parameters of granular and clayey materials are presented and evaluated. Another important element in bearing-capacity prediction is the state of stress acting around the pile, which is analyzed through considerations of the earth-pressure coefficient. Finally, some load-test results are presented for the analysis of parameter selection and the stress state, together with an evaluation of empirical methods for predicting bearing capacity.
The main objective of this thesis is to discuss the applicability of in-situ tests such as the Standard Penetration Test (SPT) and the Cone Penetration Test (CPT) for determining directly the design parameters used to predict the bearing capacity of pile foundations. Empirical correlations are used to estimate the mechanical properties of the soil in terms of shear resistance, and these values are applied directly in the classic formulations based on limit-equilibrium theory to evaluate separately the shaft and base resistance of piles. The values are then adjusted to account for the non-linear behavior of the soil, the load-transfer mechanism and the influence of constructive aspects. The results obtained with this methodology are compared with experimental results from static and dynamic load tests, and also with other empirical procedures that use in-situ test results to evaluate the bearing capacity of deep foundations directly.
6

Yao, Zhengrong. "Model Based Coding : Initialization, Parameter Extraction and Evaluation." Doctoral thesis, Umeå University, Applied Physics and Electronics, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-434.

Full text
Abstract:

This thesis covers topics relevant to model-based coding. Model-based coding is a promising very low bit rate video coding technique. The idea behind this technique is to parameterize a talking head and to extract and transmit the parameters describing facial movements. At the receiver, the parameters are used to reconstruct the talking head. Since only high-level animation parameters are transmitted, very high compression can be achieved with this coding scheme. This thesis covers the following three key problems.

Although it is a fundamental problem, the initialization problem has been neglected to some extent in the literature. In this thesis we pay particular attention to it and propose a pseudo-automatic initialization scheme: an analysis-by-synthesis scheme based on simulated annealing, which proved to be efficient.

Owing to recent technical advances and the newly emerged MPEG-4 standard, new schemes for texture mapping and motion estimation are suggested that use sample-based direct texture mapping; the feasibility of active motion estimation is explored, which proves able to give more than 10 times the tracking resolution. Building on mature face detection techniques, dynamic programming is introduced into the face detection module and applied to face tracking.

Another important problem addressed in this thesis is how to evaluate face tracking techniques. We studied the evaluation problem by examining the commonly used method, which employs a physical magnetic sensor to provide "ground truth", and we point out that using such a method can be quite misleading.
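The analysis-by-synthesis initialization described in this abstract can be sketched generically: propose model parameters, synthesize an observation, score it against the real one, and let simulated annealing escape local minima by occasionally accepting worse proposals. The sketch below is a toy stand-in (the thesis fits a 3-D face model to images); the synthesis function, step sizes and cooling schedule are all illustrative assumptions.

```python
import math
import random

def synthesize(params):
    """Toy 'synthesis' of observed features from model parameters
    (stand-in for rendering a face model)."""
    a, b = params
    return [a + b, a - b, 0.5 * a * a]

def cost(params, observed):
    """Analysis-by-synthesis error: squared distance between synthesized
    and observed features."""
    return sum((s - o) ** 2 for s, o in zip(synthesize(params), observed))

def anneal(observed, x0, iters=4000, t0=1.0, cooling=0.997, seed=0):
    """Minimal simulated-annealing minimizer with a step size tied to temperature."""
    rng = random.Random(seed)
    x = list(x0)
    fx = cost(x, observed)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        step = 0.5 * max(t, 0.05)                  # shrink moves as we cool
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = cost(cand, observed)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

true_params = [1.5, -0.7]
observed = synthesize(true_params)
best, fbest = anneal(observed, x0=[0.0, 0.0])
```

Starting from a deliberately wrong guess, the annealer recovers parameters close to the ones that generated the observation; in the thesis the same loop compares rendered and captured images rather than a toy feature vector.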

7

Banis, Y. Norouzi. "Data provision and parameter evaluation for erosion modelling." Thesis, University of Newcastle Upon Tyne, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242445.

Full text
8

Lundberg, Ted. "Analysis of simplified dynamic truck models for parameter evaluation." Thesis, KTH, Fordonsdynamik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177304.

Full text
Abstract:
The ride comfort of heavy commercial trucks is an important property that requires detailed testing to investigate how different vehicle components affect the response to road input. Trucks come in many configurations: a customer purchasing a new truck can often specify the number of axles, the drive line, the cabin type, the type of suspension and so on. This means many vehicle variants have to be tested to ensure good ride comfort, and testing many combinations of, for example, dampers in different environments and road conditions can be time consuming. The work could be aided by pre-test computer simulations in which the vehicle is simulated under the same conditions as in the tests; the simulation results could then be used to better understand how different components affect the vehicle response to a given road input. Using multibody-system (MBS) software, a full truck could be modeled and simulated to obtain accurate results, but the simulation would be computationally demanding and slow, and it requires a test engineer familiar with the MBS software to create the model and run the simulations. This thesis investigates whether simplified dynamic truck models developed in Matlab can be an alternative to more complex models created in MBS software. Three models are developed: a quarter-car model, a 2D half-truck model and a 3D truck model. The models are derived using the Lagrangian energy method, and the dynamic response to a given road input is computed numerically in Matlab. Different methods of solving the systems of differential equations are discussed, and the implementation of the implicit Newmark β-method is explained. To validate the truck models and the solver, the models are replicated in the MBS software Adams View.
The responses of the Adams and Matlab models to an excitation at the wheels are compared to confirm that the equations and solver are correctly derived and implemented. To test the models' ability to predict the response of a real truck, tests are performed on a road simulator: a four-wheel Scania tractor is tested in a hydraulic road-simulator rig, which excites the truck through the wheels with a sine sweep from 0-20 Hz while the resulting accelerations in the tractor are measured. Three front-axle damper setups are tested to obtain a parameter variation to study with the models: a standard damper, a harder damper and an undamped front axle. The same tests are simulated in Matlab, and the acceleration responses are compared to see how well the models predict the accelerations measured on the real truck. The Matlab and Adams models give the same results and are therefore judged to be mathematically correct, and the Newmark β-method is efficient and gives reasonable computing times. In the comparison with the road-simulator test, however, the models do not reproduce the measured response. To compare measurements and simulations, the tire stiffnesses have to be tuned so that the correct axle eigenfrequencies are obtained; the modified tire stiffnesses improve the results, but considerable deviations from the experimental results remain. The measurements on the truck show that the eigenfrequency of the front axle decreases when the front-axle damper is removed, while the models predict that it increases. There are also differences in the accelerations measured in the cabin and frame, as the models fail to predict many of the higher eigenfrequencies. In conclusion, it is argued that the models would have to be more complex to give useful information about the effect of damper variations on the axles.
It is also suggested that using commercially available software for the same simulations might be a better alternative, giving the user more freedom to inspect and modify the model.
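The simplest of the three models, the quarter car, combined with the implicit Newmark β-method (average acceleration: β = 1/4, γ = 1/2), can be sketched as follows. This is not the thesis's Matlab code, and the masses, stiffnesses and damping values are generic illustrative numbers, not Scania parameters.

```python
import numpy as np

# Quarter-car parameters (illustrative values, not from the thesis):
m_s, m_u = 400.0, 40.0        # sprung / unsprung mass [kg]
k_s, c_s = 20_000.0, 1_500.0  # suspension stiffness [N/m] and damping [Ns/m]
k_t = 180_000.0               # tire stiffness [N/m]

M = np.diag([m_s, m_u])
C = np.array([[c_s, -c_s], [-c_s, c_s]])
K = np.array([[k_s, -k_s], [-k_s, k_s + k_t]])

def newmark_beta(road, dt, beta=0.25, gamma=0.5):
    """Implicit Newmark integration of M a + C v + K x = F(t) for the
    2-DOF quarter car; x = [body, axle] displacement, F = tire force from road."""
    n = len(road)
    x = np.zeros(2)
    v = np.zeros(2)
    a = np.linalg.solve(M, np.array([0.0, k_t * road[0]]) - C @ v - K @ x)
    S = M + gamma * dt * C + beta * dt**2 * K   # effective (constant) system matrix
    body = np.zeros(n)
    for i in range(1, n):
        F = np.array([0.0, k_t * road[i]])
        x_pred = x + dt * v + (0.5 - beta) * dt**2 * a
        v_pred = v + (1.0 - gamma) * dt * a
        a = np.linalg.solve(S, F - C @ v_pred - K @ x_pred)
        x = x_pred + beta * dt**2 * a
        v = v_pred + gamma * dt * a
        body[i] = x[0]
    return body

# Body response to a 1 cm step in the road profile.
dt, t_end = 1e-3, 5.0
road = np.full(int(t_end / dt), 0.01)
body = newmark_beta(road, dt)
```

With β = 1/4 and γ = 1/2 the scheme is unconditionally stable, so the constant effective matrix S can be factored once; the damped body response overshoots and then settles at the 1 cm road level, as expected for an underdamped two-mass system.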
9

Patil, Vivek. "Criteria for Data Consistency Evaluation Prior to Modal Parameter Estimation." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627667589352536.

Full text
10

Batley, Rose-Marie. "The effects on parameter estimation of correlated dimensions and a differentiated ability in a two-dimensional, two-parameter item response model." Thesis, University of Ottawa (Canada), 1989. http://hdl.handle.net/10393/21362.

Full text
11

Thornton, Ben Johnston. "Parameter Evaluation and Sensitivity Analysis for an Automotive Damper Model." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354115794.

Full text
12

Lauri, Linus. "Algorithmic evaluation of Parameter Estimation for Hidden Markov Models in Finance." Thesis, KTH, Matematisk statistik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-141187.

Full text
Abstract:
Modeling financial time series is of great importance for success in the financial market, and Hidden Markov Models (HMMs) are a natural way to capture the regime-shifting nature of financial data. This thesis focuses on gaining an in-depth understanding of HMMs in general and of their parameter estimation in particular; the objective is to evaluate whether, and how well, financial data can be fitted by the model. The subject was requested by Nordea Markets with the purpose of gaining knowledge of HMMs for an eventual implementation of the theory by their index development group. The research chiefly consists of evaluating the algorithmic behavior of the model-parameter estimation. HMMs proved to be a good approach to modeling financial data, since much of the time series had properties that supported a regime-shifting approach. The most important factor for an effective algorithm is the number of states, i.e. the number of distinguishable clusters of values. The suggested procedure for continuously modeling financial data is an extensive monthly calculation of starting parameters, which are then used daily in a less time-consuming run of the EM algorithm.
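The estimation step this abstract refers to can be sketched with a minimal Baum-Welch (EM) fit of a two-state Gaussian-emission HMM to synthetic "returns". The regime parameters, series length and variance-spread initialization below are illustrative assumptions, not choices from the thesis.

```python
import numpy as np

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def baum_welch(x, n_states=2, n_iter=50):
    """Minimal Baum-Welch (EM) for a Gaussian-emission HMM using scaled
    forward-backward recursions. Returns (initial probs, transition matrix,
    state means, state variances)."""
    T = len(x)
    A = np.full((n_states, n_states), 1.0 / n_states)
    pi = np.full(n_states, 1.0 / n_states)
    mu = np.full(n_states, float(np.mean(x)))
    var = np.var(x) * np.linspace(0.5, 2.0, n_states)  # break symmetry by variance
    for _ in range(n_iter):
        B = np.stack([normal_pdf(x, mu[k], var[k]) for k in range(n_states)], axis=1)
        alpha = np.zeros((T, n_states)); c = np.zeros(T)     # scaled forward pass
        alpha[0] = pi * B[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[t]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        beta = np.ones((T, n_states))                        # scaled backward pass
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[t + 1] * beta[t + 1])) / c[t + 1]
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)
        xi = np.zeros((n_states, n_states))                  # expected transitions
        for t in range(T - 1):
            num = alpha[t][:, None] * A * (B[t + 1] * beta[t + 1])[None, :]
            xi += num / num.sum()
        A = xi / xi.sum(axis=1, keepdims=True)
        pi = gamma[0]
        w = gamma.sum(axis=0)
        mu = (gamma * x[:, None]).sum(axis=0) / w
        var = np.maximum((gamma * (x[:, None] - mu) ** 2).sum(axis=0) / w, 1e-8)
    return pi, A, mu, var

# Synthetic two-regime return series: a calm state and a volatile state.
rng = np.random.default_rng(7)
true_A = np.array([[0.98, 0.02], [0.04, 0.96]])
stds, means = [0.005, 0.02], [0.001, -0.001]
s, states = 0, []
for _ in range(2000):
    states.append(s)
    s = rng.choice(2, p=true_A[s])
x = np.array([rng.normal(means[s], stds[s]) for s in states])

pi, A, mu, var = baum_welch(x)
```

In the spirit of the thesis's monthly/daily scheme, the fitted (pi, A, mu, var) could be cached as warm-start values for cheaper daily re-runs with far fewer EM iterations.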
13

Mangado, López Nerea. "Cochlear implantation modeling and functional evaluation considering uncertainty and parameter variability." Doctoral thesis, Universitat Pompeu Fabra, 2017. http://hdl.handle.net/10803/586214.

Full text
Abstract:
Recent innovations in computational modeling have led to important advances towards predictive tools that simulate and optimize surgery outcomes. This thesis focuses on cochlear implantation surgery, a technique that allows functional hearing to be restored in patients with severe deafness. The success of the intervention, however, depends on factors that are unpredictable or difficult to control; combined with the high variability of hearing-restoration levels among patients, this makes predicting the outcome of the surgery very challenging. The aim of this thesis is to develop computational tools to assess the functional outcome of cochlear implantation. To this end, it addresses a set of challenges, such as the automatic optimization of implantation and stimulation parameters by evaluating the neural response evoked by the cochlear implant, and the functional evaluation of a large set of virtual patients.
Recent improvements in computational modeling have enabled important advances in predictive tools for simulating surgical procedures and so maximizing surgical outcomes. This thesis focuses on cochlear implantation surgery, a technique that restores hearing to patients with severe deafness. The success of the intervention, however, depends on a set of factors that are difficult to control or even unpredictable. For this reason there is great inter-individual variability, which makes the prediction of this surgery a complex process. The objective of this thesis is the development of computational tools for the functional evaluation of this surgery. To this end, the thesis addresses a series of challenges, among them the automatic optimization of the neural response induced by the cochlear implant and the numerical evaluation of large groups of patients.
14

Misbah, M. M. A. "Parameter evaluation and sensitivity for transient heat transfer in fixed beds." Thesis, Swansea University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.638201.

Full text
Abstract:
The dynamic response of a packed bed to a pulse of heat input has been studied to provide estimates of the axial dispersion coefficient, fluid-particle heat transfer coefficient, intraparticle thermal conductivity and particle heat capacity. Packings of spheres and cylinders of different material and size were investigated. The dynamic response found in experiment was compared with the dynamic response calculated for the same inlet perturbation by numerical analysis of the partial differential equations for the fluid-phase dispersion model including intraparticle conduction. The fluid-phase dispersion model was used in the analysis as unlike other models it has been subjected to statistical tests which showed its validity and its consistency over a wide range of experimental results. The method of analysis of beds of spheres was the Alternating Direction Implicit Method and it has been given in an earlier study. This method was then developed to the Alternating Phase Implicit-Explicit Method to enable its application to beds of cylinders. A Bayesian approach was followed in parameter estimations. In this approach prior information and information provided by the experimental run were combined to give mean posterior values of the four heat transfer parameters together with estimates of their accuracy. The Bayesian approach adopted in this study was also applied to resolve the parameter interaction which existed between the axial thermal dispersion and the heat transfer coefficient, two parameters which were found to be highly correlated.
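The Bayesian step this abstract describes, combining prior information with experimental data to obtain posterior parameter estimates, can be illustrated in its simplest conjugate form: a Gaussian prior on a single parameter updated with independent Gaussian-noise measurements. This is a generic sketch, not the study's four-parameter dispersion-model analysis, and the numbers are invented for illustration.

```python
def gaussian_posterior(prior_mean, prior_var, measurements, noise_var):
    """Conjugate Bayesian update: a Gaussian prior on a scalar parameter,
    combined with independent Gaussian-noise measurements, yields a Gaussian
    posterior whose precision is the sum of the prior and data precisions."""
    n = len(measurements)
    post_prec = 1.0 / prior_var + n / noise_var
    post_mean = (prior_mean / prior_var + sum(measurements) / noise_var) / post_prec
    return post_mean, 1.0 / post_prec

# Illustrative numbers: prior belief about a heat-transfer coefficient
# (hypothetical units and values) versus three noisy experimental estimates.
m, v = gaussian_posterior(prior_mean=120.0, prior_var=400.0,
                          measurements=[150.0, 140.0, 155.0], noise_var=225.0)
```

The posterior mean lands between the prior and the data average, weighted by their precisions, and the posterior variance is smaller than either source alone, which is the sense in which the study's prior and experimental information "combine" into more accurate estimates.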
15

Lunev, Alexey. "Evaluation and visualization of complexity in parameter setting in automotive industry." Thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-362711.

Full text
Abstract:
Parameter setting is a process used primarily to specify in what kind of vehicle an electronic control unit of each type is used. This thesis investigates whether the current strategy for measuring complexity gives users satisfactory results. The strategy consists of structure-based algorithms that form an essential part of the Complexity Analyzer, a prototype application used to evaluate complexity. The results described in this work suggest that the currently implemented algorithms have to be properly defined and adapted for use with parameter setting. Moreover, the measurements the algorithms output have been analyzed in more detail, making the results easier to interpret. It is shown that a typical parameter-setting file can be regarded as a tree structure; to measure variation in this structure, a new concept called path entropy has been formulated, tested and implemented. The main disadvantage of the original Complexity Analyzer application is its lack of user-friendliness. Therefore, a web version of the application based on the Model-View-Controller pattern has been developed. Unlike the original version, it includes a user interface, and visualizing the data takes just a couple of seconds, compared with the several minutes needed to run the original application.
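"Path entropy" is this thesis's own contribution and its exact definition is not given in the abstract, so the sketch below is only one plausible formalization: treat a parameter-setting file as a nested-dict tree, enumerate its root-to-leaf paths, and take the Shannon entropy of the leaf-value distribution, so that a file whose parameters all share one value scores 0 bits and one where every value differs scores log2(n) bits.

```python
from collections import Counter
from math import log2

def leaf_paths(tree, prefix=()):
    """Yield (path, value) for every root-to-leaf path of a nested-dict tree."""
    if isinstance(tree, dict):
        for key, sub in tree.items():
            yield from leaf_paths(sub, prefix + (key,))
    else:
        yield prefix, tree

def path_entropy(tree):
    """Hypothetical 'path entropy': Shannon entropy (bits) of the empirical
    distribution of leaf values over all root-to-leaf paths."""
    values = [v for _, v in leaf_paths(tree)]
    n = len(values)
    return -sum(c / n * log2(c / n) for c in Counter(values).values())

# A parameter-setting file viewed as a tree: ECU type -> parameter -> value.
uniform = {"ecu_a": {"p1": 1, "p2": 1}, "ecu_b": {"p1": 1}}   # no variation
mixed = {"ecu_a": {"p1": 1, "p2": 2}, "ecu_b": {"p1": 3}}     # all values differ
```

Here path_entropy(uniform) is 0 bits and path_entropy(mixed) is log2(3), about 1.58 bits; the actual thesis definition may differ in what it counts along each path.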
16

Boudriga, Sami. "Evaluation of "parameter design" methodologies for the design of chemical processing units." Thesis, University of Ottawa (Canada), 1990. http://hdl.handle.net/10393/5672.

Full text
Abstract:
'Parameter design' is a procedure for identifying settings of design variables that minimize variation in the performance of a processing unit; however, its application to the design of chemical processing units has not been widely reported. The objective of this study was to investigate the feasibility of using 'parameter design' methodologies to determine values of the design variables for a non-isothermal continuous stirred-tank reactor that minimize the sensitivity of product quality to disturbances. The relative merits of each approach and the potential for applying 'parameter design' to chemical unit design are discussed. In summary, response surface methodology combined with Monte Carlo simulation was found to be very efficient. Error transmission analysis did not perform well because of the inaccuracy of the first-order approximation adopted in the transmitted-variation function. The use of the Taguchi method was not always justified, owing to its universal logarithmic transformation and its use of marginal means. Sequential elimination of levels was found to be generally unreliable. (Abstract shortened by UMI.)
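The Monte Carlo side of the approach this abstract favors can be sketched with a toy stand-in for the reactor: a hypothetical quality model whose sensitivity to a disturbed feed concentration depends on the chosen operating temperature, with Monte Carlo used to pick the setting that minimizes output variance. The model, the candidate temperatures and the disturbance distribution are all invented for illustration.

```python
import random
import statistics

def product_quality(temp, feed_conc):
    """Hypothetical process model (stand-in for a non-isothermal CSTR simulation).
    The (temp - 350)^2 * feed_conc term makes sensitivity to the feed
    disturbance depend on the chosen temperature."""
    return (100.0 - 0.05 * (temp - 350.0) ** 2 * feed_conc
            - 20.0 * (feed_conc - 1.0) ** 2)

def robust_setting(candidates, n_samples=2000, seed=0):
    """'Parameter design' by Monte Carlo: for each candidate setting, propagate
    the feed-concentration disturbance N(1.0, 0.05) through the model and keep
    the setting with the smallest output variance."""
    rng = random.Random(seed)
    best_temp, best_var = None, float("inf")
    for temp in candidates:
        draws = [product_quality(temp, rng.gauss(1.0, 0.05))
                 for _ in range(n_samples)]
        v = statistics.variance(draws)
        if v < best_var:
            best_temp, best_var = temp, v
    return best_temp, best_var

best_temp, best_var = robust_setting([330.0, 340.0, 350.0, 360.0, 370.0])
```

The variance-minimizing choice here is 350, where the disturbance-coupled term vanishes; in the study the same role is played by the CSTR design variables, with a fitted response surface standing in for the full simulation to cut the number of Monte Carlo runs.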
17

Rudraraju, Prasad V. "Motion parameter evaluation, camera calibration and surface code generation using computer vision." Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1182460668.

Full text
18

Fisch, Barbara. "MR-tomographische Evaluation hämo- und hydrodynamischer Parameter bei Patienten mit Multipler Sklerose." Diss., Ludwig-Maximilians-Universität München, 2014. http://nbn-resolving.de/urn:nbn:de:bvb:19-177466.

Full text
19

Sherrell, Ian Michael. "Parameter Evaluation and Modeling of a Fine Coal Dewatering Screen-Bowl Centrifuge." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/32562.

Full text
Abstract:
A vast majority of coal and mineral cleaning and upgrading processes involve the addition of water. The water allows the movement of particles throughout the processing plant and the upgrading of the material. When the process is complete the finished product must be dewatered. This is due to storage concerns, in which the water takes up a majority of the space, and high transportation costs, in which no compensation is obtained from the buyer for the shipment of the liquid. Dewatering is accomplished by many devices, with the two most common pieces of equipment being the screen-bowl centrifuge and disk filter. This thesis tests and compares the effect of reagents on dewatering using the screen-bowl centrifuge and disk filter. Coal was obtained from the Upper Banner, Pittsburgh No. 8, Taggart, and Dorchester seams, crushed and ground to the desired size, and run through the dewatering circuits. The results showed that the moisture content of the product can be greatly reduced in the disk filter while being only slightly reduced in the screen-bowl centrifuge. It was also shown that the recovery can be slightly increased in the screen-bowl centrifuge. Overall, with the addition of reagents, the disk filter outperformed the centrifuge in both recovery and moisture content. A model was also developed for the screen-bowl centrifuge. The results from the screen-bowl tests helped in the development of this model. This model can be used to predict the moisture content of the product, the recovery, particle size distribution of the effluent and particle size distribution of the product. The model also predicted how the product moisture and recovery were affected by changing the feed flow rate, feed percent solids, centrifuge speed, and particle size distribution.
Master of Science
20

Chandra, Sanjay. "Evaluation of oxygen uptake rate as an activated sludge process control parameter." Thesis, Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/101173.

Full text
Abstract:
A debate currently exists concerning whether or not oxygen uptake rate is a valid control parameter for monitoring the activated sludge process. A laboratory study was conducted to attempt to shed light on the controversy. Two bench-scale reactors were operated at steady state and under shock load. Oxygen uptake rate (OUR) was measured with the BOD bottle technique and with an on-line respirometer. The reliability of the results obtained from the BOD bottle technique was also of interest. No relationship could be deduced between effluent quality and oxygen uptake rate thereby suggesting that the latter would not be useful as a control parameter. As was concluded from the shock load data, the oxygen uptake rate varies very inconsistently at high organic loadings. It was found that the BOD bottle technique completely failed at very high organic loadings and gave meaningless results. The on-line respirometer, in spite of its high sensitivity, gave more realistic and consistent results.
M.S.
21

Cagidiaco, Edoardo Ferrari. "Periodontal evaluation of restorative and prosthodontic margins." Doctoral thesis, Università di Siena, 2021. http://hdl.handle.net/11365/1126080.

Full text
Abstract:
Prosthodontic and periodontal correlation on teeth. In daily dental practice, three fundamental clinical parameters are used to establish the clinical success of prosthodontic treatment: function, aesthetics and longevity of the restorations. But from a scientific point of view, how do we rate the success of restorations? When analyzing the existing literature, it can be noted that many authors focus on the precision of the margin, pursuing a small gap between abutment and crown to achieve clinical success. Christensen et al. [1] and McLean & von Fraunhofer [2] investigated the clinical acceptability of margins by asking practitioners to measure the gap between abutment and crown: it was shown that a clinician using a sharp explorer cannot detect a gap smaller than about 120 microns. Such a gap may not ensure a sufficient seal between crown and abutment, with consequent leakage at the margins. This finding is not in agreement with in vitro data indicating that an acceptable marginal gap is below 50 microns [3]: Sorensen [3] reported that small defects of 0.050 mm or less were associated with significantly less fluid flow and bone loss than larger defects. Martignoni [4,5] reported that definitions of a clinically acceptable margin vary, and that there is no definite threshold for the maximum marginal discrepancy that is clinically acceptable. Many authors accept the criteria established by McLean and von Fraunhofer [2], who completed a 5-year examination of 1000 restorations and concluded that 120 microns should be considered the maximum marginal gap. The adaptation, precision and quality of the restoration margin can be of greater significance for gingival health than the position of the margin [6].
According to Lang et al. [7], following the placement of restorations with overhanging margins, a subgingival flora was detected that closely resembled that of chronic periodontitis, whereas after the placement of restorations with clinically perfect margins, a microflora characteristic of gingival health or initial gingivitis was observed. In patients with suitable oral hygiene, tooth-supported and implant-supported crowns with intra-sulcular margins were not predisposed to unfavorable gingival and microbial responses [8]. Even among patients receiving regular preventive dental care, subgingival margins are associated with unfavorable periodontal reactions [9]. Ercoli and Caton [10], in a systematic review, describe how placement of restoration margins within the junctional epithelium and supracrestal connective-tissue attachment can be associated with gingival inflammation and, potentially, recession or periodontal pockets. The presence of fixed-prosthesis finish lines within the gingival sulcus, or the wearing of partial removable dental prostheses, does not cause gingivitis if patients comply with self-performed plaque control and periodic maintenance. Procedures adopted for the fabrication of dental restorations and fixed prostheses have the potential to cause traumatic loss of the periodontal supporting tissue. They concluded that restoration margins located within the gingival sulcus do not cause gingivitis if patients comply with self-performed plaque control and periodic maintenance. Tooth-supported and/or tooth-retained restorations and their design, fabrication, delivery and materials have often been associated with plaque retention and loss of attachment, and restoration margins placed within the junctional epithelium and supracrestal connective-tissue attachment can be associated with inflammation and, potentially, recession.
Factors related to the presence, design, fabrication, delivery and materials of tooth-supported prostheses seem to influence the periodontium, generally through a localized increase in plaque accumulation and, less often, through traumatic and allergic reactions to dental materials [10]. Jansson showed that the influence of a marginal overhang on pocket depth and radiographic attachment decreases with increasing loss of periodontal attachment in periodontitis-prone patients, and that the effect of a marginal overhang on pocket depth may act synergistically, potentiating the effect of poor oral hygiene [11]. Subgingival restorations whose apical borders are still located subgingivally after periodontal treatment should be regarded as a risk factor in the progression of periodontitis [12]. Consequently, supragingival placement of the restoration margin is recommended, especially in periodontitis-prone patients with insufficient plaque control [12]. Dental restorations may be a risk indicator for periodontal disease and tooth loss. Routine supportive periodontal therapy (SPT) was found to be associated with a decrease in the prevalence of deep probing pocket depths over time, and it is of the utmost importance in maintaining periodontal health, especially adjacent to teeth with restorations. Finally, these findings support the treatment of caries lesions and faulty restorations as part of a comprehensive cause-related therapy, followed by a regular maintenance program [13]. The relationship between dental restorations and periodontal status has been examined for some time: research has shown that overhanging dental restorations and subgingival margin placement play an important role in providing an ecological niche for periodontal pathogens [14].
Overhanging dental restorations are found primarily in Class II restorations, since access for interdental finishing, polishing and cleansing is often difficult in these areas, even for patients with good oral hygiene. Many studies have shown more periodontal attachment loss and inflammation around teeth with overhangs than around those without: the presence of overhangs may increase plaque formation [15-21] and shift the microbial composition from a healthy flora to one characteristic of periodontal disease [14]. The location of the gingival margin of a restoration is directly related to the health status of the adjacent periodontium [8]. Numerous studies [8,12,25] have shown that subgingival margins are associated with more plaque, more severe gingival inflammation and deeper periodontal pockets than supragingival ones. In a 26-year prospective cohort study, Schatzle et al. [25] followed middle-class Scandinavian men; gingival index and attachment level were compared between those who did and those who did not have restorative margins more than 1 mm from the gingival margin. After 10 years, the cumulative mean loss of attachment was 0.5 mm greater, a statistically significant difference, in the group with subgingival margins. At each examination during the 26 years of the study, the degree of inflammation in the gingival tissue adjacent to subgingival restorations was much greater than in the gingiva adjacent to supragingival margins. This was the first study to document a time sequence between the placement of subgingival margins and periodontal attachment loss, confirming that subgingival placement of margins is detrimental to gingival and periodontal health. Plaque at the apical margin of a subgingival restoration causes periodontal inflammation that may in turn destroy connective tissue and bone approximately 1-2 mm away from the inflamed area [14].
Determination of the distance between the restorative margin and the alveolar crest is often done with bitewing radiographs; however, it is important to remember that a radiograph is a 2-dimensional representation of 3-dimensional anatomy. Thus, clinical assessment and judgment are important adjuncts in determining if, and how much, bone should be removed to maintain adequate room for the supracrestal dentogingival connective tissue attachment14. Although the surface textures of restorative materials differ in their capacity to retain plaque26, all of them can be adequately maintained if they are correctly polished and accessible to patient care27; this includes the undersides of pontics. Composite resins are difficult to finish interproximally and may be more likely to show marginal defects than other materials28; as a result, they are more likely to harbor bacterial plaque29. Intra-subject comparisons of unilateral direct composite veneers showed a statistically significant increase in plaque and gingival indices adjacent to the composites 5-6 years after placement28. In addition, when a diastema is closed with composite, the restorations are often overcontoured in the cervical-interproximal area, leading to increased plaque retention28. As more plaque is retained, this can pose a significant problem for a patient with moderate to poor oral hygiene14.
For this reason, in the absence of more specific prosthodontic parameters to evaluate the integration of crowns into the periodontal environment, another way to determine the success and health of a restoration is to use periodontal parameters such as: PPD (Periodontal Probing Depth), the distance between the gingival margin and the bottom of the sulcus/pocket; REC (Recession), the apical migration of the gingival margin, measured as the distance between the gingival margin and the CEJ (Cemento-Enamel Junction); PI (Plaque Index), which records the presence of supragingival plaque; and BOP (Bleeding On Probing), the presence or absence of bleeding from tooth surfaces during probing. The aim of this thesis was to propose a clinical procedure to evaluate single-unit restorations and their relations with the periodontal tissues by means of a new clinical score: the FIT (Functional Index for Teeth). FIT is a novel index for the assessment of the prosthetic results of lithium disilicate crowns, based on seven restorative-periodontal parameters, that evaluates crowns placed on natural abutments and is intended to be a reliable and objective instrument for assessing single partial crown success and periodontal outcome as perceived by patients and dentists.
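The periodontal parameters defined above lend themselves to simple structured recording. As an illustration only (the actual FIT score and its seven parameters are defined in the thesis, not reproduced here; the class and field names below are hypothetical), a minimal sketch of capturing PPD, REC, PI and BOP per site and summarising them might look like:

```python
from dataclasses import dataclass

@dataclass
class SiteRecord:
    """One probed site around a restored tooth (values per the definitions above)."""
    ppd_mm: float      # Periodontal Probing Depth: gingival margin to pocket base
    rec_mm: float      # Recession: CEJ to gingival margin (0 if none)
    plaque: bool       # Plaque Index recorded as presence/absence of supragingival plaque
    bop: bool          # Bleeding On Probing

def summarise(sites):
    """Aggregate site measurements into the summaries commonly reported
    alongside such indices (mean PPD, %BOP, %plaque)."""
    n = len(sites)
    return {
        "mean_ppd_mm": sum(s.ppd_mm for s in sites) / n,
        "bop_percent": 100.0 * sum(s.bop for s in sites) / n,
        "plaque_percent": 100.0 * sum(s.plaque for s in sites) / n,
    }

sites = [SiteRecord(3.0, 0.0, False, False),
         SiteRecord(4.0, 1.0, True, True),
         SiteRecord(2.0, 0.0, False, False),
         SiteRecord(3.0, 0.5, True, False)]
print(summarise(sites))  # mean PPD 3.0 mm, 25% BOP, 50% plaque
```

Such per-site records could then feed whatever weighting a composite index prescribes.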
APA, Harvard, Vancouver, ISO, and other styles
22

Nassr, Husam, and Kurt Kosbar. "PERFORMANCE EVALUATION FOR DECISION-FEEDBACK EQUALIZER WITH PARAMETER SELECTION ON UNDERWATER ACOUSTIC COMMUNICATION." International Foundation for Telemetering, 2017. http://hdl.handle.net/10150/626999.

Full text
Abstract:
This paper investigates the effect of parameter selection for decision feedback equalization (DFE) on communication performance through a dispersive underwater acoustic wireless channel (UAWC). A DFE based on the minimum mean-square error criterion (MMSE-DFE) has been employed in the implementation for evaluation purposes. The output of the MMSE-DFE is input to the decoder to estimate the transmitted bit sequence. The main goal of this experimental simulation is to determine the best parameter selection, such that a reduction in the computational load is achieved without degrading the performance of the system; the computational complexity can be reduced by selecting an equalizer of proper length. System performance is tested for BPSK, QPSK, 8PSK and 16QAM modulation, and the simulation is carried out for Proakis channel A and a real underwater acoustic channel estimated during the SPACE08 measurements to verify the selection.
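The decision-feedback idea behind the equalizer above can be sketched in a few lines. This is an illustrative toy, not the paper's MMSE-DFE: a noiseless channel with hypothetical taps is assumed, and only the feedback (ISI-cancellation) stage is shown, with the number of feedback taps as the tunable "parameter selection" knob:

```python
def dfe_detect(received, postcursor, n_feedback):
    """Decision-feedback detection of BPSK symbols: subtract ISI reconstructed
    from past decisions, then slice. `postcursor` holds the channel's trailing
    taps; `n_feedback` taps of feedback are used (the length-selection knob)."""
    decisions = []
    for r in received:
        isi = sum(postcursor[k] * decisions[-1 - k]
                  for k in range(min(n_feedback, len(decisions))))
        decisions.append(1.0 if r - isi >= 0 else -1.0)
    return decisions

channel = [1.0, 0.5, 0.25]          # main tap plus two postcursor taps (hypothetical)
symbols = [1, -1, -1, 1, 1, -1, 1, 1]
# Simulate ISI: r[n] = sum_k h[k] * s[n-k] (no noise, for clarity)
rx = [sum(channel[k] * symbols[n - k] for k in range(len(channel)) if n - k >= 0)
      for n in range(len(symbols))]
print(dfe_detect(rx, channel[1:], n_feedback=2) == [float(s) for s in symbols])  # True
```

With fewer feedback taps than postcursor taps, residual ISI remains, which is exactly the complexity-versus-performance trade-off the abstract studies.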
APA, Harvard, Vancouver, ISO, and other styles
23

Browne, R. Edwin. "An evaluation of 'Coast Down Time' as a diagnostic parameter for condition monitoring." Thesis, Glasgow Caledonian University, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.486489.

Full text
Abstract:
The behaviour of rotating machinery during deceleration, once power to the drive is cut, is known as the Coast Down Phenomenon (CDP). The total time elapsed for the entire momentum due to sustained operation of the rotating machinery to dissipate is known as the Coast Down Time (CDT). The characteristic profile of the CDT (CDT-P) depends on the inertia of the machinery components, tribological behaviour and environmental effects such as fluid drag. The aim of the research is to investigate the potential and feasibility of using CDT as an effective tool for condition monitoring of machinery. A review of the current literature has given a clear understanding of the extent to which the CDP has been explored as a diagnostic parameter; however, no evidence of effective use of CDT in condition-based monitoring programmes can be found. The main focus of this research is on ascertaining the validity of Coast Down Time as a condition monitoring parameter for a horizontal rotor system. Extensive experimental analysis is conducted on a rotor system consisting of a journal bearing using different lubricants under various mechanical and operating conditions. An empirical formula is developed to determine the Coast Down Factor (CDF). Furthermore, the effect of rub on the motor has been studied with the aid of an electromagnetic clutch by isolating the rotor from the drive system. This research has shown a novel way to interpret the CDT-P for use as a diagnostic parameter in condition monitoring through the formulation of the Coast Down Factor. The CDF gives a simple but highly flexible tool for CDT analysis. CDT monitoring through the trend of the CDF can be used to analyse the performance of a journal bearing under given tribological conditions; to optimise the selection of lubrication oil; to evaluate and understand the tribological behaviour of the lubricant and the performance of a journal bearing under different oil pressures; and to diagnose mechanical degradation with simulated unbalance to the rotor system.
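The coast-down behaviour described above can be illustrated with a simplified first-order viscous-drag model, in which speed decays exponentially after power cut-off. This is an assumption for illustration; the thesis's empirical Coast Down Factor formula is not reproduced here, and all numbers are hypothetical:

```python
import math

def coast_down_time(w0, w1, t1, w_stop):
    """Assume first-order viscous deceleration w(t) = w0 * exp(-t / tau).
    Estimate tau from one later speed reading (w1 at time t1), then return
    the time for the speed to fall to w_stop (the 'coast down time')."""
    tau = t1 / math.log(w0 / w1)
    return tau * math.log(w0 / w_stop)

# Hypothetical rotor: slows from 3000 rpm to 1500 rpm in 20 s; time to reach 30 rpm:
cdt = coast_down_time(3000.0, 1500.0, 20.0, 30.0)
print(round(cdt, 1))  # 132.9
```

A shorter measured CDT than this drag-only prediction would indicate extra dissipation (e.g. bearing degradation or rub), which is the diagnostic idea behind trending a coast-down parameter.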
APA, Harvard, Vancouver, ISO, and other styles
24

Mamishev, Alexander V. 1974. "Interdigital dielectrometry sensor design and parameter estimation algorithms for non-destructive materials evaluation." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/16729.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
Vita.
Includes bibliographical references (v. 2, p. 677-704).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
The major objective of this thesis is to develop instrumentation and parameter estimation algorithms for nondestructive measurement of non-homogeneous material property profiles with fringing electric field dielectrometry sensors. The instrumentation includes interdigital sensors and sensor arrays, other types of fringing field sensors, electronic circuit boards for measurement of sensor signals, and mechanical setups for specific applications. The parameter estimation algorithms require solving forward and inverse problems of material property estimation. The forward problem implies calculation of the sensor admittance matrix as a function of geometry and material properties. The inverse problem, inherently more difficult than the forward problem, implies estimation of unknown geometry and material properties based on known properties and measured entries of the sensor admittance matrix. The developed instrumentation and algorithms are applied to practical problems which include monitoring of moisture dynamics in transformer pressboard, evaluation of the saturation state of chemical garments, detection of flaws in fiberglass flywheels, and detection of buried metal and plastic landmines. The design strategy and fabrication practices are described for multiple penetration depth interdigital sensors designed for measurement of conductivity and permittivity of electrical insulation of power transformers. An extensive overview of interdigital electrode technology in other fields is given. A number of disturbance parameters that affect interdigital dielectrometry measurements are characterized and either eliminated or accounted for using empirical, analytical, and numerical simulation approaches. A new type of fringing field sensor has been developed to improve the cross-correlation between different fringing field patterns. In most cases, the forward problem has been solved using the commercial finite-element software "Maxwell" by Ansoft Corp. Other methods, such as a continuum model, analytical expressions, and direct calibration, were used for comparison and to achieve greater accuracy in simple cases. A family of algorithms for solving inverse problems has been developed to address different applications.
No single algorithm provides the most accurate and reliable results in all cases. The most appropriate algorithm for each given application should be chosen on the basis of required speed and accuracy, number of known and unknown parameters, type of distribution of material properties, contact conditions between the sensor head and the material, and the specific type of sensor selected for the task. Major types of property estimation algorithms include direct calibration; use of empirically and numerically determined approximations; use of pre-computed lookup tables; iterative guesses at dielectric and geometry properties while solving the minimization problem of matching theoretical and measured entries of the sensor admittance matrix; direct mapping between the sensor output and the physical variable of interest (not necessarily a dielectric property); pattern recognition in the dielectric spectroscopy signature; and search for signal characteristics in the sensor output due to material property variations. Each of these major types of algorithms has been implemented in one or more forms to achieve the desired results for each specific problem. One of the algorithmic approaches has been generalized to other types of problems by implementing it as a generic optimization tool. Moisture dynamics in transformer pressboard have been studied extensively with numerical simulations of the forward and inverse problems. The developed algorithm has been applied to experimental data obtained by another graduate student, Yanqing Du, in a concurrent Ph.D. thesis. It has been demonstrated that a three-wavelength interdigital sensor can be used to measure time-dependent, continuous, smoothly varying moisture profiles in oil-impregnated power transformer pressboard. Ultimately, this technology is capable of preventing partial discharges and transformer failures due to flow electrification and static charging of the oil-pressboard interface. Preliminary measurements also demonstrated adequate sensitivity and selectivity of fringing field sensors for the detection of flaws in fiberglass flywheels and the detection and discrimination of buried plastic and metal landmines. The saturation state of chemical protective garments has been determined for relatively high levels of saturation. Additional work is needed to improve sensitivity in the low saturation region.
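One of the inverse-problem strategies listed in the abstract, inversion via a pre-computed lookup table, can be sketched as follows. The linear forward model and all values here are hypothetical stand-ins for the real finite-element forward solution:

```python
def forward_model(eps_r):
    """Hypothetical forward model: sensor capacitance (pF) as a monotone
    function of the material's relative permittivity (stands in for the
    expensive finite-element solution of the real forward problem)."""
    return 2.0 + 0.8 * eps_r

# Pre-computed lookup table (the costly forward solves, done once offline).
table = [(eps, forward_model(eps)) for eps in range(1, 11)]

def invert(c_measured):
    """Estimate permittivity from a measured capacitance by linear
    interpolation in the pre-computed table; iterative minimisation of the
    forward/measurement mismatch is the alternative strategy."""
    for (e0, c0), (e1, c1) in zip(table, table[1:]):
        if c0 <= c_measured <= c1:
            return e0 + (e1 - e0) * (c_measured - c0) / (c1 - c0)
    raise ValueError("measurement outside table range")

print(round(invert(4.8), 6))  # 3.5, since forward_model(3.5) == 4.8
```

The table-based approach trades pre-computation and memory for fast online estimation, which is why the abstract lists it alongside iterative minimisation.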
by Alexander V. Mamishev.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
25

Weichert, Miriam [Verfasser], and Christian [Akademischer Betreuer] Stremmel. "Praeoperative Parameter und perioperativer Verlauf - zur funktionellen Evaluation vor operativer Therapie des Lungenkarzinoms." Freiburg : Universität, 2014. http://d-nb.info/1123479321/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

McKinnon, Douglas John Electrical Engineering & Telecommunications Faculty of Engineering UNSW. "Novel efficiency evaluation methods and analysis for three-phase induction machines." Awarded by:University of New South Wales. Electrical Engineering and Telecommunications, 2005. http://handle.unsw.edu.au/1959.4/21869.

Full text
Abstract:
This thesis describes new methods of evaluating the efficiency of three-phase induction machines using synthetic loading. Synthetic loading causes the induction machine to draw full-load current without the need to connect a mechanical load to the machine's drive shaft. The synthetic loading methods cause the machine to periodically accelerate and decelerate, producing an alternating motor-generator action. This action causes the machine, on average over each synthetic loading cycle, to operate at rated rms current, rated rms voltage and full-load speed, thereby producing rated copper losses, iron loss and friction and windage loss. The excitation voltages are supplied from a PWM inverter with a large capacity DC bus capable of supplying rated rms voltage. The synthetic loading methods of efficiency evaluation are verified in terms of the individual losses in the machine by using a new dynamic model that accounts for iron loss and all parameter variations. The losses are compared with the steady-state loss distribution determined using very accurate induction machine parameters. The parameters were identified using a run-up-to-speed test at rated voltage and the locked rotor and synchronous speed tests conducted with a variable voltage supply. The latter tests were used to synthesise the variations in stator leakage reactance, magnetising reactance and the equivalent iron loss resistance over the induction machine's speed range. The run-up-to-speed test was used to determine the rotor resistance and leakage reactance variations over the same speed range. The test method results showed for the first time that the rotor leakage reactance varied in the same manner as the stator leakage and magnetising reactances with respect to current. When all parameter variations are taken into account there is good agreement between theoretical and measured results for the synthetic loading methods. 
The synthetic loading methods are applied to three-phase induction machines with both single- and double-cage rotors to assess the effect of rotor parameter variations in the method. Various excitation waveforms for each method were used and the measured and modelled efficiencies compared to conventional efficiency test results. The results verify that it is possible to accurately evaluate the efficiency of three-phase induction machines using synthetic loading.
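The loss-segregation logic behind synthetic loading, reproducing rated copper, iron, and friction and windage losses, feeds directly into an efficiency figure. A minimal sketch with hypothetical loss figures (not values from the thesis):

```python
def efficiency(p_out_w, copper_w, iron_w, friction_windage_w, stray_w=0.0):
    """Machine efficiency from segregated losses: output power divided by
    output power plus the loss components that synthetic loading is designed
    to reproduce at rated conditions."""
    losses = copper_w + iron_w + friction_windage_w + stray_w
    return p_out_w / (p_out_w + losses)

# Hypothetical loss budget for a 7.5 kW induction machine:
eta = efficiency(7500.0, copper_w=520.0, iron_w=180.0,
                 friction_windage_w=90.0, stray_w=60.0)
print(f"{eta:.3f}")  # 0.898
```

The point of synthetic loading is that these rated losses are produced without a shaft-coupled mechanical load, so a calculation like this can be made from electrical measurements alone.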
APA, Harvard, Vancouver, ISO, and other styles
27

Yamada, Yoshiyuki, Hiroshi Hasegawa, and Ken-ichi Sato. "Evaluation of Network Parameter Dependencies of Hierarchical Optical Path Network Cost Considering Waveband Protection." IEEE, 2008. http://hdl.handle.net/2237/12093.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Code, James Edward. "Experimental evaluation of solvents with a biological substrate based on solubility parameter theory: solubility parameter determinations by computational chemistry using a rational design process." Diss., UMK access, 2004.

Find full text
Abstract:
Thesis (Ph. D.)--School of Dentistry and Dept. of Chemistry. University of Missouri--Kansas City, 2004.
"A dissertation in oral biology and chemistry." Advisor: J. David Eick. Typescript. Vita. Description based on contents viewed Feb. 23, 2006; title from "catalog record" of the print edition. Includes bibliographical references (leaves 137-145). Online version of the print edition.
APA, Harvard, Vancouver, ISO, and other styles
29

Koutris, Andreas. "Testing for Structural Change: Evaluation of the Current Methodologies, a Misspecification Testing Perspective and Applications." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/26716.

Full text
Abstract:
The unit root revolution in time series modeling has created substantial interest in non-stationarity and its implications for empirical modeling. Beyond the original interest in trend vs. difference non-stationarity, there has been renewed interest in testing and modeling structural breaks. The focus of my dissertation is on testing for departures from stationarity in a broader framework where unit root, mean trend and structural break non-stationarity constitute only a small subset of the possible forms of non-stationarity. In the first chapter the most popular testing procedures for the stationarity assumption are evaluated. In view of the fact that general forms of non-stationarity render each observation unique, I develop a testing procedure using a resampling scheme based on a Maximum Entropy replication algorithm. The proposed misspecification testing procedure relies on resampling techniques to enhance the informational content of the observed data, in an attempt to capture heterogeneity 'locally' using rolling window estimators of the primary moments of the stochastic process. This provides an effective way to enhance the sample information in order to assess the presence of departures from stationarity. Depending on the sample size, the method utilizes overlapping or non-overlapping window estimates. The effectiveness of the testing procedure is assessed using extensive Monte Carlo simulations. The use of rolling non-overlapping windows improves both the size and power of the test. In particular, the new test has empirical size very close to the nominal and very high power for a variety of departures from stationarity. The proposed procedure is then applied to seven macroeconomic series in the fourth chapter. Finally, the optimal choice of orthogonal polynomials, for hypothesis testing, is investigated in the last chapter.
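The idea of capturing heterogeneity 'locally' with rolling window estimators of the primary moments can be sketched with non-overlapping windows. This is a deliberately crude illustration of the window-moment idea, not the dissertation's resampling-based misspecification test; the tolerance rule is hypothetical:

```python
def window_moments(x, w):
    """Mean and variance over non-overlapping windows of length w."""
    out = []
    for i in range(0, len(x) - w + 1, w):
        block = x[i:i + w]
        m = sum(block) / w
        v = sum((b - m) ** 2 for b in block) / w
        out.append((m, v))
    return out

def looks_heterogeneous(x, w, tol=1.0):
    """Crude first-moment heterogeneity flag: the spread of the window means
    is compared against the typical within-window standard deviation."""
    moments = window_moments(x, w)
    means = [m for m, _ in moments]
    sd = max(v for _, v in moments) ** 0.5 or 1.0
    return (max(means) - min(means)) > tol * sd

level = [1.0, 2.0, 1.0, 2.0] * 5      # mean-stationary series
trend = [0.1 * t for t in range(20)]  # series with a trending mean
print(looks_heterogeneous(level, 4), looks_heterogeneous(trend, 4))  # False True
```

A formal test would replace the ad hoc tolerance with a null distribution, obtained in the dissertation via Maximum Entropy resampling.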
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
30

Garcia, Bartels Natalie [Verfasser]. "Moderne Technologien zur Evaluation trichologischer Parameter : ihr Stellenwert in Diagnostik und Therapie / Natalie Garcia Bartels." Berlin : Medizinische Fakultät Charité - Universitätsmedizin Berlin, 2012. http://d-nb.info/1031421343/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Badibanga, Remy Kalombo. "Evaluation of the fatigue resistance of power line conductors function of the H/w parameter." Repositório Institucional da UnB, 2017. http://repositorio.unb.br/handle/10482/31789.

Full text
Abstract:
Thesis (doctorate)—Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2017.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
Since 1960, a panel instituted by CIGRÉ (Conseil International des Grands Réseaux Électriques) proposed the use of the Every Day Stress (EDS) for overhead conductor design. But field investigations drew attention to the occurrence of fatigue damage of conductors even though the recommended EDS were adhered. More recently, then, CIGRÉ proposed the use of H/w (The ratio between the horizontal tensile load, H, and the conductor weight per unit length, w) parameter for design purpose with the goal of generalising the fatigue behaviour of overhead conductors. The objective of this work, then, was to conduct an experimental study to evaluate the effects of the catenary parameter (H/w) on the fatigue life of overhead conductors. Comparison between the generated S-N curves proved that the ACSR Tern conductor could sustain a significantly higher number of cycles before fatigue failure than the AAAC 900 MCM for different values of H/w. Meanwhile, the AAC Orchid presented a fatigue life which is located between the two conductors cited above and presents a similar fatigue life as the ACAR 750 MCM. The experimental data from static tests agreed quite well with the estimated theoretical values. Failure analysis of the broken samples (wires) revealed not only that cracks initiated in the fretted areas of the aluminium wires, but also that their morphology presented clear evidence of fatigue failure, such as beach marks and secondary cracks. Additionally, a failure analysis was performed, not only in terms of the layer in which the wires broke and the type of fracture surface but also according to the position from the clamp mouth where these failures occurred inside the suspension clamp. Data presented in this study could be used in various non-linear finite-element programs in order to better understand the mechanical behaviour of conductors. Furthermore, the generated information could be helpful for planning the maintenance of power lines. 
Based on the evaluation of the parameter H/w presented in this work, it emerged that the H/w parameter represents a clear advance in the design of transmission lines against fatigue due to aeolian vibrations when compared to the previously recommended Every Day Stress (EDS). This is supported by the fact that, when using the H/w parameter, the static data are consistent with those predicted using appropriate equations.
APA, Harvard, Vancouver, ISO, and other styles
32

Pradhan, Shashank. "Dynamic soil-structure interaction using disturbed state concept and artificial neural networks for parameter evaluation." Diss., The University of Arizona, 2002. http://hdl.handle.net/10150/289773.

Full text
Abstract:
Interaction between the superstructure and the foundation depends on the behavior of the soil supporting the foundation. To study interfaces, it is necessary to characterize the behavior at the interface, model the constitutive relationships mathematically, and incorporate the model, together with the governing equations of mechanics, into numerical procedures such as the finite element method. Such an approach can then be used for solving complex problems that involve dynamic loading, nonlinear material behavior, and the presence of water, leading to saturated interfaces. In this dissertation, a general model, called the Disturbed State Concept (DSC) constitutive model, has been developed to model the saturated Ottawa sand-concrete interface and saturated Nevada sand. In the DSC, the material is assumed to transform continuously from the relatively intact state to the fully adjusted state under loading; hence the observed response of the material is expressed in terms of the responses of the relatively intact and fully adjusted states. The DSC model is a unified approach and allows for elastic and plastic strains, damage, and softening and stiffening. The model parameters for the saturated Ottawa sand-concrete interface and saturated Nevada sand are evaluated using data from laboratory tests and are used for the verification of the DSC model. The model predictions showed satisfactory correlation with the test results. In this dissertation, a new program based on the concept of neural computing is also developed to facilitate determination of interface parameters when no test data are available. The back-propagation training algorithm with bias nodes is used to train the network. The program is developed in FORTRAN using Microsoft Developer Studio; FORTRAN was selected for the Biased Artificial Neural Network (BANN) simulator because of its proficiency in number-crunching operations, which are the core requirement of the ANN.
A nonlinear dynamic finite element program (DSC-DYN2D) based on the DSC model is used to solve two problems, a centrifuge test and an axially loaded pile involving interface behavior. Overall, it can be stated that the DSC model allows realistic simulation of complex dynamic soil-structure interaction problems, and is capable of characterizing behavior of saturated interfaces involving liquefaction under dynamic and earthquake loading.
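The back-propagation-with-bias-nodes step mentioned above can be sketched for a single sigmoid unit, where the bias behaves like a weight on a constant +1 input. This is a one-neuron, one-step illustration with hypothetical numbers, not the BANN simulator:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss_and_grads(w, b, x, target):
    """Forward pass, squared-error loss, and back-propagated gradients for a
    single sigmoid unit with a bias node."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    y = sigmoid(z)
    err = y - target
    delta = err * y * (1.0 - y)   # dLoss/dz for loss = 0.5 * err**2
    return 0.5 * err ** 2, [delta * xi for xi in x], delta  # loss, grad_w, grad_b

w, b, x, t, lr = [0.5, -0.3], 0.1, [1.0, 2.0], 1.0, 0.5
before, gw, gb = loss_and_grads(w, b, x, t)
w = [wi - lr * g for wi, g in zip(w, gw)]  # gradient-descent update of weights
b -= lr * gb                               # ... and of the bias node
after, _, _ = loss_and_grads(w, b, x, t)
print(after < before)  # True: one step reduces the loss
```

A full network repeats this delta computation layer by layer, propagating the error backwards through the hidden units.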
APA, Harvard, Vancouver, ISO, and other styles
33

YU, JINSONG. "Development of Microfabricated Electrochemical Sensors for Environmental Parameter Measurements Applicable to Corrosion Evaluation and Gaseous Oxygen Detection." Case Western Reserve University School of Graduate Studies / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=case1206981091.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Haaf, Philip. "In-vitro-Evaluation biologischer Herzklappenprothesen in einem pulsativen Strömungsmodell anhand optischer und objektiver Parameter der Hämodynamik /." Inhaltsverzeichnis, 2007. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=016969653&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Smal, Ruan. "Evaluation of the Catchment Parameter (CAPA) and Midgley and Pitman (MIPI) empirical design flood estimation methods." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/71809.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: The devastating effects floods have on both the social and economic level make effective flood risk management an essential part of rural and urban development. A major part of effective flood risk management is the application of reliable design flood estimation methods. Research over the years has illustrated that current design flood estimation methods as a norm show large discrepancies, which can mainly be attributed to the fact that these methods are outdated (Smithers, 2007). The research presented focused on the evaluation and updating of the Midgley and Pitman (MIPI) and the Catchment Parameter (CAPA or McPherson) empirical design flood estimation methods. The evaluation was done by comparing design floods estimated by each method with more reliable probabilistic design floods derived from historical flow records. Flow gauging stations were selected as drainage data points based on the availability of flow data and catchment characteristics; a selection criterion was developed, resulting in 53 gauging stations. The Log Normal (LN) and Log Pearson Type III (LP III) distributions were used to derive the probabilistic floods for each gauging station. The flow gauging stations were used to delineate catchments and to quantify catchment characteristics using Geographic Information Systems (GIS) software and its associated applications. The two methods were approximated by means of derived formulas instead of evaluating and updating the two methods from first principles; this was done as a result of the constraints brought about by both time and the attainment of the relevant literature. The formulae were derived by plotting method inputs and results in graphs, fitting a trendline through the points and deriving a formula best describing the trendline. The derived formulae and the catchment characteristics were used to estimate the design floods for each method.
A comparison was then made between the design flood results of the two methods and the probabilistic design floods. The results of these comparisons were used to derive correction factors which could potentially increase the reliability of the two methods. The effectiveness of any updating would be the degree to which the reliability of a method could be increased. It was proven that the correction factors did decrease the difference between the assumed more reliable probabilistic design floods and the methods' estimates. However, the increase in reliability of the methods through the use of the recommended correction factors is questionable due to factors such as the reliability of the flow data as well as the methods which had to be used to derive the correction factors.
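The probabilistic design-flood step can be sketched for the Log Normal case, one of the two distributions named above. The flow record is hypothetical, and the LP III fit used in the study requires a skew-dependent frequency factor that is omitted here:

```python
import math
from statistics import NormalDist, fmean, stdev

def design_flood_lognormal(annual_maxima, return_period_years):
    """Fit a Log-Normal distribution to annual maximum flows and return the
    flood quantile for the given return period:
    Q_T = exp(mu + z_T * sigma), with mu, sigma fitted on the log-flows."""
    logs = [math.log(q) for q in annual_maxima]
    z = NormalDist().inv_cdf(1.0 - 1.0 / return_period_years)
    return math.exp(fmean(logs) + z * stdev(logs))

# Hypothetical annual maxima (m^3/s) at one gauging station:
record = [120.0, 85.0, 210.0, 150.0, 95.0, 300.0, 180.0, 140.0, 110.0, 250.0]
q50 = design_flood_lognormal(record, 50)
print(q50 > max(record))  # the 1-in-50-year estimate exceeds the observed record
```

Quantiles like this, computed per gauging station, are the "more reliable probabilistic design floods" against which the empirical methods were compared.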
AFRIKAANSE OPSOMMING (translated from Afrikaans): The devastating consequences of floods in both the economic and social spheres emphasise the importance of effective flood risk management for development purposes. A very important part of effective flood risk management is the use of reliable design flood methods. Research over the last few years has highlighted the shortcomings of these methods, which can mostly be attributed to the methods being outdated. This research focused on the evaluation and possible updating of the Midley and Pitman (MIPI) and the Catchment Parameter (CAPA or McPherson) empirical design flood methods. The evaluation was done by comparing the design floods calculated by the two methods with the accepted, more reliable probabilistic design floods determined by means of statistical analyses. Flow gauging stations were chosen as data points because of the availability of flow data and catchment characteristics. A selection criterion was developed, from which 53 gauging stations were chosen. The Log-Normal (LN) and Log Pearson Type III (LP III) distributions were then used to calculate the probabilistic design floods for each gauging station. The positions of the gauging stations were further used to define catchments and to calculate catchment characteristics. Geographic information systems (GIS) were used for this purpose instead of the original manual methods. The two methods were approached through the use of derived formulas rather than a first-principles approach. This was done because of the limitations imposed both by time and by the availability of the relevant literature dealing with the development of the two methods. The formulas were obtained by plotting both inputs and results on graphs, fitting trend lines, and deriving the formulas that best describe these trend lines.
The derived formulas, together with the catchment characteristics, were then used to determine the design floods at each gauging station for both methods. The results of the two methods were then compared with the probabilistic design floods. The results of this comparison were used to derive correction factors that could potentially increase the reliability of the two methods. The effectiveness of any updating would be the degree to which the reliability of a method could be increased. In the thesis it was shown that the correction factors did reduce the difference between both methods' estimates and the accepted, more reliable probabilistic design floods. However, the increase in the reliability of the methods through the use of the recommended correction factors is questionable because of factors such as the reliability of the flow data itself as well as the methodology followed to derive the correction factors.
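The correction-factor approach summarised in this abstract can be illustrated with a minimal, hypothetical sketch (the station values and the ratio-of-estimates definition are illustrative assumptions, not figures from the thesis):

```python
def correction_factor(probabilistic_flood, empirical_flood):
    """Ratio of the (assumed more reliable) probabilistic design flood
    to an empirical method's estimate, per gauging station."""
    return probabilistic_flood / empirical_flood

# Hypothetical design floods (m^3/s) at three gauging stations
probabilistic = [420.0, 310.0, 150.0]
empirical = [500.0, 250.0, 180.0]

factors = [correction_factor(p, e) for p, e in zip(probabilistic, empirical)]
mean_factor = sum(factors) / len(factors)

# Apply the mean factor to correct a new empirical estimate
corrected = 200.0 * mean_factor
```

As the abstract cautions, the usefulness of such a factor hinges on the reliability of the flow data used to derive it.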
APA, Harvard, Vancouver, ISO, and other styles
36

Wagner, Timothy Paul. "The Mechanical Design of a Suspension Parameter Identification and Evaluation Rig (SPIdER) for Wheeled Military Vehicles." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1322502477.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Dong, Wei. "Analysis and Evaluation of Soft-switching Inverter Techniques in Electric Vehicle Applications." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/28859.

Full text
Abstract:
This dissertation presents a systematic analysis and critical assessment of AC-side soft-switching inverters in electric vehicle (EV) applications. Although numerous soft-switching inverter techniques have been claimed to improve inverter performance compared with the conventional hard-switching inverter, there has been a lack of comprehensive investigation analyzing and evaluating the performance of soft-switching inverters. Starting with an efficiency comparison of a variety of soft-switching inverters using analytical calculation, the dissertation first reveals the effects of the auxiliary circuit's operation and control on loss reduction. Three types of soft-switching inverters realizing zero-voltage-transition (ZVT) or zero-current-transition (ZCT) operation are identified as achieving high-efficiency operation. Then one hard-switching inverter and the chosen soft-switching inverters are designed and implemented at a 55 kW power rating for a small-duty EV application. The experimental evaluations on the dynamometer provide an accurate description of the performance of the soft-switching inverters in terms of loss reduction, electromagnetic interference (EMI) noise, total harmonic distortion (THD), and control complexity. An analysis of the harmonic distortion caused by short pulses is presented, and a space vector modulation scheme is proposed to alleviate the effect. To effectively analyze the soft-switching inverters' performance, a simulation-based electrical modeling methodology is developed. Not only does it extend the EMI noise analysis to a higher frequency region, it also predicts the stress and the switching losses accurately. Three major modeling tasks are accomplished. First, to address the issues of the complicated existing scheme, a new parameter extraction scheme is proposed to establish a physics-based IGBT model.
Second, an impedance-based measurement method is developed to derive the internal parasitic parameters of the half-bridge modules. Third, finite element analysis software is used to develop a model for the laminated bus bar, including the coupling effects of different phases. Experimental results from single-leg operation and three-phase inverter operation verify the effectiveness of the presented systematic electrical modeling approach. With the analytical tools verified against the test results, the performance analysis is further extended to different power ratings and different bus voltage designs.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
38

Zhang, Jiaqi. "The Ability-weighted Bayesian Three-parameter Logistic Item Response Model for the Correction of Guessing." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563876972690161.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Schapperer, Johannes. "Das PTSD-Konzept bei Patienten mit automatischem implantierten Kardioverter-Defibrillator (AICD) psychophysiologische Parameter zur Evaluation eines Maladaptionssyndroms /." [S.l.] : [s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=972309217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Ternes, David Richard. "Building large sets of haptic icons : rhythm as a design parameter, and between-subjects MDS for evaluation." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/32268.

Full text
Abstract:
Haptic icons (brief, tactile stimuli with associated meanings) are a useful new way to convey information through the modality of touch, but they are difficult to create because of our limited understanding of what makes good haptic stimuli and how people will perceive them. This thesis aims to enlarge our capabilities to design and evaluate haptic icons, despite these problems. We seek to do this via two overlapping threads of research. In the first thread, we introduce the design parameter of rhythm as a means of extending the expressive capabilities of the simple tactile stimuli used in haptic icons. This allows us to create a set of expressive and perceptually distinguishable haptic stimuli almost an order of magnitude larger than any previously created. In the second thread of research, we tackle the problem of how to evaluate the perceptual characteristics of such a large set of stimuli with real people. We develop a means of evaluation that allows us to collect perceived-difference data by presenting each user with only a subset of the total stimulus collection, and then stitch together an aggregate picture of how the stimuli are perceived from data collected from overlapping subsets across different users. To advance these two threads of research, two user studies are run to examine how our haptic stimulus set is perceived and to validate our new method of gathering perceptual difference data. One study uses an established but cumbersome technique to study our stimulus set, and finds that haptic rhythms are perceived according to several different aspects of rhythm, and that users can consistently differentiate between haptic stimuli along these aspects.
The second study uses our newly developed data collection method to study the same stimulus set, and we find that the new technique produces results that show no significant difference from the established technique, while using a data collection task that is much quicker and less arduous for users to perform. We conclude by recommending the use of our new haptic stimulus set and evaluation technique as a powerful and viable means of extending the use of haptic icons to larger sets.
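The overlapping-subsets idea can be sketched as follows (a hypothetical toy, not the thesis's actual procedure): each user rates only some stimulus pairs, and an aggregate dissimilarity matrix suitable for MDS is stitched together by averaging over the users who rated each pair:

```python
def aggregate_dissimilarities(partial_ratings, n_items):
    """Stitch an aggregate dissimilarity matrix from overlapping
    per-user subsets: each user rates only some item pairs; each
    cell is averaged over all users who rated that pair."""
    sums = [[0.0] * n_items for _ in range(n_items)]
    counts = [[0] * n_items for _ in range(n_items)]
    for ratings in partial_ratings:          # one dict per user
        for (i, j), d in ratings.items():
            for a, b in ((i, j), (j, i)):    # keep the matrix symmetric
                sums[a][b] += d
                counts[a][b] += 1
    return [[sums[i][j] / counts[i][j] if counts[i][j] else None
             for j in range(n_items)] for i in range(n_items)]

# Two hypothetical users rating overlapping pair subsets (0-100 scale)
user1 = {(0, 1): 20.0, (1, 2): 60.0}
user2 = {(1, 2): 70.0, (0, 2): 90.0}
matrix = aggregate_dissimilarities([user1, user2], n_items=3)
```

Cells rated by no user remain `None`; with enough overlap across users, every off-diagonal cell is filled and the matrix can be fed to a standard MDS routine.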
Science, Faculty of
Computer Science, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
41

Tutzschke, Robin [Verfasser], JOHANNA [Akademischer Betreuer] HUEBSCHER, and Christoph [Akademischer Betreuer] Anders. "Evaluation der Effektivität der Neuen Rückenschule auf muskulär-physiologische Parameter / Robin Tutzschke. Gutachter: Johanna Hübscher ; Christoph Anders." Jena : Thüringer Universitäts- und Landesbibliothek Jena, 2015. http://d-nb.info/1069105082/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Armbrust, Florian [Verfasser]. "Perkutaner Pulmonalklappenersatz : In-vivo-Evaluation von selbstexpandierenden, pulmonalklappentragenden Nitinol-Stents anhand hämodynamischer Parameter und Makropathologie / Florian Armbrust." Kiel : Universitätsbibliothek Kiel, 2009. http://d-nb.info/1019868783/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Taparugssanagorn, A. (Attaphongse). "Evaluation of MIMO radio channel characteristics from TDM-switched MIMO channel sounding." Doctoral thesis, University of Oulu, 2007. http://urn.fi/urn:isbn:9789514286506.

Full text
Abstract:
The present dissertation deals with the evaluation of multiple-input multiple-output (MIMO) radio channel characteristics from time-division multiplexing (TDM)-switched MIMO channel sounding. The research can be divided into three main areas. First, the impact of phase noise in TDM-switched MIMO channel sounding on channel capacity is studied. Second, we focus on its impact on channel parameter estimation using the SAGE algorithm. In the last part, spatial correlation, channel eigenvalue distribution, and ergodic capacity in realistic environments are analyzed. The rationale behind the first two areas is that most advanced MIMO radio channel sounders employ the TDM technique, which suffers significantly from phase noise in the TX and RX phase-locked-loop (PLL) oscillators, causing measurement errors in the estimated channel capacity and parameters. We propose statistical models that reproduce the capacity estimates. The effects of the sounding mode (SM), the length L of the pseudo-random noise (PN) sequence of the sounding signal, and the system size are disclosed. The distinctive basis is to consider the impact of the actual phase noise in TDM-switched MIMO channel sounding, instead of assuming white Gaussian-type phase noise. In reality, the short-term phase noise component affecting one measurement cycle of a MIMO system plays an important role in the traditional estimators of the radio channel parameters and capacity. We show that the performance impairment is less than that predicted under the hypothesis of uncorrelated white Gaussian phase-noise samples. The difference is due to the non-vanishing correlation of phase noise within the measurement cycle. Two approaches to mitigating the impact of phase noise are proposed. The former is the simple and efficient sliding-averaging method, by which the signal-to-noise ratio (SNR) of the channel impulse response can be increased. The latter is the choice of SM and L, which is more thorough.
In the second part, two approaches to mitigating the impact of phase noise on channel parameter estimation using the SAGE algorithm are also discussed. Besides sliding averaging, which in general can increase the SNR, a new SAGE-based channel parameter estimation method is proposed, based on an improved signal model that accounts for the phase noise of the measurement device. Finally, the channel eigenvalue distribution and ergodic capacity based on complex hypergeometric functions, and their asymptotic characteristics, are analyzed. It is shown that the derived theoretical expressions closely approximate the simulated results of the measured finite-dimensional MIMO channels. The spatial correlation and the eigenvalue statistics in frequency-selective channels for single- and dual-polarized antennas are investigated. This knowledge is useful when different MIMO and beamforming techniques are applied.
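The sliding-averaging idea mentioned in this abstract can be sketched as follows; this is an illustrative toy (the 3-tap channel and noise level are invented), showing only that averaging successive snapshots of a quasi-static channel impulse response suppresses measurement noise:

```python
import random

def sliding_average(snapshots, window):
    """Average `window` successive channel-impulse-response snapshots.
    For a quasi-static channel the signal adds coherently while the
    noise power drops roughly as 1/window, raising the SNR."""
    out = []
    for i in range(len(snapshots) - window + 1):
        block = snapshots[i:i + window]
        taps = len(block[0])
        out.append([sum(s[t] for s in block) / window for t in range(taps)])
    return out

random.seed(0)
true_cir = [1.0, 0.5, 0.2]  # hypothetical 3-tap channel impulse response
snapshots = [[h + random.gauss(0, 0.1) for h in true_cir] for _ in range(50)]
averaged = sliding_average(snapshots, window=10)
```

The thesis's point, however, is that oscillator phase noise is correlated within a measurement cycle, so real gains deviate from this idealized additive-noise picture.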
APA, Harvard, Vancouver, ISO, and other styles
44

Gollvik, Martin. "Metamodeling for ultra-fast parameter estimation : Theory and evaluation of use in real-time diagnosis of diffuse liver disease." Thesis, Linköpings universitet, Institutionen för medicinsk teknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-108750.

Full text
Abstract:
Diffuse liver disease is a growing problem and a major cause of death worldwide. In the final stages, treatment often involves liver resection or transplantation, and in deciding what course of action is to be taken it is crucial to have a correct assessment of the function of the liver. The current “gold standard” for this assessment is to take a liver biopsy, which has a number of disadvantages. As an alternative, a method involving magnetic resonance imaging and mechanistic modeling of the liver has been developed at Linköping University. One of the obstacles this method must overcome in order to reach clinical implementation is the speed of the parameter estimation. In this project, the methodology of metamodeling is tested as a possible solution to this speed problem. Metamodeling involves making models of models using extensive model simulations and mathematical tools. With the use of regression methods, clustering algorithms, and optimization, different methods for parameter estimation have been evaluated. The results show that several, but not all, of the parameters could be accurately estimated using metamodeling and that metamodeling could be a highly useful tool when modeling biological systems. With further development, metamodeling could bring this non-invasive method for estimation of liver function a major step closer to application in the clinic.
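The metamodeling workflow described here can be caricatured in a few lines (the "slow model" and the grid are invented stand-ins, not the thesis's liver model): an expensive simulation is run offline over a parameter grid, and parameter estimation then queries only the cheap surrogate:

```python
def slow_model(k):
    """Stand-in for an expensive mechanistic simulation that maps a
    parameter k to an observable quantity."""
    return k ** 2 + 2.0 * k  # hypothetical observable as a function of k

# Offline phase: simulate once over a grid of parameter values
grid = [i * 0.1 for i in range(0, 51)]
table = [(k, slow_model(k)) for k in grid]

def estimate_parameter(observed):
    """Online phase: nearest-neighbour lookup in the precomputed table --
    a deliberately crude metamodel that avoids re-running slow_model."""
    return min(table, key=lambda kv: abs(kv[1] - observed))[0]

k_hat = estimate_parameter(slow_model(1.23))  # recovered to grid resolution
```

Real metamodels replace the lookup table with regression or clustering-based surrogates, as the abstract describes, but the offline/online split is the same.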
APA, Harvard, Vancouver, ISO, and other styles
45

Eyries, Pascal. "A dynamic distributed-parameter modeling approach for performance monitoring of oral drug delivery systems." Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0501103-161142.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: mass balance approach; bioavailability; drug delivery; dynamic modeling; partial differential equations; sensitivity analysis; dynamic simulations. Includes bibliographical references (p. 62-67).
APA, Harvard, Vancouver, ISO, and other styles
46

Docherty, Paul David. "Evaluation and Development of the Dynamic Insulin Sensitivity and Secretion Test for Numerous Clinical Applications." Thesis, University of Canterbury. Department of Mechanical Engineering, 2011. http://hdl.handle.net/10092/5525.

Full text
Abstract:
Given the high and increasing social, health and economic costs of type 2 diabetes, early diagnosis and prevention are critical. Insulin sensitivity and insulin secretion are important etiological factors of type 2 diabetes and are used to define an individual’s risk or progression to the disease state. The dynamic insulin sensitivity and secretion test (DISST) concurrently measures insulin sensitivity and insulin secretion. The protocol uses glucose and insulin boluses as stimulus, and the participant response is observed during a relatively short protocol via glucose, insulin and C-peptide assays. In this research, the DISST insulin sensitivity value was successfully validated against the gold standard euglycaemic clamp with a high correlation (R=0.82), a high insulin resistance diagnostic equivalence (ROC c-unit=0.96), and low bias (-10.6%). Endogenous insulin secretion metrics obtained via the DISST were able to describe clinically important distinctions in participant physiology that were not observed with euglycaemic clamp, and are not available via most established insulin sensitivity tests. The quick dynamic insulin sensitivity test (DISTq) is a major extension of the DISST that uses the same protocol but uses only glucose assays. As glucose assays are usually available immediately, the DISTq is capable of providing insulin sensitivity results immediately after the final blood sample, creating a real-time clinical diagnostic. The DISTq correlated well with the euglycaemic clamp (R=0.76), had a high insulin resistance diagnostic equivalence (ROC c-unit=0.89), and limited bias (0.7%). These DISTq results meet or exceed the outcomes of most validation studies from established insulin sensitivity tests such as the IVGTT, HOMA and OGTT metrics. Furthermore, none of the established insulin sensitivity tests are capable of providing immediate or real-time results. 
Finally, most of the established tests require considerably more intense clinical protocols than the DISST. A range of DISST-based tests that used the DISST protocol with varying assay regimens was generated to provide optimal compromises for any given clinical or screening application. Eight DISST-based variants were postulated and assessed via their ability to replicate the fully sampled DISST results. The variants that utilised insulin assays correlated well with the fully sampled DISST insulin sensitivity values (R~0.90), and the variants that assayed C-peptide produced endogenous insulin secretion metrics that correlated well with the fully sampled DISST values (R~0.90 to 1). By taking advantage of the common clinical protocol, tests in the spectrum could be used in a hierarchical system. For example, if a DISTq result is close to a diagnostic threshold, stored samples could be re-assayed for insulin, and the insulin sensitivity value could be ‘upgraded’ without an additional protocol. Equally, adding C-peptide assays would provide additional insulin secretion information. Importantly, one clinical procedure thus yields potentially several test results. In-silico investigations were undertaken to evaluate the efficacy of two additional, specific DISTq protocol variations and to observe the pharmacokinetics of anti-diabetic drugs. The first variation combined the boluses used in the DISTq and reduced the overall test time to 20 minutes with only two glucose assays. The results of this investigation implied that no significant degradation of insulin sensitivity values is caused by the change in protocol, and suggested that clinical trials of this protocol are warranted. The second protocol variant added glucose content to the insulin bolus to enable observation of first-phase insulin secretion concurrently with insulin sensitivity from glucose data alone.
Although concurrent observation was possible without simulated assay noise, when clinically realistic noise was added, model identifiability was lost. Hence, this protocol is not recommended for clinical investigation. Similar analyses are used to apply the overall dynamic, model-based clinical test approach to other therapeutics. In-silico analysis showed that the pharmacokinetics of insulin-sensitizer drugs were described well by the dynamic protocol; however, the pharmacokinetics of insulin-secretion-enhancement drugs were less observable. The overall thesis is supported by a common model parameter identification method. The iterative integral parameter identification method is a development of a single, simple integral method. The iterative method was compared to the established non-linear Levenberg-Marquardt parameter identification method. Although the iterative integral method is limited in the type of models it can be used with, it is more robust, more accurate, and less computationally intense than the Levenberg-Marquardt method. Finally, a novel, integral-based method for the evaluation of a-priori structural model identifiability is also presented. This method differs significantly from established, derivative-based approaches as it accounts for sample placement, measurement error, and probable system responses. Hence, it is capable of defining the true nature of identifiability, which is analogue, not binary as assumed by the established methods. The investigations described in this thesis were centred on model-based insulin sensitivity and secretion identification from dynamic insulin sensitivity tests, with a strong focus on maximising clinical efficacy. The low-intensity and informative DISST was successfully validated against the euglycaemic clamp. DISTq further reduces the clinical cost and burden, and was also validated against the euglycaemic clamp.
DISTq represents a new paradigm in the field of low-cost insulin sensitivity testing as it does not require insulin assays. A number of in-silico investigations were undertaken and provided insight regarding the suitability of the methods for clinical trials. Finally, two novel mathematical methods were developed to identify model parameters and assess their identifiability, respectively.
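The integral identification idea underlying the thesis's iterative method can be illustrated on the simplest possible model (a one-parameter linear decay; this toy is an assumption for illustration, not the DISST model itself): integrating dx/dt = -p·x replaces differentiation of noisy data with integration, and p follows from a single ratio:

```python
import math

def integral_identify(times, values):
    """Integral-based identification of p in dx/dt = -p*x:
    integrating over [t0, T] gives x(T) - x(0) = -p * int x dt,
    so p = (x(0) - x(T)) / int x dt (trapezoidal quadrature)."""
    integral = sum((times[i + 1] - times[i]) * (values[i] + values[i + 1]) / 2
                   for i in range(len(times) - 1))
    return (values[0] - values[-1]) / integral

p_true = 0.8
ts = [i * 0.1 for i in range(11)]         # samples over [0, 1]
xs = [math.exp(-p_true * t) for t in ts]  # noise-free decay data
p_hat = integral_identify(ts, xs)         # recovers p to quadrature accuracy
```

Because it integrates rather than differentiates the data, this formulation is robust to measurement noise, which is the property the thesis exploits and iterates on for multi-parameter models.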
APA, Harvard, Vancouver, ISO, and other styles
47

Kellermann, Anh Pham. "Missing Data in Complex Sample Surveys: Impact of Deletion and Imputation Treatments on Point and Interval Parameter Estimates." Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7633.

Full text
Abstract:
The purpose of this simulation study was to evaluate the relative performance of five missing data treatments (MDTs) for handling missing data in complex sample surveys. The five methods included in this study were listwise deletion (LW), single hot-deck imputation (HS), single regression imputation (RS), hot-deck-based multiple imputation (HM), and regression-based multiple imputation (RM). These MDTs were assessed in the context of regression weight estimates in multiple regression analysis of complex sample data with two data levels. In this study, the multiple regression equation had six regressors without missing data and two regressors with missing data. The four performance measures used were statistical bias, RMSE, confidence interval (CI) width, and coverage probability of the 95% confidence interval. The five MDTs were evaluated separately for three types of missingness: MCAR, MAR, and MNAR. For each type of missingness, the studied MDTs were evaluated at four levels of missingness (10%, 30%, 50%, and 70%), along with complete-sample conditions as a reference point for interpretation of results. In addition, ICC levels (.0, .25, .50) and high- and low-density populations were also manipulated as studied factors. The study’s findings revealed that the performance of each individual MDT varied across missing data types, but their relative performance was quite similar for all missing data types except for LW’s performance under MNAR. RS produced the most inaccurate estimates considering bias, RMSE, and coverage of the confidence interval; RM and HM were the second-poorest performers. The LW and HS procedures outperformed the rest on the measures of accuracy and precision under MCAR; however, LW’s measures of precision decreased under MAR and MNAR, and LW’s CI width was the widest in MNAR data.
In addition, in all three missing data types, the poor performers were less accurate and less precise on variables with missing data than on variables without missing data, and the degree of accuracy and precision of these poor performers depended mostly on the level of data ICC. The proportion of missing data only noticeably affected the performance of HM, such that at higher missing data levels HM yielded worse performance measures. The population density factor had negligible effects on most of the measures produced by the studied MDTs, except for the RMSE, CI width, and CI coverage produced by LW, which were modestly influenced by population density.
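Two of the simpler missing-data treatments compared in this study, listwise deletion (LW) and single hot-deck imputation (HS), can be sketched as follows (the toy data are hypothetical):

```python
import random

def listwise_delete(rows):
    """LW: keep only rows with no missing (None) values."""
    return [r for r in rows if None not in r]

def hot_deck_impute(rows, rng):
    """HS: replace each missing value with a randomly drawn observed
    value (a 'donor') from the same column."""
    cols = list(zip(*rows))
    donors = [[v for v in col if v is not None] for col in cols]
    out = []
    for r in rows:
        out.append([v if v is not None else rng.choice(donors[j])
                    for j, v in enumerate(r)])
    return out

rng = random.Random(1)
data = [[1.0, 2.0], [None, 3.0], [4.0, None], [5.0, 6.0]]
lw = listwise_delete(data)       # only the 2 complete rows remain
hd = hot_deck_impute(data, rng)  # all 4 rows retained, gaps filled
```

LW discards information (here, half the rows), which is one reason its precision degrades under MAR and MNAR, while hot-deck imputation preserves the sample size at the cost of borrowed values.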
APA, Harvard, Vancouver, ISO, and other styles
48

Rizk, Amr [Verfasser]. "Non-asymptotic performance evaluation and sampling-based parameter estimation for communication networks with long memory traffic / Amr Rizk." Hannover : Technische Informationsbibliothek und Universitätsbibliothek Hannover (TIB), 2013. http://d-nb.info/1044693703/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Fisch, Barbara [Verfasser], and Birgit [Akademischer Betreuer] Ertl-Wagner. "MR-tomographische Evaluation hämo- und hydrodynamischer Parameter bei Patienten mit Multipler Sklerose / Barbara Fisch. Betreuer: Birgit Ertl-Wagner." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2014. http://d-nb.info/1065610130/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Dörmann, Ulrike [Verfasser]. "Isometrische und isoinertiale Parameter in der Kraftdiagnostik: Reliabilitätsprüfung und Evaluation von Effekten mechanischer und elektrischer Krafttrainingsreize / Ulrike Dörmann." Köln : Zentralbibliothek der Deutschen Sporthochschule, 2011. http://d-nb.info/1070953199/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles