To see the other types of publications on this topic, follow this link: Complementary cumulative distribution function.

Theses on the topic "Complementary cumulative distribution function"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles


Consult the 24 best theses for your research on the topic "Complementary cumulative distribution function".

Next to each source in the list of references there is an "Add to bibliography" button. Click on this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online when this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Oltean, Elvis. « Modelling income, wealth, and expenditure data by use of Econophysics ». Thesis, Loughborough University, 2016. https://dspace.lboro.ac.uk/2134/20203.

Full text
Abstract:
In the present paper, we identify several distributions from Physics and study their applicability to phenomena such as the distribution of income, wealth, and expenditure. Firstly, we apply the logistic distribution to these data and find that it fits the annual data very well over the entire income interval, including the upper income segment of the population. Secondly, we apply the Fermi-Dirac distribution to these data. We seek to explain possible correlations and analogies between economic systems and statistical thermodynamics systems. We try to explain their behaviour and properties when we correlate physical variables with macroeconomic aggregates and indicators. We then draw some analogies between parameters of the Fermi-Dirac distribution and macroeconomic variables. Thirdly, as complex systems are modelled using polynomial distributions, we apply polynomials to the annual sets of data and find that they also fit the entire income interval very well. Fourthly, we develop a new methodology to approach the income, wealth, and expenditure distributions dynamically, in a manner similar to dynamical complex systems. This methodology was applied to different time intervals consisting of consecutive years, up to 35 years. Finally, we develop a mathematical model based on a Hamiltonian that maximises a utility function applied to the Ramsey model using Fermi-Dirac and polynomial utility functions. We find some theoretical connections with time preference theory. We apply these distributions to a large pool of data from countries with different levels of development, using different methods for the calculation of income, wealth, and expenditure.
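To make the idea concrete: the abstract does not state the exact fitting procedure, but a Fermi-Dirac form of the kind mentioned above is typically fitted to the empirical complementary cumulative distribution of income. The sketch below is only an illustration under that assumption, with synthetic data and illustrative parameter names mu and T; it is not the author's code.

import numpy as np
from scipy.optimize import curve_fit
from scipy.special import expit

def fermi_dirac_ccdf(x, mu, T):
    # Fraction of the population with income above x, Fermi-Dirac form:
    # 1 / (exp((x - mu) / T) + 1), written with expit for numerical stability.
    return expit(-(x - mu) / T)

rng = np.random.default_rng(0)
incomes = np.sort(rng.lognormal(mean=10.0, sigma=0.6, size=5000))  # synthetic incomes
empirical_ccdf = 1.0 - np.arange(1, incomes.size + 1) / incomes.size

params, _ = curve_fit(fermi_dirac_ccdf, incomes, empirical_ccdf,
                      p0=[np.median(incomes), np.std(incomes)])
print("fitted mu, T:", params)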
2

BAIG, CLEMENT RANJITH ANTHIKKAD & IRFAN AHMED. « PERFORMANCE ENHANCEMENT OF OFDM IN PAPR REDUCTION USING NEW COMPANDING TRANSFORM AND ADAPTIVE AC EXTENSION ALGORITHM FOR NEXT GENERATION NETWORKS ». Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-6011.

Full text
Abstract:
This paper presents a new hybrid PAPR reduction technique for the OFDM signal, which combines a multiple symbol representations method with a signal clipping method. The clipping method is a nonlinear PAPR reduction scheme, where the amplitude of the signal is limited to a given threshold. Considering the fact that the signal must be interpolated before A/D conversion, a variety of clipping methods have been proposed. Some methods apply the clipping before interpolation, with the disadvantage of peak re-growth. Other methods apply the clipping after interpolation, with the disadvantage of out-of-band power production. In order to overcome this problem, different filtering techniques have been proposed. Filtering can also cause peak re-growth, but less than clipping before interpolation. Another clipping technique supposes that only subcarriers having the highest phase difference between the original signal and its clipped variant will be changed. This is the case of the partial clipping method. To further reduce the PAPR, the dynamic range of the clipped signal can be compressed. Linear methods such as partial transmit sequence or selective mapping have also been proposed for the reduction of PAPR. Another PAPR reduction method is tone reservation. It uses tones on which no data is sent to reduce the transmitted signal peaks. Derivatives of this method with lower computational complexity and improved performance have been proposed: One-Tone One-Peak and one-by-one iteration. A similar PAPR reduction method is multiple symbol representations, where alternative signalling points are used to represent one symbol. The simulation results highlight the advantages of the proposed PAPR reduction method.
The proposed technique, namely the Adaptive Active Constellation Extension (Adaptive ACE) algorithm, reduces the high Peak-to-Average Power Ratio (PAPR) of Orthogonal Frequency Division Multiplexing (OFDM) systems. The Peak-to-Average Power Ratio (PAPR) is equal to 6.8 dB for the target clipping ratios of 4 dB, 2 dB and 0 dB by using the Adaptive Active Constellation Extension (Adaptive ACE) algorithm. Thus, the minimum PAPR can be achieved for low target clipping ratios. The Signal-to-Noise Ratio (SNR) of the Orthogonal Frequency Division Multiplexing (OFDM) signal obtained by the Adaptive Active Constellation Extension (Adaptive ACE) algorithm is equal to 1.2 dB at a Bit Error Rate (BER) of 10^-0.4 for different constellation orders such as 4-Quadrature Amplitude Modulation (4-QAM), 16-Quadrature Amplitude Modulation (16-QAM) and 64-Quadrature Amplitude Modulation (64-QAM). Here, a Bit Error Rate of 10^-0.4, or 0.398, means that a total of 398 bits are in error when 1000 bits are transmitted via a communication channel, or approximately 4 bits are in error when 10 bits are transmitted, which is high when compared to that of the original Orthogonal Frequency Division Multiplexing (OFDM) signal. The other problems faced by the Adaptive Active Constellation Extension (Adaptive ACE) algorithm are Out-of-Band Interference (OBI) and peak regrowth. Here, the Out-of-Band Interference (OBI) is a form of noise or an unwanted signal, which is caused when the original Orthogonal Frequency Division Multiplexing (OFDM) signal is clipped to reduce the peak signals which are outside of the predetermined area, and the peak regrowth is obtained after filtering the clipped signal. The peak regrowth results in an increase in computational time and computational complexity. In this paper, we have proposed a PAPR reduction scheme to improve the bit error rate performance by applying a companding transform technique. Hence, a 1-1.5 dB reduction in PAPR is achieved with this companding technique. In future work, we expect to implement the same on Rician and Rayleigh channels.
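Since this listing is about the complementary cumulative distribution function, it may help to recall how the PAPR CCDF quoted in abstracts such as this one is usually obtained: generate many OFDM symbols, compute each symbol's PAPR, and report P(PAPR > threshold). The sketch below is a generic illustration (QPSK subcarriers, illustrative symbol counts and oversampling), not the Adaptive ACE algorithm from this thesis.

import numpy as np

def ofdm_papr_db(num_symbols=10000, n_subcarriers=64, oversample=4, seed=0):
    # PAPR in dB of randomly generated QPSK-modulated OFDM symbols.
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, size=(num_symbols, n_subcarriers, 2))
    qpsk = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)
    spectrum = np.zeros((num_symbols, n_subcarriers * oversample), dtype=complex)
    spectrum[:, :n_subcarriers] = qpsk          # zero-padding gives time-domain oversampling
    time_signal = np.fft.ifft(spectrum, axis=1)
    power = np.abs(time_signal) ** 2
    return 10 * np.log10(power.max(axis=1) / power.mean(axis=1))

papr_db = ofdm_papr_db()
thresholds = np.linspace(4.0, 12.0, 33)
ccdf = np.array([(papr_db > g).mean() for g in thresholds])  # CCDF: P(PAPR > gamma)
print(list(zip(thresholds.round(1), ccdf)))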
Clement Ranjith Anthikkad (E-mail: clement.ranjith@gmail.com / clan11@bth.se) & Irfan Ahmed Baig (E-mail: baig.irfanahmed@gmail.com / ir-a11@bth.se )
3

Liu, Xuecheng 1963. « Nonparametric maximum likelihood estimation of the cumulative distribution function with multivariate interval censored data : computation, identifiability and bounds ». Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=79036.

Full text
Abstract:
This thesis addresses nonparametric maximum likelihood (NPML) estimation of the cumulative distribution function (CDF) given multivariate interval censored data (MICD). The methodology consists in applying graph theory to the intersection graph of the censored data. The maximal cliques of this graph and their real representations contain all the information needed to find NPML estimates (NPMLE). In this thesis, a new algorithm to determine the maximal cliques of an MICD set is introduced. The concepts of diameter and semi-diameter of the polytope formed by all NPMLEs are introduced, and a simulation to investigate the properties of the non-uniqueness polytope of the CDF NPMLEs for bivariate censored data is described. Also, an a priori bounding technique for the total mass attributed to a set of maximal cliques by a self-consistent estimate of the CDF (including the NPMLE) is presented.
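For readers unfamiliar with the clique step mentioned above: in the univariate case the intersection graph joins two observations whenever their censoring intervals overlap, and the NPMLE can only place probability mass on regions corresponding to maximal cliques. The toy sketch below (illustrative intervals, clique enumeration via networkx) shows that construction; it is not the new algorithm proposed in the thesis.

import networkx as nx

# Toy univariate interval-censored observations [l, r].
intervals = [(0.0, 2.0), (1.0, 3.0), (2.5, 4.0), (3.5, 6.0)]

G = nx.Graph()
G.add_nodes_from(range(len(intervals)))
for i, (li, ri) in enumerate(intervals):
    for j, (lj, rj) in enumerate(intervals):
        if i < j and max(li, lj) <= min(ri, rj):  # intervals overlap
            G.add_edge(i, j)

# Maximal cliques of the intersection graph; the NPMLE places its mass
# on the real representations of these cliques.
print(list(nx.find_cliques(G)))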
4

Forgo, Vincent Z. Mr. « A Distribution of the First Order Statistic When the Sample Size is Random ». Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3181.

Full text
Abstract:
Statistical distributions, also known as probability distributions, are used to model a random experiment. Probability distributions are described by probability density functions (pdf) and cumulative distribution functions (cdf). Probability distributions are widely used in engineering, actuarial science, computer science, biological science, physics, and other applicable areas of study. Statistics are used to draw conclusions about the population through probability models. Sample statistics such as the minimum, first quartile, median, third quartile, and maximum, referred to as the five-number summary, are examples of order statistics. The minimum and maximum observations are important in extreme value theory. This paper focuses on the probability distribution of the minimum observation, also known as the first order statistic, when the sample size is random.
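The abstract does not reproduce the derivation, but the standard identity behind the distribution of the first order statistic when the sample size N is random (and assumed independent of the observations) is:

F_{X_{(1)}}(x) \;=\; 1 - \Pr\!\left(\min_{1\le i\le N} X_i > x\right) \;=\; 1 - \mathbb{E}\!\left[(1 - F(x))^{N}\right] \;=\; 1 - G_N\!\left(1 - F(x)\right),

where F is the common cumulative distribution function of the observations and G_N(s) = E[s^N] is the probability generating function of N. For instance, if N is geometric on {1, 2, ...} with success probability p, then G_N(s) = ps / (1 - (1 - p)s).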
5

Jeisman, Joseph Ian. « Estimation of the parameters of stochastic differential equations ». Queensland University of Technology, 2006. http://eprints.qut.edu.au/16205/.

Full text
Abstract:
Stochastic differential equations (SDEs) are central to much of modern finance theory and have been widely used to model the behaviour of key variables such as the instantaneous short-term interest rate, asset prices, asset returns and their volatility. The explanatory and/or predictive power of these models depends crucially on the particularisation of the model SDE(s) to real data through the choice of values for their parameters. In econometrics, optimal parameter estimates are generally considered to be those that maximise the likelihood of the sample. In the context of the estimation of the parameters of SDEs, however, a closed-form expression for the likelihood function is rarely available and hence exact maximum-likelihood (EML) estimation is usually infeasible. The key research problem examined in this thesis is the development of generic, accurate and computationally feasible estimation procedures based on the ML principle, that can be implemented in the absence of a closed-form expression for the likelihood function. The overall recommendation to come out of the thesis is that an estimation procedure based on the finite-element solution of a reformulation of the Fokker-Planck equation in terms of the transitional cumulative distribution function (CDF) provides the best balance across all of the desired characteristics. The recommended approach involves the use of an interpolation technique proposed in this thesis which greatly reduces the required computational effort.
6

Ericok, Ozlen. « Uncertainty Assessment In Reserve Estimation Of A Naturally Fractured Reservoir ». Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12605713/index.pdf.

Full text
Abstract:
UNCERTAINTY ASSESSMENT IN RESERVE ESTIMATION OF A NATURALLY FRACTURED RESERVOIR. ERIÇOK, Özlen. M.S., Department of Petroleum and Natural Gas Engineering. Supervisor: Prof. Dr. Fevzi GÜMRAH. December 2004, 169 pages.
Reservoir performance prediction and reserve estimation depend on various petrophysical parameters which have uncertainties due to available technology. For a proper and economical field development, these parameters must be determined by taking into consideration their uncertainty level and probable data ranges. For implementing uncertainty assessment on the estimation of original oil in place (OOIP) of a field, a naturally fractured carbonate field, Field-A, is chosen to work with. Since field information is obtained by drilling and testing wells throughout the field, uncertainty in the true ranges of reservoir parameters evolves due to the impossibility of drilling at every location in an area. This study is based on defining the probability distribution of uncertain variables in reserve estimation and evaluating the probable reserve amount by using the Monte Carlo simulation method. Probabilistic reserve estimation gives the whole range of probable original oil in place amounts of a field. The results are given by their likelihood of occurrence as P10, P50 and P90 reserves in summary. In the study, Field-A reserves in Southeast Turkey are estimated by probabilistic methods for three producing zones: the Karabogaz Formation, the Kbb-C Member of the Karababa Formation and the Derdere Formation. Probability density functions of petrophysical parameters are evaluated as inputs in the volumetric reserve estimation method, and probable reserves are calculated by the @Risk software program, which is used for implementing the Monte Carlo method. Outcomes of the simulation showed that Field-A has P50 reserves of 11.2 MMstb in the matrix and 2.0 MMstb in the fractures of the Karabogaz Formation, 15.7 MMstb in the matrix and 3.7 MMstb in the fractures of the Kbb-C Member, and 10.6 MMstb in the matrix and 1.6 MMstb in the fractures of the Derdere Formation. Sensitivity analysis of the inputs showed that matrix porosity, net thickness and fracture porosity are significant in the Karabogaz Formation and Kbb-C Member reserve estimation, while water saturation and fracture porosity are most significant in the estimation of Derdere Formation reserves.
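As a rough illustration of the probabilistic volumetric method described above (Monte Carlo sampling of petrophysical inputs, then P10/P50/P90 of the resulting OOIP distribution), the sketch below uses entirely made-up input distributions, not the Field-A data; 7758 is the usual barrels-per-acre-foot conversion constant.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Illustrative input distributions (NOT the Field-A values from the thesis).
area_acres = rng.triangular(800, 1000, 1200, n)    # drainage area
net_pay_ft = rng.triangular(30, 50, 70, n)         # net thickness
porosity   = rng.normal(0.10, 0.02, n).clip(0.01)  # matrix porosity
water_sat  = rng.normal(0.35, 0.05, n).clip(0, 1)  # water saturation
bo = 1.2                                           # formation volume factor, rb/stb

# Volumetric OOIP in stock-tank barrels.
ooip = 7758.0 * area_acres * net_pay_ft * porosity * (1.0 - water_sat) / bo

# P90 = value exceeded with 90% probability (10th percentile), and so on.
p90, p50, p10 = np.percentile(ooip, [10, 50, 90])
print(f"P90={p90:.3e}  P50={p50:.3e}  P10={p10:.3e} stb")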
7

Cunha, Lucas Santana da. « Modelos não lineares resultantes da soma de regressões lineares ponderadas por funções distribuição acumulada ». Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-04052016-100308/.

Full text
Abstract:
Electronic spray controllers aim to minimize the variation of the input rates applied in the field. They are part of a control system and allow compensation for variation in the sprayer's travel speed during operation. There are several types of electronic spray controllers on the market, and one way to select the most efficient one under the same conditions, i.e. in the same control system, is to quantify the system response time for each specific controller. The objective of this study was to estimate the response times to speed changes of an electronic spraying system via nonlinear regression models resulting from the sum of linear regressions weighted by cumulative distribution functions. Data were obtained at the Application Technology Laboratory, located in the Department of Biosystems Engineering of the College of Agriculture "Luiz de Queiroz", University of Sao Paulo, in Piracicaba, Sao Paulo, Brazil. The models used were the logistic and Gompertz models, which result from a weighted sum of two constant linear regressions with weights given by the logistic and Gumbel cumulative distribution functions, respectively. Reparametrizations were proposed to include the control system response time in the models, in order to improve their statistical interpretation and inference. A two-phase nonlinear regression model was also proposed, given by the weighted sum of constant linear regressions with weights given by the exponential hyperbolic sine Cauchy cumulative distribution function, and a simulation study was conducted using the Monte Carlo methodology to evaluate the maximum likelihood estimates of the model parameters.
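Written out in one possible notation (not necessarily the thesis's parameterisation), the logistic member of this model family is a weighted sum of two constant regressions with the logistic CDF as the weight:

E[y \mid x] \;=\; \beta_1\,\bigl[1 - F(x;\mu,s)\bigr] \;+\; \beta_2\,F(x;\mu,s), \qquad F(x;\mu,s) \;=\; \frac{1}{1 + e^{-(x-\mu)/s}},

so the expected response moves smoothly from the plateau \beta_1 to the plateau \beta_2. The Gompertz variant replaces F by the Gumbel CDF \exp\{-\exp[-(x-\mu)/s]\}, and a system response time can then be defined from the fitted transition, for example as the time needed for F to move between two fixed probability levels.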
8

Bothenna, Hasitha Imantha. « Approximation of Information Rates in Non-Coherent MISO wireless channels with finite input signals ». University of Akron / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=akron1516369758012866.

Full text
9

Dhuness, Kahesh. « An offset modulation method used to control the PAPR of an OFDM transmission ». Thesis, University of Pretoria, 2012. http://hdl.handle.net/2263/27258.

Full text
Abstract:
Orthogonal frequency division multiplexing (OFDM) has become a very popular method for high-data-rate communication. However, it is well known that OFDM is plagued by a large peak-to-average power ratio (PAPR) problem. This high PAPR results in overdesigned power amplifiers, which amongst other things leads to inefficient amplifier usage, which is undesirable. Various methods have been recommended to reduce the PAPR of an OFDM transmission; however, all these methods result in a number of drawbacks. In this thesis, a novel method called offset modulation (OM-OFDM) is proposed to control the PAPR of an OFDM signal. The proposed OM-OFDM method does not result in a number of the drawbacks being experienced by current methods in the field. The theoretical bandwidth occupancy and theoretical bit error rate (BER) expression for an OM-OFDM transmission are derived. A newly applied power performance decision metric is also introduced, which can be utilised throughout the PAPR field in order to compare various methods. The proposed OM-OFDM method appears to be similar to a well-known constant envelope OFDM (CE-OFDM) transmission. The modulation, structural and performance differences between an OM-OFDM and a CE-OFDM method are discussed. By applying the power performance decision metric, the OM-OFDM method is shown to offer significant performance gains when compared to CE-OFDM and traditional OFDM transmissions. In addition, the OM-OFDM method is able to accurately control the PAPR of a transmission for a targeted BER. By applying the power performance decision metric and complementary cumulative distribution function (CCDF), the proposed OM-OFDM method is shown to offer further performance gains when compared to existing PAPR methods under frequency selective fading conditions. In this thesis, the OM-OFDM method has also been combined with an existing active constellation extension (ACE) PAPR reduction method to introduce a novel method, called offset modulation with active constellation extension (OM-ACE), to control the PAPR of an OFDM signal. The theoretical BER expression for an OM-ACE transmission is presented and validated. Thereafter, by applying the decision metric and CCDF, the OM-ACE method is shown to offer performance improvements when compared to various PAPR methods. The use of OM-OFDM for cognitive radio applications is also investigated. Cognitive radio applications require transmissions that are easily detectable. The detection characteristics of an OM-OFDM and OFDM transmission are studied by using receiver operating characteristic curves. A simplified theoretical closed-form expression, which relates the probability of a missed detection to the probability of a false alarm for an unknown deterministic signal at various signal-to-noise ratio (SNR) values, is derived and validated. Previous expressions have been derived which relate the probability of a missed detection to the probability of a false alarm; however, they have not been presented in such a generic closed-form expression that can be used for any unknown deterministic signal (for instance OFDM and OM-OFDM). Thereafter, an examination of the spectrum characteristics of an OM-OFDM transmission indicates its attractive detection characteristics.
The proposed OM-OFDM method is further shown to operate at a significantly lower SNR value than an OFDM transmission, while still offering better detection characteristics than that of an OFDM transmission under Rician, Rayleigh and frequency selective fading channel conditions. In addition to its attractive PAPR properties, OM-OFDM also offers good detection characteristics for cognitive radio applications. These aspects make OM-OFDM a promising candidate for future deployment.
Thesis (PhD)--University of Pretoria, 2012.
Electrical, Electronic and Computer Engineering
10

Ahmad, Shafiq, et Shafiq ahmad@rmit edu au. « Process capability assessment for univariate and multivariate non-normal correlated quality characteristics ». RMIT University. Mathematical and Geospatial Sciences, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20091127.121556.

Full text
Abstract:
In today's competitive business and industrial environment, it is becoming more crucial than ever to assess precisely process losses due to non-compliance with customer specifications. To assess these losses, industry is extensively using Process Capability Indices for performance evaluation of their processes. Determination of the performance capability of a stable process using the standard process capability indices such as Cp and Cpk requires that the underlying quality characteristics data follow a normal distribution. However, it is an undisputed fact that real processes very often produce non-normal quality characteristics data and also these quality characteristics are very often correlated with each other. For such non-normal and correlated multivariate quality characteristics, application of standard capability measures using conventional methods can lead to erroneous results. The research undertaken in this PhD thesis presents several capability assessment methods to estimate more precisely and accurately process performance based on univariate as well as multivariate quality characteristics. The proposed capability assessment methods also take into account the correlation, variance and covariance as well as non-normality issues of the quality characteristics data. A comprehensive review of the existing univariate and multivariate PCI estimations has been provided. We have proposed fitting Burr XII distributions to continuous positively skewed data. The proportion of nonconformance (PNC) for process measurements is then obtained by using the Burr XII distribution, rather than through the traditional practice of fitting different distributions to real data. The maximum likelihood method is deployed to improve the accuracy of PCI based on the Burr XII distribution. Different numerical methods such as Evolutionary and Simulated Annealing algorithms are deployed to estimate the parameters of the fitted Burr XII distribution. We have also introduced a new transformation method called the Best Root Transformation approach to transform non-normal data to normal data and then apply the traditional PCI method to estimate the proportion of non-conforming data. Another approach which has been introduced in this thesis is to deploy the Burr XII cumulative distribution function for PCI estimation using the Cumulative Distribution Function technique. The proposed approach is in contrast to the approach adopted in the research literature, i.e. the use of the best-fitting density function from known distributions to non-normal data for PCI estimation. The proposed CDF technique has also been extended to estimate process capability for bivariate non-normal quality characteristics data. A new multivariate capability index based on the Generalized Covariance Distance (GCD) is proposed. This novel approach reduces the dimension of multivariate data by transforming correlated variables into univariate ones through a metric function. This approach evaluates process capability for correlated non-normal multivariate quality characteristics. Unlike the Geometric Distance approach, the GCD approach takes into account the scaling effect of the variance-covariance matrix and produces a Covariance Distance variable that is based on the Mahalanobis distance. Another novelty introduced in this research is to approximate the distribution of these distances by a Burr XII distribution and then estimate its parameters using a numerical search algorithm. It is demonstrated that the proportion of nonconformance (PNC) using the proposed method is very close to the actual PNC value.
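A minimal sketch of the Burr XII / PNC idea described above, using scipy's parameterisation of the Burr XII distribution and synthetic skewed data (the specification limits and data are illustrative; the thesis's own numerical search algorithms are not reproduced):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.gamma(shape=2.0, scale=1.5, size=500)  # synthetic positively skewed measurements

# Fit a Burr XII distribution (location fixed at 0) to the data.
c, d, loc, scale = stats.burr12.fit(data, floc=0)

# Proportion of nonconformance for illustrative specification limits.
lsl, usl = 0.2, 9.0
pnc = (stats.burr12.cdf(lsl, c, d, loc=loc, scale=scale)
       + stats.burr12.sf(usl, c, d, loc=loc, scale=scale))
print(f"estimated PNC = {pnc:.4%}")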
11

Povalač, Karel. « Sledování spektra a optimalizace systémů s více nosnými pro kognitivní rádio ». Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-233577.

Full text
Abstract:
The doctoral thesis deals with spectrum sensing and the subsequent use of the frequency spectrum by a multicarrier communication system whose parameters are set on the basis of an optimization technique. Adaptation settings can be made with respect to several requirements as well as the state and occupancy of individual communication channels. The system characterized above is often referred to as cognitive radio. Equipment operating on cognitive radio principles will be widely used in the near future because of frequency spectrum limitations. One of the main contributions of the work is the novel usage of the Kolmogorov-Smirnov statistical test as an alternative detection of primary user signal presence. A new fitness function for Particle Swarm Optimization (PSO) has been introduced, and the Error Vector Magnitude (EVM) parameter has been used in the adaptive greedy algorithm and PSO optimization. The dissertation thesis also incorporates information about the reliability of the frequency spectrum sensing into the modified greedy algorithm. The proposed methods are verified by simulations, and the frequency domain energy detection is implemented on a development board with an FPGA.
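As a sketch of the Kolmogorov-Smirnov detection idea mentioned above (not the thesis's implementation): test whether received samples are consistent with the known noise-only distribution; a small p-value suggests a primary user is present. The noise level, signal shape and sample sizes below are illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
noise_sigma = 1.0

# Received samples: noise only vs. noise plus a weak primary-user signal.
h0 = rng.normal(0, noise_sigma, 1000)
h1 = rng.normal(0, noise_sigma, 1000) + 0.3 * np.sin(np.linspace(0, 40 * np.pi, 1000))

# One-sample K-S test against the known noise CDF.
for name, x in [("H0 (noise only)", h0), ("H1 (signal present)", h1)]:
    stat, p = stats.kstest(x, "norm", args=(0, noise_sigma))
    print(name, f"KS statistic={stat:.3f}, p-value={p:.3g}")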
12

Zambrano, Martínez Jorge Luis. « Efficient Traffic Management in Urban Environments ». Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/129865.

Full text
Abstract:
[EN] Currently, one of the main challenges that large metropolitan areas have to face is traffic congestion, which has become an important problem faced by city authorities. To address this problem, it becomes necessary to implement an efficient solution to control traffic that generates benefits for citizens, such as reducing vehicle journey times and, consequently, use of fuel, noise and environmental pollution. In fact, by properly analyzing traffic demand, it becomes possible to predict future traffic conditions, and to use that information for the optimization of the routes taken by vehicles. Such an approach becomes especially effective if applied in the context of autonomous vehicles, which have a more predictable behavior, thus enabling city management entities to mitigate the effects of traffic congestion and pollution by improving the traffic flow in a city in a fully centralized manner. Validating this approach typically requires the use of simulations, which should be as realistic as possible. However, achieving high degrees of realism can be complex when the actual traffic patterns, defined through an Origin/Destination (O-D) matrix for the vehicles in a city, are unknown, as occurs most of the times. Thus, the first contribution of this thesis is to develop an iterative heuristic for improving traffic congestion modeling; starting from real induction loop measurements made available by the City Hall of Valencia, Spain, we were able to generate an O-D matrix for traffic simulation that resembles the real traffic distribution. If it were possible to characterize the state of traffic by predicting future traffic conditions for optimizing the route of automated vehicles, and if these measures could be taken to preventively mitigate the effects of congestion with its related problems, the overall traffic flow could be improved. Thereby, the second contribution of this thesis was to develop a Traffic Prediction Equation to characterize the different streets of a city in terms of travel time with respect to the vehicle load, and applying logistic regression to those data to predict future traffic conditions. The third and last contribution of this thesis towards our envisioned traffic management paradigm was a route server capable of handling all the traffic in a city, and balancing traffic flows by accounting for present and future traffic congestion conditions. Thus, we perform a simulation study using real data of traffic congestion in the city of Valencia, Spain, to demonstrate how the traffic flow in a typical day can be improved using our proposed solution. Experimental results show that our proposed solution, combined with frequent updating of traffic conditions on the route server, is able to achieve substantial improvements in terms of average travel speeds and travel times, both indicators of lower degrees of congestion and improved traffic fluidity.
Finally, I want to thank the Republic of Ecuador through the "Secretaría de Educación Superior, Ciencia, Tecnología e Innovación" (SENESCYT), for granting me the scholarship to finance my studies.
Zambrano Martínez, JL. (2019). Efficient Traffic Management in Urban Environments [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/129865
13

Xu, Jia Cheng. « Evaluation of Thoracic Injury Risk of Heavy Goods Vehicle Occupants during Steering Wheel Rim Impacts to Different Rib Levels ». Thesis, KTH, Medicinteknik och hälsosystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-266357.

Full text
Abstract:
The interior of heavy goods vehicles (HGVs) differs from passenger cars. Both the steering wheel and the occupant are positioned differently in a HGV, which increases the risk of steering wheel rim impacts. Such impact scenarios are relatively unexplored compared to passenger car safety studies that are more prevalent within the field of injury biomechanics. The idea with using human body models (HBMs) is to complement current crash test dummies with biomechanical data. Furthermore, the biofidelity of a crash dummy for loading similar to a steering wheel rim impact is relatively unstudied, especially at different rib levels. Therefore, the aim of this thesis was to evaluate HGV occupant thoracic response between THUMS v4.0 and Hybrid III (H3) during steering wheel rim impacts with respect to different rib levels (level 1-2, 3-4, 6-7, 7-8, 9-10) with regards to ribs, aorta, liver, and spleen. To the author's best knowledge, use of local injury risk functions for thoracic injuries is fairly rare compared to the predominant usage of global injury criteria that mainly predict the most common thoracic injury risk, i.e. rib fractures. Therefore, local injury criteria using experimental test data have been developed for the ribs and the organs. The measured parameters were chest deflection and steering wheel to thorax contact force on a global level, whilst 1st principal Green-Lagrange strains were assessed for the rib and the organ injury risk. The material models for the liver and the spleen were remodelled using an Ogden material model based on experimental stress-strain data to account for hyperelasticity. Rate-dependency was included by iteration of viscoelastic parameters. The contact modelling of the organs was changed from a sliding contact to a tied contact to minimize unrealistic contact separations during impact. The results support previous findings that H3 needs additional instrumentation to accurately register chest deflection for rib levels beyond its current range, namely at ribs 1-2, 7-8, and 9-10. For THUMS, the chest deflections were within reasonable values for the applied velocities, but there was no definite injury risk. In fact, the global injury criteria might overpredict the AIS3 injury risk (rib fractures) for rib levels 1-2, 7-8, and 9-10. The rib strains could not be correlated with the measured chest deflections. This was explained by the unique localized loading characterized by pure steering wheel rim impact that mainly affected the sternum and the rib cartilage while minimizing rib deformation. The organ strains indicate some risk of rupture, where the spleen deforms the most at rib levels 3-4 and 6-7, and the liver and the aorta at rib levels 6-7 and 7-8. This study provides a framework for complementing H3 with THUMS for HGV occupant safety, with emphasis on the importance of using local injury criteria for functional injury prediction, i.e. prediction of injury risk using parameters directly related to rib fracture or organ rupture. Local injury criteria are thus a powerful safety assessment tool as they are independent of exterior loading such as airbag, steering wheel hub, or seat belt loading. It was noticed that global injury criteria with very localized impacts such as rim impacts have not been studied and will affect rib fracture risk differently than what has been studied using airbag or seat belt restraints. However, improvements are needed to accurately predict thoracic injury risk at a material level by finding more data for the local injury risk functions.
Conclusively, it is clear that Hybrid III has insufficient instrumentation and is in need of upgrades to register chest deflections at multiple rib levels. Furthermore, the following are needed: better understanding of global injury criteria specific for HGV occupant safety evaluation, more data for age-dependent (ribs) and rate-dependent (organs) injury risk functions, a tiebreak contact with tangential sliding for better organ kinematics during impacts, and improving the biofidelity of the material models using data from tissue level experiments.
14

Beisler, Matthias Werner. « Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility for hydropower projects ». Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2011. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-71564.

Full text
Abstract:
The design of hydropower projects requires a comprehensive planning process in order to achieve the objective to maximise exploitation of the existing hydropower potential as well as future revenues of the plant. For this purpose and to satisfy approval requirements for a complex hydropower development, it is imperative at planning stage, that the conceptual development contemplates a wide range of influencing design factors and ensures appropriate consideration of all related aspects. Since the majority of technical and economical parameters that are required for detailed and final design cannot be precisely determined at early planning stages, crucial design parameters such as design discharge and hydraulic head have to be examined through an extensive optimisation process. One disadvantage inherent to commonly used deterministic analysis is the lack of objectivity for the selection of input parameters. Moreover, it cannot be ensured that the entire existing parameter ranges and all possible parameter combinations are covered. Probabilistic methods utilise discrete probability distributions or parameter input ranges to cover the entire range of uncertainties resulting from an information deficit during the planning phase and integrate them into the optimisation by means of an alternative calculation method. The investigated method assists with the mathematical assessment and integration of uncertainties into the rational economic appraisal of complex infrastructure projects. The assessment includes an exemplary verification to what extent the Random Set Theory can be utilised for the determination of input parameters that are relevant for the optimisation of hydropower projects and evaluates possible improvements with respect to accuracy and suitability of the calculated results
15

Yan, Chu Hou, et 朱厚恩. « Cumulative Hazard Function Estimation for Exponentiated Weibull Distribution ». Thesis, 2004. http://ndltd.ncl.edu.tw/handle/10896849858471540176.

Full text
Abstract:
Master's thesis
Fu Jen Catholic University
Graduate Institute of Mathematics
92
In this article, we consider the cumulative hazard function of a series system product which is composed of two independent components. We consider a nonparametric approach when there is no information about the distributions of these two independent components. We propose a direct estimator and an indirect estimator of the cumulative hazard function of the system, then compare these two estimators via their asymptotic mean square errors. On the other hand, if the two independent components of the series system follow the Exponentiated Weibull distribution, we propose a direct estimator and an indirect estimator of the cumulative hazard function of the system. Again, we study the large sample behavior of these two estimators, then compare their asymptotic mean square errors. It is shown that both the nonparametric approach and the parametric approach lead to the same conclusion that the indirect estimator is better than the direct estimator in the sense of mean square error.
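The reliability identity underlying the direct/indirect comparison above, for independent components in series, is standard:

S_{\mathrm{sys}}(t) \;=\; S_1(t)\,S_2(t) \quad\Longrightarrow\quad H_{\mathrm{sys}}(t) \;=\; -\log S_{\mathrm{sys}}(t) \;=\; H_1(t) + H_2(t),

so an indirect estimator can sum estimates of the two component cumulative hazards, while a direct estimator is computed from the system lifetimes themselves.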
16

Huang, Jim C. « Cumulative Distribution Networks : Inference, Estimation and Applications of Graphical Models for Cumulative Distribution Functions ». Thesis, 2009. http://hdl.handle.net/1807/19194.

Full text
Abstract:
This thesis presents a class of graphical models for directly representing the joint cumulative distribution function (CDF) of many random variables, called cumulative distribution networks (CDNs). Unlike graphical models for probability density and mass functions, in a CDN, the marginal probabilities for any subset of variables are obtained by computing limits of functions in the model. We will show that the conditional independence properties in a CDN are distinct from the conditional independence properties of directed, undirected and factor graph models, but include the conditional independence properties of bidirected graphical models. As a result, CDNs are a parameterization for bidirected models that allows us to represent complex statistical dependence relationships between observable variables. We will provide a method for constructing a factor graph model with additional latent variables for which graph separation of variables in the corresponding CDN implies conditional independence of the separated variables in both the CDN and in the factor graph with the latent variables marginalized out. This will then allow us to construct multivariate extreme value distributions for which both a CDN and a corresponding factor graph representation exist. In order to perform inference in such graphs, we describe the `derivative-sum-product' (DSP) message-passing algorithm where messages correspond to derivatives of the joint cumulative distribution function. We will then apply CDNs to the problem of learning to rank, or estimating parametric models for ranking, where CDNs provide a natural means with which to model multivariate probabilities over ordinal variables such as pairwise preferences. We will show that many previous probability models for rank data, such as the Bradley-Terry and Plackett-Luce models, can be viewed as particular types of CDN. Applications of CDNs will be described for the problems of ranking players in multiplayer team-based games, document retrieval and discovering regulatory sequences in computational biology using the above methods for inference and estimation of CDNs.
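For readers new to CDF-based graphical models, the "marginals by limits" statement in the abstract amounts to the standard CDF facts, shown here for two variables:

F_X(x) \;=\; \lim_{y \to \infty} F_{X,Y}(x, y), \qquad \Pr(a < X \le b,\; c < Y \le d) \;=\; F(b,d) - F(a,d) - F(b,c) + F(a,c),

and in a CDN the joint CDF itself is modelled as a product of local functions over the graph, so inference (for example, the derivative-sum-product messages) manipulates exactly such limits and finite differences.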
17

Liu, Chih-En, et 劉志恩. « Feature Selection Method Based on Support Vector Machine and Cumulative Distribution Function ». Thesis, 2010. http://ndltd.ncl.edu.tw/handle/06692637254680399272.

Full text
Abstract:
Master's thesis
Fu Jen Catholic University
Department of Computer Science and Information Engineering
98
Feature selection is an important method in machine learning and data mining. It reduces the dimensionality of data and increases performance in classification and clustering. Zhang et al. (2006) developed a feature selection algorithm, named recursive support vector machine (R-SVM), to select important biomarkers for biological data. R-SVM is based on the technology of support vector machines. However, it only works for linear kernels. To overcome this limitation of R-SVM, we propose a distance-based cumulative distribution function (DCDF) algorithm that works for linear and nonlinear kernels. In this study, DCDF is also implemented to compare with R-SVM. The experiments include eight different types of cancer data and four UCI datasets. The results show that DCDF outperforms R-SVM using either linear or nonlinear kernels. In some datasets, the DCDF method using nonlinear kernels achieves much better results and significantly outperforms R-SVM. Keywords: Feature selection, Support vector machine, Cumulative distribution function, Recursive-SVM
18

Lin, I.-Chen, et 林宜禎. « Analysis of Qualities of Numerical Methods for Calculating the Inverse Normal Cumulative Distribution Function ». Thesis, 2007. http://ndltd.ncl.edu.tw/handle/77986574706470418900.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
95
The inverse normal cumulative distribution function does not have a closed form. Several algorithms for evaluating (or approximating) the inverse normal cumulative distribution function have been proposed. The aim of the thesis is to compare a few numerical methods for the inverse normal cumulative distribution function, which are used for calculating approximate values of the inverse normal cumulative distribution function. They include the built-in function in EXCEL, a numerical method by Peter J. Acklam, and a numerical method by Moro. The numerical method of Moro is implemented with MATLAB in this thesis. Related errors of the above-mentioned numerical methods are analyzed in the current thesis as well.
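The coefficients of Acklam's and Moro's approximations are not reproduced here, and the thesis itself works in MATLAB; the Python sketch below only shows the kind of error comparison described, measuring a candidate implementation against a high-accuracy reference (scipy's norm.ppf). The stand-in candidate uses the exact erfinv relation and would be replaced by the approximation under test.

import numpy as np
from scipy.stats import norm
from scipy.special import erfinv

def candidate_inverse_normal(p):
    # Stand-in implementation (exact relation via erfinv); in a comparison like the
    # one described above this would be Acklam's or Moro's rational approximation.
    return np.sqrt(2.0) * erfinv(2.0 * p - 1.0)

p = np.linspace(1e-6, 1 - 1e-6, 100001)
reference = norm.ppf(p)                                  # reference values
error = np.abs(candidate_inverse_normal(p) - reference)  # absolute approximation error
print("max abs error:", error.max())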
19

Lin, I.-Chen. « Analysis of Qualities of Numerical Methods for Calculating the Inverse Normal Cumulative Distribution Function ». 2007. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-1707200711352800.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
20

Olvera, Isaac Daniel. « Statistical and Economic Implications Associated with Precision of Administering Weight-based Medication in Cattle ». Thesis, 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8916.

Texte intégral
Résumé :
Metaphylactic treatment of incoming feedlot cattle is a common preventative action against bovine respiratory disease (BRD). Cattle are dosed based on estimated or actual lot average weights, rather than on an individual basis, to reduce initial processing time, and there has been limited research on the effects of accurate weight-based dosing in feedlot cattle. The objective of this study was to evaluate the economic effects of precision weight-based dosing of cattle compared with dosing at the lot average weight, or at the lot average plus 50 lb and minus 50 lb. An economic model was created and stochastic simulations were performed to evaluate potential outcomes of different dosing scenarios. Economic analyses of the effects of precision weight-based dosing were conducted using SIMETAR© to determine the stochastic dominance and economic effects of the different dosing regimens. Data were obtained from a commercial feedlot for lots of cattle where individual animal weights were available; for this analysis the minimum lot size was 30 animals and the maximum lot size was 126 animals. Within lots, individual weight deviations were calculated from the lot mean, and the lot mean was rounded up or down to the nearest 50 lb increment to represent mild overestimation and mild underestimation, respectively. Tulathromycin (Draxxin®, Pfizer Animal Health, New York, NY), an antimicrobial commonly prescribed for treatment of bovine respiratory disease, was used to illustrate the impacts of uniform dosing versus exact dosing per body weight. Based on the dilution space method used to evaluate the duration of drug effectiveness, it was estimated that Draxxin® administered at the recommended dosage to cattle weighing between 500 and 1000 lb should provide 191 hours (7.96 days) of protection from pneumonia-causing bacteria. Due to the pharmacokinetic properties of Draxxin®, an animal administered half the recommended dose is protected from pneumonia-causing bacteria for only 8 hours, which is 4.2 percent of the coverage time of the proper dose; underdosing therefore prevents the prescribed treatment from delivering its full therapeutic effect. In all cases, the correct weight-based dosing strategy cost less than any other dosing technique. Overall, dosing all cattle at the lot average weight costs $6.04 per animal more than dosing at the exact, correct dose; dosing all animals at the lot average weight plus 50 lb costs $6.24 per animal more; dosing all animals at the lot average minus 50 lb costs $4.01 per animal more. The use of individual animal weights to determine per-head dosing of Draxxin® is more cost effective than using lot averages. This concept would appear to extend to weight-based pharmaceutical products in general and should be considered a necessary management strategy.
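The following sketch illustrates, in simplified form, the kind of lot-average versus exact-dose comparison the study performs; it is not the thesis's SIMETAR© model, and the dose rate, drug price and simulated weight distribution are hypothetical assumptions.

```python
# Simplified comparison of exact weight-based dosing vs lot-average dosing.
# All parameters below are hypothetical placeholders, not thesis data.
import numpy as np

rng = np.random.default_rng(0)
weights_lb = rng.normal(650, 90, size=60).clip(450, 950)   # one hypothetical lot

DOSE_PER_CWT = 1.1    # assumed mL of drug per 100 lb of body weight
PRICE_PER_ML = 4.0    # assumed drug cost, $ per mL

exact_ml = weights_lb / 100.0 * DOSE_PER_CWT
lot_avg_ml = np.full_like(exact_ml, weights_lb.mean() / 100.0 * DOSE_PER_CWT)

# Animals heavier than the lot average receive less than their correct dose;
# per the abstract, a half dose protects for only ~8 of the ~191 hours.
dose_fraction = lot_avg_ml / exact_ml
underdosed = dose_fraction < 0.9
overdose_cost = PRICE_PER_ML * np.clip(lot_avg_ml - exact_ml, 0, None).mean()

print("share of animals underdosed by >10%%: %.0f%%" % (100 * underdosed.mean()))
print("average extra drug cost per head from overdosing: $%.2f" % overdose_cost)
```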
Styles APA, Harvard, Vancouver, ISO, etc.
21

Teng, Yun Lung, et 鄧雲龍. « Application of adaptive sampling based on cumulative distribution function of order statistics in delineating hazardous areas of contaminated soils ». Thesis, 2003. http://ndltd.ncl.edu.tw/handle/54687331754329218300.

Texte intégral
Résumé :
Master's thesis
National Taiwan University (國立臺灣大學)
Graduate Institute of Agricultural Chemistry
Academic year 91 (Republic of China calendar)
It is essential to determine the spatial distribution of environmental pollutants in a contaminated site before making a remediation plan, and it is especially important to accurately classify areas as “hazardous” or “non-hazardous” based on the threshold value for remediation. Although the spatial distribution of pollutants can be estimated using kriging, there is a high probability of misclassification in areas where pollutant concentrations are close to the threshold value because of estimation errors. Misclassification results in wasted remediation costs or in potential hazards to the environment. Therefore, in this study, an adaptive sampling method based on the cumulative distribution function of order statistics (CDFOS) is proposed to reduce the chance of misclassifying “hazardous” or “non-hazardous” areas in contaminated sites. In the proposed method, the pollutant concentration at each sampling location is transformed into a CDFOS value representing the probability that the pollutant concentration is lower than the cutoff value. The chance of misclassification is higher in areas neighboring locations whose CDFOS is close to 0.5. To reduce this chance, additional samples are taken from the areas where the pollutant concentrations, estimated by kriging from the first-round samples, fall within a concentration range corresponding to a specified range of CDFOS (for example, 0.4 to 0.6). In this study, adaptive sampling based on CDFOS and simple random sampling were compared in simulation for delineating “hazardous” and “non-hazardous” areas. An area of about 340 ha in Hsinchu City, Taiwan was used for illustration; the soil Cu concentrations in 177 sampling blocks (1 ha per block) were measured. One hundred replications of sampling, drawn from the Cu concentration data using CDFOS-based adaptive sampling and simple random sampling respectively, were used for kriging estimation. The classification of each block as “hazardous” or “non-hazardous” based on the kriging-estimated and the actually observed soil Cu concentrations was then compared. The results show that adaptive sampling based on CDFOS can reduce the chance of misclassification compared with simple random sampling, suggesting that the proposed adaptive sampling method is suitable for delineating “hazardous” areas in contaminated sites for remediation.
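The sketch below illustrates the second-stage selection rule in simplified form, using a normal-error approximation in place of the CDFOS transformation; the cutoff value, estimates and standard deviations are placeholder assumptions, not data from the thesis.

```python
# Flag blocks whose probability of lying below the cutoff is near 0.5, so
# they can be targeted for additional sampling. Simplified stand-in for the
# thesis's CDFOS rule, under a normal-error assumption; values are placeholders.
import numpy as np
from scipy.stats import norm

CUTOFF = 120.0   # assumed regulatory threshold for soil Cu, mg/kg

# Kriging estimates and kriging standard deviations for each 1-ha block,
# as they would come out of the first sampling round (placeholder values).
est = np.array([60.0, 95.0, 110.0, 118.0, 125.0, 140.0, 220.0])
sd = np.array([15.0, 20.0, 18.0, 22.0, 25.0, 20.0, 30.0])

# Probability that the true concentration lies below the cutoff.
p_below = norm.cdf((CUTOFF - est) / sd)

# Blocks whose probability is near 0.5 (here 0.4-0.6) are the ones most at
# risk of misclassification, so they are selected for additional sampling.
resample = (p_below >= 0.4) & (p_below <= 0.6)
for i, (p, flag) in enumerate(zip(p_below, resample)):
    print("block %d: P(below cutoff)=%.2f%s"
          % (i, p, "  -> resample" if flag else ""))
```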
Styles APA, Harvard, Vancouver, ISO, etc.
22

Chandra, Shailesh. « Design and Optimization of a Feeder Demand Responsive Transit System in El Cenizo,TX ». 2009. http://hdl.handle.net/1969.1/ETD-TAMU-2009-08-3231.

Texte intégral
Résumé :
The colonias along the Texas-Mexico border are among the most rapidly growing areas in Texas. Because of the relatively low income of the residents and the inadequate availability of transportation services, the basic social-activity needs of the colonias cannot be properly met. The objectives of this study are to better understand the current conditions of these communities by examining the potential demand for an improved transportation service, and to evaluate the capacity and optimal service time interval of a new demand responsive transit "feeder" service within one representative colonia, El Cenizo. A comprehensive analysis of the results of a questionnaire survey is presented to explain the existing travel patterns and the potential demand for a feeder service. The survey results and the subsequent simulation analysis showed that a single shuttle would be able to comfortably serve 150 passengers per day, and that the optimal cycle length between consecutive departures from the terminal should be 11-13 minutes for the best service quality. This exploratory study should serve as a first step towards improving transportation services within these growing underprivileged communities, especially those with demographics and geography similar to El Cenizo.
Styles APA, Harvard, Vancouver, ISO, etc.
23

Lundberg, Andreas. « Analysis of RISE's VIRC for Automotive EMC Immunity Testing ». Thesis, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176745.

Texte intégral
Résumé :
RCs (Reverberation Chambers) have historically been used mainly for aerospace and military EMC (Electromagnetic Compatibility) testing, but interest is also growing in the automotive industry (the development of an international standard for vehicles is in progress). The vehicles of the future will most likely be electrified, wirelessly connected and autonomous; more control units, communication systems and sensors will be implemented in vehicles, requiring increased robustness against all possible electromagnetic interference. EMC testing in an RC is a step towards ensuring this robustness for future vehicle platforms. Compared with traditional EMC testing in a fully or semi-AC (Anechoic Chamber), testing in an RC has the advantage that the electromagnetic field is isotropic, randomly polarized and homogeneous in a statistical sense, i.e., the exposed object is surrounded by electromagnetic energy from all directions. Building a brand new RC with motorized stirrers and associated measurement instrumentation is relatively expensive; instead, it would be desirable to perform immunity tests in a more cost-effective conductive fabric tent. The great advantage is flexibility: the tent can be set up almost anywhere, even inside existing semi-ACs. Such a set-up is referred to as a VIRC (Vibrating Intrinsic Reverberation Chamber). This thesis aims to develop a new test method in a VIRC environment. In order to achieve good RC conditions, the magnitude of the electromagnetic field must be statistically Rayleigh distributed. Furthermore, it is of great importance to avoid LoS (Line of Sight) between the antenna and the test object and to achieve good stirring in the tent. Even when this is achieved, testing in a tent still poses some challenges; for example, the classical two-second dwell time used in EMC immunity testing cannot be achieved in a VIRC environment. The validation in this thesis shows that the dwell time, or the total exposure time in the tent, might be enough to trigger possible malfunctions in today's high-speed communication vehicles. Furthermore, it is shown that testing in a VIRC gives good field uniformity and repeatability, and can trigger malfunctions that are not triggered in traditional semi-AC EMC testing, i.e., ALSE (Absorber-Lined Shielded Enclosure) testing.
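As a hedged illustration of the Rayleigh-distribution requirement mentioned above, the sketch below fits a Rayleigh distribution to synthetic field-magnitude samples and runs a Kolmogorov-Smirnov goodness-of-fit test; the synthetic data and the choice of test are assumptions, not measurements from the thesis.

```python
# Check whether field-magnitude samples are consistent with a Rayleigh
# distribution; illustrative only, using synthetic data instead of VIRC
# measurements.
import numpy as np
from scipy.stats import kstest, rayleigh

rng = np.random.default_rng(1)

# In an ideal reverberation chamber the in-phase and quadrature parts of each
# rectangular field component are independent zero-mean Gaussians, so the
# magnitude is Rayleigh distributed; synthesize samples accordingly.
i_part = rng.normal(0.0, 1.0, size=5000)
q_part = rng.normal(0.0, 1.0, size=5000)
magnitude = np.hypot(i_part, q_part)

# Fit the Rayleigh scale and run a Kolmogorov-Smirnov goodness-of-fit test.
loc, scale = rayleigh.fit(magnitude, floc=0.0)
stat, p_value = kstest(magnitude, "rayleigh", args=(loc, scale))
print("KS statistic = %.3f, p-value = %.3f" % (stat, p_value))
```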
Styles APA, Harvard, Vancouver, ISO, etc.
24

Beisler, Matthias Werner. « Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility for hydropower projects ». Doctoral thesis, 2010. https://tubaf.qucosa.de/id/qucosa%3A22775.

Texte intégral
Résumé :
The design of hydropower projects requires a comprehensive planning process in order to maximise exploitation of the existing hydropower potential as well as the future revenues of the plant. For this purpose, and to satisfy approval requirements for a complex hydropower development, it is imperative at the planning stage that the conceptual development contemplates a wide range of influencing design factors and ensures appropriate consideration of all related aspects. Since the majority of technical and economic parameters required for detailed and final design cannot be precisely determined at early planning stages, crucial design parameters such as design discharge and hydraulic head have to be examined through an extensive optimisation process. One disadvantage inherent in commonly used deterministic analyses is the lack of objectivity in the selection of input parameters; moreover, it cannot be ensured that the entire existing parameter ranges and all possible parameter combinations are covered. Probabilistic methods utilise discrete probability distributions or parameter input ranges to cover the entire range of uncertainties resulting from the information deficit during the planning phase, and integrate them into the optimisation by means of an alternative calculation method. The investigated method assists with the mathematical assessment and integration of uncertainties into the rational economic appraisal of complex infrastructure projects. The assessment includes an exemplary verification of the extent to which Random Set Theory can be utilised to determine the input parameters relevant for the optimisation of hydropower projects, and evaluates possible improvements with respect to the accuracy and suitability of the calculated results.
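As a hedged illustration of the random-set idea described above, the sketch below propagates interval-valued focal elements through a toy net-present-value model and reports lower and upper bounds on the CDF of the result; the NPV model and all numerical values are assumptions, not values from the thesis.

```python
# Random-set (Dempster-Shafer style) propagation of interval-valued inputs
# through a toy NPV model, yielding lower/upper CDF bounds. All numbers and
# the model itself are hypothetical placeholders.

# Focal elements: (interval, mass) for annual energy revenue [M$] and for
# annual operating cost [M$]; the masses of each input sum to 1.
revenue_focal = [((8.0, 10.0), 0.5), ((9.0, 12.0), 0.5)]
cost_focal = [((2.0, 3.0), 0.6), ((2.5, 4.0), 0.4)]

CAPEX, RATE, YEARS = 60.0, 0.06, 30   # assumed investment, discount rate, lifetime
annuity = (1 - (1 + RATE) ** -YEARS) / RATE

def npv(revenue, cost):
    return -CAPEX + (revenue - cost) * annuity

# Combine focal elements (assuming independence); since NPV is monotone in
# both inputs, each combination maps to an NPV interval with a joint mass.
focal_npv = []
for (r_lo, r_hi), r_mass in revenue_focal:
    for (c_lo, c_hi), c_mass in cost_focal:
        interval = (npv(r_lo, c_hi), npv(r_hi, c_lo))
        focal_npv.append((interval, r_mass * c_mass))

def lower_upper_cdf(x):
    """Belief (lower bound) and plausibility (upper bound) of NPV <= x."""
    lower = sum(m for (lo, hi), m in focal_npv if hi <= x)
    upper = sum(m for (lo, hi), m in focal_npv if lo <= x)
    return lower, upper

for x in (20.0, 40.0, 60.0):
    lo, up = lower_upper_cdf(x)
    print("P(NPV <= %.0f M$) lies between %.2f and %.2f" % (x, lo, up))
```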
Styles APA, Harvard, Vancouver, ISO, etc.
