Dissertations / Theses on the topic 'Complementary cumulative distribution function'
Consult the top 24 dissertations / theses for your research on the topic 'Complementary cumulative distribution function.'
Oltean, Elvis. "Modelling income, wealth, and expenditure data by use of Econophysics." Thesis, Loughborough University, 2016. https://dspace.lboro.ac.uk/2134/20203.
Full text
BAIG, CLEMENT RANJITH ANTHIKKAD & IRFAN AHMED. "PERFORMANCE ENHANCEMENT OF OFDM IN PAPR REDUCTION USING NEW COMPANDING TRANSFORM AND ADAPTIVE AC EXTENSION ALGORITHM FOR NEXT GENERATION NETWORKS." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-6011.
Full text
The proposed technique, the Adaptive Active Constellation Extension (Adaptive ACE) algorithm, reduces the high peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) systems. With Adaptive ACE, the PAPR equals 6.8 dB for target clipping ratios of 4 dB, 2 dB and 0 dB; thus the minimum PAPR can be achieved at low target clipping ratios. The signal-to-noise ratio (SNR) of the OFDM signal obtained by Adaptive ACE equals 1.2 dB at a bit error rate (BER) of 10^-0.4 for constellation orders such as 4-QAM, 16-QAM and 64-QAM. A BER of 10^-0.4, or 0.398, means that about 398 bits are in error when 1000 bits are transmitted over a communication channel (roughly 4 bits in error per 10 bits transmitted), which is high compared with the original OFDM signal. The other problems faced by the Adaptive ACE algorithm are out-of-band interference (OBI) and peak regrowth. OBI is a form of noise, or unwanted signal, caused when the original OFDM signal is clipped to reduce peaks lying outside the predetermined region, while peak regrowth appears after the clipped signal is filtered and leads to increased computational time and complexity. In this paper, we propose a PAPR reduction scheme that improves bit error rate performance by applying a companding transform technique.
Hence, a 1-1.5 dB reduction in PAPR is achieved with this new companding technique. In future work, we expect to implement the same on Rician and Rayleigh channels.
Clement Ranjith Anthikkad (E-mail: clement.ranjith@gmail.com / clan11@bth.se) & Irfan Ahmed Baig (E-mail: baig.irfanahmed@gmail.com / ir-a11@bth.se )
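The PAPR statistics discussed in this abstract are conventionally reported through the complementary cumulative distribution function (CCDF) of the per-symbol PAPR. Below is a minimal sketch of that measurement for random 4-QAM OFDM symbols; it is illustrative only and implements neither Adaptive ACE nor the companding transform, and all sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def ofdm_symbols(n_symbols=2000, n_subcarriers=64):
    # Random 4-QAM data on each subcarrier, brought to the time
    # domain with an IFFT, one row per OFDM symbol.
    bits = rng.integers(0, 2, size=(n_symbols, n_subcarriers, 2))
    qam = (2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)
    return np.fft.ifft(qam, axis=1)

def papr_db(x):
    # PAPR per OFDM symbol: peak power over mean power, in dB.
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max(axis=1) / p.mean(axis=1))

def ccdf(papr, threshold_db):
    # Complementary CDF: fraction of symbols whose PAPR exceeds
    # the threshold.
    return float(np.mean(papr > threshold_db))

papr = papr_db(ofdm_symbols())
print(ccdf(papr, 6.0), ccdf(papr, 10.0))
```

Raising the threshold lowers the CCDF, which is why PAPR-reduction schemes are compared by how far they shift this curve toward lower thresholds.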
Liu, Xuecheng 1963. "Nonparametric maximum likelihood estimation of the cumulative distribution function with multivariate interval censored data : computation, identifiability and bounds." Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=79036.
Full text
Forgo, Vincent Z. Mr. "A Distribution of the First Order Statistic When the Sample Size is Random." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3181.
Full text
Jeisman, Joseph Ian. "Estimation of the parameters of stochastic differential equations." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16205/.
Full textEricok, Ozlen. "Uncertainty Assessment In Reserv Estimation Of A Naturally Fractured Reservoir." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12605713/index.pdf.
Full textOK, Ö
zlen M.S., Department of Petroleum and Natural Gas Engineering Supervisor : Prof. Dr. Fevzi GÜ
MRAH December 2004, 169 pages Reservoir performance prediction and reserve estimation depend on various petrophysical parameters which have uncertainties due to available technology. For a proper and economical field development, these parameters must be determined by taking into consideration their uncertainty level and probable data ranges. For implementing uncertainty assessment on estimation of original oil in place (OOIP) of a field, a naturally fractured carbonate field, Field-A, is chosen to work with. Since field information is obtained by drilling and testing wells throughout the field, uncertainty in true ranges of reservoir parameters evolve due to impossibility of drilling every location on an area. This study is based on defining the probability distribution of uncertain variables in reserve estimation and evaluating probable reserve amount by using Monte Carlo simulation method. Probabilistic reserve estimation gives the whole range of probable v original oil in place amount of a field. The results are given by their likelyhood of occurance as P10, P50 and P90 reserves in summary. In the study, Field-A reserves at Southeast of Turkey are estimated by probabilistic methods for three producing zones
the Karabogaz Formation, the Kbb-C Member of the Karababa Formation, and the Derdere Formation. Probability density functions of the petrophysical parameters are used as inputs to the volumetric reserve estimation method, and probable reserves are calculated with the @Risk software, which implements the Monte Carlo method. The simulation outcomes showed that Field-A has P50 reserves of 11.2 MMstb in the matrix and 2.0 MMstb in the fractures of the Karabogaz Formation, 15.7 MMstb in the matrix and 3.7 MMstb in the fractures of the Kbb-C Member, and 10.6 MMstb in the matrix and 1.6 MMstb in the fractures of the Derdere Formation. Sensitivity analysis of the inputs showed that matrix porosity, net thickness and fracture porosity are significant in the Karabogaz Formation and Kbb-C Member reserve estimates, while water saturation and fracture porosity are most significant in estimating the Derdere Formation reserves.
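The probabilistic workflow this abstract describes (distributions on petrophysical inputs, Monte Carlo sampling, P10/P50/P90 summaries) can be sketched in a few lines. All input ranges below are hypothetical placeholders, not Field-A data, and the @Risk software is replaced by plain NumPy sampling:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo trials

# Hypothetical input distributions -- placeholders, not Field-A data.
area_acres = rng.triangular(600, 800, 1000, n)    # drainage area
net_pay_ft = rng.triangular(20, 30, 45, n)        # net thickness
porosity = rng.normal(0.12, 0.02, n).clip(0.01)   # matrix porosity
sw = rng.uniform(0.25, 0.45, n)                   # water saturation
bo = rng.uniform(1.1, 1.3, n)                     # formation volume factor

# Volumetric OOIP in stock-tank barrels (7758 bbl per acre-ft).
ooip_stb = 7758 * area_acres * net_pay_ft * porosity * (1 - sw) / bo

# Reserve-reporting convention: P90 is the conservative estimate.
p90, p50, p10 = np.percentile(ooip_stb, [10, 50, 90])
print(f"P90={p90/1e6:.1f}  P50={p50/1e6:.1f}  P10={p10/1e6:.1f} MMstb")
```

By that convention, P90 denotes the outcome with a 90% probability of being exceeded, so P90 ≤ P50 ≤ P10.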
Cunha, Lucas Santana da. "Modelos não lineares resultantes da soma de regressões lineares ponderadas por funções distribuição acumulada." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-04052016-100308/.
Full text
Electronic spray controllers aim to minimize the variation of input rates applied in the field. They are part of a control system and compensate for variations in sprayer travel speed during operation. Several types of electronic spray controllers are on the market, and one way to select the most efficient under the same conditions, i.e., within the same control system, is to quantify the system response time for each specific controller. The objective of this study was to estimate the response times to speed changes of an electronic spraying system via nonlinear regression models resulting from the sum of linear regressions weighted by cumulative distribution functions. Data were obtained at the Application Technology Laboratory of the Department of Biosystems Engineering, College of Agriculture "Luiz de Queiroz", University of Sao Paulo, Piracicaba, Sao Paulo, Brazil. The models used were the logistic and Gompertz models, each resulting from a weighted sum of two constant linear regressions with weights given by the logistic and Gumbel cumulative distribution functions, respectively. Reparametrizations were proposed to include the control system response time in the models, in order to improve their statistical interpretation and inference. A two-phase nonlinear regression model was also proposed as a weighted sum of constant linear regressions with weights given by the exponential, hyperbolic sine Cauchy cumulative distribution function, and a simulation study using the Monte Carlo method was conducted to evaluate the maximum likelihood estimates of the model parameters.
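The construction described in this abstract, a nonlinear model obtained as the weighted sum of two constant linear regressions with the weight supplied by a cumulative distribution function, can be sketched as follows (the parameter values are arbitrary illustrations, not estimates from the thesis):

```python
import numpy as np

def logistic_cdf(x, mu, s):
    # Logistic distribution CDF, used here as the weight function.
    return 1.0 / (1.0 + np.exp(-(x - mu) / s))

def gumbel_cdf(x, mu, beta):
    # Gumbel distribution CDF; as a weight it produces a
    # Gompertz-type response curve.
    return np.exp(-np.exp(-(x - mu) / beta))

def two_phase(x, theta1, theta2, cdf, *cdf_params):
    # Weighted sum of two constant regressions: the response blends
    # from theta1 to theta2 as the CDF weight goes from 0 to 1.
    w = cdf(x, *cdf_params)
    return theta1 * (1 - w) + theta2 * w

x = np.linspace(0, 10, 101)
y_logistic = two_phase(x, 2.0, 5.0, logistic_cdf, 5.0, 0.5)
y_gompertz = two_phase(x, 2.0, 5.0, gumbel_cdf, 5.0, 0.5)
```

With the logistic CDF as weight the blend is the classical logistic curve between the plateaus theta1 and theta2; swapping in the Gumbel CDF yields a Gompertz-type curve.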
Bothenna, Hasitha Imantha. "Approximation of Information Rates in Non-Coherent MISO wireless channels with finite input signals." University of Akron / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=akron1516369758012866.
Full text
Dhuness, Kahesh. "An offset modulation method used to control the PAPR of an OFDM transmission." Thesis, University of Pretoria, 2012. http://hdl.handle.net/2263/27258.
Full text
Thesis (PhD)--University of Pretoria, 2012.
Electrical, Electronic and Computer Engineering
Ahmad, Shafiq. "Process capability assessment for univariate and multivariate non-normal correlated quality characteristics." RMIT University. Mathematical and Geospatial Sciences, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20091127.121556.
Full textPovalač, Karel. "Sledování spektra a optimalizace systémů s více nosnými pro kognitivní rádio." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-233577.
Full textZambrano, Martínez Jorge Luis. "Efficient Traffic Management in Urban Environments." Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/129865.
Full text
[EN] Currently, one of the main challenges large metropolitan areas face is traffic congestion, which has become an important problem for city authorities. Addressing it requires an efficient traffic-control solution that benefits citizens by reducing vehicle journey times and, consequently, fuel use, noise, and environmental pollution. In fact, by properly analyzing traffic demand, it becomes possible to predict future traffic conditions and to use that information to optimize the routes taken by vehicles. Such an approach is especially effective in the context of autonomous vehicles, whose behavior is more predictable, enabling city management entities to mitigate the effects of traffic congestion and pollution by improving the traffic flow in a city in a fully centralized manner. Validating this approach typically requires simulations, which should be as realistic as possible. However, achieving a high degree of realism can be complex when the actual traffic patterns, defined through an Origin/Destination (O-D) matrix for the vehicles in a city, are unknown, as is most often the case. Thus, the first contribution of this thesis is an iterative heuristic for improving traffic congestion modeling; starting from real induction loop measurements made available by the City Hall of Valencia, Spain, we were able to generate an O-D matrix for traffic simulation that resembles the real traffic distribution. If the state of traffic could be characterized by predicting future traffic conditions to optimize the routes of automated vehicles, and if such measures could preventively mitigate the effects of congestion and its related problems, the overall traffic flow could be improved.
Thus, the second contribution of this thesis is a Traffic Prediction Equation that characterizes the different streets of a city in terms of travel time with respect to vehicle load, with logistic regression applied to those data to predict future traffic conditions. The third and last contribution of this thesis, towards our envisioned traffic management paradigm, is a route server capable of handling all the traffic in a city and balancing traffic flows by accounting for present and future traffic congestion conditions. We perform a simulation study using real traffic congestion data from the city of Valencia, Spain, to demonstrate how the traffic flow on a typical day can be improved using our proposed solution. Experimental results show that our solution, combined with frequent updating of traffic conditions on the route server, achieves substantial improvements in average travel speeds and travel times, both indicators of lower congestion and improved traffic fluidity.
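The prediction step described in this abstract, logistic regression on observed load and travel-time data, can be sketched with synthetic data; the load values, congestion threshold, and labels below are hypothetical stand-ins, not Valencia measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical street data: hourly vehicle load and a congestion label
# (1 when the observed travel time exceeded a free-flow threshold).
load = rng.uniform(0, 2000, 500)
congested = (load + rng.normal(0, 150, 500) > 1200).astype(float)

# Logistic regression on the standardized load, fit by gradient descent.
x = (load - load.mean()) / load.std()
w = b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((p - congested) * x)
    b -= 0.5 * np.mean(p - congested)

def congestion_prob(new_load):
    # Predicted probability that a street with this load is congested.
    z = w * (new_load - load.mean()) / load.std() + b
    return 1.0 / (1.0 + np.exp(-z))

print(congestion_prob(400.0), congestion_prob(1800.0))
```

A route server of the kind described could consult such per-street probabilities when balancing flows, penalizing links whose predicted congestion probability is high.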
Finally, I want to thank the Republic of Ecuador, through the "Secretaría de Educación Superior, Ciencia, Tecnología e Innovación" (SENESCYT), for granting me the scholarship to finance my studies.
Zambrano Martínez, JL. (2019). Efficient Traffic Management in Urban Environments [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/129865
Xu, Jia Cheng. "Evaluation of Thoracic Injury Risk of Heavy Goods Vehicle Occupants during Steering Wheel Rim Impacts to Different Rib Levels." Thesis, KTH, Medicinteknik och hälsosystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-266357.
Full text
The driver environment in trucks differs from that in passenger cars, here mainly in steering wheel and driver position, which increases the risk of steering wheel rim impacts for truck drivers. Such impacts are relatively unexplored in injury biomechanics compared with passenger car passive safety. The idea behind using human body models is to complement today's crash test dummies with biomechanical information. Moreover, the biofidelity of a crash test dummy in steering wheel impacts is relatively unknown, especially at different rib levels. The aim of this thesis is therefore to investigate the thoracic response of a truck driver using THUMS v4.0 and the Hybrid III (H3) during steering wheel impacts with respect to rib levels (levels 1-2, 3-4, 6-7, 7-8, and 9-10) and the ribs, aorta, liver, and spleen. According to the author, the use of local risk functions for thoracic injury appears relatively unstudied compared with the predominant use of global risk functions, which mainly predict the most common thoracic injury, namely rib fractures. Local risk functions were therefore created for the ribs and organs based on experimental data. The measured parameters were chest deflection and the contact force between steering wheel and thorax at the global level, while the first principal Green-Lagrange strain was used to evaluate the injury risk for ribs and organs. The material models for the liver and spleen were remodeled from experimental stress-strain data using the Ogden material model to account for hyperelasticity. Strain-rate dependence was included by iterating viscoelastic parameters. Organ contact modeling was changed from a sliding to a tied contact to minimize unrealistic contact separation during the impact cases. The results support earlier studies in which the H3 was shown to need additional sensors to accurately register chest deflection at rib levels beyond its current measurement range, namely at ribs 1-2, 7-8, and 9-10.
The chest deflections measured in THUMS were reasonable for the velocity cases but did not indicate any definite risk of injury. In fact, the global risk functions may overestimate the AIS3 risk at ribs 1-2, 7-8, and 9-10. Rib strains could not be correlated with chest deflections. This could be explained by the unique load cases, characterized by pure steering wheel impacts that mainly load the sternum and costal cartilage, which in turn minimizes rib deformation. The organ strains indicate some risk of rupture, with the spleen deforming most at ribs 3-4 and 6-7, while for both the liver and the aorta this occurs at ribs 6-7 and 7-8. This study presents a way to complement the H3 with THUMS in truck-driver passive safety, focusing on local risk functions for functional injury prediction, i.e., predicting injury risk from parameters directly related to rib fracture or organ rupture. Local risk functions provide a powerful safety assessment that is independent of external load cases such as airbag, steering wheel hub, or belt loading. It was noted in this study that the global risk criteria have not been examined for very local impacts such as steering wheel rim impacts, which will therefore affect the predicted rib fracture risk differently from what has been studied, e.g., airbag or belt loading. However, more data are needed for the local risk criteria to predict thoracic injury risk with greater accuracy. In conclusion, it is clear that the Hybrid III has insufficient instrumentation and needs improvement to register chest deflection at more rib levels. Furthermore, the following are needed: a better understanding of global risk functions adapted to truck-driver passive safety; more data for age-dependent (rib) and strain-rate-dependent (organ) risk functions; a "tiebreak" contact with tangential sliding for better organ kinematics; and increased biofidelity of the material models using data from tissue experiments.
Beisler, Matthias Werner. "Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility for hydropower projects." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2011. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-71564.
Full text
The design of hydropower plants is a complex planning process aimed at exploiting the available hydropower potential as fully as possible and maximizing the plant's future economic returns. To achieve this while ensuring that a complex hydropower project remains permittable, a large number of factors relevant to the concept design must be identified and adequately considered during the project planning phase. In early planning stages, most of the technical and economic parameters decisive for the detailed design cannot yet be determined exactly, so the governing design parameters of the plant, such as discharge and head, must pass through an extensive optimization process. A drawback of the usual deterministic calculation approaches is the generally insufficient objectivity in determining the input parameters, as well as the fact that it cannot be ensured that the parameters are captured across their full spread and in all governing parameter combinations. Probabilistic methods use input parameters in the form of statistical distributions or ranges, with the aim of mathematically capturing the uncertainties arising from the information deficit that is unavoidable in the planning phase and incorporating them into the calculation by means of an alternative computational method. The investigated approach helps to capture objectively, i.e. mathematically, the imprecision resulting from an information deficit in the economic assessment of complex infrastructure projects, and to incorporate it into the planning process.
The thesis assesses, and verifies by example, to what extent the random set method can be applied in determining the input variables relevant to the optimization process of hydropower plants, and to what extent this yields improvements in the accuracy and informative value of the calculation results.
Yan, Chu Hou, and 朱厚恩. "Cumulative Hazard Function Estimation for Exponentiated Weibull Distribution." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/10896849858471540176.
Full text
Fu Jen Catholic University
Graduate Institute of Mathematics
92 (ROC academic year)
In this article, we consider the cumulative hazard function of a series-system product composed of two independent components. When there is no information about the distributions of the two components, we take a nonparametric approach: we propose a direct estimator and an indirect estimator of the cumulative hazard function of the system, and compare the two via their asymptotic mean square errors. If, on the other hand, the two independent components of the series follow exponentiated Weibull distributions, we again propose a direct estimator and an indirect estimator of the system's cumulative hazard function, study their large-sample behavior, and compare their asymptotic mean square errors. Both the nonparametric and the parametric approach lead to the same conclusion: the indirect estimator is better than the direct estimator in the sense of mean square error.
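The identity underlying both estimators in this abstract is that, for independent components in series, survival functions multiply and cumulative hazards therefore add. A small sketch with one common exponentiated Weibull parameterization (the thesis may use different parameter names):

```python
import math

def exp_weibull_cdf(t, scale, shape, power):
    # Exponentiated Weibull CDF: a Weibull CDF raised to a power.
    return (1.0 - math.exp(-((t / scale) ** shape))) ** power

def cumulative_hazard(t, scale, shape, power):
    # H(t) = -ln S(t), where S(t) = 1 - F(t) is the survival function.
    return -math.log(1.0 - exp_weibull_cdf(t, scale, shape, power))

def series_cumulative_hazard(t, params1, params2):
    # Independent components in series: survival functions multiply,
    # so the cumulative hazards simply add.
    return cumulative_hazard(t, *params1) + cumulative_hazard(t, *params2)

h = series_cumulative_hazard(2.0, (1.0, 1.5, 0.8), (1.5, 2.0, 1.2))
```

With power = 1 the model reduces to the ordinary Weibull, whose cumulative hazard is (t/scale)^shape, which gives a quick sanity check on the implementation.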
Huang, Jim C. "Cumulative Distribution Networks: Inference, Estimation and Applications of Graphical Models for Cumulative Distribution Functions." Thesis, 2009. http://hdl.handle.net/1807/19194.
Full text
Liu, Chih-En, and 劉志恩. "Feature Selection Method Based on Support Vector Machine and Cumulative Distribution Function." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/06692637254680399272.
Full text
Fu Jen Catholic University
Department of Computer Science and Information Engineering
98 (ROC academic year)
Feature selection is an important method in machine learning and data mining. It reduces the dimensionality of data and improves performance in classification and clustering. Zhang et al. (2006) developed a feature selection algorithm, named recursive support vector machine (R-SVM), to select important biomarkers from biological data. R-SVM is based on support vector machine technology, but it works only for linear kernels. To overcome this limitation of R-SVM, we propose a distance-based cumulative distribution function (DCDF) algorithm that works for both linear and nonlinear kernels. In this study, DCDF is implemented and compared with R-SVM. The experiments include eight different types of cancer data and four UCI datasets. The results show that DCDF outperforms R-SVM using either linear or nonlinear kernels; on some datasets, the DCDF method with nonlinear kernels achieves much better results and significantly outperforms R-SVM. Keywords: Feature selection, Support vector machine, Cumulative distribution function, Recursive SVM
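As a loose illustration of combining empirical cumulative distribution functions with feature selection (this is a generic Kolmogorov-Smirnov-style score, not the authors' DCDF algorithm and not R-SVM):

```python
import numpy as np

rng = np.random.default_rng(7)

def cdf_distance(a, b):
    # Kolmogorov-Smirnov-style distance between the empirical CDFs
    # of one feature's values in the two classes.
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

def rank_features(X, y):
    # Score each feature by how far apart its class-conditional
    # empirical CDFs are; larger scores mean better separation.
    scores = np.array([cdf_distance(X[y == 0, j], X[y == 1, j])
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1], scores

# Synthetic data in which only feature 0 separates the classes.
y = rng.integers(0, 2, 300)
X = rng.normal(0.0, 1.0, (300, 5))
X[:, 0] += 2.0 * y
order, scores = rank_features(X, y)
```

Features whose class-conditional CDFs are far apart are the ones a classifier can exploit, which is the intuition a CDF-based selector builds on.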
Lin, I.-Chen, and 林宜禎. "Analysis of Qualities of Numerical Methods for Calculating the Inverse Normal Cumulative Distribution Function." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/77986574706470418900.
Full text
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
95 (ROC academic year)
The inverse normal cumulative distribution function does not have a closed form, and several algorithms for evaluating (or approximating) it have been proposed. The aim of this thesis is to compare a few numerical methods used for calculating approximate values of the inverse normal cumulative distribution function: the built-in function in EXCEL, a numerical method by Peter J. Acklam, and a numerical method by Moro. Moro's method is implemented in MATLAB in this thesis, and the errors of the above-mentioned methods are analyzed as well.
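Since the inverse normal CDF has no closed form, the quality of any approximation can be judged by the round-trip residual |F(F^-1(p)) - p|, the kind of error analysis this thesis performs. A sketch using Newton's method as the inverter (illustrative only; this is neither Acklam's nor Moro's algorithm):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def inverse_norm_cdf(p, iterations=50):
    # Newton's method on F(x) - p = 0; no closed form of F^-1 exists.
    x = 0.0
    for _ in range(iterations):
        x -= (norm_cdf(x) - p) / norm_pdf(x)
    return x

# Round-trip error |F(F^-1(p)) - p| over a probability grid, the kind
# of quality measure used to compare competing approximations.
grid = [i / 1000 for i in range(1, 1000)]
max_err = max(abs(norm_cdf(inverse_norm_cdf(p)) - p) for p in grid)
print(max_err)
```

Acklam- and Moro-style rational approximations trade this iterative refinement for fixed polynomial coefficients, which is exactly why their error profiles are worth comparing.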
Lin, I.-Chen. "Analysis of Qualities of Numerical Methods for Calculating the Inverse Normal Cumulative Distribution Function." 2007. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-1707200711352800.
Full text
Olvera, Isaac Daniel. "Statistical and Economic Implications Associated with Precision of Administering Weight-based Medication in Cattle." Thesis, 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8916.
Full textTeng, Yun Lung, and 鄧雲龍. "Application of adaptive sampling based on cumulative distribution function of order statistics in delineating hazardous areas of contaminated soils." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/54687331754329218300.
Full text
National Taiwan University
Graduate Institute of Agricultural Chemistry
91 (ROC academic year)
It is essential to determine the spatial distribution of environmental pollutants in contaminated sites before making a remediation plan, and it is especially important to accurately classify "hazardous" and "non-hazardous" areas relative to the remediation threshold value. Although the spatial distribution of pollutants can be estimated by kriging, there is a high probability of misclassification in areas where pollutant concentrations are close to the threshold value, because of estimation errors. Misclassification wastes remediation costs or leaves potential hazards to the environment. Therefore, in this study, adaptive sampling based on the cumulative distribution function of order statistics (CDFOS) is proposed to reduce the chance of misclassifying "hazardous" or "non-hazardous" areas in contaminated sites. In the proposed method, the pollutant concentration at each sampling location is transformed into a CDFOS value, representing the probability that the pollutant concentration is lower than the cutoff value. Misclassification is more likely in areas neighboring locations with CDFOS close to 0.5. To reduce this chance, the adaptive sampling takes additional samples from areas where the pollutant concentrations estimated by kriging from the first-stage samples fall in a concentration range corresponding to a specified range of CDFOS (for example, 0.4 to 0.6). In this study, adaptive sampling based on CDFOS and simple random sampling were compared by simulation for delineating "hazardous" and "non-hazardous" areas. An area of about 340 ha in Hsinchu City, Taiwan was used for illustration, and the soil Cu concentrations in 177 sampling blocks (1 ha per block) were measured.
One hundred replications of sampling simulation, drawn from the Cu concentration data using adaptive sampling based on CDFOS and simple random sampling respectively, were used for kriging estimation. The classification of each block as "hazardous" or "non-hazardous" based on the kriging-estimated and the actually observed soil Cu concentrations was then compared. The results show that adaptive sampling based on CDFOS reduces the chance of misclassification compared with simple random sampling, suggesting that the proposed adaptive sampling method is suitable for delineating "hazardous" areas in contaminated sites for remediation.
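The second-stage rule described above, resample wherever the estimated probability of being below the cutoff is near 0.5, can be sketched as follows. The cutoff, estimates, and standard errors are hypothetical, and a normal-error assumption stands in for the thesis's order-statistic CDF:

```python
import math

CUTOFF = 120.0  # hypothetical remediation threshold for soil Cu (mg/kg)

def prob_below_cutoff(estimate, std_err):
    # Probability that the true concentration is below the cutoff,
    # assuming an approximately normal kriging estimation error.
    z = (CUTOFF - estimate) / std_err
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def needs_resampling(estimate, std_err, low=0.4, high=0.6):
    # Second-stage rule: blocks whose classification is most uncertain
    # (probability near 0.5) are targeted for additional sampling.
    return low <= prob_below_cutoff(estimate, std_err) <= high

# Hypothetical kriging estimates and standard errors for four blocks.
blocks = [(80.0, 15.0), (118.0, 10.0), (150.0, 12.0), (122.0, 10.0)]
flagged = [b for b in blocks if needs_resampling(*b)]
```

Blocks far above or far below the cutoff are classified confidently and left alone; only the ambiguous ones near probability 0.5 consume the second-stage sampling budget.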
Chandra, Shailesh. "Design and Optimization of a Feeder Demand Responsive Transit System in El Cenizo,TX." 2009. http://hdl.handle.net/1969.1/ETD-TAMU-2009-08-3231.
Full textLundberg, Andreas. "Analysis of RISE's VIRC for Automotive EMC Immunity Testing." Thesis, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176745.
Full textBeisler, Matthias Werner. "Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility for hydropower projects." Doctoral thesis, 2010. https://tubaf.qucosa.de/id/qucosa%3A22775.
Full text