Dissertations / Theses on the topic 'AUC'

To see the other types of publications on this topic, follow the link: AUC.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'AUC.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Karabudak, Engin. "Development of MWL-AUC / CCD-C-AUC / SLS-AUC detectors for the analytical ultracentrifuge." PhD thesis, Universität Potsdam, 2009. http://opus.kobv.de/ubp/volltexte/2009/3992/.

Full text
Abstract:
Analytical ultracentrifugation (AUC) has made an important contribution to polymer and particle characterization since its invention by Svedberg in 1923 (Svedberg and Nichols 1923; Svedberg and Pedersen 1940). In 1926, Svedberg won the Nobel Prize for his scientific work on disperse systems, including work with AUC. The first important discovery made with AUC was the demonstration of the existence of macromolecules, and since then AUC has become an important tool for studying polymers in biophysics and biochemistry. AUC is an absolute technique that does not need any standard. Molar masses between 200 and 10^14 g/mol and particle sizes between 1 and 5000 nm can be detected by AUC. A sample can be fractionated into its components according to molar mass, particle size, structure or density, without the stationary phase required by chromatographic techniques; this very property earns AUC an important status in the analysis of polymers and particles. Distributions of molar mass, particle size and density can be measured via this fractionation, and different types of experiments yield complementary physicochemical parameters. For example, sedimentation equilibrium experiments allow the study of pure thermodynamics. For complex mixtures, AUC is the principal method of analysis. Interactions between molecules can be studied at different concentrations without destroying the chemical equilibrium (Kim et al. 1977), and biologically relevant weak interactions can also be monitored (K ≈ 10-100 M^-1). An analytical ultracentrifuge experiment can yield the following information:
• Molecular weight of the sample
• Number of components, if the sample is not a single component
• Homogeneity of the sample
• Molecular weight distribution, if the sample is not a single component
• Size and shape of macromolecules and particles
• Aggregation and interaction of macromolecules
• Conformational changes of macromolecules
• Sedimentation coefficient and density distributions
This extremely wide application area allows the investigation of any sample consisting of a solvent and a dispersed or dissolved substance, including gels, microgels, dispersions, emulsions and solutions; moreover, the method has no solvent or pH limitations. New application areas are still flourishing, although the technique is 80 years old. In the 1970s, some 1500 analytical ultracentrifuges were operational throughout the world. At that time, because of limited detection technology, experimental results were recorded photographically. As time passed, faster techniques such as size exclusion chromatography (SEC), light scattering (LS) and SDS-gel electrophoresis occupied the same research fields as AUC, and AUC began to lose its importance; in the 1980s, only a few instruments were in use worldwide. In the early 1990s a modern AUC, the Optima XL-A, was released by Beckman Instruments (Giebeler 1992), equipped with a modern computerized scanning absorption detector. Rayleigh interference optics were later added, yielding the XL-I AUC. Furthermore, major developments in computing made data analysis easier through new analysis software. Today, about 400 XL-I instruments exist worldwide. The technique is applied in the pharmaceutical, biopharmaceutical and polymer industries as well as in academic research fields such as biochemistry, biophysics, molecular biology and materials science.
About 350 core scientific publications using analytical ultracentrifugation are published every year (source: SciFinder 2008), with an increasing number of references (436 references in 2008). Tremendous progress has been made in methods and analysis software since the digitalization of experimental data with the release of the XL-I; compared with the previous decade, data analysis has become more efficient and reliable. Today, AUC labs routinely use sophisticated data analysis methods for the determination of sedimentation coefficient distributions (Demeler and van Holde 2004; Schuck 2000; Stafford 1992), molar mass distributions (Brookes and Demeler 2008; Brookes et al. 2006; Brown and Schuck 2006), interaction constants (Cao and Demeler 2008; Schuck 1998; Stafford and Sherwood 2004), particle size distributions with Angstrom resolution (Cölfen and Pauck 1997) and the simultaneous determination of size and shape distributions from sedimentation velocity experiments (Brookes and Demeler 2005; Brookes et al. 2006). These methods are available in powerful software packages that combine various approaches, such as Ultrascan (Demeler 2005), Sedfit/Sedphat (Schuck 1998; Vistica et al. 2004) and Sedanal (Stafford and Sherwood 2004). All of these packages are free of charge; furthermore, Ultrascan's source code is licensed under the GNU General Public License (http://www.gnu.org/copyleft/gpl.html) and can therefore be improved by any research group. Workshops are organized to support these software packages. Despite the tremendous developments in data analysis, the hardware has not developed much. Although various user-developed detectors exist in research laboratories, they are not commercially available. Since 1992, only one new optical system, the fluorescence optics (Schmidt and Reisner 1992; MacGregor et al. 2004; MacGregor 2006; Laue and Kroe, in press), has been commercialized; apart from that, there has been no commercially available improvement in the optical system. Strikingly, the current XL-I hardware is 20 years old, even though microelectronics, software and optical systems have developed enormously over the same period and could be utilized for improved detectors. As examples of user-developed detectors, Bhattacharyya (Bhattacharyya 2006) described a multiwavelength analytical ultracentrifuge (MWL-AUC), a Raman detector and a small-angle laser light scattering detector in his PhD thesis. The MWL-AUC became operational, but a very high noise level prevented work with real samples. Tests with the Raman detector were unsuccessful because of low light intensity and the consequently long integration times. The small-angle laser light scattering detector could detect latex particles but failed to detect smaller particles and molecules owing to the low sensitivity of the photodiode used as detector. The primary motivation of this work is to construct a detector that can measure new physico-chemical properties of a well-fractionated sample in the AUC cell; the final goal is a multiwavelength detector for the AUC that measures complementary quantities. Instrument development is an option for a scientist only when there is a huge potential benefit but no commercial enterprise develops appropriate equipment, or when there is not enough financial support to buy it. The first case was our motivation for developing detectors for AUC.
Our aim is to use today's technological advances in microelectronics, programming and mechanics to develop new detectors for AUC and to bring the existing MWL detector to routine operation. The project has mechanical, electronic, optical, software, hardware, chemical, industrial and biological aspects; it is by nature a multidisciplinary project and therefore carries the structural problem of its kind: deciding which discipline to follow at each new step, with the attendant risk of becoming lost in some direction. With that in mind, we have chosen the simplest possible solution to every optical, mechanical, electronic, software or hardware problem we encountered and have always tried to keep the overall picture in view. In this research, we designed and tested the CCD-C-AUC (CCD camera UV/Vis absorption detector for the AUC) and the SLS-AUC (static light scattering detector for the AUC). One SLS-AUC design produced successful test results, but it could not be brought to the operational stage. However, an operational multiwavelength analytical ultracentrifuge (MWL-AUC) has been developed, an important detector for chemistry, biology and industry. This thesis introduces the operational MWL-AUC and presents three applications to the aforementioned disciplines. First, the application of MWL-AUC to a biological system, a mixture of the proteins IgG, aldolase and BSA, is presented. Next, an application to a mass-produced industrial sample (β-carotene gelatin composite particles manufactured by BASF AG) is presented. Finally, it is shown how MWL-AUC will impact nanoparticle science, by investigating the quantum size effect of CdTe and its growth mechanism. This thesis mainly investigates the relation between new technological developments and detector development for AUC, and pioneering results are obtained that indicate a possible direction for the future of AUC. For example, each MWL-AUC dataset contains thousands of wavelengths, with spectral information at each radial point. The data can be separated into single-wavelength files and analyzed classically with existing software packages; however, all existing packages, including Ultrascan, Sedfit and Sedanal, can analyze only single-wavelength data, so entirely new software developments are needed. As a first attempt, Emre Brookes and Borries Demeler have developed a multiwavelength module that analyzes each wavelength separately and independently, and we thank them for this important contribution to the software. Unfortunately, this module requires a huge amount of computing power and does not take the spectral information into account during the analysis. New algorithms are needed that exploit the spectral information and analyze all wavelengths accordingly, and we invite the programmers of Ultrascan, Sedfit, Sedanal and the other packages to develop new algorithms in this direction.
Analytical chemistry seeks to determine the chemical composition and the chemical and physical properties of biological or synthetic materials. Developing its methods yields more precise information about environmental pollution, the ozone hole, protein functions and interactions in the human body. A multitude of analytical techniques exists, and improvements in microelectronics, mechanics, computer science and nanotechnology have driven their striking development. This work attempts to increase the detection capacity of the analytical ultracentrifuge. The analytical ultracentrifuge (AUC) is a well-known, very powerful separation method that uses centrifugal force to separate substances; for measurement, the sample can be dissolved or dispersed in a liquid. Macromolecules, proteins and colloidal systems in solution can be centrifuged in an AUC cell at 1000-60000 revolutions per minute, for example in the commercial Beckman AUC. At a radial position of 6.5 cm, the rotational acceleration corresponds to roughly 73 to 262,000 times the acceleration due to gravity (9.81 m s^-2). This force is the key to the ability of the AUC to separate even small molecules and ions. The experiments were carried out at controlled rotor speed and temperature. Three different new detectors were constructed and tested within this work. These detectors have greatly enhanced the analytical information obtainable, as demonstrated for proteins, semiconducting nanoparticles and industrial products.
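The acceleration range quoted above follows directly from a = ω²r. A quick numerical check (not part of the thesis; the rotor radius of 6.5 cm and the speed range are taken from the abstract):

```python
import math

def rcf(rpm, radius_m):
    """Relative centrifugal force (multiples of g) at a given rotor speed and radius."""
    omega = 2 * math.pi * rpm / 60.0   # angular velocity in rad/s
    return omega**2 * radius_m / 9.81  # centripetal acceleration divided by g

# Radial position of 6.5 cm, as quoted in the abstract
for rpm in (1000, 60000):
    print(f"{rpm:>6} rpm -> {rcf(rpm, 0.065):,.0f} x g")
# 1000 rpm -> 73 x g
# 60000 rpm -> 261,600 x g  (roughly 262,000 x g)
```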
APA, Harvard, Vancouver, ISO, and other styles
2

Yuan, Yan. "Empirical Likelihood-Based Nonparametric Inference for the Difference between Two Partial AUCs." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/math_theses/32.

Full text
Abstract:
Comparing the accuracy of two continuous-scale tests is increasingly important when a new test is developed. The traditional approach, which compares the entire areas under two Receiver Operating Characteristic (ROC) curves, is not sensitive when the two ROC curves cross each other. A better way to compare the accuracy of two diagnostic tests is to compare the areas under the two ROC curves (AUCs) over the specificity interval of interest. In this thesis, we propose bootstrap and empirical likelihood (EL) approaches for inference on the difference between two partial AUCs. The empirical likelihood ratio for the difference between two partial AUCs is defined, and its limiting distribution is shown to be a scaled chi-square distribution. EL-based confidence intervals for the difference between two partial AUCs are obtained. Additionally, we conduct simulation studies to compare the four proposed EL- and bootstrap-based intervals.
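For readers unfamiliar with the quantity under study, the following is a minimal sketch of a nonparametric partial-AUC estimate and a percentile-bootstrap interval for the difference between two tests. It is not the thesis code: the normal score distributions, the paired resampling scheme and the specificity range (0.8, 1.0) are invented for illustration, and the thesis's empirical likelihood intervals are not shown.

```python
import numpy as np

def partial_auc(x, y, spec_range=(0.8, 1.0), grid=200):
    """Nonparametric pAUC: x = scores of non-diseased, y = scores of diseased.
    Averages sensitivity over the false-positive range implied by spec_range."""
    fpr_lo, fpr_hi = 1 - spec_range[1], 1 - spec_range[0]
    fprs = np.linspace(fpr_lo, fpr_hi, grid)
    cuts = np.quantile(x, 1 - fprs)                 # thresholds giving those FPRs
    sens = np.array([(y > c).mean() for c in cuts])
    return sens.mean() * (fpr_hi - fpr_lo)          # Riemann approximation

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 100)                           # non-diseased, shared by both tests
y1, y2 = rng.normal(1.0, 1, 80), rng.normal(1.2, 1, 80)

diffs = []
for _ in range(1000):                               # percentile bootstrap of the difference
    bx = rng.choice(x, x.size)
    i = rng.integers(0, y1.size, y1.size)           # paired resampling of the diseased
    diffs.append(partial_auc(bx, y2[i]) - partial_auc(bx, y1[i]))
print("95% CI for pAUC2 - pAUC1:", np.percentile(diffs, [2.5, 97.5]).round(3))
```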
APA, Harvard, Vancouver, ISO, and other styles
3

Huang, Xin. "Bootstrap and Empirical Likelihood-based Semi-parametric Inference for the Difference between Two Partial AUCs." Digital Archive @ GSU, 2008. http://digitalarchive.gsu.edu/math_theses/54.

Full text
Abstract:
With new tests being developed and marketed, the comparison of the diagnostic accuracy of two continuous-scale diagnostic tests is of great importance. Comparing the partial areas under the receiver operating characteristic curves (pAUC) is an effective method for evaluating the accuracy of two diagnostic tests. In this thesis, we study semi-parametric inference for the difference between two pAUCs. A normal approximation for the distribution of the difference between two pAUCs is derived. The empirical likelihood ratio for the difference between two pAUCs is defined, and its asymptotic distribution is shown to be a scaled chi-square distribution. Bootstrap and empirical likelihood based inferential methods for the difference are proposed. We construct five confidence intervals for the difference between two pAUCs. Simulation studies are conducted to compare the finite-sample performance of these intervals, and a real example is used as an application of our recommended intervals.
APA, Harvard, Vancouver, ISO, and other styles
4

LEITAO, JUNIOR CLAUDIO B. "Estudo dos parametros de processo da reducao do tricarbonato de amonio e uranilo a dioxido de uranio em forno de leito fluidizado." Repositório Institucional do IPEN, 1992. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10297.

Full text
Abstract:
No abstract is available in the repository metadata. Master's dissertation (IPEN/D), Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP.
APA, Harvard, Vancouver, ISO, and other styles
5

Zheng, Shimin. "The ROC Curve and the Area under the Curve (AUC)." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etsu-works/139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Vrabeľ, Matej. "Neurčitost spojená s designem sběru dat v modelech druhové distribuce." Master's thesis, Česká zemědělská univerzita v Praze, 2017. http://www.nusl.cz/ntk/nusl-262770.

Full text
Abstract:
Properly chosen input data, including their form, the way they are collected and the subsequent corrections, are key factors affecting the accuracy of the increasingly popular species distribution models (SDM). The influence of sampling design on a chosen distribution model of a virtual species was tested using a general linear model (GLM). Four presence-absence collection designs were tested for a virtual species over the area of the Czech Republic: random points, systematic points, points in easily accessible areas (near roads), and points in areas with a higher concentration of researchers (protected landscape areas, CHKO). The True Skill Statistic (TSS), Kappa and AUC (area under the curve) were used to compare the predictive accuracy of the models. Points chosen from easily accessible areas and from protected areas gave worse results on all monitored measures than random or systematic selection of points from the whole area of the Czech Republic. It follows that the data collection design affects the final accuracy of species distribution models.
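The three evaluation measures named in the abstract are straightforward to compute from a model's predictions; a sketch using scikit-learn (the simulated presence-absence data and mock model output are purely illustrative):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix, roc_auc_score

def tss(y_true, y_pred):
    """True Skill Statistic = sensitivity + specificity - 1."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn) + tn / (tn + fp) - 1

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)                               # presence/absence
prob = np.clip(y_true * 0.3 + rng.uniform(0, 0.7, 200), 0, 1)  # mock model output
y_pred = (prob > 0.5).astype(int)

print("TSS   =", round(tss(y_true, y_pred), 3))
print("Kappa =", round(cohen_kappa_score(y_true, y_pred), 3))
print("AUC   =", round(roc_auc_score(y_true, prob), 3))
```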
APA, Harvard, Vancouver, ISO, and other styles
7

Sun, Fangfang. "Semi-parametric inference for the partial area under the ROC curve." unrestricted, 2008. http://etd.gsu.edu/theses/available/etd-11192008-113213/.

Full text
Abstract:
Thesis (M.S.)--Georgia State University, 2008.
Title from file title page. Gengsheng Qin, committee chair; Yu-Sheng Hsu, Yixin Fang, Yuanhui Xiao, committee members. Description based on contents viewed July 22, 2009. Includes bibliographical references (p. 29-30).
APA, Harvard, Vancouver, ISO, and other styles
8

Sala, Fernanda Angélica. "Estudo da interação entre domínios C-terminais de septinas humanas: implicação na formação e estabilidade do filamento." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/75/75133/tde-10082015-140053/.

Full text
Abstract:
Septins comprise a conserved family of guanine nucleotide-binding proteins that assemble into heterofilaments. Structurally they share a common organization: a central GTPase domain, an N-terminal region and a C-terminal domain, the last of which is predicted to form coiled-coil structures. The best-characterized human septin heterocomplex (SEPT2/SEPT6/SEPT7) reveals the importance of the GTPase domain in filament assembly, but the absence of electron density for the C-terminal domains leaves their function unclear. Studies of septins from mammals and from other organisms such as C. elegans and S. cerevisiae suggest that some septin groups (e.g. II and IV in mammals) interact via their C-terminal domains, which could act decisively in correct filament assembly. This project therefore studied the homo- and heterotypic affinities of the C-terminal domains of human septins belonging to groups II (SEPT6C/8C/10C/11C) and IV (SEPT7C), investigating whether these domains contribute to the preference of septins to interact with proteins of different groups during heterofilament assembly. The C-terminal domains were expressed in E. coli and purified. Analytical ultracentrifugation and circular dichroism spectropolarimetry studies identified higher affinity and stability for the heterotypic association than for the homotypic one. Apparent dissociation constants for homodimers were in the low µM range, whereas existing data from our group show dissociation constants for heterodimers in the nM range. To understand, at the atomic level, the factors responsible for this marked preference in the interaction between C-terminal domains of groups II and IV, molecular modelling studies and analyses of the primary sequences were performed. These analyses suggest that a high number of charged residues at position a of the coiled coil is responsible for the selectivity; the heterodimer would be favoured because of the smaller repulsive effect arising from the interleaving of the charged residues at a. The results thus indicate a decisive or cooperative role of the C-terminal domains in the preferential organization of septins during filament formation, favouring the NC interface between septins of groups II and IV.
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Dongmei. "Some Contributions in Statistical Discrimination of Different Pathogens Using Observations through FTIR." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/math_theses/78.

Full text
Abstract:
Fourier Transform Infrared (FTIR) spectroscopy has been used to discriminate different pathogens by comparing signals from infected cells with normal cells as references. For the statistical analysis, Partial Least Squares Regression (PLSR) was used to distinguish each kind of virus-infected cell from normal cells. Validation using the bootstrap method and cross-validation was employed to calculate the shrinkage of the Area Under the ROC Curve (AUC) and of the specificities corresponding to 80%, 90%, and 95% sensitivities. The results show that our procedure can significantly discriminate these pathogens when infected cells are compared with normal cells. Building on this success, PLSR was applied again to compare two kinds of virus-infected cells and normal cells simultaneously. The shrinkage of the Volume Under the Surface (VUS) was calculated to evaluate the diagnostic performance of the model. The high value of the VUS demonstrates that our method can effectively differentiate virus-infected cells from normal cells.
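A minimal sketch of the core modelling step described above, PLSR used as a two-class discriminator scored by AUC. The synthetic "spectra" stand in for the FTIR data, and the bootstrap shrinkage estimation is omitted; everything here is illustrative, not the thesis code.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, p = 120, 300                           # 120 spectra, 300 wavenumbers (synthetic)
y = rng.integers(0, 2, n)                 # 0 = normal cells, 1 = infected cells
X = rng.normal(0, 1, (n, p)) + y[:, None] * np.linspace(0, 0.8, p)  # class shift

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
scores = pls.predict(X_te).ravel()        # continuous scores, usable as a classifier

print("test AUC:", round(roc_auc_score(y_te, scores), 3))
# The 'shrinkage' studied in the thesis is the drop from the (optimistic) training
# AUC to a bootstrap- or cross-validation-estimated AUC such as this one.
```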
APA, Harvard, Vancouver, ISO, and other styles
10

SANTOS, LAURO R. dos. "Unidade piloto de obtencao do tricarbonato de amonio e uranilo." Repositório Institucional do IPEN, 1989. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10220.

Full text
Abstract:
No abstract is available in the repository metadata. Master's dissertation (IPEN/D), Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP.
APA, Harvard, Vancouver, ISO, and other styles
11

Zhu, Yao-Wei. "LIMITED SAMPLING STRATEGIES FOR FACILE DETERMINATION OF THE AREA UNDER THE CURVE OF ANTI-CANCER AGENTS, PACLITAXEL AND SU5416." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin984580545.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Zhang, Lin. "Semiparametric AUC regression for testing treatment effect in clinical trial." Waco, Tex.: Baylor University, 2008. http://hdl.handle.net/2104/5237.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Zhou, Haochuan. "Statistical Inferences for the Youden Index." Digital Archive @ GSU, 2011. http://digitalarchive.gsu.edu/math_diss/5.

Full text
Abstract:
In diagnostic test studies, one crucial task is to evaluate the diagnostic accuracy of a test. Currently, most studies focus on the receiver operating characteristic (ROC) curve and the area under the curve (AUC). The Youden index, widely applied in practice, is another comprehensive measure of the performance of a diagnostic test. For a continuous-scale test classifying diseased and non-diseased groups, finding the Youden index of the test is equivalent to maximizing the sum of sensitivity and specificity over all possible values of the cut-point. This dissertation concentrates on statistical inference for the Youden index. First, an auxiliary tool for the Youden index, called the diagnostic curve, is defined and used to evaluate the diagnostic test. Second, in paired-design studies assessing the diagnostic accuracy of two biomarkers, the difference in paired Youden indices frequently acts as the evaluation standard; we propose an exact confidence interval for this difference based on generalized pivotal quantities, and a maximum likelihood estimate-based interval and a bootstrap-based interval are also included in the study. Third, for certain diseases an intermediate level exists between diseased and non-diseased status; with this in mind, we define the Youden index for three ordinal groups, propose its empirical estimate, study the asymptotic properties of that estimate, and construct parametric and nonparametric confidence intervals for the Youden index. Finally, since covariates often affect the accuracy of a diagnostic test, we propose estimates of the Youden index with a covariate adjustment under heteroscedastic regression models for the test results. Asymptotic properties of the covariate-adjusted Youden index estimators are investigated under normal-error and non-normal-error assumptions.
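A small sketch of the empirical Youden index described above, with the cut-point chosen by maximizing sensitivity + specificity - 1 over all candidate thresholds (the normal score distributions are illustrative only):

```python
import numpy as np

def youden(x, y):
    """Empirical Youden index J = max_c [sensitivity(c) + specificity(c) - 1]
    for non-diseased scores x and diseased scores y; returns J and the cut-point."""
    cuts = np.unique(np.concatenate([x, y]))
    j = [(y > c).mean() + (x <= c).mean() - 1 for c in cuts]
    k = int(np.argmax(j))
    return j[k], cuts[k]

rng = np.random.default_rng(3)
x, y = rng.normal(0, 1, 150), rng.normal(1.5, 1, 150)
J, cut = youden(x, y)
print(f"J = {J:.3f} at cut-point {cut:.3f}")
# For N(0,1) vs N(1.5,1) the theoretical optimum is at cut-point 0.75, J ~= 0.547.
```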
APA, Harvard, Vancouver, ISO, and other styles
14

Gábor, Lukáš. "Zkvalitňují environmentální filtry modely druhové distribuce?" Master's thesis, Česká zemědělská univerzita v Praze, 2016. http://www.nusl.cz/ntk/nusl-256602.

Full text
Abstract:
Species distribution models (SDM) are a widely used tool in biogeography, macroecology and nature conservation. With gradual development, they have become an important means of, for example, identifying locations potentially threatened by invasive species or studying the impact of climate change on biodiversity. It has also become apparent that one of the major factors limiting species distribution modelling is the input data. Presence data are the most readily available, but they suffer from uneven collection, for example with a predominance of records in easily accessible locations. The aim of this work is to show that the popular climate-based filtering of input presence data, intended to eliminate uneven sampling, affects the final model in a negative way. For this purpose, virtual species of different kinds and with different prevalences of recorded occurrences were generated over the territory of the Iberian Peninsula. Species distribution models with and without climate filters were then created using Maxent and evaluated by AUC. The difference between the virtual reality (the suitability surface of the virtual species) and the resulting model was tested by a paired t-test. Comparison of the AUC confirmed that the models based on climate filtering have better discriminative ability; however, this only reflects skilful handling of the introduced sample bias and no longer reflects reality. In contrast, comparison of the differences between the virtual reality and the models with and without climate filtering using a paired t-test shows greater congruence between the unfiltered models and the virtual reality. It was thus demonstrated that climate filtering does not lead to more valid species distribution models.
APA, Harvard, Vancouver, ISO, and other styles
15

MARCONDES, GILBERTO H. "Obtencao do U3O8 para combustiveis tipo MTR a partir do tricarbonato de amonio e uranilo (TCAU)." Repositório Institucional do IPEN, 1999. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10740.

Full text
Abstract:
No abstract is available in the repository metadata. Master's dissertation (IPEN/D), Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP.
APA, Harvard, Vancouver, ISO, and other styles
16

Vege, Sri Harsha. "Ensemble of Feature Selection Techniques for High Dimensional Data." TopSCHOLAR®, 2012. http://digitalcommons.wku.edu/theses/1164.

Full text
Abstract:
Data mining involves the use of data analysis tools to discover previously unknown, valid patterns and relationships in large amounts of data stored in databases, data warehouses, or other information repositories. Feature selection is an important preprocessing step in data mining that helps increase the predictive performance of a model. Its main aim is to choose a subset of features with high predictive information and to eliminate irrelevant features with little or no predictive information. Using a single feature selection technique may generate local optima. In this thesis we propose an ensemble approach to feature selection, in which multiple feature selection techniques are combined to yield more robust and stable results. The ensemble of multiple feature ranking techniques is performed in two steps: the first creates a set of different feature selectors, each providing its sorted order of features, while the second aggregates the results of all the feature ranking techniques. The ensemble method used in our study is a frequency count, accompanied by the mean rank to resolve frequency-count collisions. The experiments in this work are performed on datasets collected from the Kent Ridge biomedical data repository: the Lung Cancer dataset (57 attributes, 32 instances) and the Lymphoma dataset (4027 attributes, 96 instances). Experiments are performed on the reduced datasets obtained from feature ranking, which are used to build the classification models. Model performance is evaluated in terms of the AUC (area under the receiver operating characteristic curve) metric, and ANOVA tests are performed on the AUC results. The experimental results suggest that an ensemble of multiple feature selection techniques is more effective than an individual feature selection technique.
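A compact sketch of the two-step aggregation described above, with frequency count as the primary criterion and mean rank as the tie-breaker. The three rankings are hypothetical; the thesis's individual rankers and datasets are not reproduced.

```python
import numpy as np

def ensemble_rank(rankings, k):
    """Aggregate feature rankings (each a list of feature indices, best first).
    Primary key: how often a feature appears in a ranker's top-k (frequency count);
    ties are broken by the mean position across rankers (lower is better)."""
    n = len(rankings[0])
    freq, pos_sum = np.zeros(n), np.zeros(n)
    for r in rankings:
        for pos, f in enumerate(r):
            pos_sum[f] += pos
            if pos < k:
                freq[f] += 1
    mean_pos = pos_sum / len(rankings)
    return sorted(range(n), key=lambda f: (-freq[f], mean_pos[f]))[:k]

# Three hypothetical rankers over 6 features (indices 0..5), best first:
rankings = [[3, 0, 5, 1, 2, 4], [0, 3, 2, 5, 4, 1], [3, 2, 0, 4, 5, 1]]
print(ensemble_rank(rankings, k=3))
# -> [3, 0, 2]: features 3 and 0 tie on frequency; the mean rank favours 3.
```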
APA, Harvard, Vancouver, ISO, and other styles
17

Lu, Qing. "Methods for Designing and Forming Predictive Genetic Tests." Case Western Reserve University School of Graduate Studies / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=case1212197560.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Ataman, Kaan. "Learning to rank by maximizing the AUC with linear programming for problems with binary output." Diss., University of Iowa, 2007. http://ir.uiowa.edu/etd/151.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Wang, Hailun. "Some Conclusions of Statistical Analysis of the Spectroscopic Evaluation of Cervical Cancer." Digital Archive @ GSU, 2008. http://digitalarchive.gsu.edu/math_theses/58.

Full text
Abstract:
To significantly improve the early detection of cervical precancers and cancers, LightTouch™ is under development by SpectRx Inc. LightTouch™ identifies cancers and precancers quickly by using a spectrometer to analyze light reflected from the cervix. Data from the spectrometer are then used to create an image of the cervix that highlights the location and severity of disease. Our research was conducted to find appropriate models that can generate a map-like image distinguishing diseased tissue from normal tissue and further diagnose cervical cancerous conditions. After an extensive search and reduction of explanatory variables, logistic regression and partial least squares regression were successfully applied in our modelling process. The models were validated by 60/40 cross-validation and 10-fold cross-validation, and model performance was further examined in terms of AUC, sensitivity, specificity and threshold.
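The two validation schemes mentioned above can be sketched with scikit-learn as follows; the synthetic data and plain logistic regression stand in for the thesis's spectral variables and fitted models, and reading "60/40 cross-validation" as a 60/40 holdout split is our interpretation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# 10-fold cross-validated AUC, one of the two validation schemes named above
cv_auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=10, scoring="roc_auc")
print("10-fold CV AUC:", cv_auc.mean().round(3))

# 60/40 split validation, the other scheme mentioned in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("60/40 holdout AUC:",
      round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```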
APA, Harvard, Vancouver, ISO, and other styles
20

Xu, Ping. "Evaluation of Repeated Biomarkers: Non-parametric Comparison of Areas under the Receiver Operating Curve Between Correlated Groups Using an Optimal Weighting Scheme." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4261.

Full text
Abstract:
Receiver operating characteristic (ROC) curves are often used to evaluate the prognostic performance of a continuous biomarker. In previous research, a non-parametric ROC approach was introduced to compare two biomarkers with repeated measurements, and an asymptotically normal statistic containing subject-specific weights was developed to estimate the areas under the ROC curves of the biomarkers. Although that study suggested two weighting schemes that are optimal when the within-subject correlation is 1 or 0, a universally optimal weight was not determined. We modify this asymptotic statistic to compare AUCs between two correlated groups and propose a solution to weight optimization in non-parametric AUC comparison that improves the efficiency of the estimator. We demonstrate how a Lagrange multiplier can be used to find the weights that minimize the variance function subject to constraints. We show substantial gains in efficiency from the novel weighting scheme when the within-group correlation is high, the between-group correlation is high, and/or the disease incidence is small, which is the case for many longitudinal matched case-control studies. An illustrative example applies the proposed methodology to a thyroid function dataset. Simulation results suggest that the optimal weight performs well with sample sizes as small as 50 per group.
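The Lagrange-multiplier idea can be illustrated in the simplest uncorrelated case, where minimizing Var(sum_i w_i Z_i) subject to sum_i w_i = 1 gives w_i proportional to 1/Var(Z_i). The thesis's setting with correlated repeated measures is more involved; the variances below are invented for the toy check.

```python
import numpy as np

# Minimise Var(sum w_i * Z_i) subject to sum w_i = 1 for independent Z_i.
# The Lagrange stationarity condition gives w_i proportional to 1 / Var(Z_i).
var = np.array([1.0, 4.0, 0.25])          # hypothetical per-measurement variances
w = (1 / var) / (1 / var).sum()
print("optimal weights:  ", w.round(3))                            # [0.19 0.048 0.762]
print("combined variance:", (w**2 * var).sum().round(3))           # 1/sum(1/var) = 0.19
print("equal weights:    ", ((np.ones(3) / 3)**2 * var).sum().round(3))  # 0.583
```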
APA, Harvard, Vancouver, ISO, and other styles
21

Bitara, Matúš. "Srovnání heuristických a konvenčních statistických metod v data miningu." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-400833.

Full text
Abstract:
The thesis deals with the comparison of conventional and heuristic methods in data mining used for binary classification. In the theoretical part, four different models are described and model classification is demonstrated on simple examples. In the practical part, the models are compared on real data; this part also covers data cleaning, outlier removal, two different transformations and dimension reduction. The last part describes the methods used to test the quality of the models.
APA, Harvard, Vancouver, ISO, and other styles
22

Mistry, Pritesh. "A Knowledge Based Approach of Toxicity Prediction for Drug Formulation. Modelling Drug Vehicle Relationships Using Soft Computing Techniques." Thesis, University of Bradford, 2015. http://hdl.handle.net/10454/14440.

Full text
Abstract:
This multidisciplinary thesis is concerned with the prediction of drug formulations for the reduction of drug toxicity. Both scientific and computational approaches are utilised to make original contributions to the field of predictive toxicology. The first part of the thesis provides a detailed scientific discussion of all aspects of drug formulation and toxicity, focused on the principal mechanisms of drug toxicity and on how drug toxicity is studied and reported in the literature. A review of the current technologies available for formulating drugs for toxicity reduction is provided, together with examples of studies reported in the literature that have used these technologies to reduce drug toxicity. The thesis also gives an overview of the computational approaches currently employed in the field of in silico predictive toxicology, focusing on the machine learning approaches used to build predictive QSAR classification models, with examples discovered in the literature. Two methodologies were developed as the main work of this thesis. The first uses directed bipartite graphs and Venn diagrams for the visualisation and extraction of drug-vehicle relationships from large uncurated datasets that show changes in patterns of toxicity; these relationships can be rapidly extracted and visualised using the methodology proposed in chapter 4. The second methodology involves mining large datasets to extract drug-vehicle toxicity data. It uses an area-under-the-curve principle to make pairwise comparisons of vehicles, which are classified according to the toxicity protection they offer, and from these comparisons predictive classification models based on random forests and decision trees are built. The results of this methodology are reported in chapter 6.
APA, Harvard, Vancouver, ISO, and other styles
23

Judice, Douglas C. "Columbia's attempt at peace an analysis of the demobilization of the Auto-Defensas Unidas De Colombia (AUC) /." Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion.exe/07Mar%5FJudice.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Judice, Douglas C. "Colombia's attempt at peace an analysis of the demobilization of the Auto-Defensas Unidas De Colombia (AUC)." Thesis, Monterey, California. Naval Postgraduate School, 2007. http://hdl.handle.net/10945/3589.

Full text
Abstract:
This thesis analyzes the Government of Colombia's (GOC's) demobilization of the paramilitary organizations known collectively as the Auto-defensas Unidas de Colombia (AUC) and the reinsertion of its fighters into Colombian society. So far, this DDR (disarmament, demobilization and reintegration) process has not achieved the majority of its goals, while other problems loom on the horizon. The thesis addresses the implications for Colombia and makes recommendations for future DDR processes. The study divides the process into two elements: the agreement between the AUC and the GOC, and the implementation of the terms of that agreement. The contract between the two parties is found in Colombian Law 782 of 2002 and Law 975 of 2005, better known as the Peace and Justice Law; the Ministry of Justice and of the Interior is responsible for implementing their terms. A central argument of this thesis is that, in order for the GOC to carry out the DDR of the AUC successfully, it must not only contend with current and former paramilitary members, but must also address the societal problems that permit illegal armed groups to thrive in Colombia. Success in Colombia must be matched in the international community, which must perceive the DDR process as legitimate. In short, if the public perceives the penalties as too lenient, the process will be de-legitimized; if the former paramilitary members, the Paras, perceive the penalties as too harsh, they will likely stop participating and reconstitute their former organizations. To prevent either pitfall, the GOC must develop a formula that gains control over formerly AUC-controlled terrain and its population, essentially extending government control throughout the country. This is the only way to prevent both the de-legitimization of the process and the reconstitution of the former paramilitary organizations. If the GOC can accomplish this, the DDR will succeed; if it cannot, the DDR will fail.
APA, Harvard, Vancouver, ISO, and other styles
25

Khamesipour, Alireza. "IMPROVED GENE PAIR BIOMARKERS FOR MICROARRAY DATA CLASSIFICATION." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1573.

Full text
Abstract:
The Top Scoring Pair (TSP) classifier, based on the notion of relative ranking reversals in the expressions of two marker genes, has been proposed as a simple, accurate, and easily interpretable decision rule for classification and class prediction of gene expression profiles. We introduce the AUC-based TSP (AUCTSP) classifier, which is based on the area under the ROC (receiver operating characteristic) curve. The AUCTSP classifier works on the same principle as TSP but differs in that the probabilities that determine the top scoring pair are computed from the relative rankings of the two marker genes across all subjects, as opposed to for each individual subject. Although the classification is still done on an individual-subject basis, the generalization that the AUC-based probabilities provide during training yields an overall better and more stable classifier. Through extensive simulation results and case studies involving classification in ovarian, leukemia, colon, breast and prostate cancers and diffuse large B-cell lymphoma, we show the superiority of the proposed approach in terms of improving classification accuracy, avoiding overfitting and being less prone to selecting non-informative pivot genes. The proposed AUCTSP is a simple yet reliable and robust rank-based classifier for gene expression classification. While the AUCTSP works on the same principle as the TSP, its ability to determine the top scoring gene pair from the relative rankings of the two marker genes across all subjects, rather than each individual subject, results in significant gains in classification accuracy; in addition, the proposed method tends to avoid selecting non-informative (pivot) genes as members of the top scoring pair. We have also proposed the use of the AUC test statistic to reduce the computational cost of the TSP in selecting the most informative pair of genes for diagnosing a specific disease, and we have proven the efficacy of this method through case studies in ovarian, colon, leukemia, breast and prostate cancers and diffuse large B-cell lymphoma. We compared the selected pairs, the computational cost and running time, and the classification performance of a subset of differentially expressed genes selected on the basis of the AUC probability with the original TSP on the aforementioned datasets. The reduced-size TSP dramatically lowers the computational cost and time complexity of selecting the top scoring pair in all of the case studies, without degrading classifier performance. Using the AUC probability, we were able to reduce the computational cost and CPU running time of the TSP by 79% and 84%, respectively, on average in the tested case studies. In addition, applying the AUC probability before the TSP tends to avoid the selection of genes that are not expressed ('pivot' genes), because of the imposed condition. We demonstrate through LOOCV and 5-fold cross-validation that the reduced-size TSP and the TSP perform approximately the same in terms of classification accuracy for smaller threshold values. In conclusion, we suggest using the AUC test statistic to reduce the size of the dataset for extensions of the TSP method, e.g. the k-TSP and TST, in order to make these methods feasible and cost-effective.
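A minimal sketch of the TSP score itself, the quantity whose maximizer defines the top scoring pair (synthetic expression data; the AUC-based probabilities of AUCTSP are not shown here):

```python
import numpy as np

def tsp_score(expr, labels, i, j):
    """TSP score of gene pair (i, j):
    |P(X_i < X_j | class 1) - P(X_i < X_j | class 0)|.
    expr is genes x samples; labels are 0/1 per sample."""
    flips = expr[i] < expr[j]
    return abs(flips[labels == 1].mean() - flips[labels == 0].mean())

rng = np.random.default_rng(4)
labels = np.repeat([0, 1], 30)
expr = rng.normal(0, 1, (50, 60))         # 50 genes, 60 samples (synthetic)
expr[7, labels == 1] += 2.0               # gene 7 up-regulated in class 1

best = max(((i, j) for i in range(50) for j in range(i + 1, 50)),
           key=lambda p: tsp_score(expr, labels, *p))
print("top scoring pair:", best,
      "score:", round(tsp_score(expr, labels, *best), 3))
```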
APA, Harvard, Vancouver, ISO, and other styles
26

Wang, Binhuan. "Statistical Evaluation of Continuous-Scale Diagnostic Tests with Missing Data." Digital Archive @ GSU, 2012. http://digitalarchive.gsu.edu/math_diss/8.

Full text
Abstract:
Receiver operating characteristic (ROC) curve methodology is the standard statistical methodology for assessing the accuracy of diagnostic tests or biomarkers. Currently, the most widely used statistical methods for inference on ROC curves are complete-data-based parametric, semi-parametric or nonparametric methods. However, these methods cannot be used in diagnostic applications with missing data, which in practice occur commonly for various reasons, such as medical tests being too expensive, too time-consuming or too invasive. This dissertation aims to develop new nonparametric statistical methods for evaluating the accuracy of diagnostic tests or biomarkers in the presence of missing data. Specifically, novel nonparametric methods are developed for different types of missing data for (i) inference on the area under the ROC curve (AUC, a summary index of the diagnostic accuracy of the test) and (ii) joint inference on the sensitivity and specificity of a continuous-scale diagnostic test. We provide a general framework that combines empirical likelihood and general estimating equations with nuisance parameters for joint inference on sensitivity and specificity with missing diagnostic data. The proposed methods have sound theoretical properties; the theoretical development is challenging because the proposed profile log-empirical likelihood ratio statistics are not standard sums of independent random variables. The new methods combine the power of likelihood-based approaches with the jackknife method in ROC studies, and are therefore expected to be more robust, more accurate and less computationally intensive than existing methods for evaluating competing diagnostic tests.
APA, Harvard, Vancouver, ISO, and other styles
27

Dai, Huaien. "Structural and functional studies of interactions between β-1,3-glucan and the N-terminal domains of β-1,3-glucan recognition proteins involved in insect innate immunity." Diss., Kansas State University, 2013. http://hdl.handle.net/2097/15286.

Full text
Abstract:
Doctor of Philosophy
Department of Biochemistry
Ramaswamy Krishnamoorthi
Insect β-1,3-glucan recognition protein (βGRP), a soluble receptor in the hemolymph, binds to the surfaces of bacteria and fungi and activates serine protease cascades that promote the destruction of pathogens by means of melanization or the expression of antimicrobial peptides. Delineating the mechanistic details of these processes may help develop strategies to control insect-borne diseases and economic losses. Multi-dimensional nuclear magnetic resonance (NMR) techniques were employed to solve the solution structure of the Indian meal moth (Plodia interpunctella) βGRP N-terminal domain (N-βGRP), which is sufficient to activate the prophenoloxidase (proPO) pathway resulting in melanin formation. This is the first three-dimensional structure determined for an N-βGRP, which adopts an immunoglobulin fold. Addition of laminarin, a β-1,3- and β-1,6-linked glucose polysaccharide (~6 kDa) that activates the proPO pathway, to N-βGRP results in the loss of NMR cross-peaks from the backbone ¹⁵N-¹H groups of the protein, suggesting the formation of a large complex. Analytical ultracentrifugation (AUC) studies of the formation of the N-βGRP:laminarin complex show that ligand binding induces self-association of the protein-carbohydrate complex into a macro structure likely containing six protein and three laminarin molecules (~102 kDa). The macro complex is quite stable, as it does not dissociate upon dilution to submicromolar concentrations. The structural model thus derived for the N-βGRP:laminarin complex in solution differs from the one in which a single N-βGRP molecule has been proposed to bind to a triple-helical form of laminarin on the basis of an X-ray crystal structure of the N-βGRP:laminarihexaose complex. AUC studies and phenoloxidase activation measurements made with designed mutants of N-βGRP indicate that electrostatic interactions between the ligand-bound protein molecules contribute to the stability of the N-βGRP:laminarin complex and that decreased stability results in reduced proPO activation. These novel findings suggest that ligand-induced self-association of the βGRP:β-1,3-glucan complex may form a platform on a microbial surface for the recruitment of downstream proteases, as a means of amplifying the pathogen recognition signal. For the homolog GNBPA2 from Anopheles gambiae, the mosquito vector of malaria-causing Plasmodium, multiligand specificity was characterized, suggesting a functional diversity of the immunoglobulin domain structure.
APA, Harvard, Vancouver, ISO, and other styles
28

Watanabe, Mizuki. "Analysis of factors that have impacts on various infectious diseases after allogenic hematopoietic stem cell transplantation." Kyoto University, 2020. http://hdl.handle.net/2433/253195.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Häntzschel, Ingmar. "Targeting mycophenolate mofetil for graft-versus-host disease prophylaxis after allogenic blood stem cell transplantation." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-64815.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Belouni, Mohamad. "Plans d'expérience optimaux en régression appliquée à la pharmacocinétique." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM056/document.

Full text
Abstract:
The problem of interest is to estimate the concentration curve and the area under the curve (AUC) by estimating the parameters of a linear regression model with an autocorrelated error process. We construct a simple linear unbiased estimator of the concentration curve and of the AUC. We show that this estimator, constructed from a sampling design generated by an appropriate density, is asymptotically optimal in the sense that it has exactly the same asymptotic performance as the best linear unbiased estimator (BLUE). Moreover, we prove that the optimal design is robust with respect to misspecification of the autocovariance function according to a minimax criterion. When repeated observations are available, the estimator is consistent and has an asymptotically normal distribution. All of these results are extended to Hölder error processes with index between 0 and 2. Finally, for small sample sizes, a simulated annealing algorithm is applied to a pharmacokinetic model with correlated errors.
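As a reminder of the target quantity, the AUC of a sampled concentration curve is commonly approximated by the trapezoidal rule. A toy example with an invented one-compartment decay follows; the thesis's optimal-design regression estimator is considerably more sophisticated than this.

```python
import numpy as np

# Concentration sampled at a few time points (hypothetical one-compartment decay)
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])          # hours
c = 10 * np.exp(-0.3 * t)                              # mg/L

auc_trapz = np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2)  # trapezoidal rule
auc_true = 10 / 0.3 * (1 - np.exp(-0.3 * 8))           # closed form on [0, 8]
print(round(auc_trapz, 2), "vs", round(auc_true, 2))
# ~31.44 vs ~30.31: the trapezoid slightly overestimates a convex decay.
```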
APA, Harvard, Vancouver, ISO, and other styles
31

Vayatis, Nicolas. "Approches statistiques en apprentissage : boosting et ranking." Habilitation à diriger des recherches, Université Pierre et Marie Curie - Paris VI, 2006. http://tel.archives-ouvertes.fr/tel-00120738.

Full text
Abstract:
Over the past ten years, the statistical theory of learning has expanded rapidly. The advent of highly effective algorithms for classifying high-dimensional data, such as boosting and kernel machines (SVM), raised many statistical questions that Vapnik-Chervonenkis (VC) theory could not answer: the Empirical Risk Minimization principle does not account for practical learning methods, and the combinatorial concept of VC dimension cannot explain the generalization ability of algorithms that select an estimator from a massive class such as the convex hull of a VC class. The first part of the thesis recalls the interpretation of boosting algorithms as implementations of convex risk minimization principles and studies their properties from this viewpoint; in particular, it shows the importance of regularization for obtaining consistent strategies. It also develops a new class of stochastic-gradient-type algorithms, called mirror descent algorithms with averaging, and evaluates their behaviour through computer simulations. After presenting the fundamental principles of boosting, the second part turns to more advanced questions such as the derivation of oracle inequalities: the precise calibration of penalties as a function of the cost criteria used is studied, non-asymptotic results on the performance of penalized boosting estimators are presented, notably fast rates under margin conditions of the Mammen-Tsybakov type, and the approximation capacity of boosting with decision stumps is described. The third part explores the ranking problem. An important issue in applications such as document retrieval and credit scoring is to order instances rather than to categorize them. A simple formulation of this problem is proposed in which ranking is interpreted as classification over pairs of observations; the difference in this case is that the empirical criteria are U-statistics, and the classification theory is therefore developed for this setting. The question of generalizing the ranking error is also explored, so as to incorporate priors on the order of the instances, as in the case where only the "best" instances are of interest.
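The pairwise view of ranking mentioned in the third part can be sketched directly: the empirical ranking error is the U-statistic counting mis-ordered (positive, negative) pairs, i.e. one minus the AUC of the scoring function. The data below are illustrative only.

```python
import numpy as np

def ranking_error(scores, labels):
    """Empirical ranking error: the U-statistic giving the fraction of
    (positive, negative) pairs that the scorer mis-orders (ties count 1/2).
    It equals 1 - AUC of the scoring function."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]           # all pairwise score differences
    return np.mean(diff < 0) + 0.5 * np.mean(diff == 0)

rng = np.random.default_rng(5)
labels = rng.integers(0, 2, 200)
scores = labels + rng.normal(0, 1, 200)          # noisy scores correlated with labels
print("ranking error:", round(ranking_error(scores, labels), 3))
```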
APA, Harvard, Vancouver, ISO, and other styles
32

Egil, Martinsson. "Kvantitativ Modellering av förmögenhetsrättsliga dispositiva tvistemål." Thesis, Uppsala universitet, Statistiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-224516.

Full text
Abstract:
I den här uppsatsen beskrivs en ansats till att med hjälp av statistiska metoder förutse utfallet i förmögenhetsrättsliga dispositiva tvistemål. Logistiska- och multilogistiska regressionsmodeller skattades på data för 13299 tvistemål från 5 tingsrätter och användes  till att förutse utfallet för 1522 tvistemål från 3 andra tingsrätter.   Modellerna presterade bättre än slumpen vilket ger stöd för slutsatsen att man kan använda statistiska metoder för att förutse utfallet i denna typ av tvistemål.
BACKGROUND: The idea of legal automatization is a controversial topic that has been discussed for hundreds of years, in modern times in the context of Law & Artificial Intelligence. Strangely, real-world applications are very rare. Assuming that the judicial system is like any system that transforms inputs into outputs, one would think that we should be able to measure it, gain insight into its inner workings, and ultimately use these measurements to make predictions about its output. In this thesis, civil procedures on commercial matters amenable to out-of-court settlement (Förmögenhetsrättsliga Dispositiva Tvistemål) were given particular attention, and the question was posed: can we predict the outcome of civil procedures using statistical methods? METHOD: By analyzing procedural law and legal doctrine, the civil procedure was modeled in terms of a random variable with a discrete observable outcome. Data for 14821 cases were extracted from eight different courts. Five of these courts (13299 cases) were used to train the models, and three courts (1522 cases) were chosen randomly and kept untouched for validation. Most cases concerned monetary claims (66%) and/or damages (12%). Binary and multinomial logistic regression methods were used as classifiers. RESULTS: The models were found to be uncalibrated, but they clearly outperformed random score assignment at separating classes; at a preset threshold they gave accuracies significantly higher (p<<0.001) than random guessing, and in identifying settlements or the correct type of verdict their performance was significantly better (p<<0.003) than consistently guessing the most common outcome. CONCLUSION: Data for cases from one set of courts can to some extent predict the outcomes of cases from another set of courts. The results from applying the models to new data support the conclusion that the outcome of civil processes can be predicted using statistical methods.
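
As a rough illustration of the design (train on cases from one set of courts, validate on cases from held-out courts), here is a minimal Python sketch; the feature matrix is a hypothetical stand-in, since the actual case variables are not reproduced here:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(1)

    def make_cases(n):
        # Hypothetical stand-ins for case features (claim type, amount, ...).
        X = rng.normal(size=(n, 5))
        y = (X[:, 0] - X[:, 1] + rng.normal(scale=1.5, size=n) > 0).astype(int)
        return X, y

    X_train, y_train = make_cases(13299)   # cases from the five training courts
    X_test, y_test = make_cases(1522)      # cases from the three held-out courts

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    majority = np.bincount(y_test).max() / len(y_test)   # guess-the-majority baseline
    print(accuracy_score(y_test, model.predict(X_test)), majority)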
APA, Harvard, Vancouver, ISO, and other styles
33

Yang, Hanfang. "Jackknife Empirical Likelihood Method and its Applications." Digital Archive @ GSU, 2012. http://digitalarchive.gsu.edu/math_diss/9.

Full text
Abstract:
In this dissertation, we investigate jackknife empirical likelihood methods motivated by recent research in statistics and related fields. The computational intensity of empirical likelihood can be significantly reduced by using jackknife empirical likelihood methods without losing accuracy or stability. We demonstrate that the proposed jackknife empirical likelihood methods are able to handle several challenging and open problems, with elegant asymptotic properties and accurate simulation results in finite samples. These problems include ROC curves with missing data, the difference of two ROC curves in two-dimensional correlated data, a novel inference for the partial AUC, and the difference of two quantiles with one or two samples. In addition, empirical likelihood methodology can be successfully applied to the linear transformation model using adjusted estimating equations. Comprehensive simulation studies of coverage probabilities and average lengths for these topics demonstrate that the proposed jackknife empirical likelihood methods perform well in finite samples under various settings. Moreover, some related real-data problems are studied to support our conclusions. In the end, we provide an extensive discussion of some interesting and feasible ideas based on our jackknife EL procedures for future studies.
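
One standard building block behind jackknife empirical likelihood is the jackknife pseudo-value, which turns a nonlinear statistic such as the AUC into approximately i.i.d. quantities to which ordinary empirical likelihood for a mean can be applied. A minimal Python sketch of that step alone, on simulated data (not the full EL inference developed in the dissertation):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def jackknife_pseudovalues(y, scores):
        # Pseudo-values n*T - (n-1)*T_(-i) for the AUC statistic T;
        # they behave asymptotically like i.i.d. observations with mean AUC.
        n = len(y)
        full = roc_auc_score(y, scores)
        idx = np.arange(n)
        loo = np.array([roc_auc_score(y[idx != i], scores[idx != i])
                        for i in range(n)])
        return n * full - (n - 1) * loo

    rng = np.random.default_rng(2)
    y = np.repeat([0, 1], 50)
    scores = y + rng.normal(scale=1.5, size=100)
    pv = jackknife_pseudovalues(y, scores)
    print(pv.mean())   # close to the full-sample AUC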
APA, Harvard, Vancouver, ISO, and other styles
34

Chan, Victor. "A Necessary Evil: Livy's Cyclical History and the Metus Hostilis." Scholarship @ Claremont, 2016. http://scholarship.claremont.edu/cmc_theses/1369.

Full text
Abstract:
This thesis aims to discern whether cyclical history can be appropriately applied to the Ab Urbe Condita, and from these findings to discern Livy's authorial implications for the contemporary political program. This is done by analyzing exempla, as well as by constructing a new definition of metus hostilis. Doing so allows for the detection of patterns that, when imprinted upon the existing formulaic model, show whether the metus hostilis enhances the case for Livy writing the AUC with cyclical intent. Based on this analysis, the implications for contemporary Rome are clear in that the narrative insinuates the Augustan regime's necessity.
APA, Harvard, Vancouver, ISO, and other styles
35

Li, Yi. "A Generalization of AUC to an Ordered Multi-Class Diagnosis and Application to Longitudinal Data Analysis on Intellectual Outcome in Pediatric Brain-Tumor Patients." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/math_diss/1.

Full text
Abstract:
Receiver operating characteristic (ROC) curves have been widely used to evaluate the goodness of diagnostic methods in many fields, such as disease diagnosis in medicine. The area under the ROC curve (AUC) naturally became one of the most used quantities for gauging the goodness of a diagnosis (Mossman, Somoza 1991). Since medical diagnosis is often not dichotomous, the ROC curve and AUC need to be generalized to the multi-dimensional case. The generalization of the AUC to the multi-class case has been studied by many researchers in the past decade. Most recently, Nakas & Yiannoutsos (2004) considered ROC analysis for d ordered classes by considering only the sensitivities of each class, so their dimension is only d. Cha (2005) considered more types of misclassification in the ordered multiple-class case, but reduced the dimension of Ferri et al. from d(d-1) to 2(d-1). In this dissertation we adjust and calculate the VUS for an ordered multiple-class diagnosis with Cha's 2(d-1)-dimension method. Our methodology for finding the VUS is introduced; we present the method of adjusting and calculating the VUS and its statistical inference for the 2(d-1) dimensions. Some simulation results are included and a real example is presented. Intellectual outcomes in pediatric brain-tumor patients were investigated in a prospective longitudinal study. The Stanford-Binet Intelligence Scale-Fourth Edition (SB-IV) Standard Age Score (SAS) and Composite intelligence quotient (IQ) score are examined as cognitive outcomes in pediatric brain-tumor patients. Treatment factors, patient factors, and time since diagnosis are taken into account as risk factors. Hierarchical linear/quadratic models and Gompertz-based hierarchical nonlinear growth models were applied to build linear and nonlinear longitudinal curves. We use PRESS and the Volume Under the Surface (VUS) as the criteria to compare these two methods. Some model interpretations are presented in this dissertation.
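
For intuition, the simplest ordered three-class analogue of the AUC is the volume under the ROC surface (VUS): the probability that one score drawn from each class is correctly ordered. A minimal Python sketch of that baseline quantity on simulated data (this is the plain Nakas-Yiannoutsos-style VUS, not Cha's 2(d-1)-dimensional adjustment developed in the dissertation):

    import numpy as np

    def vus_ordered(s1, s2, s3):
        # Probability that a random triplet (one score per ordered class)
        # is ranked s1 < s2 < s3 by the diagnostic score.
        a = np.asarray(s1)[:, None, None]
        b = np.asarray(s2)[None, :, None]
        c = np.asarray(s3)[None, None, :]
        return np.mean((a < b) & (b < c))

    rng = np.random.default_rng(3)
    low, mid, high = rng.normal(0, 1, 40), rng.normal(1, 1, 40), rng.normal(2, 1, 40)
    print(vus_ordered(low, mid, high))   # 1/6 under pure chance, 1 if perfectly ordered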
APA, Harvard, Vancouver, ISO, and other styles
36

SILVA NETO, JOAO B. da. "Processo alternativo para obtenção de tetrafluoreto de urânio a partir de efluentes fluoretados da etapa de reconversão de urânio." Repositório Institucional do IPEN, 2008. http://repositorio.ipen.br:8080/xmlui/handle/123456789/11709.

Full text
Abstract:
Master's dissertation (IPEN/D), Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP).
APA, Harvard, Vancouver, ISO, and other styles
37

Prabhavalkar, Rohit Prakash. "Discriminative Articulatory Feature-based Pronunciation Models with Application to Spoken Term Detection." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1374183801.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Charraud, Jocelyn, and Saez Adrian Garcia. "Bankruptcy prediction models on Swedish companies." Thesis, Umeå universitet, Företagsekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185143.

Full text
Abstract:
Bankruptcies have been a sensitive topic all around the world for over 50 years. From their research, the authors have found that only a few bankruptcy studies have been conducted in Sweden, and even fewer on the topic of bankruptcy prediction models. This thesis investigates the performance of the Altman, Ohlson and Zmijewski bankruptcy prediction models on all Swedish companies during the years 2017 and 2018.  This study intends to shed light on some of the most famous bankruptcy prediction models and to explore their predictive abilities and usability in Sweden. The second purpose of this study is to create two models from the most significant variables of the three models studied and to test their prediction power, with the aim of creating two models designed for Swedish companies.  We identified a research gap in Sweden, where bankruptcy prediction models have been rather unexplored, especially these three models. Furthermore, we identified a second research gap regarding the time period of the research: only a few studies on bankruptcy prediction models have been conducted after the financial crisis of 2007/08.  We conducted a quantitative study in order to achieve the purpose of the study, using secondary data gathered from the Serrano database. The research followed an abductive approach within a positivist paradigm and covered all active Swedish companies between the years 2017 and 2018. Finally, it contributes to the current field of knowledge through the analysis of the results of the models on Swedish companies, using the liquidity theory, the solvency and insolvency theory, the pecking order theory, the profitability theory, the cash flow theory, and the contagion effect. The results aligned with the liquidity theory, the solvency and insolvency theory and the profitability theory. Moreover, we found that the Altman model has the lowest performance of the three models, followed by the Ohlson model, which shows mixed results depending on the statistical analysis; the Zmijewski model performs best. The performance and prediction power of the two new models were significantly higher than those of the three models studied.
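
Of the three models studied, Altman's is the most compact to state. As a reference point, here is the classic Altman (1968) Z-score for public manufacturing firms in a short Python sketch; the input values are hypothetical:

    def altman_z(working_capital, retained_earnings, ebit,
                 market_equity, sales, total_assets, total_liabilities):
        # Altman (1968) Z-score: weighted sum of five financial ratios.
        x1 = working_capital / total_assets
        x2 = retained_earnings / total_assets
        x3 = ebit / total_assets
        x4 = market_equity / total_liabilities
        x5 = sales / total_assets
        return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

    # Hypothetical firm (values in MSEK); Z < 1.81 signals distress, Z > 2.99 safety.
    print(altman_z(120, 300, 90, 800, 1500, 1000, 400))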
APA, Harvard, Vancouver, ISO, and other styles
39

Yu, Daoping. "Early Stopping of a Neural Network via the Receiver Operating Curve." Digital Commons @ East Tennessee State University, 2010. https://dc.etsu.edu/etd/1732.

Full text
Abstract:
This thesis presents the area under the ROC (Receiver Operating Characteristic) curve, abbreviated AUC, as an alternative measure for evaluating the predictive performance of ANN (Artificial Neural Network) classifiers. Conventionally, neural networks are trained until the total error converges to zero, which may give rise to over-fitting. To ensure that they do not overfit the training data and then fail to generalize to new data, it appears effective to stop training as early as possible once the AUC is sufficiently large, by integrating ROC/AUC analysis into the training process. In order to reduce the learning costs associated with imbalanced data sets of uneven class distribution, random sampling and k-means clustering are implemented to draw a smaller subset of representatives from the original training data set. Finally, a confidence interval for the AUC is estimated with a non-parametric approach.
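
A minimal sketch of the idea, stopping training as soon as the validation AUC is sufficiently large, might look as follows; the data, the scikit-learn MLP stand-in, and the AUC target of 0.80 are all assumptions for illustration:

    import warnings
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import roc_auc_score

    warnings.filterwarnings("ignore")   # max_iter=1 warns once per epoch

    rng = np.random.default_rng(4)
    X = rng.normal(size=(600, 10))
    y = (X[:, 0] + X[:, 1] + rng.normal(size=600) > 0).astype(int)
    X_tr, y_tr, X_val, y_val = X[:400], y[:400], X[400:], y[400:]

    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1, warm_start=True)
    target_auc = 0.80
    for epoch in range(200):
        net.fit(X_tr, y_tr)             # warm_start: one more epoch per call
        auc = roc_auc_score(y_val, net.predict_proba(X_val)[:, 1])
        if auc >= target_auc:           # stop early once the AUC is large enough
            break
    print(epoch, auc)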
APA, Harvard, Vancouver, ISO, and other styles
40

CASELLA, MARIALUISA. "Sviluppo di un metodo per il dosaggio plasmatico del busulfan mediante HPLC-ESI-MS/MS in studi di farmacocinetica su pazienti talassemici." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2010. http://hdl.handle.net/2108/1212.

Full text
Abstract:
Optimisation of busulfan dosage in patients undergoing bone marrow transplantation is recommended in order to reduce the toxic effects associated with high drug exposure. Variation in the area under the concentration/time curve (AUC) results in a risk of over- or under-treatment, with excess risk of toxicity or relapse. A rapid, sensitive and specific assay for the detection of busulfan in human plasma was developed. The assay is based on liquid-liquid extraction with ethyl acetate and detection by high-performance liquid chromatography with electrospray ionization and tandem mass spectrometry (HPLC-ESI-MS/MS). Deuterated busulfan was used as the internal standard. The method was linear over the range 39-2500 ng/mL, with r2 > 0.99 and a run time of only 5 minutes (busulfan retention time of 1.62 minutes). The inter-day and intra-day precision were in the ranges 1.01-3.72% and 0.35-4.32%, respectively. The recovery was >70%. The limits of detection and quantification were 6 and 10 ng/mL, respectively. The assay requires only 200 µL of plasma for the analysis. The validated method was successfully applied to analyze plasma samples from children with thalassemia undergoing a conditioning regimen with busulfan and submitted to bone marrow transplantation. The method permits dose correction, allowing better dosing adjustment towards the target busulfan level. It is currently used to analyze the plasma concentrations of busulfan after intravenous administration and is applied in therapeutic drug monitoring.
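
The exposure metric at the centre of the study, the AUC, is conventionally computed from sparse plasma samples with the trapezoidal rule and extrapolated to infinity using the terminal elimination rate constant. A minimal Python sketch with hypothetical sampling times and concentrations (not data from this study):

    import numpy as np

    # Hypothetical busulfan samples: time (h) and plasma concentration (ng/mL).
    t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0])
    c = np.array([0.0, 950.0, 1250.0, 900.0, 600.0, 400.0, 180.0])

    auc_t = np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2)      # linear trapezoidal rule
    k_e = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]        # terminal log-linear slope
    auc_inf = auc_t + c[-1] / k_e                          # extrapolation to infinity
    print(auc_t, auc_inf)                                  # ng*h/mL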
APA, Harvard, Vancouver, ISO, and other styles
41

Hansén, Jacob, and Axel Gustafsson. "A Study on Comparison Websites in the Airline Industry and Using CART Methods to Determine Key Parameters in Flight Search Conversion." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254309.

Full text
Abstract:
This bachelor thesis in applied mathematics and industrial engineering and management aimed to identify relationships between search parameters in flight comparison search engines and the exit conversion rate, while also investigating how the emergence of such comparison search engines has impacted the airline industry. To identify such relationships, several classification models were employed in conjunction with several sampling methods to produce a predictive model in the program R. To investigate the impact of the emergence of comparison websites, Porter's 5 forces and a SWOT analysis were employed to analyze the findings of a literature study and a qualitative interview. The classification models developed performed poorly with regard to several assessment metrics, suggesting that there was little to no significant relationship between the search parameters investigated and the exit conversion rate. Porter's 5 forces and the SWOT analysis suggested that the airline industry has become more competitive and that airlines which do not manage to adapt to this changing market environment will experience decreasing profitability.
APA, Harvard, Vancouver, ISO, and other styles
42

Ho, Dan. "BAYESIAN-DERIVED VANCOMYCIN AUC24H THRESHOLD FOR NEPHROTOXICITY IN SPECIAL POPULATIONS." Scholarly Commons, 2021. https://scholarlycommons.pacific.edu/uop_etds/3760.

Full text
Abstract:
A Bayesian-derived 24-hour area under the concentration-time curve over minimum inhibitory concentration from broth microdilution (AUC24h/MICBMD) ratio of 400 to 600 is recommended as the new monitoring parameter for vancomycin to optimize efficacy and minimize nephrotoxicity. The AUC24h threshold of 600 mg*h/L for nephrotoxicity was extrapolated from studies that assessed the general population. It is unclear whether this upper threshold holds or varies in special populations such as critically ill patients, obese patients, patients with preexisting renal disease, and patients on concomitant nephrotoxins. The purpose of this study is to investigate the generalizability of the proposed vancomycin AUC24h threshold of 600 mg*h/L for nephrotoxicity. The objective is to determine the optimal Bayesian-derived AUC24h threshold to minimize vancomycin-associated nephrotoxicity in special populations such as critically ill patients, obese patients, patients with preexisting renal disease, and patients on concomitant loop diuretics, ACEIs, ARBs, NSAIDs, aminoglycosides, piperacillin-tazobactam, and IV contrast dyes. The study design is a single-center, retrospective cohort study. For each patient, nephrotoxicity was assessed and the Bayesian-derived AUC24h was estimated. Using classification and regression tree (CART) analysis, the AUC24h threshold for nephrotoxicity was determined for each special population that had at least ten nephrotoxic patients. The predictive performance (positive predictive value [PPV], negative predictive value [NPV], sensitivity, specificity, and area under the receiver operating characteristic [ROC] curve) of each CART-derived threshold was then compared with that of the guideline threshold, with PPV and sensitivity given greater weight. Of the 336 patients, 29 (8.6%) developed nephrotoxicity after initiating vancomycin. Among the special populations of interest, critically ill patients, obese patients, patients with preexisting renal disease, and patients on concomitant loop diuretics included at least ten nephrotoxic patients and were thus further analyzed to determine the CART-derived AUC24h thresholds. The CART-derived AUC24h thresholds were 544 mg*h/L for critically ill patients (n=116), 586 mg*h/L for obese patients (n=111), 539 mg*h/L for patients with preexisting renal disease (n=54), and 543 mg*h/L for patients on concomitant loop diuretics (n=126). Compared with the guideline threshold of 600 mg*h/L, the CART-derived thresholds for critically ill patients, patients with preexisting renal disease, and patients on concomitant loop diuretics had comparable PPVs but significantly higher sensitivities, whereas the CART-derived threshold for obese patients did not differ significantly in PPV, NPV, sensitivity, specificity, or area under the ROC curve. For critically ill patients, patients with preexisting renal disease, and patients on concomitant loop diuretics, lower vancomycin AUC24h thresholds of 544 mg*h/L, 539 mg*h/L, and 543 mg*h/L, respectively, may be considered to minimize the risk of nephrotoxicity. On the other hand, this study supports the continued use of the guideline threshold of 600 mg*h/L to minimize the risk of nephrotoxicity in obese patients.
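
The threshold comparison rests on simple confusion-matrix quantities. A minimal Python sketch of how the PPV and sensitivity of an AUC24h cutoff could be computed, on simulated exposures and toxicity rather than the study's patient data:

    import numpy as np

    def threshold_performance(auc24h, nephrotoxic, threshold):
        # PPV and sensitivity of flagging patients with AUC24h >= threshold.
        flagged = auc24h >= threshold
        tp = np.sum(flagged & nephrotoxic)
        return tp / flagged.sum(), tp / nephrotoxic.sum()

    rng = np.random.default_rng(5)
    auc24h = rng.normal(500, 80, 336)                   # hypothetical exposures
    risk = 1 / (1 + np.exp(-(auc24h - 620) / 40))       # toxicity more likely at high AUC
    nephro = rng.random(336) < risk
    print(threshold_performance(auc24h, nephro, 600))   # guideline threshold
    print(threshold_performance(auc24h, nephro, 544))   # a CART-style lower threshold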
APA, Harvard, Vancouver, ISO, and other styles
43

Faccenda, Francesco. "Advanced audio algorithms for enhancing comfort in automotive environments." Doctoral thesis, Università Politecnica delle Marche, 2016. http://hdl.handle.net/11566/243140.

Full text
Abstract:
Acoustical comfort in automotive environments is a relevant issue. Conversations among passengers can be assisted by suitable amplification systems, but acoustic coupling can give rise to feedback and echo effects. Speech intelligibility may be degraded, and high channel gains may lead to the Larsen effect. Acoustic feedback and echo cancellation methods are needed to keep the system stable. In this work, a hybrid HW/SW test bench for AFC algorithms, based on the TMS320C6748 processor, is proposed and tested with the PEM-AFROW and Suppressor-PEM solutions. Several experiments confirmed the suitability of the framework under diverse acoustic conditions. The dual-channel scenario has also been taken into account, where echo effects must be considered as well; voice-activity and double-talk detectors have been included, and computer simulations have shown the effectiveness of the approach. Comfort in automotive environments is also degraded by noise. Active noise control solutions can improve sound quality by playing a proper anti-noise signal through secondary sources. Narrow-band approaches have been widely investigated because of their simplicity and effectiveness in cancelling tonal noise. In this work a robust algorithm based on narrow-band FxLMS is also proposed. Simulations show the behaviour of the proposed algorithm under stress conditions, proving that its performance is comparable with that of the classic narrow-band approach. Adaptive filters based on Kalman control theory have been investigated as well, as an alternative to FxLMS-based algorithms. The Kalman approach can overcome some problems of the latter, such as slow convergence and high sensitivity to eigenvalue spread, at the cost of higher computational requirements. However, narrow-band variants significantly reduce the number of taps needed and still prove competitive in a rich set of scenarios, including automotive ones. Various possible solutions are finally presented and their effectiveness is compared with that of their FxLMS-based counterparts.
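
For a single tonal component, a narrow-band FxLMS controller reduces to adapting the amplitude and phase of a sine/cosine pair at the target frequency, with the reference filtered through a model of the secondary path before it drives the weight update. The Python sketch below uses a hypothetical 120 Hz tone and an assumed known secondary path; it illustrates the classic algorithm, not the robust variant proposed in this thesis:

    import numpy as np

    rng = np.random.default_rng(6)
    fs, n = 2000, 8000
    t = np.arange(n) / fs
    noise = np.sin(2 * np.pi * 120 * t)             # tonal noise at the error mic
    s_path = np.array([0.0, 0.6, 0.3, 0.1])         # secondary path (assumed known)

    # Sine/cosine reference pair at 120 Hz and its filtered version.
    ref = np.stack([np.sin(2 * np.pi * 120 * t), np.cos(2 * np.pi * 120 * t)])
    ref_f = np.stack([np.convolve(r, s_path)[:n] for r in ref])

    w, mu = np.zeros(2), 0.01
    err = np.zeros(n)
    for k in range(n):
        # Error mic hears noise plus anti-noise through the secondary path
        # (narrow-band approximation via the filtered reference).
        err[k] = noise[k] + w @ ref_f[:, k]
        w -= mu * err[k] * ref_f[:, k]              # FxLMS weight update
    print(np.std(err[:500]), np.std(err[-500:]))    # residual shrinks as w adapts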
APA, Harvard, Vancouver, ISO, and other styles
44

SIRI, GIACOMO. "The use of the Joint Models to improve the accuracy of prognostication of death in patients with heart failure and reduced ejection fraction (HFrEF)." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1057805.

Full text
Abstract:
The work presented in this thesis was developed during a scholarship at the Scientific Directorate - Unit of Biostatistics of the Galliera Hospital in Genoa, under the supervision of Dr. Matteo Puntoni. The scholarship was partially supported by a grant from the Ministry of Health, Italy, "Bando Ricerca Finalizzata - Giovani Ricercatori" (Project code: GR-2013-02355479), won by Dr. Puntoni for conducting a cancer research study. The main objective of my research was to apply the Joint Model for longitudinal and survival data to improve the dynamic prediction of cardiovascular disease in patients undergoing cancer treatment. These patients are usually followed after the start of therapy with several visits, in the course of which different longitudinal data are collected. These data are usually collected and interpreted by clinicians, but not in a systematic way; the innovation of my project consisted in a more formal use of these data in a statistical model. The Joint Model is essentially based on the simultaneous modelling of a linear mixed model for the longitudinal data and a survival model for the probability of an event. The utility of this model is twofold: on one hand it links the change of a longitudinal measurement to a change in the risk of an event; on the other hand, the prediction of survival probabilities using the Joint Model can be updated whenever a new measurement is taken. Unfortunately, the clinical study on cancer therapy for which the project was conceived is still ongoing and its longitudinal data are not yet available, so we applied the developed methods based on the Joint Model to another dataset of similar clinical interest. The case study presented in Chapter 6 of this thesis grew out of a meeting between Dr. Puntoni and me and Dr. Marco Canepa of the Cardiovascular Disease Unit of the San Martino Hospital in Genoa. The latter needed to prove that the longitudinal data collected in patients after heart failure could be used to improve the prognostication of death and, more generally, patient management and care with personalized therapy, which could be better calibrated by dynamically updating the prognosis of patients based on the longitudinal data provided at each follow-up visit. The Joint Model for longitudinal and survival data solves the problem of simultaneously analysing the biomarkers collected at each follow-up visit and dynamically updating the survival probabilities each time new measurements are collected (see Chapter 4). The next step, developed in Chapter 5, was to find a statistical index that is simple to understand and practical for clinicians, but also methodologically adequate to assess and prove that the longitudinal data improve the prognostication of death. Two indexes seemed most suitable: the area under the Receiver Operating Characteristic curve (AUC-ROC), to assess the prediction capability of the Joint Model, and the Net Reclassification Improvement (NRI), to evaluate the improvement in prognostication in comparison with other approaches commonly used in clinical studies. In Section 5.3, a new definition of time-dependent AUC-ROC and time-dependent NRI in the Joint Model context is given.
Although a function to derive the AUC after fitting a Joint Model was present in the literature, we needed to reformulate it and implement it in the statistical software R to make it comparable with the index derived from commonly used survival models, such as the Weibull model. Regarding the NRI, no such index was present in the literature: some methods and functions had been developed for the binary and survival contexts, but none for the Joint Model. A new definition of time-dependent NRI is presented in Section 5.3.2 and used to compare the common Weibull survival model and the Joint Model. This thesis is divided into 6 chapters. Chapters 1 and 2 are preparatory to the introduction of the Joint Model in Chapter 3. In particular, Chapter 1 is an introduction to the analysis of longitudinal data with the use of Linear Mixed Models, while Chapter 2 presents the concepts and models from survival analysis used in the thesis. In Chapter 3 the elements introduced in the first two chapters are joined to define the Joint Model for longitudinal and survival data, following the approach proposed by Rizopoulos (2012). Chapter 4 introduces the main ideas behind dynamic prediction in the Joint Model context. In Chapter 5 the relevant notions of prediction capability are introduced in relation to the indexes AUC and NRI: initially these two indexes are presented for a binary outcome, and then it is shown how they change when the outcome is the time to an event of interest. Finally, the definitions of time-dependent AUC and NRI are formulated in the Joint Model context. The case study is presented in Chapter 6, along with the strengths and limitations of the use of the Joint Model in clinical studies.
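
The continuous NRI that Section 5.3.2 generalizes can be illustrated compactly: it rewards a new model for moving predicted risk upwards in patients who experience the event and downwards in those who do not, relative to an old model. A minimal Python sketch on simulated risks (the thesis's time-dependent, Joint-Model version is more involved):

    import numpy as np

    def continuous_nri(p_new, p_old, event):
        # Net proportion of correct risk movements in events plus non-events.
        up, down = p_new > p_old, p_new < p_old
        nri_events = up[event].mean() - down[event].mean()
        nri_nonevents = down[~event].mean() - up[~event].mean()
        return nri_events + nri_nonevents

    rng = np.random.default_rng(7)
    event = rng.random(300) < 0.3
    p_old = np.clip(0.3 + rng.normal(scale=0.2, size=300), 0.01, 0.99)
    shift = 0.1 * (event.astype(float) - 0.5)       # new model is mildly better
    p_new = np.clip(p_old + shift + rng.normal(scale=0.05, size=300), 0.01, 0.99)
    print(continuous_nri(p_new, p_old, event))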
APA, Harvard, Vancouver, ISO, and other styles
45

Glisson, Wesley J., Courtney J. Conway, Christopher P. Nadeau, and Kathi L. Borgmann. "Habitat models to predict wetland bird occupancy influenced by scale, anthropogenic disturbance, and imperfect detection." WILEY, 2017. http://hdl.handle.net/10150/625200.

Full text
Abstract:
Understanding species-habitat relationships for endangered species is critical for their conservation. However, many studies have limited value for conservation because they fail to account for habitat associations at multiple spatial scales, anthropogenic variables, and imperfect detection. We addressed these three limitations by developing models for an endangered wetland bird, Yuma Ridgway's rail (Rallus obsoletus yumanensis), that examined how the spatial scale of environmental variables, inclusion of anthropogenic disturbance variables, and accounting for imperfect detection in validation data influenced model performance. These models identified associations between environmental variables and occupancy. We used bird survey and spatial environmental data at 2473 locations throughout the species' U.S. range to create and validate occupancy models and produce predictive maps of occupancy. We compared habitat-based models at three spatial scales (100, 224, and 500 m radii buffers) with and without anthropogenic disturbance variables using validation data adjusted for imperfect detection and an unadjusted validation dataset that ignored imperfect detection. The inclusion of anthropogenic disturbance variables improved the performance of habitat models at all three spatial scales, and the 224-m-scale model performed best. All models exhibited greater predictive ability when imperfect detection was incorporated into validation data. Yuma Ridgway's rail occupancy was negatively associated with ephemeral and slow-moving riverine features and high-intensity anthropogenic development, and positively associated with emergent vegetation, agriculture, and low-intensity development. Our modeling approach accounts for common limitations in modeling species-habitat relationships and creating predictive maps of occupancy probability and, therefore, provides a useful framework for other species.
APA, Harvard, Vancouver, ISO, and other styles
46

Busson, Francois, Jean-Guy Pierozak, Hugues Richard, and Gerard Kipfer. "TOWARDS A NEW TRACKING ARCHITECTURE." International Foundation for Telemetering, 2016. http://hdl.handle.net/10150/624267.

Full text
Abstract:
A telemetry facility may connect numerous telemetry receivers to a single tracking antenna, depending on the number of TM channels involved in the test and on the required redundancy. The tracking data, i.e. the AM normalized analog signals extracted by the receivers from the TM signal and the AGC analog signals, are sent to the Antenna Control Unit (ACU) for tracking error calculation. In some telemetry facilities the number of cables between receivers and ACU becomes large, and because the tracking signals are analog, the cable length must be limited. This paper proposes a new tracking architecture that moves from analog to digital links between receivers and ACU, with the following main benefits: keeping the capability to acquire tracking data (AM & AGC) from several telemetry receivers; having more flexibility for integration; improving interoperability; and providing simultaneous tracking errors for enhanced tracking algorithms, for example for C-band tracking improvement.
APA, Harvard, Vancouver, ISO, and other styles
47

Plch, Vít. "Detekce fibrilace síní v EKG." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-402125.

Full text
Abstract:
This diploma thesis deals with the detection of atrial fibrillation from HRV, classification of the Poincaré map, and finally the division of records into two groups, one with detected atrial fibrillation and one without. The result is a decision on which variables are statistically significant for the identification of atrial fibrillation and which are not, together with a classification of the ECG signals.
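
The standard quantitative descriptors of a Poincaré map are SD1 and SD2, the spread of successive RR-interval pairs perpendicular to and along the identity line; atrial fibrillation typically inflates SD1. A minimal Python sketch on simulated RR series (hypothetical values, not the thesis's data or final feature set):

    import numpy as np

    def poincare_sd1_sd2(rr):
        # SD1: short-term variability; SD2: long-term variability.
        x, y = rr[:-1], rr[1:]
        sd1 = np.std((y - x) / np.sqrt(2))
        sd2 = np.std((y + x) / np.sqrt(2))
        return sd1, sd2

    rng = np.random.default_rng(8)
    rr_sinus = 0.8 + 0.02 * rng.normal(size=500)   # regular rhythm, seconds
    rr_afib = 0.8 + 0.15 * rng.normal(size=500)    # highly irregular rhythm
    print(poincare_sd1_sd2(rr_sinus))
    print(poincare_sd1_sd2(rr_afib))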
APA, Harvard, Vancouver, ISO, and other styles
48

Pattiam, Giriprakash Pavithran. "Systemic Identification of Radiomic Features Resilient to Batch Effects and Acquisition Variations for Diagnosis of Active Crohn's Disease on CT Enterography." Cleveland State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=csu1629542175523398.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Mervin, Lewis. "Improved in silico methods for target deconvolution in phenotypic screens." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/283004.

Full text
Abstract:
Target-based screening projects for bioactive (orphan) compounds have been shown in many cases to be insufficiently predictive of in vivo efficacy, leading to attrition in clinical trials. Phenotypic screening has hence undergone a renaissance in both academia and the pharmaceutical industry, partly for this reason. One key shortcoming of this paradigm shift is that the protein targets modulated need to be elucidated subsequently, which is often a costly and time-consuming procedure. In this work, we have explored both improved methods and real-world case studies of how computational methods can help in the target elucidation of phenotypic screens. One limitation of previous methods has been the ability to assess the applicability domain of the models, that is, when the assumptions made by a model are fulfilled and which input chemicals are reliably appropriate for the models. Hence, a major focus of this work was to explore methods for the calibration of machine learning algorithms using Platt Scaling, Isotonic Regression Scaling and Venn-Abers Predictors, since the probabilities from well-calibrated classifiers can be interpreted at a confidence level and predictions specified at an acceptable error rate. Additionally, many current protocols only offer probabilities for affinity, so another key area for development was to expand the target prediction models with functional prediction (activation or inhibition). This extra level of annotation is important since the activation or inhibition of a target may positively or negatively impact the phenotypic response in a biological system. Furthermore, many existing methods do not utilize the wealth of bioactivity information held for orthologue species. We therefore also focused on an in-depth analysis of orthologue bioactivity data and its relevance and applicability to expanding compound and target bioactivity space for predictive studies. The realized protocol was trained on 13,918,879 compound-target pairs, comprises 1,651 targets, and has been made available for public use on GitHub. The methodology was subsequently applied to aid the target deconvolution of AstraZeneca phenotypic readouts, in particular the rationalization of cytotoxicity and cytostaticity in the High-Throughput Screening (HTS) collection. Results from this work highlighted which targets are frequently linked to the cytotoxicity and cytostaticity of chemical structures, and provided insight into which compounds to select or remove from the collection for future screening projects. Overall, this project has furthered the field of in silico target deconvolution by improving the performance and applicability of current protocols and by rationalizing cytotoxicity, which has been shown to influence attrition in clinical trials.
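
Of the calibration methods explored, Platt scaling is the simplest to sketch: fit a sigmoid from raw classifier scores to outcomes on held-out data, then map new scores through it. A minimal Python illustration on hypothetical scores, with a plain logistic fit standing in for Platt's original regularized procedure:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def platt_scale(raw_scores, labels):
        # Learn a sigmoid mapping raw scores to calibrated probabilities.
        lr = LogisticRegression().fit(raw_scores.reshape(-1, 1), labels)
        return lambda s: lr.predict_proba(np.asarray(s).reshape(-1, 1))[:, 1]

    rng = np.random.default_rng(9)
    labels = rng.integers(0, 2, 500)
    raw = 2.0 * labels - 1.0 + rng.normal(size=500)    # hypothetical SVM-style margins
    calibrate = platt_scale(raw[:400], labels[:400])   # held-out calibration split
    print(calibrate(raw[400:410]))                     # calibrated probabilities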
APA, Harvard, Vancouver, ISO, and other styles
50

Mackových, Marek. "Analýza experimentálních EKG." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-241981.

Full text
Abstract:
This thesis analyzes experimental ECG records acquired from isolated rabbit hearts and aims to describe the ECG changes caused by ischemia and left ventricular hypertrophy. The theoretical part analyzes the problems in evaluating the ECG during ischemia and hypertrophy and describes the experimental ECG recordings. It is followed by a practical section that describes the method for calculating morphological parameters, applies ROC analysis to evaluate their suitability for classifying hypertrophy, and ends with the classification itself.
APA, Harvard, Vancouver, ISO, and other styles