To view the other types of publications on this topic, follow this link: Dose prediction.

Dissertations on the topic "Dose prediction"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the top 50 dissertations for research on the topic "Dose prediction".

Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read an online annotation of the work, if the relevant parameters are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Eriksson, Niclas. „On the Prediction of Warfarin Dose“. Doctoral thesis, Uppsala universitet, Klinisk farmakologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-172864.

Full text of the source
Annotation:
Warfarin is one of the most widely used anticoagulants in the world. Treatment is complicated by a large inter-individual variation in the dose needed to reach adequate levels of anticoagulation, i.e. INR 2.0–3.0. The objective of this thesis was to evaluate which factors, mainly genetic but also non-genetic, affect the response to warfarin in terms of required maintenance dose, efficacy and safety, with special focus on warfarin dose prediction. Through candidate gene and genome-wide studies, we have shown that the genes CYP2C9 and VKORC1 are the major determinants of warfarin maintenance dose. By combining the SNPs CYP2C9*2, CYP2C9*3 and VKORC1 rs9923231 with the clinical factors age, height, weight, ethnicity, amiodarone and use of inducers (carbamazepine, phenytoin or rifampicin) into a prediction model (the IWPC model), we can explain 43% to 51% of the variation in warfarin maintenance dose. Patients requiring doses < 29 mg/week and doses ≥ 49 mg/week benefitted the most from pharmacogenetic dosing. Further, we have shown that the difference across ethnicities in the percentage of variance explained by VKORC1 was largely accounted for by the allele frequency of rs9923231. Other novel genes affecting maintenance dose (NEDD4 and DDHD1), as well as the replicated CYP4F2 gene, have small effects on dose predictions and are not likely to be cost-effective unless inexpensive genotyping is available. Three types of prediction models for warfarin dosing exist: maintenance dose models, loading dose models and dose revision models. The combination of these three models is currently being used in the warfarin treatment arm of the European Pharmacogenetics of Anticoagulant Therapy (EU-PACT) study. Other clinical trials aiming to prove the clinical validity and utility of pharmacogenetic dosing are also underway. The future of pharmacogenetic warfarin dosing relies on results from these ongoing studies, the availability of inexpensive genotyping and the cost-effectiveness of pharmacogenetically driven warfarin dosing compared with new oral anticoagulant drugs.
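The abstract above describes a prediction model that combines genotype counts with clinical covariates. A minimal sketch of such a pharmacogenetic dose model is shown below; the coefficients are invented for illustration only and are not the published, validated IWPC values (which are fitted on a square-root-of-dose scale, a convention borrowed here).

```python
# Illustrative sketch of a pharmacogenetic maintenance-dose model in the
# spirit of the IWPC model described above. All coefficients are INVENTED
# placeholders; the published IWPC model uses different, validated values.

def predict_weekly_dose_mg(age_decades, height_cm, weight_kg,
                           vkorc1_variant_alleles, cyp2c9_star2, cyp2c9_star3,
                           amiodarone=False, enzyme_inducer=False):
    """Return a predicted warfarin maintenance dose in mg/week."""
    sqrt_dose = (8.0
                 - 0.3 * age_decades                  # older patients need less
                 + 0.01 * height_cm
                 + 0.01 * weight_kg
                 - 0.9 * vkorc1_variant_alleles       # rs9923231 variant alleles
                 - 0.5 * cyp2c9_star2
                 - 1.0 * cyp2c9_star3
                 - 0.6 * (1 if amiodarone else 0)
                 + 0.7 * (1 if enzyme_inducer else 0))
    return max(sqrt_dose, 0.0) ** 2                   # back-transform sqrt scale

dose = predict_weekly_dose_mg(age_decades=6, height_cm=175, weight_kg=80,
                              vkorc1_variant_alleles=1,
                              cyp2c9_star2=0, cyp2c9_star3=1)
```

Each additional VKORC1 variant allele or CYP2C9 loss-of-function allele lowers the predicted dose, mirroring the direction of effect reported in the thesis.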
APA, Harvard, Vancouver, ISO und andere Zitierweisen
2

Skarpman Munter, Johanna. „Dose-Volume Histogram Prediction using Kernel Density Estimation“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-155893.

Annotation:
Dose plans developed for stereotactic radiosurgery are assessed by studying so-called Dose-Volume Histograms. Since it is hard to compare an individual dose plan with dose plans created for other patients, much experience and knowledge is lost. This thesis therefore investigates a machine learning approach to predicting such Dose-Volume Histograms for a new patient, by learning from previous dose plans. The training set is chosen based on similarity in terms of tumour size. The signed distances between voxels in the considered volume and the tumour boundary determine the probability of receiving a certain dose in the volume. By using a method based on Kernel Density Estimation, the intrinsic probabilistic properties of a Dose-Volume Histogram are exploited. Dose-Volume Histograms for the brainstem of 22 Acoustic Schwannoma patients, treated with the Gamma Knife, have been predicted, solely based on each patient's individual anatomical disposition. The method has shown higher prediction accuracy than a "quick-and-dirty" approach implemented for comparison. Analysis of the bias and variance of the method also indicates that it captures the main underlying factors behind individual variations. However, the degree of variability in dose planning results for the Gamma Knife has turned out to be very limited. Therefore, the usefulness of a data-driven dose planning tool for the Gamma Knife has to be further investigated.
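The core idea above is that a voxel's signed distance to the tumour boundary drives its dose probability, estimated with kernels over previously planned cases, from which a DVH is accumulated. A toy sketch under that reading (the training pairs, bandwidth, and dose levels are invented):

```python
import math

# Minimal sketch of the kernel-based idea: weight training voxels'
# (signed distance to tumour boundary, dose) pairs by a Gaussian kernel
# to estimate the dose at a new voxel, then accumulate a DVH.
# Training data and bandwidth h are invented for illustration.

def gauss(u, h):
    return math.exp(-0.5 * (u / h) ** 2)

def predict_voxel_dose(distance, training_pairs, h=2.0):
    """Kernel-weighted (Nadaraya-Watson) dose estimate at a signed distance."""
    wsum = sum(gauss(distance - d, h) for d, _ in training_pairs)
    return sum(gauss(distance - d, h) * dose
               for d, dose in training_pairs) / wsum

def dose_volume_histogram(doses, levels):
    """Fraction of voxels receiving at least each dose level."""
    n = len(doses)
    return [sum(1 for d in doses if d >= lv) / n for lv in levels]

# Invented training data: dose falls off with distance outside the tumour
# (negative distances are inside the tumour).
training = [(-3, 20.0), (-1, 18.0), (0, 15.0), (2, 8.0), (5, 3.0)]
new_voxel_distances = [-2, 0, 1, 3, 6]
predicted = [predict_voxel_dose(x, training) for x in new_voxel_distances]
dvh = dose_volume_histogram(predicted, levels=[5.0, 10.0, 15.0])
```

By construction the cumulative DVH is non-increasing in the dose level, which is the "intrinsic probabilistic property" a KDE-based approach can exploit directly.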
3

Nilsson, Viktor. „Prediction of Dose Probability Distributions Using Mixture Density Networks“. Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273610.

Annotation:
In recent years, machine learning has come into use in external radiation therapy treatment planning. This involves automatic generation of treatment plans based on CT scans and other spatial information such as the location of tumors and organs. The utility lies in relieving clinical staff from the labor of manually or semi-manually creating such plans. Rather than predicting a deterministic plan, there is great value in modeling it stochastically, i.e. predicting a probability distribution of dose from CT scans and delineated biological structures. The stochasticity inherent in the RT treatment problem stems from the fact that a range of different plans can be adequate for a patient. The particular distribution can be thought of as the prevalence of preferences among clinicians. Having more information about the range of possible plans represented in one model means that there is more flexibility in forming a final plan. Additionally, the model will be able to reflect the potentially conflicting clinical trade-offs; these will occur as multimodal distributions of dose in areas where there is high variance. At RaySearch, the current method for doing this uses probabilistic random forests, an augmentation of the classical random forest algorithm. A current direction of research is learning the probability distribution using deep learning. A novel parametric approach to this is letting a suitable deep neural network approximate the parameters of a Gaussian mixture model in each volume element. Such a neural network is known as a mixture density network. This thesis establishes theoretical results for artificial neural networks, mainly the universal approximation theorem, applied to the activation functions used in the thesis. It then proceeds to investigate the power of deep learning in predicting dose distributions, both deterministically and stochastically.
The primary objective is to investigate the feasibility of mixture density networks for stochastic prediction. The research question is the following: when U-nets and mixture density networks are combined to predict stochastic doses, does there exist such a network powerful enough to detect and model bimodality? The experiments and investigations performed in this thesis demonstrate that there is indeed such a network.
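The mixture-density-network idea above amounts to mapping a network's raw per-voxel outputs to valid Gaussian-mixture parameters (softmax for weights, exponential for widths), which lets a single voxel carry a bimodal dose distribution. A small sketch of that parameterization, with placeholder numbers standing in for real network activations:

```python
import math

# Sketch of the mixture density network output head: raw activations for one
# voxel are mapped to Gaussian mixture parameters over dose. The raw values
# below are placeholders, not outputs of a trained network.

def gmm_params(raw_weights, raw_means, raw_log_sigmas):
    z = [math.exp(w) for w in raw_weights]
    total = sum(z)
    weights = [w / total for w in z]                   # softmax -> sums to 1
    sigmas = [math.exp(ls) for ls in raw_log_sigmas]   # exp -> positive widths
    return weights, list(raw_means), sigmas

def gmm_pdf(x, weights, means, sigmas):
    """Density of the Gaussian mixture at dose value x."""
    return sum(wi * math.exp(-0.5 * ((x - mi) / si) ** 2)
               / (si * math.sqrt(2 * math.pi))
               for wi, mi, si in zip(weights, means, sigmas))

# Two components with well-separated means model a bimodal voxel dose,
# e.g. reflecting two conflicting clinical trade-offs.
w, m, s = gmm_params([0.0, 0.0], [20.0, 60.0], [0.0, 0.0])
```

With well-separated means the density peaks near 20 and 60 and dips between them, which is exactly the bimodality the research question asks the network to capture.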
4

Harris, Shelley A. „The development and validation of a pesticide dose prediction model“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0002/NQ41170.pdf.

5

Irving, Benjamin. „Radiation dose measurement and prediction for linear slit scanning radiography“. Master's thesis, University of Cape Town, 2008. http://hdl.handle.net/11427/3251.

Annotation:
Includes abstract.
Includes bibliographical references (leaves 112-117).
This study describes dose measurements made for linear slit scanning radiography (LSSR) and a dose prediction model that was developed for LSSR. The measurement and calculation methods used for determining entrance dose and effective dose (E) in conventional X-ray imaging systems were verified for use with LSSR. Entrance dose and E were obtained for LSSR and compared to dose measurements on conventional radiography units. Entrance dose measurements were made using an ionisation chamber and dosemeter; E was calculated from these entrance dose measurements using a Monte Carlo simulator. Comparisons with data from around the world showed that for most examinations the doses obtained for LSSR were considerably lower than those of conventional radiography units for the same image quality. Reasons for the low dose obtained with LSSR include scatter reduction and the beam geometry of LSSR. These results have been published as two papers in international peer-reviewed journals. A new method to calculate entrance dose and effective dose for LSSR is described in the second part of this report. This method generates the energy spectrum for a particular set of technique factors, simulates a filter through which the beam is attenuated and then calculates entrance dose directly from this energy spectrum. The energy spectrum is then combined with previously generated organ energy absorption data for a standard-sized patient to calculate the effective dose to a standard-sized patient. Energy imparted for different patient thicknesses can then be used to adjust the effective dose to a patient of any size. This method is performed for a large number of slit beams moving across the body in order to more effectively simulate LSSR. This also allows examinations with technique factors that vary for different parts of the anatomy to be simulated. This method was tested against measured data and Monte Carlo simulations. This model was shown to be accurate, while being specifically suited to LSSR and considerably faster than Monte Carlo simulations.
6

Eriksson, Ivar. „Image Distance Learning for Probabilistic Dose–Volume Histogram and Spatial Dose Prediction in Radiation Therapy Treatment Planning“. Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273608.

Annotation:
Construction of radiotherapy treatments for cancer is a laborious and time-consuming task. At the same time, when presented with a treatment plan, an oncologist can quickly judge whether or not it is suitable. This means that the problem of constructing these treatment plans is well suited for automation. This thesis investigates a novel way of automatic treatment planning. The treatment planning system this pipeline is constructed for provides dose mimicking functionality with probability density functions of dose–volume histograms (DVHs) and spatial dose as inputs. Therefore this will be the output of the pipeline. The input is historically treated patient scans, segmentations and spatial doses. The approach involves three modules which are individually replaceable with little to no impact on the remaining two modules. The modules are: an autoencoder as a feature extractor to concretise important features of a patient segmentation, a distance optimisation step to learn a distance in the previously constructed feature space and, finally, a probabilistic spatial dose estimation module using sparse pseudo-input Gaussian processes trained on voxel features. Although performance evaluation in terms of clinical plan quality was beyond the scope of this thesis, numerical results show that the proposed pipeline is successful in capturing salient features of patient geometry as well as predicting reasonable probability distributions for DVH and spatial dose. Its loosely connected nature also gives hope that some parts of the pipeline can be utilised in future work.
7

Patel, Raj B. „Prediction of Human Intestinal Absorption“. Diss., The University of Arizona, 2017. http://hdl.handle.net/10150/624487.

Annotation:
The proposed human intestinal absorption prediction model is applied to over 900 pharmaceuticals and achieves about 82.5% true prediction accuracy. This study provides a screening tool that can differentiate well-absorbed and poorly absorbed drugs in the early stages of drug discovery and development. The model is based on fundamental physicochemical properties and can be applied to virtual compounds. The maximum well-absorbed dose (i.e., the maximum dose that will be more than 50 percent absorbed) calculated using this model can be utilized as a guideline for drug design, synthesis, and pre-clinical studies.
8

Schuler, Paul Joseph. „Polymer dose prediction for sludge dewatering with a belt filter press“. Thesis, Virginia Tech, 1990. http://hdl.handle.net/10919/42227.

Annotation:
This study was undertaken to examine the polymer mixing requirements for sludge dewatering with a belt filter press. This involved correlating full-scale field studies with small-scale laboratory testing. Bench testing involved the use of a high-speed mixer and two sludge dewatering response tests: the capillary suction time test and the time-to-filter test. Full-scale testing measured the belt press response to belt speed, sludge throughput, and polymer dose. Data indicated that the conditioning and dewatering scheme of the three belt filter presses was a low-shear, low-total-mixing-energy operation. The Gt, or total mixing energy, of these operations was in the range of 8,000-12,000. The optimal dose predicted by the bench-scale testing correlated well with the optimal dose for maximum cake solids coming off the belt filter press. Also, the amount of water removed from the sludge with the belt press was largely a function of the type of solids present in the sludge and less a function of the number of rollers or residence time in the press.
Master of Science
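The Gt figure quoted in the abstract is the standard dimensionless mixing parameter: the product of the mean velocity gradient G (in s⁻¹) and the mixing time t (in s). A quick sketch with hypothetical G and t values (the thesis reports only the Gt range, not the individual factors):

```python
# Gt (total mixing energy) is the dimensionless product of the mean
# velocity gradient G (1/s) and the mixing time t (s).
# The G and t values below are hypothetical.

def total_mixing_energy(G_per_s, t_s):
    return G_per_s * t_s

# e.g. G = 500 1/s held for 20 s gives Gt = 10,000, inside the
# 8,000-12,000 low-shear range reported for the belt filter presses.
Gt = total_mixing_energy(500.0, 20.0)
```

Matching the full-scale Gt in a bench-scale mixer is what lets the capillary suction time and time-to-filter tests predict the full-scale optimal polymer dose.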
9

Eriksson, Oskar. „Scenario dose prediction for robust automated treatment planning in radiation therapy“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302568.

Annotation:
Cancer is a group of diseases that are characterized by abnormal cell growth and is considered a leading cause of death globally. There are a number of different cancer treatment modalities, one of which is radiation therapy. In radiation therapy treatment planning, it is important to make sure that enough radiation is delivered to the tumor and that healthy organs are spared, while also making sure to account for uncertainties such as misalignment of the patient during treatment. To reduce the workload on clinics, data-driven automated treatment planning can be used to generate treatment plans for new patients based on previously delivered plans. In this thesis, we propose a novel method for robust automated treatment planning where a deep learning model is trained to deform a dose in accordance with a set of potential scenarios that account for the different uncertainties while maintaining certain statistical properties of the input dose. The predicted scenario doses are then used in a robust optimization problem with the goal of finding a treatment plan that is robust to these uncertainties. The results show that the proposed method for deforming doses yields realistic doses of high quality and that the proposed pipeline can potentially generate doses that conform better to the target than the current state of the art but at the cost of dose homogeneity.
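The robust-optimization step described above can be reduced to a minimax principle: among candidate plans, prefer the one whose worst case over the predicted scenario doses is best. A minimal sketch of that selection rule (the plans and per-scenario penalties are invented placeholders, not outputs of the thesis pipeline):

```python
# Sketch of the minimax idea behind robust planning: evaluate each candidate
# plan under every predicted scenario dose and pick the plan whose worst
# scenario is least bad. Plan names and penalty values are invented.

def worst_case_penalty(scenario_penalties):
    return max(scenario_penalties)   # higher penalty = worse outcome

def robust_choice(plans):
    """plans: {name: [penalty per scenario]} -> name minimizing the worst case."""
    return min(plans, key=lambda name: worst_case_penalty(plans[name]))

# Plan "A" is excellent in most scenarios but fails badly in one;
# plan "B" is uniformly mediocre. The robust rule prefers "B".
plans = {"A": [1.0, 5.0, 2.0], "B": [2.5, 2.6, 2.4]}
best = robust_choice(plans)
```

This is also where the trade-off reported in the results shows up: guarding against the worst scenario (e.g. patient misalignment) can cost some quality, such as dose homogeneity, in the nominal scenario.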
10

McCurdy, Boyd Matthew Clark. „Development of a portal dose image prediction algorithm for arbitrary detector systems“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/NQ62655.pdf.

11

Hellström, Terese. „Deep-learning based prediction model for dose distributions in lung cancer patients“. Thesis, Stockholms universitet, Fysikum, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-196891.

Annotation:
Background: To combat one of the leading causes of death worldwide, lung cancer treatment techniques and modalities are advancing, and the treatment options are becoming increasingly individualized. Modern cancer treatment includes the option for the patient to be treated with proton therapy, which can in some cases spare healthy tissue from excessive dose better than conventional photon radiotherapy. However, to assess the benefit of proton therapy compared to photon therapy, it is necessary to make both treatment plans to get information about the Tumour Control Probability (TCP) and the Normal Tissue Complication Probability (NTCP). This requires extensive treatment planning time and increases the workload for planners. Aim: This project aims to investigate the possibility of automated prediction of the treatment dose distribution using a deep learning network for lung cancer patients treated with photon radiotherapy. This is an initial step towards decreasing the overall planning time; it would allow for efficient estimation of the NTCP for each treatment plan and lower the workload of treatment planning technicians. The purpose of the current work was also to understand which features of the input data and training specifics were essential for producing accurate predictions. Methods: Three different deep learning networks were developed to assess the difference in performance based on the complexity of the input for the network. The deep learning models were applied for predictions of the dose distribution of lung cancer treatment and used data from 95 patient treatments. The networks were trained with a U-net architecture using input data from the planning Computed Tomography (CT) and volume contours to produce an output of the dose distribution of the same image size. The network performance was evaluated based on the error of the predicted mean dose to Organs At Risk (OAR) as well as the shape of the predicted Dose-Volume Histogram (DVH) and individual dose distributions. Results: The optimal input combination was the CT scan together with the lung, mediastinum envelope and Planning Target Volume (PTV) contours. The model predictions showed a homogeneous dose distribution over the PTV with a steep fall-off seen in the DVH. However, the dose distributions had a blurred appearance, and the predictions of the doses to the OARs were therefore not as accurate as those of the doses to the PTV compared to the manual treatment plans. The network trained with the Hounsfield unit input of the CT scan performed similarly to the network trained without it. Conclusions: As one of the first attempts to assess the potential of a deep learning-based prediction model for the dose distribution based on minimal input, this study shows promising results. To develop this kind of model further, a larger data set would be needed, and the training method could be extended, for example to a generative adversarial network or a more developed U-net architecture.
12

Arvola, Maja. „Deep Learning for Dose Prediction in Radiation Therapy : A comparison study of state-of-the-art U-net based architectures“. Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447081.

Annotation:
Machine learning has shown great potential as a step in automating radiotherapy treatment planning. It can be used for dose prediction, and a popular deep learning architecture for this purpose is the U-net. Since it was proposed in 2015, several modifications and extensions have been proposed in the literature. In this study, three promising modifications are reviewed and implemented for dose prediction on a prostate cancer data set and compared with a 3D U-net as a baseline. The tested modifications are residual blocks, densely connected layers and attention gates. The different models are compared in terms of voxel error, conformity, homogeneity, dose spillage and clinical goals. The results show that the performance was similar in many aspects across the models. The residual blocks model performed similarly to or better than the baseline in almost all evaluations. The attention gates model performed very similarly to the baseline, and the densely connected layers model gave uneven results, often with low dose values in comparison to the baseline. The study also shows the importance of consistent ground truth data and how inconsistencies affect metrics such as the isodose Dice score and the Hausdorff distance.
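Of the three U-net modifications compared above, the residual block is the simplest to state: the block's output is its input plus a learned transformation, which eases gradient flow in deep networks. A minimal sketch, with a placeholder transformation standing in for the real convolutional layers:

```python
# Tiny sketch of the residual-block modification compared in the thesis:
# y = x + F(x), where F is a learned transformation (a convolution stack in
# a real U-net; here a placeholder function on a plain list of values).

def residual_block(x, transform):
    """Apply the skip connection that defines a residual block."""
    return [xi + fi for xi, fi in zip(x, transform(x))]

# With a zero transformation the block reduces to the identity, which is
# why residual networks can be deepened without degrading performance.
identity_out = residual_block([1.0, 2.0, 3.0], lambda v: [0.0] * len(v))
```

Densely connected layers and attention gates modify the U-net differently (feature concatenation and learned gating, respectively), but follow the same pattern of augmenting the baseline block without changing its interface.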
13

Scharin Täng, Margareta. „Importance of cardiac reserve for evaluation and prediction of cardiac function and morbidity assessed by low-dose dobutamine stress echocardiography“. Göteborg: Göteborg University, 2007. http://hdl.handle.net/2077/4426.

14

Taulbee, Timothy Dale. „Measurement and model prediction of proton-recoil track length distributions in NTA film dosimeters for neutron energy spectroscopy and retrospective dose assessment“. Cincinnati, Ohio : University of Cincinnati, 2009. http://www.ohiolink.edu/etd/view.cgi?acc_num=ucin1235764236.

Annotation:
Thesis (Ph.D.)--University of Cincinnati, 2009.
Advisors: Henry Spitz PhD (Committee Chair), Bingjing Su PhD (Committee Member), John Christenson PhD (Committee Member). Title from electronic thesis title page (viewed May 1, 2009). Keywords: NTA; proton-recoil; neutron spectroscopy; dose assessment; track length; Monte Carlo; neutron transport; neutron interactions. Includes abstract. Includes bibliographical references.
15

Onthank, David C. „Prediction of "First Dose in Human" for radiopharmaceuticals/imaging agents based on allometric scaling of pharmacokinetics in pre-clinical animal models“. Link to electronic dissertation, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-011006-132234/.

16

Onthank, David C. „Prediction of "First Dose in Human" for Radiopharmaceuticals/Imaging Agents Based on Allometric Scaling of Pharmacokinetics in Pre-Clinical Animal Models“. Digital WPI, 2006. https://digitalcommons.wpi.edu/etd-dissertations/443.

Annotation:
It is an FDA requirement that the "first in human" dose be based on pre-clinical animal model efficacy and safety testing to ensure a safe entry into Phase I clinical trials. Pre-clinical safety and efficacy models range from mouse to non-human primates. Interspecies scaling of pharmacokinetic parameters is therefore important for predicting drug doses in human clinical trials, although it continues to be less than optimal. Understanding the disposition of the compound in different species through in vitro and in vivo experiments is necessary to ensure appropriate species are selected for human estimates. Data for three imaging agents and a pharmacological stress agent (oncology tumor agent DPC-A80351, thrombus agent DMP-444, infection agent RP-517, and pharmacological stress agent DPC-A78445-00) that entered clinical trials, and an imaging agent being developed (RP845), were assessed for scaling accuracy. Initially, pharmacokinetic data from animal models were used to extrapolate to human through body-weight allometric scaling. Subsequently, the impact of adjusting for plasma protein binding and the impact of metabolic stability in the different models were examined. Allometric scaling of animal pharmacokinetic parameters (clearance (CL), half-life (t½) and volume of distribution (Vdss)) achieved a prediction of the human pharmacokinetic parameter within 13 to 109% of the observed values. This prediction was further improved by adjusting for plasma protein binding of the drug, achieving an estimate within 5 to 57% of the clinically observed values. Since the parent compound was the dominant species (>95%) in the circulation, metabolic stability was not used as a correction factor. Weight-based allometric scaling was further examined for an atherosclerotic plaque targeted radiopharmaceutical imaging agent, RP845-Tc-99m, currently in development.
Pharmacokinetic parameters were determined in mouse, rat and rabbit, followed by allometric scaling to predict the non-human primate values. Differences between predicted and observed non-human primate CL, t½ and Vdss were 40%, 52% and 8%, respectively. Correcting for plasma protein binding improved the predictions for CL and t½ to within 12% and 3%, respectively. The Vdss prediction, however, became less accurate (38% difference). Since blood clearance is the major parameter in predicting human dose, the improvement from 40% to 12% was important. The plasma-protein-binding-adjusted animal data were then used with allometric scaling to predict human CL, t½ and Vdss. The predicted values were 7.6 mL/min/kg, 70.6 minutes and 0.87 L/kg, respectively. Based on the predicted human blood clearance and the dose required to image atherosclerosis in a rabbit model, the estimated human dose would be unacceptably high. This demonstrates how allometric scaling can be used in research projects to assess clinical feasibility. The impact of metabolism differences on the reliability of various species for predicting human pharmacokinetics was highlighted by DPC-A78445-00. DPC-A78445-00 is being developed as an alternative to exercise in myocardial perfusion imaging for the evaluation of coronary artery disease. DPC-A78445-00 was rapidly metabolized to the carboxylic acid by mouse and rat blood in vitro and in vivo; however, longer stability was observed in the dog. In vitro human blood data were consistent with the dog, suggesting that mouse and rat would not be representative species. DPC-A78445-00 plasma protein binding was at a similar, moderate level in rat, dog and human plasma, and metabolism by hepatocytes was similar in dog and human. Phase I human clinical trial testing determined that the area under the blood concentration-time curve (AUC) and clearance predicted by the dog were within 32% of the human values.
Overall, body-weight-based allometric scaling of pharmacokinetic parameters from animal models, when corrected for plasma protein binding, yielded reliable predictions of the human pharmacokinetics (within 50%) for radiopharmaceutical imaging agents. However, although predictive scaling from animal data can give insight into the feasibility of compounds working in humans, it is important to identify species differences with respect to metabolic stability. This allometric scaling method provides an additional tool to better predict doses in humans for novel medical imaging agents.
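The body-weight allometric approach described above fits a power law, CL = a·W^b, across species on a log-log scale and then extrapolates to human weight. A minimal sketch in Python, with purely illustrative species values (not the thesis data):

```python
import numpy as np

# Illustrative body weights (kg) and total clearances (mL/min) for
# mouse, rat and rabbit; these numbers are hypothetical, not from the thesis.
weights = np.array([0.02, 0.25, 2.5])
clearance = np.array([0.9, 6.0, 40.0])

# Fit log CL = log a + b * log W (simple allometry). For a protein-binding
# correction, one would fit unbound clearance CL / fu instead, with fu the
# species-specific unbound fraction.
b, log_a = np.polyfit(np.log(weights), np.log(clearance), 1)

def predict_cl(weight_kg):
    """Extrapolate clearance to a new body weight from the fitted power law."""
    return np.exp(log_a) * weight_kg ** b

cl_human = predict_cl(70.0)  # predicted total clearance for a 70 kg human
```

The allometric exponent b typically falls near 0.75 for clearance; dividing `cl_human` by 70 gives a per-kilogram value of the same kind as the 7.6 mL/min/kg reported for RP845 in the text.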
APA, Harvard, Vancouver, ISO and other citation styles
17

Mylona, Eugenia. „From global to local spatial models for improving prediction of urinary toxicity following prostate cancer radiotherapy“. Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S109.

The full content of the source
Annotation:
External beam radiotherapy is a locoregional cancer treatment. The objective of radiotherapy imposes a trade-off between delivering a maximal dose to the tumor, to increase local control and curability, and a minimal dose to healthy organs, to limit toxicity. Urinary symptoms may be linked to the irradiation of specific regions of the bladder or the urethra; in that case, the dose received by the whole bladder may not be sufficient to explain urinary toxicity. In the context of prostate cancer radiotherapy, this thesis aims to analyze the spatial correlations between dose and side effects, addressed within a population-analysis framework. To assess the contribution of the urethra to urinary toxicity, we propose a multi-atlas-based segmentation method to accurately identify this structure on CT images. We then use two methods to analyze the spatial dose distribution: one based on the construction of 2D dose-surface maps (DSMs) coupled with pixel-wise comparisons, and the other based on 3D dose-volume maps (DVMs) combined with voxel-wise comparisons. The identified subregions were validated in external populations, opening the perspective of patient-specific treatment planning. We also study the potential for further improving prediction by exploiting machine learning methods.
External beam radiotherapy (EBRT) is a clinical standard for treating prostate cancer. The objective of EBRT is to deliver a high radiation dose to the tumor to maximize the probability of local control while sparing the neighboring organs (mainly the rectum and the bladder) in order to minimize the risk of complications. Developing reliable predictive models of genitourinary (GU) toxicity is of paramount importance to prevent radiation-induced side effects and improve treatment reliability. Urinary symptoms may be linked to the irradiation of specific regions of the bladder or the urethra, in which case the dose received by the entire bladder may not be sufficient to explain GU toxicity. Going beyond global, whole-organ-based models towards more local, sub-organ approaches, this thesis aims to improve our understanding of radiation-induced urinary side effects and to ameliorate the prediction of urinary toxicity following prostate cancer radiotherapy. With the objective of assessing the contribution of urethra damage to urinary toxicity, we propose a multi-atlas-based segmentation method to accurately identify this structure on CT images. The second objective is to identify specific symptom-related subregions in the bladder and the urethra predictive of different urinary symptoms. For this purpose, we propose two methodologies for analyzing the spatial dose distribution: one based on the construction of 2D dose-surface maps (DSMs) coupled with pixel-wise comparisons, and another based on 3D dose-volume maps (DVMs) combined with voxel-wise comparisons. Identified subregions are validated in external populations, opening the perspective for patient-specific treatment planning with reduced risk of complications. We also implement and compare different machine learning strategies and data augmentation techniques, paving the way to further improve urinary toxicity prediction.
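The pixel-wise DSM comparison described above amounts to a per-pixel two-sample test between patients with and without toxicity. A minimal sketch with synthetic dose maps; the grid size, dose levels and group sizes are assumptions, not the thesis data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2D dose-surface maps (Gy), flattened to one row per patient:
# 40 patients with toxicity, 60 without, on a 16x16 pixel grid (all assumed).
dose_tox = rng.normal(55.0, 3.0, size=(40, 16 * 16))
dose_ok = rng.normal(54.0, 3.0, size=(60, 16 * 16))

def welch_t(a, b):
    """Per-pixel Welch t statistic between two patient groups."""
    ma, mb = a.mean(axis=0), b.mean(axis=0)
    va = a.var(axis=0, ddof=1) / len(a)
    vb = b.var(axis=0, ddof=1) / len(b)
    return (ma - mb) / np.sqrt(va + vb)

t_map = welch_t(dose_tox, dose_ok).reshape(16, 16)

# Pixels whose statistic exceeds a threshold are candidate
# symptom-related subregions (multiple-testing control omitted here).
candidate = np.abs(t_map) > 2.0
```

The same pattern extends to 3D dose-volume maps by flattening voxels instead of pixels; in practice a permutation test is used to control for the many simultaneous comparisons.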
18

Alassaad, Anna. „Improving the Quality and Safety of Drug Use in Hospitalized Elderly : Assessing the Effects of Clinical Pharmacist Interventions and Identifying Patients at Risk of Drug-related Morbidity and Mortality“. Doctoral thesis, Uppsala universitet, Institutionen för medicinska vetenskaper, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-234488.

Annotation:
Older people admitted to hospital are at high risk of rehospitalization and medication errors. We have demonstrated, in a randomized controlled trial, that a clinical pharmacist intervention reduces the incidence of revisits to hospital for patients aged 80 years or older admitted to an acute internal medicine ward. The aims of this thesis were to further study the effects of the intervention and to investigate possibilities of targeting the intervention by identifying predictors of treatment response or adverse health outcomes. The effect of the pharmacist intervention on the appropriateness of prescribing was assessed using three validated tools. This study showed that the quality of prescribing was improved for the patients in the intervention group but not for those in the control group. However, no association between the appropriateness of prescribing at discharge and revisits to hospital was observed. Subgroup analyses explored whether the clinical pharmacist intervention was equally effective in preventing emergency department visits in patients with few or many prescribed drugs and in those with different levels of inappropriate prescribing on admission. The intervention appeared to be most effective in patients taking fewer drugs, but the treatment effect was not altered by the appropriateness of prescribing. The most relevant risk factors for rehospitalization and mortality were identified for the same study population, and a risk-estimation score was constructed and internally validated (the 80+ score). Seven variables were selected. Impaired renal function, pulmonary disease, malignant disease, living in a nursing home, being prescribed an opioid and being prescribed a drug for peptic ulcer or gastroesophageal reflux disease were associated with an increased risk, while being prescribed an antidepressant drug (tricyclic antidepressants not included) was linked with a lower risk. These variables made up the components of the 80+ score.
Pending external validation, this score has the potential to aid the identification of high-risk patients. The last study investigated the occurrence of prescription errors when patients with multi-dose dispensed (MDD) drugs were discharged from hospital. Twenty-five percent of the MDD orders contained at least one medication prescription error. Almost half of the errors were of moderate or major severity, with the potential to cause increased health-care utilization.
19

Jeremias, A. Teresa. „Dor crónica lombar: modelo preditivo dos resultados da fisioterapia“. Master's thesis, Instituto Politécnico de Setúbal. Escola Superior de Saúde, 2013. http://hdl.handle.net/10400.26/5558.

Annotation:
Master's dissertation in Physical Therapy.
Objective: The study aimed to determine whether a model based on prognostic factors identified in the literature can predict the short-term outcomes of physical therapy intervention in patients with chronic low back pain (CLBP), in terms of functional disability, pain intensity and global perception of improvement. Introduction: A high prevalence and incidence of chronic low back pain is estimated both in Portugal and in all developed countries. It is responsible for high levels of functional disability and work absenteeism and for most health system costs. Physical therapy is one of the most frequently used interventions, but the reported effects vary. Obtaining "good/bad" results has been associated with the type of treatment, but also with characteristics intrinsic to the individuals or with the way CLBP presents itself. Regarding the characteristics intrinsic to the individuals, the predictive capacity of sociodemographic and clinical factors in anticipating these results has been studied. However, there seems to be no consensus about them, with the resulting predictive models showing a reduced ability to explain the variance of the "good/bad" results obtained. Methodology: This was a non-probabilistic prospective cohort with two assessment moments over a period of 6 weeks. A convenience sample was selected from individuals who attended physical therapy services in Portugal and met the inclusion criteria defined a priori. The results were analyzed using a multivariate logistic regression model and synthesized quantitatively. Results: The final sample comprised 171 people with chronic low back pain, with a mean age of 48 years. The observed clinical course was a significant reduction in the QBPDS-PT (p=0.000; z= -7.994) and VAS (p=0.000; z= -8.742).
The logistic regression model for the functional disability outcome proved statistically significant [X²(2)=22.628 (p<0.001)], explaining 16.6% (Nagelkerke R² value) of the variance in the probability of obtaining "good" results, and showed reasonable predictive capacity (sensitivity of 76.8% and specificity of 60.5%) as well as good discriminative capacity (ROC c=0.712, p<0.001). The logistic regression model for the pain intensity outcome was also statistically significant [X²(2)=25.731 (p<0.001)], explaining 18.7% (Nagelkerke R² value) of the variance in the probability of obtaining "good" results, with good discriminative (ROC c=0.713; p<0.001) and predictive capacity (sensitivity of 75% and specificity of 51.9%). Similarly, the logistic regression model for the global perception of improvement outcome was statistically significant [X²(2)=14.936 (p<0.001)], explaining 11.4% (Nagelkerke R² value) of the variance in the probability of obtaining "good" results, and showed moderate discriminative (ROC c=0.665, p<0.001) and predictive capacity (sensitivity of 73.1% and specificity of 58.7%). Conclusion: The study data suggest that after 6 weeks of physical therapy, the levels of functional disability and pain intensity decrease significantly. They also indicate that the models are significant and have reasonable predictive and discriminative capacity for the "good" results of physical therapy, in terms of functional disability, pain intensity and global perception of improvement.
Abstract: Aim: The aim of this study was to determine if short-term successful outcomes following physical therapy treatment could be predicted from prognostic factors at baseline in patients with chronic low back pain (CLBP). Introduction: The prevalence and incidence of chronic low back pain are estimated to be as high in Portugal as in every developed country. This condition is responsible for high functional disability indexes, increased work absence and for the largest amount of money spent by the health care system. Physical therapy is a common intervention for CLBP; however, the reported effects are diverse. The "good/bad" results have been connected with the treatment typology as well as with the internal characteristics of the patients or how CLBP affects their life. Concerning the internal characteristics of the patients, the predictive capacity of sociodemographic and clinical factors to anticipate these results has been studied. However, there seems to be no consensus about them, with the resulting predictive models demonstrating a reduced ability to explain the variance of the "good/bad" results obtained. Methodology: A non-probabilistic prospective cohort design was used, with two assessment moments over a period of 6 weeks. A sample group was chosen from patients using physical therapy services in Portugal, following criteria previously defined. Results were analyzed according to a multivariate logistic regression model and synthesized in a quantitative way. Results: The final sample was composed of 171 patients with CLBP, with an average age of 48 years. A significant decrease in the QBPDS-PT (p=0.000; z= -7.994) and VAS (p=0.000; z= -8.742) was observed. The multivariate logistic regression model for the functional disability outcome was statistically significant [X²(2)=22.628 (p<0.001)], explaining 16.6% (Nagelkerke R² value) of the variance in the probability of "good" results.
It showed reasonable predictive capacity (76.8% sensitivity and 60.5% specificity) as well as good discriminative capacity (ROC c=0.712, p<0.001). The multivariate logistic regression model for the pain intensity outcome was also statistically significant [X²(2)=25.731 (p<0.001)], explaining 18.7% (Nagelkerke R² value) of the variance in the probability of "good" results, and likewise registered good discriminative (ROC c=0.713; p<0.001) and predictive capacity (75% sensitivity and 51.9% specificity). As with the previous models, the multivariate logistic regression model for the global improvement perception outcome was statistically significant [X²(2)=14.936 (p<0.001)], explaining 11.4% (Nagelkerke R² value) of the variance in the probability of "good" results, and showed moderate discriminative (ROC c=0.665, p<0.001) and predictive capacity (73.1% sensitivity and 58.7% specificity). Conclusion: Data from the study suggest that after 6 weeks of physical therapy intervention, the levels of functional disability and pain intensity decreased significantly. They also indicate that the models are significant and have reasonable predictive and discriminative ability for the "good" physical therapy results in terms of functional disability, pain intensity and global perception of improvement.
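A multivariate logistic regression of the kind reported above, evaluated by sensitivity and specificity at a 0.5 probability threshold, can be sketched as follows; the predictors and outcome are simulated stand-ins, not the study's variables:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-ins: two standardized baseline predictors for 171 patients
# and a binary "good result" outcome (all hypothetical, not the study data).
n = 171
X = rng.normal(size=(n, 2))
true_logit = 0.8 * X[:, 0] - 0.5 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Fit logistic regression by plain gradient ascent on the log-likelihood.
Xb = np.column_stack([np.ones(n), X])  # intercept + predictors
w = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w += 0.5 * Xb.T @ (y - p) / n

# Classify at probability 0.5 and compute sensitivity / specificity.
pred = 1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5
tp = np.sum(pred & (y == 1)); fn = np.sum(~pred & (y == 1))
tn = np.sum(~pred & (y == 0)); fp = np.sum(pred & (y == 0))
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```

Sweeping the threshold instead of fixing it at 0.5 traces out the ROC curve whose c statistic the abstract reports.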
20

Dickson, Jeanette. „Predicting normal tissue radiosensitivity“. Thesis, University of Glasgow, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366256.

21

Tongtoe, Samruam. „Failure Prediction of Spatial Wood Structures: Geometric and Material Nonlinear Finite Element Analysis“. Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/30557.

Annotation:
The purpose of this study is to investigate spatial wood structures, trace their response on equilibrium paths, identify failure modes, and predict the ultimate load. The finite element models of this study are based on the Crafts Pavilion dome (Triax) in Raleigh, North Carolina, and the Church of the Nazarene dome (Varax) in Corvallis, Oregon. Modeling considerations include 3-d beam finite elements, transverse isotropy, torsional warping, beam-decking connectors, beam-beam connectors, geometric and material nonlinearities, and the discretization of pressure loads. The primary objective of this study is to test the hypothesis that the beam-decking connectors (B-D connectors) form the weakest link of the dome. The beam-decking connectors are represented by nonlinear springs which model the load slip behavior of nails between the beam and the decking. The secondary objective of this study is to develop models that are sufficiently simple to use in engineering practice.
Ph. D.
22

Roy, Janine. „From Correlation to Causality: Does Network Information improve Cancer Outcome Prediction?“ Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-144933.

Annotation:
Motivation: Disease progression in cancer can vary substantially between patients, yet patients often receive the same treatment. Recently, there has been much work on predicting disease progression and patient outcome variables from gene expression in order to personalize treatment options. A widely used approach relies on high-throughput experiments that aim to find predictive signature genes for the clinical outcome of a disease. Microarray data analysis helps to reveal the underlying biological mechanisms of tumor progression, metastasis and drug resistance in cancer studies. Despite the first diagnostic kits on the market, open problems remain, such as the choice of random gene signatures or noisy expression data. Experimental or computational noise in the data and the limited number of tissue samples collected from patients may furthermore reduce the predictive power and biological interpretability of such signature genes. Moreover, signature genes predicted by different studies generally show poor similarity, even for the same type of cancer. Integration of network information with gene expression data could provide more efficient signatures for outcome prediction in cancer studies. One approach to deal with these problems employs gene-gene relationships and ranks genes using the random surfer model of Google's PageRank algorithm. Unfortunately, the majority of published network-based approaches tested their methods on only a small number of datasets, questioning the general applicability of network-based methods for outcome prediction. Methods: In this thesis, I provide a comprehensive and systematic evaluation of a network-based outcome prediction approach, NetRank, a PageRank derivative, applied to several types of gene expression cancer data and four different types of networks. The algorithm identifies a signature gene set for a specific cancer type by incorporating gene network information with given expression data.
To assess the performance of NetRank, I created a benchmark dataset collection comprising 25 cancer outcome prediction datasets from the literature and one in-house dataset. Results: NetRank performs significantly better than classical methods such as fold change or the t-test, improving prediction performance by 7% on average. Moreover, a relatively unbiased but fully automated process for biomarker discovery approaches the accuracy level of the authors' signatures. Despite an order of magnitude difference in network size, a regulatory network, a protein-protein interaction network and two predicted networks perform equally well. Signatures as published by the authors and signatures generated with classical methods do not overlap, not even for the same cancer type, whereas the network-based signatures strongly overlap. I analyze and discuss these overlapping genes in terms of the hallmarks of cancer and in particular single out six transcription factors and seven proteins and discuss their specific roles in cancer progression. Furthermore, several tests were conducted for the identification of a Universal Cancer Signature. No Universal Cancer Signature could be identified so far, but a cancer-specific combination of general master regulators with specific cancer genes was discovered that achieves the best results for all cancer types. As NetRank offers great value for cancer outcome prediction, first steps for the secure usage of NetRank in a public cloud are described. Conclusion: Experimental evaluation of network-based methods on a gene expression benchmark dataset suggests that these methods are especially suited for outcome prediction, as they overcome the problems of random gene signatures and noisy expression data.
Through the combination of network information with gene expression data, network-based methods identify highly similar signatures across all cancer types, in contrast to classical methods, which fail to identify highly common gene sets across the same cancer types. In general, the integration of additional information into gene expression analysis allows the identification of more reliable, accurate and reproducible biomarkers and provides a deeper understanding of the processes occurring in cancer development and progression.
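NetRank is described above as a PageRank derivative that blends network connectivity with expression evidence. A toy sketch of that idea, using a hypothetical five-gene network and made-up gene-outcome correlations as the restart vector (neither from the thesis):

```python
import numpy as np

# Hypothetical 5-gene network (symmetric adjacency) and made-up absolute
# gene-outcome correlations; both are illustrative assumptions.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
corr = np.array([0.9, 0.1, 0.2, 0.8, 0.3])

P = A / A.sum(axis=1, keepdims=True)  # row-stochastic random-surfer matrix
restart = corr / corr.sum()           # expression evidence as restart vector
d = 0.85                              # damping: network vs. expression weight

r = np.full(len(corr), 1.0 / len(corr))
for _ in range(100):                  # power iteration to the fixed point
    r = (1 - d) * restart + d * P.T @ r

ranking = np.argsort(-r)              # genes ordered by NetRank-style score
```

With `d = 0` the scores reduce to the pure expression ranking, so the damping factor directly controls how much the network is allowed to re-rank the genes.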
23

Estefan, Dalia. „Predicting toxicity caused by high-dose-rate brachytherapy boost for prostate cancer“. Thesis, Örebro universitet, Institutionen för medicinska vetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-76216.

Annotation:
Introduction: Treating localized prostate cancer with combination radiotherapy consisting of external beam radiotherapy (EBRT) and high-dose-rate brachytherapy (HDR-BT) has been proven to result in better disease outcome than EBRT only. There is, however, a decreasing trend in the utilization of combination therapy, partially due to concerns about elevated toxicity risks. Aim: To determine which parameters correlate with acute and late (≤ 6 months) urinary toxicity (AUT and LUT) and acute and late rectal toxicity (ART and LRT), and thereafter create predictive models for rectal toxicity. Methods: Data on toxicity rates and 32 patient, tumor and treatment parameters were collected from 359 patients treated between 2008 and 2018 with EBRT (42 Gy in 14 fractions) and HDR-BT (14.5 Gy in 1 fraction) for localized prostate cancer at Örebro University Hospital. Bivariate analyses were conducted on all parameters and the outcome variables AUT, LUT, ART and LRT grade ≥ 1, graded according to the RTOG criteria. Parameters correlating with ART and LRT in this and previous studies were included in multivariate logistic regression analyses for the creation of predictive models. Results: Most toxicities, 86%, were of grade 0 or 1; only 9% of patients had grade 2-3 toxicity. Only 2-4 parameters correlated with the respective toxicities in bivariate analyses. Logistic regressions generated no significant predictors of ART or LRT; therefore, no predictive models were obtained. Conclusion: None of the included parameters has sufficient discriminative ability regarding rectal toxicity. Predictive models can most probably be obtained by including other parameters and more patients.
24

Moerup, Casper Jacob. „Prediction of claim cost in general insurance“. Master's thesis, Instituto Superior de Economia e Gestão, 2019. http://hdl.handle.net/10400.5/18176.

Annotation:
Master's in Actuarial Science
The following work was carried out during an internship placement at If Industrial P&C Insurance in Stockholm, Sweden. This report highlights and discusses some of the differences between industrial and private insurance and walks through the "Normal Year Analysis" process. The analysis assesses the claims data with the goal of projecting losses one year into the future. Collective risk theory and maximum likelihood estimation are used to obtain an estimate of claim severity. In addition, the reserves are estimated using the chain-ladder method. The final section of the report describes a sensitivity analysis of a model for the claims adjustment reserves. This analysis shows the impact of introducing two new parameters, which account for the already developed part of the open claims.
The following work was carried out during an internship placement at If Industrial P&C Insurance in Stockholm, Sweden. This report highlights and discusses some of the differences between Industrial and Private insurance and walks through the "Normal Year Analysis" procedure. The analysis assesses the claims data with the goal of projecting the losses one year into the future. Collective Risk Theory and Maximum Likelihood Estimation are used to obtain an estimate of the severity of the claims. In addition, the reserves are estimated using the Chain-ladder method. The final section of the report describes a sensitivity analysis of a model for the Claims Adjustment Reserves. This analysis shows the impact of introducing two new parameters, which account for the already developed part of the open claims.
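The chain-ladder reserve estimate mentioned above projects a cumulative claims triangle to ultimate using volume-weighted development factors. A minimal sketch; the triangle figures are illustrative, not from the report:

```python
import numpy as np

# Hypothetical cumulative claims triangle: rows are accident years,
# columns are development years, np.nan marks future (unknown) cells.
tri = np.array([
    [100.0, 150.0, 165.0, 170.0],
    [110.0, 168.0, 185.0, np.nan],
    [ 95.0, 140.0, np.nan, np.nan],
    [120.0, np.nan, np.nan, np.nan],
])

n = tri.shape[1]

# Volume-weighted development factors from each column to the next.
factors = []
for j in range(n - 1):
    known = ~np.isnan(tri[:, j + 1])
    factors.append(tri[known, j + 1].sum() / tri[known, j].sum())

# Fill the lower triangle by applying the factors to each row.
full = tri.copy()
for j in range(n - 1):
    missing = np.isnan(full[:, j + 1])
    full[missing, j + 1] = full[missing, j] * factors[j]

ultimates = full[:, -1]
latest = np.array([row[~np.isnan(row)][-1] for row in tri])
reserves = ultimates - latest  # outstanding reserve per accident year
```

The first accident year is fully developed, so its reserve is zero by construction; the most recent year carries the largest reserve because it is projected through every factor.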
25

PELOUX, ANNE-FRANCOISE. „Mise au point et validation d'un modele de cytotoxicite aigue predictif de la dl 50“. Aix-Marseille 2, 1991. http://www.theses.fr/1991AIX22954.

26

Glória, Leonardo Siqueira. „Assessment of genome-wide prediction by using Bayesian regularized neural networks“. Universidade Federal de Viçosa, 2015. http://www.locus.ufv.br/handle/123456789/6866.

Annotation:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Recently, there has been increasing interest in the use of nonparametric methods, such as artificial neural networks (ANN), in genome-wide selection (GWS). A special class of ANN is that with Bayesian regularization, which does not require a priori knowledge of the genetic architecture of the trait, unlike other traditional GWS methods (RR-BLUP, Bayes A, B, Cπ, BLASSO). The objective of the present study was to apply ANN based on Bayesian regularization to the prediction of genomic breeding values using simulated data sets, to select the most relevant SNP markers by means of two different methods, to estimate heritabilities for the considered traits, and to compare the ANN results with two traditional methods (RR-BLUP and Bayesian LASSO). The simplest neural network architecture with Bayesian regularization obtained the best results for the two evaluated traits, which were very similar to those of the traditional RR-BLUP and Bayesian LASSO (BLASSO) methodologies. The SNP importance identification based on ANN showed correlations between the true and estimated effects of 0.61 and 0.81 for traits 1 and 2, respectively. These were higher than those produced by the traditional BLASSO method (0.55 and 0.71 for traits 1 and 2, respectively). Regarding heritability (assuming a true value of 0.35), the simplest ANN obtained a heritability of 0.33, while the traditional methods underestimated it (with a mean of 0.215).
Recently, there has been increased interest in using nonparametric methods such as artificial neural networks (ANN). In animal breeding, a special class of ANN called the Bayesian Regularized Neural Network (BRNN) has been preferred, since it does not demand a priori knowledge of the genetic architecture of the trait, as assumed by the most used parametric methods (RR-BLUP, Bayes A, B, Cπ, BLASSO), and it has been shown to be effective for genome-enabled prediction. The aim of the present study was to apply ANN based on Bayesian regularization to genome-enabled prediction on simulated data sets, to select the most relevant SNP markers using two proposed methods, to estimate heritabilities for the considered traits, and to compare the results with two traditional methods (RR-BLUP and BLASSO). The simplest Bayesian Regularized Neural Network (BRNN) model gave consistent predictions for both traits, which were similar to the results obtained from the traditional RR-BLUP and BLASSO methods. The SNP importance identification methods based on BRNN proposed here showed correlation values (0.61 and 0.81 for traits 1 and 2, respectively) between true and estimated marker effects higher than those of the traditional BLASSO method (0.55 and 0.71, respectively, for traits 1 and 2). With respect to h² estimates (assuming 0.35 as the true value), the simplest BRNN recovered 0.33 for both traits, thus outperforming RR-BLUP and BLASSO, which, on average, estimated h² as 0.215.
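As a point of reference for the comparison above, the RR-BLUP baseline amounts to ridge regression on the marker matrix, with all SNP effects shrunk by one common penalty. A minimal sketch on simulated genotypes; the dimensions, penalty and effect sizes are assumptions, not the simulated data of the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical genotypes coded 0/1/2 for 200 individuals x 500 SNPs,
# with 10 causal SNPs; phenotype = genetic value + residual noise.
n, m = 200, 500
Z = rng.integers(0, 3, size=(n, m)).astype(float)
true = np.zeros(m)
true[:10] = rng.normal(0.0, 0.5, size=10)
y = Z @ true + rng.normal(0.0, 1.0, size=n)

# RR-BLUP as ridge regression: all marker effects share one normal prior,
# equivalent to the penalty lam = sigma_e^2 / sigma_marker^2 (assumed here).
lam = 100.0
effects = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ y)

gebv = Z @ effects  # genomic estimated breeding values
accuracy = np.corrcoef(gebv, Z @ true)[0, 1]
```

The BRNN of the thesis replaces this linear shrinkage with a neural network whose weight penalty is tuned by Bayesian evidence, but the shrink-all-markers idea is the same.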
27

CHANDRASEKARAN, LATHA. „PREDICTING DISEASE INCIDENCE DUE TO CONTAMINATED INTRUSION IN A WATER DISTRIBUTION SYSTEM“. University of Cincinnati / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1155506232.

28

Bessa, Iuri Sidney. „Laboratory and field study of fatigue cracking prediction in asphalt pavements“. Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3138/tde-15012018-160715/.

Annotation:
The prediction of asphalt pavement performance in relation to the main distresses has been proposed by different researchers, by means of laboratory characterization and field data evaluation. In relation to fatigue cracking, there is no universal consensus about the laboratory test to be performed, the damage criterion to be considered, the testing conditions to be set (level and frequency of loading, and temperature), and the specimen geometry to be used. Tests performed on asphalt binders and on asphalt mixes are used to study fatigue behavior and to predict fatigue life. The characterization of asphalt binders is relevant, since fatigue cracking is highly dependent on the rheological characteristics of these materials. In the present research, linear viscoelastic characterization, time sweep tests and amplitude sweep tests were done. With respect to the laboratory characterization of asphalt mixes, tests based on indirect tension, four-point flexural bending and tension-compression were performed. Field damage evolution data for two asphalt pavement sections were collected from an experimental test site on a very heavy traffic highway. Three asphalt binders (one neat binder, one SBS-modified binder and one highly modified binder, HiMA) and one asphalt concrete constituted of the neat binder were tested in the laboratory. The experimental test site was composed of two segments, constituted by different base layers (unbound course and cement-treated crushed stone), that provided different mechanical responses in the asphalt wearing course. The field damage data were compared to fatigue life models that use empirical results obtained in the laboratory and computer simulations. Correlations among the asphalt material scales are discussed in this dissertation, with the objective of predicting the fatigue cracking performance of asphalt pavements.
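The transfer from laboratory fatigue results to field performance typically runs through a phenomenological fatigue law combined with linear damage accumulation. The sketch below is a hedged illustration only: the functional form is the classical mechanistic-empirical one, and the coefficients k1, k2, k3 are placeholders, not values fitted in this dissertation.

```python
# Hypothetical fatigue transfer function of the classical form
#   Nf = k1 * (1/eps_t)^k2 * (1/E)^k3
# combined with Miner's linear damage accumulation. Coefficients are made up.

def fatigue_life(tensile_strain, stiffness_mpa, k1=1e-3, k2=3.9, k3=1.3):
    """Allowable load repetitions to fatigue cracking at one strain level."""
    return k1 * (1.0 / tensile_strain) ** k2 * (1.0 / stiffness_mpa) ** k3

def miner_damage(strain_levels, repetitions, stiffness_mpa):
    """Miner's rule: accumulated damage is the sum of n_i / Nf_i per condition."""
    return sum(n / fatigue_life(eps, stiffness_mpa)
               for eps, n in zip(strain_levels, repetitions))

# Lower strain amplitude extends fatigue life; damage near 1 signals failure.
life_low  = fatigue_life(100e-6, 3000)
life_high = fatigue_life(200e-6, 3000)
```

A field-calibrated version of such a law is what the dissertation's laboratory-to-field comparison ultimately seeks to anchor.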
APA, Harvard, Vancouver, ISO, and other citation styles
29

Kawamura, Mitsue. „A scoring system predicting acute radiation dermatitis in patients with head and neck cancer treated with intensity-modulated radiotherapy“. Kyoto University, 2019. http://hdl.handle.net/2433/244519.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
30

Zlobec, Inti. „A predictive model of rectal tumour response to pre-operative high-dose rate endorectal brachytherapy /“. Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103189.

Full text of the source
Annotation:
Pre-operative radiotherapy for patients with locally advanced rectal carcinoma has been shown to improve survival rates and local tumour control. The ability to identify tumours most likely to undergo a complete or partial response would improve the selection of patients for radiotherapy and potentially modify post-treatment planning. The aim of this study was to develop a multi-marker model of tumour response to pre-operative high-dose rate endorectal brachytherapy (HDREB). Immunohistochemistry (IHC) for p53, Bcl-2, VEGF, APAF-1 and EGFR was carried out on 104 pre-treatment rectal tumour biopsies from patients undergoing a pre-operative HDREB protocol. Immunoreactivity was scored by at least three pathologists using a semi-quantitative scoring method. The reproducibility of the scoring system was evaluated. Receiver operating characteristic curve (ROC) analysis was performed for each protein to determine clinically relevant cutoff scores for defining tumour positivity. Multivariate logistic regression analysis was carried out to identify independent predictive factors of tumour response. Both the semi-quantitative scoring system and ROC curve analysis were found to be reproducible. In addition, the combined analysis of VEGF and EGFR was highly predictive of complete pathologic response to radiotherapy. EGFR was found to independently predict complete or partial tumour regression but only with low sensitivity and specificity. A large-scale prospective study is necessary to confirm these findings. Moreover, the novel methodology proposed and validated in this study to assess immunoreactivity could significantly enhance the value of IHC findings in colorectal cancer as well as other tumour types.
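The cutoff-determination step can be illustrated with a small sketch. This is not the authors' code: the scores are toy values, and Youden's J (sensitivity + specificity - 1) is assumed here as one common criterion for picking a clinically relevant ROC cutoff.

```python
# Illustrative sketch: sweep candidate cutoffs over semi-quantitative IHC
# scores and keep the one maximizing Youden's J. Toy data, hypothetical rule
# "score >= cutoff predicts response".

def youden_cutoff(scores, responded):
    """Return (best_cutoff, best_J) over all observed candidate cutoffs."""
    pos = sum(responded)
    neg = len(responded) - pos
    best = (None, -1.0)
    for cut in sorted(set(scores)):
        tp = sum(1 for s, r in zip(scores, responded) if s >= cut and r)
        tn = sum(1 for s, r in zip(scores, responded) if s < cut and not r)
        j = tp / pos + tn / neg - 1.0          # sensitivity + specificity - 1
        if j > best[1]:
            best = (cut, j)
    return best

# Toy IHC scores (0-3) and pathologic response indicators.
scores    = [0, 1, 1, 2, 2, 3, 3, 3]
responded = [0, 0, 0, 0, 1, 1, 1, 1]
cut, j = youden_cutoff(scores, responded)
```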
APA, Harvard, Vancouver, ISO, and other citation styles
31

Melo, Gustavo da Silva Vieira de. „Measurement and prediction of sound absorption of room surfaces and contents at low frequencies“. Florianópolis, SC, 2002. http://repositorio.ufsc.br/xmlui/handle/123456789/84349.

Full text of the source
Annotation:
Doctoral thesis, Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia Mecânica.
In the field of sound transmission in buildings, recent emphasis has been placed on the study of audible frequencies below 100 Hz. This is due to the growing number of low-frequency noise sources, such as domestic audio and TV equipment capable of emitting increasingly powerful bass sounds. Low-frequency noise is of special concern because it propagates efficiently through air and is poorly attenuated by many structures, such as hearing protectors and walls between dwellings. At low frequencies, however, the most widely used theoretical approaches fall short of explaining the observed behavior and need to be improved. In addition, sound insulation standards do not cover the frequency range below 100 Hz, and not even the introduction of Annex F of ISO 140/3 (1995) was able to guarantee an adequate level of reproducibility of results. In this context, models of sound transmission between rooms based on finite element (FE) techniques have revealed the modal characteristics of the acoustic and vibratory fields involved in the room-wall-room system, indicating the need for appropriate models of sound absorption at low frequencies. In this work, a new FE model was used to describe the relations between the sound absorption characteristics of the interior surfaces of a room and the frequency response of that room, over the frequency range from 20 Hz to 200 Hz. The numerical model was first validated against experimental results for a small empty reverberation chamber, referred to as the reference room. In addition, the effect of introducing furniture elements into the room was investigated; these were treated as rigid and soft obstacles in order to verify possible shifts of the natural frequencies and selective damping of the system modes. The effect of the location of such obstacles was also included in the investigations. The results showed a satisfactory level of agreement between measured and simulated values, leading to the conclusion that sound absorption does not significantly modify the room frequency responses at low frequencies.
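The modal behavior that makes the range below 200 Hz so sensitive can be illustrated with the rigid-walled rectangular room formula f = (c/2)·sqrt((nx/Lx)² + (ny/Ly)² + (nz/Lz)²). The room dimensions below are hypothetical, not those of the reference chamber.

```python
# Sketch: enumerate the natural frequencies of a rigid-walled rectangular room
# up to 200 Hz. Dimensions are illustrative placeholders.
import math

def room_modes(Lx, Ly, Lz, fmax=200.0, c=343.0):
    """List (nx, ny, nz, f) for all modes with f <= fmax, sorted by frequency."""
    modes = []
    nmax = int(2 * fmax * max(Lx, Ly, Lz) / c) + 1
    for nx in range(nmax + 1):
        for ny in range(nmax + 1):
            for nz in range(nmax + 1):
                if nx == ny == nz == 0:
                    continue
                f = (c / 2) * math.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)
                if f <= fmax:
                    modes.append((nx, ny, nz, f))
    return sorted(modes, key=lambda m: m[3])

modes = room_modes(5.0, 4.0, 3.0)
# Lowest axial mode of the 5 m dimension: 343 / (2 * 5) = 34.3 Hz.
```

The sparse, well-separated modes this produces at the bottom of the range are exactly why statistical (diffuse-field) absorption models break down there.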
APA, Harvard, Vancouver, ISO, and other citation styles
32

Andel, Stephanie Anne. „Personality as a Predictor of Occupational Safety: Does it Really Matter?“ Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5824.

Full text of the source
Annotation:
Past research demonstrates the high prevalence of occupational accidents and injuries, and much work has therefore gone into examining their potential antecedents. However, while some research has examined personality as a potential antecedent, results suggesting personality as a significant predictor of occupational safety remain inconclusive. The purpose of the current work is therefore to conduct a cross-sectional, multi-source survey study that takes a closer look at the relationships between various personality variables and occupational safety. The purpose of the study is threefold: (1) to examine the relationships between two Big Five personality factors, safety locus of control, and optimism bias as antecedents of safety performance and outcomes; (2) to perform a facet-level analysis, breaking the extraversion and conscientiousness factors into their constituent facets in order to see whether each facet is differentially related to occupational safety when compared to the overall factor; and (3) to examine various moderators that may affect the relationships between extraversion and occupational safety. Results of this study suggest that the extraversion and conscientiousness facets are not differentially related to occupational safety. Further, some evidence for contextual moderators in the relationships between personality and safety performance was found. Overall, this study provides further insight into the role that personality may play in predicting safety across various industries.
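The idea of a contextual moderator can be illustrated with a simple-slopes style probe on invented toy data (none of these numbers come from the study): if context moderates the relationship, the personality-safety correlation differs between subgroups.

```python
# Toy illustration of moderation: compare the Pearson correlation between a
# personality score and safety performance across two levels of a hypothetical
# contextual moderator. All data are invented.
import math

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(var)

# (extraversion, safety_performance, high_risk_context)
records = [
    (1, 5, 0), (2, 5, 0), (3, 6, 0), (4, 6, 0), (5, 7, 0),   # weak positive link
    (1, 8, 1), (2, 6, 1), (3, 5, 1), (4, 3, 1), (5, 2, 1),   # strong negative link
]

def subgroup_r(flag):
    pairs = [(x, y) for x, y, g in records if g == flag]
    return corr([p[0] for p in pairs], [p[1] for p in pairs])

r_low, r_high = subgroup_r(0), subgroup_r(1)
# A sign flip between subgroups is the signature of a contextual moderator.
```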
APA, Harvard, Vancouver, ISO, and other citation styles
33

Pitre, Kevin M. „Predicting Wind Noise Inside Porous Dome Filters for Infrasound Sensing on Mars“. Thesis, University of Louisiana at Lafayette, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10244134.

Full text of the source
Annotation:

The study described in this thesis aims to assess the effects of wind-generated noise on potential infrasound measurements during future Mars missions. Infrasonic sensing on Mars is being considered as a means to probe long-scale atmospheric dynamics and thermal balance, and to infer bolide impact statistics. In this study, a preliminary framework is developed for predicting the principal wind noise contributions to the signal detected by a sensor placed inside a hemispherical porous dome on the Martian surface. The method involves calculating the pressure power spectral densities in the infrasonic range generated by turbulent interactions and filtered by dome-shaped filters of varying porosities. Knowing the overall noise power spectrum will allow it to be subtracted from raw signals of interest and will aid in the development of infrasound sensors for the Martian environment. To make these power spectral predictions, the study uses the Mars Climate Database (MCD) global circulation model, developed by the Laboratoire de Météorologie Dynamique in Paris, France. Velocity profiles are generated and used in semi-empirical von Kármán functions together with equations describing the physical turbulent interactions, characterizing turbulence in the free atmosphere above the Martian surface. For interactions of turbulence with the porous filter, semi-empirical formulations are adapted to the Martian parameters generated by the MCD and plotted alongside the free-atmosphere contributions outside and inside the dome to obtain the total wind noise contribution from turbulence. Finally, the plots of power spectral density versus frequency are analyzed to determine which filter porosity would provide the best wind-noise suppression when measured at the center of the dome. The study shows that 55% (0.02 to 5 Hz) and 80% (6 to 20 Hz) porosities prove to be the best of the five porosities tested.
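The von Kármán form used in such semi-empirical turbulence descriptions can be sketched as follows; the longitudinal-spectrum expression and all parameter values here are illustrative terrestrial-style assumptions, not the thesis' adapted Martian formulation.

```python
# Sketch of the one-sided von Kármán longitudinal velocity spectrum
#   S_uu(f) = sigma^2 * (4 L / U) / (1 + 70.8 (f L / U)^2)^(5/6),
# which rolls off as f^(-5/3) in the inertial subrange. Parameters illustrative.
import math

def von_karman_suu(f, U, L, sigma):
    """One-sided longitudinal velocity spectrum [(m/s)^2 / Hz]."""
    n = f * L / U                                   # dimensionless frequency
    return sigma ** 2 * (4.0 * L / U) / (1.0 + 70.8 * n ** 2) ** (5.0 / 6.0)

# High-frequency slope check: doubling f should scale S_uu by about 2^(-5/3).
s1 = von_karman_suu(1.0, U=10.0, L=50.0, sigma=1.0)
s2 = von_karman_suu(2.0, U=10.0, L=50.0, sigma=1.0)
slope = math.log(s2 / s1) / math.log(2.0)
```

A pressure-spectrum model built on top of such velocity spectra is what the dome filter then attenuates porosity by porosity.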

APA, Harvard, Vancouver, ISO, and other citation styles
34

Sebastián, Soto Niflin Roger. „Aplicación de la termografía en el mantenimiento predictivo - DOE RUN PERÚ“. Universidad Nacional de Ingeniería. Programa Cybertesis PERÚ, 2006. http://cybertesis.uni.edu.pe/uni/2006/sebastian_sn/html/index-frames.html.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
35

Schünke, Marco Aurélio. „Aplicação de algoritmos de classificação para análise dos fatores que influenciam na predição do fator de impacto nas redes sociais“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/134588.

Full text of the source
Annotation:
Companies such as Google and Facebook are currently among the largest in the world, a position driven largely by investments in advertising; the creation of pages for publicizing ads and brands has made Facebook especially prominent in this scenario. In this context, the present work analyzes and predicts the number of interactions with news items published on the five most accessed fan pages of the Facebook social network in Brazil. The interactions considered are the number of likes, the number of comments, and the number of times a news item was shared. As a contribution, the impact factor of a publication is proposed, defined as the average of these three interaction counts, in order to improve the prediction of interactions on a fan page. Different classification techniques are evaluated, along with the influence of features based on the most frequent words and terms, to determine which combination produces the best results when building a learning model to predict the impact factor of news published on Facebook fan pages. The factors that may influence the impact factor are also examined through knowledge discovery in databases and natural language processing techniques.
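The proposed impact factor (the mean of the three interaction counts) and the kind of class label a learning model would be trained to predict can be sketched as follows; the threshold value is an assumption for illustration, not one from the dissertation.

```python
# Sketch of the proposed impact factor and a threshold-based class label.
# The 1000-interaction threshold is a hypothetical illustration.

def impact_factor(likes, comments, shares):
    """Mean of the three interaction counts of a post."""
    return (likes + comments + shares) / 3.0

def impact_class(post, threshold=1000.0):
    """Label a post 'high' or 'low' impact by its impact factor."""
    f = impact_factor(post["likes"], post["comments"], post["shares"])
    return "high" if f >= threshold else "low"

post = {"likes": 2400, "comments": 300, "shares": 600}
# impact_factor = (2400 + 300 + 600) / 3 = 1100.0, so this post is "high".
```

In the dissertation's setup, text features of the news item would feed a classifier predicting this label rather than computing it after the fact.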
APA, Harvard, Vancouver, ISO, and other citation styles
36

Chang, Wan-Yin. „The Predictive Accuracy of Conscientiousness when Responses are Dissimulated: Does Self-Consistency Matter?“ Thesis, Virginia Tech, 2004. http://hdl.handle.net/10919/9960.

Full text of the source
Annotation:
The present study used a laboratory setting to explore the criterion-related validity of non-cognitive measures in personnel selection. The focal study investigated the psychological processes resulting from situational causes of motivation to distort item responses. In particular, I investigated whether differences in the motivation to distort item responses interacted with self-consistency in the prediction of performance on a clerical task. Findings suggested that despite range restriction and the existence of faking behavior, a positive correlation between conscientiousness and performance exists. Varying the selection ratio (SR) and monetary incentives successfully produced faking behaviors, and faking behaviors were found to exist in the selection setting. Results partially supported the proposed hypothesis that faking behaviors have both positive and negative effects. Implications of the present study were further discussed.
Master of Science
APA, Harvard, Vancouver, ISO, and other citation styles
37

Roy, Janine [Author], Michael [Academic supervisor] Schroeder, and Tim [Academic supervisor] Beißbarth. „From Correlation to Causality: Does Network Information improve Cancer Outcome Prediction? / Janine Roy. Reviewers: Michael Schroeder ; Tim Beißbarth". Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://d-nb.info/1068447214/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
38

Karlsson, Beppe. „Tweeting opinions : How does Twitter data stack up against the polls and betting odds?“ Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-75839.

Full text of the source
Annotation:
With the rise of social media, people have gained a platform to express opinions and discuss current subjects with others. This thesis investigates whether a simple sentiment analysis (determining how positive a tweet about a given party is) can be used to predict the results of the Swedish general election, and compares the results to betting odds and opinion polls. The results show that while the idea is an interesting one, and the data can sometimes point in the right direction, it is far from a reliable source for predicting election outcomes.
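A lexicon-based scorer of the simple kind the thesis describes might look like the following sketch; the word lists are invented examples, not the lexicon actually used.

```python
# Sketch of crude lexicon-based tweet sentiment: positivity score per tweet,
# averaged per party. Word lists are invented placeholders.

POSITIVE = {"bra", "great", "stark", "lovar"}
NEGATIVE = {"dålig", "bad", "svag", "skandal"}

def tweet_sentiment(text):
    """Return (#positive - #negative) / #words as a crude positivity score."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

def party_score(tweets):
    """Average sentiment over all tweets mentioning one party."""
    return sum(tweet_sentiment(t) for t in tweets) / len(tweets)
```

Comparing such per-party scores against polls and betting odds is then a matter of ranking, which is where the thesis finds the signal too weak to be reliable.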
APA, Harvard, Vancouver, ISO, and other citation styles
39

PALUMBO, MARK V. „COGNITIVE ABILITY, JOB KNOWLEDGE, AND STEREOTYPE THREAT: WHEN DOES ADVERSE IMPACT RESULT?“ Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1187103730.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
40

Hernandez, Miriam B. „Predicting kindergarten reading outcomes from initial language and literacy skills does dialect density matter? /“. Tallahassee, Florida : Florida State University, 2010. http://etd.lib.fsu.edu/theses/available/etd-04122010-234627/.

Full text of the source
Annotation:
Thesis (M.S.)--Florida State University, 2010.
Advisor: Stephanie Al Otaiba, Florida State University, College of Education, Dept. of Teacher Education. Title and description from dissertation home page (viewed on July 16, 2010). Document formatted into pages; contains viii, 82 pages. Includes bibliographical references (p. 79-81).
APA, Harvard, Vancouver, ISO, and other citation styles
41

Fucik, Markus. „Bayesian risk management : "Frequency does not make you smarter"". PhD thesis, Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2011/5308/.

Full text of the source
Annotation:
Within our research group Bayesian Risk Solutions we have coined the idea of Bayesian Risk Management (BRM). It calls for (1) a more transparent and diligent data analysis as well as (2) an open-minded incorporation of human expertise in risk management. In this dissertation we formalize a framework for BRM based on the two pillars Hardcore-Bayesianism (HCB) and Softcore-Bayesianism (SCB), providing solutions for the claims above. For data analysis we favor Bayesian statistics with its Markov Chain Monte Carlo (MCMC) simulation algorithm, which provides a full picture of data-induced uncertainty beyond classical point estimates. We calibrate twelve different stochastic processes to four years of CO2 price data. Besides, we calculate derived risk measures (ex-ante/ex-post value-at-risks, capital charges, option prices) and compare them to their classical counterparts. When statistics fails because of a lack of reliable data, we propose our integrated Bayesian Risk Analysis (iBRA) concept, a basic guideline for an expertise-driven quantification of critical risks. We additionally review elicitation techniques and tools that support experts in expressing their uncertainty. Unfortunately, Bayesian thinking is often blamed for its arbitrariness. Therefore, we introduce the idea of a Bayesian due diligence, judging expert assessments according to their information content and their inter-subjectivity.
This thesis addresses approaches of Bayesian risk management for measuring risks, concentrating on three central questions: (1) How can risks be quantified transparently when only a limited number of suitable historical observations is available for data analysis? (2) How can risks be quantified transparently when, for lack of suitable historical observations, no data analysis is possible at all? (3) To what extent can arbitrariness in risk quantification be limited? To answer the first question, this thesis proposes the application of Bayesian statistics. In contrast to classical least-squares or maximum-likelihood point estimators, Bayesian posterior distributions can explicitly measure the data-induced parameter and model uncertainty. As an application, twelve different stochastic processes were calibrated to CO2 price time series using the efficient Bayesian Markov Chain Monte Carlo (MCMC) simulation algorithm. Since Bayesian statistics allows the computation of model probabilities for cardinal model comparison, log-variance processes could be identified as by far the best model class. For selected processes, the effect of parameter uncertainty on derived risk measures (ex-ante/ex-post value-at-risks, regulatory capital charges, option prices) was also examined. In general, the differences between Bayesian and classical risk measures grow with the complexity of the model assumptions for the CO2 price. Moreover, Bayesian value-at-risks and capital charges are more conservative than their classical counterparts (a risk premium for parameter uncertainty).
Regarding the second question, the position taken in this thesis is that risk quantification without (sufficiently) reliable data can only proceed by incorporating expert knowledge, which requires a structured procedure. The integrated Bayesian Risk Analysis (iBRA) concept is therefore presented, uniting concepts, techniques, and tools for the expert-based identification and quantification of risk factors and their dependencies, and offering approaches for dealing with competing expert opinions. Since resource-efficient tools for quantifying expert knowledge are of particular practical interest, the online market PCXtrade and the online survey platform PCXquest were designed within this work and successfully tested several times. Two empirical studies further examined to what extent people are able to quantify their uncertainties at all and how they evaluate experts' self-assessments. The results suggest that people tend to overestimate their forecasting abilities and tend to place high trust in those expert assessments in which the expert himself has expressed high confidence; it should be noted, however, that a considerable share of respondents viewed very high self-assessments by an expert negatively. Since Bayesianism propagates probabilities as a measure of personal uncertainty, it offers no framework for verifying or falsifying assessments. This is sometimes equated with arbitrariness and could be one reason why openly practiced Bayesianism leads a shadowy existence in Germany. This thesis therefore puts the concept of Bayesian due diligence up for discussion.
It proposes a criteria-based evaluation of expert assessments that focuses in particular on the inter-subjectivity and the information content of the assessments.
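The MCMC machinery behind the Bayesian calibration can be illustrated with a toy Metropolis sampler for the posterior of a normal mean (flat prior, known variance); the data and tuning values are illustrative, not the CO2 price setup.

```python
# Toy Metropolis MCMC sketch: the spread of the posterior samples captures
# the parameter uncertainty that a classical point estimate hides.
# Data, noise level, and tuning are illustrative assumptions.
import math, random

random.seed(42)
data = [1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7, 1.05]

def log_post(mu, sigma=0.2):
    """Log posterior of the mean under a flat prior and known noise sigma."""
    return -sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2)

def metropolis(n_iter=20000, step=0.1):
    mu, samples = 0.0, []
    lp = log_post(mu)
    for _ in range(n_iter):
        prop = mu + random.gauss(0, step)          # random-walk proposal
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:   # accept/reject
            mu, lp = prop, lp_prop
        samples.append(mu)
    return samples[n_iter // 2:]                   # drop burn-in half

samples = metropolis()
post_mean = sum(samples) / len(samples)
```

Derived quantities such as a value-at-risk can then be computed per posterior sample, yielding a distribution of the risk measure itself rather than a single number.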
APA, Harvard, Vancouver, ISO, and other citation styles
42

Kunkel, Lynn Elizabeth. „The Health Belief Model as a Predictor of Gynecological Exams: Does Sexual Orientation Matter?“ PDXScholar, 1995. https://pdxscholar.library.pdx.edu/open_access_etds/4943.

Full text of the source
Annotation:
Screening and early detection are essential for the management and control of most diseases. It is important for women to practice routine health care that includes both clinical and self examinations. Today, many women go without health care due to barriers which prevent them from obtaining adequate care. The present study was designed to investigate, using the Health Belief Model, whether there is a difference between heterosexual and lesbian women in obtaining gynecological exams. Responses from 238 participants, 70 heterosexual and 168 lesbian women, indicated that the Health Belief Model was a significant predictor of whether women complied with recommended guidelines for Pap smears. Further analyses indicated that the most predictive components of the model were self-efficacy and perceived barriers. The more self-efficacy the women reported, the more likely they were to comply; the more barriers the women reported, the less likely they were to comply. Surprisingly, there were no interactions between sexual orientation and the components of the Health Belief Model with respect to compliance. Thus, the model predicts compliance in the same way for both lesbian and heterosexual women. The results are consistent with past research indicating that the Health Belief Model is a good predictor of health behavior for some groups. Suggestions for future studies are discussed.
APA, Harvard, Vancouver, ISO, and other citation styles
43

Giacomel, Felipe dos Santos. „Um método algorítmico para operações na bolsa de valores baseado em ensembles de redes neurais para modelar e prever os movimentos dos mercados de ações“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/134586.

Full text of the source
Annotation:
Financial time series prediction has been a popular topic in recent years. However, although many time series prediction studies focus on the exact prediction of future values, we argue that this kind of prediction is hard to apply in real scenarios, and that it is more profitable to transform the prediction problem into a classification problem that indicates whether the time series is going to rise or fall in the next period. In this work we propose a stock buy-and-sell method based on predictions made by two neural network ensembles adjusted for different investment profiles: one for moderate investors and another for aggressive investors. These ensembles predict whether a given stock will rise or fall in the next time period instead of predicting its future values, allowing buy and sell recommendations to be generated for the next time period. The creation of such ensembles, however, faces the difficulty that each market behaves differently: factors such as seasonality and the location of the stock exchange are decisive in the development of the appropriate neural networks. To show the efficiency of our method in different situations, it is tested exhaustively on two different datasets: the North American (S&P 500) and Brazilian (Bovespa) stock markets. Real operations were simulated in these markets and we were able to profit in 89% of the tested cases, outperforming the comparative approaches in most cases.
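The ensemble-plus-profile idea can be sketched with trivially simple voters standing in for the neural networks (everything below is an invented illustration, not the authors' architecture): a moderate profile trades only on unanimity, an aggressive one on a simple majority.

```python
# Illustrative up/down ensemble with two decision profiles. The "members" are
# toy momentum rules, placeholders for trained networks.

def momentum_predictor(window):
    def predict(prices):
        return 1 if prices[-1] > prices[-window] else -1   # 1 = up, -1 = down
    return predict

ensemble = [momentum_predictor(w) for w in (2, 3, 5)]

def recommend(prices, profile="moderate"):
    votes = [p(prices) for p in ensemble]
    total = sum(votes)
    if profile == "moderate":                  # trade only on unanimous votes
        if total == len(votes):
            return "buy"
        if total == -len(votes):
            return "sell"
        return "hold"
    return "buy" if total > 0 else "sell"      # aggressive: simple majority

rising = [10.0, 10.2, 10.4, 10.6, 10.8, 11.0]
```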
APA, Harvard, Vancouver, ISO, and other citation styles
44

O'Neil, Madeline. „Does the School Day Matter? The Association Between Adolescent School Attachment and Involvement and Adult Criminal Behavior“. PDXScholar, 2016. http://pdxscholar.library.pdx.edu/open_access_etds/2990.

Full text of the source
Annotation:
Research with adolescents demonstrates that school involvement and attachment greatly influence students' outcomes and choices outside the school environment. Many studies have addressed whether delinquent behavior in adolescence is associated with various aspects of schooling, but there is limited research on the long-term effects of schooling on criminal behavior in adulthood. The purpose of this study was to assess whether students' attachment to their school or involvement in extracurricular activities at school shapes their outcomes in adulthood, specifically their criminality and likelihood of being arrested. In addition, this study took a gendered perspective, examining how gender moderates the associations between attachment and adult crime, and between involvement and adult crime. The study took a quantitative approach using Waves 1 and 4 of the National Longitudinal Study of Adolescent Health. Findings indicate that students' attachment to school is negatively associated with the likelihood of being arrested as an adult. In addition, the likelihood of adult criminal behavior is negatively associated with students' school involvement. Lastly, in this study I found that gender acts as a moderating mechanism between attachment and criminality, as well as between sports involvement and being arrested as an adult. Thus, this research adds to the established literature, which has demonstrated how school involvement and attachment improve outcomes in adolescence, by showing that these positive experiences affect downstream outcomes such as criminal behavior in adulthood.
APA, Harvard, Vancouver, ISO, and other citation styles
45

Tadano, Yara de Souza. „Simulação da dispersão dos poluentes atmosféricos para aplicação em análise de impacto“. [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/265147.

Full text of the source
Annotation:
Advisors: Ricardo Augusto Mazza, Edson Tomaz
Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Abstract: Currently, air pollution studies are divided into those that simulate pollutant dispersion and those that assess the health impacts of pollution; studies spanning both areas are uncommon. This research therefore proposes a methodology that couples dispersion modeling with health impact assessment, using well-established tools, in order to enable impact assessments in regions without monitoring data as well as the prediction of further impacts. The study was divided into three parts...Note: The complete abstract is available with the full electronic document
Doctorate
Thermal and Fluid Sciences
Doctor of Mechanical Engineering
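The dispersion half of such a coupled methodology is often a Gaussian plume model; the sketch below uses crude power-law dispersion spreads as placeholders rather than any regulatory parameterization, and none of the numbers come from the thesis.

```python
# Minimal Gaussian plume sketch with ground reflection. The sigma_y, sigma_z
# power laws are rough illustrative placeholders, not a stability-class scheme.
import math

def sigma_yz(x):
    """Very rough lateral/vertical dispersion spreads (m) vs downwind x (m)."""
    return 0.08 * x ** 0.9, 0.06 * x ** 0.85

def plume_concentration(Q, u, x, y, z, H):
    """Plume concentration (g/m^3) for emission Q (g/s), wind u (m/s),
    receptor (x, y, z) and effective stack height H (m)."""
    sy, sz = sigma_yz(x)
    lateral = math.exp(-y ** 2 / (2 * sy ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2 * sz ** 2))
                + math.exp(-(z + H) ** 2 / (2 * sz ** 2)))   # ground reflection
    return Q / (2 * math.pi * u * sy * sz) * lateral * vertical

# Concentration decays off the plume centerline (y = 0).
c_center = plume_concentration(Q=100.0, u=5.0, x=1000.0, y=0.0, z=0.0, H=20.0)
c_off    = plume_concentration(Q=100.0, u=5.0, x=1000.0, y=200.0, z=0.0, H=20.0)
```

Concentration fields of this kind are then the exposure input a health impact function would consume in the coupled assessment.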
APA, Harvard, Vancouver, ISO, and other citation styles
46

Cavanaugh, Jennifer A. „Does the way we measure fit matter? Predicting behaviors and attitudes using different measures of fit“. Thesis, State University of New York at Albany, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10109998.

Full text of the source
Annotation:

The literature on person-organization (P-O) fit has been plagued with inconsistencies in the conceptualization, operationalization and measurement of P-O fit. Despite numerous studies examining the relationship between P-O fit and outcomes, these inconsistencies in measurement and operationalization have led to mixed findings concerning specific individual outcomes. The goal of this dissertation was to address some of these inconsistencies by examining the relationship between P-O fit, using perceived and subjective measures of fit, and attitudinal and behavioral outcomes. In addition, previously unexplored mediators of the P-O fit-outcome relationships were examined. Although not formally hypothesized, it was believed that the magnitude of the relationships would differ such that perceived fit would have a stronger relationship with attitudinal outcomes than subjective fit, and that subjective fit would have a stronger relationship with job performance than perceived fit.

A sample of 188 entry-level managerial employees, working in a national transportation organization, was used to examine the relationship between P-O fit and job attitudes (i.e., job satisfaction, commitment, organizational citizenship behaviors and turnover intentions) and supervisor-rated job performance. The results of this dissertation suggest that perceived fit is related to positive attitudes and better job performance. Furthermore, perceived organizational support partially mediates the relationship between perceived fit and the attitudinal outcomes studied, lending partial support for the hypotheses. Role ambiguity was also examined as a potential mediator between fit and job performance; however, although perceived fit was significantly related to role ambiguity, the results did not support a relationship between role ambiguity and job performance. Tests of the specific hypotheses for subjective fit were not supported. Instead, the results indicated that organizational values, rather than fit between personal and organizational values, were a strong predictor of attitudinal outcomes.

APA, Harvard, Vancouver, ISO and other citation styles
47

Danter, Elizabeth Hall. „The intention-behavior gap: to what degree does Fishbein's integrated model of behavioral prediction predict whether teachers implement material learned in a professional development workshop?“. Connect to this title online, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1111698037.

The full text of the source
Annotation:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xv, 246 p.; also includes graphics (some col.). Includes bibliographical references (p. 172-182). Available online via OhioLINK's ETD Center
APA, Harvard, Vancouver, ISO and other citation styles
48

Soares, Paulo da Silva 1966. „Sistema de avaliação preditiva de falhas em máquinas elétricas usando lógica fuzzy com análise dos parâmetros de vibração, corrente e temperatura“. [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260877.

The full text of the source
Annotation:
Advisor: José Antonio Siqueira Dias
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Made available in DSpace on 2018-08-25T04:25:15Z (GMT). No. of bitstreams: 1 Soares_PaulodaSilva_D.pdf: 5062177 bytes, checksum: b08acd7cd74b46ed7ade778b2cbed7cf (MD5) Previous issue date: 2014
Abstract: This work presents the development of a low-cost system for monitoring industrial equipment with a view to predictive maintenance. The system monitors the vibration of the element under study and records it, comparing it against a vibration pattern considered nominal, i.e. a satisfactory operating condition of the machine. When a change in the vibration of the monitored device is detected, its behavior must be observed not only in amplitude but also across the frequency spectrum, since faults and anomalies generally produce vibrations at frequencies other than the device's nominal working frequency. The Fourier transform of the signal, together with records of frequent readings, allows continuous tracking of the monitored equipment. In addition to mechanical vibration, the system monitors the electrical current of the element's drive motor to detect possible overload and phase imbalance, and analyzes the frequency spectrum of the current signal, which makes it possible to evaluate changes in the motor's supply and thus flag anomalies of electrical origin. Finally, the temperature of the elements under study is monitored, since their service life depends on the temperatures to which they are subjected during operation, which directly affect coil insulation and the lubricants of the mechanical parts. The system evaluates a machine's condition without human intervention in the vibration and electrical measurements; combined with the accumulated history, it becomes a powerful tool for implementing a predictive maintenance program
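The spectral comparison described in the abstract — transforming the vibration signal and flagging energy at frequencies away from the nominal operating line — can be sketched in a few lines of Python. The sampling rate, frequencies, and threshold below are illustrative assumptions, not values taken from the thesis:

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive discrete Fourier transform; returns one magnitude per frequency bin."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

# Simulated vibration: a nominal 50 Hz component plus a weaker 120 Hz fault band.
fs = 480            # sampling rate in Hz (hypothetical)
n = 480             # one second of samples -> 1 Hz bin resolution
signal = [math.sin(2 * math.pi * 50 * t / fs)
          + 0.4 * math.sin(2 * math.pi * 120 * t / fs)
          for t in range(n)]

mags = dft_magnitudes(signal)
nominal_bin = 50
# Flag any bin away from the nominal line whose energy exceeds a threshold.
fault_bins = [k for k, m in enumerate(mags)
              if m > 0.25 * mags[nominal_bin] and abs(k - nominal_bin) > 2]
print(fault_bins)   # → [120]
```

In a real deployment the "nominal" spectrum would come from a baseline recording of the healthy machine rather than a single reference bin, and an FFT would replace the O(n²) DFT above.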
Doctorate
Electronics, Microelectronics and Optoelectronics
Doctor of Electrical Engineering
APA, Harvard, Vancouver, ISO and other citation styles
49

Pires, César Augusto. „A utilização dos indicadores contábeis como previsão de recuperação judicial de empresas brasileiras de capital aberto usando análise discriminante e regressão logística“. Pontifícia Universidade Católica de São Paulo, 2017. https://tede2.pucsp.br/handle/handle/20320.

The full text of the source
Annotation:
Submitted by Filipe dos Santos (fsantos@pucsp.br) on 2017-08-30T12:55:32Z No. of bitstreams: 1 César Augusto Pires.pdf: 1556011 bytes, checksum: b840c06ef7083d306486c8464b9a921a (MD5)
Made available in DSpace on 2017-08-30T12:55:32Z (GMT). No. of bitstreams: 1 César Augusto Pires.pdf: 1556011 bytes, checksum: b840c06ef7083d306486c8464b9a921a (MD5) Previous issue date: 2017-08-23
This study aims to identify, using logistic regression and discriminant analysis, the accounting performance indicators that signal judicial recovery; the topic is relevant because it can help corporate decision-makers avoid future financial problems. The research presents the origin and evolution of bankruptcy legislation in Brazil and several insolvency models used in the literature. It is descriptive with respect to its objectives and quantitative with respect to its procedures, applying statistical techniques to evaluate the performance of classification methods on the insolvency problem of publicly held companies; documents and accounting data from 2005 to 2015 were collected from the BM&FBovespa database for the empirical tests. Discriminant analysis classified 88% of the cases correctly — a good prediction rate — with no type II error (that is, classifying a solvent company as being in judicial recovery), using 11 variables after one was discarded. Logistic regression, by comparison, delivered comparable predictive accuracy with a simpler statistical model carrying the same substantive interpretation, one variable fewer, and a global hit rate of 90%.
From the logistic regression results it is possible to focus on just two variables, X4 (asset structure) and X2 (return on equity), as the main discriminators between the groups, since the goal of the analysis is not to maximize the likelihood of success: logistic regression provides a direct technique for distinguishing firms in judicial recovery from solvent firms and for understanding the relative impact of each independent variable in creating differences between the two groups. Finally, the results show that logistic regression, even with fewer variables, achieves the better classification rate.
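The comparison the abstract draws — an overall hit rate versus the cost of one specific misclassification — comes down to simple confusion-matrix arithmetic. A minimal Python sketch with invented labels (the thesis's actual BM&FBovespa sample is not reproduced here):

```python
# Toy data, illustrative only. 1 = in judicial recovery, 0 = solvent.
actual    = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
predicted = [1, 1, 0, 0, 0, 0, 1, 0, 1, 0]

hits = sum(a == p for a, p in zip(actual, predicted))
accuracy = hits / len(actual)               # overall hit rate

# Type II error in the thesis's sense: a solvent firm (0)
# classified as being in judicial recovery (1).
type2 = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
type2_rate = type2 / actual.count(0)

print(accuracy, type2)                      # → 0.8 1
```

A model can therefore score well on global accuracy while still committing the one error the analyst most wants to avoid, which is why the thesis reports the two measures separately.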
APA, Harvard, Vancouver, ISO and other citation styles
50

Mostrag-Szlichtyng, A. S. „Development of knowledge within a chemical-toxicological database to formulate novel computational approaches for predicting repeated dose toxicity of cosmetics-related compounds“. Thesis, Liverpool John Moores University, 2017. http://researchonline.ljmu.ac.uk/6798/.

The full text of the source
Annotation:
The European Union (EU) Cosmetics Regulation established the ban on animal testing for cosmetics ingredients. This ban does not assume that all cosmetics ingredients are safe, but that the non-testing procedures (in vitro and in silico) have to be applied for their safety assessment. To this end, the SEURAT-1 cluster was funded by EU 7th Framework Programme and Cosmetics Europe. The COSMOS (Integrated In Silico Models for the Prediction of Human Repeated Dose Toxicity of COSMetics to Optimise Safety) project was initiated as one of the seven consortia of the cluster, with the purpose of facilitating the prediction of human repeated dose toxicity associated with exposure to cosmetics-related compounds through in silico approaches. A critical objective of COSMOS was to address the paucity of publicly available data for cosmetics ingredients and related chemicals. Therefore a database was established containing (i) an inventory of cosmetics ingredients and related structures; (ii) skin permeability/absorption data (route of exposure relevant to cosmetics); and (iii) repeated dose toxicity data. This thesis describes the process of “knowledge discovery from the data”, including collation of the content of the COSMOS database and its subsequent application for developing tools to support the prediction of repeated dose toxicity of cosmetics and related compounds. A rigorous strategy of curation and quality control of chemical records was applied in developing the database (as documented in the Standard Operating Procedure, chapter 2). The chemical space of the cosmetics-related compounds was compared to food-related compounds from the U.S. FDA CFSAN PAFA database using the novel approach combining the analysis of structural features (ToxPrint chemotypes) and physicochemical properties. The cosmetics- and food- specific structural classes related to particular use functions and manifested by distinct physicochemical properties were identified (chapter 3). 
The novel COSMOS Skin Permeability Database containing in vivo and in vitro skin permeability/absorption data was developed by integrating existing databases and enriching them with new data for cosmetics harvested from regulatory documents and scientific literature (chapter 4). Compounds with available data on human in vitro maximal flux (JMAX) were subsequently extracted from the developed database and analysed in terms of their structural features (ToxPrint chemotypes) and physicochemical properties. The profile of compounds exhibiting low or high skin permeability potential was determined. The results of this analysis can support rapid screening and classification of the compounds without experimental data (chapter 5). The new COSMOS oral repeated dose toxicity database was established through consolidation of existing data sources and harvesting new regulatory documents and scientific literature. The unique data structure of the COSMOS oRepeatToxDB allows capturing all toxicological effects observed at particular dose levels and sites, which are hierarchically differentiated as organs, tissues, and cells (chapter 6). Such design of this database enabled the development of a liver toxicity ontology, followed by mechanistic mining of in vivo data (chapter 7). As a result, compounds associated with liver steatosis, steatohepatitis and fibrosis phenotypic effects were identified and further analysed. The probable mechanistic reasoning for toxicity (Peroxisome Proliferator-Activated Receptor gamma (PPARγ) activation) was formulated for two hepatotoxicants, namely 1,3-bis-(2,4-diaminophenoxy)-propane and piperonyl butoxide. Key outcomes of this thesis include an extensive curated database, Standard Operating Procedures, skin permeability potential classification rules, and the set of structural features associated with liver steatosis.
Such knowledge is particularly important in the light of the 21st Century Toxicology (NRC, 2007) and the ongoing need to move away from animal toxicity testing to non-testing alternatives.
APA, Harvard, Vancouver, ISO and other citation styles