Dissertations on the topic "FZG machine"

To view other types of publications on this topic, follow the link: FZG machine.

Format your source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Explore the top 26 dissertations for research on the topic "FZG machine".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication in .pdf format and read the abstract online, provided the relevant details are available in the metadata.

Browse dissertations across a variety of disciplines and compile your bibliography correctly.

1

Grenet, de Bechillon Nicolas. "Approche multi-échelles pour l'étude du grippage des dentures d'engrenages." Electronic Thesis or Diss., Lyon, INSA, 2023. http://www.theses.fr/2023ISAL0024.

Full text of the source
Abstract:
Environmental concerns are driving the aerospace industry to innovate and develop new technologies to achieve sustainable aviation. Among these innovations, the next generation of civil engines requires the integration of gearboxes. In order to design a reliable product, different failure modes, such as gear scuffing, must be taken into account. Scuffing is a sudden gear failure in which material is transferred from one tooth surface to the other; this transfer is caused by local surface welding during meshing. Scuffing leads to degradation of the tooth surface, which reduces gear efficiency. Although this failure mode has been extensively studied, there is no commonly accepted initiation criterion, so a physical understanding of scuffing initiation is needed. The first part of this study focused on the role of roughness. A numerical model was set up to evaluate the temperatures reached locally in the contact zone. The calculations show that the temperatures reached at the roughness scale do not seem able to explain the formation of micro-welds by fusion of the surface asperities in a lubricated contact. Scuffing therefore appears to be the consequence of a breakdown of the lubricant film. In the second part, this film breakdown was studied experimentally on a twin-disc machine. A procedure was developed to study the phenomenon by acting on the lubricant film thickness. The tests suggest that the breakdown of the lubricating film is governed by its temperature, which depends directly on the operating conditions. A candidate scuffing criterion was thus established on discs. In the last part, gear tests were carried out. As in the disc tests, total temperature alone did not predict scuffing; however, the criterion developed on discs also does not seem able to explain tooth scuffing in the tests performed. Since neither the classical criteria nor the criterion identified on discs seem able to explain scuffing, a new approach is proposed. Finally, conclusions and prospects are presented: the chronology of the scuffing initiation mechanism is recalled, and the prospects aim, on the one hand, to improve the representativeness of the disc tests with respect to gears, in particular regarding the geometry of the surface roughness, and, on the other hand, to analyse in detail and experimentally the hypothesis of lubricant film breakdown as the mechanism of scuffing initiation.
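For context on the lubricant film thickness discussed in this abstract, the sketch below is a minimal, generic illustration (not taken from the thesis) of how a minimum film thickness is commonly estimated for a gear-like line contact with the classic Dowson-Higginson formula, and how it is compared with surface roughness through the specific film thickness (lambda ratio). All numerical values are placeholder assumptions.

```python
# Minimal sketch (not from the thesis): estimating the lubricant film thickness in a
# gear-like EHL line contact with the classic Dowson-Higginson formula, then the
# specific film thickness (lambda ratio) often used to judge film-breakdown risk.
# All input values below are illustrative assumptions, not data from the work above.

def dowson_higginson_hmin(eta0, alpha, u_mean, w_line, E_prime, R):
    """Minimum film thickness for an EHL line contact (Dowson-Higginson form)."""
    U = eta0 * u_mean / (E_prime * R)      # dimensionless speed parameter
    G = alpha * E_prime                    # dimensionless materials parameter
    W = w_line / (E_prime * R)             # dimensionless load parameter
    return 2.65 * U**0.7 * G**0.54 * W**-0.13 * R

# Placeholder operating conditions for a steel gear contact
eta0 = 0.01       # lubricant dynamic viscosity at bulk temperature [Pa.s]
alpha = 2.0e-8    # pressure-viscosity coefficient [1/Pa]
u_mean = 10.0     # mean entrainment speed [m/s]
w_line = 3.0e5    # load per unit contact length [N/m]
E_prime = 2.2e11  # reduced elastic modulus [Pa]
R = 0.02          # equivalent radius of curvature [m]

h_min = dowson_higginson_hmin(eta0, alpha, u_mean, w_line, E_prime, R)

# Specific film thickness: h_min compared with the composite surface roughness
Rq1, Rq2 = 0.4e-6, 0.4e-6                  # rms roughness of the two surfaces [m]
lam = h_min / (Rq1**2 + Rq2**2) ** 0.5     # lambda near or below 1 indicates severe asperity contact
print(f"h_min = {h_min*1e6:.2f} um, lambda = {lam:.2f}")
```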
APA, Harvard, Vancouver, ISO, and other styles
2

Badokhon, Alaa. "An Adaptable, Fog-Computing Machine-to-Machine Internet of Things Communication Framework." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1492450137643915.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Holas, Jiří. "Modernizace řízení frézky FNG." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-442843.

Full text of the source
Abstract:
This thesis deals with a proposed modernisation of the control system and electrical installation of the FNG 32 milling machine. The thesis is divided into several sections. The first section is dedicated to research on the milling machine and a description of its current condition. The second section deals with possible retrofitting options and the available components. The third section includes the technical and economic evaluation and the selection of a solution. In the next section, the selected components are described and the electrical documentation of the machine is created in EPLAN. In the last section, a control design for the milling machine is created with the help of the TwinCAT development environment provided by the Beckhoff company.
APA, Harvard, Vancouver, ISO, and other styles
4

Gullo, Thomas W. "A Methodology to Evaluate the Dynamic Behavior of Back-to-back Test Machines." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555588592218025.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Lu, Shen. "Early identification of Alzheimer's disease using positron emission tomography imaging and machine learning." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/23735.

Full text of the source
Abstract:
Dementia is a chronic neurodegenerative condition affecting millions of elderly people around the world every day. There are several well-known forms of dementia; Alzheimer's disease (AD) is the most prevalent and deadly, with no known cure. However, there are effective interventions to delay the onset of AD, provided it is detected early. Fludeoxyglucose/Pittsburgh compound B positron emission tomography (FDG/PiB PET) is one of the most effective medical imaging modalities used by doctors for early AD identification in prospective patients. The task of diagnosing AD by visually examining PET images, however, is usually time consuming and depends heavily on the experience of the examiner. Another challenge is the potential loss of valuable information contained in past PET scans once they are archived after diagnosis. There is a need for computer-aided early AD identification systems that assist doctors in the clinical environment by providing a useful second opinion. Building such systems is extremely challenging for three main reasons. Firstly, the poor resolution of brain scans acquired from a PET scanner makes extracting salient spatial features from these scans difficult. Secondly, PET images obtained in the clinical environment are usually poorly labelled, because Alzheimer's disease cannot be confirmed until a series of follow-up scans over a long period of time is examined, or until post-mortem; this prevents supervised machine learning models from being trained properly. Lastly, longitudinal PET images are usually difficult to obtain because of subject drop-off during the follow-up period, yet these data provide extremely valuable insights into the development of dementia, and understanding how dementia progresses for different subjects will greatly benefit the development of personalised dementia care. In this thesis, we focus on solving these key issues by proposing several machine learning-based methods. To select the salient feature variables from PET images, we propose a novel feature selection algorithm inspired by evolutionary computing. We also propose a method for simultaneous feature selection and Alzheimer's disease classification, and we show the consistency between the selected features and clinically verified knowledge. To address the PET image quality issue, we propose a novel kernel learning method under the robust programming framework. We then focus on the challenging data issues existing in the clinical environment and propose two different but related semi-supervised learning methods. Finally, we design and implement a pipeline for predicting the direction of dementia progression using longitudinal PET images. The pipeline contains a sequence recovery component to recover missing observations and a sequence recognition component for temporal pattern recognition. The significance of our study is that it shows doctors the usefulness of a second opinion for early AD detection, provided by efficient and effective machine learning models built on PET imaging data, in several different practical scenarios.
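The semi-supervised setting described above (scans whose diagnosis is only confirmed much later) can be illustrated with a generic graph-based semi-supervised classifier. The sketch below uses scikit-learn's LabelSpreading on synthetic feature vectors; it is an illustration of the general idea, not the thesis's own methods.

```python
# Minimal sketch (not the thesis's method): semi-supervised classification when most
# scans are unlabelled, using scikit-learn's graph-based LabelSpreading on synthetic
# feature vectors. Unlabelled samples are marked with -1, as the API expects.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelSpreading

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Pretend only 10% of the scans have a confirmed diagnosis
rng = np.random.default_rng(0)
y_partial = y.copy()
unlabelled = rng.random(len(y)) > 0.10
y_partial[unlabelled] = -1

model = LabelSpreading(kernel="rbf", gamma=0.25)
model.fit(X, y_partial)

# Accuracy on the samples whose labels were hidden during training
acc = (model.transduction_[unlabelled] == y[unlabelled]).mean()
print(f"accuracy on unlabelled samples: {acc:.2f}")
```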
APA, Harvard, Vancouver, ISO, and other styles
6

Egli, Sebastian [Verfasser], and Jörg [Akademischer Betreuer] Bendix. "Satellite-Based Fog Detection: A Dynamic Retrieval Method for Europe Based on Machine Learning / Sebastian Egli ; Betreuer: Jörg Bendix." Marburg : Philipps-Universität Marburg, 2019. http://d-nb.info/1187443476/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Di, Donato Davide. "Sviluppo, Deployment e Validazione Sperimentale di Architetture Distribuite di Machine Learning su Piattaforma fog05." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19021/.

Full text of the source
Abstract:
Interest in fog computing and in the possibilities it offers has been growing steadily, including the ability to exploit considerable computational capacity in the nodes closest to the end user: this makes it possible to improve several quality-of-service parameters, such as the latency of service delivery and the cost of communications. In this thesis, building on these considerations, we created and tested two distributed machine learning architectures and then used them to provide a prediction service (related to condition monitoring) that improves on the cloud solution with respect to the parameters mentioned above. The fog05 platform, a tool that allows the efficient management of the various resources in a network, was then used to deploy these architectures. The objectives were twofold: to validate the architectures in terms of accuracy and convergence speed, and to confirm the ability of fog05 to handle complex deployments such as those required in our case. First of all, the architectures were chosen: one is based on the concept of gossip learning, the other on federated learning. These architectures were then implemented with Keras and their behaviour was tested: it emerged clearly that, in use cases such as the one examined, distributed approaches can deliver performance only slightly below that of a centralised solution. Finally, the architectures were successfully deployed using fog05, encapsulating its functionality inside an ad hoc orchestrator built to manage the delivery of the service offered by the architectures above in the most automated and resilient way possible.
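As a rough illustration of the federated-learning flavour of the architectures mentioned above (a generic sketch, not the thesis's Keras implementation), federated averaging combines model weights trained locally on each fog node, weighted by the amount of local data:

```python
# Generic federated-averaging sketch (not the thesis's Keras code): each fog node
# trains locally, then the server averages the weight vectors, weighting each node
# by its number of local samples.
import numpy as np

def federated_average(node_weights, node_sizes):
    """node_weights: list of 1-D weight arrays; node_sizes: samples per node."""
    sizes = np.asarray(node_sizes, dtype=float)
    stacked = np.stack(node_weights)                 # shape: (n_nodes, n_params)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Toy example: three edge nodes with different amounts of local data
w_nodes = [np.array([0.10, -0.20]), np.array([0.30, 0.00]), np.array([0.20, -0.10])]
n_samples = [100, 300, 600]
print(federated_average(w_nodes, n_samples))   # global weights for the next round
```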
APA, Harvard, Vancouver, ISO, and other styles
8

Anjum, Ayesha. "Differentiation of alzheimer's disease dementia, mild cognitive impairment and normal condition using PET-FDG and AV-45 imaging : a machine-learning approach." Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2238/.

Full text of the source
Abstract:
We used PET imaging with the tracers 18F-FDG and AV45 in conjunction with classification methods from the field of machine learning. PET images were acquired in dynamic mode, one image every 5 minutes. The images come from three different sources: the ADNI database (Alzheimer's Disease Neuroimaging Initiative, University of California Los Angeles) and two protocols performed in the PET centre of Purpan Hospital. The classification was applied after processing the dynamic images by principal component analysis and independent component analysis. The data were separated into a training set and a test set. To evaluate classification performance we used leave-one-out cross-validation (LOOCV). We compare the two most widely used classification methods, support vector machines (SVM) and artificial neural networks (ANN), for both tracers. The combination giving the best classification rate appears to be SVM with the AV45 tracer. However, the largest confusion is found between MCI patients and normal subjects; Alzheimer's patients are distinguished somewhat better, since they are correctly identified in more than 90% of cases. We evaluated the generalisation of these methods by training on one dataset and classifying another, reaching a specificity of 100% and a sensitivity above 81%. The SVM method showed better sensitivity than the artificial neural network method. The value of such work is to help clinicians in diagnosing Alzheimer's disease.
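A minimal sketch of the kind of comparison reported above, leave-one-out cross-validation of an SVM against a small neural network, using scikit-learn on synthetic feature vectors (an illustration only, not the thesis's pipeline or data):

```python
# Minimal sketch (illustrative only): comparing an SVM and a small neural network
# with leave-one-out cross-validation, as in the study summarised above, but on
# synthetic data standing in for PCA/ICA features extracted from PET images.
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=80, n_features=15, random_state=42)
loo = LeaveOneOut()

for name, clf in [
    ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))),
    ("ANN", make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(20,),
                                                          max_iter=2000, random_state=0))),
]:
    scores = cross_val_score(clf, X, y, cv=loo)   # one held-out sample per fold
    print(f"{name}: LOOCV accuracy = {scores.mean():.2f}")
```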
APA, Harvard, Vancouver, ISO, and other styles
9

Dukart, Jürgen. "Contribution of FDG-PET and MRI to improve Understanding, Detection and Differentiation of Dementia." Doctoral thesis, Universitätsbibliothek Leipzig, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-66495.

Full text of the source
Abstract:
The progression and pattern of changes in different biomarkers of Alzheimer's disease (AD) and frontotemporal lobar degeneration (FTLD), such as [18F]fluorodeoxyglucose positron emission tomography (FDG-PET) and magnetic resonance imaging (MRI), have been carefully investigated over the past decades. However, there have been substantially fewer studies investigating the potential of combining these imaging modalities to make use of multimodal information and further improve the understanding, detection and differentiation of various dementia syndromes. Furthermore, the role of preprocessing has rarely been addressed in previous research, although different preprocessing algorithms have been shown to substantially affect the diagnostic accuracy of dementia. In the present work, common preprocessing procedures used to scale FDG-PET data were compared with each other. In addition, FDG-PET and MRI information were jointly analyzed using univariate and multivariate techniques. The results suggest a highly differential effect of different scaling procedures of FDG-PET data on the detection and differentiation of various dementia syndromes. Additionally, it was shown that combining multimodal information further improves the automatic detection and differentiation of AD and FTLD.
APA, Harvard, Vancouver, ISO, and other styles
10

Castellanos, Carlos. "Development of a validation shape sensing algorithm in Python with predictive and automated analysis." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-454942.

Full text of the source
Abstract:
Difficulties with wind turbines can arise during operation due to external forces provoked by the wind. Calculating the deflection of the blades can be used to give break points for maintenance, design and/or monitoring purposes. Fiber Bragg Grating (FBG) sensors can be installed on the wind blades to detect signals that can be reinterpreted as deflection in different directions. In this project a tool was developed that can take this information in real time to analyze critical issues, which is important to save time and operational and maintenance (O&M) costs. To do so, a predictive model is used to anticipate the deflection in the blades caused by the impact of the wind in different orientations. The main purpose of this work is to present an algorithm that can transform optical signals from the FBG sensors into a shape calculator for the deflection, for maintenance purposes. At the same time, it is shown that this algorithm can be used as a forecast tool taking the weather data into consideration.
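For context on how FBG signals become mechanical quantities (a generic relation, not the algorithm developed in this thesis): the relative Bragg wavelength shift is commonly modelled as Δλ/λ0 ≈ (1 − p_e)·ε + (α + ξ)·ΔT, so at constant temperature a measured shift maps to strain as sketched below. The photo-elastic coefficient and nominal wavelength are assumed textbook values.

```python
# Generic FBG relation (not the thesis's algorithm): convert a measured Bragg
# wavelength shift into axial strain, assuming temperature is constant so the
# thermal term of Delta_lambda/lambda0 = (1 - p_e)*strain + (alpha + xi)*dT drops out.
LAMBDA_0 = 1550.0e-9   # nominal Bragg wavelength [m] (assumed)
P_E = 0.22             # effective photo-elastic coefficient of silica fibre (typical value)

def strain_from_shift(delta_lambda_m):
    """Return axial strain for a wavelength shift given in metres."""
    return delta_lambda_m / (LAMBDA_0 * (1.0 - P_E))

# Example: a +120 pm shift corresponds to roughly 99 microstrain
shift = 120e-12
print(f"strain = {strain_from_shift(shift)*1e6:.1f} microstrain")
```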
APA, Harvard, Vancouver, ISO, and other styles
11

Wheeler, Nathan. "On the Effectiveness of an IOT - FOG - CLOUD Architecture for a real-world application." UNF Digital Commons, 2018. https://digitalcommons.unf.edu/etd/855.

Full text of the source
Abstract:
Fog Computing is an emerging computing paradigm that shifts certain processing closer to the Edge of a network, generally within one network hop, where latency is minimized, and results can be obtained the quickest. However, not a lot of research has been done on the effectiveness of Fog in real-world applications. The aim of this research is to show the effectiveness of the Fog Computing paradigm as the middle layer in a 3-tier architecture between the Internet of Things (IoT) and the Cloud. Two applications were developed: one utilizing Fog in a 3-tier architecture and another application using IoT and Cloud with no Fog. A quantitative and qualitative analysis followed the application development, with studies focused on application response time and walkthroughs for AWS Greengrass and Amazon Machine Learning. Furthermore, the application itself demonstrates an architecture which is of both business and research value, providing a real-life coffee shop use-case and utilizing a newly available Fog offering from Amazon known as Greengrass. At the Cloud level, the newly available Amazon Machine Learning API was used to perform predictive analytics on the data provided by the IoT devices. Results suggest that Fog-enabled applications have a much lower range of response times as well as lower response times overall. These results suggest Fog-enabled solutions are suitable for applications which require network stability and reliably lower latency.
APA, Harvard, Vancouver, ISO, and other styles
12

Abboud, Rita. "Méthode de mesure sans contact de la température intégrée au rotor d’une machine électrique tournante au moyen d’une fibre optique à réseaux de Bragg." Thesis, Compiègne, 2021. http://www.theses.fr/2021COMP2645.

Full text of the source
Abstract:
In the transportation domain, heating problems appear as the temperature increases in different types of electrical machines. In the classical design of electrical machines, thermal analysis should be considered in the initial design, control and monitoring. Measuring the local temperature, especially in the rotor, is important for several reasons, such as extending the lifetime of the machine components and localising the hot spots inside the machine, which allows appropriate cooling systems to be developed and protects the machine. Numerous approaches to temperature measurement can be used, such as thermocouples, thermistors, infrared sensors or infrared cameras. This thesis presents a non-contact technique that measures the rotor temperature of a small machine using a Fiber Bragg Grating (FBG) sensor. Monitoring the local temperature, especially inside the rotor, is important in order to detect early thermal ageing of the machine, and hot spots in the rotating parts can be localised using this technique. The main originality of the proposed work is measuring high temperatures (70°C) at a high rotation speed (860 RPM) and, most importantly, integrating the FBG sensor into a geometrically small-scale electrical vehicle rotor. The FBG sensor response was simulated using the transfer matrix method (TMM). The FBG was then calibrated from 20°C to 70°C using a heating furnace built in our laboratory. A small rotating machine with an embedded FBG was then designed and fabricated. The rotor temperature was varied while the machine was rotating, and the wavelength shifts due to temperature variations were measured experimentally up to 860 RPM. A temperature sensitivity of 4.7 pm/°C was reached experimentally. The ability of this sensor to monitor real-time temperature variations of the rotor has been experimentally validated.
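Using the sensitivity reported above (4.7 pm/°C), converting a measured wavelength shift back into a rotor temperature is a simple linear calibration. The sketch below assumes a hypothetical reference point and is only meant to make the relation concrete.

```python
# Simple linear calibration using the 4.7 pm/°C sensitivity quoted above.
# The reference temperature is a hypothetical calibration point, not thesis data.
SENSITIVITY_PM_PER_C = 4.7     # from the experiment summarised above
T_REF_C = 20.0                 # assumed calibration reference temperature [°C]

def rotor_temperature(delta_lambda_pm):
    """Temperature [°C] for a Bragg wavelength shift [pm] measured relative to T_REF_C."""
    return T_REF_C + delta_lambda_pm / SENSITIVITY_PM_PER_C

# Example: a +235 pm shift corresponds to 20 + 235/4.7 = 70 °C
print(rotor_temperature(235.0))
```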
APA, Harvard, Vancouver, ISO, and other styles
13

Vanhoutte, Matthieu. "Caractérisation par imagerie TEP 18F-FDG de la maladie d’Alzheimer à début précoce." Thesis, Lille 2, 2018. http://www.theses.fr/2018LIL2S026/document.

Full text of the source
Abstract:
Alzheimer's disease (AD) is the most common form of neurodegenerative dementia, characterised at 95% by late-onset forms (LOAD), which present episodic memory impairment and progress slowly. However, 5% of AD patients have an early-onset form (EOAD) of the disease, with onset before the age of 65. Although the lesion substratum is similar between EOAD and LOAD, EOAD shows more severe neuritic plaque deposits, neurofibrillary tangles and brain atrophy. Moreover, EOAD is more heterogeneous than LOAD: even though most impairments concern episodic memory, there is a high proportion of atypical forms with impaired language, visuospatial or executive functions. Although many 18F-FDG PET studies have metabolically characterised EOAD compared with LOAD or with healthy control groups, very few have differentiated typical from atypical forms. In this thesis, we examined 18F-FDG PET data, complemented by structural MRI, in order to improve the characterisation and understanding of typical and atypical forms of EOAD. Following initial work to harmonise the 18F-FDG PET reconstructions from the GE and Siemens scanners used to acquire the patient data, our second aim was to study, at baseline and over the whole brain, the hypometabolic patterns characterising the clinical forms of EOAD and their correlations with neuropsychological performance. This work showed that each clinical form of EOAD is characterised by specific hypometabolic patterns highly correlated with the clinical symptoms and the neuropsychological scores of the associated cognitive domain. We then focused on the 3-year progression of hypometabolism on the cortical surface for typical and atypical forms of EOAD. Although similar patterns of hypometabolism evolution were observed in the parietal cortices of both groups, only the atypical forms showed a more severe bilateral reduction of metabolism in the lateral orbitofrontal cortices, associated with more severe cognitive decline. Temporally, the results suggest that hypometabolism in typical forms progresses along an anterior-to-posterior axis, consistent with the Braak and Braak stages, whereas in atypical forms it progresses along a posterior-to-anterior axis. Taken together, these results support the hypothesis of a different distribution of tau pathology, in terms of burden and temporal evolution, between the two forms of EOAD. Our last goal was to determine the discriminative power of 18F-FDG PET data, alone or combined with structural MRI data, for automatically classifying EOAD patients into typical or atypical forms in a supervised manner. We applied machine learning algorithms combined with cross-validation methods to assess the influence of several components on classification performance. Maximum balanced accuracies of 80.8% for monomodal 18F-FDG PET and 92.4% for multimodal 18F-FDG PET/T1 MRI were obtained, validating 18F-FDG PET as a sensitive biomarker of EOAD and highlighting the clear contribution of multimodality. In conclusion, our work allows a better characterisation and understanding of the clinical forms of EOAD, paving the way for personalised patient management and more effective treatments for these distinct clinical forms.
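Balanced accuracy, the metric quoted above, is simply the mean of the per-class recalls, which makes it robust when the typical and atypical groups have different sizes. A minimal illustration with made-up predictions (not the study's data) follows.

```python
# Balanced accuracy = mean of per-class recalls; illustrated on made-up labels,
# not on the study's data. Useful when the two diagnostic groups are unbalanced.
from sklearn.metrics import balanced_accuracy_score, recall_score

y_true = [0, 0, 0, 0, 0, 0, 1, 1, 1]          # e.g. 0 = typical, 1 = atypical
y_pred = [0, 0, 0, 0, 0, 1, 1, 1, 0]

recall_typical  = recall_score(y_true, y_pred, pos_label=0)    # 5/6
recall_atypical = recall_score(y_true, y_pred, pos_label=1)    # 2/3
print((recall_typical + recall_atypical) / 2)                  # 0.75
print(balanced_accuracy_score(y_true, y_pred))                 # same value
```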
APA, Harvard, Vancouver, ISO, and other styles
14

CONCONE, Federico. "EFFICIENT AND SECURE ALGORITHMS FOR MOBILE CROWDSENSING THROUGH PERSONAL SMART DEVICES." Doctoral thesis, Università degli Studi di Palermo, 2021. http://hdl.handle.net/10447/481969.

Full text of the source
Abstract:
The success of modern pervasive sensing strategies, such as social sensing, strongly depends on the diffusion of smart mobile devices. Smartwatches, smartphones, and tablets are devices capable of capturing and analyzing data about the user's context, and can be exploited to infer high-level knowledge about the user and/or the surrounding environment. In this sense, one of the most relevant applications of the social sensing paradigm concerns distributed Human Activity Recognition (HAR) in scenarios ranging from health care to urban mobility management, ambient intelligence, and assisted living. Even though some simple HAR techniques can be directly implemented on mobile devices, in some cases, such as when complex activities need to be analyzed in a timely manner, users' smart devices should be able to operate as part of a more complex architecture, paving the way to the definition of new distributed computing paradigms. The general idea behind these approaches is to move early analysis towards the edge of the network, while relying on other intermediate (fog) or remote (cloud) devices for computations of increasing complexity. This logic is the core of the fog computing paradigm, and this thesis investigates its adoption in distributed sensing frameworks. Specifically, the analysis focused on the design of a novel distributed HAR framework in which the heavy computation is moved from the sensing layer to intermediate devices and then to the cloud. Smart personal devices are used as processing units in order to guarantee real-time recognition, whereas the cloud is responsible for maintaining an overall, consistent view of the whole activity set. Compared to traditional cloud-based solutions, this choice overcomes the processing and storage limitations of wearable devices while also reducing the overall bandwidth consumption. The fog-based architecture then allowed the design and definition of a novel HAR technique that combines three machine learning algorithms, namely k-means clustering, Support Vector Machines (SVMs), and Hidden Markov Models (HMMs), to recognize complex activities modeled as sequences of simple micro-activities. The capability to distribute the computation over the different entities in the network, allowing the use of complex HAR algorithms, is definitely one of the most significant advantages provided by the fog architecture. However, because of both its intrinsic nature and its high degree of modularity, the fog-based system is particularly prone to cyber security attacks that can be performed against every element of the infrastructure. This aspect plays a major role in social sensing, since users' private data must be protected from malicious use. Security issues are generally addressed by introducing cryptographic mechanisms that improve the system's defenses against cyber attackers while, at the same time, increasing the computational overhead for devices with limited resources. With the goal of finding a trade-off between security and computation cost, the design and definition of a secure lightweight protocol for social-based applications are discussed and then integrated into the distributed framework. The protocol covers all tasks commonly required by a general fog-based crowdsensing application, making it applicable not only in a distributed HAR scenario, discussed as a case study, but also in other application contexts.
Experimental analysis assesses the performance of the solutions described so far. After highlighting the benefits the distributed HAR framework might bring to smart environments, an evaluation in terms of both recognition accuracy and complexity of the data exchanged between network devices is conducted. Then, the effectiveness of the secure protocol is demonstrated by showing its low impact on the total computational overhead. Moreover, a comparison with other state-of-the-art protocols is made to prove its effectiveness in terms of the provided security mechanisms.
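As a rough, simplified illustration of the first stage of such a pipeline (not the thesis's implementation, and leaving out the HMM sequence model), k-means can turn windows of accelerometer features into discrete micro-activity symbols that a downstream sequence model or classifier then consumes:

```python
# Simplified sketch of the first stage of a HAR pipeline like the one above:
# k-means turns fixed-size windows of (synthetic) accelerometer features into
# discrete "micro-activity" symbols; a sequence model (e.g. an HMM, omitted here)
# or a classifier would then operate on these symbol sequences.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 200 windows, each summarised by 6 features (e.g. mean/std per accelerometer axis)
window_features = rng.normal(size=(200, 6))

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(window_features)
micro_activity_symbols = kmeans.labels_          # one symbol per window

# A complex activity is then a sequence of symbols, e.g. the last 10 windows:
print(micro_activity_symbols[-10:])
```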
APA, Harvard, Vancouver, ISO, and other styles
15

Rinaldi, Riccardo. "Deployment e Gestione di Applicazioni di Federated Learning in Edge Cloud Computing basate sul Framework Fog05." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find the full text of the source
Abstract:
Federated learning is a new branch of machine learning that arose to meet the need for system architectures capable of handling big data while at the same time guaranteeing the privacy of any sensitive data. To operate under these two constraints, the idea is to avoid gathering the data in a centralised database, so that they never leave the edge of the network. This is where edge computing comes in: smart devices, placed between the cloud and the Things, pre-process the collected data and then aggregate the results on a single server. Federated learning and edge/cloud computing are two sides of the same coin; the two worlds are deeply interconnected, because doing federated learning means operating in an edge and cloud environment.
APA, Harvard, Vancouver, ISO, and other styles
16

Mi, Hongmei. "PDE modeling and feature selection : prediction of tumor evolution and patient outcome in therapeutic follow-up with FDG-PET images." Rouen, 2015. http://www.theses.fr/2015ROUES005.

Full text of the source
Abstract:
Adaptive radiotherapy has the potential to improve the patient's outcome by re-optimising the treatment plan early on or during the course of treatment, taking individual specificities into account. Predictive studies of the patient's therapeutic follow-up are therefore of interest for deciding how to adapt the treatment to each individual patient. In this thesis, we conduct two predictive studies using the patient's positron emission tomography (PET) images. The first study aims to predict tumor evolution during radiotherapy. We propose a patient-specific tumor growth model derived from the advection-reaction equation, composed of three terms representing three biological processes; the model parameters are estimated from the patient's preceding sequential PET images. The second part of the thesis focuses on the case where frequent imaging of the tumor is not available. We therefore conduct another study whose objective is to select predictive factors, among PET-based and clinical characteristics, for the patient's outcome after treatment. Our second contribution is thus a wrapper feature selection method, which searches forward in a hierarchical feature-subset space and evaluates feature subsets by their prediction performance, using a support vector machine (SVM) as the classifier. For both predictive studies, promising results are obtained on real-world cancer-patient datasets.
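A minimal sketch of a wrapper-style forward feature selection loop of the kind described above, scoring each candidate subset with a cross-validated SVM (generic scikit-learn code on synthetic data, not the thesis's hierarchical search):

```python
# Minimal wrapper-style forward selection sketch (not the thesis's hierarchical
# search): greedily add the feature that most improves the cross-validated
# accuracy of an SVM, on synthetic data standing in for PET/clinical features.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=12, n_informative=4, random_state=1)

selected, remaining = [], list(range(X.shape[1]))
best_score = 0.0
while remaining:
    scores = {f: cross_val_score(SVC(kernel="rbf"), X[:, selected + [f]], y, cv=5).mean()
              for f in remaining}
    f_best, s_best = max(scores.items(), key=lambda kv: kv[1])
    if s_best <= best_score:          # stop when no candidate improves the score
        break
    selected.append(f_best)
    remaining.remove(f_best)
    best_score = s_best

print("selected features:", selected, "CV accuracy:", round(best_score, 2))
```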
APA, Harvard, Vancouver, ISO, and other styles
17

BERRI, PIER CARLO. "Design and development of algorithms and technologies applied to prognostics of aerospace systems." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2927464.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
18

Darrous, Jad. "Scalable and Efficient Data Management in Distributed Clouds : Service Provisioning and Data Processing." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEN077.

Full text of the source
Abstract:
This thesis focuses on scalable data management solutions to accelerate service provisioning and enable the efficient execution of data-intensive applications in large-scale distributed clouds. Data-intensive applications increasingly run on distributed infrastructures (multiple clusters). The two main reasons for this trend are that 1) moving computation to the data sources can eliminate the latency of data transmission, and 2) storing the data on a single site may not be feasible given the continuous growth in data size. On the one hand, most applications run on virtual clusters to provide isolated services, and require virtual machine images (VMIs) or container images to provision such services. It is therefore important to enable fast provisioning of virtualization services in order to reduce the waiting time of newly started services or applications. Unlike previous work, in the first part of this thesis we optimized data retrieval and placement while taking into account challenging issues, including the continuous increase in the number and size of VMIs and container images and the limited bandwidth and heterogeneity of wide area network (WAN) connections. On the other hand, data-intensive applications rely on replication to provide dependable and fast services, but replication becomes expensive, and even infeasible, with the unprecedented growth of data size. The second part of this thesis provides one of the first studies on understanding and improving the performance of data-intensive applications when replication is replaced with the storage-efficient erasure coding (EC) technique.
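The storage argument for erasure coding can be made concrete with a small calculation: 3-way replication stores three full copies (200% overhead), whereas a Reed-Solomon-style RS(k, m) code stores k data blocks plus m parity blocks (overhead m/k). The figures below are generic textbook numbers, not results from the thesis.

```python
# Generic storage-overhead comparison (textbook arithmetic, not thesis results):
# n-way replication vs. an RS(k, m) erasure code that tolerates m lost blocks.
def replication_overhead(copies):
    return (copies - 1) * 100.0                 # extra storage as % of the data size

def erasure_coding_overhead(k_data, m_parity):
    return m_parity / k_data * 100.0

print(f"3-way replication : {replication_overhead(3):.0f}% extra storage")        # 200%
print(f"RS(6, 3)          : {erasure_coding_overhead(6, 3):.0f}% extra storage")  # 50%
print(f"RS(10, 4)         : {erasure_coding_overhead(10, 4):.0f}% extra storage") # 40%
```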
APA, Harvard, Vancouver, ISO, and other styles
19

Fraunholz, Uwe, and Manuel Schramm. "Innovation durch Konzentration? Schwerpunktbildung und Wettbewerbsfähigkeit im Hochschulwesen der DDR und der Bundesrepublik, 1949-1990." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-138872.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
20

Fraunholz, Uwe, and Manuel Schramm. "Innovation durch Konzentration? Schwerpunktbildung und Wettbewerbsfähigkeit im Hochschulwesen der DDR und der Bundesrepublik, 1949-1990: BMBF-Forschungsverbund »Innovationskultur in Deutschland« [Abschlussbericht]." Technische Universität Dresden, 2005. https://tud.qucosa.de/id/qucosa%3A27788.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
21

Duthon, Pierre. "Descripteurs d'images pour les systèmes de vision routiers en situations atmosphériques dégradées et caractérisation des hydrométéores." Thesis, Université Clermont Auvergne‎ (2017-2020), 2017. http://www.theses.fr/2017CLFAC065/document.

Full text of the source
Abstract:
Computer vision systems are increasingly being used on roads. They can be installed along the infrastructure for traffic monitoring purposes, and, when mounted in vehicles, they perform driver assistance functions. In both cases, computer vision systems enhance road safety and streamline travel. A literature review starts by retracing the introduction and rollout of computer vision algorithms in road environments, and goes on to demonstrate the importance of image descriptors in the processing chains implemented in such algorithms. It continues with a review of image descriptors from a novel angle, considering them in parallel with the final applications, which opens up numerous analytical perspectives. Finally, the literature review makes it possible to assess which descriptors are the most representative in road environments. Several databases containing images and associated meteorological data (e.g. rain, fog) are then presented. These databases are original in that image acquisition and weather condition measurement take place at the same time and at the same location, using calibrated meteorological sensors. Each database contains different scenes (e.g. black-and-white target, pedestrian) and different kinds of weather (rain, fog, daytime, night-time), covering digitally simulated, artificially reproduced and natural weather conditions. Seven of the most representative image descriptors in the road context are then selected and their robustness in rainy conditions is evaluated. Image descriptors based on pixel intensity and those that use vertical edges are sensitive to rain. Conversely, the Harris feature and features that combine different edge orientations remain robust for rainfall rates ranging from 0 to 30 mm/h. The robustness of image features in rainy conditions decreases as the rainfall rate increases. Finally, the image descriptors most sensitive to rain have potential for use in a camera-based rain classification application. The behaviour of an image descriptor in adverse weather conditions is not necessarily related to that of the associated final function. Thus, two pedestrian detectors were assessed in degraded weather conditions (rain, fog, daytime, night-time). Night-time and fog are the conditions that have the greatest impact on pedestrian detection. The methodology developed and the associated database could be reused to assess other final functions (e.g. vehicle detection, traffic sign detection). In road environments, real-time knowledge of local weather conditions is an essential prerequisite for addressing the twin challenges of enhancing road safety and streamlining travel. Currently, the only means of quantifying weather conditions along a road network is to install meteorological stations. Such stations are costly and must be maintained; however, large numbers of cameras are already installed on the roadside. A new method that uses road traffic cameras to detect weather conditions has therefore been proposed. This method uses a combination of a neural network and image descriptors applied to image patches. It addresses a clearly defined set of constraints relating to the ability to operate in real time, to classify the full spectrum of meteorological conditions and to grade them according to their intensity. The method differentiates between normal daytime, rain, fog and normal night-time conditions. After several optimisation steps, the proposed method obtains better results than those reported in the literature for comparable algorithms.
APA, Harvard, Vancouver, ISO, and other styles
22

Markel, Daniel. "Automatic Segmentation of Lung Carcinoma Using 3D Texture Features in Co-registered 18-FDG PET/CT Images." Thesis, 2011. http://hdl.handle.net/1807/31332.

Full text of the source
Abstract:
Variability between oncologists in defining the tumor during radiation therapy planning can be as high as 700% by volume. Robust, automated definition of tumor boundaries could therefore significantly improve treatment accuracy and efficiency. However, the information provided by computed tomography (CT) is not sensitive enough to the differences between tumor and healthy tissue, and positron emission tomography (PET) is hampered by blurriness and low resolution. The textural characteristics of thoracic tissue were investigated and compared with those of tumors found within 21 patient PET and CT images in order to enhance the differences, and the boundary, between cancerous and healthy tissue. A pattern recognition approach was used to learn the textural characteristics of each tissue type from these samples and to classify voxels as either normal or abnormal. The approach was compared to a number of alternative methods and found to have the highest overlap with an oncologist's tumor definition.
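As a small, generic illustration of what a "texture feature" is in this context (not the thesis's 3D implementation), a grey-level co-occurrence matrix (GLCM) counts how often pairs of intensity levels appear next to each other, and scalar statistics such as contrast are then derived from it:

```python
# Generic 2-D GLCM sketch (the thesis works with 3-D patches; this only illustrates
# the idea of a texture feature): count co-occurrences of grey levels for a
# (dy, dx) = (0, 1) offset, then derive the "contrast" statistic.
import numpy as np

def glcm(image, levels, dy=0, dx=1):
    mat = np.zeros((levels, levels), dtype=float)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            mat[image[y, x], image[y + dy, x + dx]] += 1
    return mat / mat.sum()                      # normalise to joint probabilities

def contrast(p):
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())      # high for rough/heterogeneous texture

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
print(contrast(glcm(img, levels=4)))
```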
APA, Harvard, Vancouver, ISO, and other styles
23

"3D - Patch Based Machine Learning Systems for Alzheimer’s Disease classification via 18F-FDG PET Analysis." Master's thesis, 2017. http://hdl.handle.net/2286/R.I.44163.

Full text of the source
Abstract:
Alzheimer's disease (AD) is a chronic neurodegenerative disease that usually starts slowly and gets worse over time. It is the cause of 60% to 70% of cases of dementia. There is growing interest in identifying brain image biomarkers that help evaluate AD risk pre-symptomatically. High-dimensional non-linear pattern classification methods have been applied to structural magnetic resonance images (MRIs) and used to discriminate between clinical groups in Alzheimer's progression. Using fluorodeoxyglucose (FDG) positron emission tomography (PET) as the preferred imaging modality, this thesis develops two independent machine learning based patch analysis methods and uses them to perform six binary classification experiments across different AD diagnostic categories. Specifically, features were extracted and learned using dimensionality reduction and dictionary learning & sparse coding, by taking overlapping patches in and around the cerebral cortex and using them as features. Using AdaBoost as the preferred choice of classifier, both methods try to utilize 18F-FDG PET as a biological marker in the early diagnosis of Alzheimer's. Additionally, we investigate the involvement of rich demographic features (ApoE3, ApoE4 and Functional Activities Questionnaires (FAQ)) in classification. The experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate the effectiveness of both proposed systems. The use of 18F-FDG PET may offer a new sensitive biomarker and enrich the brain imaging analysis toolset for studying the diagnosis and prognosis of AD.
Dissertation/Thesis
Thesis Defense Presentation
Masters Thesis Computer Science 2017
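A toy version of the patch-based pipeline summarised above (dimensionality reduction followed by a boosted classifier), using PCA and AdaBoost from scikit-learn on synthetic patch vectors; this illustrates the general idea only, not the thesis's system:

```python
# Toy patch-based classification sketch (illustration only, not the thesis's system):
# flattened image patches -> PCA dimensionality reduction -> AdaBoost classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_patches = 400
patches = rng.normal(size=(n_patches, 9 * 9 * 9))   # synthetic 9x9x9 patches, flattened
labels = rng.integers(0, 2, size=n_patches)         # e.g. 0 = control, 1 = AD

X_train, X_test, y_train, y_test = train_test_split(patches, labels, random_state=0)

model = make_pipeline(PCA(n_components=30), AdaBoostClassifier(n_estimators=100))
model.fit(X_train, y_train)
print("test accuracy:", round(model.score(X_test, y_test), 2))  # ~0.5 here: labels are random
```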
APA, Harvard, Vancouver, ISO, and other styles
24

Dukart, Jürgen. "Contribution of FDG-PET and MRI to improve Understanding, Detection and Differentiation of Dementia." Doctoral thesis, 2010. https://ul.qucosa.de/id/qucosa%3A11143.

Full text of the source
Abstract:
The progression and pattern of changes in different biomarkers of Alzheimer's disease (AD) and frontotemporal lobar degeneration (FTLD), such as [18F]fluorodeoxyglucose positron emission tomography (FDG-PET) and magnetic resonance imaging (MRI), have been carefully investigated over the past decades. However, there have been substantially fewer studies investigating the potential of combining these imaging modalities to make use of multimodal information and further improve the understanding, detection and differentiation of various dementia syndromes. Furthermore, the role of preprocessing has rarely been addressed in previous research, although different preprocessing algorithms have been shown to substantially affect the diagnostic accuracy of dementia. In the present work, common preprocessing procedures used to scale FDG-PET data were compared with each other. In addition, FDG-PET and MRI information were jointly analyzed using univariate and multivariate techniques. The results suggest a highly differential effect of different scaling procedures of FDG-PET data on the detection and differentiation of various dementia syndromes. Additionally, it was shown that combining multimodal information further improves the automatic detection and differentiation of AD and FTLD.
APA, Harvard, Vancouver, ISO, and other styles
25

Tsu-Chi Cheng and 鄭子琪. "Development of lymph node metastasis diagnosis system for patients with non-small-cell lung cancer (NSCLC) on F-18-FDG PET/CT images via machine learning algorithm." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/59qf6t.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
26

Wilson, Preethy. "Inter-device authentication protocol for the Internet of Things." Thesis, 2017. http://hdl.handle.net/1828/8139.

Full text of the source
Abstract:
The Internet of Things (IoT) has recently blossomed remarkably and is transforming the everyday physical entities around us into an ecosystem of information that will enrich our lives in unimaginable ways. Authentication is one of the primary goals of security in the IoT and acts as the main gateway to a secure system that transmits confidential and/or private data. This thesis focuses on a device-to-device mutual authentication protocol designed for the smart home network, an essential component of communication in the Internet of Things (IoT). The protocol is based on asymmetric cryptography: the devices in the network authenticate each other and agree on a shared secret session key. In order to ensure the security of a communication session between the devices, the session keys are changed frequently, ideally after every communication session. The proposed scheme has been programmed in HLPSL and simulated, and its efficiency verified, using the SPAN/AVISPA tool. While SPAN substantiates the protocol simulation and the attacker simulation, the back-ends of the AVISPA tool verify the safety and security of the proposed authentication protocol. The thesis also evaluates the protocol's security against attacks that have succeeded against protocols proposed by other researchers.
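To make the "agree on a shared session key, then prove to each other that you hold it" idea concrete, here is a deliberately toy, standard-library-only sketch: an insecure, demo-sized Diffie-Hellman exchange followed by HMAC-based mutual key confirmation. It is not the protocol from the thesis and must not be used as real cryptography.

```python
# Deliberately toy sketch (NOT the thesis's protocol, NOT secure): two devices agree
# on a session key via a demo-sized Diffie-Hellman exchange, then each proves
# possession of the key by answering the peer's fresh nonce with an HMAC.
import hashlib, hmac, secrets

P = 0xFFFFFFFB   # tiny demo prime (2**32 - 5); real protocols use standardized large groups
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)        # (private exponent, public value)

a_priv, a_pub = keypair()               # device A, e.g. a smart lock
b_priv, b_pub = keypair()               # device B, e.g. the home hub

# Each side derives its own copy of the session key from the shared DH secret
key_a = hashlib.sha256(str(pow(b_pub, a_priv, P)).encode()).digest()
key_b = hashlib.sha256(str(pow(a_pub, b_priv, P)).encode()).digest()

# Mutual key confirmation: each side answers the other's fresh nonce with an HMAC
nonce_a, nonce_b = secrets.token_bytes(16), secrets.token_bytes(16)
proof_from_b = hmac.new(key_b, nonce_a, hashlib.sha256).digest()   # B answers A's challenge
proof_from_a = hmac.new(key_a, nonce_b, hashlib.sha256).digest()   # A answers B's challenge

# A and B verify the proofs with their own key copies
assert hmac.compare_digest(proof_from_b, hmac.new(key_a, nonce_a, hashlib.sha256).digest())
assert hmac.compare_digest(proof_from_a, hmac.new(key_b, nonce_b, hashlib.sha256).digest())
print("mutual key confirmation succeeded; fresh session key established")
```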
APA, Harvard, Vancouver, ISO, and other styles
