Dissertations / Theses on the topic 'Intelligent imaging'

Consult the top 50 dissertations / theses for your research on the topic 'Intelligent imaging.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Москаленко, Альона Сергіївна, Алена Сергеевна Москаленко, and Alona Serhiivna Moskalenko. "Intelligent decision support system for renal radionuclide imaging." Thesis, Sumy State University, 2016. http://essuir.sumdu.edu.ua/handle/123456789/46806.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Radionuclide imaging of the kidneys holds a special place in nuclear medicine. It registers functional changes far earlier than structural and anatomical changes appear, which makes it indispensable for early diagnosis. The reliability with which renal scintigraphy studies are interpreted depends on the professional qualification and practical experience of the diagnosing physician.
2

Fukuda, Toshio, Naoyuki Kubota, Baiqing Sun, Fei Chen, Tomoya Fukukawa, and Hironobu Sasaki. "ACTIVE SENSING FOR INTELLIGENT ROBOT VISION WITH RANGE IMAGING SENSOR." IEEE, 2010. http://hdl.handle.net/2237/14442.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Amza, Catalin Gheorghe. "Intelligent X-ray imaging inspection system for the food industry." Thesis, De Montfort University, 2002. http://hdl.handle.net/2086/10731.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The inspection process of a product is an important stage of a modern production factory. This research presents a generic X-ray imaging inspection system with application to the detection of foreign bodies in a meat product for the food industry. The most important modules in the system are the image processing module and the high-level detection system. This research discusses the use of neural networks for image processing and fuzzy logic for the detection of potential foreign bodies found in X-ray images of chicken breast meat after the de-boning process. The meat product is passed under a solid-state X-ray sensor that acquires a dual-band two-dimensional image of the meat (a low- and a high-energy image). A series of image processing operations are applied to the acquired image (pre-processing, noise removal, contrast enhancement). The most important step of the image processing is the segmentation of the image into meaningful objects. The segmentation task is a difficult one due to the lack of clarity of the acquired X-ray images, and the resulting segmented image represents not only correctly identified foreign bodies but also areas caused by overlapping muscle regions in the meat, which appear very similar to foreign bodies in the resulting X-ray image. A Hopfield neural network architecture was proposed for the segmentation of an X-ray dual-band image. A number of image processing measurements were made on each object (geometrical and grey-level based statistical features) and these features were used as the input into a fuzzy-logic based high-level detection system whose function was to differentiate between bone and non-bone segmented regions. The results show that the system's performance is considerably improved over non-fuzzy or crisp methods. Possible noise affecting the system is also investigated. The proposed system proved to be robust and flexible while achieving a high level of performance. Furthermore, it is possible to use the same approach when analysing images from other application areas, from the automotive industry to medicine.
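To make the abstract's region-classification idea concrete, here is a minimal sketch of fuzzy scoring of segmented regions. The feature names, membership breakpoints and the product t-norm below are illustrative assumptions, not the rule base developed in the thesis.

```python
# A minimal sketch (not the thesis's actual rule base) of fuzzy scoring of
# segmented regions: each region receives membership values for "bone-like"
# behaviour from two illustrative features, combined into a single score.
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def bone_score(mean_attenuation, compactness):
    """Combine two per-region features into a 'bone-likeness' score.

    mean_attenuation: average grey level of the region in the X-ray image.
    compactness: 4*pi*area / perimeter**2 (1.0 for a perfect disc).
    """
    # High-attenuation regions are more bone-like (hypothetical breakpoints).
    mu_dense = trapezoid(mean_attenuation, 0.55, 0.70, 1.00, 1.01)
    # Bone fragments tend to be compact rather than elongated muscle folds.
    mu_compact = trapezoid(compactness, 0.30, 0.55, 1.00, 1.01)
    # Conjunction via the product t-norm; min() would be another common choice.
    return mu_dense * mu_compact

if __name__ == "__main__":
    regions = [
        {"mean_attenuation": 0.82, "compactness": 0.74},  # likely foreign body
        {"mean_attenuation": 0.48, "compactness": 0.21},  # likely muscle overlap
    ]
    for r in regions:
        print(r, "->", round(bone_score(**r), 3))
```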
4

Dong, Leng. "Intelligent computing applications to assist perceptual training in medical imaging." Thesis, Loughborough University, 2016. https://dspace.lboro.ac.uk/2134/22333.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The research presented in this thesis represents a body of work which addresses issues in medical imaging, primarily as it applies to breast cancer screening and laparoscopic surgery. The concern here is how computer-based methods can aid medical practitioners in these tasks. Thus, research is presented which develops both new techniques for analysing radiologists' performance data and new approaches for examining surgeons' visual behaviour when they are undertaking laparoscopic training. Initially, a new chest X-ray self-assessment application is described which has been developed to assess and improve radiologists' performance in detecting lung cancer. Then, in breast cancer screening, a method of identifying potential poor-performance outliers at an early stage in a national self-assessment scheme is demonstrated. Additionally, a method is presented to determine whether a radiologist, in using this scheme, has correctly localised and identified an abnormality or made an error. One issue in appropriately measuring radiological performance in breast screening is that both the size of the clinical monitors used and the difficulty of linking the medical image to the observer's line of sight hinder suitable eye tracking. Consequently, a new method is presented which links these two items. Laparoscopic surgeons have similar issues to radiologists in interpreting a medical display, but with the added complication of hand-eye co-ordination. Work is presented which examines whether visual-search feedback on surgeons' operations can be a useful training aid.
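As a rough illustration of what early flagging of poor-performance outliers can look like, the sketch below compares each reader's detection sensitivity against a lower binomial control limit around the group mean. The statistic, the z threshold and the reader data are assumptions for illustration only, not the scheme developed in the thesis.

```python
# A minimal sketch (an assumption, not the thesis's actual statistic) of early
# outlier flagging: compare each reader's cancer-detection sensitivity against
# a lower control limit from a normal approximation to the binomial.
import math

def lower_control_limit(p_bar, n_cases, z=2.0):
    """Lower limit of a normal approximation to the binomial proportion."""
    return p_bar - z * math.sqrt(p_bar * (1.0 - p_bar) / n_cases)

def flag_outliers(readers, z=2.0):
    """readers: dict name -> (hits, n_cases). Returns names below the limit."""
    p_bar = sum(h for h, _ in readers.values()) / sum(n for _, n in readers.values())
    flagged = []
    for name, (hits, n) in readers.items():
        if hits / n < lower_control_limit(p_bar, n, z):
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    readers = {"reader_A": (92, 100), "reader_B": (88, 100), "reader_C": (70, 100)}
    print(flag_outliers(readers))  # expected: ['reader_C']
```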
5

Sasaki, Hironobu, Toshio Fukuda, Masashi Satomi, and Naoyuki Kubota. "Growing neural gas for intelligent robot vision with range imaging camera." IEEE, 2009. http://hdl.handle.net/2237/13913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Scott-Jackson, William. "Marker-less respiratory gating for PET imaging with intelligent gate optimisation." Thesis, University of Surrey, 2018. http://epubs.surrey.ac.uk/849418/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
PET image degradation caused by patient respiratory motion is a well-established problem in clinical oncology, and strategies exist to study and correct it. Some attempt to minimise or arrest patient motion through restraining hardware; their effectiveness is subject to patient comfort and compliance. Another practice is to gate PET data based on signals acquired from an external device. This thesis presents several contributions to the field of respiratory motion correction research in PET imaging. First and foremost, it presents a framework which allows a researcher to process list-mode data from a Siemens Biograph mCT scanner and reconstruct sinograms from it in the open-source image reconstruction package STIR. Secondly, it demonstrates the viability of a depth camera for respiratory monitoring and gating in a clinical environment, showing that it is an effective device for capturing anterior surface motion and that it can be used to perform respiratory gating. The third contribution is the design, implementation and validation of a novel respiring phantom. It has individually programmable degrees of freedom and was able to reproduce realistic respiratory motion derived from real volunteers. The final contribution is a new gating algorithm which optimises the number and width of gates based on respiratory motion data and the distribution of radioactive counts. This new gating algorithm iterates on amplitude-based gating, where gates are positioned based on respiratory pose at a given instant. The key improvement is that it considers the distribution of counts as a consequence of the distribution of motion in a typical PET study. The results show that different studies can be optimised with a unique number of gates based on the maximum extent of motion present, and that the algorithm can take into account shifts in baseline position due to patient perturbation.
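To illustrate how a count distribution can drive gate widths in amplitude-based gating, here is a minimal sketch in which gate boundaries are placed at quantiles of the respiratory amplitude so that each gate holds a similar number of counts. This count-balanced scheme is only an illustration of the general idea; it is not the specific optimisation validated in the thesis.

```python
# A minimal sketch of count-balanced amplitude gating: gate edges are set at
# quantiles of the respiratory-amplitude samples (one sample per recorded
# count), so every gate holds roughly the same number of counts.
import numpy as np

def count_balanced_gates(amplitude_per_count, n_gates):
    """Return gate edges so each amplitude gate contains ~equal counts."""
    quantiles = np.linspace(0.0, 1.0, n_gates + 1)
    return np.quantile(amplitude_per_count, quantiles)

def assign_gate(amplitude_per_count, edges):
    """Index of the gate each count falls into (0 .. n_gates-1)."""
    idx = np.searchsorted(edges, amplitude_per_count, side="right") - 1
    return np.clip(idx, 0, len(edges) - 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic breathing trace sampled at the times of detected counts:
    # a quiet exhalation plateau plus occasional deep inhalations.
    amp = np.concatenate([rng.normal(2.0, 0.5, 8000), rng.normal(10.0, 2.0, 2000)])
    edges = count_balanced_gates(amp, n_gates=4)
    gates = assign_gate(amp, edges)
    print("gate edges (mm):", np.round(edges, 2))
    print("counts per gate:", np.bincount(gates, minlength=4))
```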
7

Fukuda, Toshio, Baiqing Sun, Fei Chen, Tomoya Fukukawa, and Hironobu Sasaki. "Active Sensing and Information Structuring for Intelligent Robot Vision with Range Imaging Sensor." IEEE, 2010. http://hdl.handle.net/2237/14441.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sharif, Mhd Saeed. "An artificial intelligent system for oncological volumetric medical PET classification." Thesis, Brunel University, 2013. http://bura.brunel.ac.uk/handle/2438/13095.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Positron emission tomography (PET) imaging is an emerging medical imaging modality. Due to its high sensitivity and ability to model physiological function, it is effective in identifying active regions that may be associated with different types of tumour. Increasing numbers of patient scans have led to an urgent need for new, efficient data analysis systems to aid clinicians in the diagnosis of disease, save a considerable amount of processing time, and automatically detect small lesions. In this research, an automated intelligent system for oncological PET volume analysis has been developed. An experimental NEMA (National Electrical Manufacturers Association) IEC (International Electrotechnical Commission) body phantom data set, a Zubal anthropomorphic phantom data set with simulated tumours, a clinical data set from a patient with histologically proven non-small cell lung cancer, and clinical data sets from seven patients with laryngeal squamous cell carcinoma have been utilised in this research. The initial stage of the developed system involves different thresholding approaches and transforms the processed volumes into the wavelet domain at different levels of decomposition by deploying the Haar wavelet transform. A K-means approach is also deployed to classify the processed volume into a distinct number of classes, and the optimal number of classes for each processed data set is obtained automatically based on the Bayesian information criterion. The second stage of the system involves artificial intelligence approaches including a feedforward neural network, an adaptive neuro-fuzzy inference system, a self-organising map, and fuzzy C-means. The best neural network design for PET applications has been thoroughly investigated. All the proposed classifiers have been evaluated and tested on the experimental, simulated and clinical data sets. The final stage of the developed system includes the development of a new, optimised committee machine for PET application and tumour classification. Objective and subjective evaluations have been carried out for all the system outputs; they show promising results for classifying patient lesions. The results of the new approach have been compared with all of the results obtained from the investigated classifiers and the developed committee machines, and superior results have been achieved using the new approach. An accuracy of 99.95% is achieved for the clinical data set of the patient with a histologically proven lung tumour, and an average accuracy of 98.11% is achieved for the clinical data set of the seven patients with laryngeal tumours.
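As a rough illustration of choosing the number of tissue classes with the Bayesian information criterion, the sketch below uses scikit-learn's GaussianMixture (whose bic() method implements the criterion directly) as a stand-in for K-means; the data, candidate class counts and spherical-covariance assumption are placeholders, not the thesis's implementation.

```python
# A rough illustration (not the thesis's exact implementation) of selecting the
# number of classes by minimising BIC over candidate class counts.
import numpy as np
from sklearn.mixture import GaussianMixture

def best_n_classes(voxel_features, k_range=range(2, 7), seed=0):
    """Return (best_k, labels) minimising BIC over candidate class counts."""
    best_k, best_bic, best_labels = None, np.inf, None
    for k in k_range:
        gmm = GaussianMixture(n_components=k, covariance_type="spherical",
                              random_state=seed).fit(voxel_features)
        bic = gmm.bic(voxel_features)
        if bic < best_bic:
            best_k, best_bic, best_labels = k, bic, gmm.predict(voxel_features)
    return best_k, best_labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic 1-D "uptake" values drawn from three intensity populations.
    uptake = np.concatenate([rng.normal(1.0, 0.2, 500),
                             rng.normal(4.0, 0.5, 300),
                             rng.normal(9.0, 1.0, 50)]).reshape(-1, 1)
    k, labels = best_n_classes(uptake)
    print("selected number of classes:", k)
```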
9

關福延 and Folk-year Kwan. "An intelligent approach to automatic medical model reconstruction from serial planar CT images." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B31243216.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yang, Kun. "An Intelligent Analysis Framework for Clinical-Translational MRI Research." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1592254585828664.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Esbrand, C. "Feature analysis methods for intelligent breast imaging parameter optimisation using CMOS active pixel sensors." Thesis, University College London (University of London), 2010. http://discovery.ucl.ac.uk/19200/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis explores the concept of real-time imaging parameter optimisation in digital mammography using statistical information extracted from the breast during a scan. Transmission imaging and energy-dispersive X-ray diffraction (EDXRD) imaging were the two very different imaging modalities investigated. An attempt was made to determine whether either could be used in a real-time imaging system enabling differentiation between healthy and suspicious tissue regions. This would consequently enable local regions (potentially cancerous regions) within the breast to be imaged using optimised imaging parameters. The performance of possible statistical feature functions that could be used as information extraction tools was investigated using low-exposure breast tissue images. The images were divided into eight regions of interest: seven regions corresponding to suspicious tissue regions marked by a radiologist, while the final region was obtained from a location in the breast consisting solely of healthy tissue. Results obtained from this investigation showed that a minimum of 82% of the suspicious tissue regions were highlighted in all images, whilst the total exposure incident on the sample was reduced in all instances. Three out of the seven (42%) intelligent images resulted in an increased contrast-to-noise ratio (CNR) compared to the conventionally produced transmission images. Three intelligent images were of similar diagnostic quality to their conventional counterparts, whilst one was considerably lower. EDXRD measurements were made on breast tissue samples containing potentially cancerous tissue regions. As the technique is known to be able to distinguish between breast tissue types, diffraction signals were used to produce images corresponding to three suspicious tissue regions, consequently enabling pixel intensities within the images to be analysed. A minimum of approximately 70% of the suspicious tissue regions were highlighted in each image, with at least 50% of each image remaining unsuspicious and hence imaged with a reduced incident exposure.
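For readers unfamiliar with the contrast-to-noise ratio used above, here is a minimal sketch of the common form CNR = |mean(ROI) - mean(background)| / std(background); the exact definition and masks used in the thesis may differ.

```python
# A minimal sketch of a contrast-to-noise ratio (CNR) computation between a
# suspicious region of interest and a healthy-tissue background region.
import numpy as np

def cnr(image, roi_mask, background_mask):
    roi = image[roi_mask]
    bkg = image[background_mask]
    return abs(roi.mean() - bkg.mean()) / bkg.std()

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.normal(100.0, 5.0, (64, 64))        # healthy-tissue background
    img[20:28, 20:28] += 15.0                     # a brighter suspicious patch
    roi = np.zeros_like(img, dtype=bool); roi[20:28, 20:28] = True
    bkg = np.zeros_like(img, dtype=bool); bkg[40:60, 40:60] = True
    print("CNR:", round(cnr(img, roi, bkg), 2))   # roughly 15 / 5 = 3
```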
12

Bhuiyan, Mofazzal H. "An intelligent system's approach to reservoir characterization in Cotton Valley." Morgantown, W. Va. : [West Virginia University Libraries], 2001. http://etd.wvu.edu/templates/showETD.cfm?recnum=2131.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.S.)--West Virginia University, 2001.
Title from document title page. Document formatted into pages; contains viii, 92 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 85-88).
13

Nöjdh, Oscar. "Intelligent boundary extraction for area and volume measurement : Using LiveWire for 2D and 3D contour extraction in medical imaging." Thesis, Linköpings universitet, Programvara och system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-136448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis asks whether a semi-automatic tool can speed up the process of segmenting tumors to find the area of a slice of the tumor or the volume of the entire tumor. A few different 2D semi-automatic tools were considered; the final choice was to implement live-wire. The implemented live-wire was evaluated and improved upon with hands-on testing by developers. Two methods were found for extending live-wire to 3D bodies. The first method was to interpolate the seed points and create new contours using the new seed points. The second method was to let the user segment contours in two orthogonal projections; the intersections between those contours and planes in the third orthogonal projection were then used to create automatic contours in this third projection. Both tools were implemented and evaluated. The evaluation compared the two tools to manual segmentation on two cases posing different difficulties. Time-on-task and accuracy were measured during the evaluation. The evaluation revealed that the semi-automatic tools could indeed save the user time while maintaining acceptable (80%) accuracy. The significance of all results was analyzed using two-tailed t-tests.
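The core of live-wire is a minimum-cost path between user seed points over a cost image that is cheap along strong edges. The sketch below shows only that core using scikit-image utilities; the cost terms and interaction model in the thesis are richer, and the test image is synthetic.

```python
# A minimal sketch of the live-wire idea: between two seed points, find the
# minimum-cost path over a cost image derived from edge strength.
import numpy as np
from skimage.filters import sobel
from skimage.graph import route_through_array

def livewire_segment(image, seed_a, seed_b):
    """Return the pixel path (list of (row, col)) snapping to edges between two seeds."""
    edges = sobel(image.astype(float))
    # Low cost on strong edges, high cost in flat regions.
    cost = 1.0 - edges / (edges.max() + 1e-9)
    path, _ = route_through_array(cost, seed_a, seed_b,
                                  fully_connected=True, geometric=True)
    return path

if __name__ == "__main__":
    img = np.zeros((50, 50))
    img[10:40, 10:40] = 1.0           # a bright square "tumour"
    path = livewire_segment(img, (5, 25), (45, 25))
    print(len(path), "pixels, first/last:", path[0], path[-1])
```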
14

André, Barbara. "Atlas intelligent pour guider le diagnostic en endomicroscopie : une application clinique de la reconnaissance d'images par le contenu." Phd thesis, École Nationale Supérieure des Mines de Paris, 2011. http://pastel.archives-ouvertes.fr/pastel-00640899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Probe-based confocal endomicroscopy (ECM, from the French "Endomicroscopie Confocale par Minisondes") enables dynamic observation of tissue at the cellular level, in vivo and in situ, during an endoscopy. Thanks to this new imaging system, endoscopists are able to perform non-invasive "optical biopsies". Traditional biopsies involve the ex vivo diagnosis of histological images by pathologists. The in vivo diagnosis of ECM images is therefore a real challenge for endoscopists, who generally have only limited expertise in histopathology. ECM images are nevertheless new images that visually resemble histological images. The main goal of this thesis is to assist endoscopists in the in vivo interpretation of ECM image sequences. When establishing a diagnosis, physicians rely on case-based reasoning. To mimic this process, we explore Content-Based Image Retrieval (CBIR) methods for diagnostic support. Our first objective is to develop a system able to automatically extract a number of ECM videos that are visually similar to the query video but that have also been annotated with metadata such as a textual diagnosis. Such a retrieval system should help endoscopists make an informed decision and thereby establish a more accurate ECM diagnosis. To reach this goal, we study the Bag-of-Visual-Words method used in computer vision. An analysis of the properties of ECM data leads us to adjust the standard method. We implement the retrieval of full ECM videos, and not only of isolated ECM images, by representing videos as sets of mosaics. To evaluate the methods proposed in this thesis, two ECM databases were built, one on colonic polyps and the other on Barrett's oesophagus. Because of the initial absence of a ground truth for CBIR applied to ECM, we first carried out indirect evaluations of the retrieval methods by means of nearest-neighbour classification. The generation of a sparse ground truth, containing the similarities between videos perceived by ECM experts, then allowed us to evaluate the retrieval methods directly, by measuring the correlation between the distance induced by the retrieval and the perceived similarity. Both the indirect and the direct evaluations demonstrate that, on the two ECM databases, our retrieval method outperforms several state-of-the-art CBIR methods. In terms of binary classification, our retrieval method is comparable to the offline diagnosis established by expert endoscopists on the colonic polyp database. Because diagnosing ECM data is an everyday practice, our objective is not only to support an isolated diagnosis but also to accompany endoscopists in their progress. From the retrieval results, we estimate the difficulty of interpreting ECM videos. We show that the estimated difficulty correlates with the diagnostic difficulty experienced by several endoscopists. This estimator could thus be used in a training simulator with several difficulty levels, which should help endoscopists shorten their learning curve.
The standard distance based on visual words gives adequate results for ECM retrieval. However, little clinical knowledge is embedded in this distance. By incorporating prior information about the similarities perceived by ECM experts, we can learn a similarity distance that proves to be more accurate than the standard distance. With the aim of learning the semantics of ECM data, we also take advantage of several semantic concepts used by endoscopists to describe ECM videos. Visual-word-based semantic signatures are then built, which are able to extract, from low-level visual features, high-level clinical knowledge expressed in the endoscopist's own language.
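To illustrate the Bag-of-Visual-Words retrieval pipeline referred to above, here is a minimal sketch: local descriptors are quantised against a learned codebook, each image becomes a normalised word histogram, and retrieval returns the nearest signatures. The crude patch descriptors, codebook size and L2 distance are placeholders; the thesis's descriptors, mosaicing and learned distance are richer.

```python
# A minimal sketch of Bag-of-Visual-Words retrieval with placeholder descriptors.
import numpy as np
from sklearn.cluster import KMeans

def patch_descriptors(image, patch=8):
    """Very crude dense descriptors: flattened, mean-centred patches."""
    h, w = image.shape
    descs = [image[r:r + patch, c:c + patch].ravel()
             for r in range(0, h - patch + 1, patch)
             for c in range(0, w - patch + 1, patch)]
    descs = np.asarray(descs, dtype=float)
    return descs - descs.mean(axis=1, keepdims=True)

def bovw_signature(descs, codebook):
    words = codebook.predict(descs)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    images = [rng.random((64, 64)) for _ in range(5)]
    all_descs = np.vstack([patch_descriptors(im) for im in images])
    codebook = KMeans(n_clusters=16, n_init=4, random_state=0).fit(all_descs)
    signatures = np.array([bovw_signature(patch_descriptors(im), codebook) for im in images])
    query = signatures[0]
    dists = np.linalg.norm(signatures - query, axis=1)   # plain L2 between histograms
    print("retrieval order:", np.argsort(dists))          # index 0 (the query) comes first
```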
15

Curtis, Phillip. "Data Driven Selective Sensing for 3D Image Acquisition." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/30224.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
It is well established that acquiring large amounts of range data with vision sensors can quickly lead to important data management challenges where processing capabilities become saturated and pre-empt full usage of the information available for autonomous systems to make educated decisions. While sub-sampling offers a naïve solution for reducing dataset dimension after acquisition, it does not capitalize on the knowledge available in already acquired data to selectively and dynamically drive the acquisition process over the most significant regions in a scene, the latter being generally characterized by variations in depth and surface shape in the context of 3D imaging. This thesis discusses the development of two formal improvement measures, the first based upon surface meshes and Ordinary Kriging that focuses on improving scene accuracy, and the second based upon probabilistic occupancy grids that focuses on improving scene coverage. Furthermore, three selection processes to automatically choose which locations within the field of view of a range sensor to acquire next are proposed based upon the two formal improvement measures. The first two selection processes each use only one of the proposed improvement measures. The third selection process combines both improvement measures in order to counterbalance the parameters of the accuracy of knowledge about the scene and the coverage of the scene. The proposed algorithms mainly target applications using random access range sensors, defined as sensors that can acquire depth measurements at a specified location within their field of view. Additionally, the algorithms are applicable to the case of estimating the improvement and point selection from within a single point of view, with the purpose of guiding the random access sensor to locations it can acquire. However, the framework is developed to be independent of the range sensing technology used, and is validated with range data of several scenes acquired from many different sensors employing various sensing technologies and configurations. Furthermore, the experimental results of the proposed selection processes are compared against those produced by a random sampling process, as well as a neural gas selective sensing algorithm.
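To make the coverage-driven side of selective sensing concrete, here is a minimal sketch in which the next location to probe is the occupancy-grid cell with the most uncertain (highest-entropy) occupancy probability. The thesis combines this kind of coverage measure with a Kriging-based accuracy measure; only the selection step, on a toy grid, is shown here.

```python
# A minimal sketch of coverage-driven point selection on a probabilistic
# occupancy grid: probe next where the occupancy probability is most uncertain.
import numpy as np

def cell_entropy(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def next_probe(occupancy_grid):
    """Return the (row, col) of the most uncertain cell."""
    ent = cell_entropy(occupancy_grid)
    return np.unravel_index(np.argmax(ent), ent.shape)

if __name__ == "__main__":
    grid = np.full((20, 20), 0.5)        # unexplored: maximally uncertain
    grid[:10, :] = 0.05                  # already observed as free space
    grid[15:, 15:] = 0.95                # already observed as surface
    print("probe next at:", next_probe(grid))
```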
16

Pincet, Lancelot. "Dynamic excitation systems for quantitative and super-resolved fluorescence microscopy." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
La microscopie de localisation de molécule unique (SMLM) est une technique optique super-résolue permettant d'observer des échantillons biologiques marqués par des fluorophores avec une résolution bien inférieure à la limite de diffraction. La qualité de cette imagerie dépend fortement de la capacité à observer les molécules de manière individuelle, ce qui nécessite un contrôle précis de la photophysique des fluorophores pour qu'ils émettent de manière décalée dans l'espace et le temps. Jusqu'à présent, les méthodes d'excitation dynamique visaient à produire une illumination uniforme sur des champs larges (200 um x 200 um). Cependant, ces types d'illumination rencontrent des difficultés pour imager les échantillons biologiques denses, comme les neurones, où la diversité de la densité de fluorophores entrave la génération d'un régime de molécule unique de qualité uniforme sur toute la zone observée. Pour résoudre ce problème, je propose une nouvelle approche qui ajuste dynamiquement l'illumination en fonction de la densité de l'échantillon. Cette méthode combine un nouveau système optique d'excitation tri-dynamique avec une boucle de rétroaction basée sur l'analyse de densité, bénéficiant d'une étude approfondie de la photophysique des fluorophores. Le système d'imagerie intelligent, où le motif d'excitation varie dans le temps, intègre un système de balayage 2D, un système de zoom variable et un laser. Ceci permet de générer une variété de cartes d'éclairement dynamiques pour s'adapter à l'échantillon observé et à la densité de localisations détectées localement. Cette nouvelle approche a été validée sur différents échantillons biologiques. De plus, le système d’excitation dynamique a également permis d’explorer des modalités d'imagerie d'échantillons vivants, telles que le MSIM ou le FRAP
Single Molecule Localization Microscopy (SMLM) is a super-resolution optical technique enabling the observation of biological samples labeled with fluorescent dyes at resolutions well below the diffraction limit. The quality of this imaging heavily relies on the ability to observe molecules individually, requiring precise control of fluorescent dye photophysics so that they emit with high sparsity in both space and time. Until now, dynamic excitation methods have aimed to produce uniform illumination over large fields (200 um x 200 um). However, these types of illumination encounter difficulties in imaging dense biological samples, such as neurons, where the diversity in dye density prevents the generation of a uniform single-molecule regime across the entire observed area. To address this issue, I propose a new approach that dynamically adjusts illumination based on sample density. This method combines a novel tri-dynamic optical excitation system with a feedback loop based on density analysis, benefiting from an in-depth study of fluorescent dye photophysics. The intelligent imaging system, in which the excitation pattern varies over time, integrates a 2D scanning system, a variable zoom system, and a laser. This allows for the generation of a variety of dynamically changing illumination patterns to adapt to the observed sample and the density of locally detected localizations. This new approach has been validated on various biological samples. Additionally, the dynamic excitation system has also been used to explore live-sample imaging techniques such as MSIM and FRAP.
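To illustrate the density-feedback idea in the abstract, here is a minimal sketch in which detected localisations are binned over the field of view and the relative excitation delivered to each tile is reduced where the localisation density is too high for a clean single-molecule regime. The control law, tile size and target density below are assumptions, not the calibration used with the actual tri-dynamic excitation system.

```python
# A minimal sketch of density-adaptive excitation: reduce the relative power
# delivered to tiles of the field of view where localisations are too dense.
import numpy as np

def excitation_map(localisations, fov_um=200.0, tiles=20, target_per_tile=30.0):
    """localisations: (N, 2) array of x, y positions in micrometres."""
    counts, _, _ = np.histogram2d(localisations[:, 0], localisations[:, 1],
                                  bins=tiles, range=[[0, fov_um], [0, fov_um]])
    # Relative excitation per tile: full power where sparse, reduced where dense.
    return np.clip(target_per_tile / np.maximum(counts, 1.0), 0.1, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    sparse = rng.uniform(0, 200, size=(2000, 2))
    dense_cluster = rng.normal(50, 5, size=(3000, 2))      # e.g. a dense neurite
    excitation = excitation_map(np.vstack([sparse, dense_cluster]))
    print("min/max relative excitation:", excitation.min(), excitation.max())
```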
17

Rajagopal, A. "IMAGINE : An Intelligent Electonic Marketplace." Thesis, Indian Institute of Science, 2001. https://etd.iisc.ac.in/handle/2005/254.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In recent times, the Internet revolution has spawned numerous innovative enterprises: virtual companies and electronic markets. Electronic markets (or digital markets) are scalable web-based platforms for buyers, sellers, market makers, and brokers to carry out business transactions. Over the last two years, there has been a proliferation of such E-markets on the web. In this thesis, we develop an E-marketplace, which we call IMAGINE (Intelligent Market with AGents and Integrative NEgotiations), that improves upon the existing state of the art in several non-trivial ways. IMAGINE combines the best features of existing E-marketplaces with several innovations. The thesis describes the conceptualization, analysis, and design of IMAGINE and provides details of the implementation of a prototype of IMAGINE at the Electronic Enterprises Laboratory, Department of Computer Science and Automation, Indian Institute of Science. IMAGINE is a collaborative, co-operative, intelligent E-market that maximizes the combined utility value of all the traders involved. IMAGINE has several distinctive features: • It uses an innovative business model, which is intelligent in the sense of perceiving the nature of the market and market forces and using this market intelligence in matching buyers with sellers and in determining prices. • It uses integrative negotiations, which make it attractive for buyers and sellers to reveal their true business interests and valuations. • It has a sound and robust software architecture for a web-based implementation using best practices in object technology. • A prototype of IMAGINE has been implemented using leading-edge Internet technologies such as multi-agent technology, Jini, and JavaSpaces.
18

Rajagopal, A. "IMAGINE : An Intelligent Electonic Marketplace." Thesis, Indian Institute of Science, 2001. http://hdl.handle.net/2005/254.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In recent times, the Internet revolution has spawned numerous innovative enterprises: virtual companies and electronic markets. Electronic markets (or digital markets) are scalable web-based platforms for buyers, sellers, market makers, and brokers to carry out business transactions. Over the last two years, there has been a proliferation of such E-markets on the web. In this thesis, we develop an E-marketplace, which we call IMAGINE (Intelligent Market with AGents and Integrative NEgotiations), that improves upon the existing state of the art in several non-trivial ways. IMAGINE combines the best features of existing E-marketplaces with several innovations. The thesis describes the conceptualization, analysis, and design of IMAGINE and provides details of the implementation of a prototype of IMAGINE at the Electronic Enterprises Laboratory, Department of Computer Science and Automation, Indian Institute of Science. IMAGINE is a collaborative, co-operative, intelligent E-market that maximizes the combined utility value of all the traders involved. IMAGINE has several distinctive features: • It uses an innovative business model, which is intelligent in the sense of perceiving the nature of the market and market forces and using this market intelligence in matching buyers with sellers and in determining prices. • It uses integrative negotiations, which make it attractive for buyers and sellers to reveal their true business interests and valuations. • It has a sound and robust software architecture for a web-based implementation using best practices in object technology. • A prototype of IMAGINE has been implemented using leading-edge Internet technologies such as multi-agent technology, Jini, and JavaSpaces.
19

Wong, Alison. "Artificial Intelligence for Astronomical Imaging." Thesis, The University of Sydney, 2023. https://hdl.handle.net/2123/30068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Astronomy is the ultimate observational science. Objects outside our solar system are beyond our reach, so we are limited to acquiring knowledge at a distance. This motivates the need to advance astrophysical imaging technologies, particularly for the field of high contrast imaging, where some of the most highly prized science goals require high-fidelity imagery of exoplanets and of the circumstellar structures associated with stellar and planetary birth. Such technical capabilities address questions of both the birth and death of stars, which in turn inform the grand recycling of matter in the chemical evolution of the galaxy and the universe itself. Ground-based astronomical observation primarily relies on extreme adaptive optics systems in order to extract signals arising from faint structures within the immediate vicinity of luminous host stars. These systems are distinguished from standard adaptive optics systems by performing faster and more precise wavefront correction, which leads to better imaging performance. The overall theme of this thesis therefore ties together advanced topics in artificial intelligence with techniques and technologies required for the field of high contrast imaging. This is accomplished with demonstrations of deep learning methods used to improve the performance of extreme adaptive optics systems, deployed and benchmarked with data obtained at the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) system operating at the observatory on the summit of Mauna Kea in Hawaii. Solutions encompass both hardware and software, with optimal recovery of scientific outcomes delivered by model fitting of high contrast imaging data with modern machine learning techniques. This broad-ranging study, spanning the acquisition, analysis and modelling of data, aims to yield more accurate and higher-fidelity observables, which in turn deliver improved interpretation and scientific return.
20

Allen, Axel. "Imagining intelligent artefacts : Myths and a digital sublime regarding artificial intelligence in Swedish newspaper Svenska Dagbladet." Thesis, Stockholms universitet, JMK, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-172782.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Artificial intelligence (AI) has over the past years become a hot topic for discussion in Sweden, as the technology presents unique and exciting possibilities and challenges for the country and its citizens. Coverage of AI in Swedish news media presents imagined scenarios with both current and future AI that contribute to myths about how the technology is able to radically transform life, and that spring out of a central digital sublime. Through a mixed-method study of 55 newspaper items about AI from Svenska Dagbladet from 2017 to 2018, the thesis studies which AI myths are evident in coverage and how such discourses spring out of a digital sublime regarding AI. A total of four AI myths are found in news media coverage, revolving around existing and future intelligent computers, robots, machines and perceptions of them. The myths, and the hopes and concerns bound up with them, point to a digital sublime that regards AI as a force of intelligent digitization promising to empower a sublime citizen, economy, and welfare state. The emotional values attached to sublime AI are understood to reflect a general Swedish techno-optimism, as digital artefacts have allowed Sweden to become prosperous.
21

Carrass-Milling, Anders, and Camilla Johansson. "Artificiell intelligens inom mammografiscreening : En litteraturstudie." Thesis, Jönköping University, Hälsohögskolan, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-49092.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Den senaste utvecklingen av artificiell intelligens (AI) och djupinlärning (DL) har gjort bild- och funktionsmedicin till en högst trolig kandidat att tidigt anta tekniken. AI inom mammografiscreening syftar till hälsofrämjande effekter genom en förhoppning om säkrare bilddiagnostik. Röntgensjuksköterskans (RSS) arbete präglas av korrekt utförd bildtagning och ett aktivt aktualiserande av den egna yrkesrollen gällande såväl tekniska framsteg som förnyade arbetssätt. Litteraturstudien har upprättats i syfte att belysa potentiella effekter av AI på bilddiagnostik inom mammografiscreening. Genom manifest innehållsanalys av resultat erhållna ur ämnesrelevanta vetenskapliga studier publicerade i databaserna Cinahl och Medline under år 2019–2020 identifierades och beskrevs kategorier sammanställda av subkategorier med liknande innehåll. Effekter inom granskningsprocessen och diagnostisk säkerhet skildrar flera perspektiv gällande AI:s effekter på bilddiagnostik. Utöver en stundtals ökad förmåga till cancerdetektion vid AI-assistans har artificiell bildgranskning även visat sig kunna reducera arbetsbördan för radiologer i form av friskrivning av mammogram med låg sannolikhet för bröstcancer. Vid tillämpning av AI ses lovande effekter inom framförallt klassificering av bröstvävnad samt vid reducering av falska positiva svar. Forskningen förbehålls dock med kvarstående etiska dilemman och avsaknad av ett juridiskt ramverk, vilket lämnar utrymme för vidare studier.
Recent developments in artificial intelligence (AI) and deep learning (DL) have made diagnostic imaging a prime candidate to adopt the technology. AI in mammography screening aims at promoting health with hopes of higher diagnostic accuracy. The radiographer's work is characterized by properly performed imaging and by actively updating the profession regarding technical developments and renewed working methods. The aim of this systematic review was to illustrate feasible effects of AI on diagnostic imaging within mammography screening. Through manifest content analysis of results obtained from subject-related scientific studies published 2019–2020 in the databases Cinahl and Medline, the authors identified and described categories compiled from subcategories with similar contents. Effects within the image interpretation process and diagnostic accuracy describe several perspectives regarding the outputs of AI on diagnostic imaging. AI systems have proven to be useful both in assisting with image interpretation and in reducing the workload for radiologists by ruling out mammograms with a low probability of breast cancer. The most promising effects are seen in the classification of breast tissue and the reduction of false positives, but research is challenged by ethical dilemmas and the need for a legal framework, which are areas suggested for future research.
22

Vilhelmsson, Kajsa, and Tilda Sigurdsson. "Tillämpning av Artificiell Intelligens vid diagnostisering av lungemboli : Litteraturstudie." Thesis, Jönköping University, Hälsohögskolan, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-52968.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

COLOMBO, ALESSANDRO. "HIGH PERFORMANCE COMPUTATIONAL INTELLIGENCE FOR COHERENT DIFFRACTION DATA ANALYSIS AND IMAGING." Doctoral thesis, Università degli Studi di Milano, 2018. http://hdl.handle.net/2434/607138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Coherent Diffraction Imaging (CDI) is a lens-less technique that allows imaging of matter at a spatial resolution not limited by lens aberrations. This technique exploits the measured diffraction pattern of a coherent beam scattered by periodic and non–periodic objects to retrieve spatial information. The diffracted intensity, for weak–scattering objects, is proportional to the modulus of the Fourier Transform of the object density distribution. Any phase information, needed to retrieve the sample density, has to be retrieved by means of suitable algorithms. This work presents a new approach based on Computational Intelligence, in particular on Genetic Algorithms, to face the phase retrieval problem. This new approach, called Memetic Phase Retrieval, is described, along with its implementation specifically designed and optimized for High Performance Computing hardware. Tests on both simulated and experimental data are performed, showing a remarkable performance improvement with respect to standard algorithms. Memetic Phase Retrieval proves to be a new powerful tool for the study of matter via CDI. Moreover, it represents a novelty, laying the foundations for a more extensive use of Computational Intelligence in the field of CDI and opening new perspectives in those applications in which reliable phase retrieval is necessary.
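For orientation, the sketch below shows the classic inner projection step that iterative phase-retrieval schemes (including population-based ones such as Memetic Phase Retrieval) build upon: alternately enforce the measured Fourier modulus and a real-space support constraint. This is plain Error Reduction on a synthetic test object, not the memetic algorithm developed in the thesis.

```python
# A minimal sketch of Error Reduction phase retrieval: alternate projections
# between the measured Fourier modulus and a real-space support constraint.
import numpy as np

def error_reduction(measured_modulus, support, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    density = rng.random(measured_modulus.shape) * support
    for _ in range(n_iter):
        F = np.fft.fft2(density)
        # Fourier constraint: keep the phases, impose the measured modulus.
        F = measured_modulus * np.exp(1j * np.angle(F))
        density = np.real(np.fft.ifft2(F))
        # Real-space constraints: support and non-negativity.
        density = np.clip(density, 0, None) * support
    return density

if __name__ == "__main__":
    obj = np.zeros((64, 64)); obj[28:36, 24:40] = 1.0      # simple test object
    support = np.zeros_like(obj); support[20:44, 16:48] = 1.0
    modulus = np.abs(np.fft.fft2(obj))                      # "measured" diffraction
    rec = error_reduction(modulus, support)
    print("mean absolute reconstruction error:", round(float(np.abs(rec - obj).mean()), 4))
```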
24

Rebaud, Louis. "Whole-body / total-body biomarkers in PET imaging." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPAST047.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cette thèse, réalisée en partenariat avec l'Institut Curie et Siemens Healthineers, explore l'utilisation de l'imagerie par tomographie par émission de positrons (TEP) pour le pronostic du cancer, en se concentrant sur les lymphomes non hodgkiniens, en particulier le lymphome folliculaire (FL) et le lymphome diffus à grandes cellules B (DLBCL). Partant de l'hypothèse que les biomarqueurs actuels calculés dans les images TEP sous-utilisent leur richesse en informations, ce travail se concentre sur la recherche de nouveaux biomarqueurs en imagerie TEP corps entier. Une première approche manuelle a permis de valider une caractéristique précédemment identifiée (fragmentation de la tumeur) et d'explorer l'importance pronostique de l'atteinte splénique dans les DLBCL, en constatant que le volume de l'atteinte splénique ne permet pas de stratifier davantage les patients présentant une telle atteinte. Pour dépasser les limites empiriques de la recherche manuelle, une méthode d'identification semi-automatique des caractéristiques a été mise au point. Elle consiste à extraire automatiquement des milliers de biomarqueurs candidats et à les tester à l'aide d'un pipeline de sélection conçu pour trouver des caractéristiques quantifiant de nouvelles informations pronostiques. Les biomarqueurs sélectionnés ont ensuite été analysés et recodés de manière plus simple et plus intuitive. Cette approche a permis d'identifier 22 nouveaux biomarqueurs basés sur l'image, qui reflètent des informations biologiques sur les tumeurs, mais aussi l'état de santé général du patient. Parmi eux, 10 caractéristiques se sont avérées pronostiques à la fois pour les patients atteints de FL que pour ceux souffrant de DLBCL. La thèse aborde également le défi que représente l'utilisation de ces caractéristiques dans la pratique clinique, en proposant le modèle ICARE (Individual Coefficient Approximation for Risk Estimation). Ce modèle d'apprentissage automatique, conçu pour réduire le surapprentissage et améliorer la généralisation, a démontré son efficacité dans le cadre du challenge HECKTOR 2022 visant à prédire le risque de rechute de patients atteints de cancer des voies aérodigestives supérieures à partir de leurs images TEP. Ce modèle s'est également avéré plus résistant au surapprentissage que d'autres méthodes d'apprentissage automatique lors d'une comparaison exhaustive sur un benchmark de 71 jeux de données médicales. Ces développements ont été implémentés dans une extension logicielle d'un prototype développé par Siemens Healthineers
This thesis, in partnership with Institut Curie and Siemens Healthineers, explores the use of Positron Emission Tomography (PET) for cancer prognosis, focusing on non-Hodgkin lymphomas, especially follicular lymphoma (FL) and diffuse large B cell lymphoma (DLBCL). Assuming that current biomarkers computed in PET images overlook significant information, this work focuses on the search for new biomarkers in whole-body PET imaging. An initial manual approach validated a previously identified feature (tumor fragmentation) and explored the prognostic significance of splenic involvement in DLBCL, finding that the volume of splenic involvement does not further stratify patients with such an involvement. To overcome the empirical limitations of the manual search, a semi-automatic feature identification method was developed. It consisted in the automatic extraction of thousands of candidate biomarkers and their subsequent testing by a selection pipeline designed to identify features quantifying new prognostic information. The selected biomarkers were then analysed and re-encoded in simpler and more intuitive ways. Using this approach, 22 new image-based biomarkers were identified, reflecting biological information about the tumours but also the overall health status of the patient. Among them, 10 features were found to be prognostic of both FL and DLBCL patient outcome. The thesis also addresses the challenge of using these features in clinical practice, proposing the Individual Coefficient Approximation for Risk Estimation (ICARE) model. This machine learning model, designed to reduce overfitting and improve generalizability, demonstrated its effectiveness in the HECKTOR 2022 challenge for predicting outcomes from head and neck cancer patients' [18F]-PET/CT scans. This model was also found to overfit less than other machine learning methods in an exhaustive comparison using a benchmark of 71 medical datasets. All these developments were implemented in a software extension of a prototype developed by Siemens Healthineers.
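As a toy illustration of the kind of heavily constrained risk score that resists overfitting, the sketch below rank-normalises each feature, gives it only the sign of its univariate association with the outcome, and averages the signed contributions. This is an assumption-laden simplification for illustration; it does not reproduce the published ICARE formulation, its feature selection or its validation.

```python
# A toy sketch (not the published ICARE model) of a sign-constrained,
# equally weighted risk score built from rank-normalised features.
import numpy as np

def fit_sign_score(X, event_time, event_observed):
    """Return per-feature signs (+1/-1) from a crude univariate association.

    As a rough surrogate for a survival association, each feature is
    correlated with -event_time restricted to observed events.
    """
    signs = np.zeros(X.shape[1])
    risk_proxy = -event_time
    mask = event_observed.astype(bool)
    for j in range(X.shape[1]):
        c = np.corrcoef(X[mask, j], risk_proxy[mask])[0, 1]
        signs[j] = 1.0 if c >= 0 else -1.0
    return signs

def rank_normalise(X):
    order = X.argsort(axis=0).argsort(axis=0)
    return order / (X.shape[0] - 1.0)

def predict_risk(X, signs):
    return (rank_normalise(X) * signs).mean(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    X = rng.normal(size=(200, 10))
    time = np.exp(-X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 200))
    observed = rng.random(200) < 0.7
    signs = fit_sign_score(X, time, observed)
    print("signs:", signs.astype(int))
    print("first risks:", np.round(predict_risk(X, signs)[:5], 3))
```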
25

La, Barbera Giammarco. "Learning anatomical digital twins in pediatric 3D imaging for renal cancer surgery." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT040.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Les cancers rénaux pédiatriques représentent 9% des cancers pédiatriques avec un taux de survie de 9/10 au prix de la perte d'un rein. La chirurgie d'épargne néphronique (NSS, ablation partielle du rein) est possible si le cancer répond à des critères précis (sur le volume et la localisation de la lésion). L'indication de la NSS repose sur l'imagerie préopératoire, en particulier la tomodensitométrie à rayons X (CT). Si l'évaluation de tous les critères sur des images 2D n'est pas toujours facile, les modèles 3D spécifiques au patient offrent une solution prometteuse. La construction de modèles 3D de l'anatomie rénale à partir de la segmentation est développée chez les adultes mais pas chez les enfants. Il existe un besoin de méthodes de traitement d'images dédiées aux patients pédiatriques en raison des spécificités de ces images, comme l'hétérogénéité de la forme et de la taille des structures. De plus, dans les images CT, l'injection d'un produit de contraste est souvent utilisée (ceCT) pour faciliter l'identification de l'interface entre les différents structures mais cela peut conduire à une hétérogénéité dans le contraste de certaines structures anatomiques, même parmi les patients acquis avec la même procédure. Le premier objectif de cette thèse est d'effectuer une segmentation des organes/tumeurs à partir d'images ceCT, à partir de laquelle un modèle 3D sera dérivé. Des approches d'apprentissage par transfert (des données d'adultes aux images d'enfants) sont proposées. La première question consiste à savoir si de telles méthodes sont réalisables, malgré la différence structurelle évidente entre les ensembles de données. Une deuxième question porte sur la possibilité de remplacer les techniques standard d’augmentation des données par des techniques d’homogénéisation des données utilisant des "Spatial Transformer Networks", améliorant ainsi le temps d’apprentissage, la mémoire requise et les performances. La segmentation de certaines structures anatomiques dans des images ceCT peut être difficile à cause de la variabilité de la diffusion du produit de contraste. L'utilisation combinée d'images CT sans contrast (CT) et ceCT atténue cette difficulté, mais au prix d'une exposition doublée aux rayonnements. Le remplacement d'une des acquisitions CT par des modèles génératifs permet de maintenir les performances de segmentation, en limitant les doses de rayons X. Un deuxième objectif de cette thèse est de synthétiser des images ceCT à partir de CT et vice-versa, à partir de bases d'apprentissage d'images non appariées, en utilisant une extension du "Cycle Generative Adversarial Network". Des contraintes anatomiques sont introduites en utilisant le score d'un "Self-Supervised Body Regressor", améliorant la sélection d'images anatomiquement appariées entre les deux domaines et renforçant la cohérence anatomique. Un troisième objectif de ce travail est de compléter le modèle 3D d'un patient atteint d'une tumeur rénale en incluant également les artères, les veines et les uretères. Une étude approfondie et une analyse comparative de la littérature sur la segmentation des structures tubulaires anatomique sont présentées. En outre, nous présentons pour la première fois l'utilisation de la fonction de ''vesselness'' comme fonction de perte pour l'entraînement d'un réseau de segmentation. Nous démontrons que la combinaison de l’information sur les valeurs propres avec les informations structurelles d’autres fonctions de perte permet d’améliorer les performances. 
Enfin, nous présentons un outil développé pour utiliser les méthodes proposées dans un cadre clinique réel ainsi qu'une étude clinique visant à évaluer les avantages de l'utilisation de modèles 3D dans la planification préopératoire. L'objectif à terme de cette recherche est de démontrer, par une évaluation rétrospective d'experts, comment les critères du NSS sont plus susceptibles d'être trouvés dans les images 3D que dans les images 2D. Cette étude est toujours en cours
Pediatric renal cancers account for 9% of pediatric cancers, with a 9/10 survival rate at the expense of the loss of a kidney. Nephron-sparing surgery (NSS, partial removal of the kidney) is possible if the cancer meets specific criteria (regarding the volume, location and extent of the lesion). The indication for NSS relies on preoperative imaging, in particular X-ray Computerized Tomography (CT). While assessing all criteria in 2D images is not always easy or even feasible, 3D patient-specific models offer a promising solution. Building 3D models of the renal tumor anatomy based on segmentation is widely developed in adults but not in children. There is a need for dedicated image processing methods for pediatric patients due to the specificities of the images with respect to adults and to the heterogeneity in pose and size of the structures (subjects ranging from a few days of age to 16 years). Moreover, in CT images, injection of a contrast agent (contrast-enhanced CT, ceCT) is often used to facilitate the identification of the interface between different tissues and structures, but this might lead to heterogeneity in the contrast and brightness of some anatomical structures, even among patients of the same medical database (i.e., the same acquisition procedure). This can complicate subsequent analyses, such as segmentation. The first objective of this thesis is to perform organ/tumor segmentation from abdominal-visceral ceCT images. An individual 3D patient model is then derived. Transfer learning approaches (from adult data to children's images) are proposed to improve state-of-the-art performances. The first question we want to answer is whether such methods are feasible, despite the obvious structural difference between the datasets, thanks to geometric domain adaptation. A second question is whether the standard techniques of data augmentation can be replaced by data homogenization techniques using Spatial Transformer Networks (STN), improving training time, memory requirements and performance. In order to deal with variability in contrast medium diffusion, a second objective is to perform cross-domain CT image translation from ceCT to contrast-free CT (CT) and vice versa, using a Cycle Generative Adversarial Network (CycleGAN). In fact, the combined use of ceCT and CT images can improve segmentation performance on certain anatomical structures in ceCT, but at the cost of a double radiation exposure. To limit the radiation dose, generative models could be used to synthesize one modality instead of acquiring it. We present an extension of CycleGAN to generate such images from unpaired databases. Anatomical constraints are introduced by automatically selecting the region of interest and by using the score of a Self-Supervised Body Regressor, improving the selection of anatomically paired images between the two domains (CT and ceCT) and enforcing anatomical consistency. A third objective of this work is to complete the 3D model of patients affected by renal tumors, also including arteries, veins and the collecting system (i.e. ureters). An extensive study and benchmarking of the literature on anatomical tubular structure segmentation is presented. Modifications to state-of-the-art methods for our specific application are also proposed. Moreover, we present for the first time the use of the so-called vesselness function as a loss function for training a segmentation network.
We demonstrate that combining eigenvalue information with the structural and voxel-wise information of other loss functions results in an improvement in performance. Finally, a tool developed for using the proposed methods in a real clinical setting is shown, as well as a clinical study to further evaluate the benefits of using 3D models in pre-operative planning. The intent of this research is to demonstrate, through a retrospective evaluation by experts, how criteria for NSS are more likely to be found in 3D than in 2D images. This study is still ongoing.
26

COLELLI, GIULIA. "Artificial Intelligence, Mathematical Modeling and Magnetic Resonance Imaging for Precision Medicine in Neurology and Neuroradiology." Doctoral thesis, Università degli studi di Pavia, 2022. https://hdl.handle.net/11571/1468414.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
La tesi affronta la possibilità di utilizzare metodi matematici, tecniche di simulazione, teorie fisiche riadattate e algoritmi di intelligenza artificiale per soddisfare le esigenze cliniche in neuroradiologia e neurologia al fine di descrivere e prevedere i patterns e l’evoluzione temporale di una malattia, nonché di supportare il processo decisionale clinico. La tesi è suddivisa in tre parti. La prima parte riguarda lo sviluppo di un workflow radiomico combinato con algoritmi di Machine Learning al fine di prevedere parametri che favoriscono la descrizione quantitativa dei cambiamenti anatomici e del coinvolgimento muscolare nei disordini neuromuscolari, con particolare attenzione alla distrofia facioscapolo-omerale. Il workflow proposto si basa su sequenze di risonanza magnetica convenzionali disponibili nella maggior parte dei centri neuromuscolari e, dunque, può essere utilizzato come strumento non invasivo per monitorare anche i più piccoli cambiamenti nei disturbi neuromuscolari oltre che per la valutazione della progressione della malattia nel tempo. La seconda parte riguarda l’utilizzo di un modello cinetico per descrivere la crescita tumorale basato sugli strumenti della meccanica statistica per sistemi multi-agente e che tiene in considerazione gli effetti delle incertezze cliniche legate alla variabilità della progressione tumorale nei diversi pazienti. L'azione dei protocolli terapeutici è modellata come controllo che agisce a livello microscopico modificando la natura della distribuzione risultante. Viene mostrato come lo scenario controllato permetta di smorzare le incertezze associate alla variabilità della dinamica tumorale. Inoltre, sono stati introdotti metodi di simulazione numerica basati sulla formulazione stochastic Galerkin del modello cinetico sviluppato. La terza parte si riferisce ad un progetto ancora in corso che tenta di descrivere una porzione di cervello attraverso la teoria quantistica dei campi e di simularne il comportamento attraverso l'implementazione di una rete neurale con una funzione di attivazione costruita ad hoc e che simula la funzione di risposta del modello biologico neuronale. E’ stato ottenuto che, nelle condizioni studiate, l'attività della porzione di cervello può essere descritta fino a O(6), i.e, considerando l’interazione fino a sei campi, come un processo gaussiano. Il framework quantistico definito può essere esteso anche al caso di un processo non gaussiano, ovvero al caso di una teoria di campo quantistico interagente utilizzando l’approccio della teoria wilsoniana di campo efficace.
The thesis addresses the possibility of using mathematical methods, simulation techniques, repurposed physical theories and artificial intelligence algorithms to fulfill clinical needs in neuroradiology and neurology. The aim is to describe and to predict disease patterns and their evolution over time, as well as to support clinical decision-making processes. The thesis is divided into three parts. Part 1 is related to the development of a radiomic workflow combined with machine learning algorithms in order to predict parameters that quantify muscular anatomical involvement in neuromuscular diseases, with a special focus on facioscapulohumeral dystrophy. The proposed workflow relies on conventional Magnetic Resonance Imaging sequences available in most neuromuscular centers and can be used as a non-invasive tool to monitor even fine changes in neuromuscular disorders and to evaluate longitudinal disease progression over time. Part 2 concerns the description of a kinetic model for tumor growth by means of classical tools of statistical mechanics for many-agent systems, also taking into account the effects of clinical uncertainties related to patients' variability in tumor progression. The action of therapeutic protocols is modeled as feedback control at the microscopic level. The controlled scenario allows the damping of uncertainties associated with the variability in tumor dynamics. Suitable numerical methods, based on a stochastic Galerkin formulation of the derived kinetic model, are introduced. Part 3 refers to a still ongoing project that attempts to describe a brain portion through a quantum field theory and to simulate its behavior through the implementation of a neural network with an ad hoc activation function mimicking the response function of the biological neuron model. Under the considered conditions, the activity of the brain portion can be expressed up to O(6), i.e. up to six-field interactions, as a Gaussian process. The defined quantum field framework may also be extended to the case of non-Gaussian behavior, or rather to an interacting quantum field theory in a Wilsonian effective field theory approach.
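To illustrate the shape of the Part 1 workflow (radiomic features feeding a supervised model with cross-validated evaluation), here is a minimal sketch using scikit-learn. The synthetic feature matrix, the fat-fraction-like target and the ridge regressor are placeholders, not the thesis's actual features, targets or models.

```python
# A minimal sketch of a radiomics-to-regression workflow with cross-validation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    n_patients, n_features = 60, 40            # e.g. first-order + texture features
    X = rng.normal(size=(n_patients, n_features))
    # Hypothetical quantitative involvement parameter driven by two features.
    fat_fraction = 0.3 * X[:, 0] - 0.2 * X[:, 3] + rng.normal(0, 0.3, n_patients)

    model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
    r2 = cross_val_score(model, X, fat_fraction, cv=5, scoring="r2")
    print("cross-validated R^2 per fold:", np.round(r2, 2))
```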
27

Wallis, David. "A study of machine learning and deep learning methods and their application to medical imaging." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPAST057.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Nous utilisons d'abord des réseaux neuronaux convolutifs (CNNs) pour automatiser la détection des ganglions lymphatiques médiastinaux dans les images TEP/TDM. Nous construisons un modèle entièrement automatisé pour passer directement des images TEP/TDM à la localisation des ganglions. Les résultats montrent une performance comparable à celle d'un médecin. Dans la seconde partie de la thèse, nous testons la performance, l'interprétabilité et la stabilité des modèles radiomiques et CNN sur trois ensembles de données (IRM cérébrale 2D, TDM pulmonaire 3D, TEP/TDM médiastinale 3D). Nous comparons la façon dont les modèles s'améliorent lorsque davantage de données sont disponibles et nous examinons s'il existe des tendances communes aux différents problèmes. Nous nous demandons si les méthodes actuelles d'interprétation des modèles sont satisfaisantes. Nous étudions également comment une segmentation précise affecte les performances des modèles.
We first use Convolutional Neural Networks (CNNs) to automate mediastinal lymph node detection using FDG-PET/CT scans. We build a fully automated model to go directly from whole-body FDG-PET/CT scans to node localisation. The results show a comparable performance to an experienced physician. In the second half of the thesis we experimentally test the performance, interpretability, and stability of radiomic and CNN models on three datasets (2D brain MRI scans, 3D CT lung scans, 3D FDG-PET/CT mediastinal scans). We compare how the models improve as more data is available and examine whether there are patterns common to the different problems. We question whether current methods for model interpretation are satisfactory. We also investigate how precise segmentation affects the performance of the models.
28

Lindström, Sofia, and Maja Becarevic. "Gadoliniumansamling hos patienter med multipel skleros samt implementering av artificiell intelligens vid magnetresonanstomografi." Thesis, Luleå tekniska universitet, Institutionen för hälsa, lärande och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-82796.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Introduktion: Uppskattningsvis 40% av alla magnetresonanstomografiska (MRT) undersökningar som görs i Europa och USA utförs med gadoliniumbaserade kontrastmedel. Under det senaste decenniet har det i flera studier uppmärksammats att ansamling av gadolinium sker i olika strukturer i hjärnan. Patienter med multipel skleros följs regelbundet upp med MRT undersökningar och MRT med kontrastförstärkning är den vanligaste metoden för att urskilja nytillkomna patologiska förändringar. Utveckling inom teknologi och metoder inom artificiell intelligens har visat att det finns anledning att kartlägga om röntgensjuksköterskans arbete med undersökningar och läkemedel som administreras till patienter kan förändras så att det förebygger ansamling av gadolinium. Syfte: Syftet med denna litteraturöversikt var att kartlägga ansamlingen av gadoliniumkontrastmedel hos patienter med multipel skleros och hur artificiell intelligens kan tillämpas vid MRT för att minska användning av gadoliniumkontrast. Metod: Allmän litteraturöversikt där vetenskapliga artiklar av kvantitativ studiedesign har sökts fram genom databaserna CINAHL och PubMed. Resultat: Både makrocykliska och linjära gadoliniumbaserade kontrastmedel ansamlas i de basala ganglierna. Genom tillämpning av AI och CAD går det att framställa bilder med fullgod bildkvalitet och samtidigt reducera mängden kontrastmedel som administreras till patienten. Slutsats: Det behövs mer forskning om gadoliniumansamling för att nya rutiner och metoder ska kunna implementeras. Ansamling av gadolinium visar att det finns skäl att fortsätta utveckla nya metoder för uppföljning av sjukdomsförloppet hos MS-patienter. När det gäller AI inom medicinsk bilddiagnostik och magnetresonanstomografi finns många utvecklingsmöjligheter som kan bidra till minskning av gadoliniumbaserad kontrast i framtiden. Fortsatt forskning inom deep learning och CAD kan i framtiden utvecklas så att röntgensjuksköterskan får en mer självbestämmande funktion i bildframställning vid MRT, men även ett mer självständigt arbete i hanteringen av farmaka. Dessutom kan denna utveckling bidra till att röntgensjuksköterskans multidiciplinära samverkan med radiologer stärks och bidrar till en positiv utveckling med kortare granskningstider, bättre hantering av patienter, optimerade undersökningar, minskning av undersökningstider och kortare vårdköer.
Introduction: Approximately 40% of all magnetic resonance imaging (MRI) examinations performed in Europe and the United States use gadolinium-based contrast agents. Over the past decade, several studies have shown gadolinium deposition in various structures of the brain. Patients with multiple sclerosis are regularly followed up with MRI examinations, and contrast-enhanced MRI is the most common method for distinguishing new pathological changes. Developments in technology and in artificial intelligence methods have shown that there is reason to map out whether the radiographer's work with examinations and with drugs administered to patients can be changed so that the accumulation of gadolinium is prevented. Aim: The purpose of this literature review was to examine the accumulation of gadolinium contrast agents in patients with multiple sclerosis and how artificial intelligence can be applied in MRI to reduce the use of gadolinium-based contrast agents. Methods: General literature review in which scientific articles of quantitative study design were retrieved through the databases CINAHL and PubMed. Results: Both macrocyclic and linear gadolinium-based contrast agents are retained in the basal ganglia. With artificial intelligence and CAD, it is possible to obtain images of adequate quality while reducing the amount of contrast agent administered to the patient. Conclusions: More research on gadolinium accumulation is needed before new routines and methods can be implemented. The accumulation of gadolinium shows that there is reason to continue developing new methods for monitoring the course of the disease in MS patients. Concerning AI in medical imaging and magnetic resonance imaging, there are many development opportunities that can contribute to reducing the use of gadolinium-based contrast in the future. Continued research in deep learning and CAD may in the future give the radiographer a more self-determining role in image production in MRI, as well as more independent work in the handling of pharmaceuticals. In addition, this development can help strengthen the radiographer's multidisciplinary collaboration with radiologists and contribute to a positive development with shorter reading times, better management of patients, optimized examinations, shorter examination times and shorter care queues.
29

Jabbar, Shaima Ibraheem. "Automated analysis of ultrasound imaging of muscle and tendon in the upper limb using artificial intelligence methods." Thesis, Keele University, 2018. http://eprints.keele.ac.uk/5433/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Accurate estimation of geometric musculoskeletal parameters from medical imaging has a number of applications in healthcare analysis and modelling. In vivo measurement of key morphological parameters of an individual’s upper limb opens up a new era for the construction of subject-specific models of the shoulder and arm. These models could be used to aid diagnosis of musculoskeletal problems, predict the effects of interventions and assist in the design and development of medical devices. However, these parameters are difficult to evaluate in vivo due to the complicated and inaccessible nature of structures such as muscles and tendons. Ultrasound, as a non-invasive and low-cost imaging technique, has been used in the manual evaluation of parameters such as muscle fibre length, cross sectional area and tendon length. However, the evaluation of ultrasound images depends heavily on the expertise of the operator and is time-consuming. Basing parameter estimation on the properties of the image itself and reducing the reliance on the skill of the operator would allow for automation of the process, speeding up parameter estimation and reducing bias in the final outcome. Key barriers to automation are the presence of speckle noise in the images and low image contrast. This hinders the effectiveness of traditional edge detection and segmentation methods necessary for parameter estimation. Therefore, addressing these limitations is considered pivotal to progress in this area. The aims of this thesis were therefore to develop new methods for the automatic evaluation of these geometric parameters of the upper extremity, and to compare these with manual evaluations. This was done by addressing all stages of the image processing pipeline, and introducing new methods based on artificial intelligence. Speckle noise of musculoskeletal ultrasound images was reduced by successfully applying local adaptive median filtering and anisotropic diffusion filtering. Furthermore, low contrast of the ultrasound image and video was enhanced by developing a new method based on local fuzzy contrast enhancement. Both steps contributed to improving the quality of musculoskeletal ultrasound images to improve the effectiveness of edge detection methods. Subsequently, a new edge detection method based on the fuzzy inference system was developed to outline the necessary details of the musculoskeletal ultrasound images after image enhancement. This step allowed automated segmentation to be used to estimate the morphological parameters of muscles and tendons in the upper extremity. Finally, the automatically estimated geometric parameters, including the thickness and pennation angle of triceps muscle and the cross-sectional area and circumference of the flexor pollicis longus tendon were compared with manually taken measurements from the same ultrasound images. The results show successful performance of the novel methods in the sample population for the muscles and tendons chosen. A larger dataset would help to make the developed methods more robust and more widely applicable. Future work should concentrate on using the developed methods of this thesis to evaluate other geometric parameters of the upper and lower extremities such as automatic evaluation of the muscle fascicle length.
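The speckle-reduction step described above can be illustrated with a plain median pre-filter followed by a minimal Perona-Malik anisotropic diffusion; this is a generic sketch of the technique, not the thesis' locally adaptive implementation, and uses periodic borders purely for brevity:

```python
import numpy as np
from scipy.ndimage import median_filter

def reduce_speckle(img, n_iter=20, kappa=30.0, gamma=0.2):
    """Median pre-filter followed by Perona-Malik anisotropic diffusion:
    smooths speckle while preserving edges whose gradient exceeds kappa.
    Periodic borders (np.roll) are used here only to keep the sketch short."""
    u = median_filter(img.astype(float), size=3)
    for _ in range(n_iter):
        dN = np.roll(u, 1, axis=0) - u
        dS = np.roll(u, -1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        c = lambda d: np.exp(-(d / kappa) ** 2)  # conductance: small across strong edges
        u += gamma * (c(dN) * dN + c(dS) * dS + c(dE) * dE + c(dW) * dW)
    return u

noisy = np.random.rand(128, 128) * 255
print(reduce_speckle(noisy).std() < noisy.std())
```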
30

Li, Cui. "Image quality assessment using algorithmic and machine learning techniques." Thesis, Available from the University of Aberdeen Library and Historic Collections Digital Resources. Restricted: no access until June 2, 2014, 2009. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&pid=26521.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph.D.)--Aberdeen University, 2009.
With: An image quality metric based in corner, edge and symmetry maps / Li Cui, Alastair R. Allen. With: An image quality metric based on a colour appearance model / Li Cui and Alastair R. Allen. ACIVS / J. Blanc-Talon et al. eds. 2008 LNCS 5259, 696-707. Includes bibliographical references.
31

Lee, Jong-Ha. "Tactile sensation imaging system and algorithms for tumor detection." Diss., Temple University Libraries, 2011. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/151945.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Electrical Engineering
Ph.D.
Diagnosing early formation of tumors or lumps, particularly those caused by cancer, has been a challenging problem. To help physicians detect tumors more efficiently, various imaging techniques with different imaging modalities, such as computed tomography, ultrasonic imaging, nuclear magnetic resonance imaging, and mammography, have been developed. However, each of these techniques has limitations, including exposure to radiation, excessive costs, and complexity of machinery. Tissue elasticity is an important indicator of tissue health, with increased stiffness pointing to an increased risk of cancer. In addition to increased tissue elasticity, geometric parameters such as the size of a tissue inclusion are also important factors in assessing the tumor. The combined knowledge of tissue elasticity and its geometry would aid in tumor identification. In this research, we present a tactile sensation imaging system (TSIS) and algorithms which can be used for practical medical diagnostic experiments for measuring the stiffness and geometry of a tissue inclusion. The TSIS incorporates an optical waveguide sensing probe unit, a light source unit, a camera unit, and a computer unit. The optical phenomenon of total internal reflection in an optical waveguide is adapted as the tactile sensation imaging principle. The light sources are attached along the edges of the waveguide and illuminate it at a critical angle so that the light is totally reflected within the waveguide. Once the waveguide is deformed by a stiff object, the trapped light no longer satisfies the critical angle condition and diffuses outside the waveguide. The scattered light is captured by a camera. To estimate various target parameters, we develop a tactile data processing algorithm for target elasticity measurement via direct contact. This algorithm is accomplished by adopting a new non-rigid point matching algorithm called "topology preserving relaxation labeling (TPRL)." Using this algorithm, a series of tactile data is registered and strain information is calculated. The stress information is measured through the summation of pixel values of the tactile data. The stress and strain measurements are used to estimate the elasticity of the touched object. This method is validated using commercial soft polymer samples with known Young's moduli. The experimental results show that, using the TSIS and its algorithm, the elasticity of the touched object is estimated within a 5.38% relative estimation error. We also develop a tissue inclusion parameter estimation method via indirect contact for the characterization of tissue inclusions. This method includes developing a forward algorithm and an inversion algorithm. The finite element modeling (FEM) based forward algorithm is designed to comprehensively predict the tactile data based on the parameters of an inclusion in the soft tissue. This algorithm is then used to develop an artificial neural network (ANN) based inversion algorithm for extracting various characteristics of tissue inclusions, such as size, depth, and Young's modulus. The estimation method is then validated using realistic tissue phantoms with stiff inclusions. The experimental results show that the minimum relative estimation errors for the tissue inclusion size, depth, and hardness are 0.75%, 6.25%, and 17.03%, respectively. The work presented in this dissertation is the initial step towards early detection of malignant breast tumors.
Temple University--Theses
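A minimal sketch of the stress-strain idea in the abstract above, assuming hypothetical tactile frames and an arbitrary calibration factor: the total pixel intensity of each frame serves as a stress surrogate, and the elasticity estimate is the slope of the fitted stress-strain line. This is not the dissertation's TPRL-based pipeline, only an illustration of the principle:

```python
import numpy as np

def estimate_elasticity(frames, strains, calibration=1.0):
    """Stress surrogate = calibrated sum of pixel intensities per frame;
    elasticity estimate = least-squares slope of stress vs. strain."""
    stresses = np.array([calibration * f.sum() for f in frames])
    strains = np.asarray(strains, dtype=float)
    slope, _ = np.polyfit(strains, stresses, 1)
    return slope  # relative elasticity estimate (arbitrary units)

# Synthetic frames whose brightness grows with the applied strain.
frames = [np.random.rand(64, 64) * s for s in (0.1, 0.2, 0.3, 0.4)]
print(estimate_elasticity(frames, [0.01, 0.02, 0.03, 0.04]))
```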
32

Manning, David J. "Applications of signal detection theory to the performance of imaging systems, human observers and artificial intelligence in radiography." Thesis, Lancaster University, 1998. http://eprints.lancs.ac.uk/11591/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
An investigation was carried out to evaluate diagnostic performance in medical radiology. A critical review of methods available for the assessment of image quality in terms of physical objective measurements and quantitative observer performance was followed by a series of experiments which applied the techniques of Receiver Operating Characteristics (ROC) to radiographic problems. An appraisal of the performance of six currently available imaging systems designed for chest radiography was performed using expert observers and an anthropomorphic phantom. Results showed a range of abilities to demonstrate pulmonary nodules (ROC areas of 0.866 to 0.961). The ROC outcomes for the imaging systems were shown to correlate well with signal-to-noise ratio (SNR) measurements for the images (0.78, p < 0.05), although comparisons of ROC and threshold detection indices (HT) gave a lower level of agreement (0.6, p < 0.05). The SNR method of image evaluation could probably be used as an alternative to ROC techniques in routine quality assurance procedures when time is short. Observers from a group of undergraduate radiography students were tested by an ROC study into their ability to detect pulmonary lesions in chest images. Their ROC areas (Az) ranged from 0.616 to 0.857 (mean 0.74), compared with an expert mean score of 0.872. The low score for the students was investigated in terms of the cognitive task and their search strategy training. Their Az scores showed no significant correlation with simple psychometric tests. A neural network was tested against radiologists, radiographers and student radiographers in its ability to identify fractures in wrist radiographs. All observers performed to a similar level of ROC Az score, but the artificial intelligence showed higher specificity values. This attribute was used to filter some of the normals from the test population and resulted in changes to the mean Az score for the human observers.
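ROC areas (Az) of the kind quoted above can in principle be computed from observer confidence ratings; a minimal sketch with invented ratings and truth labels (not the study's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical confidence ratings (1-5) from one observer for 12 images,
# half containing a simulated pulmonary nodule (truth = 1).
truth   = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
ratings = np.array([5, 4, 4, 3, 5, 2, 2, 1, 3, 1, 2, 1])
print("ROC area (Az):", roc_auc_score(truth, ratings))
```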
33

Popli, Labhesh. "AN ATTENTION BASED DEEP NEURAL NETWORK FOR VISUAL QUESTION ANSWERING SYSTEM." Cleveland State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=csu1579015180507068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Dadi, Kamalaker. "Machine Learning on Population Imaging for Mental Health." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Les troubles mentaux présentent une grande hétérogénéité entre les individus. Une difficulté fondamentale pour étudier leurs manifestations ou leurs facteurs de risque est que le diagnostic des conditions mentales pathologiques est rarement disponible dans les grandes cohortes de santé publique. Ici, nous cherchons à développer des biomarqueurs, signatures cérébrales de troubles mentaux. Pour cela, nous utilisons l'apprentissage automatique pour prédire les résultats de santé mentale grâce à l'imagerie de population, en se basant sur l’imagerie cérébrale (imagerie par résonance magnétique (IRM)). Compte tenu des évaluations comportementales ou cliniques, l'imagerie de population peut relier les caractéristiques uniques des variations cérébrales à ces mesures autodéclarées non cérébrales basées sur des questionnaires. Ces mesures non cérébrales fournissent une description unique des différences psychologiques de chaque individu qui peuvent être liées à la psychopathologie à l'aide de méthodes statistiques. Cette thèse de doctorat examine le potentiel d'apprentissage de tels résultats basés sur l'imagerie pour analyser la santé mentale. En utilisant des méthodes d'apprentissage automatique, nous effectuons une évaluation, à la fois complète et robuste, des mesures de population pour guider des prévisions de haute qualité des résultats pour la santé. Cette thèse est organisée en trois parties principales: premièrement, nous présentons une étude approfondie des biomarqueurs du connectome, deuxièmement, nous proposons une réduction significative des données qui facilite les études d'imagerie de population à grande échelle, et enfin nous introduisons des mesures indirectes pour la santé mentale. Nous avons d'abord mis en place une étude approfondie des connectomes d'imagerie afin de prédire les phénotypes cliniques. Avec l'augmentation des images cérébrales de haute qualité acquises en l’absence de tâche explicite, il y a une demande croissante d'évaluation des modèles prédictifs existants. Nous avons effectué des comparaisons systématiques reliant ces images aux évaluations cliniques dans de nombreuses cohortes pour évaluer la robustesse des méthodes d'imagerie des populations pour la santé mentale. Nos résultats soulignent la nécessité de fondations solides dans la construction de réseaux cérébraux entre les individus. Ils décrivent des choix méthodologiques clairs. Ensuite, nous contribuons à une nouvelle génération d'atlas fonctionnels du cerveau pour faciliter des prédictions de haute qualité pour la santé mentale. Les atlas fonctionnels du cerveau sont en effet le principal goulot d'étranglement pour la qualité de la prédiction. Ces atlas sont construits en analysant des volumes cérébraux fonctionnels à grande échelle à l'aide d'un algorithme statistique évolutif, afin d'avoir une meilleure base pour la prédiction des résultats. Après les avoir comparés avec des méthodes de pointe, nous montrons leur utilité pour atténuer les problèmes de traitement des données à grande échelle. La dernière contribution principale est d'étudier les mesures de substitution potentielles pour les résultats pour la santé. Nous considérons des comparaisons de modèles à grande échelle utilisant des mesures du cerveau avec des évaluations comportementales dans une cohorte épidémiologique d'imagerie, le UK Biobank. Dans cet ensemble de données complexe, le défi consiste à trouver les covariables appropriées et à les relier à des cibles bien choisies. 
Cela est difficile, car il y a très peu de cibles pathologiques fiables. Après une sélection et une évaluation minutieuses du modèle, nous identifions des mesures indirectes qui sont en corrélation avec des conditions non pathologiques comme l'état de sommeil, la consommation d'alcool et l'activité physique. Ceux-ci peuvent être indirectement utiles pour l'étude épidémiologique de la santé mentale
Mental disorders display a vast heterogeneity across individuals. A fundamental challenge to studying their manifestations or risk factors is that diagnoses of mental pathological conditions are seldom available in large public health cohorts. Here, we seek to develop brain signatures, biomarkers, of mental disorders. For this, we use machine learning to predict mental-health outcomes through population imaging, i.e., with brain imaging (Magnetic Resonance Imaging, MRI). Given behavioral or clinical assessments, population imaging can relate unique features of brain variation to these non-brain, self-reported measures based on questionnaires. These non-brain measurements carry a unique description of each individual's psychological differences which can be linked to psychopathology using statistical methods. This PhD thesis investigates the potential of learning such imaging-based outcomes to analyze mental health. Using machine-learning methods, we conduct an evaluation, both comprehensive and robust, of population measures to guide high-quality predictions of health outcomes. This thesis is organized into three main parts: first, we present an in-depth study of connectome biomarkers; second, we propose a meaningful data reduction which facilitates large-scale population imaging studies; and finally we introduce proxy measures for mental health. We first set up a thorough benchmark for imaging connectomes to predict clinical phenotypes. With the rise of high-quality brain images acquired without tasks, there is an increasing demand for the evaluation of existing predictive models. We performed systematic comparisons relating these images to clinical assessments across many cohorts to evaluate the robustness of population imaging methods for mental health. Our benchmarks emphasize the need for solid foundations in building brain networks across individuals. They outline clear methodological choices. Then, we contribute a new generation of brain functional atlases to facilitate high-quality predictions for mental health. Brain functional atlases are indeed the main bottleneck for prediction. These atlases are built by analyzing large-scale functional brain volumes using a scalable statistical algorithm, to provide better grounding for outcome prediction. After comparing them with state-of-the-art methods, we show their usefulness in mitigating large-scale data handling problems. The last main contribution is to investigate potential surrogate measures for health outcomes. We consider large-scale model comparisons using brain measurements with behavioral assessments in an imaging epidemiological cohort, the United Kingdom (UK) Biobank. In this complex dataset, the challenge lies in finding the appropriate covariates and relating them to well-chosen outcomes. This is challenging, as there are very few available pathological outcomes. After careful model selection and evaluation, we identify proxy measures that display distinct links to socio-demographics and may correlate with non-pathological conditions such as sleep, alcohol consumption and physical activity. These can be indirectly useful for the epidemiological study of mental health.
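A minimal sketch of a connectome-based prediction of the kind benchmarked in the first part, with synthetic parcel time series and a hypothetical behavioural target; this is not the thesis code or its cohorts:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

def connectome_features(timeseries):
    """Upper-triangular Pearson correlations between parcel time series."""
    corr = np.corrcoef(timeseries.T)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

# Hypothetical cohort: 50 subjects, 200 time points, 20 brain parcels,
# and one behavioural score per subject as the prediction target.
rng = np.random.default_rng(1)
ts = rng.normal(size=(50, 200, 20))
behaviour = rng.normal(size=50)

X = np.stack([connectome_features(s) for s in ts])
print(cross_val_score(RidgeCV(), X, behaviour, cv=5, scoring="r2"))
```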
35

Sörman, Paulsson Elsa. "Evaluation of In-Silico Labeling for Live Cell Imaging." Thesis, Umeå universitet, Institutionen för fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-180590.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Today new drugs are tested on cell cultures in wells to minimize time, cost, and animal testing. The cells are studied using microscopy in different ways, and fluorescent probes are used to study finer details than light microscopy can observe. This is an invasive method, so instead of molecular analysis, imaging can be used. In this project, phase-contrast microscopy images of cells together with fluorescent microscopy images were used. We use Machine Learning to predict the fluorescent images from the light microscopy images using a strategy called In-Silico Labeling. A Convolutional Neural Network called U-Net was trained and showed good results on two different datasets. Pixel-wise regression, pixel-wise classification, and image classification with one cell in each image were tested. The image classification was the most difficult part due to difficulties assigning good quality labels to single cells. Pixel-wise regression showed the best result.
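A miniature encoder-decoder in the spirit of U-Net, trained with an L2 loss for pixel-wise regression of a fluorescence channel from a phase-contrast input; the architecture, shapes and data below are a synthetic sketch of the setup described above, not the thesis model:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Miniature encoder-decoder with one skip connection for pixel-wise regression."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.MaxPool2d(2),
                                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1))

    def forward(self, x):
        e = self.enc(x)                      # skip branch at full resolution
        d = self.up(self.down(e))            # downsample, process, upsample
        return self.dec(torch.cat([e, d], dim=1))

phase = torch.rand(4, 1, 64, 64)             # synthetic phase-contrast input
fluo = torch.rand(4, 1, 64, 64)              # synthetic fluorescence target
model, loss_fn = TinyUNet(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = loss_fn(model(phase), fluo)
loss.backward()
opt.step()
print(float(loss))
```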
36

Bussola, Nicole. "AI for Omics and Imaging Models in Precision Medicine and Toxicology." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/348706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis develops an Artificial Intelligence (AI) approach intended for accurate patient stratification and precise diagnostics/prognostics in clinical and preclinical applications. The rapid advance in high-throughput technologies and bioinformatics tools is still far from precisely linking genome-phenotype interactions with the biological mechanisms that underlie pathophysiological conditions. In practice, the incomplete knowledge of individual heterogeneity in complex diseases keeps forcing clinicians to settle for surrogate endpoints and therapies based on a generic one-size-fits-all approach. The working hypothesis is that AI can add new tools to elaborate and integrate the rich information now available from high-throughput omics and bioimaging data into new features or structures, and that such restructured information can be applied through predictive models for the precision medicine paradigm, thus favoring the creation of safer tailored treatments for specific patient subgroups. The computational techniques in this thesis are based on the combination of dimensionality reduction methods with Deep Learning (DL) architectures to learn meaningful transformations between the input and the predictive endpoint space. The rationale is that such transformations can introduce intermediate spaces offering more succinct representations, where data from different sources are summarized. The research goal was attacked at increasing levels of complexity, starting from single input modalities (omics and bioimaging of different types and scales) up to their multimodal integration. The approach also deals with the key challenges for machine learning (ML) on biomedical data, i.e. reproducibility, stability, and interpretability of the models. Along this path, the thesis contribution is thus the development of a set of specialized AI models and a core framework of three tools of general applicability: i. a Data Analysis Plan (DAP) for model selection and evaluation of classifiers on omics and imaging data to avoid selection bias; ii. the histolab Python package, which standardizes the reproducible pre-processing of Whole Slide Images (WSIs), is supported by automated testing, and is easily integrable in DL pipelines for Digital Pathology; iii. unsupervised and dimensionality reduction techniques based on the UMAP and TDA frameworks for patient subtyping. The framework has been successfully applied to public as well as original data in precision oncology and predictive toxicology. In the clinical setting, this thesis has developed: 1. (DAPPER) a deep learning framework for the evaluation of predictive models in Digital Pathology that controls for selection bias through properly designed data partitioning schemes; 2. (RADLER) a unified deep learning framework that combines radiomics features and imaging on PET-CT images for prognostic biomarker development in head and neck squamous cell carcinoma, where the mixed deep learning/radiomics approach is more accurate than using only one feature type; 3. an ML framework for the automated quantification of tumor infiltrating lymphocytes (TILs) in onco-immunology, validated on original pathology Neuroblastoma data of the Bambino Gesù Children's Hospital, with high agreement with trained pathologists; 4. the network-based INF pipeline, which applies machine learning models over the combination of multiple omics layers, also providing compact biomarker signatures, and was validated on three TCGA oncogenomic datasets.
In the preclinical setting the framework has been applied for: 1. Deep and machine learning algorithms to predict DILI status from gene expression (GE) data derived from cancer cell lines on the CMap Drug Safety dataset. 2. (ML4TOX) Deep Learning and Support Vector Machine models to predict potential endocrine disruption of environmental chemicals on the CERAPP dataset. 3. (PathologAI) A deep learning pipeline combining generative and convolutional models for preclinical digital pathology. Developed as an internal project within the FDA/NCTR AIRForce initiative and applied to predict necrosis on images from the TG-GATEs project, PathologAI aims to improve accuracy and reduce labor in the identification of lesions in predictive toxicology. Furthermore, GE microarray data were integrated with histology features in a unified multi-modal scheme combining imaging and omics data. The solutions were developed in collaboration with domain experts and considered promising for application.
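The Data Analysis Plan idea mentioned above, keeping feature selection inside the cross-validation loop to avoid selection bias, can be sketched as follows; the data are synthetic and the actual DAP protocol in the thesis is more elaborate:

```python
# Minimal sketch of a DAP-style evaluation: feature ranking stays inside the
# cross-validation loop (via a Pipeline) so test folds never influence model selection.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import LinearSVC
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 500))          # hypothetical omics feature matrix
y = rng.integers(0, 2, size=120)         # hypothetical binary phenotype

pipe = Pipeline([("select", SelectKBest(f_classif, k=20)),
                 ("clf", LinearSVC(dual=False))])
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="matthews_corrcoef")
print(scores.mean(), scores.std())
```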
37

Wassermann, Demian. "Automated in vivo dissection of white matter structures from diffusion magnetic resonance imaging." Nice, 2010. http://www.theses.fr/2010NICE4066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Le cerveau est organisé tel un réseau reliant différentes régions. Ce réseau est important pour le développement de fonctions comme le langage. Certains troubles cognitifs peuvent être expliqués par des problèmes de connexion entre régions plus qu’à des dommages de ces dernières. Malgré plusieurs décennies de travail sur ces réseaux, nos connaissances sur le sujet n’ont pas beaucoup évoluées depuis le début du siècle dernier. Récemment, un développement spectaculaire des techniques de l’imagerie par résonance magnétique (IRM) a permis l’étude vivant du cerveau humain. Une technique permettant l’exploration des faisceaux de la matière blanche (MB) in vivo est l’IRM de diffusion (IRMd). En particulier, la trajectographie à partir de l’IRMd facilite le traçage des faisceaux de la MB. C’est donc une technique prometteuse afin d’explorer l’aspect cognitif de l’anatomie humaine ainsi que de ses troubles. La motivation de cette thèse est la dissection in vivo de la MB. Cette procédure permet d’isoler les faisceaux de la MB, qui jouent un rôle particulier dans le fonctionnement du cerveau, de façon à pouvoir les analyser. L’exécution manuelle de cette tache requiert une grande connaissance du cerveau et demande plusieurs heurs de travail. Le développement d’une technique automatique est donc de la plus grande importance. Cette thèse contient plusieurs contributions : nous développons des moyens d’automatiser la dissection de la MB, c’est-à-dire le cadre mathématique nécessaire à sa compréhension. Ces outils nous permettent ensuite de développer des techniques d’analyse de la moelle épinière et de recherche de différences dans la MB entre des individus sains et schizophrènes
The brain is organized in networks that are made up of tracts connecting different regions. These networks are important for the development of brain functions such as language. Lesions and cognitive disorders are sometimes better explained by disconnection mechanisms between cerebral regions than by damage to those regions. Despite several decades of tracing these networks in the brain, our knowledge of cerebral connections has progressed very little since the beginning of the last century. Recently, we have seen a spectacular development of magnetic resonance imaging (MRI) techniques for the study of the living human brain. One technique for exploring white matter (WM) tissue characteristics and pathways in vivo is diffusion MRI (dMRI). In particular, dMRI tractography facilitates tracing WM tracts in vivo. dMRI is a promising technique to explore the anatomical basis of human cognition and its disorders. The motivation of this thesis is the in vivo dissection of the WM. This procedure isolates the WM tracts that play a role in a particular function or disorder of the brain so that they can be analysed. Manually performing this task requires a great knowledge of brain anatomy and several hours of work. Hence, the development of a technique to automatically identify WM structures is of utmost importance. This thesis makes several contributions: we develop means for the automatic dissection of WM tracts from dMRI, based on a mathematical framework for the WM and its tracts; using these tools, we develop techniques to analyse the spinal cord and to find group differences in the WM, particularly between healthy and schizophrenic subjects.
38

Vétil, Rebeca. "Artificial Intelligence Methods to Assist the Diagnosis of Pancreatic Diseases in Radiology." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAT014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Avec l’augmentation de son incidence et son taux de survie à cinq ans (9%), le cancer du pancréas pourrait devenir la troisième cause de décès par cancer d’ici 2025.Ces chiffres sont notamment dus aux diagnostics tardifs, limitant les options thérapeutiques. Cette thèse vise à assister les radiologues dans le diagnostic du cancer du pancréas sur des images scanner grâce à des outils d’intelligence artificielle (IA) qui faciliteraient un diagnostic précoce. Pour atteindre ces objectifs, trois pistes de recherche ont été explorées. Premièrement, une méthode de segmentation automatique du pancréas a été développée. Le pancréas présentant une forme allongée et des extrémités subtiles, la méthode proposée utilise des informations géométriques pour ajuster localement la sensibilité de la segmentation. Deuxièmement, une méthode réalise la détection des lésions et de la dilatation du canal pancréatique principal (CPP), deux signes cruciaux du cancer du pancréas. La méthode proposée commence par segmenter le pancréas, les lésions et le CPP. Ensuite, des caractéristiques quantitatives sont extraites des segmentations prédites puis utilisées pour prédire la présence d’une lésion et la dilatation du CPP. La robustesse de la méthode est de montrer sur une base externe de 756 patients. Dernièrement, afin de permettre un diagnostic précoce, deux approches sont proposées pour détecter des signes secondaires. La première utilise un grand nombre de masques de segmentation de pancréas sains pour apprendre un modèle normatif des formes du pancréas. Ce modèle est ensuite exploité pour détecter des formes anormales, en utilisant des méthodes de détection d’anomalies avec peu ou pas d’exemples d’entraînement. La seconde approche s’appuie sur deux types de radiomiques : les radiomiques profonds (RP), extraits par des réseaux de neurones profonds, et les radiomiques manuels (RM), calculés à partir de formules prédéfinies. La méthode extrait des RP non redondants par rapport à un ensemble prédéterminé de RM afin de compléter l’information déjà contenue. Les résultats montrent que cette méthode détecte Efficacement quatre signes secondaires : la forme anormale, l’atrophie, l’infiltration de graisse et la sénilité. Pour élaborer ces méthodes, une base de données de 2800 examens a été constituée, ce qui en fait l’une des plus importantes pour la recherche en IA sur le cancer du pancréas
With its increasing incidence and its five-year survival rate (9%), pancreatic cancer could become the third leading cause of cancer-related deaths by 2025. These figures are primarily attributed to late diagnoses, which limit therapeutic options. This thesis aims to assist radiologists in diagnosing pancreatic cancer through artificial intelligence (AI) tools that would facilitate early diagnosis. Several methods have been developed. First, a method for the automatic segmentation of the pancreas on portal CT scans was developed. To deal with the specific anatomy of the pancreas, which is characterized by an elongated shape and subtle extremities easily missed, the proposed method relied on local sensitivity adjustments using geometrical priors. Then, the thesis tackled the detection of pancreatic lesions and main pancreatic duct (MPD) dilatation, both crucial indicators of pancreatic cancer. The proposed method started with the segmentation of the pancreas, the lesion and the MPD. Then, quantitative features were extracted from the segmentations and leveraged to predict the presence of a lesion and the dilatation of the MPD. The method was evaluated on an external test cohort comprising hundreds of patients. Continuing towards early diagnosis, two strategies were explored to detect secondary signs of pancreatic cancer. The first approach leveraged large databases of healthy pancreases to learn a normative model of healthy pancreatic shapes, facilitating the identification of anomalies. To this end, volumetric segmentation masks were embedded into a common probabilistic shape space, enabling zero-shot and few-shot abnormal shape detection. The second approach leveraged two types of radiomics: deep learning radiomics (DLR), extracted by deep neural networks, and hand-crafted radiomics (HCR), derived from predefined formulas. The proposed method sought to extract non-redundant DLR that would complement the information contained in the HCR. Results showed that this method effectively detected four secondary signs of pancreatic cancer: abnormal shape, atrophy, senility, and fat replacement. To develop these methods, a database of 2800 examinations has been created, making it one of the largest for AI research on pancreatic cancer.
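A minimal sketch of the second step described above, deriving quantitative features from predicted pancreas and duct segmentations and feeding them to a classifier; the features, labels and data here are hypothetical stand-ins, not the thesis pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def mask_features(pancreas_mask, duct_mask, voxel_volume=1.0):
    """Hypothetical features from predicted segmentations: organ volume,
    duct volume and their ratio as a crude dilatation surrogate."""
    v_panc = pancreas_mask.sum() * voxel_volume
    v_duct = duct_mask.sum() * voxel_volume
    return [v_panc, v_duct, v_duct / max(v_panc, 1e-6)]

rng = np.random.default_rng(3)
features = rng.random((60, 3))            # stand-in for features computed with mask_features
dilated = rng.integers(0, 2, size=60)     # hypothetical MPD dilatation labels
clf = LogisticRegression().fit(features, dilated)
print(clf.predict_proba(features[:3]))
```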
39

Preusse, Franziska. "High fluid intelligence and analogical reasoning." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2011. http://dx.doi.org/10.18452/16424.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Hitherto, previous studies on the cerebral correlates of fluid intelligence (fluIQ) used tasks that did not exclusively demand fluIQ, or were restricted to participants of average fluIQ (ave-fluIQ) solving intelligence test items of varying difficulty, thus not allowing assumptions on interindividual differences in fluIQ. Geometric analogical reasoning (GAR) demands fluIQ very purely and thus is an eligible approach for research on interindividual differences in fluIQ. In a first study, we examined the cerebral correlates of GAR, and showed the involvement of parietal and frontal brain regions. This is in line with the assumptions of the parieto-frontal integration theory (P-FIT) of intelligence and with literature reports for other visuo-spatial tasks. Building upon these findings, we report results from a second study with high fluIQ (hi-fluIQ) and ave-fluIQ school students solving a GAR task. Again in line with the P-FIT model, we demonstrated that the parieto-frontal network is involved in GAR in both groups. However, the extent of task-related brain activation in parietal and frontal brain regions was differentially modulated by fluIQ. Our results thus partly run counter to the postulates of the neural efficiency hypothesis, which assumes a negative brain activation-intelligence relationship. We conclude that this relationship is not generally unitary; rather, it can be conjectured that the adaptive and flexible modulation of brain activation is characteristic of hi-fluIQ. Knowledge on the stability of the cerebral correlates of hi-fluIQ during adolescence had been sparse. To elucidate this field, we examined the follow-up stability of the cerebral correlates of GAR in hi-fluIQ in a third study. We demonstrated that the relevant brain network is in place already at age 17 and that improvements in behavioral performance at age 18 due to task familiarity are indicative of more efficient use of the cerebral resources available.
Bisherige Studien zu zerebralen Korrelaten fluider Intelligenz (fluIQ) haben Aufgaben verwendet, die fluIQ nicht in Reinform erfordern oder haben Probanden mit durchschnittlicher fluIQ (ave-fluIQ) beim Lösen von Intelligenztestaufgaben mit variierenden Schwierigkeitsstufen untersucht und ermöglichen daher keine Aussagen zu interindividuellen Unterschieden in fluIQ. Geometrisches analoges Schließen (GA) beansprucht fluIQ in Reinform und eignet sich daher als differentielles Untersuchungsparadigma. In einer ersten Studie haben wir die zerebralen Korrelate des GA untersucht und nachgewiesen, dass parietale und frontale Hirnregionen involviert sind. Dies steht im Einklang mit der parieto-frontalen Integrationstheorie (P-FIT) der Intelligenz und mit Literaturberichten zu anderen visuell-räumlichen Aufgaben. Aufbauend auf diesen Befunden berichten wir Ergebnisse einer zweiten Studie, in der Schüler mit hoher fluIQ (hi-fluIQ) und ave-fluIQ GA-Aufgaben lösten. In Übereinstimmung mit den Annahmen des P-FIT-Modells konnten wir zeigen, dass GA in beiden Gruppen das parieto-frontale Netzwerk beansprucht. Das Ausmaß der Hirnaktivierung wurde jedoch differentiell durch fluIQ moduliert. Unsere Ergebnisse widersprechen damit teilweise den Postulaten der neuralen Effizienztheorie, die einen negativen Zusammenhang zwischen Hirnaktivierung und Intelligenz annimmt. Wir schlussfolgern, dass dieser Zusammenhang nicht generell einseitig gerichtet ist, sondern die flexible Modulation von Hirnaktivierung charakteristisch für hi-fluIQ ist. Befunde zur Stabilität zerebraler Korrelate von hi-fluIQ in der Jugend waren bisher rar. Um dieses Feld zu beleuchten, haben wir die follow-up-Stabilität zerebraler Korrelate des GA in der hi-fluIQ Gruppe in einer dritten Studie untersucht. Wir konnten zeigen, dass das relevante zerebrale Netzwerk schon mit 17 Jahren etabliert ist und Performanzverbesserungen über die Zeit für eine effizientere Nutzung der verfügbaren zerebralen Ressourcen sprechen.
40

Janse, van Rensburg Frederick Johannes. "Object recognition and automatic selection in a Robotic Sorting Cell." Thesis, Stellenbosch : University of Stellenbosch, 2006. http://hdl.handle.net/10019.1/2609.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2006.
This thesis relates to the development of an automated sorting cell, using object recognition, as part of a flexible manufacturing line. Algorithms for each of the individual subsections making up the cell (recognition, position calculation and robot integration) were developed and tested. The Fourier descriptor object recognition technique is investigated and used, as it provides invariance to scale, rotation or translation of an object's boundary. Stereoscopy with basic trigonometry is used to calculate the position of recognised objects, after which they are handled by a robot. Integration of the robot into the project environment is done with trigonometry as well as Euler angles. It is shown that a successful, automated sorting cell can be constructed with object recognition. The results show that reliable sorting can be done with the available hardware and the algorithms developed.
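The invariance properties of Fourier descriptors mentioned above can be sketched in a few lines; this is a generic illustration of the technique, not the thesis code:

```python
import numpy as np

def fourier_descriptors(contour, n_keep=16):
    """Fourier descriptors of a closed 2D contour (N x 2 array).
    Dropping the DC term removes translation, dividing by the first harmonic
    magnitude removes scale, and keeping only magnitudes discards phase,
    which removes rotation and starting-point dependence."""
    z = contour[:, 0] + 1j * contour[:, 1]
    coeffs = np.fft.fft(z)
    coeffs = coeffs[1:n_keep + 1]              # drop DC term (translation)
    return np.abs(coeffs) / np.abs(coeffs[0])  # scale and rotation invariant

# A circle and a scaled, rotated, shifted copy give near-identical descriptors.
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
moved = 3.0 * circle @ np.array([[0, -1], [1, 0]]) + np.array([5.0, -2.0])
print(np.allclose(fourier_descriptors(circle), fourier_descriptors(moved), atol=1e-6))
```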
41

Li, Chao. "Characterising heterogeneity of glioblastoma using multi-parametric magnetic resonance imaging." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/287475.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A better understanding of tumour heterogeneity is central for accurate diagnosis, targeted therapy and personalised treatment of glioblastoma patients. This thesis aims to investigate whether pre-operative multi-parametric magnetic resonance imaging (MRI) can provide a useful tool for evaluating inter-tumoural and intra-tumoural heterogeneity of glioblastoma. For this purpose, we explored: 1) the utilities of habitat imaging in combining multi-parametric MRI for identifying invasive sub-regions (I & II); 2) the significance of integrating multi-parametric MRI, and extracting modality inter-dependence for patient stratification (III & IV); 3) the value of advanced physiological MRI and radiomics approach in predicting epigenetic phenotypes (V). The following observations were made: I. Using a joint histogram analysis method, habitats with different diffusivity patterns were identified. A non-enhancing sub-region with decreased isotropic diffusion and increased anisotropic diffusion was associated with progression-free survival (PFS, hazard ratio [HR] = 1.08, P < 0.001) and overall survival (OS, HR = 1.36, P < 0.001) in multivariate models. II. Using a thresholding method, two low perfusion compartments were identified, which displayed hypoxic and pro-inflammatory microenvironment. Higher lactate in the low perfusion compartment with restricted diffusion was associated with a worse survival (PFS: HR = 2.995, P = 0.047; OS: HR = 4.974, P = 0.005). III. Using an unsupervised multi-view feature selection and late integration method, two patient subgroups were identified, which demonstrated distinct OS (P = 0.007) and PFS (P < 0.001). Features selected by this approach showed significantly incremental prognostic value for 12-month OS (P = 0.049) and PFS (P = 0.022) than clinical factors. IV. Using a method of unsupervised clustering via copula transform and discrete feature extraction, three patient subgroups were identified. The subtype demonstrating high inter-dependency of diffusion and perfusion displayed higher lactate than the other two subtypes (P = 0.016 and P = 0.044, respectively). Both subtypes of low and high inter-dependency showed worse PFS compared to the intermediate subtype (P = 0.046 and P = 0.009, respectively). V. Using a radiomics approach, advanced physiological images showed better performance than structural images for predicting O6-methylguanine-DNA methyltransferase (MGMT) methylation status. For predicting 12-month PFS, the model of radiomic features and clinical factors outperformed the model of MGMT methylation and clinical factors (P = 0.010). In summary, pre-operative multi-parametric MRI shows potential for the non-invasive evaluation of glioblastoma heterogeneity, which could provide crucial information for patient care.
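A minimal sketch of the habitat idea used in points I and II, partitioning tumour voxels by median thresholds on diffusion (ADC) and perfusion (rCBV) maps; the thresholding rule and data are hypothetical stand-ins, not the thesis methods:

```python
import numpy as np

def perfusion_diffusion_habitats(adc, rcbv, roi):
    """Hypothetical habitat map: split tumour voxels by the ROI medians of
    ADC (diffusion) and rCBV (perfusion) into four sub-regions; label 3 is
    the 'low perfusion with restricted diffusion' compartment."""
    low_perf = rcbv < np.median(rcbv[roi])
    low_adc = adc < np.median(adc[roi])
    habitats = np.full(adc.shape, -1, dtype=int)
    habitats[roi] = (low_perf.astype(int) * 2 + low_adc.astype(int))[roi]
    return habitats  # labels 0-3 inside the ROI, -1 outside

rng = np.random.default_rng(4)
adc, rcbv = rng.random((2, 32, 32, 32))       # synthetic parameter maps
roi = rng.random((32, 32, 32)) > 0.7          # synthetic tumour ROI
labels, counts = np.unique(perfusion_diffusion_habitats(adc, rcbv, roi), return_counts=True)
print(dict(zip(labels.tolist(), counts.tolist())))
```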
42

Chambe, Mathieu. "Improving image quality using high dynamic range and aesthetics assessment." Electronic Thesis or Diss., Université de Rennes (2023-....), 2023. http://www.theses.fr/2023URENS015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Pour traiter la grande quantité de données visuelles disponible, il est important de concevoir des algorithmes qui peuvent trier, améliorer, compresser ou stocker des images et des vidéos. Dans cette thèse, nous proposons deux approches différentes pour améliorer la qualité d'images. Tout d'abord, nous proposons une étude des méthodes d'évaluation automatique de l'esthétique. Ces algorithmes sont basés sur des réseaux de neurones supervisés. Nous avons récolté des images de différents types, puis nous avons utilisé ces images pour tester des modèles. Notre étude montre que les caractéristiques nécessaires pour évaluer précisément les esthétiques de photographies professionnelles ou compétitives sont différentes, mais qu'elles peuvent être apprises par un seul et unique réseau. Enfin, nous proposons de travailler sur les images à grande gamme dynamique (High Dynamic Range, HDR en anglais). Nous présentons ici un nouvel opérateur pour augmenter la gamme dynamique d'images standards, appelé HDR-LFNet. Cet opérateur fusionne la sortie de plusieurs algorithmes pré-existants, ce qui permet d'avoir un réseau plus léger et plus rapide. Nous évaluons les performances de la méthode proposée grâce à des métriques objectives, ainsi qu'une évaluation subjective. Nous prouvons que notre méthode atteint des résultats similaires à l'état de l'art en utilisant moins de ressources
To cope with the increasing amount of visual content available, it is important to devise automatic processes that can sort, improve, compress or store images and videos. In this thesis, we propose two different approaches to software-based image improvement. First, we propose a study of existing aesthetics assessment algorithms. These algorithms are based on supervised neural networks. We collected several datasets of images and tested different models using these images. We report the performance of such networks, as well as an idea to improve the already trained networks. Our study shows that the features needed to accurately predict the aesthetics of competitive and professional photographs are different, but can be learned simultaneously by a single network. Second, we propose to work with High Dynamic Range (HDR) images. We present a new operator to increase the dynamic range of images, called HDR-LFNet, which merges the output of existing operators and therefore requires far fewer parameters. We evaluate our method through objective metrics and a user study. We show that our method is on par with the state of the art according to objective metrics, but is preferred by observers in the user study, while using fewer resources overall.
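The fusion idea behind HDR-LFNet, merging the outputs of several pre-existing expansion operators, can be caricatured with a hand-crafted weighted blend; the gamma expansions and weights below are crude stand-ins for the learned fusion described in the thesis:

```python
import numpy as np

def naive_expansions(ldr, gammas=(1.5, 2.2, 3.0)):
    """Stand-ins for pre-existing inverse tone mapping operators:
    simple gamma expansions of an LDR image in [0, 1]."""
    return [ldr ** g for g in gammas]

def fuse_candidates(candidates, ldr):
    """Blend three candidate HDR estimates, leaning on the strongest
    expansion in bright (highlight) pixels and on the mildest elsewhere."""
    w_highlight = ldr
    weights = [1.0 - w_highlight, np.ones_like(ldr), 1.0 + w_highlight]
    num = sum(w * c for w, c in zip(weights, candidates))
    return num / sum(weights)

ldr = np.random.rand(64, 64)
print(fuse_candidates(naive_expansions(ldr), ldr).max())
```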
43

TRIMBOLI, RUBINA MANUELA. "NEW TRENDS IN BREAST IMAGING FOR BREAST CANCER AND CARDIOVASCULAR RISK." Doctoral thesis, Università degli Studi di Milano, 2020. http://hdl.handle.net/2434/699518.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background. Qualitative, subjective reading of medical images has been the backbone of image interpretation for the past century, providing useful information to the treating physician. During the past two decades, advances in medical imaging technology have offered the possibility to extract high-resolution anatomic, physiologic, functional, biochemical, and metabolic information from clinical images, all of which reflect the molecular composition of the healthy or diseased tissue of the organs imaged in the human body. We are now entering the era of "quantitative imaging", recently formally defined as "the extraction of quantifiable features from medical images for the assessment of normality, or the severity, degree of change, or status of a disease, injury, or chronic condition relative to normal". With appropriate calibration, most of the current imaging technologies can provide quantitative information about specific properties of the tissues being imaged. Purpose. This doctoral thesis aims at exploring the possible use of imaging methods such as mammography and breast magnetic resonance imaging (MRI) as imaging biomarkers, measuring functional, biochemical and metabolic characteristics of the breast through medical images. Part I. Breast arterial calcifications for cardiovascular risk. Breast arterial calcifications (BAC) are easily recognizable on screening mammography and are associated with coronary artery disease. We sought to make BAC estimation easily applicable in the clinical prevention of cardiovascular disease. In particular, we evaluated the intra- and inter-observer reproducibility of i) a specifically developed semi-automatic tool and of ii) a semi-quantitative scale for BAC quantification on digital mammograms. Part II. Multiparametric breast MRI for breast cancer management. Multiparametric breast MRI allows multiple functional processes at the cellular and molecular levels to be quantified and visualized simultaneously, to further elucidate the development and progression of breast cancer (BC) and the response to treatment. The purpose of our study was to verify the correlation between enhancement parameters derived from routine contrast-enhanced breast MRI and pathological prognostic factors in invasive BC, as a condition for the use of MRI-derived imaging biomarkers alongside traditional prognostic tools in clinical decision-making. Part III. Artificial intelligence in breast MRI. Recent enthusiasm regarding the introduction of artificial intelligence (AI) into health care and, in particular, into radiology has increased clinicians' expectations and also fears regarding the possible impact of AI on their profession. The large datasets provided by and potentially extractable from breast MRI make it well suited for AI applications. This part presents a systematic mapping review of the literature on AI applications in breast MRI published during the past decade, analysing the phenomenon in terms of spread, clinical aim, used approach, and achieved results. Conclusions. Medical images represent imaging biomarkers of considerable interest in evidence-based clinical decision-making, for therapeutic development and treatment monitoring. Among imaging biomarkers, BAC represent the added value of an ongoing and consolidated cancer screening programme that can help prevent the main cause of death among women, in whom traditional cardiovascular risk scores do not perform adequately.
Breast MRI may act as a prognostic tool to improve BC management through the extraction of a wealth of functional cancer parameters. AI could certainly strengthen the use of imaging data by interacting with and integrating quantitative imaging, improving patient outcomes and reducing several sources of bias and variance in the quantitative results obtained from clinical images. The intrinsic multiparametric nature of MRI has the greatest potential to incorporate AI applications into so-called precision medicine. Nevertheless, AI applications are not yet ready to be incorporated into clinical practice, nor to replace the trained and experienced observer with the ability to interpret and judge during image reading sessions.
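The inter-observer reproducibility of the semi-quantitative BAC scale in Part I could, for example, be summarised with a weighted Cohen's kappa; a minimal sketch with invented reader scores (not the study data):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical BAC scores (0 = none ... 3 = severe) assigned by two readers
# to the same 15 mammograms; quadratic weighting rewards near-agreement.
reader_a = np.array([0, 1, 1, 2, 3, 0, 0, 2, 1, 3, 2, 0, 1, 2, 3])
reader_b = np.array([0, 1, 2, 2, 3, 0, 1, 2, 1, 3, 2, 0, 1, 1, 3])
print("weighted kappa:", cohen_kappa_score(reader_a, reader_b, weights="quadratic"))
```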
44

Penke, Lars. "Neuroscientific approaches to general intelligence and cognitive ageing." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2011. http://dx.doi.org/10.18452/13979.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
After an extensive review of what is known about the genetics and neuroscience of general intelligence, and a methodological note emphasising the necessity of considering latent variables in cognitive neuroscience studies (exemplified by a re-analysis of published results), the most well-established brain correlate of intelligence, brain size, is revisited from an evolutionary genetic perspective. Estimates of the coefficient of additive genetic variation in brain size suggest that there has been no recent directional selection on brain size, questioning its validity as a proxy for intelligence in evolutionary analyses. Instead, correlations of facial fluctuating asymmetry with intelligence and information processing speed in old men suggest that organism-wide developmental stability might be an important cause of individual differences in cognitive ability. The second half of the thesis focuses on cognitive ageing, beginning with a general review. In a sample of over 130 subjects, the integrity of different white matter tracts in the brain was found to be highly correlated, allowing the extraction of a general factor of white matter tract integrity that correlates with information processing speed. The only tract not loading highly on this general factor is the splenium of the corpus callosum, which correlates with changes in intelligence over six decades and mediates the effect of the beta2-adrenergic receptor gene (ADRB2) on cognitive ageing, possibly through its involvement in neuronal compensation processes. Finally, using a novel analytic method for magnetic resonance data, it is shown that increased iron deposition in the brain, presumably a marker of a history of cerebral microbleeds, is associated with both lifelong-stable intelligence differences and age-related decline in cognitive functioning.
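As an illustration of the "general factor of white matter tract integrity" described above, a minimal sketch on simulated data (not the thesis dataset) might extract the first principal component of per-tract integrity values and correlate it with a processing-speed score:

```python
# Minimal sketch with simulated data: a general factor of white matter
# tract integrity as the first principal component of per-tract FA values,
# correlated with an (equally simulated) processing-speed score.
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects, n_tracts = 130, 12

# Simulate correlated tract integrity: a shared factor plus tract-specific noise.
shared = rng.normal(size=(n_subjects, 1))
fa = 0.5 + 0.05 * (shared + 0.5 * rng.normal(size=(n_subjects, n_tracts)))

# Simulated processing speed partly driven by the same shared factor.
speed = 100 + 10 * shared[:, 0] + 5 * rng.normal(size=n_subjects)

# First principal component of standardised tract values = general factor score.
z = (fa - fa.mean(axis=0)) / fa.std(axis=0)
g_tract = PCA(n_components=1).fit_transform(z)[:, 0]

# PCA component signs are arbitrary; orient the factor so higher = more intact.
if np.corrcoef(g_tract, z.mean(axis=1))[0, 1] < 0:
    g_tract = -g_tract

r, p = pearsonr(g_tract, speed)
print(f"general factor vs. processing speed: r = {r:.2f}, p = {p:.3g}")
```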
45

Fraenz, Christoph [Verfasser], Onur [Gutachter] Güntürkün, and Nikolai [Gutachter] Axmacher. "Neural correlates of intelligence and general knowledge obtained by magnetic resonance imaging / Christoph Fraenz ; Gutachter: Onur Güntürkün, Nikolai Axmacher ; Fakultät für Psychologie." Bochum : Ruhr-Universität Bochum, 2019. http://d-nb.info/1201561000/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Nasrin, Mst Shamima. "Pathological Image Analysis with Supervised and Unsupervised Deep Learning Approaches." University of Dayton / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1620052562772676.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Nuzhnaya, Tatyana. "ANALYSIS OF ANATOMICAL BRANCHING STRUCTURES." Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/322471.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Computer and Information Science
Ph.D.
Development of state-of-the-art medical imaging modalities such as Magnetic Resonance Imaging, Computed Tomography, Galactography, MR Diffusion Tensor Imaging, and Tomosynthesis plays an important role in the visualization and assessment of anatomical structures. Included among these structures are structures of branching topology, such as the bronchial tree in chest computed tomography images, the blood vessels in retinal images, the breast ductal network in x-ray galactograms, and the tubular bone patterns in dental radiography. Analysis of such images could help reveal abnormalities, assist in estimating the risk of diseases such as breast cancer and COPD, and aid in the development of realistic anatomy phantoms. This thesis aims at the development of a set of automated methods for the analysis of anatomical structures of tree and network topology. More specifically, the two main objectives are (i) the development of an analysis framework to explore the association between the topology and texture patterns of anatomical branching structures and (ii) the development of image processing methods for enhanced visualization of regions of interest in anatomical branching structures, such as branching nodes.
Temple University--Theses
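To illustrate one way the branching nodes mentioned in the abstract above could be located for enhanced visualization, here is a toy sketch on a tiny synthetic skeleton. It uses a generic neighbour-counting heuristic and is not the method developed in the thesis.

```python
# Toy sketch (hypothetical, not the thesis pipeline): find branching nodes of a
# tree-like structure by counting 8-connected neighbours on a one-pixel-wide
# skeleton; pixels with three or more neighbours are branch points.
# In a real pipeline the segmented vessel/duct mask would first be thinned,
# e.g. with skimage.morphology.skeletonize.
import numpy as np
from scipy.ndimage import convolve

# Tiny synthetic Y-shaped "skeleton".
skel = np.zeros((9, 9), dtype=bool)
skel[0:5, 4] = True                 # vertical trunk ending at (4, 4)
for i in range(1, 5):               # two diagonal branches leaving (4, 4)
    skel[4 + i, 4 - i] = True
    skel[4 + i, 4 + i] = True

kernel = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])
neighbours = convolve(skel.astype(int), kernel, mode="constant")
branch_points = skel & (neighbours >= 3)

print(np.argwhere(branch_points))   # expected: [[4 4]], the bifurcation node
```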
48

Sobel, Ryan A. "The Role of Competitive Intelligence in Strategic Decision Making for Commercializing a Novel Endovascular Navigation Technology." Case Western Reserve University School of Graduate Studies / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=case1618854255867602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Duarte, Everton. "Associação entre volume cerebral e medidas de inteligência em adultos saudáveis: um estudo por ressonância magnética estrutural e volumetria baseada em voxel." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/5/5142/tde-07122011-130244/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Introduction: Cognitive functions are influenced by age, both during development and during decline. Measures of intelligence are generally related to the volume of gray matter (GM) in specific brain areas and are also influenced by the ageing process. In children and adolescents the area mainly involved is the prefrontal cortex, while in young adults and the elderly the frontal and temporal cortices play an important role.
Objective: To identify which brain areas are involved in variations in intelligence measures in a representative sample spanning young adults to healthy elderly. The main hypotheses were: 1) there would be an inverse association between GM distribution and age involving frontal and temporal cortical areas; 2) in adults, estimated IQ measures would prove stable, reflecting the stability between fluid and crystallized intelligence functions; 3) in the elderly, estimated IQ measures would correlate directly with GM distribution in the temporal and limbic cortices; 4) in the elderly, measures of fluid intelligence would correlate directly with GM distribution in the frontal and prefrontal cortices; 5) in the elderly, measures of crystallized intelligence would show no significant correlations with GM distribution.
Methods: Brain image scans were obtained from a representative sample with a wide age range. We investigated 258 subjects between 18 and 75 years of age who met the inclusion/exclusion criteria and for whom IQ could be estimated using subtests of the WASI. Statistical analyses were performed using voxel-based morphometry in the form of statistical parametric maps, and only results that survived correction for multiple comparisons are reported. All results were corrected for confounding variables (protocol, MRI scanner, and total brain volume).
Results: In the elderly we identified correlations of estimated IQ and fluid intelligence with bilateral medial temporal and limbic regions. In the adult population we also identified correlations of crystallized intelligence with right frontal and prefrontal regions, and we observed GM loss involving the left prefrontal and frontal regions.
Conclusion: Measures of fluid and crystallized intelligence correlated directly with total brain volume and, specifically, with the frontal and bilateral prefrontal cortices and with temporal and limbic regions.
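As a rough illustration of the voxel-based correlation analysis described above, the following sketch uses simulated data (not the study sample, and not the SPM software actually used) to relate voxelwise gray-matter values to IQ after regressing out confounds, with a Bonferroni threshold standing in for the study's multiple comparison correction:

```python
# Minimal sketch with simulated data (not the thesis' SPM pipeline): voxelwise
# association between gray-matter volume and IQ, adjusting for confounds by
# regressing them out first, then correcting for multiple comparisons.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subj, n_voxels = 258, 5000

iq = rng.normal(100, 15, n_subj)
confounds = np.column_stack([
    rng.normal(1200, 100, n_subj),        # total brain volume (ml), illustrative
    rng.integers(0, 2, n_subj),           # scanner / protocol indicator
    np.ones(n_subj),                      # intercept
])
gm = rng.normal(size=(n_subj, n_voxels))
gm[:, :50] += 0.02 * (iq[:, None] - 100)  # plant a weak true effect in 50 voxels

def residualise(y, X):
    """Remove the part of y explained by the confound matrix X (least squares)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

iq_res = residualise(iq, confounds)
gm_res = residualise(gm, confounds)

# Voxelwise Pearson correlation between residualised GM and residualised IQ
# (df below ignores the confound columns for simplicity).
r = (gm_res * iq_res[:, None]).sum(0) / (
    np.linalg.norm(gm_res, axis=0) * np.linalg.norm(iq_res)
)
t = r * np.sqrt((n_subj - 2) / (1 - r**2))
p = 2 * stats.t.sf(np.abs(t), df=n_subj - 2)

# Bonferroni correction as a simple stand-in for SPM's familywise correction.
print("significant voxels:", int((p < 0.05 / n_voxels).sum()))
```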
50

Oukhatar, Fatima. "Design, synthesis and characterization of neurotransmitter responsive probes for magnetic resonance and optical imaging." Thesis, Orléans, 2012. http://www.theses.fr/2012ORLE2076/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In spite of the key role of neurotransmitters (NTs) in signal transduction, their non-invasive in vivo monitoring remains an important challenge. Magnetic resonance imaging (MRI) has recently been demonstrated to be a promising technique for non-invasively visualizing physiological events with excellent temporal and spatial resolution. In particular, smart MRI contrast agents able to report on the physico-chemical status of tissues are starting to have a strong impact in neuroscience. The objective of this work was the design, synthesis, and in vitro characterization of a series of lanthanide-based probes responsive to NTs, with the aim of tracking in vivo concentration changes of NTs using MR or optical imaging. The design of the imaging probes relies on a dual binding approach to zwitterionic NTs, involving interactions (i) between a positively charged Ln3+ chelate and the carboxylate function of the NTs and (ii) between an azacrown ether appended to the chelate and the amine group of the NTs. Some of the novel contrast agents were found to exhibit high relaxivities and a remarkable relaxivity response towards NTs, though with insufficient selectivity against bicarbonate. In order to enable a bimodal MRI/optical imaging approach, a benzophenone moiety was also incorporated into the chelate to sensitize the near-infrared-emitting Ln3+ ions. The corresponding Yb3+ complex showed very attractive luminescence properties with a strong response to NTs.
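For readers unfamiliar with the relaxivity values mentioned above, the sketch below shows, with made-up numbers rather than data from the thesis, how a longitudinal relaxivity r1 is obtained as the slope of the relaxation rate 1/T1 versus agent concentration:

```python
# Illustrative sketch (made-up numbers, not data from the thesis): the
# longitudinal relaxivity r1 of a contrast agent is the slope of the observed
# relaxation rate R1 = 1/T1 versus agent concentration.
import numpy as np

conc_mM = np.array([0.0, 0.25, 0.5, 1.0, 2.0])       # agent concentration (mM)
t1_s = np.array([3.00, 0.75, 0.43, 0.23, 0.12])      # measured T1 (s), illustrative
rate = 1.0 / t1_s                                     # R1 in s^-1

# Linear fit R1 = r1 * [agent] + R1_diamagnetic; the slope is the relaxivity.
r1, r1_dia = np.polyfit(conc_mM, rate, 1)
print(f"relaxivity r1 = {r1:.1f} per mM per s (diamagnetic rate {r1_dia:.2f} per s)")
```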
