Dissertations on the topic "ML algorithm"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 22 dissertations for research on the topic "ML algorithm".
Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, whenever the relevant parameters are available in the metadata.
Browse dissertations across a wide range of disciplines and compile your bibliography correctly.
Krüger, Franz David, and Mohamad Nabeel. „Hyperparameter Tuning Using Genetic Algorithms : A study of genetic algorithms impact and performance for optimization of ML algorithms“. Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-42404.
Full text of the source
As machine learning (ML) becomes more and more common in the business world, information gathering through data mining (DM) is on the rise, and DM practitioners generally rely on several rules of thumb to avoid spending a substantial amount of time tuning the hyperparameters (the parameters that control the learning process) of an ML algorithm to reach a high accuracy score. This report proposes an approach that systematically optimizes ML algorithms using genetic algorithms (GA) and evaluates whether, and how, the model should be constructed to find global solutions for a specific data set. A GA approach is applied to two ML algorithms, K-nearest neighbors and Random Forest, on two numerical data sets, the Iris data set and the Wisconsin breast cancer data set, and the model is evaluated by its accuracy scores as well as its computational time, which are then compared against a search method, specifically exhaustive search. The results indicate that GA works well at finding high accuracy scores in a reasonable amount of time. There are some limitations, as a parameter's significance for an ML algorithm may vary.
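For context, the loop this abstract describes (a genetic algorithm searching ML hyperparameters, scored by cross-validation accuracy) can be sketched in a few dozen lines. The sketch below is illustrative only, not the authors' code: the parameter ranges, population size, and GA rates are assumptions, and a real comparison against exhaustive search would also record computation time.

```python
# Minimal sketch: a generational GA tunes KNN hyperparameters on Iris,
# using 5-fold cross-validation accuracy as the fitness function.
# All search ranges and GA settings below are illustrative assumptions.
import random
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
N_NEIGHBORS = list(range(1, 31))    # assumed search space
WEIGHTS = ["uniform", "distance"]
P = [1, 2, 3]                       # Minkowski power parameter

def random_individual():
    return [random.choice(N_NEIGHBORS), random.choice(WEIGHTS), random.choice(P)]

def fitness(ind):
    clf = KNeighborsClassifier(n_neighbors=ind[0], weights=ind[1], p=ind[2])
    return cross_val_score(clf, X, y, cv=5).mean()

def crossover(a, b):
    cut = random.randrange(1, len(a))   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.2):
    pools = [N_NEIGHBORS, WEIGHTS, P]
    return [random.choice(pools[i]) if random.random() < rate else g
            for i, g in enumerate(ind)]

population = [random_individual() for _ in range(20)]
for generation in range(15):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:10]               # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best hyperparameters:", best, "CV accuracy: %.3f" % fitness(best))
```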
Mohammad, Maruf H. „Blind Acquisition of Short Burst with Per-Survivor Processing (PSP)“. Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/46193.
Full text of the source
Master of Science
Deyneka, Alexander. „Metody ekvalizace v digitálních komunikačních systémech“. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-218963.
Zhang, Dan [author]. „Iterative algorithms in achieving near-ML decoding performance in concatenated coding systems / Dan Zhang“. Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2014. http://d-nb.info/1048607224/34.
Santos, Helton Saulo Bezerra dos. „Essays on Birnbaum-Saunders models“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/87375.
Full text of the source
In this thesis, we present three different applications of Birnbaum-Saunders models. In Chapter 2, we introduce a new nonparametric kernel method for estimating asymmetric densities based on generalized skew-Birnbaum-Saunders distributions. Kernels based on these distributions have the advantage of providing flexibility in the asymmetry and kurtosis levels. In addition, the generalized skew-Birnbaum-Saunders kernel density estimators are boundary bias free and achieve the optimal rate of convergence for the mean integrated squared error of the nonnegative asymmetric kernel density estimators. We carry out a data analysis consisting of two parts. First, we conduct a Monte Carlo simulation study for evaluating the performance of the proposed method. Second, we use this method for estimating the density of three real air pollutant concentration data sets, whose numerical results favor the proposed nonparametric estimators. In Chapter 3, we propose a new family of autoregressive conditional duration models based on scale-mixture Birnbaum-Saunders (SBS) distributions. The Birnbaum-Saunders (BS) distribution is a model that has received considerable attention recently due to its good properties. An extension of this distribution is the class of SBS distributions, which allows (i) several of its good properties to be inherited; (ii) maximum likelihood estimation to be efficiently formulated via the EM algorithm; (iii) a robust estimation procedure to be obtained; among other properties. The autoregressive conditional duration model is the primary family of models to analyze high-frequency financial transaction data. This methodology includes parameter estimation by the EM algorithm, inference for these parameters, the predictive model and a residual analysis. We carry out a Monte Carlo simulation study to evaluate the performance of the proposed methodology. In addition, we assess the practical usefulness of this methodology by using real data of financial transactions from the New York stock exchange. Chapter 4 deals with process capability indices (PCIs), which are tools widely used by companies to determine the quality of a product and the performance of their production processes. These indices were developed for processes whose quality characteristic has a normal distribution. In practice, many of these characteristics do not follow this distribution. In such a case, the PCIs must be modified considering the non-normality. The use of unmodified PCIs can lead to inadequate results. In order to establish quality policies to solve this inadequacy, data transformation has been proposed, as well as the use of quantiles from non-normal distributions. An asymmetric non-normal distribution which has become very popular in recent times is the Birnbaum-Saunders (BS) distribution. We propose, develop, implement and apply a methodology based on PCIs for the BS distribution. Furthermore, we carry out a simulation study to evaluate the performance of the proposed methodology. This methodology has been implemented in a noncommercial and open source statistical software called R. We apply this methodology to a real data set to illustrate its flexibility and potentiality.
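For context, the Birnbaum-Saunders distribution underlying all three essays is the two-parameter lifetime model with shape α > 0 and scale β > 0; its cumulative distribution function is the textbook expression (stated here as background, not quoted from the thesis):

```latex
F(t;\alpha,\beta) \;=\; \Phi\!\left[\frac{1}{\alpha}\left(\sqrt{\tfrac{t}{\beta}} - \sqrt{\tfrac{\beta}{t}}\right)\right], \qquad t > 0,
```

where Φ is the standard normal CDF; equivalently, T ~ BS(α, β) exactly when Z = (1/α)(√(T/β) − √(β/T)) is standard normal, which is what makes EM-based maximum likelihood estimation tractable for the scale-mixture extensions mentioned above.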
Fecchio, Pietro. „High-precision measurement of the hypertriton lifetime and Λ-separation energy exploiting ML algorithms with ALICE at the LHC“. Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2968462.
Garg, Anushka. „Comparing Machine Learning Algorithms and Feature Selection Techniques to Predict Undesired Behavior in Business Processes and Study of Auto ML Frameworks“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-285559.
Full text of the source
In recent years, the scope of machine learning algorithms and techniques has expanded into virtually every industry (for example, recommendation systems, user behavior analytics, financial applications, and many more). In practice, they play an important role in harnessing the power of the enormous amounts of data we currently generate every day in our digital world. In this study, we present a comprehensive comparison of different supervised machine learning algorithms and feature selection techniques to build the best predictive model as an output. This predictive model helps companies predict unwanted behavior in their business processes. Furthermore, we have investigated the automation of all the steps involved (from understanding the data to deploying models) in the complete machine learning pipeline, also known as AutoML, and we provide a comprehensive survey of the various frameworks introduced in this domain. These frameworks were introduced to solve the CASH problem (Combined Algorithm Selection and Hyper-parameter optimization), which is essentially the automation of the various pipelines involved in the process of building a machine learning predictive model.
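The CASH problem named at the end of the abstract has a standard formalization, introduced in the Auto-WEKA line of work (quoted here as background, not from the thesis): given candidate algorithms A = {A^(1), ..., A^(R)}, each with its own hyperparameter space Λ^(j), and k cross-validation folds, an AutoML system seeks

```latex
A^{*}_{\lambda^{*}} \;\in\; \operatorname*{argmin}_{A^{(j)} \in \mathcal{A},\; \lambda \in \Lambda^{(j)}} \; \frac{1}{k} \sum_{i=1}^{k} \mathcal{L}\!\left(A^{(j)}_{\lambda},\; \mathcal{D}^{(i)}_{\mathrm{train}},\; \mathcal{D}^{(i)}_{\mathrm{valid}}\right),
```

where L is the validation loss of algorithm A^(j) trained with hyperparameters λ on the i-th training split. The joint search over both the algorithm index and its hyperparameters is what distinguishes CASH from plain hyperparameter tuning.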
Protzenko, Jonathan. „Mezzo : a typed language for safe effectful concurrent programs“. Paris 7, 2014. http://www.theses.fr/2014PA077159.
Full text of the source
The present dissertation argues that better programming languages can be designed and implemented, so as to provide greater safety and reliability for computer programs. I sustain my claims through the example of Mezzo, a programming language in the tradition of ML, which I co-designed and implemented. Programs written in Mezzo enjoy stronger properties than programs written in traditional ML languages: they are data-race free; state changes can be tracked by the type system; a central notion of ownership facilitates modular reasoning. Mezzo is not the first attempt at designing a better programming language; hence, a first part strives to position Mezzo relative to other works in the literature. I present landmark results in the field, which served either as sources of inspiration or as points of comparison. The subsequent part is about the design of the Mezzo language. Using a variety of examples, I illustrate the language features as well as the safety gains that one obtains by writing one's programs in Mezzo. In a subsequent part, I formalize the semantics of the Mezzo language. Mezzo is not just a type system that lives on paper: the final part describes the implementation of a type-checker for Mezzo, by formalizing the algorithms that I designed and the various ways the type-checker ensures that a program is valid.
Tade, Foluwaso Olunkunle. „Receiver architectures for MIMO wireless communication systems based on V-BLAST and sphere decoding algorithms“. Thesis, University of Hertfordshire, 2011. http://hdl.handle.net/2299/6400.
Pirozzi, Michela. „Development of a simulation tool for measurements and analysis of simulated and real data to identify ADLs and behavioral trends through statistics techniques and ML algorithms“. Doctoral thesis, Università Politecnica delle Marche, 2020. http://hdl.handle.net/11566/272311.
Full text of the source
With a growing population of elderly people, the number of subjects at risk of pathology is rapidly increasing. Many research groups are studying pervasive solutions to continuously and unobtrusively monitor fragile subjects in their homes, reducing health-care costs and supporting medical diagnosis. Anomalous behaviors while performing activities of daily living (ADLs), or variations in behavioral trends, are of great importance. To measure ADLs, a significant number of parameters affecting the measurement need to be considered, such as sensor and environment characteristics or sensor placement. Since the best sensor configuration, the one that minimizes costs and maximizes accuracy, cannot be studied in the real context, simulation tools are being developed as a powerful means to do so. This thesis presents several contributions on this topic. In the following research work, a measurement chain for ADLs, consisting of PIR sensors and an ML algorithm, is studied, and a simulation tool in the form of a Web Application has been developed to generate datasets and to simulate how the measurement chain reacts as the sensor configuration varies. Starting from the results of the eWare project, the simulation tool is meant to support technicians, developers and installers: it speeds up analysis and monitoring, allows rapid identification of changes in behavioral trends, guarantees monitoring of system performance, and supports the study of the best sensor-network configuration for a given environment. The UNIVPM Home Care Web App offers the chance to create ad hoc datasets related to ADLs and to conduct analyses with statistical algorithms applied to the data. To measure ADLs, machine learning algorithms have been implemented in the tool, and five different tasks have been identified. To test the validity of the developed instrument, six case studies divided into two categories were considered. To the first category belong the studies that 1) discover the best sensor configuration while keeping environmental characteristics and user behavior constant, and 2) identify the best-performing ML algorithms. The second category aims to prove the stability of the implemented algorithm, and its collapse condition, by varying user habits. Noise perturbation of the data was applied in all case studies. The results show the validity of the generated datasets. By maximizing the sensor network, it is possible to reduce the ML error to 0.8%. Because cost is a key factor in this scenario, the fourth case study showed that by minimizing the sensor configuration the cost can be reduced drastically while keeping the ML error at a more than reasonable value of around 11.8%. The results in ADL measurement can be considered more than satisfactory.
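As a toy illustration of the measurement chain described above (PIR sensors feeding an ML classifier), the sketch below generates a synthetic dataset of per-sensor trigger counts for a handful of ADLs, applies a noise perturbation, and scores a classifier. The sensor layout, class profiles, and generative model are invented for illustration; they are not the data format of the UNIVPM Home Care Web App.

```python
# Toy sketch of the PIR-sensors -> ML-classifier chain. Each sample is a
# vector of PIR trigger counts per sensor over a time window; the class
# profiles and Poisson data model below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
ADLS = ["sleeping", "cooking", "eating", "toileting", "watching_tv"]
# Hypothetical mean trigger counts for 6 PIR sensors (bed, kitchen, table, ...).
PROFILES = np.array([
    [9, 0, 0, 1, 0, 0],   # sleeping
    [0, 8, 2, 0, 1, 0],   # cooking
    [0, 2, 8, 0, 1, 0],   # eating
    [1, 0, 0, 8, 0, 1],   # toileting
    [0, 0, 1, 0, 8, 2],   # watching_tv
])

def simulate(n_per_class=200, noise=1.0):
    X, y = [], []
    for label, profile in enumerate(PROFILES):
        counts = rng.poisson(profile, size=(n_per_class, profile.size))
        counts = counts + rng.normal(0, noise, counts.shape)  # noise perturbation
        X.append(counts)
        y += [label] * n_per_class
    return np.vstack(X), np.array(y)

X, y = simulate()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("ML error: %.1f%%" % (100 * (1 - clf.score(X_te, y_te))))
```

Varying PROFILES (the sensor configuration) and the noise level is the kind of experiment the simulation tool automates when searching for the configuration that trades off cost against ML error.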
Wessman, Filip. „Advanced Algorithms for Classification and Anomaly Detection on Log File Data : Comparative study of different Machine Learning Approaches“. Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-43175.
Ucci, Graziano. „The Interstellar Medium of Galaxies: a Machine Learning Approach“. Doctoral thesis, Scuola Normale Superiore, 2019. http://hdl.handle.net/11384/85928.
Gupta, Sonali. „Classifying Fraudulent Companies Using ML Algorithm in Python“. Thesis, 2021. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19186.
Chung, Hsiang-Han, and 鍾享翰. „A SISO ML Decoding Algorithm and Its Application in Turbo Decoding“. Thesis, 2007. http://ndltd.ncl.edu.tw/handle/47608737661188217448.
Full text of the source
Chang Gung University
Graduate Institute of Electrical Engineering
95 (ROC academic year, 2006-2007)
Abstract
For many applications, the component code used with iterative decoding is a recursive systematic convolutional (RSC) code. Turbo codes are a class of iteratively decoded codes first proposed in 1993 by Berrou, Glavieux and Thitimajshima, who reported excellent coding-gain results approaching the theoretical limit predicted by Shannon. Owing to this excellent performance, the coding technique has been widely used for error control in many communication applications, such as third-generation (3G) mobile radio systems and deep-space communications. Two suitable decoding algorithms for turbo codes are the Soft-Output Viterbi Algorithm (SOVA), proposed by Hagenauer and Hoeher, and the Maximum A-Posteriori (MAP) algorithm, proposed by Bahl et al. In this thesis, a new decoding algorithm for turbo codes is proposed. It substitutes the log ratio of the a-observation probability for the log ratio of the a-posteriori probability during decoding. Theoretically, the proposed scheme sacrifices no coding gain, i.e., it achieves the BER performance of the Log-MAP algorithm. Finally, we offer a decoding structure. Compared with the structure proposed by Berrou et al., it removes some of the conventional scheme's computationally expensive operations, which also makes it suitable for decoding turbo codes in mobile phones.
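For context, the quantity whose log ratio the thesis proposes to replace is the standard a-posteriori log-likelihood ratio that a MAP (or Log-MAP) component decoder computes for every information bit u_k from the received sequence y (a textbook definition, not a formula from the thesis):

```latex
L(u_k) \;=\; \ln \frac{P(u_k = +1 \mid \mathbf{y})}{P(u_k = -1 \mid \mathbf{y})}, \qquad \hat{u}_k = \operatorname{sign}\!\big(L(u_k)\big).
```

In iterative decoding, each component decoder exchanges the extrinsic part of this ratio with its peer; replacing the a-posteriori ratio with an a-observation ratio, as proposed above, changes what each component computes while aiming to preserve the Log-MAP error rate.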
Wu, Meng-Lin, and 吳孟霖. „Theory and Performance of ML Decoding for LDPC Codes Using Genetic Algorithm“. Thesis, 2009. http://ndltd.ncl.edu.tw/handle/64317738328812502720.
Full text of the source
National Taiwan University
Graduate Institute of Communication Engineering
97 (ROC academic year, 2008-2009)
Low-density parity-check (LDPC) codes have drawn considerable attention lately due to their exceptional performance. Typical decoders operate on the belief-propagation principle. Although these decoding algorithms work remarkably well, it is generally suspected that they do not achieve the performance of ML decoding. The ML performance of LDPC codes remains unknown because efficient ML decoders have not been discovered. Although it has been proved that, for various appropriately chosen ensembles of LDPC codes, low error probability and reliable communication are possible up to channel capacity, we still want to know the actual limit for one specific code. Thus, in this thesis, our goal is to establish the ML performance. At a word error probability (WEP) of 10^{-5} or lower, we find that perturbed decoding (PD) can effectively achieve the ML performance at reasonable complexity. In the higher-error-probability regime, the complexity of PD becomes prohibitive. In light of this, we propose the use of gifts. Proper gifts can induce decoded codewords of high likelihood. We investigate the feasibility of using gifts in detail and discover that the complexity is dominated by the effort to identify small gifts that can pass the trigger criterion. A greedy concept is proposed to maximize the probability that a receiver produces such a gift. We also apply the concept of gifts to a genetic algorithm to find the ML bounds of LDPC codes. In the genetic decoding algorithm (GDA), chromosomes are gift sequences with some known gift bits. A conventional SPA decoder is used to assign fitness values to the chromosomes in the population. After evolution over many generations, chromosomes that correspond to decoded codewords of very high likelihood emerge. We also propose a parallel genetic decoding algorithm (P-GDA) based on the greedy concept and the feasibility study of gifts. The most important aspect of GDA, in our opinion, is that one can utilize the ML bounding technique and GDA to empirically determine an effective lower bound on the error probability of ML decoding. Our results show that GDA and P-GDA outperform a conventional decoder by 0.1 to 0.13 dB, and the two bounds converge at a WEP of 10^{-5}. Our results also indicate that, for a practical block size of thousands of bits, the SNR-error-probability relationship of LDPC codes trends smoothly in the same fashion as the sphere-packing bound. The abrupt, cliff-like error-probability curve is actually an artifact of the ineffectiveness of iterative decoding. If additional complexity is allowed, our methods can be applied to improve on typical decoders.
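A skeletal view of the GDA loop just described: chromosomes are candidate gift patterns, and the fitness of a chromosome is the likelihood of the codeword the decoder produces when those bits are pinned. The stubs, gift length, and GA settings below are placeholders; a real implementation would call an actual sum-product (SPA) decoder and score decoded codewords against the channel output.

```python
# Skeleton of the genetic decoding algorithm (GDA) sketched above. The two
# stub functions stand in for a real SPA decoder and a real channel-based
# likelihood; everything numeric here is a placeholder assumption.
import random

N_GIFT = 16          # number of gift positions (placeholder)
POP, GENS = 30, 50   # GA settings (placeholders)

def spa_decode(gift_bits):
    """Stub: run belief propagation with the gift bits pinned; return a codeword."""
    return gift_bits  # placeholder

def log_likelihood(codeword):
    """Stub: log-likelihood of a codeword given the received channel values."""
    return -sum(codeword)  # placeholder

def fitness(chromosome):
    return log_likelihood(spa_decode(chromosome))

def evolve(population):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]          # truncation selection
    children = []
    while len(survivors) + len(children) < POP:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, N_GIFT)       # single-point crossover
        child = a[:cut] + b[cut:]
        child = [bit ^ (random.random() < 0.05) for bit in child]  # bit-flip mutation
        children.append(child)
    return survivors + children

population = [[random.randint(0, 1) for _ in range(N_GIFT)] for _ in range(POP)]
for _ in range(GENS):
    population = evolve(population)
best = max(population, key=fitness)
print("highest-likelihood decoded codeword found:", spa_decode(best))
```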
Hsueh, Tsun-Chih. „Theory and Performance of ML Decoding for Turbo Codes using Genetic Algorithm“. 2007. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-3107200702055600.
Hsueh, Tsun-Chih, and 薛存志. „Theory and Performance of ML Decoding for Turbo Codes using Genetic Algorithm“. Thesis, 2007. http://ndltd.ncl.edu.tw/handle/51236587751523493601.
Full text of the source
National Taiwan University
Graduate Institute of Communication Engineering
95 (ROC academic year, 2006-2007)
Although it yields the lowest error probability, ML decoding of turbo codes has so far been considered unrealistic because efficient ML decoders have not been discovered. In this thesis, we propose an experimental bounding technique for ML decoding and the Genetic Decoding Algorithm (GDA) for turbo codes. The ML bounding technique establishes both lower and upper bounds for ML decoding. GDA combines the principles of perturbed decoding and the genetic algorithm. In GDA, chromosomes are random additive perturbation noises. A conventional turbo decoder is used to assign fitness values to the chromosomes in the population. After generations of evolution, good chromosomes that correspond to decoded codewords of very high likelihood emerge. GDA can be used as a practical decoder for turbo codes in certain contexts. It is also a natural multiple-output decoder. The most important aspect of GDA, in our opinion, is that one can utilize the ML bounding technique and GDA to empirically determine an effective lower bound on the error probability of ML decoding. Our results show that, at a word error probability of 10^{-4}, GDA achieves the performance of ML decoding. Using the ML bounding technique and GDA, we establish that an ML decoder only slightly outperforms a MAP-based iterative decoder at this word error probability, for the block size we used and the turbo code defined for WCDMA.
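One way to state the bounding idea used here and in the LDPC thesis above (my paraphrase of the standard argument, not a formula quoted from either work): with equiprobable codewords, the ML decoder errs whenever some codeword other than the transmitted c has strictly higher likelihood, so any search that exhibits such a codeword certifies an ML error, giving

```latex
P_{\mathrm{ML}}(\mathrm{error}) \;\ge\; \Pr\Big\{ \text{the search finds } \hat{\mathbf{c}} \neq \mathbf{c} \text{ with } p(\mathbf{y} \mid \hat{\mathbf{c}}) > p(\mathbf{y} \mid \mathbf{c}) \Big\},
```

while the word error probability of GDA itself, like that of any implementable decoder, upper-bounds the ML error probability. The two bounds meeting is what allows the claim above that GDA achieves ML performance at a WEP of 10^{-4}.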
Sá, Pedro Miguel Martins de Sousa e. „Image reconstruction algorithm implementation for the easyPET: a didactic and pre-clinical PET system“. Master's thesis, 2017. http://hdl.handle.net/10451/31700.
Full text of the source
Positron Emission Tomography (PET) is a functional imaging technique used to observe biological processes. The concept of emission tomography was introduced during the 1950s, but it was only with the development of radiopharmaceuticals in the 1970s that the technique began to be used in medicine. Over the last 20 years, technological progress has turned PET systems into a highly capable tool for functional imaging. In this period, the appearance of PET-CT systems made up for PET's shortcomings in structural imaging by combining this functional technique with Computed Tomography (CT). The evolution of PET technology was also accompanied by the evolution of the technology for producing radiopharmaceuticals, including radionuclides, and by growing medical knowledge of human biological processes. By combining this technology and knowledge, it became possible to trace molecules with metabolic functions in the various systems of the human body and thus produce a variety of functional images. Given the kind of image PET produces, the technique is commonly associated with the diagnosis of cancer, whose main characteristic is the metabolic deregulation of cells in the organism. In view of the expected increase in cancer incidence in Portugal and in Europe (a national incidence of 444.50 per 100,000 people had already been reached in 2010, according to figures reported by the DGS in 2015), techniques that allow the early diagnosis of these diseases are of great importance. Accordingly, and despite the steady growth of public spending on medical care for cancer diagnosis and treatment, ever more effort and funding is being devoted to speeding up the Research and Development (R&D) process related to this disease. New and better imaging techniques are constantly being developed, enabling earlier and more accurate diagnoses while supporting more effective treatment plans, which in turn lead to more efficient public spending. PET systems fit into this context and, since they provide imaging that is highly sensitive to functional processes, they have spread readily through the medical and academic communities. Systems aimed at human medicine are intended to observe biological processes for medical diagnosis or study. Pre-clinical systems, aimed at small-animal studies, are meant to support research in the preliminary study of diseases that affect humans. Finally, and being the group with the smallest commercial offering, didactic PET systems enable better training of the personnel responsible for the future use of, and R&D related to, this technology. However, the technology used in these three types of systems considerably raises their commercial price and, contrary to what one might expect, the prices of pre-clinical systems do not differ much from those of systems for humans. The high cost of these systems is due to the fact that all the technology associated with them is expensive to produce. In the case of didactic systems, the necessary incentive for their production and purchase simply does not exist. It is in this context that the easyPET appears.
The innovative design, consisting of only two opposing columns of detectors and taking advantage of actuation around two rotation axes, makes this system ideal for entering the market along two lines. The first, with only one detector in each column, is intended to play a didactic role. The second, taking advantage of columns with multiple detectors, was designed to enter the pre-clinical market. In both cases, the main characteristic of the easyPET, and the one that sets it apart from other systems, is its reduced number of detectors, which results in a reduced production cost. By implementing a small number of detectors and, consequently, little electronics, a lower final machine cost can be achieved. It is, however, always necessary to guarantee that the data obtained with such a system correspond to images with the required characteristics, which makes the image reconstruction process very important. The work presented in this thesis aims to implement a two-dimensional image reconstruction method dedicated to the easyPET system. To this end, an iterative statistical algorithm based on Maximum-Likelihood Expectation-Maximization (ML-EM), introduced by Shepp and Vardi in 1982, was considered. Since then it has been widely explored and has given rise to other versions that are very common in PET image reconstruction, such as Ordered-Subset Expectation-Maximization (OS-EM). The chosen algorithm was implemented in the Matlab software. To compute the basic unit of the algorithm, the Line of Response (LOR), the ray-driven method was implemented. To optimize the construction of the system matrix used in this algorithm, geometry symmetries were implemented. This optimization rests on the observation that the geometry of the easyPET system can be divided into quadrants, a single quadrant being able to describe the remaining three. In addition, optimizations were also implemented at the structural level of the Matlab code, made with a view to easier memory access through the use of variables for fast indexing. Two data regularization methods were also implemented: Gaussian filtering between iterations and a median-based root prior. In order to later compare the results obtained with the implemented algorithm, the Filtered Back Projection (FBP) reconstruction method was also implemented. Finally, a user interface was implemented using Matlab's GUIDE application. This interface is meant to serve as a bridge between the didactic easyPET system and the user, so that the user experience is optimized. To lay out the tests of the easyPET system and of the implemented ML-EM algorithm, the NEMA standards, a set of standards whose purpose is to standardize the analysis performed on medical imaging systems, were followed. To this end, data files were acquired and simulated with a point source at 5, 10, 15 and 25 mm from the center of the system's field of view (FOV), using a pair of detectors measuring 2x2x30 mm³. For the analysis of results, the data were reconstructed using the implemented FBP, and the FWHM and FWTM of the reconstructed source were measured.
The same procedure was then applied while reconstructing the data with the ML-EM algorithm, using the Gaussian filter, the MRP, and no data regularization method at all (native). To compare the data regularization methods, the signal-to-noise ratio (SNR) was also measured. The results were obtained for images reconstructed with a pixel of approximately 0.25x0.25 mm², corresponding to images of 230x230 pixels. The first results were obtained to determine at which iteration the reconstructed images begin to stabilize; for the implemented ML-EM algorithm and the type of data used, the algorithm was observed to converge from the 10th iteration onward. The FWHM and FWTM measurements also showed that the experimentally acquired data differ from the results obtained on simulated data. This led to further tests with experimental data, outside the scope of this work; from that point on, only data obtained through Monte Carlo simulation were used, for the convenience of precisely positioning the point source. Next, the data obtained with FBP and with the native ML-EM algorithm were compared: for the former, a FWHM of 1.5x1.5 mm² was measured, while the latter reached values of 1.2x1.2 mm². For the data regularization methods, similar or lower resolution values were measured, and they resulted in an increase in the quality of the source reconstruction, observed through the increase in the measured SNR. The work presented in this thesis demonstrates not only the validation of the proposed reconstruction algorithm but also the good operation and potential of the easyPET system. The results obtained under the NEMA standards show that the system matches the state of the art; moreover, a reconstruction method dedicated to the easyPET makes it possible to optimize the results obtained. As the project within which this work took place moves forward, the three-dimensional pre-clinical easyPET model can be expected to produce even better results. It should be stressed that the didactic easyPET system is in its final phase and that the results obtained are quite satisfactory given the purpose of this system.
The easyPET scanner has an innovative design, comprising only two array columns facing each other, with actuation defined by two rotation axes. Using this design, two approaches have been taken. The first is a didactic PET scanner, in which each array comprises only one detector, meant to be a simple 2-dimensional PET scanner for educational purposes. The second is a pre-clinical scanner, with arrays of multiple detectors, meant to acquire 3-dimensional data. Given the geometry of the system, there is no concern with the effects of not measuring the Depth-of-Interaction (DOI), and a resolution of 1-1.5 mm is expected for the didactic system, improving with the pre-clinical one. The work presented in this thesis deals with 2D image reconstruction for the easyPET scanners. The unconventional nature of the acquisition geometry, the large amount of data to be processed, the complexity of implementing a PET image reconstruction algorithm, and the implementation of data regularization methods, Gaussian filtering and the Median Root Prior (MRP), are addressed in this thesis. The Matlab software was used to implement the ML-EM algorithm. In parallel, several optimizations were implemented to give the algorithm better computational performance; these rely on geometry symmetries and fast-indexing approaches. Moreover, a user interface was created to enhance the user experience with the didactic easyPET system. The validation of the implemented algorithm was performed using Monte Carlo simulated data and experimentally acquired data. The first results obtained indicate that the optimizations implemented in the algorithm successfully reduced the image reconstruction time. On top of that, the system was tested according to the NEMA rules. A comparison was then made between reconstructed images produced using Filtered Back Projection (FBP), the native ML-EM implementation, the ML-EM algorithm with inter-iteration Gaussian filtering, and the ML-EM algorithm implemented with the MRP. This comparison was made through the calculation of FWHM, FWTM, and SNR at different spatial positions. The results reveal a source resolution (FWHM) of approximately 1.5x1.5 mm² in the FOV when using FBP, and 1.2x1.2 mm² for the native ML-EM algorithm. The implemented data regularization methods produced similar or improved spatial resolution results, while improving the source's SNR values. The results obtained show the potential of the easyPET systems. Since the didactic scanner is already in its final stage, the next step will be to further test the pre-clinical system.
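The iteration at the heart of this reconstruction is the standard Shepp-Vardi ML-EM update (a textbook formula, not one quoted from the thesis): with system-matrix elements a_ij (the probability that an emission in pixel j is recorded on LOR i), measured counts y_i, and current image estimate λ^(n),

```latex
\lambda_j^{(n+1)} \;=\; \frac{\lambda_j^{(n)}}{\sum_{i} a_{ij}} \sum_{i} a_{ij} \, \frac{y_i}{\sum_{k} a_{ik}\,\lambda_k^{(n)}},
```

a multiplicative update that preserves nonnegativity and increases the Poisson log-likelihood at every iteration. The inter-iteration Gaussian filtering and the MRP described above are applied on top of this update as regularizers, and the quadrant symmetries reduce the cost of building the a_ij matrix.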
„On local and global influence analysis of latent variable models with ML and Bayesian approaches“. 2004. http://library.cuhk.edu.hk/record=b6073748.
Full text of the source
"September 2004."
Thesis (Ph.D.)--Chinese University of Hong Kong, 2004.
Includes bibliographical references (p. 118-126)
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Mode of access: World Wide Web.
Abstracts in English and Chinese.
Kaur, Rajvir. „A comparative analysis of selected set of natural language processing (NLP) and machine learning (ML) algorithms for clinical coding using clinical classification standards“. Thesis, 2018. http://hdl.handle.net/1959.7/uws:49614.
Vicente, David José Marques. „Distributed Algorithms for Target Localization in Wireless Sensor Networks Using Hybrid Measurements“. Master's thesis, 2017. http://hdl.handle.net/10362/27875.
Rasool, Raihan Ur. „CyberPulse: A Security Framework for Software-Defined Networks“. Thesis, 2020. https://vuir.vu.edu.au/42172/.