Theses on the topic "Generative classifiers"

To see other types of publications on this topic, follow the link: Generative classifiers.

Create an accurate citation in APA, MLA, Chicago, Harvard, and several other styles.

Consult the 43 best theses for your research on the topic "Generative classifiers".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Xue, Jinghao. "Aspects of generative and discriminative classifiers". Thesis, Connect to e-thesis, 2008. http://theses.gla.ac.uk/272/.

Full text
Abstract:
Ph.D. thesis submitted to the Department of Statistics, Faculty of Information and Mathematical Sciences, University of Glasgow, 2008. Includes bibliographical references. Print version also available.
2

Roger-Yun, Soyoung. "Les expressions nominales à classificateurs et les propositions à cas multiples du coréen : recherches sur leur syntaxe interne et mise en évidence de quelques convergences structurales". PhD thesis, Université de la Sorbonne nouvelle - Paris III, 2002. http://tel.archives-ouvertes.fr/tel-00002834.

Full text
Abstract:
This thesis deals with the syntax of classifiers (CL) and of Multiple Case Constructions in Korean. The study essentially adopts Kayne's antisymmetric framework, but also uses certain fundamental concepts of the minimalist framework, such as the Checking of formal features. The first part of the thesis is devoted to the study of CLs and of the internal structure of nominal expressions containing CLs; we show in particular that a parallel syntactic treatment of the nominal and clausal domains is possible in Korean. In the second part, devoted to clausal structure and more specifically to that of Korean Multiple Case Constructions, it is argued that the so-called case markers of Korean are not genuine case markers but functional heads, and that Korean Multiple Case Constructions are obtained through the reiteration of these functional heads, followed by an Attract operation.
3

McClintick, Kyle W. "Training Data Generation Framework For Machine-Learning Based Classifiers". Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/1276.

Full text
Abstract:
In this thesis, we propose a new framework for the generation of training data for machine learning techniques used for classification in communications applications. Machine learning-based signal classifiers do not generalize well when training data does not describe the underlying probability distribution of real signals. The simplest way to accomplish statistical similarity between training and testing data is to synthesize training data passed through a permutation of plausible forms of noise. To accomplish this, a framework is proposed that implements arbitrary channel conditions and baseband signals. A dataset generated using the framework is considered, and is shown to be appropriately sized by having 11% lower entropy than state-of-the-art datasets. Furthermore, unsupervised domain adaptation can allow for powerful generalized training via deep feature transforms on unlabeled evaluation-time signals. A novel Deep Reconstruction-Classification Network (DRCN) application is introduced, which attempts to maintain near-peak signal classification accuracy despite dataset bias, or perturbations on testing data unforeseen in training. Together, feature transforms and diverse training data generated from the proposed framework, teaching a range of plausible noise, can train a deep neural net to classify signals well in many real-world scenarios despite unforeseen perturbations.
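The core move described here, replicating each clean training signal under many random draws of plausible channel impairments, can be illustrated with a short sketch. This is ours, not the thesis's framework: the function name and the particular impairments (carrier-frequency offset plus additive white Gaussian noise) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def impair(iq, snr_db, max_cfo=0.01):
    """Apply one random draw of plausible channel impairments to a baseband IQ vector."""
    # Random carrier-frequency offset (cycles/sample), a typical hardware impairment.
    cfo = rng.uniform(-max_cfo, max_cfo)
    n = np.arange(iq.size)
    iq = iq * np.exp(2j * np.pi * cfo * n)
    # Additive white Gaussian noise at the requested SNR.
    p_sig = np.mean(np.abs(iq) ** 2)
    p_noise = p_sig / (10 ** (snr_db / 10))
    noise = np.sqrt(p_noise / 2) * (rng.standard_normal(iq.size) + 1j * rng.standard_normal(iq.size))
    return iq + noise

# Replicate each clean example under many random channel draws so the training
# distribution covers the impairments expected on real signals at test time.
clean = np.exp(2j * np.pi * 0.05 * np.arange(1024))   # toy tone standing in for a modulated burst
train = np.stack([impair(clean, snr_db=rng.uniform(0, 20)) for _ in range(100)])
```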
4

Guo, Hong Yu. "Multiple classifier combination through ensembles and data generation". Thesis, University of Ottawa (Canada), 2004. http://hdl.handle.net/10393/26648.

Full text
Abstract:
This thesis introduces new approaches, namely the DataBoost and DataBoost-IM algorithms, to extend Boosting algorithms' predictive performance. The DataBoost algorithm is designed to assist Boosting algorithms to avoid over-emphasizing hard examples. In the DataBoost algorithm, new synthetic data with bias information towards hard examples are added to the original training set when training the component classifiers. The DataBoost approach was evaluated against ten data sets, using both decision trees and neural networks as base classifiers. The experiments show promising results, in terms of overall accuracy when compared to a standard benchmarking Boosting algorithm. The DataBoost-IM algorithm is developed to learn from two-class imbalanced data sets. In the DataBoost-IM approach, the class frequencies and the total weights against different classes within the ensemble's training set are rebalanced by adding new synthetic data. The DataBoost-IM method was evaluated, in terms of the F-measures, G-mean and overall accuracy, against seventeen highly and moderately imbalanced data sets using decision trees as base classifiers. (Abstract shortened by UMI.)
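The mechanism described, adding synthetic data biased towards hard examples inside a boosting round, can be conveyed with a schematic sketch. This is a simplification of ours, not the authors' exact DataBoost algorithm: real DataBoost derives synthetic attribute values from the hard examples' value ranges, whereas here we merely jitter the highest-weight points.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

def boost_round_with_synthesis(X, y, weights, n_synth=50):
    """One boosting round that augments the training set around hard examples.

    Schematic only: the hardest (highest-weight) points are duplicated with a
    small random jitter, and a component classifier is fit to the augmented set.
    """
    hard = np.argsort(weights)[-n_synth:]                 # highest-weight = hardest examples
    jitter = rng.normal(scale=0.05, size=(n_synth, X.shape[1]))
    X_aug = np.vstack([X, X[hard] + jitter])              # synthetic neighbors of hard examples
    y_aug = np.concatenate([y, y[hard]])
    return DecisionTreeClassifier(max_depth=3).fit(X_aug, y_aug)
```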
5

Kang, Dae-Ki. "Abstraction, aggregation and recursion for generating accurate and simple classifiers". [Ames, Iowa: Iowa State University], 2006.

Find full text
6

Kimura, Takayuki. "RNA-protein structure classifiers incorporated into second-generation statistical potentials". Thesis, San Jose State University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10241445.

Full text
Abstract:

Computational modeling of RNA-protein interactions remains an important endeavor. However, exclusively all-atom approaches that model RNA-protein interactions via molecular dynamics are often problematic in their application. One possible alternative is the implementation of hierarchical approaches, first efficiently exploring configurational space with a coarse-grained representation of the RNA and protein. Subsequently, the lowest energy set of such coarse-grained models can be used as scaffolds for all-atom placements, a standard method in modeling protein 3D-structure. However, the coarse-grained modeling likely will require improved ribonucleotide-amino acid potentials as applied to coarse-grained structures. As a first step we downloaded 1,345 PDB files and clustered them with PISCES to obtain a non-redundant complex data set. The contacts were divided into nine types with DSSR according to the 3D structure of RNA and then 9 sets of potentials were calculated. The potentials were applied to score fifty thousand poses generated by FTDock for twenty-one standard RNA-protein complexes. The results compare favorably to existing RNA-protein potentials. Future research will optimize and test such combined potentials.
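For background, knowledge-based (statistical) potentials of the kind computed here are typically obtained by inverse Boltzmann statistics over a structure database; a generic form (not necessarily the exact formulation used in this thesis) is

```latex
E_{ij} = -\,k_{\mathrm{B}} T \,\ln \frac{f^{\mathrm{obs}}_{ij}}{f^{\mathrm{ref}}_{ij}}
```

where $f^{\mathrm{obs}}_{ij}$ is the observed frequency of contacts between ribonucleotide type $i$ and amino-acid type $j$ in the non-redundant complex set, and $f^{\mathrm{ref}}_{ij}$ is the frequency expected under a reference state.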

7

Alani, Shayma. "Design of intelligent ensembled classifiers combination methods". Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/12793.

Full text
Abstract:
Classifier ensembling research has been one of the most active areas of machine learning for a long period of time. The main aim of generating combined classifier ensembles is to improve the prediction accuracy in comparison to using an individual classifier. A combined classifier ensemble can improve the prediction results by compensating for the individual classifier weaknesses in certain areas and benefiting from better accuracy of the other ensembles in the same area. In this thesis, different algorithms are proposed for designing classifier ensemble combiners. Existing methods such as averaging, voting, weighted averaging, and optimised weighting do not increase the accuracy of the combiner as much as the proposed advanced methods, such as genetic programming and the coalition method. The different methods are studied in detail and analysed using different databases. The aim is to increase the accuracy of the combiner in comparison to the standard stand-alone classifiers. The proposed methods are based on generating a combiner formula using genetic programming, while the coalition is based on estimating the diversity of the classifiers such that a coalition is generated with better prediction accuracy. Standard accuracy measures are used, namely accuracy, sensitivity, specificity and area under the curve, in addition to training error measures such as the mean square error. The combiner methods are compared empirically with several stand-alone classifiers using neural network algorithms. Different types of neural network topologies are used to generate different models. Experimental results show that the combiner algorithms are superior in creating the most diverse and accurate classifier ensembles. Ensembles of the same models are generated to boost the accuracy of a single classifier type. An ensemble of 10 models with different initial weights is used to improve the accuracy. Experiments show a significant improvement over a single model classifier. Finally, two combining methods are studied, namely the genetic programming and coalition combination methods. The genetic programming algorithm is used to generate a formula for the classifiers' combinations, while the coalition method is based on a simple algorithm that assigns linear combination weights based on consensus theory. Experimental results on the same databases demonstrate the effectiveness of the proposed methods compared to conventional combining methods. The results show that the coalition method is better than genetic programming.
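The baseline combiners named in this abstract (averaging/voting and weighted averaging) are simple enough to sketch; the following is a minimal illustration of ours, not code from the thesis.

```python
import numpy as np

def weighted_average_combiner(probas, weights):
    """Combine per-classifier class-probability matrices by a weighted average.

    probas: list of (n_samples, n_classes) arrays, one per base classifier.
    weights: one nonnegative weight per classifier (normalized below).
    """
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    combined = np.tensordot(w, np.stack(probas), axes=1)   # (n_samples, n_classes)
    return combined.argmax(axis=1)

def majority_vote_combiner(predictions):
    """Plain majority vote over integer hard labels, shape (n_classifiers, n_samples)."""
    preds = np.asarray(predictions)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)
```

The advanced combiners studied in the thesis replace the fixed weights above with, e.g., a formula evolved by genetic programming or coalition weights derived from classifier diversity.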
8

Ding, Zejin. "Diversified Ensemble Classifiers for Highly Imbalanced Data Learning and their Application in Bioinformatics". Digital Archive @ GSU, 2011. http://digitalarchive.gsu.edu/cs_diss/60.

Full text
Abstract:
In this dissertation, the problem of learning from highly imbalanced data is studied. Imbalanced data learning is of great importance and challenge in many real applications. Dealing with a minority class normally needs new concepts, observations and solutions in order to fully understand the underlying complicated models. We try to systematically review and solve this special learning task in this dissertation. We propose a new ensemble learning framework—Diversified Ensemble Classifiers for Imbalanced Data Learning (DECIDL)—based on the advantages of existing ensemble imbalanced learning strategies. Our framework combines three learning techniques: a) ensemble learning, b) artificial example generation, and c) diversity construction by reverse data re-labeling. As a meta-learner, DECIDL utilizes general supervised learning algorithms as base learners to build an ensemble committee. We create a standard benchmark data pool, which contains 30 highly skewed sets with diverse characteristics from different domains, in order to facilitate future research on imbalanced data learning. We use this benchmark pool to evaluate and compare our DECIDL framework with several ensemble learning methods, namely under-bagging, over-bagging, SMOTE-bagging, and AdaBoost. Extensive experiments suggest that our DECIDL framework is comparable with other methods. The data sets, experiments and results provide a valuable knowledge base for future research on imbalanced learning. We develop a simple but effective artificial example generation method for data balancing. Two new methods, DBEG-ensemble and DECIDL-DBEG, are then designed to improve the power of imbalanced learning. Experiments show that these two methods are comparable to the state-of-the-art methods, e.g., GSVM-RU and SMOTE-bagging. Furthermore, we investigate learning on imbalanced data from a new angle—active learning. By combining active learning with the DECIDL framework, we show that the newly designed Active-DECIDL method is very effective for imbalanced learning, suggesting that the DECIDL framework is very robust and flexible. Lastly, we apply the proposed learning methods to a real-world bioinformatics problem—protein methylation prediction. Extensive computational results show that the DECIDL method performs very well on this imbalanced data mining task. Importantly, the experimental results have confirmed our new contributions on this particular data learning problem.
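One of the comparison baselines named above, under-bagging, is easy to sketch; this is our own minimal illustration (not the DECIDL framework), assuming binary labels with 1 as the minority class.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

def under_bagging(X, y, n_estimators=10):
    """Each ensemble member is trained on a class-balanced sample obtained
    by undersampling the majority class (labels assumed in {0, 1}, 1 = minority)."""
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    members = []
    for _ in range(n_estimators):
        idx = np.concatenate([minority, rng.choice(majority, size=minority.size, replace=False)])
        members.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return members

def vote(members, X):
    """Majority vote over the ensemble members' predictions."""
    return (np.mean([m.predict(X) for m in members], axis=0) >= 0.5).astype(int)
```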
9

Svensén, Johan F. M. "GTM: the generative topographic mapping". Thesis, Aston University, 1998. http://publications.aston.ac.uk/1245/.

Full text
Abstract:
This thesis describes the Generative Topographic Mapping (GTM) --- a non-linear latent variable model, intended for modelling continuous, intrinsically low-dimensional probability distributions, embedded in high-dimensional spaces. It can be seen as a non-linear form of principal component analysis or factor analysis. It also provides a principled alternative to the self-organizing map --- a widely established neural network model for unsupervised learning --- resolving many of its associated theoretical problems. An important, potential application of the GTM is visualization of high-dimensional data. Since the GTM is non-linear, the relationship between data and its visual representation may be far from trivial, but a better understanding of this relationship can be gained by computing the so-called magnification factor. In essence, the magnification factor relates the distances between data points, as they appear when visualized, to the actual distances between those data points. There are two principal limitations of the basic GTM model. The computational effort required will grow exponentially with the intrinsic dimensionality of the density model; however, if the intended application is visualization, this will typically not be a problem. The other limitation is the inherent structure of the GTM, which makes it most suitable for modelling moderately curved probability distributions of approximately rectangular shape. When the target distribution is very different from that, the aim of maintaining an 'interpretable' structure, suitable for visualizing data, may come into conflict with the aim of providing a good density model. The fact that the GTM is a probabilistic model means that results from probability theory and statistics can be used to address problems such as model complexity. Furthermore, this framework provides solid ground for extending the GTM to wider contexts than that of this thesis.
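For reference, the GTM's density model in its standard formulation maps a regular grid of latent points $\mathbf{x}_k$ into data space through a basis-function mapping $\mathbf{y}(\mathbf{x}; \mathbf{W}) = \mathbf{W}\boldsymbol{\phi}(\mathbf{x})$ and places an isotropic Gaussian of inverse variance $\beta$ on each image point:

```latex
p(\mathbf{t} \mid \mathbf{W}, \beta) = \frac{1}{K} \sum_{k=1}^{K}
\left( \frac{\beta}{2\pi} \right)^{D/2}
\exp\left( -\frac{\beta}{2} \left\lVert \mathbf{y}(\mathbf{x}_k; \mathbf{W}) - \mathbf{t} \right\rVert^{2} \right)
```

Training the weights $\mathbf{W}$ and $\beta$ by maximum likelihood (via EM) is what gives the GTM its principled, probabilistic character relative to the self-organizing map.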
10

Pyon, Yoon Soo. "Variant Detection Using Next Generation Sequencing Data". Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1347053645.

Full text
11

Brody, Samuel. "Closing the gap in WSD: supervised results with unsupervised methods". Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3981.

Full text
Abstract:
Word-Sense Disambiguation (WSD) holds promise for many NLP applications requiring broad-coverage language understanding, such as summarization (Barzilay and Elhadad, 1997) and question answering (Ramakrishnan et al., 2003). Recent studies have also shown that WSD can benefit machine translation (Vickrey et al., 2005) and information retrieval (Stokoe, 2005). Much work has focused on the computational treatment of sense ambiguity, primarily using data-driven methods. The most accurate WSD systems to date are supervised and rely on the availability of sense-labeled training data. This restriction poses a significant barrier to widespread use of WSD in practice, since such data is extremely expensive to acquire for new languages and domains. Unsupervised WSD holds the key to enabling such applications, as it does not require sense-labeled data. However, unsupervised methods fall far behind supervised ones in terms of accuracy and ease of use. In this thesis we explore the reasons for this, and present solutions to remedy this situation. We hypothesize that one of the main problems with unsupervised WSD is its lack of a standard formulation and general-purpose tools common to supervised methods. As a first step, we examine existing approaches to unsupervised WSD, with the aim of detecting independent principles that can be utilized in a general framework. We investigate ways of leveraging the diversity of existing methods, using ensembles, a common tool in the supervised learning framework. This approach allows us to achieve accuracy beyond that of the individual methods, without need for extensive modification of the underlying systems. Our examination of existing unsupervised approaches highlights the importance of using the predominant sense in case of uncertainty, and the effectiveness of statistical similarity methods as a tool for WSD. However, it also serves to emphasize the need for a way to merge and combine learning elements, and the potential of a supervised-style approach to the problem. Relying on existing methods does not take full advantage of the insights gained from the supervised framework. We therefore present an unsupervised WSD system which circumvents the question of the actual disambiguation method, which is the main source of discrepancy in unsupervised WSD, and deals directly with the data. Our method uses statistical and semantic similarity measures to produce labeled training data in a completely unsupervised fashion. This allows the training and use of any standard supervised classifier for the actual disambiguation. Classifiers trained with our method significantly outperform those using other methods of data generation, and represent a big step in bridging the accuracy gap between supervised and unsupervised methods. Finally, we address a major drawback of classical unsupervised systems – their reliance on a fixed sense inventory and lexical resources. This dependence represents a substantial setback for unsupervised methods in cases where such resources are unavailable. Unfortunately, these are exactly the areas in which unsupervised methods are most needed. Unsupervised sense discrimination, which does not share those restrictions, presents a promising solution to the problem. We therefore develop an unsupervised sense discrimination system. We base our system on a well-studied probabilistic generative model, Latent Dirichlet Allocation (Blei et al., 2003), which has many of the advantages of supervised frameworks. The model's probabilistic nature lends itself to easy combination and extension, and its generative aspect is well suited to linguistic tasks. Our model achieves state-of-the-art performance on the unsupervised sense induction task, while remaining independent of any fixed sense inventory, and thus represents a fully unsupervised, general-purpose WSD tool.
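The central move, producing labeled training data without supervision and then handing it to any standard supervised classifier, can be sketched as follows. The similarity scoring here is a hypothetical stand-in of ours for the thesis's statistical and semantic measures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label(context_vecs, sense_prototypes, margin=0.1):
    """Assign each context vector to its most similar sense prototype, keeping
    only confident cases. Rows are assumed L2-normalized so the dot product is
    cosine similarity; at least two sense prototypes are assumed."""
    sims = context_vecs @ sense_prototypes.T
    best = sims.argmax(axis=1)
    runner_up = np.sort(sims, axis=1)[:, -2]
    keep = sims.max(axis=1) - runner_up >= margin     # require a clear winner
    return context_vecs[keep], best[keep]

# Any off-the-shelf supervised classifier can then be trained on the result:
# X_pl, y_pl = pseudo_label(X_contexts, prototypes)
# clf = LogisticRegression(max_iter=1000).fit(X_pl, y_pl)
```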
12

Lee, Chang Hee. "Synaesthesia materialisation: approaches to applying synaesthesia as a provocation for generating creative ideas within the context of design". Thesis, Royal College of Art, 2019. http://researchonline.rca.ac.uk/3756/.

Full text
Abstract:
For the past three decades, research on the topic of synaesthesia has been largely dominated by the fields of psychology and neuroscience, and has focused on scientifically investigating its experience and causes to define the phenomenon of synaesthesia. However, the scientific research on this subject is now enquiring into potential future implementations, asking how this subject may be useful to wider audiences, and attempting to expand its research spectrum beyond mere scientific analysis. This PhD research in design by practice attempts to contribute to and expand this scope: it shares a creative interpretation of synaesthesia research and questions its existing boundary. Past synaesthesia research in design has largely focused on the possibility and potential of sensory optimisation and cross-modal sensory interaction between users and artefacts. This research, however, investigates the provocative properties and characteristics of synaesthesia and shares different approaches to its application for generating creative ideas in design. This PhD research presents nine projects, consisting of approaches to synaesthesia application, toolkits and validations. Synaesthesia is one of those rare subjects where science and creative contexts intersect and nurture each other. Through this PhD research, readers may gain insight into how a designer tries to discover new value within this interdisciplinary context. This research contributes three types of new knowledge and new perspectives. Firstly, it provides a new interpretation of and awareness in synaesthesia research, and expands its research boundaries, moving from analysis-based research to application-based research. Secondly, it outlines three approaches, a range of themes and toolkits for using synaesthesia as a provocation in generating creative ideas in the design process. Thirdly, it identifies the differences between previous synaesthesia application research and current application research within the context of design. Research on the topic of synaesthesia has grown significantly since the technological innovations (e.g. fMRI brain scanning and neuroimaging) of the early 1990s. However, this research was somewhat limited to scientific analysis aimed at understanding the nature of the phenomenon. This research paradigm and scientific focus have now shifted towards discovering the potential usefulness of synaesthesia through different disciplines and channels. How can we apply the provocative qualities of synaesthesia within the context of design? This research journey begins by investigating this foundational question from a designer's point of view.
13

Libosvár, Jakub. "Generování modelů domů pro Open Street Mapy". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236184.

Full text
Abstract:
This thesis deals with the procedural generation of building models based on a given pattern. The community project OpenStreetMap is used to obtain the datasets that provide the buildings' footprint patterns. A brief survey of classifiers and of formal grammars for modelling is presented. The practical part of this thesis is the design of an estate classifier and of an algorithm for building generation, including the algorithm's implementation. The output 3D meshes are rendered in real time using OpenGL.
14

Alam, Sameer (Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW). "Evolving complexity towards risk: a massive scenario generation approach for evaluating advanced air traffic management concepts". Awarded by: University of New South Wales - Australian Defence Force Academy, 2008. http://handle.unsw.edu.au/1959.4/38966.

Full text
Abstract:
Present-day air traffic control is reaching its operational limits, and accommodating future traffic growth will be a challenging task for air traffic service providers and airline operators. Free Flight is a proposed transition from a highly-structured and centrally-controlled air traffic system to a self-optimized and highly-distributed system. In Free Flight, pilots will have the flexibility of real-time trajectory planning and dynamic route optimization given airspace constraints (traffic, weather etc.). A variety of advanced air traffic management (ATM) concepts are proposed as enabling technologies for the realization of Free Flight. Since these concepts can be exposed to unforeseen and challenging scenarios in Free Flight, they need to be validated and evaluated in order to implement the most effective systems in the field. Evaluation of advanced ATM concepts is a challenging task due to the limitations of existing scenario generation methodologies and the limited availability of a common platform (air traffic simulator) where diverse ATM concepts can be modeled and evaluated. Their rigorous evaluation on safety metrics, in a variety of complex scenarios, can provide an insight into their performance, which can help improve upon them while developing new ones. In this thesis, I propose a non-proprietary, non-commercial air traffic simulation system, with a novel representation of airspace, which can prototype advanced ATM concepts such as conflict detection and resolution, airborne weather avoidance and cockpit display of traffic information. I then propose a novel evolutionary computation methodology to algorithmically generate a massive number of conflict scenarios of increasing complexity in order to evaluate conflict detection algorithms. I illustrate the methodology in detail by quantitative evaluation of three conflict detection algorithms from the literature on safety metrics. I then propose the use of data mining techniques for the discovery of interesting relationships that may exist implicitly in the algorithms' performance data. The data mining techniques formulate the conflict characteristics which may lead to algorithm failure using if-then rules. Using the rule sets for each algorithm, I propose an ensemble of conflict detection algorithms which uses a switch mechanism to direct subsequent conflict probes to an algorithm that is less vulnerable to failure in a given conflict scenario. The objective is to form a predictive model of an algorithm's vulnerability which can then be included in an ensemble that minimizes the overall vulnerability of the system. In summary, the contributions of this thesis are: 1. A non-proprietary, non-commercial air traffic simulation system with a novel representation of airspace for efficient modeling of advanced ATM concepts. 2. An ant-based dynamic weather avoidance algorithm for traffic-constrained en-route airspace. 3. A novel representation of 4D air traffic scenarios that allows the use of an evolutionary computation methodology to evolve complex conflict scenarios for the evaluation of conflict detection algorithms. 4. An evaluation framework where scenario generation, scenario evaluation and scenario evolution processes can be carried out in an integrated manner for rigorous evaluation of advanced ATM concepts. 5. A methodology for forming an intelligent ensemble of conflict detection algorithms by data mining the scenario space.
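The evolutionary scenario-generation idea can be conveyed with a toy loop: scenarios the detection algorithm handles badly are selected and mutated so the population drifts towards high-complexity failure cases. This is our own schematic, with a hypothetical `miss_rate` evaluation function, not the thesis's methodology.

```python
import numpy as np

rng = np.random.default_rng(3)

def evolve_scenarios(miss_rate, encoding_dim=8, pop=40, generations=30):
    """Evolve vector-encoded conflict scenarios that maximize the failure
    score of the conflict-detection algorithm under test."""
    population = rng.random((pop, encoding_dim))
    for _ in range(generations):
        fitness = np.array([miss_rate(s) for s in population])
        survivors = population[np.argsort(fitness)[-pop // 2:]]      # hardest scenarios survive
        mutants = np.clip(survivors + rng.normal(scale=0.1, size=survivors.shape), 0, 1)
        population = np.concatenate([survivors, mutants])
    return population   # a bank of high-complexity test scenarios
```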
15

Mugtussids, Iossif B. "Flight Data Processing Techniques to Identify Unusual Events". Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/28095.

Full text
Abstract:
Modern aircraft are capable of recording hundreds of parameters during flight. This fact not only facilitates the investigation of an accident or a serious incident, but also provides the opportunity to use the recorded data to predict future aircraft behavior. It is believed that, by analyzing the recorded data, one can identify precursors to hazardous behavior and develop procedures to mitigate the problems before they actually occur. Because of the enormous amount of data collected during each flight, it becomes necessary to identify the segments of data that contain useful information. The objective is to distinguish between typical data points, that are present in the majority of flights, and unusual data points that can be only found in a few flights. The distinction between typical and unusual data points is achieved by using classification procedures. In this dissertation, the application of classification procedures to flight data is investigated. It is proposed to use a Bayesian classifier that tries to identify the flight from which a particular data point came. If the flight from which the data point came is identified with a high level of confidence, then the conclusion that the data point is unusual within the investigated flights can be made. The Bayesian classifier uses the overall and conditional probability density functions together with a priori probabilities to make a decision. Estimating probability density functions is a difficult task in multiple dimensions. Because many of the recorded signals (features) are redundant or highly correlated or are very similar in every flight, feature selection techniques are applied to identify those signals that contain the most discriminatory power. In the limited amount of data available to this research, twenty five features were identified as the set exhibiting the best discriminatory power. Additionally, the number of signals is reduced by applying feature generation techniques to similar signals. To make the approach applicable in practice, when many flights are considered, a very efficient and fast sequential data clustering algorithm is proposed. The order in which the samples are presented to the algorithm is fixed according to the probability density function value. Accuracy and reduction level are controlled using two scalar parameters: a distance threshold value and a maximum compactness factor.
Ph. D.
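For reference, the Bayesian decision rule underlying the classifier described above assigns a data point $\mathbf{x}$ to the flight $F_i$ with the highest posterior probability, combining the class-conditional densities with the a priori probabilities:

```latex
\hat{F}(\mathbf{x}) = \arg\max_{i} \; p(F_i \mid \mathbf{x})
= \arg\max_{i} \; \frac{p(\mathbf{x} \mid F_i)\, P(F_i)}{\sum_{j} p(\mathbf{x} \mid F_j)\, P(F_j)}
```

By this logic, a data point assigned to a single flight with high confidence is unusual relative to the other flights considered.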
16

Thanjavur, Bhaaskar Kiran Vishal. "Automatic generation of hardware Tree Classifiers". Thesis, 2017. https://hdl.handle.net/2144/23688.

Full text
Abstract:
Machine learning is growing in popularity and spreading across different fields for various applications. Due to this trend, machine learning algorithms are run on different hardware platforms and are being experimented with to obtain high test accuracy and throughput. FPGAs are a well-suited hardware platform for machine learning because of their re-programmability and lower power consumption. Programming FPGAs for machine learning algorithms requires substantial engineering time and effort compared to software implementation. We propose a software-assisted design flow to program FPGAs for machine learning algorithms using our hardware library. The hardware library is highly parameterized and accommodates tree classifiers. As of now, our library consists of the components required to implement decision trees and random forests. The whole automation is wrapped in a Python script which takes you from the first step of having a dataset and design choices to the last step of having hardware description code for the trained machine learning model.
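The flavor of such an automation script can be conveyed with a short sketch that walks a trained decision tree and emits nested comparator logic as Verilog-like text. This is illustrative only; the emitted pseudo-HDL and the helper function are ours, not the thesis's library.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
t = DecisionTreeClassifier(max_depth=3).fit(X, y).tree_

def emit(node=0, indent="  "):
    """Recursively emit one comparator per internal tree node."""
    if t.children_left[node] == -1:                      # leaf node
        return f"{indent}out = {int(t.value[node].argmax())};\n"
    s = f"{indent}if (x[{t.feature[node]}] <= {t.threshold[node]:.3f}) begin\n"
    s += emit(t.children_left[node], indent + "  ")
    s += f"{indent}end else begin\n"
    s += emit(t.children_right[node], indent + "  ")
    s += f"{indent}end\n"
    return s

print(emit())
```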
17

Correia, João Nuno Gonçalves Costa Cavaleiro. "Evolutionary Computation for Classifier Assessment and Improvement". Doctoral thesis, 2018. http://hdl.handle.net/10316/81287.

Full text
Abstract:
Thesis submitted to the University of Coimbra in partial fulfilment of the requirements for the Doctoral Program in Information Science and Technology
Typical Machine Learning (ML) approaches rely on a dataset and a model to solve problems. For most problems, optimisation of ML approaches is crucial to attain competitive performances. Most of the effort goes towards optimising the model by exploring new algorithms and tuning the parameters. Nevertheless, the dataset is also a key part of ML performance. Gathering, constructing and optimising a representative dataset is a hard task and a time-consuming endeavour, with no well-established guidelines to follow. In this thesis, we attest the use of Evolutionary Computation (EC) to assess and improve classifiers via synthesis of new instances. An analysis of the state of the art on dataset construction is performed. The quality of the dataset is tied to the availability of data, which in most cases is hard to control. A thorough analysis is made of Instance Selection and Instance Generation, which sheds light on relevant points for the development of our framework. The Evolutionary Framework for Classifier Assessment and Improvement (EFECTIVE) is introduced and explored. The key parts of the framework are identified: the Classifier System (CS) module, which holds the ML model that is going to be assessed and improved; the EC module, responsible for generating the new instances using the CS module for fitness assignment; and the Supervisor, a module responsible for managing the instances that are generated. The approach comes together in an iterative process of automatic assessment and improvement of classifiers. In a first phase, EFECTIVE is tested as a generator, creating instances of a particular class. Without loss of generality, we apply the framework in the domain of image generation. The problem that motivated the approach is presented first: frontal face generation. In this case, the framework relies on the combination of an EC engine and a CS module, i.e. a frontal face detector, to generate images of frontal faces. The results were revealing in two different ways. On the one hand, the approach was able to generate images that from a subjective standpoint resemble faces and are classified as such by the classifier. On the other hand, most of the images did not resemble faces, although they were classified as such by the classifier module. Based on the results, we extended the approach to generate other types of object, attaining similar results. We also combined several classifiers to study the evolution of ambiguous images, i.e. images that induce multistable perception. Overall, the results suggest that the framework is viable as a generator of instances and also that these instances are often misclassified by the CS module. Building on these results, in a second phase, a study of EFECTIVE for improving the performance of classifiers is performed. The core idea is to use the evolved instances that are generated by the EC engine to augment the training dataset. In this phase, the framework uses the Supervisor module to select and filter the instances that will be added to the dataset. The retraining of the classifier with these instances completes an iteration of the framework. We tested this pipeline in a face detection problem, evolving instances to: (i) expand the negative dataset; (ii) expand the positive dataset; and (iii) expand both datasets in the same iteration.
Overall, the results show that: expanding the negative dataset, by adding misclassified instances, reduces the number of false alarms; expanding the positive dataset increases the number of hits; expanding positive and negative datasets allows the simultaneous reduction of false alarms and increase of hits. After demonstrating the adequacy of EFECTIVE in face detection, we tested the framework in a Computational Creativity context to create an image generation system that promotes style change, obtaining results that further demonstrate the potential of the framework.
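The assess-and-improve loop described here can be reduced to a schematic stand-in for the EC engine: evolve inputs towards high classifier confidence, then feed the misclassified ones back into training. This is our own simplification, not EFECTIVE's actual engine; `score` is a hypothetical confidence function.

```python
import numpy as np

rng = np.random.default_rng(4)

def evolve_instances(score, shape=(16, 16), pop=32, generations=50):
    """Evolve inputs in [0, 1] that maximize a classifier's confidence score."""
    population = rng.random((pop, *shape))
    for _ in range(generations):
        fitness = np.array([score(x) for x in population])
        parents = population[np.argsort(fitness)[-pop // 2:]]     # keep the best half
        children = np.clip(parents + rng.normal(scale=0.05, size=parents.shape), 0, 1)
        population = np.concatenate([parents, children])
    return population

# Evolved instances that the classifier accepts but a human would reject are
# exactly the misclassifications that, added to the negative training set,
# reduce false alarms on retraining.
```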
18

Cumbo, Chiara, Pasquale Rullo and Annamaria Canino. "A technique for automatic generation of rule-based text classifiers exploiting negative information". Thesis, 2014. http://hdl.handle.net/10955/499.

Full text
19

Wang, Chi-Cheng, and 王啟誠. "A generation of fuzzy classifier directly from numerical data based on hypercube region". Thesis, 2002. http://ndltd.ncl.edu.tw/handle/84843856344367061011.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
Academic year 90 (ROC calendar)
This thesis proposes a new method for extracting fuzzy rules directly from numerical data for pattern classification. First, we represent the region where data of a class exist by an activation hypercube, and define the overlapping region of the activation hypercubes by an inhibition hypercube, which inhibits the existence of data of that class. Then, we generate dynamic clusters for the data that fall in the inhibition hypercube. Our fuzzy classifier is composed of fuzzy rules described by these hypercubes. Finally, some examples are given to demonstrate the performance and validity of the algorithm.
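The geometric test behind such hypercube rules is simple to state; the following is a minimal simplification of ours, omitting membership degrees and the dynamic clustering step.

```python
import numpy as np

def in_hypercube(x, lo, hi):
    """True if point x lies inside the axis-aligned hypercube [lo, hi]."""
    return bool(np.all((lo <= x) & (x <= hi)))

def classify(x, activation, inhibition):
    """Fire a class's rule when x is inside its activation hypercube but
    outside the associated inhibition (overlap) hypercube, if one exists."""
    for label, (act_lo, act_hi) in activation.items():
        if in_hypercube(x, act_lo, act_hi):
            inh = inhibition.get(label)
            if inh is None or not in_hypercube(x, *inh):
                return label
    return None   # no rule fired; a fallback (e.g., nearest hypercube) would apply
```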
20

Ding, Zejin. "Diversified Ensemble Classifiers for Highly Imbalanced Data Learning and their Application in Bioinformatics". 2011. http://scholarworks.gsu.edu/cs_diss/60.

Full text
Abstract:
Same dissertation as entry 8 above; the abstract is identical to the one given there.
21

Silva, Gabriel Augusto Santos. "Data augmentation and deep classification with generative adversarial networks". Master's thesis, 2021. http://hdl.handle.net/10773/32283.

Full text
Abstract:
Machine learning has seen many advances in recent years. One type of model that has evolved a lot recently is the Generative Adversarial Network (GAN). These models have the ability to create fake data that resembles the data on which they were trained. Interest in these models has been growing ever since their creation in 2014. The ability to create fake data has also been found to be quite useful, especially in data-starved areas like medical imaging. GANs have been used, with positive results, in areas like these to increase the size of the datasets available, as a way to improve the quality of classifiers. This dissertation presents a study with a specific type of GAN, the Auxiliary Classifier GAN (AC-GAN), to understand whether there may be new ways in which GANs can improve classification tasks. For this, a three-part experiment was designed, with each part denominated a Scenario. In Scenario 1 a standalone classifier was trained; in Scenario 2 that same classifier was trained after data augmentation was done with a GAN; and, finally, in Scenario 3 an AC-GAN was used instead of the classifier. Two distinct problems were considered. The first was the CIFAR-10 problem, which is well known and well structured, and quite often used as a benchmark in GAN-related work. The second problem was a skin lesion one. This served two purposes: significantly increasing the difficulty of the problem at hand, and bringing the work closer to possibly the biggest practical usage of GANs, which has been data augmentation for medical imaging problems. The models developed were based on the original version of the AC-GAN and on the BigGAN, which, when presented, was the best-performing GAN known, able to produce high-quality images at resolutions of up to 512x512. Adapting the BigGAN into an AC-GAN resulted in the best-performing known AC-GAN on the CIFAR-10 dataset. The study made in this dissertation can serve as a solid backbone for further studies on this matter, since the results obtained here strongly suggest that the use of AC-GANs can be an effective way to achieve superior classifiers.
Master's in Computer and Telematics Engineering
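The Scenario 3 idea, using an AC-GAN so that the discriminator itself serves as the classifier, can be sketched as follows. This is an illustrative PyTorch simplification of ours (the image dimension assumes CIFAR-10-sized inputs), not the dissertation's models.

```python
import torch
import torch.nn as nn

class ACDiscriminator(nn.Module):
    """AC-GAN-style discriminator with two heads: a real/fake score and
    auxiliary class logits, so the same network discriminates and classifies."""
    def __init__(self, n_classes=10, img_dim=3 * 32 * 32):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Flatten(), nn.Linear(img_dim, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 256), nn.LeakyReLU(0.2),
        )
        self.adv_head = nn.Linear(256, 1)           # real vs. fake score
        self.cls_head = nn.Linear(256, n_classes)   # auxiliary class logits

    def forward(self, x):
        h = self.trunk(x)
        return self.adv_head(h), self.cls_head(h)

# Training combines both objectives, e.g.
#   loss = bce(adv_logit, real_fake_target) + ce(cls_logits, class_target),
# which is why the trained discriminator can stand in for a classifier.
```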
22

Dattagupta, Samrat Jayanta. "A performance comparison of oversampling methods for data generation in imbalanced learning tasks". Master's thesis, 2018. http://hdl.handle.net/10362/31307.

Full text
Abstract:
Dissertation presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Marketing Research and CRM
The class imbalance problem is one of the most fundamental challenges faced by the machine learning community. The imbalance refers to the number of instances in the class of interest being relatively low compared to the rest of the data. Sampling is a common technique for dealing with this problem, and a number of over-sampling approaches have been applied in an attempt to balance the classes. This study provides an overview of the issue of class imbalance and examines some common oversampling approaches for dealing with this problem. In order to illustrate the differences, an experiment is conducted using multiple simulated data sets to compare the performance of these oversampling methods on different classifiers based on various evaluation criteria. In addition, the effect of different parameters, such as the number of features and the imbalance ratio, on classifier performance is also evaluated.
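The best-known family of oversampling methods compared in studies like this one interpolates synthetic minority examples between nearest neighbors, as in SMOTE; a minimal sketch of ours, assuming more than k minority examples:

```python
import numpy as np

rng = np.random.default_rng(5)

def smote_like(X_min, n_new, k=5):
    """SMOTE-style sketch: each synthetic point is interpolated between a
    minority example and one of its k nearest minority-class neighbors."""
    dists = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)                 # exclude self-neighbors
    neighbors = np.argsort(dists, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = neighbors[i, rng.integers(k)]
        lam = rng.random()                          # interpolation coefficient
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.asarray(synthetic)
```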
23

Iovanac, Nicolae Christophe. "Generative, Predictive, and Reactive Models for Data Scarce Problems in Chemical Engineering". Thesis, 2021.

Find full text
Abstract:
Data scarcity is intrinsic to many problems in chemical engineering due to physical constraints or cost. This challenge is acute in chemical and materials design applications, where a lack of data is the norm when trying to develop something new for an emerging application. Addressing novel chemical design under these scarcity constraints takes one of two routes: the traditional forward approach, where properties are predicted based on chemical structure, and the recent inverse approach, where structures are predicted based on required properties. Statistical methods such as machine learning (ML) could greatly accelerate chemical design under both frameworks; however, in contrast to the modeling of continuous data types, molecular prediction has many unique obstacles (e.g., spatial and causal relationships, featurization difficulties) that require further ML methods development. Despite these challenges, this work demonstrates how transfer learning and active learning strategies can be used to create successful chemical ML models in data scarce situations.
Transfer learning is a domain of machine learning under which information learned in solving one task is transferred to help in another, more difficult task. Consider the case of a forward design problem involving the search for a molecule with a particular property target with limited existing data, a situation not typically amenable to ML. In these situations, there are often correlated properties that are computationally accessible. As all chemical properties are fundamentally tied to the underlying chemical topology, and because related properties arise due to related moieties, the information contained in the correlated property can be leveraged during model training to help improve the prediction of the data scarce property. Transfer learning is thus a favorable strategy for facilitating high throughput characterization of low-data design spaces.
Generative chemical models invert the structure-function paradigm and instead directly suggest new chemical structures that should display the desired application properties. This inversion process is fraught with difficulties but can be improved by training these models with strategically selected chemical information. Structural information contained within this chemical property data is thus transferred to support the generation of new, feasible compounds. Moreover, the transfer learning approach helps ensure that the proposed structures exhibit the specified property targets. Recent extensions also utilize thermodynamic reaction data to help promote the synthesizability of suggested compounds. These transfer learning strategies are well-suited for explorative scenarios where the property values being sought are well outside the range of available training data.
There are situations where property data is so limited that obtaining additional training data is unavoidable. By improving both the predictive and generative qualities of chemical ML models, a fully closed-loop computational search can be conducted using active learning. New molecules in underrepresented property spaces may be iteratively generated by the network, characterized by the network, and used for retraining the network. This allows the model to gradually learn the unknown chemistries required to explore the target regions of chemical space by actively suggesting the new training data it needs. By utilizing active learning, the create-test-refine pathway can be addressed purely in silico. This approach is particularly suitable for multi-target chemical design, where the high dimensionality of the desired property targets exacerbates data scarcity concerns.
The techniques presented herein can be used to improve both predictive and generative performance of chemical ML models. Transfer learning is demonstrated as a powerful technique for improving the predictive performance of chemical models in situations where a correlated property can be leveraged alongside scarce experimental or computational properties. Inverse design may also be facilitated through the use of transfer learning, where property values can be connected with stable structural features to generate new compounds with targeted properties beyond those observed in the training data. Thus, when the necessary chemical structures are not known, generative networks can directly propose them based on function-structure relationships learned from domain data, and this domain data can even be generated and characterized by the model itself for closed-loop chemical searches in an active learning framework. With recent extensions, these models are compelling techniques for looking at chemical reactions and other data types beyond the individual molecule. Furthermore, the approaches are not limited by choice of model architecture or chemical representation and are expected to be helpful in a variety of data scarce chemical applications.
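The closed-loop, active-learning strategy described above can be summarized in a short schematic; `model`, `generate`, and `characterize` are hypothetical placeholders of ours, not the thesis's components.

```python
def active_design_loop(model, generate, characterize, rounds=5):
    """Closed-loop sketch: the generative model proposes candidates in
    underrepresented property regions, a (typically physics-based) calculation
    characterizes them, and the model is retrained on the growing dataset."""
    dataset = []
    for _ in range(rounds):
        candidates = generate(model)                       # model suggests new molecules
        labels = [characterize(c) for c in candidates]     # e.g., a computed property
        dataset.extend(zip(candidates, labels))
        model = model.retrain(dataset)                     # fold the new data back in
    return model
```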
24

Zhang, Bo. "A Design Paradigm for DC Generation System". Thesis, 2020.

Find full text
Abstract:
The design of a dc generation system is posed as a multi-objective optimization problem which simultaneously designs the generator and the power converter. The proposed design methodology captures the interaction between various system component models and utilizes the system steady state analysis, stability analysis, and disturbance rejection analysis. System mass and power loss are considered as the optimization metrics and minimized. The methodology is demonstrated through the design of a notional dc generation system which contains a Permanent Magnet Synchronous Machine (PMSM), passive rectifier, and a dc-dc converter. To this end, a high fidelity PMSM model, passive rectifier model, semiconductor model and passive component model are developed. The output of optimization is a set of designs forming a Pareto-optimal front. Based on the requirements and the application, a design can be chosen from this set of designs. The methodology is applied to SiC based dc generation system and Si based dc generation system to quantify the advantage of Wide Bandgap (WBG) devices. A prototype SiC based dc generation system is constructed and tested at steady state. Finally a thermal equivalent circuit (TEC) based PMSM thermal model is included in the design paradigm to quantify the impact of the PMSM’s thermal performance to the system design.
25

Eikmeier, Nicole E. "Spectral Properties and Generation of Realistic Networks". Thesis, 2019.

Find full text
Abstract:
Picture the life of a modern person in the western world: They wake up in the morning and check their social networking sites; they drive to work on roads that connect cities to each other; they make phone calls, send emails and messages to colleagues, friends, and family around the world; they use electricity flowing through power lines; they browse the Internet, searching for information. All of these typical daily activities rely on the structure of networks. A network, in this case, is a set of nodes (people, web pages, etc.) connected by edges (physical connections, collaborations, etc.). The term graph is sometimes used to represent a more abstract structure, but here we use the terms graph and network interchangeably. The field of network analysis concerns studying and understanding networks in order to solve problems in the world around us. Graph models are used in conjunction with the study of real-world networks. They are used to study how well an algorithm may do on a real-world network, and for testing properties that may in turn produce faster algorithms. The first piece of this dissertation is an experimental study which explores features of real data, specifically power-law distributions in degrees and spectra. In addition to a comparison between features of real data and existing results in the literature, this study resulted in the hypothesis that power-law structure in the spectra of real-world networks is more reliable than that in the degrees. The theoretical contributions of this dissertation are focused primarily on generating realistic networks through existing and novel graph models. The two graph models presented are called HyperKron and the Triangle Generalized Preferential Attachment model. Both models incorporate higher-order structure, leading to more sophisticated properties not examined in traditional models. We use the second of our models to further validate the hypothesis on power-laws in the spectra. Due to the structure of our model, we show that the power-law in the spectra is more resilient to sub-sampling. This gives some explanation for why we see power-laws more frequently in the spectra in real-world data.
Styles APA, Harvard, Vancouver, ISO, etc.
26

(7027607), Zhengtian Song. « Second Harmonic Generation Microscopy and Raman Microscopy of Pharmaceutical Materials ». Thesis, 2019.

Trouver le texte intégral
Résumé :

Second harmonic generation (SHG) microscopy and Raman microscopy were used for qualitative and quantitative analysis of pharmaceutical materials. Prototype instruments, sampling strategies and data analysis algorithms were developed to achieve pharmaceutical materials analysis with low limits of detection and short measurement times.

Manufacturing an amorphous solid dispersion (ASD), in which an amorphous active pharmaceutical ingredient (API) is dispersed within a polymer matrix, is an effective approach to improve the solubility and bioavailability of a drug. However, since ASDs are generally metastable materials, they can often transform to produce crystalline API with higher thermodynamic stability. Analytical methods with low limits of detection for crystalline APIs were used to assess the stability of ASDs. With its high selectivity to noncentrosymmetric crystals, SHG microscopy was demonstrated as an analytical tool with a limit of detection of 10 ppm for ritonavir Form II crystals. SHG microscopy was employed for accelerated stability testing of ASDs, providing a four-decade dynamic range of crystallinity for kinetic modeling. An established model was validated by investigating nucleation and crystal growth based on SHG images. To achieve in situ accelerated stability testing, a controlled environment for in situ stability testing (CEiST) was designed and built to provide elevated temperature and humidity, compatible with a commercial SHG microscope based on our research prototype. The combination of CEiST and SHG microscopy enabled assessment of individual crystal growth rates by single-particle tracking, and of nucleation rates for individual fields of view with low Poisson noise. In addition, SHG microscopy coupled with CEiST enabled the study of the heterogeneity of crystallization kinetics within pharmaceutical materials.
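Kinetic modeling over a multi-decade crystallinity range is often done with an Avrami (JMAK) fit; the sketch below fits one to synthetic SHG-derived crystallinity fractions, which are invented for illustration rather than taken from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def jmak(t, k, n):
    """Avrami (JMAK) crystallization kinetics: fraction crystallized vs time."""
    return 1.0 - np.exp(-(k * t) ** n)

# Hypothetical crystallinity fractions over time [h], spanning ~4 decades.
t = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)
x = np.array([4e-4, 1.6e-3, 1.0e-2, 3.9e-2, 0.148, 0.632, 0.982])

(k, n), _ = curve_fit(jmak, t, x, p0=[0.01, 1.5])
print(f"rate constant k = {k:.3g} 1/h, Avrami exponent n = {n:.2f}")
```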

Polymorphism of APIs plays an important role in drug formulation development. Different polymorphs of an identical API may exhibit different physicochemical properties, e.g., solubility, stability, and bioavailability, owing to their crystal structures. Moreover, polymorph transitions may take place during manufacturing and storage. Therefore, high-speed analytical methods for polymorph characterization, which can provide real-time feedback on polymorphic transitions, have broad applications in pharmaceutical materials characterization. Raman spectroscopy is able to determine API polymorphism but is hampered by long measurement times. In this study, two high-speed analytical methods were developed to characterize API polymorphs. The first is SHG-microscopy-guided Raman spectroscopy, which achieved a speed of 10 ms/particle for clopidogrel bisulfate: initial classification of the two polymorphs was based on SHG images, followed by acquisition of Raman spectra at the selected positions to determine the API crystal form. The second approach implements dynamic sampling in confocal Raman microscopy to accelerate Raman image acquisition six-fold: instead of raster scanning, the dynamic sampling algorithm acquires Raman spectra at the most informative locations. The reconstructed Raman image of pharmaceutical materials showed <0.5% loss of image quality at a 15.8% sampling rate.

Styles APA, Harvard, Vancouver, ISO, etc.
27

(8772923), Chinyi Chen. « Quantum phenomena for next generation computing ». Thesis, 2020.

Trouver le texte intégral
Résumé :
With transistor dimensions scaling down to a few atoms, quantum phenomena like quantum tunneling and entanglement will dictate the operation and performance of the next generation of electronic devices in the post-CMOS era. While quantum tunneling limits the scaling of the conventional transistor, the Tunneling Field Effect Transistor (TFET) employs band-to-band tunneling for device operation. This mechanism can reduce the sub-threshold swing (S.S.) beyond the Boltzmann limit, which is fundamentally 60 mV/dec in a conventional Si-based metal-oxide-semiconductor field-effect transistor (MOSFET). A smaller S.S. allows TFET operation at a lower supply voltage and, therefore, at lower power than the conventional Si-based MOSFET.
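The quoted 60 mV/dec figure is the Boltzmann limit ln(10)*kT/q; a quick check in Python:

```python
import numpy as np

k_B, q = 1.380649e-23, 1.602176634e-19   # J/K, C

def ss_boltzmann_limit(T=300.0):
    """Minimum subthreshold swing of a conventional MOSFET: ln(10)*kT/q, in mV/decade."""
    return np.log(10) * k_B * T / q * 1e3

print(f"S.S. limit at 300 K: {ss_boltzmann_limit():.1f} mV/dec")  # ~59.6 mV/dec
```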

However, the low transmission probability of the band-to-band tunneling mechanism limits the ON-current of a TFET. This can be improved by reducing the body thickness of the device, i.e., using 2-Dimensional (2D) materials, or by utilizing heterojunction designs. In this thesis, two promising methods are proposed to increase the ON-current: one for 2D-material TFETs, and another for III-V heterojunction TFETs.

Maximizing the ON-current in a 2D-material TFET by determining an optimum channel thickness, using compact models, is presented. A compact model is derived from rigorous atomistic quantum transport simulations. A new doping profile is proposed for the III-V triple heterojunction TFET to achieve a high ON-current; the optimized ON-current is 325 µA/µm at a supply voltage of 0.3 V. The device design is optimized by atomistic quantum transport simulations for a body thickness of 12 nm, which is experimentally feasible.
However, increasing the device's body thickness increases the atomistic quantum transport simulation time. The simulation of a device with a body thickness of over 12 nm is computationally intensive. Therefore, approximate methods like the mode-space approach are employed to reduce the simulation time. In this thesis, the development of the mode-space approximation in modeling the triple heterojunction TFET is also documented.

In addition to the TFETs, quantum computing is an emerging field that utilizes quantum phenomena to facilitate information processing. An extra chapter is devoted to the electronic structure calculations of the Si:P delta-doped layer, using the empirical tight-binding method. The calculations agree with angle-resolved photoemission spectroscopy (ARPES) measurements. The Si:P delta-doped layer is extensively used as contacts in the Phosphorus donor-based quantum computing systems. Understanding its electronic structure paves the way towards the scaling of Phosphorus donor-based quantum computing devices in the future.
Styles APA, Harvard, Vancouver, ISO, etc.
28

(11142147), Zachary Craig Schreiber. « Investigation of Transparent Photovoltaic Vehicle Integration ». Thesis, 2021.

Trouver le texte intégral
Résumé :
The pursuit to combat climate change continues, identifying new methods and technologies for sustainable energy management. Automakers continue developing battery electric vehicles while researchers identify new applications and materials for solar photovoltaics. Each advance opens new gaps in the literature that require investigation. Photovoltaic vehicle integration gained popularity during the 1970s but was not commercialized due to technological, economic, and other factors. By 2021 the idea had resurfaced, with commercial and concept vehicles showcasing photovoltaics. The emergence of transparent photovoltaics presents additional options for vehicle integration, but the literature lacks analyses of their energy output and economics. This theoretical analysis investigated transparent photovoltaics replacing a vehicle's windows. The investigation found that transparent photovoltaic vehicle integration generates energy and financial savings. However, due to high system costs and location, the system does not provide a financial payback period comparable to other photovoltaic arrays. Improving cost, location, and other financial parameters creates more favorable circumstances for the photovoltaic system. Furthermore, transparent photovoltaics provide energy-saving benefits and some return on investment compared to regular glass windows.
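The payback conclusion can be illustrated with a simple-payback sketch in Python; every number below is a placeholder assumption, since the thesis's exact cost and yield figures are not reproduced in this abstract.

```python
# Hypothetical figures for a transparent-PV window retrofit (placeholders only).
system_cost = 3500.0        # installed cost, USD
annual_energy_kwh = 250.0   # annual generation from the window area
electricity_price = 0.13    # USD/kWh

annual_savings = annual_energy_kwh * electricity_price
payback_years = system_cost / annual_savings
print(f"simple payback: {payback_years:.0f} years")  # very long, matching the finding above
```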
Styles APA, Harvard, Vancouver, ISO, etc.
29

(6997700), Wooram Kang. « HYDROGEN GENERATION FROM HYDROUS HYDRAZINE DECOMPOSITION OVER SOLUTION COMBUSTION SYNTHESIZED NICKEL-BASED CATALYSTS ». Thesis, 2019.

Trouver le texte intégral
Résumé :
Hydrous hydrazine (N2H4·H2O) is a promising hydrogen carrier for convenient storage and transportation owing to its high hydrogen content (8.0 wt%), low material cost and stable liquid state at ambient temperature. In particular, the generation of only nitrogen as byproduct in addition to hydrogen (obviating the need for an on-board collection system for recycling), the ability to generate hydrogen at moderate temperatures (20-80 °C) corresponding to the operating temperature of a proton exchange membrane fuel cell (PEMFC), and easy recharging using the current infrastructure of liquid fuels make hydrous hydrazine a promising hydrogen source for fuel cell electric vehicles (FCEVs). Since hydrogen can be generated by catalytic hydrazine decomposition, the development of active, selective and cost-effective catalysts that promote the complete decomposition (N2H4 → N2 + 2H2) while suppressing the incomplete decomposition (3N2H4 → 4NH3 + N2) remains a significant challenge.
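H2 selectivity in this literature is commonly back-calculated from the measured gas-to-hydrazine mole ratio; a minimal sketch under the stated two-pathway assumption (with NH3 not counted in the gas volume) follows.

```python
def h2_selectivity(gas_ratio):
    """
    H2 selectivity X from lambda = n(H2 + N2) / n(N2H4), assuming only the two
    pathways N2H4 -> N2 + 2H2 and 3N2H4 -> 4NH3 + N2 (NH3 trapped, not counted),
    which gives lambda = (8X + 1)/3.
    """
    return (3.0 * gas_ratio - 1.0) / 8.0

print(h2_selectivity(3.0))   # lambda = 3 -> X = 1.0 (100% H2 selectivity)
print(h2_selectivity(1.0))   # lambda = 1 -> X = 0.25
```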
In this dissertation, CeO2 powders and various Ni-based catalysts for hydrous hydrazine decomposition were prepared and investigated using the solution combustion synthesis (SCS) technique. SCS is a widely employed technique for synthesizing nanoscale materials such as oxides, metals, alloys and sulfides, owing to its simplicity, the low cost of precursors, and its energy- and time-efficiency. In addition, product properties can be effectively tailored by adjusting various synthesis parameters that affect the combustion process.
The first and second parts of this work (Chapters 2 and 3) are devoted to investigating the correlation between the synthesis parameters, combustion characteristics and properties of the resulting powder. A series of CeO2 nanopowders (a widely used material for various catalytic applications and a promising catalyst support for hydrous hydrazine decomposition) and Ni/CeO2 nanopowders, as model catalysts for the target reaction, were synthesized using the conventional SCS technique. This demonstrated that the crystallite size, surface properties and concentration of defects in the CeO2 structure, which strongly influence catalytic performance, can be effectively controlled by varying synthesis parameters such as the metal precursor (oxidizer) type, reducing agent (fuel), fuel-to-oxidizer ratio and amount of gas-generating agent. The tailored CeO2 powder exhibited a small crystallite size (7.9 nm) and a high surface area (88 m2/g), the highest value among all previously reported SCS-derived CeO2 powders. The Ni/CeO2 catalysts synthesized with 6 wt% Ni loading, hydrous hydrazine as fuel and a fuel-to-oxidizer ratio of 2 showed 100% selectivity for hydrogen generation and the highest activity (34.0 h-1 at 50 °C) among all previously reported catalysts containing Ni alone for hydrous hydrazine decomposition. This superior performance of the Ni/CeO2 catalyst is attributed to the small Ni particle size, large pore size and moderate defect concentration.
As the next step, the SCS technique was used to develop more efficient and cost-effective catalysts for hydrous hydrazine decomposition. In the third part (Chapter 4), noble-metal-free NiCu/CeO2 catalysts were synthesized and investigated. The characterization results indicated that the addition of Cu to Ni/CeO2 produces a synergistic effect, generating significant amounts of defects in the CeO2 structure that promote catalytic activity. The 13 wt% Ni0.5Cu0.5/CeO2 catalysts showed 100% H2 selectivity and 5.4-fold higher activity (112 h-1 at 50 °C) compared to 13 wt% Ni/CeO2 (20.7 h-1). This performance is superior to that of most reported non-noble metal catalysts and is even comparable to several noble metal-based catalysts. In the fourth part (Chapter 5), NiPt/CeO2 catalysts with low Pt loading were studied. A modified SCS technique was developed and applied to prepare the NiPt/CeO2 catalysts, which overcomes the typical problem of conventional SCS in which Pt diffuses into the bulk CeO2, leaving the catalyst surface deficient in Pt. The Ni0.6Pt0.4/CeO2 catalysts with 1 wt% Pt loading exhibited high activity (1017 h-1 at 50 °C) along with 100% H2 selectivity, owing to the optimum composition of the NiPt alloy, high metal dispersion and a large amount of CeO2 defects. This activity is higher than that of most reported NiPt-based catalysts, which typically contain high Pt loadings (3.6-42 wt%).
Next, the intrinsic kinetics of hydrous hydrazine decomposition over the NiPt/CeO2 catalysts, which are necessary for efficient design and optimization of a hydrous hydrazine-based hydrogen generator system, were investigated (Chapter 6). From the experimental data obtained at different reaction temperatures, an intrinsic kinetic model based on the Langmuir-Hinshelwood mechanism was established. The developed model provides good predictions of the experimental data over a wide range of initial reactant concentrations, describing well the variation of the reaction order from low to high reactant concentration.
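A sketch of the Langmuir-Hinshelwood form described above, showing the apparent order shifting from first to zero as concentration rises; the rate and adsorption constants are illustrative, not the fitted values from Chapter 6.

```python
import numpy as np

def lh_rate(c, k, K):
    """Langmuir-Hinshelwood rate law: ~first order at low c, ~zero order at high c."""
    return k * K * c / (1.0 + K * c)

c = np.logspace(-3, 1, 5)          # hypothetical N2H4 concentrations, mol/L
print(lh_rate(c, k=1.0, K=50.0))   # rate saturates as K*c >> 1
```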
Finally, the conclusions of the dissertation and recommendations for future work are summarized in Chapter 7.
Styles APA, Harvard, Vancouver, ISO, etc.
30

(9780926), Muhammad Bhuiya. « An experimental study of 2nd generation biodiesel as an alternative fuel for diesel engine ». Thesis, 2017. https://figshare.com/articles/thesis/An_experimental_study_of_2nd_generation_biodiesel_as_an_alternative_fuel_for_diesel_engine/13449476.

Texte intégral
Résumé :
This study investigated the prospects of using 2nd generation biodiesel as an alternative fuel, particularly biodiesel produced from the Australian Beauty Leaf (BL) (Calophyllum inophyllum L.). Firstly, the study developed an optimised oil extraction method for the BL kernel based on kernel size and treatment conditions (for example, seed preparation and cracking, drying, whole versus grated kernel, and moisture content). A mechanical method using a screw press expeller and a chemical method using n-hexane were used for oil extraction. The results indicated that grated kernels dried to 14.4% moisture content produced the highest oil yield with both methods. The highest oil recovery, 54%, was obtained with the n-hexane method on grated kernels, followed by 45% with the screw press method. A comparison of the fossil energy ratio (FER), the ratio of the energy produced from the biodiesel to the energy required to process the feedstock, was made between the two methods: the FER of the n-hexane method was 4.1 compared to 3.7 for the screw press method, indicating that the n-hexane method is more efficient than the screw press technique. It should also be noted that the oil content of the BL kernel was about 60% on a dry weight basis.
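The FER comparison reduces to a simple ratio; the sketch below reproduces the reported values with illustrative back-calculated energy figures (the absolute MJ values are assumptions, only the ratios come from the abstract).

```python
def fossil_energy_ratio(energy_out_mj, energy_in_mj):
    """FER: energy in the biodiesel divided by fossil energy spent on processing."""
    return energy_out_mj / energy_in_mj

print(fossil_energy_ratio(41.0, 10.0))   # ~4.1, n-hexane extraction
print(fossil_energy_ratio(37.0, 10.0))   # ~3.7, screw press
```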
Styles APA, Harvard, Vancouver, ISO, etc.
31

(6887678), Oscar E. Sandoval. « Electro-Optic Phase Modulation, Frequency Comb Generation, Nonlinear Spectral Broadening, and Applications ». Thesis, 2019.

Trouver le texte intégral
Résumé :

Electro-optic phase modulation can be used to generate high repetition rate optical frequency combs. The optical frequency comb (OFC) has garnered much attention since its inception, acting as a crucial component in applications ranging from metrology and spectroscopy to optical communications. Electro-optic frequency combs (EO combs) can be generated by concatenating an intensity modulator and a phase modulator. The first part of this work focuses on broadening the modest bandwidth inherent to EO combs. This is achieved by propagation in a nonlinear medium, specifically a nonlinear optical loop mirror (NOLM), which broadens the EO comb spectrum to a bandwidth of 40 nm with a spectral power variation of < 10 dB. This spectrally broadened EO comb is then used in dual-comb interferometry measurements to characterize the single soliton generated in an anomalous-dispersion silicon-nitride microresonator, allowing rapid characterization at low average power. Finally, electro-optic phase modulation is used in a technique to prove frequency-bin entanglement. A quantum network based on optical fiber will require the ability to perform phase modulation independent of photon polarization, since propagation in optical fiber scrambles the polarization of the input light. Commercially available phase modulators are inherently dependent on the polarization state of the input light, making them unsuitable for use in such a depolarized environment. This limitation is overcome by implementing a polarization diversity scheme to measure frequency-bin entanglement for arbitrary orientations of co- and cross-polarized frequency-bin entangled photon pairs.
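Pure phase modulation produces comb lines whose amplitudes follow Bessel functions of the modulation depth, which is one way to see why an unbroadened EO comb has a modest bandwidth; a hedged sketch (the modulation depth is illustrative, not a value from this work):

```python
import numpy as np
from scipy.special import jv

# Comb-line powers of a pure phase-modulated carrier: the n-th sideband
# amplitude is the Bessel function J_n(beta) of the modulation depth beta.
beta = 5.0                       # hypothetical phase-modulation depth, rad
n = np.arange(-10, 11)
power_db = 10 * np.log10(jv(n, beta) ** 2 + 1e-30)
for k, p in zip(n, power_db):
    print(f"line {k:+3d}: {p:6.1f} dB")   # power falls off rapidly beyond |n| ~ beta
```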

Styles APA, Harvard, Vancouver, ISO, etc.
32

(8788166), Scott R. Griffin. « Optical Techniques for Analysis of Pharmaceutical Formulations ». Thesis, 2020.

Trouver le texte intégral
Résumé :

The symmetry requirements of both second harmonic generation (SHG) and triboluminescence (TL) provide outstanding selectivity to noncentrosymmetric crystals, enabling high signal-to-noise measurements of crystal growth and nucleation of active pharmaceutical ingredients (APIs) within amorphous solid dispersions (ASDs) during accelerated stability testing. ASD formulations are becoming increasingly popular in the pharmaceutical industry because they address the poor dissolution kinetics and low bioavailability of APIs with low aqueous solubility. ASDs kinetically trap APIs in an amorphous state by dispersing the API molecules within a polymer matrix. The amorphous state of the API leads to an increase in apparent solubility, faster dissolution kinetics, and an increase in bioavailability. Both SHG and TL were used to quantitatively and qualitatively detect crystal growth and nucleation within ASD formulations in the parts-per-million (ppm) regime. TL is the emission of light upon mechanical disruption of a piezoelectrically active crystal; instrumentation was developed to rapidly determine the qualitative presence of crystals within nominally amorphous pharmaceutical materials in both powders and slurries. SHG was coupled with a controlled environment for in situ stability testing (CEiST) to enable in situ accelerated stability testing of ASDs. Single-particle tracking enabled by the CEiST measurements provided insights into the crystal growth rate distributions that arise from local differences within the material. Accelerated stability testing monitored by in situ measurements increased the signal-to-noise ratio of the recovered nucleation and crystal growth rates by suppressing the Poisson noise normally present in conventional accelerated stability tests. The disparities between crystal growth and nucleation kinetics on the surface versus within the bulk material were also investigated by single-particle tracking and in situ measurements: crystals were found to grow faster in the bulk than single crystals on the surface, while total crystallinity was higher on the surface due to the radial growth habit of surface crystals compared to columnar growth within the bulk. To increase the throughput of the in situ measurements, a temperature and relative humidity array (TRHA) was developed. The TRHA uses a temperature gradient and many individual liquid wells to test a multitude of conditions at the same time, which can reduce the time required to obtain stability information for formulation design.

Styles APA, Harvard, Vancouver, ISO, etc.
33

(9167615), Orthi Sikder. « Influence of Size and Interface Effects of Silicon Nanowire and Nanosheet for Ultra-Scaled Next Generation Transistors ». Thesis, 2020.

Trouver le texte intégral
Résumé :
In this work, we investigate the trade-off between scalability and reliability for next-generation logic transistors, i.e., the Gate-All-Around (GAA)-FET and the Multi-Bridge-Channel (MBC)-FET. First, we analyze the electronic properties (i.e., bandgap and quantum conductance) of ultra-thin silicon (Si) channels, i.e., nano-wires and nano-sheets, based on first-principles simulation. In addition, we study the influence of interface states (or dangling bonds) at the Si-SiO2 interface. Second, we investigate the impact of the bandgap change and interface states on GAA-FET and MBC-FET characteristics by employing Non-equilibrium Green's Function based device simulation. In addition, we calculate the activation energy of Si-H bond dissociation at the Si-SiO2 interface for different Si nano-wire/sheet thicknesses and oxide electric fields. Utilizing these thickness-dependent activation energies for the corresponding oxide electric field, in conjunction with a reaction-diffusion model, we compute the characteristic shift and analyze negative bias temperature instability in the GAA-FET and MBC-FET. Based on our analysis, we estimate the operating voltage of these transistors for a lifetime of 10 years and the ON current of the device at iso-OFF-current conditions. For example, for a channel length of 5 nm and a thickness < 5 nm, the safe operating voltage needs to be < 0.55 V. Furthermore, our analysis suggests that the benefit of Si thickness scaling can potentially be suppressed by the requirement of a desired lifetime for the GAA-FET and MBC-FET.
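The lifetime-versus-voltage trade-off can be illustrated with a generic empirical NBTI model; this is a hedged sketch with invented constants, not the thesis's calibrated reaction-diffusion parameters.

```python
import numpy as np

# Generic NBTI lifetime model: dVth = A * exp(gamma * Eox) * t**n.
A, GAMMA, N_EXP, T_OX = 1.2e-6, 2.0, 0.17, 1.5e-9   # gamma per MV/cm; t_ox in m (all illustrative)

def lifetime_years(v_gate, dvth_fail=0.05):
    """Time for the threshold-voltage shift to reach dvth_fail at gate voltage v_gate."""
    eox = v_gate / T_OX / 1e8                        # oxide field in MV/cm
    t_sec = (dvth_fail / (A * np.exp(GAMMA * eox))) ** (1.0 / N_EXP)
    return t_sec / 3.156e7                           # seconds -> years

for v in (0.45, 0.55, 0.65):
    print(f"V = {v:.2f} V -> ~{lifetime_years(v):.3g} years")   # ~10 yr near 0.55 V
```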
Styles APA, Harvard, Vancouver, ISO, etc.
34

(8966861), Dina Verdin. « Enacting Agency : Understanding How First-Generation College Students’ Personal Agency Supports Disciplinary Role Identities and Engineering Agency Beliefs ». Thesis, 2020.

Trouver le texte intégral
Résumé :

This dissertation follows a three-study format using an explanatory sequential mixed-methods design. Study 1 develops a measurement scale to capture first-generation college students' agency using the constructs of intentionality, forethought, self-reactiveness, and self-reflectiveness. Study 2 uses structural equation modeling to establish the relationships between personal agency, disciplinary role identities, and students' desire to enact engineering agency. Study 3 is a narrative analysis of how Kitatoi, a Latina, first-generation college student, authored her identity as an engineer. Data for Studies 1 and 2 came from a survey administered in the Fall of 2017 to 3,711 first-year engineering students across 32 ABET-accredited universities.
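Since Study 1 centers on scale development, a common reliability check is Cronbach's alpha; the sketch below computes it on synthetic Likert responses. The item count, scale range, and data are hypothetical, not the dissertation's survey data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) Likert response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical 5-point responses to four agency items (intentionality, forethought, ...).
rng = np.random.default_rng(7)
base = rng.integers(1, 6, size=(100, 1))                       # shared latent tendency
responses = np.clip(base + rng.integers(-1, 2, size=(100, 4)), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")              # high alpha: items cohere
```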

Styles APA, Harvard, Vancouver, ISO, etc.
35

(9127556), Hilary M. Florian. « IMPROVING THE PROTEIN PIPELINE THROUGH NONLINEAR OPTICAL METHODS ». Thesis, 2020.

Trouver le texte intégral
Résumé :

Understanding the function and structure of a protein is crucial for informing rational drug design and for developing successful drug candidates. However, this understanding is often limited by the protein pipeline, i.e., the steps needed to go from developing protein constructs to generating high-resolution structures of macromolecules. Because each step of the protein pipeline requires successful completion of the prior step, bottlenecks often arise and the process can take several years to complete. Addressing current limitations in the protein pipeline can help reduce the time required to solve the structure of a protein.

The field of nonlinear optical (NLO) microscopy provides a potential solution to many issues surrounding the detection and characterization of protein crystals. Techniques such as second harmonic generation (SHG) and two-photon excited UV fluorescence (TPE-UVF) have already been shown to be effective methods for detecting proteins with high selectivity and sensitivity. Efforts to improve the high-throughput capability of SHG microscopy for crystallization trials resulted in the development of a custom microretarder array (μRA) for depth-of-field (DoF) extension, thereby eliminating the need for z-scanning and reducing the overall data acquisition time. Further work was done with a commercially available μRA to allow polarization-dependent TPE-UVF: by placing the μRA in the rear conjugate plane of the beam path, the patterned polarization was mapped onto the field of view, and polarization information was extracted from images by Fourier analysis to aid discrimination between crystalline and aggregate protein.

Additionally, improvements to X-ray diffraction (XRD), the current gold standard for macromolecular structure elucidation, can improve the resolution of structure determination. X-ray-induced damage to protein crystals is one of the greatest sources of resolution loss. Previous work implemented a multimodal nonlinear optical (NLO) microscope in the beamline at Argonne National Lab; this instrument aids crystal positioning for XRD experiments by eliminating the need for X-ray rastering, reducing the overall X-ray dose to the sample. Modifications to continuously improve the capabilities of the instrument were made, focusing on a redesign of the beam path to allow epi-detection of TPE-UVF and on building a custom objective for improved throughput of 1064 nm light. Furthermore, a computational method using non-negative matrix factorization (NMF) was employed to isolate unperturbed diffraction peaks and provided insight into the mechanism by which X-ray damage occurs. This work has the potential to improve the resolution of diffraction data and can be applied to other techniques where X-ray damage is of concern, such as electron microscopy.
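As a toy illustration of the NMF idea, the sketch below factors a synthetic stack of diffraction frames into an undamaged component and a dose-dependent damage component; the data, component count, and scikit-learn usage are illustrative assumptions, not the beamline analysis itself.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical stack of diffraction frames flattened to (n_frames x n_pixels):
# a fixed undamaged pattern plus a damage component that grows with X-ray dose.
rng = np.random.default_rng(3)
undamaged, damage = rng.random(500), rng.random(500)
dose = np.linspace(0, 1, 40)[:, None]
frames = np.outer(np.ones(40), undamaged) + dose * damage + 0.01 * rng.random((40, 500))

model = NMF(n_components=2, init="nndsvd", max_iter=500, random_state=0)
weights = model.fit_transform(frames)   # per-frame contribution of each component
print(weights[:5].round(2))             # the damage weight rises with accumulated dose
```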


Styles APA, Harvard, Vancouver, ISO, etc.
36

(11022585), Bhavya Rathna Kota. « Investigation of Generation Z's perception of Green Homes and Green Home Features ». Thesis, 2021.

Trouver le texte intégral
Résumé :
In recent years, there has been an increase in environmental awareness in the United States, leading to steady growth in environmentally conscious consumerism. These changes have come in response to issues such as the energy crisis, climate change, exponential population growth, and rapid urbanization, and are further supported by environmental campaigns and the green movement. Looking to the future of green home marketing, understanding the green consumer behavior of Generation Z (GenZ) is important for environmental and business reasons. The purpose of this research is to better understand GenZ's perception of Green Homes (GHs). The study uses the lenses of dual inheritance and normative motivation theory to explain the influence of benefits and norms related to environmentalism and sustainability on GenZ consumers' green behavior. This study seeks to evaluate 1) GenZ's preferences related to Green Home Features (GHFs), 2) the extent of the influence of certain barriers on the adoption of GHFs, and 3) the types of motivation (intrinsic, instrumental and non-normative) influencing GenZ towards green home consumerism. Data were collected using an online survey questionnaire exclusively at Purdue University during March-April 2021 (IRB 2020-1414); one hundred sixteen GenZ participants responded to the survey. The findings show that these GenZ consumers prefer certain types of GHFs over others. Based on descriptive tests of the GHFs, energy-related features were the most prized, while water-efficient features were the least preferred. Descriptive tests on barriers suggest that GenZ consumers perceive the lack of choice in selecting GHFs in their homes to be the top barrier, followed by a lack of information and the perceived effort of analyzing GHFs; inferential tests indicated that GenZ consumers perceive these barriers differently. Lastly, for GenZ consumers, intrinsic and non-normative motivations significantly affect their willingness to buy GHs. The findings concur with previous studies on green consumer behavior, yet provide a new benchmark for understanding GenZ consumer behavior toward GHs and an updated view of the GHFs they prefer. This research can be used by home marketers and policy makers to study future home trends, attract more potential homeowners to GHs, and help create a sustainable environment for future generations.
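One plausible inferential test for comparing repeated barrier ratings from the same respondents is the Friedman test; the sketch below applies it to synthetic 5-point ratings (the data and the specific test choice are assumptions, since the abstract does not name the tests used).

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical 5-point ratings of three barriers by the same 116 respondents
# (lack of choice, lack of information, effort to analyze GHFs).
rng = np.random.default_rng(11)
choice = rng.integers(3, 6, 116)
info = rng.integers(2, 6, 116)
effort = rng.integers(1, 5, 116)

stat, p = friedmanchisquare(choice, info, effort)
print(f"Friedman chi2 = {stat:.1f}, p = {p:.3g}")  # small p: barriers rated differently
```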
Styles APA, Harvard, Vancouver, ISO, etc.
37

(6331859), Changqin Ding. « Polarization-enabled Multidimensional Optical Microscopy ». Thesis, 2019.

Trouver le texte intégral
Résumé :
Polarization dependence provides a unique handle for extending the dimensionality of optical microscopy, with particular benefits in nonlinear optical imaging. Polarization-dependent second-order nonlinear optical processes such as second harmonic generation (SHG) provide rich qualitative and quantitative information on the local molecular orientation distribution. By bridging the Mueller and Jones tensors, a theoretical framework was introduced to extend polarization-dependent SHG microscopy measurements toward in vivo imaging, in which partial polarization or depolarization of the beam can complicate polarization analysis. In addition, polarization wavefront shaping was demonstrated to enable a new quantitative phase contrast imaging strategy for thin transparent samples: axially-offset differential interference contrast (ADIC) microscopy, a combination of classic Zernike phase contrast and Nomarski differential interference contrast (DIC) methods. The fundamentally unique nature of this strategy also inspired rapid volumetric analysis in the time dimension that is accessible to most existing microscopy systems. Finally, the dimensionality of high-speed two-photon fluorescence imaging was extended to the spectral domain by spatial/spectral multiplexing, enabling beam-scanning two-photon fluorescence microscopy at 17 frames per second with over 2000 effective spectral data points.
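For non-depolarizing optics there is a standard closed form bridging the Jones and Mueller descriptions, M = A (J ⊗ J*) A^{-1}; the numeric sketch below implements that textbook conversion, not the thesis's full SHG tensor framework.

```python
import numpy as np

A = np.array([[1, 0, 0, 1],
              [1, 0, 0, -1],
              [0, 1, 1, 0],
              [0, 1j, -1j, 0]])

def jones_to_mueller(J):
    """Mueller matrix of a non-depolarizing element from its 2x2 Jones matrix."""
    M = A @ np.kron(J, J.conj()) @ np.linalg.inv(A)
    return np.real_if_close(M)

# Horizontal linear polarizer as a sanity check: Jones matrix diag(1, 0).
print(jones_to_mueller(np.diag([1.0, 0.0])))   # standard polarizer Mueller matrix
```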
Styles APA, Harvard, Vancouver, ISO, etc.
38

(6397766), Shaobo Fang. « SINGLE VIEW RECONSTRUCTION FOR FOOD PORTION ESTIMATION ». Thesis, 2019.

Trouver le texte intégral
Résumé :

3D scene reconstruction from single-view images is an ill-posed problem, since most 3D information is lost during the projection from 3D world coordinates to 2D pixel coordinates. Estimating the portion of an object from a single view requires either a priori information, such as the geometric shape of the object, or training-based techniques that learn from existing portion-size distributions. In this thesis, we present a single-view technique for food portion size estimation.


Dietary assessment, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for the prevention of many chronic diseases such as cancer, diabetes and heart disease. Measuring dietary intake accurately is considered an open research problem in the nutrition and health fields. We have developed a mobile dietary assessment system, the Technology Assisted Dietary Assessment™ (TADA™) system, to automatically determine the food types and energy consumed by a user using image analysis techniques.


In this thesis we focus on using a single image for food portion size estimation, to reduce the user's burden of taking multiple images of a meal. We define portion size estimation as the process of determining how much food (or food energy/nutrient) is present in the food image. In addition to estimating food energy/nutrients, food portion estimation can also mean estimating food volumes (in cm3) or weights (in grams), as these are directly related to food energy/nutrients. Food portion estimation is a challenging problem, as food preparation and consumption can produce large variations in food shapes and appearances.


As single-view 3D reconstruction is in general an ill-posed problem, we investigate the use of geometric models, such as the shape of a container, that can help partially recover the 3D parameters of food items in the scene. We compare the performance of portion estimation based on 3D geometric models to techniques using depth maps, and show that more accurate estimates can be obtained by using geometric models for objects whose 3D shapes are well defined. To further improve accuracy, we investigate the use of food portion co-occurrence patterns, which can be estimated from the food image dataset we collected in dietary studies using the mobile Food Record™ (mFR™) system we developed. Co-occurrence patterns are used as prior knowledge to refine portion estimation results, and we show that portion estimation accuracy improves when they are incorporated as contextual information.


In addition to food portion estimation techniques based on geometric models, we also investigate a deep learning approach. The geometric-model-based approach focuses on estimating food volumes, but food volumes are only an intermediate result rather than the food energy/nutrient actually consumed. Therefore, instead of developing techniques that stop at this intermediate result, we present a method to estimate food energy (kilocalories) directly from food images using Generative Adversarial Networks (GANs). We introduce the concept of an "energy distribution" for each food image. To train the GAN, we design a food image dataset with ground-truth food labels, segmentation masks for each food image, and the associated energy information. Our goal is to learn the mapping from the food image to the food energy: we first estimate an energy distribution image, and then use a Convolutional Neural Network (CNN) to estimate the numeric value of the food energy present in the eating scene from that distribution.
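As a toy illustration of the "energy distribution" idea, the sketch below integrates a synthetic per-pixel energy map into a single kilocalorie estimate; the map, mask, and calibration factor are invented placeholders, not outputs of the TADA™ pipeline.

```python
import numpy as np

def energy_from_distribution(energy_map, kcal_per_unit=1.0):
    """Integrate a predicted per-pixel energy distribution image into a kcal estimate."""
    return float(energy_map.sum() * kcal_per_unit)

# Hypothetical network output: a 256x256 map that is nonzero only over food pixels.
rng = np.random.default_rng(5)
mask = np.zeros((256, 256))
mask[80:180, 60:200] = 1.0
energy_map = mask * rng.uniform(0.01, 0.05, size=mask.shape)
print(f"estimated energy: {energy_from_distribution(energy_map):.0f} kcal")
```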


Styles APA, Harvard, Vancouver, ISO, etc.
39

(9735716), Phyllis Antwiwaa Agyapong. « Examining the Relationship Between Parental Sex Education, Religiosity And Sex Positivity In First- And Second-Generation African Immigrants ». Thesis, 2020.

Trouver le texte intégral
Résumé :

This quantitative study examined the relationship between parental comprehensive sexual and reproductive health (SRH) communication, religiosity and sex positivity in first- and second-generation African immigrants. Comprehensive SRH communication was measured by frequency through the Sexual Communication Scale (SCS), religiosity through the Faith Activities in the Home Scale (FAITHS), and sex positivity through the Sex Positivity Scale (SPS). It was hypothesized that there would be a negative relationship between religiosity and sex positivity, and a positive relationship between parental SRH communication and sex positivity, in first- and second-generation African immigrants. Results indicated that higher levels of religiosity in the participant's upbringing were significantly associated with higher sex positivity. Additional findings revealed that more frequent SRH communication correlated with higher sex positivity in men and lower sex positivity in women. This study aims to set a foundation for future studies of first- and second-generation African immigrants as they relate to sexual health.

Styles APA, Harvard, Vancouver, ISO, etc.
40

(9780230), Sharmina Begum. « Assessment of alternative waste technologies for energy recovery from solid waste in Australia ». Thesis, 2016. https://figshare.com/articles/thesis/Assessment_of_alternative_waste_technologies_for_energy_recovery_from_solid_waste_in_Australia/13436876.

Texte intégral
Résumé :
Solid waste can be considered either as a burden or as a valuable resource for energy generation. Identifying an environmentally sound and techno-economically feasible solid waste treatment is therefore both a global and a local challenge. This study focuses on identifying an Alternative Waste Technology (AWT) to meet this demand. AWT recovers more resources from the waste stream and reduces the impact on the environment. There are three main pathways for converting solid waste into energy: thermo-chemical, biochemical and physico-chemical. This study deals with thermo-chemical conversion processes, of which four AWT processes are commonly used in Australia: anaerobic digestion, pyrolysis, gasification and incineration. The main aim of this study is to identify and test the most suitable AWT for use in Australia. A decision-making tool, Multi-Criteria Analysis (MCA), was used for this purpose: MCA of the available AWTs was performed using five criteria (capital cost, complexity, public acceptability, diversion from landfill and energy produced), from which gasification was identified as the most suitable AWT for energy recovery from solid waste.

The study then focused on assessing the performance of gasification technology for converting solid waste into energy, both experimentally and numerically. Experimental investigation of solid waste gasification was performed using a pilot-scale gasification plant at Corky's Carbon and Combustion P/L in Mayfield, Australia. In this experiment, wood chips were used as feedstock under specified gasifier operating conditions, and syngas composition was measured at different stages of gasification (raw, scrubbed and dewatered syngas). A mass and energy balance was analysed using the measured data: 65 per cent of the original energy of the solid waste was converted to syngas, 23 per cent to char and 6 per cent to hot oil, with the remaining 6 per cent lost to the atmosphere.

For the numerical investigation, computational process models were developed using Advanced System for Process ENgineering (ASPEN) Plus software for both fixed bed and fluidised bed gasification. A simplified, small-scale fixed bed gasification model was initially developed to observe the performance of the solid waste gasification process; it was validated with experimental data for Municipal Solid Waste (MSW) and food waste from the literature. Using this validated model, the effects of gasifier operating conditions such as gasifier temperature, air-fuel ratio and steam-fuel ratio were examined, and performance analyses were conducted for four different feedstocks: wood, coffee bean husks, green waste and MSW. A computational model was then developed for the fluidised bed gasification process and validated with the experimental wood chip data measured at the Corky's Carbon and Combustion plant; simulation and experimental results agreed to within 3 per cent. The validated model was used to analyse the effects of gasifier operating conditions, and a detailed energy and exergy analysis was carried out to obtain a complete picture of the system outcome. An energy efficiency of 78 per cent and an exergetic efficiency of 23 per cent were achieved for the system.

The developed fixed bed and fluidised bed gasification models are useful for predicting the various operating parameters of a solid waste gasification plant, such as temperature, pressure, air-fuel ratio and steam-fuel ratio. The research outcome contributes to a better understanding by stakeholders and policy makers at national and international levels who are responsible for developing waste management technologies. In future, this research can be extended to other feedstocks, such as green waste, sugarcane bagasse and mixed MSW.
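The reported energy split closes to 100 per cent; a minimal Python check, with the absolute feedstock energy input as a hypothetical placeholder:

```python
# Energy balance of the pilot-scale run, normalized to the feedstock energy.
fractions = {"syngas": 0.65, "char": 0.23, "hot oil": 0.06, "losses": 0.06}
assert abs(sum(fractions.values()) - 1.0) < 1e-9   # balance closes to 100%

feed_energy_mj = 100.0   # hypothetical feedstock energy input, MJ
for stream, f in fractions.items():
    print(f"{stream:>8}: {f * feed_energy_mj:.1f} MJ ({f:.0%})")
```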

Styles APA, Harvard, Vancouver, ISO, etc.
41

(9838253), Roshani Subedi. « Assessing the viability of growing Agave Tequilana for biofuel production in Australia ». Thesis, 2013. https://figshare.com/articles/thesis/Assessing_the_viability_of_growing_Agave_Tequilana_for_biofuel_production_in_Australia/20459547.

Texte intégral
Résumé :

Governments around the world have been introducing policies to support the use of biofuels since the 1990s, owing to their positive influence on climate change mitigation, air quality, fuel supply security and poverty reduction through rural and regional industry growth. In Australia, liquid fuel is in high demand, and this demand is increasing every year. To meet current fuel demand and to address climate change impacts, it is important for Australia to invest in green and clean energy. Biofuels are one option for clean and green energy that could help reduce the demand for fossil fuels. Not only developed countries but also developing countries are interested in using biofuel policies to reduce dependence on imported fossil fuel, promote economic development and poverty reduction, and improve access to commercial energy. However, the major challenge for the biofuel industry is to find the right feedstock: one that does not compete with human food production and can grow on marginal land. One such feedstock, studied in this research, is Agave tequilana.

Overcoming many of the constraints to establishing Agave tequilana as a potential feedstock in Australia requires an understanding of the complex technical, economic and systemic challenges associated with farming, processing and extracting ethanol. The aim of this research is to study the viability of growing Agave tequilana as a potential biofuel feedstock in Australia. The study also explores the economics of growing this crop, comparing the costs and benefits of growing Agave tequilana with those of sugarcane. Agave tequilana was selected for this study because of the existence of a trial site at Ayr, Queensland, and because of a climate and rainfall pattern similar to that of the western central highlands of Mexico, where Agave is traditionally grown for the production of tequila. In this study, the viability of growing Agave tequilana for producing ethanol in Ayr, Queensland has been assessed using a case study approach, and the financial cost and Greenhouse Gas (GHG) savings have been estimated using life cycle cost analysis. Likewise, Agave tequilana and sugarcane agronomic practices have been compared, and biofuel policies have been reviewed using secondary sources to support the establishment of non-food crops such as Agave tequilana in Australia and elsewhere.

Ayr, Queensland is predominantly a sugarcane growing area, where sugarcane farmers occupy 88% of the total agricultural land available; the remaining 12% is set aside for other crops and cattle grazing, or may remain unused. In this study, farmers expressed the view that there is very limited land available in Ayr for Agave tequilana to be commercially viable unless sugarcane or cattle-grazing land is converted into Agave fields. However, it appears that both farmers and stakeholders are ready to accept Agave tequilana as a potential biofuel crop if it is established on marginal lands in the sugarcane belt of Queensland, rather than in the Burdekin region, which is predominantly a sugarcane growing area.

The study also found that only 33% of respondents were acquainted with this crop, and that a smaller group were aware of its potential to produce biofuel. Farmers indicated they would wait until the first trial outcomes are finalised and more research and development is undertaken on this crop before deciding to invest. Since this crop takes at least five years to provide a financial return, compared to existing crops in the region, most respondents expect higher returns of 20-25% at the end of harvest and would prefer interim payments. Farmers may also require initial government assistance, such as subsidised farm machinery, subsidised fuel and interest-free loans, before deciding to invest. The life cycle stages of Agave tequilana have been derived taking sugarcane as a base crop. At the first trial site, more than 65% of the cost of farming Agave tequilana in Australia occurred in the first year of plantation, which allowed the conclusion that existing tools and machinery can be modified and used for farming Agave tequilana in Australia. The tequila industry provides a model for biofuel production from Agave tequilana. In Australia, the cost of producing ethanol from Agave tequilana is estimated at around A$0.52 per litre, excluding government subsidies. The total cost of constructing an ethanol plant with a capacity of 90 ML/year in Australia at present is estimated at A$113.5 million.
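To relate the plant cost to the per-litre figure, here is a hedged levelized-capital-cost sketch using the A$113.5 million and 90 ML/year figures from this abstract; the discount rate and plant life are assumptions, not values from the thesis.

```python
def capital_cost_per_litre(capex, capacity_ml_per_year, rate=0.07, life_years=20):
    """Levelized capital cost per litre via a capital recovery factor (CRF)."""
    crf = rate * (1 + rate) ** life_years / ((1 + rate) ** life_years - 1)
    return capex * crf / (capacity_ml_per_year * 1e6)

# Thesis figures: A$113.5M plant, 90 ML/year; rate and life are illustrative.
print(f"A${capital_cost_per_litre(113.5e6, 90):.3f}/L of capital charge")  # ~A$0.12/L
```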

The level of support provided to the biofuel industry by the Australian government is relatively modest compared to other advanced countries such as the USA and the EU; nevertheless, federal and state programs have provided significant support to the biofuel industry in Australia. In future, if Agave tequilana is to be selected as a potential non-food biofuel feedstock, the government and the private sector will need to explore financing opportunities in marginal and semi-marginal regions of Australia to support the viability of producing ethanol with new technology. It is also necessary to explore the business case for modifying existing sugar processing mills to produce ethanol from Agave tequilana juice and bagasse.

Styles APA, Harvard, Vancouver, ISO, etc.
42

(5929979), Yun-Jou Lin. « Point Cloud-Based Analysis and Modelling of Urban Environments and Transportation Corridors ». Thesis, 2019.

Trouver le texte intégral
Résumé :
3D point cloud processing has become a critical task due to the increasing demand from a variety of applications such as urban planning and management, as-built mapping of industrial sites, infrastructure monitoring, and road safety inspection. Point clouds are mainly acquired from two sources: laser scanning and optical imaging systems. However, the original point clouds usually do not provide explicit semantic information, and the collected data must undergo a sequence of processing steps to derive and extract the required information. Moreover, depending on application requirements, the outcomes of point cloud processing can differ. This dissertation presents two tiers of data processing: the first tier proposes an adaptive data processing framework to deal with multi-source and multi-platform point clouds; the second tier introduces two point cloud processing strategies targeting applications in urban environments and transportation corridors.

For the first tier of data processing, the internal characteristics of the data (e.g., noise level and local point density) should be considered first, since point clouds may come from a variety of sources and platforms. The acquired point clouds may contain a very large number of points, and processing (e.g., segmentation) of such large datasets is time-consuming. Hence, to attain high computational efficiency, this dissertation presents a down-sampling approach that considers the internal characteristics of the data while maintaining the nature of the local surface. Moreover, point cloud segmentation is one of the essential steps in the initial processing chain for deriving semantic information and modelling point clouds; a multi-class simultaneous segmentation procedure is therefore proposed to partition the point cloud into planar, linear/cylindrical, and rough features. Since segmentation outcomes can suffer from artifacts, a series of quality control procedures are introduced to evaluate and improve the quality of the results.
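For a flavor of what down-sampling does to such datasets, here is a minimal uniform voxel-grid sketch, a simpler cousin of the adaptive approach described above; the voxel size and random points are placeholders.

```python
import numpy as np

def voxel_downsample(points, voxel=0.1):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.bincount(inverse).astype(float)
    np.add.at(sums, inverse, points)        # accumulate coordinates per voxel
    return sums / counts[:, None]           # voxel centroids

pts = np.random.default_rng(2).random((100_000, 3)) * 10
print(len(voxel_downsample(pts, voxel=0.5)))   # far fewer points, structure preserved
```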

For the second tier of data processing, this dissertation focuses on two applications for high human activity areas, urban environments and transportation corridors. For urban environments, a new framework is introduced to generate digital building models with accurate right-angle, multi-orientation, and curved boundary from building hypotheses which are derived from the proposed segmentation approach. For transportation corridors, this dissertation presents an approach to derive accurate lane width estimates using point clouds acquired from a calibrated mobile mapping system. In summary, this dissertation provides two tiers of data processing. The first tier of data processing, adaptive down-sampling and segmentation, can be utilized for all kinds of point clouds. The second tier of data processing aims at digital building model generation and lane width estimation applications.
Styles APA, Harvard, Vancouver, ISO, etc.
43

(10725756), Duncan N. Houpt. « Synthesis of High-Performance Supercapacitor Electrodes using a CNT-ZIF-8-MoS2 Framework ». Thesis, 2021.

Trouver le texte intégral
Résumé :
Supercapacitors are emerging energy storage devices that have gained attention because of the large specific power, at a reasonable specific energy, that they exhibit. These energy storage devices could be used alongside, or in place of, traditional electrochemical battery technologies to power reliable electrical devices. The performance of supercapacitors is largely determined by electrode properties, including the surface-area-to-volume ratio, the electrical conductivity, and the ion diffusivity. Nanomaterial synthesis has been proposed as a method of enhancing the performance of many macroscopic supercapacitor electrodes due to the high surface-area-to-volume ratio and unique tunable properties, often size- or thickness-dependent, of many materials. Specifically, carbon materials (such as carbon nanotubes), metal organic frameworks (such as ZIF-8), and transition metal dichalcogenides (such as molybdenum disulfide) have been of interest due to the conductivity, large surface area, and ion diffusivity they exhibit when one or more of their characteristic lengths is on the order of several nanometers.

For the experiments, a carbon nanotube/ZIF-8/MoS2 framework was synthesized into an electrode material. This process involved first dispersing the carbon nanotubes in DMF using ultrasonication, and then modifying the structure with polydopamine to create binding sites for the ZIF-8 to attach to the carbon nanotubes. The ZIF-8 was synthesized by combining 1,2,4-triazole-3-thiol and ZnCl at 120 degrees Celsius. Afterwards, the MoS2 was attached to the carbon nanotube and ZIF-8 framework through a disulfide bond between the sulfur vacancy of the MoS2 and the sulfide group of the ZIF-8. Finally, the sample solution was filtered by vacuum filtration and annealed at 110 degrees Celsius before being deposited on a nickel foam substrate and tested in a three-electrode electrochemical cyclic voltammetry study.

The resulting material was found to have a capacitance of 262.15 F/g, with corresponding specific energy and specific power values of 52.4 Wh/kg and 1572 W/kg. Compared to other supercapacitor research materials, this electrode shows a much larger capacitance than exclusively carbon materials, and capacitance values comparable to ZIF-8 and MoS2 materials, with the added benefit of an easier and faster manufacturing process. Overall, the electrodes developed in this study could potentially reduce the cost per farad of supercapacitors, making them more competitive energy storage devices.
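The reported figures are mutually consistent under the standard conversion E = (1/2) C V^2; the sketch below reproduces them assuming a roughly 1.2 V window and a 120 s discharge, both of which are assumptions rather than values stated in the abstract.

```python
def supercap_metrics(c_spec, v_window, discharge_s):
    """Specific energy (Wh/kg) and power (W/kg) from specific capacitance (F/g)."""
    e_j_per_g = 0.5 * c_spec * v_window ** 2        # E = 1/2 C V^2, in J/g
    e_wh_per_kg = e_j_per_g * 1000 / 3600           # J/g -> Wh/kg
    p_w_per_kg = e_wh_per_kg * 3600 / discharge_s   # average power over the discharge
    return e_wh_per_kg, p_w_per_kg

e, p = supercap_metrics(262.15, 1.2, 120)
print(f"{e:.1f} Wh/kg, {p:.0f} W/kg")   # ~52 Wh/kg and ~1572 W/kg, matching the abstract
```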
Styles APA, Harvard, Vancouver, ISO, etc.
