Academic literature on the topic 'Automatic Function Prediction'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Automatic Function Prediction.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Automatic Function Prediction"

1

Wrzeszczynski, K. O., Y. Ofran, B. Rost, R. Nair, and J. Liu. "Automatic prediction of protein function." Cellular and Molecular Life Sciences (CMLS) 60, no. 12 (December 1, 2003): 2637–50. http://dx.doi.org/10.1007/s00018-003-3114-8.

2

Makrodimitris, Stavros, Roeland C. H. J. van Ham, and Marcel J. T. Reinders. "Automatic Gene Function Prediction in the 2020’s." Genes 11, no. 11 (October 27, 2020): 1264. http://dx.doi.org/10.3390/genes11111264.

Abstract:
The current rate at which new DNA and protein sequences are being generated is too fast to experimentally discover the functions of those sequences, emphasizing the need for accurate Automatic Function Prediction (AFP) methods. AFP has been an active and growing research field for decades and has made considerable progress in that time. However, it is certainly not solved. In this paper, we describe challenges that the AFP field still has to overcome in the future to increase its applicability. The challenges we consider are how to: (1) include condition-specific functional annotation, (2) predict functions for non-model species, (3) include new informative data sources, (4) deal with the biases of Gene Ontology (GO) annotations, and (5) maximally exploit the GO to obtain performance gains. We also provide recommendations for addressing those challenges, by adapting (1) the way we represent proteins and genes, (2) the way we represent gene functions, and (3) the algorithms that perform the prediction from gene to function. Together, we show that AFP is still a vibrant research area that can benefit from continuing advances in machine learning with which AFP in the 2020s can again take a large step forward reinforcing the power of computational biology.
3

Amidi, Shervine, Afshine Amidi, Dimitrios Vlachakis, Nikos Paragios, and Evangelia I. Zacharaki. "Automatic single- and multi-label enzymatic function prediction by machine learning." PeerJ 5 (March 29, 2017): e3095. http://dx.doi.org/10.7717/peerj.3095.

Abstract:
The number of protein structures in the PDB database has increased more than 15-fold since 1999. The creation of computational models predicting enzymatic function is of major importance since such models provide the means to better understand the behavior of newly discovered enzymes when catalyzing chemical reactions. Until now, single-label classification has been widely performed for predicting enzymatic function, limiting the application to enzymes performing unique reactions and introducing errors when multi-functional enzymes are examined. Indeed, some enzymes may be performing different reactions and can hence be directly associated with multiple enzymatic functions. In the present work, we propose a multi-label enzymatic function classification scheme that combines structural and amino acid sequence information. We investigate two fusion approaches (at the feature level and decision level) and assess the methodology for general enzymatic function prediction indicated by the first digit of the enzyme commission (EC) code (six main classes) on 40,034 enzymes from the PDB database. The proposed single-label and multi-label models predict correctly the actual functional activities in 97.8% and 95.5% (based on Hamming loss) of the cases, respectively. Also, the multi-label model predicts all possible enzymatic reactions in 85.4% of the multi-labeled enzymes when the number of reactions is unknown. Code and datasets are available at https://figshare.com/s/a63e0bafa9b71fc7cbd7.
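The multi-label accuracy quoted in this abstract is based on the Hamming loss, which counts label-wise disagreements between the predicted and true label sets. A minimal sketch of that metric (the EC-class label sets below are illustrative toy data, not the paper's):

```python
def hamming_loss(y_true, y_pred, n_labels):
    """Fraction of label slots, over all samples, that are predicted incorrectly."""
    errors = 0
    for true_set, pred_set in zip(y_true, y_pred):
        # symmetric difference = labels missed + labels wrongly added
        errors += len(set(true_set) ^ set(pred_set))
    return errors / (len(y_true) * n_labels)

# Toy multi-label predictions over the 6 main EC classes (1..6).
y_true = [{1}, {2, 4}, {3}]
y_pred = [{1}, {2}, {3, 5}]
loss = hamming_loss(y_true, y_pred, n_labels=6)  # 2 wrong slots out of 18
```

A score of "95.5% based on Hamming loss" then corresponds to 1 minus this quantity.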
4

Vega Yon, George G., Duncan C. Thomas, John Morrison, Huaiyu Mi, Paul D. Thomas, and Paul Marjoram. "Bayesian parameter estimation for automatic annotation of gene functions using observational data and phylogenetic trees." PLOS Computational Biology 17, no. 2 (February 18, 2021): e1007948. http://dx.doi.org/10.1371/journal.pcbi.1007948.

Abstract:
Gene function annotation is important for a variety of downstream analyses of genetic data. But experimental characterization of function remains costly and slow, making computational prediction an important endeavor. Phylogenetic approaches to prediction have been developed, but implementation of a practical Bayesian framework for parameter estimation remains an outstanding challenge. We have developed a computationally efficient model of evolution of gene annotations using phylogenies based on a Bayesian framework using Markov Chain Monte Carlo for parameter estimation. Unlike previous approaches, our method is able to estimate parameters over many different phylogenetic trees and functions. The resulting parameters agree with biological intuition, such as the increased probability of function change following gene duplication. The method performs well on leave-one-out cross-validation, and we further validated some of the predictions in the experimental scientific literature.
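The Bayesian machinery described here, Markov Chain Monte Carlo over parameters such as the probability of function change after duplication, can be illustrated with a minimal random-walk Metropolis sampler for a single rate with a flat prior. The data, proposal width, and chain length below are invented for illustration; the paper's actual model spans many phylogenetic trees and functions:

```python
import math
import random

def log_lik(p, k, n):
    """Binomial log-likelihood: k function changes observed in n duplication events."""
    if not 0.0 < p < 1.0:
        return float("-inf")
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def metropolis(k, n, steps=20000, width=0.1, seed=1):
    """Random-walk Metropolis sampler; with a flat prior the prior terms cancel."""
    rng = random.Random(seed)
    p, samples = 0.5, []
    for _ in range(steps):
        prop = p + rng.uniform(-width, width)   # symmetric proposal
        # accept with probability min(1, likelihood ratio)
        if math.log(rng.random()) < log_lik(prop, k, n) - log_lik(p, k, n):
            p = prop
        samples.append(p)
    return samples

# Toy data: 30 function changes in 100 duplication events.
post = metropolis(k=30, n=100)
mean_p = sum(post[5000:]) / len(post[5000:])  # posterior mean after burn-in
```

With this toy data the posterior concentrates near 0.3, matching the maximum-likelihood rate.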
5

Sawada, Kenji, Seiichi Shin, Kenji Kumagai, and Hisato Yoneda. "Optimal Scheduling of Automatic Guided Vehicle System via State Space Realization." International Journal of Automation Technology 7, no. 5 (September 5, 2013): 571–80. http://dx.doi.org/10.20965/ijat.2013.p0571.

Abstract:
This paper considers dynamical system modeling of transportation systems in semiconductor manufacturing based on state space realization. Utilizing this method, we consider an optimal scheduling problem for an Automatic Guided Vehicle (AGV) transfer problem, which is to control AGV congestion at transport rail junctions. Our scheduling algorithm is based on model-predictive control in which the cycle of measurement, prediction and optimization is repeated. Its optimization is recast as an Integer Linear Programming (ILP) problem. Since little attention has been given to AGV scheduling based on model-predictive control, no method is, to our knowledge, known for determining appropriate cost functions. Here, we focus on throughput maximization and shortest transit time problems and show corresponding cost function settings. We also propose a visualization algorithm of AGV scheduling via state space realization, presenting numerical examples.
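The model-predictive cycle the authors describe (measure, predict over a horizon, optimize an integer program, apply the first decision) can be sketched with a toy junction model: at each step exactly one of two rails releases an AGV, and the cost is the total number of queued vehicles. The dynamics, horizon, and brute-force enumeration below are assumptions for illustration; the paper recasts its optimization as a real ILP:

```python
from itertools import product

def simulate(queues, arrivals, plan):
    """Apply a release plan: at each step, the chosen rail releases one AGV."""
    q, cost = list(queues), 0
    for t, rail in enumerate(plan):
        for r in range(2):
            q[r] += arrivals[t][r]      # predicted arrivals at the junction
            if r == rail and q[r] > 0:
                q[r] -= 1               # chosen rail releases one AGV
        cost += sum(q)                  # congestion accumulated this step
    return cost, q

def mpc_step(queues, arrivals, horizon=4):
    """Enumerate all 2^horizon integer plans; return the first move of the best."""
    best = min(product((0, 1), repeat=horizon),
               key=lambda plan: simulate(queues, arrivals[:horizon], plan)[0])
    return best[0]

queues = [3, 1]                              # AGVs waiting on rails 0 and 1
arrivals = [[1, 0], [0, 1], [1, 0], [0, 0]]  # predicted arrivals per step
first_move = mpc_step(queues, arrivals)      # rail to release now
```

In the full method this measure-predict-optimize cycle repeats every step with fresh measurements, which is exactly the receding-horizon structure of model-predictive control.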
6

Ogawa, Chikara, Yasunori Minami, Masahiro Morita, Teruyo Noda, Soichi Arasawa, Masako Izuta, Atsushi Kubo, et al. "Prediction of Embolization Area after Conventional Transcatheter Arterial Chemoembolization for Hepatocellular Carcinoma Using SYNAPSE VINCENT." Digestive Diseases 34, no. 6 (2016): 696–701. http://dx.doi.org/10.1159/000448859.

Abstract:
Purpose: Transcatheter arterial chemoembolization (TACE) is one of the most effective therapeutic options for hepatocellular carcinoma (HCC), and it is important both to achieve a therapeutic effect and to preserve residual liver function after treatment. To reduce liver function deterioration, we evaluated automatic software that predicts the embolization area of TACE in 3 dimensions. Materials and Methods: Automatic embolization-area prediction software was used in the chemoembolization of 7 HCCs. The embolization area was evaluated on CT findings within 1 week after TACE and compared with the area simulated by the automatic prediction software. Results: The maximal diameter of the tumors was in the range 12-42 mm (24.6 ± 9.5 mm). The average time for detecting tumor-feeding branches was 242 s. The total time to detect tumor-feeding branches and simulate the embolization area was 384 s. All tumor-feeding branches of the HCCs were detected in all cases, and the embolization area expected from the simulation with the automatic prediction software was almost the same as the actual area shown by CT after TACE. Conclusion: This new technology has the potential to reduce the amount of contrast medium used, protect kidney function, decrease radiation exposure, and improve the therapeutic effect of TACE.
7

Zacharaki, Evangelia I. "Prediction of protein function using a deep convolutional neural network ensemble." PeerJ Computer Science 3 (July 17, 2017): e124. http://dx.doi.org/10.7717/peerj-cs.124.

Abstract:
Background The availability of large databases containing high resolution three-dimensional (3D) models of proteins in conjunction with functional annotation allows the exploitation of advanced supervised machine learning techniques for automatic protein function prediction. Methods In this work, novel shape features are extracted representing protein structure in the form of local (per amino acid) distribution of angles and amino acid distances, respectively. Each of the multi-channel feature maps is introduced into a deep convolutional neural network (CNN) for function prediction and the outputs are fused through support vector machines or a correlation-based k-nearest neighbor classifier. Two different architectures are investigated employing either one CNN per multi-channel feature set, or one CNN per image channel. Results Cross validation experiments on single-functional enzymes (n = 44,661) from the PDB database achieved 90.1% correct classification, demonstrating an improvement over previous results on the same dataset when sequence similarity was not considered. Discussion The automatic prediction of protein function can provide quick annotations on extensive datasets opening the path for relevant applications, such as pharmacological target identification. The proposed method shows promise for structure-based protein function prediction, but sufficient data may not yet be available to properly assess the method’s performance on non-homologous proteins and thus reduce the confounding factor of evolutionary relationships.
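The ensemble described here fuses the per-class outputs of several networks at the decision level. A minimal sketch of late fusion by averaging class scores (the classifier outputs below are invented; the paper fuses CNN outputs through SVMs or a correlation-based k-NN rather than a plain average):

```python
def fuse_and_predict(score_lists):
    """Average per-class scores from several classifiers; return argmax class and fused scores."""
    n_classes = len(score_lists[0])
    fused = [sum(scores[c] for scores in score_lists) / len(score_lists)
             for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__), fused

# Two classifiers scoring the 6 main enzyme classes for one protein:
# one trained on angle maps, one on amino acid distance maps (toy numbers).
cnn_angles    = [0.05, 0.60, 0.10, 0.10, 0.10, 0.05]
cnn_distances = [0.10, 0.30, 0.40, 0.05, 0.10, 0.05]
label, fused = fuse_and_predict([cnn_angles, cnn_distances])
```

Even though the two classifiers disagree on the top class, the fused scores favor class 1, illustrating how decision-level fusion arbitrates between channels.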
8

Sun, Yuanqiang, Jianping Chen, Pengbing Yan, Jun Zhong, Yuxin Sun, and Xinyu Jin. "Lithology Identification of Uranium-Bearing Sand Bodies Using Logging Data Based on a BP Neural Network." Minerals 12, no. 5 (April 27, 2022): 546. http://dx.doi.org/10.3390/min12050546.

Abstract:
Lithology identification is an essential task for delineating uranium-bearing sandstone bodies. A new method is provided to delineate sandstone bodies with an automatic lithological classification model using machine learning techniques, which could also improve the efficiency of borehole core logging. In this contribution, a BP neural network model for automatic lithology identification was established using an optimized gradient descent algorithm, trained on 4578 sets of well logging data (including lithology, density, resistivity, natural gamma, well diameter, natural potential, etc.) from 8 boreholes of the Tarangaole uranium deposit in Inner Mongolia. The softmax activation function and the cross-entropy loss function are used for lithology classification and weight adjustment. Lithology identification was predicted for 599 samples, with a prediction accuracy of 88.31%. The results suggest that the model is efficient and effective, and that it could be directly applied for automatic lithology identification in sandstone bodies for uranium exploration.
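The abstract names the softmax activation and cross-entropy loss used for the lithology classes; both fit in a few lines. A minimal sketch (the three-class logits are placeholders, not the network's actual outputs):

```python
import math

def softmax(logits):
    """Numerically stable softmax: shift by the max, exponentiate, normalize to sum 1."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, true_idx):
    """Negative log-probability the model assigns to the true class."""
    return -math.log(probs[true_idx])

# One sample's raw network outputs over 3 lithology classes.
probs = softmax([2.0, 1.0, 0.1])
loss = cross_entropy(probs, true_idx=0)
```

During training, the gradient of this loss with respect to the logits drives the weight adjustment the abstract refers to.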
9

Sun, Ling Fang, Hong Gang Xie, and Li Hong Qiao. "Research on the Fouling Prediction Based on Hybrid Kernel Function Relevance Vector Machine." Advanced Materials Research 204-210 (February 2011): 31–35. http://dx.doi.org/10.4028/www.scientific.net/amr.204-210.31.

Abstract:
Research on the fouling prediction of heat exchangers is significant for improving the operational efficiency and economic benefits of plants. Based on the relevance vector machine with a Gaussian kernel function, a polynomial kernel function, and a hybrid kernel function, simulation research on fouling prediction was carried out. We construct a six-input, one-output network model according to the fouling monitoring principle and parameters with MATLAB; all training data came from the Automatic Dynamic Simulator of Fouling and were input to the network after normalization and reclassification. Simulations show that the root mean square error of fouling prediction with the hybrid kernel function is lower than with a single kernel function, giving better prediction precision.
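A hybrid kernel of the kind evaluated here is commonly built as a convex combination of a Gaussian (RBF) kernel and a polynomial kernel. A sketch with illustrative hyperparameters (the mixing weight, kernel width, and polynomial degree are assumptions, not the paper's tuned values):

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel: exp(-||x - y||^2 / (2 sigma^2)), good at local fitting."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2 * sigma ** 2))

def polynomial_kernel(x, y, degree=2, c=1.0):
    """Polynomial kernel: (x.y + c)^degree, good at global trends."""
    dot = sum(a * b for a, b in zip(x, y))
    return (dot + c) ** degree

def hybrid_kernel(x, y, weight=0.7):
    """Convex combination: weight * RBF + (1 - weight) * polynomial."""
    return weight * gaussian_kernel(x, y) + (1 - weight) * polynomial_kernel(x, y)

k = hybrid_kernel([1.0, 0.0], [0.5, 0.5])
```

Because a nonnegative weighted sum of valid kernels is itself a valid kernel, the hybrid can be dropped into the relevance vector machine unchanged.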
10

Brata, Adam Hendra, Deron Liang, and Sholeh Hadi Pramono. "Software Development of Automatic Data Collector for Bus Route Planning System." International Journal of Electrical and Computer Engineering (IJECE) 5, no. 1 (February 1, 2015): 150. http://dx.doi.org/10.11591/ijece.v5i1.pp150-157.

Abstract:
Public transportation is an important issue in Taiwan. Recently, a mobile application named Bus Route Planning was developed to help users get information about public bus transportation, but it often gave users inaccurate bus information and had a less attractive GUI. Overcoming those 2 problems required 2 kinds of solutions. First, a more accurate time prediction algorithm is needed to predict bus arrival times. Second, augmented reality technology can be used to improve the GUI. In this research, an Automatic Data Collector system was proposed to support both solutions at once. The proposed system has 3 main functionalities: first, a data collector function to provide data sets that can be further analyzed as the basis of a time prediction algorithm; second, a data updater function to provide the most up-to-date bus information for use in the augmented reality system; third, a data management function to give better support to those 2 related systems. The proposed Automatic Data Collector system was developed using a batch data processing scenario and native SQL queries in the Java programming language. Testing showed that this data processing scenario was very effective for database manipulation, especially for large data sets.
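The batch data processing scenario described here, buffering collected records and writing them in one multi-row operation inside a single transaction instead of row by row, can be sketched with Python's built-in sqlite3 module (the bus-record schema is invented; the original system used Java with native SQL against its own database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bus_events (route TEXT, stop TEXT, arrival_s INTEGER)")

# Collected records are buffered in memory, then inserted as one batch.
batch = [
    ("R1", "StopA", 120),
    ("R1", "StopB", 340),
    ("R2", "StopA", 95),
]
with conn:  # one transaction for the whole batch
    conn.executemany("INSERT INTO bus_events VALUES (?, ?, ?)", batch)

n_rows = conn.execute("SELECT COUNT(*) FROM bus_events").fetchone()[0]
```

Grouping inserts into one transaction is what makes batch processing dramatically faster than per-row commits on large data sets.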

Dissertations / Theses on the topic "Automatic Function Prediction"

1

Wang, Lu. "Task Load Modelling for LTE Baseband Signal Processing with Artificial Neural Network Approach." Thesis, KTH, Signalbehandling, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-160947.

Abstract:
This thesis presents research on developing an automatic or guided-automatic tool to predict hardware (HW) resource occupation, namely task load, with respect to the software (SW) application algorithm parameters in an LTE base station. For the signal processing in an LTE base station it is important to know how many HW resources will be used when applying a SW algorithm on a specific platform. This information is valuable for understanding the system and platform better, which can facilitate a reasonable use of the available resources. The process of developing the tool is considered to be the process of building a mathematical model between HW task load and SW parameters, where the process is defined as function approximation. According to the universal approximation theorem, the problem can be solved by an intelligent method called artificial neural networks (ANNs). The theorem indicates that any function can be approximated with a two-layered neural network as long as the activation function and number of hidden neurons are proper. The thesis documents a workflow for building the model with the ANN method, as well as some research on data subset selection with mathematical methods, such as partial correlation and sequential searching, as a data pre-processing step for the ANN approach. In order to make the data selection method suitable for ANNs, a modification has been made to the sequential searching method, which gives a better result. The results show that it is possible to develop such a guided-automatic tool for prediction purposes in LTE baseband signal processing under specific precision constraints. Compared to other approaches, this model tool with an intelligent approach has a higher precision level and better adaptivity, meaning that it can be used in any part of the platform even though the transmission channels are different.
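The sequential searching pre-processing step mentioned in the abstract is, in its basic form, greedy forward selection: repeatedly add the feature whose score is highest. A minimal sketch using absolute correlation with the target as the score (the scoring criterion, parameter names, and toy data are assumptions; the thesis applies its own modified sequential method):

```python
def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def forward_select(features, target, k):
    """Greedily pick k feature columns with the highest marginal |correlation|.

    Note: scores are marginal (not conditioned on already-picked features),
    a simplification of full sequential search.
    """
    selected, remaining = [], list(features)
    while len(selected) < k and remaining:
        best = max(remaining,
                   key=lambda name: abs(correlation(features[name], target)))
        selected.append(best)
        remaining.remove(best)
    return selected

features = {
    "param_a": [1, 2, 3, 4, 5],
    "param_b": [5, 4, 3, 2, 1],   # perfectly anti-correlated with target
    "param_c": [2, 2, 9, 2, 2],   # uncorrelated with target
}
target = [10, 20, 30, 40, 50]     # toy task-load measurements
picked = forward_select(features, target, k=2)
```

Pre-selecting a small, informative subset of SW parameters like this keeps the ANN's input dimension, and hence its training cost, manageable.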
2

De Ferrari, Luna Luciana. "On combining collaborative and automated curation for enzyme function prediction." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/7538.

Abstract:
Data generation has vastly exceeded manual annotation in several areas of astronomy, biology, economy, geology, medicine and physics. At the same time, a public community of experts and hobbyists has developed around some of these disciplines thanks to open, editable web resources such as wikis and public annotation challenges. In this thesis I investigate under which conditions a combination of collaborative and automated curation could complete annotation tasks unattainable by human curators alone. My exemplar curation process is taken from the molecular biology domain: the association of all existing enzymes (proteins catalysing a chemical reaction) with their function. Assigning enzymatic function to the proteins in a genome is the first essential problem of metabolic reconstruction, important for biology, medicine, industrial production and environmental studies. In the protein database UniProt, only 3% of the records are currently manually curated and only 60% of the 17 million recorded proteins have some functional annotation, including enzymatic annotation. The proteins in UniProt represent only about 380,000 animal species (2,000 of which have completely sequenced genomes) out of the estimated millions of species existing on earth. The enzyme annotation task already applies to millions of entries and this number is bound to increase rapidly as sequencing efforts intensify. To guide my analysis I first develop a basic model of collaborative curation and evaluate it against molecular biology knowledge bases. The analysis highlights a surprising similarity between open and closed annotation environments on metrics usually connected with “democracy” of content. I then develop and evaluate a method to enhance enzyme function annotation using machine learning which demonstrates very high accuracy, recall and precision and the capacity to scale to millions of enzyme instances.
This method needs only a protein sequence as input and is thus widely applicable to genomic and metagenomic analysis. The last phase of the work uses active and guided learning to bring together collaborative and automatic curation. In active learning a machine learning algorithm suggests to the human curators which entry should be annotated next. This strategy has the potential to coordinate and reduce the amount of manual curation while improving classification performance and reducing the number of training instances needed. This work demonstrates the benefits of combining classic machine learning and guided learning to improve the quantity and quality of enzymatic knowledge and to bring us closer to the goal of annotating all existing enzymes.
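The active-learning loop described above hinges on a query strategy: the algorithm proposes for manual curation the entry whose prediction it is least sure about. A minimal sketch of least-confidence sampling (the enzyme names and probability table are invented placeholders):

```python
def least_confident(predictions):
    """Return the id of the unlabeled item whose top-class probability is lowest."""
    return min(predictions, key=lambda item: max(predictions[item]))

# Predicted class probabilities for three as-yet uncurated enzymes.
predictions = {
    "enzymeA": [0.95, 0.03, 0.02],   # model is confident
    "enzymeB": [0.40, 0.35, 0.25],   # model is unsure -> ask a curator
    "enzymeC": [0.70, 0.20, 0.10],
}
query = least_confident(predictions)
```

Each curated answer is fed back into training, so human effort is spent where the classifier stands to learn the most.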
3

Alborzi, Seyed Ziaeddin. "Automatic Discovery of Hidden Associations Using Vector Similarity : Application to Biological Annotation Prediction." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0035/document.

Abstract:
This thesis presents: 1) the development of a novel approach to find direct associations between pairs of elements linked indirectly through various common features, 2) the use of this approach to directly associate biological functions to protein domains (ECDomainMiner and GODomainMiner), and to discover domain-domain interactions, and finally 3) the extension of this approach to comprehensively annotate protein structures and sequences. ECDomainMiner and GODomainMiner are two applications to discover new associations between EC Numbers and GO terms to protein domains, respectively. They find a total of 20,728 and 20,318 non-redundant EC-Pfam and GO-Pfam associations, respectively, with F-measures of more than 0.95 with respect to a “Gold Standard” test set extracted from InterPro. Compared to around 1500 manually curated associations in InterPro, ECDomainMiner and GODomainMiner infer a 13-fold increase in the number of available EC-Pfam and GO-Pfam associations. These function-domain associations are then used to annotate thousands of protein structures and millions of protein sequences for which their domain composition is known but that currently lack experimental functional annotations. Using inferred function-domain associations and considering taxonomy information, thousands of annotation rules have automatically been generated. Then, these rules have been utilized to annotate millions of protein sequences in the TrEMBL database
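The core idea, that two items (say, an EC number and a Pfam domain) never observed together directly may still be linked through similar patterns of co-occurrence with a shared set of proteins, can be sketched as cosine similarity between binary occurrence vectors. The toy proteins, domain identifiers, and EC annotation below are invented for illustration; the thesis's scoring is more elaborate:

```python
import math

def cosine(u, v):
    """Cosine similarity between two numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Rows: which of 5 proteins carry each Pfam domain / EC annotation (toy data).
pfam_vectors = {"PF00001": [1, 1, 0, 1, 0], "PF00002": [0, 0, 1, 0, 1]}
ec_vector    = [1, 1, 0, 1, 0]   # proteins annotated with a given EC number

# Score each domain against the EC number; high similarity suggests a
# direct function-domain association worth inferring.
scores = {pf: cosine(vec, ec_vector) for pf, vec in pfam_vectors.items()}
best_domain = max(scores, key=scores.get)
```

Associations scoring above a threshold can then be propagated to any protein whose domain composition is known, which is how the inferred rules annotate otherwise unannotated sequences.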
4

Widera, Paweł. "Automated design of energy functions for protein structure prediction by means of genetic programming and improved structure similarity assessment." Thesis, University of Nottingham, 2010. http://eprints.nottingham.ac.uk/11394/.

Abstract:
The process of protein structure prediction is a crucial part of understanding the function of the building blocks of life. It is based on the approximation of a protein free energy that is used to guide the search through the space of protein structures towards the thermodynamic equilibrium of the native state. A function that gives a good approximation of the protein free energy should be able to estimate the structural distance of the evaluated candidate structure to the protein native state. This correlation between the energy and the similarity to the native is the key to high quality predictions. State-of-the-art protein structure prediction methods use very simple techniques to design such energy functions. The individual components of the energy functions are created by human experts with the use of statistical analysis of common structural patterns that occur in the known native structures. The energy function itself is then defined as a simple weighted sum of these components. Exact values of the weights are set in the process of maximisation of the correlation between the energy and the similarity to the native measured by a root mean square deviation between coordinates of the protein backbone. In this dissertation I argue that this process is oversimplified and could be improved on at least two levels. Firstly, a more complex functional combination of the energy components might be able to reflect the similarity more accurately and thus improve the prediction quality. Secondly, a more robust similarity measure that combines different notions of the protein structural similarity might provide a much more realistic baseline for the energy function optimisation. To test these two hypotheses I have proposed a novel approach to the design of energy functions for protein structure prediction using a genetic programming algorithm to evolve the energy functions and a structural similarity consensus to provide a reference similarity measure.
The best evolved energy functions were found to reflect the similarity to the native better than the optimised weighted sum of terms, thereby opening an interesting new area of research for machine learning techniques.
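The structural similarity consensus used as the optimisation baseline combines several similarity measures into a single reference score; one simple realisation is to rank-normalise each measure and average the ranks. A sketch under that assumption (the measure names and values are invented; the thesis's consensus is more elaborate):

```python
def rank_normalize(values):
    """Map scores to [0, 1] by rank, so measures on different scales are comparable."""
    order = sorted(range(len(values)), key=values.__getitem__)
    n = len(values)
    ranks = [0.0] * n
    for r, idx in enumerate(order):
        ranks[idx] = r / (n - 1) if n > 1 else 0.0
    return ranks

def consensus(measures):
    """Average rank-normalized scores across all similarity measures."""
    normalized = [rank_normalize(m) for m in measures]
    n_models = len(measures[0])
    return [sum(norm[i] for norm in normalized) / len(measures)
            for i in range(n_models)]

# Three candidate structures scored by two measures on different scales
# (e.g. a GDT-like score and an inverted RMSD-like score, toy numbers).
gdt_like = [0.80, 0.40, 0.60]
inv_rmsd = [0.90, 0.10, 0.50]
scores = consensus([gdt_like, inv_rmsd])
best = max(range(len(scores)), key=scores.__getitem__)
```

Because rank normalisation discards each measure's units, no single notion of similarity can dominate the consensus, which is what makes it a more robust optimisation target than backbone RMSD alone.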
5

Bahram, Mohammad. "Interactive Maneuver Prediction and Planning for Highly Automated Driving Functions." Thesis, supervised by Dirk Wollherr; examiners: Dirk Wollherr and Fritz Busch. München: Universitätsbibliothek der TU München, 2017. http://d-nb.info/1132774144/34.

6

Petrini, Alessandro. "High Performance Computing Machine Learning Methods for Precision Medicine." Doctoral thesis, Università degli Studi di Milano, 2021. http://hdl.handle.net/2434/817104.

Abstract:
Precision Medicine is a new paradigm that is reshaping several aspects of clinical practice: in prevention and diagnosis, it departs from the "one size fits all" approach of classical medicine. The aim of Precision Medicine is to find prevention, diagnosis and treatment measures that are specific to each individual, based on their personal history, lifestyle and genetic factors. Three factors have contributed to the rapid development of Precision Medicine: the ability to generate a vast amount of omics data quickly and cheaply, in particular thanks to new sequencing techniques (Next-Generation Sequencing); the ability to distribute this enormous amount of data thanks to the "Big Data" paradigm; and the ability to extract a whole range of relevant information from these data thanks to innovative and highly sophisticated processing techniques. In particular, the Machine Learning techniques introduced in recent years have revolutionized the way data are analysed: they provide powerful tools for statistical inference and for extracting relevant information from data semi-automatically. At the same time, however, they very often require substantial computational resources to work effectively. For this reason, and because of the sheer volume of data to be processed, it is necessary to develop Big Data-oriented Machine Learning techniques that explicitly use High Performance Computing, in order to make the best use of the available computing resources at different scales, from single workstations up to supercomputers.
This thesis presents three Machine Learning techniques developed in the context of High Performance Computing and created to address three fundamental and still open questions in the field of Precision Medicine, in particular Genomic Medicine: i) the identification of deleterious or pathogenic variants among the neutral ones in the non-coding regions of DNA; ii) the detection of the activity of regulatory regions in different cell lines and tissues; iii) the automatic prediction of protein function in the context of biomolecular networks. For the first problem, parSMURF was developed, an innovative hyper-ensemble method able to handle the high degree of imbalance that characterizes the identification of pathogenic and deleterious variants amid the "sea" of neutral variants in the non-coding regions of DNA. The algorithm was implemented to specifically exploit the supercomputing resources of CINECA (Marconi - KNL) and HPC Center Stuttgart (HLRS Apollo HAWK), achieving state-of-the-art results both in predictive power and in scalability. The second problem was tackled by developing deep neural networks, in particular Deep Feed Forward and Deep Convolutional Neural Networks to analyse, respectively, epigenetic data and DNA sequences, with the aim of identifying promoters and enhancers that are active in specific cell lines and tissues. The analysis is performed genome-wide and GPU parallelization techniques were used. Finally, for the third problem a graph-based semi-supervised Machine Learning algorithm based on Hopfield networks was developed to efficiently process large biological networks, again using GPU parallelization techniques; in particular, a relevant part of the algorithm is the introduction of a parallel graph colouring technique that improves on the classical greedy approach introduced by Luby.
Among future work and ongoing activities, the thesis presents the project for the extension of parSMURF that was recently awarded by the Partnership for Advanced Computing in Europe (PRACE) consortium, with the aim of further developing the algorithm and its implementation, applying it to datasets several orders of magnitude larger, and integrating the results into Genomiser, the current state-of-the-art tool for the identification of Mendelian genetic variants. This project is part of an international collaboration with the Jackson Lab for Genomic Medicine.
Precision Medicine is a new paradigm which is reshaping several aspects of clinical practice, representing a major departure from the "one size fits all" approach to diagnosis and prevention featured in classical medicine. Its main goal is to find personalized prevention measures and treatments, on the basis of the personal history, lifestyle and specific genetic factors of each individual. Three factors contributed to the rapid rise of Precision Medicine approaches: the ability to quickly and cheaply generate a vast amount of biological and omics data, mainly thanks to Next-Generation Sequencing; the ability to efficiently access this vast amount of data, under the Big Data paradigm; and the ability to automatically extract relevant information from data, thanks to innovative and highly sophisticated analytical techniques. In recent years Machine Learning has revolutionized data analysis and predictive inference, influencing almost every field of research. Moreover, high-throughput bio-technologies posed additional challenges to effectively manage and process Big Data in Medicine, requiring novel specialized Machine Learning methods and High Performance Computing techniques well-tailored to process and extract knowledge from big bio-medical data. In this thesis we present three High Performance Computing Machine Learning techniques that have been designed and developed to tackle three fundamental and still open questions in the context of Precision and Genomic Medicine: i) identification of pathogenic and deleterious genomic variants among the "sea" of neutral variants in the non-coding regions of the DNA; ii) detection of the activity of regulatory regions across different cell lines and tissues; iii) automatic protein function prediction and drug repurposing in the context of biomolecular networks. 
For the first problem we developed parSMURF, a novel hyper-ensemble method able to deal with the huge data imbalance that characterizes the detection of pathogenic variants in the non-coding regulatory regions of the human genome. We implemented this approach with highly parallel computational techniques using supercomputing resources at CINECA (Marconi - KNL) and HPC Center Stuttgart (HLRS Apollo HAWK), obtaining state-of-the-art results. For the second problem we developed Deep Feed Forward and Deep Convolutional Neural Networks to process, respectively, epigenetic data and DNA sequences to detect active promoters and enhancers in specific tissues at genome-wide level, using GPU devices to parallelize the computation. Finally, we developed scalable semi-supervised graph-based Machine Learning algorithms based on parametrized Hopfield Networks to process large biological graphs in parallel on GPU devices, using a parallel coloring method that improves on the classical Luby greedy algorithm. We also present an ongoing extension of parSMURF, recently awarded by the Partnership for Advanced Computing in Europe (PRACE) consortium, to further develop the algorithm, apply it to much larger genomic datasets, and embed its results into Genomiser, a state-of-the-art computational tool for the detection of pathogenic variants associated with Mendelian genetic diseases, in the context of an international collaboration with the Jackson Laboratory for Genomic Medicine.
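The imbalance-handling idea at the core of parSMURF can be illustrated with a minimal ensemble sketch: each member trains on all positives plus an equal-sized random subsample of negatives, and the members' votes are averaged. This is only an illustration of the balancing scheme; parSMURF itself is a hyper-ensemble of random forests with SMOTE-style oversampling and parallel hyper-parameter search, not the toy nearest-centroid base learner used here for brevity.

```python
import numpy as np

def balanced_ensemble_predict(X_train, y_train, X_test, n_members=11, seed=0):
    """Hyper-ensemble sketch for heavily imbalanced data: each member sees all
    positives plus a balanced random subsample of negatives; votes are averaged.
    (Toy nearest-centroid base learner; illustrative only.)"""
    rng = np.random.default_rng(seed)
    pos = X_train[y_train == 1]
    neg = X_train[y_train == 0]
    scores = np.zeros(len(X_test))
    for _ in range(n_members):
        # balanced partition: every positive, an equal number of random negatives
        sub = neg[rng.choice(len(neg), size=len(pos), replace=False)]
        mu_pos, mu_neg = pos.mean(axis=0), sub.mean(axis=0)
        # vote 1 if the test point is closer to the positive centroid
        d_pos = np.linalg.norm(X_test - mu_pos, axis=1)
        d_neg = np.linalg.norm(X_test - mu_neg, axis=1)
        scores += (d_neg > d_pos).astype(float)
    return scores / n_members
```

Averaging over many balanced partitions lets every negative example contribute to some member while no single member is swamped by the majority class.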
APA, Harvard, Vancouver, ISO, and other styles
7

Lillack, Max. "Einfluss von Eingabedaten auf nicht-funktionale Eigenschaften in Software-Produktlinien." Master's thesis, Universitätsbibliothek Leipzig, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-101196.

Full text
Abstract:
Non-functional properties describe quality aspects of a software system. A software product line (SPL) describes a set of related software products that are developed from shared components and architectures in order to meet the requirements of different customer groups; software components are deliberately reused to develop software more efficiently. This thesis investigates the influence of input data on the non-functional properties of SPLs. Based on measurements of selected non-functional properties of individual software products, a prediction model for arbitrary software products of the SPL is built. The prediction model can be used to support the configuration process. The approach is evaluated on an SPL of lossless compression algorithms. Taking input data into account can significantly improve the prediction of non-functional properties of an SPL compared to simpler prediction models that ignore input data.
APA, Harvard, Vancouver, ISO, and other styles
8

Mangado, López Nerea. "Cochlear implantation modeling and functional evaluation considering uncertainty and parameter variability." Doctoral thesis, Universitat Pompeu Fabra, 2017. http://hdl.handle.net/10803/586214.

Full text
Abstract:
Recent innovations in computational modeling have led to important advances towards the development of predictive tools to simulate and optimize surgery outcomes. This thesis focuses on cochlear implantation surgery, a technique which allows functional hearing to be recovered in patients with severe deafness. The success of this intervention, however, relies on factors which are unpredictable or difficult to control. This, combined with the high variability of hearing restoration levels among patients, makes predicting the outcome of this surgery very challenging. The aim of this thesis is to develop computational tools to assess the functional outcome of cochlear implantation. To this end, this thesis addresses a set of challenges, such as the automatic optimization of the implantation and stimulation parameters by evaluating the neural response evoked by the cochlear implant, and the functional evaluation of a large set of virtual patients.
APA, Harvard, Vancouver, ISO, and other styles
9

Lian, Chunfeng. "Information fusion and decision-making using belief functions : application to therapeutic monitoring of cancer." Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2333/document.

Full text
Abstract:
Radiation therapy is one of the principal options used in the treatment of malignant tumors. To enhance its effectiveness, two critical issues should be carefully dealt with: reliably predicting therapy outcomes so that the treatment plan can be adapted to individual patients, and accurately segmenting tumor volumes so as to maximize radiation delivery in tumor tissues while minimizing side effects in adjacent organs at risk. Positron emission tomography with the radioactive tracer fluorine-18 fluorodeoxyglucose (FDG-PET) can noninvasively provide significant information on the functional activity of tumor cells. The goal of this thesis is twofold: 1) to propose a reliable therapy outcome prediction system using primarily features extracted from FDG-PET images; 2) to propose automatic and accurate algorithms for tumor segmentation in PET and PET-CT images. The theory of belief functions is adopted in our study to model and reason with uncertain and imprecise knowledge quantified from noisy and blurry PET images. In the framework of belief functions, a sparse feature selection method and a low-rank metric learning method are proposed to improve the classification accuracy of the evidential K-nearest neighbor (EK-NN) classifier learnt from high-dimensional data that contain unreliable features. Based on these two theoretical studies, a robust prediction system is then proposed, in which the small-sized and imbalanced nature of clinical data is effectively tackled. To automatically delineate tumors in PET images, an unsupervised 3-D segmentation method based on evidential clustering and spatial information is proposed. This mono-modality segmentation method is then extended to co-segment tumors in PET-CT images, considering that these two distinct modalities contain complementary information that can further improve accuracy. 
All proposed methods have been evaluated on clinical data, giving better results compared with state-of-the-art methods.
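A basic building block of the belief-function framework used in this thesis is Dempster's rule of combination, which fuses two mass functions while renormalizing away their conflict. A generic textbook sketch (not the author's code), with focal elements represented as frozensets:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.
    m1, m2: dicts mapping frozenset focal elements to masses summing to 1."""
    combined, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            inter = A & B
            if inter:
                # mass assigned to the (non-empty) intersection of focal elements
                combined[inter] = combined.get(inter, 0.0) + a * b
            else:
                conflict += a * b
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # renormalize by the non-conflicting mass
    return {A: v / (1.0 - conflict) for A, v in combined.items()}
```

Combining, e.g., two sources that each put partial mass on {tumor} and on the vacuous set {tumor, normal} concentrates the fused mass on {tumor}, which is how imprecise PET evidence is sharpened in this framework.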
APA, Harvard, Vancouver, ISO, and other styles
10

Tarasov, Kirill. "Searching for novel gene functions in yeast : identification of thousands of novel molecular interactions by protein-fragment complementation assay followed by automated gene function prediction and high-throughput lipidomics." Thèse, 2014. http://hdl.handle.net/1866/11824.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Automatic Function Prediction"

1

Christofides, Panagiotis D. Networked and Distributed Predictive Control: Methods and Nonlinear Process Network Applications. London: Springer-Verlag London Limited, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Box, George E. P. Time series analysis: Forecasting and control. 4th ed. Hoboken, N.J: John Wiley, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Box, George E. P. Time series analysis: Forecasting and control. 3rd ed. Englewood Cliffs, N.J: Prentice Hall, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Box, George E. P. Time series analysis: Forecasting and control. 4th ed. Hoboken, N.J: John Wiley, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Predictive Functional Control Advances in Industrial Control. Springer, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Åström, Karl E., Donal O'Donovan, and Jacques Richalet. Predictive Functional Control: Principles and Industrial Applications. Springer, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Christofides, Panagiotis D., Jinfeng Liu, and David Muñoz de la Peña. Networked and Distributed Predictive Control. Springer, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Christofides, Panagiotis D., Jinfeng Liu, and David Muñoz de la Peña. Networked and Distributed Predictive Control: Methods and Nonlinear Process Network Applications. Springer London, Limited, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Time Series Analysis: Forecasting and Control. Wiley, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Box, George E. P. Time Series Analysis: Forecasting and Control (Wiley Series in Probability and Statistics). 4th ed. Wiley-Interscience, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Automatic Function Prediction"

1

Della Ventura, Michele. "Automatic Tonal Music Composition Using Functional Harmony." In Social Computing, Behavioral-Cultural Modeling, and Prediction, 290–95. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16268-3_32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chitale, Meghana, Troy Hawkins, and Daisuke Kihara. "Automated Prediction of Protein Function from Sequence." In Prediction of Protein Structures, Functions, and Interactions, 63–85. Chichester, UK: John Wiley & Sons, Ltd, 2008. http://dx.doi.org/10.1002/9780470741894.ch3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chira, Camelia, and Nima Hatami. "Hybrid Evolutionary Algorithm with a Composite Fitness Function for Protein Structure Prediction." In Intelligent Data Engineering and Automated Learning - IDEAL 2012, 184–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32639-4_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chen, Brian Y., Viacheslav Y. Fofanov, Drew H. Bryant, Bradley D. Dodson, David M. Kristensen, Andreas M. Lisewski, Marek Kimmel, Olivier Lichtarge, and Lydia E. Kavraki. "Geometric Sieving: Automated Distributed Optimization of 3D Motifs for Protein Function Prediction." In Lecture Notes in Computer Science, 500–515. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11732990_42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Katriniok, Alexander, Peter Kleibaum, and Martina Joševski. "Automation of Road Intersections Using Distributed Model Predictive Control." In Control Strategies for Advanced Driver Assistance Systems and Autonomous Driving Functions, 175–99. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-91569-2_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Frydrych, Piotr, and Roman Szewczyk. "Preisach Based Model for Predicting of Functional Characteristic of Fluxgate Sensors and Inductive Components." In Recent Advances in Automation, Robotics and Measuring Techniques, 591–96. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-05353-0_55.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Alves dos Santos Schwaab, Andréia, Silvia Modesto Nassar, and Paulo José de Freitas Filho. "Automatic Generation of Type-1 and Interval Type-2 Membership Functions for Prediction of Time Series Data." In Lecture Notes in Computer Science, 353–64. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-47955-2_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chua, Hon Nian, and Limsoon Wong. "Predicting Protein Functions from Protein Interaction Networks." In Biological Data Mining in Protein Interaction Networks, 203–22. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-398-2.ch012.

Full text
Abstract:
Functional characterization of genes and their protein products is essential to biological and clinical research. Yet, there is still no reliable way of assigning functional annotations to proteins in a high-throughput manner. In this chapter, the authors provide an introduction to the task of automated protein function prediction. They discuss the motivation for automated protein function prediction, the challenges faced in this task, and some approaches that are currently available. In particular, they take a closer look at methods that use protein-protein interaction data for protein function prediction, elaborating on their underlying techniques and assumptions, as well as their strengths and limitations.
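The simplest of the interaction-network approaches surveyed in such chapters is neighbor counting: rank candidate annotations by how many direct interaction partners carry them. A minimal sketch (function and variable names here are illustrative, not taken from the chapter):

```python
from collections import Counter

def predict_functions(protein, edges, annotations, min_count=1):
    """Neighbor-counting sketch for protein function prediction from a PPI
    network: score each term by the number of direct interaction partners
    annotated with it, and return terms sorted by that score."""
    # direct interaction partners of the query protein (undirected edges)
    neighbors = ({b for a, b in edges if a == protein}
                 | {a for a, b in edges if b == protein})
    counts = Counter(term for n in neighbors
                     for term in annotations.get(n, ()))
    return [term for term, c in counts.most_common() if c >= min_count]
```

More refined methods weight neighbors by interaction reliability or look at indirect (level-2) neighbors, but the scoring skeleton is the same.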
APA, Harvard, Vancouver, ISO, and other styles
9

Lagos, Nikolaos, Salah Aït-Mokhtar, Ioan Calapodescu, and JinHee Lee. "Point of Interest Category Prediction with Under-Specified Hierarchical Labels." In PAIS 2022. IOS Press, 2022. http://dx.doi.org/10.3233/faia220070.

Full text
Abstract:
Categories are important elements of databases of Product Listings, for e-commerce platforms, or of Points of Interest (POIs), for location-based services. However, category annotations are often incomplete, which calls for automatic completion. Hierarchical classification has been proposed as a solution to impute missing annotations. We address this task in one of Naver's production databases (POIs), in order to enhance its quality. In real-life applications like ours, however, it is unrealistic to count on the existence of a perfectly annotated training set, and noisy training labels prevent us from casting the task as a straightforward classification problem. In order to overcome this difficulty, we propose an approach that takes into account the type of noise in the training set. We identified that the main deficiency is that the training labels tend to be under-specified, i.e., they point to categories found at higher levels of the hierarchy than the correct ones. This results in many under-represented and a few over-represented categories. We call categories that are over-represented, due to under-specified labels, joker classes. To allow robust learning in the presence of joker classes, we propose a simple and effective approach: first, we detect problematic categories, i.e. joker classes, based on the misclassifications of an initial hierarchical classifier. Then we re-train from scratch, introducing a weight in the standard cross-entropy loss function that targets incorrect predictions related to joker classes. Our model has enabled the correction of thousands of POIs in our production database.
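The abstract only states that a weight in the cross-entropy loss targets joker-class errors; one plausible minimal reading, down-weighting samples whose gold label is a joker class, can be sketched as follows (function name, the per-sample weighting scheme, and the weight normalization are assumptions, not the paper's exact loss):

```python
import numpy as np

def weighted_cross_entropy(probs, labels, joker_weight, joker_classes):
    """Sketch of a joker-aware cross-entropy: samples whose gold label is a
    'joker' (over-represented, under-specified) class get weight `joker_weight`
    instead of 1.0, and the loss is the weighted mean negative log-likelihood.
    probs: (n, k) predicted class probabilities; labels: (n,) gold class ids."""
    n = len(labels)
    # per-sample weights: joker-labelled samples are re-weighted
    w = np.where(np.isin(labels, list(joker_classes)), joker_weight, 1.0)
    nll = -np.log(probs[np.arange(n), labels] + 1e-12)
    return float(np.sum(w * nll) / np.sum(w))
```

Setting `joker_weight` below 1 reduces the pull of noisy, under-specified labels; the same skeleton also accommodates up-weighting if the goal is to penalize joker-related mistakes harder.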
APA, Harvard, Vancouver, ISO, and other styles
10

Jansirani, M., and P. Sumitra. "Implementation of Edge Detection Process by using Supervised Convolution Neural Network." In Artificial Intelligence and Communication Technologies, 275–81. Soft Computing Research Society, 2022. http://dx.doi.org/10.52458/978-81-955020-5-9-28.

Full text
Abstract:
Edge detection is important in digital image processing because edges contain essential information needed by specific applications. The edge detection procedure directly influences both feature detection and characteristic extraction, and the quality of the retrieved edges and features determines image quality. Several current operators, such as Canny, Laplacian of Gaussian, Prewitt, Zero-Crossing, Roberts, and Sobel, are commonly used to derive edge characteristics. However, as the size of the dataset grows, these operators require a lot of computing time, and this high running time means that edge detection affects the overall efficiency of image analysis. To address these challenges, this paper introduces an edge detection technique combined with machine learning algorithms. To reduce computing complexity, a supervised convolutional neural network is used in conjunction with the edge detection procedure. The neural network employs automatic learning functions and pre-training patterns to anticipate edge-related features with minimal effort. The fully convolutional layer and max-pooling function are used in the network to discard unnecessary features and predict edges with the highest detection accuracy. During this procedure, deviations in edge prediction are minimized by back-propagating the error value to the preceding layer, which improves the overall recognition rate. The system's efficiency is then calculated using the MATLAB tool, along with the appropriate performance measures.
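As a point of reference for the classical operators the chapter compares against, the Sobel edge magnitude is a plain 2-D convolution with a pair of fixed 3x3 kernels; a minimal sketch:

```python
import numpy as np

def sobel_edges(img):
    """Classical Sobel edge magnitude via explicit 2-D convolution (valid
    region only). This is the kind of hand-crafted operator the chapter's
    supervised CNN is meant to replace with learned filters."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient
    ky = kx.T                                                    # vertical gradient
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros_like(gx)
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)  # gradient magnitude
```

The nested Python loops make the quadratic cost that the abstract complains about explicit; a CNN layer computes the same kind of sliding-window product with learned kernels and hardware-friendly batching.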
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Automatic Function Prediction"

1

De Santis, Enrico, Alessio Martino, Antonello Rizzi, and Fabio Massimo Frattale Mascioli. "Dissimilarity Space Representations and Automatic Feature Selection for Protein Function Prediction." In 2018 International Joint Conference on Neural Networks (IJCNN). IEEE, 2018. http://dx.doi.org/10.1109/ijcnn.2018.8489115.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kelwade, Jairam P., and Suresh S. Salankar. "Prediction of heart abnormalities using Particle Swarm Optimization in Radial Basis Function Neural network." In 2016 International Conference on Automatic Control and Dynamic Optimization Techniques (ICACDOT). IEEE, 2016. http://dx.doi.org/10.1109/icacdot.2016.7877696.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Schwenn, Peter, and George Hazen. "Drawing with Performance Prediction." In SNAME 12th Chesapeake Sailing Yacht Symposium. SNAME, 1995. http://dx.doi.org/10.5957/csys-1995-007.

Full text
Abstract:
We describe some advances in Performance Prediction Programs (PPPs) for sailing yachts, primarily integrating PPP analysis into drawing and providing new sculpting operations in which fairness and the desired hydrostatic and other performance-determining characteristics are maintained: the shape remains a boat or a ship of the desired kind during reshaping. Our building blocks for such an integration are: a thousand-fold increase in PPP speed, new editing tools which maintain Boatness, and an accessible modularization of the engineering physics of the PPP within a new programming environment which allows immediate changes by designers. Specifically, these new functions are introduced at the boundary of drawing and the PPP: a live knotmeter is displayed with each design variant on the drawing board, alongside its antagonist, rating; continuously updated hydrostatics (including the speed-determining factors LSM, wetted surface, stability, prismatics, ...) are displayed with the knotmeter, with the 'positive' factors (like length) graphically opposing the 'negative' (like wetted surface); dimensions for PPP use are calculated automatically from the shape at hand, in particular appendage dimensions, hydrostatics, and so forth; bounding limits are set for a design optimization by drawing two or more outlier yacht forms, and the space in between can be explored by hand or automatically; local optima of speed against rating are provided as a 'Snap' function, the one-dimensional version of automatic exploration for optima; intermediate shapes are also controlled during design optimization to maintain realism and performance constraints on type, fairness, 'look', speed-producing shape measures like prismatic and displacement, and even handicap; and immediate feedback is available if one chooses to exploit the new programming environment to make aero-hydro model changes or extensions to the internal PPP mechanisms while drawing and exploring.
APA, Harvard, Vancouver, ISO, and other styles
4

Patil, Sangram, Aum Patil, Vishwadeep Handikherkar, Sumit Desai, Vikas M. Phalle, and Faruk S. Kazi. "Remaining Useful Life (RUL) Prediction of Rolling Element Bearing Using Random Forest and Gradient Boosting Technique." In ASME 2018 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/imece2018-87623.

Full text
Abstract:
Rolling element bearings are very important and highly utilized in many industries. Their catastrophic failure under fluctuating working conditions leads to unscheduled breakdowns and increases accidental economic losses. These issues have triggered the need for a reliable and automatic prognostics methodology that can prevent a potentially expensive maintenance program. Accordingly, Remaining Useful Life (RUL) prediction based on artificial intelligence is an attractive methodology for several researchers. In this study, a data-driven condition monitoring approach is implemented for predicting the RUL of a bearing under a given load and speed. The approach demonstrates the use of ensemble regression techniques such as Random Forest and Gradient Boosting for RUL prediction, with time-domain features extracted from the given vibration signals. The extracted features are ranked using a Decision Tree (DT) based ranking technique, and training and testing feature vectors are produced and fed as input to the ensemble techniques. Hyper-parameters are tuned for these models using an exhaustive parameter search, and the performance of the models is further verified by plotting the respective learning curves. For the present work, the FEMTO bearing dataset provided by the IEEE PHM 2012 Data Challenge is used. The Weibull hazard rate function of each bearing from the learning dataset is used to obtain target values, i.e., the projected RUL of the bearings. Results of the proposed models are compared with well-established data-driven approaches from the literature and are found to be better than all models previously applied to this dataset, thereby demonstrating the reliability of the proposed model.
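The Weibull hazard rate used here to derive regression targets has the closed form h(t) = (k/λ)(t/λ)^(k-1); a sketch (parameter names are generic, and the per-bearing fitted values are not given in the abstract):

```python
import numpy as np

def weibull_hazard(t, shape_k, scale_lam):
    """Weibull hazard rate h(t) = (k/lam) * (t/lam)**(k-1).
    shape_k > 1 gives an increasing hazard (wear-out), shape_k = 1 a constant
    hazard, shape_k < 1 a decreasing one. Used as a smooth degradation target
    for regression models in run-to-failure RUL pipelines."""
    t = np.asarray(t, float)
    return (shape_k / scale_lam) * (t / scale_lam) ** (shape_k - 1)
```

Mapping each vibration snapshot's timestamp through this curve gives the monotone target that the Random Forest and Gradient Boosting regressors are trained to reproduce.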
APA, Harvard, Vancouver, ISO, and other styles
5

Jeon, Byeong, Min Jae Chai, and Kwang Hee Park. "Development of Multiple Predictive Gear Shifting System of Automatic Transmission Connected with Electronic Horizon." In FISITA World Congress 2021. FISITA, 2021. http://dx.doi.org/10.46720/f2021-adm-127.

Full text
Abstract:
"In general, the most essential role of a transmission is to transmit the driving power of the power source to the wheel as much as possible without loss, and in this process, the operating point of the power device is controlled by shift schedule so that the driving efficiency can be maximized through appropriate gear shifting. Regardless of this driving efficiency of the power source, it often occurs that gear shifting can be helpful to the driving situation in the viewpoint of the driver's convenience. It refers to a case in which a rapid deceleration or acceleration must be prepared in advance depending on a specific driving event, such as winding road, downhill, speed bump or entering a highway and so on. In the case of an automatic transmission, the function of shifting to maximize the driving efficiency of the power source can be achieved according to the shift scheduling that has already been set to an optimum value. The automatic transmission, however, is likely to be more disadvantageous than a manual transmission in which the driver directly operates the gear position while observing the outside situation according to the approximate road shape and traffic flow. It is possible to realize an optimized predictive control system that enables early gear shifting of transmission in preparation for upcoming driving situations by connecting the transmission with the connectivity devices in the vehicle that provides various information of the road geometry and traffic conditions in front of the vehicle. In this study, we would like to introduce a new technologies of multiple predictive gear shifting control system and its application effect on the real road driving, that predicts at once the various driving situations expected to occur in the future and performs appropriate transmission gear before the situation by using electronic horizon including 3-dimentional map data and signals from front radar and camera equipped in the vehicle. 
The technology consists of following three steps: 'Situation Decoding Step' that converts the various raw signals from electronic horizon devices into a comprehensive driving situation that can be understandable to the transmission control unit, 'Speed Prediction Step' that determines the vehicle speed profile required for the upcoming driving situation, and 'Powertrain Control Step' that controls the most proper transmission gear and engine torque. The multiple predictive gear shifting system of automatic transmission proposed in this study was applied to mass production in Hyundai-Kia vehicles, and contributed not only to drivability, safety, and vehicle durability, but to improving fuel efficiency in real road driving."
APA, Harvard, Vancouver, ISO, and other styles
6

Brizzolara, Stefano, Stefano Gaggero, and Alessandro Grasso. "Parametric Optimization of Open and Ducted Propellers." In SNAME 12th Propeller and Shafting Symposium. SNAME, 2009. http://dx.doi.org/10.5957/pss-2009-02.

Full text
Abstract:
A new method for the automatic optimization of an initial design of open and ducted propellers is illustrated in the paper. The method is based on a parametric representation of the main geometric parameters of the propeller blades, with the pitch and chord distributions along the radius as B-Spline curves and the blade mean (cambered) surface as a B-Spline surface. The automatic optimization procedure is driven by a state-of-the-art multi-objective minimization algorithm that selects the control points defining the shape of the geometric parameter curves and surfaces. The objective function can include not only global characteristics such as efficiency and thrust, but also sheet cavitation extension and volume. The prediction of the propeller hydrodynamic characteristics is made by a panel method developed by the authors and validated on several occasions, applicable to open or ducted propellers, including those subject to sheet cavitation.
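The parametric representation rests on B-spline evaluation: given control points and a knot vector, de Boor's algorithm computes a curve point at any parameter value. A generic sketch (the paper's knot vectors and spline degrees are not specified in the abstract, so the ones used below are illustrative):

```python
import numpy as np

def de_boor(t, knots, ctrl, p):
    """Evaluate a degree-p B-spline curve at parameter t by de Boor's
    algorithm. knots: non-decreasing knot vector; ctrl: control points
    (scalars or arrays); p: spline degree."""
    # locate the knot span containing t, clamped to a valid range
    k = int(np.searchsorted(knots, t, side='right')) - 1
    k = min(max(k, p), len(ctrl) - 1)
    # working copy of the p+1 control points that influence this span
    d = [np.asarray(ctrl[j], float) for j in range(k - p, k + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            i = j + k - p
            denom = knots[i + p - r + 1] - knots[i]
            alpha = 0.0 if denom == 0 else (t - knots[i]) / denom
            d[j] = (1 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]
```

Because each curve point is a convex combination of a few control points, moving a control point reshapes the pitch or chord distribution locally and smoothly, which is exactly what makes B-splines convenient design variables for an optimizer.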
APA, Harvard, Vancouver, ISO, and other styles
7

Agarwal, Shubham, Laurent Gicquel, Florent Duchaine, Nicolas Odier, Jérôme Dombard, Damien Bonneau, and Michel Slusarz. "Autonomous Large Eddy Simulations Setup for Cooling Hole Shape Optimization." In ASME Turbo Expo 2021: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/gt2021-59196.

Full text
Abstract:
Film cooling is a common technique to manage the thermal environment of turbine blades. The geometry of the holes used to generate the cooling film is known to play a very important role in thermal performance, and finding the most optimized shape involves rigorous experimental as well as numerical investigations to probe the many parameters at play. For the current study, an automatic optimization tool is developed and assessed for its capability to perform hole shape optimization based on Large Eddy Simulation (LES) predictions. The geometry known as the shaped cooling hole is chosen as the baseline for this optimization process. Relying on response surface evaluation based on a reduced-model approach, a Design of Experiments (DOE) method probes a discrete set of values from the parameter space used to define the shaped cooling hole. At first, only two of the seven parameters defining the hole shape are chosen. This is followed by the automatic generation of the hole geometry, the corresponding computational domain, and the associated meshes. Once the geometries and meshes are created, the numerical setup is completed autonomously for each case, including a first guess of the flow field to speed up convergence of the simulation towards an exploitable solution. Finally, the LES flow prediction is used to evaluate the discrete value of the problem response function, which then feeds the reduced-model construction from which the optimization is derived.
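The reduced model that a DOE-driven optimization builds from a handful of expensive LES evaluations is typically a least-squares response surface over the design parameters; a generic quadratic sketch for two parameters (the paper's actual parameters, objective, and surrogate form are not given in the abstract):

```python
import numpy as np

def fit_quadratic_response_surface(params, responses):
    """Fit a full quadratic response surface r(x1, x2) by least squares over
    DOE sample points. params: (n, 2) design-parameter values; responses: (n,)
    evaluated objective values (e.g. one LES result per DOE point)."""
    x1, x2 = params[:, 0], params[:, 1]
    # model terms: 1, x1, x2, x1*x2, x1^2, x2^2
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, responses, rcond=None)

    def predict(p1, p2):
        return (coef[0] + coef[1] * p1 + coef[2] * p2
                + coef[3] * p1 * p2 + coef[4] * p1**2 + coef[5] * p2**2)
    return predict
```

Once fitted, the surrogate is cheap to evaluate everywhere in the parameter space, so the optimizer explores it instead of launching a new LES for every candidate shape.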
APA, Harvard, Vancouver, ISO, and other styles
8

Huang, Weifeng, Nima Rafibakhsh, Matthew I. Campbell, and Christopher Hoyle. "Product Based Sequence Evaluation for Automated Assembly Planning." In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/detc2017-68298.

Full text
Abstract:
Assembly time estimation is a key factor in evaluating the performance of the assembly process. The overall goal of this research is to develop a method to automatically estimate product assembly time from the tessellated model. This paper proposes dividing an assembly operation into four actions: a) part movement, b) part installation, c) secure operations, and d) subassembly rotations. Four predictive models are built to estimate these action times from input features that can be obtained automatically from the tessellated model. To estimate the four operation times, a design of experiments is applied to collect data from physical assembly experiments performed on products representative of common assembly processes. The Box-Behnken design (BBD) is an experimental design supporting response surface methodology, used to interpret and estimate a prediction model for the four operations. With the experimental data, a stepwise regression method is used to estimate the predictive mean time. A Gaussian Process (GP) model is then applied with this regression as its mean function for more accurate prediction. The confidence of the prediction is given by the GP predictive confidence interval (CI), so the uncertainty of the prediction can be quantified by the designer and used to evaluate the risk of an assembly sequence design. An Artificial Neural Network (ANN) is also used in this study for comparison with the regression and GP models. A case study of pump assembly time estimation is conducted. The predicted times from the regression, GP, and ANN models are compared with the Design for Assembly (DFA) method to verify that the predictions are reasonable. The results show that the proposed method gives good predictions for most of the assembly tasks. Although errors exist in some of the tasks, the accuracy of the models can be improved with more user feedback, since model quality depends on the amount of training data.
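The combination of a regression mean with a GP on the residuals, as the abstract describes, can be sketched compactly. This is a rough one-feature illustration, not the paper's implementation: the feature values, assembly times, kernel hyperparameters, and noise level are all invented.

```python
import numpy as np

def rbf(X1, X2, length=1.0, var=1.0):
    # Squared-exponential covariance between two 1-D input sets.
    d = X1[:, None] - X2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # hypothetical feature, e.g. part count
y = np.array([4.1, 6.2, 7.9, 10.3, 12.0])  # hypothetical assembly times, seconds

# Step 1: regression supplies the predictive mean (here ordinary least squares;
# the paper uses stepwise regression over several features).
Phi = np.column_stack([np.ones_like(X), X])
beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
resid = y - Phi @ beta

# Step 2: a GP models the residuals around that mean, with a small noise term.
noise = 1e-2
K = rbf(X, X) + noise * np.eye(len(X))
Xs = np.array([2.5])                        # query point
Ks = rbf(Xs, X)
alpha = np.linalg.solve(K, resid)
mean = np.array([1.0, 2.5]) @ beta + Ks @ alpha
var = rbf(Xs, Xs) + noise - Ks @ np.linalg.solve(K, Ks.T)
ci = 1.96 * np.sqrt(np.diag(var))           # 95% predictive half-width
```

The CI is what lets a designer quantify the risk of an assembly sequence: a wide interval flags a task where the model has seen little similar training data.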
APA, Harvard, Vancouver, ISO, and other styles
9

Cai, Shengze, Zhicheng Wang, Chryssostomos Chryssostomidis, and George Em Karniadakis. "Heat Transfer Prediction With Unknown Thermal Boundary Conditions Using Physics-Informed Neural Networks." In ASME 2020 Fluids Engineering Division Summer Meeting collocated with the ASME 2020 Heat Transfer Summer Conference and the ASME 2020 18th International Conference on Nanochannels, Microchannels, and Minichannels. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/fedsm2020-20159.

Full text
Abstract:
Simulating convective heat transfer using traditional numerical methods requires explicit definition of thermal boundary conditions on all boundaries of the domain, which is almost impossible to fulfill in real applications. Here, we address this ill-posed problem using machine learning techniques by assuming that we have some extra measurements of the temperature at a few locations in the domain, not necessarily located on the boundaries with the unknown thermal boundary condition. In particular, we employ physics-informed neural networks (PINNs) to represent the velocity and temperature fields while simultaneously enforcing the Navier-Stokes and energy equations at random points in the domain. In PINNs, all differential operators are computed using automatic differentiation, hence avoiding discretization in either space or time. The loss function is composed of multiple terms, including the mismatch in the velocity and temperature data, the boundary and initial conditions, and the residuals of the Navier-Stokes and energy equations. Here, we develop a data-driven strategy based on PINNs to infer the temperature field in the prototypical problem of convective heat transfer in flow past a cylinder. We assume that we have just a couple of temperature measurements on the cylinder surface and a couple more in the wake region, but that the thermal boundary condition on the cylinder surface is totally unknown. Upon training the PINN, we can discover the unknown boundary condition while simultaneously inferring the temperature field everywhere in the domain, with less than 5% error in the Nusselt number prediction. To assess the performance of the PINN, we carried out a high-fidelity simulation of the same heat transfer problem (with known thermal boundary conditions) using the high-order spectral/hp-element method (SEM) and quantitatively evaluated the accuracy of the PINN's prediction with respect to SEM. We also propose a method to adaptively select sensor locations in order to minimize the number of required temperature measurements while increasing the accuracy of the heat transfer inference.
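The abstract's point that PINNs compute differential operators by automatic differentiation, rather than by discretizing space or time, rests on a simple mechanism. A minimal sketch of the idea using forward-mode dual numbers (not the PINN framework itself, which would use a library such as TensorFlow or PyTorch):

```python
import math

class Dual:
    """Number carrying a value and an exact first derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        # Product rule propagates derivatives exactly, with no grid or step size.
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(x):
    # Chain rule for sin: d/dx sin(u) = cos(u) * u'.
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def derivative(f, x):
    # Seed the input with derivative 1 and read the derivative off the output.
    return f(Dual(x, 1.0)).dot

# d/dx [x * sin(x)] = sin(x) + x*cos(x), evaluated exactly at x = 1.
g = derivative(lambda x: x * sin(x), 1.0)
```

In a PINN, the same machinery (in reverse mode, over the network's weights and inputs) supplies the velocity and temperature derivatives that enter the Navier-Stokes and energy residual terms of the loss.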
APA, Harvard, Vancouver, ISO, and other styles
10

Ramp, Isaac J., and Douglas L. Van Bossuyt. "Toward an Automated Model-Based Geometric Method of Representing Function Failure Propagation Across Uncoupled Systems." In ASME 2014 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/imece2014-36514.

Full text
Abstract:
The complex engineered systems being designed today must be developed rapidly and accurately to satisfy customer needs while accomplishing required functions with a minimum number of failures. Failure analysis in the conceptual stage of design has expanded in recent years to account for failures in functional modeling. However, function failure propagation across normally uncoupled functions and subsystems has not been fully addressed. A functional model-based geometric method of predicting and mitigating functional failure propagation across systems that are uncoupled during nominal use cases is presented. Geometric relationships between uncoupled functions are established to serve as failure propagation flow paths. Mitigation options are developed based upon the geometric relationships, and a path toward physical functional layout is provided to limit failure propagation across uncoupled subsystems. The model-based geometric method of predicting and mitigating functional failure propagation across uncoupled engineered systems guides designers toward improved protection and isolation of cross-subsystem failure propagation.
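The core idea, that functions uncoupled in the functional model can still pass failures to one another when their physical embodiments are close, can be illustrated loosely. This is not the authors' tool: the layout, the proximity threshold, and the function names are invented for the sketch.

```python
import math
from collections import deque

positions = {                       # hypothetical physical layout of function embodiments
    "pump": (0.0, 0.0), "valve": (1.0, 0.0),
    "sensor": (0.9, 0.4), "display": (5.0, 5.0),
}
functional_edges = {("pump", "valve")}   # nominal (coupled) functional flows
threshold = 1.0                          # proximity that creates a failure path

def dist(a, b):
    return math.dist(positions[a], positions[b])

names = list(positions)
# Geometric propagation paths: any pair within the threshold, coupled or not.
geom_edges = {(a, b) for i, a in enumerate(names) for b in names[i + 1:]
              if dist(a, b) <= threshold}

def failure_reach(start):
    """Functions reachable from a failed function via functional or geometric paths."""
    adj = {n: set() for n in names}
    for a, b in functional_edges | geom_edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, queue = {start}, deque([start])
    while queue:
        for m in adj[queue.popleft()] - seen:
            seen.add(m)
            queue.append(m)
    return seen - {start}

reach = failure_reach("pump")   # includes "sensor" despite no functional coupling
```

Mitigation then amounts to editing the layout or adding barriers so that unwanted geometric edges (here, pump-to-sensor) drop below the threshold.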
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Automatic Function Prediction"

1

Wideman, Jr., Robert F., Nicholas B. Anthony, Avigdor Cahaner, Alan Shlosberg, Michel Bellaiche, and William B. Roush. Integrated Approach to Evaluating Inherited Predictors of Resistance to Pulmonary Hypertension Syndrome (Ascites) in Fast Growing Broiler Chickens. United States Department of Agriculture, December 2000. http://dx.doi.org/10.32747/2000.7575287.bard.

Full text
Abstract:
Background: PHS (pulmonary hypertension syndrome, ascites syndrome) is a serious cause of loss in the broiler industry and a prime example of an undesirable side effect of successful genetic development that may be deleteriously manifested by factors in the environment of growing broilers. Continuous and pinpointed selection for rapid growth in broilers has led to higher oxygen demand and consequently to more frequent manifestation of an inherent cardiopulmonary incapability to sufficiently oxygenate the arterial blood. The multifaceted causes and modifiers of PHS make research into finding solutions to the syndrome a complex and multi-threaded challenge. This research used several approaches to better understand the development of PHS and to probe possible means of monitoring and increasing resistance to the syndrome.

Research Objectives: (1) To evaluate the growth dynamics of individuals within breeding stocks and their correlation with individual susceptibility or resistance to PHS; (2) to compile data on diagnostic indices found in this work to be predictive for PHS, during exposure to experimental protocols known to trigger PHS; (3) to conduct detailed physiological evaluations of cardiopulmonary function in broilers; (4) to compile data on growth dynamics and other diagnostic indices in existing lines selected for susceptibility or resistance to PHS; (5) to integrate growth dynamics and other diagnostic data within appropriate statistical procedures to provide geneticists with predictive indices that characterize resistance or susceptibility to PHS.

Revisions: In the first year, the US team acquired the costly Peckode weigh platform / individual bird I.D. system that was to provide continuous (several times each day), automated weighing of birds for comprehensive monitoring of growth dynamics. However, the data generated were found to be inaccurate and irreproducible, making its use implausible. Henceforth, weighing was manual; this highly labor-intensive work precluded some of the original objectives of applying such a growth-dynamics strategy in selection procedures involving thousands of birds.

Major conclusions, solutions, achievements: 1. Healthy broilers were found to have greater oscillations in growth velocity and acceleration than PHS-susceptible birds, confirming the original hypothesis that such differences occur. 2. Growth rate in the first week is higher in PHS-susceptible than in PHS-resistant chicks; an artificial neural network accurately distinguished the two groups based on growth patterns in this period. 3. In the US, the unilateral pulmonary occlusion technique was used in collaboration with a major broiler breeding company to create a commercial broiler line that is highly resistant to PHS induced by fast growth and low ambient temperatures. 4. In Israel, lines were obtained by genetic selection on PHS mortality after cold exposure in a dam-line population comprising 85 sire families. The wide range of PHS incidence per family (0-50%), high heritability (about 0.6), and the results in cold-challenged progeny suggested a highly effective and relatively easy means of selection for PHS resistance. 5. The best minimally invasive diagnostic indices for prediction of PHS resistance were found to be oximetry, hematocrit values, heart rate, and electrocardiographic (ECG) lead II waves. Some differences in results were found between the US and Israeli teams, probably reflecting genetic differences in the broiler strains used in the two countries; for instance, the US team found the S-wave amplitude to predict PHS susceptibility well, whereas the Israeli team found the P-wave amplitude to be a better predictor. 6. Comprehensive physiological studies further increased knowledge of the development of PHS: cardiopulmonary characteristics of pre-ascitic birds, pulmonary arterial wedge pressures, the hypotension/kidney response, and pulmonary hemodynamic responses to vasoactive mediators were all examined in depth.

Implications, scientific and agricultural: Substantial progress has been made in understanding the genetic and environmental factors involved in PHS and their interaction. The two teams each successfully developed different selection programs, by surgical means and by divergent selection under cold challenge. Progress and success of the programs were monitored using the in-depth estimates that this research produced of the reliability and value of non-invasive predictive parameters. These findings helped corroborate the validity of practical means to improve PHS resistance through research-based programs of selection.
APA, Harvard, Vancouver, ISO, and other styles
