Dissertations / Theses on the topic 'ANNs'

To see the other types of publications on this topic, follow the link: ANNs.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'ANNs.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Ghosh, Ranadhir. "A Novel Hybrid Learning Algorithm For Artificial Neural Networks." Griffith University. School of Information Technology, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030808.162355.

Full text
Abstract:
The last few decades have witnessed the use of artificial neural networks (ANNs) in many real-world applications, offering an attractive paradigm for a broad range of adaptive complex systems. In recent years ANNs have enjoyed a great deal of success and have proven useful in a wide variety of pattern recognition and feature extraction tasks; examples include optical character recognition, speech recognition and adaptive control, to name a few. To keep pace with the huge demand in diversified application areas, researchers have proposed many different kinds of ANN architecture and learning type to meet varying needs. A novel hybrid learning approach for training a feed-forward ANN is proposed in this thesis. The approach combines evolutionary algorithms with matrix solution methods, such as singular value decomposition and Gram-Schmidt orthogonalisation, to obtain optimal weights for the hidden and output layers. The proposed hybrid method applies an evolutionary algorithm in the first layer and the least squares (LS) method in the second layer of the ANN. The methodology also finds the optimum number of hidden neurons using a hierarchical combination structure for weights and architecture. A learning algorithm has many facets that can make it suitable for a particular application area; there are often trade-offs between classification accuracy and time complexity, and the problem of memory complexity remains. This research explores all of these facets of the proposed algorithm: classification accuracy, convergence properties, generalisation ability, and time and memory complexity.
APA, Harvard, Vancouver, ISO, and other styles
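The two-stage scheme the abstract describes — evolve the hidden-layer weights with an evolutionary algorithm, then solve the output layer in closed form by least squares — can be sketched as follows. This is a minimal illustration on an invented toy problem, not the thesis's actual algorithm or benchmarks; the population size, mutation scale and data set are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR-quadrant classification (hypothetical stand-in for the thesis's benchmarks).
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

def hidden_out(W, X):
    # First layer: sigmoid hidden units whose weights are evolved, not backpropagated.
    H = 1.0 / (1.0 + np.exp(-X @ W))
    return np.hstack([H, np.ones((X.shape[0], 1))])  # bias column for the linear layer

def fitness(W):
    # Second layer solved in closed form by least squares (the "LS" step).
    H = hidden_out(W, X)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return np.mean((H @ beta - y) ** 2), beta

n_hidden = 8
pop = [rng.normal(0, 1, size=(2, n_hidden)) for _ in range(30)]
for gen in range(40):
    elite = sorted(pop, key=lambda W: fitness(W)[0])[:10]
    # Mutate the elite to form the next generation (simple (mu+lambda)-style EA).
    pop = elite + [W + rng.normal(0, 0.3, W.shape) for W in elite for _ in range(2)]

best = min(pop, key=lambda W: fitness(W)[0])
_, beta = fitness(best)
preds = (hidden_out(best, X) @ beta > 0.5).astype(float)
acc = np.mean(preds == y)
print(round(float(acc), 2))
```

Because the output layer is linear in the hidden activations, the least-squares step gives the exact optimum for that layer at every fitness evaluation, so the EA only has to search the hidden-layer weight space.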
2

Ghosh, Ranadhir. "A Novel Hybrid Learning Algorithm For Artificial Neural Networks." Thesis, Griffith University, 2003. http://hdl.handle.net/10072/365961.

Full text
Abstract:
The last few decades have witnessed the use of artificial neural networks (ANNs) in many real-world applications, offering an attractive paradigm for a broad range of adaptive complex systems. In recent years ANNs have enjoyed a great deal of success and have proven useful in a wide variety of pattern recognition and feature extraction tasks; examples include optical character recognition, speech recognition and adaptive control, to name a few. To keep pace with the huge demand in diversified application areas, researchers have proposed many different kinds of ANN architecture and learning type to meet varying needs. A novel hybrid learning approach for training a feed-forward ANN is proposed in this thesis. The approach combines evolutionary algorithms with matrix solution methods, such as singular value decomposition and Gram-Schmidt orthogonalisation, to obtain optimal weights for the hidden and output layers. The proposed hybrid method applies an evolutionary algorithm in the first layer and the least squares (LS) method in the second layer of the ANN. The methodology also finds the optimum number of hidden neurons using a hierarchical combination structure for weights and architecture. A learning algorithm has many facets that can make it suitable for a particular application area; there are often trade-offs between classification accuracy and time complexity, and the problem of memory complexity remains. This research explores all of these facets of the proposed algorithm: classification accuracy, convergence properties, generalisation ability, and time and memory complexity.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information Technology
Full Text
APA, Harvard, Vancouver, ISO, and other styles
3

Lukashev, A. "Basics of artificial neural networks (ANNs)." Thesis, Київський національний університет технологій та дизайну, 2018. https://er.knutd.edu.ua/handle/123456789/11353.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Al-Bulushi, Nabil. "Predicting reservoir properties using artificial neural networks (ANNs)." Thesis, Imperial College London, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.498402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Taylor, Brent S. "Utilizing ANNs to Improve the Forecast for Tire Demand." Ohio University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1420656622.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cogo, Giovanni (b. 1989). "MultiLayer ANNs: predicting the S&P 500 index." Master's Degree Thesis, Università Ca' Foscari Venezia, 2016. http://hdl.handle.net/10579/7670.

Full text
Abstract:
Stock prediction with artificial neural network (ANN) models has been used extensively by researchers as it provides better results than other techniques. This paper presents an ANN approach to forecast the S&P 500 stock index price. First of all, an ANN-based variable selection model is presented. This model explores the relationship between some initial input variables and the closing price of the S&P 500 stock index. Furthermore, this research investigates how the training algorithm, as well as the number of neurons in the hidden layer and the distribution of the training data, affect the accuracy of the network.
APA, Harvard, Vancouver, ISO, and other styles
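The design the abstract outlines — feed lagged closes into a small feed-forward network and vary the hidden-layer size — can be sketched as below. The price series, lag count, learning rate and network sizes here are all invented for illustration; the thesis works with real S&P 500 data and its own variable-selection step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for index closes (the thesis uses real S&P 500 data).
t = np.arange(300)
price = 100 + 0.05 * t + 2 * np.sin(t / 10) + rng.normal(0, 0.3, t.size)

# Inputs: the last 5 closes; target: the next close (a simple lag-based design).
lags = 5
X = np.stack([price[i : i + lags] for i in range(len(price) - lags)])
y = price[lags:].reshape(-1, 1)

# Normalise - ANNs train poorly on raw price levels.
Xm, Xs, ym, ys = X.mean(), X.std(), y.mean(), y.std()
Xn, yn = (X - Xm) / Xs, (y - ym) / ys
split = 250  # chronological train/test split

def train_mlp(hidden, epochs=2000, lr=0.05):
    """One-hidden-layer MLP trained by plain batch backpropagation."""
    W1 = rng.normal(0, 0.5, (lags, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));    b2 = np.zeros(1)
    Xt, yt = Xn[:split], yn[:split]
    for _ in range(epochs):
        H = np.tanh(Xt @ W1 + b1)
        out = H @ W2 + b2
        d_out = 2 * (out - yt) / len(yt)          # dMSE/dout
        d_H = (d_out @ W2.T) * (1 - H ** 2)       # backprop through tanh
        W2 -= lr * H.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * Xt.T @ d_H;  b1 -= lr * d_H.sum(0)
    H = np.tanh(Xn[split:] @ W1 + b1)
    pred = (H @ W2 + b2) * ys + ym
    return float(np.sqrt(np.mean((pred - y[split:]) ** 2)))  # test RMSE, price units

# The thesis studies how hidden-layer size affects accuracy; compare a few here.
for h in (2, 8):
    print(h, round(train_mlp(h), 3))
```

Holding the data split fixed while sweeping the hidden-layer size (and, in the thesis, the training algorithm and data distribution) isolates each factor's effect on out-of-sample accuracy.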
7

Stiubhart, Domhnall Uilleam. "An Gaidheal, a'Ghaidhlig agus a'Ghaidhealtachd anns an t-seachdamh linn deug." Thesis, University of Edinburgh, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.543577.

Full text
Abstract:
In the first half of the thesis, I consider the situation in Ireland: why the Gaels there adopted an ideology so firmly founded on faith and fatherland, and what literary characteristics arose as a result that are not found on this side of the Sea of Moyle. I then turn to the history on this side, attempting to shed more light on the events of the crucially important thirty years between the implementation of the Statutes of Iona and the outbreak of the Wars of the Three Kingdoms from the late 1630s onwards. I examine the causes, both short-term and long-term, behind the socio-economic changes that were steadily spreading throughout those years. It is these changes, and their effect on the world of the Gaels from the Restoration of Charles II onwards, that occupy the second half of the work. The Gael was taking ever more notice of the wider world, but at the same time, indeed largely as a result of those very changes, he clung ever more tightly to the ideals of the old world. Behind this lies a growing anxiety about the years to come, a time, it appeared, in which the Gaels would have no role at all, and about the decline of the heroic society that formed the foundation of their self-image, particularly the self-image of the men. At least in part, it was this anxiety that made the Stuart cause so popular among the Gaels in the first half of the eighteenth century. I have seen fit to preserve two passages, on the growth of plundering, appended to the thesis as an appendix.
APA, Harvard, Vancouver, ISO, and other styles
8

Engin, Seref Naci. "Condition monitoring of rotating machinery using wavelets as pre-processor to ANNs." Thesis, University of Hertfordshire, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Cheng, Xiaoyu. "Applications of Artificial Neural Networks (ANNs) in exploring materials property-property correlations." Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/7968.

Full text
Abstract:
The discovery of materials property-property correlations usually requires prior knowledge or serendipity, and the process can be time-consuming, costly and labour-intensive. Artificial neural networks (ANNs), on the other hand, are intelligent and scalable modelling techniques that have been used extensively to predict properties from materials' composition or processing parameters, but they are seldom used to explore materials property-property correlations. The work presented in this thesis employed ANN combinatorial searches to explore the correlations between different materials properties, through which 'known' correlations are verified and 'unknown' correlations are revealed. An evaluation criterion is proposed and demonstrated to be useful in identifying non-trivial correlations. The work also extends the application of ANNs to data correction, property prediction and the identification of variables' contributions. A systematic ANN protocol has been developed, tested against the known correlating equations of elastic properties and the experimental data, and found to be reliable and effective for correcting suspect data in a complicated situation where no prior knowledge exists. Moreover, the hardness increments of pure metals due to HPT are accurately predicted from shear modulus, melting temperature and Burgers vector, and the first two variables are identified as having the largest impact on hardening. Finally, a combined ANN-SR (symbolic regression) method is proposed to yield parsimonious correlating equations by ruling out redundant variables through the partial-derivatives method and the connection weight approach, both based on analysis of the ANNs' weight vectors. By applying this method, two simple equations are obtained that are at least as accurate as other models in providing a rapid estimation of the enthalpies of vaporization of compounds.
APA, Harvard, Vancouver, ISO, and other styles
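The connection weight approach the abstract mentions ranks input variables of a trained network by summing, for each input, the products of its input-to-hidden weights with the corresponding hidden-to-output weights. A minimal sketch, with invented weights rather than any network from the thesis:

```python
import numpy as np

# Connection weight approach: the contribution of input i is the sum over
# hidden units h of W1[i, h] * W2[h], read off a trained single-output ANN.
# The weight matrices below are illustrative, not taken from the thesis.
W1 = np.array([[ 0.8, -0.2,  0.5],
               [ 0.1,  0.05, -0.1],
               [-0.6,  0.7,  0.4]])   # 3 inputs x 3 hidden units
W2 = np.array([0.9, -0.3, 0.6])       # 3 hidden units -> 1 output

contrib = W1 @ W2                      # per-input signed contribution
ranking = np.argsort(-np.abs(contrib)) # most influential input first
print(contrib.round(3), ranking)
```

Inputs with near-zero summed contribution are candidates for removal, which is how the abstract's ANN-SR method prunes redundant variables before symbolic regression.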
10

Zhang, Yiming. "Applications of artificial neural networks (ANNs) in several different materials research fields." Thesis, Queen Mary, University of London, 2010. http://qmro.qmul.ac.uk/xmlui/handle/123456789/362.

Full text
Abstract:
In materials science, the traditional methodological framework is the identification of the composition-processing-structure-property causal pathways that link hierarchical structure to properties. However, all the properties of a material derive ultimately from structure and bonding, and so they are interrelated to varying degrees. The work presented in this thesis employed artificial neural networks (ANNs) to explore the correlations between different material properties, with several examples from different fields: 1) verifying and quantifying the known correlations between physical parameters and the solid solubility of alloy systems, first discovered by Hume-Rothery in the 1930s; 2) exploring unknown cross-property correlations without investigating complicated structure-property relationships, exemplified by i) predicting the structural stability of perovskites from bond-valence-based tolerance factors tBV, and predicting the formability of perovskites using A-O and B-O bond distances, and ii) correlating polarizability with other properties, such as first ionization potential, melting point, heat of vaporization and specific heat capacity; and 3) in the process of discovering unanticipated relationships between combinations of material properties, showing that ANNs are also useful for highlighting unusual data points in handbooks, tables and databases that deserve to have their veracity inspected. By applying this method, numerous errors in handbooks were found, and a systematic, intelligent and potentially automatic method for detecting errors in handbooks was thus developed. Through these four distinct examples, drawn from three aspects of ANN capability, different ways in which ANNs can contribute to progress in materials science have been explored. These approaches are novel and deserve to be pursued as part of the newer methodologies that are beginning to underpin materials research.
APA, Harvard, Vancouver, ISO, and other styles
11

Andrews, Robert. "An automated rule refinement system." Thesis, Queensland University of Technology, 2003. https://eprints.qut.edu.au/15788/1/Robert_Andrews_Thesis.pdf.

Full text
Abstract:
Artificial neural networks (ANNs) are essentially a 'black box' technology. The lack of an explanation component prevents the full and complete exploitation of this form of machine learning. During the mid-1990s the field of 'rule extraction' emerged. Rule extraction techniques attempt to derive a human-comprehensible explanation structure from a trained ANN. Andrews et al. (1995) proposed the following reasons for extending the ANN paradigm to include a rule extraction facility: * provision of a user explanation capability; * extension of the ANN paradigm to 'safety critical' problem domains; * software verification and debugging of ANN components in software systems; * improving the generalization of ANN solutions; * data exploration and induction of scientific theories; * knowledge acquisition for symbolic AI systems. An allied area of research is 'rule refinement'. In rule refinement, an initial rule base (i.e. what may be termed 'prior knowledge') is inserted into an ANN by prestructuring some or all of the network architecture, weights, activation functions, learning rates, etc. The rule refinement process then proceeds in the same way as normal rule extraction, viz. (1) train the network on the available data set(s); and (2) extract the 'refined' rules. Very few ANN techniques can act as a true rule refinement system. Existing techniques, such as KBANN (Towell & Shavlik, 1993), are limited in that the rule base used to initialize the network must be nearly complete, and the refinement process is limited to modifying antecedents. These limitations severely restrict their applicability to real-world problem domains. Ideally, a rule refinement technique should be able to deal with incomplete initial rule bases, modify antecedents, remove inaccurate rules, and add new knowledge by generating new rules.
The motivation for this research project was to develop such a rule refinement system and to investigate its efficacy when applied to both nearly complete and incomplete problem domains. The premise behind rule refinement is that the refined rules represent the actual domain theory better than the initial domain theory used to initialize the network. The hypotheses tested in this research include: that the utilization of prior domain knowledge will speed up network training, produce smaller trained networks, produce more accurate trained networks, and bias the learning phase towards a solution that 'makes sense' in the problem domain. Geva, Malmstrom & Sitte (1998) described the Local Cluster (LC) neural net and showed that the LC network was able to learn and approximate complex functions to a high degree of accuracy. The hidden layer of the LC network comprises basis functions (the local cluster units), which are composed of sigmoid-based 'ridge' functions. In the general form of the LC network the ridge functions can be oriented in any direction. We describe RULEX, a technique designed to provide an explanation component for its underlying Local Cluster ANN through the extraction of symbolic rules from the weights of the local cluster units of the trained ANN. RULEX exploits a feature of the Restricted Local Cluster (Geva, Andrews & Geva, 2002), namely axis-parallel ridge functions, that allows hyper-rectangular rules of the form IF ∀ 1 ≤ i ≤ n : xi ∈ [ xi lower , xi upper ] THEN pattern belongs to the target class to be easily extracted from the local functions that comprise the hidden layer of the LC network. RULEX is tested on 14 applications available in the public domain. RULEX results are compared with a leading machine learning technique, See5, with RULEX generally performing as well as See5 and in some cases outperforming it in predictive accuracy.
We describe RULEIN, a rule refinement technique that allows symbolic rules to be converted into the parameters that define local cluster functions. RULEIN allows existing domain knowledge to be captured in the architecture of an LC ANN, thus facilitating the first phase of the rule refinement paradigm. RULEIN is tested on a variety of artificial and real-world problems. Experimental results indicate that RULEIN satisfies the first requirement of a rule refinement technique by correctly translating a set of symbolic rules into an LC ANN that has the same predictive behaviour as the set of rules from which it was constructed. Experimental results also show that, where a strong domain theory exists, initializing an LC network using RULEIN generally speeds up network training and produces smaller, more accurate trained networks, with the trained network properly representing the underlying domain theory. Where only a weak domain theory exists, the same results are not always apparent. Experiments with the RULEIN / LC / RULEX rule refinement method show that it is able to remove inaccurate rules from the initial knowledge base, modify rules that are only partially correct, and learn new rules not present in the initial knowledge base. The combination of RULEIN / LC / RULEX is thus shown to be an effective rule refinement technique for use with a Restricted Local Cluster network.
APA, Harvard, Vancouver, ISO, and other styles
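The hyper-rectangular rule form quoted in the abstract — IF ∀ 1 ≤ i ≤ n : xi ∈ [xi lower, xi upper] THEN target class — is simply a conjunction of per-attribute interval tests. A minimal sketch of evaluating such a rule (the bounds here are made up, not rules extracted by RULEX):

```python
def matches(rule, x):
    """Hyper-rectangular rule of the RULEX form: the pattern x belongs to the
    target class iff every attribute lies within its [lower, upper] interval."""
    return all(lo <= xi <= hi for xi, (lo, hi) in zip(x, rule))

# Illustrative rule, as might be read off one local-cluster unit (invented bounds).
rule = [(0.2, 0.8), (0.0, 0.5)]
print(matches(rule, [0.5, 0.3]))  # inside the hyper-rectangle
print(matches(rule, [0.9, 0.3]))  # first attribute out of range
```

This interval form is what makes axis-parallel ridge functions attractive for extraction: each hidden unit's active region projects directly onto one such rule.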
12

Andrews, Robert. "An Automated Rule Refinement System." Queensland University of Technology, 2003. http://eprints.qut.edu.au/15788/.

Full text
Abstract:
Artificial neural networks (ANNs) are essentially a 'black box' technology. The lack of an explanation component prevents the full and complete exploitation of this form of machine learning. During the mid-1990s the field of 'rule extraction' emerged. Rule extraction techniques attempt to derive a human-comprehensible explanation structure from a trained ANN. Andrews et al. (1995) proposed the following reasons for extending the ANN paradigm to include a rule extraction facility: * provision of a user explanation capability; * extension of the ANN paradigm to 'safety critical' problem domains; * software verification and debugging of ANN components in software systems; * improving the generalization of ANN solutions; * data exploration and induction of scientific theories; * knowledge acquisition for symbolic AI systems. An allied area of research is 'rule refinement'. In rule refinement, an initial rule base (i.e. what may be termed 'prior knowledge') is inserted into an ANN by prestructuring some or all of the network architecture, weights, activation functions, learning rates, etc. The rule refinement process then proceeds in the same way as normal rule extraction, viz. (1) train the network on the available data set(s); and (2) extract the 'refined' rules. Very few ANN techniques can act as a true rule refinement system. Existing techniques, such as KBANN (Towell & Shavlik, 1993), are limited in that the rule base used to initialize the network must be nearly complete, and the refinement process is limited to modifying antecedents. These limitations severely restrict their applicability to real-world problem domains. Ideally, a rule refinement technique should be able to deal with incomplete initial rule bases, modify antecedents, remove inaccurate rules, and add new knowledge by generating new rules.
The motivation for this research project was to develop such a rule refinement system and to investigate its efficacy when applied to both nearly complete and incomplete problem domains. The premise behind rule refinement is that the refined rules represent the actual domain theory better than the initial domain theory used to initialize the network. The hypotheses tested in this research include: that the utilization of prior domain knowledge will speed up network training, produce smaller trained networks, produce more accurate trained networks, and bias the learning phase towards a solution that 'makes sense' in the problem domain. Geva, Malmstrom & Sitte (1998) described the Local Cluster (LC) neural net and showed that the LC network was able to learn and approximate complex functions to a high degree of accuracy. The hidden layer of the LC network comprises basis functions (the local cluster units), which are composed of sigmoid-based 'ridge' functions. In the general form of the LC network the ridge functions can be oriented in any direction. We describe RULEX, a technique designed to provide an explanation component for its underlying Local Cluster ANN through the extraction of symbolic rules from the weights of the local cluster units of the trained ANN. RULEX exploits a feature of the Restricted Local Cluster (Geva, Andrews & Geva, 2002), namely axis-parallel ridge functions, that allows hyper-rectangular rules of the form IF ∀ 1 ≤ i ≤ n : xi ∈ [ xi lower , xi upper ] THEN pattern belongs to the target class to be easily extracted from the local functions that comprise the hidden layer of the LC network. RULEX is tested on 14 applications available in the public domain. RULEX results are compared with a leading machine learning technique, See5, with RULEX generally performing as well as See5 and in some cases outperforming it in predictive accuracy.
We describe RULEIN, a rule refinement technique that allows symbolic rules to be converted into the parameters that define local cluster functions. RULEIN allows existing domain knowledge to be captured in the architecture of an LC ANN, thus facilitating the first phase of the rule refinement paradigm. RULEIN is tested on a variety of artificial and real-world problems. Experimental results indicate that RULEIN satisfies the first requirement of a rule refinement technique by correctly translating a set of symbolic rules into an LC ANN that has the same predictive behaviour as the set of rules from which it was constructed. Experimental results also show that, where a strong domain theory exists, initializing an LC network using RULEIN generally speeds up network training and produces smaller, more accurate trained networks, with the trained network properly representing the underlying domain theory. Where only a weak domain theory exists, the same results are not always apparent. Experiments with the RULEIN / LC / RULEX rule refinement method show that it is able to remove inaccurate rules from the initial knowledge base, modify rules that are only partially correct, and learn new rules not present in the initial knowledge base. The combination of RULEIN / LC / RULEX is thus shown to be an effective rule refinement technique for use with a Restricted Local Cluster network.
APA, Harvard, Vancouver, ISO, and other styles
13

Dilan, Askin Rasim. "Unstructured Road Recognition And Following For Mobile Robots Via Image Processing Using Anns." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612047/index.pdf.

Full text
Abstract:
For an autonomous outdoor mobile robot, the ability to detect the roads around it is a vital capability. Unstructured roads are among the toughest challenges for a mobile robot, in terms of both detection and navigation. Even though mobile robots use various sensors to interact with their environment, cameras are a comparatively low-cost and rich source of information whose potential should be fully utilized. This research systematically investigates the potential use of streaming camera images in detecting unstructured roads, focusing on methods employing Artificial Neural Networks (ANNs). An exhaustive test process is followed in which different kernel sizes and feature vectors are varied systematically, with training carried out via backpropagation in a feed-forward ANN. The thesis also claims a contribution in the creation of test data, where ground-truth images are created almost in real time by making use of the dexterity of human hands. Various road profiles, ranging from human-made unstructured roads to trails, are investigated. The output of the ANNs indicating road regions is validated against the vanishing point computed in the scene, and a heading vector is computed to keep the robot on the road. As a result, it is shown that, even though a robot cannot fully rely on camera images for heading computation as proposed, image-based heading computation can provide useful assistance to the other sensors present on a mobile robot.
APA, Harvard, Vancouver, ISO, and other styles
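The final step the abstract describes — turning the ANN's road/non-road output into a heading that keeps the robot on the road — can be illustrated with a toy stand-in: take the centroid of "road" pixels in the lower part of the image and steer toward it. The mask below is synthetic, not an ANN output, and this centroid rule is a simplification of the thesis's vanishing-point-validated heading.

```python
import numpy as np

# Synthetic road mask: True where the (hypothetical) ANN labelled a patch "road".
h, w = 60, 80
mask = np.zeros((h, w), dtype=bool)
mask[:, 45:65] = True            # road region drifting right of image centre

rows = mask[h // 2 :]             # lower half of the image, nearest the robot
cols = np.nonzero(rows)[1]        # column indices of road pixels
centroid = cols.mean()
# Normalised steering command: negative steers left, positive steers right.
steer = (centroid - (w - 1) / 2) / (w / 2)
print(round(float(steer), 3))
```

A positive value here means the road mass lies right of the image centre, so the heading vector should rotate the robot rightward; the thesis cross-checks such headings against the scene's vanishing point before trusting them.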
14

Ersayın, Deniz Tayfur Gökmen. "Studying Seepage In A Body Of Earth-Fill Dam By (Artifical Neural Networks) Anns/." [s.l.]: [s.n.], 2006. http://library.iyte.edu.tr/tezler/master/insaatmuh/T000350.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Alqahtani, Ayedh Mohammad A. "Utilising artificial neural networks (ANNs) towards accurate estimation of life-cycle costs for construction projects." Thesis, Curtin University, 2015. http://hdl.handle.net/20.500.11937/2354.

Full text
Abstract:
This study aimed to establish a new Life Cycle Cost (LCC) model for construction projects using Artificial Neural Networks (ANNs). Survey research and Cost Significant Items (CSIs) methods were used to identify the most important cost and non-cost factors affecting the estimation of LCC; these factors serve as the model's inputs. The results indicated that the neural network models were able to estimate cost with an average accuracy of between 91% and 95%.
APA, Harvard, Vancouver, ISO, and other styles
16

Payne, Russell. "The application of artificial neural networks to combustion and heat exchanger systems." Thesis, University of South Wales, 2005. https://pure.southwales.ac.uk/en/studentthesis/the-application-of-artificial-neural-networks-to-combustion-and-heat-exchanger-systems(684a7758-1b1c-4560-8df1-e482b42ef8a2).html.

Full text
Abstract:
The operation of large industrial-scale combustion systems, such as furnaces and boilers, is increasingly dictated by emissions legislation and requirements for improved efficiency. However, it can be exceedingly difficult and time-consuming to gather the information required to improve original designs. Mathematical modelling techniques have led to the development of sophisticated furnace representations capable of representing combustion parameters. While such data is ideal for design purposes, the current power of computing systems tends to produce simulation times too great to embed the models in online control strategies. The work presented in this thesis offers the possibility of replacing such mathematical models with suitably trained Artificial Neural Networks (ANNs), since these can compute the same outputs in a fraction of the model's runtime, suggesting they could provide an ideal alternative in online control strategies. Furthermore, artificial neural networks can approximate and extrapolate, making them extremely robust when encountering conditions not met previously. In addition to improving operational procedures, another approach to increasing furnace system efficiency is to minimise the waste heat energy produced during the combustion process. One very successful method involves placing a heat exchanger system in the exiting flue gas stream, since this is the main source of heat loss. It can be exceptionally difficult to determine which heat exchanger is best suited to a particular application, and it can prove an even more arduous task to control it effectively. Furthermore, many factors alter the performance characteristics of a heat exchanger over its operational life, such as fouling or unexpected systematic faults.
This thesis investigates the modelling of an experimental heat exchanger system via artificial neural networks with a view to aiding the design and selection process. Moreover, the work presented offers a means to control heat exchangers subject to varying operating conditions more effectively, thus promoting savings in both waste energy and time.
APA, Harvard, Vancouver, ISO, and other styles
17

Chatzopoulos, Athanasios. "Modelling of turbulent combustion using the Rate-Controlled Constrained Equilibrium (RCCE)-Artificial Neural Networks (ANNs) approach." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/30782.

Full text
Abstract:
The objective of this work is the formulation, development and application of Artificial Neural Networks (ANNs) to turbulent combustion problems, for the representation of reduced chemical kinetics. Although ANNs are general and robust tools for simulating dynamical systems within reasonable computational times, their employment in combustion has been limited. In previous studies, ANNs were trained with data collected from either the test case of interest or a similar problem. To overcome this training drawback, the ANNs in this work are trained with samples generated from an abstract problem, the laminar flamelet equation, allowing the simulation of a wide range of problems. To achieve this, the first step is to reduce a detailed chemical mechanism to a manageable number of variables, a task performed by the Rate-Controlled Constrained Equilibrium (RCCE) reduction method. The training data sets consist of the compositions of points with random mixture fraction, recorded from flamelets with random strain rates. The training, testing and simulation of the ANNs are carried out via the Self-Organising Map - Multilayer Perceptrons (SOM-MLPs) approach. The SOM-MLP combination takes advantage of a reference map and splits the chemical space into domains of chemical similarity, allowing a separate MLP to be employed for each sub-domain. The RCCE-ANN tabulation is used to replace conventional chemistry integration methods in RANS computations and LES of real turbulent flames. In the RANS context the interaction of turbulence and combustion is described by a PDF method utilising stochastic Lagrangian particles; in LES the sub-grid PDF is represented by an ensemble of Eulerian stochastic fields. Test cases include non-premixed and partially premixed turbulent flames in both non-piloted and piloted burner configurations.
The comparison between the RCCE-ANN method, real-time RCCE and experimental measurements shows good overall agreement in reproducing the flame structure, together with a significant CPU-time speed-up for the RCCE-ANN method.
APA, Harvard, Vancouver, ISO, and other styles
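The SOM-MLP idea described in the abstract — partition the chemical space into domains of similarity and give each domain its own network — can be sketched structurally as follows. A plain nearest-prototype quantiser stands in for the SOM, and linear least-squares "experts" stand in for the MLPs; the data, prototypes and target function are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "composition space" with a piecewise target, standing in for the
# chemical state space and reaction rates of the thesis.
X = rng.uniform(0, 1, (400, 2))
y = np.where(X[:, 0] < 0.5, X.sum(1), X[:, 0] * X[:, 1])

# Two prototypes partition the space (the role the SOM's reference map plays).
protos = np.array([[0.25, 0.5], [0.75, 0.5]])
assign = np.argmin(((X[:, None] - protos) ** 2).sum(2), axis=1)

# One local expert per domain, fitted only on that domain's samples.
experts = []
for k in range(len(protos)):
    Xk = np.hstack([X[assign == k], np.ones((np.sum(assign == k), 1))])
    beta, *_ = np.linalg.lstsq(Xk, y[assign == k], rcond=None)
    experts.append(beta)

def predict(x):
    k = np.argmin(((x - protos) ** 2).sum(1))  # route query to its local expert
    return float(np.append(x, 1.0) @ experts[k])

print(round(predict(np.array([0.2, 0.4])), 3))
```

Routing each query to a locally fitted model is what lets the full scheme cover a wide chemical space while keeping each individual network small and accurate on its own sub-domain.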
18

Nazir, Javed. "The use of artificial neural networks (ANNs) to predict medical aerosol behaviour within the human respiratory tract." Thesis, King's College London (University of London), 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.406446.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Yao, Xiaojun. "Méthodes Non-linéaires (ANNs, SVMs) : applications à la Classification et à la Corrélation des Propriétés Physicochimiques et Biologiques." Paris 7, 2004. http://www.theses.fr/2004PA077182.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Chikolwa, Bwembya C. "Development and structuring of commercial mortgage-backed securities in Australia." Thesis, Curtin University, 2008. http://hdl.handle.net/20.500.11937/2062.

Full text
Abstract:
According to the Reserve Bank of Australia (2006), the increased supply of Commercial Mortgage-Backed Securities (CMBS), with a range of subordination, has broadened the investor base in real estate debt markets and reduced the commercial property sector’s dependence on bank financing. The CMBS market has been one of the most dynamic and fastest-growing sectors in the capital markets, for a market which was virtually nonexistent prior to 1990. Global CMBS issuance, which stood at AU$5.1 billion (US$4 billion) in 1990, had grown to AU$380 billion (US$299 billion) by the end of 2006. In Australia, a total of over 60 CMBSs with nearly 180 tranches, totalling over AU$17.4 billion, had been issued between their introduction in 1999 and December 2006. To date, few studies have been done on Australian CMBSs outside the credit rating agency circles. These studies are predominantly practitioner focused (Jones Lang LaSalle 2001; Richardson 2003; Roche 2000, 2002). O’Sullivan (1998) and Simonovski (2003) are the only academic studies on CMBSs. As such, this thesis examines issues relating to the development of Australian CMBSs and quantitatively and qualitatively analyses the structuring of Australian CMBSs. In assessing the growth of the Australian CMBS market, an interpretive historical approach (Baumgarter & Hensley 2005) is adopted to provide a cogent review and explanation of the features of international and Australian CMBSs. This helps in understanding the changing nature of the market, provides a better understanding of the present, and suggests possible future directions.
The Australian CMBS market is mature in comparison with the larger US and EU CMBS markets, as seen in the diversity of asset classes backing the issues and transaction types, tightening spreads, and record issuance volumes. High property market transparency (Jones Lang LaSalle 2006b) and the predominance of Listed Property Trusts (LPT) as CMBS issuers (Standard & Poor’s 2005b), who legally have to report their activities and underlying collateral performance to regulatory regimes such as the Australian Stock Exchange (ASX)/Australian Securities and Investment Commission (ASIC) and to their equity partners, have contributed to the success of the Australian CMBS market. Furthermore, the positive commercial real estate market outlook should support future CMBS issuance, with LPTs continuing their dominance as issuers. In investigating property risk assessment in Australian CMBSs, all the CMBSs issued over the six-year period 2000 to 2005 were obtained from Standard and Poor’s presale reports, as found in their Ratings Direct database, to identify and review how property risk factors were addressed in all issues and within specific property asset classes, following the delineation of property risk by Adair and Hutchinson (2005). Adequate assessment of property risk and its reporting is critical to the success of CMBS issues. The proposed framework shows that assessing and reporting property risk in Australian CMBSs, which are primarily backed by direct property assets, under the headings of investment quality risk, covenant strength risk, and depreciation and obsolescence risk can easily be done. The proposed framework should prove useful to rating agencies, bond issuers and institutional investors. Rating agencies can adopt a more systematic and consistent approach towards reporting of assessed property risk in CMBSs.
Issuers and institutional investors can examine the perceived consistency and appropriateness of the rating assigned to a CMBS issue by providing inferences concerning property risk assessment. The ultimate goal of structuring CMBS transactions is to obtain a high credit rating, as this has an impact on the obtainable yield and the success of the issue. The credit rating process involves highly subjective assessment of both qualitative and quantitative factors of a particular company, as well as pertinent industry-level or market-level variables (Huang et al. 2004), with the final rating assigned by a credit committee via voting (Kwon et al. 1997). As such, credit rating agencies state that researchers cannot replicate their ratings quantitatively, since their ratings reflect each agency’s opinion about an issue’s potential default risk and rely heavily on a committee’s analysis of the issuer’s ability and willingness to repay its debt. However, researchers have replicated bond ratings on the premise that financial ratios contain a large amount of information about a company’s credit risk. In this study, a quantitative analysis of the determinants of CMBS credit ratings issued by Standard and Poor’s from 2000 to 2006, using ANNs and OR, and a qualitative analysis of the factors considered necessary to obtain a high credit rating, and of pricing issues necessary for the success of an issue, through mail surveys of arrangers and issuers, are undertaken. Of the quantitative variables propagated by credit rating agencies as being important to CMBS rating, only the loan-to-value ratio (LTV) is found to be statistically significant, with the other variables being statistically insignificant using OR.
This leads to the conclusion that statistical approaches used in corporate bond rating studies have limited replication capabilities in CMBS rating, and that the endogeneity arguments raise significant questions about LTV and the debt service coverage ratio (DSCR) as convenient, short-cut measures of CMBS default risk. However, ANNs do offer promising predictive results and can be used to facilitate the implementation of survey-based CMBS rating systems. This should contribute to making the CMBS rating methodology more explicit, which is advantageous in that both CMBS investors and issuers are provided with greater information about, and faith in, the investment. ANN results show that 62.0% of the CMBS rating is attributable to LTV (38.2%) and DSCR (23.6%), supporting earlier studies which have listed the two as the most important variables in CMBS rating. The other variables’ contributions are: CMBS issue size (10.1%), CMBS tenure (6.7%), geographical diversity (13.5%) and property diversity (7.9%). The methodology used to obtain these results is validated when applied to predicting LPT bond ratings. Both OR and ANNs provide robust alternatives for rating LPT bonds, with no significant differences in results between the full models of the two methods. Qualitative analysis of the surveys of arrangers and issuers provides insights into the structuring issues they consider necessary to obtain a high credit rating, and into the pricing issues necessary for the success of an issue. The rating of issues was found to be the main reason why investors invest in CMBSs, and the provision of funds at attractive rates the main motivation behind CMBS issuance.
Furthermore, asset quality was found to be the most important factor necessary to obtain a high credit rating, supporting the view of Henderson and ING Barings (1997) that the assets backing a securitisation are its fundamental credit strength. In addition, analyses of the surveys reveal the following:
• The choice of debt funding option depends on market conditions.
• Credit tranching, over-collateralisation and cross-collateralisation are the main forms of credit enhancement in use.
• On average, the AAA note tranche needs to be above AU$100 million and have 60 - 85% subordination for the CMBS issue to be economically viable.
• Structuring costs range between 0.1% and 1% of issue size, and structuring duration ranges from 4 to 9 months.
• Preferred refinancing options are further capital market issues and bank debt.
• Pricing CMBSs is greatly influenced by factors in the broader capital markets; for instance, the market had literally shut down as a result of the “credit crunch” caused by the meltdown in the US sub-prime mortgage market.
These findings can be useful to issuers as a guide to the cost of going to the bond market to raise capital, which can then be compared with other sources of funds. The findings of this thesis address crucial research priorities of the property industry, as CMBSs are seen as a major commercial real estate debt instrument. By looking at how property risk can be assessed and reported in a more systematic way, and by investigating the quantitative and qualitative factors considered in structuring CMBSs, investor confidence can be increased through the increased body of knowledge. Several published refereed journal articles in Appendix C further validate the stature and significance of this thesis.
It is evident that the property research in this thesis can aid in the revitalisation of the Australian CMBS market after the “shut down” caused by the meltdown in the US sub-prime mortgage market, and can also be used to set up property-backed CMBSs in emerging countries where the CMBS market is immature or non-existent.
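Per-variable contribution percentages of the kind reported in the abstract above (LTV 38.2%, DSCR 23.6%, and so on) are commonly derived from a trained network's connection weights, for example with Garson's algorithm. The thesis does not state which decomposition it used, so the sketch below, with invented weights for a 6-input network named after the abstract's variables, is only one plausible reconstruction.

```python
import numpy as np

def garson_importance(W_ih, W_ho):
    """Garson's algorithm: % contribution of each input to the output,
    computed from input-hidden (W_ih) and hidden-output (W_ho) weights."""
    # Contribution of input i through hidden unit j: |w_ij| * |v_j|,
    # normalised to a share within each hidden unit, then summed per input.
    c = np.abs(W_ih) * np.abs(W_ho).reshape(1, -1)   # shape: inputs x hidden
    c = c / c.sum(axis=0, keepdims=True)
    r = c.sum(axis=1)
    return 100.0 * r / r.sum()

# Invented weights for a 6-input, 3-hidden, 1-output network.
rng = np.random.default_rng(1)
inputs = ["LTV", "DSCR", "issue size", "tenure",
          "geo. diversity", "prop. diversity"]
W_ih = rng.normal(size=(6, 3))
W_ho = rng.normal(size=(3,))

imp = garson_importance(W_ih, W_ho)
for name, pct in zip(inputs, imp):
    print(f"{name:>15s}: {pct:5.1f}%")
```

By construction, the importances are non-negative and sum to 100%, which is why results of this kind can be reported as percentage attributions.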
APA, Harvard, Vancouver, ISO, and other styles
21

Panayiotou, Panayiotis Andrea. "Immovable property taxation and the development of an artificial neural network valuation system for residential properties for tax purposes in Cyprus." Thesis, University of South Wales, 1999. https://pure.southwales.ac.uk/en/studentthesis/immovable-property-taxation-and-the-development-of-an-artificial-neural-network-valuation-system-for-residential-properties-for-tax-purposes-in-cyprus(3ec3bd33-0820-4e21-97f0-a3ea0e303a9a).html.

Full text
Abstract:
The last General Valuation in Cyprus, in 1980, took about twelve years to be completed by the Lands and Surveys Department. The comparison method was adopted, and no computerised (mass appraisal) method or tool was used to assist the process. Although the issue of mass appraisal was raised by Sagric International, who had been invited to Cyprus as consultants, and recently by DataCentralen A/S with the development of a mass appraisal system based on regression analysis, there has been little literature and no research directly undertaken on the problems and the analysis of immovable property taxation in Cyprus, or on the development of an artificial neural network valuation system for houses and apartments. The research project approached the issue of property taxation and mass appraisal through an investigation into Cyprus's need for an updated tax base, for equitableness, and for an assessment system capable of performing an effective revaluation at a certain date with a minimum acceptable mean error, minimum data and minimum cost. Investigation within Cyprus and worldwide indicated that this research project is a unique study in relation to Cyprus's property taxation and the development of a computer-assisted mass appraisal system based on modular artificial neural networks. An empirical study was carried out, including prototyping and testing. The system results satisfy the IAAO criteria for mass appraisal techniques, compare favourably with other studies, and establish a framework upon which future research into computer-assisted mass appraisal for taxation purposes can be developed. In conclusion, the project has contributed significantly to the available literature on immovable property taxation in Cyprus and the development of a computer-assisted mass appraisal system for houses and apartments based on the modular artificial neural network method. The proposed approach is novel not only in the context of Cyprus but also worldwide.
APA, Harvard, Vancouver, ISO, and other styles
22

Chikolwa, Bwembya C. "Development and structuring of commercial mortgage-backed securities in Australia." Curtin University of Technology, Curtin Business School, School of Economics and Finance, 2008. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=18677.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Xu, Jin. "Machine Learning – Based Dynamic Response Prediction of High – Speed Railway Bridges." Thesis, KTH, Bro- och stålbyggnad, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278538.

Full text
Abstract:
Targeting heavier freight and higher passenger speeds has been the strategic direction of railway development over the past decades, significantly increasing interest in railway networks. Among the different components of a railway network, bridges constitute a major portion, imposing considerable construction and maintenance costs. On the other hand, heavier axle loads and higher train speeds may cause resonance in bridges, which consequently limits operational train speeds and lines. Therefore, meeting these new expectations requires conducting a large number of dynamic assessments/analyses of bridges, especially existing ones. Evidently, such assessments need detailed information, expert engineers, and considerable computational cost. In order to save computational effort and reduce the amount of expertise required in the preliminary evaluation of dynamic responses, predictive models using artificial neural networks (ANNs) are proposed in this study. In this regard, a previously developed closed-form solution method (based on solving a series of moving forces) was adopted to calculate the dynamic responses (maximum deck deflection and maximum vertical deck acceleration) of randomly generated bridges. The basic variables for generating random bridges were extracted both from the literature and from the geometrical properties of existing bridges in Sweden. Different ANN architectures, including different numbers of inputs and neurons, were considered to train the most accurate and computationally cost-effective model. The most efficient model was then selected by comparing performance using the absolute error (Err), the Root Mean Square Error (RMSE) and the coefficient of determination (R2). The obtained results revealed that the ANN model can acceptably predict the dynamic responses. The proposed model presents an Err of about 11.1% and 9.9% for the prediction of maximum acceleration and maximum deflection, respectively.
Furthermore, its R2 for maximum acceleration and maximum deflection predictions equals 0.982 and 0.998, respectively, and its RMSE is 0.309 and 1.51E-04 for maximum acceleration and maximum deflection, respectively. Finally, sensitivity analyses were conducted to evaluate the importance of each input variable on the outcomes. It was noted that the span length of the bridge and the speed of the train are the most influential parameters.
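The two error measures used above for model selection, RMSE and the coefficient of determination R2, can be computed directly from predictions. A minimal sketch with invented "measured" and predicted values (the bridge responses themselves are not reproduced here):

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean square error
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# Invented sample: "measured" vs predicted peak deck accelerations (m/s^2)
y_true = np.array([1.2, 2.5, 0.8, 3.1, 1.9])
y_pred = np.array([1.3, 2.4, 0.9, 3.0, 2.1])

print(f"RMSE = {rmse(y_true, y_pred):.3f}, R2 = {r2(y_true, y_pred):.3f}")
```

Note that RMSE carries the units of the response (hence the very different magnitudes reported for acceleration, 0.309, and deflection, 1.51E-04), while R2 is dimensionless, which is why the two metrics are compared separately per response.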
APA, Harvard, Vancouver, ISO, and other styles
24

Coughlin, Michael J., and n/a. "Calibration of Two Dimensional Saccadic Electro-Oculograms Using Artificial Neural Networks." Griffith University. School of Applied Psychology, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030409.110949.

Full text
Abstract:
The electro-oculogram (EOG) is the most widely used technique for recording eye movements in clinical settings. It is inexpensive, practical, and non-invasive. Use of EOG is usually restricted to horizontal recordings, as vertical EOG contains eyelid artefact (Oster & Stern, 1980) and blinks. The ability to analyse two dimensional (2D) eye movements may provide additional diagnostic information on pathologies, and further insights into the nature of brain functioning. Simultaneous recording of both horizontal and vertical EOG also introduces other difficulties into calibration of the eye movements, such as different gains in the two signals, and misalignment of electrodes producing crosstalk. These transformations of the signals create problems in relating the two dimensional EOG to actual rotations of the eyes. The application of an artificial neural network (ANN) that could map 2D recordings into 2D eye positions would overcome this problem and improve the utility of EOG. To determine whether ANNs are capable of correctly calibrating the saccadic eye movement data from 2D EOG (i.e. performing the necessary inverse transformation), the ANNs were first tested on data generated from mathematical models of saccadic eye movements. Multi-layer perceptrons (MLPs) with non-linear activation functions and trained with back propagation proved to be capable of calibrating simulated EOG data to a mean accuracy of 0.33° of visual angle (SE = 0.01). Linear perceptrons (LPs) were only about half as accurate. For five subjects performing a saccadic eye movement task in the upper right quadrant of the visual field, the mean accuracy provided by the MLPs was 1.07° of visual angle (SE = 0.01) for EOG data, and 0.95° of visual angle (SE = 0.03) for infrared limbus reflection (IRIS®) data. MLPs enabled calibration of 2D saccadic EOG to an accuracy not significantly different to that obtained with the infrared limbus tracking data.
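The gain and crosstalk distortions described above are, to first order, a linear mixing of the two channels, so the calibration the network must learn contains at least the inverse of a 2x2 mixing matrix. A toy sketch of that linear core, with an invented mixing matrix and simulated eye positions, recovering positions by least squares (the thesis's MLPs additionally capture the nonlinear components that a purely linear fit cannot):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated true 2-D eye positions (degrees of visual angle), invented
eye = rng.uniform(-20.0, 20.0, size=(100, 2))

# Simulated 2-D EOG: unequal channel gains on the diagonal, electrode
# misalignment as off-diagonal crosstalk (invented first-order model)
A = np.array([[1.8, 0.3],
              [0.2, 1.1]])
eog = eye @ A.T + rng.normal(scale=0.2, size=eye.shape)

# Calibration: least-squares estimate of the inverse mapping eog -> eye
M, *_ = np.linalg.lstsq(eog, eye, rcond=None)
recovered = eog @ M

err = np.mean(np.linalg.norm(recovered - eye, axis=1))
print(f"mean calibration error: {err:.2f} deg")
```

With only gain and crosstalk present, the linear inverse suffices; the study's finding that MLPs outperform linear perceptrons indicates the real EOG-to-rotation mapping also contains nonlinearities beyond this first-order model.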
APA, Harvard, Vancouver, ISO, and other styles
25

Coughlin, Michael J. "Calibration of Two Dimensional Saccadic Electro-Oculograms Using Artificial Neural Networks." Thesis, Griffith University, 2003. http://hdl.handle.net/10072/365854.

Full text
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Applied Psychology
Griffith Health
Full Text
APA, Harvard, Vancouver, ISO, and other styles
26

Thai, Shee Meng. "Neural network modelling and control of coal fired boiler plant." Thesis, University of South Wales, 2005. https://pure.southwales.ac.uk/en/studentthesis/neural-network-modelling-and-control-of-coal-fired-boiler-plant(b5562ca0-e45e-44d8-aad2-ed2e3e114808).html.

Full text
Abstract:
This thesis presents the development of a Neural Network Based Controller (NNBC) for chain grate stoker fired boilers. The objective of the controller was to increase combustion efficiency and maintain pollutant emissions below future medium-term stringent legislation. Artificial Neural Networks (ANNs) were used to estimate future emissions from, and to control, the combustion process. Initial tests at Casella CRE Ltd demonstrated the ability of ANNs to characterise the complex functional relationships which subsisted in the data set, and to utilise previously gained knowledge to deliver predictions up to three minutes into the future. This technique was then built into a carefully designed control strategy that fundamentally mimicked the actions of an expert boiler operator, to control an industrial chain grate stoker at HM Prison Garth, Lancashire. Test results demonstrated that the developed novel NNBC was able to control the industrial stoker boiler plant to deliver the load demand whilst keeping the excess air level to a minimum. As a result, the NNBC also managed to maintain the pollutant emissions within probable future limits for this size of boiler. This prototype controller would thus offer the industrial coal user a means to improve combustion efficiency on chain grate stokers, as well as to meet medium-term legislation limits on pollutant emissions that could be imposed by the European Commission.
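The predict-then-act pattern described above can be illustrated in miniature: a model fitted on lagged process data predicts emissions a few steps ahead, and a simple rule trims the excess-air setpoint when the prediction exceeds a limit. This is not the thesis's controller: a linear autoregressive fit stands in for its ANN, and all signals, limits and step sizes below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented boiler log: emissions as a noisy function of excess-air level
t = np.arange(300)
excess_air = 1.5 + 0.3 * np.sin(t / 20.0)
emissions = 50.0 + 40.0 * (excess_air - 1.2) ** 2 \
    + rng.normal(scale=1.0, size=t.size)

# Fit a lagged linear predictor (stand-in for the ANN) mapping recent
# emissions and excess-air readings to emissions k steps ahead
k, lags = 5, 3
rows, targets = [], []
for i in range(lags, t.size - k):
    rows.append(np.concatenate([emissions[i - lags:i],
                                excess_air[i - lags:i], [1.0]]))
    targets.append(emissions[i + k])
X, y = np.array(rows), np.array(targets)
w, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))

# Toy control rule mimicking an operator: if predicted emissions exceed a
# limit, nudge the excess-air setpoint down by a small step
LIMIT = 55.0
setpoint = 1.5
if pred[-1] > LIMIT:
    setpoint -= 0.05
print(f"{k}-step-ahead RMSE: {rmse:.2f}, excess-air setpoint: {setpoint:.2f}")
```

The value of the look-ahead is that a slow process like grate combustion can be corrected before an emissions excursion materialises, rather than after the fact.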
APA, Harvard, Vancouver, ISO, and other styles
27

Gallep, Larissa Tannus [UNESP]. "Anna dos 6 aos 18 anos." Universidade Estadual Paulista (UNESP), 2012. http://hdl.handle.net/11449/86901.

Full text
Abstract:
Universidade Estadual Paulista (UNESP)
Com esta pesquisa a minha intenção foi a de realizar uma investigação do filme/documentário russo Anna dos 6 aos 18 e como ele expõe os momentos históricos do fim dos anos 1980 e começo dos 1990, com a queda da União Soviética e o surgimento da Perestroika. Tentei aqui apresentar uma análise de como esta obra, enquanto objeto de arte e documento histórico, se relaciona com as transformações e o discurso oficial do final da URSS. Foram analisadas as diferentes formas de articulação entre os elementos verbais e sonoros, mas com foco nos elementos estético-visuais (signos, ícones, sinais, movimentos de câmera, composições cênicas) e principalmente a montagem. O trabalho aqui aprestado é um estudo sobre o papel da montagem, do filme documentário enquanto documento histórico e do papel do diretor enquanto ―escultor do tempo‖. Para a realização deste trabalho foram realizadas entrevistas com profissionais desta área, bem como pesquisa e observação de diferentes obras cinematográficas e de artes plásticas que abordam a montagem. Também realizo a minha análise sobre algumas imagens escolhidas pelo diretor Nikita Mikhalkov que contam uma visão sobre este período da história russa vivenciada por ele juntamente com o crescimento de sua filha Anna, a heroína do filme objeto do nosso estudo
With this research the intention was to investigate the Russian documentary film Anna from 6 to 18 and how it presents historic moments of the late 1980s and early 1990s, around the fall of the Soviet Union and the emergence of Perestroika. We try to show how this work, as an art object and a historical document, relates to the changes and the official discourse about the end of the USSR. We analysed the different forms of articulation between the verbal and sound elements, but with a focus on the aesthetic and visual elements (signs, icons, signals, camera movements, scenic compositions) and especially the editing. The study presented here examines the role of editing, the documentary film as a historical document, and the role of the director as a 'sculptor of time'. For this study, interviews were conducted with experts in the area, along with research on and observation of different films and visual-arts works that address editing. We also present an analysis of some images chosen by the director Nikita Mikhalkov that convey a view of this period of Russian history as experienced by him, alongside the growth of his daughter Anna, the heroine of the film that is the object of our study
APA, Harvard, Vancouver, ISO, and other styles
28

Gallep, Larissa Tannus. "Anna dos 6 aos 18 anos /." São Paulo : [s.n.], 2012. http://hdl.handle.net/11449/86901.

Full text
Abstract:
Advisor: José Leonardo do Nascimento
Committee member: Omar Khoury
Committee member: Simonetta Persichetti
Abstract: With this research my intention was to investigate the Russian documentary film Anna from 6 to 18 and how it presents the historical moments of the late 1980s and early 1990s, with the fall of the Soviet Union and the emergence of Perestroika. I have tried to present an analysis of how this work, as both an art object and a historical document, relates to the transformations and the official discourse of the end of the USSR. The different forms of articulation between the verbal and sound elements were analysed, but with a focus on the aesthetic-visual elements (signs, icons, signals, camera movements, scenic compositions) and above all on the editing (montage). The work presented here is a study of the role of editing, of the documentary film as a historical document, and of the role of the director as a 'sculptor of time'. To carry out this work, interviews were conducted with professionals in the field, together with research on and observation of different cinematographic and visual-arts works that address editing. I also present my analysis of some images chosen by the director Nikita Mikhalkov that convey a view of this period of Russian history as he lived it, alongside the growth of his daughter Anna, the heroine of the film that is the object of our study
Abstract: With this research the intention was to investigate the Russian documentary film Anna from 6 to 18 and how it presents historic moments of the late 1980s and early 1990s, around the fall of the Soviet Union and the emergence of Perestroika. We try to show how this work, as an art object and a historical document, relates to the changes and the official discourse about the end of the USSR. We analysed the different forms of articulation between the verbal and sound elements, but with a focus on the aesthetic and visual elements (signs, icons, signals, camera movements, scenic compositions) and especially the editing. The study presented here examines the role of editing, the documentary film as a historical document, and the role of the director as a "sculptor of time." For this study, interviews were conducted with experts in the area, along with research on and observation of different films and visual-arts works that address editing. We also present an analysis of some images chosen by the director Nikita Mikhalkov that convey a view of this period of Russian history as experienced by him, alongside the growth of his daughter Anna, the heroine of the film that is the object of our study
Master's
APA, Harvard, Vancouver, ISO, and other styles
29

Wang, Lu. "Task Load Modelling for LTE Baseband Signal Processing with Artificial Neural Network Approach." Thesis, KTH, Signalbehandling, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-160947.

Full text
Abstract:
This thesis presents research on developing an automatic or guided-automatic tool to predict hardware (HW) resource occupation, namely task load, with respect to the software (SW) application algorithm parameters in an LTE base station. For the signal processing in an LTE base station it is important to know how many HW resources will be used when a SW algorithm runs on a specific platform. This information is valuable for understanding the system and platform better, which can facilitate a reasonable use of the available resources. The process of developing the tool is treated as the process of building a mathematical model between HW task load and SW parameters, i.e. as function approximation. According to the universal approximation theorem, the problem can be solved by an intelligent method called artificial neural networks (ANNs). The theorem indicates that any function can be approximated with a two-layered neural network as long as the activation function and the number of hidden neurons are chosen properly. The thesis documents a workflow for building the model with the ANN method, as well as research on data subset selection with mathematical methods, such as partial correlation and sequential searching, as a data pre-processing step for the ANN approach. In order to make the data selection method suitable for ANNs, a modification has been made to the sequential searching method, which gives a better result. The results show that it is possible to develop such a guided-automatic tool for prediction purposes in LTE baseband signal processing under specific precision constraints. Compared to other approaches, this model tool with an intelligent approach has a higher precision level and better adaptivity, meaning that it can be used in any part of the platform even though the transmission channels are different.
This thesis develops an automatic or guided-automatic tool to predict hardware resource demand, also called task load, with respect to the software algorithm parameters in an LTE base station. In signal processing in an LTE base station, it is important to know how much of the hardware's resources will be taken into use when a piece of software is to run on a given platform. This information is valuable for understanding the system and the platform better, which can enable a reasonable use of the available resources. The process of developing the tool is regarded as the process of building a mathematical model between the hardware load and the software parameters, where the process is defined as the approximation of a function. According to the universal approximation theorem, the problem can be solved by an intelligent method called artificial neural networks (ANNs). The theorem shows that an arbitrary function can be approximated with a two-layer neural network as long as the activation function and the number of hidden neurons are correct. The thesis documents a workflow for building the model with the ANN method, and studies mathematical methods for selecting subsets of data, such as partial correlation and sequential search, as data pre-processing steps for the ANN. To make the data selection suitable for ANNs, a change has been made to the sequential search method, which gives better results. The results show that it is possible to develop such a guided automatic tool for prediction purposes in LTE baseband signal processing under specific precision constraints. Compared with other methods, this model tool with an intelligent approach has a higher level of precision and better adaptivity, which means that it can be used in an arbitrary part of the platform even if the transmission channels are different.
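As a hedged illustration of the two-layer regression setup the abstract relies on (not the thesis' actual tool), the following minimal numpy sketch fits a toy stand-in for the parameters-to-load mapping with a single hidden tanh layer trained by batch gradient descent. The target function, layer width and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the task-load mapping: algorithm parameter -> load.
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X) + 0.5 * X

H = 16                                  # hidden-layer width (assumed)
W1 = rng.normal(0, 1, (1, H))
b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1))
b2 = np.zeros(1)

lr = 0.05
for _ in range(2000):
    A = np.tanh(X @ W1 + b1)            # hidden activations, shape (200, H)
    pred = A @ W2 + b2                  # network output, shape (200, 1)
    err = pred - y
    loss = float(np.mean(err ** 2))     # mean squared error
    g = 2 * err / len(X)                # d(loss)/d(pred)
    gW2, gb2 = A.T @ g, g.sum(0)        # output-layer gradients
    gZ = (g @ W2.T) * (1 - A ** 2)      # backprop through tanh
    gW1, gb1 = X.T @ gZ, gZ.sum(0)      # hidden-layer gradients
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

With a proper activation and enough hidden neurons, the same structure can in principle approximate any continuous mapping on a compact domain, which is the universal-approximation property the abstract invokes.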
APA, Harvard, Vancouver, ISO, and other styles
30

Yu, Daoping. "Early Stopping of a Neural Network via the Receiver Operating Curve." Digital Commons @ East Tennessee State University, 2010. https://dc.etsu.edu/etd/1732.

Full text
Abstract:
This thesis presents the area under the ROC (Receiver Operating Characteristic) curve, abbreviated AUC, as an alternative measure for evaluating the predictive performance of ANN (Artificial Neural Network) classifiers. Conventionally, neural networks are trained until the total error converges to zero, which may give rise to over-fitting problems. To ensure that they do not overfit the training data and then fail to generalize well on new data, it appears effective to stop training as early as possible once the AUC is sufficiently large, by integrating ROC/AUC analysis into the training process. In order to reduce the learning costs arising from an imbalanced data set with an uneven class distribution, random sampling and k-means clustering are implemented to draw a smaller subset of representatives from the original training data set. Finally, the confidence interval for the AUC is estimated with a non-parametric approach.
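The AUC can be computed without tracing the full ROC curve via the rank-sum (Mann-Whitney U) identity, and AUC-based early stopping then reduces to checking it once per epoch. The sketch below is a hypothetical illustration of that idea, not the thesis' implementation; `train_step`, `score_fn` and `target_auc` are assumed placeholder names.

```python
import numpy as np

def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney U) identity; labels are 0/1."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # 1-based ranks
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def train_with_auc_stop(train_step, score_fn, X_val, y_val,
                        target_auc=0.95, max_epochs=100):
    """Run train_step once per epoch; stop early once validation AUC
    reaches target_auc (hypothetical interface)."""
    current = 0.0
    for epoch in range(max_epochs):
        train_step()                       # one epoch of weight updates
        current = auc(score_fn(X_val), y_val)
        if current >= target_auc:
            break                          # stop early: AUC is large enough
    return epoch, current
```

A perfect ranking gives AUC 1.0 and a fully inverted one gives 0.0, so the stopping threshold directly bounds the classifier's ranking quality rather than its raw training error.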
APA, Harvard, Vancouver, ISO, and other styles
31

Alkroosh, Iyad Salim Jabor. "Modelling pile capacity and load-settlement behaviour of piles embedded in sand & mixed soils using artificial intelligence." Thesis, Curtin University, 2011. http://hdl.handle.net/20.500.11937/351.

Full text
Abstract:
This thesis presents the development of numerical models intended to predict the bearing capacity and the load-settlement behaviour of pile foundations embedded in sand and mixed soils. Two artificial intelligence techniques, gene expression programming (GEP) and artificial neural networks (ANNs), are used to develop the models. GEP is a developed version of genetic programming (GP). Initially, GEP is utilized to model the bearing capacity of bored piles, concrete driven piles and steel driven piles. The use of GEP is extended to model the load-settlement behaviour of the piles but achieved limited success; alternatively, ANNs have been employed to model the load-settlement behaviour of the piles. GEP and ANNs are numerical modelling techniques that depend on input data to determine the structure of the model and its unknown parameters. GEP tries to mimic the natural evolution of organisms, and ANNs try to imitate the functions of the human brain and nervous system. The two techniques have been applied in the field of geotechnical engineering and found successful in solving many problems. The data used for developing the GEP and ANN models were collected from the literature and comprise a total of 50 bored pile load tests and 58 driven pile load tests (28 concrete pile load tests and 30 steel pile load tests), as well as CPT data. The bored piles have different sizes and round shapes, with diameters ranging from 320 to 1800 mm and lengths from 6 to 27 m. The driven piles also have different sizes and shapes (i.e. circular, square and hexagonal), with diameters ranging from 250 to 660 mm and lengths from 8 to 36 m. All the information in the case records in the data source was reviewed to ensure the reliability of the data used. The variables believed to have a significant effect on the bearing capacity of pile foundations are considered.
They include pile diameter, embedded length, weighted average cone point resistance within the tip influence zone, weighted average cone point resistance along the shaft and weighted average sleeve friction along the shaft. The sleeve friction values are not available in the bored piles data, so the weighted average sleeve friction along the shaft is excluded from the bored pile models. The model output is the pile capacity (interpreted failure load). Additional input variables are included for modelling the load-settlement behaviour of piles: settlement, settlement increment and the current state of load-settlement. The output is the next state of load-settlement. The data are randomly divided into two statistically consistent sets, a training set for model calibration and an independent validation set for verifying model performance. The predictive ability of the developed GEP model is examined by comparing the performance of the model on the training and validation sets. Two performance measures are used: the mean and the coefficient of correlation. The performance of the model was also verified through a sensitivity analysis, which aimed to determine the response of the model to variations in the value of each input variable, provided the other input variables are held constant. The accuracy of the GEP model was evaluated further by comparing its performance with a number of currently adopted traditional CPT-based methods. For this purpose, several ranking criteria are used, and whichever method scores best is given rank 1. The GEP models, for bored and driven piles, have shown good performance on the training and validation sets, with high coefficients of correlation between measured and predicted values and low mean values. The results of the sensitivity analysis have revealed an incremental relationship between each of the input variables and the output, pile capacity. This agrees with the available geotechnical knowledge and experimental data.
The results of the comparison with CPT-based methods have shown that the GEP models perform well. The GEP technique is also utilized to simulate the load-settlement behaviour of the piles. Several attempts have been carried out using different input settings. The results of the favoured attempt have shown that GEP achieved limited success in predicting the load-settlement behaviour of the piles. Alternatively, the ANN is considered, and a sequential neural network is used for modelling the load-settlement behaviour of the piles. This type of network can account for the load-settlement interdependency and has the option to feed back internally the predicted output of the current state of load-settlement, to be used as input for the next state of load-settlement. Three ANN models are developed: a model for bored piles and two models for driven piles (one for steel and one for concrete piles). The predictive ability of the models is verified by comparing their predictions on the training and validation sets with experimental data. Statistical measures including the coefficient of correlation and the mean are used to assess the performance of the ANN models on the training and validation sets. The results have revealed that the load-settlement curves predicted by the ANN models are in agreement with the experimental data for both the training and validation sets. The results also indicate that the ANN models achieved high coefficients of correlation and low mean values, which indicates that the ANN models can predict the load-settlement behaviour of the piles accurately. To examine the performance of the developed ANN models further, the predictions of the models on the validation set are compared with a number of load-transfer methods. The comparison is carried out first visually, by comparing the load-settlement curves obtained by the ANN models and the load-transfer methods with the experimental curves.
Second, it is carried out numerically, by calculating the coefficient of correlation and the mean absolute percentage error between the experimental data and the compared methods for each case record. The visual comparison has shown that the ANN models are in better agreement with the experimental data than the load-transfer methods. The numerical comparison has also shown that the ANN models scored the highest coefficient of correlation and the lowest mean absolute percentage error for all compared case records. The developed ANN models are coded into a simple and easily executable computer program. The output of this study is very useful for designers and also for researchers who wish to apply this methodology to other problems in geotechnical engineering. Moreover, the results of this study can be considered applicable worldwide, because the input data were collected from different regions.
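The feedback mechanism described for the sequential network, where the predicted load at the current state is fed back as input for the next settlement step, can be sketched as a simple roll-out loop. This is an illustrative reconstruction, not the thesis' code; `step_model` stands in for the trained ANN.

```python
def predict_curve(step_model, settlement_increments, q0=0.0):
    """Roll out a load-settlement curve: the load predicted for the current
    state is fed back as the 'current load' input for the next increment."""
    q, s, curve = q0, 0.0, []
    for ds in settlement_increments:
        s += ds                       # cumulative settlement
        q = step_model(s, ds, q)      # next load from (settlement, increment, current load)
        curve.append((s, q))
    return curve
```

Because each prediction depends on the previous one, errors can compound along the curve, which is why the thesis validates whole predicted curves against experimental ones rather than single-step outputs.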
APA, Harvard, Vancouver, ISO, and other styles
32

SILVA, Ana Maria Ribeiro Bastos da. "Avaliação da qualidade da água bruta superficial das barragens de Bita e Utinga de Suape aplicando estatística e sistemas inteligentes." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/17404.

Full text
Abstract:
CNPq
Petrobrás
The application of Principal Component Analysis (PCA), Artificial Neural Network (ANN), Fuzzy Logic and Neuro-fuzzy System techniques to investigate changes in the characteristics of the water of the Utinga and Bita dams, which supply raw water to the Suape water treatment plant, is of fundamental importance given the large number of variables used to define quality. In this work, 10 water sampling campaigns were carried out in each area between November 2007 and August 2012, totalling 120 samples. Although the set of experimental data obtained is small, multiple efforts were made to acquire water quality information from the official environmental monitoring agencies. The results showed a tendency towards degradation of the water of the dams due to the presence of microorganisms, salts and nutrients responsible for the eutrophication process, indicated by higher concentrations of total phosphorus and thermotolerant coliforms and by decreases in pH and DO, probably due to the discharge of effluents from the sugarcane agroindustry and from industrial and domestic sources. The PCA characterised more than 76% of the samples, making it possible to observe seasonal changes and a small spatial variation of the water in the dams. The water condition of the two dams was modelled satisfactorily, with reasonable precision and reliability, using the statistical and computational models for a quantity of parameters and environmental data that, although limited, was sufficient for this work. Even so, the efficiency and success of the Neuro-fuzzy System (regression coefficients from 0.608 to 0.925), which combines the advantages of Neural Networks and Fuzzy Logic in modelling the water quality data set of the Utinga and Bita dams, is evident.
The application of techniques such as Principal Component Analysis (PCA), Artificial Neural Networks (ANNs), Fuzzy Logic and Neuro-fuzzy Systems to investigate changes in the water quality characteristics of the Utinga and Bita dams, which supply raw water to the Suape Water Treatment Plant (WTP), is of great importance due to the high number of variables used to define water quality. In this work, 10 water sampling campaigns were carried out in each area over a period ranging from November 2007 to August 2012, for a total of 120 samples. Although the experimental dataset was limited, multiple efforts were made to gather information from the environmental control agencies. The results showed a tendency towards degradation of the water properties in the dams studied, due to the presence of microorganisms, salts and nutrients responsible for the eutrophication process, reflected in the higher concentrations of total phosphorus and thermotolerant coliforms and the decreases in pH and DO, probably from the discharge of sugarcane agroindustry and domestic waste. The PCA characterised more than 76% of the samples collected, allowing the observation of seasonal changes and a small spatial variation of the water in the dams. The water quality conditions in both dams were satisfactorily modelled, with reasonable precision and statistical and computational reliability, for an amount of parameters and environmental data that, even though limited, was enough for this study. Nonetheless, the efficiency and success of the Neuro-fuzzy System (regression coefficients from 0.608 to 0.925), which combines the advantages of Neural Networks and Fuzzy Logic in modelling the water quality dataset of the Utinga and Bita dams, is evident.
APA, Harvard, Vancouver, ISO, and other styles
33

Coester, Christiane. "Schön wie Venus, mutig wie Mars : Anna d'Este, Herzogin von Guise und Von Nemours (1531-1607) /." München : Oldenbourg, 2007. http://www.gbv.de/dms/dhi_paris/phs/516035878.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Gainey, Karen Fern Wilkes. "Subverting the symbolic the semiotic fictions of Anne Tyler, Jayne Anne Phillips, Bobbie Ann Mason, and Grace Paley /." Access abstract and link to full text, 1990. http://0-wwwlib.umi.com.library.utulsa.edu/dissertations/fullcit/9102978.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Dietz, Anne [Verfasser], and Anna [Akademischer Betreuer] Friedl. "Charakterisierung interindividueller Strahlenempfindlichkeit in Zellen aus Tumorpatienten / Anne Dietz. Betreuer: Anna Friedl." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2015. http://d-nb.info/1076980767/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Perandini, Lucia. "Anna, sette anni al fronte. Analisi e proposta di sottotitolaggio del film documentario Anna, sem' let na linii fronta." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8133/.

Full text
Abstract:
The aim of the present work was to make more accessible a documentary film about the life and work of Anna Politkovskaja, the Russian journalist killed in Moscow in 2006. The film, entitled "Anna, seven years at the frontline", is a collage of several video interviews with Politkovskaja's friends and colleagues, in which they talk about who she was, what her job was about and how she tried to make the public aware of the Chechen wars. Working on this translation gave me a unique opportunity to deepen my knowledge of one of the most controversial issues in the recent history of Russia and to create a useful final product for those who have a particular interest in this topic but lack the linguistic competence needed to understand this documentary film.
APA, Harvard, Vancouver, ISO, and other styles
37

Piccini, Jacopo. "Data Dependent Convergence Guarantees for Regression Problems in Neural Networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24218/.

Full text
Abstract:
It has recently been demonstrated that artificial neural networks' (ANN) learning under the gradient descent method can be studied using the neural tangent kernel (NTK). The goal of this thesis is to show how techniques from control theory can be applied to model and improve the training dynamics of the hyperparameters. Moreover, it is proven how methods from linear parameter varying (LPV) theory allow an exact representation of the learning dynamics over its whole domain. The first part of the thesis is dedicated to the modelling and analysis of the system. The modelling of simple ANNs is presented, and a method to extend this approach to larger networks is proposed. The different properties of the LPV system model are then analysed using various methods. After the modelling and analysis phase, the focus shifts to improving the neural network in terms of both stability and performance. This improvement is achieved by using state feedback on the LPV system. After setting up the control architecture, controllers based on different methods, such as optimal control and robust control, are synthesized and their performance is compared.
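The empirical NTK underlying this kind of analysis is simply the Gram matrix of parameter gradients, K(x, x') = ∇θf(x) · ∇θf(x'). Below is a minimal numpy sketch for an assumed one-hidden-layer tanh network with scalar input; the architecture and initialisation are illustrative, not those studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
H = 8                                     # hidden width (assumed)
W1 = rng.normal(size=(H,))                # input-to-hidden weights
W2 = rng.normal(size=(H,)) / np.sqrt(H)   # hidden-to-output weights

def param_grad(x):
    """Gradient of f(x) = W2 . tanh(W1 * x) w.r.t. all weights (W1, W2)."""
    a = np.tanh(W1 * x)
    dW2 = a                               # df/dW2
    dW1 = W2 * (1 - a ** 2) * x           # df/dW1, chain rule through tanh
    return np.concatenate([dW1, dW2])

def ntk(x1, x2):
    """Empirical NTK entry: inner product of parameter gradients."""
    return param_grad(x1) @ param_grad(x2)

xs = [-1.0, -0.3, 0.4, 1.0]
K = np.array([[ntk(a, b) for b in xs] for a in xs])
```

As a Gram matrix, K is symmetric positive semidefinite by construction; under gradient descent the outputs evolve approximately as a linear system driven by K, which is what makes control-theoretic and LPV tools applicable to the training dynamics.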
APA, Harvard, Vancouver, ISO, and other styles
38

Townsend, Joseph Paul. "Artificial development of neural-symbolic networks." Thesis, University of Exeter, 2014. http://hdl.handle.net/10871/15162.

Full text
Abstract:
Artificial neural networks (ANNs) and logic programs have both been suggested as means of modelling human cognition. While ANNs are adaptable and relatively noise resistant, the information they represent is distributed across various neurons and is therefore difficult to interpret. In contrast, symbolic systems such as logic programs are interpretable but less adaptable. Human cognition is performed in a network of biological neurons and yet is capable of representing symbols, and therefore an ideal model would combine the strengths of the two approaches. This is the goal of Neural-Symbolic Integration [4, 16, 21, 40], in which ANNs are used to produce interpretable, adaptable representations of logic programs and other symbolic models. One neural-symbolic model of reasoning is SHRUTI [89, 95], argued to exhibit biological plausibility in that it captures some aspects of real biological processes. SHRUTI's original developers also suggest that further biological plausibility can be ascribed to the fact that SHRUTI networks can be represented by a model of genetic development [96, 120]. The aims of this thesis are to support the claims of SHRUTI's developers by producing the first such genetic representation for SHRUTI networks and to explore biological plausibility further by investigating the evolvability of the proposed SHRUTI genome. The SHRUTI genome is developed and evolved using principles from Generative and Developmental Systems and Artificial Development [13, 105], in which genomes use indirect encoding to provide a set of instructions for the gradual development of the phenotype just as DNA does for biological organisms. This thesis presents genomes that develop SHRUTI representations of logical relations and episodic facts so that they are able to correctly answer questions on the knowledge they represent.
The evolvability of the SHRUTI genomes is limited in that an evolutionary search was able to discover genomes for simple relational structures that did not include conjunction, but could not discover structures that enabled conjunctive relations or episodic facts to be learned. Experiments were performed to understand the SHRUTI fitness landscape and demonstrated that this landscape is unsuitable for navigation using an evolutionary search. Complex SHRUTI structures require that necessary substructures must be discovered in unison and not individually in order to yield a positive change in objective fitness that informs the evolutionary search of their discovery. The requirement for multiple substructures to be in place before fitness can be improved is probably owed to the localist representation of concepts and relations in SHRUTI. Therefore this thesis concludes by making a case for switching to more distributed representations as a possible means of improving evolvability in the future.
APA, Harvard, Vancouver, ISO, and other styles
39

Kaczmaryk, Anne. "Approches multi-continuum de la dualité homogénéisation-inversion des propriétés hydrodynamiques en milieu poreux fracturé." Poitiers, 2008. http://theses.edel.univ-poitiers.fr/theses/2008/Kaczmaryk-Anne/2008-Kaczmaryk-Anne-These.pdf.

Full text
Abstract:
Under-sampling is an almost systematic problem in the study of underground media. In particular, the question remains of how to interpret data collected in situ in order to determine the macroscopic parameters governing flow and solute transport in these media. The aim of this work is to propose tools for inverting hydrodynamic data while keeping a physical (as opposed to systemic) view of how the reservoir functions. Hydraulic drawdown measurements were acquired in two interference-test campaigns on the fractured carbonate aquifer of the Hydrogeological Experimental Site (HES) of the University of Poitiers. They are interpreted using continuous dual-medium models, and in particular account for the karstic leakage effects observed in the second measurement campaign. A mass transfer inversion tool is also proposed, based on a Lagrangian calculation in the time domain over bond networks. Among other refinements, the inversion is accompanied by an analytical derivation of the sensitivities to the parameters. Finally, the trace of the bond network is eliminated by replacing the classical transport equations with the Langevin equations. These include a force field giving rise to a hyperbolic term that could represent the possible channelling effects of a network. Several analytical developments, in transient and asymptotic regimes, of the mean displacement and the dispersion attest to the feasibility of such a substitution. The work must nevertheless be continued, in particular the comparison with tracer data acquired in the field
The quite systematic scarcity of sampled data hampers the study of underground media. This is why the question remains of how to obtain suitable interpretations, based on in situ data, to evaluate the macroscopic parameters ruling flow and mass transport in underground reservoirs. The aim of this work is to invert dynamic data by means of tools with a physical view of the reservoir functioning (as opposed here to a systemic approach). Hydraulic interference testing was carried out in two campaigns over the fractured limestone aquifer of the Hydrogeological Experimental Site (HES) in Poitiers (France). Drawdown data are interpreted by enhanced dual-medium approaches, with special care given to the karstic draining observed in the data of the second campaign. A tool for mass transport inversion is also developed, with calculations handled by a Lagrangian approach in time over bond networks. Among various refinements, the inversion is coupled with an analytical derivation of the model sensitivity to the parameters. Finally, the trace of the network is eliminated by replacing the classical transport equations with the Langevin equations. The latter include a force field yielding a hyperbolic term that would mimic the possible channelling effects of a network. Several analytical developments of the mean displacement and dispersion of particles, in both transient and asymptotic regimes, show that the substitution is feasible. This work should be pursued, however, for instance by addressing, with the tools mentioned above, field tracer test experiments carried out in various contexts
APA, Harvard, Vancouver, ISO, and other styles
40

Medici, Anna. "Les gènes de transporteurs de sucres dans la réponse au déficit hydrique et le rôle des protéines ASR dans la régulation de l'expression du gène du transporteur d'hexoses VvHT1 chez la Vigne (Vitis vinifera)." Poitiers, 2010. http://theses.edel.univ-poitiers.fr/theses/2010/Medici-Anna/2010-Medici-Anna-These.pdf.

Full text
Abstract:
Sugar partitioning requires the activity of membrane transporters and represents a crucial step in the plant response to water deficit. The sequences of 63 sugar transporters were identified in grapevine. In silico analysis identified 7 subfamilies of monosaccharide transporters and several cis-regulatory elements in the promoters. Macroarray expression analysis identified 4 transporter genes strongly expressed in all vegetative organs (VvHT1, VvHT3, VvPMT5, VvSUC27), others preferentially expressed in mature leaves (VvHT5), roots (VvHT2) and seeds (VvHT3, VvHT5), and 3 genes regulated during berry development (VvHT2, VvTMT1, VvTMT2). Macroarray analysis reveals that the expression of VvHT5, VvSUC11, VvGIN2 and VvMSA is induced in mature leaves in response to water deficit, whereas the expression of VvHT1 is inhibited. Water deprivation triggers the accumulation of glucose, fructose and sucrose in mature grapevine leaves. The VvMSA gene was overexpressed or silenced in genetically modified grapevine embryogenic cells. Silencing of VvMSA decreases glucose uptake activity and VvHT1 expression. The phenotype of the regenerated plantlets and the expression of VvHT1 and VvMSA were studied under in vitro and ex vitro conditions. The results obtained suggest a pathway regulating VvHT5 gene expression by fructose. A model of sugar signalling in VvHT1 gene expression is proposed, comprising one regulatory pathway involving disaccharides and two involving glucose, one VvMSA-dependent and one VvMSA-independent
Sugar allocation mediated by membrane transporters is strongly affected by water deficit. The sequences of 63 sugar transporters were identified in Grapevine. The in silico analysis identify 7 monosaccharide transporters subfamilies and several cis elements in the promoter regions. Macroarray expression analysis showed that 4 sugar transporter genes are strongly expressed in all vegetative organs (VvHT1, VvHT3, VvPMT5, VvSUC27), some others are preferentially expressed in mature leaf (VvHT5), in root (VvHT2), in seed (VvHT3, VvHT5) and 3 genes are regulated during berry development (VvHT2, VvTMT1, VvTMT2). Macroarray gene expression analysis showed that VvHT5, VvSUC11, VvGIN2 and VvMSA are up-regulated in mature leaves under drought conditions and that VvHT1 was down-regulated. Water deficit triggers a strong accumulation of glucose, fructose and sucrose in water stressed leaves. VvMSA gene was overexpressed or silenced in grape embryogenic cells by genetic transformation. The silencing of VvMSA affects glucose uptake activity and VvHT1 expession. The Grapevine plants phenotype and the expression of VvHT1 and VvMSA expression were studied under both in vitro and ex-vitro conditions. For the first time, the involvement of sugar transporters and VvMSA in the response to water stress in Grapevine is demonstrated. A model for sugar signalling involved in VvHT1 gene expression is proposed, including a disaccharide pathway and two glucose regulatory mechanisms, one VvMSA-dependent and another VvMSA-independent. Finally, our results reveal the importance of VvHT1 hexoses transporter under heterotrophic conditions
APA, Harvard, Vancouver, ISO, and other styles
41

Scheepers, Rajah. "Regentin per Staatsstreich? : Landgräfin Anna von Hessen (1485 - 1525)." Königstein, Taunus Helmer, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=2886561&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Elnikova, Anna. "Rannjaja lirika Anny Achmatowoi tekstowoj i tematicheskij analiz = Anna Achmatowas lyrisches Frühwerk : Text- und Themenanalyse /." St. Gallen, 2005. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/01665579001/$FILE/01665579001.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Dreise-Beckmann, Sandra. "Herzogin Anna Amalia von Sachsen-Weimar-Eisenach (1739-1807), Musikliebhaberin und Mäzenin : Anhang: Rekonstruktion der Musikaliensammlung, Handschriften und Briefe /." Schneverdingen : Verl. für Musikbücher Wagner, 2004. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=013019723&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Mohammadisohrabi, Ali. "Design and implementation of a Recurrent Neural Network for Remaining Useful Life prediction." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
A key idea underlying many Predictive Maintenance solutions is the Remaining Useful Life (RUL) of machine parts: a prediction of the time remaining before a part is likely to require repair or replacement. Nowadays, as systems grow more complex, innovative Machine Learning and Deep Learning algorithms can be deployed to study the more sophisticated correlations within them. The exponential increase in both data accumulation and processing power makes Deep Learning algorithms more desirable than before. In this thesis a Long Short-Term Memory (LSTM) network, a type of Recurrent Neural Network, is designed to predict the Remaining Useful Life (RUL) of turbofan engines. The dataset is taken from the NASA data repository. Finally, the performance obtained by the RNN is compared to the best Machine Learning algorithm for the dataset.
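The core of such a model, the LSTM cell's gated recurrence over a sensor sequence, can be sketched in plain NumPy. This is only a minimal illustration: the dimensions, the random weights, and the 24-channel sensor window below are hypothetical stand-ins, not the thesis's actual architecture or its preprocessing of the NASA data.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(xs, W, U, b, n_hidden):
    """Run one LSTM cell over a sequence xs of shape (T, n_in); return final hidden state."""
    h = np.zeros(n_hidden)               # hidden state
    c = np.zeros(n_hidden)               # cell (memory) state
    for x in xs:
        z = W @ x + U @ h + b            # stacked pre-activations for all four gates
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input / forget / output gates
        c = f * c + i * np.tanh(g)       # forget old memory, write new candidate
        h = o * np.tanh(c)               # expose a gated view of the memory
    return h

# Hypothetical setup: 24 sensor channels observed over a 30-cycle window.
rng = np.random.default_rng(0)
n_in, n_hidden, T = 24, 8, 30
W = rng.normal(scale=0.1, size=(4 * n_hidden, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hidden, n_hidden))
b = np.zeros(4 * n_hidden)
w_out = rng.normal(scale=0.1, size=n_hidden)   # linear read-out to a scalar RUL

xs = rng.normal(size=(T, n_in))                # one engine's recent sensor window
rul_estimate = w_out @ lstm_forward(xs, W, U, b, n_hidden)
print(float(rul_estimate))
```

In a trained model the weights would be fitted by backpropagation through time against the known run-to-failure cycle counts; here they are random, so the output is meaningless except as a shape check.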
APA, Harvard, Vancouver, ISO, and other styles
45

Fellowes, John Robert. "Community composition of Hong Kong ants : spatial and seasonal patterns /." Thesis, Hong Kong : University of Hong Kong, 1996. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18737110.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Umphrey, Gary John Carleton University Dissertation Biology. "Differentiation of sibling species in the ant genus Aphaenogaster; karyotypic, electrophoretic, and morphometric investigations of the Fulva-Rudis-Texana complex." Ottawa, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
47

Chong, Chee-Seng. "The distribution and ecology of ants in vineyards /." Connect to thesis, 2009. http://repository.unimelb.edu.au/10187/5744.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Rillon-Marne, Anne-Zoé. "Philippe le Chancelier et son oeuvre : étude sur l'élaboration d'une poétique musicale." Poitiers, 2008. http://theses.edel.univ-poitiers.fr/theses/2008/Rillon-Anne-Zoe/2008-Rillon-Anne-Zoe-These.pdf.

Full text
Abstract:
Analysis of the monodic moral conductus attributed to the poet, preacher and theologian Philip the Chancellor reveals the rhetorical qualities of this corpus in both its words and its music. The two maintain a complex relationship, which may enhance the sounds of the words, clarify their meaning, or set up a learned construction addressed to minds accustomed to the subtleties of Latin rhythmic poetry. The desire to communicate a moral message imposes its rules and figures, techniques learned from other practices of discourse. The poet-composer conceives the conductus with the same reflexes as an orator, and the singing voice is an efficient medium for reaching a wide range of listeners. Each composition seems suited to its own purpose, through the sounds of the language, the play of structure, or the melodic elaboration. The sung word is thus a weapon aimed at all sinners and their vices.
APA, Harvard, Vancouver, ISO, and other styles
49

Rillon-Marne, Anne-Zoé Cullin Olivier. "Philippe le Chancelier et son oeuvre étude sur l'élaboration d'une poétique musicale /." [Poitiers] : [I-médias], 2008. http://theses.edel.univ-poitiers.fr/theses/2008/Rillon-Anne-Zoe/2008-Rillon-Anne-Zoe-These.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

LOCCI, AMEDEO. "Sviluppo di una piattaforma inerziale terrestre assistita da reti neurali artificiali." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2008. http://hdl.handle.net/2108/684.

Full text
Abstract:
Many of the technologies used in modern terrestrial navigation aid systems date back roughly a quarter of a century. The main systems in use are Global Positioning Systems (GPS) and Inertial Navigation Systems (INS). These require components whose quality, and hence market cost, grows with the demands of the application, from terrestrial vehicular navigation to aerospace, but also including handling applications and the management of vehicle fleets. In recent years navigation systems have seen large-scale deployment as terrestrial navigation aids, in particular through GPS. To overcome its disadvantages, low-cost integrated systems for terrestrial navigation have lately been developed for large-scale applications, among which the integrated INS/GPS system stands out for its technical and economic merits. To compensate for the low quality of the sensors employed, mathematical models able to correct the sensor responses are required. One of the most widely used instruments over the years has been the Kalman filter, as an optimal linear Gaussian estimator for INS/GPS data fusion; as a multi-sensor integration methodology, however, it has various limits. Correction of the INS is needed both to obtain a more reliable answer over short routes than a GPS-based system and to build a database collecting the kinematic, geometric and environmental information required when the GPS signal is absent or of low quality.
For that purpose, the aim is to make the navigation system intelligent: the Kalman filter has been the reference model of past years and has directed research toward intelligent models such as fuzzy logic, genetic algorithms and neural networks. The latter can make integrated INS/GPS systems intelligent, i.e. capable of making autonomous decisions in data correction after a learning process. The goal pursued here is the use of Artificial Neural Networks (ANN) to compensate for the INS-only response whenever a GPS outage occurs. In such situations, because of the unavoidable drifts caused by INS random-walk effects, a correction of the inertial data is needed, and this is delegated to the neural networks. An investigation of the models best suited to this kind of data was necessary, as well as a tuning of the networks themselves. A storage system for the sensitive data, based on updating the network memory (the weights), was therefore developed, together with a correction system for the inertial data. The evaluation, carried out on several test cases, showed a definite improvement in performance compared with correction by conventional systems. Such applications, as also emphasised in the scientific literature, can be considered a method for the future development of new integrated platforms for terrestrial navigation. Moreover, they can supply the attitude information needed for the guidance of autonomous systems, and their low market cost allows very large-scale deployment, with benefits for road safety and the reconstruction of accidents.
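The predict/update cycle of the Kalman filter that serves as the baseline for INS/GPS data fusion can be sketched in one dimension. The drift model, noise levels and outage window below are hypothetical illustrations chosen only to show the structure: the filter propagates the INS dead reckoning, grows its uncertainty, and corrects against GPS whenever a fix is available; during an outage (fix is `None`) the INS bias accumulates unchecked, which is exactly the gap the thesis's neural-network correction targets.

```python
import numpy as np

def kalman_fuse(ins_steps, gps_fixes, q=0.04, r=25.0):
    """1-D fusion: ins_steps are per-epoch INS displacements, gps_fixes are
    position measurements or None during an outage. q and r are hypothetical
    process and measurement noise variances."""
    x, p = 0.0, 1.0                      # state estimate and its variance
    track = []
    for step, gps in zip(ins_steps, gps_fixes):
        x, p = x + step, p + q           # predict: propagate INS, grow uncertainty
        if gps is not None:              # update only when a GPS fix is available
            k = p / (p + r)              # Kalman gain
            x = x + k * (gps - x)
            p = (1.0 - k) * p
        track.append(x)
    return track

# Hypothetical scenario: vehicle advances 1 m per epoch; the INS increments
# carry a 0.05 m/epoch bias (random-walk-like drift); GPS fixes have 5 m noise
# and drop out entirely between epochs 40 and 59.
rng = np.random.default_rng(1)
true_pos = np.cumsum(np.ones(100))
ins_steps = 1.05 + rng.normal(scale=0.02, size=100)
gps_fixes = [None if 40 <= i < 60 else true_pos[i] + rng.normal(scale=5.0)
             for i in range(100)]

est = kalman_fuse(ins_steps, gps_fixes)
print(round(est[-1] - true_pos[-1], 2), round(float(np.sum(ins_steps)) - true_pos[-1], 2))
```

Pure dead reckoning ends roughly 5 m off (the accumulated bias), while the fused track is pulled back toward truth once GPS returns; the ANN approach described above would stand in for the missing GPS corrections during the outage itself.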
APA, Harvard, Vancouver, ISO, and other styles