Dissertations / Theses on the topic 'Automated networks'

To see the other types of publications on this topic, follow the link: Automated networks.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Automated networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Lu, Ching-sung. "Automated validation of communication protocols /." The Ohio State University, 1986. http://rave.ohiolink.edu/etdc/view?acc_num=osu148726702499786.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Apaydin, Oncu. "Automated Calibration Of Water Distribution Networks." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615692/index.pdf.

Full text
Abstract:
Water distribution network models are widely used for various purposes such as long-range planning, design, operation and water quality management. Before these models are used for a specific study, they should be calibrated by adjusting model parameters such as pipe roughness values and nodal demands so that the models yield results compatible with site observations (basically, pressure readings). Many methods have been developed to calibrate water distribution networks. In this study, Darwin Calibrator, a computer program that uses a genetic algorithm, is used to calibrate the N8.3 pressure zone model of the Ankara water distribution network. In this case study the network is calibrated on the basis of the roughness parameter, the Hazen-Williams coefficient, for the sake of simplicity. It is understood that various parameters contribute to the uncertainties in water distribution network modelling and the calibration process. Computer programs are nonetheless valuable tools for solving water distribution network problems and for calibrating network models accurately and quickly using automated calibration techniques. Furthermore, there are many important aspects that should be considered during automated calibration, such as pipe roughness grouping. In this study, the influence of flow velocity on pipe roughness grouping is examined. The roughness coefficients of the pipes have been estimated in the range of 70-140.
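As a concrete illustration of the roughness parameter being calibrated above, here is a minimal sketch of the Hazen-Williams head-loss formula in its common SI form. This is an orientation example only, not code from the thesis; the function name and the pipe values are our own assumptions:

```python
def hazen_williams_headloss(length_m, flow_m3s, c, diameter_m):
    """Head loss (m) from the Hazen-Williams formula (SI form):
    h_f = 10.67 * L * Q^1.852 / (C^1.852 * D^4.87).

    A lower roughness coefficient C models a rougher pipe and yields a
    larger head loss, which is why fitting C against field pressure
    readings is an effective calibration strategy.
    """
    return 10.67 * length_m * flow_m3s ** 1.852 / (c ** 1.852 * diameter_m ** 4.87)

# Hypothetical pipe: 1 km long, 300 mm diameter, carrying 50 L/s.
loss_smooth = hazen_williams_headloss(1000.0, 0.05, 140, 0.3)  # smooth pipe, C = 140
loss_rough = hazen_williams_headloss(1000.0, 0.05, 70, 0.3)    # rough pipe, C = 70
assert loss_rough > loss_smooth  # rougher pipe loses more head
```

A calibrator such as the genetic-algorithm approach described in the abstract would search over C values (here, the 70-140 range) to minimise the mismatch between computed and observed pressures.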
APA, Harvard, Vancouver, ISO, and other styles
3

English, Philip J. "Automated discovery of chemical reaction networks." Thesis, University of Newcastle Upon Tyne, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.500929.

Full text
Abstract:
The identification of models of chemical reaction networks is of importance in the safe, economic and environmentally sensitive development of chemical products. Qualitative models of a network of interactions are used in the design of drugs and other therapies. Quantitative models of the behaviour of reaction networks are the foundation of the science of reaction engineering (e.g. see Levenspiel, 1999), allowing the use of simulation software in the rapid development of commercial-scale production processes. This work extends the existing methods reported by Burnham et al. (2006), adopting the global basis function method first applied to this problem by Crampin et al. (2004a).
APA, Harvard, Vancouver, ISO, and other styles
4

Singh, Shailendra. "Automated fault injection and analysis for wired/wireless networks." Diss., Online access via UMI:, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wallace, Brian T. "Automated system for load-balancing EBGP peers." [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0008800.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Darr, Timothy, Ronald Fernandes, Michael Graul, John Hamilton, and Charles H. Jones. "Automated Configuration and Validation of Instrumentation Networks." International Foundation for Telemetering, 2008. http://hdl.handle.net/10150/606234.

Full text
Abstract:
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California
This paper describes the design and implementation of a test instrumentation network configuration and verification system. Given a multivendor instrument part catalog containing sensor, actuator, transducer and other instrument data, together with user requirements (including desired measurement functions) and technical specifications, the instrumentation network configurator will select and connect instruments from the catalog that meet the requirements and technical specifications. The instrumentation network configurator will enable the goal of mixing and matching hardware from multiple vendors to develop robust solutions and to reduce the total cost of ownership for creating and maintaining test instrumentation networks.
APA, Harvard, Vancouver, ISO, and other styles
7

Nilsson, Henrik, and Anders Svensson. "Automated Mobile Cranes." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-29479.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Conner, Steven. "Automated distribution network planning with active network management." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28818.

Full text
Abstract:
Renewable energy generation is becoming a major part of energy supply, often in the form of distributed generation (DG) connected to distribution networks. While growth has been rapid, there is awareness that limitations on spare capacity within distribution (and transmission) networks are holding back development. Developments are being shelved until new network reinforcements can be built, which may make some projects non-viable. Reinforcements are costly and often underutilised, typically loaded to their limits only on a few occasions during the year. In order to accommodate new DG without the high costs or delays, active network management (ANM) is being promoted, in which generation and other network assets are controlled within the limits of the existing network. There is a great deal of complexity and uncertainty associated with developing ANM, and devising coherent plans to accommodate new DG is challenging for Distribution Network Operators (DNOs). As such, there is a need for robust network planning tools that can explicitly handle ANM and that can be trusted and implemented easily. This thesis describes the need for, and the development of, a new distribution expansion planning framework that provides DNOs with a better understanding of the impacts created by renewable DG and the value of ANM. This revolves around a heuristic planning framework which schedules necessary upgrades in power lines and transformers associated with changes in demand as well as those driven by the connection of DG. Within this framework, a form of decentralised, adaptive control of DG output has been introduced to allow estimation of the impact of managing voltage and power-flow constraints on the timing of and need for network upgrades. The framework is initially deployed using simple scenarios, but a further advance is the explicit use of time series to provide substantially improved estimates of the levels of curtailment implied by ANM.
In addition, a simplified approach to incorporating demand-side management has been deployed to facilitate understanding of the scope and role this may play in facilitating DG connections.
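The time-series curtailment estimate mentioned in the abstract can be sketched in a few lines. This is an illustrative simplification under an assumed hard export limit, not the thesis's planning framework; the function name and the generation profile are hypothetical:

```python
def estimate_curtailment(generation_mw, export_limit_mw):
    """Curtailed energy (MWh, assuming hourly samples) when generation
    exceeds a network export limit.

    Any output above the limit in a given hour is assumed to be
    curtailed by the ANM scheme; output below the limit is unaffected.
    """
    return sum(max(0.0, g - export_limit_mw) for g in generation_mw)

# Hypothetical wind output over six hours against a 10 MW thermal limit.
profile = [4.0, 8.0, 12.0, 15.0, 9.0, 11.0]
# Hours 3, 4 and 6 exceed the limit by 2, 5 and 1 MW -> 8 MWh curtailed.
assert estimate_curtailment(profile, 10.0) == 8.0
```

Running such an estimate over a full year of time-series data, rather than worst-case snapshots, is what gives the "substantially improved estimates" of curtailment the abstract refers to.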
APA, Harvard, Vancouver, ISO, and other styles
9

Burnham, Samantha Claire. "Towards the automated determination of chemical reaction networks." Thesis, University of Newcastle Upon Tyne, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lomas, David. "Improving automated postal address recognition using neural networks." Thesis, University of York, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Cerkez, Paul. "Automated Detection of Semagram-Laden Images." NSUWorks, 2012. http://nsuworks.nova.edu/gscis_etd/115.

Full text
Abstract:
Digital steganography is gaining wide acceptance in the world of electronic copyright stamping. Digital media that are easy to steal, such as graphics, photos and audio files, are being tagged with both visible and invisible copyright stamps known as digital watermarks. However, these same methodologies are also used to hide communications between actors in criminal or covert activities. An inherent difficulty in developing steganography attacks is overcoming the variety of methods for hiding a message and the multitude of choices of available media. The steganalyst cannot create an attack until the hidden-content method appears. When a message is visually transmitted in a non-textual format (i.e., in an image) it is referred to as a semagram. Semagrams are a subset of steganography and are relatively easy to create. However, detecting a hidden message in an image-based semagram is more difficult than detecting digital modifications to an image's structure. The trend in steganography is a decrease in detectable digital traces and a move toward semagrams. This research outlines the creation of a novel computer-based application designed to detect the likely presence of a Morse-code-based semagram message in an image. This application capitalizes on the adaptability and learning capabilities of various artificial neural network (NN) architectures, most notably hierarchical architectures. Four NN architectures [feed-forward Back-Propagation NN (BPNN), Self-Organizing Map (SOM), Neural Abstraction Pyramid (NAP), and a Hybrid Custom Network (HCN)] were tested for applicability to this domain, with the best-performing one being the HCN. Each NN was given a baseline set of training images (quantity based on NN architecture); then test images were presented (each test set containing 3,337 images). There were 36 levels of testing, each subsequent test set representing an increase in complexity over the previous one.
In the end, the HCN proved to be the NN of choice from among the four tested. The final HCN implementation was the only network able to perform successfully against all 36 levels. Additionally, the HCN, while only being trained on the base Morse code images, successfully detected images in the 9 test sets of Morse code isomorphs.
APA, Harvard, Vancouver, ISO, and other styles
12

Andrews, Robert. "An automated rule refinement system." Thesis, Queensland University of Technology, 2003. https://eprints.qut.edu.au/15788/1/Robert_Andrews_Thesis.pdf.

Full text
Abstract:
Artificial neural networks (ANNs) are essentially a 'black box' technology. The lack of an explanation component prevents the full and complete exploitation of this form of machine learning. During the mid-1990s the field of 'rule extraction' emerged. Rule extraction techniques attempt to derive a human-comprehensible explanation structure from a trained ANN. Andrews et al. (1995) proposed the following reasons for extending the ANN paradigm to include a rule extraction facility: * provision of a user explanation capability * extension of the ANN paradigm to 'safety critical' problem domains * software verification and debugging of ANN components in software systems * improving the generalization of ANN solutions * data exploration and induction of scientific theories * knowledge acquisition for symbolic AI systems. An allied area of research is that of 'rule refinement'. In rule refinement an initial rule base (i.e., what may be termed 'prior knowledge') is inserted into an ANN by prestructuring some or all of the network architecture, weights, activation functions, learning rates, etc. The rule refinement process then proceeds in the same way as normal rule extraction, viz.: (1) train the network on the available data set(s); and (2) extract the 'refined' rules. Very few ANN techniques have the capability to act as a true rule refinement system. Existing techniques, such as KBANN (Towell & Shavlik, 1993), are limited in that the rule base used to initialize the network must be nearly complete, and the refinement process is limited to modifying antecedents. The limitations of existing techniques severely limit their applicability to real-world problem domains. Ideally, a rule refinement technique should be able to deal with incomplete initial rule bases, modify antecedents, remove inaccurate rules, and add new knowledge by generating new rules.
The motivation for this research project was to develop such a rule refinement system and to investigate its efficacy when applied to both nearly complete and incomplete problem domains. The premise behind rule refinement is that the refined rules better represent the actual domain theory than the initial domain theory used to initialize the network. The hypotheses tested in this research include: * that the utilization of prior domain knowledge will speed up network training, * produce smaller trained networks, * produce more accurate trained networks, and * bias the learning phase towards a solution that 'makes sense' in the problem domain. Geva, Malmstrom, and Sitte (1998) described the Local Cluster (LC) neural net, and showed that the LC network was able to learn / approximate complex functions to a high degree of accuracy. The hidden layer of the LC network is comprised of basis functions (the local cluster units) that are composed of sigmoid-based 'ridge' functions. In the general form of the LC network the ridge functions can be oriented in any direction. We describe RULEX, a technique designed to provide an explanation component for its underlying Local Cluster ANN through the extraction of symbolic rules from the weights of the local cluster units of the trained ANN. RULEX exploits a feature of the Restricted Local Cluster (Geva, Andrews & Geva, 2002), namely axis-parallel ridge functions, that allows hyper-rectangular rules of the form IF ∀ 1 ≤ i ≤ n : xi ∈ [xi_lower, xi_upper] THEN pattern belongs to the target class to be easily extracted from the local functions that comprise the hidden layer of the LC network. RULEX is tested on 14 applications available in the public domain. RULEX results are compared with a leading machine learning technique, See5, with RULEX generally performing as well as See5 and in some cases outperforming See5 in predictive accuracy.
We describe RULEIN, a rule refinement technique that allows symbolic rules to be converted into the parameters that define local cluster functions. RULEIN allows existing domain knowledge to be captured in the architecture of an LC ANN, thus facilitating the first phase of the rule refinement paradigm. RULEIN is tested on a variety of artificial and real-world problems. Experimental results indicate that RULEIN is able to satisfy the first requirement of a rule refinement technique by correctly translating a set of symbolic rules into an LC ANN that has the same predictive behaviour as the set of rules from which it was constructed. Experimental results also show that in cases where a strong domain theory exists, initializing an LC network using RULEIN generally speeds up network training and produces smaller, more accurate trained networks, with the trained network properly representing the underlying domain theory. In cases where a weak domain theory exists the same results are not always apparent. Experiments with the RULEIN / LC / RULEX rule refinement method show that the method is able to remove inaccurate rules from the initial knowledge base, modify rules in the initial knowledge base that are only partially correct, and learn new rules not present in the initial knowledge base. The combination of RULEIN / LC / RULEX is thus shown to be an effective rule refinement technique for use with a Restricted Local Cluster network.
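The hyper-rectangular rules this abstract describes (IF every attribute lies within its interval THEN the pattern belongs to the target class) are straightforward to evaluate. A minimal sketch of such a rule check, with a helper name and data of our own invention rather than anything from the thesis:

```python
def rule_fires(x, lower, upper):
    """Evaluate a hyper-rectangular rule of the form extracted by
    rule-extraction techniques such as RULEX:
    IF for all i: lower[i] <= x[i] <= upper[i] THEN target class.
    """
    return all(lo <= xi <= hi for xi, lo, hi in zip(x, lower, upper))

# A 2-D rule: x1 in [0, 2] and x2 in [1, 3].
lower, upper = [0.0, 1.0], [2.0, 3.0]
assert rule_fires([1.0, 2.0], lower, upper)       # inside the rectangle
assert not rule_fires([3.0, 2.0], lower, upper)   # x1 out of range
```

Each such rectangle corresponds, in the Restricted Local Cluster network, to one axis-parallel local cluster unit in the hidden layer, which is what makes the extraction tractable.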
APA, Harvard, Vancouver, ISO, and other styles
13

Andrews, Robert. "An Automated Rule Refinement System." Queensland University of Technology, 2003. http://eprints.qut.edu.au/15788/.

Full text
Abstract:
Artificial neural networks (ANNs) are essentially a 'black box' technology. The lack of an explanation component prevents the full and complete exploitation of this form of machine learning. During the mid-1990s the field of 'rule extraction' emerged. Rule extraction techniques attempt to derive a human-comprehensible explanation structure from a trained ANN. Andrews et al. (1995) proposed the following reasons for extending the ANN paradigm to include a rule extraction facility: * provision of a user explanation capability * extension of the ANN paradigm to 'safety critical' problem domains * software verification and debugging of ANN components in software systems * improving the generalization of ANN solutions * data exploration and induction of scientific theories * knowledge acquisition for symbolic AI systems. An allied area of research is that of 'rule refinement'. In rule refinement an initial rule base (i.e., what may be termed 'prior knowledge') is inserted into an ANN by prestructuring some or all of the network architecture, weights, activation functions, learning rates, etc. The rule refinement process then proceeds in the same way as normal rule extraction, viz.: (1) train the network on the available data set(s); and (2) extract the 'refined' rules. Very few ANN techniques have the capability to act as a true rule refinement system. Existing techniques, such as KBANN (Towell & Shavlik, 1993), are limited in that the rule base used to initialize the network must be nearly complete, and the refinement process is limited to modifying antecedents. The limitations of existing techniques severely limit their applicability to real-world problem domains. Ideally, a rule refinement technique should be able to deal with incomplete initial rule bases, modify antecedents, remove inaccurate rules, and add new knowledge by generating new rules.
The motivation for this research project was to develop such a rule refinement system and to investigate its efficacy when applied to both nearly complete and incomplete problem domains. The premise behind rule refinement is that the refined rules better represent the actual domain theory than the initial domain theory used to initialize the network. The hypotheses tested in this research include: * that the utilization of prior domain knowledge will speed up network training, * produce smaller trained networks, * produce more accurate trained networks, and * bias the learning phase towards a solution that 'makes sense' in the problem domain. Geva, Malmstrom, and Sitte (1998) described the Local Cluster (LC) neural net, and showed that the LC network was able to learn / approximate complex functions to a high degree of accuracy. The hidden layer of the LC network is comprised of basis functions (the local cluster units) that are composed of sigmoid-based 'ridge' functions. In the general form of the LC network the ridge functions can be oriented in any direction. We describe RULEX, a technique designed to provide an explanation component for its underlying Local Cluster ANN through the extraction of symbolic rules from the weights of the local cluster units of the trained ANN. RULEX exploits a feature of the Restricted Local Cluster (Geva, Andrews & Geva, 2002), namely axis-parallel ridge functions, that allows hyper-rectangular rules of the form IF ∀ 1 ≤ i ≤ n : xi ∈ [xi_lower, xi_upper] THEN pattern belongs to the target class to be easily extracted from the local functions that comprise the hidden layer of the LC network. RULEX is tested on 14 applications available in the public domain. RULEX results are compared with a leading machine learning technique, See5, with RULEX generally performing as well as See5 and in some cases outperforming See5 in predictive accuracy.
We describe RULEIN, a rule refinement technique that allows symbolic rules to be converted into the parameters that define local cluster functions. RULEIN allows existing domain knowledge to be captured in the architecture of an LC ANN, thus facilitating the first phase of the rule refinement paradigm. RULEIN is tested on a variety of artificial and real-world problems. Experimental results indicate that RULEIN is able to satisfy the first requirement of a rule refinement technique by correctly translating a set of symbolic rules into an LC ANN that has the same predictive behaviour as the set of rules from which it was constructed. Experimental results also show that in cases where a strong domain theory exists, initializing an LC network using RULEIN generally speeds up network training and produces smaller, more accurate trained networks, with the trained network properly representing the underlying domain theory. In cases where a weak domain theory exists the same results are not always apparent. Experiments with the RULEIN / LC / RULEX rule refinement method show that the method is able to remove inaccurate rules from the initial knowledge base, modify rules in the initial knowledge base that are only partially correct, and learn new rules not present in the initial knowledge base. The combination of RULEIN / LC / RULEX is thus shown to be an effective rule refinement technique for use with a Restricted Local Cluster network.
APA, Harvard, Vancouver, ISO, and other styles
14

Bhabuta, Madhu Darshan Kumar. "Quantitative analysis of ATM networks." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299444.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Sångberg, Dennis. "Automated Glioma Segmentation in MRI using Deep Convolutional Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-171046.

Full text
Abstract:
Manual segmentation of brain tumours is a time-consuming process, results often show high variability, and there is a call for automation in clinical practice. In this thesis the use of deep convolutional networks for automatic glioma segmentation in MRI is investigated. The implemented networks are evaluated on data used in the brain tumor segmentation challenge (BraTS). It is found that 3D convolutional networks generally outperform 2D convolutional networks, and that the best networks can produce segmentations that closely resemble human segmentations. Convolutional networks are also evaluated as feature extractors with linear SVM classifiers on top, and although the sensitivity is improved considerably, the segmentations are heavily oversegmented. The importance of the amount of data available is also investigated by comparing results from networks trained on both the 2013 and the greatly extended 2014 data sets, but it is found that the method of producing the ground truth was also a contributing factor. The networks do not beat the previous high scores on the BraTS data, but several simple areas of improvement are identified to take the networks further.
(Swedish abstract, translated:) Manual segmentation of brain tumours is a time-consuming process, segmentations often vary between experts, and automatic segmentation would be useful in clinical practice. This report investigates the use of deep convolutional networks (ConvNets) for automatic segmentation of gliomas in MR images. The implemented networks are evaluated using data from the brain tumor segmentation challenge (BraTS). The study finds that 3D networks generally perform better than 2D networks, and that the best networks are able to produce segmentations that resemble human segmentations. ConvNets are also evaluated as feature extractors, with linear SVMs as classifiers. This method yields segmentations with high sensitivity, but they are also heavily oversegmented. The importance of having more training data is also investigated by training on two data sets of different sizes, but the method used to obtain the ground-truth segmentations probably also has a large effect on the result. The networks do not beat the previous records on BraTS, but several important yet simple areas of improvement are identified that could potentially improve the results.
APA, Harvard, Vancouver, ISO, and other styles
16

Tantimuratha, L. "Automated design of flexible and operable heat exchanger networks." Thesis, University of Manchester, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.505685.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Asante, N. D. R. "Automated and interactive retrofit design of practical heat exchanger networks." Thesis, University of Manchester, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.504897.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Elwing-Malmfelt, Linus, and Oscar Keresztes. "Semi-automated hardening of networks based on security classifications." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21793.

Full text
Abstract:
Conducting risk assessments is a vital part of securing information systems. The task of conducting risk assessments is a time-consuming and costly one for organizations, so different security control frameworks have been created to assist in the process. These security control frameworks consist of information about what the organization is supposed to implement to achieve a given level of security in its information system. To understand which network-hardening solution to use and in which part of the system, an analyst needs to manually work through the implementation details gathered from the framework. A security control can be split into different security tiers depending on the amount of security the implementation achieves; the security tiers are defined by the authors of the security control framework. An organization can reduce its cost and the time spent on implementing security by having a tool that parses the information system and creates guidelines based on security controls and the parsed data. In this research, we compare different security controls and, based on the findings, investigate hardware, software and configurations that are used when conducting network hardening. We evaluate to what extent it is possible to generate guidelines that meet a given security tier and whether it is feasible to apply them, and we present a prototype that is able to generate guidelines. The different security controls are compared by analyzing the contents of each control in the frameworks. A comprehensive mapping is created and, based on the information gathered in the mapping, network-hardening implementations are investigated using the devices in our experiment environment. With the implementations at hand, a tool is proposed that parses information systems and outputs guidelines that integrate the implementations in a readable format.
Experts within the field of system hardening then evaluated the created guidelines in terms of achieving the defined security levels. For the comparison, a total of 148 different controls were identified as being related in some way. With these 148 controls at hand, the prototype can output 111 different guidelines with different security tier associations. According to the comments from the experts, the proposed guidelines were able to satisfy each security tier. Our prototype demonstrated that we were able to create guidelines that meet a given security tier. Although the implementation of each guideline is not automated, identifying which network-hardening implementation should be used is done in an automated fashion, allowing organizations to put their spending and time into other organizational interests.
(Swedish abstract, translated:) Conducting risk assessments is a necessary process when securing an information system. For organizations, conducting risk assessments is a time-consuming and expensive process, so different security control frameworks have been developed to facilitate this task. These frameworks contain information about what an organization needs to implement in order to achieve a specific level of security in its information system. This security level varies depending on how much security an implementation provides, and the different levels are defined by the framework authors. To understand which network-hardening measures the organization should use, and to which part of the system they should be applied, an analyst needs to manually go through the implementation solutions in the frameworks together with the system and thereby derive the correct hardening action for a specific part of the system. The aim of this work is to compare different security controls and, based on the results, investigate how hardware, software and configurations can be used to harden the network. We evaluate to what extent it is possible to generate guidelines and whether it is possible to apply them, and present a prototype that can generate guidelines. The different frameworks are compared by analysing the contents of their security controls. A comprehensive mapping is produced based on the analysis, and from the mapping further network-hardening implementations are analysed. With the help of the implementations, a tool is proposed that analyses an information system and produces guidelines that integrate the implementations into a readable format. These guidelines are then examined by experts with regard to how well they achieve the defined security levels. During the work, a total of 148 different security controls that showed similarity to each other were identified.
With these 148 controls, our prototype was able to produce 111 different guidelines belonging to different security levels, depending on the system provided as input. From the experts' review comments it could be concluded that the guidelines produced by the prototype were able to maintain each security level. Our prototype showed that it is possible to create guidelines that achieve a requested security level. Although the implementation of each produced guideline is not automated, our prototype was able to automate the process of determining which network-hardening implementation should be used for each guideline, allowing organizations to put more time and investment into other organizational interests.
APA, Harvard, Vancouver, ISO, and other styles
19

Zaman, Shaikh Faisal. "Automated Liver Segmentation from MR-Images Using Neural Networks." Thesis, Linköpings universitet, Avdelningen för radiologiska vetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162599.

Full text
Abstract:
Liver segmentation is a cumbersome task when done manually, often consuming quality time of radiologists. Automation of such clinical tasks is fundamental and the subject of much modern research. Various computer-aided methods have been applied to this task, but they have not given optimal results due to various challenges such as low contrast in the images and abnormalities in the tissues. At present there has been significant progress in machine learning and artificial intelligence (AI) in the field of medical image processing, though challenges remain, such as image sensitivity due to the different scanners used to acquire images and differences in the imaging methods used, to name a few. In the following research a convolutional neural network (CNN), specifically a U-net, was used for this process. Predicted masks are generated on the corresponding test data and the Dice similarity coefficient (DSC) is used as a statistical validation metric for performance evaluation. Three datasets, from different scanners (two 1.5 T scanners and one 3.0 T scanner), have been evaluated. The U-net performs well on all three datasets, even though there was limited data for training, reaching a DSC of up to 0.93 on one of the datasets.
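The Dice similarity coefficient used as the validation metric above is easy to state in code. A minimal sketch on flattened binary masks; the function name and example masks are illustrative, not from the thesis:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|).

    Returns 1.0 when both masks are empty (a common convention).
    """
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Flattened 5-voxel masks: 2 voxels agree, each mask has 3 foreground voxels.
pred  = [1, 1, 0, 1, 0]
truth = [1, 0, 0, 1, 1]
assert abs(dice_coefficient(pred, truth) - 2 / 3) < 1e-9  # 2*2 / (3+3)
```

A DSC of 1.0 means the predicted and ground-truth masks overlap perfectly; the 0.93 reported in the abstract therefore indicates a near-complete overlap with the manual segmentation.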
APA, Harvard, Vancouver, ISO, and other styles
20

Tiwana, Moazzam Islam. "Automated RRM optimization of LTE networks using statistical learning." Phd thesis, Institut National des Télécommunications, 2010. http://tel.archives-ouvertes.fr/tel-00589617.

Full text
Abstract:
The mobile telecommunication industry has experienced very rapid growth in the recent past, resulting in significant technological and architectural evolution of wireless networks. The expansion and heterogeneity of these networks have made their operational cost more and more important. Typical faults in these networks may be related to equipment breakdown and inappropriate planning and configuration. In this context, automated troubleshooting in wireless networks is receiving growing importance, aiming at reducing operational cost and providing high-quality services for end-users. Automated troubleshooting can reduce service breakdown time for clients, decreasing client switchover to competing network operators. The Radio Access Network (RAN) constitutes the biggest part of a wireless network; hence, automated troubleshooting of the RAN is very important. Troubleshooting comprises isolating the faulty cells (fault detection), identifying the causes of the fault (fault diagnosis), and proposing and deploying the healing action (solution deployment). First of all, this thesis explores previous work related to the troubleshooting of wireless networks. It turns out that fault detection and diagnosis of wireless networks have been well studied in the scientific literature; surprisingly, no significant references to research on the automated healing of wireless networks have been reported. Thus, the aim of this thesis is to describe my research advances on "Automated healing of LTE wireless networks using statistical learning". We focus on faults related to Radio Resource Management (RRM) parameters. This thesis explores the use of statistical learning for the automated healing process and, in this context, investigates its effectiveness for automated RRM.
This is achieved by modeling the functional relationships between the RRM parameters and Key Performance Indicators (KPIs). A generic automated RRM architecture has been proposed and used to study the application of statistical learning to auto-tuning and performance monitoring of wireless networks. The use of statistical learning in automated healing introduces two important difficulties. First, the KPI measurements obtained from the network are noisy, and this noise can partially mask the actual behaviour of the KPIs. Second, automated healing algorithms are iterative: after each iteration, network performance is typically evaluated over the duration of a day with the new parameter settings, so the algorithms should achieve their QoS objective in a minimum number of iterations. The automated healing methodology developed in this thesis, based on statistical modeling, addresses both issues. The algorithms developed are computationally light and converge in a small number of iterations, which enables their implementation in the Operation and Maintenance Center (OMC) in off-line mode. The methodology has been applied to 3G Long Term Evolution (LTE) use cases for healing mobility and interference-mitigation parameter settings, and the healing objective is achieved in a small number of iterations. An automated healing process using the sequential optimization of interference mitigation and packet scheduling parameters has also been investigated. Incorporating a priori knowledge into the automated healing process further reduces the number of iterations required; furthermore, the process becomes more robust, hence more feasible and practical for implementation in wireless networks.
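The core idea of modeling the functional relationship between an RRM parameter and a KPI, then inverting the model to propose the next parameter setting, can be illustrated with a deliberately simplified sketch. A one-dimensional linear model and the function names are assumptions for illustration, not the thesis's actual model:

```python
def fit_linear(xs, ys):
    """Ordinary least squares fit of y ~ a*x + b from noisy observations."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def propose_parameter(xs, ys, kpi_target):
    """Invert the fitted KPI model to suggest the next RRM parameter setting."""
    a, b = fit_linear(xs, ys)
    return (kpi_target - b) / a
```

Each healing iteration would add the newly observed (parameter, KPI) pair to the history, refit, and propose again, which is why a model-based step can converge in few iterations despite measurement noise.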
APA, Harvard, Vancouver, ISO, and other styles
21

Tiwana, Moazzam Islam. "Automated RRM optimization of LTE networks using statistical learning." Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2010. http://www.theses.fr/2010TELE0025.

Full text
Abstract:
Le secteur des télécommunications mobiles a connu une croissance très rapide dans un passé récent avec pour résultat d'importantes évolutions technologiques et architecturales des réseaux sans fil. L'expansion et l'hétérogénéité de ces réseaux ont engendré des coûts de fonctionnement de plus en plus importants. Les dysfonctionnements typiques de ces réseaux ont souvent pour origines des pannes d'équipements ainsi que de mauvaises planifications et/ou configurations. Dans ce contexte, le dépannage automatisé des réseaux sans fil peut s'avérer d'une importance particulière visant à réduire les coûts opérationnels et à fournir une bonne qualité de service aux utilisateurs. Le dépannage automatisé des pannes survenant sur les réseaux sans fil peuvent ainsi conduire à une réduction du temps d'interruption de service pour les clients, permettant ainsi d'éviter l'orientation de ces derniers vers les opérateurs concurrents. Le RAN (Radio Access Network) d'un réseau sans fil constitue sa plus grande partie. Par conséquent, le dépannage automatisé des réseaux d'accès radio des réseaux sans fil est très important. Ce dépannage comprend la détection des dysfonctionnements, l'identification des causes des pannes (diagnostic) et la proposition d'actions correctives (déploiement de la solution). Tout d'abord, dans cette thèse, les travaux antérieurs liés au dépannage automatisé des réseaux sans-fil ont été explorés. Il s'avère que la détection et le diagnostic des incidents impactant les réseaux sans-fil ont déjà bien été étudiés dans les productions scientifiques traitant de ces sujets. Mais étonnamment, aucune référence significative sur des travaux de recherche liés aux résolutions automatisées des pannes des réseaux sans fil n'a été rapportée. Ainsi, l'objectif de cette thèse est de présenter mes travaux de recherche sur la " résolution automatisée des dysfonctionnements des réseaux sans fil LTE (Long Term Evolution) à partir d'une approche statistique ". 
Les dysfonctionnements liés aux paramètres RRM (Radio Resource Management) seront particulièrement étudiés. Cette thèse décrit l'utilisation des données statistiques pour l'automatisation du processus de résolution des problèmes survenant sur les réseaux sans fil. Dans ce but, l'efficacité de l'approche statistique destinée à l'automatisation de la résolution des incidents liés aux paramètres RRM a été étudiée. Ce résultat est obtenu par la modélisation des relations fonctionnelles existantes entre les paramètres RRM et les indicateurs de performance ou KPI (Key Performance Indicator). Une architecture générique automatisée pour RRM 8 a été proposée. Cette dernière a été utilisée afin d'étudier l'utilisation de l'approche statistique dans le paramétrage automatique et le suivi des performances des réseaux sans fil. L'utilisation de l'approche statistique dans la résolution automatique des dysfonctionnements des réseaux sans fil présente deux contraintes majeures. Premièrement, les mesures de KPI obtenues à partir du réseau peuvent contenir des erreurs qui peuvent partiellement masquer le comportement réel des indicateurs de performance. Deuxièmement, ces algorithmes automatisés sont itératifs. Ainsi, après chaque itération, la performance du réseau est généralement évaluée sur la durée d'une journée avec les nouveaux paramètres réseau implémentés. Les algorithmes itératifs devraient donc atteindre leurs objectifs de qualité de service dans un nombre minimum d'itérations. La méthodologie automatisée de diagnostic et de résolution développée dans cette thèse, basée sur la modélisation statistique, prend en compte ces deux difficultés. Ces algorithmes de la résolution automatisé nécessitent peu de calculs et convergent vers un petit nombre d'itérations ce qui permet leur implémentation à l'OMC (Operation and Maintenace Center). 
La méthodologie a été appliquée à des cas pratiques sur réseau LTE dans le but de résoudre des problématiques liées à la mobilité et aux interférences. Il est ainsi apparu que l'objectif de correction de ces dysfonctionnements a été atteint au bout d'un petit nombre d'itérations. Un processus de résolution automatisé utilisant l'optimisation séquentielle des paramètres d'atténuation des interférences et de packet scheduling a également été étudié. L'incorporation de la "connaissance a priori" dans le processus de résolution automatisé réduit d'avantage le nombre d'itérations nécessaires à l'automatisation du processus. En outre, le processus automatisé de résolution devient plus robuste, et donc, plus simple et plus pratique à mettre en œuvre dans les réseaux sans fil
The mobile telecommunication industry has experienced a very rapid growth in the recent past. This has resulted in significant technological and architectural evolution in the wireless networks. The expansion and the heterogenity of these networks have made their operational cost more and more important. Typical faults in these networks may be related to equipment breakdown and inappropriate planning and configuration. In this context, automated troubleshooting in wireless networks receives a growing importance, aiming at reducing the operational cost and providing high-quality services for the end-users. Automated troubleshooting can reduce service breakdown time for the clients, resulting in the decrease in client switchover to competing network operators. The Radio Access Network (RAN) of a wireless network constitutes its biggest part. Hence, the automated troubleshooting of RAN of the wireless networks is very important. The troubleshooting comprises the isolation of the faulty cells (fault detection), identifying the causes of the fault (fault diagnosis) and the proposal and deployement of the healing action (solution deployement). First of all, in this thesis, the previous work related to the troubleshooting of the wireless networks has been explored. It turns out that the fault detection and the diagnosis of wireless networks have been well studied in the scientific literature. Surprisingly, no significant references for the research work related to the automated healing of wireless networks have been reported. Thus, the aim of this thesis is to describe my research advances on "Automated healing of LTE wireless networks using statistical learning". We focus on the faults related to Radio Resource Management (RRM) parameters. This thesis explores the use of statistical learning for the automated healing process. In this context, the effectiveness of statistical learning for automated RRM has been investigated. 
This is achieved by modeling the functional relationships between the RRM parameters and Key Performance Indicators (KPIs). A generic automated RRM architecture has been proposed. This generic architecture has been used to study the application of statistical learning approach to auto-tuning and performance monitoring of the wireless networks. The use of statistical learning in the automated healing of wireless networks introduces two important diculties: Firstly, the KPI measurements obtained from the network are noisy, hence this noise can partially mask the actual behaviour of KPIs. Secondly, these automated healing algorithms are iterative. After each iteration the network performance is typically evaluated over the duration of a day with new network parameter settings. Hence, the iterative algorithms should achieve their QoS objective in a minimum number of iterations. Automated healing methodology developped in this thesis, based on statistical modeling, addresses these two issues. The automated healing algorithms developped are computationaly light and converge in a few number of iterations. This enables the implemenation of these algorithms in the Operation and Maintenance Center (OMC) in the off-line mode. The automated healing methodolgy has been applied to 3G Long Term Evolution (LTE) use cases for healing the mobility and intereference mitigation parameter settings. It has been observed that our healing objective is achieved in a few number of iterations. An automated healing process using the sequential optimization of interference mitigation and packet scheduling parameters has also been investigated. The incorporation of the a priori knowledge into the automated healing process, further reduces the number of iterations required for automated healing. Furthermore, the automated healing process becomes more robust, hence, more feasible and practical for the implementation in the wireless networks
APA, Harvard, Vancouver, ISO, and other styles
22

Kaiser, Edward Leo. "Addressing Automated Adversaries of Network Applications." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/4.

Full text
Abstract:
The Internet supports a perpetually evolving patchwork of network services and applications. Popular applications include the World Wide Web, online commerce, online banking, email, instant messaging, multimedia streaming, and online video games. Practically all networked applications have a common objective: to directly or indirectly process requests generated by humans. Some users employ automation to establish an unfair advantage over non-automated users. The perceived and substantive damages that automated, adversarial users inflict on an application degrade its enjoyment and usability by legitimate users, and result in reputation and revenue loss for the application's service provider. This dissertation examines three challenges critical to addressing the undesirable automation of networked applications. The first challenge explores individual methods that detect various automated behaviors. Detection methods range from observing unusual network-level request traffic to sensing anomalous client operation at the application level. Since many detection methods are not individually conclusive, the second challenge investigates how to combine detection methods to accurately identify automated adversaries. The third challenge considers how to leverage the available knowledge to disincentivize adversary automation by nullifying their advantage over legitimate users. The thesis of this dissertation is that there exist methods to detect automated behaviors with which an application's service provider can identify and then systematically disincentivize automated adversaries. This dissertation evaluates this thesis using research performed on two network applications that have different access to the client software: Web-based services and multiplayer online games.
APA, Harvard, Vancouver, ISO, and other styles
23

Jones, Charles H. "TOWARDS FULLY AUTOMATED INSTRUMENTATION TEST SUPPORT." International Foundation for Telemetering, 2007. http://hdl.handle.net/10150/604521.

Full text
Abstract:
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada
Imagine that a test vehicle has just arrived at your test facility and that it is fully instrumented with sensors and a data acquisition system (DAS). Imagine that a test engineer logs onto the vehicle’s DAS, submits a list of data requirements, and the DAS automatically configures itself to meet those data requirements. Imagine that the control room then contacts the DAS, downloads the configuration, and coordinates its own configuration with the vehicle’s setup. Imagine all of this done with no more human interaction than the original test engineer’s request. How close to this imaginary scenario is the instrumentation community? We’re not there yet, but through a variety of efforts, we are headed towards this fully automated scenario. This paper outlines the current status, current projects, and some missing pieces in the journey towards this end. This journey includes standards development in the Range Commanders Council (RCC), smart sensor standards development through the Institute of Electrical and Electronics Engineers (IEEE), Small Business Innovation Research (SBIR) contracts, efforts by the integrated Network Enhanced Telemetry (iNET) project, and other projects involved in reaching this goal.
APA, Harvard, Vancouver, ISO, and other styles
24

Bhuiyan, Touhid. "Trust-based automated recommendation making." Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/49168/1/Touhid_Bhuiyan_Thesis.pdf.

Full text
Abstract:
Recommender systems are one of the recent inventions to deal with the ever-growing information overload in relation to the selection of goods and services in a global economy. Collaborative Filtering (CF) is one of the most popular techniques in recommender systems. CF recommends items to a target user based on the preferences of a set of similar users, known as the neighbours, generated from a database made up of the preferences of past users. With sufficient background information on item ratings, its performance is promising enough, but research shows that it performs very poorly in a cold-start situation where there is not enough previous rating data. As an alternative to ratings, trust between the users could be used to choose the neighbours for recommendation making. Better recommendations can be achieved using an inferred trust network which mimics real-world "friend of a friend" recommendations. To extend the boundaries of the neighbourhood, an effective trust inference technique is required. This thesis proposes a trust inference technique called Directed Series Parallel Graph (DSPG), which performs better than other popular trust inference algorithms such as TidalTrust and MoleTrust. Another problem is that reliable explicit trust data is not always available. In real life, people trust "word of mouth" recommendations made by people with similar interests; this is often assumed in recommender systems. By conducting a survey, we confirm that interest similarity has a positive relationship with trust, and this can be used to generate a trust network for recommendation. In this research, we also propose a new method called SimTrust for developing trust networks based on users' interest similarity in the absence of explicit trust data. To identify interest similarity, we use users' personalised tagging information. However, we are interested in what resources the user chooses to tag, rather than the text of the tag applied.
The commonalities of the resources being tagged by the users can be used to form the neighbours used in the automated recommender system. Our experimental results show that our proposed tag-similarity based method outperforms the traditional collaborative filtering approach which usually uses rating data.
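The interest-similarity idea behind SimTrust, comparing which resources two users tagged rather than the tag text, can be sketched with Jaccard similarity over resource sets. This is an illustrative simplification; the function name and the use of Jaccard are assumptions, not taken from the thesis:

```python
def resource_similarity(tagged_a, tagged_b):
    """Jaccard similarity of the sets of resources two users have tagged."""
    a, b = set(tagged_a), set(tagged_b)
    if not a and not b:
        return 0.0  # no tagging activity: no evidence of shared interest
    return len(a & b) / len(a | b)
```

Such a score could then serve as an edge weight in an inferred trust network when no explicit trust statements are available.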
APA, Harvard, Vancouver, ISO, and other styles
25

Allott, Nicholas Mark. "A natural language processing framework for automated assessment." Thesis, Nottingham Trent University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314333.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Keski-Korsu, P. (Pasi). "Automated port scanning and security testing on a single network host." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201605051652.

Full text
Abstract:
Black-box security testing is divided into five phases: reconnaissance, scanning, exploitation, post-exploitation and reporting. There are many tools and methods for security testing in the exploitation and post-exploitation phases; therefore, the first two steps are crucial to execute properly in order to narrow down the options. In the scanning phase, the penetration tester’s goal is to gather as much information as possible about the system under test. One task is to discover open network ports and the network protocols in use. Nmap is a port scanning tool for checking network port states and which network protocols the target host supports. Nmap’s different port scanning techniques should be used to obtain comprehensive results on port states. The results from the different scanning techniques have to be combined, with port state assignments allocated to trusted and untrusted assignments. After port scanning has been executed, the actual software security testing can begin, and it can be automated to start right after port scanning. Automated tests are run against the services behind the open ports discovered during port scanning. The Nmap scripting engine also has a module that executes general scripts to gather more information about the system under test. Another tool, Nikto, is used to test services that use the Hypertext Transfer Protocol (HTTP). Port scanning and automated testing are time consuming when executed comprehensively. Sometimes it is crucial to obtain results in a short time, so there should be options for the breadth of scanning and testing. Comprehensive scanning and testing may produce large amounts of scattered information, so the reporting of results should be brief and clear to support the penetration tester’s work.
The performance of the scanning and testing implementation is evaluated by testing a single network host, and flexibility is validated by running the scanning and testing on other network hosts.
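The merging of per-technique port states described in the abstract above can be sketched as follows. This is an assumed illustration of the combining logic, not the thesis implementation: a port's state is marked trusted only when every Nmap technique that observed the port agrees.

```python
def merge_port_states(results):
    """results: mapping of scan technique -> {port: state string}.

    Returns {port: (state, trusted)}, where trusted is True only when all
    techniques that reported the port gave the same state.
    """
    by_port = {}
    for states in results.values():
        for port, state in states.items():
            by_port.setdefault(port, set()).add(state)
    merged = {}
    for port, states in by_port.items():
        if len(states) == 1:
            merged[port] = (next(iter(states)), True)   # techniques agree
        else:
            merged[port] = ("conflicting", False)       # techniques disagree
    return merged
```

For example, a port reported "open" by a SYN scan but "closed" by a connect scan would end up in the untrusted set and warrant a rescan.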
APA, Harvard, Vancouver, ISO, and other styles
27

Heaton, Jeff T. "Automated Feature Engineering for Deep Neural Networks with Genetic Programming." NSUWorks, 2017. http://nsuworks.nova.edu/gscis_etd/994.

Full text
Abstract:
Feature engineering is a process that augments the feature vector of a machine learning model with calculated values that are designed to enhance the accuracy of a model’s predictions. Research has shown that the accuracy of models such as deep neural networks, support vector machines, and tree/forest-based algorithms sometimes benefit from feature engineering. Expressions that combine one or more of the original features usually create these engineered features. The choice of the exact structure of an engineered feature is dependent on the type of machine learning model in use. Previous research demonstrated that various model families benefit from different types of engineered feature. Random forests, gradient-boosting machines, or other tree-based models might not see the same accuracy gain that an engineered feature allowed neural networks, generalized linear models, or other dot-product based models to achieve on the same data set. This dissertation presents a genetic programming-based algorithm that automatically engineers features that increase the accuracy of deep neural networks for some data sets. For a genetic programming algorithm to be effective, it must prioritize the search space and efficiently evaluate what it finds. This dissertation algorithm faced a potential search space composed of all possible mathematical combinations of the original feature vector. Five experiments were designed to guide the search process to efficiently evolve good engineered features. The result of this dissertation is an automated feature engineering (AFE) algorithm that is computationally efficient, even though a neural network is used to evaluate each candidate feature. This approach gave the algorithm a greater opportunity to specifically target deep neural networks in its search for engineered features that improve accuracy. 
Finally, a sixth experiment empirically demonstrated the degree to which this algorithm improved the accuracy of neural networks on data sets augmented by the algorithm’s engineered features.
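The evaluation step described above, scoring candidate engineered features built as expressions over the original features, can be sketched as follows. As an assumption for illustration, a simple correlation-based fitness stands in for the neural-network evaluation used in the dissertation:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def best_feature(candidates, rows, target):
    """candidates: name -> function(row). Return the name with the highest |correlation|."""
    def fitness(name):
        values = [candidates[name](row) for row in rows]
        return abs(pearson(values, target))
    return max(candidates, key=fitness)
```

In a genetic programming loop, the expressions themselves would be evolved (crossover and mutation over expression trees); the fitness call is the part this sketch illustrates.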
APA, Harvard, Vancouver, ISO, and other styles
28

Peng, Pai. "Automated defect detection for textile fabrics using Gabor wavelet networks." View the Table of Contents & Abstract, 2006. http://sunzi.lib.hku.hk/hkuto/record/B38025966.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Peng, Pai, and 彭湃. "Automated defect detection for textile fabrics using Gabor wavelet networks." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B38766103.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Heaton, Jeff. "Automated Feature Engineering for Deep Neural Networks with Genetic Programming." Thesis, Nova Southeastern University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10259604.

Full text
Abstract:

Feature engineering is a process that augments the feature vector of a machine learning model with calculated values that are designed to enhance the accuracy of a model's predictions. Research has shown that the accuracy of models such as deep neural networks, support vector machines, and tree/forest-based algorithms sometimes benefit from feature engineering. Expressions that combine one or more of the original features usually create these engineered features. The choice of the exact structure of an engineered feature is dependent on the type of machine learning model in use. Previous research demonstrated that various model families benefit from different types of engineered feature. Random forests, gradient-boosting machines, or other tree-based models might not see the same accuracy gain that an engineered feature allowed neural networks, generalized linear models, or other dot-product based models to achieve on the same data set.

This dissertation presents a genetic programming-based algorithm that automatically engineers features that increase the accuracy of deep neural networks for some data sets. For a genetic programming algorithm to be effective, it must prioritize the search space and efficiently evaluate what it finds. This dissertation algorithm faced a potential search space composed of all possible mathematical combinations of the original feature vector. Five experiments were designed to guide the search process to efficiently evolve good engineered features. The result of this dissertation is an automated feature engineering (AFE) algorithm that is computationally efficient, even though a neural network is used to evaluate each candidate feature. This approach gave the algorithm a greater opportunity to specifically target deep neural networks in its search for engineered features that improve accuracy. Finally, a sixth experiment empirically demonstrated the degree to which this algorithm improved the accuracy of neural networks on data sets augmented by the algorithm's engineered features.

APA, Harvard, Vancouver, ISO, and other styles
31

Basnayake, Mudiyanselage V. (Vishaka). "Federated learning for enhanced sensor reliability of automated wireless networks." Master's thesis, University of Oulu, 2019. http://jultika.oulu.fi/Record/nbnfioulu-201908142761.

Full text
Abstract:
Autonomous mobile robots working in proximity to humans and objects are becoming frequent, and thus avoiding collisions becomes important to increase the safety of the working environment. This thesis develops a mechanism to improve the reliability of sensor measurements in a mobile robot network, taking into account inter-robot communication and the costs of faulty sensor replacements. In this view, we first develop a sensor fault prediction method utilizing sensor characteristics. Then, a network-wide cost capturing sensor replacements and wireless communication is minimized subject to a sensor measurement reliability constraint. Tools from convex optimization are used to develop an algorithm that yields the optimal sensor selection and wireless information communication policy for the aforementioned problem. In the absence of prior knowledge of sensor characteristics, we utilize observations of sensor failures to estimate those characteristics in a distributed manner using federated learning. Finally, extensive simulations are carried out to highlight the performance of the proposed mechanism compared to several state-of-the-art methods.
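The distributed estimation step described above can be sketched as a federated-averaging update: each robot keeps its raw failure observations local and only shares a parameter estimate. Weighting by local observation counts is an assumption for illustration, not necessarily the thesis's scheme:

```python
def federated_average(local_estimates, counts):
    """Combine per-robot parameter estimates, weighted by local sample counts."""
    total = sum(counts)
    return sum(e * c for e, c in zip(local_estimates, counts)) / total
```

For example, a robot that has seen 30 failures contributes three times the weight of one that has seen 10, while neither ever transmits its raw observation log.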
APA, Harvard, Vancouver, ISO, and other styles
32

Banerjee, Sanjib. "A translator for automated code generation for service-based systems." Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4779.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2006.
Title from document title page. Document formatted into pages; contains iv, 116 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 66).
APA, Harvard, Vancouver, ISO, and other styles
33

Damar, Halil Evren. "Essays on bank networks and the Turkish banking crisis /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/7490.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Ahlin, Björn, and Marcus Gärdin. "Automated Classification of Steel Samples : An investigation using Convolutional Neural Networks." Thesis, KTH, Materialvetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209669.

Full text
Abstract:
Automated image recognition software has previously been used for various analyses in the steel-making industry. In this study, the possibility of applying such software to classify Scanning Electron Microscope (SEM) images of two steel samples was investigated. The two steel samples were of the same steel grade, but had been treated with calcium for different lengths of time. To enable automated image recognition, a Convolutional Neural Network (CNN) was built. The software was constructed with open source code provided by Keras Documentation, ensuring an easily reproducible program. The network was trained, validated and tested, first on non-binarized images and then on binarized images. Binarized images were used to ensure that the network's prediction considers only the inclusion information and not the substrate. The non-binarized images gave a classification accuracy of 99.99%. For the binarized images, the classification accuracy obtained was 67.9%. The results show that it is possible to classify steel samples using CNNs. An interesting implication of this success is that further studies on CNNs could enable automated classification of inclusions.
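The binarization step mentioned in the abstract can be sketched in outline. A common choice for separating dark inclusions from the bright substrate is Otsu's global threshold; the thesis does not say which method was used, so both the Otsu criterion and the 8-bit grayscale input below are assumptions:

```python
# Hypothetical sketch of the binarization step: Otsu's method picks the global
# threshold that maximises between-class variance, so the resulting mask keeps
# inclusion pixels and discards the substrate. Not the thesis's actual code.
import numpy as np

def otsu_binarize(image):
    """Return a boolean mask that is True on dark (inclusion) pixels."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class empty: threshold not usable
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return image < best_t  # True where pixels are darker than the threshold
```

Applied to a SEM image array, the mask can then be fed to the classifier so that only inclusion geometry, not substrate texture, drives the prediction.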
APA, Harvard, Vancouver, ISO, and other styles
35

Lopez, de Diego Silvia Isabel. "Automated Interpretation of Abnormal Adult Electroencephalograms." Master's thesis, Temple University Libraries, 2017. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/463281.

Full text
Abstract:
Electrical and Computer Engineering
M.S.E.E.
Interpretation of electroencephalograms (EEGs) is a process that is still dependent on the subjective analysis of the examiner. The interrater agreement, even for relevant clinical events such as seizures, can be low. For instance, the differences between interictal, ictal, and post-ictal EEGs can be quite subtle. Before making such low-level interpretations of the signals, neurologists often classify EEG signals as either normal or abnormal. Even though the characteristics of a normal EEG are well defined, there are some factors, such as benign variants, that complicate this decision. However, neurologists can make this classification accurately by only examining the initial portion of the signal. Therefore, in this thesis, we explore the hypothesis that high performance machine classification of an EEG signal as abnormal can approach human performance using only the first few minutes of an EEG recording. The goal of this thesis is to establish a baseline for automated classification of abnormal adult EEGs using state of the art machine learning algorithms and a big data resource – The TUH EEG Corpus. A demographically balanced subset of the corpus was used to evaluate performance of the systems. The data was partitioned into a training set (1,387 normal and 1,398 abnormal files), and an evaluation set (150 normal and 130 abnormal files). A system based on hidden Markov Models (HMMs) achieved an error rate of 26.1%. The addition of a Stacked Denoising Autoencoder (SdA) post-processing step (HMM-SdA) further decreased the error rate to 24.6%. The overall best result (21.2% error rate) was achieved by a deep learning system that combined a Convolutional Neural Network and a Multilayer Perceptron (CNN-MLP). 
Even though the performance of our algorithm still lags human performance, which approaches a 1% error rate for this task, we have established an experimental paradigm that can be used to explore this application and have demonstrated a promising baseline using state of the art deep learning technology.
Temple University--Theses
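The HMM baseline described in this abstract can be illustrated with the standard forward algorithm: each class (normal or abnormal) gets its own model, and the class whose model assigns the higher likelihood to the recording wins. The 2-state discrete models used in the test are toy stand-ins, not the thesis's feature-based EEG models:

```python
# Hedged sketch of maximum-likelihood HMM classification. pi is the start
# distribution, A the transition matrix, B the discrete emission matrix.
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM)."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()            # scaling keeps alpha from underflowing
        log_p += np.log(c)
        alpha = alpha / c
    return log_p

def classify(obs, models):
    """Return the label whose model gives the observation the highest likelihood."""
    return max(models, key=lambda label: forward_loglik(obs, *models[label]))
```

With `models = {"normal": (pi, A, B_norm), "abnormal": (pi, A, B_abn)}`, the decision is a single likelihood comparison per recording.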
APA, Harvard, Vancouver, ISO, and other styles
36

Zhu, Yuehan. "Automated Supply-Chain Quality Inspection Using Image Analysis and Machine Learning." Thesis, Högskolan Kristianstad, Fakulteten för naturvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-20069.

Full text
Abstract:
An image processing method for automatic quality assurance of Ericsson products is developed. The method consists of taking an image of the product, extract the product labels from the image, OCR the product numbers and make a database lookup to match the mounted product with the customer specification. The engineering innovation of the method developed in this report is that the OCR is performed using machine learning techniques. It is shown that machine learning can produce results that are on par or better than baseline OCR methods. The advantage with a machine learning based approach is that the associated neural network can be trained for the specific input images from the Ericsson factory. Imperfections in the image quality and varying type fonts etc. can be handled by properly training the net, a task that would have been very difficult with legacy OCR algorithms where poor OCR results typically need to be mitigated by improving the input image quality rather than changing the algorithm.
APA, Harvard, Vancouver, ISO, and other styles
37

Jubb, Matthew James. "Optimal use of computing equipment in an automated industrial inspection context." Thesis, Durham University, 1995. http://etheses.dur.ac.uk/4882/.

Full text
Abstract:
This thesis deals with automatic defect detection. The objective was to develop the techniques required by a small manufacturing business to make cost-efficient use of inspection technology. In our work on inspection techniques we discuss image acquisition and the choice between custom and general-purpose processing hardware. We examine the classes of general-purpose computer available and study popular operating systems in detail. We highlight the advantages of a hybrid system interconnected via a local area network and develop a sophisticated suite of image-processing software based on it. We quantitatively study the performance of elements of the TCP/IP networking protocol suite and comment on appropriate protocol selection for parallel distributed applications. We implement our own distributed application based on these findings. In our work on inspection algorithms we investigate the potential uses of iterated function series and Fourier transform operators when preprocessing images of defects in aluminium plate acquired using a linescan camera. We employ a multi-layer perceptron neural network trained by backpropagation as a classifier. We examine the effect on the training process of the number of nodes in the hidden layer and the ability of the network to identify faults in images of aluminium plate. We investigate techniques for introducing positional independence into the network's behaviour. We analyse the pattern of weights induced in the network after training in order to gain insight into the logic of its internal representation. We conclude that the backpropagation training process is sufficiently computationally intensive so as to present a real barrier to further development in practical neural network techniques and seek ways to achieve a speed-up. We consider the training process as a search problem and arrive at a process involving multiple, parallel search "vectors" and aspects of genetic algorithms.
We implement the system as the mentioned distributed application and comment on its performance.
APA, Harvard, Vancouver, ISO, and other styles
38

Cheng, Jie. "Learning Bayesian networks from data : an information theory based approach." Thesis, University of Ulster, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.243621.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Barbosa, Manuel Romano dos Santos Pinto. "Traffic management and control of automated guided vehicles using artificial neural networks." Thesis, University of Warwick, 1997. http://wrap.warwick.ac.uk/4200/.

Full text
Abstract:
An industrial traffic management and control system based on Automated Guided Vehicles faces several combined problems. Decisions must be made concerning which vehicles will respond, or are allocated to each of the transport orders. Once a vehicle is allocated a transport order, a route has to be selected that allows it to reach its target location. In order for the vehicle to move efficiently along the selected route it must be provided with the means to recognise and adapt to the changing characteristics of the path it must follow. When several vehicles are involved these decisions are interrelated and must take into account the coordination of the movements of the vehicles in order to avoid collisions and maximise the performance of the transport system. This research concentrates on the problem of routing the vehicles that have already been assigned destinations associated with transport orders. In nearly all existing AGV systems this problem is simplified by considering there to be a fixed route between source and destination workstations. However if the system is to be used more efficiently, and particularly if it must support the requirements of modern manufacturing strategies, such as Just-in-Time and Flexible Manufacturing Systems, of moving very small batches more frequently, then there is a need for a system capable of dealing with the increased complexity of the routing problem. The consideration of alternative paths between any two workstations together with the possibility of other vehicles blocking routes while waiting at a particular location, increases enormously the number of alternatives that must be considered in order to identify the routes for each vehicle leading to an optimum solution. Current methods used to solve this type of problem do not provide satisfactory solutions for all cases, which leaves scope for improvement.
The approach proposed in this work takes advantage of the use of Backpropagation Artificial Neural Networks to develop a solution for the routing problem. A novel aspect of the approach implemented is the use of a solution derived for routing a single vehicle in a physical layout when some pieces of track are set as unavailable, as the basis for the solution when several vehicles are involved. Another original aspect is the method developed to deal with the problem of selecting a route between two locations based on an analysis of the conditions of the traffic system, when each movement decision has to be made. This led to the implementation of a step-by-step search of the available routes for each vehicle. Two distinct phases can be identified in the approach proposed. First the design of a solution based on an ANN to solve the single vehicle case, and subsequently the development and testing of a solution for a multi-vehicle case. To test and implement these phases a specific layout was selected, and an algorithm was implemented to generate the data required for the design of the ANN solution. During the development of alternative solutions it was found that the addition of simple rules provided a useful means to overcome some of the limitations of the ANN solution, and a "hybrid" solution was created. Numerous computer simulations were performed to test the solutions developed against alternatives based on the best published heuristic rules. The results showed that while it was not possible to generate a globally optimal solution, near optimal solutions could be obtained and the best hybrid solution was marginally better than the best of the currently available heuristic rules.
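The core routing sub-problem, finding a route between two stations when some track segments are unavailable, can be posed as shortest-path search on the layout graph. The sketch below uses plain Dijkstra as a generic stand-in for the thesis's ANN/hybrid approach; the adjacency-list encoding and the edge-blocking convention are illustrative assumptions:

```python
# Shortest route on a weighted layout graph with some track segments blocked.
# graph: {node: [(neighbour, segment_length), ...]}; blocked: set of (u, v) edges.
import heapq

def shortest_route(graph, start, goal, blocked=frozenset()):
    """Return (distance, path) for the shortest open route, or None if blocked."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, length in graph.get(node, []):
            if (node, nxt) not in blocked and nxt not in seen:
                heapq.heappush(queue, (dist + length, nxt, path + [nxt]))
    return None  # no feasible route under the current blockages
```

Re-running the search each time a vehicle must commit to its next segment gives the kind of step-by-step, traffic-aware routing the abstract describes.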
APA, Harvard, Vancouver, ISO, and other styles
40

Doolittle, Daniel Foster. "Automated Fish Species Classification using Artificial Neural Networks and Autonomous Underwater Vehicles." W&M ScholarWorks, 2003. https://scholarworks.wm.edu/etd/1539617813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Agarwal, Deepam. "A comparative study of artificial neural networks and info fuzzy networks on their use in software testing." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Banerjee, Anirban. "Development of an automated electrogustometer." Thesis, University of Sussex, 2011. http://sro.sussex.ac.uk/id/eprint/6957/.

Full text
Abstract:
In spite of electrogustometry having been in existence since the 1930s, there is no state-of-the-art instrument to assess the electrogustometric threshold. A state-of-the-art electrogustometer has been designed and constructed and tested for reliability and repeatability. This is based on embedded digital technology and is a semi-automatic, battery-powered portable instrument. Physical factors such as electrode area and stimulus duration affect the taste threshold but there are no recommended standards for these factors. Studies have been conducted to ascertain a recommended standard – a circular stainless steel electrode area of 28.5 mm² and a stimulus duration of 2 seconds. While performing the test-retest assessment of the Sussex Electrogustometer, the new instrument, an anomaly was observed. Upon further investigation it was concluded that it was caused by alcohol consumed by a subject prior to the retest. Elaborate experiments were designed with the help of a neurologist and psychologist to understand the immediate effect of alcohol on taste for non-alcoholics. The results indicated an immediate improvement of taste for lower concentrations of alcohol and a delayed improvement for higher concentration. The studies were extended to understand the immediate effect of anaesthetics and smoking on taste which showed that taste deteriorated as expected. The new machine was used successfully in the clinical environment by local doctors and a report on their findings has also been included within this thesis.
APA, Harvard, Vancouver, ISO, and other styles
43

Hsieh, Sheau-Ling. "AI-BASED WORKSTATIONS AND KNOWLEDGE BASE SERVER DESIGN FOR AUTOMATED STAFFING IN A LOCAL AREA NETWORK (ELECTRONIC MAIL)." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/275542.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Min, Sung-Hwan. "Automated Construction of Macromodels from Frequency Data for Simulation of Distributed Interconnect Networks." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5209.

Full text
Abstract:
As the complexity of interconnects and packages increases and the rise and fall time of the signal decreases, the electromagnetic effects of distributed passive devices are becoming an important factor in determining the performance of gigahertz systems. The electromagnetic behavior extracted using an electromagnetic simulation or from measurements is available as frequency dependent data. This information can be represented as a black box called a macromodel, which captures the behavior of the passive structure at the input/output ports. In this dissertation, the macromodels have been categorized as scalable, passive and broadband macromodels. The scalable macromodels for building design libraries of passive devices have been constructed using multidimensional rational functions, orthogonal polynomials and selective sampling. The passive macromodels for time-domain simulation have been constructed using filter theory and multiport passivity formulae. The broadband macromodels for high-speed simulation have been constructed using band division, selector, subband reordering, subband dilation and pole replacement. An automated construction method has been developed. The construction time of the multiport macromodel has been reduced. A method for reducing the order of the macromodel has been developed. The efficiency of the methods was demonstrated through embedded passive devices, known transfer functions and distributed interconnect networks.
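As an outline of what "constructing a macromodel from frequency data" involves, the sketch below fits a rational transfer function to sampled frequency-response data using Levy's linearised least squares. This is a generic textbook method, not the dissertation's orthogonal-polynomial/selective-sampling scheme; the model orders and sample points are illustrative:

```python
# Levy's method: H(s) ~ (a_0 + a_1 s + ...) / (1 + b_1 s + ...). Multiplying
# through by the denominator makes the fit linear in the coefficients a, b.
import numpy as np

def fit_rational(s, H, n_num, n_den):
    """Least-squares rational fit to complex frequency data H sampled at s = jw."""
    cols = [s ** k for k in range(n_num + 1)]              # numerator terms
    cols += [-H * s ** k for k in range(1, n_den + 1)]     # denominator terms
    A = np.column_stack(cols)
    # Stack real and imaginary parts so a real lstsq solves the complex system.
    A_ri = np.vstack([A.real, A.imag])
    b_ri = np.concatenate([H.real, H.imag])
    x, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
    return x[: n_num + 1], x[n_num + 1:]                   # (a coeffs, b coeffs)
```

For noise-free data generated by a low-order network, the fit recovers the underlying coefficients exactly; production macromodelling additionally has to enforce passivity, which this sketch does not attempt.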
APA, Harvard, Vancouver, ISO, and other styles
45

McCabe, Brenda Yvette. "An automated modeling approach for construction performance improvement using simulation and belief networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq23029.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Settipalli, Praveen. "AUTOMATED CLASSIFICATION OF POWER QUALITY DISTURBANCES USING SIGNAL PROCESSING TECHNIQUES AND NEURAL NETWORKS." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/430.

Full text
Abstract:
This thesis focuses on simulating, detecting, localizing and classifying the power quality disturbances using advanced signal processing techniques and neural networks. Primarily, discrete wavelet and Fourier transforms are used for feature extraction, and classification is achieved by using neural network algorithms. The proposed feature vector consists of a combination of features computed using multi-resolution analysis and the discrete Fourier transform. The proposed feature vectors exploit the benefits of having both time and frequency domain information simultaneously. Two different classification algorithms, based on feed-forward neural networks and adaptive resonance theory neural networks, are proposed for classification. This thesis demonstrates that the proposed methodology achieves good computational efficiency and a low classification error rate.
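The combined feature vector can be sketched as follows: detail-band energies from a multi-resolution (wavelet) decomposition concatenated with low-order DFT magnitudes. The Haar wavelet, three decomposition levels, and eight retained FFT bins are illustrative assumptions, not the thesis's exact configuration:

```python
# Toy combined time/frequency feature vector for a sampled disturbance waveform.
import numpy as np

def haar_dwt(signal):
    """One level of the orthonormal Haar discrete wavelet transform."""
    s = signal[: len(signal) // 2 * 2].reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)
    return approx, detail

def feature_vector(signal, levels=3, n_fft_bins=8):
    feats = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):                 # multi-resolution analysis
        approx, detail = haar_dwt(approx)
        feats.append(np.sum(detail ** 2))   # energy of each detail band
    spectrum = np.abs(np.fft.rfft(signal))  # discrete Fourier transform
    feats.extend(spectrum[:n_fft_bins])     # low-order harmonic magnitudes
    return np.array(feats)
```

The wavelet energies localise transients in time while the DFT magnitudes capture steady-state harmonic content, which is the complementarity the abstract refers to.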
APA, Harvard, Vancouver, ISO, and other styles
47

Kingdon, Jason Conrad. "Feed forward neural networks and genetic algorithms for automated financial time series modelling." Thesis, University College London (University of London), 1995. http://discovery.ucl.ac.uk/1318052/.

Full text
Abstract:
This thesis presents an automated system for financial time series modelling. Formal and applied methods are investigated for combining feed-forward Neural Networks and Genetic Algorithms (GAs) into a single adaptive/learning system for automated time series forecasting. Four important research contributions arise from this investigation: i) novel forms of GAs are introduced which are designed to counter the representational bias associated with the conventional Holland GA, ii) an experimental methodology for validating neural network architecture design strategies is introduced, iii) a new method for network pruning is introduced, and iv) an automated method for inferring network complexity for a given learning task is devised. These methods provide a general-purpose applied methodology for developing neural network applications and are tested in the construction of an automated system for financial time series modelling. Traditional economic theory has held that financial price series are random. The lack of a priori models on which to base a computational solution for financial modelling provides one of the hardest tests of adaptive system technology. It is shown that the system developed in this thesis isolates a deterministic signal within a Gilt Futures prices series, to a confidences level of over 99%, yielding a prediction accuracy of over 60% on a single run of 1000 out-of-sample experiments. An important research issue in the use of feed-forward neural networks is the problems associated with parameterisation so as to ensure good generalisation. This thesis conducts a detailed examination of this issue. A novel demonstration of a network's ability to act as a universal functional approximator for finite data sets is given. This supplies an explicit formula for setting a network's architecture and weights in order to map a finite data set to arbitrary precision. 
It is shown that a network's ability to generalise is extremely sensitive to many parameter choices and that unless careful safeguards are included in the experimental procedure over-fitting can occur. This thesis concentrates on developing automated techniques so as to tackle these problems. Techniques for using GAs to parameterise neural networks are examined. It is shown that the relationship between the fitness function, the GA operators and the choice of encoding are all instrumental in determining the likely success of the GA search. To address this issue a new style of GA is introduced which uses multiple encodings in the course of a run. These are shown to out-perform the Holland GA on a range of standard test functions. Despite this innovation it is argued that the direct use of GAs for neural network parameterisation runs the risk of compounding the network sensitivity issue. Moreover, in the absence of a precise formulation of generalisation a less direct use of GAs for network parameterisation is examined. Specifically a technique, artificial network generation (ANG), is introduced in which a GA is used to artificially generate test learning problems for neural networks that have known network solutions. ANG provides a means for directly testing i) a neural net architecture, ii) a neural net training process, and iii) a neural net validation procedure, against generalisation. ANG is used to provide statistical evidence in favour of Occam's Razor as a neural network design principle. A new method for pruning and inferring network complexity for a given learning problem is introduced. Network Regression Pruning (NRP) is a network pruning method that attempts to derive an optimal network architecture by starting from what is considered an overly large network. NRP differs radically from conventional pruning methods in that it attempts to hold a trained network's mapping fixed as pruning proceeds.
NRP is shown to be extremely successful at isolating optimal network architectures on a range of test problems generated using ANG. Finally, NRP and techniques validated using ANG are combined to implement an Automated Neural network Time series Analysis System (ANTAS). ANTAS is applied to the gilt futures price series The Long Gilt Futures Contract (LGFC).
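For reference, the conventional Holland-style GA that the thesis's multiple-encoding GAs are benchmarked against looks roughly like this in outline; the OneMax fitness function and all parameter values are illustrative choices, not taken from the thesis:

```python
# Minimal Holland-style GA over fixed-length bit strings: binary tournament
# selection, one-point crossover, and per-bit mutation.
import random

def run_ga(n_bits=20, pop_size=30, generations=60, p_mut=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def fitness(ind):
        return sum(ind)  # OneMax: count the 1 bits

    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # binary tournament selection for each parent
            a, b = rng.sample(pop, 2)
            p1 = max(a, b, key=fitness)
            a, b = rng.sample(pop, 2)
            p2 = max(a, b, key=fitness)
            # one-point crossover
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
            # per-bit mutation
            child = [bit ^ (rng.random() < p_mut) for bit in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)
```

The representational bias the thesis criticises lives in the fixed binary encoding and the crossover operator; its multiple-encoding GAs vary exactly those choices during a run.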
APA, Harvard, Vancouver, ISO, and other styles
48

Finney, Graham Barry. "Investigation into the use of neural networks for visual inspection of ceramic tableware." Thesis, Liverpool John Moores University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.327673.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

D'Souza, Aswin Cletus. "Automated counting of cell bodies using Nissl stained cross-sectional images." [College Station, Tex. : Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-2035.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Miah, Abdul J. "Automated library networking in American public community college learning resources centers." Diss., Virginia Polytechnic Institute and State University, 1989. http://books.google.com/books?id=5LbgAAAAMAAJ.

Full text
Abstract:
Thesis (Ed. D.)--Virginia Polytechnic Institute and State University, 1989.
Vita. eContent provider-neutral record in process. Description based on print version record. Includes bibliographical references (leaves 148-159).
APA, Harvard, Vancouver, ISO, and other styles