
Dissertations / Theses on the topic 'Algorithmic decision systems'

Consult the top 50 dissertations / theses for your research on the topic 'Algorithmic decision systems.'


1

Böhnlein, Toni [Verfasser]. "Algorithmic Decision-Making in Multi-Agent Systems: Votes and Prices / Toni Böhnlein." München : Verlag Dr. Hut, 2018. http://d-nb.info/1164294113/34.

2

Björklund, Pernilla. "The curious case of artificial intelligence : An analysis of the relationship between the EU medical device regulations and algorithmic decision systems used within the medical domain." Thesis, Uppsala universitet, Juridiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-442122.

Abstract:
The healthcare sector has become a key area for the development and application of new technology and, not least, Artificial Intelligence (AI). New reports are constantly being published about how this algorithm-based technology supports or performs various medical tasks. These illustrate the rapid development of AI taking place within healthcare and how algorithms are increasingly involved in systems and medical devices designed to support medical decision-making. The digital revolution and the advancement of AI technologies represent a step change in the way healthcare may be delivered, medical services coordinated and well-being supported. They could allow for easier and faster communication, earlier and more accurate diagnosis and better healthcare at lower cost. However, systems and devices relying on AI differ significantly from other, traditional, medical devices. AI algorithms are, by nature, complex and partly unpredictable. Additionally, varying levels of opacity have made it hard, sometimes impossible, to interpret and explain recommendations or decisions made by or with support from algorithmic decision systems. These characteristics of AI technology raise important technological, practical, ethical and regulatory issues. The objective of this thesis is to analyse the relationship between the EU regulation on medical devices (MDR) and algorithmic decision systems (ADS) used within the medical domain. The principal question is whether the MDR is enough to guarantee safe and robust ADS within the European healthcare sector or if complementary (or completely different) regulation is necessary.
In essence, it will be argued that (i) while ADS are heavily reliant on the quality and representativeness of underlying datasets, the MDR contains no requirements regarding the quality or composition of these datasets, (ii) while ADS are believed to bring historically unprecedented changes to healthcare, the regulation lacks guidance on how to manage novel risks and hazards unique to ADS, and (iii) as increasingly autonomous systems continue to challenge existing perceptions of how safety and performance are best maintained, new mechanisms (for transparency, human control and accountability) must be incorporated into the systems. It will also be found that the ability of ADS to change after market certification will eventually necessitate radical changes to the current regulation, and a new regulatory paradigm might be needed.
3

Fairley, Andrew. "Information systems for tactical decision making." Thesis, University of Liverpool, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241479.

4

Weingartner, Stephan G. "System development : an algorithmic approach." Virtual Press, 1987. http://liblink.bsu.edu/uhtbin/catkey/483077.

Abstract:
The subject chosen for this thesis project is the development of an algorithm, or methodology, for system selection. The specific problem studied involves a procedure to determine which computer system alternative is the best choice for a given user situation. The general problem to be addressed is the need to choose computing hardware, software, systems, or services in a logical approach from a user perspective, considering cost, performance and human factors. Most existing methods consider only cost and performance factors, combining these factors in ad hoc, subjective fashions to reach a selection decision. By not considering factors that measure the effectiveness and functionality of computer services for a user, existing methods ignore some of the most important measures of value to the user. In this work, a systematic and comprehensive approach to computer system selection has been developed, along with methods for selecting and organizing various criteria. Ways to assess the importance and value of different service attributes to an end-user are also discussed. Finally, the feasibility of a systematic approach to computer system selection has been demonstrated by establishing a general methodology and proving it through a demonstration of a specific application.
5

Manongga, D. H. F. "Using genetic algorithm-based methods for financial analysis." Thesis, University of East Anglia, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320950.

6

Bacak, Hikmet Ozge. "Decision Making System Algorithm On Menopause Data Set." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12612471/index.pdf.

Abstract:
A multiple-centered clustering method and a decision making system algorithm on a menopause data set, based on multiple-centered clustering, are described in this study. The method consists of two stages. At the first stage, the fuzzy C-means (FCM) clustering algorithm is applied to the data set under consideration with a high number of cluster centers. As the output of FCM, cluster centers and membership function values for each data member are calculated. At the second stage, the original cluster centers obtained in the first stage are merged until the desired number of clusters is reached. The merging process relies on a "similarity measure" between clusters defined in the thesis. During the merging process the cluster center coordinates do not change, but the data members in these clusters are merged into a new cluster. As the output of this method, therefore, one obtains clusters which include many cluster centers. In the final part of this study, as an application of the clustering algorithms, including the multiple-centered clustering method, a decision making system is constructed using special data on menopause treatment. The decisions are based on the clusterings created by the algorithms discussed in the previous chapters of the thesis. A verification of the decision making (decision aid) system was carried out by a team of experts from the Department of Obstetrics and Gynecology of Hacettepe University under the guidance of Prof. Sinan Beksaç.
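The two-stage idea in this abstract (many FCM centers first, then merging centers that are similar enough) can be sketched in a few lines. The following is an illustrative reconstruction, not the thesis's code: plain Euclidean distance between centers stands in for the thesis's similarity measure, and the stage-1 centers are hand-picked rather than produced by FCM.

```python
import math

def merge_centers(centers, threshold):
    """Greedy merging: a center joins the first group containing a center
    closer than `threshold`; center coordinates themselves never change."""
    groups = []  # each group is a list of center indices
    for i, c in enumerate(centers):
        placed = False
        for g in groups:
            if any(math.dist(c, centers[j]) < threshold for j in g):
                g.append(i)
                placed = True
                break
        if not placed:
            groups.append([i])
    return groups

# Stage-1 output: many cluster centers (hand-picked here for illustration)
centers = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.2), (9.0, 0.0)]
groups = merge_centers(centers, threshold=1.0)
print(groups)  # [[0, 1], [2, 3], [4]] - three merged clusters
```

Each resulting group is a cluster that, as in the abstract, still contains several of the original centers; data members would be reassigned to the merged groups rather than to individual centers.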
7

Wan, Min. "Decision diagram algorithms for logic and timed verification." Diss., [Riverside, Calif.] : University of California, Riverside, 2008. http://proquest.umi.com/pqdweb?index=0&did=1663077981&SrchMode=2&sid=1&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1268242250&clientId=48051.

Abstract:
Thesis (Ph. D.)--University of California, Riverside, 2008.
Includes abstract. Title from first page of PDF file (viewed March 10, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 166-170). Also issued in print.
8

Raboun, Oussama. "Multiple Criteria Spatial Risk Rating." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED066.

Abstract:
The thesis is motivated by an interesting case study related to environmental risk assessment. The case study consists of assessing the impact of a nuclear accident taking place in the marine environment. This problem is characterized by spatial features, different assets characterizing the spatial area, incomplete knowledge about the possible stakeholders, and a high number of possible accident scenarios. A first solution of the case study problem was proposed using different decision analysis techniques, such as lotteries comparison and MCDA (Multiple Criteria Decision Analysis) tools. A new MCDA rating method, named Dynamic-R, was born from this thesis, aiming at providing a complete and convincing rating. The developed method provided interesting results for the case study, and very interesting theoretical properties that are presented in chapters 6 and 7 of this manuscript.
9

Chemla, Daniel. "Algorithms for optimizing shared mobility systems." PhD thesis, Université Paris-Est, 2012. http://pastel.archives-ouvertes.fr/pastel-00839521.

Abstract:
Bike sharing systems have known growing success all over the world. Several attempts have been made since the 1960s, and the latest developments in ICT have enabled these systems to become efficient: people can obtain real-time information about the position of the vehicles. More than 200 cities have already introduced such a system, and this trend continues with the launch of the NYC system in spring 2013. A new avatar of these means of transportation arrived with the introduction of Autolib in Paris at the end of 2011. The objective of this thesis is to propose algorithms that may help to improve the efficiency of these systems. Operating them induces several issues, one of which is the regulation problem: regulation should ensure that the right number of vehicles is present at every station at any time, in order to fulfill the demand for both vehicles and parking racks. This regulation is often carried out by trucks travelling around the city, and it is crucial since empty and full stations increase users' dissatisfaction. Finding the optimal strategy for regulating a network appears to be a difficult question. This thesis is divided into two parts. The first one deals with the "static" case, in which users' impact on the network is neglected; this is the situation at night or when the system is closed. The operator faces a given repartition of the vehicles and wants it to match a target repartition that is known a priori. The one-truck and multiple-truck balancing problems are addressed, and for each one an algorithm is proposed and tested on several instances. To deal with the "dynamic" case, in which users interact with the system, a simulator has been developed. It is used to compare several strategies and to monitor redistribution by trucks. Strategies using incentive policies instead of trucks are also tested: regularly updated prices are attached to stations to deter users from parking their vehicle at specified stations. Finally, the question of finding the best initial inventory is addressed; it corresponds to the case where no trucks are used within the day. Two local searches are presented, both aiming at minimizing the total time lost by users in the system. The results obtained can be used as inputs for the target repartitions used in the first part. During my thesis, I participated in two EURO-ROADEF challenges, the 2010 edition proposed by EDF and the 2012 one by Google. In both cases, my team reached the final phase. In 2010, our method was ranked fourth among all participants and led to the publication of an article. In 2012, we ranked eighteenth among all participants. Both works are included in the appendix.
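As an illustration of the "static" rebalancing problem described above, a deliberately naive greedy heuristic (not an algorithm from the thesis) can move bikes from surplus to deficit stations with a single capacity-limited truck. The station counts, targets and capacity below are made-up values, and travel costs are ignored:

```python
def greedy_rebalance(current, target, capacity):
    """Naive one-truck plan: repeatedly move bikes from the station with
    the largest surplus to the station with the largest deficit until every
    station matches its target repartition. Assumes sum(current) == sum(target).
    Returns a list of (from_station, to_station, n_bikes) moves."""
    stock = list(current)
    moves = []
    while stock != list(target):
        surplus = max(range(len(stock)), key=lambda i: stock[i] - target[i])
        deficit = min(range(len(stock)), key=lambda i: stock[i] - target[i])
        n = min(stock[surplus] - target[surplus],   # bikes available to take
                target[deficit] - stock[deficit],   # bikes the deficit needs
                capacity)                           # truck capacity per trip
        stock[surplus] -= n
        stock[deficit] += n
        moves.append((surplus, deficit, n))
    return moves

# 4 stations, truck capacity 5: station 0 is overfull, station 3 is empty
moves = greedy_rebalance(current=[10, 4, 4, 0], target=[4, 4, 4, 6], capacity=5)
print(moves)  # [(0, 3, 5), (0, 3, 1)] - two trips needed because of capacity
```

A real solver, as the abstract notes, must also sequence the stations into a short truck tour, which is what makes the problem hard.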
10

Chung, Sai-ho, and 鍾世豪. "A multi-criterion genetic algorithm for supply chain collaboration." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B29357275.

11

Zilio, Daniel C. "Physical database design decision algorithms and concurrent reorganization for parallel database systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ35386.pdf.

12

Aftarczuk, Kamila. "Evaluation of selected data mining algorithms implemented in Medical Decision Support Systems." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-6194.

Abstract:
The goal of this master's thesis is to identify and evaluate data mining algorithms which are commonly implemented in modern Medical Decision Support Systems (MDSS). Such systems are used in various healthcare units all over the world, and these institutions store large amounts of medical data which may contain relevant medical information hidden in patterns buried among the records. Within the research, several popular MDSSs are analyzed in order to determine the most common data mining algorithms utilized by them. Three algorithms have been identified: Naïve Bayes, Multilayer Perceptron and C4.5. Prior to the main analyses, the algorithms are calibrated: several configurations are tested in order to determine the best settings. Afterwards, a final comparison orders the algorithms with respect to their performance, based on a set of performance metrics. The analyses are conducted in WEKA on five UCI medical datasets: breast cancer, hepatitis, heart disease, dermatology disease and diabetes. The analyses have shown that it is very difficult to name a single data mining algorithm as the most suitable for medical data, since the results gained for the algorithms were very similar. However, the final evaluation of the outcomes allowed singling out Naïve Bayes as the best classifier for the given domain, followed by the Multilayer Perceptron and C4.5.
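For readers unfamiliar with the best-performing classifier named above, a minimal Gaussian Naïve Bayes can be written from scratch. This toy sketch is illustrative only; it is unrelated to the WEKA implementation evaluated in the thesis, and the two-feature "medical" data points are invented:

```python
import math
from collections import defaultdict

def fit_gaussian_nb(X, y):
    """Estimate per-class log-priors plus per-feature means and variances."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    model = {}
    for c, rows in by_class.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        vars_ = [sum((v - m) ** 2 for v in col) / n + 1e-9  # avoid zero variance
                 for col, m in zip(zip(*rows), means)]
        model[c] = (math.log(n / len(X)), means, vars_)
    return model

def predict(model, x):
    """Pick the class with the highest Gaussian log-posterior."""
    def log_post(c):
        prior, means, vars_ = model[c]
        return prior + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, means, vars_))
    return max(model, key=log_post)

# Invented toy data: features = (temperature, heart rate)
X = [(36.6, 70), (36.8, 72), (39.1, 95), (38.9, 99)]
y = ["healthy", "healthy", "sick", "sick"]
model = fit_gaussian_nb(X, y)
print(predict(model, (39.0, 97)))  # sick
print(predict(model, (36.7, 71)))  # healthy
```

The "naïve" part is the per-feature independence assumption inside `log_post`: each feature contributes its own Gaussian log-likelihood, summed as if the features were independent given the class.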
13

Ugwu, Onuegbu O. "A decision support framework for resource optimisation and management using hybrid genetic algorithms : application in earthworks." Thesis, London South Bank University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297926.

14

Suikki, Oskar. "A decision making algorithm for user preferences and norms in context-aware systems." Thesis, Umeå universitet, Institutionen för datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-142517.

Abstract:
As context-aware systems become popular in modern society, artificial intelligence will have to consider the norms of society. However, norms and personal goals can conflict with each other, complicating how an artificial intelligence should act. In this thesis I present a decision making algorithm for evaluating contexts where the user has preferences and is affected by norms with deadlines. The norms and user preferences can have varying levels of importance. The algorithm is demonstrated with an operational example on a use case of planning a user's visit to a smart city.
15

Chemla, Daniel. "Algorithms for optimizing shared mobility systems." Thesis, Paris Est, 2012. http://www.theses.fr/2012PEST1066/document.

Abstract:
Bike sharing systems have known growing success all over the world. Several attempts have been made since the 1960s, and the latest developments in ICT have enabled these systems to become efficient: people can obtain real-time information about the position of the vehicles. More than 200 cities have already introduced such a system, and this trend continues with the launch of the NYC system in spring 2013. A new avatar of these means of transportation arrived with the introduction of Autolib in Paris at the end of 2011. The objective of this thesis is to propose algorithms that may help to improve the efficiency of these systems. Operating them induces several issues, one of which is the regulation problem: regulation should ensure that the right number of vehicles is present at every station at any time, in order to fulfill the demand for both vehicles and parking racks. This regulation is often carried out by trucks travelling around the city, and it is crucial since empty and full stations increase users' dissatisfaction. Finding the optimal strategy for regulating a network appears to be a difficult question. This thesis is divided into two parts. The first one deals with the "static" case, in which users' impact on the network is neglected; this is the situation at night or when the system is closed. The operator faces a given repartition of the vehicles and wants it to match a target repartition that is known a priori. The one-truck and multiple-truck balancing problems are addressed, and for each one an algorithm is proposed and tested on several instances. To deal with the "dynamic" case, in which users interact with the system, a simulator has been developed. It is used to compare several strategies and to monitor redistribution by trucks. Strategies using incentive policies instead of trucks are also tested: regularly updated prices are attached to stations to deter users from parking their vehicle at specified stations. Finally, the question of finding the best initial inventory is addressed; it corresponds to the case where no trucks are used within the day. Two local searches are presented, both aiming at minimizing the total time lost by users in the system. The results obtained can be used as inputs for the target repartitions used in the first part. During my thesis, I participated in two EURO-ROADEF challenges, the 2010 edition proposed by EDF and the 2012 one by Google. In both cases, my team reached the final phase. In 2010, our method was ranked fourth among all participants and led to the publication of an article. In 2012, we ranked eighteenth among all participants. Both works are included in the appendix.
16

Gerdes, Mike. "Predictive Health Monitoring for Aircraft Systems using Decision Trees." Licentiate thesis, Linköpings universitet, Fluida och mekatroniska system, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-105843.

Abstract:
Unscheduled aircraft maintenance causes a lot of problems and costs for aircraft operators: flights that have to be delayed or canceled are expensive, and spares are not always available at every location and sometimes have to be shipped across the world. Reducing the amount of unscheduled maintenance thus yields great cost savings for aircraft operators. This thesis describes three methods for aircraft health monitoring and prediction: one method for system monitoring, one method for forecasting time series, and one method that combines the other two into a complete monitoring and prediction process. Together the three methods allow the forecasting of possible failures. The two base methods use decision trees for decision making in the processes, and genetic optimization to improve the performance of the decision trees and to reduce the need for human interaction. Decision trees have the advantage that the generated code can be processed quickly and easily, that they can be altered by human experts without much work, and that they are readable by humans. Human readability and modification of the results is especially important for including expert knowledge and for removing errors that the automated code generation produced.
17

Calliess, Jan-Peter. "Conservative decision-making and inference in uncertain dynamical systems." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:b7206c3a-8d76-4454-a258-ea1e5bd1c63e.

Abstract:
The demand for automated decision making, learning and inference in uncertain, risk sensitive and dynamically changing situations presents a challenge: to design computational approaches that promise to be widely deployable and flexible to adapt on the one hand, while offering reliable guarantees on safety on the other. The tension between these desiderata has created a gap that, in spite of intensive research and contributions made from a wide range of communities, remains to be filled. This represents an intriguing challenge that provided motivation for much of the work presented in this thesis. With these desiderata in mind, this thesis makes a number of contributions towards the development of algorithms for automated decision-making and inference under uncertainty. To facilitate inference over unobserved effects of actions, we develop machine learning approaches that are suitable for the construction of models over dynamical laws that provide uncertainty bounds around their predictions. As an example application for conservative decision-making, we apply our learning and inference methods to control in uncertain dynamical systems. Owing to the uncertainty bounds, we can derive performance guarantees of the resulting learning-based controllers. Furthermore, our simulations demonstrate that the resulting decision-making algorithms are effective in learning and controlling under uncertain dynamics and can outperform alternative methods. Another set of contributions is made in multi-agent decision-making which we cast in the general framework of optimisation with interaction constraints. The constraints necessitate coordination, for which we develop several methods. As a particularly challenging application domain, our exposition focusses on collision avoidance. Here we consider coordination both in discrete-time and continuous-time dynamical systems. 
In the continuous-time case, inference is required to ensure that decisions are made that avoid collisions with adjustably high certainty even when computation is inevitably finite. In both discrete-time and finite-time settings, we introduce conservative decision-making. That is, even with finite computation, a coordination outcome is guaranteed to satisfy collision-avoidance constraints with adjustably high confidence relative to the current uncertain model. Our methods are illustrated in simulations in the context of collision avoidance in graphs, multi-commodity flow problems, distributed stochastic model-predictive control, as well as in collision-prediction and avoidance in stochastic differential systems. Finally, we provide an example of how to combine some of our different methods into a multi-agent predictive controller that coordinates learning agents with uncertain beliefs over their dynamics. Utilising the guarantees established for our learning algorithms, the resulting mechanism can provide collision avoidance guarantees relative to the a posteriori epistemic beliefs over the agents' dynamics.
18

Brandao, Jose Carlos Soares. "A decision support system and algorithms for the vehicle routing and scheduling problem." Thesis, Lancaster University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.238909.

19

Bdiwi, Mohamad. "Development of Integration Algorithms for Vision/Force Robot Control with Automatic Decision System." Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-150231.

Abstract:
In advanced robot applications, the challenge today is that the robot should perform different successive subtasks to achieve one or more complicated tasks, similar to a human. Such tasks require combining different kinds of sensors in order to obtain full information about the work environment. However, from the point of view of control, more sensors mean more possibilities for the structure of the control system. As shown previously, vision and force sensors are the most common external sensors in robot systems. As a result, numerous control algorithms and different structures for vision/force robot control can be found in scientific papers, e.g. shared control, traded control, etc. The gaps in the integration of vision/force robot control can be summarized as follows:
• How to define which subspaces should be vision, position or force controlled?
• When should the controller switch from one control mode to another?
• How to ensure that the visual information can be used reliably?
• How to define the most appropriate vision/force control structure?
In many previous works, a single kind of vision/force control structure, pre-defined by the programmer, is used while performing a specified task. In addition, if the task is modified or changed, it would be much more complicated for the user to describe the task and to define the most appropriate vision/force robot control, especially if the user is inexperienced. Furthermore, vision and force sensors are used only as simple feedback (e.g. the vision sensor is usually used as a position estimator) or are intended to avoid obstacles. Accordingly, much useful information provided by the sensors, which could help the robot perform the task autonomously, is missed.
In our opinion, these lacks of defining the most appropriate vision/force robot control and the weakness in the utilization from all the information which could be provided by the sensors introduce important limits which prevent the robot to be versatile, autonomous, dependable and user-friendly. For this purpose, helping to increase autonomy, versatility, dependability and user-friendly in certain area of robotics which requires vision/force integration is the scope of this thesis. More concretely: 1. Autonomy: In the term of an automatic decision system which defines the most appropriated vision/force control modes for different kinds of tasks and chooses the best structure of vision/force control depending on the surrounding environments and a priori knowledge. 2. Versatility: By preparing some relevant scenarios for different situations, where both the visual servoing and force control are necessary and indispensable. 3. Dependability: In the term of the robot should depend on its own sensors more than on reprogramming and human intervention. In other words, how the robot system can use all the available information which could be provided by the vision and force sensors, not only for the target object but also for the features extraction of the whole scene. 4. User-friendly: By designing a high level description of the task, the object and the sensor configuration which is suitable also for inexperienced user. If the previous properties are relatively achieved, the proposed robot system can: • Perform different successive and complex tasks. • Grasp/contact and track imprecisely placed objects with different poses. • Decide automatically the most appropriate combination of vision/force feedback for every task and react immediately to the changes from one control cycle to another because of occurrence of some unforeseen events. • Benefit from all the advantages of different vision/force control structures. 
• Benefit from all the information provided by the sensors. • Reduce human intervention or reprogramming during the execution of the task. • Facilitate the task description and the entry of a priori knowledge for the user, even if he/she is inexperienced.
APA, Harvard, Vancouver, ISO, and other styles
20

Lubbe, Hendrik Gideon. "Intelligent automated guided vehicle (AGV) with genetic algorithm decision making capabilities." Thesis, [Bloemfontein?] : Central University of Technology, Free State, 2007. http://hdl.handle.net/11462/85.

Full text
Abstract:
Thesis (M.Tech.) - Central University of Technology, Free State, 2006
The ultimate goal of this research was to make an intelligent learning machine, so a new method had to be developed. This was made possible by creating a programme that generates another programme. By constantly changing the generated programme to improve itself, the machines are given the ability to adapt to their surroundings and thus learn from experience. The generated programme had to perform a specific task. For this experiment the programme was generated for a simulated PIC microcontroller aboard a simulated robot. The goal was to get the robot as close as possible to a specific position inside a simulated maze. The robot therefore had to show the ability to avoid obstacles, although only the distance to the destination was given as an indication of how well the generated programme was performing. The programme performed experiments by randomly changing a number of instructions in the current generated programme. The generated programme was evaluated by simulating the reactions of the robot. If the change to the generated programme resulted in the robot getting closer to the destination, the changed programme was kept for future use. If the change resulted in a less desirable reaction, the newly generated programme was discarded and the unchanged programme was kept. This process was repeated a total of one hundred thousand times before the generated programme was considered valid. Because there was only a slim chance that a randomly chosen instruction would be advantageous to the programme, many changes were needed to obtain the desired instruction and thus the desired result. After each change an evaluation was made through simulation. The number of necessary changes to the programme was greatly reduced by giving seemingly desirable instructions a higher chance of being chosen than seemingly unsatisfactory ones. 
Due to the extensive use of the random function in this experiment, the results differ from one another. To overcome this barrier, many individual programmes had to be generated by simulating and changing an instruction in the generated programme a hundred thousand times. This method was compared against Genetic Algorithms, which were used to generate a programme for the same simulated robot. The new method made the robot adapt much faster to its surroundings than the Genetic Algorithms. A physical robot, similar to the virtual one, was built to prove that the generated programmes could be used on a physical robot. There were quite a number of differences between the generated programmes and the way in which a human would generally construct the programme. Therefore, this method not only gives programmers a new perspective, but could also possibly achieve what human programmers have not been able to achieve in the past.
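The mutate-simulate-keep loop described in this abstract can be sketched as follows. This is a minimal illustration, not the thesis code: the grid world, the four-move instruction set and the Manhattan-distance fitness are assumptions made for the example.

```python
import random

def simulate(program, start, goal, walls):
    """Run a move program on a grid robot; return final Manhattan distance to the goal."""
    x, y = start
    moves = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}
    for op in program:
        dx, dy = moves[op]
        nx, ny = x + dx, y + dy
        if (nx, ny) not in walls:      # blocked moves are simply skipped
            x, y = nx, ny
    return abs(x - goal[0]) + abs(y - goal[1])

def improve(program, start, goal, walls, iterations=10000, seed=0):
    """Randomly change one instruction; keep the change only if the robot ends no further from the goal."""
    rng = random.Random(seed)
    program = list(program)
    best = simulate(program, start, goal, walls)
    for _ in range(iterations):
        i = rng.randrange(len(program))
        old = program[i]
        program[i] = rng.choice('UDLR')
        score = simulate(program, start, goal, walls)
        if score <= best:
            best = score               # keep the improved (or equal) programme
        else:
            program[i] = old           # revert the harmful change
    return program, best
```

With an empty maze and a ten-instruction programme, the loop reliably drives the residual distance to zero, mirroring how the thesis's generator converges over a hundred thousand trials.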
APA, Harvard, Vancouver, ISO, and other styles
21

Bergey, Paul K. "A Decision Support System for the Electrical Power Districting Problem." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/27347.

Full text
Abstract:
Due to a variety of political, economic, and technological factors, many national electricity industries around the globe are transforming from non-competitive monopolies with centralized systems to decentralized operations with competitive business units. This process, commonly referred to as deregulation (or liberalization) is driven by the belief that a monopolistic industry fails to achieve economic efficiency for consumers over the long run. Deregulation has occurred in a number of industries such as: aviation, natural gas, transportation, and telecommunications. The most recent movement involving the deregulation of the electricity marketplace is expected to yield consumer benefit as well.

To facilitate deregulation of the electricity marketplace, competitive business units must be established to manage various functions and services independently. In addition, these business units must be given physical property rights for certain parts of the transmission and distribution network in order to provide reliable service and make effective business decisions. However, partitioning a physical power grid into economically viable districts involves many considerations. We refer to this complex problem as the electrical power districting problem.

This research is intended to identify the necessary and fundamental characteristics to appropriately model and solve an electrical power districting problem. Specifically, the objectives of this research are five-fold. First, to identify the issues relevant to electrical power districting problems. Second, to investigate the similarities and differences of electrical power districting problems with other districting problems published in the research literature. Third, to develop and recommend an appropriate solution methodology for electrical power districting problems. Fourth, to demonstrate the effectiveness of the proposed solution method for a specific case of electric power districting in the Republic of Ghana, with data provided by the World Bank. Finally, to develop a decision support system for the decision makers at the World Bank for solving Ghana's electrical power districting problem.
Ph. D.

APA, Harvard, Vancouver, ISO, and other styles
22

Wagner, Ben. "Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems." Wiley, 2019. http://dx.doi.org/10.1002/poi3.198.

Full text
Abstract:
Automated decision making is becoming the norm across large parts of society, which raises interesting liability challenges when human control over technical systems becomes increasingly limited. This article defines "quasi-automation" as the inclusion of humans as a basic rubber-stamping mechanism in an otherwise completely automated decision-making system. Three cases of quasi-automation are examined, where human agency in decision making is currently debatable: self-driving cars, border searches based on passenger name records, and content moderation on social media. While there are specific regulatory mechanisms for purely automated decision making, these regulatory mechanisms do not apply if human beings are merely rubber-stamping automated decisions. More broadly, most regulatory mechanisms follow a pattern of binary liability in attempting to regulate human or machine agency, rather than looking to regulate both. This results in regulatory gray areas where the regulatory mechanisms do not apply, harming human rights by preventing meaningful liability for socio-technical decision making. The article concludes by proposing criteria to ensure meaningful agency when humans are included in automated decision-making systems, and relates this to the ongoing debate on enabling human rights in Internet infrastructure.
APA, Harvard, Vancouver, ISO, and other styles
23

Astapenko, D. "Automated system design optimisation." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/6863.

Full text
Abstract:
The focus of this thesis is to develop a generic approach for solving reliability design optimisation problems which could be applicable to a diverse range of real engineering systems. The basic problem in optimal reliability design of a system is to explore the means of improving the system reliability within the bounds of available resources. Improving the reliability reduces the likelihood of system failure. The consequences of system failure can vary from minor inconvenience and cost to significant economic loss and personal injury. However, any improvements made to the system are subject to the availability of resources, which are very often limited. The objective of the design optimisation problem analysed in this thesis is to minimise system unavailability (or unreliability if an unrepairable system is analysed) through the manipulation and assessment of all possible design alterations available, which are subject to constraints on resources and/or system performance requirements. This thesis describes a genetic algorithm-based technique developed to solve the optimisation problem. Since an explicit mathematical form cannot be formulated to evaluate the objective function, the system unavailability (unreliability) is assessed using the fault tree method. Central to the optimisation algorithm are newly developed fault tree modification patterns (FTMPs). They are employed here to construct one fault tree representing all possible designs investigated, from the initial system design specified along with the design choices. This is then altered to represent the individual designs in question during the optimisation process. Failure probabilities for specified design cases are quantified by employing Binary Decision Diagrams (BDDs). A computer programme has been developed to automate the application of the optimisation approach to standard engineering safety systems. 
Its practicality is demonstrated through the consideration of two systems of increasing complexity: first a High Integrity Protection System (HIPS), followed by a Fire Water Deluge System (FWDS). The technique is then further developed and applied to solve problems of multi-phased mission systems. Two systems are considered: first an unmanned aerial vehicle (UAV) and secondly a military vessel. The final part of this thesis focuses on continuing the development process by adapting the method to solve design optimisation problems for multiple multi-phased mission systems. Its application is demonstrated by considering an advanced UAV system involving multiple multi-phased flight missions. The applications discussed prove that the technique progressively developed in this thesis enables design optimisation problems to be solved for systems with different levels of complexity. A key contribution of this thesis is the development of a novel generic optimisation technique, embedding newly developed FTMPs, which is capable of optimising the reliability design for potentially any engineering system. Another key and novel contribution of this work is the capability to analyse and provide optimal design solutions for multiple multi-phase mission systems. Keywords: optimisation, system design, multi-phased mission system, reliability, genetic algorithm, fault tree, binary decision diagram
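Since each candidate design's unavailability is quantified from a fault tree, a minimal sketch of fault-tree quantification may help. The gate structure and the assumption of independent basic events are illustrative only; the thesis itself converts the tree to a BDD for exact and efficient evaluation.

```python
def unavailability(node, q):
    """Evaluate the top-event probability of a fault tree with independent basic events.

    node is either a basic-event name (str) or a tuple ('AND'|'OR', [children]).
    q maps basic-event names to their failure probabilities.
    """
    if isinstance(node, str):
        return q[node]
    gate, children = node
    probs = [unavailability(c, q) for c in children]
    if gate == 'AND':
        p = 1.0
        for v in probs:
            p *= v               # all inputs must fail
        return p
    # OR gate: 1 minus the probability that no input fails (independence assumed)
    p = 1.0
    for v in probs:
        p *= (1.0 - v)
    return 1.0 - p
```

For example, a top event fed by basic event a OR (b AND c) with q = {a: 0.1, b: 0.2, c: 0.3} evaluates to 1 - 0.9 x 0.94 = 0.154.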
APA, Harvard, Vancouver, ISO, and other styles
24

Sun, Rui. "Wide Area System Islanding Detection, Classification, and State Evaluation Algorithm." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/19284.

Full text
Abstract:
An islanded power system indicates a geographical and logical detachment between a portion of a power system and the major grid, and is often accompanied by the loss of system observability. A power system islanding contingency can be one of the most severe consequences of wide-area system failures. It may result in enormous losses to both the power utilities and the consumers. Even relatively small and stable islanding events may largely disturb the consumers' normal operation in the island. On the other hand, power consumption in the U.S. has been increasing greatly since the 1970s with the boom of the global economy and mass manufacturing, and the daily increasing requirements of modern customers. Along with extreme weather and natural disaster factors, the century-old U.S. power grid is severely tested by potential islanding disturbances. Since the 1980s, the invention of synchronized phasor measurement units (PMUs) has broadened the horizon for system monitoring, control and protection. Their real-time features and reliable measurements have made many online system schemes possible. The recent revolution in computers and electronic devices enables the implementation of complex methods (such as data mining methods) requiring large databases in power system analysis. The method presented in this dissertation is primarily focused on two studies: a power system islanding contingency detection, identification, classification and state evaluation algorithm using a decision tree algorithm and a topology approach, with its application to the Dominion Virginia power system; and an optimal PMU placement strategy using a binary integer programming algorithm that takes system islanding and redundancy issues into consideration.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
25

Joshi, Chetan. "Development of a Decision Support Tool for Planning Rail Systems: An Implementation in TSAM." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/31024.

Full text
Abstract:
A Decision Support model for planning Intercity Railways is presented in this research. The main aim of the model is to generate inputs for the logit model existing in the Virginia Tech Transportation Systems Analysis Model (TSAM). The inputs required by the TSAM logit model are travel time, travel cost and schedule delay. Travel times and travel costs for different rail technologies are calculated using a rail network and actual or proposed rail schedules. The concept of relational databases is used in the development of the network topology. Further, an event graph approach is used for analysis of the generated network. Shortest travel times and their corresponding travel costs between origin-destination pairs are found using Floyd's algorithm. Complete itineraries including transfers (if involved) are intrinsically held in the precedence matrix generated after running the algorithm. A standard mapping technique is used to obtain the actual routes. The algorithms developed have been implemented in MATLAB. Schedules from the North American Passenger rail system AMTRAK are used to generate the sample network for this study. The model developed allows the user to evaluate what-if scenarios for various route frequencies and rail technologies such as Accelerail, High Speed Rail and Maglev. The user also has the option of modifying route information. Comparison of travel time values for the mentioned technology types in different corridors revealed that frequency of service has a greater impact on the total travel time in shorter distance corridors, whereas technology/line-haul speed has a greater influence on the total travel time in the longer distance corridors. This tool could be useful to make preliminary assessments of future rail systems. The network topology generated by the algorithm can further be used for network flow assignment, especially time-dependent assignment if used with dynamic graph algorithms.
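Floyd's algorithm with a precedence matrix, as used above to obtain both shortest travel times and complete itineraries, can be sketched as follows. The node numbering and edge weights are hypothetical; the thesis implementation is in MATLAB.

```python
INF = float('inf')

def floyd(n, edges):
    """All-pairs shortest travel times plus a successor matrix for itinerary recovery."""
    dist = [[INF] * n for _ in range(n)]
    nxt = [[None] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0.0
        nxt[i][i] = i
    for u, v, w in edges:                 # directed edges (origin, destination, travel time)
        dist[u][v] = w
        nxt[u][v] = v
    for k in range(n):                    # allow intermediate stop k
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def path(nxt, u, v):
    """Recover the actual route (including transfers) from the successor matrix."""
    if nxt[u][v] is None:
        return []
    route = [u]
    while u != v:
        u = nxt[u][v]
        route.append(u)
    return route
```

On a toy network where a direct link 0-2 costs 5 but the two-leg route via node 1 costs 2, the algorithm returns the cheaper multi-leg itinerary, which is exactly the transfer information the precedence matrix holds.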
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
26

Hashemi, Vahid [Verfasser], and Holger [Akademischer Betreuer] Hermanns. "Decision algorithms for modelling, optimal control and verification of probabilistic systems / Vahid Hashemi ; Betreuer: Holger Hermanns." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2017. http://d-nb.info/1152095447/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Hashemi, Vahid [Verfasser], and Holger [Akademischer Betreuer] Hermanns. "Decision algorithms for modelling, optimal control and verification of probabilistic systems / Vahid Hashemi ; Betreuer: Holger Hermanns." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:291-scidok-ds-270397.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Choi, Bong-Jin. "Statistical Analysis, Modeling, and Algorithms for Pharmaceutical and Cancer Systems." Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5200.

Full text
Abstract:
The aim of the present study is to develop statistical algorithms and models associated with breast and lung cancer patients. In this study, we developed several statistical software packages, R packages, and models using our new statistical approach. We used the five-parameter logistic model for determining the optimal doses of pharmaceutical drugs, including dynamic initial points, an automatic process for outlier detection and an algorithm that develops a graphical user interface (GUI) program. The developed statistical procedure assists medical scientists by reducing the time needed to determine the optimal dose of new drugs, and can also easily identify which drugs need more experimentation. Secondly, we developed a new classification method that is very useful in the health sciences. We used a new decision tree algorithm and a random forest method to rank our variables and to build a final decision tree model. The decision tree can identify and communicate complex data systems to scientists with minimal knowledge of statistics. Thirdly, we developed statistical packages using the Johnson SB probability distribution, which is important in parametrically studying a variety of health, environmental, and engineering problems. Scientists experience difficulties in obtaining estimates for the four parameters of the subject probability distribution. The developed algorithm combines several statistical procedures, such as the Newton-Raphson method, the bisection method, least squares estimation, and the regression method, to develop our R package. This R package has functions that generate random numbers, calculate probabilities and inverse probabilities, and estimate the four parameters of the Johnson SB probability distribution. Researchers can use the developed R package to build their own statistical models or perform desirable statistical simulations. 
The final aspect of the study involves building a statistical model for lung cancer survival time. In developing the subject statistical model, we have taken into consideration the number of cigarettes the patient smoked per day, the duration of smoking, and the age at diagnosis of lung cancer. The response variable is the survival time, and the significant factors include interactions. The probability density function of the survival times has been obtained and the survival function determined. The analysis is based on groups that involve gender and the smoking factors. A comparison with the ordinary survival function is given.
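The Johnson SB functionality described above (random number generation and probability calculation) follows from the defining transform z = gamma + delta * ln(y / (1 - y)) with y = (x - xi) / lambda, where z is standard normal. A Python sketch of the analogous functions follows; the R package itself is not reproduced here.

```python
import math
import random
from statistics import NormalDist

def sb_cdf(x, gamma, delta, xi, lam):
    """Johnson SB CDF: F(x) = Phi(gamma + delta * ln(y / (1 - y))), y = (x - xi)/lam."""
    y = (x - xi) / lam
    return NormalDist().cdf(gamma + delta * math.log(y / (1.0 - y)))

def sb_rvs(n, gamma, delta, xi, lam, seed=0):
    """Random variates: invert the normalising transform applied to standard normals."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        y = 1.0 / (1.0 + math.exp(-(z - gamma) / delta))  # logistic inverse of the SB transform
        out.append(xi + lam * y)                           # all variates fall in (xi, xi + lam)
    return out
```

With gamma = 0 the distribution is symmetric about the midpoint of its support, so the CDF at xi + lam/2 is exactly 0.5, a convenient sanity check.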
APA, Harvard, Vancouver, ISO, and other styles
29

Pattison, Rachel Lesley. "Safety system design optimisation." Thesis, Loughborough University, 2000. https://dspace.lboro.ac.uk/2134/22019.

Full text
Abstract:
This thesis investigates the efficiency of a design optimisation scheme that is appropriate for systems which require a high likelihood of functioning on demand. Traditional approaches to the design of safety critical systems follow the preliminary design, analysis, appraisal and redesign stages until what is regarded as an acceptable design is achieved. For safety systems whose failure could result in loss of life it is imperative that the best use of the available resources is made and a system which is optimal, not just adequate, is produced. The object of the design optimisation problem is to minimise system unavailability through manipulation of the design variables, such that limitations placed on them by constraints are not violated. Commonly, with mathematical optimisation problems, there will be an explicit objective function which defines how the characteristic to be minimised is related to the variables. As regards the safety system problem, an explicit objective function cannot be formulated, and as such, system performance is assessed using the fault tree method. To overcome the time-consuming task of constructing a fault tree for each design investigated during the optimisation procedure, house events are used to construct a single fault tree representing the failure causes of each potential design. Once the fault tree has been constructed for the design in question it is converted to a BDD for analysis. A genetic algorithm is first employed to perform the system optimisation, where the practicality of this approach is demonstrated initially through application to a High-Integrity Protection System (HIPS) and subsequently a more complex Firewater Deluge System (FDS). An alternative optimisation scheme achieves the final design specification by solving a sequence of optimisation problems. 
Each of these problems is defined by assuming some form of the objective function and specifying a sub-region of the design space over which this function will be representative of the system unavailability. The thesis concludes with attention to various optimisation techniques which possess features able to address difficulties in the optimisation of safety critical systems. Specifically, consideration is given to the use of a statistically designed experiment and a logical search approach.
APA, Harvard, Vancouver, ISO, and other styles
30

Kamath, Akash S. "An efficient algorithm for caching online analytical processing objects in a distributed environment." Ohio : Ohio University, 2002. http://www.ohiolink.edu/etd/view.cgi?ohiou1174678903.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Mitchell, Sophia. "A Cascading Fuzzy Logic Approach for Decision Making in Dynamic Applications." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1448037866.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Patek, Stephen D. (Stephen David). "Stochastic and shortest path games : theory and algorithms." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10209.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997.
Includes bibliographical references (leaves 132-138).
by Stephen David Patek.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
33

Arthur, Gerald L. Gong Yang. "Implementation of a fuzzy rule-based decision support system for the immunohistochemical diagnosis of small B-cell lymphomas." Diss., Columbia, Mo. : University of Missouri-Columbia, 2009. http://hdl.handle.net/10355/6569.

Full text
Abstract:
Thesis (M.S.)--University of Missouri-Columbia, 2009.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Thesis advisor: Yang Gong. "May 2009" Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
34

Zhang, Yingying. "Algorithms and Data Structures for Efficient Timing Analysis of Asynchronous Real-time Systems." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4622.

Full text
Abstract:
This thesis presents a framework to verify asynchronous real-time systems based on model checking. These systems are modeled using a common modeling formalism named Labeled Petri nets (LPNs). In order to verify the real-time systems algorithmically, the zone-based timing analysis method is used for LPNs. It searches the state space with timing information (represented by zones). When there is a high degree of concurrency in the model, firing concurrently enabled transitions in different orders may result in different zones, and these zones may be combined without affecting the verification result. Since the zone-based method cannot deal with this problem efficiently, the POSET timing analysis method is adopted for LPNs. It separates concurrency from causality and generates exactly one zone for a single state, but it needs to maintain an extra POSET matrix for each state. In order to save time and memory, an improved zone-based timing analysis method is introduced by integrating the above two methods. It searches the state space with zones but eliminates the use of the POSET matrix, and it generates the same result as the POSET method. To illustrate these methods, a circuit example is used throughout the thesis. Since the state space generated is usually very large, a graph data structure named multi-value decision diagrams (MDDs) is implemented to store the zones compactly. In order to share common clock values of different zones, two zone encoding methods are described: direct encoding and minimal constraint encoding. They ignore the unnecessary information in zones and thus reduce the length of the integer tuples. The effectiveness of these two encoding methods is demonstrated by experimental results of the circuit example.
APA, Harvard, Vancouver, ISO, and other styles
35

Bhuma, Venkata Deepti Kiran. "Bidirectional LAO algorithm a faster approach to solve goal-directed MDPs /." Lexington, Ky. : [University of Kentucky Libraries], 2004. http://lib.uky.edu/ETD/ukycosc2004t00187/VBThesis.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Kentucky, 2004.
Title from document title page (viewed Jan. 5, 2005). Document formatted into pages; contains vii, 32p. : ill. Includes abstract and vita. Includes bibliographical references (p. 30-31).
APA, Harvard, Vancouver, ISO, and other styles
36

Waters, Deric Wayne. "Signal Detection Strategies and Algorithms for Multiple-Input Multiple-Output Channels." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7514.

Full text
Abstract:
In today's society, a growing number of users are demanding more sophisticated services from wireless communication devices. In order to meet these rising demands, it has been proposed to increase the capacity of the wireless channel by using more than one antenna at the transmitter and receiver, thereby creating multiple-input multiple-output (MIMO) channels. Using MIMO communication techniques is a promising way to improve wireless communication technology because in a rich-scattering environment the capacity increases linearly with the number of antennas. However, increasing the number of transmit antennas also increases the complexity of detection at an exponential rate. So while MIMO channels have the potential to greatly increase the capacity of wireless communication systems, they also force a greater computational burden on the receiver. Even suboptimal MIMO detectors with relatively low complexity have been shown to achieve unprecedented high spectral efficiency. However, their performance is far inferior to the optimal MIMO detector, meaning they require more transmit power. The fact that the optimal MIMO detector is an impractical solution due to its prohibitive complexity leaves a performance gap between detectors that require reasonable complexity and the optimal detector. The objective of this research is to bridge this gap and provide new solutions for managing the inherent performance-complexity trade-off in MIMO detection. The optimally-ordered decision-feedback (BODF) detector is a standard low-complexity detector. The contributions of this thesis can be regarded as ways to either improve its performance or reduce its complexity - or both. We propose a novel algorithm to implement the BODF detector based on noise-prediction. This algorithm is more computationally efficient than previously reported implementations of the BODF detector. 
Another benefit of this algorithm is that it can be used to easily upgrade an existing linear detector into a BODF detector. We propose the partial decision-feedback detector as a strategy to achieve nearly the same performance as the BODF detector, while requiring nearly the same complexity as the linear detector. We propose the family of Chase detectors that allow the receiver to trade performance for reduced complexity. By adapting some simple parameters, a Chase detector may achieve near-ML performance or have near-minimal complexity. We also propose two new detection strategies that belong to the family of Chase detectors called the B-Chase and S-Chase detectors. Both of these detectors can achieve near-optimal performance with less complexity than existing detectors. Finally, we propose the double-sorted lattice-reduction algorithm that achieves near-optimal performance with near-BODF complexity when combined with the decision-feedback detector.
APA, Harvard, Vancouver, ISO, and other styles
37

Yusuf, Syed Adnan. "An evolutionary AI-based decision support system for urban regeneration planning." Thesis, University of Wolverhampton, 2010. http://hdl.handle.net/2436/114896.

Full text
Abstract:
The renewal of derelict inner-city urban districts suffering from high levels of socio-economic deprivation and sustainability problems is one of the key research areas in urban planning and regeneration. Subject to a wide range of social, economic and environmental factors, decision support for an optimal allocation of residential and service lots within such districts is regarded as a complex task. Pre-assessment of various neighbourhood factors before the commencement of the actual location-allocation of public services is considered paramount to the sustainable outcome of regeneration projects. Spatial assessment in such derelict built-up areas requires planning of lot assignment for residential buildings in a way that maximizes accessibility to public services while minimizing the deprivation of built neighbourhood areas. However, predicting the impact of socio-economic deprivation on the regeneration districts in order to optimize the location-allocation of public service infrastructure is a complex task. This is generally due to the highly conflicting nature of various service structures with various socio-economic and environmental factors. With regard to the problem given above, this thesis presents the development of an evolutionary AI-based decision support system to assist planners with the assessment and optimization of regeneration districts. The work develops an Adaptive Network-Based Fuzzy Inference System (ANFIS) module to assess neighbourhood districts for various deprivation factors. Additionally, an evolutionary genetic algorithm-based solution is implemented to optimize various urban regeneration layouts based upon the prior deprivation assessment model. 
The two-tiered framework initially assesses socio-cultural deprivation levels of employment, health, crime and transport accessibility in neighbourhood areas and produces a deprivation impact matrix over the regeneration layout lots based upon a trained network-based fuzzy inference system. Based upon this impact matrix, a genetic algorithm is developed to optimize the placement of various public services (shopping malls, primary schools, GPs and post offices) in a way that maximizes the accessibility of all services to regenerated residential units and helps to minimize the deprivation of surrounding neighbourhood areas. The outcome of this research is evaluated over two real-world case studies, producing highly coherent results. The work ultimately delivers a smart urban regeneration toolkit which provides designers and planners with decision support in the form of a simulation toolkit.
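A zero-order Sugeno-style fuzzy inference step, the kind of rule evaluation that underlies an ANFIS, can be sketched as follows. The membership functions, the two-rule base and the deprivation output levels are invented for illustration and are not taken from the thesis.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def sugeno_deprivation(unemployment, crime):
    """Two-rule zero-order Sugeno sketch: rule weight = min of antecedent memberships."""
    rules = [
        # (membership of unemployment 'high', membership of crime 'high', output level)
        (tri(unemployment, 0.4, 0.8, 1.2), tri(crime, 0.4, 0.8, 1.2), 0.9),   # both high -> high deprivation
        (tri(unemployment, -0.4, 0.0, 0.6), tri(crime, -0.4, 0.0, 0.6), 0.1), # both low  -> low deprivation
    ]
    num = den = 0.0
    for mu_u, mu_c, out in rules:
        w = min(mu_u, mu_c)   # AND of the antecedents
        num += w * out
        den += w
    return num / den if den else 0.5  # weighted average of rule outputs
```

In an ANFIS, the membership parameters and output levels above are exactly the quantities tuned by training; here they are fixed by hand.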
APA, Harvard, Vancouver, ISO, and other styles
38

Humpherys, Sean L. "A system of deception and fraud detection using reliable linguistic cues including hedging, disfluencies, and repeated phrases." Diss., The University of Arizona, 2010. http://hdl.handle.net/10150/196115.

Full text
Abstract:
Given the increasing problem of fraud, crime, and national security threats, assessing credibility is a recurring research topic in Information Systems and in other disciplines. Decision support systems can help. But the success of the system depends on reliable cues that can distinguish deceptive from truthful behavior and on a proven classification algorithm. This investigation aims to identify linguistic cues that distinguish deceivers from truthtellers, and it aims to demonstrate how the cues can successfully classify deception and truth. Three new datasets were gathered: 202 fraudulent and nonfraudulent financial disclosures (10-Ks), a laboratory experiment that asked twelve questions of participants who answered deceptively to some questions and truthfully to others (Cultural Interviews), and a mock crime experiment where some participants stole a ring from an office and where all participants were interviewed as to their guilt or innocence (Mock Crime). Transcribed participant responses were investigated for distinguishing cues and used for classification testing. Disfluencies (e.g., um, uh, repeated phrases, etc.), hedging words (e.g., perhaps, may, etc.), and interjections (e.g., okay, like, etc.) are theoretically developed as potential cues to deception. Past research provides conflicting evidence regarding disfluency use and deception. Some researchers opine that deception increases cognitive load, which lowers attentional resources, which increases speech errors, and thereby increases disfluency use (i.e., Cognitive-Load Disfluency theory). Other researchers argue against the causal link between disfluencies and speech errors, positing that disfluencies are controllable and that deceivers strategically avoid disfluencies to avoid appearing hesitant or untruthful (i.e., Suppression-Disfluency theory). A series of t-tests, repeated measures GLMs, and nested-model design regressions disconfirms the Suppression-Disfluency theory. 
Um, uh, and interjections are used at an increased rate by deceivers in spontaneous speech. Reverse order questioning did not increase disfluency use. Fraudulent 10-Ks have a higher mean count of hedging words. Statistical classifiers and machine learning algorithms are demonstrated on the three datasets. A feature reduction by backward Wald stepwise with logistic regression had the highest classification accuracies (69%-87%). Accuracies are compared to professional interviewers and to previously researched classification models. In many cases the new models demonstrated improvements. 10-Ks are classified with 69% overall accuracy.
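The cue-counting approach this abstract describes can be illustrated with a minimal sketch. The lexicons and weights below are hypothetical stand-ins, not the cue sets or fitted coefficients from the dissertation; in the actual work a logistic regression would be fitted on labeled transcripts.

```python
import re

# Hypothetical cue lexicons, loosely inspired by the cue categories named in
# the abstract (hedging words, disfluencies, interjections). These are NOT the
# lexicons or model coefficients from the dissertation.
HEDGES = {"perhaps", "may", "might", "possibly"}
DISFLUENCIES = {"um", "uh"}
INTERJECTIONS = {"okay", "like", "well"}

def cue_counts(utterance):
    """Count occurrences of each cue category in a transcribed utterance."""
    tokens = re.findall(r"[a-z']+", utterance.lower())
    return {
        "hedges": sum(t in HEDGES for t in tokens),
        "disfluencies": sum(t in DISFLUENCIES for t in tokens),
        "interjections": sum(t in INTERJECTIONS for t in tokens),
    }

def score(counts, weights, bias):
    """Linear cue score; a logistic regression would pass this through a sigmoid."""
    return bias + sum(weights[k] * counts[k] for k in weights)
```

Feature counts like these would feed the stepwise logistic regression the abstract reports.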
APA, Harvard, Vancouver, ISO, and other styles
39

Riauke, Jelena. "SPEA2-based safety system multi-objective optimization." Thesis, Loughborough University, 2009. https://dspace.lboro.ac.uk/2134/5514.

Full text
Abstract:
Safety systems are designed to prevent the occurrence of certain conditions and their future development into a hazardous situation. The consequence of the failure of a safety system of a potentially hazardous industrial system or process varies from minor inconvenience and cost to personal injury, significant economic loss and death. To minimise the likelihood of a hazardous situation, safety systems must be designed to maximise their availability. Therefore, the purpose of this thesis is to propose an effective safety system design optimization scheme. A multi-objective genetic algorithm has been adopted, where the criteria catered for include unavailability, cost, spurious trip and maintenance down time. Analyses of individual system designs are carried out using the latest advances in the fault tree analysis technique and the binary decision diagram (BDD) approach. The improved Strength Pareto Evolutionary Algorithm (SPEA2) is chosen to perform the system optimization resulting in the final design specifications. The practicality of the developed approach is demonstrated initially through application to a High Integrity Protection System (HIPS) and subsequently, to test scalability, using the more complex Firewater Deluge System (FDS). Computer code has been developed to carry out the analysis. The results for both systems are compared to those using a single objective optimization approach (GASSOP) and exhaustive search. The overall conclusions show a number of benefits of applying the SPEA2-based technique to safety system design optimization. It is common for safety systems to feature dependency relationships between their components. To enable the use of the fault tree analysis technique and the BDD approach for such systems, the Markov method is incorporated into the optimization process. The main types of dependency which can exist between the safety system component failures are identified.
The Markov model generation algorithms are suggested for each type of dependency. The modified optimization tool is tested on the HIPS and FDS. A comparison of results shows the benefit of using the modified technique for safety system optimization. Finally, the effectiveness of the approach and its application to general safety systems are discussed.
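The multi-objective selection at the heart of SPEA2 rests on Pareto dominance. The sketch below shows only that dominance filter over hypothetical (unavailability, cost) design scores; SPEA2 itself additionally assigns each design a fitness from dominance strength and a density estimate, which are omitted here.

```python
# Each candidate safety-system design is scored on objectives to be minimised,
# e.g. (unavailability, cost); the numbers used below are hypothetical.

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Keep only non-dominated designs. SPEA2 goes further, ranking designs by
    dominance strength plus a nearest-neighbour density estimate."""
    return [d for d in designs
            if not any(dominates(o, d) for o in designs if o is not d)]
```

For example, a design that is both less available and more expensive than another is filtered out, while all mutually non-dominated trade-offs survive into the final design specifications.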
APA, Harvard, Vancouver, ISO, and other styles
40

Unceta, Irene. "Adapting by copying. Towards a sustainable machine learning." Doctoral thesis, Universitat de Barcelona, 2021. http://hdl.handle.net/10803/671692.

Full text
Abstract:
Despite the rapid growth of machine learning in the past decades, deploying automated decision making systems in practice remains a challenge for most companies. On an average day, data scientists face substantial barriers to serving models into production. Production environments are complex ecosystems, still largely based on on-premise technology, where modifications are time-consuming and costly. Given the rapid pace with which the machine learning environment changes these days, companies struggle to stay up-to-date with the latest software releases, the changes in regulation and the newest market trends. As a result, machine learning often fails to deliver according to expectations. And more worryingly, this can result in unwanted risks for users, for the company itself and even for society as a whole, insofar as the negative impact of these risks is perpetuated in time. In this context, adaptation is an instrument that is both necessary and crucial for ensuring a sustainable deployment of industrial machine learning. This dissertation is devoted to developing theoretical and practical tools to enable adaptation of machine learning models in company production environments. More precisely, we focus on devising mechanisms to exploit the knowledge acquired by models to train future generations that are better fit to meet the stringent demands of a changing ecosystem. We introduce copying as a mechanism to replicate the decision behaviour of a model using another that presents differential characteristics, in cases where access to both the models and their training data is restricted. We discuss the theoretical implications of this methodology and show how it can be performed and evaluated in practice. Under the conceptual framework of actionable accountability we also explore how copying can be used to ensure risk mitigation in circumstances where deployment of a machine learning solution results in a negative impact to individuals or organizations.
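The copying mechanism described in this abstract can be sketched in miniature: query a black-box model on synthetic points and fit a surrogate that replicates its decisions. The black-box rule and the threshold-model copy below are toy assumptions for illustration, not the models studied in the thesis.

```python
import random

random.seed(0)

# Toy black-box classifier standing in for a deployed model whose internals and
# training data are inaccessible; only its predictions can be queried.
def black_box(x):
    return int(x > 0.6)

# Label a synthetic query set with the black box, then fit a "copy" (here a
# simple threshold rule) that replicates the observed decision behaviour.
queries = [random.random() for _ in range(500)]
labels = [black_box(x) for x in queries]

def fit_copy(xs, ys):
    """Pick the threshold that minimises disagreement with the queried labels."""
    candidates = [i / 100 for i in range(101)]
    return min(candidates,
               key=lambda t: sum(int(x > t) != y for x, y in zip(xs, ys)))

threshold = fit_copy(queries, labels)
```

The copy recovers the black box's decision boundary from queries alone, which is the essence of training a new-generation model with differential characteristics when the original data are restricted.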
APA, Harvard, Vancouver, ISO, and other styles
41

Sosnowski, Scott T. "Approximate Action Selection For Large, Coordinating, Multiagent Systems." Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1459468867.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Lerma, Elvira Néstor. "Assessment and implementation of evolutionary algorithms for optimal management rules design in water resources systems." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/90547.

Full text
Abstract:
Water is an essential resource from an environmental, biological, economic or social point of view. In basin management, the irregular distribution in time and in space of this resource is well known. This issue is worsened by extreme climate conditions, generating drought periods or flood events. For both situations, optimal management is necessary. In one case, different water uses should be supplied efficiently using the available surface and groundwater resources. In the other case, the most important goal is to avoid damage in flood areas, including the loss of human lives, but also to optimize the revenue of energy production in hydropower plants, or in other uses. The approach presented in this thesis proposes to obtain optimal management rules in water resource systems. With this aim, evolutionary algorithms were combined with simulation models. The former, as optimization tools, are responsible for guiding the iterations of the process. In each iteration, a new management rule is defined in the simulation model, which is then run to assess the state of the system under the new management. For testing the proposed methodology, four evolutionary algorithms were assessed in combination with two simulation models. The methodology was implemented in four real case studies. This thesis is presented as a compendium of five manuscripts: three scientific papers published in journals (which are indexed in the Journal Citation Report), another under review, and the last a manuscript from conference proceedings. In the first manuscript, the Pikaia optimization algorithm was combined with the network flow SIMGES simulation model for obtaining four different types of optimal management rules in the Júcar River Basin. In addition, the parameters of the Pikaia algorithm were also analyzed to identify the best combination of them to use in the optimization process.
In the second scientific paper, the multi-objective NSGA-II algorithm was assessed to obtain a parametric management rule in the Mijares River basin. In this case, the same simulation model was linked with the evolutionary algorithm. In the conference manuscript, an in-depth analysis of the Tirso-Flumendosa-Campidano (TFM) system using different scenarios and comparing three water simulation models for water resources management was carried out. The third published manuscript presented the assessment and comparison of two evolutionary algorithms for obtaining optimal rules in the TFM system using the SIMGES model. The algorithms assessed were the SCE-UA and the Scatter Search. In this research paper, the parameters of both algorithms were also analyzed as was done with the Pikaia algorithm. The management rules in the first three manuscripts focused on avoiding or minimizing deficits in urban and agrarian demands and, in some case studies, also on minimizing the water pumped. Finally, in the last document, two of the algorithms used in previous manuscripts were assessed, the mono-objective SCE-UA and the multi-objective NSGA-II. For this research, the algorithms were combined with RS MINERVE software to manage flood events in the Visp River basin, minimizing damage in risk areas and losses in hydropower plants. Results reached in the five manuscripts demonstrate the validity of the approach. In all the case studies, and with the different evolutionary algorithms assessed, the obtained management rules achieved a better system management than the base scenario of each case. These results usually mean a decrease of the economic costs in the management of water resources. However, comparing the four algorithms assessed, the SCE-UA algorithm proved to be the most efficient due to its different stop/convergence criteria and its formulation.
Nevertheless, NSGA-II is the most recommended because its multi-objective search improves different objectives of equal importance simultaneously, allowing the decision makers to select the best trade-off for the management of the system.
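The optimize-by-simulation loop this thesis describes (an evolutionary algorithm proposing rule parameters, a simulation model evaluating the resulting system state) can be sketched as follows. The inflow series, demand and single release-fraction rule below are invented toys standing in for runs of real models such as SIMGES or RS MINERVE.

```python
import random

random.seed(1)

# Toy stand-in for a water-resources simulation model: given one release-rule
# parameter, return the total supply deficit over a synthetic inflow series.
# All figures here are invented for illustration only.
INFLOWS = [5, 2, 8, 1, 6, 3, 7, 2]
DEMAND = 4

def simulate(release_fraction):
    storage, deficit = 10.0, 0.0
    for inflow in INFLOWS:
        storage += inflow
        release = release_fraction * storage
        deficit += max(0.0, DEMAND - release)
        storage -= release
    return deficit

def evolve(generations=30, pop_size=20, sigma=0.05):
    """(mu + lambda)-style loop: mutate rule parameters, simulate, keep the best."""
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        children = [min(1.0, max(0.0, p + random.gauss(0, sigma))) for p in pop]
        pop = sorted(pop + children, key=simulate)[:pop_size]
    return pop[0]
```

Each generation re-runs the simulation for every candidate rule, mirroring the iterate-define-simulate cycle the abstract describes; algorithms such as SCE-UA or NSGA-II replace the simple mutation-and-truncation step here.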
Lerma Elvira, N. (2017). Assessment and implementation of evolutionary algorithms for optimal management rules design in water resources systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90547
APA, Harvard, Vancouver, ISO, and other styles
43

Gerdes, Mike. "Health Monitoring for Aircraft Systems using Decision Trees and Genetic Evolution." Diss., Aircraft Design and Systems Group (AERO), Department of Automotive and Aeronautical Engineering, Hamburg University of Applied Sciences, 2019. http://d-nb.info/1202830382.

Full text
Abstract:
Reducing unscheduled maintenance is important for aircraft operators. There are significant costs if flights must be delayed or cancelled, for example, if spares are not available and have to be shipped across the world. This thesis describes three methods of aircraft health condition monitoring and prediction; one for system monitoring, one for forecasting and one combining the two other methods for a complete monitoring and prediction process. Together, the three methods allow organizations to forecast possible failures. The first two use decision trees for decision-making and genetic optimization to improve the performance of the decision trees and to reduce the need for human interaction. Decision trees have several advantages: the generated code is quickly and easily processed, it can be altered by human experts without much work, it is readable by humans, and it requires few resources for learning and evaluation. The readability and the ability to modify the results are especially important; special knowledge can be gained and errors produced by the automated code generation can be removed. A large number of data sets is needed for meaningful predictions. This thesis uses two data sources: first, data from existing aircraft sensors, and second, sound and vibration data from additionally installed sensors. It draws on methods from the field of big data and machine learning to analyse and prepare the data sets for the prediction process.
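The building block of the decision trees described in this abstract is a threshold split chosen by information gain. The sketch below finds the best split on one hypothetical sensor feature; a full tree, as evolved in the thesis with genetic optimization, applies such splits recursively, and the sensor values used here are invented.

```python
from math import log2

def entropy(labels):
    """Shannon entropy of a label list."""
    total = len(labels)
    return -sum((labels.count(l) / total) * log2(labels.count(l) / total)
                for l in set(labels))

def best_split(values, labels):
    """Threshold on one feature maximising information gain; a decision tree
    applies this search recursively at every node."""
    base, best_gain, best_t = entropy(labels), 0.0, None
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        if not left or not right:
            continue
        gain = base - (len(left) * entropy(left)
                       + len(right) * entropy(right)) / len(labels)
        if gain > best_gain:
            best_gain, best_t = gain, t
    return best_t, best_gain
```

Because the resulting rules are simple comparisons, the generated trees remain readable and editable by human experts, which is the advantage the abstract emphasises.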
APA, Harvard, Vancouver, ISO, and other styles
44

Tsegaye, Seneshaw Amare. "Flexible Urban Water Distribution Systems." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4597.

Full text
Abstract:
With increasing global change pressures such as urbanization and climate change, cities of the future will experience difficulties in efficiently managing scarcer and less reliable water resources. However, projections of future global change pressures are plagued with uncertainties. This increases the difficulty in developing urban water systems that are adaptable to future uncertainty. A major component of an urban water system is the distribution system, which constitutes approximately 80-85% of the total cost of the water supply system (Swamee and Sharma, 2008). Traditionally, water distribution systems (WDS) are designed using deterministic assumptions of main model input variables such as water availability and water demand. However, these deterministic assumptions are no longer valid due to the inherent uncertainties associated with them. Hence, a new design approach is required, one that recognizes these inherent uncertainties and develops more adaptable and flexible systems capable of using their active capacity to act or respond to future alterations in a timely, performance-efficient, and cost-effective manner. This study develops a framework for the design of flexible WDS that are adaptable to new, different, or changing requirements. The framework consists of two main parts. The first part consists of several components that are important in the pre- and post-processing of the least-cost design methodology of a flexible WDS. These components include: the description of uncertainties affecting WDS design, identification of potential flexibility options for WDS, generation of flexibility through optimization, and a method for assessing flexibility. For assessment, a suite of performance metrics is developed that reflects the degree of flexibility of a distribution system. These metrics focus on the capability of the WDS to respond and react to future changes. The description of uncertainties focuses on the spatial and temporal variation of future demand.
The second part consists of two optimization models for the design of centralized and decentralized WDS respectively. The first model generates flexible, staged development plans for the incremental growth of a centralized WDS. The second model supports the development of clustered/decentralized WDS. It is argued that these clustered systems promote flexibility as they provide internal degrees of freedom, allowing many different combinations of distribution systems to be considered. For both models, a unique genetic algorithm based flexibility optimization (GAFO) model was developed that maximizes the flexibility of a WDS at the least cost. The efficacy of the developed framework and tools is demonstrated through two case study applications on real networks in Uganda. The first application looks at the design of a centralized WDS in Mbale, a small town in Eastern Uganda. Results from this application indicate that the flexibility framework is able to generate a more flexible design of the centralized system that is 4% - 50% less expensive than a conventionally designed system when compared against several future scenarios. In addition, this application highlights that the flexible design has a lower regret under different scenarios when compared to the conventionally designed system (a difference of 11.2m3/US$). The second application analyzes the design of a decentralized network in the town of Arua, a small town in Northern Uganda. A comparison of a decentralized system to a centralized system is performed, and the results indicate that the decentralized system is 24% - 34% less expensive and that these cost savings are associated with the ability of the decentralized system to be staged in a way that traces the urban growth trajectory more closely. The decentralized clustered WDS also has a lower regret (a difference of 17.7m3/US$) associated with the potential future conditions in comparison with the conventionally centralized system and hence is more flexible.
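The regret comparison quoted in this abstract (differences of 11.2 and 17.7 m3/US$ between designs across future conditions) can be made concrete with a minimax-regret sketch; the scenario names and cost figures below are invented for illustration.

```python
def regret_table(costs):
    """Regret of each design under each scenario: its gap to the best design
    for that scenario. `costs` maps design -> {scenario: cost}."""
    scenarios = next(iter(costs.values())).keys()
    best = {s: min(c[s] for c in costs.values()) for s in scenarios}
    return {d: {s: c[s] - best[s] for s in scenarios} for d, c in costs.items()}

def max_regret(costs):
    """Worst-case regret per design, used to compare designs across futures."""
    return {d: max(r.values()) for d, r in regret_table(costs).items()}
```

A flexible design typically loses a little under the scenario it was not optimised for but avoids the large regret a conventional design incurs when the future deviates from its deterministic assumptions.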
APA, Harvard, Vancouver, ISO, and other styles
45

Hsiao, Shih-Hui. "SOCIAL MEDIA ANALYTICS − A UNIFYING DEFINITION, COMPREHENSIVE FRAMEWORK, AND ASSESSMENT OF ALGORITHMS FOR IDENTIFYING INFLUENCERS IN SOCIAL MEDIA." UKnowledge, 2016. http://uknowledge.uky.edu/busadmin_etds/8.

Full text
Abstract:
Given its relative infancy, there is a dearth of research on a comprehensive view of business social media analytics (SMA). This dissertation first examines current literature related to SMA and develops an integrated, unifying definition of business SMA, providing a nuanced starting point for future business SMA research. This dissertation identifies several benefits of business SMA, and elaborates on some of them, while presenting recent empirical evidence in support of foregoing observations. The dissertation also describes several challenges facing business SMA today, along with supporting evidence from the literature, some of which also offer mitigating solutions in particular contexts. The second part of this dissertation studies one SMA application, focusing on identifying social influencers. Growing social media usage, accompanied by explosive growth in SMA, has resulted in increasing interest in finding automated ways of discovering influencers in online social interactions. Beginning in 2008, many variants of multiple basic approaches have been proposed. Yet, there is no comprehensive study investigating the relative efficacy of these methods in specific settings. This dissertation investigates and reports on the relative performance of multiple methods on Twitter datasets containing, between them, tens of thousands to hundreds of thousands of tweets. Accordingly, the second part of the dissertation helps further an understanding of business SMA and its many aspects, grounded in recent empirical work, and is a basis for further research and development. This dissertation provides a relatively comprehensive understanding of SMA and the implementation of SMA in influencer identification.
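One of the basic influencer-scoring families such comparisons typically cover is link-analysis ranking. A minimal power-iteration PageRank over a toy follower graph is sketched below; it illustrates the family of methods, not the specific variants benchmarked in the dissertation, and the graph is hypothetical.

```python
def pagerank(graph, damping=0.85, iters=50):
    """Power-iteration PageRank over a follower graph {node: [nodes it points to]}.
    Link-analysis scores like this are one baseline for influencer discovery,
    alongside simpler signals such as raw follower or retweet counts."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outs in graph.items():
            if outs:
                share = damping * rank[n] / len(outs)
                for m in outs:
                    new[m] += share
            else:  # dangling node: spread its rank evenly over all nodes
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
        rank = new
    return rank
```

On a graph where two accounts both point at a third, the third account accumulates the highest rank, matching the intuition that attention flows toward influencers.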
APA, Harvard, Vancouver, ISO, and other styles
46

Natario, Romalho Maria Fernanda. "Application of an automatically designed fuzzy logic decision support system to connection admission control in ATM networks." Thesis, Queen Mary, University of London, 1996. http://qmro.qmul.ac.uk/xmlui/handle/123456789/3817.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Abdo, Walid A. A. "Enhancing association rules algorithms for mining distributed databases. Integration of fast BitTable and multi-agent association rules mining in distributed medical databases for decision support." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5661.

Full text
Abstract:
Over the past few years, mining data located in heterogeneous and geographically distributed sites has been designated as one of the key issues. Loading distributed data into a centralized location for mining interesting rules is not a good approach. This is because it raises concerns such as data privacy and it imposes network overheads. The situation becomes worse when the network has limited bandwidth, which is the case in most real-time systems. This has prompted the need for intelligent data analysis to discover the hidden information in these huge amounts of distributed databases. In this research, we present an incremental approach for building an efficient Multi-Agent based algorithm for mining real world databases in geographically distributed sites. First, we propose the Distributed Multi-Agent Association Rules algorithm (DMAAR) to minimize the all-to-all broadcasting between distributed sites. Analytical calculations show that DMAAR reduces the algorithm complexity and minimizes the message communication cost. The proposed Multi-Agent based algorithm complies with the Foundation for Intelligent Physical Agents (FIPA) standards, which are considered the global standards for communication between agents, thus enabling the proposed algorithm's agents to cooperate with other standard agents. Second, the BitTable Multi-Agent Association Rules algorithm (BMAAR) is proposed. BMAAR includes an efficient BitTable data structure which helps in compressing the database so that it can easily fit into the memory of the local sites. It also includes two BitWise AND/OR operations for quick candidate itemset generation and support counting. Moreover, the algorithm includes three transaction trimming techniques to reduce the size of the mined data.
Third, we propose the Pruning Multi-Agent Association Rules algorithm (PMAAR), which includes three candidate itemset pruning techniques for reducing the large number of generated candidate itemsets, consequently reducing the total time of the mining process. The proposed PMAAR algorithm has been compared with existing Association Rules algorithms against different benchmark datasets and has proved to have better performance and execution time. Moreover, PMAAR has been implemented on real world distributed medical databases, obtained from more than one hospital in Egypt, to discover the hidden Association Rules in patients' records and to demonstrate the merits and capabilities of the proposed model further. Medical data was anonymously obtained without the patients' personal details. The analysis helped to identify the existence or the absence of the disease based on a minimum number of effective examinations and tests. Thus, the proposed algorithm can help in providing accurate medical decisions based on cost-effective treatments, improving the medical service for the patients, reducing real-time response times for the health system and improving the quality of clinical decision making.
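The BitTable idea behind BMAAR (compress the transaction database into per-item bitmaps so that candidate support counting reduces to bitwise ANDs) can be sketched with Python integers serving as arbitrary-length bitsets. The transactions below are hypothetical, and the real algorithm adds candidate generation, transaction trimming and multi-agent coordination on top of this core.

```python
def build_bittable(transactions):
    """Map each item to a transaction bitmap: bit i is set iff transaction i
    contains the item (Python ints act as arbitrary-length bitsets)."""
    table = {}
    for i, t in enumerate(transactions):
        for item in t:
            table[item] = table.get(item, 0) | (1 << i)
    return table

def support(table, itemset):
    """Support of an itemset = population count of the AND of its bitmaps,
    the BitWise AND support counting the abstract describes."""
    if not itemset:
        return 0
    bits = -1  # all-ones bitmap
    for item in itemset:
        bits &= table.get(item, 0)
    return bin(bits).count("1")
```

Counting support this way touches one machine word per 64 transactions instead of rescanning the database, which is why the BitTable representation speeds up candidate evaluation.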
APA, Harvard, Vancouver, ISO, and other styles
48

Dukyil, Abdulsalam Saleh. "Artificial intelligence and multiple criteria decision making approach for a cost-effective RFID-enabled tracking management system." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/17128.

Full text
Abstract:
The implementation of RFID technology has been subject to ever-increasing popularity in relation to the traceability of items, as one of the most advanced technologies in this area. Implementing such a technology increases the visibility and manageability of products. Notwithstanding this, RFID communication performance is potentially greatly affected by interference between the RFID devices, and the technology carries auxiliary investment costs that should be considered. Hence, seeking a cost-effective design with the desired communication performance for RFID-enabled systems has become a key factor in remaining competitive in today's markets. This study introduces a cost- and performance-effective design for a proposed RFID-enabled passport tracking system through the development of a multi-objective model that takes into account economic, operational and social criteria. The developed model is aimed at solving the design problem by (i) allocating the optimal numbers of related facilities that should be established and (ii) obtaining trade-offs among three objectives: minimising implementation and operational costs; minimising RFID reader interference; and maximising the social impact, measured by the number of created jobs. To bring the model closer to the actual design by accounting for uncertain parameters, a fuzzy multi-objective model was developed. To solve the multi-objective optimization model, two solution methods (the epsilon-constraint method and linear programming) were applied to generate the Pareto solutions, and a decision-making method was developed to select the final trade-off solution. Moreover, this research aims to provide a user-friendly decision-making tool for selecting the best vendor from a group of vendors who submitted tenders for implementing the proposed RFID-based passport tracking system. In addition, a real case study was used to examine the applicability of the developed model and the proposed solution methods.
The research findings indicate that the developed model is capable of producing a design for an RFID-enabled passport tracking system. The developed decision-making tool can also easily be used to solve similar vendor selection problems. The findings demonstrate that the proposed RFID-enabled monitoring system for passport tracking is economically feasible. The study concludes that the developed mathematical models and optimization approaches can serve as useful decision-support aids for tackling a number of design and optimization problems for RFID systems using artificial-intelligence-based mathematical algorithms.
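The epsilon-constraint method mentioned in the abstract can be sketched as follows: one objective (here, cost) is minimised while another (here, reader interference) is capped at a sweep of epsilon values, tracing out Pareto trade-off candidates. The design space and the cost/interference formulas below are invented for illustration; they are not the thesis's actual model.

```python
# Hedged sketch of the epsilon-constraint method for a bi-objective
# RFID design problem. All numbers are hypothetical.

def epsilon_constraint(designs, cost, interference, epsilons):
    """For each epsilon, return the cheapest design whose interference
    does not exceed epsilon (one Pareto candidate per epsilon)."""
    front = []
    for eps in epsilons:
        feasible = [d for d in designs if interference(d) <= eps]
        if feasible:
            best = min(feasible, key=cost)
            front.append((eps, best, cost(best), interference(best)))
    return front

# Hypothetical design space: (readers, power level), with a coverage
# requirement readers * power >= 5.
designs = [(r, p) for r in range(1, 7) for p in range(1, 5) if r * p >= 5]
cost = lambda d: 100 * d[0] + 20 * d[1]      # readers dominate the cost
interference = lambda d: d[0] * d[1] ** 2    # high power interferes most

for eps, d, c, i in epsilon_constraint(designs, cost, interference, [6, 15, 40]):
    print(f"eps={eps}: design={d}, cost={c}, interference={i}")
```

With these toy numbers, tightening epsilon from 40 down to 6 raises the minimum achievable cost from 260 to 520, which is exactly the cost/interference trade-off the method is meant to expose.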
APA, Harvard, Vancouver, ISO, and other styles
49

Abdo, Walid Adly Atteya. "Enhancing association rules algorithms for mining distributed databases : integration of fast BitTable and multi-agent association rules mining in distributed medical databases for decision support." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5661.

Full text
Abstract:
Over the past few years, mining data located in heterogeneous and geographically distributed sites has been identified as one of the key research issues. Loading distributed data into a centralized location for mining interesting rules is not a good approach: it violates data privacy and imposes network overheads. The situation becomes worse when the network has limited bandwidth, which is the case in most real-time systems. This has prompted the need for intelligent data analysis to discover the hidden information in these huge amounts of distributed databases. In this research, we present an incremental approach to building an efficient Multi-Agent based algorithm for mining real-world databases in geographically distributed sites. First, we propose the Distributed Multi-Agent Association Rules algorithm (DMAAR) to minimize the all-to-all broadcasting between distributed sites. Analytical calculations show that DMAAR reduces the algorithm complexity and minimizes the message communication cost. The proposed Multi-Agent based algorithm complies with the Foundation for Intelligent Physical Agents (FIPA), the global standard for communication between agents, enabling the proposed algorithm's agents to cooperate with other standard agents. Second, the BitTable Multi-Agent Association Rules algorithm (BMAAR) is proposed. BMAAR includes an efficient BitTable data structure that compresses the database so that it can easily fit into the memory of the local sites. It also includes two bitwise AND/OR operations for quick candidate itemset generation and support counting. Moreover, the algorithm includes three transaction trimming techniques to reduce the size of the mined data.
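The BitTable idea described above can be sketched minimally: each item is stored as an integer bitmap with one bit per transaction, so the support of an itemset is the population count of the bitwise AND of its items' bitmaps. The toy transactions below are illustrative only and are not from the thesis.

```python
# Hedged sketch of BitTable-style support counting with bitwise AND.

transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "diapers", "beer"},
    {"bread", "milk", "diapers"},
]

# Build the BitTable: item -> bitmap, bit t set iff item is in transaction t.
bitmap = {}
for t, basket in enumerate(transactions):
    for item in basket:
        bitmap[item] = bitmap.get(item, 0) | (1 << t)

def support(itemset):
    """Support count via a bitwise AND over the items' bitmaps."""
    acc = (1 << len(transactions)) - 1  # start with every transaction set
    for item in itemset:
        acc &= bitmap.get(item, 0)
    return bin(acc).count("1")

print(support({"bread", "milk"}))    # bread and milk co-occur in transactions 0 and 3
print(support({"diapers", "beer"}))  # diapers and beer co-occur in transactions 1 and 2
```

The compression benefit comes from the fact that one machine word covers up to 64 transactions, so a whole partition of the database can often be held and AND-ed in memory at the local site.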
Third, we propose the Pruning Multi-Agent Association Rules algorithm (PMAAR), which includes three candidate itemset pruning techniques for reducing the large number of generated candidate itemsets and, consequently, the total time of the mining process. The proposed PMAAR algorithm has been compared with existing Association Rules algorithms on different benchmark datasets and has proved to have better performance and execution time. Moreover, PMAAR has been implemented on real-world distributed medical databases obtained from more than one hospital in Egypt to discover the hidden Association Rules in patients' records, further demonstrating the merits and capabilities of the proposed model. Medical data was obtained anonymously, without the patients' personal details. The analysis helped to identify the existence or absence of the disease based on a minimum number of effective examinations and tests. Thus, the proposed algorithm can help in providing accurate medical decisions based on cost-effective treatments, improving the medical service for patients, reducing the real-time response of the health system and improving the quality of clinical decision making.
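The abstract names three pruning techniques without detailing them; as a hedged illustration, the sketch below shows only the classic downward-closure (Apriori-style) prune, under which a candidate k-itemset is discarded if any of its (k-1)-subsets is infrequent, so its support never has to be counted.

```python
# Hedged sketch of downward-closure candidate pruning. The item names
# and frequent sets are invented for illustration.
from itertools import combinations

def prune_candidates(candidates, frequent_prev):
    """Drop any candidate k-itemset that has an infrequent (k-1)-subset."""
    frequent_prev = {frozenset(s) for s in frequent_prev}
    return [c for c in candidates
            if all(frozenset(sub) in frequent_prev
                   for sub in combinations(c, len(c) - 1))]

frequent_2 = [{"bread", "milk"}, {"bread", "diapers"}, {"milk", "diapers"}]
candidates_3 = [{"bread", "milk", "diapers"}, {"bread", "milk", "beer"}]
print(prune_candidates(candidates_3, frequent_2))
```

Here {"bread", "milk", "beer"} is pruned without any database scan, because its subset {"bread", "beer"} was not among the frequent 2-itemsets.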
APA, Harvard, Vancouver, ISO, and other styles
50

Gohil, Bhupendra. "Diagnostic alarms in anaesthesia." AUT University, 2007. http://hdl.handle.net/10292/956.

Full text
Abstract:
Smart computer algorithms and signal processing techniques have led to rapid development in the field of patient monitoring. Accelerated growth in medical science has made data analysis more demanding and has thus increased the complexity of decision-making procedures. Anaesthetists working in the operating theatre are responsible for carrying out a multitude of tasks requiring constant vigilance, and thus a need for a smart decision support system has arisen. It is anticipated that such an automated decision support tool, capable of detecting pathological events, can enhance the anaesthetist's performance by providing diagnostic information in an interactive and ergonomic display format. The main goal of this research was to develop a clinically useful diagnostic alarm system prototype for monitoring pathological events during anaesthesia. Several intelligent techniques (fuzzy logic, artificial neural networks, probabilistic alarms and logistic regression) were explored for developing the optimum diagnostic modules for detecting these events. New real-time diagnostic algorithms were developed and implemented in the form of a prototype system called real-time smart alarms for anaesthesia monitoring (RT-SAAM). Three diagnostic modules, based respectively on fuzzy logic (Fuzzy Module), probabilistic alarms (Probabilistic Module) and respiration-induced systolic pressure variations (SPV Module), were developed using MATLAB and LabVIEW. In addition, a new data collection protocol was developed for acquiring data from the existing S/5 Datex-Ohmeda anaesthesia monitor in the operating theatre without disturbing the original setup. The raw physiological patient data acquired from the S/5 monitor were filtered, pre-processed and analysed for detecting anaesthesia-related events such as absolute hypovolemia (AHV) and fall in cardiac output (FCO) using RT-SAAM.
The accuracy of diagnoses generated by RT-SAAM was validated by comparing its diagnostic information with that provided by the anaesthetist for each patient. Kappa analysis was used to measure the level of agreement between the anaesthetist's and RT-SAAM's diagnoses. In retrospective (offline) analysis, RT-SAAM, tested with data from 18 patients, gave an overall agreement level of 81% (which implies substantial agreement between RT-SAAM and the anaesthetist). RT-SAAM was further tested in real time with 6 patients, giving an agreement level of 71% (which implies a fair level of agreement). More real-time tests are required to complete the real-time validation and development of RT-SAAM. This diagnostic alarm system prototype has shown that evidence-based expert diagnostic systems can accurately diagnose AHV and FCO events in anaesthetized patients and can be useful in providing decision support to anaesthetists.
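The kappa analysis used for validation can be sketched as follows: Cohen's kappa measures observed agreement between two raters corrected for the agreement expected by chance. The per-epoch diagnosis sequences below are invented for illustration and are not taken from the study's data.

```python
# Hedged sketch of Cohen's kappa for two diagnosis sequences.

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_chance = sum((rater_a.count(lab) / n) * (rater_b.count(lab) / n)
                   for lab in labels)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical per-epoch diagnoses: anaesthetist vs. the alarm system.
anaesthetist = ["AHV", "FCO", "normal", "AHV", "normal", "normal"]
system       = ["AHV", "FCO", "normal", "normal", "normal", "AHV"]
print(round(cohen_kappa(anaesthetist, system), 3))
```

By the usual Landis-and-Koch convention, kappa between 0.61 and 0.80 is read as substantial agreement and 0.21 to 0.40 as fair, which matches the qualitative labels attached to the study's agreement figures.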
APA, Harvard, Vancouver, ISO, and other styles
