
Dissertations / Theses on the topic 'Soft computing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Soft computing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Keukelaar, J. H. D. "Topics in Soft Computing." Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3294.

2

Medaglia, Andres L. "Simulation Optimization Using Soft Computing." NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20010124-233615.

Abstract:

To date, most of the research in simulation optimization has been focused on single response optimization on the continuous space of input parameters. However, the optimization of more complex systems does not fit this framework. Decision makers often face the problem of optimizing multiple performance measures of systems with both continuous and discrete input parameters. Previously acquired knowledge of the system by experts is seldom incorporated into the simulation optimization engine. Furthermore, when the goals of the system design are stated in natural language or vague terms, current techniques are unable to deal with this situation. For these reasons, we define and study the fuzzy single response simulation optimization (FSO) and fuzzy multiple response simulation optimization (FMSO) problems.

The primary objective of this research is to develop an efficient and robust method for simulation optimization of complex systems with multiple vague goals. This method uses a fuzzy controller to incorporate existing knowledge to generate high quality approximate Pareto optimal solutions in a minimum number of simulation runs.

For comparison purposes, we also propose an evolutionary method for solving the FMSO problem. Extensive computational experiments on the design of a flow line manufacturing system (in terms of tandem queues with blocking) have been conducted. Both methods are able to generate high quality solutions in terms of Zitzler and Thiele's "dominated space" metric. Both methods are also able to generate an even sample of the Pareto front. However, the fuzzy controlled method is more efficient, requiring fewer simulation runs than the evolutionary method to achieve the same solution quality.

To accommodate the complexity of natural language, this research also provides a new Bezier curve-based mechanism to elicit knowledge and express complex vague concepts. To date, this is perhaps the most flexible and efficient mechanism for both automatic and interactive generation of membership functions for convex fuzzy sets.
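As an illustration of the Bezier-curve idea above, the sketch below builds a convex membership function by sampling a Bezier curve with De Casteljau's algorithm. This is a minimal reading of the mechanism, not Medaglia's actual formulation; the control points and the clipping of membership grades to [0, 1] are assumptions.

```python
import numpy as np

def de_casteljau(control_pts, t):
    """Evaluate a Bezier curve at parameter t via De Casteljau's algorithm."""
    pts = np.asarray(control_pts, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def bezier_membership(control_pts, n=201):
    """Sample a Bezier curve and return (x, mu) pairs usable as a membership
    function; mu is clipped to [0, 1] so it stays a valid membership grade."""
    ts = np.linspace(0.0, 1.0, n)
    curve = np.array([de_casteljau(control_pts, t) for t in ts])
    return curve[:, 0], np.clip(curve[:, 1], 0.0, 1.0)

# Hypothetical control points for a left-shouldered convex fuzzy set.
pts = [(0.0, 0.0), (2.0, 0.1), (3.0, 0.9), (5.0, 1.0)]
x, mu = bezier_membership(pts)
print(x[:3], mu[:3])
```

Moving a single control point reshapes the whole membership function smoothly, which is what makes this kind of construction attractive for interactive elicitation.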

3

Di Tomaso, Enza. "Soft computing for Bayesian networks." Thesis, University of Bristol, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.409531.

4

Thomas, Anna. "Error detection for soft computing applications." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44811.

Abstract:
Hardware errors are on the rise as chip sizes shrink, and power constraints have necessitated the involvement of software in hardware error detection. At the same time, emerging workloads in the form of soft computing applications (e.g., multimedia applications) can tolerate most hardware errors as long as the erroneous outputs do not deviate significantly from error-free outcomes. We term outcomes that deviate significantly from the error-free outcomes Egregious Data Corruptions (EDCs). In this thesis, we propose a technique to place detectors for selectively detecting EDC-causing errors in an application. Our technique identifies program locations for placing high-coverage detectors for EDCs using static analysis and runtime profiling. We evaluate our technique on six benchmarks to measure the EDC coverage under given performance overhead bounds. Our technique achieves an average EDC coverage of 82% under a performance overhead of 10%, while detecting only 10% of the non-EDC and benign faults. We also explore the performance-resilience tradeoff space by studying the effect of compiler optimizations on the error resilience of soft computing applications, both with and without our technique.
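The EDC notion above lends itself to a toy fidelity check: an error is egregious only if the output strays beyond a tolerance from the error-free ("golden") result. The sketch below is a hedged illustration of that distinction, not the thesis's detector-placement technique; the relative-deviation metric and the 10% threshold are assumptions.

```python
import numpy as np

def classify_outcome(golden, observed, edc_threshold=0.1):
    """Toy EDC check: a corrupted output counts as an Egregious Data
    Corruption (EDC) only if its relative deviation from the error-free
    ('golden') output exceeds a tolerance; smaller deviations are treated
    as benign for soft computing workloads. The threshold is an assumption."""
    deviation = np.linalg.norm(observed - golden) / (np.linalg.norm(golden) + 1e-12)
    return "EDC" if deviation > edc_threshold else "benign/non-EDC"

golden = np.array([1.0, 2.0, 3.0])
print(classify_outcome(golden, np.array([1.0, 2.01, 3.0])))  # small deviation: benign
print(classify_outcome(golden, np.array([10.0, 2.0, 3.0])))  # large deviation: EDC
```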
5

Machaka, Pheeha. "Situation recognition using soft computing techniques." Master's thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/11225.

Abstract:
Includes bibliographical references.
The last decades have witnessed the emergence of a large number of devices pervasively introduced into our daily lives, producing and collecting data from a variety of information sources to provide different services to different users via a variety of applications. These include infrastructure management, business process monitoring, crisis management and many other system-monitoring activities. Because this data is processed in real time, these information production/collection activities raise an interest in live performance monitoring, analysis and reporting, and call for data-mining methods for recognising, predicting, reasoning about and controlling the performance of these systems by managing changes in the system and/or deviations from normal operation. In recent years, soft computing methods and algorithms have been applied to data mining to identify patterns and provide new insight into data. This thesis revisits the issue of situation recognition for systems producing massive datasets by assessing the relevance of soft computing techniques for finding hidden patterns in these systems.
6

Mitra, Malay. "Medical diagnosis using soft computing technology." Thesis, University of North Bengal, 2018. http://hdl.handle.net/123456789/2722.

7

Mitra, Malay. "Medical diagnosis using soft computing technologies." Thesis, University of North Bengal, 2018. http://ir.nbu.ac.in/handle/123456789/3659.

8

Rajkhowa, Priyanka. "Exploiting soft computing for real time performance." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/3928.

Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Dept. of Electrical and Computer Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
9

Hirschen, Kai. "Soft computing methods for applied shape optimization." PhD thesis, [S.l. : s.n.], 2004. http://elib.tu-darmstadt.de/diss/000499.

10

Abraham, Ajith 1968. "Hybrid soft computing : architecture optimization and applications." Monash University, Gippsland School of Computing and Information Technology, 2002. http://arrow.monash.edu.au/hdl/1959.1/8676.

11

Shrestha, Pranav Nath. "Applying soft computing to early obesity prediction /." Available to subscribers only, 2006. http://proquest.umi.com/pqdweb?did=1240705441&sid=9&Fmt=2&clientId=1509&RQT=309&VName=PQD.

12

Hiziroglu, Abdulkadir. "A soft computing approach to customer segmentation." Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.503072.

Abstract:
Improper selection of segmentation variables and tools may have an effect on segmentation results and can cause a negative financial impact (Tsai & Chiu, 2004). With regard to the selection of segmentation variables, although general segmentation variables such as demographics are frequently utilised based on the assumption that customers with similar demographics and lifestyles tend to exhibit similar purchasing behaviours (Tsai & Chiu, 2004), it is believed that the behavioural variables of customers are more suitable to use as segmentation bases (Hsieh, 2004). As far as segmentation techniques are concerned, two conclusions can be made. First, cluster-based segmentation methods, particularly hierarchical and non-hierarchical methods, have been widely used in the related literature. However, the hierarchical methods are criticised for non-recovery, while the non-hierarchical ones are not able to determine the initial number of clusters (Lien, 2005). Hence, the integration of hierarchical and partitional methods (as a two-stage approach) is suggested to make the clustering results powerful in large databases (Kuo, Ho & Hu, 2002b). Second, none of those traditional approaches has the ability to establish non-strict customer segments, which are significantly crucial for today's competitive consumer markets. One crucial area that can meet this requirement is known as soft computing. Although there have been studies related to the usage of soft computing techniques for segmentation problems, they are not based on the effective two-stage methodology. The aim of this study is to propose a soft computing model for customer segmentation using purchasing behaviours of customers in a data mining framework. The segmentation process in this study includes segmentation (clustering and profiling) of existing consumers and classification-prediction of segments for existing and new customers. Both a combination and an integration of soft computing techniques were used in the proposed model. Clustering was performed via a proposed neuro-fuzzy two-stage clustering approach, and classification-prediction was carried out using a supervised artificial neural network method. Customers were segmented according to their purchasing behaviours based on RFM (Recency, Frequency, Monetary) values, which can be considered an important variable set in identifying customer value. The model was also compared with other two-stage methods (i.e., Ward's method followed by k-means, and self-organising maps followed by k-means) based on selected segmentability criteria. The proposed model was applied to a secondary data set from a UK retail company. The data set included more than 300,000 unique customer records, and a random sample of approximately 1% of it was used for conducting the analyses. The findings indicated that the proposed model provided better insights and managerial implications in comparison with the traditional two-stage methods with respect to the selected segmentability criteria. The main contribution of this study is threefold. Firstly, it offers the potential benefits and implications of having fuzzy segments, which enable flexible segments through the availability of membership degrees of each customer to the corresponding customer segments. Secondly, the development of a new two-stage clustering model could be considered superior to its peers in terms of computational ability.
And finally, through the classification phase of the model it was possible to extract knowledge regarding segment stability, which was utilised to calculate customer retention or churn rate over time for corresponding segments.
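One way to picture the "non-strict" fuzzy segments described above is fuzzy c-means over standardized RFM rows, where every customer receives a membership degree in every segment. The sketch below is a generic illustration, not Hiziroglu's neuro-fuzzy two-stage model; the feature values and the fuzzifier m = 2 are assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: returns cluster centres and a membership matrix
    U (n x c) whose rows sum to 1, so each customer belongs to every segment
    with some degree (the 'non-strict' segments idea)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))
    for _ in range(iters):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U = (1.0 / d ** p) / (1.0 / d ** p).sum(axis=1, keepdims=True)
    return centres, U

# Hypothetical standardized RFM rows: (recency, frequency, monetary).
rfm = np.array([[0.9, 0.1, 0.2], [0.8, 0.2, 0.1], [0.1, 0.9, 0.8], [0.2, 0.8, 0.9]])
centres, U = fuzzy_c_means(rfm, c=2)
print(np.round(U, 2))  # membership degree of each customer in each segment
```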
13

Stolpmann, Alexander. "An intelligent soft-computing texture classification system." Thesis, University of South Wales, 2005. https://pure.southwales.ac.uk/en/studentthesis/an-intelligent-softcomputing-texture-classification-system(a43eb831-a799-438b-9112-3ce1df432fe9).html.

Abstract:
The aim of this research work was to obtain a system that classifies texture. This so-called Texture Classification System is not a system for one special task or group of tasks. It is a general approach that shows a way towards real artificial vision. Finding ways to enable computerised systems to visually recognise their surroundings is of increasing importance for industry and society at large. To reach this goal not only objects but also less well describable texture has to be identified within an image. To achieve this aim a number of objectives had to be met. First, a review of how natural vision works was carried out to better understand the complexity of visual systems. This is followed by a more detailed definition of what texture is. Next, a review of image processing techniques, of statistical methods and of soft-computing methods was made to identify those that can be used or improved for the Texture Classification System. A major objective was to create the structure of the Texture Classification System. The design presented in this work is the framework for a multitude of modules arranged in groups and layers with multiple feedback and optimisation possibilities. The main achievement is a system for texture classification for which natural vision was used as a "blue-print". A more detailed definition of what texture is was made and a new texture library was started. The close review of image processing techniques provided a variety of applicable methods, as did the review and enhancement of statistical methods. Some of those methods were improved or used in a new way. Neural networks and fuzzy clustering were applied for classification, while genetic algorithms provide a means for self-optimisation. The concepts and methods have been used for a number of projects besides texture classification itself. This work presents applications for fault detection in glass container manufacturing, quality control of veneer, positioning control of steel blocks in a rotation oven, and measurement of hair gloss. With the Texture Classification System a new, holistic approach for complex image processing and artificial vision tasks is contributed. It uses a modular combination of statistics, image processing and soft-computing methods, easily adaptable to new tasks, includes new ideas for high-order statistics, and incorporates self-optimisation to achieve lean sub-systems. The system allows multiple feedbacks and includes a border detection routine. The new texture library provides images for future work of researchers. Still, a lot of work has to be done in the future to achieve an artificial vision system that is comparable to human visual capabilities. This is mainly due to missing computational resources. At least another decade of hardware development is needed to reach this goal. During this time more, better or even novel methods will be added to the Texture Classification System to improve its universal capabilities.
14

Fernando, Kurukulasuriya Joseph Tilak Nihal. "Soft computing techniques in power system analysis." Thesis, full-text, 2008. https://vuir.vu.edu.au/2025/.

Abstract:
Soft computing is a concept that has come into prominence in recent times, and its application to power system analysis is more recent still. This thesis explores the application of soft computing techniques in the area of voltage stability of power systems. Soft computing, as opposed to conventional "hard" computing, is a technique that is tolerant of imprecision, uncertainty, partial truth and approximation. Its methods are based on the working of the human brain and it is commonly known as artificial intelligence. The human brain is capable of arriving at valid conclusions based on incomplete and partial data obtained from prior experience. It is an approximation of this process, on a very small scale, that is used in soft computing. Some of the important branches of soft computing (SC) are artificial neural networks (ANNs), fuzzy logic (FL), genetic computing (GC) and probabilistic reasoning (PR). Soft computing methods are robust and low cost, and are used in such diverse fields as missile guidance, robotics, industrial plants, pattern recognition, market prediction, patient diagnosis, logistics and, of course, power system analysis and prediction. However, in all these fields its application is comparatively new, and research is being carried out continuously in many universities and research institutions worldwide. The research presented in this thesis uses the soft computing method of Artificial Neural Networks (ANNs) for the prediction of voltage instability in power systems. The research is timely and constitutes a substantial contribution to the present body of knowledge in soft computing and voltage stability, which is itself a new field. The methods developed in this research are faster and more economical than presently available methods, enabling their use online.
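The prediction idea in the abstract can be caricatured as a regression from an operating point to a stability margin. The sketch below trains a small multilayer perceptron on synthetic load/voltage data; the features, the proxy target and the network size are all assumptions for illustration, not the thesis's actual model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for measured operating points: bus load and voltage.
# In a real study these would come from load-flow solutions or measurements.
rng = np.random.default_rng(1)
X = rng.uniform([0.5, 0.90], [1.5, 1.05], size=(500, 2))  # (load p.u., voltage p.u.)
# Hypothetical proxy target: margin shrinks as load grows and voltage sags.
y = (1.5 - X[:, 0]) * 0.6 + (X[:, 1] - 0.90) * 2.0

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=1)
model.fit(X, y)
print("predicted stability margin:", model.predict([[1.2, 0.97]])[0])
```

Once trained offline, such a network evaluates in microseconds, which is the property that makes online use plausible.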
15

Fernando, Kurukulasuriya Joseph Tilak Nihal. "Soft computing techniques in power system analysis." full-text, 2008. http://eprints.vu.edu.au/2025/1/thesis.pdf.

Abstract:
Soft computing is a concept that has come into prominence in recent times, and its application to power system analysis is more recent still. This thesis explores the application of soft computing techniques in the area of voltage stability of power systems. Soft computing, as opposed to conventional "hard" computing, is a technique that is tolerant of imprecision, uncertainty, partial truth and approximation. Its methods are based on the working of the human brain and it is commonly known as artificial intelligence. The human brain is capable of arriving at valid conclusions based on incomplete and partial data obtained from prior experience. It is an approximation of this process, on a very small scale, that is used in soft computing. Some of the important branches of soft computing (SC) are artificial neural networks (ANNs), fuzzy logic (FL), genetic computing (GC) and probabilistic reasoning (PR). Soft computing methods are robust and low cost, and are used in such diverse fields as missile guidance, robotics, industrial plants, pattern recognition, market prediction, patient diagnosis, logistics and, of course, power system analysis and prediction. However, in all these fields its application is comparatively new, and research is being carried out continuously in many universities and research institutions worldwide. The research presented in this thesis uses the soft computing method of Artificial Neural Networks (ANNs) for the prediction of voltage instability in power systems. The research is timely and constitutes a substantial contribution to the present body of knowledge in soft computing and voltage stability, which is itself a new field. The methods developed in this research are faster and more economical than presently available methods, enabling their use online.
16

Esteves, João Trevizoli. "Climate and agrometeorology forecasting using soft computing techniques." Jaboticabal, 2018. http://hdl.handle.net/11449/180833.

Abstract:
Advisor: Glauco de Souza Rolim
Precipitation, over short periods of time, is a phenomenon associated with high levels of uncertainty and variability. Given its nature, traditional forecasting techniques are expensive and computationally demanding. This work presents a model to forecast the occurrence of rainfall over short ranges of time by Artificial Neural Networks (ANNs), in accumulated periods from 3 to 7 days for each climatic season, mitigating the necessity of predicting its amount. The premise is to reduce the variance and raise the bias of the data, with the model acting as a filter for quantitative models by removing subsequent occurrences of zero (no-rainfall) values, which bias such models and reduce their performance. The model was developed with time series from 10 agriculturally relevant regions in Brazil; these places have the longest available weather time series and are the most deficient in accurate climate predictions. Sixty years of daily mean air temperature and accumulated precipitation were available, which were used to estimate the potential evapotranspiration and water balance; these were the variables used as inputs for the ANN models. The mean accuracy of the model for all the accumulated periods was 78% in summer, 71% in winter, 62% in spring and 56% in autumn. It was identified that the effect of continentality, the effect of altitude and the volume of normal precipitation have a direct impact on the accuracy of the ANNs. The models have ... (Complete abstract: click electronic access below)
Master's
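The abstract above describes a binary rain/no-rain classifier over accumulated periods. Below is a hedged sketch of that setup with a small neural network; the synthetic features standing in for temperature and water balance, and the label rule, are assumptions, not the thesis's data or model.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Hypothetical 5-day accumulated features: mean air temperature (deg C) and
# a simple water-balance term (mm); the thesis derives its inputs from
# estimated potential evapotranspiration and water balance.
X = np.column_stack([rng.uniform(10, 35, 1000), rng.uniform(-50, 80, 1000)])
# Synthetic label: rain occurrence more likely with a positive water balance.
y = (X[:, 1] + rng.normal(0, 20, 1000) > 0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("P(rain) for (25 C, +30 mm):", clf.predict_proba([[25.0, 30.0]])[0, 1])
```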
17

Struhar, Vaclav. "Improving Soft Real-time Performance of Fog Computing." Licentiate thesis, Mälardalens högskola, Inbyggda system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55679.

Abstract:
Fog computing is a distributed computing paradigm that brings data processing from remote cloud data centers into the vicinity of the edge of the network. The computation is performed closer to the source of the data, and thus it decreases the time unpredictability of cloud computing that stems from (i) the computation in shared multi-tenant remote data centers, and (ii) long-distance data transfers between the source of the data and the data centers. The computation in fog computing provides fast response times and enables latency-sensitive applications. However, industrial systems require time-bounded response times, i.e., real-time (RT) behaviour. The correctness of such systems depends not only on the logical results of the computations but also on the physical time instant at which these results are produced. Time-bounded responses in fog computing are attributed to two main aspects: computation and communication. In this thesis, we explore both aspects, targeting soft RT applications in fog computing in which the usefulness of the produced computational results degrades with real-time requirement violations. With regard to computation, we provide a systematic literature survey on novel lightweight RT container-based virtualization that ensures spatial and temporal isolation of co-located applications. Subsequently, we utilize a mechanism enabling RT container-based virtualization and propose a solution for orchestrating RT containers in a distributed environment. Concerning the communication aspect, we propose a solution for dynamic bandwidth distribution in virtualized networks.
18

Erman, Maria. "Applications of Soft Computing Techniques for Wireless Communications." Licentiate thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17314.

Abstract:
This thesis presents methods and applications of Fuzzy Logic and Rough Sets in the domain of telecommunications, at both the network and physical layers. Specifically, the use of a new class of functions, the truncated π functions, for classifying IP traffic by matching datagram size histograms is explored. Furthermore, work on adapting the payoff matrix in multiplayer games by using fuzzy entries, as opposed to crisp values that are hard to quantify, is presented. Additionally, applications of fuzzy logic in wireless communications are presented, comprising a comprehensive review of current trends and applications, followed by work directed towards using it in spectrum sensing and power control in cognitive radio networks. This licentiate thesis represents parts of my work in the fields of Fuzzy Systems and Wireless Communications. The work was done in collaboration between the Departments of Applied Signal Processing and Mathematics at Blekinge Institute of Technology.
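The truncated π functions are the thesis's own construction; the sketch below shows only the classical fuzzy π function (built from Zadeh's S-function) and a crude histogram-matching score of the kind such a traffic classifier might use. The matching rule and the template parameters are assumptions.

```python
import numpy as np

def s_curve(x, a, b):
    """Zadeh's S-function, rising smoothly from 0 at a to 1 at b."""
    m = (a + b) / 2.0
    y = np.zeros_like(x, dtype=float)
    y[x >= b] = 1.0
    up = (x > a) & (x <= m)
    y[up] = 2 * ((x[up] - a) / (b - a)) ** 2
    dn = (x > m) & (x < b)
    y[dn] = 1 - 2 * ((x[dn] - b) / (b - a)) ** 2
    return y

def pi_function(x, c, r):
    """Classical fuzzy pi-function centred at c with bandwidth r."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= c, s_curve(x, c - r, c), 1 - s_curve(x, c, c + r))

def match_score(histogram, centres, radii):
    """Crude class score: membership-weighted mass of the observed datagram
    size histogram under the class template (an assumption, not the
    thesis's exact matching rule)."""
    bins = np.arange(len(histogram), dtype=float)
    mu = np.mean([pi_function(bins, c, r) * histogram
                  for c, r in zip(centres, radii)], axis=0)
    return float(mu.sum() / (histogram.sum() + 1e-12))

hist = np.array([0, 2, 8, 15, 9, 3, 1, 0], dtype=float)  # toy size histogram
print(match_score(hist, centres=[3], radii=[2]))
```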
19

Tang, Yuchun. "Granular Support Vector Machines Based on Granular Computing, Soft Computing and Statistical Learning." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/cs_diss/5.

Abstract:
With the emergence of biomedical informatics, Web intelligence, and e-business, new challenges are arising for knowledge discovery and data mining modeling problems. In this dissertation, a framework named Granular Support Vector Machines (GSVM) is proposed to systematically and formally combine statistical learning theory, granular computing theory and soft computing theory to address challenging predictive data modeling problems effectively and/or efficiently, with a specific focus on binary classification problems. In general, GSVM works in three steps. Step 1 is granulation, to build a sequence of information granules from the original dataset or from the original feature space. Step 2 is modeling Support Vector Machines (SVM) in some of these information granules when necessary. Finally, step 3 is aggregation, to consolidate information in these granules at a suitable abstraction level. A good granulation method to find suitable granules is crucial for modeling a good GSVM. Under this framework, many different granulation algorithms, including the GSVM-CMW (cumulative margin width) algorithm, the GSVM-AR (association rule mining) algorithm, a family of GSVM-RFE (recursive feature elimination) algorithms, the GSVM-DC (data cleaning) algorithm and the GSVM-RU (repetitive undersampling) algorithm, are designed for binary classification problems with different characteristics. The empirical studies in the biomedical domain and many other application domains demonstrate that the framework is promising. As a preliminary step, this dissertation work will be extended in the future to build a Granular Computing based Predictive Data Modeling framework (GrC-PDM) with which we can create hybrid adaptive intelligent data mining systems for high quality prediction.
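The three GSVM steps read naturally as granulate, model, aggregate. The sketch below instantiates them with k-means granulation, one SVM per granule and nearest-granule routing; these concrete choices are assumptions made for illustration, not Tang's GSVM-CMW/AR/RFE/DC/RU algorithms.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

class ToyGSVM:
    """Minimal granulation -> per-granule SVM -> aggregation pipeline."""
    def __init__(self, n_granules=3):
        self.granulator = KMeans(n_clusters=n_granules, n_init=10, random_state=0)
        self.models = {}

    def fit(self, X, y):
        g = self.granulator.fit_predict(X)           # step 1: granulation
        for k in np.unique(g):                       # step 2: SVM per granule
            Xk, yk = X[g == k], y[g == k]
            # store the majority label instead of an SVM if the granule is pure
            self.models[k] = (SVC(kernel="rbf").fit(Xk, yk)
                              if len(np.unique(yk)) > 1 else int(yk[0]))
        return self

    def predict(self, X):
        g = self.granulator.predict(X)               # step 3: route and aggregate
        out = np.empty(len(X), dtype=int)
        for i, k in enumerate(g):
            m = self.models[k]
            out[i] = m if isinstance(m, int) else m.predict(X[i:i + 1])[0]
        return out

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)); y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(ToyGSVM().fit(X, y).predict(X[:5]))
```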
20

Castro, Espinoza Félix. "A soft computing decision support framework for e-learning." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/619802.

Abstract:
Supported by technological development and its impact on everyday activities, e-Learning and b-Learning (Blended Learning) have experienced rapid growth, mainly in higher education and training. Their inherent ability to break both physical and cultural distances, to disseminate knowledge and to decrease the costs of the teaching-learning process allows them to reach anywhere and anyone. The educational community is divided as to their role in the future. It is believed that by 2019 half of the world's higher education courses will be delivered through e-Learning. While supporters say that this will be the educational mode of the future, its detractors point out that it is a fashion, that there are huge rates of abandonment, and that its massification and potentially low quality will cause its fall, assigning it the more modest role of accompanying traditional education. There are, however, two interrelated features where there seems to be consensus. On the one hand, there is the enormous amount of information and evidence that Learning Management Systems (LMS) generate during the e-Learning process, which is the basis of the part of the process that can be automated. In contrast, there is the fundamental role of e-tutors and e-trainers, who are the guarantors of educational quality. These are continually overwhelmed by the need to provide timely and effective feedback to students, to manage endless particular situations and cases that require decision making, and to process the stored information. In this sense, the tools that e-Learning platforms currently provide for obtaining reports and a certain level of follow-up are neither sufficient nor adequate. It is at this point of convergence between information and trainer that current LMS developments are centered, and it is here that the proposed thesis aims to innovate. This research proposes and develops a platform focused on decision support in e-Learning environments. Using soft computing and data mining techniques, it extracts knowledge from the data produced and stored by e-Learning systems, allowing the classification, analysis and generalization of the extracted knowledge. It includes tools to identify models of students' learning behavior and, from them, predict their future performance and enable trainers to provide adequate feedback. Likewise, students can self-assess, avoid ineffective behavior patterns, and obtain real clues about how to improve their performance in the course, through appropriate routes and strategies based on the behavioral model of successful students. The methodological basis of the mentioned functionalities is Fuzzy Inductive Reasoning (FIR), which is particularly useful in the modeling of dynamic systems. During the development of the research, the FIR methodology has been improved and empowered by the inclusion of several algorithms. First, an algorithm called CR-FIR was added, which determines the Causal Relevance of the variables involved in the modeling of learning and assessment of students. In the present thesis, CR-FIR has been tested on a comprehensive set of classical test data, as well as real data sets belonging to different areas of knowledge. Secondly, the detection of atypical behaviors in virtual campuses was approached using the Generative Topographic Mapping (GTM) methodology, which is a probabilistic alternative to the well-known Self-Organizing Maps. GTM was used simultaneously for clustering, visualization and detection of atypical data.
The core of the platform has been the development of an algorithm for extracting linguistic rules in a language understandable to educational experts, which helps them to obtain patterns of student learning behavior. In order to achieve this functionality, the LR-FIR algorithm (extraction of Linguistic Rules in FIR) was designed and developed as an extension of FIR that allows both characterizing general behavior and identifying interesting patterns. When the platform was applied to several real e-Learning courses, the results obtained demonstrated its feasibility and originality. The teachers' perception of the tool's usability is very good, and they consider that it could be a valuable resource for mitigating the time demands that e-Learning courses place on trainers. The identification of student behavior models and the prediction processes have been validated as to their usefulness by expert trainers. LR-FIR has been applied and evaluated on a wide set of real problems, not all of them in the educational field, obtaining good results. The structure of the platform makes it possible to assume that its use is potentially valuable in those domains where knowledge management plays a preponderant role, or where decision-making processes are a key element, e.g. e-business, e-marketing and customer management, to mention just a few. The soft computing tools used and developed in this research (FIR, CR-FIR, LR-FIR and GTM) have been applied successfully in other real domains, such as music, medicine and weather behavior.
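Fuzzy Inductive Reasoning proper involves fuzzification, qualitative mask search and fuzzy forecasting; the sketch below captures only its pattern-matching flavour: discretize a series into qualitative classes and forecast the next class from the closest historical windows. The class count, window depth and neighbour count are assumptions, not the FIR methodology itself.

```python
import numpy as np

def fuzzify(series, n_classes=3):
    """Equal-frequency discretization into qualitative classes 0..n-1."""
    edges = np.quantile(series, np.linspace(0, 1, n_classes + 1)[1:-1])
    return np.digitize(series, edges)

def fir_like_forecast(series, depth=3, k=5):
    """Forecast the next qualitative class by majority vote over the k
    historical windows ('patterns') closest to the most recent one."""
    q = fuzzify(series)
    windows = np.lib.stride_tricks.sliding_window_view(q[:-1], depth)
    targets = q[depth:]
    last = q[-depth:]
    dist = np.abs(windows - last).sum(axis=1)
    nearest = targets[np.argsort(dist)[:k]]
    return np.bincount(nearest).argmax()

t = np.linspace(0, 20, 400)
x = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print("next qualitative class:", fir_like_forecast(x))
```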
21

Ahmad, Alzghoul. "Screening Web Breaks in a Pressroom by Soft Computing." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-1144.

Abstract:

Web breaks are considered as one of the most significant runnability problems in a pressroom. This work concerns the analysis of the relation between various parameters (variables) characterizing the paper, the printing press, the printing process and the web break occurrence. A large number of variables, 61 in total, obtained off-line as well as measured online during the printing process are used in the investigation. Each paper reel is characterized by a vector x of 61 components.

Two main approaches are explored. The first one treats the problem as a data classification task into "break" and "non-break" classes. The procedures of classifier training, the selection of relevant input variables and the selection of hyper-parameters of the classifier are aggregated into one process based on genetic search. The second approach combines procedures of genetic search based variable selection and data mapping into a low-dimensional space. The genetic search process results in a variable set providing the best mapping according to some quality function.

The empirical study was performed using data collected at a pressroom in Sweden. The total number of data points available for the experiments was equal to 309. Amongst those, only 37 data points represent the web break cases. The results of the investigations have shown that the linear relations between the independent variables and the web break frequency are not strong.

Three important groups of variables were identified, namely Lab data (variables characterizing paper properties and measured off-line in a paper mill lab), Ink registry (variables characterizing operator actions aimed to adjust ink registry) and Web tension. We found that the most important variables are: Ink registry Y LS MD (adjustments of yellow ink registry in machine direction on the lower paper side), Air permeability (characterizes paper porosity), Paper grammage, Elongation MD, and four variables characterizing web tension: Moment mean, Min sliding Mean, Web tension variance, and Web tension mean.

The proposed methods were helpful in finding the variables influencing the occurrence of web breaks and can also be used for solving other industrial problems.
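The genetic-search wrapper described above can be sketched as bitmask chromosomes scoring variable subsets by cross-validated classification of break versus non-break reels. The toy below uses synthetic data, logistic regression as the classifier, and simple truncation selection; all of these choices are assumptions rather than the thesis's procedure.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(309, 12))               # stand-in for the 61 press variables
y = (X[:, 0] - 0.8 * X[:, 3] + rng.normal(0, 1, 309) > 1.5).astype(int)

def fitness(mask):
    """Cross-validated accuracy of a classifier on the selected variables."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=500, class_weight="balanced")
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((20, X.shape[1])) < 0.5     # random bitmask population
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]  # truncation selection
    cut = rng.integers(1, X.shape[1], size=10)
    kids = np.array([np.r_[parents[i][:c], parents[(i + 1) % 10][c:]]
                     for i, c in enumerate(cut)])          # one-point crossover
    kids ^= rng.random(kids.shape) < 0.05                  # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected variable indices:", np.flatnonzero(best))
```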

22

Perez, Ruben E. "Soft Computing techniques and applications in aircraft design optimization." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ63122.pdf.

23

Islam, Nilufar. "Evaluating source water protection strategies : a soft computing approach." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/30842.

Abstract:
Source water protection is an important step in the implementation of a multi-barrier approach that ensures the delivery of safe drinking water cost-effectively. However, implementing source water protection strategies can be a challenging task due to technical and administrative issues. Many decision support tools are currently available that mainly use complex mathematical formulations; these tools require large data sets to conduct the analysis, which makes their use very limited. A simple soft-computing model is proposed in this research that can estimate and predict a reduction in pollutant loads based on selected source water protection strategies, including storm water management ponds, vegetated filter strips, and pollution control by agricultural practice. The proposed model uses an export coefficient approach and the number of animals to calculate the pollutant loads generated from different land uses (e.g., agricultural lands, forests, roads, livestock, and pasture). A surrogate measure, the water quality index, is used for the water assessment after the pollutant loads are discharged into the source water. To demonstrate the proof of concept of the proposed model, a case study of Page Creek in the Clayburn Watershed (British Columbia, Canada) was conducted. The results show that rapid urban development and improperly managed agricultural areas have the most adverse effects on source water quality. On the other hand, forests were found to be the best land use around the source water, ensuring acceptable drinking water quality with a minimal requirement for treatment. The proposed model can help decision-makers at different levels of government (federal/provincial/municipal) to make informed decisions related to land use, resource allocation and capital investment.
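The export coefficient approach amounts to a weighted sum: annual load = sum over land uses of E_i * A_i, plus sum over livestock types of e_j * N_j. The sketch below evaluates that arithmetic; every coefficient value is an illustrative placeholder, not a value from the thesis.

```python
# Export coefficient approach: L = sum_i (E_i * A_i) + sum_j (e_j * N_j),
# where E_i is the export coefficient of land use i (kg/ha/yr), A_i its area
# (ha), e_j the per-animal coefficient (kg/head/yr) and N_j the herd size.
# All coefficient values below are illustrative placeholders.

land_use = {"agriculture": (2.3, 120.0),   # (E_i kg/ha/yr, area ha)
            "forest":      (0.2, 300.0),
            "roads":       (1.1, 15.0)}
livestock = {"cattle": (0.9, 250)}         # (e_j kg/head/yr, head count)

load = (sum(e * a for e, a in land_use.values())
        + sum(e * n for e, n in livestock.values()))
print(f"estimated annual pollutant load: {load:.1f} kg/yr")
```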
24

Darus, Intan Zaurah Mat. "Soft computing adaptive active vibration control of flexible structures." Thesis, University of Sheffield, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.408305.

25

McClintock, Shaunna. "Soft computing : a fuzzy logic controlled genetic algorithm environment." Thesis, University of Ulster, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268579.

26

Снитюк, О. І., and Л. В. Бережна. "Сценарний аналіз економічних процесів з використанням технологій Soft Computing [Scenario analysis of economic processes using Soft Computing technologies]." Thesis, НТУ "ХПІ", 2012. http://repository.kpi.kharkov.ua/handle/KhPI-Press/27332.

27

Adam, Otmar. "Soft Business Process Management : Darstellung, Überwachung und Verbesserung von Geschäftsprozessen mit Methoden des Soft Computing [Representation, monitoring and improvement of business processes with soft computing methods]." Berlin : Logos, 2009. http://d-nb.info/999025317/04.

28

Kumar, Vikas. "Soft computing approaches to uncertainty propagation in environmental risk management." Doctoral thesis, Universitat Rovira i Virgili, 2008. http://hdl.handle.net/10803/8558.

Abstract:
Real-world problems, especially those that involve natural systems, are complex and composed of many nondeterministic components having non-linear coupling. It turns out that in dealing with such systems, one has to face a high degree of uncertainty and tolerate imprecision. Classical system models based on numerical analysis, crisp logic or binary logic have characteristics of precision and categoricity and are classified as the hard computing approach. In contrast, soft computing approaches such as probabilistic reasoning, fuzzy logic and artificial neural nets have characteristics of approximation and dispositionality. Although in hard computing imprecision and uncertainty are undesirable properties, in soft computing the tolerance for imprecision and uncertainty is exploited to achieve tractability, lower cost of computation, effective communication and a high Machine Intelligence Quotient (MIQ). This thesis explores the use of different soft computing approaches to handle uncertainty in environmental risk management. The work has been divided into three parts comprising five papers.
In the first part of this thesis, different uncertainty propagation methods have been investigated. The first methodology is the generalized fuzzy α-cut, based on the concept of the transformation method. A case study of uncertainty analysis of pollutant transport in the subsurface has been used to show the utility of this approach, which shows superiority over conventional methods of uncertainty modelling. A second method is proposed to manage uncertainty and variability together in risk models. The new hybrid approach combining probability theory and fuzzy set theory is called Fuzzy Latin Hypercube Sampling (FLHS). An important property of this method is its ability to separate randomness and imprecision to increase the quality of information. A fuzzified statistical summary of the model results gives indices of sensitivity and uncertainty that relate the effects of variability and uncertainty of input variables to model predictions. The feasibility of the method is validated by analyzing the total variance in the calculation of incremental lifetime risks due to polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/F) for the residents living in the surroundings of a municipal solid waste incinerator (MSWI) in the Basque Country, Spain.
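The α-cut procedure mentioned above slices each fuzzy input at level α into an interval, pushes the interval bounds through the model, and reassembles the output membership level by level. The sketch below approximates this with brute-force grid sampling over the box of input intervals, a simplified stand-in for the transformation method; the triangular inputs and the toy model are assumptions.

```python
import numpy as np

def tri_alpha_cut(a, b, c, alpha):
    """Interval [lo, hi] obtained by cutting a triangular fuzzy number
    (a, b, c) at membership level alpha."""
    return a + alpha * (b - a), c - alpha * (c - b)

def propagate(model, fuzzy_inputs, alphas=np.linspace(0, 1, 11), n_grid=20):
    """Approximate alpha-cut propagation: at each level, sample the box of
    input intervals on a grid and take the min/max of the model output."""
    out = []
    for a in alphas:
        cuts = [tri_alpha_cut(*f, a) for f in fuzzy_inputs]
        grids = np.meshgrid(*[np.linspace(lo, hi, n_grid) for lo, hi in cuts])
        z = model(*grids)
        out.append((a, z.min(), z.max()))
    return out

# Toy advection-style model with fuzzy velocity and retardation inputs.
model = lambda v, R: v / R
for alpha, lo, hi in propagate(model, [(0.5, 1.0, 2.0), (1.5, 2.0, 3.0)])[::5]:
    print(f"alpha={alpha:.1f}: output interval [{lo:.3f}, {hi:.3f}]")
```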
The second part of this thesis deals with the use of artificial intelligence techniques for generating environmental indices. The first paper focused on the development of a Hazard Index (HI) using the persistence, bioaccumulation and toxicity properties of a large number of organic and inorganic pollutants. For deriving this index, Self-Organizing Maps (SOM) were used, which provided a hazard ranking for each compound. Subsequently, an Integral Risk Index was developed taking into account the HI and the concentrations of all pollutants in soil samples collected in the target area. Finally, a risk map was elaborated by representing the spatial distribution of the Integral Risk Index with a Geographic Information System (GIS). The second paper is an improvement of the first work. A new approach called the Neuro-Probabilistic HI was developed by combining SOM and Monte Carlo analysis. It considers the uncertainty associated with contaminants' characteristic values. This new index seems to be an adequate tool to be taken into account in risk assessment processes. In both studies, the methods have been validated through their implementation in the industrial chemical/petrochemical area of Tarragona.
The third part of this thesis deals with a decision-making framework for environmental risk management. In this study, an integrated fuzzy relation analysis (IFRA) model is proposed for risk assessment involving multiple criteria. The fuzzy risk-analysis model is proposed to comprehensively evaluate all risks associated with contaminated systems resulting from more than one toxic chemical. The model is an integrated view on uncertainty techniques based on multi-valued mappings, fuzzy relations and the fuzzy analytical hierarchy process. The integration of system simulation and risk analysis using the fuzzy approach allowed system modelling uncertainty and subjective risk criteria to be incorporated. In this study, it has been shown that a broad integration of fuzzy system simulation and fuzzy risk analysis is possible.
In conclusion, this study has broadly demonstrated the usefulness of soft computing approaches in environmental risk analysis. The proposed methods could significantly advance the practice of risk analysis by effectively addressing critical issues of the uncertainty propagation problem.
29

Ahmed, Mahmud. "The use of advanced soft computing for machinery condition monitoring." Thesis, University of Huddersfield, 2014. http://eprints.hud.ac.uk/id/eprint/25504/.

Abstract:
The demand for cost-effective, reliable and safe machinery operation requires accurate fault detection and classification. These issues are of paramount importance, as potential failures of rotating and reciprocating machinery can then be managed properly and avoided in some cases. Various methods have been applied to tackle these issues, but the accuracy of those methods is variable and leaves scope for improvement. This research proposes appropriate methods for fault detection and diagnosis. The main consideration of this study is to use Artificial Intelligence (AI) and related mathematical approaches to build a condition monitoring (CM) system that has incremental learning capabilities to select effective diagnostic features for the fault diagnosis of a reciprocating compressor (RC). The investigation involved a series of experiments conducted on a two-stage RC at baseline condition and then with faults introduced into the intercooler, drive belt and 2nd stage discharge and suction valve respectively. In addition to this, three combined faults: discharge valve leakage combined with intercooler leakage, suction valve leakage combined with intercooler leakage and discharge valve leakage combined with suction valve leakage, were created and simulated to test the model. The vibration data was collected from the experimental RC and passed through a pre-processing stage, feature extraction and feature selection before the developed diagnosis and classification model was built. A large number of potential features are calculated from the time domain, the frequency domain and the envelope spectrum. Applying Neural Networks (NNs), Support Vector Machines (SVMs) and Relevance Vector Machines (RVMs), integrated with Genetic Algorithms (GAs), and principal component analysis (PCA) combined with principal component optimisation, to these features has shown that the features from envelope analysis have the most potential for differentiating various common faults in RCs. The practical results for fault detection, diagnosis and classification show that the proposed methods perform very well and accurately and can be used as effective tools for diagnosing reciprocating machinery failures.
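Envelope analysis, one of the feature domains named above, is commonly computed as the magnitude of the analytic signal obtained via the Hilbert transform, followed by a spectrum of that envelope. The sketch below demonstrates this on a synthetic amplitude-modulated vibration; the sampling rate, carrier frequency and 30 Hz modulation are assumptions, not data from the thesis.

```python
import numpy as np
from scipy.signal import hilbert

fs = 5000.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
# Toy vibration: a 500 Hz carrier amplitude-modulated at a 30 Hz "fault" rate.
x = (1 + 0.5 * np.sin(2 * np.pi * 30 * t)) * np.sin(2 * np.pi * 500 * t)

envelope = np.abs(hilbert(x))                # magnitude of the analytic signal
spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
print("dominant envelope frequency: %.1f Hz" % freqs[spec.argmax()])  # ~30 Hz
```

Peaks in the envelope spectrum at characteristic fault rates are what make these features discriminative for compressor valve and belt faults.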
APA, Harvard, Vancouver, ISO, and other styles
30

Turel, Mesut. "Soft computing based spatial analysis of earthquake triggered coherent landslides." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/45909.

Full text
Abstract:
Earthquake-triggered landslides cause loss of life and destroy structures, roads, power lines and pipelines, and therefore have a direct impact on the social and economic life of the hazard region. The damage and fatalities directly related to strong ground shaking and fault rupture are sometimes exceeded by those caused by earthquake-triggered landslides. Even though future earthquakes can hardly be predicted, the identification of areas that are highly susceptible to landslide hazards is possible. For geographical information systems (GIS) based deterministic slope stability and earthquake-induced landslide analysis, the grid-cell approach has commonly been used in conjunction with the relatively simple infinite slope model. The infinite slope model together with Newmark's displacement analysis has been widely used to create seismic landslide susceptibility maps. The infinite slope model gives reliable results in the case of surficial landslides with depth-length ratios smaller than 0.1. On the other hand, it cannot satisfactorily analyze deep-seated coherent landslides. In reality, coherent landslides are common, and these types of landslides are a major cause of property damage and fatalities. In the case of coherent landslides, two- or three-dimensional models are required to accurately analyze both the static and dynamic performance of slopes. These models are rarely used in GIS-based landslide hazard zonation because they are numerically expensive compared to one-dimensional infinite slope models. Building metamodels from data obtained in computer experiments, and using computationally inexpensive predictions based on these metamodels, has been widely practised in several engineering applications. With these soft computing methods, design variables are carefully chosen using a design of experiments (DOE) methodology to cover a predetermined range of values, and computer experiments are performed at the chosen points. The design variables and the responses from the computer simulations are then combined to construct functional relationships (metamodels) between the inputs and the outputs. In this study, Support Vector Machines (SVM) and Artificial Neural Networks (ANN) are used to predict the static and seismic responses of slopes. In order to integrate the soft computing methods with GIS for coherent landslide hazard analysis, an automatic slope profile delineation method from Digital Elevation Models is developed. The integrated framework is evaluated using a case study of the 1989 Loma Prieta, CA earthquake (Mw = 6.9). A seismic landslide hazard analysis is also performed for the same region for a future scenario earthquake (Mw = 7.03) on the San Andreas Fault.
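As a concrete companion to the Newmark displacement analysis mentioned above, the sketch below implements the classic rigid sliding-block integration on a toy acceleration record: the block accumulates permanent displacement whenever ground acceleration exceeds the yield acceleration. The record and the yield value are hypothetical, not taken from the study.

import numpy as np

dt = 0.01                          # time step (s), assumed
# Toy ground motion: 0.4 g sinusoid for 10 s (a real analysis uses a record).
acc = 0.4 * 9.81 * np.sin(2 * np.pi * 1.0 * np.arange(0, 10, dt))
k_y = 0.15 * 9.81                  # hypothetical yield acceleration (m/s^2)

vel, disp = 0.0, 0.0
for a in acc:
    rel = a - k_y                  # block accelerates only above the yield level
    if vel > 0 or rel > 0:
        vel = max(vel + rel * dt, 0.0)   # sliding stops when velocity returns to zero
        disp += vel * dt
print("Newmark permanent displacement: %.3f m" % disp)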
APA, Harvard, Vancouver, ISO, and other styles
31

Wang, Lijuan. "Multiphase flow measurement using Coriolis flowmeters incorporating soft computing techniques." Thesis, University of Kent, 2017. https://kar.kent.ac.uk/63877/.

Full text
Abstract:
This thesis describes a novel measurement methodology for two-phase and multiphase flow using Coriolis flowmeters incorporating soft computing techniques. A review of methodologies and techniques for two-phase and multiphase flow measurement is given, together with a discussion of existing problems and technical requirements in their applications. The proposed measurement system is based on established sensors and data-driven models. The principles and implementation of input variable selection methods for data-driven models, and the associated data-driven modelling process, are reported in detail. Three advanced input variable selection methods, namely partial mutual information, genetic algorithm-artificial neural network and tree-based iterative input selection, are implemented and evaluated with experimental data. Parametric dependency between input variables and their significance and sensitivity to the desired output are discussed. Three soft computing techniques, namely artificial neural networks, support vector machines and genetic programming, are applied to data-driven modelling for two-phase flow measurement. Performance comparisons between the data-driven models are carried out through experimental tests and data analysis. The performance of Coriolis flowmeters with air-water, air-oil and gas-liquid two-phase carbon dioxide flows is presented through experimental assessment on one-inch and two-inch bore test rigs. The effects of operating pressure, temperature, installation orientation and fluid properties (density and viscosity) on the performance of Coriolis flowmeters are quantified and discussed. Experimental results suggest that the measurement system using Coriolis flowmeters together with the developed data-driven models reduces the original mass flow measurement errors to within ±2%. The system is also capable of predicting gas volume fraction with relative errors within ±10%.
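A minimal sketch of the data-driven correction idea follows: a regression model (here an SVR, one of the technique families named in the abstract) learns the relative mass-flow error from flowmeter outputs, and the prediction is used to correct the raw reading. The feature choices and synthetic data are assumptions, not the thesis's rig data.

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Hypothetical training features: [observed mass flow, observed density drop,
# damping], with the relative error growing with gas fraction (density drop).
X = rng.uniform([0.5, 0.0, 0.0], [5.0, 0.3, 1.0], size=(200, 3))
err = 0.15 * X[:, 1] / 0.3 + 0.01 * rng.standard_normal(200)   # relative error

model = SVR(kernel="rbf", C=10.0).fit(X, err)

x_new = np.array([[2.0, 0.2, 0.5]])
corrected = x_new[0, 0] * (1 - model.predict(x_new)[0])        # subtract predicted error
print("corrected mass flow: %.3f kg/s" % corrected)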
APA, Harvard, Vancouver, ISO, and other styles
32

Lubasch, Peer. "Identifikation von Verkehrslasten unter Einsatz von Methoden des soft computing." Dresden TUDpress, 2009. http://d-nb.info/995246319/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Yang, Yingjie. "Investigation on soft computing techniques for airport environment evaluation systems." Thesis, Loughborough University, 2008. https://dspace.lboro.ac.uk/2134/35015.

Full text
Abstract:
Spatial and temporal information exists widely in engineering fields, especially in airport environmental management systems. The airport environment is influenced by many different factors, and uncertainty is a significant part of the system. Decision support that accounts for this kind of spatial and temporal information and uncertainty is crucial for airport-environment-related engineering planning and operation. Geographical information systems and computer-aided design are two powerful tools for supporting spatial and temporal information systems. However, present geographical information systems and computer-aided design software are still too general to handle the special features of the airport environment, especially uncertainty. In this thesis, a series of parameters and methods for neural network-based knowledge discovery and training improvement are put forward, such as the relative strength of effect, a dynamic state space search strategy and a compound architecture.
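One common way to extract an input-importance measure such as the relative strength of effect from a trained network is to attribute importance through the network's weights. The sketch below uses a Garson-style weight attribution as a stand-in; the thesis's own definition may differ, and the weights here are random placeholders.

import numpy as np

# Hypothetical trained weights for a 3-input, 4-hidden, 1-output network.
rng = np.random.default_rng(0)
W_ih = rng.standard_normal((3, 4))    # input-to-hidden weights
W_ho = rng.standard_normal((4, 1))    # hidden-to-output weights

# Garson-style attribution: share of each input in the absolute weight paths.
contrib = np.abs(W_ih) * np.abs(W_ho).T          # (3, 4) path strengths
contrib /= contrib.sum(axis=0, keepdims=True)    # normalise per hidden node
importance = contrib.sum(axis=1)
importance /= importance.sum()
for i, imp in enumerate(importance):
    print(f"input {i}: relative importance {imp:.2f}")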
APA, Harvard, Vancouver, ISO, and other styles
34

Gardiner, Michael Robert. "An Evaluation of Soft Processors as a Reliable Computing Platform." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5509.

Full text
Abstract:
This study evaluates the benefits and limitations of soft processors operating in a radiation-hardened FPGA, focusing primarily on the performance and reliability of these systems. FPGA designs for four popular soft processors, the MicroBlaze, LEON3, Cortex-M0 DesignStart and OpenRISC 1200, are developed for a Virtex-5 FPGA. The performance of these soft processor designs is then compared on ten widely-used benchmark programs. Benchmarking results indicate that the MicroBlaze has the best integer performance of the soft processors, with at least 2.23X better performance on average than the other three processors. However, the LEON3 has the best floating-point performance, with benchmark scores 8.9X higher on average than its competitors. The soft processors' performance is also compared against estimated benchmark scores for a radiation-hardened processor, the RAD750. We find the average performance of the RAD750 to be 2.58X better than the best soft processor scores on each benchmark, although the best soft processor scores were higher on two benchmarks. The soft processors' inability to compete with the performance of the decade-old RAD750 illustrates the substantial performance gap between hard and soft processor architectures. Although soft processors are not capable of competing with rad-hard processors in performance, the flexibility they provide nevertheless makes them a desirable option for space systems where speed is not the key issue. Fault injection experiments are also completed on three of the soft processors to evaluate their configuration memory sensitivity. Our results demonstrate that the MicroBlaze is less sensitive than the LEON3 and the Cortex-M0 DesignStart, but that the LEON3 has lower sensitivity per FPGA slice than the other processors. A combined metric for soft processor performance and configuration sensitivity is then developed to aid future researchers in evaluating the trade-offs between these two distinct processor attributes.
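As an illustration of how performance and configuration sensitivity might be folded into one figure of merit, consider the toy calculation below. The weighting scheme and all numbers are hypothetical; the thesis defines its own combined metric.

# A minimal sketch of a combined performance/sensitivity figure of merit.
processors = {
    # name: (relative benchmark score, sensitive configuration bits) -- assumed
    "MicroBlaze": (2.23, 120_000),
    "LEON3": (1.00, 90_000),
    "Cortex-M0": (0.85, 60_000),
}
for name, (perf, bits) in processors.items():
    # Higher is better: performance achieved per million sensitive bits.
    print(f"{name}: {perf / (bits / 1e6):.2f} score per Mbit of sensitivity")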
APA, Harvard, Vancouver, ISO, and other styles
35

Bennert, Reinhard. "Soft Computing-Methoden in Sanierungsprüfung und -controlling : Entscheidungsunterstützung durch Computional Intelligence /." Wiesbaden : Dt. Univ.-Verl, 2004. http://www.gbv.de/dms/zbw/386511489.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Garcia, Raymond Christopher. "A soft computing approach to anomaly detection with real-time applicability." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/21808.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Osanlou, Ardeshir. "Soft computing and fractal geometry in signal processing and pattern recognition." Thesis, De Montfort University, 2000. http://hdl.handle.net/2086/4242.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Mingwu. "Motion planning and control of mobile manipulators using soft computing techniques." Thesis, University of Sheffield, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266128.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Baraka, Ali. "Soft-computing and human-centric approaches for modelling complex manufacturing systems." Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/16183/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Lucic, Panta. "Modeling Transportation Problems Using Concepts of Swarm Intelligence and Soft Computing." Diss., Virginia Tech, 2002. http://hdl.handle.net/10919/26396.

Full text
Abstract:
Many real-world problems can be formulated to fit the form required for discrete optimization. Discrete optimization problems can be solved by numerous techniques that have developed over time. Some of the techniques provide optimal solutions to the problem and some of them give 'good enough' solutions. The fundamental reason for developing techniques capable of producing solutions that are not necessarily optimal is the fact that many discrete optimization problems are NP-complete. Metaheuristic algorithms are a common name for a set of general-purpose techniques developed to provide solutions to problems associated with discrete optimization; mostly the techniques are based on natural metaphors. Discrete optimization can be applied to countless problems in transportation engineering. Recently, researchers have started studying the behavior of social insects (ants) in an attempt to use the swarm intelligence concept to develop artificial systems with the ability to search a problem's solution space in a way that is similar to the foraging search by a colony of social insects. The development of artificial systems does not entail the complete imitation of natural systems, but explores them in search of ideas for modeling. This research is partially devoted to the development of a new system based on the foraging behavior of bee colonies, the Bee System, which was tested on many instances of the Traveling Salesman Problem. Many transportation-engineering problems, besides being of a combinatorial nature, are characterized by uncertainty. In order to address these problems, the second part of the research is devoted to the development of algorithms that combine existing results in the area of swarm intelligence (the Ant System) and approximate reasoning. The proposed approach, the Fuzzy Ant System, is tested on two examples: the Stochastic Vehicle Routing Problem and Schedule Synchronization in Public Transit.
Ph. D.
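To illustrate the recruitment idea at the heart of bee-inspired search, here is a heavily simplified Python sketch for the Traveling Salesman Problem: the shortest tours are 'advertised' and recruit followers to explore their neighbourhood. It is a generic illustration, not Lucic's Bee System.

import math
import random

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(15)]

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def neighbour(tour):
    # 2-opt style segment reversal: a simple local move for follower bees.
    a, b = sorted(random.sample(range(len(tour)), 2))
    return tour[:a] + tour[a:b][::-1] + tour[b:]

bees = [random.sample(range(len(cities)), len(cities)) for _ in range(20)]
for _ in range(300):
    bees.sort(key=tour_length)
    elite = bees[:5]                      # shortest tours are "advertised"
    # Recruited followers explore in the neighbourhood of the elite tours.
    bees = elite + [neighbour(random.choice(elite)) for _ in range(15)]
best = min(bees, key=tour_length)
print("best tour length: %.3f" % tour_length(best))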
APA, Harvard, Vancouver, ISO, and other styles
41

Bhupatiraju, Murali K. "Direct and inverse models in metal forming : a soft computing approach /." The Ohio State University, 1999. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488190595941775.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Chaoui, Hicham. "Soft-computing based intelligent adaptive control design of complex dynamic systems." Thèse, Université du Québec à Trois-Rivières, 2011. http://depot-e.uqtr.ca/2676/1/030295752.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Amina, Mahdi. "Dynamic non-linear system modelling using wavelet-based soft computing techniques." Thesis, University of Westminster, 2011. https://westminsterresearch.westminster.ac.uk/item/8zwwz/dynamic-non-linear-system-modelling-using-wavelet-based-soft-computing-techniques.

Full text
Abstract:
The enormous number of complex systems creates a need for high-level, cost-efficient modelling structures for operators and system designers. Model-based approaches offer a challenging way to integrate a priori knowledge into the procedure. Soft computing based models in particular can successfully be applied to highly non-linear problems. A further reason for dealing with so-called soft computational model-based techniques is that in real-world cases only partial, uncertain and/or inaccurate data is often available. Wavelet-based soft computing techniques are considered one of the latest trends in system identification and modelling. This thesis provides a comprehensive synopsis of the main wavelet-based approaches to modelling non-linear dynamical systems in real-world problems, together with possible twists and novelties aiming for more accurate and less complex modelling structures. Initially, an on-line structure and parameter design is considered in an adaptive Neuro-Fuzzy (NF) scheme. The problem of redundant membership functions, and consequently fuzzy rules, is circumvented by applying an adaptive structure. The growth of a special type of fungus (Monascus ruber van Tieghem) is examined against several other approaches for further justification of the proposed methodology. Extending this line of research, two Morlet Wavelet Neural Network (WNN) structures are introduced. Increasing accuracy and decreasing computational cost are the primary targets of the proposed novelties. Replacing the synaptic weights with Linear Combination Weights (LCW), and imposing a Hybrid Learning Algorithm (HLA) comprising Gradient Descent (GD) and Recursive Least Squares (RLS), are the tools employed for these challenges. The two models differ in structure while sharing the same HLA scheme. The second approach contains an additional multiplication layer, and its hidden layer contains several sub-WNNs for each input dimension. The practical superiority of these extensions is demonstrated by simulation and experimental results on a real non-linear dynamic system (Listeria monocytogenes survival curves in Ultra-High Temperature (UHT) whole milk), consolidated with a comprehensive comparison against other suggested schemes. At the next stage, an extended clustering-based fuzzy version of the proposed WNN schemes is presented as the ultimate structure in this thesis. The proposed Fuzzy Wavelet Neural Network (FWNN) benefits from the clustering capability of Gaussian Mixture Models (GMMs), updated by a modified Expectation-Maximization (EM) algorithm. One of the main aims of this thesis is to illustrate how the GMM-EM scheme can be used not only for detecting useful knowledge in data by building accurate regressions, but also for the identification of complex systems. The FWNN structure is based on fuzzy rules with wavelet functions in their consequent parts. In order to improve the function approximation accuracy and generalization capability of the FWNN system, an efficient hybrid learning approach is used to adjust the dilation, translation, weight and membership parameters. An Extended Kalman Filter (EKF) is employed to adjust the wavelet parameters, together with Weighted Least Squares (WLS) dedicated to fine-tuning the Linear Combination Weights.
The results of a real-world Short-Term Load Forecasting (STLF) application further reinforce the plausibility of the above technique.
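For orientation, a minimal forward pass of a wavelet neural network with Morlet activations, the building block the abstract extends, might look like the sketch below. The dilations, translations and weights are random placeholders; the thesis tunes such parameters with its hybrid GD/RLS and EKF/WLS schemes.

import numpy as np

def morlet(t):
    # Real-valued Morlet mother wavelet.
    return np.cos(5.0 * t) * np.exp(-0.5 * t ** 2)

rng = np.random.default_rng(0)
n_wavelons, n_inputs = 6, 2
a = rng.uniform(0.5, 2.0, (n_wavelons, n_inputs))   # dilations
b = rng.uniform(-1.0, 1.0, (n_wavelons, n_inputs))  # translations
w = rng.standard_normal(n_wavelons)                 # output weights

def wnn(x):
    # Each wavelon responds with the product of 1-D Morlet wavelets per input.
    z = morlet((x - b) / a).prod(axis=1)
    return w @ z

print("WNN output for x=[0.3, -0.7]:", round(float(wnn(np.array([0.3, -0.7]))), 4))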
APA, Harvard, Vancouver, ISO, and other styles
44

Ma, Xi. "One-diode photovoltaic model parameter extraction based on Soft-Computing Approaches." Thesis, Mittuniversitetet, Institutionen för elektronikkonstruktion, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-36302.

Full text
Abstract:
This thesis explores whether the parameters of the one-diode photovoltaic model can be extracted using soft-computing approaches under indoor conditions. Three algorithms were selected and implemented in MATLAB for analysis and comparison. The thesis shows that, under indoor conditions, all three algorithms can accurately extract the photovoltaic parameters at most illumination levels, although the extracted parameters do not always retain their physical meaning.
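For reference, the one-diode model referred to above is the implicit equation I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh, whose five parameters (Iph, I0, Rs, Rsh, n) are what the extraction algorithms search for. The Python sketch below evaluates the model numerically; the parameter values are illustrative, not the thesis's extracted ones.

import numpy as np
from scipy.optimize import brentq

Iph, I0, Rs, Rsh, n = 0.5, 1e-9, 0.1, 200.0, 1.3   # assumed parameters
Vt = 0.02585                                        # thermal voltage at ~300 K

def diode_current(V):
    # Solve the implicit one-diode equation for the cell current I.
    f = lambda I: (Iph - I0 * np.expm1((V + I * Rs) / (n * Vt))
                   - (V + I * Rs) / Rsh - I)
    return brentq(f, -1.0, 1.0)                     # root within a safe bracket

print("I at V=0.4 V: %.4f A" % diode_current(0.4))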
APA, Harvard, Vancouver, ISO, and other styles
45

Falcone, Roberto. "Optimal seismic retrofitting of existing RC frames through soft-computing approaches." Doctoral thesis, Universita degli studi di Salerno, 2018. http://hdl.handle.net/10556/3092.

Full text
Abstract:
2016 - 2017
The Ph.D. thesis proposes a Soft Computing approach capable of supporting the engineer's judgement in selecting and designing the cheapest solution for the seismic retrofitting of existing RC framed structures. Chapter 1 points out the need for strengthening existing buildings as one of the main ways of decreasing the economic and life losses that are direct consequences of earthquake disasters. It also proposes a wide, but not exhaustive, list of the most frequently observed deficiencies contributing to the vulnerability of concrete buildings. Chapter 2 collects the state of practice on seismic analysis methods for assessing the safety of existing buildings within the framework of performance-based design. The most common approaches for modelling material plasticity in non-linear frame analysis are also reviewed. Chapter 3 presents a wide state of practice on retrofitting strategies, intended as preventive measures aimed at mitigating the effects of a future earthquake by (a) decreasing the seismic hazard demands, or (b) improving the dynamic characteristics supplied to the existing building. The chapter also presents a list of retrofitting systems, intended as technical interventions commonly classified into local interventions (also known as "member-level" techniques) and global interventions (also called "structure-level" techniques) that might be used in synergistic combination to achieve the adopted strategy. In particular, the available approaches and common criteria for selecting an optimum retrofit strategy and an optimal system, respectively, are discussed. Chapter 4 highlights the usefulness of Soft Computing methods as efficient tools for providing "objective" answers in reasonable time for complex situations governed by approximation and imprecision. In particular, it collects the applications of Fuzzy Logic, Artificial Neural Networks and Evolutionary Computing found in the scientific literature in the fields of structural and earthquake engineering, with a taxonomic classification of the problems into modelling, simulation and optimization. Chapter 5 "translates" the search for the cheapest retrofitting system into a constrained optimization problem. To this end, the chapter formulates a novel procedure that assembles a numerical model for the seismic assessment of framed structures within a Soft-Computing-driven optimization algorithm capable of minimizing an objective function defined as the total initial cost of intervention. The main components required to assemble the procedure are described: the optimization algorithm (a Genetic Algorithm), the simulation framework (OpenSees) and the software environment (Matlab). Chapter 6 describes the flow chart of the proposed procedure step by step and focuses on the main implementation aspects and working details, ranging from a clever initialization of the population of candidate solutions up to a proposed tuning procedure for the genetic parameters. Chapter 7 discusses numerical examples in which the Soft Computing procedure is applied to models of multi-storey RC frames obtained through simulated design. A total of fifteen "scenarios" are studied in order to assess the procedure's "robustness" to changes in input data. Finally, Chapter 8, on the basis of the observed outcomes, summarizes the capabilities of the proposed procedure while highlighting its "limitations" at the current state of development. Some possible modifications are discussed to enhance its efficiency and completeness.
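The following Python sketch caricatures the Chapter 5 formulation: a genetic algorithm searches binary retrofit layouts (which members receive a local intervention) for minimum total cost, penalizing layouts that fail a safety check. The unit costs and the safety check are crude stand-ins for the thesis's OpenSees-based seismic assessment.

import random
random.seed(0)

N_MEMBERS, POP, GENS = 12, 30, 60
cost = [random.uniform(1.0, 5.0) for _ in range(N_MEMBERS)]  # assumed unit costs

def is_safe(layout):
    # Placeholder for the non-linear seismic assessment (OpenSees in the thesis).
    return sum(layout) >= 5

def fitness(layout):
    total = sum(c for c, g in zip(cost, layout) if g)
    return total if is_safe(layout) else total + 1e3     # penalty if unsafe

pop = [[random.randint(0, 1) for _ in range(N_MEMBERS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)
    parents = pop[:POP // 2]
    children = []
    for _ in range(POP - len(parents)):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N_MEMBERS)
        child = a[:cut] + b[cut:]                        # one-point crossover
        i = random.randrange(N_MEMBERS)
        child[i] ^= random.random() < 0.1                # bit-flip mutation
        children.append(child)
    pop = parents + children
print("cheapest safe layout cost: %.2f" % fitness(min(pop, key=fitness)))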
APA, Harvard, Vancouver, ISO, and other styles
46

Tiwari, A., J. Knowles, E. Avineri, Keshav P. Dahal, and R. Roy. "Applications of Soft Computing." 2006. http://hdl.handle.net/10454/2291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Tsai, Pei-Wei, and 蔡沛緯. "Soft Computing for Information Hiding." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/00402221571722548598.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Graduate Institute of Electronic and Information Engineering (Master's Program)
95 (ROC academic year)
An innovative optimization algorithm based on observing and imitating the innate behaviors of creatures is proposed in this thesis. Drawing on the behaviors of natural organisms to form an optimization algorithm is an effective way of solving optimization problems. The main purpose of this thesis is to establish an optimization algorithm by modeling the behaviors observed in a specific species, the cat. We present two sub-models, the tracing mode and the seeking mode, for moving solution sets from one position to another in the solution space. By properly allocating these two sub-models during the evolution, we imitate the cat's behaviors of tracing moving objects and of resting. Applying these sub-models in the algorithm allows the solution sets to move from one position to a new one, yielding the evolutionary algorithm for optimization: Cat Swarm Optimization (CSO). In order to investigate the performance of CSO, we compare it with an existing technique, Particle Swarm Optimization (PSO), in experiments on several test functions. According to the experimental results, as expected, CSO reaches the global optimum more swiftly and more precisely than PSO does. Furthermore, we apply CSO to information hiding. Increasing the robustness of the hidden information degrades the quality of the media containing it; conversely, the robustness of the hidden information becomes weaker as the cover media remain more similar to the original media. Balancing the robustness of the hidden information against the similarity of the cover media is a frequent trade-off problem in information hiding, and optimization algorithms are very useful for solving this kind of problem. Based on the results, applying CSO as proposed in this thesis obtains the best search results while limiting the impact on the hidden information, so that the proper trade-off can be found.
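Based on the description above, a minimal sketch of CSO's two sub-models on a toy objective could look as follows. The mixture ratio, number of seeking-mode copies and perturbation size are common defaults from the literature, not necessarily the thesis's settings.

import random
random.seed(0)

DIM, CATS, ITERS, MR = 2, 10, 100, 0.2   # MR: fraction of cats in tracing mode

def f(x):                                 # toy objective: sphere function
    return sum(v * v for v in x)

cats = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(CATS)]
vels = [[0.0] * DIM for _ in range(CATS)]
best = min(cats, key=f)[:]

for _ in range(ITERS):
    for i in range(CATS):
        if random.random() < MR:          # tracing mode: chase the best cat
            for d in range(DIM):
                v = vels[i][d] + random.random() * 2.0 * (best[d] - cats[i][d])
                vels[i][d] = max(-2.0, min(2.0, v))   # clamp velocity
                cats[i][d] += vels[i][d]
        else:                             # seeking mode: evaluate local copies
            copies = [[v * (1 + random.uniform(-0.2, 0.2)) for v in cats[i]]
                      for _ in range(5)]
            cats[i] = min(copies + [cats[i]], key=f)
    best = min(cats + [best], key=f)[:]

print("best value found: %.6f" % f(best))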
APA, Harvard, Vancouver, ISO, and other styles
48

DAO, THI-KIEN, and 陶氏建 (Thi-Kien Dao). "Soft Computing with Industrial Applications." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/3vvvve.

Full text
Abstract:
Ph. D.
National Kaohsiung University of Science and Technology
Department of Electronic Engineering
107 (ROC academic year)
Industrialization is one of the priority concerns of government departments for socio-economic development, and industrial applications therefore receive considerable attention from the research community. Successful applications have generated enormous economic benefits, e.g., improved working environments, reduced heavy labor and increased incomes, and government economic policy increasingly focuses on industrial development. However, many real industrial applications face challenges arising from the demands that growing industrial development places on creative processes, robustness and efficiency. Soft Computing (SC) is one of the promising solutions to these challenges. SC is an evolving collection of methodologies that aims to exploit tolerance for imprecision, uncertainty and partial truth to achieve robustness, tractability, closeness to the human mind and low cost. SC is proving robust in delivering globally optimal solutions and in assisting in resolving the limitations encountered in traditional methods. SC technologies such as evolutionary algorithms (EA), swarm intelligence (SI), fuzzy logic (FL), rough sets (RS), soft sets (SS) and artificial neural networks (ANN) have been applied successfully to industrial applications. This dissertation tries to partially bridge the gap between the theory of soft computing and industrial applications. The primary methodology of our research is to learn how to analyze, redesign and improve SC, e.g., EA and SI technologies, for solving particular industrial problems. Our approach typically has two parts: the algorithm, and the solution obtained by applying the algorithm. For the first part, we consider techniques such as parallel computing, compact computing, hybrid computing, multi-objective optimization and discrete transforms to enhance or improve the methodologies according to the specifications of the problems. The second part is the solution to the related industrial application obtained by applying the analyzed algorithms. The principal problems addressed in this dissertation are aspects of optimization such as scheduling, balancing and topology control. We also discuss the advantages and disadvantages of SC over traditional solutions and present the early research results of the dissertation. These results include an optimal makespan for job-shop scheduling problems, a topology control scheme for Wireless Sensor Networks (WSN), a solution to the economic load dispatch problem, and an optimal base station (BS) formation in WSN. Given the opportunities and challenges of industrial application development, the dissertation should be feasible for practical application in industrial life.
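As one concrete example of the scheduling problems mentioned, the sketch below evaluates the makespan of a job-shop schedule from an operation-based encoding, the inner loop of most metaheuristic job-shop solvers. The 3-job, 3-machine instance is illustrative only, not from the dissertation.

# jobs[j] = list of (machine, processing_time) in the required order.
jobs = [[(0, 3), (1, 2), (2, 2)],
        [(0, 2), (2, 1), (1, 4)],
        [(1, 4), (2, 3), (0, 1)]]

def makespan(sequence):
    # sequence: job indices, each appearing once per operation of that job.
    next_op = [0] * len(jobs)
    job_ready = [0] * len(jobs)
    mach_ready = [0] * 3
    for j in sequence:
        m, p = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready[m])   # both job and machine free
        job_ready[j] = mach_ready[m] = start + p
        next_op[j] += 1
    return max(job_ready)

print("makespan:", makespan([0, 1, 2, 0, 1, 2, 0, 1, 2]))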
APA, Harvard, Vancouver, ISO, and other styles
49

Saad, A., E. Avineri, Keshav P. Dahal, M. Sarfraz, and R. Roy. "Soft Computing in Industrial Applications." 2007. http://hdl.handle.net/10454/2290.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Begum, Momotaz. "Robotic mapping using soft computing methodologies." 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
