To view other types of publications on this topic, follow the link: SOFT COMPUTING TECHNIQUE.

Dissertations on the topic "SOFT COMPUTING TECHNIQUE"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "SOFT COMPUTING TECHNIQUE".

Next to each source in the list of references there is an "Add to bibliography" button. Press on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these details are provided in the metadata.

Browse dissertations on a wide variety of disciplines and organise your bibliography correctly.

1

Machaka, Pheeha. "Situation recognition using soft computing techniques." Master's thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/11225.

Full text of the source
Abstract:
Includes bibliographical references.
The last decades have witnessed the emergence of a large number of devices pervasively launched into our daily lives as systems producing and collecting data from a variety of information sources to provide different services to different users via a variety of applications. These include infrastructure management, business process monitoring, crisis management and many other system-monitoring activities. Being processed in real-time, these information production/collection activities raise interest in live performance monitoring, analysis and reporting, and call for data-mining methods in the recognition, prediction, reasoning and controlling of the performance of these systems by controlling changes in the system and/or deviations from normal operation. In recent years, soft computing methods and algorithms have been applied to data mining to identify patterns and provide new insight into data. This thesis revisits the issue of situation recognition for systems producing massive datasets by assessing the relevance of using soft computing techniques for finding hidden patterns in these systems.
APA, Harvard, Vancouver, ISO, and other styles
2

Fernando, Kurukulasuriya Joseph Tilak Nihal. "Soft computing techniques in power system analysis." Thesis, full-text, 2008. https://vuir.vu.edu.au/2025/.

Full text of the source
Abstract:
Soft computing is a concept that has come into prominence in recent times and its application to power system analysis is still more recent. This thesis explores the application of soft computing techniques in the area of voltage stability of power systems. Soft computing, as opposed to conventional “hard” computing, is a technique that is tolerant of imprecision, uncertainty, partial truth and approximation. Its methods are based on the working of the human brain and it is commonly known as artificial intelligence. The human brain is capable of arriving at valid conclusions based on incomplete and partial data obtained from prior experience. It is an approximation of this process on a very small scale that is used in soft computing. Some of the important branches of soft computing (SC) are artificial neural networks (ANNs), fuzzy logic (FL), genetic computing (GC) and probabilistic reasoning (PR). The soft computing methods are robust and low cost. It is to be noted that soft computing methods are used in such diverse fields as missile guidance, robotics, industrial plants, pattern recognition, market prediction, patient diagnosis, logistics and of course power system analysis and prediction. However, in all these fields its application is comparatively new and research is being carried out continuously in many universities and research institutions worldwide. The research presented in this thesis uses the soft computing method of Artificial Neural Networks (ANNs) for the prediction of voltage instability in power systems. The research is very timely and current and would be a substantial contribution to the present body of knowledge in soft computing and voltage stability, which by itself is a new field. The methods developed in this research would be faster and more economical than presently available methods, enabling their use online.
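The abstract describes ANNs used to flag voltage instability from power-system quantities. Purely as a hedged illustration of that general idea, and not of the thesis's actual model or data, the sketch below trains a tiny feed-forward network by gradient descent on synthetic, hypothetical features (load margin, reactive reserve, voltage deviation).

# Illustrative sketch only: a tiny MLP classifier flagging voltage instability
# from hypothetical bus-level features. The features, labelling rule and
# network size are invented; the thesis's actual model is not reproduced.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: [load margin, reactive reserve, voltage deviation]
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = ((X[:, 0] < 0.3) & (X[:, 1] < 0.4)).astype(float)   # 1 = "unstable"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)                 # hidden layer
    p = sigmoid(h @ W2 + b2).ravel()         # predicted probability
    g_out = (p - y)[:, None] / len(y)        # cross-entropy gradient w.r.t. output logit
    g_h = (g_out @ W2.T) * h * (1 - h)       # back-propagated hidden gradient
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * X.T @ g_h;   b1 -= lr * g_h.sum(axis=0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel() > 0.5
print("training accuracy:", (pred == y.astype(bool)).mean())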
APA, Harvard, Vancouver, ISO, and other styles
3

Fernando, Kurukulasuriya Joseph Tilak Nihal. "Soft computing techniques in power system analysis." full-text, 2008. http://eprints.vu.edu.au/2025/1/thesis.pdf.

Full text of the source
Abstract:
Soft computing is a concept that has come into prominence in recent times and its application to power system analysis is still more recent. This thesis explores the application of soft computing techniques in the area of voltage stability of power systems. Soft computing, as opposed to conventional “hard” computing, is a technique that is tolerant of imprecision, uncertainty, partial truth and approximation. Its methods are based on the working of the human brain and it is commonly known as artificial intelligence. The human brain is capable of arriving at valid conclusions based on incomplete and partial data obtained from prior experience. It is an approximation of this process on a very small scale that is used in soft computing. Some of the important branches of soft computing (SC) are artificial neural networks (ANNs), fuzzy logic (FL), genetic computing (GC) and probabilistic reasoning (PR). The soft computing methods are robust and low cost. It is to be noted that soft computing methods are used in such diverse fields as missile guidance, robotics, industrial plants, pattern recognition, market prediction, patient diagnosis, logistics and of course power system analysis and prediction. However, in all these fields its application is comparatively new and research is being carried out continuously in many universities and research institutions worldwide. The research presented in this thesis uses the soft computing method of Artificial Neural Networks (ANNs) for the prediction of voltage instability in power systems. The research is very timely and current and would be a substantial contribution to the present body of knowledge in soft computing and voltage stability, which by itself is a new field. The methods developed in this research would be faster and more economical than presently available methods, enabling their use online.
APA, Harvard, Vancouver, ISO, and other styles
4

Esteves, João Trevizoli. "Climate and agrometeorology forecasting using soft computing techniques." Jaboticabal, 2018. http://hdl.handle.net/11449/180833.

Full text of the source
Abstract:
Advisor: Glauco de Souza Rolim
Abstract: Precipitation, over short periods of time, is a phenomenon associated with high levels of uncertainty and variability. Given its nature, traditional forecasting techniques are expensive and computationally demanding. This work presents a model to forecast the occurrence of rainfall over short ranges of time with Artificial Neural Networks (ANNs), in accumulated periods from 3 to 7 days for each climatic season, mitigating the need to predict its amount. The premise is to reduce the variance and raise the bias of the data, lowering the responsibility of the model, which acts as a filter for quantitative models by removing subsequent occurrences of zero rainfall values, which would otherwise bias those models and reduce their performance. The models were developed with time series from 10 agriculturally relevant regions in Brazil; these locations have the longest available weather time series and are the most deficient in accurate climate predictions. Sixty years of daily mean air temperature and accumulated precipitation were available and were used to estimate potential evapotranspiration and the water balance; these were the variables used as inputs for the ANN models. The mean accuracy of the model for all accumulated periods was 78% in summer, 71% in winter, 62% in spring and 56% in autumn. It was identified that the effect of continentality, the effect of altitude and the volume of normal precipitation have a direct impact on the accuracy of the ANNs. The models have ... (Complete abstract: click electronic access below)
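As a loose, hedged illustration of the occurrence-only framing described in this abstract, and not of the thesis's data, features or network, the sketch below turns a synthetic daily weather series into binary "rain in the next accumulated window" targets with simple antecedent features.

# Illustrative sketch only: building occurrence targets for 3- to 7-day
# accumulated windows from a synthetic daily series. Data, features and the
# 5-day window are invented stand-ins for the thesis's inputs.
import numpy as np

rng = np.random.default_rng(1)
days = 3650
rain = rng.gamma(shape=0.3, scale=5.0, size=days)      # synthetic daily rainfall (mm)
rain[rng.random(days) < 0.6] = 0.0                     # many dry days
temp = 20 + 8 * np.sin(np.arange(days) * 2 * np.pi / 365) + rng.normal(0, 2, days)

def make_samples(window=5):
    """Features: recent temperature/rain statistics; target: any rain in the
    next `window` days (occurrence only, not amount)."""
    X, y = [], []
    for t in range(30, days - window):
        X.append([temp[t-30:t].mean(), rain[t-30:t].sum(), rain[t-7:t].sum()])
        y.append(float(rain[t:t+window].sum() > 0.0))
    return np.array(X), np.array(y)

X, y = make_samples(window=5)
print(X.shape, "positive rate:", round(y.mean(), 3))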
Master's
APA, Harvard, Vancouver, ISO, and other styles
5

Erman, Maria. "Applications of Soft Computing Techniques for Wireless Communications." Licentiate thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17314.

Full text of the source
Abstract:
This thesis presents methods and applications of Fuzzy Logic and Rough Sets in the domain of telecommunications, at both the network and physical layers. Specifically, the use of a new class of functions, the truncated π functions, for classifying IP traffic by matching datagram size histograms is explored. Furthermore, work on adapting the payoff matrix in multiplayer games by using fuzzy entries, as opposed to crisp values that are hard to quantify, is presented. Additionally, applications of fuzzy logic in wireless communications are presented, comprising a comprehensive review of current trends and applications, followed by work directed towards using it in spectrum sensing and power control in cognitive radio networks. This licentiate thesis represents parts of my work in the fields of Fuzzy Systems and Wireless Communications. The work was done in collaboration between the Departments of Applied Signal Processing and Mathematics at Blekinge Institute of Technology.
APA, Harvard, Vancouver, ISO, and other styles
6

Perez, Ruben E. "Soft Computing techniques and applications in aircraft design optimization." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ63122.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Lijuan. "Multiphase flow measurement using Coriolis flowmeters incorporating soft computing techniques." Thesis, University of Kent, 2017. https://kar.kent.ac.uk/63877/.

Full text of the source
Abstract:
This thesis describes a novel measurement methodology for two-phase or multiphase flow using Coriolis flowmeters incorporating soft computing techniques. A review of methodologies and techniques for two-phase and multiphase flow measurement is given, together with the discussions of existing problems and technical requirements in their applications. The proposed measurement system is based on established sensors and data-driven models. Detailed principle and implementation of input variable selection methods for data-driven models and associated data-driven modelling process are reported. Three advanced input variable selection methods, including partial mutual information, genetic algorithm-artificial neural network and tree-based iterative input selection, are implemented and evaluated with experimental data. Parametric dependency between input variables and their significance and sensitivity to the desired output are discussed. Three soft computing techniques, including artificial neural network, support vector machine and genetic programming, are applied to data-driven modelling for two-phase flow measurement. Performance comparisons between the data-driven models are carried out through experimental tests and data analysis. Performance of Coriolis flowmeters with air-water, air-oil and gas-liquid two-phase carbon dioxide flows is presented through experimental assessment on one-inch and two-inch bore test rigs. Effects of operating pressure, temperature, installation orientation and fluid properties (density and viscosity) on the performance of Coriolis flowmeters are quantified and discussed. Experimental results suggest that the measurement system using Coriolis flowmeters together with the developed data-driven models has significantly reduced the original errors of mass flow measurement to within ±2%. The system also has the capability of predicting gas volume fraction with the relative errors less than ±10%.
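As an illustration of the general idea of correcting Coriolis readings with a data-driven model (the thesis applies ANN, SVM and genetic programming to rig data; none of that is reproduced here), the hedged sketch below fits a simple least-squares error model on synthetic two-phase indicators and applies it as a correction.

# Illustrative sketch only: a data-driven correction of Coriolis mass-flow
# readings under two-phase conditions. The variables (observed density drop,
# tube damping) and the synthetic error surface are hypothetical stand-ins
# for the experimental data and models described in the thesis.
import numpy as np

rng = np.random.default_rng(2)
n = 400
density_drop = rng.uniform(0.0, 0.15, n)    # relative density drop
damping = rng.uniform(0.0, 1.0, n)          # normalised tube damping
true_error = -0.3 * density_drop - 0.1 * damping * density_drop  # relative flow error
observed_error = true_error + rng.normal(0, 0.005, n)

# Fit a simple polynomial correction model by least squares
A = np.column_stack([np.ones(n), density_drop, damping,
                     density_drop * damping, density_drop ** 2])
coef, *_ = np.linalg.lstsq(A, observed_error, rcond=None)

def corrected_flow(raw_flow, density_drop, damping):
    """Apply the fitted error model to correct a raw mass-flow reading."""
    features = np.array([1.0, density_drop, damping,
                         density_drop * damping, density_drop ** 2])
    predicted_error = features @ coef
    return raw_flow / (1.0 + predicted_error)

print("fitted coefficients:", np.round(coef, 4))
print("corrected reading:", round(float(corrected_flow(2.0, 0.10, 0.5)), 4))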
APA, Harvard, Vancouver, ISO, and other styles
8

Yang, Yingjie. "Investigation on soft computing techniques for airport environment evaluation systems." Thesis, Loughborough University, 2008. https://dspace.lboro.ac.uk/2134/35015.

Full text of the source
Abstract:
Spatial and temporal information exist widely in engineering fields, especially in airport environmental management systems. The airport environment is influenced by many different factors, and uncertainty is a significant part of the system. Decision support considering this kind of spatial and temporal information and uncertainty is crucial for airport-environment-related engineering planning and operation. Geographical information systems and computer aided design are two powerful tools for supporting spatial and temporal information systems. However, present geographical information systems and computer aided design software are still too general in considering the special features of the airport environment, especially uncertainty. In this thesis, a series of parameters and methods for neural network-based knowledge discovery and training improvement are put forward, such as the relative strength of effect, a dynamic state space search strategy and a compound architecture.
APA, Harvard, Vancouver, ISO, and other styles
9

Amina, Mahdi. "Dynamic non-linear system modelling using wavelet-based soft computing techniques." Thesis, University of Westminster, 2011. https://westminsterresearch.westminster.ac.uk/item/8zwwz/dynamic-non-linear-system-modelling-using-wavelet-based-soft-computing-techniques.

Full text of the source
Abstract:
The enormous number of complex systems results in the necessity of high-level and cost-efficient modelling structures for operators and system designers. Model-based approaches offer a very challenging way to integrate a priori knowledge into the procedure. Soft computing based models, in particular, can successfully be applied in cases of highly nonlinear problems. A further reason for dealing with so-called soft computational model based techniques is that in real-world cases, many times only partial, uncertain and/or inaccurate data is available. Wavelet-based soft computing techniques are considered as one of the latest trends in system identification/modelling. This thesis provides a comprehensive synopsis of the main wavelet-based approaches to modelling the non-linear dynamical systems of real-world problems, in conjunction with possible twists and novelties aiming for a more accurate and less complex modelling structure. Initially, an on-line structure and parameter design has been considered in an adaptive Neuro-Fuzzy (NF) scheme. The problem of redundant membership functions and consequently fuzzy rules is circumvented by applying an adaptive structure. The growth of a special type of fungus (Monascus ruber van Tieghem) is examined against several other approaches for further justification of the proposed methodology. By extending the line of research, two Morlet Wavelet Neural Network (WNN) structures have been introduced. Increasing the accuracy and decreasing the computational cost are the two primary targets of the proposed novelties. Modifying the synaptic weights by replacing them with Linear Combination Weights (LCW), and also imposing a Hybrid Learning Algorithm (HLA) comprising Gradient Descent (GD) and Recursive Least Squares (RLS), are the tools utilised for the above challenges. These two models differ from the point of view of structure while they share the same HLA scheme. The second approach contains an additional multiplication layer, and its hidden layer contains several sub-WNNs for each input dimension. The practical superiority of these extensions is demonstrated by simulation and experimental results on a real non-linear dynamic system, Listeria monocytogenes survival curves in Ultra-High Temperature (UHT) whole milk, and consolidated with a comprehensive comparison with other suggested schemes. At the next stage, the extended clustering-based fuzzy version of the proposed WNN schemes is presented as the ultimate structure in this thesis. The proposed Fuzzy Wavelet Neural Network (FWNN) benefits from the clustering feature of Gaussian Mixture Models (GMMs), updated by a modified Expectation-Maximization (EM) algorithm. One of the main aims of this thesis is to illustrate how the GMM-EM scheme could be used not only for detecting useful knowledge from the data by building accurate regression, but also for the identification of complex systems. The structure of the FWNN is based on fuzzy rules that include wavelet functions in the consequent parts of the rules. In order to improve the function approximation accuracy and general capability of the FWNN system, an efficient hybrid learning approach is used to adjust the parameters of dilation, translation, weights, and membership. An Extended Kalman Filter (EKF) is employed for wavelet parameter adjustment, together with Weighted Least Squares (WLS), which is dedicated to fine-tuning the Linear Combination Weights.
The results of a real-world application of Short Time Load Forecasting (STLF) further reinforced the plausibility of the above technique.
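For readers unfamiliar with wavelet networks, the hedged sketch below shows only the forward pass of a small Morlet-wavelon network with per-unit translations and dilations and a linear combination of outputs; the parameter values are arbitrary, and the GD/RLS and EKF/WLS training described above is not reproduced.

# Illustrative sketch only: forward pass of a small wavelet neural network
# with Morlet "wavelons", each having its own translation and dilation, and a
# linear combination of wavelet outputs. All parameter values are arbitrary.
import numpy as np

def morlet(z):
    # Real Morlet mother wavelet
    return np.cos(1.75 * z) * np.exp(-0.5 * z ** 2)

def wnn_forward(x, translations, dilations, weights, bias):
    """x: (n_samples, n_inputs); one wavelon per hidden unit."""
    # Shift and scale each input dimension for every wavelon
    z = (x[:, None, :] - translations[None, :, :]) / dilations[None, :, :]
    hidden = morlet(z).prod(axis=2)          # product over input dimensions
    return hidden @ weights + bias           # linear combination weights

rng = np.random.default_rng(3)
n_inputs, n_wavelons = 2, 6
translations = rng.uniform(-1, 1, (n_wavelons, n_inputs))
dilations = rng.uniform(0.5, 2.0, (n_wavelons, n_inputs))
weights = rng.normal(size=n_wavelons)

x = rng.uniform(-1, 1, (5, n_inputs))
print(wnn_forward(x, translations, dilations, weights, bias=0.1))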
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Mingwu. "Motion planning and control of mobile manipulators using soft computing techniques." Thesis, University of Sheffield, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266128.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
11

Zhao, Yisu. "Human Emotion Recognition from Body Language of the Head using Soft Computing Techniques." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23468.

Full text of the source
Abstract:
When people interact with each other, they not only listen to what the other says, they also react to facial expressions, gaze direction, and head movement. Human-computer interaction would be enhanced in a friendly and non-intrusive way if computers could understand and respond to users’ body language in the same way. This thesis aims to investigate new methods for human-computer interaction by combining information from the body language of the head to recognize emotional and cognitive states. We concentrated on the integration of facial expression, eye gaze and head movement using soft computing techniques. The whole procedure is done in two stages. The first stage focuses on the extraction of explicit information from the modalities of facial expression, head movement, and eye gaze. In the second stage, all this information is fused by soft computing techniques to infer the implicit emotional states. In this thesis, the frequency of head movement (high-frequency or low-frequency movement) is taken into consideration as well as head nods and head shakes. Very high-frequency head movement may indicate much more arousal and a more active state than low-frequency head movement, which occupies a different region of the emotion dimensional space. The head movement frequency is acquired by analyzing the tracking results of the coordinates of the detected nostril points. Eye gaze also plays an important role in emotion detection. An eye gaze detector was proposed to analyze whether the subject's gaze direction was direct or averted. We proposed a geometrical relationship between the nostrils and the two pupils to achieve this task. Four parameters are defined according to the changes in angles and the changes in the proportions of the lengths between the four feature points to distinguish averted gaze from direct gaze. The sum of these parameters is considered as an evaluation parameter that can be analyzed to quantify gaze level. The multimodal fusion is done by hybridizing decision-level fusion with soft computing techniques for classification. This avoids the disadvantages of the decision-level fusion technique, while retaining its advantages of adaptation and flexibility. We introduced fuzzification strategies which can successfully quantify the extracted parameters of each modality into a fuzzified value between 0 and 1. These fuzzified values are the inputs for the fuzzy inference systems which map the fuzzy values into emotional states.
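As a toy illustration of the fuzzification step described above (the membership breakpoints and rules below are invented, not the ones tuned in the thesis), two extracted cues are mapped into [0, 1] and combined with simple min/max rules.

# Illustrative sketch only: fuzzifying two extracted cues (head-movement
# frequency and gaze aversion) into [0, 1] and combining them with a toy
# rule base. Breakpoints and rules are made up for demonstration.
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function on [a, d] with plateau [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def fuzzify(head_freq_hz, gaze_aversion):
    high_freq = trapezoid(head_freq_hz, 1.0, 2.5, 6.0, 8.0)
    averted = trapezoid(gaze_aversion, 0.2, 0.5, 1.0, 1.2)
    # Toy rules: "agitated" if high-frequency movement AND averted gaze,
    # "engaged" if neither cue fires strongly.
    agitated = min(high_freq, averted)
    engaged = 1.0 - max(high_freq, averted)
    return {"high_freq": high_freq, "averted": averted,
            "agitated": agitated, "engaged": engaged}

print(fuzzify(head_freq_hz=3.0, gaze_aversion=0.7))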
APA, Harvard, Vancouver, ISO, and other styles
12

Merchán-Cruz, Emmanuel Alejandro. "Soft-computing techniques in the trajectory planning of robot manipulators sharing a common workspace." Thesis, University of Sheffield, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419281.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
13

Soufian, Majeed. "Hard and soft computing techniques for non-linear modeling and control with industrial applications." Thesis, Manchester Metropolitan University, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.273053.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
14

Khalid, Fakhar. "Use of soft computing pattern recognition techniques to analyse tropical cyclones from meteorological satellite data." Thesis, University of Greenwich, 2013. http://gala.gre.ac.uk/11887/.

Full text of the source
Abstract:
Tropical cyclones are potentially the most destructive of all natural meteorological hazards. When these cloud systems make landfall, they cause significant amounts of human death and injury as well as extensive damage to property; reliable cyclone detection and classification therefore form important weather forecasting activities. The work carried out in this research has focused on developing a unique fuzzy logic based pattern recognition algorithm for identifying key recognisable structural elements of North Atlantic basin hurricanes. Due to the vaguely defined boundaries and cloud patterns associated with cyclones and hurricanes, existing hurricane detection and classification techniques such as Advanced Dvorak's T-classification technique and Objective Dvorak's technique fail to deal with the pattern uncertainties. Therefore, a fuzzy logic pattern recognition model was developed and implemented to overcome the shortcomings of manual and subjective algorithms for tropical cyclone detection and intensity classification. Three key storm patterns are recognised during the cyclogenesis of any hurricane: the Central Dense Overcast (CDO); the eye of the storm; and the spiral rain bands. A cognitive linguistic grammar based approach was used to semantically arrange the key structural components of hurricanes. A fuzzy rule based model was developed to recognise these features in satellite imagery in order to analyse the geometrically uncertain shapes of clouds associated with tropical storms and classify the detected storms' intensity. The algorithms were trained using NOAA AVHRR, GOES, and Meteosat satellite data at spatial resolutions of ~4 km and ~8 km. The training data resulted in fuzzy membership functions which allowed the vagueness of cloud patterns to be classified objectively. The algorithms were validated by detecting and classifying the storm cloud patterns in both visible and infrared satellite imagery to confirm the existence of the storm features. Gradual growth of hurricanes was monitored and mapped based on a 3-hourly satellite image dataset of ~8 km spatial resolution, which provided a complete temporal profile of the region. 375 North Atlantic storms were processed, covering a period of 31 years and comprising over 112,000 satellite images of cloud patterns, to validate the accuracies of detection and intensity classification of tropical cyclones and hurricanes. The evidence from this research suggests that the fusion of fuzzy logic with traditional pattern recognition techniques and the introduction of fuzzy rules to T-classification provides a promising technique for automated detection of tropical cyclones. The system developed displayed a detection accuracy of 81.23%, while the intensity classification accuracy was measured at 78.05% with an RMSE of 0.028 T number. The 78.05% intensity classification accuracy was based on storms being recognised from the preliminary tropical-storm stage. The accuracy of hurricane or tropical cyclone intensity estimation from T1 onwards was recorded as 97.12%. While the storms were recognised, their central locations were also estimated because of their importance in tracking the hurricane. The validation process resulted in an RMSE of 0.466 degrees in longitude and an RMSE of 0.715 degrees in latitude. The high RMSE, which averages 38.4 km, suggests that the estimated centres of the storms were around 38.4 km away from the actual centres measured by NOAA. This was an anomalous figure caused by 9 wrongly georeferenced images.
Correcting this error resulted in an RMSE of 0.339 degrees in longitude and an RMSE of 0.282 degrees in latitude, approximating an average shift of 19 km. In the 1990s such forecasting errors hovered around the 100 km mark, while according to NOAA current research averages track accuracies of around 50 km, making this research a valuable contribution to the research domain. This research has demonstrated that subjective and manual hurricane recognition techniques can be successfully automated using soft computing based pattern recognition algorithms in order to process a diverse range of meteorological satellite imagery. This can be essential in understanding the detection of cloud patterns occurring in natural disasters such as tropical cyclones, assisting their accurate prediction.
APA, Harvard, Vancouver, ISO, and other styles
15

Teixeira, Rafael Luís. "Projeto, construção e caracterização de um amortecedor ativo controlado por atuador piezoelétrico." Universidade Federal de Uberlândia, 2007. https://repositorio.ufu.br/handle/123456789/14796.

Full text of the source
Abstract:
Fundação de Amparo a Pesquisa do Estado de Minas Gerais
This thesis presents the design methodology, the construction of a prototype and the experimental validation of an active vibration damper which is controlled by a piezoelectric actuator. The proposed device has two flexible metallic bellows connected to a rigid reservoir filled with a viscous fluid. When one of the bellows is connected to a vibrating structure, a periodic flow passes through a variable internal orifice and the damping effect is produced. The size of the orifice is adjusted by a piezoelectric control system that positions a conical core in a conical cavity. The finite element computational model of the damper device was developed considering that the valve body is rigid and that fluid-structure interaction occurs between the fluid and the flexible bellows. This model is discretized using a Lagrangian-Eulerian formulation. The actuator has a closed flexible metallic structure that amplifies the displacement produced by an internally mounted stack of piezoelectric ceramic layers, and it is also modeled by the finite element method. The damper prototype was built and experimental tests using impulsive and harmonic excitations were conducted to determine its dynamic behavior and also to validate the developed computational models. The simulation and experimental results are compared by curves that relate the damping coefficient to the size of the orifice. Reduced dynamical models are proposed to represent the behavior of the damper device with fixed and variable orifice sizes. A local classic PID controller for the piezoelectric actuator was designed to ensure that the valve core assumes the correct position, providing the commanded damping coefficient. The damper device was applied to a vibration system that represents the model of a quarter-car vehicle. An on-off controller and a fuzzy controller were designed to control the vibrations of the vehicle equipped with the proposed active damper. Experimental tests showed that the damping coefficient values commanded by the global controller were achieved in time intervals of less than 10 milliseconds. These results demonstrate the very good performance of the proposed damper device.
Doctorate in Mechanical Engineering
APA, Harvard, Vancouver, ISO, and other styles
16

Owa, Kayode Olayemi. "Non-linear model predictive control strategies for process plants using soft computing approaches." Thesis, University of Plymouth, 2014. http://hdl.handle.net/10026.1/3031.

Full text of the source
Abstract:
The development of advanced non-linear control strategies has attracted considerable research interest over the past decades, especially in process control. Rather than relying absolutely on mathematical models of process plants, which often bring discrepancies owing in particular to design errors and equipment degradation, non-linear models are required because they provide improved prediction capabilities, but they are very difficult to derive. In addition, the derivation of the global optimal solution gets more difficult, especially when multivariable and non-linear systems are involved. Hence, this research investigates soft computing techniques for the implementation of a novel real-time constrained non-linear model predictive controller (NMPC). The time-frequency localisation characteristics of the wavelet neural network (WNN) were utilised for non-linear model design using a system identification approach from experimental data, improving upon the conventional artificial neural network (ANN), which is prone to a low convergence rate and difficulties in locating the global minimum during the training process. Salient features of particle swarm optimisation and a genetic algorithm (GA) were combined to optimise the network weights. Real-time optimisation at every sampling instant is achieved using a GA, delivering results both in simulations and in real-time implementation on coupled-tank systems, with further extension to a complex quadruple-tank process in simulations. The results show the superiority of the novel WNN-NMPC approach in terms of the average controller energy and mean squared error over the conventional ANN-NMPC strategies and the PID control strategy for both SISO and MIMO systems.
APA, Harvard, Vancouver, ISO, and other styles
17

Mistry, Pritesh. "A Knowledge Based Approach of Toxicity Prediction for Drug Formulation. Modelling Drug Vehicle Relationships Using Soft Computing Techniques." Thesis, University of Bradford, 2015. http://hdl.handle.net/10454/14440.

Full text of the source
Abstract:
This multidisciplinary thesis is concerned with the prediction of drug formulations for the reduction of drug toxicity. Both scientific and computational approaches are utilised to make original contributions to the field of predictive toxicology. The first part of this thesis provides a detailed scientific discussion on all aspects of drug formulation and toxicity. Discussions are focused around the principal mechanisms of drug toxicity and how drug toxicity is studied and reported in the literature. Furthermore, a review of the current technologies available for formulating drugs for toxicity reduction is provided. Examples of studies reported in the literature that have used these technologies to reduce drug toxicity are also reported. The thesis also provides an overview of the computational approaches currently employed in the field of in silico predictive toxicology. This overview focuses on the machine learning approaches used to build predictive QSAR classification models, with examples from the literature provided. Two methodologies have been developed as part of the main work of this thesis. The first is focused on the use of directed bipartite graphs and Venn diagrams for the visualisation and extraction of drug-vehicle relationships from large un-curated datasets which show changes in the patterns of toxicity. These relationships can be rapidly extracted and visualised using the methodology proposed in chapter 4. The second methodology involves mining large datasets for the extraction of drug-vehicle toxicity data. The methodology uses an area-under-the-curve principle to make pairwise comparisons of vehicles, which are classified according to the toxicity protection they offer, from which predictive classification models based on random forests and decision trees are built. The results of this methodology are reported in chapter 6.
APA, Harvard, Vancouver, ISO, and other styles
18

Cakit, Erman. "Investigating The Relationship Between Adverse Events and Infrastructure Development in an Active War Theater Using Soft Computing Techniques." Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5777.

Full text of the source
Abstract:
The military recently recognized the importance of taking sociocultural factors into consideration. Therefore, Human Social Culture Behavior (HSCB) modeling has been getting much attention in current and future operational requirements to successfully understand the effects of social and cultural factors on human behavior. There are different kinds of modeling approaches to the data being used in this field and so far none of them has been widely accepted. HSCB modeling needs the capability to represent complex, ill-defined, and imprecise concepts, and soft computing modeling can deal with these concepts. There is currently no study on the use of any computational methodology for representing the relationship between adverse events and infrastructure development investments in an active war theater. This study investigates the relationship between adverse events and infrastructure development projects in an active war theater using soft computing techniques, including fuzzy inference systems (FIS), artificial neural networks (ANNs), and adaptive neuro-fuzzy inference systems (ANFIS), which directly benefit from their accuracy in prediction applications. Fourteen developmental and economic improvement project types were selected; allocated budget values and the number of projects at different time periods, urban and rural population density, and the total number of adverse events in the previous month were selected as independent variables. A total of four outputs, reflecting adverse events in terms of the number of people killed, wounded and hijacked and the total number of adverse events, has been estimated. For each model, the data was grouped for training and testing as follows: the years between 2004 and 2009 (for training purposes) and the year 2010 (for testing). Ninety-six different models were developed and investigated for Afghanistan, and the country was divided into seven regions for analysis purposes. The performance of each model was investigated and compared to all other models with the calculated mean absolute error (MAE) values and the prediction accuracy within the ±1 error range (difference between actual and predicted value). Furthermore, sensitivity analysis was performed to determine the effects of input values on dependent variables and to rank the top ten input parameters in order of importance. According to the results obtained, it was concluded that ANNs, FIS, and ANFIS are useful modeling techniques for predicting the number of adverse events based on historical development or economic project data. When the model accuracy was calculated based on the MAE for each of the models, the ANN had better predictive accuracy than the FIS and ANFIS models in general, as demonstrated by experimental results. The percentages of prediction accuracy within the ±1 error range were around 90%. The sensitivity analysis results show that the importance of economic development projects varies based on the regions, population density, and occurrence of adverse events in Afghanistan. For the purpose of allocating resources and developing regions, the results can be summarized by examining the relationship between adverse events and infrastructure development in an active war theater; emphasis was on predicting the occurrence of events and assessing the potential impact of regional infrastructure development efforts on reducing the number of such events.
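The two evaluation measures named in this abstract, mean absolute error and accuracy within a ±1 error band, can be illustrated in a few lines of code; the numbers below are invented for demonstration only.

# Illustrative sketch only: MAE and the share of predictions within +/-1 of
# the actual adverse-event count, on made-up numbers.
import numpy as np

actual = np.array([3, 0, 5, 2, 7, 1, 0, 4])
predicted = np.array([2, 0, 6, 2, 9, 1, 1, 4])

mae = np.mean(np.abs(actual - predicted))
within_one = np.mean(np.abs(actual - predicted) <= 1)

print(f"MAE = {mae:.2f}, within +/-1 accuracy = {within_one:.0%}")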
Ph.D.
Doctorate
Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering
APA, Harvard, Vancouver, ISO, and other styles
19

Izza, Yacine. "Informatique ubiquitaire : techniques de curage d'informations perverties On the extraction of one maximal information subset that does not conflit with multiple contexts Extraction d'un sous-ensemble maximal qui soit cohérent avec des contextes mutuellement contradictoires On computing one max-inclusion consensus On admissible consensuses Boosting MCSes enumeration." Thesis, Artois, 2018. http://www.theses.fr/2018ARTO0405.

Full text of the source
Abstract:
This thesis studies a possible approach of artificial intelligence for detecting and filtering inconsistent information in knowledge bases of intelligent objects and components in ubiquitous computing. This approach is addressed from a practical point of view in the SAT framework; it is about implementing techniques for filtering inconsistencies in contradictory bases. Several contributions are made in this thesis. Firstly, we have worked on the extraction of one maximal information set that must be satisfiable with multiple assumptive contexts. We have proposed an incremental approach for computing such a set (AC-MSS). Secondly, we were interested in the enumeration of maximal satisfiable sets (MSS) or their complementary minimal correction sets (MCS) of an unsatisfiable CNF instance. In this contribution, a technique is introduced that boosts the currently most efficient practical approaches to enumerating MCS. It implements a model rotation paradigm that allows the set of MCS to be computed in a heuristically efficient way. Finally, we have studied a notion of consensus for reconciling several sources of information. This form of consensus can obey various preference criteria, including maximality. We have then developed an incremental algorithm for computing one maximal consensus with respect to set-theoretical inclusion. We have also introduced and studied the concept of admissible consensus, which refines the initial concept of consensus.
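As a definitional illustration of MSS and MCS only (practical tools rely on SAT solvers and the boosting techniques studied in the thesis, none of which is reproduced here), the sketch below enumerates the maximal satisfiable subsets of a tiny unsatisfiable CNF by brute force.

# Illustrative sketch only: brute-force enumeration of maximal satisfiable
# subsets (MSS) and their complements (MCS) for a toy CNF over two variables.
from itertools import combinations, product

# A clause is a tuple of integer literals (negative = negated variable)
clauses = [(1,), (-1,), (1, 2), (-2,)]          # jointly unsatisfiable

def satisfiable(subset):
    for assignment in product([False, True], repeat=2):
        value = {1: assignment[0], 2: assignment[1]}
        if all(any(value[abs(l)] == (l > 0) for l in c) for c in subset):
            return True
    return False

mss_list = []
for size in range(len(clauses), -1, -1):        # largest subsets first
    for subset in combinations(range(len(clauses)), size):
        if satisfiable([clauses[i] for i in subset]):
            # keep only subsets not contained in an already-found MSS
            if not any(set(subset) <= set(m) for m in mss_list):
                mss_list.append(subset)

for mss in mss_list:
    mcs = tuple(i for i in range(len(clauses)) if i not in mss)
    print("MSS:", [clauses[i] for i in mss], "MCS:", [clauses[i] for i in mcs])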
APA, Harvard, Vancouver, ISO, and other styles
20

VINODIA, DEEPAK KUMAR. "APPLICATION OF SOFT COMPUTING TECHNIQUES FOR SOFTWARE RELIABILITY PREDICTION." Thesis, 2017. http://dspace.dtu.ac.in:8080/jspui/handle/repository/15872.

Full text of the source
Abstract:
Background: Software reliability prediction has become a key activity in the field of software engineering. It is the process of constructing models that can be used by software practitioners and researchers for assessing and predicting the reliability of the software product. This activity provides significant information about the software product, such as "when to stop testing" or "when to release the software product", and other important information. Thus, effective reliability prediction models provide critical information to software stakeholders. Method: In this paper, we have conducted a systematic literature review of studies from the year 2005 to 2016 which use soft computing techniques for software reliability prediction. The studies are examined with specific emphasis on the various soft computing techniques used, their strengths and weaknesses, the investigated datasets, the validation methods and the evaluated performance metrics. The review also analyses the various threats reported by software reliability prediction studies and the statistical tests used in the literature for evaluating the effectiveness of soft computing techniques for software reliability prediction. Results: After performing strict quality analysis, we found 31 primary studies. The conclusions made based on the data taken from the primary studies indicate wide use of public datasets for developing software reliability prediction models. Moreover, we identified the five most commonly used soft computing techniques for software reliability prediction, namely Neural Networks, Fuzzy Logic, Genetic Algorithm, Particle Swarm Optimization and Support Vector Machine. Conclusion: This review summarizes the most commonly used soft computing techniques for software reliability prediction, their strengths and weaknesses, and their predictive capabilities. The suitability of a specific soft computing technique is an open issue, as it depends heavily on the nature of the problem and its characteristics. Every software project has its own growth behavior and complexity pattern. Hence, more studies should be conducted to allow generalization of the results. The review also provides future guidelines to researchers in the domain of software reliability prediction.
APA, Harvard, Vancouver, ISO, and other styles
21

Hsieh, Son-Chin, and 謝松慶. "The Study of Soft Computing Technique to Assist PI Controller Design." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/55073384499876653447.

Full text of the source
Abstract:
Master's
Da-Yeh University
Master's Program, Department of Electrical Engineering
93
Soft computing techniques such as fuzzy logic (FL), neural network (NN) learning and genetic algorithms (GA) are used for the DC motor control problem in this study. Firstly, a mathematical model of the motor is estimated. A basic PI-type controller is designed for the position control problem. A fuzzy logic controller (FLC) is tried for the outer position-loop control without a velocity feedback loop. A new structure in which an NN assists the PI control is investigated. Finally, scaling factors are searched for by the GA. The final result is promising. Comparisons of control results obtained with the different controllers are discussed, including the control effects and robustness to parameter variation of the plant.
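As a rough illustration of GA-based gain search of the kind mentioned above (the plant, fitness measure and GA settings below are invented and differ from those in the thesis), a small genetic algorithm tunes PI gains against a first-order motor model.

# Illustrative sketch only: GA search for PI gains on a hypothetical
# first-order DC-motor speed model, using integral of squared error as cost.
import numpy as np

rng = np.random.default_rng(4)
dt, steps = 0.002, 1000                 # 2-second simulated step response

def step_response_cost(kp, ki):
    """Integral of squared error for a unit step on a first-order plant."""
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = 1.0 - y
        integ += err * dt
        u = kp * err + ki * integ
        y += dt * (-2.0 * y + 2.0 * u)  # tau = 0.5 s, unity gain plant
        cost += err * err * dt
    return cost

pop = rng.uniform(0.1, 20.0, size=(20, 2))          # population of (Kp, Ki)
for gen in range(30):
    fitness = np.array([step_response_cost(kp, ki) for kp, ki in pop])
    parents = pop[np.argsort(fitness)[:6]]          # selection of the best 6
    children = []
    while len(children) < 14:
        a, b = parents[rng.integers(6)], parents[rng.integers(6)]
        child = np.where(rng.random(2) < 0.5, a, b)  # uniform crossover
        child += rng.normal(0, 0.5, 2)               # mutation
        children.append(np.clip(child, 0.01, 50.0))
    pop = np.vstack([parents, children])

best_kp, best_ki = pop[np.argmin([step_response_cost(*p) for p in pop])]
print(f"best gains: Kp = {best_kp:.2f}, Ki = {best_ki:.2f}")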
APA, Harvard, Vancouver, ISO, and other styles
22

Chang, Tsi-Chow, and 張智超. "An Investigation on Soft Computing Technique Applied to the Controller Design and Simulation of Control Systems." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/83992920971292461426.

Full text of the source
Abstract:
Master's
Da-Yeh University
Department of Electrical Engineering
95
An inverted pendulum system is an unstable system. However, its mechanical structure is not too complicated, so it is generally used as a benchmark in control system design. The Twin-Rotor MIMO System (TRMS) is also an unstable system. Due to its complex mechanism and control, the severe coupling effect between the main (pitch) control and the tail (yaw) control, and the nonlinearity of the mechanical structure, a precise mathematical model is inappropriate or impossible. The difficulty of controller design for TRMS systems is therefore greater than for inverted pendulum systems. In this thesis, the use of a soft-computing controller called K+NN (K stands for gain, NN is the abbreviation of neural networks), which acts as a transient assistor, is suggested to improve the systems above. The necessary parameters for K+NN, as well as the parameters of the original controllers, can be tuned using an optimization approach such as particle swarm optimization. Parameters are found off-line beforehand by simulations. Detailed developments of the mathematical models of the two control systems mentioned above are derived in the thesis, with applicable data for simulations. Different possible controller structures, such as PID, fuzzy-logic control and hybrid control (including state feedback control), are first reviewed. With the aid of the K+NN assistor added to these control systems, simulation results are investigated to prove the capability of the K+NN assistor. Simulink and Matlab software are used for simulation and validation of the control design.
APA, Harvard, Vancouver, ISO, and other styles
23

Bagwan, Manish. "Prediction of adhesive strength, deposition efficiency and wear behaviour of plasma spray coating of low grade mineral on mild steel and copper Substrate by soft computing technique." Thesis, 2013. http://ethesis.nitrkl.ac.in/5391/1/211MM1359.pdf.

Full text of the source
Abstract:
Currently emerging technologies contain some of the most prominent ongoing advances, innovations and developments in a variety of engineering fields aimed at improving surface properties using modern technology. The drive for higher productivity and efficiency across the entire spectrum of manufacturing and engineering industries has ensured that most modern-day components/parts are subjected to increasingly harsh environments in routine operation. All the critical industrial components of machines are, therefore, prone to more rapid degradation as the parts fail to withstand the aggressive operating conditions, and this has been affecting the industry's economy to a very high extent. The prime objective is to develop essential surface properties with an economical process. This investigation explores the coating potential of industrial wastes. Fly-ash emerges as a major waste from thermal power plants. It mainly comprises oxides of iron, aluminium, titanium and silicon along with some other minor constituents. Fly-ash premixed with ilmenite and quartz, which are low-cost minerals available in plenty, is excellent for providing protection against erosion and abrasive wear. Plasma spraying is gaining acceptance for the development of quality coatings of various materials on a wide range of substrates. Use of industrial wastes of this kind as coating material minimizes the plasma spray coating deposition cost, which has posed the major obstacle to widespread use due to the high cost of spray-grade powders. Fly-ash+quartz+ilmenite (weight percentage ratio 55:25:20) is deposited on copper and mild steel substrates by atmospheric plasma spraying at operating power levels ranging from 10 to 20 kW, and characterization of the coatings is then carried out. The properties/quality of the coatings depend on the operating conditions, process parameters and materials used. The plasma spraying process is controlled by interdependent parameters, their correlations and their individual effects on coating characteristics. The particle size of the raw material used for coating is characterized using a Malvern Instruments laser particle size analyzer. Coating interface adhesion strength is measured using a coating pull-off method conforming to the ASTM C-633 standard. Deposition efficiency, a key factor that determines the techno-economics of the process, is evaluated for the deposited coatings. Coating thickness is measured on the polished cross-section using an optical microscope.
APA, Harvard, Vancouver, ISO, and other styles
24

Schmidt, S. "A trust-aware framework for service selection and service quality review in e-business ecosystems." Thesis, 2008. http://hdl.handle.net/10453/37705.

Full text of the source
Abstract:
University of Technology, Sydney. Faculty of Information Technology.
As e-Business has moved from a niche market to a decisive contributor for the success of most companies, some issues need to be solved in order to assist the continued success of e-Business. The challenge, to deploy fully autonomous business service agents which undertake transactions on behalf of their owners, often fails due to lack of trust in the agent and its decisions. Four aspects can overcome this challenge. Firstly, intelligent agents need to be equipped with self-adjusting reputation, trustworthiness and credibility evaluation mechanisms to assess the trustworthiness of potential counterparts prior to a business transaction. Secondly, such evaluation mechanisms must be transparent and easy to comprehend so agent owners develop trust in their agents’ decisions. Thirdly, the calculations of an agent must be highly customisable so that the agent owner can apply his personal experiences and security requirements to govern the decision making process of the intelligent agent. And finally, agents must communicate via standardised and open protocols in order to facilitate interaction between services deployed across different architectures and technologies. This thesis proposes the DEco Arch framework which integrates behavioural trust element relationships into various decision making processes found in e-Business ecosystems. We apply fuzzy-logic based soft computing techniques to increase user confidence and therefore enhance the adoption of the proposed assessment and review methodologies. A proof-of-concept implementation of the DEco Arch framework has been developed to showcase the proposed concepts in a case study and to conduct empirical experiments to evaluate the robustness and practicability of the proposed methodologies.
APA, Harvard, Vancouver, ISO, and other styles
25

Kumari, Smita. "Web Service Selection Using Soft Computing Techniques." Thesis, 2015. http://ethesis.nitrkl.ac.in/7152/1/Web_Kumari_2015.pdf.

Full text of the source
Abstract:
Web service selection is one of the important aspects of SOA. It helps to integrate services to build a particular application. Web services need to be selected using appropriate interaction styles, i.e., either the Simple Object Access Protocol (SOAP) or Representational State Transfer (REST), because choosing a web service interaction pattern is a crucial architectural concern for developing the application and has an impact on the development process. In this study, the performance of web services for an Enterprise Application based on SOAP and REST is compared. Since web services operate over the network, throughput and response time are considered as metrics for evaluation. In the literature, it is observed that emphasis is given to interaction style for selecting web services. However, as the number of services grows day by day, it is time-consuming and difficult to select services that offer similar functionalities. Web services are often described in terms of their functionalities and set of operations. If a customer chooses a service that is of low quality or has malicious content, it can affect the overall performance of the application. Hence, web services are selected based on quality of service (QoS) attributes. In this proposed work, various models are designed using soft computing techniques such as the Back Propagation Network (BPN), Radial Basis Function Network (RBFN), Probabilistic Neural Network (PNN) and hybrid Artificial Neural Network (ANN) for web service selection, and their performances are compared based on various performance parameters.
APA, Harvard, Vancouver, ISO, and other styles
26

Mohammad, Naseem. "Use of Soft Computing Techniques for Transducer Problems." Thesis, 2008. http://ethesis.nitrkl.ac.in/29/1/20607029.pdf.

Full text of the source
Abstract:
In many control system applications the Linear Variable Differential Transformer (LVDT) plays an important role in measuring displacement. The performance of the control system depends on the performance of the sensing element. It is observed that the LVDT exhibits nonlinear input-output characteristics. Due to such nonlinearities, direct digital readout is not possible. As a result, LVDTs are employed only in the linear region of their characteristics. In other words, their usable range is restricted due to the presence of nonlinearity. If the LVDT is used over the full range of its nonlinear characteristics, the accuracy of measurement is severely affected. To reduce these nonlinearities, different ANN techniques are used, such as the single neuron structure, MLP structure, RBFNN and ANFIS structure. Another problem considered here is flow measurement. Conventional flow meters used for feedback in the flow-control loop cause a pressure drop in the flow and in turn lead to the use of more energy for pumping the fluid. An alternative approach for determining the flow rate without flow meters is therefore considered. The restriction characteristics of the flow-control valve are captured by a neural network (NN) model. The relationship between the flow rate and the physical properties of the flow as well as of the flow-control valve, that is, pressure drop, pressure, temperature, and flow-control valve coefficient (valve position), is found. With these accessible properties, the NN model yields the flow rate of fluid across the flow-control valve, which thus acts as a flow meter. The viability of the proposed methodology is illustrated by real measurements of water flow, which is widely used in hydraulic systems. Control of fluid flow is essential in process-control plants. The signal of the flow measured using the flow meter is compared with the signal of the desired flow by the controller. The controller output accordingly adjusts the opening/closing actuator of the flow-control valve in order to maintain the actual flow close to the desired flow. Typically, flow meters of comparatively low cost, such as turbine-type flow meters and venturi-type meters, are used to measure the volumetric quantity of fluid flow per unit time in a flow process. However, the flow meter inevitably induces a pressure drop in the flow, which in turn results in the use of more energy for pumping the fluid. To avoid this problem, non-contact flow meters, i.e. electromagnetic-type flow meters, have been developed and are widely used in process plants, not only because there is no requirement for installation in the pipeline but also because no differential pressure across the pipeline is introduced. Unfortunately, the cost of such non-contact measurement is comparatively much higher than that of its conventional counterparts. Here, an alternative approach is proposed to obtain fluid flow measurement for flow-control valves without the pressure drop and the consequent power loss that appear with conventional flow meters. Without the flow meter, the flow rate can be determined from the characteristics of the control valve. In this method, the restriction characteristics of the control valve, embedded in a neural network (NN) model, are used to determine the flow rate instead of actual measurement using a conventional flow meter.
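As a simplified stand-in for the NN flow model described above (the valve characteristic, noise level and fitted form below are textbook-style assumptions, not the thesis's rig or network), the hedged sketch fits a flow estimate from valve opening and pressure drop.

# Illustrative sketch only: estimating flow rate from flow-control-valve data
# (valve opening and pressure drop) with a simple fitted model. The data are
# generated from an assumed Q = Cv * opening * sqrt(dP) characteristic.
import numpy as np

rng = np.random.default_rng(5)
n = 300
opening = rng.uniform(0.1, 1.0, n)              # fractional valve position
dp = rng.uniform(0.2, 2.0, n)                   # pressure drop (bar)
cv_max = 10.0
flow = cv_max * opening * np.sqrt(dp) * (1 + rng.normal(0, 0.02, n))  # "measured"

# Fit log(flow) = a + b*log(opening) + c*log(dp) by least squares
A = np.column_stack([np.ones(n), np.log(opening), np.log(dp)])
coef, *_ = np.linalg.lstsq(A, np.log(flow), rcond=None)

def estimate_flow(opening, dp):
    return np.exp(coef[0] + coef[1] * np.log(opening) + coef[2] * np.log(dp))

print("fitted exponents:", np.round(coef[1:], 3))       # expect roughly [1.0, 0.5]
print("estimated flow at 50% opening, 1 bar:", round(float(estimate_flow(0.5, 1.0)), 2))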
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Ranjan, Ankit Raj. "Modeling Capacity of Roundabouts Using Soft Computing Techniques." Thesis, 2017. http://ethesis.nitrkl.ac.in/8756/1/2017_MT_AR_Ranjan.pdf.

Повний текст джерела
Анотація:
The roundabout is a very common alternative to stop-controlled intersections and is an increasingly common form of road junction all over the world, widely used in urban and some rural areas of India. For the effective design of roundabouts, a detailed analysis of the maximum vehicle throughput is required; this is termed capacity prediction. In India the nature of traffic is heterogeneous, but most studies related to critical gap evaluation and capacity prediction have been carried out in developed countries with homogeneous traffic flow and strictly followed lane discipline. In transportation engineering research, suitable and proper analysis of data is probably the most widely used and accessible research tool. In this research work a video recording technique was adopted for data collection and analysis, and the gap acceptance behaviour of drivers is presented for twenty-seven unsignalized roundabout sites in India. Statistics and computational intelligence are the two main approaches used in this work for data analysis. The capacity of roundabouts is mostly predicted using regression analysis and gap-acceptance-based models, but their results are not satisfactory. Variation in driver behaviour and in the prediction techniques used by the various models results in differences in the predicted capacities. A newer approach used frequently in this area is soft computing. It includes various alternatives for predicting the capacity of roundabouts, such as the artificial neural network (ANN), fuzzy logic, cellular automata, and the adaptive neuro-fuzzy inference system (ANFIS). Fully empirical, gap-acceptance and simulation approaches are the three main methodologies for capacity prediction models. In this study the critical gap is estimated using gap-acceptance models, and regression and a soft computing technique, an ANN model, are used for capacity prediction of roundabouts.
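For illustration only, the sketch below fits a simple exponential regression of entry capacity against circulating flow, one common empirical form used alongside ANN models; the traffic values are hypothetical and the functional form is an assumption, not the model of the cited thesis.

```python
# Illustrative sketch with hypothetical traffic values: fitting an exponential
# regression C = A * exp(B * Qc) (B expected negative) for entry capacity versus
# circulating flow, a simple empirical form used alongside ANN models.
import numpy as np

circulating_flow = np.array([200, 400, 600, 800, 1000, 1200])   # pcu/h (assumed)
entry_capacity   = np.array([1350, 1180, 1010, 880, 760, 650])  # pcu/h (assumed)

# Linearize as ln(C) = ln(A) + B * Qc and solve by least squares.
B, lnA = np.polyfit(circulating_flow, np.log(entry_capacity), 1)
A = np.exp(lnA)
print(f"fitted model: C = {A:.0f} * exp({B:.5f} * Qc)")
print(f"predicted capacity at Qc = 900 pcu/h: {A * np.exp(B * 900):.0f}")
```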
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Panigrahi, N., and S. Tripathy. "Application of Soft Computing Techniques to RADAR Pulse Compression." Thesis, 2010. http://ethesis.nitrkl.ac.in/1693/1/Application_of_Soft_Computing_Techniques_to_Radar_Pulse_Compression.pdf.

Повний текст джерела
Анотація:
Soft computing is a term associated with fields characterized by the use of inexact solutions to computationally hard tasks for which an exact solution cannot be derived in polynomial time. In contrast to conventional (hard) computing, it is tolerant of imprecision, uncertainty, partial truth, and approximation to achieve tractability, robustness and low solution cost; in effect, it resembles the human mind. The soft computing techniques used in this project work are adaptive filter algorithms and artificial neural networks. An adaptive filter is a filter that self-adjusts its transfer function according to an optimizing algorithm. The adaptive filter algorithms used in this project work are the LMS algorithm, the RLS algorithm, and a slight variation of RLS, the modified RLS algorithm. An Artificial Neural Network (ANN) is a mathematical or computational model that tries to simulate the structural and/or functional aspects of biological neural networks. It consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation. Several models have been designed to realize an ANN; in this project, the Multi-Layer Perceptron (MLP) network is used, trained with the Back-Propagation Algorithm (BPA). This project analyses the possibility of using adaptive filter algorithms to determine optimum matched-filter coefficients and of designing multi-layer perceptron networks with adequate weight and bias parameters for RADAR pulse compression. Barker codes are taken as system inputs for radar pulse compression. For the adaptive filters, a convergence-rate analysis has also been performed for system identification, and for the ANN, function approximation using a 1-2-1 neural network has also been dealt with. The adaptive filter algorithms are compared on the basis of the Peak Sidelobe Ratio (PSR). Finally, SSRs are obtained using MLPs with varying numbers of neurons and hidden layers and are then compared under several criteria such as noise performance and Doppler tolerance.
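A minimal sketch of the LMS idea follows (not the thesis code): the filter input is a Barker-13 sequence and the desired response is an impulse-like compressed output, so the weights adapt toward a sidelobe-suppressing pulse-compression filter. The step size, training length and impulse-shaped desired response are assumptions.

```python
# Minimal LMS sketch (assumed setup): adapting filter weights so that a
# Barker-13 input produces an impulse-like, low-sidelobe compressed output.
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)
N = len(barker13)
mu = 0.005                                   # LMS step size (assumed)
w = np.zeros(N)                              # adaptive filter weights

x_stream = np.concatenate([np.zeros(N - 1), barker13, np.zeros(N - 1)])
d_stream = np.zeros_like(x_stream)
d_stream[2 * N - 2] = float(N)               # ideal compressed output: single peak, no sidelobes

for epoch in range(500):                     # repeat the training sequence
    for n in range(N - 1, len(x_stream)):
        u = x_stream[n - N + 1:n + 1][::-1]  # current tap vector (most recent sample first)
        e = d_stream[n] - np.dot(w, u)       # error against the desired response
        w += mu * e * u                      # LMS weight update

print("learned weights :", np.round(w, 3))
print("matched filter  :", barker13[::-1])   # shown for comparison
```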
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Hsu, Chin-Yuan, and 許志遠. "An Intelligent Image Filter based on Soft-Computing Techniques." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/20772033919820675234.

Повний текст джерела
Анотація:
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering (Master's and Doctoral Program)
92
In this paper, we propose an intelligent image filter based on soft-computing techniques, comprising a genetic-based fuzzy image filter (GFIF) and a multilayer genetic-based fuzzy image filter (MGFF), to remove impulse noise from highly corrupted images. GFIF consists of a fuzzy number construction process, a fuzzy filtering process, a genetic learning process, and an image knowledge base. First, the fuzzy number construction process receives a sample image or the noise-free image and constructs an image knowledge base for the fuzzy filtering process. Second, the fuzzy filtering process contains a parallel fuzzy inference mechanism, a fuzzy mean process, and a fuzzy decision process to perform the task of noise removal. Finally, based on a genetic algorithm, the genetic learning process adjusts the parameters of the image knowledge base. MGFF extends GFIF to colour images. Experimental results show that GFIF and MGFF achieve better performance than state-of-the-art filters based on the criteria of Mean Squared Error (MSE) and Mean Absolute Error (MAE). In the subjective evaluation of the filtered images, GFIF and MGFF also result in higher-quality global restoration.
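The snippet below is only a small illustration of the evaluation criteria mentioned above: it corrupts a random image block with impulse noise, applies a plain median filter as a stand-in for GFIF/MGFF, and scores the result with MSE and MAE; the noise density and image are assumptions.

```python
# Small illustration of the evaluation criteria only: impulse noise is added to
# a random image block, a plain median filter stands in for the fuzzy filter,
# and MSE and MAE are computed. Noise density and image are assumptions.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64)).astype(float)   # stand-in image

noisy = original.copy()
mask = rng.random(original.shape) < 0.3                        # 30% impulse ("salt & pepper") noise
noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))

restored = median_filter(noisy, size=3)                        # stand-in for the fuzzy filter

for name, img in [("noisy", noisy), ("restored", restored)]:
    mse = np.mean((original - img) ** 2)
    mae = np.mean(np.abs(original - img))
    print(f"{name:8s}  MSE = {mse:8.1f}  MAE = {mae:6.1f}")
```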
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Teixeira, C. A. "Soft-computing techniques applied to artificial tissue temperature estimation." Doctoral thesis, 2008. http://hdl.handle.net/10400.1/237.

Повний текст джерела
Анотація:
Doctoral thesis, Electronic Engineering and Computing - Signal Processing, Universidade do Algarve, 2008
Safety and efficiency of thermal therapies strongly rely on the ability to quantify temperature evolution in the treatment region. Research has been developed in this field, and both invasive and non-invasive technologies have been reported. Until now, only magnetic resonance imaging (MRI) has achieved the hyperthermia/diathermia gold-standard temperature resolution of 0.5 °C in 1 cm³ in an in-vivo scenario. However, besides the cost of MRI technology, it does not enable broad-range therapy application due to its complex environment. Alternatively, backscattered ultrasound (BSU) seems a promising tool for thermal therapy, but until now its performance was only quantitatively tested on homogeneous media, and only single-intensity, three-point assessments have been reported. This thesis reports the research performed on the evaluation of time-spatial temperature evolution based mainly on BSU signals within artificial tissues. Extensive operating conditions were tested on several experimental setups based on dedicated phantoms. Four and eight clinical ultrasound intensities, up to five spatial points, and homogeneous and heterogeneous multi-layered phantoms were considered. Spectral and temporal temperature-dependent BSU features were extracted and applied as input information for invasive and non-invasive methodologies. Soft-computing methodologies have been used for temperature estimation. Linear iterative model structures, multi-objective genetic algorithm (MOGA) model-structure optimisation for linear models, radial basis function neural networks (RBFNNs), RBFNNs with linear inputs (RBFLICs), and the adaptive-network-based fuzzy inference system (ANFIS) have been used to estimate the temperature induced in the phantoms. The MOGA+RBFNN methodology, fed with completely data-driven information, estimated temperature with maximum absolute errors of less than 0.5 °C along two spatial axes. The proposed MOGA+RBFNN methodology applied to non-invasive estimation on multi-layered media is, as far as is known, an innovative approach, and enabled a step forward in therapeutic temperature characterisation, motivating future instrumentation temperature control.
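As a loose illustration of the RBF-network building block used in such estimators (with synthetic data rather than the phantom measurements of the thesis), the sketch below fits Gaussian basis functions with fixed centres by linear least squares; the heating curve, centres and spread are assumptions.

```python
# Minimal RBF network sketch (assumed synthetic data): Gaussian basis functions
# with fixed centres and a linear output layer solved by least squares.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(0.0, 60.0, 200)                       # heating time (s), hypothetical
y = 37.0 + 4.0 * (1 - np.exp(-X / 20.0)) \
    + 0.05 * rng.standard_normal(X.size)              # temperature (deg C), hypothetical

centers = np.linspace(0.0, 60.0, 10)                  # RBF centres (assumed)
width = 8.0                                           # common spread parameter (assumed)

def design_matrix(x):
    # one Gaussian basis function per centre, plus a bias column
    phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    return np.hstack([phi, np.ones((x.size, 1))])

w, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
y_hat = design_matrix(X) @ w
print("max absolute error (deg C):", np.max(np.abs(y - y_hat)))
```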
Fundação para a Ciência e a Tecnologia( FCT)
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Dash, Rudra Narayan. "Fault Diagnosis in Induction Motor Using Soft Computing Techniques." Thesis, 2010. http://ethesis.nitrkl.ac.in/2809/1/608EE302.pdf.

Повний текст джерела
Анотація:
Induction motors are among the most commonly used electrical machines in industry for various technical and economic reasons. These machines face various stresses during operating conditions, and these stresses might lead to certain modes of failure or faults. Hence condition monitoring becomes necessary in order to avoid catastrophic faults. Fault monitoring techniques for induction motors can be broadly categorized as model-based techniques, signal processing techniques, and soft computing techniques. In model-based techniques, accurate models of the faulty machine are essentially required to achieve good fault diagnosis. Sometimes it is difficult to obtain accurate models of the faulty machine and thus to apply model-based techniques. Soft computing techniques provide good analysis of a faulty system even if accurate models are unavailable. Besides giving improved performance, these techniques are easy to extend and modify, and they can be made adaptive through the incorporation of new data or information. A multilayer perceptron neural network using the back-propagation algorithm has been extensively applied earlier for the detection of an inter-turn short-circuit fault in the stator winding of an induction motor. This thesis extends that work by applying other neuro-computing paradigms, such as the recurrent neural network (RNN), radial basis function neural network (RBFNN), and adaptive neuro-fuzzy inference system (ANFIS), for the detection and location of an inter-turn short-circuit fault in the stator winding of an induction motor. By using the neural networks, one can identify the particular phase of the induction motor where the inter-turn short-circuit fault occurs. Subsequently, a discrete wavelet technique is exploited not only for the detection and location of an inter-turn short-circuit fault but also to quantify the degree of this fault in the stator winding. In this work, we have developed an experimental setup for the calculation of induction motor parameters under both healthy and inter-turn short-circuit faulty conditions. These parameters are used to generate the phase shifts between the line currents and phase voltages under different load conditions. The detection and location of an inter-turn short-circuit fault in the stator winding is based on the monitoring of these three phase shifts. Extensive simulation results are presented in this thesis to demonstrate the effectiveness of the proposed methods.
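For illustration, the sketch below estimates the phase shift between a synthetic phase voltage and line current from the fundamental FFT bin, the kind of quantity whose monitoring underlies the fault detection described; the waveforms, frequencies and 35-degree lag are assumptions, and this is not the thesis's procedure.

```python
# Illustrative sketch (synthetic waveforms): estimating the voltage-current
# phase shift from the fundamental component of their FFTs.
import numpy as np

fs, f0 = 5000.0, 50.0                       # sampling and supply frequency (assumed)
t = np.arange(0, 0.2, 1 / fs)
voltage = 230.0 * np.sin(2 * np.pi * f0 * t)
current = 8.0 * np.sin(2 * np.pi * f0 * t - np.deg2rad(35.0))   # 35 deg lag (hypothetical)

k = int(round(f0 * t.size / fs))            # FFT bin of the fundamental
V, I = np.fft.rfft(voltage)[k], np.fft.rfft(current)[k]
phase_shift = np.angle(V) - np.angle(I)
print("estimated phase shift (deg):", np.degrees(phase_shift))
```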
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Sadri, Sara. "Frequency Analysis of Droughts Using Stochastic and Soft Computing Techniques." Thesis, 2010. http://hdl.handle.net/10012/5198.

Повний текст джерела
Анотація:
In the Canadian Prairies, recurring droughts are one of the realities that can have significant economic, environmental, and social impacts. For example, droughts in 1997 and 2001 cost over $100 million across different sectors. Drought frequency analysis is a technique for analyzing how frequently a drought event of a given magnitude may be expected to occur. In this study the state of the science related to frequency analysis of droughts is reviewed and studied. The main contributions of this thesis include the development of a model in Matlab which uses the qualities of Fuzzy C-Means (FCM) clustering and corrects the formed regions to meet the criteria of effective hydrological regions. In FCM, each site has a degree of membership in each of the clusters. The algorithm developed is flexible enough to take the number of regions and the return period as inputs and show the final corrected clusters as output for most scenarios. Since drought is considered a bivariate phenomenon, with the two statistical variables of duration and severity to be analyzed simultaneously, an important step in this study is increasing the complexity of the initial Matlab model to correct regions based on L-comoment statistics (as opposed to L-moments). Implementing a reasonably straightforward approach for bivariate drought frequency analysis using bivariate L-comoments and copulas is another contribution of this study. Quantile estimation at ungauged sites for return periods of interest is studied by introducing two classes of neural network and machine learning methods: the Radial Basis Function (RBF) network and Support Vector Machine Regression (SVM-R). These two techniques are selected based on their good reputation in the literature for function estimation and nonparametric regression. The functionalities of RBF and SVM-R are compared with the traditional nonlinear regression (NLR) method. As well, a nonlinear regression with regionalization method, in which catchments are first regionalized using FCM, is applied and its results are compared with those of the other three models. Drought data from 36 natural catchments in the Canadian Prairies are used in this study. This study provides a methodology for bivariate drought frequency analysis that can be practised in any part of the world.
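A minimal Fuzzy C-Means sketch follows, using random two-dimensional data rather than the Prairie catchment attributes of the thesis, to show the membership-based clustering step; the number of clusters, fuzzifier and iteration count are assumptions.

```python
# Minimal Fuzzy C-Means sketch on random 2-D data: membership-based clustering
# of the kind used to form candidate homogeneous regions.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(5, 1, (30, 2))])  # sites x attributes
c, m, n_iter = 2, 2.0, 100                     # clusters, fuzzifier, iterations (assumed)

U = rng.random((c, X.shape[0]))
U /= U.sum(axis=0)                             # memberships of each site sum to 1

for _ in range(n_iter):
    Um = U ** m
    centers = (Um @ X) / Um.sum(axis=1, keepdims=True)          # weighted cluster centres
    d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
    U = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)

print("cluster centers:\n", centers)
print("membership of first site:", U[:, 0])
```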
Стилі APA, Harvard, Vancouver, ISO та ін.
33

K, GAYATHRI DEVI. "SCHEDULING IN FLEXIBLE MANUFACTURING SYSTEM (FMS) USING SOFT COMPUTING TECHNIQUES." Thesis, 2023. http://dspace.dtu.ac.in:8080/jspui/handle/repository/20197.

Повний текст джерела
Анотація:
In today's highly competitive and fast-paced manufacturing industry, companies are increasingly turning to flexible manufacturing systems (FMS) to improve their efficiency and productivity. An FMS is an automated manufacturing system that includes transport vehicles, automated storage, and a comprehensive computer control system, all working together to produce a wide variety of parts quickly and efficiently. FMS is a critical component of Industry 4.0, the fourth industrial revolution characterized by the integration of advanced technologies into a smart manufacturing process. Scheduling optimization is a crucial aspect of FMS that involves determining the optimal sequence for producing multiple components and allocating the appropriate resources to each operation. FMS scheduling optimization is of paramount importance for manufacturers, as it results in increased productivity and reduced production costs. By utilizing efficient FMS scheduling optimization, manufacturers can achieve faster production times, higher throughput rates, and improved quality control. The integration of advanced technologies with FMS scheduling optimization can lead to the development of smarter factories with improved efficiency, accuracy, and automation; as such, it is a vital element in the success of modern manufacturing operations. Traditional optimization methods, such as linear programming and dynamic programming, have been used for scheduling optimization in manufacturing for several decades. However, these methods have limitations when it comes to solving complex scheduling problems in FMS, which are characterized by large search spaces, non-linear relationships, and combinatorial constraints. Metaheuristics, a class of optimization algorithms that use heuristic rules to explore the search space efficiently, have emerged as a powerful tool for solving complex FMS scheduling problems. Metaheuristic algorithms are inspired by natural phenomena and mimic them to find near-optimal solutions by iteratively exploring the search space, making probabilistic moves, and adapting to the search environment. These algorithms can handle multiple objectives, constraints, and uncertainty, making them suitable for FMS scheduling optimization. With the advancement of computing power and the availability of high-performance computing platforms, metaheuristic algorithms have become even more useful in FMS scheduling optimization. In this research, three novel hybrid metaheuristic methods have been proposed: 1) GAPSOTS, an amalgamation of the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Tabu Search (TS); 2) HAdFA, the Hybrid Adaptive Firefly Algorithm; and 3) HFPA, the Hybrid Flower Pollination Algorithm. GAPSOTS is a simple hybridization of classic metaheuristics without any adaptive features; it suffered from entrapment in local optima and premature convergence. To address the premature convergence problem inherent in the classic Firefly Algorithm (FA), HAdFA employs two novel adaptive strategies: an adaptive randomization parameter (α) that is modified dynamically at each step, and a grey relational analysis that updates each firefly at every step, thereby maintaining a balance between diversification and intensification. HFPA is inspired by the pollination strategy of flowers.
Additionally, both HAdFA and HFPA incorporate an enhanced simulated annealing local search to accelerate the algorithms and prevent entrapment in local optima. The current study addresses FMS scheduling optimization for the following: • A Flexible Job Shop Scheduling Problem (FJSSP) was analysed and tested with the proposed metaheuristics on several benchmark problems for the multiple objectives of makespan (MSmax), maximal machine workload (WLmax), total workload (WLtotal), total idle time (Tidle) and total tardiness, i.e., lateness of jobs (Tlate). • An FMS configuration integrated with AGVs and an automatic storage and retrieval system (AS/RS) has been optimized using a Combined Objective Function (COF) with the aim of jointly minimizing the machine idle time and the total penalty cost. To test the effectiveness of this optimization method, several problems were developed and tested by varying the number of jobs and machines for this FMS setup. • The concurrent scheduling of machines and AGVs in a multi-machine FMS setup for different layouts has been studied. This problem has been formulated as a multi-objective optimization with the objectives of minimizing the makespan, mean flow time, and mean machine idle time. The proposed metaheuristics have been employed and tested on randomly generated example problems to evaluate their performance for this setup; they have proven effective in finding optimal solutions, and their application can lead to improved efficiency and reduced costs in FMS setups. • Finally, a real-life case study was conducted in a Lube Oil Blending Plant, Faridabad, India. The proposed GAPSOTS and HAdFA are tested on three problems with varying jobs and machines for multiple objectives. The corresponding computational experiments have been reported and analysed. The suggested algorithms have been implemented and tested using MATLAB R2019a on an Intel Core i7 with Windows 10. The results indicate that the proposed HAdFA tends to be the most efficient of the proposed algorithms; it consistently achieved optimal solutions, and new best makespan values were found for some problems. The efficiency of HAdFA can be attributed to the adaptive parameters integrated into it. This algorithm significantly improves convergence speed and enables the exploration of a large number of rich optimal solutions.
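As a toy illustration of the simulated-annealing local-search ingredient mentioned above (not the thesis hybrids themselves), the sketch below applies plain SA with a swap neighbourhood to a small permutation flow-shop makespan problem; the processing times, cooling schedule and move are assumptions.

```python
# Toy simulated-annealing sketch (assumed processing times): swap-move SA
# minimizing the makespan of a small permutation flow shop.
import numpy as np

rng = np.random.default_rng(0)
proc = rng.integers(1, 10, size=(5, 3)).astype(float)   # 5 jobs x 3 machines (hypothetical)

def makespan(seq):
    comp = np.zeros((len(seq), proc.shape[1]))
    for i, job in enumerate(seq):
        for mch in range(proc.shape[1]):
            prev_job = comp[i - 1, mch] if i > 0 else 0.0
            prev_mch = comp[i, mch - 1] if mch > 0 else 0.0
            comp[i, mch] = max(prev_job, prev_mch) + proc[job, mch]
    return comp[-1, -1]

seq = list(range(proc.shape[0]))
best, T = makespan(seq), 10.0
for _ in range(2000):
    i, j = rng.choice(len(seq), size=2, replace=False)
    cand = seq.copy()
    cand[i], cand[j] = cand[j], cand[i]                              # swap move
    delta = makespan(cand) - makespan(seq)
    if delta < 0 or rng.random() < np.exp(-delta / T):               # SA acceptance rule
        seq = cand
    best = min(best, makespan(seq))
    T *= 0.995                                                       # geometric cooling
print("best makespan found:", best, "sequence:", seq)
```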
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Nayak, Debasish. "Novel Techniques for SRAM Based Memory Performance Enhancement." Thesis, 2017. http://ethesis.nitrkl.ac.in/8676/1/2017_PhD_512EC1017_DNayak.pdf.

Повний текст джерела
Анотація:
Digital computation has penetrated a diversity of applications such as audio-visual communication, biomedical applications, industrial applications, defence applications, entertainment industries, remote sensing and control, etc. Electrical systems having digital computing capability have become an integral part of daily life. This creates a thrust toward designing systems with the utmost portability. The portability of an electronic system depends not only on the physical size of the computation blocks embedded inside it but also on its energy consumption. Thus, reducing the size of the computing block along with reducing the energy consumption has become a prime necessity. As per the ITRS (International Technology Roadmap for Semiconductors 2011, http://www.itrs.net/Common/2011ITRS/Home2011.htm), SRAM occupies more than 70% of the area of an SoC. Hence, the performance of the SRAM dominates the overall performance of the SoC. The basic operation of SRAM cells is simple and well known, but the diversity among applications requires different objectives to be achieved depending on the field of application. Energy consumption, speed of operation, cell stability and area occupancy are the most important aspects of SRAM which need to be optimized for various applications. These parameters are commonly interdependent, so improving one of them can potentially degrade another. Hence the objective that is most important for a particular application may be enhanced by compromising another performance index that is not so critical for that application. In this thesis we have focused on various techniques to improve the performance of SRAM-based memory.
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Chen, Jin-Liang, and 陳金亮. "Development of Soft-computing Techniques And Their Applications to Pattern Recognition." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/06254278758259677564.

Повний текст джерела
Анотація:
Master's thesis
National Taiwan Ocean University
Department of Electrical Engineering
88
In the thesis, three soft-computing techniques are proposed to tackle the two most important tasks in pattern recognition, namely clustering and classifier design. First, a novel technique (ECT) is presented for exploiting a cluster's terrain. By using ECT, one can improve clustering performance and exploit the terrain of each cluster. Description of a cluster's terrain involves the use of a novel prototype matrix and the Mahalanobis distance. Two update equations are derived from an objective function based on the prototype matrix. The covariance matrix of a cluster can then be accurately estimated from the converged prototype matrix. More significantly, ECT can be easily incorporated into any clustering algorithm. For example, a self-organizing clustering algorithm (SOMM) is introduced by combining ECT with the idea of the Mountain method [2]. In contrast to the original Mountain method, the proposed SOMM algorithm has the following desirable advantages: the parameters in the modified Mountain method are data-driven, the terrain of each mountain can be estimated, precise centers can be found, and the terminating condition depends on the nature of the input. Finally, a novel unsupervised neural classifier for solving multi-class classification tasks or linearly non-separable classification problems is presented. Its implementation involves the incorporation of a homogeneity principle and a terminating condition to construct a multilayer perceptron neural network. Based on the principle, the network can be configured layer by layer until the terminating condition is reached. Due to the use of this principle, the classification accuracy on the training set is 100%. More importantly, the stability of the network growing process is proven. Furthermore, simulation results show satisfactory generalization performance.
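A small illustration of the Mahalanobis distance used for describing a cluster's terrain is sketched below on random data; the covariance and query points are assumptions, and this is not the ECT procedure itself.

```python
# Small sketch (random data): Mahalanobis distance of points to a cluster,
# the distance measure used together with the prototype matrix.
import numpy as np

rng = np.random.default_rng(0)
cluster = rng.multivariate_normal([0, 0], [[2.0, 0.8], [0.8, 1.0]], size=200)

mean = cluster.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(cluster, rowvar=False))

def mahalanobis(x):
    diff = x - mean
    return float(np.sqrt(diff @ cov_inv @ diff))

print("distance of (1, 1):", mahalanobis(np.array([1.0, 1.0])))
print("distance of (4, -3):", mahalanobis(np.array([4.0, -3.0])))
```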
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Yadav, Basant. "Application of soft computing techniques for water quantity and quality modeling." Thesis, 2017. http://localhost:8080/iit/handle/2074/7306.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Nanda, Santosh Kumar. "Noise Impact Assessment and Prediction in Mines Using Soft Computing Techniques." Thesis, 2012. http://ethesis.nitrkl.ac.in/4560/1/PhD50405001revised.pdf.

Повний текст джерела
Анотація:
Mining of minerals necessitates the use of heavy, energy-intensive machinery and equipment, exposing miners to high noise levels. Prolonged exposure of miners to high levels of noise can cause noise-induced hearing loss besides several non-auditory health effects. Hence, in order to improve the environmental conditions in the workplace, it is of utmost importance to develop appropriate noise prediction models for accurately establishing the noise levels produced by various surface mining machinery. The measurement of sound pressure level (SPL) using sound-measuring devices is not accurate due to instrumental error, attenuation due to geometrical aberration, atmospheric attenuation, etc. Some popular frequency-dependent noise prediction models, e.g. ISO 9613-2, ENM and CONCAWE, and the non-frequency-based noise prediction model VDI-2714, have been applied in mining and allied industries. These models are used to predict machinery noise by considering all the attenuation factors. Amongst the above mathematical models, VDI-2714 is the simplest noise prediction model as it is independent of the frequency domain. From the literature review, it was found that VDI-2714 gives noise predictions in dB(A), not in 1/1 or 1/3 octave bands as the other prediction models do (e.g. ISO 9613-2, CONCAWE, OCMA, and ENM). Compared to the VDI-2714 noise prediction model, the frequency-dependent models are mathematically complex to use. All the noise prediction models treat noise as a function of distance, sound power level (SWL), and different forms of attenuation such as geometrical absorption, barrier effects, ground topography, etc. Generally, these parameters are measured in the mines and best-fitting models are applied to predict noise. Mathematical models are generally complex and cannot be implemented in real-time systems; additionally, they fail to predict future parameters from current and past measurements. To overcome these limitations, soft-computing models have been used in this work. Noise prediction is a non-stationary process, and soft-computing techniques have been tested for non-stationary time-series prediction for nearly two decades. Considering the successful application of soft-computing models to complex engineering problems, in this thesis work soft-computing based noise prediction models were developed for predicting far-field noise levels due to the operation of a specific set of mining machinery. Soft-computing models, namely fuzzy inference systems (Mamdani and Takagi-Sugeno-Kang (T-S-K)), the MLP (multilayer perceptron or back-propagation neural network), the RBF (radial basis function) network and the adaptive network-based fuzzy inference system (ANFIS), were used to predict machinery noise in two opencast mines. The proposed soft-computing based noise prediction models were designed for both frequency-based and non-frequency-based noise prediction. After successful application of all the proposed soft-computing models, comparative studies were made considering the Root Mean Square Error (RMSE) as the performance parameter. It was observed that the proposed soft-computing models give good prediction accuracy; the ANFIS model, however, gives better noise prediction accuracy than the other proposed soft-computing models.
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Chen, Ying-Hsu, and 陳盈旭. "An Ontology Construction Approach Based on Episode Net and Soft Computing Techniques." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/12673436025321244242.

Повний текст джерела
Анотація:
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering (Master's and Doctoral Program)
92
Ontology is increasingly important for many information systems and the Semantic Web, while the cost of constructing an ontology is high. In this thesis, we propose an automatic approach to ontology construction to assist knowledge engineers in constructing a specific domain ontology. We hope to raise the level of automation so that the ontology can be built correctly and efficiently while applying the method to different domains. For different domains, we propose an approach to extract new Chinese terms from the specific corpus automatically, and we use information retrieval, natural language processing, and soft computing techniques to find the concepts of the ontology. In addition, we extend the concept of episode to construct an Episode Net. Using the Episode Net, we can find the static attributes, dynamic operations, and associations between concepts of the ontology. Finally, we use an object-oriented model to represent the ontology and then construct it with a four-layer object-oriented structure. The experimental results show that our approach can effectively assist ontology engineers in constructing the domain ontology.
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Choudhury, Debasis. "Characterization of Power Quality Disturbances using Signal Processing and Soft Computing Techniques." Thesis, 2013. http://ethesis.nitrkl.ac.in/4745/1/210EE2101.pdf.

Повний текст джерела
Анотація:
The quality of electric power and the disturbances occurring in the power signal have become an important issue for electric utilities and their customers. In order to improve the quality of power, electric utilities continuously monitor the power delivered at customer sites; thus, automatic classification of distribution line disturbances is highly desirable. The detection and classification of power quality (PQ) disturbances in power systems are important tasks in the monitoring and protection of the power system network. Most of the disturbances are non-stationary and transitory in nature, and hence advanced tools and techniques are required for the analysis of PQ disturbances. In this work a hybrid technique using wavelet transform and fuzzy logic is used for characterizing PQ disturbances. A number of PQ events are generated and decomposed using the wavelet decomposition algorithm of the wavelet transform for accurate detection of disturbances. It is also observed that when the PQ disturbances are contaminated with noise, detection becomes difficult and the extracted feature vectors contain a high percentage of noise, which may degrade the classification accuracy. Hence a wavelet-based de-noising technique is proposed in this work before the feature extraction process. Two very distinct features common to all PQ disturbances, energy and Total Harmonic Distortion (THD), are extracted using the discrete wavelet transform and fed as inputs to the fuzzy expert system for accurate detection and classification of various PQ disturbances. The fuzzy expert system not only classifies the PQ disturbances but also indicates whether the disturbance is pure or contains harmonics. A neural network based Power Quality Disturbance (PQD) detection system is also modelled using a Multilayer Feedforward Neural Network (MFNN).
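For illustration, the snippet below computes the THD feature from an FFT of a synthetic waveform containing third and fifth harmonics; the sampling rate, harmonic amplitudes and the FFT-based route (rather than the wavelet-based extraction of the thesis) are assumptions.

```python
# Illustrative sketch (synthetic waveform): computing the Total Harmonic
# Distortion (THD) feature from an FFT.
import numpy as np

fs, f0 = 3200.0, 50.0
t = np.arange(0, 0.2, 1 / fs)
signal = np.sin(2 * np.pi * f0 * t) + 0.12 * np.sin(2 * np.pi * 3 * f0 * t) \
         + 0.06 * np.sin(2 * np.pi * 5 * f0 * t)       # fundamental + 3rd + 5th harmonics

spectrum = np.abs(np.fft.rfft(signal))
bin0 = int(round(f0 * t.size / fs))                     # fundamental bin
fundamental = spectrum[bin0]
harmonics = spectrum[2 * bin0::bin0]                    # 2nd, 3rd, ... harmonic bins
thd = np.sqrt(np.sum(harmonics ** 2)) / fundamental
print(f"THD = {100 * thd:.1f} %")                       # expected near 13.4 %
```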
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Teella, Sreedhar Kumar. "Modeling of Breakdown voltage of Solid Insulating Materials Using Soft Computing Techniques." Thesis, 2013. http://ethesis.nitrkl.ac.in/5329/1/211EE2140.pdf.

Повний текст джерела
Анотація:
Voids or cavities formed within solid insulating material during manufacturing are potential sources of electrical trees, which can lead to continuous degradation and breakdown of the insulating material due to Partial Discharge (PD). To determine the suitability for use and to acquire the data needed for dimensioning electrical insulation systems, the breakdown voltage of the insulator should be determined. A major field of application of Artificial Neural Networks (ANN) and the Least Squares Support Vector Machine (LS-SVM) is function estimation, owing to their useful features of non-linearity and adaptivity. In this project, the breakdown voltage due to PD in cavities for five insulating materials under AC conditions has been predicted as a function of different input parameters, such as the insulating sample thickness t, the thickness of the void t1, the diameter of the void d, and the relative permittivity of the material, using two different models. The requisite training data are obtained from experimental studies performed on a cylinder-plane electrode system, with voids of different dimensions created artificially. On completion of training, it is found that the ANN and LS-SVM models are capable of predicting the breakdown voltage Vb = f(t, t1, d, εr) very efficiently and with a small Mean Absolute Error. The prediction has been implemented using MATLAB.
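The sketch below is an illustrative stand-in only: it fits a kernel regression model Vb = f(t, t1, d, εr) using standard epsilon-SVR with an RBF kernel in place of the LS-SVM of the thesis, on fabricated toy numbers rather than the experimental data.

```python
# Illustrative stand-in (fabricated toy numbers): kernel regression of
# breakdown voltage on sample/void geometry and relative permittivity.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# columns: sample thickness t (mm), void thickness t1 (mm), void diameter d (mm), rel. permittivity
X = np.array([[1.0, 0.10, 1.5, 2.2],
              [1.5, 0.15, 2.0, 2.2],
              [2.0, 0.10, 1.5, 3.5],
              [2.5, 0.20, 3.0, 3.5],
              [3.0, 0.15, 2.0, 4.5]])
Vb = np.array([14.0, 17.5, 22.0, 24.5, 29.0])      # breakdown voltage (kV), hypothetical

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.1))
model.fit(X, Vb)
print("predicted Vb for a new sample:", model.predict([[2.0, 0.12, 1.8, 3.0]]))
```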
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Jog, Adwait, and Avijit Mohapatra. "Financial Forecasting Using Evolutionary Computational Techniques." Thesis, 2009. http://ethesis.nitrkl.ac.in/230/1/Thesis_Adwait.pdf.

Повний текст джерела
Анотація:
Financial forecasting, and especially stock market prediction, is one of the hottest fields of research lately due to its commercial applications, owing to the high stakes and the attractive benefits it has to offer. In this project we have analysed various evolutionary computation algorithms for the forecasting of financial data. The financial data have been taken from a large database and are based on stock prices in leading stock exchanges; we have based our models on data from the Bombay Stock Exchange (BSE), the S&P 500 (Standard and Poor's) and the Dow Jones Industrial Average (DJIA). We have designed three models and compared them using historical data from the three stock exchanges. The models used were based on: 1. Radial Basis Function parameters updated by Particle Swarm Optimization; 2. Radial Basis Function parameters updated by the Least Mean Square algorithm; 3. FLANN parameters updated by Particle Swarm Optimization. The raw input for the experiment is the historical daily open, close, high, low and volume of the concerned index; however, the actual input to the models consists of parameters derived from these data. The results of the experiment have been depicted with the aid of suitable curves, where a comparative analysis of the various models is done on the basis of various parameters including error convergence and the Mean Absolute Percentage Error (MAPE). Key Words: Radial Basis Functions, FLANN, PSO, LMS
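A minimal particle swarm optimization sketch follows, showing the kind of update rule used to tune model parameters, here applied to a two-parameter toy predictor; the inertia and acceleration constants, swarm size and objective are assumptions, not the thesis configuration.

```python
# Minimal PSO sketch (toy objective, assumed constants): minimizing the squared
# prediction error of a 2-parameter linear predictor.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(50)
y_true = 2.5 * x + 0.7                                # target relationship (hypothetical)

def cost(params):                                     # mean squared prediction error
    a, b = params
    return np.mean((y_true - (a * x + b)) ** 2)

n_particles, n_iter, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
pos = rng.uniform(-5, 5, (n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)   # velocity update
    pos = pos + vel                                                      # position update
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best parameters found:", gbest, "cost:", cost(gbest))
```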
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Chou, Hsin-Chuan, and 周新川. "Discover Drug Utilization Knowledge Using Soft Computing Techniques An Example of Cardiovascular Disease." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/42592142546564406968.

Повний текст джерела
Анотація:
Master's thesis
National Yunlin University of Science and Technology
Department of Information Management (Master's Program)
93
Cardiovascular disease is becoming the major cause of death in many industrialized countries. People who receive long-term treatment often overlook the progression of their disease states. Therefore, it is critical and necessary to evaluate drug utilization and laboratory tests in order to discover the knowledge that lies beneath and can be extracted from the raw data. This paper utilizes unsupervised network techniques and rough set theory to discover drug utilization knowledge. The result of the proposed SOM-SOM-RST process shows more advantages than those of decision trees and discriminant analysis. With 10-fold cross-validation, the proposed process successfully and effectively detects patients whose diagnosis codes changed during the period of investigation and attains an accuracy of approximately 98%. Rough set theory can hence be easily adapted and implemented in support systems. The contributions of this paper are: (1) with the proposed process, individual disease state trends can be identified, reminding physicians to re-evaluate long-term but ignored disease trends; (2) generalization of symbolic rules for system development.
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Lai, Chia Liang, and 賴佳良. "Application of Soft Computing Techniques with Fourier Series to Forecast Monthly Electricity Demand." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/23171218166774438081.

Повний текст джерела
Анотація:
Master's thesis
National Tsing Hua University
Department of Industrial Engineering and Engineering Management
104
The information from electricity demand forecasting helps energy generation enterprises develop an electricity supply system. This study aims to develop a monthly electricity forecasting model to predict the electricity demand for energy management. Given that the influence of weather factors, such as temperature and humidity, is diluted in the overall monthly electricity demand, the forecasting model uses historical electricity consumption data as an integrated factor to obtain future predictions. The proposed approach is applied to a monthly electricity demand time-series forecasting model that includes a trend series and a fluctuation series, of which the former describes the trend of the electricity demand series and the latter describes the periodic fluctuation embedded in the trend. An integrated genetic algorithm and neural network model (GANN) is then trained to forecast the trend series. Given that the fluctuation series demonstrates oscillatory behaviour, a Fourier series is applied to fit it. The complete demand model is named GANN-Fourier series. U.S. electricity demand data are used to evaluate the proposed model and to compare its results with those of conventional neural networks.
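As a small illustration of the Fourier-series component (with synthetic monthly data, not the U.S. demand series of the thesis), the sketch below fits a truncated Fourier series to a periodic fluctuation by linear least squares; the period, number of harmonics and series are assumptions.

```python
# Illustrative sketch (synthetic monthly series): fitting a truncated Fourier
# series to a periodic fluctuation component by linear least squares.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(60)                                        # 5 years of monthly data
fluctuation = 3.0 * np.sin(2 * np.pi * months / 12) \
              + 1.0 * np.cos(4 * np.pi * months / 12) + 0.2 * rng.standard_normal(60)

K, period = 3, 12.0                                           # number of harmonics (assumed)
cols = [np.ones_like(months, dtype=float)]
for k in range(1, K + 1):
    cols += [np.sin(2 * np.pi * k * months / period), np.cos(2 * np.pi * k * months / period)]
A = np.column_stack(cols)

coeffs, *_ = np.linalg.lstsq(A, fluctuation, rcond=None)
fitted = A @ coeffs
print("residual RMS:", np.sqrt(np.mean((fluctuation - fitted) ** 2)))   # close to the noise level
```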
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Sudhakarapandian, R. "Application of Soft Computing Techniques for Cell Formation Considering Operational Time and Sequence." Thesis, 2007. http://ethesis.nitrkl.ac.in/11/1/sudh-phd-2008.pdf.

Повний текст джерела
Анотація:
In response to demand in the marketplace, discrete manufacturing firms need to adopt batch-type manufacturing to incorporate continuous and rapid changes in manufacturing and gain an edge over competitors. In addition, there is an increasing trend toward achieving a higher level of integration between design and manufacturing functions in industries to make batch manufacturing more efficient and productive. In a batch-shop production environment, the cost of manufacturing is inversely proportional to the batch size, and the batch size determines the productivity. In a real-time environment, the batch size of the components is often small, leading to frequent changeovers, greater machine idleness and thus lower productivity. To alleviate these problems, Cellular Manufacturing Systems (CMS) can be implemented to accommodate small batches without losing much production run time. Cellular manufacturing is an application of group technology (GT) in which similar parts are identified and grouped together...
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Ghosh, S., and A. B. Swer. "Modelling of the Breakdown Voltage of Solid Insulating Materials using Soft Computing Techniques." Thesis, 2010. http://ethesis.nitrkl.ac.in/1972/1/btech_project_online.pdf.

Повний текст джерела
Анотація:
The aim of the project is to use Soft Computing Techniques (SCT) in order to model the breakdown voltage of solid insulating materials. Since the breakdown voltage behaviour is non-linear, it is best modelled using SCT such as Artificial Neural Network (ANN), Radial Basis Function (RBF) Network and Fuzzy Logic (FL) techniques. In order to obtain the experimental data on the breakdown voltage, experiments are conducted under AC and DC conditions, and all the SCT models are then applied to the data. The prediction of the breakdown voltage of solid insulating materials is indeed a challenging task; hence the best way to approach it is by resorting to SCT to model and predict the breakdown voltage.
Стилі APA, Harvard, Vancouver, ISO та ін.
46

Sahu, Sitanshu Sekhar. "Analysis of Genomic and Proteomic Signals Using Signal Processing and Soft Computing Techniques." Thesis, 2011. http://ethesis.nitrkl.ac.in/3005/1/Thesis_Sitanshu_Sekhar_Sahu_-_50709001.pdf.

Повний текст джерела
Анотація:
Bioinformatics is a data-rich field which provides unique opportunities to use computational techniques to understand and organize information associated with biomolecules such as DNA, RNA, and proteins. It involves in-depth study in the areas of genomics and proteomics and requires techniques from computer science, statistics and engineering to identify, model, extract features and process data for analysis and interpretation of results in a biologically meaningful manner. Among engineering methods, signal processing techniques such as transformation, filtering and pattern analysis, and soft-computing techniques like the multilayer perceptron (MLP) and radial basis function neural network (RBFNN), play a vital role in effectively resolving many challenging issues associated with genomics and proteomics. In this dissertation, a sincere attempt has been made to investigate some challenging problems of bioinformatics by employing efficient signal processing and soft computing methods. Some of the specific issues attempted are protein coding region identification in DNA sequences, hot spot identification in proteins, prediction of protein structural class, and classification of microarray gene expression data. The dissertation presents some novel methods to measure and extract features from genomic sequences using time-frequency analysis and machine intelligence techniques. The problems investigated and the contributions made in the thesis are presented here in a concise manner. The S-transform, a powerful time-frequency representation technique, possesses properties superior to the wavelet transform and the short-time Fourier transform, as the exponential function is fixed with respect to the time axis while the localizing scalable Gaussian window dilates and translates. The S-transform uses an analysis window whose width decreases with frequency, providing a frequency-dependent resolution. The invertibility of the S-transform makes it suitable for time-band filtering applications. Gene prediction and protein coding region identification have always been challenging tasks in computational biology, especially in eukaryotic genomes due to their complex structure. This issue is resolved using an S-transform based time-band filtering approach that localizes the period-3 property present in the DNA sequence, which forms the basis for the identification. Similarly, hot spot identification in proteins is a burning issue in protein science due to its importance in binding and interaction between proteins; a novel S-transform based time-frequency filtering approach is proposed for efficient identification of the hot spots. Prediction of the structural class of a protein has been a challenging problem in bioinformatics; a novel feature representation scheme is proposed to efficiently represent the protein, thereby improving the prediction accuracy. The high dimension and low sample size of microarray data lead to the curse-of-dimensionality problem, which affects classification performance. In this dissertation an efficient hybrid feature extraction method is proposed to overcome the dimensionality issue, and an RBFNN is introduced to efficiently classify the microarray samples.
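For illustration, the sketch below measures the period-3 property of a DNA segment from the DFT power at frequency 1/3 using binary indicator sequences, the property that the S-transform filtering localizes; the random sequence and window length are assumptions, and this is not the thesis's S-transform implementation.

```python
# Illustrative sketch (random sequence with an artificial periodic stretch):
# DFT power at frequency 1/3 as a simple measure of the period-3 property.
import numpy as np

rng = np.random.default_rng(0)
bases = np.array(list("ACGT"))
seq = rng.choice(bases, size=300)
seq[100:220] = np.tile(np.array(list("ATG")), 40)     # fake "coding-like" period-3 stretch

def period3_power(window):
    N = len(window)
    total = 0.0
    for b in "ACGT":                                   # one binary indicator sequence per base
        u = (window == b).astype(float)
        X = np.fft.fft(u)
        total += np.abs(X[N // 3]) ** 2                # spectral power at frequency 1/3
    return total

for start in (0, 120, 240):                            # a few window positions
    print(f"window at {start:3d}: period-3 power = {period3_power(seq[start:start+120]):.1f}")
```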
Стилі APA, Harvard, Vancouver, ISO та ін.
47

Sarkar, S. "Power quality disturbance detection and classification using signal processing and soft computing techniques." Thesis, 2014. http://ethesis.nitrkl.ac.in/6149/1/E-66.pdf.

Повний текст джерела
Анотація:
The quality of electric power and the disturbances occurring in the power signal have become a major issue for electric power suppliers and customers. Improving power quality requires continuous monitoring of the power being delivered at customers' sites. Therefore, detection of PQ disturbances and their proper classification are highly desirable. The detection and classification of PQD in distribution systems are important tasks for the protection of the power distribution network. Most of the disturbances are non-stationary and transitory in nature, and hence advanced tools and techniques are required for the analysis of PQ disturbances. In this work a hybrid technique using wavelet transform and fuzzy logic is used for characterizing PQ disturbances. A number of PQ events are generated and decomposed using the wavelet decomposition algorithm of the wavelet transform for accurate detection of disturbances. It is also observed that when the PQ disturbances are contaminated with noise, detection becomes difficult and the extracted feature vectors contain a high percentage of noise, which may degrade the classification accuracy. Hence a wavelet-based de-noising technique is proposed in this work before the feature extraction process. Two very distinct features common to all PQ disturbances, energy and Total Harmonic Distortion (THD), are extracted using the discrete wavelet transform and fed as inputs to the fuzzy expert system for accurate detection and classification of various PQ disturbances. The fuzzy expert system not only classifies the PQ disturbances but also indicates whether the disturbance is pure or contains harmonics. A neural network based Power Quality Disturbance (PQD) detection system is also modelled using a Multilayer Feedforward Neural Network (MFNN).
Стилі APA, Harvard, Vancouver, ISO та ін.
48

Pandey, Anish. "Mobile Robot Navigation in Static and Dynamic Environments using Various Soft Computing Techniques." Thesis, 2016. http://ethesis.nitrkl.ac.in/8038/1/2016_PhD_APandey_512ME119.pdf.

Повний текст джерела
Анотація:
The applications of autonomous mobile robots in many fields, such as industry, space, defence and transportation, and in other social sectors are growing day by day. The mobile robot performs many tasks such as rescue operations, patrolling, disaster relief, planetary exploration, and material handling. Therefore, an intelligent mobile robot is required that can travel autonomously in various static and dynamic environments. The present research focuses on the design and implementation of intelligent navigation algorithms capable of navigating a mobile robot autonomously in static as well as dynamic environments. Navigation and obstacle avoidance are among the most important tasks for any mobile robot. The primary objective of this research work is to improve the navigation accuracy and efficiency of the mobile robot using various soft computing techniques. In this research work, a Hybrid Fuzzy (H-Fuzzy) architecture, a Cascade Neuro-Fuzzy (CN-Fuzzy) architecture, a Fuzzy-Simulated Annealing (Fuzzy-SA) algorithm, a Wind Driven Optimization (WDO) algorithm, and a Fuzzy-Wind Driven Optimization (Fuzzy-WDO) algorithm have been designed and implemented to solve the navigation problems of a mobile robot in different static and dynamic environments. The performance of these proposed techniques is demonstrated through computer simulations using MATLAB software and implemented in real time using experimental mobile robots. Furthermore, the Wind Driven Optimization algorithm and the Fuzzy-Wind Driven Optimization algorithm are found to be the most efficient (in terms of path length and navigation time) compared to the rest of the techniques, which verifies the effectiveness and efficiency of these newly built techniques for mobile robot navigation. The results obtained from the proposed techniques are compared with other developed techniques such as fuzzy logic, the genetic algorithm (GA), neural networks, and the particle swarm optimization (PSO) algorithm to prove the authenticity of the proposed techniques.
Стилі APA, Harvard, Vancouver, ISO та ін.
49

Dutta, Abhijeet. "Application of Soft Computing Techniques for Prediction of Slope Failure in Opencast Mines." Thesis, 2016. http://ethesis.nitrkl.ac.in/8284/1/2016_MT_711MN1172_Application_of_soft.pdf.

Повний текст джерела
Анотація:
Mining is one of the most arduous jobs in industry, involving risk at every working stage, and stability is the main focus and of utmost importance. The factor of safety (FOS), when calculated by the traditional deterministic approach, cannot represent the exact state of the slope, though it gives a rough idea of the conditions and the overall safety factor. Approaches such as numerical modelling and soft computing techniques allow us to determine with ease the stability conditions of an unstable slope and the probability of its failure in the near future. In this project, the stability conditions of some of the benches of the Bhubaneswari Opencast Project, located in Talcher, have been evaluated using soft-computing techniques, namely an Artificial Neural Network implemented in MATLAB, and the results are then compared with the numerical modelling results from the software FLAC, which deploys the Finite Difference Method. A particular slope (CMTL-179, Seam-3) has been studied and its factor of safety predicted using both the Artificial Neural Network and FLAC. Initially, data related to bench height, slope angle, lithology, cohesion, internal angle of friction, etc. are determined for the rock of the slope for which the FOS is to be calculated. A total of 14 training functions were used to train the model. The best training was found with Scaled Conjugate Gradient Backpropagation, which corresponds to a regression coefficient of 91.36% during training and 88.24% overall. The best validation performance was found at 60 epochs with a Mean Squared Error of 0.069776. According to the trained neural network, the slope was 44.5% stable with an FOS of 1.0226, while FLAC found the slope to be stable with an FOS of 1.17. The generic model will thus allow us to obtain a range of failure probabilities for the slope so that necessary arrangements can be made to prevent slope failure.
Стилі APA, Harvard, Vancouver, ISO та ін.
50

Chakraborty, Abhishek. "Evaluation of Urban Street Service Quality in Developing Countries Using Soft-Computing Techniques." Thesis, 2017. http://ethesis.nitrkl.ac.in/8766/1/2017_MT_A_Chakraborty.pdf.

Повний текст джерела
Анотація:
Indian road traffic is mostly heterogeneous in nature. Heterogeneous or mixed traffic means the presence of non-motorized vehicles in a large proportion alongside motorized vehicles on the road, which is very unlike the situation in developed countries. This study also tries to define the ALOS score ranges used to categorize the perceived and predicted ALOS scores. First, several factors which play a significant role in the determination of ALOS are short-listed based on previous research studies, for example traffic volume, width of the road, pavement condition, land use pattern and so on. Then geometric and traffic data have been collected from seven important cities of India (of which the data for three cities are taken from a secondary source). By statistical tests such as Pearson's correlation test, the significant independent variables are then fixed. The ALOS scores are classified using a self-organizing map (SOM), a type of ANN, to define the ranges of the LOS categories 'A' to 'F'. The HCM method (NCHRP) for determining ALOS is applied to the same data set and the outputs are compared with the target (actual) values; a large deviation is observed.
Стилі APA, Harvard, Vancouver, ISO та ін.