
Dissertations / Theses on the topic 'MITIGATION ALGORITHM'


Consult the top 37 dissertations / theses for your research on the topic 'MITIGATION ALGORITHM.'


1

Zubi, Hazem M. "A genetic algorithm approach for three-phase harmonic mitigation filter design." Thesis, University of Bath, 2013. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.604881.

Full text
Abstract:
In industry, adjustable speed drives (ASDs) are widely employed in driving AC motors for variable speed applications due to the high performance and high energy efficiency obtained in such systems. However, ASDs have an impact on the power quality and utilisation of AC power feeds by injecting current harmonics and causing resonances, additional losses, and voltage distortion at the point of common coupling. Due to these problems, electric power utilities have established stringent rules and regulations to limit the effects of this distortion. As a result, efficient, reliable, and economical harmonic mitigation techniques must now be implemented in practical systems to achieve compliance at reasonable cost. A variety of techniques exist to control the harmonic current injected by ASDs, and allow three-phase AC-line-connected medium-power systems to meet stringent power quality standards. Of these, the broadband harmonic passive filter deserves special attention because of its good harmonic mitigation and reactive power compensation abilities, and low cost. It is also relatively free from harmonic resonance problems, is structurally simpler, and involves considerably less engineering effort than systems of single-tuned shunt passive filters, active filters, or active rectifier solutions. In this thesis, passive broadband harmonic filters are investigated; in particular the improved broadband filter (IBF), which offers superior overall performance and whose applications are increasing rapidly. During this research project, the IBF operating principle is reviewed and its design principles are established. As the main disadvantage of most passive harmonic filters is the large size of their components, the first proposed design attempts to optimize the size of the filter components (L and C) utilized in the existing IBF topology.
The second proposed design attempts to optimize the number and then the size of filter components resulting in an Advanced Broadband passive Filter (ABF) novel structure. The proposed design methods are based on frequency domain modelling of the system and then using a genetic algorithm optimization technique to search for optimal filter component values. The results obtained are compared with the results of a linear searching approach. The measured performance of the optimal filter designs (IBF and ABF) is evaluated under different loading conditions with typical levels of background voltage distortion. This involves assessing input current total harmonic distortion, input power factor, rectifier voltage regulation, efficiency, size and cost. The potential resonance problem is addressed and the influence of voltage imbalance on performance is investigated. The assessment is based on analysis, computer simulations and experimental results. The measured performance is compared to various typical passive harmonic filters for three-phase diode rectifier front-end type adjustable speed drives. Finally, the broadband filter design’s effectiveness and performance are evaluated by involving them in a standard IEEE distribution network operating under different penetration levels of connected nonlinear total loads (ASD system). The study is conducted via detailed modelling of the distribution network and the linked nonlinear loads using computer simulations.
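The component-sizing search described above can be sketched with a small genetic algorithm tuning two values (L, C) against a cost function. The cost function below is a hypothetical stand-in with a known optimum, not the thesis's frequency-domain system model; the population size, operators, and bounds are likewise illustrative.

```python
import random

def fitness(individual):
    # Hypothetical cost: a THD proxy plus a small component-size penalty.
    # In the thesis the cost comes from a frequency-domain system model;
    # here a quadratic with a known optimum near (L=2, C=50) stands in.
    L, C = individual
    thd_proxy = (L - 2.0) ** 2 + (C - 50.0) ** 2
    size_penalty = 0.01 * (L + C)
    return thd_proxy + size_penalty

def genetic_search(bounds, pop_size=40, generations=200, mutation=0.1, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # arithmetic crossover
            if rng.random() < mutation:                  # clamped Gaussian mutation
                i = rng.randrange(len(child))
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# Hypothetical bounds: L in millihenries, C in microfarads.
best = genetic_search(bounds=[(0.1, 10.0), (1.0, 100.0)])
```

A linear search over the same bounds (as used in the thesis for comparison) would evaluate the cost on a grid instead of evolving a population.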
APA, Harvard, Vancouver, ISO, and other styles
2

Ikuma, Takeshi. "Non-Wiener Effects in Narrowband Interference Mitigation Using Adaptive Transversal Equalizers." Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/26772.

Full text
Abstract:
The least mean square (LMS) algorithm is widely expected to operate near the corresponding Wiener filter solution. An exception to this popular perception occurs when the algorithm is used to adapt a transversal equalizer in the presence of additive narrowband interference. The steady-state LMS equalizer behavior does not correspond to that of the fixed Wiener equalizer: the mean of its weights is different from the Wiener weights, and its mean squared error (MSE) performance may be significantly better than the Wiener performance. The contributions of this study serve to better understand this so-called non-Wiener phenomenon of the LMS and normalized LMS adaptive transversal equalizers. The first contribution is the analysis of the mean of the LMS weights in steady state, assuming a large interference-to-signal ratio (ISR). The analysis is based on the Butterweck expansion of the weight update equation. The equalization problem is transformed to an equivalent interference estimation problem to make the analysis of the Butterweck expansion tractable. The analytical results are valid for all step-sizes. Simulation results are included to support the analytical results and show that the analytical results predict the simulation results very well, over a wide range of ISR. The second contribution is the new MSE estimator based on the expression for the mean of the LMS equalizer weight vector. The new estimator shows vast improvement over the Reuter-Zeidler MSE estimator. For the development of the new MSE estimator, the transfer function approximation of the LMS algorithm is generalized for the steady-state analysis of the LMS algorithm. This generalization also revealed the cause of the breakdown of the MSE estimators when the interference is not strong: the assumption that the variation of the weight vector around its mean is small relative to the mean of the weight vector itself no longer holds.
Both the expression for the mean of the weight vector and for the MSE estimator are analyzed for the LMS algorithm at first. The results are then extended to the normalized LMS algorithm by the simple means of adaptation step-size redefinition.
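The LMS transversal equalizer at the center of this study can be sketched in a few lines. The channel, step-size, and tap count below are illustrative choices, not the dissertation's, and the toy experiment uses BPSK through a mild ISI channel with no narrowband interferer.

```python
import random

def lms_equalizer(received, desired, n_taps=5, mu=0.02):
    """Adapt a transversal (FIR) equalizer with the LMS rule w <- w + mu * e * x."""
    w = [0.0] * n_taps
    buf = [0.0] * n_taps              # delay line of recent received samples
    errors = []
    for r, d in zip(received, desired):
        buf = [r] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))   # equalizer output
        e = d - y                                    # instantaneous error
        w = [wi + mu * e * xi for wi, xi in zip(w, buf)]
        errors.append(e * e)
    return w, errors

# Toy experiment: BPSK symbols through an ISI channel h = [1.0, 0.4].
rng = random.Random(0)
symbols = [rng.choice([-1.0, 1.0]) for _ in range(5000)]
received = [symbols[n] + 0.4 * (symbols[n - 1] if n else 0.0)
            for n in range(len(symbols))]
w, errors = lms_equalizer(received, symbols)
early = sum(errors[:500]) / 500      # MSE during initial convergence
late = sum(errors[-500:]) / 500      # steady-state MSE
```

The normalized LMS variant divides the step-size by the instantaneous input power, which is the step-size redefinition the abstract refers to.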
Ph. D.
3

Manmek, Thip Electrical Engineering & Telecommunications Faculty of Engineering UNSW. "Real-time power system disturbance identification and its mitigation using an enhanced least squares algorithm." Awarded by: University of New South Wales. Electrical Engineering and Telecommunications, 2006. http://handle.unsw.edu.au/1959.4/26233.

Full text
Abstract:
This thesis proposes, analyses and implements a fast and accurate real-time power system disturbance identification method based on an enhanced linear least squares algorithm for mitigation and monitoring of various power quality problems such as current harmonics, grid unbalances and voltage dips. The enhanced algorithm imposes less real-time computational burden on the processing system and is thus called the 'efficient least squares algorithm'. The proposed efficient least squares algorithm does not require a matrix inversion operation and contains only real numbers. The number of required real-time matrix multiplications is also reduced in the proposed method by pre-performing some of the matrix multiplications to form a constant matrix. The proposed efficient least squares algorithm extracts instantaneous sine and cosine terms of the fundamental and harmonic components by simply multiplying a set of sampled input data by the pre-calculated constant matrix. A power signal processing system based on the proposed efficient least squares algorithm is presented in this thesis. This power signal processing system derives various power system quantities that are used for real-time monitoring and disturbance mitigation. These power system quantities include constituent components, symmetrical components and various power measurements. The properties of the proposed power signal processing system were studied using modelling and practical implementation in a digital signal processor. These studies demonstrated that the proposed method is capable of extracting time varying power system quantities quickly and accurately. The dynamic response time of the proposed method was less than half a fundamental cycle. Moreover, the proposed method showed less sensitivity to noise pollution and small variations in fundamental frequency.
The performance of the proposed power signal processing system was compared to that of the popular DFT/FFT methods using computer simulations. The simulation results confirmed the superior performance of the proposed method under both transient and steady-state conditions. In order to investigate the practicability of the method, the proposed power signal processing system was applied to two real-life disturbance mitigation applications, namely an active power filter (APF) and a distribution synchronous static compensator (D-STATCOM). The validity and performance of the proposed signal processing system in both disturbance mitigation applications were investigated by simulation and experimental studies. The extensive modelling and experimental studies confirmed that the proposed signal processing system can be used for practical real-time applications which require fast disturbance identification, such as mitigation control and power quality monitoring of power systems.
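The core idea described above, replacing runtime matrix inversion with multiplication by a precomputed constant real matrix, can be sketched as follows. With a window of exactly one fundamental cycle, the sine/cosine basis columns are orthogonal, so the least-squares solution (A^T A)^(-1) A^T x reduces to (2/N) A^T x. The window length and harmonic set here are illustrative, not the thesis's.

```python
import math

def build_extraction_matrix(n_samples, harmonics):
    """Precompute the constant matrix M so that M @ x yields the least-squares
    cosine/sine amplitudes of the listed harmonics, for a window spanning
    exactly one fundamental cycle. With this sampling the basis columns are
    orthogonal, so (A^T A)^(-1) A^T collapses to (2/N) A^T: the only runtime
    work is one matrix-vector multiplication, with no inversion."""
    M = []
    for k in harmonics:
        M.append([2.0 / n_samples * math.cos(2 * math.pi * k * n / n_samples)
                  for n in range(n_samples)])
        M.append([2.0 / n_samples * math.sin(2 * math.pi * k * n / n_samples)
                  for n in range(n_samples)])
    return M

def extract(M, samples):
    # Real-time step: multiply the sample window by the constant matrix.
    return [sum(m * x for m, x in zip(row, samples)) for row in M]

# Toy signal: fundamental (cosine, amplitude 1.0) plus a 5th harmonic
# (sine, amplitude 0.2), sampled 64 times per fundamental cycle.
N = 64
x = [math.cos(2 * math.pi * n / N) + 0.2 * math.sin(2 * math.pi * 5 * n / N)
     for n in range(N)]
M = build_extraction_matrix(N, harmonics=[1, 5])
a1, b1, a5, b5 = extract(M, x)   # cosine/sine amplitudes per harmonic
```

All entries of M are real and fixed offline, which mirrors the abstract's claim that the method contains only real numbers and no runtime inversion.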
4

Gandhi, Nikhil Tej. "Automatic Dependent Surveillance - Broadcast Enabled, Wake Vortex Mitigation Using Cockpit Display." Ohio University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1354313600.

Full text
5

Loh, Nolan. "Buildings as urban climate infrastructure: A framework for designing building forms and facades that mitigate urban heat." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1553513750865168.

Full text
6

Al-Odat, Zeyad Abdel-Hameed. "Analyses, Mitigation and Applications of Secure Hash Algorithms." Diss., North Dakota State University, 2020. https://hdl.handle.net/10365/32058.

Full text
Abstract:
Cryptographic hash functions are among the most widely used cryptographic primitives, with the purpose of ensuring the integrity of a system or its data. Hash functions are also utilized in conjunction with digital signatures to provide authentication and non-repudiation services. Secure Hash Algorithms have been developed over time by the National Institute of Standards and Technology (NIST) for security, optimal performance, and robustness. The best-known hash standards are SHA-1, SHA-2, and SHA-3. A secure hash algorithm is considered weak if its security requirements have been broken. The main security attacks that threaten the secure hash standards are collision and length extension attacks. The collision attack works by finding two different messages that lead to the same hash. The length extension attack extends the message payload to produce a valid hash digest for the extended message. Both attacks have already broken some hash standards that follow the Merkle-Damgård construction. This dissertation proposes methodologies to improve and strengthen weak hash standards against collision and length extension attacks. We propose collision-detection approaches that help to detect a collision attack before it takes place. In addition, a proper replacement, supported by a suitable construction, is proposed. The collision detection methodology helps to protect weak primitives from any possible collision attack using two approaches. The first approach employs a near-collision detection mechanism that was proposed by Marc Stevens. The second approach is our own proposal. Moreover, this dissertation proposes a model that protects secure hash functions from collision and length extension attacks. The model employs the sponge structure to construct a hash function. The resulting function is strong against collision and length extension attacks.
Furthermore, to keep the general structure of the Merkle-Damgård functions, we propose a model that replaces the SHA-1 and SHA-2 hash standards using the Merkle-Damgård construction. This model employs the compression function of SHA-1, the function manipulators of SHA-2, and the $10*1$ padding method. In the case of big data over the cloud, this dissertation presents several schemes to ensure data security and authenticity. The schemes include secure storage, anonymous privacy-preserving, and auditing of big data over the cloud.
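To illustrate why Merkle-Damgård constructions admit length extension, the sketch below builds a deliberately insecure toy hash (the compression function, 8-byte blocks, and 1-byte length field are invented for illustration). Knowing only H(secret||msg) and the length of secret||msg, an attacker can resume the compression chain and produce a valid digest for an extended message without ever learning the secret.

```python
def toy_compress(state, block):
    # Toy 32-bit compression function (illustration only, not secure).
    for b in block:
        state = (state * 31 + b) & 0xFFFFFFFF
    return state

def pad(msg_len, block_size=8):
    # Merkle-Damgard strengthening: 0x80, zero fill, then a length byte.
    p = b"\x80"
    while (msg_len + len(p) + 1) % block_size:
        p += b"\x00"
    return p + bytes([msg_len % 256])

def md_hash(message, iv=0x12345678, block_size=8):
    data = message + pad(len(message), block_size)
    state = iv
    for i in range(0, len(data), block_size):
        state = toy_compress(state, data[i:i + block_size])
    return state

secret = b"key"
msg = b"amount=5"
digest = md_hash(secret + msg)        # all the attacker observes

# Length extension: resume the chain from the leaked digest.
suffix = b";admin=1"
glue = pad(len(secret + msg))         # padding the victim's hash consumed
resumed_len = len(secret + msg) + len(glue)
data = suffix + pad(resumed_len + len(suffix))
state = digest                        # digest acts as the chaining state
for i in range(0, len(data), 8):
    state = toy_compress(state, data[i:i + 8])
forged = state

# The forgery equals a legitimate hash of the extended message.
legit = md_hash(secret + msg + glue + suffix)
```

A sponge construction (as in SHA-3, and in the model proposed above) resists this because the digest reveals only part of the internal state, so the chain cannot be resumed.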
7

Kivrikis, Andreas, and Johan Tjernström. "Development and Evaluation of Multiple Objects Collision Mitigation by Braking Algorithms." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2676.

Full text
Abstract:

A CMbB (Collision Mitigation by Braking) system uses sensors in the front of a car to detect when a collision is unavoidable. When such a situation is detected, the brakes are activated. The decision of whether to activate the brakes is taken by a piece of software called a decision maker. This software continuously checks for routes that would avoid an object in front of the car, and as long as a path is found nothing is done. Volvo has been investigating several different CMbB systems, and its research has previously focused on decision makers that only consider one object in front of the car. By instead taking all present objects into consideration, it should be possible to detect an imminent collision earlier. Volvo has developed some prototypes but needed help evaluating their performance.

As part of this thesis a testing method was developed. The idea was to test as many cases as possible, but as the objects’ possible states increase, the number of test cases quickly becomes huge. Different ways of removing irrelevant test cases were developed, and when these ideas were realised in a test bench it was found that about 98% of the test cases could be removed.

The test results showed that there is a clear advantage to considering many objects, provided the cost of increased complexity in the decision maker is not too great. However, the risk of false alarms is high with the current decision makers, and several possible improvements have therefore been suggested.

8

Santos, Fernando Fernandes dos. "Reliability evaluation and error mitigation in pedestrian detection algorithms for embedded GPUs." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/159210.

Full text
Abstract:
Pedestrian detection reliability is a fundamental problem for autonomous or aided driving. Methods that use object detection algorithms such as Histogram of Oriented Gradients (HOG) or Convolutional Neural Networks (CNN) are today very popular in automotive applications. Embedded Graphics Processing Units (GPUs) are exploited to perform object detection very efficiently. Unfortunately, GPU architectures have been shown to be particularly vulnerable to radiation-induced failures. This work presents an experimental evaluation and analytical study of the reliability of two types of object detection algorithms: HOG and CNNs. This research aims not just to quantify but also to qualify the radiation-induced errors on object detection applications executed in embedded GPUs. HOG experimental results were obtained using two different architectures of embedded GPUs (Tegra and AMD APU), each exposed for about 100 hours to a controlled neutron beam at Los Alamos National Lab (LANL). Precision and Recall metrics are considered to evaluate the error criticality. The reported analysis shows that, while being intrinsically resilient (65% to 85% of output errors only slightly impact detection), HOG experienced some particularly critical errors that could result in undetected pedestrians or unnecessary vehicle stops. This work also evaluates the reliability of two Convolutional Neural Networks for object detection: You Only Look Once (YOLO) and Faster RCNN. Three different GPU architectures were exposed to controlled neutron beams (Kepler, Maxwell, and Pascal) detecting objects in both Caltech and Visual Object Classes data sets. By analyzing the neural network corrupted output, it is possible to distinguish between tolerable errors and critical errors, i.e., errors that could impact detection. Additionally, extensive GDB-level and architectural-level fault-injection campaigns were performed to identify HOG and YOLO critical procedures.
Results show that not all stages of object detection algorithms are critical to the final classification reliability. Thanks to the fault injection analysis it is possible to identify HOG and Darknet portions that, if hardened, are more likely to increase reliability without introducing unnecessary overhead. The proposed HOG hardening strategy is able to detect up to 70% of errors with a 12% execution time overhead.
9

Salomon, Sophie. "Bias Mitigation Techniques and a Cost-Aware Framework for Boosted Ranking Algorithms." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1586450345426827.

Full text
10

Bhattacharya, Koustav. "Architectures and algorithms for mitigation of soft errors in nanoscale VLSI circuits." [Tampa, Fla] : University of South Florida, 2009. http://purl.fcla.edu/usf/dc/et/SFE0003280.

Full text
11

Fyrvald, Johanna. "Mitigating algorithmic bias in Artificial Intelligence systems." Thesis, Uppsala universitet, Matematiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-388627.

Full text
Abstract:
Artificial Intelligence (AI) systems are increasingly used in society to make decisions that can have direct implications on human lives: credit risk assessments, employment decisions and criminal suspect prediction. As public attention has been drawn towards examples of discriminating and biased AI systems, concerns have been raised about the fairness of these systems. Face recognition systems, in particular, are often trained on non-diverse data sets where certain groups are underrepresented. The focus of this thesis is to provide insights regarding different aspects that are important to consider in order to mitigate algorithmic bias, as well as to investigate the practical implications of bias in AI systems. To fulfil this objective, qualitative interviews with academics and practitioners with different roles in the field of AI and a quantitative online survey are conducted. A practical scenario covering face recognition and gender bias is also applied in order to understand how people reason about this issue in a practical context. The main conclusion of the study is that despite high levels of awareness and understanding of the challenges and technical solutions, the academics and practitioners showed little or no awareness of legal aspects regarding bias in AI systems. The implication of this finding is that AI can be seen as a disruptive technology, where organizations tend to develop their own mitigation tools and frameworks, and use their own moral judgement and understanding of the area, instead of turning to legal authorities.
12

Ramachandran, Anirudh Vadakkedath. "Mitigating spam using network-level features." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41068.

Full text
Abstract:
Spam is an increasing menace in email: 90% of email is spam, and over 90% of spam is sent by botnets---networks of compromised computers under the control of miscreants. In this dissertation, we introduce email spam filtering using network-level features of spammers. Network-level features are based on lightweight measurements that can be made in the network, often without processing or storing a message. These features stay relevant for longer periods, are harder for criminals to alter at will (e.g., a bot cannot act independently of other bots in the botnet), and afford the unique opportunity to observe the coordinated behavior of spammers. We find that widely-used IP address-based reputation systems (e.g., IP blacklists) cannot keep up with the threats of spam from previously unseen IP addresses, and from new and stealthy attacks: to thwart IP-based reputation systems, spammers are reconnoitering IP blacklists and sending spam from hijacked IP address space. Finally, spammers are "gaming" collaborative filtering by casting fraudulent "Not Spam" votes on spam email in Web-based email systems. We present three systems, one for each attack, that detect spam using spammer behavior rather than IP addresses. First, we present IP blacklist counter-intelligence, a system that can passively enumerate spammers performing IP blacklist reconnaissance. Second, we present SpamTracker, a system that distinguishes spammers from legitimate senders by applying clustering on the set of domains to which email is sent. Third, we analyze vote-gaming attacks in large Web-based email systems that pollute user feedback on spam emails, and present an efficient clustering-based method to mitigate such attacks.
13

Ketonen, J. (Johanna). "Equalization and channel estimation algorithms and implementations for cellular MIMO-OFDM downlink." Doctoral thesis, Oulun yliopisto, 2012. http://urn.fi/urn:isbn:9789514298578.

Full text
Abstract:
The aim of the thesis is to develop algorithms and architectures to meet the high data rate and low complexity requirements of future mobile communication systems. Algorithms, architectures and implementations for detection, channel estimation and interference mitigation in multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) receivers are presented. The performance-complexity trade-offs in different receiver algorithms are studied and the results can be utilized in receiver design as well as in system design. Implementation of detectors for spatial multiplexing systems is considered first. The linear minimum mean squared error (LMMSE) detector and the K-best list sphere detector (LSD) are compared to the successive interference cancellation (SIC) detector. The SIC algorithm was found to perform worse than the K-best LSD when the MIMO channels are highly correlated. The performance difference diminishes when the correlation decreases. With feedback to the transmitter, the performance difference is even smaller, but full rank transmissions still require a more complex detector. A reconfigurable receiver, using a simple or a more complex detector as the channel conditions change, would achieve the best performance while consuming the least amount of power in the receiver. The use of decision directed (DD) channel estimation is also studied. The 3GPP long term evolution (LTE) based pilot structure is used as a benchmark. The performance and complexity of the pilot symbol based least-squares (LS) channel estimator, the minimum mean square error (MMSE) filter and the DD space-alternating generalized expectation-maximization (SAGE) algorithm are studied. DD channel estimation and MMSE filtering improve the performance at high user velocities, where the pilot symbol density is not sufficient.
With DD channel estimation, the pilot overhead can be reduced without any performance degradation by transmitting data instead of pilot symbols. Suppression of co-channel interference in the MIMO-OFDM receiver is finally considered. The interference and noise spatial covariance matrix is used in data detection and channel estimation. Interference mitigation is applied to linear and nonlinear detectors. An algorithm to adapt the accuracy of the matrix decomposition and the use of interference suppression is proposed. The adaptive algorithm performs well in all interference scenarios and the power consumption of the receiver can be reduced.
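The LMMSE detection step compared above computes x_hat = (H^H H + sigma^2 I)^(-1) H^H y. A minimal sketch for an illustrative real-valued 2x2 system follows (in practice the channel matrix and symbols are complex and the sizes follow the antenna configuration; the channel values and noise level here are made up):

```python
def lmmse_detect(H, y, noise_var):
    """LMMSE detection for a 2x2 real-valued MIMO system:
    x_hat = (H^T H + noise_var * I)^(-1) H^T y, with the 2x2 inverse
    written out in closed form."""
    # A = H^T H + noise_var * I
    a = H[0][0] ** 2 + H[1][0] ** 2 + noise_var
    b = H[0][0] * H[0][1] + H[1][0] * H[1][1]
    d = H[0][1] ** 2 + H[1][1] ** 2 + noise_var
    det = a * d - b * b
    # z = H^T y
    z0 = H[0][0] * y[0] + H[1][0] * y[1]
    z1 = H[0][1] * y[0] + H[1][1] * y[1]
    # x_hat = A^(-1) z
    return [(d * z0 - b * z1) / det, (-b * z0 + a * z1) / det]

# Example: transmit x = [+1, -1] through an illustrative channel H.
H = [[1.0, 0.3], [0.2, 0.9]]
x = [1.0, -1.0]
y = [H[0][0] * x[0] + H[0][1] * x[1],   # noiseless reception for clarity
     H[1][0] * x[0] + H[1][1] * x[1]]
x_hat = lmmse_detect(H, y, noise_var=0.01)
```

A K-best LSD would instead search a pruned tree of candidate symbol vectors, which is why it outperforms the linear detector on highly correlated channels at higher complexity.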
14

Yao, Sirui. "Evaluating, Understanding, and Mitigating Unfairness in Recommender Systems." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103779.

Full text
Abstract:
Recommender systems are information filtering tools that discover potential matchings between users and items and benefit both parties. This benefit can be considered a social resource that should be equitably allocated across users and items, especially in critical domains such as education and employment. Biases and unfairness in recommendations raise both ethical and legal concerns. In this dissertation, we investigate the concept of unfairness in the context of recommender systems. In particular, we study appropriate unfairness evaluation metrics, examine the relation between bias in recommender models and inequality in the underlying population, and propose effective unfairness mitigation approaches. We start with exploring the implications of fairness in recommendation and formulating unfairness evaluation metrics. We focus on the task of rating prediction. We identify the insufficiency of demographic parity for scenarios where the target variable is justifiably dependent on demographic features. We then propose an alternative set of unfairness metrics, measured by how much the average predicted ratings deviate from the average true ratings. We also reduce these forms of unfairness in matrix factorization (MF) models by explicitly adding them as penalty terms to the learning objective. Next, we target a form of unfairness in matrix factorization models observed as disparate model performance across user groups. We identify four types of biases in the training data that contribute to higher subpopulation error. We then propose personalized regularization learning (PRL), which learns personalized regularization parameters that directly address the data biases. PRL poses the hyperparameter search problem as a secondary learning task. It enables back-propagation to learn the personalized regularization parameters by leveraging the closed-form solutions of alternating least squares (ALS) used to solve MF.
Furthermore, the learned parameters are interpretable and provide insights into how fairness is improved. Third, we conduct a theoretical analysis of the long-term dynamics of inequality in the underlying population, in terms of the fit between users and items. We view the task of recommendation as solving a set of classification problems through threshold policies. We mathematically formulate the transition dynamics of user-item fit in one step of recommendation. Then we prove that a system with the formulated dynamics always has at least one equilibrium, and we provide sufficient conditions for the equilibrium to be unique. We also show that, depending on the item category relationships and the recommendation policies, recommendations in one item category can reshape the user-item fit in another item category. To summarize, in this research, we examine different fairness criteria in rating prediction and recommendation, study the dynamics of interactions between recommender systems and users, and propose mitigation methods to promote fairness and equality.
Doctor of Philosophy
Recommender systems are information filtering tools that discover potential matchings between users and items. However, a recommender system, if not properly built, may not treat users and items equitably, which raises ethical and legal concerns. In this research, we explore the implications of fairness in the context of recommender systems, study the relation between unfairness in recommender output and inequality in the underlying population, and propose effective unfairness mitigation approaches. We start by finding unfairness metrics appropriate for recommender systems. We focus on the task of rating prediction, which is a crucial step in recommender systems. We propose a set of unfairness metrics measured as the disparity in how much predictions deviate from the ground truth ratings. We also offer a mitigation method to reduce these forms of unfairness in matrix factorization models. Next, we look deeper into the factors that contribute to error-based unfairness in matrix factorization models and identify four types of biases that contribute to higher subpopulation error. Then we propose personalized regularization learning (PRL), a mitigation strategy that learns personalized regularization parameters to directly address data biases. The learned per-user regularization parameters are interpretable and provide insight into how fairness is improved. Third, we conduct a theoretical study on the long-term dynamics of the inequality in the fit (e.g., interest, qualification, etc.) between users and items. We first mathematically formulate the transition dynamics of user-item fit in one step of recommendation. Then we discuss the existence and uniqueness of system equilibrium as the one-step dynamics repeat. We also show that, depending on the relation between item categories and the recommendation policies (unconstrained or fair), recommendations in one item category can reshape the user-item fit in another item category. 
In summary, we examine different fairness criteria in rating prediction and recommendation, study the dynamics of interactions between recommender systems and users, and propose mitigation methods to promote fairness and equality.
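The deviation-based unfairness idea described in this abstract can be illustrated with a short sketch. This is a simplified, hypothetical reading of the metric family (the thesis defines its metrics per item); the function name and the two-group setup are illustrative assumptions:

```python
import numpy as np

def group_deviation_unfairness(pred, true, group_mask):
    """Disparity between two user groups in how far the average predicted
    rating deviates from the average true rating (zero means parity)."""
    g = np.asarray(group_mask, dtype=bool)
    dev_a = np.mean(pred[g]) - np.mean(true[g])     # deviation for group A
    dev_b = np.mean(pred[~g]) - np.mean(true[~g])   # deviation for group B
    return abs(dev_a - dev_b)

pred = np.array([4.0, 4.0, 2.0, 2.0])   # model over-rates group A, under-rates B
true = np.array([3.0, 3.0, 3.0, 3.0])
mask = np.array([1, 1, 0, 0])
print(group_deviation_unfairness(pred, true, mask))  # → 2.0
```

Added as a penalty term to the MF training objective, minimizing a quantity of this kind alongside reconstruction error is the mitigation route the abstract describes.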
APA, Harvard, Vancouver, ISO, and other styles
15

Aputis, Artūras. "DDoS (distributed denial of service) atakų atrėmimo algoritmų tyrimas ir modeliavimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2012~D_20131105_095501-14832.

Full text
Abstract:
Šiuo metu yra sukurta nemažai priemonių aptikti įvairiausias DDoS atakas, tačiau siekiant sustabdyti arba bent sušvelninti DDoS atakų poveikį yra nuveikta labai nedaug. Yra labai sunku pasirinkti tinkamą DDoS atakos atrėmimo metodą. DDoS atakų atrėmimo metodų analizė galėtų padėti pasirinkti tinkamiausią metodą. „BGP DDoS Diversion“ atakų atrėmimo metodas yra vienas efektyviausių ir mažiausiai kaštų reikalaujantis metodas. Šis metodas įgyvendinamas panaudojant BGP protokolą. Ataka yra atremiama kuomet BGP protokolo pagalba yra paskelbiama tik dalis tinklo. DDoS atakos duomenų srautas tokiu atveju yra nukreipiamas į paskelbtą tinklo dalį, o kita tinklo dalis lieka nepažeista atakos. Interneto paslaugų teikėjai naudodami „BGP DDoS Diversion“ atrėmimo metodą gali apsaugoti savo tinklą nuo visiško nepasiekiamumo. Šiame tyrime buvo išnagrinėti DDoS atakų atrėmimo metodai. Išsamiai analizei buvo pasirinktas „BGP DDoS Diversion“ atrėmimo metodas. Metodo analizei buvo pasirinkta virtuali terpė. Sudaryti virtualią terpę buvo pasirinkta OPNET tinklų modeliavimo programa. Panaudojant OPNET modeliavimo įrangą, buvo sukurtas virtualus tinklas, veikiantis Interneto tinklo pagrindu. Sukurtame tinkle buvo įgyvendintas „BGP DDoS Diversion“ atakų atrėmimo metodas. Šiame darbe yra pateikta minėto atrėmimo metodo veikimo charakteristikų analizė.
Nowadays there are many ways to detect various types of DDoS attacks, but much less has been done to stop, or at least mitigate, their impact. It is very difficult to choose the right DDoS mitigation method, and an analysis of DDoS mitigation methods can serve as a good guide to choosing the most appropriate one. The "BGP DDoS Diversion" method is one of the most effective and least costly DDoS mitigation methods. It is implemented using the BGP protocol: a BGP diversion mechanism announces only a specific part of the provider's network to (a part of) the Internet. Announcing a specific part of the network diverts the DDoS traffic and thereby prevents other parts of the provider's network from becoming unreachable, which gives the provider the ability to continue providing services to the rest of its customers. This research analyzed DDoS mitigation methods, with the "BGP DDoS Diversion" method chosen for detailed analysis. A virtual environment was the best way to accomplish this task, and the OPNET Modeler software was chosen to create it. Using OPNET, a virtual network based on Internet network standards was created, and the "BGP DDoS Diversion" method was implemented and tested in it. This research provides a detailed analysis of the "BGP DDoS Diversion" method.
APA, Harvard, Vancouver, ISO, and other styles
16

Abouzar, Pooyan. "Mitigating the effect of propagation impairments on higher layer protocols and algorithms in wireless sensor networks." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/58915.

Full text
Abstract:
Wireless sensor networks (WSNs) range from body area networks (BANs), which involve a relatively small number of nodes, short paths, and frequent update rates, to precision agriculture wireless sensor networks (PAWSNs), which involve a relatively large number of nodes, long paths, and infrequent update rates. They are distinguished from wireless access networks by: 1) their mesh architecture and reliance on higher layer protocols and algorithms to perform routing, scheduling, localization, and node placement, and 2) their need to operate for long periods of time with only limited access to battery or scavenged power. Energy conservation has long been an important goal for developers of WSNs, and the potential for reducing energy consumption in such networks by reducing the strength and/or frequency of transmission has long been recognized. Although the impact of propagation impairments on the physical and media access control layers of WSNs has long been considered, few previous studies have sought to assess their impact on higher layer protocols and algorithms and devise schemes for mitigating or accounting for such impacts. Here, we present four case studies that demonstrate how higher layer protocols and algorithms can be devised to achieve greater energy efficiency by accounting for the nature of the propagation impairments experienced. In the first two case studies, we focus on BANs and: 1) propose a routing protocol that uses linear programming techniques to ensure that all nodes expend energy at a similar rate, thereby maximizing network lifetime, and 2) propose a scheduling algorithm that accounts for the periodic shadowing observed over many BAN links, thereby reducing the transmit power required to transfer information and again maximizing network lifetime. 
In the second two case studies, we focus on PAWSNs and: 3) propose an efficient localization algorithm based on a Bayesian model for information aggregation, and 4) demonstrate that the path loss directionality observed in sites such as high-density apple orchards greatly affects WSN connectivity and, therefore, energy consumption, and must be considered when designing node placement in agricultural fields.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
17

Powell, Keith. "Next generation wavefront controller for the MMT adaptive optics system: Algorithms and techniques for mitigating dynamic wavefront aberrations." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/222838.

Full text
Abstract:
Wavefront controller optimization is important in achieving the best possible image quality for adaptive optics systems on the current generation of large and very large aperture telescopes. This will become even more critical when we consider the demands of the next generation of extremely large telescopes currently under development. These telescopes will be capable of providing resolution which is significantly greater than the current generation of optical/IR telescopes. However, reaching the full resolving potential of these instruments will require a careful analysis of all disturbance sources, then optimizing the wavefront controller to provide the best possible image quality given the desired science goals and system constraints. Along with atmospheric turbulence and sensor noise, structural vibration will play an important part in determining the overall image quality obtained. The next generation of very large aperture telescopes currently being developed will require assessing the effects of structural vibration on closed loop AO system performance as an integral part of the overall system design. Telescope structural vibrations can seriously degrade image quality, resulting in actual spot full width half maximum (FWHM) and angular resolution much worse than the theoretical limit. Strehl ratio can also be significantly degraded by structural vibration as energy is dispersed over a much larger area of the detector. In addition to increasing telescope diameter to obtain higher resolution, there has also been significant interest in adaptive optics systems which observe at shorter wavelength from the near infrared to visible (VNIR) wavelengths, at or near 0.7 microns. This will require significant reduction in the overall wavefront residuals as compared with current systems, and will therefore make assessment and optimization of the wavefront controller even more critical for obtaining good AO system performance in the VNIR regime.
APA, Harvard, Vancouver, ISO, and other styles
18

Fang, Yiping. "Critical infrastructure protection by advanced modelling, simulation and optimization for cascading failure mitigation and resilience." Thesis, Châtenay-Malabry, Ecole centrale de Paris, 2015. http://www.theses.fr/2015ECAP0013/document.

Full text
Abstract:
Sans cesse croissante complexité et l'interdépendance des infrastructures critiques modernes, avec des environs de risque plus en plus complexes, posent des défis uniques pour leur exploitation sûre, fiable et efficace. L'objectif de la présente thèse est sur la modélisation, la simulation et l'optimisation des infrastructures critiques (par exemple, les réseaux de transmission de puissance) à l'égard de leur vulnérabilité et la résilience aux défaillances en cascade. Cette étude aborde le problème en modélisant infrastructures critiques à un niveau fondamental, en se concentrant sur la topologie du réseau et des modèles de flux physiques dans les infrastructures critiques. Un cadre de modélisation hiérarchique est introduit pour la gestion de la complexité du système. Au sein de ces cadres de modélisation, les techniques d'optimisation avancées (par exemple, non-dominée de tri binaire évolution différentielle (NSBDE) algorithme) sont utilisés pour maximiser à la fois la robustesse et la résilience (capacité de récupération) des infrastructures critiques contre les défaillances en cascade. Plus précisément, le premier problème est pris à partir d'un point de vue de la conception du système holistique, c'est-à-dire certaines propriétés du système, tels que ses capacités de topologie et de liaison, sont redessiné de manière optimale afin d'améliorer la capacité de résister à des défaillances systémiques de système. Les deux modèles de défaillance en cascade topologiques et physiques sont appliquées et leurs résultats correspondants sont comparés. En ce qui concerne le deuxième problème, un nouveau cadre est proposé pour la sélection optimale des mesures appropriées de récupération afin de maximiser la capacité du réseau d’infrastructure critique de récupération à partir d'un événement perturbateur. 
Un algorithme d'optimisation de calcul pas cher heuristique est proposé pour la solution du problème, en intégrant des concepts fondamentaux de flux de réseau et le calendrier du projet. Exemples d'analyse sont effectués en se référant à plusieurs systèmes de CI réalistes
Continuously increasing complexity and interconnectedness of modern critical infrastructures, together with increasingly complex risk environments, pose unique challenges for their secure, reliable, and efficient operation. The focus of the present dissertation is on the modelling, simulation and optimization of critical infrastructures (CIs) (e.g., power transmission networks) with respect to their vulnerability and resilience to cascading failures. This study approaches the problem by first modelling CIs at a fundamental level, focusing on network topology and physical flow patterns within the CIs. A hierarchical network modelling technique is introduced for the management of system complexity. Within these modelling frameworks, advanced optimization techniques (e.g., the non-dominated sorting binary differential evolution (NSBDE) algorithm) are utilized to maximize both the robustness and resilience (recovery capacity) of CIs against cascading failures. Specifically, the first problem is taken from a holistic system design perspective, i.e. some system properties, such as its topology and link capacities, are redesigned in an optimal way in order to enhance the system's capacity of resisting systemic failures. Both topological and physical cascading failure models are applied and their corresponding results are compared. With respect to the second problem, a novel framework is proposed for optimally selecting proper recovery actions in order to maximize the capacity of the CI network to recover from a disruptive event. A heuristic, computationally cheap optimization algorithm is proposed for the solution of the problem, integrating fundamental concepts from network flows and project scheduling. Examples of analysis are carried out by referring to several realistic CI systems.
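The non-dominated sorting at the core of NSBDE-style multi-objective optimizers can be sketched as follows. This is a generic illustration of the first Pareto front for two minimized objectives, not the thesis's implementation; the objective values are made up:

```python
def first_pareto_front(points):
    """Return the non-dominated points when minimizing both objectives:
    a point survives if no other point is at least as good in both
    objectives (and is a distinct point)."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front

# e.g. hypothetical trade-offs between vulnerability and redesign cost
pts = [(1, 5), (2, 2), (5, 1), (4, 4)]
print(first_pareto_front(pts))  # (4, 4) is dominated by (2, 2)
```

NSBDE repeatedly peels off such fronts to rank a population, then applies differential-evolution variation on binary decision vectors (e.g., which links to upgrade).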
APA, Harvard, Vancouver, ISO, and other styles
19

Poulsen, Andrew Joseph. "Real-time Adaptive Cancellation of Satellite Interference in Radio Astronomy." Diss., CLICK HERE for online access, 2003. http://contentdm.lib.byu.edu/ETD/image/etd238.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Chen, Min-Pei, and 陳民培. "Multipath Detection and Mitigation Algorithm for Global Positioning System." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/39836204088957373003.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Communication Engineering
97
In the Global Positioning System (GPS), many positioning errors, such as clock error, tropospheric error, and ionospheric error, can be eliminated by using differential techniques. But the characteristics of multipath interference depend on time and environment, which means it cannot be canceled by differential techniques. Therefore, multipath becomes one of the main error sources in GPS. Based on the characteristics of multipath, this thesis proposes a multipath detection and multipath mitigation algorithm. The purpose of the multipath detector is to determine whether a multipath signal is present. If the detector decides that multipath is present, the receiver may take actions to mitigate its effect. The design issues of the multipath detector include the selection of correlator spacing, the number of taps of the moving average filter, and the threshold setting. The influences of the front-end RF filter and fixed-point effects are also analyzed. Simulation results show that the false alarm and miss detection probabilities of the multipath detector are quite low. Moreover, its real-time detection capability is very robust. For multipath mitigation, this thesis proposes a code discriminator that reduces the multipath effect by increasing the discriminator gain. In comparison with the conventional narrow correlator, the proposed discriminator provides a 20% reduction in pseudorange error when the front-end RF bandwidth is 8.184 MHz and 56% when it is 16.368 MHz. However, the jitter when using the proposed discriminator is about 1.6 times larger than that of the narrow correlator.
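The narrow-correlator baseline mentioned in this comparison can be sketched on the ideal triangular C/A-code autocorrelation. This is a textbook illustration under an infinite-bandwidth assumption; the thesis's proposed higher-gain discriminator is not reproduced here:

```python
def ca_autocorr(tau):
    """Ideal (infinite-bandwidth) C/A-code autocorrelation; tau in chips."""
    return max(0.0, 1.0 - abs(tau))

def early_late(code_error, spacing=0.1):
    """Narrow-correlator early-minus-late discriminator (spacing in chips).
    Within the linear region it returns ~2 * code_error."""
    early = ca_autocorr(code_error - spacing / 2.0)
    late = ca_autocorr(code_error + spacing / 2.0)
    return early - late

print(early_late(0.02))  # ≈ 0.04, i.e. proportional to the tracking error
```

A multipath echo adds a delayed, attenuated copy to the correlation peak and shifts the discriminator's zero crossing; narrowing the spacing or raising the discriminator gain, as the thesis does, shrinks that bias at the cost of some tracking jitter.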
APA, Harvard, Vancouver, ISO, and other styles
21

Ming-I, Chao, and 趙明義. "Phase noise mitigation algorithm in 60 GHz RoF system." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/13148812948270350252.

Full text
Abstract:
Master's
National Chiao Tung University
Institute of Photonic System
100
The demand for wireless bandwidth is increasing, and in the near future a few Gbps over wireless will be needed. The best way to provide such high-capacity wireless is at 60 GHz, because of the huge 7 GHz unlicensed bandwidth. Radio-over-fiber (RoF) systems provide efficient ways to transport and distribute high-frequency signals such as 60 GHz. The phase noise (linewidth) of the laser light source and fiber dispersion in the RoF system induce signal impairment and limit the transmission distance and data rate. In this research, a phase noise model and mitigation algorithm are built that keep the bit-error-rate under the FEC limit through over 100 km of transmission with a DFB laser. Combined with bit-loading, the data rate can be improved by over 15% with 100 km fiber transmission.
APA, Harvard, Vancouver, ISO, and other styles
22

Chiang, Hsing-kuo, and 江興國. "Interacting Multiple Model Algorithm for NLOS Mitigation in Wireless Location." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/17914550076557737594.

Full text
Abstract:
Master's
National Sun Yat-sen University
Department of Electrical Engineering
97
In the thesis, we propose a non-line-of-sight (NLOS) mitigation approach based on the interacting multiple model (IMM) algorithm. The IMM-based structure, composed of a biased Kalman filter (BKF) and a Kalman filter with an NLOS-discarding process (KF-D), is capable of mitigating the ranging error caused by NLOS effects, and therefore improves the performance and accuracy of wireless location systems. The NLOS effect on signal transmission is one of the major factors that affect the accuracy of time-based location systems. Effective NLOS identification and mitigation usually count on a pre-determined statistical distribution and hypothesis assumptions about the signals. Because the variance of the NLOS error is much larger than that of the measurement noise, a hypothesis test on the LOS/NLOS status can be formulated. The BKF incorporates a sliding window and decides the status by hypothesis testing. The calculated variance and the detection result are used to switch between the biased and unbiased modes of the Kalman filter. In contrast, the KF-D scheme identifies the NLOS status and tries to eliminate the NLOS effects by directly using the estimated results from the LOS stage. The KF-D scheme can achieve reasonably good NLOS mitigation if the estimates in the LOS status are obtained. Due to the discarding process, however, changes of the state vector within the NLOS stage may be ignored, causing larger errors in the state estimates. The BKF and KF-D can make up for each other when formulated in an IMM structure, which tunes the probabilities of the BKF and KF-D. In our approach, the measured data are smoothed by a sliding window and a BKF. The variance of the data and the hypothesis test result are passed to the two filters. The BKF switches between the biased/unbiased modes using the result, and the KF-D may receive the estimated value from the BKF based on the results. 
The probability computation unit changes the weights to obtain the estimated TOA values. With simulations using ultra-wideband (UWB) signals, it can be seen that the proposed IMM-based approach can effectively mitigate NLOS effects and increase the accuracy of wireless positioning.
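The model-probability bookkeeping of an IMM structure like the one above can be sketched as follows. This is generic IMM arithmetic, not the thesis's code; the transition matrix and likelihood values below are made-up assumptions:

```python
import numpy as np

def imm_update_probs(mu_prev, trans, likelihoods):
    """One IMM cycle for the model probabilities: propagate through the
    Markov mode-transition matrix, then reweight by each filter's
    measurement likelihood (here the BKF and KF-D would supply them)."""
    predicted = trans.T @ mu_prev          # prior mode probabilities
    posterior = predicted * likelihoods    # Bayes reweighting
    return posterior / posterior.sum()

mu = np.array([0.5, 0.5])                  # [P(LOS model), P(NLOS model)]
T = np.array([[0.9, 0.1], [0.1, 0.9]])     # sticky mode transitions
lik = np.array([2.0, 1.0])                 # the LOS filter fits the data better
print(imm_update_probs(mu, T, lik))        # ≈ [0.667, 0.333]
```

The resulting probabilities are exactly the weights used to blend the two filters' state estimates, which is how the structure "tunes up" the BKF and KF-D contributions over time.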
APA, Harvard, Vancouver, ISO, and other styles
23

Yen, Chen-Hsu, and 顏晨旭. "Using An Embedded System to Implement STAP Algorithm for GPS Interference Mitigation." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/96376567364072850531.

Full text
Abstract:
Master's
National Cheng Kung University
Department of Electrical Engineering (MS/PhD Program)
95
The Global Navigation Satellite System (GNSS) signal that reaches the receiver is quite weak. Because the signal is mixed with interference and noise, this weak signal cannot be used to track trajectories precisely. Based on the spatial characteristics of the signal, the beampattern of the antenna array can be adjusted by an adaptive algorithm to enhance the GPS signal and perform interference mitigation simultaneously. With the rapid development of electronic technologies, the adaptive algorithm can be implemented on a programmable logic device. Hence, this thesis utilizes an embedded system to achieve GPS interference mitigation. The main architecture combines an antenna array and an RF front end as the platform for the reception of GPS signals. In order to mitigate interference in the GPS signal, a space-time adaptive processing (STAP) algorithm is implemented on the embedded system, the Stratix II DSP development kit.
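The spatial part of such adaptive nulling can be illustrated with a minimum-variance beamformer sketch. The array geometry, loading factor, and signal parameters below are assumptions for illustration only, and full STAP additionally stacks temporal taps into the same formulation:

```python
import numpy as np

def steering(theta_deg, n_elems, spacing=0.5):
    """Steering vector of a uniform linear array (spacing in wavelengths)."""
    n = np.arange(n_elems)
    return np.exp(2j * np.pi * spacing * n * np.sin(np.deg2rad(theta_deg)))

def mvdr_weights(snapshots, a_des, loading=1e-3):
    """Minimum-variance distortionless-response weights: null strong
    interference while keeping unit gain toward the desired direction."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    R += loading * np.eye(R.shape[0])      # diagonal loading for robustness
    w = np.linalg.solve(R, a_des)
    return w / (a_des.conj() @ w)          # enforce unit gain toward a_des

rng = np.random.default_rng(0)
a_jam = steering(40.0, 4)                  # strong jammer at 40 degrees
snaps = np.outer(a_jam, 100.0 * rng.standard_normal(200))
snaps = snaps + 0.01 * rng.standard_normal(snaps.shape)
w = mvdr_weights(snaps, steering(0.0, 4))
print(abs(w.conj() @ steering(0.0, 4)))    # unit gain toward the satellite
print(abs(w.conj() @ a_jam))               # deep null on the jammer
```

The covariance inversion places a spatial null on the dominant interference while preserving gain toward the (much weaker) GPS signal, which is the behavior the STAP hardware implementation targets.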
APA, Harvard, Vancouver, ISO, and other styles
24

Lu, Shao Hang, and 呂少航. "Design of Cyclostationary Impulsive Noise Mitigation Algorithm for Narrowband Power Line Communications." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/xpshpy.

Full text
Abstract:
Master's
National Taiwan University of Science and Technology
Graduate Institute of Automation and Control
106
Many power devices operate in the narrowband power line channel. Among them, electronic components with switching functions introduce high-energy impulsive noise. As a result, data are corrupted during transmission and data integrity is compromised. Research statistics indicate that periodic impulsive noise with cyclostationary characteristics is one of the most threatening noise types in the narrowband channel, and its energy is usually more than 10 times larger than the background noise. In order to overcome the impact of impulsive noise on data transmission quality and to reduce the probability of transmission errors, this thesis proposes a cyclostationary impulsive noise mitigation algorithm utilizing a frequency shift (FRESH) filter to estimate the impulsive noise. It improves the performance of cyclostationary impulsive noise estimation over the conventional linear time-invariant (LTI) filter. Moreover, an adaptive noise predictor is combined to enhance the integrity of the reconstructed noise and to reduce the impact of the signal being interfered with by the impulsive noise. Simulation results show that the proposed algorithm can effectively estimate and mitigate the cyclostationary impulsive noise in the received signals and achieves significantly improved performance in comparison with the conventional blanking method.
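The conventional blanking baseline that the proposed FRESH-filter estimator is compared against can be sketched in a few lines. The threshold value is an illustrative assumption; the FRESH filter itself is considerably more involved and is not reproduced here:

```python
import numpy as np

def blank(received, threshold):
    """Blanking nonlinearity: zero any sample whose magnitude exceeds the
    threshold, on the assumption that it is an impulsive-noise hit."""
    out = received.copy()
    out[np.abs(out) > threshold] = 0.0
    return out

rx = np.array([0.3, -0.5, 8.0, 0.2, -7.5, 0.4])   # two strong impulses
print(blank(rx, threshold=2.0))                    # impulses at 8.0 and -7.5 are zeroed
```

Blanking also wipes out the signal sample underneath each impulse, which is one reason estimating and subtracting the noise waveform, as the thesis does, can outperform it.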
APA, Harvard, Vancouver, ISO, and other styles
25

CHEN, PO-YU, and 陳柏宇. "Time-Frequency Signal Processing-based Jamming Detection and Mitigation Algorithm for Galileo Receivers." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/dgpxpu.

Full text
Abstract:
Master's
National Ilan University
Department of Electrical Engineering (Master's Program)
103
Due to the increasing demands of location-based services (LBSs), many appliances, such as smart phones, smart wearable devices, unmanned cars, and unmanned commercial aircraft, require high-accuracy positioning using GNSS (Global Navigation Satellite System) receivers. However, malicious users can easily obtain an inexpensive GNSS jammer, which can generate continuous waveform interference (CWI) and chirp interference, to paralyze the acquisition operation of GNSS receivers. Therefore, a jamming mitigation mechanism has to be implemented in GNSS receivers. In this thesis, we propose a novel algorithm to detect such jammers. Moreover, it can effectively combat chirp jammers, CWI jammers, and mixed chirp/CWI jammers for a Galileo receiver. Our proposed anti-jamming algorithm combines wavelet packet transform (WPT) techniques with adaptive predictors. With the WPT, we de-noise the received contaminated signal to generate a reference signal, which is then input to an adaptive filter to mimic the jamming signals. Simulation results show that the proposed jamming detection algorithm achieves a detection rate higher than 99%; for the multiple-CWI case, our method outperforms conventional adaptive notch filters; for chirp interference with a jamming-to-signal ratio (JSR) of 55 dB, our approach results in an acquisition probability of about 80%, which is near zero when an adaptive notch filter is used. However, in the mixed jammer case, the de-noising threshold cannot be correctly estimated, and our method has poor anti-jamming performance.
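A single-level Haar decomposition with soft thresholding gives a minimal stand-in for the wavelet de-noising step that builds the adaptive predictor's reference signal. The thesis uses a full wavelet packet tree; the filter choice and threshold here are illustrative assumptions:

```python
import numpy as np

def haar_soft_denoise(x, thresh):
    """One-level Haar analysis, soft-threshold the detail band, resynthesize.
    x must have even length."""
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0.0)
    y = np.empty_like(x)
    y[0::2] = (approx + detail) / np.sqrt(2.0)
    y[1::2] = (approx - detail) / np.sqrt(2.0)
    return y

clean = np.sin(np.linspace(0.0, np.pi, 16))
print(np.allclose(haar_soft_denoise(clean, 0.0), clean))  # True: zero threshold is lossless
```

The de-noised output would then serve as the reference input to an adaptive filter that mimics, and subtracts, the jamming waveform; choosing the threshold well is exactly what fails in the mixed-jammer case the abstract mentions.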
APA, Harvard, Vancouver, ISO, and other styles
26

Chen, Shi-Yao, and 陳仕堯. "An Optimization-Based Algorithm for Mitigation of Starvation in IEEE 802.11 Wireless Networks." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/36624851731355059014.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Information Management
104
Distributed Coordination Function (DCF) is the fundamental MAC mechanism of the IEEE 802.11 WLAN standard for accessing the medium and reducing the possibility of collisions via Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). Each sender must check whether the medium is available before sending data. However, in some situations the random access protocol, CSMA/CA, causes serious unfairness or flow starvation. In this research, we focus on mitigating starvation problems in Wi-Fi ad hoc networks to improve wireless network service, by adapting the transmission power range and the carrier sense threshold. The objective is to alleviate starvation between nodes in dense wireless networks. In this thesis, we build a mathematical optimization model for describing and mitigating the starvation problems, with an objective function and a set of constraints. To solve the optimization quickly, we utilize Lagrangean Relaxation (LR), which approximates a constrained optimization problem by a similar, simpler problem. The feasible solution is derived from information provided by the Lagrangean multipliers. We use Visual Studio 2013 and C/C++ for computational experiments, in which several scenarios are designed for evaluating the starvation problems. We anticipate that the results of the LR-based approach will be within a twenty-percent gap of the true optimum. Finally, we obtain a result that minimizes the starvation problems in IEEE 802.11 wireless networks.
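The LR idea can be sketched on a toy problem. This is a generic illustration of Lagrangean relaxation with subgradient multiplier updates on a 0/1 knapsack, not the thesis's 802.11 model; all numbers are made up:

```python
def lr_knapsack_bound(values, weights, cap, steps=50, step=0.1):
    """Upper bound on a 0/1 knapsack via Lagrangean relaxation: dualize
    the capacity constraint, solve the relaxed problem by inspection,
    and improve the multiplier with subgradient steps."""
    lam, best = 0.0, float("inf")
    for _ in range(steps):
        # relaxed problem separates per item: take item i iff v_i - lam*w_i > 0
        x = [1 if v - lam * w > 0 else 0 for v, w in zip(values, weights)]
        used = sum(w * xi for w, xi in zip(weights, x))
        bound = sum(v * xi for v, xi in zip(values, x)) + lam * (cap - used)
        best = min(best, bound)
        lam = max(0.0, lam - step * (cap - used))  # subgradient update
    return best

print(lr_knapsack_bound([6, 5, 4], [3, 3, 3], 6))  # converges to 11.0 here
```

The dual bound brackets the unknown optimum, and a feasible primal solution recovered from the multipliers gives the other side of the bracket, which is how a "within twenty percent of the optimum" guarantee can be certified without solving the hard problem exactly.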
APA, Harvard, Vancouver, ISO, and other styles
27

GUPTA, ANISHI. "GENERIC FRAMEWORK AND MITIGATION ALGORITHM AGAINST BLACK HOLE ATTACK FOR AODV ROUTING PROTOCOL IN MANET." Thesis, 2014. http://dspace.dtu.ac.in:8080/jspui/handle/repository/15729.

Full text
Abstract:
Ad hoc On-Demand Distance Vector (AODV) is a demand-driven routing protocol in Mobile Ad hoc Networks (MANETs). Ad hoc networks are always constrained in resources and threatened by malicious nodes, and hence a lightweight solution is preferred. Moreover, AODV is susceptible to many attacks, such as black hole, gray hole, worm hole, and so on, so there is always a security threat in ad hoc networks. For this reason, in this thesis we propose a new method, Extended Modified Enhanced AODV (EMEAODV), which extends our previous work, Modified Enhanced AODV (MEAODV). The new method is effective for multiple sessions, unlike our previous one. Detection and prevention of the black hole attack is done by real-time monitoring of a suspected node by its neighbor node. The method uses the concept of broadcasting. A node that replies to a RouteRequest (RREQ) from the source is monitored in promiscuous mode, and the malicious node is actually detected by the neighbor node of the RouteReply (RREP) sender. In simulation, the new method has shown outstanding results compared to the MEAODV, EAODV (Enhanced AODV), and IAODV (Improved AODV) mitigation methods for the AODV routing protocol. We simulated the proposed method using different mobility models, namely Random Waypoint, Manhattan Grid, Random Walk, and Gauss-Markov. We compared our mitigation scheme with other mitigation schemes and showed that our algorithm is best in terms of packet delivery ratio and end-to-end delay when varying the number of nodes, number of malicious nodes, number of TCP connections, and mobility speed.
APA, Harvard, Vancouver, ISO, and other styles
28

Lin, Chien-hung, and 林建宏. "TOA Wireless Location Algorithm with NLOS Mitigation Based on LS-SVM in UWB Systems." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/28tr6a.

Full text
Abstract:
Master's
National Sun Yat-sen University
Department of Electrical Engineering
96
One of the major problems encountered in wireless location is the effect caused by non-line-of-sight (NLOS) propagation. When the direct path from the mobile station (MS) to the base stations (BSs) is blocked by obstacles or buildings, the signal arrival times are delayed, so the measurements include an error due to the excess path propagation. If NLOS signal measurements are used for localization, the system's localization performance is greatly reduced. In this thesis, a time-of-arrival (TOA) based location system with an NLOS mitigation algorithm is proposed. The proposed method uses least squares support vector machine (LS-SVM) regression, with optimal parameter selection by particle swarm optimization (PSO), to establish a regression model used to estimate propagation distances and reduce NLOS propagation errors. Using a weighted objective function, the distance estimates are combined with suitable weight factors, which are derived from the differences between the estimated and measured values. By exploiting the optimality of the weighted objective function, the method is capable of mitigating NLOS effects and reducing propagation range errors. Computer simulation results in ultra-wideband (UWB) environments show that the proposed NLOS mitigation algorithm can efficiently reduce the mean and variance of NLOS measurements. The proposed method outperforms other methods in improving localization accuracy under different NLOS conditions.
APA, Harvard, Vancouver, ISO, and other styles
29

Wei-FanHsueh and 薛瑋帆. "Mitigation Algorithm for the Memory Impulse Noise in Narrow-band Power Line Communication Systems." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/68tt9c.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Tangtragulwong, Potchara. "Optimal Railroad Rail Grinding for Fatigue Mitigation." Thesis, 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8909.

Full text
Abstract:
This dissertation aims to study the benefit of rail grinding on the service life of railroad rails, focusing on failures due to rolling contact fatigue (RCF) at the rail head. Assuming a tangent rail with one-point contact at the running surface, a finite element analysis of a full-scale wheel-rail rolling contact with a nonlinear isotropic-kinematic hardening material model is performed to simulate the accumulation of residual stresses and strains in the rail head. Using rolling stress and strain results from the sixth loading cycle, in which residual stresses and strains are at their steady state, as input, two critical-plane fatigue criteria are proposed for fatigue analyses. The first is the stress-based approach, namely the Findley fatigue criterion. It suggests an important role of tensile residual stresses in subsurface crack nucleation and early growth in the rail head, but applications of the criterion to the near-running-surface region are limited because of plastic deformation from wheel-rail contact. The second is the strain-based approach, namely the Fatemi-Socie fatigue criterion. Driven mainly by shear strain amplitudes and factored by normal stress components, this criterion also predicts fatigue crack nucleation at the subsurface as a possible failure mode as well as fatigue crack nucleation near the surface, while maintaining its validity in both regions. A collection of fatigue test data for various types of rail steel from the literature is analyzed to determine a relationship between fatigue damage and number of cycles to failure. Considering a set of wheel loads with their corresponding numbers of rolling passages as a loading unit (LU), optimizations of grinding schedules with a genetic algorithm (GA) show that the fatigue life of rail increases by varying amounts when compared against the no-grinding case. 
Results show that the proposed grinding schedules, optimized with the exploratory and local-search genetic algorithms, can increase fatigue life of rail by 240 percent. The optimization framework is designed to be able to determine a set of optimal grinding schedules for different types of rail steel and different contact configurations, i.e. two-point contact occurred when cornering.
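As a rough, hypothetical illustration of the stress-based critical-plane screening the abstract describes (not the author's implementation, and with made-up stress values), the Findley parameter combines the shear stress amplitude on each candidate plane with a normal-stress term weighted by a material constant k:

```python
def findley_parameter(tau_amplitude, sigma_n_max, k=0.3):
    """Findley critical-plane parameter: shear stress amplitude on a
    candidate plane plus a normal-stress penalty weighted by k.
    The value of k here is a generic placeholder, not from the thesis."""
    return tau_amplitude + k * sigma_n_max

def critical_plane_search(stress_histories, k=0.3):
    """Return the maximum Findley parameter over all candidate planes.

    stress_histories: list of (tau_amplitude, sigma_n_max) pairs, one per
    candidate plane orientation (values are illustrative only)."""
    return max(findley_parameter(ta, sn, k) for ta, sn in stress_histories)

# Illustrative planes: (shear amplitude, max normal stress) in MPa
planes = [(120.0, 80.0), (150.0, 20.0), (100.0, 200.0)]
print(critical_plane_search(planes, k=0.3))  # worst (critical) plane value
```

The plane maximizing the parameter is taken as the critical plane; note how a large tensile normal stress can make a plane with modest shear amplitude the critical one, mirroring the abstract's point about tensile residual stresses.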
APA, Harvard, Vancouver, ISO, and other styles
31

Lu, Sili. "OFDM interference mitigation algorithms with application to DVB-H /." 2008. http://proquest.umi.com/pqdweb?did=1597607541&sid=1&Fmt=2&clientId=10361&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Filippone, Giuseppe, William Spataro, Donato D'Ambrosio, Davide Marocco, and Nicola Leone. "Parallel and evolutionary applications to cellular automata models for mitigation of lava flow invasions." Thesis, 2013. http://hdl.handle.net/10955/874.

Full text
Abstract:
PhD program in Mathematics and Computer Science (Dottorato di Ricerca in Matematica e Informatica), XXVI Cycle, 2013
In the lava flow mitigation context, the determination of areas exposed to volcanic risk is crucial for diminishing consequences in terms of human casualties and damage to material properties. In order to mitigate the destructive effects of lava flows along volcanic slopes, the building and positioning of artificial barriers is fundamental for controlling and slowing down the lava flow advance. In this thesis, a decision support system for defining and optimizing volcanic hazard mitigation interventions is proposed. The Cellular Automata numerical model SCIARA-fv2 for simulating lava flows at Mt Etna (Italy) and Parallel Genetic Algorithms (PGA) for optimizing protective measures construction by morphological evolution have been considered. In particular, the PGA application regarded the optimization of the position, orientation and extension of earth barriers built to protect Rifugio Sapienza, a touristic facility located near the summit of the volcano. A preliminary release of the algorithm, called the single barrier approach (SBA), was initially considered. Subsequently, a second GA strategy, called the Evolutionary Greedy Strategy (EGS), was implemented by introducing multibarrier protection measures in order to improve the efficiency of the final solution. Finally, a Coevolutionary Cooperative Strategy (CCS) has been introduced, where all barriers are encoded in the genotype and, because all the constituent parts of the solution interact with the GA environment, a mechanism of cooperation between individuals has been favored. Solutions provided by CCS were extremely efficient and, in particular, the extent of the barriers in terms of volume used to deviate the flow, thus avoiding that the lava reaches the inhabited area, was less by 72% with respect to the EGS and by 284% with respect to the SBA. 
It is also worth noting that the best set of interventions provided by CCS was approximately eighteen times more efficient than the one applied to divert the lava flow away from the facilities during the 2001 Mt Etna eruption. Due to the highly intensive computational processes involved, General-Purpose Computation on Graphics Processing Units (GPGPU) is applied to accelerate both single and multiple simultaneous runs of the SCIARA-fv2 model using CUDA (Compute Unified Device Architecture). Using four different GPGPU devices, the study also illustrates several implementation strategies to speed up the overall process and discusses some numerical results obtained. The experiments carried out show that significant performance improvements are achieved, with a parallel speedup of 77. Finally, to support the analysis phase of the results, an OpenGL and Qt extensible system for the interactive visualization of lava flow simulations was also developed. The system showed that it can run the combined rendering and simulations at interactive frame rates. The study has produced extremely positive results and represents, to our knowledge, the first application of morphological evolution for lava flow mitigation.
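The barrier optimization described above can be sketched, in heavily simplified form, as a genetic algorithm over a (x, y, orientation, length) genotype. The fitness function below is a stand-in toy objective invented for illustration; in the thesis, fitness comes from SCIARA-fv2 lava-flow simulations plus the barrier volume:

```python
import random

def fitness(barrier):
    """Stand-in for a SCIARA-fv2 lava-flow simulation: here we simply
    penalise squared distance from a hypothetical ideal barrier
    (x, y, angle, length). Purely illustrative."""
    ideal = (10.0, 5.0, 45.0, 100.0)
    return -sum((g - i) ** 2 for g, i in zip(barrier, ideal))

def evolve(pop_size=30, generations=50, seed=1):
    """Minimal elitist GA: keep the best half, refill by one-point
    crossover with occasional Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 200) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, 4)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:             # mutation
                j = rng.randrange(4)
                child[j] += rng.gauss(0, 5)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # approaches 0 as the barrier nears the toy ideal
```

The thesis's SBA/EGS/CCS variants differ in how many barriers the genotype encodes and in how individuals cooperate, but the evolutionary loop has this same select-recombine-mutate shape.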
Università della Calabria
APA, Harvard, Vancouver, ISO, and other styles
33

Zafaruddin, S. M. "Algorithms and performance analysis for self and alien crosstalk mitigation in upstream vectored VDSL." Thesis, 2012. http://localhost:8080/xmlui/handle/12345678/6798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Hsu, Chao-Yuan, and 許兆元. "ICI Mitigation for High-mobility SISO/MIMO OFDM Systems: Low-complexity Algorithms and Performance Analysis." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/41881546213151002873.

Full text
Abstract:
PhD
National Chiao Tung University
Department of Communication Engineering
97 (ROC academic year, corresponding to 2008)
In orthogonal frequency-division multiplexing (OFDM) systems, it is generally assumed that the channel response is static over an OFDM symbol period. However, this assumption does not hold in high-mobility environments. As a result, intercarrier interference (ICI) is induced and the system performance is degraded. A simple remedy for this problem is the application of the zero-forcing (ZF) and minimum mean square error (MMSE) equalizers. Unfortunately, the direct ZF method requires the inversion of an NxN ICI matrix, where N is the number of subcarriers. When N is large, the computational complexity can become prohibitively high. As for the direct MMSE method, in addition to an NxN matrix inverse, it requires an extra NxN matrix multiplication, making the required computational complexity higher than that of the direct ZF method. In this dissertation, we first propose a low-complexity ZF method to solve the problem in single-input-single-output (SISO) OFDM systems. The main idea is to exploit the special structure inherent in the ICI matrix and to apply Newton's iteration for matrix inversion. With our formulation, fast Fourier transforms (FFTs) can be used in the iterative process, reducing the complexity from O(N^3) to O(N log_2 N). Moreover, the required number of iterations is typically only one or two. We also analyze the convergence behavior of the proposed method and derive the theoretical output signal-to-interference-plus-noise ratio (SINR). For the MMSE method, we first reformulate the MMSE solution in a way that avoids the extra matrix multiplication. Similar to the ZF method, we then exploit the structure of the ICI matrix and apply Newton's iteration to reduce the complexity of the matrix inversion. For a multiple-input-multiple-output (MIMO) OFDM system, the required complexity of the ZF and MMSE methods becomes even more intractable. We then extend the proposed ZF and MMSE methods for SISO-OFDM systems to MIMO-OFDM systems. 
It turns out that the computational complexity can be reduced even more significantly. Simulation results show that the performance of the proposed methods is almost as good as that of the direct ZF and MMSE methods, while the required computational complexity is reduced dramatically. Finally, we explore the application of the proposed methods to mobility-induced ICI mitigation in OFDM multiple access (OFDMA) systems, and to carrier frequency offset (CFO)-induced ICI mitigation in OFDMA uplink systems. As in OFDM systems, the proposed methods effectively reduce the required computational complexity.
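The core numerical idea, Newton's iteration for the inverse of the ICI matrix, can be sketched as follows. This dense NumPy version keeps the O(N^3) matrix products for clarity; the dissertation's contribution is precisely to exploit the ICI matrix structure so that each step runs in O(N log_2 N) via FFTs. The initial guess used here is a standard textbook choice, not necessarily the one derived in the dissertation:

```python
import numpy as np

def newton_inverse(A, iterations=8, x0=None):
    """Newton(-Schulz) iteration for an approximate matrix inverse:
        X_{k+1} = X_k (2I - A X_k),
    which converges quadratically when ||I - A X_0|| < 1."""
    n = A.shape[0]
    if x0 is None:
        # Classical safe initial guess: X_0 = A^H / (||A||_1 ||A||_inf)
        x0 = A.conj().T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    X = x0
    I = np.eye(n, dtype=A.dtype)
    for _ in range(iterations):
        X = X @ (2 * I - A @ X)
    return X

# Near-diagonal matrix standing in for an ICI matrix (small Doppler).
rng = np.random.default_rng(0)
N = 8
A = np.eye(N) + 0.05 * rng.standard_normal((N, N))
X = newton_inverse(A)
print(np.linalg.norm(X @ A - np.eye(N)))  # residual shrinks quadratically
```

With a structure-aware initial guess close to the true inverse, as in the dissertation, only one or two iterations are needed; the generic initialization above needs a few more.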
APA, Harvard, Vancouver, ISO, and other styles
35

CHEN, KAI-HSIANG, and 陳凱翔. "Efficient Pilot Scheduling Using Metaheuristics Algorithms for Mitigating Pilot Contamination in Massive Multiuser MIMO Systems." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/873vj9.

Full text
Abstract:
Master's
National Kaohsiung Normal University
Department of Optoelectronics and Communication Engineering
105 (ROC academic year, corresponding to 2016)
Massive (or large-scale) multiuser multiple-input, multiple-output (MIMO) systems are widely regarded as a promising candidate technology for meeting 5G requirements because of their capacity to provide several significant advantages (such as spectral efficiency, link reliability, and coverage) compared with conventional small-scale MIMO systems. However, these systems suffer from a fundamental bottleneck, namely the pilot contamination effect, where the channel estimation in a given cell is contaminated by the reuse of the same pilot group in adjacent cells. Several approaches have been proposed to reduce the pilot contamination effect. Among these methods, resource allocation strategies for pilot decontamination are highly attractive because of their good performance. These resource allocation strategies are motivated by the heavy dependence of pilot contamination on pilot assignment. They aim to intelligently schedule the limited number of orthogonal pilots across different user terminals (UTs) to alleviate the effect of pilot contamination by capitalizing on UT locations and large-scale fading coefficients. Finding the optimal pilot scheduling pattern that minimizes the effect of pilot contamination in multiuser multicell massive MIMO systems can be formulated mathematically as a permutation-based optimization problem. However, determining the optimal solution requires an exhaustive search algorithm, which becomes computationally prohibitive even if the number of cells or UTs per cell is small. To reduce both the search complexity and the pilot contamination effect, we propose the application of three metaheuristic algorithms (i.e., ant colony optimization (ACO), modified ACO (MACO), and genetic algorithms (GA)) to tackle this problem. Simulation results demonstrate that the three proposed algorithms significantly outperform the classical random pilot assignment scheme. 
In particular, when the number of cells is 3 and the number of UTs per cell is 5, the average achievable rates per UT obtained by the proposed ACO-based pilot scheduling method are within 99.99% of the optimal result obtained by the exhaustive search algorithm (ESA). Moreover, the proposed ACO-based pilot scheduling method requires only approximately 6.04% of the search complexity of the ESA.
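The underlying combinatorial problem, assigning one pilot per UT so that cross-cell contamination is minimized, is a search over permutations. The toy objective below is an assumption for illustration only (the thesis instead maximizes achievable rates computed from large-scale fading coefficients), but it shows why exhaustive search over the n! permutations is feasible only for tiny n, motivating ACO/MACO/GA:

```python
import itertools
import random

def contamination(perm, gain):
    """Toy stand-in objective: UT t in the target cell shares its pilot
    with UT perm[t] in the interfering cell, and the penalty is taken to
    be that interferer's cross-channel gain (invented for illustration)."""
    return sum(gain[t][perm[t]] for t in range(len(perm)))

def exhaustive_best(gain):
    """Exhaustive search over all n! pilot permutations; tractable only
    for small n, which is exactly what motivates metaheuristics."""
    n = len(gain)
    return min(itertools.permutations(range(n)),
               key=lambda p: contamination(p, gain))

rng = random.Random(7)
n = 5                                   # 5 UTs per cell -> 120 permutations
gain = [[rng.random() for _ in range(n)] for _ in range(n)]
best = exhaustive_best(gain)
random_perm = tuple(rng.sample(range(n), n))
print(contamination(best, gain), contamination(random_perm, gain))
```

An ACO or GA replaces the `itertools.permutations` sweep with a guided sampling of the permutation space, which is where the reported 6.04% search-complexity figure comes from.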
APA, Harvard, Vancouver, ISO, and other styles
36

Chaudhry, Mohammad. "Network Coding in Distributed, Dynamic, and Wireless Environments: Algorithms and Applications." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-12-10529.

Full text
Abstract:
Network coding is a new paradigm that has been shown to improve throughput, fault tolerance, and other quality-of-service parameters in communication networks. The basic idea of network coding techniques is to exploit the "mixing" nature of information flows, i.e., many algebraic operations (e.g., addition, subtraction, etc.) can be performed over the data packets, whereas traditionally information flows are treated as physical commodities (e.g., cars) over which algebraic operations cannot be performed. In this dissertation we answer some of the important open questions related to network coding. Our work can be divided into four major parts. Firstly, we focus on network code design for dynamic networks, i.e., networks with frequently changing topologies and frequently changing sets of users. Examples of such dynamic networks are content distribution networks, peer-to-peer networks, and mobile wireless networks. A change in the network might result in infeasibility of the previously assigned feasible network code, i.e., not all users might be able to receive their demands. The central problem in the design of a feasible network code is to assign local encoding coefficients for each pair of links in a way that allows every user to decode the required packets. We analyze the problem of maintaining the feasibility of a network code, and provide bounds on the number of modifications required under dynamic settings. We also present distributed algorithms for network code design, and propose a new path-based assignment of encoding coefficients to construct a feasible network code. Secondly, we investigate network coding problems in wireless networks. It has been shown that network coding techniques can significantly increase the overall throughput of wireless networks by taking advantage of their broadcast nature. 
In wireless networks, each packet transmitted by a device is broadcast within a certain area and can be overheard by the neighboring devices. When a device needs to transmit packets, it employs Index Coding, which uses the knowledge of what the device's neighbors have overheard in order to reduce the number of transmissions. With Index Coding, each transmitted packet can be a linear combination of the original packets. The Index Coding problem has been proven to be NP-hard, and NP-hard to approximate. We propose an efficient exact solution, and several heuristic solutions, for the Index Coding problem. Noting that the Index Coding problem is NP-hard to approximate, we look at it from a novel perspective and define the Complementary Index Coding problem, where the objective is to maximize the number of transmissions that are saved by employing coding compared to a solution that does not involve coding. We prove that the Complementary Index Coding problem can be approximated in several cases of practical importance. We investigate the computational complexity of both the multiple unicast and multiple multicast scenarios of the Complementary Index Coding problem, and provide polynomial-time approximation algorithms. Thirdly, we consider the problem of accessing large data files stored at multiple locations across a content distribution, peer-to-peer, or massive storage network. Parts of the data can be stored in either original or encoded form at multiple network locations. Clients access the parts of the data through simultaneous downloads from several servers across the network. For each link used, the client has to pay some cost. A client might not be able to access a subset of servers simultaneously due to network restrictions (e.g., congestion). Furthermore, a subset of the servers might contain correlated data, and accessing such a subset might not increase the amount of information at the client. 
We present a novel, efficient polynomial-time solution for this problem that leverages matroid theory. Fourthly, we explore applications of network coding to congestion mitigation and overflow avoidance in the global routing stage of Very Large Scale Integration (VLSI) physical design. Smaller and smarter devices have resulted in a significant increase in the density of on-chip components, which has given rise to congestion and overflow as critical issues in on-chip networks. We present novel techniques and algorithms for reducing congestion and minimizing overflows.
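As a sketch of the kind of heuristic the Index Coding discussion refers to (this particular greedy rule is illustrative, not necessarily one of the dissertation's algorithms), packets can be greedily grouped into XOR transmissions that are compatible with the clients' side information:

```python
def greedy_index_coding(wants, has):
    """Greedy index-coding heuristic: repeatedly XOR together a maximal
    set of packets such that every client served by the transmission
    already holds, as side information, all the other packets in the XOR.
    wants: client -> packet it requests; has: client -> set of cached packets.
    Returns a list of transmissions, each a frozenset of packets XOR-ed."""
    remaining = dict(wants)
    transmissions = []
    while remaining:
        group = []      # packets combined into one coded transmission
        covered = []    # clients served by this transmission
        for client, pkt in list(remaining.items()):
            # client must hold every packet already chosen for this XOR,
            # and every already-covered client must hold pkt
            if all(g in has[client] for g in group) and \
               all(pkt in has[c] for c in covered):
                group.append(pkt)
                covered.append(client)
        for c in covered:
            del remaining[c]
        transmissions.append(frozenset(group))
    return transmissions

# Classic example: c1 wants p1 and has p2; c2 wants p2 and has p1.
wants = {"c1": "p1", "c2": "p2"}
has = {"c1": {"p2"}, "c2": {"p1"}}
print(greedy_index_coding(wants, has))  # one XOR transmission instead of two
```

Each client recovers its packet by XOR-ing the coded transmission with its cached packets; the savings over uncoded transmission are exactly what the Complementary Index Coding problem maximizes.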
APA, Harvard, Vancouver, ISO, and other styles
37

Kanta, Lufthansa Rahman. "A Risk-based Optimization Modeling Framework for Mitigating Fire Events for Water and Fire Response Infrastructures." 2009. http://hdl.handle.net/1969.1/ETD-TAMU-2009-12-7367.

Full text
Abstract:
The purpose of this dissertation is to address the risk and consequences of, and effective mitigation strategies for, urban fire events involving two critical infrastructures: water distribution and emergency services. Water systems have been identified as one of the United States' critical infrastructures and are vulnerable to various threats caused by natural disasters or malevolent actions. The primary goals of urban water distribution systems are reliable delivery of water during normal and emergency conditions (such as fires), ensuring this water is of acceptable quality, and accomplishing these tasks in a cost-effective manner. Due to the interdependency of water systems with other critical infrastructures (e.g., energy, public health, and emergency services, including fire response), water systems planning and management offers numerous challenges to water utilities and affiliated decision makers. The dissertation is divided into three major sections, each of which presents and demonstrates a methodological innovation applied to the above problem. First, a risk-based dynamic programming modeling approach is developed to identify the critical components of a water distribution system during fire events under three failure scenarios: (1) accidental failure due to soil-pipe interaction, (2) accidental failure due to seismic activity, and (3) intentional failure or malevolent attack. Second, a novel evolutionary-computation-based multi-objective optimization technique, the Non-dominated Sorting Evolution Strategy (NSES), is developed for systematic generation of optimal mitigation strategies for urban fire events in water distribution systems with three competing objectives: (1) minimizing fire damages, (2) minimizing water quality deficiencies, and (3) minimizing the cost of mitigation. 
Third, a stochastic modeling approach is developed to assess urban fire risk for the coupled water distribution and fire response systems, including probabilistic expressions for building ignition, WDS failure, and wind direction. Urban fire consequences are evaluated in terms of the number of people displaced and the cost of property damage. To reduce the assessed urban fire risk, the NSES multi-objective approach is utilized to generate Pareto-optimal solutions that express the tradeoff relationships among the risk reduction, mitigation cost, and water quality objectives. The new methodologies are demonstrated through successful application to a realistic case study in water systems planning and management.
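The multi-objective machinery underlying an approach like NSES rests on Pareto non-domination, which can be sketched in a few lines (the objective vectors below are invented purely for illustration):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective (minimization)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors,
    e.g. (fire damage, water-quality deficiency, mitigation cost)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Illustrative mitigation alternatives: (damage, quality deficiency, cost)
alternatives = [(3, 2, 5), (2, 4, 4), (4, 4, 6), (1, 5, 9)]
print(pareto_front(alternatives))  # (4, 4, 6) is dominated by (3, 2, 5)
```

The surviving points form the tradeoff surface presented to decision makers; an evolution strategy like NSES ranks its population by repeated applications of exactly this non-domination test.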
APA, Harvard, Vancouver, ISO, and other styles