Scientific literature on the topic "Non-centralised algorithms"

Consult the thematic lists of journal articles, books, theses, conference reports and other academic sources on the topic "Non-centralised algorithms".

Journal articles on the topic "Non-centralised algorithms"

1

Perez-Diaz, Alvaro, Enrico Harm Gerding, and Frank McGroarty. "Catching Cheats: Detecting Strategic Manipulation in Distributed Optimisation of Electric Vehicle Aggregators". Journal of Artificial Intelligence Research 67 (March 5, 2020): 437–70. http://dx.doi.org/10.1613/jair.1.11573.

Full text
Abstract:
Given the rapid rise of electric vehicles (EVs) worldwide, and the ambitious targets set for the near future, the management of large EV fleets must be seen as a priority. Specifically, we study a scenario where EV charging is managed through self-interested EV aggregators who compete in the day-ahead market in order to purchase the electricity needed to meet their clients' requirements. With the aim of reducing electricity costs and lowering the impact on electricity markets, a centralised bidding coordination framework has been proposed in the literature employing a coordinator. In order to improve privacy and limit the need for the coordinator, we propose a reformulation of the coordination framework as a decentralised algorithm, employing the Alternating Direction Method of Multipliers (ADMM). However, given the self-interested nature of the aggregators, they can deviate from the algorithm in order to reduce their energy costs. Hence, we study the strategic manipulation of the ADMM algorithm and, in doing so, describe and analyse different possible attack vectors and propose a mathematical framework to quantify and detect manipulation. Importantly, this detection framework is not limited to the considered EV scenario and can be applied to general ADMM algorithms. Finally, we test the proposed decentralised coordination and manipulation detection algorithms in realistic scenarios using real market and driver data from Spain. Our empirical results show that the decentralised algorithm's convergence to the optimal solution can be effectively disrupted by manipulative attacks achieving convergence to a different non-optimal solution which benefits the attacker. With respect to the detection algorithm, results indicate that it achieves very high accuracies and significantly outperforms a naive benchmark.
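The decentralised coordination described above follows the standard consensus-ADMM pattern. As an illustration only, the sketch below replaces the aggregators' day-ahead market costs with assumed toy quadratic costs f_i(x) = (x - a_i)^2, whose consensus optimum is the mean of the a_i; it is not the paper's formulation.

```python
# Illustrative consensus ADMM (scaled form). Assumption: toy local costs
# f_i(x) = (x - a_i)^2 stand in for the aggregators' market cost functions.

def consensus_admm(a, rho=1.0, iters=200):
    """Agents i minimise f_i(x) = (x - a_i)^2 subject to consensus x_i = z."""
    n = len(a)
    x = [0.0] * n   # local primal variables
    u = [0.0] * n   # scaled dual variables
    z = 0.0         # global consensus variable
    for _ in range(iters):
        # x-update: closed-form argmin of f_i(x) + (rho/2)(x - z + u_i)^2
        x = [(2 * a[i] + rho * (z - u[i])) / (2 + rho) for i in range(n)]
        # z-update: averaging step performed by the lightweight coordinator
        z = sum(x[i] + u[i] for i in range(n)) / n
        # u-update: each agent accumulates its own consensus violation
        u = [u[i] + x[i] - z for i in range(n)]
    return z

print(round(consensus_admm([1.0, 2.0, 6.0]), 6))  # 3.0, the mean of a
```

Strategic manipulation of the kind the paper studies would correspond to an agent reporting a distorted x_i or u_i inside this loop.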
Citation styles: APA, Harvard, Vancouver, ISO, etc.
2

Daccò, Edoardo, Davide Falabretti, Valentin Ilea, Marco Merlo, Riccardo Nebuloni, and Matteo Spiller. "Decentralised Voltage Regulation through Optimal Reactive Power Flow in Distribution Networks with Dispersed Generation". Electricity 5, no. 1 (March 12, 2024): 134–53. http://dx.doi.org/10.3390/electricity5010008.

Full text
Abstract:
The global capacity for renewable electricity generation has surged, with distributed photovoltaic generation being the primary driver. The increasing penetration of non-programmable renewable Distributed Energy Resources (DERs) presents challenges for properly managing distribution networks, requiring advanced voltage regulation techniques. This paper proposes an innovative decentralised voltage strategy that considers DERs, particularly inverter-based ones, as autonomous regulators in compliance with the state-of-the-art European technical standards and grid codes. The proposed method uses an optimal reactive power flow that minimises voltage deviations along all the medium voltage nodes; to check the algorithm’s performance, it has been applied to a small-scale test network and on a real Italian medium-voltage distribution network, and compared with a fully centralised ORPF. The results show that the proposed decentralised autonomous strategy effectively improves voltage profiles in both case studies, reducing voltage deviation by a few percentage points; these results are further confirmed through an analysis conducted over several days to observe how seasons affect the results.
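As a rough, self-contained illustration of minimising voltage deviations via reactive power setpoints, the sketch below assumes a linearised sensitivity model v = v0 + S q and projected gradient descent; the paper itself solves a full optimal reactive power flow on real distribution networks, which this does not reproduce.

```python
# Illustrative stand-in for deviation-minimising voltage regulation.
# Assumptions: a linearised sensitivity model v = v0 + S q and a projected
# gradient method; S, v0 and the capability limit qmax are invented here.

def regulate(v0, S, qmax, lr=100.0, iters=500):
    """Minimise sum_i (v_i - 1)^2 over DER setpoints q in [-qmax, qmax]."""
    n = len(v0)
    q = [0.0] * n
    for _ in range(iters):
        v = [v0[i] + sum(S[i][j] * q[j] for j in range(n)) for i in range(n)]
        # gradient of the squared voltage deviation w.r.t. each setpoint q_j
        grad = [2 * sum(S[i][j] * (v[i] - 1.0) for i in range(n)) for j in range(n)]
        # projected step keeps q inside the inverter capability box
        q = [max(-qmax, min(qmax, q[j] - lr * grad[j])) for j in range(n)]
    v = [v0[i] + sum(S[i][j] * q[j] for j in range(n)) for i in range(n)]
    return q, v

# one overvoltage node and one undervoltage node; both are pulled to 1.0 p.u.
q, v = regulate(v0=[1.05, 0.96], S=[[0.04, 0.01], [0.01, 0.05]], qmax=2.0)
```

The first node absorbs reactive power (q negative) while the second injects it, mirroring the autonomous-regulator behaviour the abstract describes.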
3

Murphy, DC, and DB Saleh. "Artificial Intelligence in plastic surgery: What is it? Where are we now? What is on the horizon?" Annals of The Royal College of Surgeons of England 102, no. 8 (October 2020): 577–80. http://dx.doi.org/10.1308/rcsann.2020.0158.

Full text
Abstract:
Introduction An increasing quantity of data is required to guide precision medicine and advance future healthcare practices, but current analytical methods often become overwhelmed. Artificial intelligence (AI) provides a promising solution. Plastic surgery is an innovative surgical specialty expected to implement AI into current and future practices. It is important for all plastic surgeons to understand how AI may affect current and future practice, and to recognise its potential limitations. Methods Peer-reviewed published literature and online content were comprehensively reviewed. We report current applications of AI in plastic surgery and possible future applications based on published literature and continuing scientific studies, and detail its potential limitations and ethical considerations. Findings Current machine learning models using convolutional neural networks can evaluate breast mammography and differentiate benign and malignant tumours as accurately as specialist doctors, and motion sensor surgical instruments can collate real-time data to advise intraoperative technical adjustments. Centralised big data portals are expected to collate large datasets to accelerate understanding of disease pathogeneses and best practices. Information obtained using computer vision could guide intraoperative surgical decisions in unprecedented detail, and semi-autonomous surgical systems guided by AI algorithms may enable improved surgical outcomes in low- and middle-income countries. Surgeons must collaborate with computer scientists to ensure that AI algorithms inform clinically relevant health objectives and are interpretable. Ethical concerns, such as systematic biases causing non-representative conclusions for under-represented patient groups, patient confidentiality and the limitations of AI based on the quality of data input, suggest that AI will accompany plastic surgeons rather than replace them.
4

Ye, Qiming, Yuxiang Feng, Eduardo Candela, Jose Escribano Macias, Marc Stettler, and Panagiotis Angeloudis. "Spatial-Temporal Flows-Adaptive Street Layout Control Using Reinforcement Learning". Sustainability 14, no. 1 (December 23, 2021): 107. http://dx.doi.org/10.3390/su14010107.

Full text
Abstract:
The complete streets scheme makes seminal contributions to securing the basic public right-of-way (ROW), improving road safety, and maintaining high traffic efficiency for all modes of commute. However, this popular street design paradigm also faces endogenous pressures, such as the appeal for a more balanced ROW for non-vehicular users. In addition, the deployment of Autonomous Vehicle (AV) mobility is likely to challenge the conventional use of street space as well as this scheme. Previous studies have devised automated control techniques for specific road management issues, such as traffic light control and lane management, whereas models and algorithms that dynamically calibrate the ROW of road space according to travel demands and place-making requirements still represent a research gap. This study proposes a novel optimal control method that decides the ROW of road space assigned to driveways and sidewalks in real time. To solve this optimal control task, a reinforcement learning method is introduced that employs a microscopic traffic simulator, namely SUMO, as its environment. The model was trained for 150 episodes using a four-legged intersection and joint AV-pedestrian travel demands of a day. Results evidenced the effectiveness of the model in both symmetric and asymmetric road settings. After being trained for 150 episodes, the proposed model significantly increased its comprehensive reward, combining pedestrian and vehicular traffic efficiency with the sidewalk ratio, by 10.39%. Decisions on the balanced ROW are optimised, as 90.16% of the edges decrease the driveway supply and raise sidewalk shares by approximately 9%. Moreover, during 18.22% of the tested time slots, a lane-width equivalent space is shifted from driveways to sidewalks, minimising the travel costs for both an AV fleet and pedestrians. Our study primarily contributes to the modelling architecture and algorithms concerning centralised, real-time ROW management. Prospective applications of this method are likely to facilitate AV mobility-oriented road management and pedestrian-friendly street space design in the near future.
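To make the reinforcement learning setup concrete, here is a deliberately tiny tabular Q-learning sketch in which a hypothetical two-state, two-action right-of-way MDP stands in for the paper's SUMO environment; the states, actions and rewards are invented for illustration.

```python
import random

# Tiny tabular Q-learning sketch. Assumption: a hypothetical 2-state,
# 2-action right-of-way MDP replaces the paper's SUMO traffic environment.
# states:  0 = pedestrian-heavy demand, 1 = vehicle-heavy demand
# actions: 0 = widen sidewalks,         1 = widen driveways

REWARD = {(0, 0): 1.0, (0, 1): -1.0, (1, 0): -1.0, (1, 1): 1.0}

def train(episodes=2000, alpha=0.2, eps=0.1, seed=0):
    random.seed(seed)
    Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    for _ in range(episodes):
        s = random.choice((0, 1))      # observed demand pattern this episode
        if random.random() < eps:      # epsilon-greedy exploration
            a = random.choice((0, 1))
        else:
            a = max((0, 1), key=lambda act: Q[(s, act)])
        # single-step episode: move Q(s, a) toward the observed reward
        Q[(s, a)] += alpha * (REWARD[(s, a)] - Q[(s, a)])
    return Q

Q = train()
policy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in (0, 1)}
print(policy)  # matches demand: {0: 0, 1: 1}
```

The learned policy allocates ROW to the dominant demand, the same qualitative behaviour the paper obtains at much larger scale.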
5

Pauletto, Christian. "Gestion publique, agilité et innovation : l'expérience suisse du dispositif de crédits COVID-19" [Public management, agility and innovation: the Swiss experience of the COVID-19 credit scheme]. Revue Internationale des Sciences Administratives 90, no. 1 (April 2, 2024): 109–25. http://dx.doi.org/10.3917/risa.901.0109.

Full text
Abstract:
In March 2020, the Swiss administration designed and implemented, in only ten days, a guaranteed-loan programme for businesses. The implementation phase was also short: less than five months. This article examines how this was possible given the complexity of the institutional framework and the innovative nature of the scheme, notably in the field of information technology, with major advances in Swiss e-government practice in particular: the scheme used algorithms to verify companies' applications, a unique business identification number (UID) was deployed at large scale, Swiss banks were involved in the design and implementation of the project, and some of their clients' operations were centralised on a government online platform. We present the essential characteristics of the process through an analysis of the course of operations over this ten-day period. We also describe the circumstances and context that led to radically new forms of public governance. Finally, we analyse the outcome to highlight the innovative characteristics of the deliverable. The case studied was short-lived and unplanned, so no data or observations could be collected before or during its course. This study is therefore based mainly on a posteriori investigations. The project participants developed an informal organisational system without clearly defined mandates, structures or roles. The fact that the deliverable was well defined was a driving force in the process.
Several characteristics of the project, such as effective networks, a real-time information flow, flexible roles, horizontal management and rapid iterative sub-processes, resemble those of "agile organisations". Tasks were executed in parallel rather than sequentially. Points for practitioners: It is striking that few academic studies have so far been published on the lessons learned from the unique experience of the emergency support packages implemented during the pandemic, including at the intra-organisational level. Research could be conducted on the reproducibility of these measures, both with a view to future crises and to adjusting standard public management practices. Our contribution aims to inform this discussion and to inspire practitioners in public administrations and government entities. It focuses on the relationship between governmental crisis management and the digital transformation of administrative procedures using IT tools.
6

Waldman, Deane. "Replace government healthcare with patient-controlled health care". Health Economics and Management Review 5, no. 1 (March 31, 2024): 80–89. http://dx.doi.org/10.61093/hem.2024.1-06.

Full text
Abstract:
The purpose of the article is to analyse the shortcomings of state-run healthcare systems and to substantiate the need for, and feasibility of, a transition to a patient-controlled model. It is shown that patient-controlled health care, free from centralised domination, can provide timely, high-quality, compassionate medical care at an affordable price for both individuals and the nation. It significantly expands the patient's rights and opportunities to choose a doctor according to their own preferences and financial capabilities. The patient pays for the medical service directly to the doctor, who no longer faces restrictions on the choice of treatment protocols or the prescription of medicines. The analysis in the article is based mainly on the example of the United States, where federal control over residents is both direct (194 million Americans are covered by Medicaid, Medicare, Tricare or EMTALA) and indirect (138 million Americans have private insurance). In addition, aspects of the analysis also apply to single-payer countries (Canada, the United Kingdom, France and Spain). The article examines the shortcomings of the current US healthcare model in terms of its compliance with the Constitution. It is noted that, according to the Tenth Amendment to the US Constitution, healthcare powers are not among the 18 powers delegated to the federal government. Non-compliance with the law is also observed: government control or administration of state Medicaid programmes is contrary to US law, as is the infringement of medical autonomy, the patient's ability to make personal medical decisions without undue influence from the state. Another disadvantage of a state-run healthcare system is that the state-controlled payment structure violates the fiduciary relationship between doctor and patient, as doctors' authority to make medical decisions is limited.
It also calls into question the observance in the United States of the citizen's "right" to receive medical care in its interpretation as a personal service of a professional caregiver, whereby a patient can demand the desired care and the service provider cannot refuse. The article emphasises that state-run healthcare systems create a conflict between the efficient use of financial resources and the effective provision of medical care. This issue is considered through the prism of the interests of the main stakeholders: shareholders of companies operating in this area, politicians, patients, healthcare providers and administrators. As evidence of the inefficiency of the existing US healthcare system in comparison with other countries, comparative data on life expectancy and the incidence rates of a number of diseases are provided. The author also discusses the problem of limited access to medical care (rationing) for patients with public health insurance due to a shortage of healthcare professionals accepting new Medicaid patients. This is caused by low reimbursement rates, overly bureaucratic verification procedures for obtaining payment, overregulation of doctor-patient relations and of procedures for reviewing medical errors, the need to comply with population-based clinical algorithms, etc. It leads to a decrease in the quality of medical care, an increase in patient deaths while waiting for care, a risk of disease complications due to delays in diagnosis and timely treatment, the neglect of unique, individual patients, and an increased likelihood of medical errors. All of the above disadvantages of state-run healthcare are obviated when the patient is in charge, i.e. under patient-controlled health care.
7

Zhang, Zhixun, Keke Zhang, Leizheng Shu, Zhencai Zhu, and Meijiang Zhou. "Distributed angle‐only orbit determination algorithm for non‐cooperative spacecraft based on factor graph". IET Radar, Sonar & Navigation, May 22, 2024. http://dx.doi.org/10.1049/rsn2.12580.

Full text
Abstract:
Bayesian filtering provides an effective approach for the orbit determination of a non‐cooperative target using angle measurements from multiple CubeSats. However, existing methods face challenges such as low reliability and limited estimation accuracy. Two distributed filtering algorithms based on factor graphs, for the sub‐parent and distributed cluster spacecraft architectures respectively, are proposed. Two appropriate factor graphs representing the different cluster spacecraft structures are designed, and distributed Bayesian filtering is implemented within these models. The Gaussian messages transmitted between nodes and the probability distributions of variable nodes are calculated using the derived non‐linear Gaussian belief propagation algorithm. In the sub‐parent architecture, Gaussian messages propagate from the deputy spacecraft to the chief spacecraft, and the estimation accuracy is shown to converge to that of the centralised extended Kalman filter (EKF). Simulation results indicate that the algorithm enhances system robustness to observation node failures without compromising accuracy. In the distributed architecture, neighbouring spacecraft iteratively exchange Gaussian messages. The accuracy of the algorithm rapidly approaches the centralised EKF, benefiting from the efficient and unbiased transmission of observational information. Compared to existing distributed consensus filtering algorithms, the proposed algorithm improves estimation accuracy and reduces the number of iterations needed to achieve consensus.
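The convergence to the centralised estimate reported above can be illustrated in the linear-Gaussian case, where fusing information-form messages from deputies at the chief reproduces the centralised least-squares estimate exactly. The scalar sketch below is an assumption-laden stand-in for the paper's non-linear angle-only model.

```python
# Scalar linear-Gaussian stand-in for deputy -> chief message passing.
# Assumption: measurements z_i = x + v_i with v_i ~ N(0, r_i), instead of
# the paper's non-linear angle-only measurement model.

def deputy_message(z, r):
    """A deputy sends its measurement in information form (precision, weighted z)."""
    return 1.0 / r, z / r

def chief_fuse(prior_mean, prior_var, messages):
    """The chief adds prior and deputy information; equals the centralised estimate."""
    lam = 1.0 / prior_var            # accumulated precision
    eta = prior_mean / prior_var     # accumulated information state (scalar here)
    for d_lam, d_eta in messages:
        lam += d_lam
        eta += d_eta
    return eta / lam, 1.0 / lam      # posterior mean and variance

msgs = [deputy_message(2.2, 0.5), deputy_message(1.8, 0.5), deputy_message(2.1, 1.0)]
mean, var = chief_fuse(prior_mean=0.0, prior_var=10.0, messages=msgs)
```

Because information contributions simply add, the order and grouping of messages do not change the fused result, which is the mechanism behind the convergence-to-EKF claim in the linear case.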
8

Nagaraja G, Chandan K J, Amrutha S Dukandar, Akash N, and Charitha Reddy. "FINE: A Framework for Distributed Learning on Incomplete Observations for Heterogeneous Crowdsensing Networks". International Journal of Advanced Research in Science, Communication and Technology, May 5, 2023, 23–29. http://dx.doi.org/10.48175/ijarsct-9775.

Full text
Abstract:
Numerous crowdsensing applications have recently been developed in mobile social networks and vehicle networks. Implementing an accurate distributed learning process to estimate the parameters of an unknown model in crowdsensing is a significant issue, because centralised learning methods entail unreliable data gathering, expensive central servers, and privacy concerns. We therefore propose FINE, a distributed learning framework for imperfect data and non-smooth estimation, along with its design, analysis, and assessment. Our design, focused on creating a workable framework for learning parameters in crowdsensing networks accurately and efficiently, generalises earlier learning techniques by supporting heterogeneous dimensions of data records observed by various nodes as well as minimisation of non-smooth error functions. In particular, FINE makes use of a distributed dual averaging technique that efficiently minimises non-smooth error functions, and a novel distributed record completion algorithm that enables each node to reach the global consensus through effective communication with its neighbours. Our analysis shows that all of these algorithms converge, and the convergence rates are also obtained to support their efficacy. Through experiments on synthetic and real networks, we assess how well our framework performs.
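As an illustration of distributed minimisation of a non-smooth error function, the sketch below runs a simple dual-averaging-style subgradient scheme over a complete mixing graph on the toy objective sum_i |x - a_i|, whose minimiser is the median; it is a stand-in, not the FINE algorithm itself.

```python
import math

# Dual-averaging-flavoured distributed subgradient sketch. Assumptions:
# local costs f_i(x) = |x - a_i| (non-smooth) and complete-graph mixing
# with weights P_ij = 1/n; this is not the FINE algorithm itself.

def sign(v):
    return (v > 0) - (v < 0)

def distributed_dual_averaging(a, iters=3000):
    """Nodes jointly minimise sum_i |x - a_i|; the optimum is the median of a."""
    n = len(a)
    z = [0.0] * n    # accumulated (mixed) subgradients, one per node
    x = [0.0] * n    # primal iterates
    avg = [0.0] * n  # running averages (the quantity that converges)
    for t in range(1, iters + 1):
        g = [sign(x[i] - a[i]) for i in range(n)]   # local subgradients
        zbar = sum(z) / n                           # one mixing (gossip) round
        z = [zbar + g[i] for i in range(n)]
        step = 1.0 / math.sqrt(t)                   # diminishing step size
        x = [-step * z[i] for i in range(n)]
        avg = [avg[i] + (x[i] - avg[i]) / t for i in range(n)]
    return avg

est = distributed_dual_averaging([0.0, 1.0, 5.0])  # median is 1.0
```

Every node's averaged iterate approaches the median, despite no node ever seeing the others' raw data.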
9

Jain, Sambhav, and Reshma Rastogi. "Multi-label Minimax Probability Machine with Multi-manifold Regularisation". Research Reports on Computer Science, December 30, 2021, 44–63. http://dx.doi.org/10.37256/rrcs.1120211193.

Full text
Abstract:
Semi-supervised learning, i.e. learning from a large amount of unlabelled data while exploiting a small percentage of labelled data, has attracted considerable attention in recent years. The semi-supervised problem is handled mainly using graph-based Laplacian and Hessian regularisation methods. However, neither the Laplacian method, which leads to poor generalisation, nor the Hessian energy can properly forecast data points beyond the range of the domain. Thus, in this paper, a Laplacian-Hessian semi-supervised method is proposed, which can both predict the data points and enhance the stability of the Hessian regulariser. We propose a Laplacian-Hessian Multi-label Minimax Probability Machine, a multi-manifold regularisation framework. The proposed classifier requires only mean and covariance information; therefore, assumptions about the class-conditional distributions are not required; rather, an upper bound on the misclassification probability of future data is obtained explicitly. Furthermore, the proposed model can effectively utilise geometric information via a combination of Hessian-Laplacian manifold regularisation. We also show that the proposed method can be kernelised on the basis of a theorem similar to the representer theorem for handling non-linear cases. Extensive experimental comparisons of our proposed method with related multi-label algorithms on well-known multi-label datasets demonstrate the validity and comparable performance of our proposed approach.
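The manifold regularisation ingredient can be made concrete: the graph Laplacian quadratic form f^T L f penalises labellings that vary across edges. The tiny sketch below (an illustrative three-node chain, not the paper's Laplacian-Hessian machine) computes it directly.

```python
# Graph-Laplacian smoothness sketch. Assumption: a three-node chain graph
# invented for illustration; the paper combines Laplacian and Hessian
# regularisers inside a minimax probability machine.

def laplacian(adj):
    """L = D - W for a weighted adjacency matrix given as nested lists."""
    n = len(adj)
    return [[(sum(adj[i]) if i == j else 0.0) - adj[i][j] for j in range(n)]
            for i in range(n)]

def smoothness(f, L):
    """f^T L f = sum over edges of w_ij (f_i - f_j)^2; large when labels jump."""
    n = len(f)
    return sum(f[i] * L[i][j] * f[j] for i in range(n) for j in range(n))

adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # chain graph: 0 - 1 - 2
L = laplacian(adj)
print(smoothness([1.0, 1.0, 1.0], L))     # 0.0: constant labellings are free
print(smoothness([1.0, -1.0, 1.0], L))    # 8.0: sign flips are penalised
```

Adding such a term to a classifier's objective pushes predictions to vary smoothly along the data manifold, which is the role it plays in the proposed machine.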

Theses on the topic "Non-centralised algorithms"

1

Alvarez Valera, Hernan Humberto. "An energy saving perspective for distributed environments: Deployment, scheduling and simulation with multidimensional entities for Software and Hardware". Electronic thesis or dissertation, Pau, 2022. https://theses.hal.science/tel-04116013.

Full text
Abstract:
Strong economic growth and extreme weather conditions increased global electricity demand by more than 6% in 2021, after the COVID pandemic, and the rapid recovery of this demand sharply increased electricity consumption. Even though renewable sources show significant growth, electricity production from both coal and gas sources has reached a historical high. The energy consumption of the digital technology sector, for its part, depends on its growth and its degree of energy efficiency: although devices at all deployment levels are energy efficient today, their massive use means that global energy consumption continues to grow. All these data show the need to use the energy of these devices wisely. This thesis therefore addresses the dynamic (re)deployment of software components (containers or virtual machines) and their data to save energy. To this end, we designed and developed intelligent distributed scheduling algorithms that decrease global power consumption while preserving the applications' quality of service. These algorithms execute migration and duplication procedures that account for the natural relation between hardware components' load and features and their power consumption. To do so, they implement a novel form of decentralised negotiation based on a distributed middleware we created (Kaligreen) and on multidimensional data structures. To operate and assess the algorithms above, appropriate hardware and software tools are essential. Our choice was to develop our own simulation tool, called PISCO. PISCO is a versatile and straightforward simulator that lets users concentrate solely on their scheduling strategies. It abstracts network topologies as data structures whose elements are devices indexed by one or more criteria, and it mimics the execution of microservices by allocating resources according to various scheduling heuristics. We used PISCO to implement, run and test our scheduling algorithms.
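To give a flavour of the energy-aware scheduling the thesis studies, here is a toy consolidation heuristic under an assumed linear power model P = idle + k * load per active host; the thesis's decentralised negotiations via Kaligreen and its PISCO simulations are far richer than this sketch.

```python
# Toy energy-aware consolidation heuristic. Assumptions: unit-capacity
# hosts, a linear power model P = idle + k * load per active host, and a
# greedy pack-onto-active-hosts rule -- all invented for illustration.

def place(tasks, hosts, idle=10.0, k=0.5):
    """Greedily place task loads, preferring already-active hosts."""
    loads = [0.0] * hosts
    for t in sorted(tasks, reverse=True):   # biggest loads first
        active = [i for i, l in enumerate(loads) if 0 < l and l + t <= 1.0]
        target = active[0] if active else loads.index(0.0)  # else wake a host
        loads[target] += t
    power = sum(idle + k * l for l in loads if l > 0)
    return loads, power

loads, power = place([0.5, 0.3, 0.2, 0.4], hosts=4)
# two hosts stay asleep; power = 2 * idle + 0.5 * (0.9 + 0.5) = 20.7
```

Consolidating load onto fewer active hosts avoids paying the idle cost several times, the basic trade-off behind migration and duplication decisions.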

Book chapters on the topic "Non-centralised algorithms"

1

Mou, Yongli, Sascha Welten, Mehrshad Jaberansary, Yeliz Ucer Yediel, Toralf Kirsten, Stefan Decker, and Oya Beyan. "Distributed Skin Lesion Analysis Across Decentralised Data Sources". In Studies in Health Technology and Informatics. IOS Press, 2021. http://dx.doi.org/10.3233/shti210179.

Full text
Abstract:
Skin cancer has become the most common cancer type. Research has applied image processing and analysis tools to support and improve the diagnosis process. Conventional procedures usually centralise data from various data sources in a single location and execute the analysis tasks on central servers. However, centralisation of medical data often does not comply with local data protection regulations, owing to its sensitive nature and the loss of sovereignty if data providers allow unlimited access to the data. The Personal Health Train (PHT) is a Distributed Analytics (DA) infrastructure that brings the algorithms to the data instead of vice versa. By following this paradigm shift, it proposes a solution for persistent privacy-related challenges. In this work, we present a feasibility study which demonstrates the capability of the PHT to perform statistical analyses and Machine Learning on skin lesion data distributed among three Germany-wide data providers.
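The paradigm of moving the analysis to the data can be sketched minimally: only aggregate state travels between stations, and raw records never leave their site. The "task" below is an assumed global mean rather than the actual machine learning workloads run on the PHT.

```python
# Minimal Personal-Health-Train-style loop. Assumption: the travelling
# analysis task is a simple global mean; real PHT tasks train ML models.

def run_train(stations):
    """Aggregate state visits each station; raw records never leave a site."""
    state = {"sum": 0.0, "count": 0}
    for records in stations:            # the 'train' stops at each data provider
        state["sum"] += sum(records)    # only aggregates board the train
        state["count"] += len(records)
    return state["sum"] / state["count"]

# three hypothetical data providers; their data stays local
print(run_train([[1.0, 2.0], [3.0], [4.0, 5.0, 6.0]]))  # 3.5
```

The result equals what a centralised computation over the pooled data would give, without ever pooling the data.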

Conference proceedings on the topic "Non-centralised algorithms"

1

Sundararajan, V., Andrew Redfern, William Watts, and Paul Wright. "Distributed Monitoring of Steady-State System Performance Using Wireless Sensor Networks". In ASME 2004 International Mechanical Engineering Congress and Exposition. ASMEDC, 2004. http://dx.doi.org/10.1115/imece2004-59884.

Full text
Abstract:
Wireless sensor networks provide a cost-effective alternative to monitoring system performance in real-time. In addition to the ability to communicate data without wires, the sensor nodes possess computing and memory capabilities that can be harnessed to execute signal processing and state-tracking algorithms. This paper describes the architecture and application layer protocols for the distributed monitoring of the steady-state performance of systems that have a finite number of states. Protocols are defined for two phases — the learning phase and the monitoring phase. In the learning phase, an expert user trains the wireless network to define the acceptable states of the system. The nodes are programmed with a set of algorithms for processing their readings. The nodes use these algorithms to compute invariant metrics on the sensor readings, which are then used to define the internal state of the node. In the monitoring phase, the nodes track their individual states by computing their state based on the sensor readings and then comparing them with the pre-determined values. If the system properties change, the nodes communicate with each other to determine the new state. If the new state is not one of the acceptable states determined in the learning phase, an alert is raised. This approach de-centralizes the monitoring and detection process by distributing both the state information and the computing throughout the network. The paper presents algorithms for the various processes of the system and also the results of testing the sensor network architecture on real-time models. The sensor network can be used in automotive engine test rigs to carry out long term performance analysis.
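The two-phase learning/monitoring protocol described above can be miniaturised as follows; the invariant metric values, state labels and tolerance are invented for illustration.

```python
# Sketch of the two-phase protocol. Assumption: each acceptable state is
# summarised by a single invariant metric value plus a tolerance, both
# invented here; real nodes compute metrics from sensor readings.

def classify(metric, learned, tol=0.5):
    """Return the closest learned state, or None (meaning: raise an alert)."""
    best = min(learned, key=lambda s: abs(learned[s] - metric))
    return best if abs(learned[best] - metric) <= tol else None

# learning phase: an expert labels acceptable steady states
learned = {"idle": 1.0, "cruise": 4.0, "peak": 9.0}

# monitoring phase: nodes classify readings and alert on unknown states
print(classify(4.2, learned))   # cruise
print(classify(6.5, learned))   # None -> new, unacceptable state
```

A reading near a learned state is tracked silently; one outside every tolerance band triggers the inter-node negotiation and alert described in the abstract.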