Selected scientific literature on the topic "Continuous and distributed machine learning"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Continuous and distributed machine learning".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, if it is present in the metadata.

Journal articles on the topic "Continuous and distributed machine learning"

1

Stan, Ioan-Mihail, Siarhei Padolski, and Christopher Jon Lee. "Exploring the self-service model to visualize the results of the ATLAS Machine Learning analysis jobs in BigPanDA with Openshift OKD3". EPJ Web of Conferences 251 (2021): 02009. http://dx.doi.org/10.1051/epjconf/202125102009.

Full text of the source
Abstract:
A large scientific computing infrastructure must offer versatility to host any kind of experiment that can lead to innovative ideas. The ATLAS experiment offers wide access possibilities to perform intelligent algorithms and analyze the massive amount of data produced in the Large Hadron Collider at CERN. The BigPanDA monitoring is a component of the PanDA (Production ANd Distributed Analysis) system, and its main role is to monitor the entire lifecycle of a job/task running in the ATLAS Distributed Computing infrastructure. Because many scientific experiments now rely upon Machine Learning algorithms, the BigPanDA community desires to expand the platform's capabilities and fill the gap between Machine Learning processing and data visualization. In this regard, BigPanDA partially adopts the cloud-native paradigm and entrusts the data presentation to MLFlow services running on Openshift OKD. Thus, BigPanDA interacts with the OKD API and instructs the container orchestrator on how to locate and expose the results of the Machine Learning analysis. The proposed architecture also introduces various DevOps-specific patterns, including continuous integration for MLFlow middleware configuration and continuous deployment pipelines that implement rolling upgrades. The Machine Learning data visualization services operate on demand and run for a limited time, thus optimizing resource consumption.
ABNT, Harvard, Vancouver, APA, etc. styles
2

Yin, Zhongdong, Jingjing Tu, and Yonghai Xu. "Development of a Kernel Extreme Learning Machine Model for Capacity Selection of Distributed Generation Considering the Characteristics of Electric Vehicles". Applied Sciences 9, no. 12 (June 13, 2019): 2401. http://dx.doi.org/10.3390/app9122401.

Full text of the source
Abstract:
The large-scale access of distributed generation (DG) and the continuous increase in the demand for electric vehicle (EV) charging will result in fundamental changes in the planning and operating characteristics of the distribution network. Therefore, studying the capacity selection of distributed generation, such as wind and photovoltaic (PV), while considering the charging characteristics of electric vehicles, is of great significance to the stability and economic operation of the distribution network. Using the network node voltage, the distributed generation output, and the electric vehicles' charging power as training data, we propose a capacity selection model based on the kernel extreme learning machine (KELM). The model accuracy is evaluated using the root mean square error (RMSE). The stability of the network is evaluated by the voltage stability evaluation index (Ivse). The IEEE 33-node distribution system is used as a simulation example, and the kernel extreme learning machine gives results that satisfy the minimum network loss and total investment cost. Finally, the results are compared with the support vector machine (SVM), particle swarm optimization (PSO), and genetic algorithm (GA) approaches to verify the feasibility and effectiveness of the proposed model and method.
ABNT, Harvard, Vancouver, APA, etc. styles
3

Brophy, Eoin, Maarten De Vos, Geraldine Boylan, and Tomás Ward. "Estimation of Continuous Blood Pressure from PPG via a Federated Learning Approach". Sensors 21, no. 18 (September 21, 2021): 6311. http://dx.doi.org/10.3390/s21186311.

Full text of the source
Abstract:
Ischemic heart disease is the leading cause of mortality globally each year. This puts a massive strain not only on the lives of those affected, but also on public healthcare systems. To understand the dynamics of the healthy and unhealthy heart, doctors commonly use an electrocardiogram (ECG) and blood pressure (BP) readings. These methods are often quite invasive, particularly when continuous arterial blood pressure (ABP) readings are taken, and they are also very costly. Using machine learning methods, we develop a framework capable of inferring ABP from a single optical photoplethysmogram (PPG) sensor alone. We train our framework across distributed models and data sources to mimic a large-scale distributed collaborative learning experiment that could be implemented across low-cost wearables. Our time-series-to-time-series generative adversarial network (T2TGAN) is capable of high-quality continuous ABP generation from a PPG signal with a mean error of 2.95 mmHg and a standard deviation of 19.33 mmHg when estimating mean arterial pressure on a previously unseen, noisy, independent dataset. To our knowledge, this framework is the first example of a GAN capable of continuous ABP generation from an input PPG signal that also uses a federated learning methodology.
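The distributed collaborative training the abstract describes rests on a federated averaging step: clients train locally on private data and a server averages their parameters. A minimal sketch of that step, with toy names and a toy "training" rule standing in for the paper's T2TGAN:

```python
# Minimal federated averaging (FedAvg) sketch: each client trains locally,
# then the server averages parameter vectors, weighted by local data size.
# Names, shapes, and the local update rule are illustrative only.

def local_update(weights, data, lr=0.1):
    """Toy local 'training' pass: nudge each weight toward the data mean."""
    mean = sum(data) / len(data)
    return [w - lr * (w - mean) for w in weights]

def federated_average(client_weights, client_sizes):
    """Server step: size-weighted average of client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Three simulated clients holding private data (e.g., sensor windows).
clients = {"a": [1.0, 1.2], "b": [0.8], "c": [1.1, 0.9, 1.0]}
global_w = [0.0, 0.0]
for _ in range(5):  # communication rounds
    updates = [local_update(global_w, d) for d in clients.values()]
    sizes = [len(d) for d in clients.values()]
    global_w = federated_average(updates, sizes)
```

Only parameters cross the network; the raw data never leaves each client, which is the privacy argument for federating wearable-health models.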
ABNT, Harvard, Vancouver, APA, etc. styles
4

Vrachimis, Andreas, Stella Gkegka, and Kostas Kolomvatsos. "Resilient edge machine learning in smart city environments". Journal of Smart Cities and Society 2, no. 1 (July 7, 2023): 3–24. http://dx.doi.org/10.3233/scs-230005.

Full text of the source
Abstract:
Distributed Machine Learning (DML) has emerged as a disruptive technology that enables the execution of Machine Learning (ML) and Deep Learning (DL) algorithms in proximity to data generation, facilitating predictive analytics services in Smart City environments. However, the real-time analysis of data generated by Smart City Edge Devices (EDs) poses significant challenges. Concept drift, where the statistical properties of data streams change over time, leads to degraded prediction performance. Moreover, the reliability of each computing node directly impacts the availability of DML systems, making them vulnerable to node failures. To address these challenges, we propose a resilience framework comprising computationally lightweight maintenance strategies that ensure continuous quality of service and availability in DML applications. We conducted a comprehensive experimental evaluation using real datasets, assessing the effectiveness and efficiency of our resilience maintenance strategies across three different scenarios. Our findings demonstrate the significance and practicality of our framework in sustaining predictive performance in smart city edge learning environments. Specifically, our enhanced model exhibited increased generalizability when confronted with concept drift. Furthermore, we achieved a substantial reduction in the amount of data transmitted over the network during the maintenance of the enhanced models, while balancing the trade-off between the quality of analytics and inter-node data communication cost.
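The concept drift the abstract targets (statistical properties of a stream changing over time) can be flagged with very lightweight checks of the kind an edge maintenance strategy could run. A purely illustrative sketch, comparing a recent window against a reference window by mean shift; the threshold and window sizes are assumptions, not the paper's method:

```python
# Toy drift check: flag drift when the recent window's mean diverges from a
# reference window's mean beyond a threshold. Illustrative only.
from collections import deque

def drift_detected(reference, recent, threshold=0.5):
    ref_mean = sum(reference) / len(reference)
    rec_mean = sum(recent) / len(recent)
    return abs(rec_mean - ref_mean) > threshold

# A sensor stream whose distribution shifts partway through.
stream = [0.1, 0.2, 0.1, 0.15, 0.1, 0.12, 0.11, 0.1, 0.9, 1.1, 1.0, 0.95]
reference = stream[:4]          # data the deployed model was trained on
window = deque(maxlen=4)        # sliding window over new observations
flags = []
for x in stream[4:]:
    window.append(x)
    if len(window) == window.maxlen:
        flags.append(drift_detected(reference, list(window)))
```

A `True` flag is what would trigger the maintenance strategy (e.g., model refresh), trading a little computation for sustained prediction quality.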
ABNT, Harvard, Vancouver, APA, etc. styles
5

Musa, M. O., and E. E. Odokuma. "A framework for the detection of distributed denial of service attacks on network logs using ML and DL classifiers". Scientia Africana 22, no. 3 (January 25, 2024): 153–64. http://dx.doi.org/10.4314/sa.v22i3.14.

Full text of the source
Abstract:
Despite the promise of machine learning in DDoS mitigation, it is not without its challenges. Attackers can employ adversarial techniques to evade detection by machine learning models. Moreover, machine learning models require large amounts of high-quality data for training and continuous refinement. Security teams must also be vigilant in monitoring and fine-tuning these models to adapt to new attack vectors. Nonetheless, the integration of machine learning into cybersecurity strategies represents a powerful approach to countering the persistent threat of DDoS attacks in an increasingly interconnected world. This paper proposes Machine Learning (ML) models and a Deep Learning (DL) model for the detection of Distributed Denial of Service (DDoS) attacks on network systems. The DDoS dataset is highly imbalanced because the numbers of instances of its various classes differ. To solve the imbalance problem, we performed random under-sampling using a Python under-sampling technique called the random under-sampler. The down-sampled dataset was used to train the ML and DL classifiers: random forest, gradient boosting, and recurrent neural network algorithms. The models were trained on the DDoS dataset by fine-tuning the hyperparameters and were then used to make predictions on an unseen dataset to detect the various types of DDoS attacks. The results were evaluated in terms of accuracy: 79% for random forest, 82% for gradient boosting, and 99.47% for the recurrent neural network.
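The random under-sampling step the abstract relies on (the paper uses imbalanced-learn's random under-sampler) simply discards majority-class examples until all classes are equally represented. A minimal pure-Python sketch of that effect:

```python
# Random under-sampling: downsample every class to the size of the smallest
# class before training. Mimics the effect of a random under-sampler;
# the data here is a toy stand-in for the DDoS dataset.
import random

def random_undersample(samples, labels, seed=0):
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    n_min = min(len(v) for v in by_class.values())  # smallest class size
    out_x, out_y = [], []
    for y, xs in by_class.items():
        for x in rng.sample(xs, n_min):  # keep n_min examples per class
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y

X = list(range(10))
y = ["attack"] * 8 + ["benign"] * 2   # imbalanced: 8 vs 2
Xb, yb = random_undersample(X, y)
```

The balanced `(Xb, yb)` is then what the classifiers train on, preventing the majority class from dominating the learned decision boundary.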
ABNT, Harvard, Vancouver, APA, etc. styles
6

Oliveri, Giorgio, Lucas C. van Laake, Cesare Carissimo, Clara Miette, and Johannes T. B. Overvelde. "Continuous learning of emergent behavior in robotic matter". Proceedings of the National Academy of Sciences 118, no. 21 (May 10, 2021): e2017015118. http://dx.doi.org/10.1073/pnas.2017015118.

Full text of the source
Abstract:
One of the main challenges in robotics is the development of systems that can adapt to their environment and achieve autonomous behavior. Current approaches typically aim to achieve this by increasing the complexity of the centralized controller by, e.g., direct modeling of their behavior, or implementing machine learning. In contrast, we simplify the controller using a decentralized and modular approach, with the aim of finding specific requirements needed for a robust and scalable learning strategy in robots. To achieve this, we conducted experiments and simulations on a specific robotic platform assembled from identical autonomous units that continuously sense their environment and react to it. By letting each unit adapt its behavior independently using a basic Monte Carlo scheme, the assembled system is able to learn and maintain optimal behavior in a dynamic environment as long as its memory is representative of the current environment, even when incurring damage. We show that the physical connection between the units is enough to achieve learning, and no additional communication or centralized information is required. As a result, such a distributed learning approach can be easily scaled to larger assemblies, blurring the boundaries between materials and robots, paving the way for a new class of modular “robotic matter” that can autonomously learn to thrive in dynamic or unfamiliar situations, for example, encountered by soft robots or self-assembled (micro)robots in various environments spanning from the medical realm to space explorations.
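The decentralized scheme described above, where each unit independently adapts using a basic Monte Carlo rule, can be sketched as follows. The objective function and parameters here are stand-ins, not the paper's robotic platform:

```python
# Sketch of decentralized Monte Carlo learning: one unit at a time perturbs
# its own parameter and keeps the change only if the sensed performance
# improves. The quadratic objective is a toy stand-in.
import random

def performance(phases):
    """Toy global objective each unit can sense: best when all phases are 0."""
    return -sum(p * p for p in phases)

rng = random.Random(1)
phases = [rng.uniform(-1, 1) for _ in range(5)]  # one parameter per unit
best = performance(phases)
initial = best
for _ in range(200):
    i = rng.randrange(len(phases))     # a single unit acts...
    trial = phases[:]
    trial[i] += rng.gauss(0, 0.1)      # ...perturbing only its own parameter
    if performance(trial) > best:      # keep the move only if better
        phases, best = trial, performance(trial)
```

No unit communicates with any other; improvement emerges purely from each unit's local trial-and-error against the shared objective, which is the scalability argument the paper makes.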
ABNT, Harvard, Vancouver, APA, etc. styles
7

Kodaira, Daisuke, Kazuki Tsukazaki, Taiki Kure, and Junji Kondoh. "Improving Forecast Reliability for Geographically Distributed Photovoltaic Generations". Energies 14, no. 21 (November 4, 2021): 7340. http://dx.doi.org/10.3390/en14217340.

Full text of the source
Abstract:
Photovoltaic (PV) generation is inherently uncertain. Probabilistic PV generation forecasting methods have been proposed with prediction intervals (PIs) to evaluate the uncertainty quantitatively. However, few studies have applied PIs to geographically distributed PVs in a specific area. In this study, a two-step probabilistic forecast scheme is proposed for geographically distributed PV generation forecasting. Each step of the proposed scheme adopts ensemble forecasting based on three different machine-learning methods. When individual PV generation is forecasted, the proposed scheme utilizes the past data of surrounding PVs to train the ensemble forecasting model. In a case study, the proposed scheme was compared with conventional non-multistep forecasting. The proposed scheme improved the reliability of the PIs and the deterministic PV forecasting results over 30 days of continuous operation with real data from Japan.
ABNT, Harvard, Vancouver, APA, etc. styles
8

Hua, Xia, and Lei Han. "Design and Practical Application of Sports Visualization Platform Based on Tracking Algorithm". Computational Intelligence and Neuroscience 2022 (August 16, 2022): 1–9. http://dx.doi.org/10.1155/2022/4744939.

Full text of the source
Abstract:
Machine learning methods use computers to imitate human learning activities, discovering new knowledge and enhancing learning effects through continuous improvement. The main process is to classify or predict unknown data by learning from existing experience and building a learning machine. To improve the real-time performance and accuracy of the distributed EM algorithm for online machine learning, a cluster analysis algorithm based on distance measurement is proposed in combination with related theories. Among these methods, the greedy EM algorithm is practical and important. However, existing methods cannot load a large amount of social information into memory at once. Therefore, we created a Hadoop cluster to cluster the Gaussian mixture model and check the accuracy of the algorithm, then compared the running times of the distributed EM algorithm and the greedy algorithm to verify its efficiency, and finally checked the scalability of the algorithm by increasing the number of nodes. On this basis, the article investigates the visualization of sports movements, since visualized teaching of sports movements can stimulate students' interest in physical education. The traditional physical education curriculum is based entirely on the teacher's oral explanation and personal demonstration; the emergence of visualized teaching of motor movements broke this teacher-centered model and made teaching methods more engaging, stimulating students' interest in sports and improving classroom efficiency.
ABNT, Harvard, Vancouver, APA, etc. styles
9

Rustam, Furqan, Muhammad Faheem Mushtaq, Ameer Hamza, Muhammad Shoaib Farooq, Anca Delia Jurcut, and Imran Ashraf. "Denial of Service Attack Classification Using Machine Learning with Multi-Features". Electronics 11, no. 22 (November 20, 2022): 3817. http://dx.doi.org/10.3390/electronics11223817.

Full text of the source
Abstract:
The exploitation of internet networks through denial of service (DoS) attacks has experienced a continuous surge over the past few years. Despite the development of advanced intrusion detection and protection systems, network security remains a challenging problem and necessitates efficient and effective defense mechanisms to detect these threats. This research proposes a machine learning-based framework to detect distributed DoS (DDoS)/DoS attacks. For this purpose, a large dataset containing the network traffic of the application layer is utilized. A novel multi-feature approach is proposed in which principal component analysis (PCA) features and singular value decomposition (SVD) features are combined to obtain higher performance. The multi-feature approach is validated by extensive experiments using several machine learning models. The performance of the machine learning models is evaluated for each class of attack, and results are discussed in terms of accuracy, recall, F1 score, etc., in the context of recent state-of-the-art approaches. Experimental results confirm that using multi-features increases performance, with random forest (RF) obtaining 100% accuracy.
ABNT, Harvard, Vancouver, APA, etc. styles
10

Huang, Leqi. "Problems, solutions and improvements on federated learning model". Applied and Computational Engineering 22, no. 1 (October 23, 2023): 183–86. http://dx.doi.org/10.54254/2755-2721/22/20231215.

Full text of the source
Abstract:
The field of machine learning has been advancing at a significant pace since the start of the 21st century due to continuous modifications and improvements to its major underlying algorithms, particularly the model named federated learning (FL). This paper focuses specifically on the Partially Distributed and Coordinated Model, one of the major models within federated learning, to provide an analysis of the model's working algorithms, existing problems and solutions, and improvements on the original model. The identification of the merits and drawbacks of each solution is founded on document analysis, data analysis, and contrastive analysis. The research concluded that both the alternative solutions and the improvements to the original model possess unique advantages as well as newly emerged concerns or challenges.
ABNT, Harvard, Vancouver, APA, etc. styles

Theses / dissertations on the topic "Continuous and distributed machine learning"

1

Armond, Kenneth C. Jr. "Distributed Support Vector Machine Learning". ScholarWorks@UNO, 2008. http://scholarworks.uno.edu/td/711.

Full text of the source
Abstract:
Support Vector Machines (SVMs) are used for a growing number of applications. A fundamental constraint on SVM learning is the management of the training set. This is because the order of computations goes as the square of the size of the training set. Typically, training sets of 1000 (500 positives and 500 negatives, for example) can be managed on a PC without hard-drive thrashing. Training sets of 10,000 however, simply cannot be managed with PC-based resources. For this reason most SVM implementations must contend with some kind of chunking process to train parts of the data at a time (10 chunks of 1000, for example, to learn the 10,000). Sequential and multi-threaded chunking methods provide a way to run the SVM on large datasets while retaining accuracy. The multi-threaded distributed SVM described in this thesis is implemented using Java RMI, and has been developed to run on a network of multi-core/multi-processor computers.
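The chunking idea described above (training on manageable pieces of the data while carrying the current model forward) can be sketched in a few lines. This single-process loop only illustrates the data-management pattern; the thesis distributes the chunks over Java RMI workers, and the "SVM step" here is a toy stand-in that keeps the points nearest an assumed decision boundary at zero:

```python
# Sequential chunking sketch: split the training set into chunks, train on
# one chunk at a time, and carry a bounded model (candidate support vectors)
# forward. The 'training' rule is illustrative, not a real SVM solver.

def make_chunks(data, chunk_size):
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def train_on_chunk(model_points, chunk, keep=3):
    """Toy SVM step: merge the chunk into the model, then retain the 'keep'
    points closest to the decision boundary (here: closest to 0)."""
    merged = model_points + chunk
    return sorted(merged, key=abs)[:keep]

data = [5.0, -4.0, 0.5, 3.0, -0.2, 2.5, -1.0, 0.1]
model = []
for chunk in make_chunks(data, chunk_size=3):
    model = train_on_chunk(model, chunk)
```

Because the working set never exceeds `keep + chunk_size` points, memory stays bounded no matter how large the full training set grows, which is the constraint the thesis addresses.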
ABNT, Harvard, Vancouver, APA, etc. styles
2

Addanki, Ravichandra. "Learning generalizable device placement algorithms for distributed machine learning". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122746.

Full text of the source
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 47-50).
We present Placeto, a reinforcement learning (RL) approach to efficiently find device placements for distributed neural network training. Unlike prior approaches that only find a device placement for a specific computation graph, Placeto can learn generalizable device placement policies that can be applied to any graph. We propose two key ideas in our approach: (1) we represent the policy as performing iterative placement improvements, rather than outputting a placement in one shot; (2) we use graph embeddings to capture relevant information about the structure of the computation graph, without relying on node labels for indexing. These ideas allow Placeto to train efficiently and generalize to unseen graphs. Our experiments show that Placeto requires up to 6.1 x fewer training steps to find placements that are on par with or better than the best placements found by prior approaches. Moreover, Placeto is able to learn a generalizable placement policy for any given family of graphs, which can then be used without any retraining to predict optimized placements for unseen graphs from the same family. This eliminates the large overhead incurred by prior RL approaches whose lack of generalizability necessitates re-training from scratch every time a new graph is to be placed.
by Ravichandra Addanki.
S.M.
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
ABNT, Harvard, Vancouver, APA, etc. styles
3

Johansson, Samuel, and Karol Wojtulewicz. "Machine learning algorithms in a distributed context". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148920.

Full text of the source
Abstract:
Interest in distributed approaches to machine learning has increased significantly in recent years due to continuously increasing data sizes for training machine learning models. In this thesis we describe three popular machine learning algorithms: decision trees, Naive Bayes and support vector machines (SVM) and present existing ways of distributing them. We also perform experiments with decision trees distributed with bagging, boosting and hard data partitioning and evaluate them in terms of performance measures such as accuracy, F1 score and execution time. Our experiments show that the execution time of bagging and boosting increase linearly with the number of workers, and that boosting performs significantly better than bagging and hard data partitioning in terms of F1 score. The hard data partitioning algorithm works well for large datasets where the execution time decrease as the number of workers increase without any significant loss in accuracy or F1 score, while the algorithm performs poorly on small data with an increase in execution time and loss in accuracy and F1 score when the number of workers increase.
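Bagging, one of the distribution strategies compared above, gives each worker a bootstrap sample to train on and combines the workers' models by majority vote. A minimal sketch with simulated workers; the threshold "stump" is a trivial stand-in for the decision trees used in the thesis:

```python
# Bagging sketch: each simulated worker trains on a bootstrap sample; final
# predictions are a majority vote. The learner is a toy threshold stump.
import random
from collections import Counter

def fit_stump(sample):
    """Learn a threshold: the midpoint between the two class means."""
    zeros = [x for x, y in sample if y == 0]
    ones = [x for x, y in sample if y == 1]
    return (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2

def bagging_fit(data, n_workers, rng):
    stumps = []
    for _ in range(n_workers):  # one bootstrap sample per worker
        boot = [rng.choice(data) for _ in data]
        if len({y for _, y in boot}) < 2:
            boot = data  # degenerate one-class resample: use full data
        stumps.append(fit_stump(boot))
    return stumps

def bagging_predict(stumps, x):
    votes = Counter(int(x > t) for t in stumps)
    return votes.most_common(1)[0][0]

data = [(0.1, 0), (0.3, 0), (0.2, 0), (0.8, 1), (0.9, 1), (1.0, 1)]
stumps = bagging_fit(data, n_workers=5, rng=random.Random(0))
pred = bagging_predict(stumps, 0.95)
```

Since the workers' bootstrap samples are independent, the fits can run fully in parallel, which is why the thesis finds execution time governed mainly by per-worker cost plus coordination overhead.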
ABNT, Harvard, Vancouver, APA, etc. styles
4

Karimi, Ahmad Maroof. "Distributed Machine Learning Based Intrusion Detection System". University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1470401374.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
5

Zam, Anton. "Evaluating Distributed Machine Learning using IoT Devices". Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-42388.

Full text of the source
Abstract:
Internet of things (IoT) is growing every year, with new devices being added all the time. Although some of these devices are continuously in use, a large share are mostly idle, sitting on untapped processing power that could be used for machine learning computations. Many methods already exist for combining the processing power of multiple devices to compute machine learning tasks; these are often called distributed machine learning methods. The main focus of this thesis is to evaluate such methods to see whether they can be implemented on IoT devices and, if so, to measure how efficient and scalable they are. The method chosen for implementation was "MultiWorkerMirrorStrategy", and it was evaluated by comparing the training time, training accuracy, and evaluation accuracy of 2, 3, and 4 Raspberry Pis against a non-distributed machine learning method on 1 Raspberry Pi. The results showed that although the computational power increased with every added device, the training time increased while the rest of the measurements stayed the same. After the results were analyzed and discussed, the conclusion was that the overhead added by communication between the devices was too high, making the method very inefficient; it would not scale without some form of optimization.
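TensorFlow's multi-worker mirrored strategy, the kind of setup evaluated above, is driven by a `TF_CONFIG` environment variable describing the cluster. A sketch of how each Raspberry Pi could be configured; the hostnames and port are placeholders, and TensorFlow itself is deliberately not imported here:

```python
# Build the TF_CONFIG cluster description that TensorFlow's multi-worker
# strategies read. Addresses are illustrative placeholders.
import json
import os

def make_tf_config(worker_addresses, my_index):
    return {
        "cluster": {"worker": worker_addresses},      # all workers, same order everywhere
        "task": {"type": "worker", "index": my_index},  # this device's role
    }

workers = ["pi-0.local:12345", "pi-1.local:12345", "pi-2.local:12345"]
cfg = make_tf_config(workers, my_index=0)  # index differs per device
os.environ["TF_CONFIG"] = json.dumps(cfg)

# Each device would then create tf.distribute.MultiWorkerMirroredStrategy()
# and build its model under strategy.scope(); every gradient step is then
# synchronized across all listed workers - the communication overhead the
# thesis measures.
```

The same `cluster` block must be set on every device, with only `task.index` differing; mismatched configs are a common cause of workers hanging at startup.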
ABNT, Harvard, Vancouver, APA, etc. styles
6

Thompson, Simon Giles. "Distributed boosting algorithms". Thesis, University of Portsmouth, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285529.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
7

Dahlberg, Leslie. "Evolutionary Computation in Continuous Optimization and Machine Learning". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-35674.

Full text of the source
Abstract:
Evolutionary computation is a field which uses natural computational processes to optimize mathematical and industrial problems. Differential Evolution, Particle Swarm Optimization and Estimation of Distribution Algorithm are some of the newer emerging varieties which have attracted great interest among researchers. This work has compared these three algorithms on a set of mathematical and machine learning benchmarks and also synthesized a new algorithm from the three other ones and compared it to them. The results from the benchmark show which algorithm is best suited to handle various machine learning problems and presents the advantages of using the new algorithm. The new algorithm called DEDA (Differential Estimation of Distribution Algorithms) has shown promising results at both machine learning and mathematical optimization tasks.
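Differential Evolution, one of the three algorithms the thesis compares, can be sketched compactly in its standard DE/rand/1/bin form. This is a generic textbook variant on the sphere benchmark, not the thesis's exact experimental setup; population size, F, and CR are assumed values:

```python
# Minimal differential evolution (DE/rand/1/bin) on the sphere function.
import random

def sphere(x):
    return sum(v * v for v in x)   # global minimum 0 at the origin

def differential_evolution(f, dim=3, pop_size=15, F=0.6, CR=0.9,
                           generations=100, rng=random.Random(42)):
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct donors other than the current individual
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)  # ensure at least one mutated gene
            trial = [
                a[j] + F * (b[j] - c[j])                 # mutation...
                if (rng.random() < CR or j == j_rand)    # ...binomial crossover
                else pop[i][j]
                for j in range(dim)
            ]
            if f(trial) <= f(pop[i]):  # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

best = differential_evolution(sphere)
```

Swapping `sphere` for a machine-learning loss (e.g., a model's validation error as a function of its weights) is exactly how such algorithms are applied to the learning benchmarks the thesis describes.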
ABNT, Harvard, Vancouver, APA, etc. styles
8

Ouyang, Hua. "Optimal stochastic and distributed algorithms for machine learning". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49091.

Full text of the source
Abstract:
Stochastic and data-distributed optimization algorithms have received a great deal of attention from the machine learning community due to the tremendous demand from large-scale learning and big-data related optimization. Many stochastic and deterministic learning algorithms have been proposed recently under various application scenarios. Nevertheless, many of these algorithms are based on heuristics and their optimality in terms of the generalization error is not sufficiently justified. In this talk, I will explain the concept of an optimal learning algorithm, and show that given a time budget and proper hypothesis space, only those achieving the lower bounds of the estimation error and the optimization error are optimal. Guided by this concept, we investigated the stochastic minimization of nonsmooth convex loss functions, a central problem in machine learning. We proposed a novel algorithm named Accelerated Nonsmooth Stochastic Gradient Descent, which exploits the structure of common nonsmooth loss functions to achieve optimal convergence rates for a class of problems including SVMs. It is the first stochastic algorithm that can achieve the optimal O(1/t) rate for minimizing nonsmooth loss functions. The fast rates are confirmed by empirical comparisons with state-of-the-art algorithms including the averaged SGD. The Alternating Direction Method of Multipliers (ADMM) is another flexible method to explore function structures. In the second part we proposed stochastic ADMM that can be applied to a general class of convex and nonsmooth functions, beyond the smooth and separable least squares loss used in lasso. We also demonstrate the rates of convergence for our algorithm under various structural assumptions of the stochastic function: O(1/sqrt{t}) for convex functions and O(log t/t) for strongly convex functions. A novel application named Graph-Guided SVM is proposed to demonstrate the usefulness of our algorithm.
We also extend the scalability of stochastic algorithms to nonlinear kernel machines, where the problem is formulated as a constrained dual quadratic optimization. The simplex constraint can be handled by the classic Frank-Wolfe method. The proposed stochastic Frank-Wolfe methods achieve comparable or even better accuracies than state-of-the-art batch and online kernel SVM solvers, and are significantly faster. The last part investigates the problem of data-distributed learning. We formulate it as a consensus-constrained optimization problem and solve it with ADMM. It turns out that the underlying communication topology is a key factor in achieving a balance between a fast learning rate and computation resource consumption. We analyze the linear convergence behavior of consensus ADMM so as to characterize the interplay between the communication topology and the penalty parameters used in ADMM. We observe that given optimal parameters, the complete bipartite and the master-slave graphs exhibit the fastest convergence, followed by bi-regular graphs.
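The baseline these methods improve on, stochastic subgradient descent with iterate averaging on a nonsmooth hinge loss, is a few lines of code. A toy one-feature SVM sketch; this is the plain averaged-SGD baseline mentioned in the abstract, not the dissertation's accelerated algorithm, and the step-size rule and data are assumptions:

```python
# Plain stochastic subgradient descent with iterate averaging on a
# regularized hinge loss (toy 1-feature SVM). Illustrative baseline only.
import random

def sgd_hinge(data, lam=0.1, steps=500, rng=random.Random(0)):
    w, w_avg = 0.0, 0.0
    for t in range(1, steps + 1):
        x, y = rng.choice(data)
        # subgradient of lam/2 * w^2 + max(0, 1 - y*w*x)
        g = lam * w - (y * x if y * w * x < 1 else 0.0)
        w -= g / (lam * t)           # standard 1/(lam*t) step size
        w_avg += (w - w_avg) / t     # running average of iterates
    return w_avg

# Linearly separable toy data: the sign of x predicts the label.
data = [(1.0, 1), (2.0, 1), (-1.0, -1), (-1.5, -1)]
w = sgd_hinge(data)
```

The hinge loss is nonsmooth at the margin, which is exactly why plain SGD only attains the slower averaged rate and why the structured acceleration studied in the dissertation matters.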
ABNT, Harvard, Vancouver, APA, etc. styles
9

Prueller, Hans. "Distributed online machine learning for mobile care systems". Thesis, De Montfort University, 2014. http://hdl.handle.net/2086/10875.

Full text of the source
Abstract:
Telecare and especially Mobile Care Systems are becoming more and more popular. They have two major benefits: first, they drastically improve living standards and even health outcomes for patients; in addition, they allow significant cost savings in adult care by reducing the need for medical staff. A common drawback of current Mobile Care Systems is that in most cases they are rather stationary and firmly installed in patients' houses or flats, which keeps patients very near to or even inside their homes. There is also an emerging second category of Mobile Care Systems which are portable without restricting the patients' moving space, but with the major drawback that they either have very limited computational abilities and rather low classification quality or, most frequently, only a very short runtime on battery, which again indirectly restricts the patients' freedom of movement. These drawbacks are inherently caused by the restricted computational resources and, mainly, the limitations of battery-based power supply in mobile computer systems. This research investigates the application of novel Artificial Intelligence (AI) and Machine Learning (ML) techniques to improve the operation of Mobile Care Systems. As a result, based on the Evolving Connectionist Systems (ECoS) paradigm, an innovative approach for a highly efficient and self-optimising distributed online machine learning algorithm called MECoS - Moving ECoS - is presented. It balances the conflicting needs of providing a highly responsive, complex, and distributed online learning classification algorithm while requiring only limited resources in the form of computational power and energy. This approach overcomes the drawbacks of current mobile systems and combines them with the advantages of powerful stationary approaches.
The research concludes that the practical application of the presented MECoS algorithm offers substantial improvements to the problems as highlighted within this thesis.
ABNT, Harvard, Vancouver, APA, and other citation styles
10

Konečný, Jakub. "Stochastic, distributed and federated optimization for machine learning". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/31478.

Abstract:
We study optimization algorithms for the finite sum problems frequently arising in machine learning applications. First, we propose novel variants of stochastic gradient descent with a variance reduction property that enables linear convergence for strongly convex objectives. Second, we study the distributed setting, in which the data describing the optimization problem does not fit into a single computing node. In this case, traditional methods are inefficient, as the communication costs inherent in distributed optimization become the bottleneck. We propose a communication-efficient framework which iteratively forms local subproblems that can be solved with arbitrary local optimization algorithms. Finally, we introduce the concept of Federated Optimization/Learning, where we try to solve the machine learning problems without having data stored in any centralized manner. The main motivation comes from industry when handling user-generated data. The current prevalent practice is that companies collect vast amounts of user data and store them in datacenters. An alternative we propose is not to collect the data in the first place, and instead occasionally use the computational power of users' devices to solve the very same optimization problems, while alleviating privacy concerns at the same time. In such a setting, minimization of communication rounds is the primary goal, and we demonstrate that solving the optimization problems in such circumstances is conceptually tractable.
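The variance-reduction idea summarised in this abstract can be illustrated with a minimal SVRG-style loop. This is a generic sketch in Python/NumPy, not the thesis's own code; the function name `svrg`, the `grad_i` callback, and all parameter values are assumptions for illustration.

```python
import numpy as np

def svrg(grad_i, w0, n, epochs=50, inner=100, lr=0.1, rng=None):
    """SVRG: stochastic gradient descent with a variance-reduction
    correction, for objectives of the form (1/n) * sum_i f_i(w).

    grad_i(w, i) must return the gradient of the i-th summand at w.
    """
    rng = rng or np.random.default_rng(0)
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(epochs):
        w_snap = w.copy()
        # full gradient at the snapshot, recomputed once per epoch
        full_grad = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(inner):
            i = rng.integers(n)
            # variance-reduced estimate: unbiased, and its variance
            # shrinks to zero as w approaches the optimum
            g = grad_i(w, i) - grad_i(w_snap, i) + full_grad
            w -= lr * g
    return w
```

On a least-squares finite sum, for example, `grad_i(w, i) = (x_i·w − y_i)·x_i` and the iterates approach the solution despite the constant step size — the property the abstract refers to as linear convergence for strongly convex objectives.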

Books on the topic "Continuous and distributed machine learning"

1

Weiss, Gerhard. Distributed machine learning. Sankt Augustin: Infix, 1995.

2

Testas, Abdelaziz. Distributed Machine Learning with PySpark. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9751-3.

3

Amini, M. Hadi, ed. Distributed Machine Learning and Computing. Cham: Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-57567-9.

4

Jiang, Jiawei, Bin Cui and Ce Zhang. Distributed Machine Learning and Gradient Optimization. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-3420-8.

5

Joshi, Gauri. Optimization Algorithms for Distributed Machine Learning. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-19067-4.

6

Sahoo, Jyoti Prakash, Asis Kumar Tripathy, Manoranjan Mohanty, Kuan-Ching Li and Ajit Kumar Nayak, eds. Advances in Distributed Computing and Machine Learning. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-4807-6.

7

Rout, Rashmi Ranjan, Soumya Kanti Ghosh, Prasanta K. Jana, Asis Kumar Tripathy, Jyoti Prakash Sahoo and Kuan-Ching Li, eds. Advances in Distributed Computing and Machine Learning. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1018-0.

8

Tripathy, Asis Kumar, Mahasweta Sarkar, Jyoti Prakash Sahoo, Kuan-Ching Li and Suchismita Chinara, eds. Advances in Distributed Computing and Machine Learning. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-4218-3.

9

Nanda, Umakanta, Asis Kumar Tripathy, Jyoti Prakash Sahoo, Mahasweta Sarkar and Kuan-Ching Li, eds. Advances in Distributed Computing and Machine Learning. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-1841-2.

10

Chinara, Suchismita, Asis Kumar Tripathy, Kuan-Ching Li, Jyoti Prakash Sahoo and Alekha Kumar Mishra, eds. Advances in Distributed Computing and Machine Learning. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1203-2.


Book chapters on the topic "Continuous and distributed machine learning"

1

Carter, Eric, and Matthew Hurst. "Continuous Delivery". In Agile Machine Learning, 59–69. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5107-2_3.

2

Yang, Qiang, Yang Liu, Yong Cheng, Yan Kang, Tianjian Chen and Han Yu. "Distributed Machine Learning". In Federated Learning, 33–48. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-031-01585-4_3.

3

Galakatos, Alex, Andrew Crotty and Tim Kraska. "Distributed Machine Learning". In Encyclopedia of Database Systems, 1–6. New York, NY: Springer New York, 2017. http://dx.doi.org/10.1007/978-1-4899-7993-3_80647-1.

4

Galakatos, Alex, Andrew Crotty and Tim Kraska. "Distributed Machine Learning". In Encyclopedia of Database Systems, 1196–201. New York, NY: Springer New York, 2018. http://dx.doi.org/10.1007/978-1-4614-8265-9_80647.

5

Shultz, Thomas R., Scott E. Fahlman, Susan Craw, Periklis Andritsos, Panayiotis Tsaparas, Ricardo Silva, Chris Drummond et al. "Continuous Attribute". In Encyclopedia of Machine Learning, 226. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_172.

6

Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen and Tong Li. "Secure Distributed Learning". In Privacy-Preserving Machine Learning, 47–56. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_4.

7

Ducoulombier, Antoine, and Michèle Sebag. "Continuous mimetic evolution". In Machine Learning: ECML-98, 334–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0026704.

8

Cleophas, Ton J., and Aeilko H. Zwinderman. "Continuous Sequential Techniques". In Machine Learning in Medicine, 187–94. Dordrecht: Springer Netherlands, 2013. http://dx.doi.org/10.1007/978-94-007-6886-4_18.

9

Chen, Zhiyuan, and Bing Liu. "Continuous Knowledge Learning in Chatbots". In Lifelong Machine Learning, 131–38. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-031-01581-6_8.

10

Liu, Mark. "Q-Learning with Continuous States". In Machine Learning, Animated, 285–300. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/b23383-15.


Conference papers on the topic "Continuous and distributed machine learning"

1

Belcastro, Loris, Fabrizio Marozzo, Aleandro Presta and Domenico Talia. "A Spark-based Task Allocation Solution for Machine Learning in the Edge-Cloud Continuum". In 2024 20th International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT), 576–82. IEEE, 2024. http://dx.doi.org/10.1109/dcoss-iot61029.2024.00090.

2

Barros, Claudio D. T., Daniel N. R. da Silva and Fabio A. M. Porto. "Machine Learning on Graph-Structured Data". In Anais Estendidos do Simpósio Brasileiro de Banco de Dados. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/sbbd_estendido.2021.18179.

Abstract:
Several real-world complex systems have graph-structured data, including social networks, biological networks, and knowledge graphs. A continuous increase in the quantity and quality of these graphs demands learning models to unlock the potential of this data and execute tasks, including node classification, graph classification, and link prediction. This tutorial presents machine learning on graphs, focusing on how representation learning - from traditional approaches (e.g., matrix factorization and random walks) to deep neural architectures - fosters carrying out those tasks. We also introduce representation learning over dynamic and knowledge graphs. Lastly, we discuss open problems, such as scalability and distributed network embedding systems.
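As a toy illustration of the random-walk approaches the tutorial mentions, a DeepWalk-style corpus of truncated walks can be generated in a few lines. This is a hedged sketch, not the tutorial's code; the adjacency-list format, function name `random_walks`, and parameter defaults are assumptions, and in practice the resulting corpus would be fed to a skip-gram model to learn node embeddings.

```python
import random

def random_walks(adj, num_walks=10, walk_len=5, seed=0):
    """Generate a DeepWalk-style corpus of truncated random walks.

    adj: dict mapping each node to a list of its neighbours.
    Each walk is a list of nodes; feeding the corpus to a skip-gram
    model (e.g. word2vec) yields node embeddings.
    """
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in adj:            # one walk per node per round
            walk = [start]
            while len(walk) < walk_len:
                nbrs = adj[walk[-1]]
                if not nbrs:         # dead end: truncate the walk
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks
```

Matrix-factorization approaches, also mentioned above, instead factor a node-similarity matrix directly; walk-based methods approximate a similar objective while scaling to large sparse graphs.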
3

Chepurnov, A., and N. Ershov. "APPLICATION OF MACHINE LEARNING METHODS FOR CROSS-CLASSIFICATION OF ALGORITHMS AND PROBLEMS OF MULTIVARIATE CONTINUOUS OPTIMIZATION". In 9th International Conference "Distributed Computing and Grid Technologies in Science and Education". Crossref, 2021. http://dx.doi.org/10.54546/mlit.2021.67.50.001.

Abstract:
The paper is devoted to the development of a software system for the mutual classification of families of population optimization algorithms and problems of multivariate continuous optimization. One of the goals of this study is to develop methods for predicting the performance of the algorithms included in the system and choosing the most effective algorithms from them for solving a user-specified optimization problem. In addition, the proposed software system can be used to expand existing test suites with new optimization problems. The work was carried out with the financial support of the Russian Foundation for Basic Research (Grant No. 20-07-01053 A).
4

Sartzetakis, Ippokratis, Polyzois Soumplis, Panagiotis Pantazopoulos, Konstantinos V. Katsaros, Vasilis Sourlas and Emmanouel Manos Varvarigos. "Resource Allocation for Distributed Machine Learning at the Edge-Cloud Continuum". In ICC 2022 - IEEE International Conference on Communications. IEEE, 2022. http://dx.doi.org/10.1109/icc45855.2022.9838647.

5

Tiezzi, Matteo, Simone Marullo, Lapo Faggi, Enrico Meloni, Alessandro Betti and Stefano Melacci. "Stochastic Coherence Over Attention Trajectory For Continuous Learning In Video Streams". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/483.

Abstract:
Devising intelligent agents able to live in an environment and learn by observing the surroundings is a longstanding goal of Artificial Intelligence. From a bare Machine Learning perspective, challenges arise when the agent is prevented from leveraging large fully-annotated dataset, but rather the interactions with supervisory signals are sparsely distributed over space and time. This paper proposes a novel neural-network-based approach to progressively and autonomously develop pixel-wise representations in a video stream. The proposed method is based on a human-like attention mechanism that allows the agent to learn by observing what is moving in the attended locations. Spatio-temporal stochastic coherence along the attention trajectory, paired with a contrastive term, leads to an unsupervised learning criterion that naturally copes with the considered setting. Differently from most existing works, the learned representations are used in open-set class-incremental classification of each frame pixel, relying on few supervisions. Our experiments leverage 3D virtual environments and they show that the proposed agents can learn to distinguish objects just by observing the video stream. Inheriting features from state-of-the art models is not as powerful as one might expect.
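The "contrastive term" mentioned in the abstract is, in its generic form, similar to an InfoNCE-style loss. The following is an illustrative stand-in, not the paper's actual criterion; the function name, cosine similarity, and temperature value are assumptions.

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, temp=0.1):
    """InfoNCE-style contrastive loss: pull the positive's representation
    toward the anchor while pushing the negatives away."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temp
    logits -= logits.max()              # numerical stability
    # cross-entropy with the positive treated as the correct "class"
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

In the paper's setting, positives would come from representations along the attention trajectory that should stay coherent over space and time, with other locations serving as negatives.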
6

Gupta, Sujasha, Srivatsava Krishnan and Vishnubaba Sundaresan. "Structural Health Monitoring of Composite Structures via Machine Learning of Mechanoluminescence". In ASME 2019 Conference on Smart Materials, Adaptive Structures and Intelligent Systems. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/smasis2019-5697.

Abstract The goal of this paper is to develop a machine learning algorithm for structural health monitoring of polymer composites with mechanoluminescent phosphors as distributed sensors. Mechanoluminescence is the phenomenon of light emission from organic/inorganic materials due to mechanical stimuli. Distributed sensors collect a large amount of data and contain structural response information that is difficult to analyze using classical or continuum models. Hence, approaches to analyze this data using machine learning or deep learning is necessary to develop models that describe initiation of damage, propagation and ultimately structural failure. This paper focuses on developing a machine learning algorithm that predicts the elastic modulus of a structure as a function of input parameters such as stress and measured light output. The training data for the algorithm utilizes experimental results from cyclical loading of elastomeric composite coupons impregnated with ML particles. A multivariate linear regression is performed on the elastic modulus within the training data as a function of stress and ML emission intensity. Error in predicted elastic modulus is minimized using a gradient descent algorithm. The machine learning algorithm outlined in this paper is expected to provide insights into structural response and deterioration of mechanical properties in real-time that cannot be obtained using a finite array of sensors.
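The modelling step described above — multivariate linear regression whose error is minimised by gradient descent — can be sketched generically as follows. This is illustrative NumPy code, not the authors' implementation; the function name, learning rate, and epoch count are assumptions.

```python
import numpy as np

def fit_linear(X, y, lr=0.1, epochs=20000):
    """Multivariate linear regression fitted by full-batch gradient
    descent on the mean-squared error."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        err = X @ w + b - y            # residuals
        w -= lr * (X.T @ err) / n      # MSE gradient w.r.t. weights
        b -= lr * err.mean()           # MSE gradient w.r.t. intercept
    return w, b
```

In the paper's setting, the columns of `X` would hold stress and measured mechanoluminescent intensity, and the target `y` the elastic modulus from cyclical-loading experiments.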
7

Mendoza, Alberto, Çağrı Cerrahoğlu, Alessandro Delfino and Martin Sundin. "Signal Processing and Machine Learning for Effective Integration of Distributed Fiber Optic Sensing Data in Production Petrophysics". In 2022 SPWLA 63rd Annual Symposium. Society of Petrophysicists and Well Log Analysts, 2022. http://dx.doi.org/10.30632/spwla-2022-0016.

Abstract:
Distributed fibre optic sensing (DFOS) is progressively being considered in the mix of customary surveillance tools for oil and gas producing assets. Its applications are beyond monitoring of wells for production and reservoir optimization, including detection of well integrity risks and other well completion failures. However, while DFOS can uniquely yield time-dependent spatially distributed measurements, these are yet to be routinely used in formation evaluation and production logging workflows. The large volumes and complexity of time- and depth-dependent data produced by DFOS often require the usage of Digital Signal Processing (DSP) to reduce the amount of stored data and data-driven techniques such as machine learning (ML) for analysis. Distributed sensing data is sampled at high rates; up to 10,000 samples per second and depth for Distributed Acoustic Sensing (DAS), and one sample per minute and depth for distributed temperature sensing (DTS). The high sampling rate in time, across hundreds or thousands of meters, creates a big data problem. Consequently, managing and transferring data acquired in the field to an expert analyst is extremely challenging. Even when these data management challenges are overcome, the amount of data itself is still not suitable for manual analysis. Starting from edge computing for feature extraction, we illustrate the principles of using DSP and ML to effectively handle the challenges of analyzing time-dependent distributed data from DFOS. Results enable integration of DFOS with customary formation evaluation and production surveillance workflows. Feature extraction, a crucial DSP step used to generate inputs to ML, reduces data size by orders of magnitude while ML models analyse continuous data streams from the field. We derive thermal features from DTS data effectively characterizing Joule Thomson effects. 
Moreover, we combine DTS thermal features with acoustic features from DAS in supervised ML for multiphase downhole inflow predictions. In so doing, we have successfully applied ML on DFOS for real-time detection of sand production, production and injection profiling, and well integrity surveillance. With use cases in a range of well completion types and well operating conditions, we demonstrate an end-to-end system of DFOS that effectively integrates DAS and DTS into routine analysis techniques for Formation Evaluation Specialists and Production Petrophysicists.
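The feature-extraction step described above — collapsing dense DAS/DTS streams into compact per-window summaries before ML — might look schematically like this. It is an assumption-laden sketch: the abstract does not specify the actual edge-computed features, and the function name, window size, and chosen statistics are illustrative only.

```python
import numpy as np

def windowed_features(signal, win=1000):
    """Collapse a raw sensing trace into per-window summary features,
    shrinking the data volume by roughly a factor of win / 3."""
    n = len(signal) // win
    frames = np.asarray(signal)[: n * win].reshape(n, win)
    return np.stack([
        np.sqrt((frames ** 2).mean(axis=1)),  # RMS energy
        frames.std(axis=1),                   # spread
        np.abs(frames).max(axis=1),           # peak amplitude
    ], axis=1)
```

Running such a reduction at the edge, per depth channel, is what makes transferring and analysing the otherwise enormous time-by-depth data volumes tractable.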
8

Sadigov, Teymur, Cagri Cerrahoglu, James Ramsay, Laurence Burchell, Sean Cavalero, Thomas Watson, Pradyumna Thiruvenkatanathan and Martin Sundin. "Real-Time Water Injection Monitoring with Distributed Fiber Optics Using Physics-Informed Machine Learning". In Offshore Technology Conference. OTC, 2021. http://dx.doi.org/10.4043/30982-ms.

Abstract This paper introduces a novel technique that allows real-time injection monitoring with distributed fiber optics using physics-informed machine learning methods and presents results from Clair Ridge asset where a cloud-based, real-time application is deployed. Clair Ridge is a structural high comprising of naturally fractured Devonian to Carboniferous continental sandstones, with a significantly naturally fractured ridge area. The fractured nature of the reservoir lends itself to permanent deployment of Distributed Fiber Optic Sensing (DFOS) to enable real-time injection monitoring to maximise recovery from the field. In addition to their default limitations, such as providing a snapshot measurement and disturbing the natural well flow with up and down flowing passes, wireline-conveyed production logs (PL) are also unable to provide a high-resolution profile of the water injection along the reservoir due to the completion type. DFOS offers unique surveillance capability when permanently installed along the reservoir interface and continuously providing injection profiles with full visibility along the reservoir section without the need for an intervention. The real-time injection monitoring application uses both distributed acoustic and temperature sensing (DAS & DTS) and is based on physics-informed machine learning models. It is now running and available to all asset users on the cloud. So far, the application has generated high-resolution injection profiles over a dozen multi-rate injection periods automatically and the results are cross-checked against the profiles from the warmback analyses that were also generated automatically as part of the same application. The real-time monitoring insights have been effectively applied to provide significant business value using the capability for start-up optimization to manage and improve injection conformance, monitor fractured formations and caprock monitoring.
9

"Session details: 1st Workshop on Distributed Machine Learning for the Intelligent Computing Continuum (DML-ICC)". In UCC '21: 2021 IEEE/ACM 14th International Conference on Utility and Cloud Computing, editado por Luiz F. Bittencourt, Ian Foster e Filip De Turck. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3517186.

10

Gao, Hongchang, Hanzi Xu and Slobodan Vucetic. "Sample Efficient Decentralized Stochastic Frank-Wolfe Methods for Continuous DR-Submodular Maximization". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/482.

Abstract:
Continuous DR-submodular maximization is an important machine learning problem, which covers numerous popular applications. With the emergence of large-scale distributed data, developing efficient algorithms for the continuous DR-submodular maximization, such as the decentralized Frank-Wolfe method, became an important challenge. However, existing decentralized Frank-Wolfe methods for this kind of problem have the sample complexity of $\mathcal{O}(1/\epsilon^3)$, incurring a large computational overhead. In this paper, we propose two novel sample efficient decentralized Frank-Wolfe methods to address this challenge. Our theoretical results demonstrate that the sample complexity of the two proposed methods is $\mathcal{O}(1/\epsilon^2)$, which is better than $\mathcal{O}(1/\epsilon^3)$ of the existing methods. As far as we know, this is the first published result achieving such a favorable sample complexity. Extensive experimental results confirm the effectiveness of the proposed methods.
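The single-node Frank-Wolfe scheme underlying such methods can be sketched for monotone DR-submodular maximisation over the unit cube. This is an illustrative sketch only — the paper's decentralized, sample-efficient variants additionally average iterates and gradient estimates across a network and use stochastic gradients — and the function name and step count are assumptions.

```python
import numpy as np

def frank_wolfe_box(grad, d, T=100):
    """Frank-Wolfe for monotone DR-submodular maximisation over [0,1]^d:
    start at 0 and take T steps of size 1/T toward the vertex that
    maximises the linearised objective, so x stays inside the cube."""
    x = np.zeros(d)
    for _ in range(T):
        g = grad(x)
        v = (g > 0).astype(float)   # argmax of <v, g> over the unit cube
        x += v / T
    return x
```

Unlike projected gradient ascent, each step solves only a linear subproblem over the feasible set, which is what makes the method attractive when projections are expensive; it can be sanity-checked on a concave quadratic such as f(x) = a·x − ½‖x‖², whose maximiser over the cube is a itself.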

Reports by organizations on the topic "Continuous and distributed machine learning"

1

Shead, Timothy, Jonathan Berry, Cynthia Phillips and Jared Saia. Information-Theoretically Secure Distributed Machine Learning. Office of Scientific and Technical Information (OSTI), November 2019. http://dx.doi.org/10.2172/1763277.

2

Lee, Ying-Ying, and Kyle Colangelo. Double debiased machine learning nonparametric inference with continuous treatments. The IFS, October 2019. http://dx.doi.org/10.1920/wp.cem.2019.5419.

3

Lee, Ying-Ying, and Kyle Colangelo. Double debiased machine learning nonparametric inference with continuous treatments. The IFS, December 2019. http://dx.doi.org/10.1920/wp.cem.2019.7219.

4

Huang, Amy, Katelyn Barnes, Joseph Bearer, Evan Chrisinger and Christopher Stone. Integrating Distributed-Memory Machine Learning into Large-Scale HPC Simulations. Office of Scientific and Technical Information (OSTI), May 2018. http://dx.doi.org/10.2172/1460078.

5

Varastehpour, Soheil, Hamid Sharifzadeh and Iman Ardekani. A Comprehensive Review of Deep Learning Algorithms. Unitec ePress, 2021. http://dx.doi.org/10.34074/ocds.092.

Abstract:
Deep learning algorithms are a subset of machine learning algorithms that aim to explore several levels of the distributed representations from the input data. Recently, many deep learning algorithms have been proposed to solve traditional artificial intelligence problems. In this review paper, some of the up-to-date algorithms of this topic in the field of computer vision and image processing are reviewed. Following this, a brief overview of several different deep learning methods and their recent developments are discussed.
6

Liu, Xiaopei, Dan Liu and Cong’e Tan. Gut microbiome-based machine learning for diagnostic prediction of liver fibrosis and cirrhosis: a systematic review and meta-analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, May 2022. http://dx.doi.org/10.37766/inplasy2022.5.0133.

Abstract:
Review question / Objective: The invasive liver biopsy is the gold standard for the diagnosis of liver cirrhosis. Other non-invasive diagnostic approaches, have been used as alternatives to liver biopsy, however, these methods cannot identify the pathological grade of the lesion. Recently, studies have shown that gut microbiome-based machine learning can be used as a non-invasive diagnostic approach for liver cirrhosis or fibrosis, while it lacks evidence-based support. Therefore, we performed this systematic review and meta-analysis to evaluate its predictive diagnostic value in liver cirrhosis or fibrosis. Condition being studied: Liver fibrosis and cirrhosis. Liver fibrosis refers to excessive deposition of liver fibrous tissue caused by various pathogenic factors, such as hepatitis virus, alcohol, and drug-induced chemical injury. Continuous progression of liver fibrosis can lead to liver cirrhosis.
7

Choquette, Gary. PR-000-16209-WEB Data Management Best Practices Learned from CEPM. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), April 2019. http://dx.doi.org/10.55274/r0011568.

Abstract:
DATE: Wednesday, May 1, 2019. TIME: 2:00-3:30 p.m. ET. PRESENTER: Gary Choquette, PRCI. Systems that manage large sets of data are becoming more common in the energy transportation industry. Having access to the data offers the opportunity to learn from previous experiences to help efficiently manage the future. But how does one manage to digest copious quantities of data to find nuggets within the ore? This webinar outlines some of the data management best practices learned from the research projects associated with CEPM: logging/capturing data tips; techniques to identify 'bad' data; methods of mapping equipment and associated regressions; tips for pre-processing data for regressions; machine learning tips; establishing alarm limits; identifying equipment problems; and multiple case studies. Who should attend? Data analysts, equipment support specialists, and those interested in learning more about 'big data' and 'machine learning'. Recommended pre-reading: PR-309-11202-R01 Field Demonstration Test of Advanced Engine and Compressor Diagnostics for CORE; PR-312-12210-R01 CEPM Monitoring Plan for 2SLB Reciprocating Engines*; PR-309-13208-R01 Field Demonstration of Integrated System and Expert Level Continuous Performance Monitoring for CORE*; PR-309-14209-R01 Field Demo of Integrated Expert Level Continuous Performance Monitoring; PR-309-15205-R01 Continuous Engine Performance Monitoring Technical Specification; PR-000-15208-R01 Reciprocating Engine Speed Stability as a Measure of Combustion Stability; PR-309-15209-R01 Evaluation of NSCR Specific Models for Use in CEPM; PR-000-16209-R01 Demonstration of Continuous Equipment Performance Monitoring; PR-015-17606-Z02 Elbow Meter Test Results*. *Documents available to PRCI members only.
8

Visser, R., H. Kao, R. M. H. Dokht, A. B. Mahani and S. Venables. A comprehensive earthquake catalogue for northeastern British Columbia: the northern Montney trend from 2017 to 2020 and the Kiskatinaw Seismic Monitoring and Mitigation Area from 2019 to 2020. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/329078.

Abstract:
To increase our understanding of induced seismicity, we develop and implement methods to enhance seismic monitoring capabilities in northeastern British Columbia (NE BC). We deploy two different machine learning models to identify earthquake phases using waveform data from regional seismic stations and utilize an earthquake database management system to streamline the construction and maintenance of an up-to-date earthquake catalogue. The completion of this study allows for a comprehensive catalogue in NE BC from 2014 to 2020 by building upon our previous 2014-2016 and 2017-2018 catalogues. The bounds of the area where earthquakes were located were between 55.5°N-60.0°N and 119.8°W-123.5°W. The earthquakes in the catalogue were initially detected by machine learning models, then reviewed by an analyst to confirm correct identification, and finally located using the Non-Linear Location (NonLinLoc) algorithm. Two distinct sub-areas within the bounds consider different periods to supplement what was not covered in previously published reports - the Northern Montney Trend (NMT) is covered from 2017 to 2020 while the Kiskatinaw Seismic Monitoring and Mitigation Area (KSMMA) is covered from 2019 to 2020. The two sub-areas are distinguished by the BC Oil & Gas Commission (BCOGC) due to differences in their geographic location and geology. The catalogue was produced by picking arrival phases on continuous seismic waveforms from 51 stations operated by various organizations in the region. A total of 17,908 events passed our quality control criteria and are included in the final catalogue. Comparably, the routine Canadian National Seismograph Network (CNSN) catalogue reports 207 seismic events - all events in the CNSN catalogue are present in our catalogue. Our catalogue benefits from the use of enhanced station coverage and improved methodology. The total number of events in our catalogue in 2017, 2018, 2019, and 2020 were 62, 47, 9579 and 8220, respectively. 
The first two years correspond to seismicity in the NMT where poor station coverage makes it difficult to detect small magnitude events. The magnitude of completeness within the KSMMA (ML = ~0.7) is significantly smaller than that obtained for the NMT (ML = ~1.4). The new catalogue is released with separate files for origins, arrivals, and magnitudes which can be joined using the unique ID assigned to each event.
9

Harris, L. B., P. Adiban and E. Gloaguen. The role of enigmatic deep crustal and upper mantle structures on Au and magmatic Ni-Cu-PGE-Cr mineralization in the Superior Province. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/328984.

Abstract:
Aeromagnetic and ground gravity data for the Canadian Superior Province, filtered to extract long wavelength components and converted to pseudo-gravity, highlight deep, N-S trending regional-scale, rectilinear faults and margins to discrete, competent mafic or felsic granulite blocks (i.e. at high angles to most regional mapped structures and sub-province boundaries) with little to no surface expression that are spatially associated with lode ('orogenic') Au and Ni-Cu-PGE-Cr occurrences. Statistical and machine learning analysis of the Red Lake-Stormy Lake region in the W Superior Province confirms visual inspection for a greater correlation between Au deposits and these deep N-S structures than with mapped surface to upper crustal, generally E-W trending, faults and shear zones. Porphyry Au, Ni, Mo and U-Th showings are also located above these deep transverse faults. Several well defined concentric circular to elliptical structures identified in the Oxford Stull and Island Lake domains along the S boundary of the N Superior proto-craton, intersected by N- to NNW striking extensional fractures and/or faults that transect the W Superior Province, again with little to no direct surface or upper crustal expression, are spatially associated with magmatic Ni-Cu-PGE-Cr and related mineralization and Au occurrences. The McFaulds Lake greenstone belt, aka. 'Ring of Fire', constitutes only a small, crescent-shaped belt within one of these concentric features above which 2736-2733 Ma mafic-ultramafic intrusions bodies were intruded. The Big Trout Lake igneous complex that hosts Cr-Pt-Pd-Rh mineralization west of the Ring of Fire lies within a smaller concentrically ringed feature at depth and, near the Ontario-Manitoba border, the Lingman Lake Au deposit, numerous Au occurrences and minor Ni showings, are similarly located on concentric structures. 
Preliminary magnetotelluric (MT) interpretations suggest that these concentric structures also have an expression in the subcontinental lithospheric mantle (SCLM) and that lithospheric mantle resistivity features trend N-S as well as E-W. With diameters between ca. 90 and 185 km, the elliptical structures are similar in size and internal geometry to coronae on Venus, which geomorphological, radar, and gravity interpretations suggest formed above mantle upwellings. Emplacement of mafic-ultramafic bodies hosting Ni-Cr-PGE mineralization along these ring-like structures at their intersection with coeval deep transverse, ca. N-S faults (viz. phi structures), along with their location along the margin of the N Superior proto-craton, is consistent with secondary mantle upwellings portrayed in numerical models of a mantle plume beneath a craton with a deep lithospheric keel within a regional N-S compressional regime. Early, regional ca. N-S faults in the W Superior were reactivated as dilatational antithetic (secondary Riedel/R') sinistral shears during dextral transpression and as extensional fractures and/or normal faults during N-S shortening. The Kapuskasing structural zone or uplift likely represents Proterozoic reactivation of a similar deep transverse structure. Preservation of discrete faults in the deep crust beneath zones of distributed Neoarchean dextral transcurrent to transpressional shear zones in the present-day upper crust suggests a 'millefeuille' lithospheric strength profile, with competent SCLM, mid- to deep, and upper crustal layers. Mechanically strong deep crustal felsic and mafic granulite layers are attributed to dehydration and melt extraction. Intra-crustal decoupling along a ductile décollement in the W Superior led to the preservation of early-formed deep structures that acted as conduits for magma transport into the overlying crust and focussed hydrothermal fluid flow during regional deformation.
An increase in the thickness of semi-brittle layers in the lower crust during regional metamorphism would result in an increase in fracturing and faulting in the lower crust, facilitating hydrothermal and carbonic fluid flow in pathways linking the SCLM to the upper crust, a factor explaining the late timing of most orogenic Au. Results provide an important new dataset for regional prospectivity mapping, especially with machine learning, and exploration targeting for Au and Ni-Cr-Cu-PGE mineralization. Results also furnish evidence for parautochthonous development of the S Superior Province during plume-related rifting that cannot be explained by conventional subduction and arc-accretion models.
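The kind of spatial-association test underlying such prospectivity analysis can be sketched with purely synthetic coordinates; `nearest_distance`, the deposit cluster, and the two lineament traces below are illustrative assumptions, not the study's data:

```python
import numpy as np

def nearest_distance(points, traces):
    """Distance from each point to the nearest vertex of any digitized trace."""
    verts = np.vstack(traces)
    d = np.linalg.norm(points[:, None, :] - verts[None, :, :], axis=2)
    return d.min(axis=1)

rng = np.random.default_rng(0)
# Hypothetical deposits clustered near a deep N-S structure at x = 50 km
deposits = np.column_stack([50 + rng.normal(0, 2, 40),
                            rng.uniform(0, 100, 40)])
# One N-S structure (x = 50) and one mapped E-W fault (y = 80), as vertex lists
ns_structure = [np.column_stack([np.full(101, 50.0), np.arange(101.0)])]
ew_fault = [np.column_stack([np.arange(101.0), np.full(101, 80.0)])]

mean_d_ns = nearest_distance(deposits, ns_structure).mean()
mean_d_ew = nearest_distance(deposits, ew_fault).mean()
# A smaller mean distance to the N-S structure indicates stronger association
```

Comparing mean deposit-to-structure distances (or feeding such distances into a classifier) is one simple way to quantify whether occurrences correlate more strongly with one structural set than another.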
