Selected scientific literature on the topic "Latent code optimization"

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Latent code optimization".

Journal articles on the topic "Latent code optimization"

1

Chen, Taicai, Yue Duan, Dong Li, Lei Qi, Yinghuan Shi, and Yang Gao. "PG-LBO: Enhancing High-Dimensional Bayesian Optimization with Pseudo-Label and Gaussian Process Guidance". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11381–89. http://dx.doi.org/10.1609/aaai.v38i10.29018.

Abstract:
Variational Autoencoder based Bayesian Optimization (VAE-BO) has demonstrated excellent performance in addressing high-dimensional structured optimization problems. However, current mainstream methods overlook the potential of utilizing a pool of unlabeled data to construct the latent space, while concentrating only on designing sophisticated models to leverage the labeled data. Despite their effective usage of labeled data, these methods often require extra network structures and additional procedures, resulting in computational inefficiency. To address this issue, we propose a novel method to effectively utilize unlabeled data with the guidance of labeled data. Specifically, we tailor the pseudo-labeling technique from semi-supervised learning to explicitly reveal the relative magnitudes of optimization objective values hidden within the unlabeled data. Based on this technique, we assign appropriate training weights to unlabeled data to enhance the construction of a discriminative latent space. Furthermore, we treat the VAE encoder and the Gaussian Process (GP) in Bayesian optimization as a unified deep kernel learning process, allowing the direct utilization of labeled data, which we term Gaussian Process guidance. This directly and effectively integrates the goal of improving GP accuracy into the VAE training, thereby guiding the construction of the latent space. Extensive experiments demonstrate that our proposed method outperforms existing VAE-BO algorithms in various optimization scenarios. Our code will be published at https://github.com/TaicaiChen/PG-LBO.
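
To make the "GP guidance" idea above concrete, here is a minimal, hypothetical sketch (not the authors' released code) of treating a VAE encoder and a GP on its latent codes as one deep-kernel model: the GP's negative log marginal likelihood on labeled points is differentiated through the encoder and can be added to the usual ELBO terms. All dimensions and module names are illustrative.

```python
import math
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Stand-in VAE encoder (mean head only, for brevity)."""
    def __init__(self, d_in=64, d_z=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, d_z))

    def forward(self, x):
        return self.net(x)

def rbf_kernel(z, lengthscale=1.0):
    # Squared-exponential kernel on latent codes.
    d2 = torch.cdist(z, z).pow(2)
    return torch.exp(-0.5 * d2 / lengthscale**2)

def gp_nlml(z, y, noise=1e-2):
    # Negative log marginal likelihood of a zero-mean GP:
    #   0.5 * y^T K^{-1} y + 0.5 * log|K| + 0.5 * n * log(2*pi)
    n = z.shape[0]
    K = rbf_kernel(z) + noise * torch.eye(n)
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y.unsqueeze(1), L).squeeze(1)
    return (0.5 * (y @ alpha)
            + torch.log(torch.diagonal(L)).sum()   # 0.5 * log|K|
            + 0.5 * n * math.log(2 * math.pi))

encoder = Encoder()
x_labeled = torch.randn(16, 64)   # labeled inputs
y_labeled = torch.randn(16)       # their objective values
loss = gp_nlml(encoder(x_labeled), y_labeled)  # add to the usual VAE ELBO terms
loss.backward()                   # gradients flow back into the encoder
```
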
2

Yuan, Xue, Guanjun Lin, Yonghang Tai, and Jun Zhang. "Deep Neural Embedding for Software Vulnerability Discovery: Comparison and Optimization". Security and Communication Networks 2022 (January 18, 2022): 1–12. http://dx.doi.org/10.1155/2022/5203217.

Abstract:
Due to multitudinous vulnerabilities in sophisticated software programs, the detection performance of existing approaches requires further improvement. Multiple vulnerability detection approaches have been proposed to aid code inspection. Among them, there is a line of approaches that apply deep learning (DL) techniques and achieve promising results. This paper attempts to utilize CodeBERT, a deep contextualized model, as an embedding solution to facilitate the detection of vulnerabilities in C open-source projects. The application of CodeBERT for code analysis allows the rich and latent patterns within software code to be revealed, with the potential to facilitate various downstream tasks such as the detection of software vulnerability. CodeBERT inherits the architecture of BERT, providing a stacked transformer encoder in a bidirectional structure. This facilitates the learning of vulnerable code patterns, which requires long-range dependency analysis. Additionally, the multihead attention mechanism of the transformer enables multiple key variables of a data flow to be focused on, which is crucial for analyzing and tracing potentially vulnerable data flows, eventually resulting in optimized detection performance. To evaluate the effectiveness of the proposed CodeBERT-based embedding solution, four mainstream embedding methods are compared for generating software code embeddings, including Word2Vec, GloVe, and FastText. Experimental results show that CodeBERT-based embedding outperforms other embedding models on the downstream vulnerability detection tasks. To further boost performance, we propose to include synthetic vulnerable functions and perform synthetic and real-world data fine-tuning to facilitate the model's learning of C-related vulnerable code patterns. Meanwhile, we explored a suitable configuration of CodeBERT. The evaluation results show that the model with the new parameters outperforms some state-of-the-art detection methods on our dataset.
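
For readers who want to reproduce the embedding step, the following is a minimal sketch using the public microsoft/codebert-base checkpoint from the Hugging Face transformers library; the vulnerability classifier trained on top of the embeddings is omitted, and the code snippet fed to the model is a toy example.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

# Toy C snippet with a classic unsafe pattern (illustrative only).
code = "void copy(char *src) { char buf[8]; strcpy(buf, src); }"
inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)

# Vector at the [CLS] position, shape (1, 768): one embedding per function.
embedding = outputs.last_hidden_state[:, 0, :]
```
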
3

Sankar, E., L. Karthik, and Kuppa Venkatasriram Sastry. "Quantization of Product using Collaborative Filtering Based on Cluster". International Journal for Research in Applied Science and Engineering Technology 10, no. 3 (March 31, 2022): 876–82. http://dx.doi.org/10.22214/ijraset.2022.40753.

Abstract:
Because of strict response-time constraints, the efficiency of top-k recommendation is crucial for real-world recommender systems. Locality sensitive hashing and index-based methods usually store both index data and item feature vectors in main memory, so they can handle only a limited number of items. Hashing-based recommendation methods enjoy low memory cost and fast retrieval of items but suffer from large accuracy degradation. In this paper, we propose product Quantized Collaborative Filtering (pQCF) for a better trade-off between efficiency and accuracy. pQCF decomposes a joint latent space of users and items into a Cartesian product of low-dimensional subspaces and learns a clustered representation within each subspace. A latent factor is then represented by a short code, which is composed of subspace cluster indexes. A user's preference for an item can be efficiently calculated via table lookup. We then develop block coordinate descent for efficient optimization and reveal that the learning of latent factors is seamlessly integrated with quantization. In this paper we also propose a similarity method that can exploit multiple correlation structures between users who express their preferences for objects that are likely to have similar properties. For this we use a clustering method to find groups of similar objects. Index Terms: Product Quantization, Clustering, Product Search, Collaborative Filtering
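
The table-lookup scoring at the heart of product quantization can be illustrated in a few lines (a toy stand-in, not the pQCF implementation): item latent vectors are split into subspaces, each subspace is clustered with k-means, and a user-item score becomes a sum of precomputed per-subspace dot products.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
d, n_items, n_sub, k = 32, 1000, 4, 16   # latent dim, items, subspaces, clusters
items = rng.normal(size=(n_items, d))    # stand-in item latent factors
sub_d = d // n_sub

codebooks, codes = [], []
for t in range(n_sub):
    block = items[:, t * sub_d:(t + 1) * sub_d]
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(block)
    codebooks.append(km.cluster_centers_)   # (k, sub_d) centroids per subspace
    codes.append(km.labels_)                # short code: one index per item
codes = np.stack(codes, axis=1)             # (n_items, n_sub)

user = rng.normal(size=d)
# One lookup table per subspace: table[t][c] = <user_t, centroid_c>
tables = [codebooks[t] @ user[t * sub_d:(t + 1) * sub_d] for t in range(n_sub)]
# Approximate preference of the user for every item via table lookups only.
scores = sum(tables[t][codes[:, t]] for t in range(n_sub))
top10 = np.argsort(-scores)[:10]
```
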
4

Chennappan, R., and Vidyaa Thulasiraman. "Multicriteria Cuckoo search optimized latent Dirichlet allocation based Ruzchika indexive regression for software quality management". Indonesian Journal of Electrical Engineering and Computer Science 24, no. 3 (December 1, 2021): 1804. http://dx.doi.org/10.11591/ijeecs.v24.i3.pp1804-1813.

Abstract:
Software quality management is highly significant for ensuring the quality and reviewing the reliability of software products. To improve software quality by predicting software failures and enhancing scalability, this paper presents a novel reinforced Cuckoo search optimized latent Dirichlet allocation based Ruzchika indexive regression (RCSOLDA-RIR) technique. At first, multicriteria reinforced Cuckoo search optimization is used to perform test case selection, finding the most suitable solution while considering multiple criteria and selecting the optimal test cases for testing software quality. Next, the generative latent Dirichlet allocation model is applied to predict the software failure density with the selected optimal test cases in minimal time. Finally, Ruzchika indexive regression is applied to measure the similarity between preceding versions and the new version of a software product. Based on this similarity estimation, the software failure density of the new version is also predicted. In this way, software error prediction is performed effectively, improving the reliability of software code, while the service provisioning time between software versions in software systems is also minimized. An experimental assessment shows that the RCSOLDA-RIR technique achieves better reliability and scalability than existing methods.
5

Kim, Ha Young, and Dongsup Kim. "Prediction of mutation effects using a deep temporal convolutional network". Bioinformatics 36, no. 7 (November 20, 2019): 2047–52. http://dx.doi.org/10.1093/bioinformatics/btz873.

Abstract:
Motivation: Accurate prediction of the effects of genetic variation is a major goal in biological research. Towards this goal, numerous machine learning models have been developed to learn information from evolutionary sequence data. The most effective method so far is a deep generative model based on the variational autoencoder (VAE) that models the distributions using a latent variable. In this study, we propose a deep autoregressive generative model named mutationTCN, which employs dilated causal convolutions and an attention mechanism for the modeling of inter-residue correlations in a biological sequence. Results: We show that this model is competitive with the VAE model when tested against a set of 42 high-throughput mutation scan experiments, with a mean improvement in Spearman rank correlation of ∼0.023. In particular, our model can more efficiently capture information from multiple sequence alignments with a lower effective number of sequences, such as viral sequence families, compared with the latent variable model. Also, we extend this architecture to a semi-supervised learning framework, which shows high prediction accuracy. We show that our model enables a direct optimization of the data likelihood and allows for a simple and stable training process. Availability and implementation: Source code is available at https://github.com/ha01994/mutationTCN. Supplementary information: Supplementary data are available at Bioinformatics online.
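
The dilated causal convolution that underlies temporal convolutional networks such as mutationTCN can be sketched as follows in PyTorch; channel counts, kernel size, and the number of layers here are illustrative, not the published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    def __init__(self, c_in, c_out, kernel_size, dilation):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation  # pad only the past
        self.conv = nn.Conv1d(c_in, c_out, kernel_size, dilation=dilation)

    def forward(self, x):                  # x: (batch, channels, length)
        x = F.pad(x, (self.left_pad, 0))   # no access to future positions
        return self.conv(x)

# Stacking blocks with dilations 1, 2, 4, ... grows the receptive field exponentially.
tcn = nn.Sequential(*[CausalConv1d(21 if i == 0 else 32, 32, 3, 2**i)
                      for i in range(4)])
seq = torch.randn(1, 21, 100)   # e.g., one-hot protein sequence (20 residues + gap)
out = tcn(seq)                  # output keeps the input length: (1, 32, 100)
```
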
6

Balelli, Irene, Santiago Silva, and Marco Lorenzi. "A Differentially Private Probabilistic Framework for Modeling the Variability Across Federated Datasets of Heterogeneous Multi-View Observations". Machine Learning for Biomedical Imaging 1, IPMI 2021 (April 22, 2022): 1–36. http://dx.doi.org/10.59275/j.melba.2022-7175.

Abstract:
We propose a novel federated learning paradigm to model data variability among heterogeneous clients in multi-centric studies. Our method is expressed through a hierarchical Bayesian latent variable model, where client-specific parameters are assumed to be realizations from a global distribution at the master level, which is in turn estimated to account for data bias and variability across clients. We show that our framework can be effectively optimized through expectation maximization (EM) over the latent master distribution and the clients' parameters. We also introduce formal differential privacy (DP) guarantees compatible with our EM optimization scheme. We tested our method on the analysis of multi-modal medical imaging data and clinical scores from distributed clinical datasets of patients affected by Alzheimer's disease. We demonstrate that our method is robust whether data is distributed in an iid or non-iid manner, even when local parameter perturbation is included to provide DP guarantees. Our approach makes it possible to quantify the variability of data, views, and centers, while guaranteeing high-quality data reconstruction compared to state-of-the-art autoencoding models and federated learning schemes. The code is available at https://gitlab.inria.fr/epione/federated-multi-views-ppca.
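
As a generic illustration of the differential privacy ingredient (not the paper's exact EM scheme), the Gaussian mechanism clips a client's statistic to bound its L2 sensitivity and then adds calibrated noise; the calibration below is the standard (eps, delta) formula, valid for eps < 1.

```python
import numpy as np

def gaussian_mechanism(update, clip_norm=1.0, eps=1.0, delta=1e-5, rng=None):
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    # Clipping bounds the L2 sensitivity of the released statistic by clip_norm.
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # Standard Gaussian-mechanism calibration for (eps, delta)-DP.
    sigma = clip_norm * np.sqrt(2 * np.log(1.25 / delta)) / eps
    return clipped + rng.normal(scale=sigma, size=update.shape)

client_param = np.array([0.8, -1.3, 0.2])
private_param = gaussian_mechanism(client_param)  # what the client shares upstream
```
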
7

Zhang, Yujia, Lai-Man Po, Xuyuan Xu, Mengyang Liu, Yexin Wang, Weifeng Ou, Yuzhi Zhao, and Wing-Yin Yu. "Contrastive Spatio-Temporal Pretext Learning for Self-Supervised Video Representation". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3380–89. http://dx.doi.org/10.1609/aaai.v36i3.20248.

Abstract:
Spatio-temporal representation learning is critical for video self-supervised learning. Recent approaches mainly use contrastive learning and pretext tasks. However, these approaches learn representations by discriminating sampled instances via feature similarity in the latent space while ignoring the intermediate state of the learned representations, which limits the overall performance. In this work, taking into account the degree of similarity of sampled instances as the intermediate state, we propose a novel pretext task: spatio-temporal overlap rate (STOR) prediction. It stems from the observation that humans are capable of discriminating the overlap rates of videos in space and time. This task encourages the model to discriminate the STOR of two generated samples to learn the representations. Moreover, we employ a joint optimization combining pretext tasks with contrastive learning to further enhance spatio-temporal representation learning. We also study the mutual influence of each component in the proposed scheme. Extensive experiments demonstrate that our proposed STOR task can favor both contrastive learning and pretext tasks, and the joint optimization scheme can significantly improve spatio-temporal representation in video understanding. The code is available at https://github.com/Katou2/CSTP.
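
One plausible formulation of the spatio-temporal overlap rate between two sampled clips is the intersection-over-union of their space-time volumes; the paper's exact definition may differ, so treat this as an illustrative stand-in.

```python
def overlap_1d(a_start, a_len, b_start, b_len):
    # Length of the intersection of two 1-D intervals.
    return max(0.0, min(a_start + a_len, b_start + b_len) - max(a_start, b_start))

def stor(clip_a, clip_b):
    # Each clip: (t, t_len, x, y, w, h) -- temporal window plus spatial crop box.
    ta, tla, xa, ya, wa, ha = clip_a
    tb, tlb, xb, yb, wb, hb = clip_b
    inter = (overlap_1d(ta, tla, tb, tlb)
             * overlap_1d(xa, wa, xb, wb)
             * overlap_1d(ya, ha, yb, hb))
    union = tla * wa * ha + tlb * wb * hb - inter
    return inter / union if union > 0 else 0.0

# Regression target for two crops of the same video.
print(stor((0, 16, 0, 0, 112, 112), (8, 16, 56, 56, 112, 112)))
```
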
8

Esztergár-Kiss, Domokos. "Horizon 2020 Project Analysis by Using Topic Modelling Techniques in the Field of Transport". Transport and Telecommunication Journal 25, no. 3 (June 15, 2024): 266–77. http://dx.doi.org/10.2478/ttj-2024-0019.

Abstract:
Understanding the main research directions in transport is crucial to provide useful and relevant insights. The analysis of Horizon 2020, the largest research and innovation framework, has already been carried out in a few publications, but rarely for the field of transport. Thus, this article is devoted to filling this gap by introducing a novel application of topic modelling techniques, specifically Latent Dirichlet Allocation (LDA), to Horizon 2020 transport projects. The method uses the Mallet software with pre-examined code optimizations. As the first step, a corpus is created by collecting 310 project abstracts; afterward, the texts of the abstracts are prepared for the LDA analysis by introducing stop words, optimization criteria, the number of words per topic, and the number of topics. The study successfully uncovers the following five main underlying topics: road and traffic safety, aviation and aircraft, mobility and urban transport, maritime industry and shipping, and open and real-time data in transport. Besides that, the main trends in transport are identified based on the frequency of words and their occurrence in the corpus. The applied approach maximizes the added value of the Horizon 2020 initiatives by revealing insights that may be overlooked using traditional analysis methods.
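
The study runs LDA through Mallet; a quick stand-in with gensim's LdaModel shows the same pipeline shape (tokenized abstracts, dictionary, bag-of-words corpus, topics). The tokens and topic count below are illustrative.

```python
from gensim import corpora
from gensim.models import LdaModel

# Toy pre-tokenized "abstracts"; real input would be 310 cleaned project texts.
abstracts = [
    ["road", "traffic", "safety", "vehicle"],
    ["aviation", "aircraft", "airport", "emission"],
    ["urban", "mobility", "transport", "sharing"],
    ["maritime", "shipping", "port", "industry"],
    ["open", "data", "real-time", "transport"],
]
dictionary = corpora.Dictionary(abstracts)
bow_corpus = [dictionary.doc2bow(doc) for doc in abstracts]
lda = LdaModel(bow_corpus, num_topics=5, id2word=dictionary,
               passes=20, random_state=0)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```
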
9

G, Ranganathan, and Bindhu V. "Learned Image Compression with Discretized Gaussian Mixture Likelihoods and Attention Modules". December 2020 2, no. 4 (February 23, 2021): 162–67. http://dx.doi.org/10.36548/jeea.2020.4.004.

Abstract:
Many compression standards have been developed during the past few decades, and technological advances have resulted in many methodologies with promising results. As far as the PSNR metric is concerned, there is a performance gap between reigning compression standards and learned compression algorithms. We therefore experimented with an accurate entropy model on learned compression algorithms to determine the rate-distortion performance. In this paper, a discretized Gaussian mixture likelihood is proposed to determine the latent code parameters in order to attain a more flexible and accurate model of entropy. Moreover, we have also enhanced the performance of the work by introducing recent attention modules in the network architecture. Simulation results indicate that, when compared with previously existing techniques on high-resolution and Kodak datasets, the proposed work achieves higher performance. When MS-SSIM is used for optimization, our work generates more visually pleasing images.
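
The discretized Gaussian mixture likelihood can be written down compactly: each mixture component is integrated over the quantization bin [y − 0.5, y + 0.5] via the Gaussian CDF. The following sketch (illustrative shapes and parameters, not the paper's network) computes the resulting rate in bits.

```python
import torch

def discretized_gmm_likelihood(y, weights, means, scales):
    # y: quantized latents (...,); weights/means/scales: (..., K) mixture params.
    scales = scales.clamp(min=1e-6)
    # Gaussian CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2))).
    upper = 0.5 * (1 + torch.erf((y.unsqueeze(-1) + 0.5 - means) / (scales * 2**0.5)))
    lower = 0.5 * (1 + torch.erf((y.unsqueeze(-1) - 0.5 - means) / (scales * 2**0.5)))
    probs = (weights * (upper - lower)).sum(dim=-1)
    return probs.clamp(min=1e-9)   # avoid log(0) in the rate term

y_hat = torch.round(torch.randn(8))          # quantized latent code
w = torch.softmax(torch.randn(8, 3), -1)     # K = 3 mixture weights per symbol
mu, sigma = torch.randn(8, 3), torch.rand(8, 3) + 0.1
rate_bits = -torch.log2(discretized_gmm_likelihood(y_hat, w, mu, sigma)).sum()
```
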
10

Zhang, Dewei, Yin Liu, and Sam Davanloo Tajbakhsh. "A First-Order Optimization Algorithm for Statistical Learning with Hierarchical Sparsity Structure". INFORMS Journal on Computing 34, no. 2 (March 2022): 1126–40. http://dx.doi.org/10.1287/ijoc.2021.1069.

Abstract:
In many statistical learning problems, it is desired that the optimal solution conform to an a priori known sparsity structure represented by a directed acyclic graph. Inducing such structures by means of convex regularizers requires nonsmooth penalty functions that exploit group overlapping. Our study focuses on evaluating the proximal operator of the latent overlapping group lasso developed by Jacob et al. in 2009. We implemented an alternating direction method of multipliers with a sharing scheme to solve large-scale instances of the underlying optimization problem efficiently. In the absence of strong convexity, global linear convergence of the algorithm is established using error bound theory. More specifically, the paper contributes to establishing primal and dual error bounds when the nonsmooth component in the objective function does not have a polyhedral epigraph. We also investigate the effect of the graph structure on the speed of convergence of the algorithm. Detailed numerical simulation studies over different graph structures supporting the proposed algorithm and two applications in learning are provided. Summary of Contribution: The paper proposes a computationally efficient optimization algorithm to evaluate the proximal operator of a nonsmooth hierarchical sparsity-inducing regularizer and establishes its convergence properties. The computationally intensive subproblem of the proposed algorithm can be fully parallelized, which allows solving large-scale instances of the underlying problem. Comprehensive numerical simulation studies benchmarking the proposed algorithm against five other methods on the speed of convergence to optimality are provided. Furthermore, performance of the algorithm is demonstrated on two statistical learning applications related to topic modeling and breast cancer classification. The code along with the simulation studies and benchmarks are available on the corresponding author's GitHub website for evaluation and future use.
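
For intuition about the regularizer involved: the proximal operator of a non-overlapping group lasso is plain block soft-thresholding, shown below; the latent overlapping case studied in the paper is harder and is what their ADMM scheme evaluates.

```python
import numpy as np

def prox_group_lasso(v, groups, lam):
    # prox of lam * sum_g ||v_g||_2 for disjoint groups: block soft-thresholding.
    out = np.zeros_like(v)
    for g in groups:                       # each g is an index array
        norm = np.linalg.norm(v[g])
        if norm > lam:                     # shrink the whole block toward zero
            out[g] = (1 - lam / norm) * v[g]
    return out

v = np.array([3.0, -4.0, 0.5, 0.1])
print(prox_group_lasso(v, [np.array([0, 1]), np.array([2, 3])], lam=1.0))
# First block is shrunk, second block (norm < lam) is zeroed out entirely.
```
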

Theses on the topic "Latent code optimization"

1

Li, Huiyu. "Exfiltration et anonymisation d'images médicales à l'aide de modèles génératifs". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4041.

Abstract:
This thesis aims to address some specific safety and privacy issues when dealing with sensitive medical images within data lakes. This is done by first exploring potential data leakage when exporting machine learning models and then by developing an anonymization approach that protects data privacy. Chapter 2 presents a novel data exfiltration attack, termed Data Exfiltration by Compression (DEC), which leverages image compression techniques to exploit vulnerabilities in the model exporting process. This attack is performed when exporting a trained network from a remote data lake and is applicable independently of the considered image processing task. By exploring both lossless and lossy compression methods, this chapter demonstrates how DEC can effectively be used to steal medical images and reconstruct them with high fidelity, using two public CT and MR datasets. This chapter also explores mitigation measures that a data owner can implement to prevent the attack. It first investigates the application of differential privacy measures, such as Gaussian noise addition, to mitigate this attack, and explores how attackers can create attacks resilient to differential privacy. Finally, an alternative model export strategy is proposed, which involves model fine-tuning and code verification. Chapter 3 introduces the Generative Medical Image Anonymization framework, a novel approach to balance the trade-off between preserving patient privacy and maintaining the utility of the generated images for downstream tasks. The framework separates the anonymization process into two key stages: first, it extracts identity- and utility-related features from medical images using specially trained encoders; then, it optimizes the latent code to achieve the desired trade-off between anonymity and utility. We employ identity and utility encoders to verify patient identities and detect pathologies, and use a generative adversarial network-based auto-encoder to create realistic synthetic images from the latent space. During optimization, we incorporate these encoders into novel loss functions to produce images that remove identity-related features while maintaining their utility to solve a classification problem. The effectiveness of this approach is demonstrated through extensive experiments on the MIMIC-CXR chest X-ray dataset, where the generated images successfully support lung pathology detection. Chapter 4 builds upon the work from Chapter 3 by utilizing generative adversarial networks (GANs) to create a more robust and scalable anonymization solution. The framework is structured into two distinct stages: first, we develop a streamlined encoder and a novel training scheme to map images into a latent space. In the second stage, we minimize the dual-loss functions proposed in Chapter 3 to optimize the latent representation of each image. This method ensures that the generated images effectively remove some identifiable features while retaining crucial diagnostic information. Extensive qualitative and quantitative experiments on the MIMIC-CXR dataset demonstrate that our approach produces high-quality anonymized images that maintain essential diagnostic details, making them well-suited for training machine learning models in lung pathology classification. The conclusion chapter summarizes the scientific contributions of this work and addresses the remaining issues and challenges in producing secure and privacy-preserving sensitive medical data.
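
The second-stage latent code optimization described in this thesis (and in related work such as the CVPR 2023 paper listed among the conference papers below) can be sketched generically: freeze a generator and two encoders, then optimize only the latent code under a combined utility/identity loss. All modules below are untrained stand-ins; the real systems use trained GAN and encoder networks, and the loss weighting is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Linear(64, 256)       # stand-in generator: latent code -> image features
E_id = nn.Linear(256, 32)    # stand-in identity encoder
E_util = nn.Linear(256, 32)  # stand-in utility encoder
for module in (G, E_id, E_util):
    module.requires_grad_(False)   # networks stay frozen; only z is optimized

x_real = torch.randn(1, 256)       # features of the image to anonymize
id_ref = E_id(x_real)              # identity features to move away from
util_ref = E_util(x_real)          # utility features to preserve

z = torch.randn(1, 64, requires_grad=True)
optimizer = torch.optim.Adam([z], lr=0.05)
lam = 1.0                          # assumed anonymity/utility trade-off weight
for step in range(200):
    x_fake = G(z)
    loss_util = F.mse_loss(E_util(x_fake), util_ref)            # keep diagnostic info
    loss_id = F.cosine_similarity(E_id(x_fake), id_ref).mean()  # push identity away
    loss = loss_util + lam * loss_id
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
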
2

Jha, Sudhanshu S. "Power-constrained aware and latency-aware microarchitectural optimizations in many-core processors". Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/403960.

Abstract:
As transistor budgets outpace the power envelope (the power-wall issue), new architectural and microarchitectural techniques are needed to improve, or at least maintain, the power efficiency of next-generation processors. Run-time adaptation, including core, cache, and DVFS adaptations, has recently emerged as a promising area to keep the pace for acceptable power efficiency. However, none of the adaptation techniques proposed so far is able to provide good results when we consider the stringent power budgets that will be common in the next decades, so new techniques that attack the problem from several fronts using different specialized mechanisms are necessary. The combination of different power management mechanisms, however, brings extra levels of complexity, since other factors such as workload behavior and run-time conditions must also be considered to properly allocate power among cores and threads. To address the power issue, this thesis first proposes Chrysso, an integrated and scalable model-driven power manager that quickly selects the best combination of adaptation methods out of different core and uncore micro-architecture adaptations, per-core DVFS, or any combination thereof. Chrysso can quickly search the adaptation space by making performance/power projections to identify Pareto-optimal configurations, effectively pruning the search space. Chrysso achieves 1.9x better chip performance over core-level gating for multi-programmed workloads, and 1.5x higher performance for multi-threaded workloads. Most existing power management schemes use a centralized approach to regulate power dissipation. Unfortunately, the complexity and overhead of centralized power management increase significantly with core count, rendering it inviable at fine-grain time slices. This work leverages a two-tier hierarchical power manager. This solution is highly scalable with low overhead on a tiled many-core architecture with shared LLC and per-tile DVFS at fine-grain time slices. The global power is first distributed across tiles using GPM and then within a tile (in parallel across all tiles). Additionally, this work also proposes DVFS and cache-aware thread migration (DCTM) to ensure optimum per-tile co-scheduling of compatible threads at runtime over the two-tier hierarchical power manager. DCTM outperforms existing solutions by up to 12% on an adaptive many-core tile processor. With the advancements in core micro-architectural techniques and technology scaling, the performance gap between the computational component and the memory component is increasing significantly (the memory-wall issue). To bridge this gap, the architecture community is pushing towards multi-core architectures with on-die near-memory DRAM cache memory (faster than conventional DRAM). Gigascale DRAM caches pose the problem of how to manage tags efficiently. Tags-in-DRAM designs aim to co-locate tags with data efficiently, but they still suffer from high latency, especially with multi-way associativity. The thesis finally proposes the Tag Cache mechanism, an on-chip distributed tag caching mechanism with limited space and latency overhead that bypasses the tag read operation in multi-way DRAM caches, thereby reducing hit latency. Each Tag Cache, stored in L2, holds tag information of the most recently used DRAM cache ways. The Tag Cache is able to exploit the temporal locality of the DRAM cache, thereby serving on average 46% of DRAM cache hits.

Books on the topic "Latent code optimization"

1

Aggarwal, Vaneet, and Tian Lan. Modeling and Optimization of Latency in Erasure-Coded Storage Systems. Now Publishers, 2021.


Book chapters on the topic "Latent code optimization"

1

Cohen, Joshua M., Qinshi Wang, and Andrew W. Appel. "Verified Erasure Correction in Coq with MathComp and VST". In Computer Aided Verification, 272–92. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13188-2_14.

Abstract:
Most methods of data transmission and storage are prone to errors, leading to data loss. Forward erasure correction (FEC) is a method to allow data to be recovered in the presence of errors by encoding the data with redundant parity information determined by an error-correcting code. There are dozens of classes of such codes, many based on sophisticated mathematics, making them difficult to verify using automated tools. In this paper, we present a formal, machine-checked proof of a C implementation of FEC based on Reed-Solomon coding. The C code has been actively used in network defenses for over 25 years, but the algorithm it implements was partially unpublished, and it uses certain optimizations whose correctness was unknown even to the code’s authors. We use Coq’s Mathematical Components library to prove the algorithm’s correctness and the Verified Software Toolchain to prove that the C program correctly implements this algorithm, connecting both using a modular, well-encapsulated structure that could easily be used to verify a high-speed, hardware version of this FEC. This is the first end-to-end, formal proof of a real-world FEC implementation; we verified all previously unknown optimizations and found a latent bug in the code.
2

Mureithi, Joseph, Saidi Mkomwa, Amir Kassam, and Ngari Macharia. "Research and technology development needs for scaling up conservation agriculture systems, practices and innovations in Africa." In Conservation agriculture in Africa: climate smart agricultural development, 176–88. Wallingford: CABI, 2022. http://dx.doi.org/10.1079/9781789245745.0009.

Abstract:
Although net agricultural production across all regions of Africa has experienced a significant increase, African agriculture has performed below its potential over recent decades. Many approaches have been put forward to curb this situation, including sustainable intensification of farming systems and value-chain transformation through Conservation Agriculture (CA) across Africa. Based on the latest update, Africa has about 2.7 million ha under CA, an increase of 458% over the past 10 years with 2008/09 as baseline. However, this constitutes a mere 1.5% of the global area under CA, and less than 1.4% of the total cropland area in Africa. A combination of modern techniques and the optimization of agroecological processes in CA systems and practices requires that agricultural research plays a bigger role in its evolution and focus in the different regions of Africa. This targeted research should crucially contribute towards making agriculture in Africa more productive, competitive, sustainable and inclusive in terms of its functionality towards the farmer, society and nature. Scientific solutions for agricultural transformation need to be pursued without losing sight of the potentials and fragility of Africa's agricultural environments, the complexity of its agricultural production systems and the continent's rich biodiversity. The agricultural research and development agenda in Africa must build on the rich traditional farming culture, knowledge and practices, supported by a coherent longer-term vision for investments in science for agricultural development. Most of these investments are expected to come from national public and private sources, with governments also expected to invest in generation of 'public goods' such as the national or global environmental benefits typical of CA, and to also catalyse innovation and support market growth. The absolute imperative is that farmers must shift from outdated conventional tillage-based methods to modern, well-tested and knowledge-based methods of land use. Making this transition will be difficult without the creation of an enabling environment. This chapter discusses the various roles and advances required in CA-based research that will support the adoption of CA systems by millions of smallholder farmers in Africa with a view to enhancing sustainable and effective agricultural development and economic growth.
3

Yang, Dai, Tilman Küstner, Rami Al-Rihawi, and Martin Schulz. "Exploring High Bandwidth Memory for PET Image Reconstruction". In Parallel Computing: Technology Trends. IOS Press, 2020. http://dx.doi.org/10.3233/apc200044.

Abstract:
Memory bandwidth plays an essential role in high performance computing. Its impact on system performance is evident when running applications with a low arithmetic intensity. Therefore, high bandwidth memory is on the agenda of many vendors. However, depending on the memory architecture, other optimizations are required to exploit the performance gain from high bandwidth memory technology. In this paper, we present our optimizations for the Maximum Likelihood Expectation-Maximization (MLEM) algorithm, a method for positron emission tomography (PET) image reconstruction, with a sparse matrix-vector (SpMV) kernel. The results show significant improvement in performance when executing the code on an Intel Xeon Phi processor with MCDRAM when compared to multi-channel DRAM. We further identify that the latency of the MCDRAM becomes a new limiting factor, requiring further optimization. Ultimately, after implementing cache-blocking optimization, we achieved a total memory bandwidth of up to 180 GB/s for the SpMV operation.
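
The memory-bound access pattern that the MLEM optimization targets is the CSR sparse matrix-vector product; a bare-bones version is shown below (real implementations add cache blocking and vectorization, which is precisely what the chapter tunes for MCDRAM).

```python
import numpy as np

def spmv_csr(vals, cols, row_ptr, x):
    # y = A @ x for A stored in compressed sparse row (CSR) format:
    # one streaming pass over vals/cols per row, gathered loads from x.
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(vals[start:end], x[cols[start:end]])
    return y

# 3x3 example matrix [[2, 0, 1], [0, 3, 0], [4, 0, 5]].
vals = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
cols = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
print(spmv_csr(vals, cols, row_ptr, np.ones(3)))  # -> [3. 3. 9.]
```
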
4

Raheel, Muhammad Salman, and Raad Raad. "Streaming Coded Video in P2P Networks". In Advances in Wireless Technologies and Telecommunication, 188–222. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-2113-6.ch009.

Abstract:
This chapter discusses the state of the art in dealing with the resource optimization problem for smooth delivery of video across a peer to peer (P2P) network. It further discusses the properties of using different video coding techniques, such as Scalable Video Coding (SVC) and Multiple Descriptive Coding (MDC), to overcome playback latency in multimedia streaming and maintain an adequate quality of service (QoS) among users. The problem can be summarized as follows: given that a video is requested by a peer in the network, what properties of SVC and MDC can be exploited to deliver the video with the highest quality, least upload bandwidth, and least delay from all participating peers? However, the solution to these problems is known to be NP-hard. Hence, this chapter presents the state of the art in approximation algorithms and techniques that have been proposed to overcome these issues.
5

Raheel, Muhammad Salman, and Raad Raad. "Streaming Coded Video in P2P Networks". In Research Anthology on Recent Trends, Tools, and Implications of Computer Programming, 1304–39. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3016-0.ch060.

Abstract:
This chapter discusses the state of the art in dealing with the resource optimization problem for smooth delivery of video across a peer to peer (P2P) network. It further discusses the properties of using different video coding techniques, such as Scalable Video Coding (SVC) and Multiple Descriptive Coding (MDC), to overcome playback latency in multimedia streaming and maintain an adequate quality of service (QoS) among users. The problem can be summarized as follows: given that a video is requested by a peer in the network, what properties of SVC and MDC can be exploited to deliver the video with the highest quality, least upload bandwidth, and least delay from all participating peers? However, the solution to these problems is known to be NP-hard. Hence, this chapter presents the state of the art in approximation algorithms and techniques that have been proposed to overcome these issues.
6

Das, Kedar Nath. "Hybrid Genetic Algorithm". In Global Trends in Intelligent Computing Research and Development, 268–305. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-4936-1.ch010.

Abstract:
Real-coded Genetic Algorithms (GAs) are among the most effective and popular techniques for solving continuous optimization problems. In the recent past, researchers used the Laplace Crossover (LX) and Power Mutation (PM) in the GA cycle (namely LX-PM) efficiently for solving both constrained and unconstrained optimization problems. In this chapter, a local search technique, namely Quadratic Approximation (QA), is discussed. QA is hybridized with LX-PM in order to improve its efficiency and efficacy. The generated hybrid system is named H-LX-PM. The supremacy of H-LX-PM over LX-PM is validated through a test bed of 22 unconstrained and 15 constrained typical benchmark problems. In the later part of this chapter, a few applications of GA in networking optimization are highlighted as scope for future research.
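
A sketch of the Laplace Crossover, following the formulation usually attributed to Deep and Thakur as best it can be reconstructed here (the constants a and b, and whether beta is drawn once per pair or per gene, should be checked against the original): draw beta from a Laplace distribution and place offspring around the parents.

```python
import math
import random

def laplace_crossover(x1, x2, a=0.0, b=0.5):
    # Inverse-CDF sampling of a Laplace(a, b) variate.
    u = 1.0 - random.random()          # u in (0, 1], so log(u) is defined
    r = random.random()
    beta = a - b * math.log(u) if r <= 0.5 else a + b * math.log(u)
    # Offspring are displaced from the parents in proportion to their distance.
    y1 = [p + beta * abs(p - q) for p, q in zip(x1, x2)]
    y2 = [q + beta * abs(p - q) for p, q in zip(x1, x2)]
    return y1, y2

parent1, parent2 = [1.0, 2.0], [1.5, 1.0]
child1, child2 = laplace_crossover(parent1, parent2)
```
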
7

Wang, Yanyun, Dehui Du, Haibo Hu, Zi Liang, and Yuanhao Liu. "TSFool: Crafting Highly-Imperceptible Adversarial Time Series Through Multi-Objective Attack". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240644.

Abstract:
Recent years have witnessed the success of recurrent neural network (RNN) models in time series classification (TSC). However, neural networks (NNs) are vulnerable to adversarial samples, which cause real-life adversarial attacks that undermine the robustness of AI models. To date, most existing attacks target feed-forward NNs and image recognition tasks, but they do not perform well on RNN-based TSC. This is due to the cyclical computation of RNNs, which prevents direct model differentiation. In addition, the high visual sensitivity of time series to perturbations also poses challenges to the local objective optimization of adversarial samples. In this paper, we propose an efficient method called TSFool to craft highly-imperceptible adversarial time series for RNN-based TSC. The core idea is a new global optimization objective known as the "Camouflage Coefficient" that captures the imperceptibility of adversarial samples from the class distribution. Based on this, we reduce the adversarial attack problem to a multi-objective optimization problem that enhances the perturbation quality. Furthermore, to speed up the optimization process, we propose to use a representation model for the RNN to capture deeply embedded vulnerable samples whose features deviate from the latent manifold. Experiments on 11 UCR and UEA datasets showcase that TSFool significantly outperforms six white-box and three black-box benchmark attacks in terms of effectiveness, efficiency and imperceptibility from various perspectives, including standard measures, human study and real-world defense.
8

Krishna Pasupuleti, Murali. "Next-Gen Connectivity: AI and IoT for Space-Terrestrial Integrated Networks". In Future Networks: AI, IoT, and Sustainable Communications from Earth to Orbit, 82–95. National Education Services, 2024. http://dx.doi.org/10.62311/nesx/7202.

Abstract:
This chapter explores the transformative potential of artificial intelligence (AI) and the Internet of Things (IoT) in developing space-terrestrial integrated networks. It discusses the core technologies and architectural frameworks driving next-gen connectivity, including AI-driven network optimization, IoT integration, and advanced satellite communication protocols. The chapter highlights practical applications, such as expanding global internet access, enhancing disaster response, and enabling smart city infrastructure. It also addresses the challenges of latency, security, and regulatory barriers while providing a forward-looking view of future innovations, such as AI-enhanced network intelligence and emerging satellite technologies. The chapter concludes with a call for collaboration and strategic investment to create a more connected and resilient world. Keywords: AI, IoT, space-terrestrial networks, next-gen connectivity, network optimization, satellite technology, global internet access, disaster response, smart cities, latency, security, data privacy, regulatory challenges, AI-driven automation, future innovations, digital divide, global communication.
9

Al-Shameri, Yahya Najib Hamood. "Applications of Artificial Intelligence for Enhanced Bug Detection in Software Development". In Advances in Educational Technologies and Instructional Design, 155–88. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-6745-2.ch008.

Abstract:
As technology advances at breakneck speed, reliance on software grows exponentially. This growth has given rise to multiple issues encountered during software development, including increased occurrences of software bugs. The conventional methods previously used by developers, such as manual code checking or testing, may involve greater chances of human error and extended timelines, resulting in larger expense budgets and revenue leakage due to late deliveries or failed outcomes. In this chapter, the authors aim to explore how artificial intelligence (AI) solutions can identify potential vulnerabilities more accurately than before. By utilizing machine learning algorithms, deep learning, and natural language processing, AI-based bug detection not only enhances precision and efficiency but also saves costs in the development process, ultimately yielding better software quality and reduced monetary costs. The authors present some applications of AI techniques in bug detection to identify bugs accurately and suggest possible optimizations for the buggy code.
10

Xu, Hao, BenHong Zhang, Qiwei Hu, and Zhaoyang Du. "A Dynamic Queue Adjustment Algorithm for Task Offloading in Vehicular Edge Computing Based on MADDPG". In Advances in Transdisciplinary Engineering. IOS Press, 2024. https://doi.org/10.3233/atde241303.

Abstract:
Vehicular Edge Computing (VEC) has brought great convenience to people’s daily life. However, some problems still exist in VEC networks. Existing studies usually consider latency as an optimization objective; however, for some tasks, it is only necessary to ensure that they are completed within the deadline. For this reason, the urgency of the tasks is defined and, unlike the usual first-come-first-served waiting, the tasks are queued according to their urgency. Then, the optimization objective of the system is formulated, the offloading decision process is transformed into a Markov Decision Process (MDP), and our problem is solved by a deep reinforcement learning based algorithm. Simulation results show that the performance of our proposed algorithm is better than that of other algorithms.
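
The urgency-based queuing idea can be illustrated with a few lines of Python: instead of first-come-first-served, pop the task whose deadline leaves the least slack. The slack definition below (deadline minus now minus estimated execution time) is an assumption, not the paper's exact formula.

```python
import heapq
import time

def urgency(task, now):
    # Smaller slack = more urgent; heapq pops the smallest key first.
    return (task["deadline"] - now) - task["est_seconds"]

now = time.time()
tasks = [
    {"id": "nav-update", "deadline": now + 2.0, "est_seconds": 0.5},
    {"id": "video-transcode", "deadline": now + 30.0, "est_seconds": 5.0},
    {"id": "collision-alert", "deadline": now + 0.3, "est_seconds": 0.1},
]
queue = [(urgency(t, now), t["id"]) for t in tasks]
heapq.heapify(queue)
while queue:
    slack, task_id = heapq.heappop(queue)
    print(f"offload {task_id} (slack {slack:.2f}s)")  # collision-alert first
```
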

Conference papers on the topic "Latent code optimization"

1

Takahashi, Ryo, Kota Ando, and Hiroki Nakahara. "A Stacked FPGA utilizing 3D-SRAM with Latency Optimization". In 2024 IEEE 17th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC), 400–406. IEEE, 2024. https://doi.org/10.1109/mcsoc64144.2024.00072.

2

Beykal, Burcu. "From Then to Now and Beyond: Exploring How Machine Learning Shapes Process Design Problems". In Foundations of Computer-Aided Process Design, 16–21. Hamilton, Canada: PSE Press, 2024. http://dx.doi.org/10.69997/sct.116002.

Abstract:
Following the discovery of the least squares method in 1805 by Legendre and later in 1809 by Gauss, surrogate modeling and machine learning have come a long way. From identifying patterns and trends in process data to predictive modeling, optimization, fault detection, reaction network discovery, and process operations, machine learning has become an integral part of all aspects of process design and process systems engineering. This is enabled, and at the same time necessitated, by the vast amounts of data that are readily available from processes, increased digitalization, automation, increasing computational power, and simulation software that can model complex phenomena spanning several temporal and spatial scales. Although this paper is not a comprehensive review, it gives an overview of the recent history of machine learning models that we use every day and how they have shaped process design problems, from recent advances to the exploration of their prospects.
3

Perry, Travis, and Andrew Gallaher. "Automated Layout with a Python Integrated NDARC Environment". In Vertical Flight Society 74th Annual Forum & Technology Display, 1–11. The Vertical Flight Society, 2018. http://dx.doi.org/10.4050/f-0074-2018-12723.

Abstract:
Geometric layout of an aircraft concept is a fundamental aspect of the design process and can often be a primary driver for design choices and trade space decisions. Most commonly, geometry is either estimated analytically by performance and sizing tools, derived from in-production aircraft data, or modeled using Computer Aided Design (CAD). Analytic geometry estimates are often not precise, requiring CAD to refine them. Modeling an aircraft design to the fidelity needed to refine these geometric estimates can be a time-consuming process. Furthermore, the initial layout design iterations are primarily used to refine prior estimates rather than address layout design choices. There is a need to accomplish these high-level layout tasks in a timely manner, allowing for broad trade space analysis and relying on CAD later in the design process for more detailed geometric layout. This paper covers the development of the Automated Layout with a Python Integrated NDARC Environment (ALPINE), a Python Application Programming Interface (API) based geometry tool which leverages outputs from NASA Design and Analysis of Rotorcraft (NDARC) and the geometry software OpenVSP to expedite high-level layout processes. ALPINE is an object-oriented API tool that streamlines the initial conceptual layout process. This is accomplished through mapping NDARC geometry parameters to custom components generated in OpenVSP, and using algorithms native to OpenVSP. Through the use of this tool, the time needed for initial geometric layout is reduced significantly, potential design challenges can be highlighted without a detailed CAD model, geometry can be integrated within closed-loop design optimization problems, and analytic geometry estimates from performance sizing codes can be refined and correlated to 3D model-based analyses.
4

Barattin, Simone, Christos Tzelepis, Ioannis Patras, and Nicu Sebe. "Attribute-Preserving Face Dataset Anonymization via Latent Code Optimization". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.00773.

5

Van Der Cruysse, Jonathan, and Christophe Dubach. "Latent Idiom Recognition for a Minimalist Functional Array Language Using Equality Saturation". In 2024 IEEE/ACM International Symposium on Code Generation and Optimization (CGO). IEEE, 2024. http://dx.doi.org/10.1109/cgo57630.2024.10444879.

6

Ozturk, O., G. Chen, M. Kandemir, and M. Karakoy. "Compiler-Directed Variable Latency Aware SPM Management to Cope With Timing Problems". In International Symposium on Code Generation and Optimization (CGO'07). IEEE, 2007. http://dx.doi.org/10.1109/cgo.2007.6.

7

Compton, Logan M., James L. Armes, and Gary L. Solbrekken. "Custom 1-D CFD Numeric Model of Single-Cell Scale Sample Holder for Scanning Thermal Analysis". In ASME 2012 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/imece2012-89615.

Abstract:
Successful cryopreservation protocols have been developed for a limited number of cell types through an extensive amount of experimentation. To optimize current protocols and to develop effective protocols for a larger range of cells and tissues, it is imperative that accurate transport models be developed for the cooling process. Such models are dependent on the thermodynamic properties of intracellular and extracellular solutions, including heat capacity, latent heat, and the physical phase change temperatures. Scanning techniques, such as differential scanning calorimetry (DSC) and differential thermal analysis, are effective tools for measuring those thermodynamic properties. It is essential to understand the behavior of the in-house fabricated differential scanning calorimeter under different cooling and warming rates to verify and validate the obtained experimental results. A 1-D transient CFD code was created in Matlab using Patankar's theory, not only to validate obtained experimental results but also to aid in optimizing the control system to produce linear cooling and warming rates. A freezing model was also implemented as a subroutine to numerically observe the effect of heat release and absorption of the sample during a run. The numeric model is composed of a multilayer scheme that incorporates a thermoelectric module, which provides the primary temperature control, along with the micron-sized bridge with sample holder and thermocouple. An electric current profile is imported either from an experimental run to validate results or from an optimization program to determine the optimum electrical current profile for a desired temperature profile. Numeric detection of heat capacity, latent heat, and thermal resistance has also been demonstrated.
APA, Harvard, Vancouver, ISO, and other styles
8

Feng, Zhenpeng, Milos Dakovic, Mingzhe Zhu, and Ljubisa Stankovic. "Time-frequency Representation Optimization using InfoGAN Latent Codes". In 2022 30th Telecommunications Forum (TELFOR). IEEE, 2022. http://dx.doi.org/10.1109/telfor56187.2022.9983718.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

K, Soumya, Navjot Singh, and Vivek Kumar. "Comparing the Performance of the Latest Generation Multi-Threaded and Multi-Core ASICs". In 2024 International Conference on Optimization Computing and Wireless Communication (ICOCWC). IEEE, 2024. http://dx.doi.org/10.1109/icocwc60930.2024.10470857.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Morishita, Masaki, Tai Asayama, and Masanori Tashimo. "Development of System Based Code: Methodologies for Life-Cycle Margin Evaluation". In 14th International Conference on Nuclear Engineering. ASMEDC, 2006. http://dx.doi.org/10.1115/icone14-89393.

Full text
Abstract:
The late Professor Emeritus Yasuhide Asada proposed the System Based Code concept, which aims to optimize the design of nuclear plants through margin exchange among a variety of technical options that are not allowed by current codes and standards. The key technology of the System Based Code is the margin exchange evaluation methodology. This paper describes recent progress on margin exchange methodologies in Japan.
APA, Harvard, Vancouver, ISO, and other styles

Reports of organizations on the topic "Latent code optimization"

1

DiDomizio, Matthew, and Jonathan Butta. Measurement of Heat Transfer and Fire Damage Patterns on Walls for Fire Model Validation. UL Research Institutes, July 2024. http://dx.doi.org/10.54206/102376/hnkr9109.

Full text
Abstract:
Fire models are presently employed by fire investigators to make predictions of fire dynamics within structures. Predictions include the evolution of gas temperatures and velocities, smoke movement, fire growth and spread, and thermal exposures to surrounding objects, such as walls. Heat flux varies spatially over exposed walls based on the complex thermal interactions within the fire environment, and is the driving factor for thermally induced fire damage. A fire model predicts the temperature and heat transfer through walls based on field predictions, such as radiative and convective heat flux, and is also subject to the boundary condition representation, which is at the discretion of model practitioners. At the time of writing, Fire Dynamics Simulator can represent in-depth heat transfer through walls, and transverse heat transfer is in a preliminary development stage. Critically, limited suitable data exists for validation of heat transfer through walls exposed to fires. Mass loss and discoloration fire effects are directly related to the heat transfer and thermal decomposition of walls; therefore, it is crucial that the representation of transverse heat transfer in walls in fire models be validated to ensure that fire investigators can produce accurate simulations and reconstructions with these tools. The purpose of this study was to conduct a series of experiments to obtain data that addresses three validation spaces: 1) thermal exposure to walls from fires; 2) heat transfer within walls exposed to fires; and 3) fire damage patterns arising on walls exposed to fires. Fire Safety Research Institute, part of UL Research Institutes, in collaboration with the Bureau of Alcohol, Tobacco, Firearms and Explosives Fire Research Laboratory, led this novel research endeavor. Experiments were performed on three types of walls to address the needs in this validation space:
1. Steel sheet (304 stainless steel, 0.793 mm thick, coated in high-emissivity high-temperature paint on both sides). This wall type was used to support the heat flux validation objective. By combining measurements of gas temperatures near the wall with surface temperatures obtained using infrared thermography, estimates of the incident heat flux to the wall were produced.
2. Calcium silicate board (BNZ Marinite I, 12.7 mm thick). This wall type was used to support the heat transfer validation objective. Since calcium silicate board is a noncombustible material with well-characterized thermophysical properties at elevated temperatures, measurements of surface temperature may be used to validate transverse heat transfer in a fire model without the need to account for a decomposition mechanism.
3. Gypsum wallboard (USG Sheetrock Ultralight, 12.7 mm thick, coated in white latex paint on the exposed side). This wall type was used to support the fire damage patterns validation objective. Two types of fire effects were considered: 1) discoloration and charring of the painted paper facing of the gypsum wallboard; and 2) mass loss of the gypsum wallboard (which is related to the calcination of the core material).
In addition to temperature and heat flux measurements, high-resolution photographs of fire patterns were recorded, and mass loss over the entirety of the wall was measured by cutting the wall into smaller samples and measuring the mass of each individual sample. A total of 63 experiments were conducted, encompassing seven fire sources and three wall types (each combination conducted in triplicate). Fire sources included a natural gas burner, gasoline and heptane pools, wood cribs, and upholstered furniture. A methodology was developed for obtaining estimates of field heat flux to a wall using a large plate heat flux sensor. This included a numerical optimization scheme to account for convection heat transfer. These data characterized the incident heat flux received by the calcium silicate board and gypsum wallboard in subsequent experiments. Fire damage patterns on the gypsum wallboard, attributed to discoloration and mass loss fire effects, were measured. It was found that heat flux and mass loss fields were similar for a given fire type, but the relationship between these measurements was not consistent across all fire types. Therefore, it was concluded that cumulative heat flux does not adequately describe the mass loss fire effect. Fire damage patterns attributed to the discoloration fire effect were defined as the line of demarcation separating charred and uncharred regions of the wall. It was found that the average values of cumulative heat flux and mass loss ratio coinciding with the fire damage patterns were 10.41 ± 1.51 MJ m−2 and 14.86 ± 2.08 %, respectively. These damage metrics may have utility in predicting char delineation damage patterns in gypsum wallboard using a fire model, with the mass loss ratio metric being overall the best fit over all exposures considered. The dataset produced in this study has been published to a public repository and may be accessed from the following URL: <https://doi.org/10.5281/zenodo.10543089>.
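The report's exact formulation is not reproduced in this entry, but the plate-sensor methodology it mentions can be sketched as an inverse energy balance on a thermally thin, high-emissivity plate, with the convection coefficient fitted numerically. Everything below, from the property values to the fit_h calibration against a hypothetical reference gauge, is an illustrative assumption rather than the authors' method; back-side losses are neglected.

# Hedged sketch: recovering incident heat flux from a thin plate sensor
# via an energy balance, with the convection coefficient h chosen by a
# scalar optimization. All constants are assumed, not taken from the report.
import numpy as np
from scipy.optimize import minimize_scalar

EPS = 0.95        # surface emissivity of the painted plate (assumed)
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2-K^4
RHO_CP_T = 3.1e3  # plate thermal mass per unit area, rho*c*thickness, J/m^2-K (assumed)

def incident_heat_flux(t, T_s, T_gas, h):
    """Recover incident heat flux (W/m^2) from a plate temperature history.

    Thin-plate energy balance (back-side losses neglected):
      EPS*q_inc = RHO_CP_T*dT/dt + EPS*SIGMA*T_s**4 + h*(T_s - T_gas)
    """
    storage = RHO_CP_T * np.gradient(T_s, t)  # sensible heating of the plate
    rerad = EPS * SIGMA * T_s**4              # re-radiation from the hot face
    conv = h * (T_s - T_gas)                  # convective exchange with the gas
    return (storage + rerad + conv) / EPS

def fit_h(t, T_s, T_gas, q_ref):
    """Fit h so the recovered flux best matches a reference gauge
    (a hypothetical stand-in for the report's optimization scheme)."""
    err = lambda h: np.sum((incident_heat_flux(t, T_s, T_gas, h) - q_ref) ** 2)
    return minimize_scalar(err, bounds=(1.0, 50.0), method="bounded").x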
APA, Harvard, Vancouver, ISO, and other styles
