Journal articles on the topic 'Custom data cache tuning'

To see the other types of publications on this topic, follow the link: Custom data cache tuning.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Custom data cache tuning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Patel, Aarti, and Prashant K. Shah. "Semi-Custom design of functional unit block using data path methodology in data cache unit." International Journal of VLSI & Signal Processing 4, no. 3 (May 25, 2017): 33–37. http://dx.doi.org/10.14445/23942584/ijvsp-v4i3p107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Naik Dessai, Sanket Suresh, and Varuna Eswer. "Embedded Software Testing to Determine BCM5354 Processor Performance." International Journal of Software Engineering and Technologies (IJSET) 1, no. 3 (December 1, 2016): 121. http://dx.doi.org/10.11591/ijset.v1i3.4577.

Full text
Abstract:
Efficiency of a processor is a critical factor for an embedded system. One of the deciding factors for efficiency is the functioning of the L1 cache and Translation Lookaside Buffer (TLB). Certain processors have the L1 cache and TLB managed by the operating system; MIPS32 is one such processor. The performance of the L1 cache and TLB necessitates a detailed study to understand their management under varied load on the processor. This paper presents an embedded testing procedure to analyse the performance of the MIPS32 processor's L1 cache and TLB management by the operating system (OS). The proposed implementation counts the executions of the respective cache and TLB management instructions, events that are measurable with dedicated counters. The lack of hardware counters in the MIPS32 processor leads to the use of software-based event counters defined in the kernel. This paper implements an embedded testbed covering a subset of MIPS32 processor performance measurement metrics using software-based counters. Techniques were developed to overcome the challenges posed by the kernel source code. To facilitate better understanding of the testbed implementation procedure for the software-based processor performance counters, use-case analysis diagrams, flow charts, screenshots, and knowledge nuggets are supplemented along with histograms of the cache and TLB event data generated by the proposed implementation. In this testbed, twenty-seven metrics have been identified and implemented to provide data related to the events of the L1 cache and TLB on the MIPS32 processor. The generated data can be used in compiler tuning, OS memory management design, system benchmarking, scalability, analysing architectural issues, address space analysis, understanding bus communication, kernel profiling, and workload characterisation.
APA, Harvard, Vancouver, ISO, and other styles
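The counting technique this abstract describes is easy to picture in miniature. The paper's counters live inside the MIPS32 kernel source; the following is only a minimal Python sketch of the software-event-counter idea, with hypothetical event names standing in for the twenty-seven cache and TLB metrics.

```python
import random
from collections import Counter

# Hypothetical event names; the real testbed tracks 27 L1 cache/TLB metrics.
EVENTS = ("icache_invalidate", "dcache_writeback", "tlb_refill")

counters = Counter()

def count_event(name):
    """Increment a software event counter -- a stand-in for a kernel hook
    placed where a cache/TLB management instruction is executed."""
    counters[name] += 1

# Simulated workload driving the instrumentation points.
random.seed(0)
for _ in range(1000):
    count_event(random.choice(EVENTS))

# Histogram-style summary, analogous to the event histograms in the paper.
for name, hits in counters.most_common():
    print(f"{name:18s} {hits:5d}")
```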
3

Vishnekov, A. V., and E. M. Ivanova. "DYNAMIC CONTROL METHODS OF CACHE LINES REPLACEMENT POLICY." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 191 (May 2020): 49–56. http://dx.doi.org/10.14489/vkit.2020.05.pp.049-056.

Full text
Abstract:
The paper investigates the issues of increasing the performance of computing systems by improving the efficiency of cache memory and analyzes the efficiency indicators of replacement algorithms. We show the necessity of creating automated or automatic means for cache memory tuning under current program execution conditions, namely dynamic control of cache replacement algorithms that swaps the current replacement algorithm for one more effective in the current computation conditions. We develop methods for caching policy control based on the definition of the program type: cyclic, sequential, locally-point, or mixed. We suggest a procedure for selecting an effective replacement algorithm using decision-support methods based on the current statistics of caching parameters. The paper gives an analysis of existing cache replacement algorithms. We propose a decision-making procedure for selecting an effective cache replacement algorithm based on methods of ranking alternatives, preferences, and hierarchy analysis. The critical number of cache hits, the average time of data query execution, and the average cache latency are selected as indicators that initiate the swapping procedure for the current replacement algorithm. The main advantage of the proposed approach is its universality. The approach assumes an adaptive decision-making procedure for selecting the effective replacement algorithm. The procedure allows variability in the criteria for evaluating the replacement algorithms, their efficiency, and their preference for different types of program code. Dynamically swapping the replacement algorithm for a more efficient one during program execution improves the performance of the computer system.
APA, Harvard, Vancouver, ISO, and other styles
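The dynamic swapping described above can be illustrated with a toy monitor. The sketch below tracks a windowed hit rate and swaps the active policy when it drops below a threshold; the LRU/FIFO pair, window size, and single-threshold trigger are simplifying assumptions, whereas the paper selects among algorithms with multi-criteria decision-making over several caching statistics.

```python
from collections import OrderedDict

class LRU:
    """Least-recently-used replacement over a fixed number of lines."""
    def __init__(self, size):
        self.size, self.lines = size, OrderedDict()

    def access(self, addr):
        hit = addr in self.lines
        if hit:
            self.lines.move_to_end(addr)    # refresh recency on a hit
        elif len(self.lines) >= self.size:
            self.lines.popitem(last=False)  # evict least recently used
        self.lines[addr] = True
        return hit

class FIFO(LRU):
    """First-in-first-out: same bookkeeping, but hits are not promoted."""
    def access(self, addr):
        hit = addr in self.lines
        if not hit:
            if len(self.lines) >= self.size:
                self.lines.popitem(last=False)
            self.lines[addr] = True
        return hit

def adaptive(trace, size=64, window=200, threshold=0.5):
    """Swap the active policy whenever the windowed hit rate falls below
    the threshold -- a one-indicator stand-in for the paper's initiation
    indicators (critical hit count, query time, cache latency)."""
    policies, active = (LRU, FIFO), 0
    cache, hits = LRU(size), 0
    for i, addr in enumerate(trace, 1):
        hits += cache.access(addr)
        if i % window == 0:
            if hits / window < threshold:
                active = 1 - active
                cache = policies[active](size)  # dynamic policy swap
            hits = 0
    return policies[active].__name__
```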
4

Eswer, Varuna, and Sanket S. Naik Dessai. "Processor performance metrics analysis and implementation for MIPS using an open source OS." International Journal of Reconfigurable and Embedded Systems (IJRES) 10, no. 2 (July 1, 2021): 137. http://dx.doi.org/10.11591/ijres.v10.i2.pp137-148.

Full text
Abstract:
Processor efficiency is important in an embedded system. The efficiency of the processor depends on the L1 cache and translation lookaside buffer (TLB). Understanding L1 cache and TLB performance under varied load is therefore required, and this paper studies that behaviour on a MIPS processor running an operating system (OS). The proposed implementation counts the executions of the respective cache and TLB management instructions, measuring these events with dedicated software counters; software counters are used because of the limitations of hardware counters in the MIPS32. Twenty-seven metrics are identified and implemented for the performance measurement of the L1 cache and TLB on the MIPS32 processor. The generated data supports future research in compiler tuning, OS memory management design, analysing architectural issues, system benchmarking, scalability, address space analysis, studies of bus communication among processors, workload characterisation, and kernel profiling.
APA, Harvard, Vancouver, ISO, and other styles
5

Eswer, Varuna, and Sanket Suresh Naik Dessai. "Embedded Software Engineering Approach to Implement BCM5354 Processor Performance." International Journal of Software Engineering and Technologies (IJSET) 1, no. 1 (April 1, 2016): 41. http://dx.doi.org/10.11591/ijset.v1i1.4568.

Full text
Abstract:
Efficiency of a processor is a critical factor for an embedded system. One of the deciding factors for efficiency is the functioning of the L1 cache and Translation Lookaside Buffer (TLB). Certain processors have the L1 cache and TLB managed by the operating system; MIPS32 is one such processor. The performance of the L1 cache and TLB necessitates a detailed study to understand their management under varied load on the processor. This paper presents an implementation to analyse the performance of the MIPS32 processor L1 cache and TLB management by the operating system (OS) using a software engineering approach. Software engineering provides better clarity for system development and its performance analysis: if the requirements analysis for the performance measurement is sorted out clearly at the initial stage, the implementation methodology becomes economical and unambiguous. In this paper an implementation is proposed to determine the processor performance metrics using a software engineering approach, counting the executions of the respective cache and TLB management instructions, events that are measurable with dedicated counters. The lack of hardware counters in the MIPS32 processor results in the usage of software-based event counters that are defined in the kernel. This paper implements a subset of MIPS32 processor performance measurement metrics using software-based counters. Techniques were developed to overcome the challenges posed by the kernel source code. To facilitate better understanding of the implementation procedure for the software-based processor performance counters, use-case analysis diagrams, flow charts, screenshots, and knowledge nuggets are supplemented along with histograms of the cache and TLB event data generated by the proposed implementation. Twenty-seven metrics have been identified and implemented to provide data related to the events of the L1 cache and TLB on the MIPS32 processor. The generated data can be used in compiler tuning, OS memory management design, system benchmarking, scalability, analysing architectural issues, address space analysis, understanding bus communication, kernel profiling, and workload characterisation.
APA, Harvard, Vancouver, ISO, and other styles
6

Papagiannis, Anastasios, Giorgos Saloustros, Giorgos Xanthakis, Giorgos Kalaentzis, Pilar Gonzalez-Ferez, and Angelos Bilas. "Kreon." ACM Transactions on Storage 17, no. 1 (February 2, 2021): 1–32. http://dx.doi.org/10.1145/3418414.

Full text
Abstract:
Persistent key-value stores have emerged as a main component in the data access path of modern data processing systems. However, they exhibit high CPU and I/O overhead. Nowadays, due to power limitations, it is important to reduce CPU overheads for data processing. In this article, we propose Kreon, a key-value store that targets servers with flash-based storage, where CPU overhead and I/O amplification are more significant bottlenecks compared to I/O randomness. We first observe that two significant sources of overhead in key-value stores are: (a) the use of compaction in Log-Structured Merge-Trees (LSM-Tree) that constantly perform merging and sorting of large data segments and (b) the use of an I/O cache to access devices, which incurs overhead even for data that reside in memory. To avoid these, Kreon performs data movement from level to level by using partial reorganization instead of full data reorganization via the use of a full index per-level. Kreon uses memory-mapped I/O via a custom kernel path to avoid a user-space cache. For a large dataset, Kreon reduces CPU cycles/op by up to 5.8×, reduces I/O amplification for inserts by up to 4.61×, and increases insert ops/s by up to 5.3×, compared to RocksDB.
APA, Harvard, Vancouver, ISO, and other styles
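The memory-mapped I/O idea is simple to demonstrate from user space. Kreon routes its mappings through a custom kernel path, so the following standard-library sketch only shows the general mechanism: once a file is mapped, the OS page cache backs it directly, and reads and writes become plain loads and stores with no separate user-space I/O cache or extra copy.

```python
import mmap
import os

# Hypothetical file standing in for a storage segment.
path = "kv_segment.bin"
with open(path, "wb") as f:
    f.truncate(1 << 20)            # reserve a 1 MiB region

fd = os.open(path, os.O_RDWR)
try:
    mm = mmap.mmap(fd, 1 << 20)    # page cache now backs this address range
    mm[0:5] = b"hello"             # a "write" is a plain memory store
    assert mm[0:5] == b"hello"     # a "read" is a plain memory load
    mm.flush()                     # ask the kernel to write dirty pages back
    mm.close()
finally:
    os.close(fd)
    os.remove(path)
```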
7

Zhang, Jiaoyi, and Yihan Gao. "CARMI." Proceedings of the VLDB Endowment 15, no. 11 (July 2022): 2679–91. http://dx.doi.org/10.14778/3551793.3551823.

Full text
Abstract:
Learned indexes, which use machine learning models to replace traditional index structures, have shown promising results in recent studies. However, existing learned indexes exhibit a performance gap between synthetic and real-world datasets, making them far from practical indexes. In this paper, we identify that ignoring the importance of data partitioning during model training is the main reason for this problem. Thus, we explicitly apply data partitioning to index construction and propose a new efficient and updatable cache-aware RMI framework, called CARMI. Specifically, we introduce entropy as a metric to quantify and characterize the effectiveness of data partitioning of tree nodes in learned indexes and propose a novel cost model, laying a new theoretical foundation for future research. Then, based on our novel cost model, CARMI can automatically determine tree structures and model types under various datasets and workloads by a hybrid construction algorithm without any manual tuning. Furthermore, since memory accesses limit the performance of RMIs, a new cache-aware design is also applied in CARMI, which makes full use of the characteristics of the CPU cache to effectively reduce the number of memory accesses. Our experimental study shows that CARMI performs better than baselines, achieving an average of 2.2X/1.9X speedup compared to B+ Tree/ALEX, while using only about 0.77X memory space of B+ Tree. On the SOSD platform, CARMI outperforms all baselines, with an average speedup of 1.2X over the nearest competitor RMI, which has been carefully tuned for each dataset in advance.
APA, Harvard, Vancouver, ISO, and other styles
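The entropy metric mentioned in the abstract above is compact enough to state directly. The function below is a minimal sketch of Shannon entropy over the sizes of a node's child partitions; CARMI's actual cost model combines such a measure with access costs, so this is illustrative only.

```python
import math

def partition_entropy(partition_sizes):
    """Shannon entropy (in bits) of how keys are split across child nodes.
    Higher entropy means a more balanced partition of the data."""
    total = sum(partition_sizes)
    probs = [s / total for s in partition_sizes if s > 0]
    return -sum(p * math.log2(p) for p in probs)

print(partition_entropy([250, 250, 250, 250]))  # 2.0 bits: perfectly balanced
print(partition_entropy([970, 10, 10, 10]))     # ~0.24 bits: heavily skewed
```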
8

Godard, Patrice, and Jonathan van Eyll. "BED: a Biological Entity Dictionary based on a graph data model." F1000Research 7 (May 16, 2018): 195. http://dx.doi.org/10.12688/f1000research.13925.2.

Full text
Abstract:
The understanding of molecular processes involved in a specific biological system can be significantly improved by combining and comparing different data sets and knowledge resources. However, these information sources often use different identification systems and an identifier conversion step is required before any integration effort. Mapping between identifiers is often provided by the reference information resources and several tools have been implemented to simplify their use. However, most of these tools do not combine the information provided by individual resources to increase the completeness of the mapping process. Also, deprecated identifiers from former versions of databases are not taken into account. Finally, automatically finding the most relevant path to map identifiers from one scope to the other is often not trivial. The Biological Entity Dictionary (BED) addresses these three challenges by relying on a graph data model describing possible relationships between entities and their identifiers. This model has been implemented using Neo4j, and an R package provides functions to query the graph but also to create and feed a custom instance of the database. This design, combined with a local installation of the graph database and a cache system, makes BED very efficient at converting large lists of identifiers.
APA, Harvard, Vancouver, ISO, and other styles
9

Godard, Patrice, and Jonathan van Eyll. "BED: a Biological Entity Dictionary based on a graph data model." F1000Research 7 (July 19, 2018): 195. http://dx.doi.org/10.12688/f1000research.13925.3.

Full text
Abstract:
The understanding of molecular processes involved in a specific biological system can be significantly improved by combining and comparing different data sets and knowledge resources. However, these information sources often use different identification systems and an identifier conversion step is required before any integration effort. Mapping between identifiers is often provided by the reference information resources and several tools have been implemented to simplify their use. However, most of these tools do not combine the information provided by individual resources to increase the completeness of the mapping process. Also, deprecated identifiers from former versions of databases are not taken into account. Finally, automatically finding the most relevant path to map identifiers from one scope to the other is often not trivial. The Biological Entity Dictionary (BED) addresses these three challenges by relying on a graph data model describing possible relationships between entities and their identifiers. This model has been implemented using Neo4j, and an R package provides functions to query the graph but also to create and feed a custom instance of the database. This design, combined with a local installation of the graph database and a cache system, makes BED very efficient at converting large lists of identifiers.
APA, Harvard, Vancouver, ISO, and other styles
10

Bethel, E. Wes, and Mark Howison. "Multi-core and many-core shared-memory parallel raycasting volume rendering optimization and tuning." International Journal of High Performance Computing Applications 26, no. 4 (April 3, 2012): 399–412. http://dx.doi.org/10.1177/1094342012440466.

Full text
Abstract:
Given the computing industry trend of increasing processing capacity by adding more cores to a chip, the focus of this work is tuning the performance of a staple visualization algorithm, raycasting volume rendering, for shared-memory parallelism on multi-core CPUs and many-core GPUs. Our approach is to vary tunable algorithmic settings, along with known algorithmic optimizations and two different memory layouts, and measure performance in terms of absolute runtime and L2 memory cache misses. Our results indicate there is a wide variation in runtime performance on all platforms, as much as 254% for the tunable parameters we test on multi-core CPUs and 265% on many-core GPUs, and the optimal configurations vary across platforms, often in a non-obvious way. For example, our results indicate the optimal configurations on the GPU occur at a crossover point between those that maintain good cache utilization and those that saturate computational throughput. This result is likely to be extremely difficult to predict with an empirical performance model for this particular algorithm because it has an unstructured memory access pattern that varies locally for individual rays and globally for the selected viewpoint. Our results also show that optimal parameters on modern architectures are markedly different from those in previous studies run on older architectures. In addition, given the dramatic performance variation across platforms for both optimal algorithm settings and performance results, there is a clear benefit for production visualization and analysis codes to adopt a strategy for performance optimization through auto-tuning. These benefits will likely become more pronounced in the future as the number of cores per chip and the cost of moving data through the memory hierarchy both increase.
APA, Harvard, Vancouver, ISO, and other styles
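The auto-tuning strategy the authors recommend reduces, in its simplest form, to timing the kernel across a search space of tunable settings and keeping the fastest configuration. Below is a minimal sketch of such a harness; the parameter names and values are hypothetical, and `render` is only a stand-in for a real raycasting kernel.

```python
import itertools
import time

def render(block_w, block_h, early_termination):
    """Stand-in for a raycasting kernel; only the tuning harness matters."""
    work = 2_000_000 // (block_w * block_h)
    if early_termination:
        work //= 2                  # pretend some rays terminate early
    return sum(range(work))

# Hypothetical search space in the spirit of the paper's tunable settings.
space = {"block_w": [4, 8, 16],
         "block_h": [4, 8, 16],
         "early_termination": [False, True]}

best_cfg, best_time = None, float("inf")
for values in itertools.product(*space.values()):
    cfg = dict(zip(space, values))
    t0 = time.perf_counter()
    render(**cfg)
    elapsed = time.perf_counter() - t0
    if elapsed < best_time:
        best_cfg, best_time = cfg, elapsed

print("fastest configuration:", best_cfg)
```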
11

Marinelli, Tommaso, J. Ignacio Gómez Pérez, Christian Tenllado, Manu Komalan, Mohit Gupta, and Francky Catthoor. "Microarchitectural Exploration of STT-MRAM Last-level Cache Parameters for Energy-efficient Devices." ACM Transactions on Embedded Computing Systems 21, no. 1 (January 31, 2022): 1–20. http://dx.doi.org/10.1145/3490391.

Full text
Abstract:
As the technology scaling advances, limitations of traditional memories in terms of density and energy become more evident. Modern caches occupy a large part of a CPU physical size and high static leakage poses a limit to the overall efficiency of the systems, including IoT/edge devices. Several alternatives to CMOS SRAM memories have been studied during the past few decades, some of which already represent a viable replacement for different levels of the cache hierarchy. One of the most promising technologies is the spin-transfer torque magnetic RAM (STT-MRAM), due to its small basic cell design, almost absent static current and non-volatility as an added value. However, nothing comes for free, and designers will have to deal with other limitations, such as the higher latencies and dynamic energy consumption for write operations compared to reads. The goal of this work is to explore several microarchitectural parameters that may overcome some of those drawbacks when using STT-MRAM as last-level cache (LLC) in embedded devices. Such parameters include: number of cache banks, number of miss status handling registers (MSHRs) and write buffer entries, presence of hardware prefetchers. We show that an effective tuning of those parameters may virtually remove any performance loss while saving more than 60% of the LLC energy on average. The analysis is then extended comparing the energy results from calibrated technology models with data obtained with freely available tools, highlighting the importance of using accurate models for architectural exploration.
APA, Harvard, Vancouver, ISO, and other styles
12

Oo, Nwe Zin, and Panyayot Chaikan. "An Improvement of the Matrix-Matrix Multiplication Speed using 2D-Tiling and AVX512 Intrinsics for Multi-Core Architectures." ASEAN Journal of Scientific and Technological Reports 24, no. 2 (August 22, 2021): 104–14. http://dx.doi.org/10.55164/ajstr.v24i2.242021.

Full text
Abstract:
Matrix-matrix multiplication is a time-consuming operation in scientific and engineering applications. When the matrix size is large, it will take a lot of computation time, resulting in slow software which is unacceptable in real-time applications. In this paper, 2D-tiling, loop unrolling, data padding, OpenMP directives, and AVX512 intrinsics are utilized to increase the speed of matrix-matrix multiplication on multi-core architectures. Our algorithm, tested on a Core i9-7900X machine, is more than two times faster than the operations offered by the OpenBLAS and Eigen libraries for single and double precision floating-point matrices. We also propose an equation for parameter tuning which allows our algorithm to be adapted to process any size of matrix on CPUs with different cache organizations.
APA, Harvard, Vancouver, ISO, and other styles
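The 2D-tiling at the heart of this speedup can be shown without intrinsics. The sketch below blocks all three loops of a matrix-matrix multiplication so that each tile's working set stays cache-resident; the paper layers AVX512 intrinsics, loop unrolling, data padding, and OpenMP on top of this structure, and its tuning equation derives the tile size from the cache organization (the T=32 default here is arbitrary).

```python
def matmul_tiled(A, B, n, T=32):
    """Multiply two n-by-n matrices (lists of lists) using 2D tiling.
    Blocking keeps T-by-T working sets of A, B, and C in cache."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, T):
        for kk in range(0, n, T):
            for jj in range(0, n, T):
                for i in range(ii, min(ii + T, n)):
                    for k in range(kk, min(kk + T, n)):
                        a, row_c, row_b = A[i][k], C[i], B[k]
                        for j in range(jj, min(jj + T, n)):
                            row_c[j] += a * row_b[j]  # C[i][j] += A[i][k]*B[k][j]
    return C

# Tiny self-check: multiplying the identity by itself returns the identity.
n = 4
I = [[float(i == j) for j in range(n)] for i in range(n)]
assert matmul_tiled(I, I, n, T=2) == I
```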
13

Singhal, Rahul, Shruti Srivatsan, and Priyabrata Panda. "Classification of Music Genres using Feature Selection and Hyperparameter Tuning." Journal of Artificial Intelligence and Capsule Networks 4, no. 3 (August 25, 2022): 167–78. http://dx.doi.org/10.36548/jaicn.2022.3.003.

Full text
Abstract:
The ability of music to spread joy and excitement across lives makes it widely acknowledged as the human race's universal language. The phrase "music genre" is frequently used to group several musical styles together as following a shared custom or set of guidelines. According to their unique preferences, people now make playlists based on particular musical genres. Because appropriate audio features must be determined and extracted, music genre identification is regarded as a challenging task. Music information retrieval, which extracts meaningful information from music, is one of several real-world applications of machine learning. The objective of this paper is to efficiently categorise songs into various genres based on their attributes using various machine learning approaches. To enhance the outcomes, appropriate feature engineering and data pre-processing techniques have been performed. Finally, using suitable performance assessment measures, the output from each model has been compared. Compared to other machine learning algorithms, Random Forest along with efficient feature selection and hyperparameter tuning has produced better results in classifying music genres.
APA, Harvard, Vancouver, ISO, and other styles
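A minimal sketch of the pipeline the abstract outlines (feature selection plus hyperparameter tuning around a Random Forest) might look like the following. The synthetic features, grid values, and fold count are placeholders, not the paper's actual setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline

# Synthetic stand-in for extracted audio features (MFCCs, tempo, etc.).
X, y = make_classification(n_samples=500, n_features=40, n_informative=12,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([("select", SelectKBest(f_classif)),
                 ("rf", RandomForestClassifier(random_state=0))])
grid = {"select__k": [10, 20, 30],          # feature selection
        "rf__n_estimators": [100, 300],     # hyperparameter tuning
        "rf__max_depth": [None, 10]}
search = GridSearchCV(pipe, grid, cv=3)
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))
```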
14

Xiao, Baonan, Jianfeng Yang, and Xianxian Qi. "Imitation Learning-Based Performance-Power Trade-Off Uncore Frequency Scaling Policy for Multicore System." Sensors 23, no. 3 (January 28, 2023): 1449. http://dx.doi.org/10.3390/s23031449.

Full text
Abstract:
As the importance of uncore components, such as shared cache slices and memory controllers, increases in processor architecture, the percentage of uncore power consumption in the overall power consumption of multicore processors rises significantly. To maximize the power efficiency of a multicore processor system, we investigate the uncore frequency scaling (UFS) policy and propose a novel imitation learning-based uncore frequency control policy. This policy performs online learning based on the DAgger algorithm and converts the annotation cost of online aggregation data into fine-tuning of the expert model. This design optimizes the online learning efficiency and improves the generality of the UFS policy on unseen loads. On the other hand, we shift our policy optimization target to Performance Per Watt (PPW), i.e., the power efficiency of the processor, to avoid saving a percentage of power while losing a larger percentage of performance. The experimental results show that our proposed policy outperforms the current advanced UFS policy in the benchmark test sequence of SPEC CPU2017. Our policy has a maximum improvement of about 10% relative to the performance-first policies. In the unseen processor load, the tuning decision made by our policy after collecting 50 aggregation data can maintain the processor stably near the optimal power efficiency state.
APA, Harvard, Vancouver, ISO, and other styles
15

Azad, Raja Muhammad Atif, and Conor Ryan. "A Simple Approach to Lifetime Learning in Genetic Programming-Based Symbolic Regression." Evolutionary Computation 22, no. 2 (June 2014): 287–317. http://dx.doi.org/10.1162/evco_a_00111.

Full text
Abstract:
Genetic programming (GP) coarsely models natural evolution to evolve computer programs. Unlike in nature, where individuals can often improve their fitness through lifetime experience, the fitness of GP individuals generally does not change during their lifetime, and there is usually no opportunity to pass on acquired knowledge. This paper introduces the Chameleon system to address this discrepancy and augment GP with lifetime learning by adding a simple local search that operates by tuning the internal nodes of individuals. Although not the first attempt to combine local search with GP, its simplicity means that it is easy to understand and cheap to implement. A simple cache is added which leverages the local search to reduce the tuning cost to a small fraction of the expected cost, and we provide a theoretical upper limit on the maximum tuning expense given the average tree size of the population and show that this limit grows very conservatively as the average tree size of the population increases. We show that Chameleon uses available genetic material more efficiently by exploring more actively than with standard GP, and demonstrate that not only does Chameleon outperform standard GP (on both training and test data) over a number of symbolic regression type problems, it does so by producing smaller individuals and it works harmoniously with two other well-known extensions to GP, namely, linear scaling and a diversity-promoting tournament selection method.
APA, Harvard, Vancouver, ISO, and other styles
16

Ali, Akram Syed, Christopher Coté, Mohammad Heidarinejad, and Brent Stephens. "Elemental: An Open-Source Wireless Hardware and Software Platform for Building Energy and Indoor Environmental Monitoring and Control." Sensors 19, no. 18 (September 18, 2019): 4017. http://dx.doi.org/10.3390/s19184017.

Full text
Abstract:
This work demonstrates an open-source hardware and software platform for monitoring the performance of buildings, called Elemental, that is designed to provide data on indoor environmental quality, energy usage, HVAC operation, and other factors to its users. It combines: (i) custom printed circuit boards (PCBs) with RFM69 frequency shift keying (FSK) radio frequency (RF) transceivers for wireless sensors, control nodes, and USB gateway, (ii) a Raspberry Pi 3B with custom firmware acting as either a centralized or distributed backhaul, and (iii) a custom dockerized application for the backend called Brood that serves as the director software managing message brokering via Message Queuing Telemetry Transport (MQTT) protocol using VerneMQ, database storage using InfluxDB, and data visualization using Grafana. The platform is built around the idea of a private, secure, and open technology for the built environment. Among its many applications, the platform allows occupants to investigate anomalies in energy usage, environmental quality, and thermal performance via a comprehensive dashboard with rich querying capabilities. It also includes multiple frontends to view and analyze building activity data, which can be used directly in building controls or to provide recommendations on how to increase operational efficiency or improve operating conditions. Here, we demonstrate three distinct applications of the Elemental platform, including: (1) deployment in a research lab for long-term data collection and automated analysis, (2) use as a full-home energy and environmental monitoring solution, and (3) fault and anomaly detection and diagnostics of individual building systems at the zone-level. Through these applications we demonstrate that the platform allows easy and virtually unlimited datalogging, monitoring, and analysis of real-time sensor data with low setup costs. Low-power sensor nodes placed in abundance in a building can also provide precise and immediate fault-detection, allowing for tuning equipment for more efficient operation and faster maintenance during the lifetime of the building.
APA, Harvard, Vancouver, ISO, and other styles
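As a small illustration of the data path described above, the snippet below publishes one JSON sensor reading over MQTT using the common paho-mqtt client (1.x-style API). The broker address, topic, and field names are hypothetical; in Elemental, the VerneMQ broker and the Brood backend (InfluxDB storage, Grafana dashboards) sit behind this interface.

```python
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()             # paho-mqtt 1.x style constructor
client.connect("localhost", 1883)  # hypothetical broker address

reading = {"node": "lab-02", "temp_c": 22.4, "co2_ppm": 640}
client.publish("elemental/sensors/lab-02", json.dumps(reading), qos=1)
client.disconnect()
```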
17

Nawaz, Zeeshan. "Methyl-Tert-Butyl-Ether Synthesis Reactor Modelling and Optimization Using an Aspen Custom Modeler." Hungarian Journal of Industry and Chemistry 45, no. 2 (December 1, 2017): 1–7. http://dx.doi.org/10.1515/hjic-2017-0012.

Full text
Abstract:
A pseudo-homogeneous model of methyl-tert-butyl-ether (MTBE) synthesis in a multi-tubular packed-bed reactor has been developed using an Aspen Custom Modeler (ACM) for selecting optimum operating strategies, for the maximization and enhancement of MTBE production and isobutylene consumption, respectively. The model accounts for mass, energy, and momentum balances, and the effectiveness factor is evaluated in a one-dimensional pseudo-homogeneous model. The kinetic investigation contains kinetic rate expressions as given by the effectiveness factor for accounting for the resistance of pellets in terms of mass and heat transfer. An activity coefficient can be used in order to systematically obtain a new steady-state solution. The model used literature-based correlations for the estimation of heat transfer coefficients. The value of the coefficient for gas-coolant heat transfer can be adjusted by using a tuning coefficient in order to enrich the process data. Reasonable agreement was found between model predictions and data under similar conditions. The model sensitivity studies compute the optimum temperature, pressure, feed flow rate, methanol/isobutylene ratio, heat removal rate, etc. of the reactor and suggest optimum operating conditions of the reactor.
APA, Harvard, Vancouver, ISO, and other styles
18

Blazewicz, Marek, Ian Hinder, David M. Koppelman, Steven R. Brandt, Milosz Ciznicki, Michal Kierzynka, Frank Löffler, Erik Schnetter, and Jian Tao. "From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation." Scientific Programming 21, no. 1-2 (2013): 1–16. http://dx.doi.org/10.1155/2013/167841.

Full text
Abstract:
Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.
APA, Harvard, Vancouver, ISO, and other styles
19

Delgadillo Bonequi, Ian, Abraham Stroschein, and Lucas J. Koerner. "A field-programmable gate array (FPGA)-based data acquisition system for closed-loop experiments." Review of Scientific Instruments 93, no. 11 (November 1, 2022): 114712. http://dx.doi.org/10.1063/5.0121898.

Full text
Abstract:
We describe a custom and open source field-programmable gate array (FPGA)-based data acquisition (DAQ) system developed for electrophysiology and generally useful for closed-loop feedback experiments. FPGA acquisition and processing are combined with high-speed analog and digital converters to enable real-time feedback. The digital approach eases experimental setup and repeatability by allowing for system identification and in situ tuning of filter bandwidths. The FPGA system includes I2C and serial peripheral interface controllers, 1 GiB dynamic RAM for data buffering, and a USB3 interface to Python software. The DAQ system uses common HDMI connectors to support daughtercards that can be customized for a given experiment to make the system modular and expandable. The FPGA-based digital signal processing (DSP) is used to generate fourth-order digital infinite impulse response filters and feedback with microsecond latency. The FPGA-based DSP and an analog inner-loop are demonstrated via an experiment that rapidly steps the voltage of a capacitor isolated from the system by a considerable resistance using a feedback approach that adjusts the driving voltage based on the digitized capacitor current.
APA, Harvard, Vancouver, ISO, and other styles
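The filtering building block is conventional enough to prototype offline. The paper realizes fourth-order digital IIR filters in FPGA logic with in situ tunable bandwidths; the SciPy fragment below designs a comparable fourth-order Butterworth low-pass as second-order sections and applies it to a test signal. The sample rate and corner frequency are hypothetical.

```python
import numpy as np
from scipy import signal

fs = 100_000  # sample rate, Hz (hypothetical for this DAQ)
fc = 5_000    # corner frequency, Hz -- the quantity tuned in situ on the FPGA

# Fourth-order low-pass IIR as second-order sections for numerical stability.
sos = signal.butter(4, fc, btype="low", fs=fs, output="sos")

t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 20_000 * t)
y = signal.sosfilt(sos, x)  # the 20 kHz component is strongly attenuated
```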
20

Ebrahim, Ali. "High-Level Design Optimizations for Implementing Data Stream Sketch Frequency Estimators on FPGAs." Electronics 11, no. 15 (July 31, 2022): 2399. http://dx.doi.org/10.3390/electronics11152399.

Full text
Abstract:
This paper presents simple yet effective optimizations for implementing data stream frequency estimation sketch kernels using High-Level Synthesis (HLS). The paper addresses design issues common to sketches utilizing large portions of the embedded RAM resources in a Field Programmable Gate Array (FPGA). First, a solution based on Load-Store Queue (LSQ) architecture is proposed for resolving the memory dependencies associated with the hash tables in a frequency estimation sketch. Second, performance fine-tuning through high-level pragmas is explored to achieve the best possible throughput. Finally, a technique based on pre-processing the data stream in a small cache memory prior to updating the sketch is evaluated to reduce the dynamic power consumption. Using an Intel HLS compiler, a proposed optimized hardware version of the popular Count-Min sketch utilizing 80% of the embedded RAM in an Intel Arria 10 FPGA, achieved more than 3x the throughput of an unoptimized baseline implementation. Furthermore, the sketch update rate is significantly reduced when the input stream is skewed. This, in turn, minimizes the effect of high throughput on dynamic power consumption. Compared to FPGA sketches in the published literature, the presented sketch is the most well-rounded sketch in terms of features and versatility. In terms of throughput, the presented sketch is on a par with the fastest sketches fine-tuned at the Register Transfer Level (RTL).
APA, Harvard, Vancouver, ISO, and other styles
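For readers unfamiliar with the kernel being accelerated, here is a minimal pure-Python Count-Min sketch: d hashed rows of w counters, with point queries answered by the row-wise minimum. This mirrors only the data structure; the paper's contributions (LSQ-based dependency resolution, pragma-level throughput tuning, and the small pre-aggregation cache that cuts the update rate on skewed streams) are FPGA-side and not shown.

```python
import random

class CountMin:
    """Minimal Count-Min sketch: d hash rows of w counters each."""
    def __init__(self, w=1024, d=4, seed=0):
        rng = random.Random(seed)
        self.w = w
        self.salts = [rng.getrandbits(64) for _ in range(d)]
        self.rows = [[0] * w for _ in range(d)]

    def update(self, item, count=1):
        for salt, row in zip(self.salts, self.rows):
            row[hash((salt, item)) % self.w] += count

    def estimate(self, item):
        # Each row can over-count due to collisions, so take the minimum.
        return min(row[hash((salt, item)) % self.w]
                   for salt, row in zip(self.salts, self.rows))

cms = CountMin()
for ch in "abracadabra":
    cms.update(ch)
print(cms.estimate("a"))  # never below the true count of 5
```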
21

Ziouzios, Dimitris, Dimitris Tsiktsiris, Nikolaos Baras, and Minas Dasygenis. "A Distributed Architecture for Smart Recycling Using Machine Learning." Future Internet 12, no. 9 (August 24, 2020): 141. http://dx.doi.org/10.3390/fi12090141.

Full text
Abstract:
Recycling is vital for a sustainable and clean environment. Developed and developing countries are both facing the problem of solid waste management and recycling issues. Waste classification is a good solution to separate the waste from the recyclable materials. In this work, we propose a cloud-based classification algorithm for automated machines in recycling factories using machine learning. We trained an efficient MobileNet model, able to classify five different types of waste. The inference can be performed in real-time on a cloud server. Various techniques are described and used in order to improve the classification accuracy, such as data augmentation and hyper-parameter tuning. Multiple industrial stations are supported and interconnected via custom data transmission protocols, along with security features. Experimental results indicated that our solution can achieve excellent performance with 96.57% accuracy utilizing a cloud server.
APA, Harvard, Vancouver, ISO, and other styles
22

VERMA, SANTHOSH, and DAVID M. KOPPELMAN. "THE INTERACTION AND RELATIVE EFFECTIVENESS OF HARDWARE AND SOFTWARE DATA PREFETCH." Journal of Circuits, Systems and Computers 21, no. 02 (April 2012): 1240002. http://dx.doi.org/10.1142/s0218126612400026.

Full text
Abstract:
A major performance limiter in modern processors is the long latencies caused by data cache misses. Both compiler- and hardware-based prefetching schemes help hide these latencies and so improve performance. Compiler techniques infer memory access patterns through code analysis, and insert appropriate prefetch instructions. Hardware prefetching techniques work independently from the compiler by monitoring an access stream, detecting patterns in this stream and issuing prefetches based on these patterns. This paper looks at the interplay between compiler and hardware architecture-based prefetching techniques. Does either technique make the other one unnecessary? First, compilers' ability to achieve good results without extreme expertise is evaluated by preparing binaries with no prefetch, one-flag prefetch (no tuning), and expertly tuned prefetch. From runs of SPECcpu2006 binaries, we find that expertise avoids minor slowdown in a few benchmarks and provides substantial speedup in others. We compare software schemes to hardware prefetching schemes and our simulations show software alone substantially outperforms hardware alone on about half of a selection of benchmarks. While hardware matches or exceeds software in a few cases, software is better on average. Analysis reveals that in many cases hardware is not prefetching access patterns that it is capable of recognizing, due to some irregularities in the observed miss sequence. Hardware outperforms software on address sequences that the compiler would not guess. In general, while software is better at prefetching individual loads, hardware partly compensates for this by identifying more loads to prefetch. Using the two schemes together provides further benefits, but less than the sum of the contributions of each alone.
APA, Harvard, Vancouver, ISO, and other styles
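The hardware half of this comparison can be caricatured in a few lines. The sketch below is the simplest stride detector: it waits for two consecutive equal strides in a miss stream before predicting ahead, so a regular sequence yields prefetches while an irregular one (the situation the paper's analysis blames for missed hardware opportunities) yields none. Real prefetchers track many concurrent streams per load instruction; this is a deliberately minimal model.

```python
def stride_prefetches(miss_addresses, depth=2):
    """Detect a constant stride in a miss stream and, once two equal
    strides are seen in a row, predict the next `depth` addresses."""
    prefetches = []
    last = stride = None
    for addr in miss_addresses:
        if last is not None:
            new_stride = addr - last
            if new_stride == stride and stride != 0:
                prefetches.extend(addr + stride * i
                                  for i in range(1, depth + 1))
            stride = new_stride
        last = addr
    return prefetches

print(stride_prefetches([0, 64, 128, 192]))  # regular: predictions issued
print(stride_prefetches([0, 64, 40, 512]))   # irregular: nothing to prefetch
```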
23

Poojary, Ramaprasad, Roma Raina, and Amit Kumar Mondal. "Effect of data-augmentation on fine-tuned CNN model performance." IAES International Journal of Artificial Intelligence (IJ-AI) 10, no. 1 (March 1, 2021): 84. http://dx.doi.org/10.11591/ijai.v10.i1.pp84-92.

Full text
Abstract:
During the last few years, deep learning achieved remarkable results in the field of machine learning when used for computer vision tasks. Among its many architectures, the deep neural network-based architecture known as the convolutional neural network is now widely used for image detection and classification. Although it is a great tool for computer vision tasks, it demands a large amount of training data to yield high performance. In this paper, a data augmentation method is proposed to overcome the challenges posed by a lack of sufficient training data. To analyze the effect of data augmentation, the proposed method uses two convolutional neural network architectures. To minimize training time without compromising accuracy, models are built by fine-tuning the pre-trained networks VGG16 and ResNet50. To evaluate the performance of the models, loss functions and accuracies are used. The proposed models are constructed using the Keras deep learning framework and trained on a custom dataset created from the Kaggle CAT vs DOG database. Experimental results showed that both models achieved better test accuracy when data augmentation is employed, and the model constructed using ResNet50 outperformed the VGG16-based model with a test accuracy of 90% with data augmentation and 82% without data augmentation.
APA, Harvard, Vancouver, ISO, and other styles
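A compact Keras sketch of the recipe (a frozen pre-trained backbone with augmentation layers in front and a new classification head) is given below. It follows the abstract's VGG16, cats-versus-dogs setting, but the augmentation choices and hyperparameters are illustrative rather than the paper's exact configuration.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Pre-trained backbone; freeze it and train only the new classifier head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    layers.RandomFlip("horizontal", input_shape=(224, 224, 3)),  # augmentation
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # binary cat-vs-dog output
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # datasets not shown
```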
24

Ovalle-Magallanes, Emmanuel, Juan Gabriel Avina-Cervantes, Ivan Cruz-Aceves, and Jose Ruiz-Pinales. "Transfer Learning for Stenosis Detection in X-ray Coronary Angiography." Mathematics 8, no. 9 (September 4, 2020): 1510. http://dx.doi.org/10.3390/math8091510.

Full text
Abstract:
Coronary artery disease is the most frequent type of heart disease caused by an abnormal narrowing of coronary arteries, also called stenosis or atherosclerosis. It is also the leading cause of death globally. Currently, X-ray Coronary Angiography (XCA) remains the gold-standard imaging technique for medical diagnosis of stenosis and other related conditions. This paper presents a new method for the automatic detection of coronary artery stenosis in XCA images, employing a pre-trained (VGG16, ResNet50, and Inception-v3) Convolutional Neural Network (CNN) via Transfer Learning. The method is based on a network-cut and fine-tuning approach. The optimal cut and fine-tuned layers were selected following 20 different configurations for each network. The three networks were fine-tuned using three strategies: only real data, only artificial data, and artificial with real data. The synthetic dataset consists of 10,000 images (80% for training, 20% for validation) produced by a generative model. These different configurations were analyzed and compared using a real dataset of 250 real XCA images (125 for testing and 125 for fine-tuning), regarding their randomly initiated CNNs and a fourth custom CNN, trained as well with artificial and real data. The results showed that pre-trained VGG16, ResNet50, and Inception-v3 cut on an early layer and fine-tuned, overcame the referencing CNNs performance. Specifically, Inception-v3 provided the best stenosis detection with an accuracy of 0.95, a precision of 0.93, sensitivity, specificity, and F1 score of 0.98, 0.92, and 0.95, respectively. Moreover, a class activation map is applied to identify the high attention regions for stenosis detection.
APA, Harvard, Vancouver, ISO, and other styles
25

Sinha, Gitanjali, et al. "Scalable Data Processing for Prediction, Batch Computation and Analysis and Response Times using Google BigQuery." Information Technology in Industry 9, no. 2 (April 9, 2021): 843–54. http://dx.doi.org/10.17762/itii.v9i2.422.

Full text
Abstract:
Computing complex analyses over large datasets is a tedious task because distributed resource management is difficult. Google dynamically spreads the computation used by BigQuery across compute resources, which means that we do not have to manage compute resources such as clusters, compute engines, or storage structures. Competing offerings typically require custom sizing (and pricing) of specific compute clusters, and this can change over time, which can be challenging. Since Google dynamically allocates resources, costs are dynamic too: Google offers both a pay-as-you-go option and an alternative where you pay for the data brought into BigQuery and then per-query costs. Since BigQuery is a fully managed service, the backend configuration and tuning are handled by Google. This is much simpler than competing platforms that expect you to choose a number and type of clusters to create and then administer over time. BigQuery automatically replicates data between zones to enable high availability. It also automatically load-balances to provide optimal performance and to limit the effect of any hardware failures. Taking advantage of BigQuery, we performed complex analysis over a huge dataset within a fraction of a second. Our results show the capability of our research work in the field of scalable data processing.
APA, Harvard, Vancouver, ISO, and other styles
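From the client's side, "no cluster management" means a query is simply submitted and results are streamed back while Google allocates the compute. A minimal sketch with the official google-cloud-bigquery client follows; the project, dataset, and column names are hypothetical, and application-default credentials are assumed.

```python
from google.cloud import bigquery

client = bigquery.Client()  # credentials and project taken from environment

query = """
    SELECT device_id, AVG(response_ms) AS avg_response
    FROM `my_project.telemetry.requests`
    WHERE DATE(ts) = CURRENT_DATE()
    GROUP BY device_id
    ORDER BY avg_response DESC
    LIMIT 10
"""

# No clusters to size or tune: BigQuery provisions compute per query.
for row in client.query(query).result():
    print(row.device_id, row.avg_response)
```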
26

Střelák, David, Carlos Óscar S. Sorzano, José María Carazo, and Jiří Filipovič. "A GPU acceleration of 3-D Fourier reconstruction in cryo-EM." International Journal of High Performance Computing Applications 33, no. 5 (March 11, 2019): 948–59. http://dx.doi.org/10.1177/1094342019832958.

Full text
Abstract:
Cryo-electron microscopy is a popular method for macromolecules structure determination. Reconstruction of a 3-D volume from raw data obtained from a microscope is highly computationally demanding. Thus, acceleration of the reconstruction has a great practical value. In this article, we introduce a novel graphics processing unit (GPU)-friendly algorithm for direct Fourier reconstruction, one of the main computational bottlenecks in the 3-D volume reconstruction pipeline for some experimental cases (particularly those with a large number of images and a high internal symmetry). Contrary to the state of the art, our algorithm uses a gather memory pattern, improving cache locality and removing race conditions in parallel writing into the 3-D volume. We also introduce a finely tuned CUDA implementation of our algorithm, using auto-tuning to search for a combination of optimization parameters maximizing performance on a given GPU architecture. Our CUDA implementation is integrated in widely used software Xmipp, version 3.19, reaching 11.4× speedup compared to the original parallel CPU implementation using GPU with comparable power consumption. Moreover, we have reached 31.7× speedup using four GPUs and 2.14×–5.96× speedup compared to optimized GPU implementation based on a scatter memory pattern.
APA, Harvard, Vancouver, ISO, and other styles
27

Turkmanovic, Haris, Ivan Popovic, Dejan Drajic, and Zoran Cica. "Green computing for iot - software approach." Facta universitatis - series: Electronics and Energetics 35, no. 4 (2022): 541–55. http://dx.doi.org/10.2298/fuee2204541t.

Full text
Abstract:
More efficient usage of limited energy resources on embedded platforms, found in various IoT applications, is identified as a universal challenge in designing such devices and systems. Although many power management techniques for control and optimization of device power consumption have been introduced at the hardware and software level, only few of them are addressing device operation at the application level. In this paper, a software engineering approach for managing the operation of IoT edge devices is presented. This approach involves a set of the application-level software parameters that affect consumption of the IoT device and its real-time behavior. To investigate and illustrate the impact of the introduced parameters on the device performance and its energy footprint, we utilize a custom-built simulation environment. The simulation results obtained from analyzing simplified data producer-consumer configuration of IoT edge tier, under push-based communication model, confirm that careful tuning of the identified set of parameters can lead to more energy efficient IoT end-device operation.
APA, Harvard, Vancouver, ISO, and other styles
28

Dettori, Stefano, Alessandro Maddaloni, Filippo Galli, Valentina Colla, Federico Bucciarelli, Damaso Checcacci, and Annamaria Signorini. "Steam Turbine Rotor Stress Control through Nonlinear Model Predictive Control." Energies 14, no. 13 (July 2, 2021): 3998. http://dx.doi.org/10.3390/en14133998.

Full text
Abstract:
The current flexibility of the energy market requires operating steam turbines under challenging conditions such as variable steam parameters and a higher number of startups. This article proposes an advanced control system based on the Nonlinear Model Predictive Control (NMPC) technique, which makes it possible to speed up the start-up of steam turbines and increase the energy produced while keeping rotor stress as a constraint variable. A soft sensor for the online calculation of rotor stress is presented together with the steam turbine control logic. Then, we present how the computational cost of the controller was contained by reducing the order of the formulation of the optimization problem, adjusting the scheduling of the optimizer routine, and tuning the parameters of the controller itself. The performance of the control system has been compared with a PI controller architecture fed by the soft sensor results and with standard pre-calculated curves. The control architecture was evaluated in a simulation exploiting actual data from a Concentrated Solar Power plant. The NMPC technique shows an increase in performance with respect to the custom PI control application, and encouraging results.
APA, Harvard, Vancouver, ISO, and other styles
29

Manigandan, E., V. Shanthi, and Magesh Kasthuri. "Preparing Low Cost Solution Based On Customized Process Of Parallel Clustering Solution." European Scientific Journal, ESJ 12, no. 21 (July 29, 2016): 159. http://dx.doi.org/10.19044/esj.2016.v12n21p159.

Full text
Abstract:
Big data analysis is the field of data processing that involves collections of data sets so large and complex that they are difficult to process with traditional approaches and technologies; there is no unified, globally accepted solution for such analysis. Handling a large volume of data and preparing it for deep analysis, so as to produce the information required by the mining process, is often the most complex and costly task in real time. There are many solutions for the data mining process, such as clustering, spatial mining, and k-means mining, to name a few. But the real challenge in the data mining process is choosing the correct solution or algorithm to apply to the input data and tuning the processing steps so as to establish a cost-effective solution for the entire mining process. Some solutions mine efficiently but are not cost-effective to operate, and sometimes it is vice versa; hence there is an ever-increasing demand for a solution that is both cost-effective and efficient. The intent of this paper is to research how to implement a concept called parallel clustering, which gives higher benefit in terms of cost and time in data mining processing without compromising the efficiency and accuracy of the expected result. This paper discusses one such custom algorithm and its performance as compared to other solutions.
APA, Harvard, Vancouver, ISO, and other styles
30

Alruily, Meshrif, Abdul Manaf Fazal, Ayman Mohamed Mostafa, and Mohamed Ezz. "Automated Arabic Long-Tweet Classification Using Transfer Learning with BERT." Applied Sciences 13, no. 6 (March 9, 2023): 3482. http://dx.doi.org/10.3390/app13063482.

Full text
Abstract:
Social media platforms like Twitter are commonly used by people interested in various activities, interests, and subjects that may cover their everyday activities and plans, as well as their thoughts on religion, technology, or the products they use. In this paper, we present a bidirectional encoder representations from transformers (BERT)-based text classification model, ARABERT4TWC, for classifying the Arabic tweets of users into different categories. This work aims to provide an enhanced deep-learning model that can automatically classify the robust Arabic tweets of different users. In our proposed work, a transformer-based model for text classification is constructed from a pre-trained BERT model, provided by the Hugging Face Transformers library, with custom dense layers. The multi-class classification layer is built on top of the BERT encoder to categorize the tweets. First, data sanitation and preprocessing were performed on the raw Arabic corpus to improve the model's accuracy. Second, an Arabic-specific BERT model was built and input embedding vectors were fed into it. Using five publicly accessible datasets, substantial experiments were executed, and the fine-tuning technique was assessed in terms of tokenized vector and learning rate. In addition, we assessed the accuracy of various deep-learning models for classifying Arabic text.
APA, Harvard, Vancouver, ISO, and other styles
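The construction described above (a pre-trained Arabic BERT encoder with a classification layer on top) matches the standard Hugging Face pattern sketched below. The checkpoint name is illustrative, and ARABERT4TWC adds its own custom dense layers and fine-tuning regime rather than using this off-the-shelf head as-is.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative Arabic BERT checkpoint; the paper's five tweet categories
# become a 5-way classification head on top of the encoder.
name = "aubmindlab/bert-base-arabertv02"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=5)

batch = tokenizer(["example tweet text"], padding=True, truncation=True,
                  max_length=128, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
print(logits.softmax(-1))  # probabilities over the five tweet categories
```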
31

De Keyser, Thomas, Essam Saeid, Christopher G. St C. Kendall, and James Kellogg. "Normalized and color-filled logarithmic gamma-ray logs to enhance subsurface stratigraphic interpretation of carbonates and siliciclastics." Interpretation 8, no. 1 (February 1, 2020): B1—B11. http://dx.doi.org/10.1190/int-2018-0247.1.

Full text
Abstract:
Modern petrophysical software has broad capabilities for the display and manipulation of subsurface digital log data and for its integration with core data. Color and scale are two of the most important display attributes that can be used to enhance the visualization and interpretation of rock properties. The gamma-ray (GR) log, the most important log used in subsurface interpretation, is conventionally displayed on a linear scale of American Petroleum Institute (API) units. This makes it difficult to interpret in very clean lithologies with low API values or where the range of values is very large. We determine how displaying GR values on a logarithmic scale enhances the recognition of cyclicity in lithofacies with low GR values concurrently with rocks with very high API values. We further enhance the GR curve with a color-fill pattern that is intuitive and suggests the lithofacies. We calibrate the core-derived lithofacies data to the color-fill pattern, interactively “tuning” it to match lithofacies boundaries, further increasing the value of the methodology. Because of the many factors that cause variation in recorded API values, we normalized the GR curves, either by bulk shifts or statistical means, so that they display the same colors for equivalent lithologies in all wells in a cross section. We have developed two integrated studies demonstrating techniques to improve the display and interpretation of borehole logs, specifically those of Mesozoic carbonates and evaporites of the Middle East and Cenozoic siliciclastic fluvial and marginal marine systems of the Llanos Foothills of Colombia. Many examples of custom color-fill patterns for petrophysical logs could be suggested, the possibilities being limited only by the data available and the interpretation being presented.
APA, Harvard, Vancouver, ISO, and other styles
32

Jia, Yin, Balakrishnan Ramalingam, Rajesh Elara Mohan, Zhenyuan Yang, Zimou Zeng, and Prabakaran Veerajagadheswar. "Deep-Learning-Based Context-Aware Multi-Level Information Fusion Systems for Indoor Mobile Robots Safe Navigation." Sensors 23, no. 4 (February 20, 2023): 2337. http://dx.doi.org/10.3390/s23042337.

Full text
Abstract:
Hazardous object detection (escalators, stairs, glass doors, etc.) and avoidance are critical functional safety modules for autonomous mobile cleaning robots. Conventional object detectors are less accurate at detecting low-feature hazardous objects and suffer from missed detections and a high false-classification ratio when the object is under occlusion. Missed detection or false classification of hazardous objects poses an operational safety issue for mobile robots. This work presents a deep-learning-based context-aware multi-level information fusion framework for autonomous mobile cleaning robots to detect and avoid hazardous objects with a higher confidence level, even if the object is under occlusion. First, an image-level contextual encoding module was proposed and incorporated into the Faster RCNN ResNet 50 object detector model to improve low-feature and occluded hazardous object detection in an indoor environment. Further, a safe-distance-estimation function was proposed to avoid hazardous objects. It computes the distance of the hazardous object from the robot's position and steers the robot into a safer zone using the detection results and object depth data. The proposed framework was trained on a custom image dataset using fine-tuning techniques and tested in real time with an in-house-developed mobile cleaning robot, BELUGA. The experimental results show that the proposed algorithm detected low-feature and occluded hazardous objects with a higher confidence level than the conventional object detector and scored an average detection accuracy of 88.71%.
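As an illustration of the safe-distance idea described above, here is a minimal sketch that combines a detector's bounding box with a depth map to decide when to steer away; the threshold and function names are hypothetical, not taken from the paper.

```python
# Minimal sketch: decide whether to avoid a detected hazard using depth data.
import numpy as np

SAFE_DISTANCE_M = 1.5  # assumed safety margin, not the paper's tuned value

def hazard_distance(depth_map: np.ndarray, box: tuple) -> float:
    """Median depth (metres) inside a detection box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return float(np.median(depth_map[y1:y2, x1:x2]))

def steer_command(depth_map, detections):
    """Return 'avoid' if any confident detection lies inside the margin."""
    for box, label, score in detections:
        if score > 0.5 and hazard_distance(depth_map, box) < SAFE_DISTANCE_M:
            return "avoid"
    return "proceed"

depth = np.full((480, 640), 3.0)
depth[200:300, 300:400] = 1.0  # a close object in the toy depth map
print(steer_command(depth, [((300, 200, 400, 300), "escalator", 0.9)]))  # avoid
```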
APA, Harvard, Vancouver, ISO, and other styles
34

Barulina, Marina, Askhat Sanbaev, Sergey Okunkov, Ivan Ulitin, and Ivan Okoneshnikov. "Deep Learning Approaches to Automatic Chronic Venous Disease Classification." Mathematics 10, no. 19 (September 30, 2022): 3571. http://dx.doi.org/10.3390/math10193571.

Full text
Abstract:
Chronic venous disease (CVD) occurs in a substantial proportion of the world’s population. Although the onset of CVD may look like a cosmetic defect, over time it can develop into serious problems that require surgical intervention. The aim of this work is to use deep learning (DL) methods for automatic classification of the stage of CVD, enabling self-diagnosis by a patient from an image of the patient’s legs. The leg images with CVD required for the DL algorithms were collected from open Internet resources using specially developed algorithms. For image preprocessing, the binary classification problem “legs–no legs” was solved with a Resnet50-based model at an accuracy of 0.998. Applying this filter made it possible to collect a dataset of 11,118 good-quality leg images with various stages of CVD. For classifying the stages of CVD according to the CEAP classification, a multi-class classification problem was posed and solved using four neural networks with quite different architectures: Resnet50 and transformers such as data-efficient image transformers (DeiT) and a custom vision transformer (vit-base-patch16-224 and vit-base-patch16-384). The model based on DeiT, without any tuning, showed better results than the model based on Resnet50 (precision = 0.770 for DeiT versus 0.615 for Resnet50), and vit-base-patch16-384 showed the best results (precision = 0.79). To demonstrate the results of the work, a Telegram bot was developed in which the fully functioning DL algorithms were implemented. This bot allows evaluating the condition of a patient’s legs with fairly good CVD classification accuracy.
APA, Harvard, Vancouver, ISO, and other styles
35

Rocha, Kyle Akira, Jeff J. Andrews, Christopher P. L. Berry, Zoheyr Doctor, Aggelos K. Katsaggelos, Juan Gabriel Serra Pérez, Pablo Marchant, et al. "Active Learning for Computationally Efficient Distribution of Binary Evolution Simulations." Astrophysical Journal 938, no. 1 (October 1, 2022): 64. http://dx.doi.org/10.3847/1538-4357/ac8b05.

Full text
Abstract:
Binary stars undergo a variety of interactions and evolutionary phases, critical for predicting and explaining observations. Binary population synthesis with full simulation of stellar structure and evolution is computationally expensive, requiring a large number of mass-transfer sequences. The recently developed binary population synthesis code POSYDON incorporates grids of MESA binary star simulations that are interpolated to model large-scale populations of massive binaries. The traditional method of computing a high-density rectilinear grid of simulations is not scalable for higher-dimension grids, accounting for a range of metallicities, rotation, and eccentricity. We present a new active learning algorithm, psy-cris, which uses machine learning in the data-gathering process to adaptively and iteratively target simulations to run, resulting in a custom, high-performance training set. We test psy-cris on a toy problem and find the resulting training sets require fewer simulations for accurate classification and regression than either regular or randomly sampled grids. We further apply psy-cris to the target problem of building a dynamic grid of MESA simulations, and we demonstrate that, even without fine tuning, a simulation set of only ∼1/4 the size of a rectilinear grid is sufficient to achieve the same classification accuracy. We anticipate further gains when algorithmic parameters are optimized for the targeted application. We find that optimizing for classification only may lead to performance losses in regression, and vice versa. Lowering the computational cost of producing grids will enable new population synthesis codes such as POSYDON to cover more input parameters while preserving interpolation accuracies.
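To sketch the active-learning idea in the abstract, here is a schematic loop that iteratively targets the most uncertain candidate point for the next expensive simulation; the Gaussian-process classifier and toy labeling function are generic stand-ins for psy-cris and MESA, not the paper's implementation.

```python
# Schematic active-learning loop: fit a classifier, then simulate where the
# model is most uncertain. Purely illustrative of the general technique.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

def run_simulation(x):                        # stand-in for an expensive run
    return int(x[0] ** 2 + x[1] ** 2 < 1.0)   # toy outcome class

X = np.array([[0.0, 0.0], [1.5, 1.5], [-1.5, 1.0], [0.5, 0.0],
              [0.0, -1.8], [1.0, 1.0], [-0.5, -0.5], [1.8, 0.0]])
y = np.array([run_simulation(x) for x in X])  # small seed set with both classes

rng = np.random.default_rng(0)
for _ in range(10):                           # each round adds one simulation
    clf = GaussianProcessClassifier().fit(X, y)
    cand = rng.uniform(-2, 2, size=(500, 2))
    p = clf.predict_proba(cand)[:, 1]
    pick = cand[np.argmin(np.abs(p - 0.5))]   # most uncertain candidate
    X = np.vstack([X, pick])
    y = np.append(y, run_simulation(pick))
print("training set size after active learning:", len(X))
```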
APA, Harvard, Vancouver, ISO, and other styles
36

Wijnands, Jasper S., Jason Thompson, Kerry A. Nice, Gideon D. P. A. Aschwanden, and Mark Stevenson. "Real-time monitoring of driver drowsiness on mobile platforms using 3D neural networks." Neural Computing and Applications 32, no. 13 (October 13, 2019): 9731–43. http://dx.doi.org/10.1007/s00521-019-04506-0.

Full text
Abstract:
Driver drowsiness increases crash risk, leading to substantial road trauma each year. Drowsiness detection methods have received considerable attention, but few studies have investigated the implementation of a detection approach on a mobile phone. Phone applications reduce the need for specialised hardware and hence, enable a cost-effective roll-out of the technology across the driving population. While it has been shown that three-dimensional (3D) operations are more suitable for spatiotemporal feature learning, current methods for drowsiness detection commonly use frame-based, multi-step approaches. However, computationally expensive techniques that achieve superior results on action recognition benchmarks (e.g. 3D convolutions, optical flow extraction) create bottlenecks for real-time, safety-critical applications on mobile devices. Here, we show how depthwise separable 3D convolutions, combined with an early fusion of spatial and temporal information, can achieve a balance between high prediction accuracy and real-time inference requirements. In particular, increased accuracy is achieved when assessment requires motion information, for example, when sunglasses conceal the eyes. Further, a custom TensorFlow-based smartphone application shows the true impact of various approaches on inference times and demonstrates the effectiveness of real-time monitoring based on out-of-sample data to alert a drowsy driver. Our model is pre-trained on ImageNet and Kinetics and fine-tuned on a publicly available Driver Drowsiness Detection dataset. Fine-tuning on large naturalistic driving datasets could further improve accuracy to obtain robust in-vehicle performance. Overall, our research is a step towards practical deep learning applications, potentially preventing micro-sleeps and reducing road trauma.
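As a concrete illustration of the architectural ingredient named above, here is a minimal PyTorch sketch of a depthwise-separable 3D convolution block; the channel sizes and clip shape are illustrative, not the paper's network.

```python
# Minimal sketch of a depthwise-separable 3D convolution block.
import torch
import torch.nn as nn

class DepthwiseSeparableConv3d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel=3):
        super().__init__()
        # Depthwise: one 3D filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel, padding=kernel // 2,
                                   groups=in_ch, bias=False)
        # Pointwise: 1x1x1 convolution mixes channels cheaply.
        self.pointwise = nn.Conv3d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (batch, channels, time, height, width)
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

clip = torch.randn(1, 16, 8, 112, 112)  # e.g. 8 frames of a face crop
print(DepthwiseSeparableConv3d(16, 32)(clip).shape)  # (1, 32, 8, 112, 112)
```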
APA, Harvard, Vancouver, ISO, and other styles
37

Masud, Mehedi, M. Shamim Hossain, Hesham Alhumyani, Sultan S. Alshamrani, Omar Cheikhrouhou, Saleh Ibrahim, Ghulam Muhammad, Amr E. Eldin Rashed, and B. B. Gupta. "Pre-Trained Convolutional Neural Networks for Breast Cancer Detection Using Ultrasound Images." ACM Transactions on Internet Technology 21, no. 4 (July 16, 2021): 1–17. http://dx.doi.org/10.1145/3418355.

Full text
Abstract:
Volunteer-computing-based data processing is a new trend in healthcare applications. Researchers are now leveraging volunteer computing power to train deep learning networks consisting of billions of parameters. Breast cancer is the second most common cause of cancer death in women. Early detection of cancer may diminish the death risk of patients. Since manual diagnosis of breast cancer is time-consuming and detection systems are scarce, an automatic diagnosis system is needed for early detection of cancer. Machine learning models are now widely used in cancer detection and prediction research to improve the subsequent therapy of patients. Considering this need, this study implements pre-trained convolutional neural network based models for detecting breast cancer using ultrasound images. In particular, we tuned the pre-trained models to extract key features from ultrasound images and included a classifier on the top layer. We measured the accuracy of seven popular state-of-the-art pre-trained models using different optimizers and hyperparameters through fivefold cross-validation. Moreover, we used Grad-CAM and occlusion mapping techniques to examine how well the models extract key features from the ultrasound images to detect cancers. We observe that after fine-tuning, DenseNet201 and ResNet50 show 100% accuracy with the Adam and RMSprop optimizers, and VGG16 shows 100% accuracy using the Stochastic Gradient Descent optimizer. We also developed a custom convolutional neural network model with fewer layers than the large pre-trained models; it also shows 100% accuracy using the Adam optimizer in classifying healthy and breast cancer patients. It is our belief that the model will assist healthcare experts with improved and faster patient screening and pave the way for further breast cancer research.
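The transfer-learning recipe the abstract relies on, freezing a pre-trained backbone, attaching a new classifier head, then fine-tuning at a small learning rate, can be sketched as follows; the ResNet50 backbone, two-class head, and learning rate are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of the freeze-then-fine-tune transfer-learning pattern.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False                    # keep ImageNet features fixed
model.fc = nn.Linear(model.fc.in_features, 2)  # benign vs. malignant head

# Fine-tuning phase: unfreeze everything, continue at a small learning rate.
for p in model.parameters():
    p.requires_grad = True
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

x = torch.randn(4, 3, 224, 224)  # a toy batch of ultrasound image crops
print(model(x).shape)            # -> torch.Size([4, 2])
```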
APA, Harvard, Vancouver, ISO, and other styles
38

Omar, Mohamed, Zhuoran Xu, Ryan Carelli, Jacob Rosenthal, David Brundage, Daniela C. Salles, Eddie L. Imada, et al. "Abstract 462: Using attention-based deep multiple instance learning to identify key genetic alterations in prostate cancer from whole slide images." Cancer Research 82, no. 12_Supplement (June 15, 2022): 462. http://dx.doi.org/10.1158/1538-7445.am2022-462.

Full text
Abstract:
Prostate cancer (PCa) is associated with several genetic alterations which play an important role in the disease heterogeneity and clinical outcome. These alterations involve gene fusion between TMPRSS2 and members of the ETS family of transcription factors like ERG, ETV1, and ETV4, together with mutations or deletions in tumor suppressors like TP53 and PTEN. The expanding wealth of digital whole slide images (WSIs) and the increasing adoption of deep learning approaches offer a unique opportunity for pathologists to streamline the detection of these alterations. Here, we used 736 haematoxylin and eosin-stained WSIs from 494 primary PCa patients to identify several key genetic alterations including ERG, ETV1, and ETV4 fusion, PTEN loss, and TP53 and SPOP mutations. Using a custom segmentation pipeline, we identified tissue regions and tiled them into high-resolution (10× magnification) patches (256×256 pixels), which were passed to our deep multiple instance learning framework. Using a pre-trained ResNet50 model, we extracted informative features which were subsequently used for training to predict slide-level labels and to detect slide regions with high diagnostic relevance. Using a 10-fold cross-validation approach, we divided the data into training (80%), validation (10%), and testing (10%) sets. The training and validation data were used for training the model and hyperparameter tuning, respectively, while the testing data were used to provide an unbiased evaluation of the models’ performance, using the mean Area Under the Receiver Operating Characteristic curve (AUROC) across the ten testing folds as the evaluation metric. We managed to accurately detect key molecular alterations including ERG fusion, ETV1 fusion, ETV4 fusion, and PTEN loss. Additionally, we were able to detect mutations in TP53 and SPOP together with the presence of androgen-receptor splice variant 7 (ARv7). In addition to slide-level classification, we also identified subregions with high attention scores, which can help pathologists identify the distinct morphological features associated with each genetic alteration. Finally, in order to examine the cellular structure associated with each genetic alteration, we used the Hover-Net model to segment and classify the nuclei in the high-attention tiles. Our work highlights the utility of using WSIs to accurately identify key molecular alterations in cancer and their associated morphological and cellular features on the slide, which would streamline the diagnostic process. To the best of our knowledge, this is the first study that uses routine WSIs to predict and characterize key genetic alterations in PCa. Citation Format: Mohamed Omar, Zhuoran Xu, Ryan Carelli, Jacob Rosenthal, David Brundage, Daniela C. Salles, Eddie L. Imada, Renato Umeton, Edward M. Schaeffer, Brian D. Robinson, Tamara L. Lotan, Massimo Loda, Luigi Marchionni. Using attention-based deep multiple instance learning to identify key genetic alterations in prostate cancer from whole slide images [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 462.
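As an illustration of the attention-based multiple-instance learning step, here is a minimal sketch of attention pooling over tile features; the 2048-dimensional features match ResNet50's output, while the attention width and class count are illustrative assumptions.

```python
# Minimal sketch of attention-based MIL pooling over slide tiles.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=2048, attn_dim=128, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, attn_dim), nn.Tanh(),
                                  nn.Linear(attn_dim, 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, tiles):                       # tiles: (n_tiles, feat_dim)
        a = torch.softmax(self.attn(tiles), dim=0)  # one weight per tile
        slide = (a * tiles).sum(dim=0)              # weighted slide embedding
        return self.classifier(slide), a.squeeze(-1)

feats = torch.randn(500, 2048)         # features for 500 patches of one slide
logits, attention = AttentionMIL()(feats)
top_tiles = attention.topk(5).indices  # candidate high-relevance regions
```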
APA, Harvard, Vancouver, ISO, and other styles
39

Zhang, Tao, Paul Auer, Stephen R. Spellman, Caitrin Fretham, Wael Saber, and Yung-Tsi Bolon. "Genomic Subgroups Impact Post-Transplant Survival in Patients with Myelodysplastic Syndrome: A CIBMTR Analysis." Blood 138, Supplement 1 (November 5, 2021): 3678. http://dx.doi.org/10.1182/blood-2021-151621.

Full text
Abstract:
Background: Myelodysplastic syndromes (MDS) are clonal stem cell malignancies characterized by cytopenia, inefficient hematopoiesis, dysplasia in one or more myeloid cell lineages, and an increased risk of developing acute myeloid leukemia. It is well appreciated that genomic alterations play a key role in MDS pathogenesis. The Revised International Prognostic Scoring System (IPSS-R) algorithm is commonly used to predict overall survival but may fail to provide reliable prognostic information at the individual patient level, especially at the time of hematopoietic cell transplantation (HCT). The current World Health Organization (WHO) classification includes MDS with isolated 5q deletion as the only genetically defined category. Comprehensive analysis of recurrent genomic features by unsupervised clustering empowers the discovery of potential prognostic molecular signatures. Methods: Using whole blood samples obtained from 494 MDS patients at the time of HCT, we conducted whole-genome sequencing (WGS) and somatic variant processing via a custom analytic pipeline based on OCTOPUS and a set of annotation databases. Multiple filters allowed for selection and fine-tuning of criteria, including removal of variants with a gnomAD allele frequency above 10 × 10⁻⁶, removal of noncoding variants in low-complexity and repetitive regions, variants with no functional indications from ANNOVAR annotations, variants with a CADD conservation score under 15, and variants absent from the HGMD and COSMIC databases. Highly annotated clinical data, including cytogenetic abnormalities at the latest time point prior to HCT, were obtained from CIBMTR forms. K-means clustering was applied to recurrent mutations and cytogenetic abnormalities to identify clinically relevant genomic subtypes; the optimal cluster number was determined by the gap statistic algorithm. Statistics of clinical characteristics were compared among genomic subgroups by the Chi-squared test for categorical variables and the Mann-Whitney U test for continuous variables. Overall survival association tests were conducted with Cox multivariate models. Relapse and transplant-related risk were assessed by competing risk analysis using Fine-Gray models. Models were adjusted for patient-, disease-, and HCT-related factors. Results: The somatic genomic landscape in our MDS cohort was examined for the total count of recurrent mutations at the sample level and gene level. Among 53 recurrently mutated genes in 257 of 494 MDS cases, TP53, TET2, RUNX1, DNMT3A, and ASXL1 were the most frequently mutated. Based on k-means clustering of the recurrent mutational and cytogenetic data, we detected five clusters that stratified our MDS patient cohort, including one reference cluster with no recurrent somatic mutations or cytogenetic abnormalities. Compared to the reference subgroup, significantly higher cytogenetic scores and IPSS-R scores were observed in genomic clusters with TP53 mutations (cytogenetic score: P=3.42E-07*; IPSS-R score: P=2.38E-10*) and cytogenetic abnormalities del5q or tri8p (cytogenetic score: P=2.38E-10*; IPSS-R score: P=0.09) or mono7 (cytogenetic score: P=3.29E-13*; IPSS-R score: P=1.38E-05*) (data not shown). Cox multivariate models revealed that genomic clusters with TP53 and del5q mutations (P<0.001*) or tri8p (P=0.02*) have strong associations with post-transplant overall survival (Figure 1A). Furthermore, competing risk analysis confirmed a significantly higher risk of relapse in the genomic subgroup with TP53 and del5q mutations in the reduced-intensity conditioning regimen setting (P=0.01) (Figure 1B), while a significantly higher risk of transplant-related mortality was found in the genomic subgroup with tri8p in the myeloablative conditioning regimen setting (P=0.03) (Figure 1C). Conclusion: Our study suggests that molecular signatures from MDS patient genomes at HCT may provide an independent prognosis of post-transplant survival. Additionally, our data suggest that the choice of regimen intensity could be informed by knowledge of the individual genomic signature of a given MDS patient. Figure 1. Disclosures: Saber: Govt. COI: Other.
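To make the subgrouping step concrete, here is a schematic sketch of k-means over a binary patient-by-event matrix with the cluster count chosen by a quality score; the toy data are random, and the silhouette criterion below stands in for the gap statistic used in the study.

```python
# Schematic k-means clustering of binary genomic features (toy data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# Rows: patients; columns: events such as TP53, TET2, del5q, mono7 (toy).
X = rng.integers(0, 2, size=(494, 20)).astype(float)

best_k, best_score = None, -1.0
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)  # stand-in for the gap statistic
    if score > best_score:
        best_k, best_score = k, score
print(f"selected k={best_k} (silhouette={best_score:.3f})")
```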
APA, Harvard, Vancouver, ISO, and other styles
40

Fukai, Yohsuke T., and Kyogo Kawaguchi. "LapTrack: Linear assignment particle tracking with tunable metrics." Bioinformatics, December 10, 2022. http://dx.doi.org/10.1093/bioinformatics/btac799.

Full text
Abstract:
Motivation: Particle tracking is an important step of analysis in a variety of scientific fields, and is particularly indispensable for the construction of cellular lineages from live images. Although various supervised machine learning methods have been developed for cell tracking, the diversity of the data still necessitates heuristic methods that require parameter estimation from small amounts of data. For this, solving tracking as a linear assignment problem (LAP) has been widely applied and demonstrated to be efficient. However, there has been no implementation that allows custom connection costs, parallel parameter tuning with ground-truth annotations, and the functionality to preserve ground-truth connections, limiting the application to datasets with partial annotations. Results: We developed LapTrack, a LAP-based tracker which allows inclusion of arbitrary cost functions and inputs, parallel parameter tuning, and ground-truth track preservation. Analysis of real and artificial datasets demonstrates the advantage of custom metric functions for tracking score improvement over distance-only cases. The tracker can be easily combined with other Python-based tools for particle detection, segmentation, and visualization. Availability and implementation: LapTrack is available as a Python package on PyPI, and the notebook examples are shared at https://github.com/yfukai/laptrack. The data and code for this publication are hosted at https://github.com/NoneqPhysLivingMatterLab/laptrack-optimisation. Supplementary information: Supplementary data are available at Bioinformatics online.
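The LAP formulation at the core of the tracker can be sketched in a few lines with SciPy's assignment solver; the distance threshold and forbidden-link cost below are illustrative choices, not LapTrack's defaults.

```python
# Minimal sketch of frame-to-frame linking as a linear assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def link_frames(pts_a, pts_b, max_dist=15.0):
    """Link detections between two frames; unmatched points start/end tracks."""
    cost = cdist(pts_a, pts_b)      # custom metrics could replace this
    cost[cost > max_dist] = 1e6     # forbid implausible links
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 1e6]

frame1 = np.array([[0.0, 0.0], [10.0, 10.0], [30.0, 5.0]])
frame2 = np.array([[1.0, 0.5], [11.0, 9.0]])
print(link_frames(frame1, frame2))  # -> [(0, 0), (1, 1)]
```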
APA, Harvard, Vancouver, ISO, and other styles
41

Köhler, Christian. "Managing Audio Monitoring Data with SIMON – Concept for Data Administration, Online Repository and Dissemination." Biodiversity Information Science and Standards 4 (September 30, 2020). http://dx.doi.org/10.3897/biss.4.59108.

Full text
Abstract:
Automated observations of natural occurrences play a key role in monitoring biodiversity worldwide. With the development of affordable hardware like the AudioMoth acoustic logger (Hill et al. 2019), large-scale and long-term monitoring has come within reach. However, data management and dissemination of monitoring data remain challenging, as the development of software and infrastructure for the management of monitoring data lags behind. We want to fill this gap by providing a complete audio monitoring solution comprising affordable audio monitoring hardware, custom data management tools, and storage infrastructure based on open-source hardware and software, biodiversity information standards, and integrable interfaces. The Scientific Monitoring Data Management and Online Repository (SIMON) consists of a portable data collector and a connected online repository. The data collector, a device for the automated extraction of audio data from the audio loggers in the field, stores the data and metadata in an internal cache. Once connected to the internet via WiFi or a cable connection, the data are automatically uploaded to an online repository for automated analysis, annotation, data management, and dissemination. To prevent SIMON from becoming yet another proprietary storage, the FAIR principles (Findable, Accessible, Interoperable, and Re-usable; Wilkinson et al. 2016) are at the very core of data managed in the online repository. We plan to offer an API (application programming interface) to disseminate data to established data infrastructures. A second API will allow the use of external services for data enrichment. While in the planning phase, we would like to take the opportunity to discuss with domain experts the requirements and implementation of different standards, namely ABCD (Access to Biological Collections Data task group, Biodiversity Information Standards (TDWG) 2007), Darwin Core (Darwin Core Task Group, Biodiversity Information Standards (TDWG) 2009), and Darwin Core Archive (Remsen et al. 2017), as well as connections to external services and target data infrastructures.
APA, Harvard, Vancouver, ISO, and other styles
42

Wiesbrock, Christopher, Simon Musall, and Björn M. Kampa. "A flexible Python-based touchscreen chamber for operant conditioning reveals improved visual perception of cardinal orientations in mice." Frontiers in Cellular Neuroscience 16 (October 10, 2022). http://dx.doi.org/10.3389/fncel.2022.866109.

Full text
Abstract:
Natural scenes are composed of a wide range of edge angles and spatial frequencies, with a strong overrepresentation of vertical and horizontal edges. Correspondingly, many mammalian species are much better at discriminating these cardinal orientations compared to obliques. A potential reason for this increased performance could be an increased number of neurons in the visual cortex that are tuned to cardinal orientations, which is likely to be an adaptation to natural scene statistics. Such biased angular tuning has recently been shown in the mouse primary visual cortex. However, it is still unknown whether mice also show a perceptual dominance of cardinal orientations. Here, we describe the design of a novel custom-built touchscreen chamber that allows testing natural scene perception and orientation discrimination performance by applying different task designs. Using this chamber, we applied an iterative convergence towards orientation discrimination thresholds for cardinal or oblique orientations in different cohorts of mice. Surprisingly, the expert discrimination performance was similar for both groups but showed large inter-individual differences in performance and training time. To study the discrimination of cardinal and oblique stimuli in the same mice, we therefore applied a different training regime in which mice learned to discriminate cardinal and oblique gratings in parallel. Parallel training revealed a higher task performance for cardinal orientations in an early phase of the training. The performance for both orientations became similar after prolonged training, suggesting that learning permits equally high perceptual tuning towards oblique stimuli. In summary, our custom-built touchscreen chamber offers a flexible tool to test natural visual perception in rodents and revealed a training-induced increase in the perception of oblique gratings. The touchscreen chamber is entirely open-source, easy to build, and freely available to the scientific community to conduct visual or multimodal behavioral studies. It is also based on the FAIR principles for data management and sharing and could therefore serve as a catalyst for testing the perception of complex and natural visual stimuli across behavioral labs.
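The iterative threshold convergence mentioned above is typically implemented as an adaptive staircase; here is a schematic 2-down/1-up sketch with assumed step sizes, offered only as an illustration of the general procedure, not the chamber's actual protocol.

```python
# Schematic 2-down/1-up staircase converging on a discrimination threshold.
def staircase(trial_outcomes, start=45.0, step=5.0, floor=1.0):
    """Shrink the orientation difference after 2 correct; grow it after 1 error."""
    diff, correct_run, history = start, 0, []
    for correct in trial_outcomes:
        history.append(diff)
        if correct:
            correct_run += 1
            if correct_run == 2:                  # 2-down: make it harder
                diff, correct_run = max(diff - step, floor), 0
        else:
            diff, correct_run = diff + step, 0    # 1-up: make it easier
    return history

print(staircase([1, 1, 1, 1, 0, 1, 1, 0, 1, 1]))  # difficulty over trials
```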
APA, Harvard, Vancouver, ISO, and other styles
43

Frishman, Samuel, Alison M. Kight, Ileana Pirozzi, Sainiteesh Maddineni, Annabel Imbrie-Moore, Zulekha Karachiwalla, Michael J. Paulsen, Alexander D. Kaiser, Y. Joseph Woo, and Mark R. Cutkosky. "Dynaring: a Patient-specific Mitral Annuloplasty Ring with Selective Stiffness Segments." Journal of Medical Devices, April 29, 2022. http://dx.doi.org/10.1115/1.4054445.

Full text
Abstract:
Annuloplasty ring choice and design are critical to the long-term efficacy of mitral valve (MV) repair. DynaRing is a selectively compliant annuloplasty ring composed of variable-stiffness elastomer segments, a shape-set nitinol core, and a cross-diameter filament. The ring provides sufficient stiffness to stabilize a diseased annulus while allowing physiological annular dynamics. Moreover, adjusting elastomer properties provides a mechanism for effectively tuning key MV metrics to specific patients. We evaluate the ring embedded in porcine valves with an ex-vivo left heart simulator and perform a 150-million-cycle fatigue test via a custom oscillatory system. We present a patient-specific design approach for determining ring parameters using a finite element model optimization and patient MRI data. Ex-vivo experiment results demonstrate that the motion of DynaRing closely matches literature values for healthy annuli. Findings from the patient-specific optimization establish DynaRing's ability to adjust AP and IC diameters and saddle height by up to 8.8%, 5.6%, and 19.8%, respectively, and match a wide range of patient data.
APA, Harvard, Vancouver, ISO, and other styles
44

Aziz, Faruq. "Recognition of Indonesian Traditional Cakes using The MobileNet Algorithm." International Journal of Computer and Information Technology (2279-0764) 11, no. 5 (January 14, 2023). http://dx.doi.org/10.24203/ijcit.v11i5.254.

Full text
Abstract:
Indonesia is a country with a rich variety of cultures, ranging from dance to cuisine, and traditional cakes are among its distinctive foods. Custom-made variations give each cake a distinctive taste, even when the name is the same. Traditional cakes are part of an ancestral culture passed down from generation to generation within Indonesian society. Machine learning methods are suitable for consistent and clear object recognition, but this requires complex image preprocessing and feature extraction. Our proposed model is a customized and fine-tuned MobileNetV2; data augmentation was applied to the training set to create new samples with varied patterns, enlarging the training data and reducing overfitting. The resulting model detects cake differences with an accuracy of 94% and a loss of 0.06.
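A minimal sketch of the recipe described above, a customized MobileNetV2 head plus augmentation to enlarge the training set, could look like the following; the torchvision API is used here for illustration, and the transforms and eight-class head are assumptions, not details from the paper.

```python
# Minimal sketch: customized MobileNetV2 head plus data augmentation.
import torch
import torch.nn as nn
from torchvision import models, transforms

augment = transforms.Compose([          # generates varied training patterns
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.classifier = nn.Sequential(       # customized classification head
    nn.Dropout(0.2),
    nn.Linear(model.last_channel, 8),   # assumed number of cake categories
)
print(model(torch.randn(1, 3, 224, 224)).shape)  # -> torch.Size([1, 8])
```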
APA, Harvard, Vancouver, ISO, and other styles
45

Saba, Tanzila, Amjad Rehman, Tariq Sadad, and Zahid Mehmood. "Copy-move image forged information detection and localisation in digital images using deep convolutional network." Journal of Information Science, December 14, 2021, 016555152110500. http://dx.doi.org/10.1177/01655515211050024.

Full text
Abstract:
Image tampering is one of the significant issues of the modern era. The availability of powerful image-editing tools and their widespread use on social media have raised questions about data integrity. Currently, the protection of images is uncertain and a severe concern, especially when they are transferred over the Internet. Thus, it is essential to detect anomalies in images through artificial intelligence techniques. The simplest form of image forgery is copy-move, where part of an image is replicated within the same image to hide unwanted content. However, image processing through handcrafted features usually looks for patterns of duplicated content, limiting its use for large-scale data classification. On the other hand, deep learning approaches achieve promising results, but their performance depends on the training data and on fine-tuning of hyperparameters. Thus, we propose a custom convolutional neural network (CNN) architecture alongside a pre-trained ResNet101 model used in a transfer-learning approach. For this purpose, both models are trained on five different datasets. In both cases, the impact of the model is evaluated through accuracy, precision, recall, and F-score, achieving the highest accuracy of 98.4% on the Coverage dataset.
APA, Harvard, Vancouver, ISO, and other styles
46

Isakov, Matti, Veera Langi, Lalit Pun, Guilherme Corrêa Soares, Innokenty Kantor, Mads Ry Vogel Jørgensen, and Mikko Hokka. "In-Situ X-ray Diffraction Analysis of Metastable Austenite Containing Steels Under Mechanical Loading at a Wide Strain Rate Range." Metallurgical and Materials Transactions, February 14, 2023. http://dx.doi.org/10.1007/s11661-023-06986-1.

Full text
Abstract:
This paper presents and discusses the methodology and technical aspects of mechanical tests carried out at a wide strain rate range with simultaneous synchrotron X-ray diffraction measurements. The motivation for the study was to develop capabilities for in-situ characterization of the loading rate dependency of mechanically induced phase transformations in steels containing metastable austenite. The experiments were carried out at the DanMAX beamline of the MAX IV Laboratory, into which a custom-made tensile loading device was incorporated. The test setup was supplemented with in-situ optical imaging of the specimen, which allowed digital image correlation-based deformation analysis. All the measurement channels were synchronized to a common time basis with trigger signals between the devices as well as post-test fine tuning based on diffraction ring shape analysis. This facilitated precise correlation between the mechanical and diffraction data at strain rates up to 1 s⁻¹, corresponding to a test duration of less than one second. Diffraction data were collected at an acquisition rate of 250 Hz, which provided excellent temporal resolution. The feasibility of the methodology is demonstrated by providing novel data on the kinetics of the martensitic phase transformation in EN 1.4318-alloy following a rapid increase in strain rate (a so-called jump test).
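The post-hoc alignment of measurement channels to a common time base can be illustrated with a simple cross-correlation lag search; this sketch is a generic stand-in for the synchronization step, not the beamline's actual diffraction-ring-based procedure.

```python
# Schematic alignment of two channels by maximizing cross-correlation.
import numpy as np

def find_lag(sig_a, sig_b):
    """Return the sample offset of sig_a relative to sig_b."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    xcorr = np.correlate(a, b, mode="full")
    return int(np.argmax(xcorr)) - (len(b) - 1)

t = np.linspace(0, 1, 1000)
strain = np.sin(2 * np.pi * 3 * t)   # e.g. a DIC strain channel
diffraction = np.roll(strain, 25)    # same signal delayed by 25 samples
print(find_lag(diffraction, strain)) # -> 25
```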
APA, Harvard, Vancouver, ISO, and other styles
47

Ying, Yin, Zhihong Zhou, and Quanhai Zhang. "Blockchain-based Collaborative Caching Mechanism for Information Center IoT." Journal of ICT Standardization, January 14, 2023. http://dx.doi.org/10.13052/jicts2245-800x.1114.

Full text
Abstract:
The development of fifth-generation mobile communication (5G) technology and the Internet of Things (IoT) has enabled more mobile terminals to access the network and generate huge amounts of information content. This makes it difficult for the traditional IP-based host-to-host model to cope with the demand for massive data transmission, making network congestion an increasingly serious problem. To cope with these problems, a new network architecture has emerged: Information-Centric Networking (ICN), a content network with content caching as one of its core functions. In addition, as 5G and future 6G networks gradually realize the interconnection of everything, the Information-Centric Internet of Things (IC-IoT) based on the ICN architecture has emerged, in which large numbers of IoT devices can use ICN nodes as edge devices to realize collaborative caching. The caching capacity of IC-IoT is directly related to the transmission efficiency and capacity of the whole network, and improving it is a top research priority in this field. To address these issues, this paper focuses on deploying blockchains in IC-IoT networks and using the consensus mechanism of a custom blockchain to incentivize both ICN nodes and non-ICN nodes to cache collaboratively, which in turn improves the caching capacity of the whole network. The main contributions are as follows. First, incentivizing IC-IoT collaborative caching based on a blockchain consensus mechanism: by deploying a blockchain in the IC-IoT and rewarding nodes that obtain bookkeeping rights, network-wide collaborative caching is encouraged, and experiments compare the caching capacity of the network before and after the incentive. Second, improving the DPoS consensus mechanism to incentivize collaborative caching: experiments compare the incentive capacity of the PoW consensus mechanism and the improved DPoS consensus mechanism for IC-IoT collaborative caching, and the consensus mechanism with better performance is selected. Third, the design and implementation of an IC-IoT test bed: an ICN program is written to form an ICN network, from basic communication to multiple nodes, and a blockchain is then deployed on the network for subsequent extension studies. This paper demonstrates the feasibility of using a blockchain for IC-IoT collaborative caching incentives and shows, by building a test bed, that the proposed incentive method can improve the throughput of the IC-IoT network cache.
APA, Harvard, Vancouver, ISO, and other styles
48

Fan, Shaoze, Shun Zhang, Jianbo Liu, Ningyuan Cao, Xiaoxiao Guo, Jing Li, and Xin Zhang. "Power Converter Circuit Design Automation using Parallel Monte Carlo Tree Search." ACM Transactions on Design Automation of Electronic Systems, July 21, 2022. http://dx.doi.org/10.1145/3549538.

Full text
Abstract:
The tidal waves of modern electronic/electrical devices have led to increasing demands for ubiquitous application-specific power converters. A conventional manual design procedure of such power converters is computation- and labor-intensive, which involves selecting and connecting component devices, tuning component-wise parameters and control schemes, and iteratively evaluating and optimizing the design. To automate and speed up this design process, we propose an automatic framework that designs custom power converters from design specifications using Monte Carlo Tree Search. Specifically, the framework embraces the upper-confidence-bound-tree (UCT), a variant of Monte Carlo Tree Search, to automate topology space exploration with circuit design specification-encoded reward signals. Moreover, our UCT-based approach can exploit small offline data via the specially designed default policy and can run in parallel to accelerate topology space exploration. Further, it utilizes a hybrid circuit evaluation strategy to substantially reduce design evaluation costs. Empirically, we demonstrated that our framework could generate energy-efficient circuit topologies for various target voltage conversion ratios. Compared to existing automatic topology optimization strategies, the proposed method is much more computationally efficient: the sequential version can generate topologies of the same quality while being up to 67% faster. The parallelization schemes can further achieve high speedups compared to the sequential version.
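The UCT selection rule at the heart of this framework can be sketched in a few lines: pick the child maximizing mean reward plus an exploration bonus. The tree node representation and reward values below are schematic stand-ins, not the paper's implementation.

```python
# Minimal sketch of the UCB1 selection rule used by UCT/MCTS.
import math

def uct_select(children, c=1.4):
    """children: list of dicts with visit count n and total reward w."""
    total = sum(ch["n"] for ch in children)

    def ucb(ch):
        if ch["n"] == 0:
            return float("inf")  # always try unvisited candidates first
        # Mean reward (exploitation) plus exploration bonus.
        return ch["w"] / ch["n"] + c * math.sqrt(math.log(total) / ch["n"])

    return max(children, key=ucb)

# Each child stands for a candidate topology edit (e.g. add a switch).
children = [{"n": 10, "w": 6.1}, {"n": 3, "w": 2.4}, {"n": 0, "w": 0.0}]
print(uct_select(children))  # -> the unvisited child is explored first
```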
APA, Harvard, Vancouver, ISO, and other styles
49

Mezaal, Jawad, and Thamer Alameri. "DESIGN AND CONSTRUCT UNIT TO CONTROL FLUID ENTERING SOLAR COLLECTORS DURING EFFICIENCY TESTS." Journal of Applied Engineering Science, October 4, 2022, 1–10. http://dx.doi.org/10.5937/jaes0-35998.

Full text
Abstract:
This paper describes the development of an apparatus to control the fluid entering a solar collector in experimental tests with respect to the Australian and New Zealand Standard AS/NZS 2535.1:2007. This standard explains the testing procedure, indicating that the inlet fluid must meet specified temperature and flow-rate uncertainty limits. The hardware components were constructed in the lab. A new data acquisition system based on an NI CompactDAQ was added to control the unit, and a new software application was developed in LabVIEW. The unit was first operated in open loop to understand its behaviour as a multiple-input, multiple-output (MIMO) system. A rule-of-thumb tuning method was used to design the proportional-integral (PI) controller for the heating system. Moreover, a custom decoupler with a PI controller was developed to reduce the interactions in the MIMO system. The measured steady-state responses were analysed to determine the flow rate and temperature and compare them against the specified limits. The final results show that the system could supply water to the solar collector within the accuracy requirements. Keeping the fluid's temperature and flow rate within the required constraints of the published standard proves that the developed unit can be adapted for solar collector testing. However, additional steps are suggested as further work to enable the unit to support field testing.
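The discrete PI control law used for a heating loop like this one can be sketched as follows; the gains, sample time, and anti-windup clamping are placeholder choices, not the tuned values from the paper.

```python
# Minimal sketch of a discrete PI controller with simple anti-windup.
class PI:
    def __init__(self, kp, ki, dt, out_min=0.0, out_max=100.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0
        self.out_min, self.out_max = out_min, out_max

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        # Clamp the output and back off the integral to avoid windup.
        if u > self.out_max:
            u, self.integral = self.out_max, self.integral - error * self.dt
        if u < self.out_min:
            u, self.integral = self.out_min, self.integral - error * self.dt
        return u

heater = PI(kp=2.0, ki=0.1, dt=0.5)  # rule-of-thumb starting gains
power = heater.update(setpoint=45.0, measurement=41.3)  # % heater output
```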
APA, Harvard, Vancouver, ISO, and other styles
50

Hansmeyer, Laura, Pinar Yurt, Naubahar Agha, Attila Trunk, Michael Berger, Antonino Calapai, Stefan Treue, and Alexander Gail. "Home-enclosure based behavioral and wireless neural recording setup for unrestrained rhesus macaques." eNeuro, December 23, 2022, ENEURO.0285–22.2022. http://dx.doi.org/10.1523/eneuro.0285-22.2022.

Full text
Abstract:
Electrophysiological studies with behaving non-human primates (NHPs) often require the separation of animals from their social group as well as partial movement restraint to perform well-controlled experiments. When the research goal per se does not mandate constraining the animals’ movements, there are often still experimental needs imposed by tethered data acquisition. Recent technological advances meanwhile allow wireless neurophysiological recordings at high bandwidth in limited-size enclosures. Here, we demonstrate wireless neural recordings at single-unit resolution from unrestrained Rhesus macaques while they performed self-paced, structured visuomotor tasks on our custom-built, stand-alone touchscreen system (XBI) in their home environment. We were able to successfully characterize neural tuning to task parameters, such as visuo-spatial selectivity during movement planning and execution, as expected from existing findings obtained via setup-based neurophysiology recordings. We conclude that when movement restraint and/or a highly controlled, insulated environment are not necessary for scientific reasons, cage-based wireless neural recordings are a viable option. We propose an approach that allows the animals to engage in a self-paced manner with our XBI device, both for fully automatized training and cognitive testing, as well as neural data acquisition in their familiar environment, maintaining auditory and sometimes visual contact with their conspecifics. Significance statement: Cage-based cognitive systems have previously been shown to be highly useful in the cognitive assessment of non-human primates. These systems allow animals to engage with the task/system in an unrestrained and self-paced manner. We expanded the capabilities of our own cage-based testing device by combining cognitive testing with wireless neural recordings in the animals’ home environment, in an upscalable approach. When neither movement constraints nor specialized equipment are scientifically necessary, our approach allows for the combination of cognitive testing with intracranial electrophysiology without removing the animal from its home environment, potentially improving animal well-being.
APA, Harvard, Vancouver, ISO, and other styles