Academic literature on the topic 'Custom data cache tuning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Custom data cache tuning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Custom data cache tuning"

1

Patel, Aarti, and Prashant K. Shah. "Semi-Custom design of functional unit block using data path methodology in data cache unit." International Journal of VLSI & Signal Processing 4, no. 3 (May 25, 2017): 33–37. http://dx.doi.org/10.14445/23942584/ijvsp-v4i3p107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Naik Dessai, Sanket Suresh, and Varuna Eswer. "Embedded Software Testing to Determine BCM5354 Processor Performance." International Journal of Software Engineering and Technologies (IJSET) 1, no. 3 (December 1, 2016): 121. http://dx.doi.org/10.11591/ijset.v1i3.4577.

Full text
Abstract:
Efficiency of a processor is a critical factor for an embedded system. One of the deciding factors for efficiency is the functioning of the L1 cache and Translation Lookaside Buffer (TLB). Certain processors have the L1 cache and TLB managed by the operating system; MIPS32 is one such processor. The performance of the L1 cache and TLB necessitates a detailed study to understand its management during varied load on the processor. This paper presents an implementation of an embedded testing procedure to analyse the performance of the MIPS32 processor L1 cache and TLB management by the operating system (OS). The implementation proposed for embedded testing in the paper considers the counting of the respective cache and TLB management instruction execution, which is an event that is measurable with the use of dedicated counters. The lack of hardware counters in the MIPS32 processor results in the usage of software-based event counters that are defined in the kernel. This paper implements an embedded testbed with a subset of MIPS32 processor performance measurement metrics using software-based counters. Techniques were developed to overcome the challenges posed by the kernel source code. To facilitate better understanding of the testbed implementation procedure of the software-based processor performance counters, use-case analysis diagrams, flow charts, screenshots, and knowledge nuggets are supplemented along with histograms of the cache and TLB events data generated by the proposed implementation. In this testbed twenty-seven metrics have been identified and implemented to provide data related to the events of the L1 cache and TLB on the MIPS32 processor. The generated data can be used in tuning of the compiler, OS memory management design, system benchmarking, scalability, analysing architectural issues, address space analysis, understanding bus communication, kernel profiling, and workload characterisation.
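The kernel-resident software event counters described in this abstract can be sketched in a few lines. This is only an illustration of the counting idea, not the paper's implementation (which lives in the MIPS32 kernel source, in C); the event names here are invented:

```python
from collections import Counter

class SoftwareEventCounters:
    """Software counters for cache/TLB management events, standing in
    for the hardware counters the MIPS32 lacks."""

    def __init__(self):
        self.events = Counter()

    def record(self, event):
        # Each executed cache/TLB management instruction bumps a counter.
        self.events[event] += 1

    def histogram(self):
        # Sorted (event, count) pairs, ready for plotting as a histogram.
        return sorted(self.events.items(), key=lambda kv: -kv[1])

counters = SoftwareEventCounters()
for ev in ["tlb_refill", "icache_invalidate", "tlb_refill", "dcache_writeback"]:
    counters.record(ev)

assert counters.events["tlb_refill"] == 2
```

The real testbed attributes increments to specific kernel code paths; the histogram output corresponds to the per-event histograms the paper supplies.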
APA, Harvard, Vancouver, ISO, and other styles
3

Vishnekov, A. V., and E. M. Ivanova. "DYNAMIC CONTROL METHODS OF CACHE LINES REPLACEMENT POLICY." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 191 (May 2020): 49–56. http://dx.doi.org/10.14489/vkit.2020.05.pp.049-056.

Full text
Abstract:
The paper investigates ways to increase the performance of computing systems by improving the efficiency of cache memory and analyzes the efficiency indicators of replacement algorithms. We show the need for automated or automatic means of cache memory tuning under the current conditions of program code execution, namely dynamic control of cache replacement algorithms that swaps the current replacement algorithm for a more effective one under the current computation conditions. We develop methods for caching-policy control based on the program type: cyclic, sequential, locally-point, or mixed. We suggest a procedure for selecting an effective replacement algorithm using decision-support methods based on the current statistics of caching parameters. The paper analyzes existing cache replacement algorithms and proposes a decision-making procedure for selecting an effective one based on the methods of ranking alternatives, preferences, and hierarchy analysis. The critical number of cache hits, the average time of data query execution, and the average cache latency are selected as triggers for the swapping procedure for the current replacement algorithm. The main advantage of the proposed approach is its universality: it assumes an adaptive decision-making procedure for selecting the effective replacement algorithm. The procedure allows varying the criteria for evaluating the replacement algorithms, their efficiency, and their preference for different types of program code. Dynamically swapping the replacement algorithm for a more efficient one during program execution improves the performance of the computer system.
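The core idea of dynamic replacement-policy swapping can be sketched by simulating candidate policies on a recent access trace and switching to the winner. This toy version scores only LRU and FIFO by hit count; the paper's actual procedure uses multi-criteria decision-making over several caching statistics:

```python
from collections import OrderedDict, deque

def hits_lru(trace, capacity):
    cache, hits = OrderedDict(), 0
    for key in trace:
        if key in cache:
            hits += 1
            cache.move_to_end(key)         # refresh recency
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[key] = True
    return hits

def hits_fifo(trace, capacity):
    cache, order, hits = set(), deque(), 0
    for key in trace:
        if key in cache:
            hits += 1
        else:
            if len(cache) >= capacity:
                cache.discard(order.popleft())  # evict oldest insertion
            cache.add(key)
            order.append(key)
    return hits

def pick_policy(trace, capacity):
    # Swap in whichever policy scores more hits on the recent trace.
    scores = {"LRU": hits_lru(trace, capacity),
              "FIFO": hits_fifo(trace, capacity)}
    return max(scores, key=scores.get)

# A recency-heavy ("locally-point") trace favours LRU over FIFO.
trace = [1, 2, 1, 3, 1, 4, 1, 5]
assert pick_policy(trace, 2) == "LRU"
```

On this trace with two lines, LRU scores 3 hits and FIFO 2, so the controller would swap LRU in.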
APA, Harvard, Vancouver, ISO, and other styles
5

Eswer, Varuna, and Sanket S. Naik Dessai. "Processor performance metrics analysis and implementation for MIPS using an open source OS." International Journal of Reconfigurable and Embedded Systems (IJRES) 10, no. 2 (July 1, 2021): 137. http://dx.doi.org/10.11591/ijres.v10.i2.pp137-148.

Full text
Abstract:
Processor efficiency is important in an embedded system. The efficiency of the processor depends on the L1 cache and translation lookaside buffer (TLB). It is required to understand L1 cache and TLB performance during varied load on the processor, and hence this paper studies the performance of varying loads with the caches on MIPS and the operating system (OS). The proposed implementation considers the counting of instruction execution for the respective cache and TLB management, and the events are measured using dedicated counters in software. Software counters are used because of the limitations of hardware counters in the MIPS32. Twenty-seven metrics are identified and implemented for the performance measurement of the L1 cache and TLB on the MIPS32 processor. The generated data helps future research in compiler tuning, memory management design for the OS, analysing architectural issues, system benchmarking, scalability, address space analysis, studies of bus communication among processors and their workload-sharing characterisation, and kernel profiling.
APA, Harvard, Vancouver, ISO, and other styles
6

Eswer, Varuna, and Sanket Suresh Naik Dessai. "Embedded Software Engineering Approach to Implement BCM5354 Processor Performance." International Journal of Software Engineering and Technologies (IJSET) 1, no. 1 (April 1, 2016): 41. http://dx.doi.org/10.11591/ijset.v1i1.4568.

Full text
Abstract:
Efficiency of a processor is a critical factor for an embedded system. One of the deciding factors for efficiency is the functioning of the L1 cache and Translation Lookaside Buffer (TLB). Certain processors have the L1 cache and TLB managed by the operating system; MIPS32 is one such processor. The performance of the L1 cache and TLB necessitates a detailed study to understand its management during varied load on the processor. This paper presents an implementation to analyse the performance of the MIPS32 processor L1 cache and TLB management by the operating system (OS) using a software engineering approach. Software engineering provides better clarity for system development and its performance analysis. If the requirement analysis for the performance measurement is sorted out very clearly in the initial stage, the implementation methodologies become very economical, without any ambiguity. In this paper an implementation is proposed to determine the processor performance metrics using a software engineering approach, considering the counting of the respective cache and TLB management instruction execution, which is an event that is measurable with the use of dedicated counters. The lack of hardware counters in the MIPS32 processor results in the usage of software-based event counters that are defined in the kernel. This paper implements a subset of MIPS32 processor performance measurement metrics using software-based counters. Techniques were developed to overcome the challenges posed by the kernel source code. To facilitate better understanding of the implementation procedure of the software-based processor performance counters, use-case analysis diagrams, flow charts, screenshots, and knowledge nuggets are supplemented along with histograms of the cache and TLB events data generated by the proposed implementation. Twenty-seven metrics have been identified and implemented to provide data related to the events of the L1 cache and TLB on the MIPS32 processor.
The generated data can be used in tuning of compiler, OS memory management design, system benchmarking, scalability, analysing architectural issues, address space analysis, understanding bus communication, kernel profiling, and workload characterisation.
APA, Harvard, Vancouver, ISO, and other styles
7

Papagiannis, Anastasios, Giorgos Saloustros, Giorgos Xanthakis, Giorgos Kalaentzis, Pilar Gonzalez-Ferez, and Angelos Bilas. "Kreon." ACM Transactions on Storage 17, no. 1 (February 2, 2021): 1–32. http://dx.doi.org/10.1145/3418414.

Full text
Abstract:
Persistent key-value stores have emerged as a main component in the data access path of modern data processing systems. However, they exhibit high CPU and I/O overhead. Nowadays, due to power limitations, it is important to reduce CPU overheads for data processing. In this article, we propose Kreon, a key-value store that targets servers with flash-based storage, where CPU overhead and I/O amplification are more significant bottlenecks compared to I/O randomness. We first observe that two significant sources of overhead in key-value stores are: (a) the use of compaction in Log-Structured Merge-Trees (LSM-Trees), which constantly merge and sort large data segments, and (b) the use of an I/O cache to access devices, which incurs overhead even for data that reside in memory. To avoid these, Kreon performs data movement from level to level by using partial reorganization instead of full data reorganization via the use of a full index per level. Kreon uses memory-mapped I/O via a custom kernel path to avoid a user-space cache. For a large dataset, Kreon reduces CPU cycles/op by up to 5.8×, reduces I/O amplification for inserts by up to 4.61×, and increases insert ops/s by up to 5.3×, compared to RocksDB.
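The memory-mapped I/O idea — letting the OS page cache stand in for a user-space I/O cache, with a full in-memory index over the data — can be caricatured in a few lines. The on-disk layout and class below are invented for illustration and bear no relation to Kreon's actual format or kernel path:

```python
import mmap
import os
import tempfile

class MmapLog:
    """Toy append-only key-value log accessed through mmap, so reads of
    resident pages hit the OS page cache directly."""

    def __init__(self, path, size=4096):
        self.fd = os.open(path, os.O_RDWR | os.O_CREAT)
        os.ftruncate(self.fd, size)
        self.mm = mmap.mmap(self.fd, size)
        self.tail = 0
        self.index = {}  # full index: key -> (value offset, value length)

    def put(self, key, value):
        rec = key + b"\x00" + value
        self.mm[self.tail:self.tail + len(rec)] = rec
        self.index[key] = (self.tail + len(key) + 1, len(value))
        self.tail += len(rec)

    def get(self, key):
        off, n = self.index[key]
        return self.mm[off:off + n]

path = os.path.join(tempfile.mkdtemp(), "kv.log")
log = MmapLog(path)
log.put(b"k1", b"hello")
assert log.get(b"k1") == b"hello"
```

Lookups never copy through a user-space buffer cache: the `get` slice reads straight from the mapping, which is the overhead Kreon's design targets.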
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Jiaoyi, and Yihan Gao. "CARMI." Proceedings of the VLDB Endowment 15, no. 11 (July 2022): 2679–91. http://dx.doi.org/10.14778/3551793.3551823.

Full text
Abstract:
Learned indexes, which use machine learning models to replace traditional index structures, have shown promising results in recent studies. However, existing learned indexes exhibit a performance gap between synthetic and real-world datasets, making them far from practical indexes. In this paper, we identify that ignoring the importance of data partitioning during model training is the main reason for this problem. Thus, we explicitly apply data partitioning to index construction and propose a new efficient and updatable cache-aware RMI framework, called CARMI. Specifically, we introduce entropy as a metric to quantify and characterize the effectiveness of data partitioning of tree nodes in learned indexes and propose a novel cost model, laying a new theoretical foundation for future research. Then, based on our novel cost model, CARMI can automatically determine tree structures and model types under various datasets and workloads by a hybrid construction algorithm without any manual tuning. Furthermore, since memory accesses limit the performance of RMIs, a new cache-aware design is also applied in CARMI, which makes full use of the characteristics of the CPU cache to effectively reduce the number of memory accesses. Our experimental study shows that CARMI performs better than baselines, achieving an average of 2.2X/1.9X speedup compared to B+ Tree/ALEX, while using only about 0.77X memory space of B+ Tree. On the SOSD platform, CARMI outperforms all baselines, with an average speedup of 1.2X over the nearest competitor RMI, which has been carefully tuned for each dataset in advance.
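The entropy metric for data partitioning that this abstract mentions can be sketched as Shannon entropy over how keys split across a node's children. This is one plausible reading of the metric, assumed for illustration; CARMI's precise definition and cost model may differ:

```python
import math

def partition_entropy(partition_sizes):
    """Shannon entropy (in bits) of a split of keys across child nodes.

    A perfectly even split maximises entropy; a skewed split, where one
    child holds almost everything, drives it toward zero.
    """
    total = sum(partition_sizes)
    probs = [s / total for s in partition_sizes if s > 0]
    return -sum(p * math.log2(p) for p in probs)

even = partition_entropy([25, 25, 25, 25])  # balanced 4-way split
skew = partition_entropy([97, 1, 1, 1])     # one child dominates
assert abs(even - 2.0) < 1e-9
assert skew < even
```

A construction algorithm can compare candidate tree nodes by such a score, preferring splits that distribute data evenly before fitting per-node models.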
APA, Harvard, Vancouver, ISO, and other styles
9

Godard, Patrice, and Jonathan van Eyll. "BED: a Biological Entity Dictionary based on a graph data model." F1000Research 7 (May 16, 2018): 195. http://dx.doi.org/10.12688/f1000research.13925.2.

Full text
Abstract:
The understanding of molecular processes involved in a specific biological system can be significantly improved by combining and comparing different data sets and knowledge resources. However, these information sources often use different identification systems and an identifier conversion step is required before any integration effort. Mapping between identifiers is often provided by the reference information resources and several tools have been implemented to simplify their use. However, most of these tools do not combine the information provided by individual resources to increase the completeness of the mapping process. Also, deprecated identifiers from former versions of databases are not taken into account. Finally, finding automatically the most relevant path to map identifiers from one scope to the other is often not trivial. The Biological Entity Dictionary (BED) addresses these three challenges by relying on a graph data model describing possible relationships between entities and their identifiers. This model has been implemented using Neo4j and an R package provides functions to query the graph but also to create and feed a custom instance of the database. This design combined with a local installation of the graph database and a cache system make BED very efficient to convert large lists of identifiers.
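The identifier-conversion path finding that BED performs over its graph model can be illustrated with a breadth-first search plus a cache layer. The graph, identifiers, and function below are hypothetical stand-ins; the real BED stores relationships in Neo4j and is queried from R:

```python
from functools import lru_cache

# Hypothetical identifier graph: edges link equivalent or superseded IDs
# across resources.
EDGES = {
    "ENSG01": ["NCBI:111"],
    "NCBI:111": ["UNIPROT:P01", "NCBI:111-deprecated"],
    "NCBI:111-deprecated": [],
    "UNIPROT:P01": [],
}

@lru_cache(maxsize=None)  # cache layer: repeated conversions are free
def convert(source, target_prefix):
    """Breadth-first search for the shortest mapping path into the
    target identifier scope."""
    frontier, seen = [source], {source}
    while frontier:
        nxt = []
        for ident in frontier:
            if ident.startswith(target_prefix):
                return ident
            for nb in EDGES.get(ident, []):
                if nb not in seen:
                    seen.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return None

assert convert("ENSG01", "UNIPROT") == "UNIPROT:P01"
```

Because BFS explores level by level, the first identifier found in the target scope is reached by a shortest path, which mirrors the "most relevant path" selection the abstract describes.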
APA, Harvard, Vancouver, ISO, and other styles
10

Godard, Patrice, and Jonathan van Eyll. "BED: a Biological Entity Dictionary based on a graph data model." F1000Research 7 (July 19, 2018): 195. http://dx.doi.org/10.12688/f1000research.13925.3.

Full text
Abstract:
The understanding of molecular processes involved in a specific biological system can be significantly improved by combining and comparing different data sets and knowledge resources. However, these information sources often use different identification systems and an identifier conversion step is required before any integration effort. Mapping between identifiers is often provided by the reference information resources and several tools have been implemented to simplify their use. However, most of these tools do not combine the information provided by individual resources to increase the completeness of the mapping process. Also, deprecated identifiers from former versions of databases are not taken into account. Finally, finding automatically the most relevant path to map identifiers from one scope to the other is often not trivial. The Biological Entity Dictionary (BED) addresses these three challenges by relying on a graph data model describing possible relationships between entities and their identifiers. This model has been implemented using Neo4j and an R package provides functions to query the graph but also to create and feed a custom instance of the database. This design combined with a local installation of the graph database and a cache system make BED very efficient to convert large lists of identifiers.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Custom data cache tuning"

1

ARIF, ARSLAN. "Performance Optimization of Memory Intensive Applications on FPGA Accelerator." Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2727226.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rajbhandari, Prashish. "Benchmarking a Custom List Data Type in Memcached against Redis." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1460446657.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Custom data cache tuning"

1

Chinazzo, André, Christian De Schryver, Katharina Zweig, and Norbert Wehn. "A Custom Hardware Architecture for the Link Assessment Problem." In Lecture Notes in Computer Science, 57–75. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-21534-6_4.

Full text
Abstract:
Heterogeneous accelerator-enhanced computing architectures are a common solution in embedded computing, mainly due to the constraints in energy and power efficiency. Such accelerator-enhanced systems dispatch data- and computing-intensive tasks to specialized, optimized and thus efficient hardware units, leaving most control flow tasks for the more generic but less efficient central processing units (CPUs). Nowadays, high-performance computing (HPC) systems are also becoming more heterogeneous by incorporating accelerators into the computing nodes. In this chapter, we introduce the concept of heterogeneous computing and present the design of a hardware accelerator for solving the Link Assessment (LA) problem introduced in Chapter 3. The hardware accelerator integrates its main dedicated processing units with a customized cache design and a light-weight data path. We provide detailed area, energy, and timing results for a 28 nm application specific integrated circuit (ASIC) process and DDR3 memory devices. Compared to a CPU-based cluster, our proposed solution uses 38x less memory and is 1030x more energy efficient for processing a users-movies dataset with half a million edges.
APA, Harvard, Vancouver, ISO, and other styles
2

Kumar, N. M. G., Ayaz Ahmad, Dankan Gowda V., S. Lokesh, and Kirti Rahul Rahul Kadam. "An Enhanced Method for Running Embedded Applications in a Power-Efficient Manner." In Advances in Computer and Electrical Engineering, 257–77. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-4974-5.ch013.

Full text
Abstract:
Many modern items that are in widespread use have embedded systems. Because embedded processing can provide complex functions and a rich user experience, it has grown commonplace in many types of electronic products during the last 20 years. Power consumption in embedded systems is regarded as a crucial design criterion among other factors like area, testability, and safety. Low power consumption has therefore become a crucial consideration in the design of embedded microprocessors. The proposed new method takes into consideration both the spatial and temporal locality of the accessed data. In the chapter, the new cache replacement is combined with an efficient cache partitioning method to improve the cache hit rate. In this work, a new modification is proposed for the instruction set design to be used in custom-made processors.
APA, Harvard, Vancouver, ISO, and other styles
3

Joshi, R. C., Manoj Misra, and Narottam Chand. "Energy-Efficient Cache Invalidation in Wireless Mobile Environment." In Mobile Computing, 3012–20. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-054-7.ch226.

Full text
Abstract:
Caching at the mobile client is a potential technique that can reduce the number of uplink requests, lighten the server load, shorten the query latency and increase the data availability. A cache invalidation strategy ensures that any data item cached at a mobile client has same value as on the origin server. Traditional cache invalidation strategies make use of periodic broadcasting of invalidation reports (IRs) by the server. The IR approach suffers from long query latency, larger tuning time and poor utilization of bandwidth. Using updated invalidation report (UIR) method that replaces a small fraction of the recent updates, the query latency can be reduced. To improve upon the IR and UIR based strategies, this chapter presents a synchronous stateful cache maintenance technique called Update Report (UR). The proposed strategy outperforms the IR and UIR strategies by reducing the query latency, minimizing the disconnection overheads, optimizing the use of wireless channel and conserving the client energy.
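The invalidation-report (IR) mechanism this abstract builds on can be sketched as a timestamp check: the server periodically broadcasts per-item update times, and a client drops any cached item updated since it last synchronized. This toy omits the disconnection handling and bandwidth concerns that motivate the proposed Update Report (UR) strategy:

```python
def apply_invalidation_report(cache, report, last_sync):
    """Drop cached items the broadcast report marks as updated since
    the client's last synchronization point."""
    for item, updated_at in report.items():
        if item in cache and updated_at > last_sync:
            del cache[item]  # stale copy: refetch from server on next query
    return cache

client_cache = {"x": 1, "y": 2, "z": 3}
report = {"x": 120, "y": 80}  # server-side update timestamps
apply_invalidation_report(client_cache, report, last_sync=100)
assert "x" not in client_cache                 # updated after last sync
assert client_cache == {"y": 2, "z": 3}        # still valid
```

Items absent from the report (like "z" here) are assumed unchanged, which is why long disconnections force IR clients to discard their whole cache — the gap UR aims to close.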
APA, Harvard, Vancouver, ISO, and other styles
4

Westin, Stu. "Building a Custom Client-Side Research Tool for Online Web-Based Experiments." In Computing Information Technology, 253–66. IGI Global, 2003. http://dx.doi.org/10.4018/978-1-93177-752-0.ch016.

Full text
Abstract:
This chapter describes a general software-based approach to conducting online Web research through the development of a custom research tool. Specifically, the tool is an Internet Explorer-like Web browser that can be designed to deliver experimental treatments and to collect experimental data with great precision and flexibility. The purpose of the manuscript is to introduce this approach to Web-based research, and to discuss the most salient issues, techniques, and problems that are involved in the development and use of such a research instrument. Programming custom event handlers, for a preexisting software object called the WebBrowser Control, constitutes a major part of the research approach. Event handling techniques having to do with downloading and navigation, with browser interface emulation, and with window and session control are presented. Other relevant issues such as cache management, keyboard handling, and accessing HTML page elements through the Document Object Model are also presented.
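The custom event-handling pattern described here — intercepting navigation events to deliver treatments and record data — can be sketched abstractly. The real tool programs the COM WebBrowser Control's events (e.g. in Visual Basic or C++); the stub class and event names below are invented purely to show the handler-registration shape:

```python
class BrowserStub:
    """Minimal event-handler registry mimicking custom handlers on an
    embedded browser control."""

    def __init__(self):
        self.handlers = {}
        self.visited = []  # experimental data: pages actually shown

    def on(self, event, fn):
        self.handlers.setdefault(event, []).append(fn)

    def navigate(self, url):
        # A "before navigate" handler may cancel navigation, the way a
        # custom research browser restricts subjects to treatment pages.
        cancel = any(fn(url) for fn in self.handlers.get("before_navigate", []))
        if not cancel:
            self.visited.append(url)

browser = BrowserStub()
browser.on("before_navigate", lambda url: url.startswith("https://blocked"))
browser.navigate("https://example.org")
browser.navigate("https://blocked.example")
assert browser.visited == ["https://example.org"]
```

Logging inside the handler (rather than scraping afterwards) is what gives such instruments their precision: every navigation attempt, allowed or cancelled, passes through code the researcher controls.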
APA, Harvard, Vancouver, ISO, and other styles
5

D’Hollander, Erik H. "Empowering Parallel Computing with Field Programmable Gate Arrays." In Parallel Computing: Technology Trends. IOS Press, 2020. http://dx.doi.org/10.3233/apc200020.

Full text
Abstract:
After more than 30 years, reconfigurable computing has grown from a concept to a mature field of science and technology. The cornerstone of this evolution is the field programmable gate array, a building block enabling the configuration of a custom hardware architecture. The departure from static von Neumann-like architectures opens the way to eliminate the instruction overhead and to optimize the execution speed and power consumption. FPGAs now live in a growing ecosystem of development tools, enabling software programmers to map algorithms directly onto hardware. Applications abound in many directions, including data centers, IoT, AI, image processing and space exploration. The increasing success of FPGAs is largely due to an improved toolchain with solid high-level synthesis support as well as a better integration with processor and memory systems. On the other hand, long compile times and complex design exploration remain areas for improvement. In this paper we address the evolution of FPGAs towards advanced multi-functional accelerators, discuss different programming models and their HLS language implementations, as well as high-performance tuning of FPGAs integrated into a heterogeneous platform. We pinpoint fallacies and pitfalls, and identify opportunities for language enhancements and architectural refinements.
APA, Harvard, Vancouver, ISO, and other styles
6

Petersen, Wesley, and Peter Arbenz. "Shared Memory Parallelism." In Introduction to Parallel Computing. Oxford University Press, 2004. http://dx.doi.org/10.1093/oso/9780198515760.003.0009.

Full text
Abstract:
Shared memory machines typically have relatively few processors, say 2–128. An intrinsic characteristic of these machines is a strategy for memory coherence and a fast tightly coupled network for distributing data from a commonly accessible memory system. Our test examples were run on two HP Superdome clusters: Stardust is a production machine with 64 PA-8700 processors, and Pegasus is a 32 CPU machine with the same kind of processors. The HP9000 is grouped into cells, each with 4 CPUs, a common memory/cell, and connected to a CCNUMA crossbar network. The network consists of sets of 4×4 crossbars and is shown in Figure 4.2. An effective bandwidth test, the EFF_BW benchmark [116], groups processors into two equally sized sets. Arbitrary pairings are made between elements from each group, Figure 4.3, and the cross-sectional bandwidth of the network is measured for a fixed number of processors and varying message sizes. The results from the HP9000 machine Stardust are shown in Figure 4.4. It is clear from this figure that the cross-sectional bandwidth of the network is quite high. Although not apparent from Figure 4.4, the latency for this test (the intercept near Message Size = 0) is not high. Due to the low incremental resolution of MPI_Wtime, multiple test runs must be done to quantify the latency. Dr Byrde’s tests show that minimum latency is ≳ 1.5μs. A clearer example of a shared memory architecture is the Cray X1 machine, shown in Figures 4.5 and 4.6. In Figure 4.6, the shared memory design is obvious. Each multi-streaming processor (MSP) shown in Figure 4.5 has 4 processors (custom designed processor chips forged by IBM), and 4 corresponding caches. Although not clear from available diagrams, vector memory access apparently permits cache by-pass; hence the term streaming in MSP. That is, vector registers are loaded directly from memory: see, for example, Figure 3.4. 
On each board (called nodes) are 4 such MSPs and 16 memory modules which share a common (coherent) memory view. Coherence is only maintained on each board, but not across multiple board systems.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Custom data cache tuning"

1

Gonzalez, Miguel, Subhash Ayirala, Lyla Maskeen, and Abdulkarim Sofi. "Miniature Viscosity Sensors for EOR Polymer Fluids." In SPE Improved Oil Recovery Conference. SPE, 2022. http://dx.doi.org/10.2118/209430-ms.

Full text
Abstract:
There are currently no technologies available to measure polymer solution viscosities at realistic downhole conditions in a well during enhanced oil recovery (EOR). In this paper, custom-made probes using quartz tuning fork (QTF) resonators are demonstrated for measurements of viscosity of polymer fluids. The electromechanical response of the resonators was calibrated in simple Newtonian fluids and in non-Newtonian polymer fluids at different concentrations. The responses were then used to measure field-collected samples of polymer injection fluids. The measured viscosity values by tuning forks were lower than those measured by the conventional rheometer at 6.8 s-1, indicating the effect of viscoelasticity of the fluid. However, the predicted rheometer viscosity versus QTF measured viscosity showed a perfect exponential correlation, allowing for calibration between the two viscometers. The QTF sensors were shown to successfully produce accurate viscosity measurements of polymer fluids within the required polymer concentration ranges used in the field, and predicted field sample viscosities with less than 5% error from the rheometer data. These devices can be easily integrated into portable systems for lab or wellsite deployment as well as logging tools for downhole deployment.
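The exponential calibration between QTF readings and rheometer viscosities can be reproduced with a least-squares fit of y = a·exp(b·x), done linearly on ln(y). The data below is synthetic, generated to follow an exact exponential; the paper's coefficients and units are not reproduced here:

```python
import math

def fit_exponential(x, y):
    """Fit y = a * exp(b * x) by ordinary least squares on ln(y)."""
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(x) / n, sum(ly) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, ly))
         / sum((xi - mx) ** 2 for xi in x))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic QTF readings vs rheometer values following y = 2 * exp(0.5 x)
qtf = [1.0, 2.0, 3.0, 4.0]
rheo = [2.0 * math.exp(0.5 * x) for x in qtf]
a, b = fit_exponential(qtf, rheo)
assert abs(a - 2.0) < 1e-9 and abs(b - 0.5) < 1e-9
```

Once a and b are fitted against a rheometer, a downhole QTF reading x can be converted to an equivalent rheometer viscosity via a·exp(b·x).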
APA, Harvard, Vancouver, ISO, and other styles
2

Taghavi, O., P. S. Shiakolas, and O. Kuljaca. "Fuzzy Logic Real-Time Digital Control of a Hardware in the Loop Maglev Device Using MATLAB and xPC Target." In ASME 2003 International Mechanical Engineering Congress and Exposition. ASMEDC, 2003. http://dx.doi.org/10.1115/imece2003-42821.

Full text
Abstract:
This work will discuss the use of a single environment for real-time digital control with a hardware-in-the-loop (HIL) magnetic levitation (maglev) device for modeling and controls education, with emphasis on fuzzy logic (FL) feedforward control. This environment utilizes two computers (host and target), an off-the-shelf data acquisition card, and the HIL device (a nonlinear, open-loop, unstable, and time-varying custom-built maglev). The software includes tools from MathWorks Inc., and a C++ compiler. The values of any parameter (control law, reference trajectory) in the Simulink model can be changed dynamically on the host computer and their effects observed in real-time on the HIL system. Real-time data was collected from the HIL device and used in designing, tuning and implementing a feedforward FL controller, all using MathWorks tools, that controlled the HIL device in real-time. It was observed that the tracking error was substantially improved when the FL augmented the control effort of a classical lead compensator. The procedure for the FL development, tuning and hardware implementation along with examples will be presented. This system has been recently completed and was successfully used in an educational setting for one graduate and undergraduate Mechanical Engineering course.
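The fuzzy logic controller tuned in this work can be sketched as triangular membership functions over the tracking error, defuzzified by a weighted average of singleton outputs. The breakpoints and output levels below are invented for illustration; the paper's rule base and tuning are not reproduced:

```python
def tri(x, left, peak, right):
    """Triangular membership function: 0 outside [left, right], 1 at peak."""
    if x <= left or x >= right:
        return 0.0
    return (x - left) / (peak - left) if x <= peak else (right - x) / (right - peak)

def fuzzy_effort(error):
    """Map tracking error to a feedforward effort via three fuzzy rules,
    defuzzified by the weighted average of rule outputs."""
    rules = [
        (tri(error, -2.0, -1.0, 0.0), -1.0),  # negative error -> push down
        (tri(error, -1.0,  0.0, 1.0),  0.0),  # near-zero error -> no correction
        (tri(error,  0.0,  1.0, 2.0),  1.0),  # positive error -> push up
    ]
    w = sum(mu for mu, _ in rules)
    return sum(mu * out for mu, out in rules) / w if w else 0.0

assert fuzzy_effort(0.0) == 0.0
assert fuzzy_effort(0.5) == 0.5
```

In the feedforward arrangement described, such an output would be added to a lead compensator's control effort; tuning then amounts to adjusting the membership breakpoints and output levels against recorded tracking error.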
APA, Harvard, Vancouver, ISO, and other styles
3

Khor, Pei Lin, and Wong Jee Keen Raymond. "Food Allergen Detection in Malaysian Food Using Convolutional Neural Networks." In International Technical Postgraduate Conference 2022. AIJR Publisher, 2022. http://dx.doi.org/10.21467/proceedings.141.15.

Full text
Abstract:
Food allergy is a rising, global epidemic. Some Malaysian dishes contain ingredients that may cause severe allergic reactions. A food allergen detection system for Malaysian food is proposed for tourists with food allergies who are unfamiliar with the wide variety of Malaysian dishes, to prevent severe allergic reactions. This work focuses on three major food allergens: peanuts, cow's milk, and shellfish. A new Malaysian food image dataset was prepared, and transfer learning on the custom dataset was done via fine-tuning and feature extraction techniques. The ResNet50, InceptionV3, and VGG16 architectures are compared based on the accuracy of each model on the testing data. The VGG16 architecture is concluded to be the most suitable neural network model for food allergen detection in Malaysian food. The proposed classifier achieved an accuracy of 80.56% on the test samples. The final model is loaded into a Graphical User Interface (GUI) application to demonstrate the results of the Malaysian food classification model.
APA, Harvard, Vancouver, ISO, and other styles
4

Trownson, Glenn, Peter Gill, William Brayshaw, James Watson, and Jonathan Mann. "Thermomechanical Fatigue Initiation in Nuclear Grades of Austenitic Stainless Steel Using Plant Realistic Loading." In ASME 2022 Pressure Vessels & Piping Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/pvp2022-84760.

Full text
Abstract:
Abstract The effect of a Pressurised Water Reactor (PWR) environment on fatigue life is currently assessed using methods such as NUREG/CR-6909 for initiation and ASME Code Case N809 for crack growth, which may be inherently conservative for certain components, especially when considering plant-relevant loading. The thermal shock testing with thick-walled specimens discussed in this paper allows more plant-relevant loading regimes to be used in assessments, incorporating through-wall stress gradients, thick-walled test specimens, and out-of-phase temperature/strain characteristics. This should help reduce the excess conservatism in current assessment methodologies. The capability of the test facility was first presented in PVP2016-63161 [4]. Since then, significant modifications have been made to maximise the achievable strain amplitudes in the thick-walled specimen geometry while minimising typical test durations. This was achieved by maximising the temperature differential between the hot and cold cycles and tuning the cycle length so that each cycle is long enough to achieve a target strain amplitude without unreasonably extending test durations. This paper details the results of the thermal shock testing performed to date, the development of accompanying Finite Element Analysis (FEA), preliminary initiation data, and the development of the various Non-Destructive Testing (NDT) techniques used to detect fatigue crack initiation on the thick-walled specimens. Owing to the long testing times needed to achieve the required cycling, various NDT techniques were developed and employed to confirm the presence of fatigue cracking in the thick-walled test specimens before more in-depth characterisation using destructive techniques.
Eddy Current Array (ECA) testing was developed specifically for this programme and uses a 360-degree custom bore probe to conduct non-contact ECA measurements on the inner surface of the test specimens. Calibration blocks containing Electrical Discharge Machining (EDM) notches of various sizes were used to calibrate the eddy current responses (amplitude and phase) for prospective flaw depth sizing from indications. The ECA testing has provided indications that fatigue cracking is present within the thick-walled specimens tested, and subsequent Visual Testing (VT) was performed to assess the highlighted indications. The VT methods included a video borescope for imaging the inner walls of the specimen. To increase detection capability (by improving contrast), the VT was used in conjunction with fluorescent Dye-Penetrant (fDP) testing, for which a method was developed to apply fDP inside the bore of the specimen alongside a custom ultraviolet (UV) source to better highlight cracking. This paper discusses the success of the NDT developments and testing performed to date and details the latest complementary crack growth assessment work.
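The calibration step described above, mapping eddy-current responses from EDM notches of known depth to an estimated flaw depth, amounts to interpolating an indication's signal against the calibration points. A minimal sketch, with invented amplitude/depth values (real sizing also uses the signal phase, which is omitted here):

```python
# Illustrative flaw-depth sizing from an eddy-current amplitude by
# piecewise-linear interpolation between calibration responses measured on
# EDM notches of known depth. All numbers below are hypothetical; they are
# not taken from the paper's calibration blocks.
from bisect import bisect_left

# (notch depth in mm, measured response amplitude in volts) - invented
CALIBRATION = [(0.5, 0.8), (1.0, 1.9), (2.0, 4.1), (3.0, 6.2)]

def estimate_depth(amplitude):
    """Interpolate flaw depth from a measured amplitude; clamp at the ends."""
    amps = [a for _, a in CALIBRATION]
    if amplitude <= amps[0]:
        return CALIBRATION[0][0]
    if amplitude >= amps[-1]:
        return CALIBRATION[-1][0]
    i = bisect_left(amps, amplitude)
    (d0, a0), (d1, a1) = CALIBRATION[i - 1], CALIBRATION[i]
    return d0 + (d1 - d0) * (amplitude - a0) / (a1 - a0)

# An indication between the 1.0 mm and 2.0 mm notch responses:
print(f"{estimate_depth(3.0):.2f} mm")  # prints "1.50 mm"
```

Clamping at the calibration range's ends is a deliberately conservative choice for the sketch; extrapolating beyond the deepest calibrated notch would require additional justification.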
APA, Harvard, Vancouver, ISO, and other styles
5

Gautam, Sandarbh, Daulet Magzymov, Birol Dindoruk, Richard Fyfe, and Kory Holmes. "Quantification of the Impact of Pressure on Relative Permeability Curves Utilizing Automated Unit with Gamma Ray Scanning Capability." In SPE Annual Technical Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/210297-ms.

Full text
Abstract:
Abstract The physics of multiphase flow in porous media relies heavily on the concept of relative permeability. Moreover, relative permeability is an integral input parameter for any numerical reservoir simulation representing multiphase flow in porous media. Relative permeability curves are often used as tuning parameters to match elements of the production history, and it is common to see a single set of fixed relative permeability curves applied across an entire complex large-scale reservoir simulation. In this study, we experimentally investigate the effect of high pressure on relative permeability curves. We use a state-of-the-art, custom-made, steady-state relative permeability flow system with a gamma-ray source. The setup can handle pressures from atmospheric up to 10,000 psi and temperatures up to 200 °C. For this paper, we use a model oil and brine (n-hexane and an aqueous sodium iodide solution), with Berea sandstone as the porous medium. These simple fluids are chosen to avoid secondary effects of fluid-rock interaction, such as wettability alteration, asphaltenes, and gas dissolution; they also avoid fluid-fluid interactions, miscibility, and coupling between phase behavior and flow. We run relative permeability scans at a fixed temperature (isotherm) and at several pressures (isobars): 100, 2000, and 4000 psia. The resulting curves are then compared to examine the impact of pressure. There are two main possible outcomes. The first is that pressure has no significant effect on the relative permeability curves, confirming the status quo in which a single set of fixed curves is used for an entire simulation study. The second is that pressure has a considerable effect on the curves.
Such an outcome would fundamentally question the common assumption of fixed relative permeability curves that is broadly applied in the industry. Either way, the study will contribute to a better understanding of multiphase flow in porous media under high and variable pressure conditions. Moreover, we are able to observe in-situ phase saturation propagation via radioactive scanning of the core. Monitoring the core simultaneously with the relative permeability measurements will shed light on in-situ phase propagation at realistic conditions. Systematic data addressing the pressure effect on relative permeability are extremely scarce in the literature, even though pressure varies significantly in the reservoir over the lifetime of a field. It is therefore essential to understand the pressure effect on relative permeability under well-controlled laboratory conditions. The outcomes of this paper may help engineers improve design, simulation, and prediction during field development and the decision-making process.
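To make concrete what "comparing relative permeability curves at several isobars" involves, a Corey-type parametric model is a common way to represent such curves. The sketch below is purely illustrative: the paper measures curves experimentally, and the endpoints, exponents, and hypothetical per-isobar parameter shifts here are invented, not fitted to any data.

```python
# Corey-type relative permeability curves, a common parametric form used to
# represent measured data. If pressure affected the curves, data measured at
# different isobars would need different fitted parameters; the per-isobar
# values below are invented solely to illustrate that comparison.

def corey_krw(sw, swc=0.2, sor=0.25, krw_max=0.4, nw=3.0):
    """Water relative permeability vs water saturation sw."""
    s = (sw - swc) / (1.0 - swc - sor)   # normalized mobile saturation
    s = min(1.0, max(0.0, s))
    return krw_max * s ** nw

def corey_kro(sw, swc=0.2, sor=0.25, kro_max=0.9, no=2.0):
    """Oil relative permeability vs water saturation sw."""
    s = (sw - swc) / (1.0 - swc - sor)
    s = min(1.0, max(0.0, s))
    return kro_max * (1.0 - s) ** no

# Hypothetical fits at the three isobars used in the study:
ISOBARS_PSIA = {100: dict(nw=3.0), 2000: dict(nw=2.8), 4000: dict(nw=2.6)}

for p, params in ISOBARS_PSIA.items():
    krw_mid = corey_krw(0.5, **params)
    kro_mid = corey_kro(0.5)
    print(f"{p:>5} psia: krw(Sw=0.5) = {krw_mid:.4f}, kro(Sw=0.5) = {kro_mid:.4f}")
```

If the study's second outcome holds, a comparison like this (one fitted parameter set per isobar) would replace the single fixed curve set conventionally carried through an entire simulation.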
APA, Harvard, Vancouver, ISO, and other styles