
Journal articles on the topic "Non-uniform memory access"



Browse the 44 best scholarly journal articles on the topic "Non-uniform memory access".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read the abstract of the work online, provided the relevant details are available in the metadata.

Browse journal articles from a wide range of disciplines and compile an accurate bibliography.

1

Lameter, Christoph. "An overview of non-uniform memory access". Communications of the ACM 56, nr 9 (wrzesień 2013): 59–54. http://dx.doi.org/10.1145/2500468.2500477.

2

Lameter, Christoph. "NUMA (Non-Uniform Memory Access): An Overview". Queue 11, nr 7 (lipiec 2013): 40–51. http://dx.doi.org/10.1145/2508834.2513149.

3

Wang, Rui-bo, Kai Lu, and Xi-cheng Lu. "Aware conflict detection of non-uniform memory access system and prevention for transactional memory". Journal of Central South University 19, no. 8 (August 2012): 2266–71. http://dx.doi.org/10.1007/s11771-012-1270-4.

4

MOTLAGH, BAHMAN S., and RONALD F. DeMARA. "PERFORMANCE OF SCALABLE SHARED-MEMORY ARCHITECTURES". Journal of Circuits, Systems and Computers 10, no. 01n02 (February 2000): 1–22. http://dx.doi.org/10.1142/s0218126600000068.

Abstract:
Analytical models were developed and simulations of memory latency were performed for Uniform Memory Access (UMA), Non-Uniform Memory Access (NUMA), Local-Remote-Global (LRG), and RCR architectures for hit rates from 0.1 to 0.9 in steps of 0.1, memory access times of 10 to 100 ns, proportions of read/write access from 0.01 to 0.1, and block sizes of 8 to 64 words. The RCR architecture provides favorable performance over UMA and NUMA architectures for all ranges of application and system parameters. RCR outperforms LRG architectures when the hit rate of the processor cache exceeds 80% and that of the replicated memory exceeds 25%. Thus, inclusion of a small replicated memory at each processor significantly reduces expected access time since all replicated memory hits become independent of global traffic. For configurations of up to 32 processors, results show that latency is further reduced by distinguishing burst-mode transfers between isolated memory accesses and those which are incrementally outside the working set.
5

Nikolopoulos, Dimitrios S., Ernest Artiaga, Eduard Ayguadé, and Jesús Labarta. "Scaling Non-Regular Shared-Memory Codes by Reusing Custom Loop Schedules". Scientific Programming 11, no. 2 (2003): 143–58. http://dx.doi.org/10.1155/2003/379739.

Abstract:
In this paper we explore the idea of customizing and reusing loop schedules to improve the scalability of non-regular numerical codes in shared-memory architectures with non-uniform memory access latency. The main objective is to implicitly set up affinity links between threads and data, by devising loop schedules that achieve balanced work distribution within irregular data spaces and reusing them as much as possible along the execution of the program for better memory access locality. This transformation provides a great deal of flexibility in optimizing locality, without compromising the simplicity of the shared-memory programming paradigm. In particular, the programmer does not need to explicitly distribute data between processors. The paper presents practical examples from real applications and experiments showing the efficiency of the approach.
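As context for this entry (not taken from the paper), the C/OpenMP sketch below illustrates the general idea of computing an irregular work partition once and reusing it across time steps so that each thread keeps revisiting the same data; the names solve, index_set, and data are hypothetical placeholders, and the paper's actual schedule construction is more elaborate.

#include <omp.h>
#include <stdlib.h>

/* Illustrative sketch only: split an irregular index set across threads once,
 * then reuse the same split in every time step so each thread revisits the
 * same data (better locality on NUMA machines). Assumes distinct entries in
 * index_set (no races) and that the parallel region runs with
 * omp_get_max_threads() threads (dynamic thread adjustment disabled). */
void solve(int nsteps, int nidx, const int *index_set, double *data)
{
    int nthreads = omp_get_max_threads();
    int *start = malloc(nthreads * sizeof *start);
    int *end   = malloc(nthreads * sizeof *end);

    for (int t = 0; t < nthreads; t++) {      /* schedule computed once, reused below */
        start[t] = (int)((long)nidx * t / nthreads);
        end[t]   = (int)((long)nidx * (t + 1) / nthreads);
    }

    for (int step = 0; step < nsteps; step++) {
        #pragma omp parallel
        {
            int t = omp_get_thread_num();     /* same thread -> same chunk every step */
            for (int k = start[t]; k < end[t]; k++)
                data[index_set[k]] += 1.0;    /* placeholder irregular update */
        }
    }
    free(start);
    free(end);
}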
6

Denoyelle, Nicolas, Brice Goglin, Aleksandar Ilic, Emmanuel Jeannot, and Leonel Sousa. "Modeling Non-Uniform Memory Access on Large Compute Nodes with the Cache-Aware Roofline Model". IEEE Transactions on Parallel and Distributed Systems 30, no. 6 (June 1, 2019): 1374–89. http://dx.doi.org/10.1109/tpds.2018.2883056.

7

Priya, Bhukya Krishna, and N. Ramasubramanian. "Improving the Lifetime of Phase Change Memory by Shadow Dynamic Random Access Memory". International Journal of Service Science, Management, Engineering, and Technology 12, no. 2 (March 2021): 154–68. http://dx.doi.org/10.4018/ijssmet.2021030109.

Abstract:
Emerging NVMs are replacing conventional memory technologies due to their huge cell density and low energy consumption. Restricted write endurance is one of the major drawbacks to adopting PCM memories in real-time environments. Non-uniform writes and process variations can damage the memory cell under intensive writes, as PCM memory cells have restricted write endurance. To prolong the lifetime of a PCM, an extra DRAM shadow memory has been added to store the writes that come to the PCM and to level out the wear that occurs on the PCM. An extra address directory stores the addresses of data written to the DRAM, and a counter is used to count the number of times each block is written. Based upon the counter values, the data are written from the DRAM to the PCM; data are brought from the PCM into the DRAM based on the data requirement. Experimental results show a reduction in overall writes to the PCM, which in turn improves the lifetime of the PCM by 5% with low hardware and power overhead.
8

Wittig, Robert, Philipp Schulz, Emil Matus, and Gerhard P. Fettweis. "Accurate Estimation of Service Rates in Interleaved Scratchpad Memory Systems". ACM Transactions on Embedded Computing Systems 21, no. 1 (January 31, 2022): 1–15. http://dx.doi.org/10.1145/3457171.

Abstract:
The prototyping of embedded platforms demands rapid exploration of multi-dimensional parameter sets. Especially the design of the memory system is essential to guarantee high utilization while reducing conflicts at the same time. To aid the design process, several probabilistic models to estimate the throughput of interleaved memory systems have been proposed. While accurately estimating the average throughput of the system, these models fail to determine the impact on individual processing elements. To mitigate this divergence, we extend three known models to include non-uniform access probabilities and priorities.
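For context only (this formula is not quoted from the paper): the classical uniform-access baseline that such probabilistic service-rate models generalize is the expected number of distinct banks, and hence requests served per cycle, when M requesters each pick one of N interleaved banks independently and uniformly at random; the extension described in the abstract replaces the uniform choice with per-requester access probabilities and priorities.

\[
  \mathbb{E}[\text{requests served per cycle}] \;=\; N \left( 1 - \left( 1 - \tfrac{1}{N} \right)^{M} \right)
\]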
9

Wang, Qing, Youyou Lu, Junru Li, Minhui Xie, and Jiwu Shu. "Nap: Persistent Memory Indexes for NUMA Architectures". ACM Transactions on Storage 18, no. 1 (February 28, 2022): 1–35. http://dx.doi.org/10.1145/3507922.

Abstract:
We present Nap, a black-box approach that converts concurrent persistent memory (PM) indexes into non-uniform memory access (NUMA)-aware counterparts. Based on the observation that real-world workloads always feature skewed access patterns, Nap introduces a NUMA-aware layer (NAL) on top of existing concurrent PM indexes, and steers accesses to hot items to this layer. The NAL maintains (1) per-node partial views in PM for serving insert/update/delete operations with failure atomicity and (2) a global view in DRAM for serving lookup operations. The NAL eliminates remote PM accesses to hot items without inducing extra local PM accesses. Moreover, to handle dynamic workloads, Nap adopts a fast NAL switch mechanism. We convert five state-of-the-art PM indexes using Nap. Evaluation on a four-node machine with Optane DC Persistent Memory shows that Nap can improve the throughput by up to 2.3× and 1.56× under write-intensive and read-intensive workloads, respectively.
10

Știrb, Iulia. "Extending NUMA-BTLP Algorithm with Thread Mapping Based on a Communication Tree". Computers 7, no. 4 (December 3, 2018): 66. http://dx.doi.org/10.3390/computers7040066.

Abstract:
The paper presents a Non-Uniform Memory Access (NUMA)-aware compiler optimization for task-level parallel code. The optimization is based on the Non-Uniform Memory Access—Balanced Task and Loop Parallelism (NUMA-BTLP) algorithm (Ştirb, 2018). The algorithm determines the type of each thread in the source code based on a static analysis of the code. After assigning a type to each thread, NUMA-BTLP (Ştirb, 2018) calls the NUMA-BTDM mapping algorithm (Ştirb, 2016), which uses the PThreads routine pthread_setaffinity_np to set the CPU affinities of the threads (i.e., thread-to-core associations) based on their type. The algorithms produce an improved thread mapping for NUMA systems by mapping threads that share data to the same core(s), allowing fast access to L1 cache data. The paper proves that PThreads-based task-level parallel code which is optimized by NUMA-BTLP (Ştirb, 2018) and NUMA-BTDM (Ştirb, 2016) at compile time runs time- and energy-efficiently on NUMA systems. The results show that energy consumption is reduced by up to 5% at the same execution time for one of the tested real benchmarks and by up to 15% for another benchmark running in an infinite loop. The algorithms can be used in real-time control systems such as client/server based applications which require efficient access to shared resources. Most often, task parallelism is used in the implementation of the server and loop parallelism is used for the client.
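The affinity routine named in the abstract, pthread_setaffinity_np, is a standard GNU/Linux extension; the minimal C sketch below shows only that call in isolation, pinning two data-sharing threads to the same hand-chosen core (the pin_to_core helper and the choice of core 0 are illustrative assumptions, not the NUMA-BTLP/NUMA-BTDM analysis itself).

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Pin the calling thread to one CPU core (illustrative helper; the real
 * algorithms derive the core set from their static analysis of the code). */
static void pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *worker(void *arg)
{
    pin_to_core(*(int *)arg);   /* threads that share data get the same core */
    printf("thread running on CPU %d\n", sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int core = 0;               /* assumption: the two threads share data, so co-locate them */
    pthread_create(&t1, NULL, worker, &core);
    pthread_create(&t2, NULL, worker, &core);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

Built with gcc -pthread; in the compile-time scheme the paper describes, the core assignment would be produced automatically from the communication tree rather than written by hand.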
11

Guo, Xiao Mei, Wei Zhao, Li Hong Zhang, and Wen Hua Yu. "Parallel Performance of MPI Based Parallel FDTD on NUMA Architecture Workstation". Advanced Materials Research 532-533 (June 2012): 1115–19. http://dx.doi.org/10.4028/www.scientific.net/amr.532-533.1115.

Abstract:
This paper introduces a parallel FDTD (Finite Difference Time Domain) algorithm based on MPI (Message Passing Interface) parallel environment and NUMA (Non-Uniform Memory Access) architecture workstation. The FDTD computation is carried out independently in local meshes in each process. The data are exchanged by communication between adjacent subdomains to achieve the FDTD parallel method. The results show the consistency between serial and parallel algorithms, and the computing efficiency is improved effectively.
12

ALKOWAILEET, WAIL Y., DAVID CARRILLO-CISNEROS, ROBERT V. LIM, and ISAAC D. SCHERSON. "NUMA-Aware Multicore Matrix Multiplication". Parallel Processing Letters 24, no. 04 (December 2014): 1450006. http://dx.doi.org/10.1142/s0129626414500066.

Abstract:
A user-level scheduling approach, along with a specific data alignment for matrix multiplication on cache-coherent Non-Uniform Memory Access (ccNUMA) architectures, is presented. Addressing the data locality problem that can occur in such systems potentially alleviates memory bottlenecks. We show experimentally that a thread scheduler that is agnostic of data placement (e.g., OpenMP 3.1) produces a high number of cache misses on a ccNUMA machine. To overcome this memory contention problem, we show how proper memory mapping and scheduling manage to tune an existing matrix multiplication implementation and reduce the number of cache misses by 67% and, consequently, the computation time by up to 22%. Finally, we show a relationship between cache misses and the gained speedup as a novel figure of merit to measure the quality of the method.
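The abstract does not spell out the exact mapping used, so the C/OpenMP sketch below only illustrates the generic ccNUMA technique such work builds on: first-touch initialization using the same static schedule as the compute loop, so the rows each thread multiplies are resident on its own NUMA node (the matrix size N and the naive triple-loop kernel are placeholder assumptions).

#include <stdlib.h>
#include <omp.h>

#define N 2048   /* placeholder problem size */

int main(void)
{
    double *A = malloc((size_t)N * N * sizeof *A);
    double *B = malloc((size_t)N * N * sizeof *B);
    double *C = malloc((size_t)N * N * sizeof *C);
    if (!A || !B || !C) return 1;

    /* First touch: the thread that will later compute row i writes it first, so with
     * the default first-touch page policy the pages of A and C land on that thread's
     * NUMA node (B is read by all threads and stays distributed across nodes). */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            A[(size_t)i * N + j] = 1.0;
            B[(size_t)i * N + j] = 2.0;
            C[(size_t)i * N + j] = 0.0;
        }

    /* Same static schedule: each thread multiplies exactly the rows it first touched. */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < N; i++)
        for (int k = 0; k < N; k++) {
            double a = A[(size_t)i * N + k];
            for (int j = 0; j < N; j++)
                C[(size_t)i * N + j] += a * B[(size_t)k * N + j];
        }

    free(A); free(B); free(C);
    return 0;
}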
13

Zheng, Yili. "Optimizing UPC Programs for Multi-Core Systems". Scientific Programming 18, no. 3-4 (2010): 183–91. http://dx.doi.org/10.1155/2010/646829.

Abstract:
The Partitioned Global Address Space (PGAS) model of Unified Parallel C (UPC) can help users express and manage application data locality on non-uniform memory access (NUMA) multi-core shared-memory systems to get good performance. First, we describe several UPC program optimization techniques that are important to achieving good performance on NUMA multi-core computers with examples and quantitative performance results. Second, we use two numerical computing kernels, parallel matrix–matrix multiplication and parallel 3-D FFT, to demonstrate the end-to-end development and optimization for UPC applications. Our results show that the optimized UPC programs achieve very good and scalable performance on current multi-core systems and can even outperform vendor-optimized libraries in some cases.
14

Zhang, Wei, Zihao Jiang, Zhiguang Chen, Nong Xiao, and Yang Ou. "NUMA-Aware DGEMM Based on 64-Bit ARMv8 Multicore Processors Architecture". Electronics 10, no. 16 (August 17, 2021): 1984. http://dx.doi.org/10.3390/electronics10161984.

Abstract:
Double-precision general matrix multiplication (DGEMM) is an essential kernel for measuring the potential performance of an HPC platform. ARMv8-based system-on-chips (SoCs) have become candidates for the next-generation HPC systems with their highly competitive performance and energy efficiency. Therefore, it is meaningful to design high-performance DGEMM for ARMv8-based SoCs. However, as ARMv8-based SoCs integrate more and more cores, modern CPUs use non-uniform memory access (NUMA). NUMA restricts the performance and scalability of DGEMM when many threads access remote NUMA domains. This poses a challenge for developing high-performance DGEMM on multi-NUMA architectures. We present a NUMA-aware method to reduce the number of cross-die and cross-chip memory access events. The critical enabler for NUMA-aware DGEMM is to leverage two levels of parallelism between and within nodes in a purely threaded implementation, which allows the task independence and data localization of NUMA nodes. We have implemented NUMA-aware DGEMM in OpenBLAS and evaluated it on a dual-socket server with 48-core processors based on the Kunpeng920 architecture. The results show that NUMA-aware DGEMM effectively reduces the number of cross-die and cross-chip memory accesses, significantly enhancing the scalability of DGEMM and increasing its performance by 17.1% on average, with the most remarkable improvement being 21.9%.
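A generic libnuma sketch in C (not the OpenBLAS-internal code described in the paper) showing the basic mechanism a NUMA-aware DGEMM relies on: bind a worker to a node and allocate its buffers from that node's local memory. The buffer size and the per-node loop are illustrative assumptions; link with -lnuma.

#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }
    int nodes = numa_num_configured_nodes();
    size_t bytes = (size_t)1024 * 1024 * sizeof(double);     /* placeholder buffer size */

    for (int node = 0; node < nodes; node++) {
        numa_run_on_node(node);                              /* bind the calling thread to this node */
        double *block = numa_alloc_onnode(bytes, node);      /* node-local allocation */
        if (!block) {
            fprintf(stderr, "allocation on node %d failed\n", node);
            return 1;
        }
        block[0] = 1.0;                                      /* touch the node-local memory */
        printf("node %d: local buffer at %p\n", node, (void *)block);
        numa_free(block, bytes);
    }
    return 0;
}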
15

Vollebregt, Edwin. "Abstract Level Parallelization of Finite Difference Methods". Scientific Programming 6, no. 4 (1997): 331–44. http://dx.doi.org/10.1155/1997/321965.

Abstract:
A formalism is proposed for describing finite difference calculations in an abstract way. The formalism consists of index sets and stencils, for characterizing the structure of sets of data items and interactions between data items ("neighbouring relations"). The formalism provides a means for lifting programming to a more abstract level. This simplifies the tasks of performance analysis and verification of correctness, and opens the way for automatic code generation. The notation is particularly useful in parallelization, for the systematic construction of parallel programs in a process/channel programming paradigm (e.g., message passing). This is important because message passing, unfortunately, is still the only approach that leads to acceptable performance for many unstructured or irregular problems on parallel computers that have non-uniform memory access times. It will be shown that the use of index sets and stencils greatly simplifies the determination of which data must be exchanged between different computing processes.
16

Chen, Shizhao, Jianbin Fang, Chuanfu Xu, and Zheng Wang. "Adaptive Hybrid Storage Format for Sparse Matrix–Vector Multiplication on Multi-Core SIMD CPUs". Applied Sciences 12, no. 19 (September 29, 2022): 9812. http://dx.doi.org/10.3390/app12199812.

Abstract:
Optimizing sparse matrix–vector multiplication (SpMV) is challenging due to the non-uniform distribution of the non-zero elements of the sparse matrix. The best-performing SpMV format changes depending on the input matrix and the underlying architecture, and there is no "one-size-fits-all" format. A hybrid scheme combining multiple SpMV storage formats allows one to choose an appropriate format for the target matrix and hardware. However, existing hybrid approaches are inadequate for utilizing the SIMD units of modern multi-core CPUs, and it remains unclear how best to mix different SpMV formats for a given matrix. This paper presents a new hybrid storage format for sparse matrices, specifically targeting multi-core CPUs with SIMD units. Our approach partitions the target sparse matrix into two segments based on the regularity of the memory access pattern, where each segment is stored in a format suitable for its memory access pattern. Unlike prior hybrid storage schemes that rely on the user to determine the data partition among storage formats, we employ machine learning to build a predictive model to automatically determine the partition threshold on a per-matrix basis. Our predictive model is first trained offline, and the trained model can be applied to any new, unseen sparse matrix. We apply our approach to 956 matrices and evaluate its performance on three distinct multi-core CPU platforms: a 72-core Intel Knights Landing (KNL) CPU, a 128-core AMD EPYC CPU, and a 64-core Phytium ARMv8 CPU. Experimental results show that our hybrid scheme, combined with the predictive model, outperforms the best-performing alternative by 2.9%, 17.5% and 16% on average on KNL, AMD, and Phytium, respectively.
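The hybrid format and the learned partition threshold are specific to this paper and are not reproduced here; for reference, the C kernel below is the plain CSR sparse matrix–vector product that such hybrid schemes partition and specialize.

#include <stddef.h>

/* Plain CSR sparse matrix-vector product y = A*x.
 * row_ptr has n_rows+1 entries; col_idx/values hold the non-zeros row by row. */
void spmv_csr(size_t n_rows,
              const size_t *row_ptr,
              const int    *col_idx,
              const double *values,
              const double *x,
              double       *y)
{
    for (size_t i = 0; i < n_rows; i++) {
        double sum = 0.0;
        for (size_t k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            sum += values[k] * x[col_idx[k]];
        y[i] = sum;
    }
}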
17

Liu, Peini, and Jordi Guitart. "Performance characterization of containerization for HPC workloads on InfiniBand clusters: an empirical study". Cluster Computing 25, no. 2 (November 16, 2021): 847–68. http://dx.doi.org/10.1007/s10586-021-03460-8.

Abstract:
Containerization technology offers an appealing alternative for encapsulating and operating applications (and all their dependencies) without being constrained by the performance penalties of using Virtual Machines and, as a result, has attracted the interest of the High-Performance Computing (HPC) community to obtain fast, customized, portable, flexible, and reproducible deployments of their workloads. Previous work in this area has demonstrated that containerized HPC applications can exploit InfiniBand networks, but has ignored the potential of multi-container deployments which partition the processes that belong to each application into multiple containers in each host. Partitioning HPC applications has demonstrated to be useful when using virtual machines by constraining them to a single NUMA (Non-Uniform Memory Access) domain. This paper conducts a systematic study on the performance of multi-container deployments with different network fabrics and protocols, focusing especially on InfiniBand networks. We analyze the impact of container granularity and its potential to exploit processor and memory affinity to improve applications' performance. Our results show that default Singularity can achieve near bare-metal performance but does not support fine-grain multi-container deployments. Docker and Singularity-instance have similar behavior in terms of the performance of deployment schemes with different container granularity and affinity. This behavior differs for the several network fabrics and protocols, and depends as well on the application communication patterns and the message size. Moreover, deployments on InfiniBand are also more impacted by the computation and memory allocation, and because of that, they can exploit the affinity better.
18

Su, Xing, and Fei Lei. "Hybrid-Grained Dynamic Load Balanced GEMM on NUMA Architectures". Electronics 7, no. 12 (November 27, 2018): 359. http://dx.doi.org/10.3390/electronics7120359.

Abstract:
The Basic Linear Algebra Subprograms (BLAS) is a fundamental numerical software library, and GEneral Matrix Multiply (GEMM) is the most important computational kernel routine in the BLAS library. On multi-core and many-core processors, the whole workload of GEMM is partitioned and scheduled to multiple threads to exploit the parallel hardware. Generally, the workload is equally partitioned among threads and all threads are expected to accomplish their work in roughly the same time. However, this is not the case on Non-Uniform Memory Access (NUMA) architectures. The NUMA effect may cause threads to run at different speeds, and the overall execution time of GEMM is determined by the slowest thread. In this paper, we propose a hybrid-grained dynamic load-balancing method to reduce the harm of the NUMA effect by allowing fast threads to steal work from slow ones. We evaluate the proposed method on Phytium 2000+, an emerging 64-core high-performance processor based on Arm's AArch64 architecture. Results show that our method reduces the synchronization overhead by 51.5% and achieves an improvement of GEMM performance by 1.9%.
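A toy C/OpenMP sketch of the work-stealing idea the abstract describes: each thread first drains the block queue of its own (assumed) NUMA domain and then steals blocks from other domains. The domain count, block count, and the omitted GEMM block computation are placeholder assumptions, not the paper's actual OpenBLAS-level implementation.

#include <omp.h>
#include <stdio.h>

#define NDOMAINS 4              /* assumption: 4 NUMA domains, blocks pre-partitioned per domain */
#define BLOCKS_PER_DOMAIN 256

int main(void)
{
    int next[NDOMAINS] = {0};   /* next unclaimed block in each domain's queue */
    long done[NDOMAINS] = {0};

    #pragma omp parallel
    {
        int home = omp_get_thread_num() % NDOMAINS;
        for (int d = 0; d < NDOMAINS; d++) {
            int dom = (home + d) % NDOMAINS;        /* own domain first, then steal */
            for (;;) {
                int blk;
                #pragma omp atomic capture
                blk = next[dom]++;                  /* claim one block atomically */
                if (blk >= BLOCKS_PER_DOMAIN)
                    break;                          /* this domain's queue is empty */
                /* ... compute GEMM block 'blk' of domain 'dom' here ... */
                #pragma omp atomic
                done[dom]++;
            }
        }
    }
    for (int d = 0; d < NDOMAINS; d++)
        printf("domain %d: %ld blocks computed\n", d, done[d]);
    return 0;
}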
19

Ferreira Lima, João Vicente, Issam Raïs, Laurent Lefèvre, and Thierry Gautier. "Performance and energy analysis of OpenMP runtime systems with dense linear algebra algorithms". International Journal of High Performance Computing Applications 33, no. 3 (August 9, 2018): 431–43. http://dx.doi.org/10.1177/1094342018792079.

Abstract:
In this article, we analyze the performance and energy consumption of five OpenMP runtime systems on a non-uniform memory access (NUMA) platform. We also selected three CPU-level optimizations or techniques to evaluate their impact on the runtime systems: the processor features Turbo Boost and C-States, and CPU Dynamic Voltage and Frequency Scaling through Linux CPUFreq governors. We present an experimental study to characterize OpenMP runtime systems on the three main kernels in dense linear algebra algorithms (Cholesky, LU, and QR) in terms of performance and energy consumption. Our experimental results suggest that OpenMP runtime systems can be considered as a new energy leverage, and that Turbo Boost, as well as C-States, significantly impacted performance and energy. CPUFreq governors had more impact with Turbo Boost disabled, since both optimizations reduced performance due to CPU thermal limits. An LU factorization with the concurrent-write extension from libKOMP achieved up to a 63% performance gain and a 29% energy decrease over the original PLASMA algorithm using the GNU C compiler (GCC) libGOMP runtime.
20

Denoyelle, Nicolas, John Tramm, Kazutomo Yoshii, Swann Perarnau, and Pete Beckman. "NUMA-AWARE DATA MANAGEMENT FOR NEUTRON CROSS SECTION DATA IN CONTINUOUS ENERGY MONTE CARLO NEUTRON TRANSPORT SIMULATION". EPJ Web of Conferences 247 (2021): 04020. http://dx.doi.org/10.1051/epjconf/202124704020.

Abstract:
The calculation of macroscopic neutron cross-sections is a fundamental part of the continuous-energy Monte Carlo (MC) neutron transport algorithm. MC simulations of full nuclear reactor cores are computationally expensive, making high-accuracy simulations impractical for most routine reactor analysis tasks because of their long time to solution. Thus, preparation of MC simulation algorithms for next generation supercomputers is extremely important, as improvements in computational performance and efficiency will directly translate into improvements in achievable simulation accuracy. Due to the stochastic nature of the MC algorithm, cross-section data tables are accessed in a highly randomized manner, resulting in frequent cache misses and latency-bound memory accesses. Furthermore, contemporary and next generation non-uniform memory access (NUMA) computer architectures, featuring very high latencies and less cache space per core, will exacerbate this behaviour. The absence of a topology-aware allocation strategy in existing high-performance computing (HPC) programming models is a major source of performance problems in NUMA systems. Thus, to improve performance of the MC simulation algorithm, we propose topology-aware data allocation strategies that allow full control over the location of data structures within a memory hierarchy. A new memory management library, known as AML, has recently been created to facilitate this mapping. To evaluate the usefulness of AML in the context of MC reactor simulations, we have converted two existing MC transport cross-section lookup “proxy-applications” (XSBench and RSBench) to utilize the AML allocation library. In this study, we use these proxy-applications to test several continuous-energy cross-section data lookup strategies (the nuclide grid, unionized grid, logarithmic hash grid, and multipole methods) with a number of AML allocation schemes on a variety of node architectures. We find that the AML library speeds up cross-section lookup performance by up to 2x on current generation hardware (e.g., a dual-socket Skylake-based NUMA system) as compared with naive allocation. These exciting results also show a path forward for efficient performance on next-generation exascale supercomputer designs that feature even more complex NUMA memory hierarchies.
21

Schönherr, Jan H., Ben Juurlink, and Jan Richling. "TACO: A Scheduling Scheme for Parallel Applications on Multicore Architectures". Scientific Programming 22, no. 3 (2014): 223–37. http://dx.doi.org/10.1155/2014/423084.

Abstract:
While multicore architectures are used in the whole product range from server systems to handheld computers, the deployed software still undergoes the slow transition from sequential to parallel. This transition, however, is gaining more and more momentum due to the increased availability of more sophisticated parallel programming environments. Combined with the ever increasing complexity of multicore architectures, this results in a scheduling problem that is different from what it has been, because concurrently executing parallel programs and features such as non-uniform memory access, shared caches, or simultaneous multithreading have to be considered. In this paper, we compare different ways of scheduling multiple parallel applications on multicore architectures. Due to emerging parallel programming environments, we primarily consider applications where the parallelism degree can be changed on the fly. We propose TACO, a topology-aware scheduling scheme that combines equipartitioning and coscheduling, which does not suffer from the drawbacks of the individual concepts. Additionally, TACO is conceptually compatible with contention-aware scheduling strategies. We find that topology-awareness increases performance for all evaluated workloads. The combination with coscheduling is more sensitive towards the executed workloads and NUMA effects. However, the gained versatility allows new use cases to be explored, which were not possible before.
22

Vishwas, B. C., Abhishek Gadia, and Mainak Chaudhuri. "Implementing a Parallel Matrix Factorization Library on the Cell Broadband Engine". Scientific Programming 17, no. 1-2 (2009): 3–29. http://dx.doi.org/10.1155/2009/710321.

Abstract:
Matrix factorization (or often called decomposition) is a frequently used kernel in a large number of applications ranging from linear solvers to data clustering and machine learning. The central contribution of this paper is a thorough performance study of four popular matrix factorization techniques, namely, LU, Cholesky, QR and SVD on the STI Cell broadband engine. The paper explores algorithmic as well as implementation challenges related to the Cell chip-multiprocessor and explains how we achieve near-linear speedup on most of the factorization techniques for a range of matrix sizes. For each of the factorization routines, we identify the bottleneck kernels and explain how we have attempted to resolve the bottleneck and to what extent we have been successful. Our implementations, for the largest data sets that we use, running on a two-node 3.2 GHz Cell BladeCenter (exercising a total of sixteen SPEs), on average, deliver 203.9, 284.6, 81.5, 243.9 and 54.0 GFLOPS for dense LU, dense Cholesky, sparse Cholesky, QR and SVD, respectively. The implementations achieve speedup of 11.2, 12.8, 10.6, 13.0 and 6.2, respectively for dense LU, dense Cholesky, sparse Cholesky, QR and SVD, when running on sixteen SPEs. We discuss the interesting interactions that result from parallelization of the factorization routines on a two-node non-uniform memory access (NUMA) Cell Blade cluster.
23

Liu, Hanyan, Yunping Zhao, Xiaowen Chen, Chen Li, and Jianzhuang Lu. "TB-NUCA: A Temperature-Balanced 3D NUCA Based on Bayesian Optimization". Electronics 11, no. 18 (September 14, 2022): 2910. http://dx.doi.org/10.3390/electronics11182910.

Abstract:
Three-dimensional network-on-chip (NoC) is the primary interconnection method for 3D-stacked multicore processors due to their excellent scalability and interconnect flexibility. With the support of 3D NoC, 3D non-uniform cache architecture (NUCA) is commonly used to organize the last-level cache (LLC) due to its high capacity and fast access latency. However, owing to the layered structure that leads to longer heat dissipation paths and variable inter-layer cooling efficiency, 3D NoC experiences a severe thermal problem that has a big impact on the reliability and performance of the chip. In traditional memory-to-LLC mapping in 3D NUCA, the traffic load in each node is inconsistent with its heat dissipation capability, causing thermal hotspots. To solve the above problem, we propose a temperature-balanced NUCA mapping mechanism named TB-NUCA. First, the Bayesian optimization algorithm is used to calculate the probability distribution of cache blocks in each node in order to equalize the node temperature. Secondly, the structure of TB-NUCA is designed. Finally, comparative experiments were conducted under random, transpose-2, and shuffle traffic patterns. The experimental results reveal that, compared with the classical NUCA mapping mechanism (S-NUCA), TB-NUCA can increase the mean-time-to-failure (MTTF) of routers by up to 28.13% while reducing the maximum temperature, average temperature, and standard deviation of temperature by a maximum of 4.92%, 4.48%, and 20.46%, respectively.
24

Mishra, Akshita, Soumen Saha, Henam Sylvia Devi, Abhisek Dixit, and Madhusudan Singh. "High resistive state retention in room temperature solution processed biocompatible memory devices for health monitoring applications". MRS Advances 4, no. 24 (2019): 1409–15. http://dx.doi.org/10.1557/adv.2019.161.

Abstract:
Wearable and bio-implantable health monitoring applications require flexible memory devices that can be used to locally store body vitals prior to transmission or to support local data processing in distributed smart systems. In recent years, non-volatile resistive random access memories composed of oxide-based insulators such as hafnium oxide and niobium pentoxide have attracted a great deal of interest. Unfortunately, hafnium and niobium are not low-cost materials and may also present health challenges. In this work, we have explored the alternative of using titanium dioxide as the insulating oxide using a low-cost solution-phase deposition process. Aqueous sol-derived thin films were deposited on a standard RCA-cleaned commercial thermal silicon dioxide (500 nm) wafer (500 µm). Patterned bottom contacts of Cr/Au (∼200/300 Å) were deposited on the substrate through shadow masks by successive DC sputtering and thermal evaporation, respectively, at 5 × 10⁻⁶ Torr. A sol was prepared using titanium (IV) butoxide as precursor, hydrolysed under water and ethanol to form a colloidal solution (sol) at 50°C under constant stirring. Powder X-Ray Diffraction (PXRD) scans of calcined (from sol at 750°C) nanoparticles show a mixture of anatase and rutile phases, confirming the composition of the material. The sol was slowly cooled to room temperature before being spin coated at low rotational speeds onto the substrate in multiple spin-coating and drying steps to form a uniform film. Top contacts (Ag) of thickness ∼500 Å were deposited on the sol-deposited thin films using thermal evaporation. The resulting devices were coated with a thick layer of polydimethylsiloxane (PDMS) using a 10:1 ratio of base elastomer and curing agent, respectively. After drying the PDMS, resistance measurements were carried out. A high resistance state was detected prior to electroforming in air at ∼5 MΩ, which remains nearly unchanged (∼4.3 MΩ) when dipped in a ∼7.4 pH phosphate buffer solution (equivalent to human blood's pH, reference average value ∼7.4). Unencapsulated devices (UM1) were further characterized in air using a Keithley 4200-SCS semiconductor parameter analyzer in dual sweep mode to observe repeatable hysteresis behavior with a large difference between trace and retrace R-V characteristics (∼50±3% over a pristine device), which compares favorably with recent data in the literature on high-performance sputtered TiO2 memristors. The unchanged retention ratio using biocompatible device materials and encapsulation suggests that these devices can be used for biomedical implantable sensor electronics.
25

Zhao, Qing-Tai, Fengben Xi, Yi Han, Jin Hee Bae, and Detlev Gruetzmacher. "(Invited, Digital Presentation) Approach to Neuromorphic Computing with Ferroelectric Schottky Barrier FETs". ECS Meeting Abstracts MA2022-01, no. 29 (July 7, 2022): 1298. http://dx.doi.org/10.1149/ma2022-01291298mtgabs.

Abstract:
Neuromorphic computing inspired by neural network systems of the human brain enables energy efficient computing as a solution of the von Neumann bottleneck. A neural network consists of thousands or even millions of neurons which communicate with each other through connected synapses. Synapses can memorize and process the information simultaneously. The plasticity of a synapse to strengthen or weaken its activity over time make it be capable of learning and computing. Thus, artificial synapses which can emulate functionalities and the plasticity of bio-synapses form the backbone of a neuromorphic computing system. Non-volatile memories with two-terminals, like resistive random-access memory (ReRAM), phase change memory (PCM), are attractive candidates for artificial synapses. However, signal processing and learning cannot be performed simultaneously in these two-terminal synapses. FeFET, similar to a MOSFET structure using CMOS compatible HfO2 based ferroelectrics as gate oxide forms three-terminal synapses offering high endurance, good performance and high energy efficiency. In contrast to two-terminal devices, three terminal FeFET based synapses can perform processing and learning at the same time. In order to maintain the ferroelectric properties of an HfO2 based ferroelectric film, high temperature annealing should be avoided after the ferroelectric layer deposition. In this paper, we present ferroelectric NiSi2 source/drain Schottky barrier (SB) MOSFET (FE-SBFET) structure (Fig.1a), which requires neither ion implantation nor thermal activation of source/drain contacts at high temperatures. FE-SBFETs were fabricated on SOI substrates with a boron-doped (1016 B/cm-3), 55 nm thick top Si layer and a 145 nm thick buried oxide (BOX) layer. Very thin (9 nm) single crystalline NiSi2 layers which offer superior properties of uniform and stable SB contacts on Si are used at source/drain regions. A gate stack consisting of 10 nm thick Hf0.5Zr0.5O2 (HZO) layer and a 40 nm thick TiN layer are deposited by ALD and sputtering, respectively. A rapid thermal annealing at 500 °C is performed to crystallize the HZO into a ferroelectric phase before the gate patterning. The fabricated device has a channel length of 10 µm and a gate width of 10 µm. The overlap between the top gate and NiSi2 is 6 µm along the channel and 10 µm wide on each side. The ferroelectric polarization modulates both the SB at the source/drain contacts as well as the potential in the channel, thus changing the carrier injection through the SB. The Id-Vg transfer characteristics of a p-type FE-SBFET shows a clockwise hysteresis which is caused by the ferroelectric polarization switch. The excitatory post-synaptic current (EPSC), one of the typical short-term synaptic plasticity features for biologic synapses, is characterized by measuring the transient drain currents for a voltage pulse on the gate of a FE-SBFET (Fig.1b). The amplitude of the pulse VAM changes from -0.2 to -1.2 V with a fixed pulse width tpw=1 μs. We found that the EPSC peak value increases linearly with VAM. It shows a very low energy/spike consumption of 2fJ/spike at VAM=-0.2 V, demonstrating a very high energy efficiency. From the EPSC measurements with repeated gate voltage pulses paired-pulse facilitation/depression (PPF/PPD) are characterized showing an exponential decay, similar to biological synapses. The long-term synaptic plasticity of FE-SBFET synapses is characterized by a series repeated identical or non-identical pulses. 
The latter can improve the long-term potentiation/depression (LTP/LTD) symmetry and linearity. The measurements show a large Gmax/Gmin ratio, very high endurance and small cycle-to-cycle (CTC) variation (1.06%) due to the perfect contact of NiSi2 (Fig. 1c). The biological neuron-like spike-timing-dependent plasticity (STDP) is characterized for the FE-SBFET synapse. The results show an asymmetric anti-Hebbian STDP, which is one of the biological STDP functionalities (Fig. 1d). In conclusion, the fabricated FE-SBFET synapse exhibits multiple synaptic functions with high endurance and small variations. The ultra-low energy/spike consumption indicates a high potential for low-power neuromorphic computing applications. Acknowledgement: This work was supported by the Federal Ministry of Education and Research (BMBF, Germany) in the project NEUROTEC (16ME0398K).
26

Bondavalli, Paolo, Marie Martin, Louiza Hamidouche, Alberto Montanaro, Aikaterini-Flora Trompeta, and Costas Charitidis. "Nano-Graphitic based Non-Volatile Memories Fabricated by the Dynamic Spray-Gun Deposition Method". Micromachines 10, no. 2 (January 29, 2019): 95. http://dx.doi.org/10.3390/mi10020095.

Abstract:
This paper deals with the fabrication of Resistive Random Access Memory (ReRAM) based on oxidized carbon nanofibers (CNFs). Stable suspensions of oxidized CNFs have been prepared in water and sprayed on an appropriate substrate, using the dynamic spray-gun deposition method, developed at Thales Research and Technology. This technique allows extremely uniform mats to be produced while heating the substrate at the boiling point of the solvent used for the suspensions. A thickness of around 150 nm of CNFs sandwiched between two metal layers (the metalized substrate and the top contacts) has been achieved, creating a Metal-Insulator-Metal (MIM) structure typical of ReRAM. After applying a bias, we were able to change the resistance of the oxidized layer between a low (LRS) and a high resistance state (HRS) in a completely reversible way. This is the first time that a scientific group has produced this kind of device using CNFs and these results pave the way for the further implementation of this kind of memory on flexible substrates.
27

Xing, Fei, Yi Ping Yao, Zhi Wen Jiang, and Bing Wang. "Fine-Grained Parallel and Distributed Spatial Stochastic Simulation of Biological Reactions". Advanced Materials Research 345 (September 2011): 104–12. http://dx.doi.org/10.4028/www.scientific.net/amr.345.104.

Abstract:
To date, discrete event stochastic simulations of large scale biological reaction systems are extremely compute-intensive and time-consuming. Besides, it has been widely accepted that spatial factor plays a critical role in the dynamics of most biological reaction systems. The NSM (the Next Sub-Volume Method), a spatial variation of the Gillespie’s stochastic simulation algorithm (SSA), has been proposed for spatially stochastic simulation of those systems. While being able to explore high degree of parallelism in systems, NSM is inherently sequential, which still suffers from the problem of low simulation speed. Fine-grained parallel execution is an elegant way to speed up sequential simulations. Thus, based on the discrete event simulation framework JAMES II, we design and implement a PDES (Parallel Discrete Event Simulation) TW (time warp) simulator to enable the fine-grained parallel execution of spatial stochastic simulations of biological reaction systems using the ANSM (the Abstract NSM), a parallel variation of the NSM. The simulation results of classical Lotka-Volterra biological reaction system show that our time warp simulator obtains remarkable parallel speed-up against sequential execution of the NSM.I.IntroductionThe goal of Systems biology is to obtain system-level investigations of the structure and behavior of biological reaction systems by integrating biology with system theory, mathematics and computer science [1][3], since the isolated knowledge of parts can not explain the dynamics of a whole system. As the complement of “wet-lab” experiments, stochastic simulation, being called the “dry-computational” experiment, plays a more and more important role in computing systems biology [2]. Among many methods explored in systems biology, discrete event stochastic simulation is of greatly importance [4][5][6], since a great number of researches have present that stochasticity or “noise” have a crucial effect on the dynamics of small population biological reaction systems [4][7]. Furthermore, recent research shows that the stochasticity is not only important in biological reaction systems with small population but also in some moderate/large population systems [7].To date, Gillespie’s SSA [8] is widely considered to be the most accurate way to capture the dynamics of biological reaction systems instead of traditional mathematical method [5][9]. However, SSA-based stochastic simulation is confronted with two main challenges: Firstly, this type of simulation is extremely time-consuming, since when the types of species and the number of reactions in the biological system are large, SSA requires a huge amount of steps to sample these reactions; Secondly, the assumption that the systems are spatially homogeneous or well-stirred is hardly met in most real biological systems and spatial factors play a key role in the behaviors of most real biological systems [19][20][21][22][23][24]. The next sub-volume method (NSM) [18], presents us an elegant way to access the special problem via domain partition. To our disappointment, sequential stochastic simulation with the NSM is still very time-consuming, and additionally introduced diffusion among neighbor sub-volumes makes things worse. Whereas, the NSM explores a very high degree of parallelism among sub-volumes, and parallelization has been widely accepted as the most meaningful way to tackle the performance bottleneck of sequential simulations [26][27]. 
Thus, adapting parallel discrete event simulation (PDES) techniques to discrete event stochastic simulation would be particularly promising. Although there are a few attempts have been conducted [29][30][31], research in this filed is still in its infancy and many issues are in need of further discussion. The next section of the paper presents the background and related work in this domain. In section III, we give the details of design and implementation of model interfaces of LP paradigm and the time warp simulator based on the discrete event simulation framework JAMES II; the benchmark model and experiment results are shown in Section IV; in the last section, we conclude the paper with some future work.II. Background and Related WorkA. Parallel Discrete Event Simulation (PDES)The notion Logical Process (LP) is introduced to PDES as the abstract of the physical process [26], where a system consisting of many physical processes is usually modeled by a set of LP. LP is regarded as the smallest unit that can be executed in PDES and each LP holds a sub-partition of the whole system’s state variables as its private ones. When a LP processes an event, it can only modify the state variables of its own. If one LP needs to modify one of its neighbors’ state variables, it has to schedule an event to the target neighbor. That is to say event message exchanging is the only way that LPs interact with each other. Because of the data dependences or interactions among LPs, synchronization protocols have to be introduced to PDES to guarantee the so-called local causality constraint (LCC) [26]. By now, there are a larger number of synchronization algorithms have been proposed, e.g. the null-message [26], the time warp (TW) [32], breath time warp (BTW) [33] and etc. According to whether can events of LPs be processed optimistically, they are generally divided into two types: conservative algorithms and optimistic algorithms. However, Dematté and Mazza have theoretically pointed out the disadvantages of pure conservative parallel simulation for biochemical reaction systems [31]. B. NSM and ANSM The NSM is a spatial variation of Gillespie’ SSA, which integrates the direct method (DM) [8] with the next reaction method (NRM) [25]. The NSM presents us a pretty good way to tackle the aspect of space in biological systems by partitioning a spatially inhomogeneous system into many much more smaller “homogeneous” ones, which can be simulated by SSA separately. However, the NSM is inherently combined with the sequential semantics, and all sub-volumes share one common data structure for events or messages. Thus, directly parallelization of the NSM may be confronted with the so-called boundary problem and high costs of synchronously accessing the common data structure [29]. In order to obtain higher efficiency of parallel simulation, parallelization of NSM has to firstly free the NSM from the sequential semantics and secondly partition the shared data structure into many “parallel” ones. One of these is the abstract next sub-volume method (ANSM) [30]. In the ANSM, each sub-volume is modeled by a logical process (LP) based on the LP paradigm of PDES, where each LP held its own event queue and state variables (see Fig. 1). In addition, the so-called retraction mechanism was introduced in the ANSM too (see algorithm 1). Besides, based on the ANSM, Wang etc. [30] have experimentally tested the performance of several PDES algorithms in the platform called YH-SUPE [27]. 
However, their platform is designed for general simulation applications, thus it would sacrifice some performance for being not able to take into account the characteristics of biological reaction systems. Using the similar ideas of the ANSM, Dematté and Mazza have designed and realized an optimistic simulator. However, they processed events in time-stepped manner, which would lose a specific degree of precisions compared with the discrete event manner, and it is very hard to transfer a time-stepped simulation to a discrete event one. In addition, Jeschke etc.[29] have designed and implemented a dynamic time-window simulator to execution the NSM in parallel on the grid computing environment, however, they paid main attention on the analysis of communication costs and determining a better size of the time-window.Fig. 1: the variations from SSA to NSM and from NSM to ANSMC. JAMES II JAMES II is an open source discrete event simulation experiment framework developed by the University of Rostock in Germany. It focuses on high flexibility and scalability [11][13]. Based on the plug-in scheme [12], each function of JAMES II is defined as a specific plug-in type, and all plug-in types and plug-ins are declared in XML-files [13]. Combined with the factory method pattern JAMES II innovatively split up the model and simulator, which makes JAMES II is very flexible to add and reuse both of models and simulators. In addition, JAMES II supports various types of modelling formalisms, e.g. cellular automata, discrete event system specification (DEVS), SpacePi, StochasticPi and etc.[14]. Besides, a well-defined simulator selection mechanism is designed and developed in JAMES II, which can not only automatically choose the proper simulators according to the modeling formalism but also pick out a specific simulator from a serious of simulators supporting the same modeling formalism according to the user settings [15].III. The Model Interface and SimulatorAs we have mentioned in section II (part C), model and simulator are split up into two separate parts. Thus, in this section, we introduce the designation and implementation of model interface of LP paradigm and more importantly the time warp simulator.A. The Mod Interface of LP ParadigmJAMES II provides abstract model interfaces for different modeling formalism, based on which Wang etc. have designed and implemented model interface of LP paradigm[16]. However, this interface is not scalable well for parallel and distributed simulation of larger scale systems. In our implementation, we accommodate the interface to the situation of parallel and distributed situations. Firstly, the neighbor LP’s reference is replaced by its name in LP’s neighbor queue, because it is improper even dangerous that a local LP hold the references of other LPs in remote memory space. In addition, (pseudo-)random number plays a crucial role to obtain valid and meaningful results in stochastic simulations. However, it is still a very challenge work to find a good random number generator (RNG) [34]. Thus, in order to focus on our problems, we introduce one of the uniform RNGs of JAMES II to this model interface, where each LP holds a private RNG so that random number streams of different LPs can be independent stochastically. B. The Time Warp SimulatorBased on the simulator interface provided by JAMES II, we design and implement the time warp simulator, which contains the (master-)simulator, (LP-)simulator. 
The simulator works strictly as master/worker(s) paradigm for fine-grained parallel and distributed stochastic simulations. Communication costs are crucial to the performance of a fine-grained parallel and distributed simulation. Based on the Java remote method invocation (RMI) mechanism, P2P (peer-to-peer) communication is implemented among all (master-and LP-)simulators, where a simulator holds all the proxies of targeted ones that work on remote workers. One of the advantages of this communication approach is that PDES codes can be transferred to various hardwire environment, such as Clusters, Grids and distributed computing environment, with only a little modification; The other is that RMI mechanism is easy to realized and independent to any other non-Java libraries. Since the straggler event problem, states have to be saved to rollback events that are pre-processed optimistically. Each time being modified, the state is cloned to a queue by Java clone mechanism. Problem of this copy state saving approach is that it would cause loads of memory space. However, the problem can be made up by a condign GVT calculating mechanism. GVT reduction scheme also has a significant impact on the performance of parallel simulators, since it marks the highest time boundary of events that can be committed so that memories of fossils (processed events and states) less than GVT can be reallocated. GVT calculating is a very knotty for the notorious simultaneous reporting problem and transient messages problem. According to our problem, another GVT algorithm, called Twice Notification (TN-GVT) (see algorithm 2), is contributed to this already rich repository instead of implementing one of GVT algorithms in reference [26] and [28].This algorithm looks like the synchronous algorithm described in reference [26] (pp. 114), however, they are essentially different from each other. This algorithm has never stopped the simulators from processing events when GVT reduction, while algorithm in reference [26] blocks all simulators for GVT calculating. As for the transient message problem, it can be neglect in our implementation, because RMI based remote communication approach is synchronized, that means a simulator will not go on its processing until the remote the massage get to its destination. And because of this, the high-costs message acknowledgement, prevalent over many classical asynchronous GVT algorithms, is not needed anymore too, which should be constructive to the whole performance of the time warp simulator.IV. Benchmark Model and Experiment ResultsA. The Lotka-Volterra Predator-prey SystemIn our experiment, the spatial version of Lotka-Volterra predator-prey system is introduced as the benchmark model (see Fig. 2). We choose the system for two considerations: 1) this system is a classical experimental model that has been used in many related researches [8][30][31], so it is credible and the simulation results are comparable; 2) it is simple but helpful enough to test the issues we are interested in. The space of predator-prey System is partitioned into a2D NXNgrid, whereNdenotes the edge size of the grid. Initially the population of the Grass, Preys and Predators are set to 1000 in each single sub-volume (LP). In Fig. 2,r1,r2,r3stand for the reaction constants of the reaction 1, 2 and 3 respectively. We usedGrass,dPreyanddPredatorto stand for the diffusion rate of Grass, Prey and Predator separately. 
Being similar to reference [8], we also take the assumption that the population of the grass remains stable, and thusdGrassis set to zero.R1:Grass + Prey ->2Prey(1)R2:Predator +Prey -> 2Predator(2)R3:Predator -> NULL(3)r1=0.01; r2=0.01; r3=10(4)dGrass=0.0;dPrey=2.5;dPredato=5.0(5)Fig. 2: predator-prey systemB. Experiment ResultsThe simulation runs have been executed on a Linux Cluster with 40 computing nodes. Each computing node is equipped with two 64bit 2.53 GHz Intel Xeon QuadCore Processors with 24GB RAM, and nodes are interconnected with Gigabit Ethernet connection. The operating system is Kylin Server 3.5, with kernel 2.6.18. Experiments have been conducted on the benchmark model of different size of mode to investigate the execution time and speedup of the time warp simulator. As shown in Fig. 3, the execution time of simulation on single processor with 8 cores is compared. The result shows that it will take more wall clock time to simulate much larger scale systems for the same simulation time. This testifies the fact that larger scale systems will leads to more events in the same time interval. More importantly, the blue line shows that the sequential simulation performance declines very fast when the mode scale becomes large. The bottleneck of sequential simulator is due to the costs of accessing a long event queue to choose the next events. Besides, from the comparison between group 1 and group 2 in this experiment, we could also conclude that high diffusion rate increased the simulation time greatly both in sequential and parallel simulations. This is because LP paradigm has to split diffusion into two processes (diffusion (in) and diffusion (out) event) for two interactive LPs involved in diffusion and high diffusion rate will lead to high proportional of diffusion to reaction. In the second step shown in Fig. 4, the relationship between the speedups from time warp of two different model sizes and the number of work cores involved are demonstrated. The speedup is calculated against the sequential execution of the spatial reaction-diffusion systems model with the same model size and parameters using NSM.Fig. 4 shows the comparison of speedup of time warp on a64X64grid and a100X100grid. In the case of a64X64grid, under the condition that only one node is used, the lowest speedup (a little bigger than 1) is achieved when two cores involved, and the highest speedup (about 6) is achieved when 8 cores involved. The influence of the number of cores used in parallel simulation is investigated. In most cases, large number of cores could bring in considerable improvements in the performance of parallel simulation. Also, compared with the two results in Fig. 4, the simulation of larger model achieves better speedup. Combined with time tests (Fig. 3), we find that sequential simulator’s performance declines sharply when the model scale becomes very large, which makes the time warp simulator get better speed-up correspondingly.Fig. 3: Execution time (wall clock time) of Seq. and time warp with respect to different model sizes (N=32, 64, 100, and 128) and model parameters based on single computing node with 8 cores. Results of the test are grouped by the diffusion rates (Group 1: Sequential 1 and Time Warp 1. dPrey=2.5, dPredator=5.0; Group 2: dPrey=0.25, dPredator=0.5, Sequential 2 and Time Warp 2).Fig. 4: Speedup of time warp with respect to the number of work cores and the model size (N=64 and 100). Work cores are chose from one computing node. 
V. Conclusion and Future Work
In this paper, a time warp simulator based on the discrete event simulation framework JAMES II is designed and implemented for fine-grained parallel and distributed discrete event spatial stochastic simulation of biological reaction systems. Several challenges have been overcome, such as state saving, rollback and, especially, GVT reduction in the parallel execution of simulations. The Lotka-Volterra predator-prey system is chosen as the benchmark model to test the performance of our time warp simulator, and the best experimental results show that it can obtain a speedup of about 6 over the sequential simulation. The domain this paper is concerned with is still in its infancy, and many interesting issues are worthy of further investigation; for example, there are many other excellent optimistic PDES synchronization algorithms (e.g., the Breathing Time Warp algorithm). As a next step, we would like to integrate some of them into JAMES II. In addition, Gillespie approximation methods (tau-leaping [10], etc.) sacrifice some degree of precision for higher simulation speed, but they still do not address the spatial aspect of biological reaction systems. The combination of spatial modelling and approximation methods would be very interesting and promising; however, the parallel execution of tau-leaping methods still has many obstacles to overcome.

Acknowledgment
This work is supported by the National Natural Science Foundation of China (NSFC) Grant No. 60773019 and the Ph.D. Programs Foundation of the Ministry of Education of China (No. 200899980004). The authors would like to express their great gratitude to Dr. Jan Himmelspach and Dr. Roland Ewald at the University of Rostock, Germany, for their invaluable advice and kind help with JAMES II.

References
[1] H. Kitano, "Computational systems biology," Nature, vol. 420, no. 6912, pp. 206-210, November 2002.
[2] H. Kitano, "Systems biology: a brief overview," Science, vol. 295, no. 5560, pp. 1662-1664, March 2002.
[3] A. Aderem, "Systems biology: Its practice and challenges," Cell, vol. 121, no. 4, pp. 511-513, May 2005. [Online]. Available: http://dx.doi.org/10.1016/j.cell.2005.04.020.
[4] H. de Jong, "Modeling and simulation of genetic regulatory systems: A literature review," Journal of Computational Biology, vol. 9, no. 1, pp. 67-103, January 2002.
[5] C. W. Gardiner, Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences (Springer Series in Synergetics), 3rd ed. Springer, April 2004.
[6] D. T. Gillespie, "Simulation methods in systems biology," in Formal Methods for Computational Systems Biology, ser. Lecture Notes in Computer Science, M. Bernardo, P. Degano, and G. Zavattaro, Eds. Berlin, Heidelberg: Springer, 2008, vol. 5016, ch. 5, pp. 125-167.
[7] Y. Tao, Y. Jia, and G. T. Dewey, "Stochastic fluctuations in gene expression far from equilibrium: Omega expansion and linear noise approximation," The Journal of Chemical Physics, vol. 122, no. 12, 2005.
[8] D. T. Gillespie, "Exact stochastic simulation of coupled chemical reactions," Journal of Physical Chemistry, vol. 81, no. 25, pp. 2340-2361, December 1977.
[9] D. T. Gillespie, "Stochastic simulation of chemical kinetics," Annual Review of Physical Chemistry, vol. 58, no. 1, pp. 35-55, 2007.
[10] D. T. Gillespie, "Approximate accelerated stochastic simulation of chemically reacting systems," The Journal of Chemical Physics, vol. 115, no. 4, pp. 1716-1733, 2001.
[11] J. Himmelspach, R. Ewald, and A. M. Uhrmacher, "A flexible and scalable experimentation layer," in WSC '08: Proceedings of the 40th Conference on Winter Simulation. Winter Simulation Conference, 2008, pp. 827-835.
[12] J. Himmelspach and A. M. Uhrmacher, "Plug'n simulate," in 40th Annual Simulation Symposium (ANSS'07). Washington, DC, USA: IEEE, March 2007, pp. 137-143.
[13] R. Ewald, J. Himmelspach, M. Jeschke, S. Leye, and A. M. Uhrmacher, "Flexible experimentation in the modeling and simulation framework JAMES II - implications for computational systems biology," Briefings in Bioinformatics, vol. 11, no. 3, pp. bbp067-300, January 2010.
[14] A. Uhrmacher, J. Himmelspach, M. Jeschke, M. John, S. Leye, C. Maus, M. Röhl, and R. Ewald, "One modelling formalism & simulator is not enough! A perspective for computational biology based on JAMES II," in Formal Methods in Systems Biology, ser. Lecture Notes in Computer Science, J. Fisher, Ed. Berlin, Heidelberg: Springer, 2008, vol. 5054, ch. 9, pp. 123-138. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-68413-8_9.
[15] R. Ewald, J. Himmelspach, and A. M. Uhrmacher, "An algorithm selection approach for simulation systems," in Proceedings of the Workshop on Principles of Advanced and Distributed Simulation (PADS), 2008, pp. 91-98.
[16] B. Wang, J. Himmelspach, R. Ewald, Y. Yao, and A. M. Uhrmacher, "Experimental analysis of logical process simulation algorithms in JAMES II," in Proceedings of the Winter Simulation Conference, M. D. Rossetti, R. R. Hill, B. Johansson, A. Dunkin, and R. G. Ingalls, Eds. IEEE, 2009, pp. 1167-1179.
[17] R. Ewald, J. Rössel, J. Himmelspach, and A. M. Uhrmacher, "A plug-in-based architecture for random number generation in simulation systems," in WSC '08: Proceedings of the 40th Conference on Winter Simulation. Winter Simulation Conference, 2008, pp. 836-844.
[18] J. Elf and M. Ehrenberg, "Spontaneous separation of bi-stable biochemical systems into spatial domains of opposite phases," Systems Biology, vol. 1, no. 2, pp. 230-236, December 2004.
[19] K. Takahashi, S. Arjunan, and M. Tomita, "Space in systems biology of signaling pathways? Towards intracellular molecular crowding in silico," FEBS Letters, vol. 579, no. 8, pp. 1783-1788, March 2005.
[20] J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (PTS) pathway in Escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.
[21] D. Ridgway, G. Broderick, and M. Ellison, "Accommodating space, time and randomness in network simulation," Current Opinion in Biotechnology, vol. 17, no. 5, pp. 493-498, October 2006.
[22] J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (PTS) pathway in Escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.
[23] W. G. Wilson, A. M. Deroos, and E. Mccauley, "Spatial instabilities within the diffusive Lotka-Volterra system: Individual-based simulation results," Theoretical Population Biology, vol. 43, no. 1, pp. 91-127, February 1993.
[24] K. Kruse and J. Elf, "Kinetics in spatially extended systems," in System Modeling in Cellular Biology: From Concepts to Nuts and Bolts, Z. Szallasi, J. Stelling, and V. Periwal, Eds. Cambridge, MA: MIT Press, 2006, pp. 177-198.
[25] M. A. Gibson and J. Bruck, "Efficient exact stochastic simulation of chemical systems with many species and many channels," The Journal of Physical Chemistry A, vol. 104, no. 9, pp. 1876-1889, March 2000.
[26] R. M. Fujimoto, Parallel and Distributed Simulation Systems (Wiley Series on Parallel and Distributed Computing). Wiley-Interscience, January 2000.
[27] Y. Yao and Y. Zhang, "Solution for analytic simulation based on parallel processing," Journal of System Simulation, vol. 20, no. 24, pp. 6617-6621, 2008.
[28] G. Chen and B. K. Szymanski, "DSIM: scaling time warp to 1,033 processors," in WSC '05: Proceedings of the 37th Conference on Winter Simulation. Winter Simulation Conference, 2005, pp. 346-355.
[29] M. Jeschke, A. Park, R. Ewald, R. Fujimoto, and A. M. Uhrmacher, "Parallel and distributed spatial simulation of chemical reactions," in 2008 22nd Workshop on Principles of Advanced and Distributed Simulation. Washington, DC, USA: IEEE, June 2008, pp. 51-59.
[30] B. Wang, Y. Yao, Y. Zhao, B. Hou, and S. Peng, "Experimental analysis of optimistic synchronization algorithms for parallel simulation of reaction-diffusion systems," in International Workshop on High Performance Computational Systems Biology, pp. 91-100, October 2009.
[31] L. Dematté and T. Mazza, "On parallel stochastic simulation of diffusive systems," in Computational Methods in Systems Biology, M. Heiner and A. M. Uhrmacher, Eds. Berlin, Heidelberg: Springer, 2008, vol. 5307, ch. 16, pp. 191-210.
[32] D. R. Jefferson, "Virtual time," ACM Transactions on Programming Languages and Systems, vol. 7, no. 3, pp. 404-425, July 1985.
[33] J. S. Steinman, "Breathing time warp," SIGSIM Simulation Digest, vol. 23, no. 1, pp. 109-118, July 1993. [Online]. Available: http://dx.doi.org/10.1145/174134.158473.
[34] S. K. Park and K. W. Miller, "Random number generators: good ones are hard to find," Communications of the ACM, vol. 31, no. 10, pp. 1192-1201, October 1988.
Style APA, Harvard, Vancouver, ISO itp.
28

Bartoš, Robert, D. Bejšovec, Alberto Malucelli, J. Prokšová, J. Lodin, Štěpán Čapek i Martin Sameš. Case report. Česká a slovenská neurologie a neurochirurgie 80/113, nr 2 (31.03.2017): 220–23. http://dx.doi.org/10.14735/amcsnn2017220.
Authors' affiliations: 1 Neurochirurgická klinika UJEP a Krajská zdravotní a. s., Masarykova nemocnice v Ústí nad Labem, o. z.; 2 KAPIM – Anesteziologická klinika UJEP a Krajská zdravotní a. s., Masarykova nemocnice v Ústí nad Labem, o. z.; 3 Rehabilitační oddělení, Logopedie, Krajská zdravotní a. s., Masarykova nemocnice v Ústí nad Labem, o. z.; 4 Department of Neurosurgery, University of Virginia, Charlottesville, Virginia, USA.
Summary: Background: Lateral or supine positions are the traditional positions for cranial tumor resections performed with an "awake" component. These positions are used effectively for patients with tumors adjacent to speech centers or located in the superior frontal or precentral gyrus, respectively. However, they may be unsatisfactory for tumors in close proximity to the parieto-occipital region. In this case report, we describe "awake" surgery performed on a patient in the semisitting position. Case description: A 57-year-old patient suffered a second recurrence of a glioblastoma multiforme tumor with subcortical invasion of the postcentral gyrus. Due to a high risk of severe neurological deficit, it was decided to perform awake surgery, with the semisitting position providing the best exposure to the lesion and the pyramidal tract. The patient's pyramidal tract was mapped using motor responses to regular stimuli while the surgeon resected the tumor. The patient was fully cooperative throughout the procedure and subjectively described the semisitting position as comfortable. Postoperatively, the patient showed no signs of new neurological deficits. Planned re-radiation therapy was not performed. Conclusion: This clinical case demonstrates successful use of the semisitting position in "awake" surgery, and we recommend considering its use for tumors in previously challenging locations, such as the lower parietal lobules or postcentral gyrus. This position could also be used during surgeries involving visual pathway mapping. Key words: semisitting position – "awake" surgery – glioma – parietal lobe – pyramidal tract – cortical stimulation mapping. The authors declare they have no potential conflicts of interest concerning drugs, products, or services used in the study. The Editorial Board declares that the manuscript met the ICMJE "uniform requirements" for biomedical papers.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
29

Rattanatranurak, Apisit, i Surin Kittitornkun. "A MultiStack Parallel (MSP) Partition Algorithm Applied to Sorting". Journal of Mobile Multimedia, 8.09.2020. http://dx.doi.org/10.13052/jmm1550-4646.1632.

Pełny tekst źródła
Streszczenie:
The CPUs of smartphones are becoming multicore, with huge RAM and storage to support a variety of multimedia applications in the near future. A MultiStack Parallel (MSP) sorting algorithm, named MSPSort, is proposed to support manycore systems. It can be regarded as many threads of a single-pivot, interleaving, block-based variant of Hoare's algorithm. Each thread performs compare-swap operations between left and right (stacked and interleaved) data blocks. A number of multithreading features of OpenMP and our own optimization strategies have been utilized. To simulate such smartphones, MSPSort is fine-tuned and tested on four Linux systems: Intel i7-2600, Xeon X5670, AMD R7-1700 and R9-2920. Their memory configurations can be classified as either uniform or non-uniform memory access. The statistical results are satisfactory compared with the parallel-mode sorting algorithms of the Standard Template Library, namely Balanced QuickSort and MultiWay MergeSort. Moreover, MSPSort looks promising for further development to improve both run time and stability.
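As a rough illustration of the block-based compare-swap step this abstract describes (MSPSort itself is OpenMP-based; this is not the authors' code), the sketch below "neutralizes" one left block against one right block around a single pivot. In an MSP-style sort, many threads would run such steps concurrently on blocks acquired atomically from the two ends of the array, with a cleanup pass handling any partially processed blocks.

```java
// Hedged sketch of the core step in block-based parallel partitioning:
// elements greater than the pivot in a left block are swapped with elements
// not greater than the pivot in a right block.
final class BlockPartition {

    /**
     * Neutralizes a left block [lStart, lEnd) against a right block [rStart, rEnd)
     * around the pivot: afterwards, at least one of the two blocks holds only
     * elements that already sit on its correct side. Returns +1 if the left block
     * is fully neutralized, -1 if only the right block is, 0 if both are.
     */
    static int neutralize(int[] data, int lStart, int lEnd, int rStart, int rEnd, int pivot) {
        int i = lStart, j = rStart;
        while (i < lEnd && j < rEnd) {
            while (i < lEnd && data[i] <= pivot) i++;   // already on the correct (left) side
            while (j < rEnd && data[j] > pivot)  j++;   // already on the correct (right) side
            if (i < lEnd && j < rEnd) {                 // both misplaced: compare-swap them
                int tmp = data[i];
                data[i] = data[j];
                data[j] = tmp;
                i++;
                j++;
            }
        }
        boolean leftDone = (i == lEnd);
        boolean rightDone = (j == rEnd);
        if (leftDone && rightDone) return 0;
        return leftDone ? +1 : -1;
    }
}
```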
Style APA, Harvard, Vancouver, ISO itp.
30

Mummidi, Chandra sekhar, i Sandip Kundu. "ACTION: Adaptive Cache Block Migration in Distributed Cache Architectures". ACM Transactions on Architecture and Code Optimization, 29.11.2022. http://dx.doi.org/10.1145/3572911.

Pełny tekst źródła
Streszczenie:
Chip multiprocessors (CMP) with more cores have more traffic to the last-level cache (LLC). Without a corresponding increase in LLC bandwidth, such traffic cannot be sustained, resulting in performance degradation. Previous research focused on data placement techniques to improve access latency in Non-Uniform Cache Architectures (NUCA). Placing data closer to the referring core reduces traffic in the cache interconnect. However, earlier data placement work did not account for the frequency with which specific memory references are accessed. The difficulty of tracking access frequency for all memory references is one of the main reasons why it was not considered in NUCA data placement. In this research, we present a hardware-assisted solution called ACTION (Adaptive Cache Block Migration) to track the access frequency of individual memory references and prioritize the placement of frequently referenced data closer to the affine core. The ACTION mechanism implements cache block migration when there is a detectable change in access frequencies due to a shift in the program phase. ACTION counts access references in the LLC stream using a simple and approximate method, and uses a straightforward placement and migration solution to keep the hardware overhead low. We evaluate ACTION on a 4-core CMP with a 5x5 mesh LLC network implementing a partitioned D-NUCA, against workloads exhibiting distinct asymmetry in cache block access frequency. Our simulation results indicate that ACTION can improve CMP performance by up to 7.5% over state-of-the-art (SOTA) D-NUCA solutions.
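The abstract does not give ACTION's exact counting and migration policy, so the following is only an illustrative sketch of the general idea: approximate per-block, per-core access counters in the LLC stream, with a block migrated toward the core that clearly dominates its accesses. All thresholds, table structures and the migrate() hook are assumptions, not the paper's design.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative-only sketch of frequency-guided cache block migration in a
// distributed/NUCA-style LLC. Saturating 8-bit counters approximate per-(block, core)
// access frequency; a block is migrated toward a core once that core clearly dominates.
final class HotBlockTracker {
    static final int CORES = 4;
    static final int MAX_COUNT = 255;          // saturating counter ceiling (assumption)
    static final int MIGRATE_THRESHOLD = 192;  // "hot enough" to consider moving (assumption)

    // blockAddress -> per-core saturating access counters
    private final Map<Long, int[]> counters = new HashMap<>();
    private final Map<Long, Integer> homeBank = new HashMap<>();

    void onLlcAccess(long blockAddress, int coreId) {
        int[] c = counters.computeIfAbsent(blockAddress, k -> new int[CORES]);
        if (c[coreId] < MAX_COUNT) c[coreId]++;

        int hottest = 0;
        for (int k = 1; k < CORES; k++) if (c[k] > c[hottest]) hottest = k;

        Integer home = homeBank.getOrDefault(blockAddress, hottest);
        if (c[hottest] >= MIGRATE_THRESHOLD && hottest != home) {
            migrate(blockAddress, home, hottest);
            homeBank.put(blockAddress, hottest);
            // Halve all counters so the tracker can follow the next program phase.
            for (int k = 0; k < CORES; k++) c[k] >>= 1;
        } else {
            homeBank.putIfAbsent(blockAddress, home);
        }
    }

    private void migrate(long blockAddress, int fromBank, int toBank) {
        // In hardware this would move the block to the LLC slice nearest 'toBank';
        // here it is just a stub for the sketch.
        System.out.printf("migrate block %#x: bank %d -> bank %d%n", blockAddress, fromBank, toBank);
    }
}
```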
Style APA, Harvard, Vancouver, ISO itp.
31

Seo, Sung-ho, Woo-sik Nam, Jae-seok Kim, Chang-hyup Shin, Se-yun Lim, Jea-gun Park i Yoon-joong Kim. "Effect of Buffer or Barrier Layer on Bistability for Nonvolatile Memory Fabricated with Al Nanocrystals Embedded in α-NPD". MRS Proceedings 965 (2006). http://dx.doi.org/10.1557/proc-0965-s09-13.

Pełny tekst źródła
Streszczenie:
Recently, low-molecular organic non-volatile memories have been developed as a next generation of non-volatile memory because of their nanometer device-feature size and nanosecond access and store times. We developed a non-volatile memory fabricated with the device structure Al / α-NPD / Al nano-crystals surrounded by Al2O3 / α-NPD / Al, where α-NPD is N,N′-bis(1-naphthyl)-1,1′-biphenyl-4-4″-diamine. One layer of Al nano-crystals of ∼20 nm width and ∼20 nm length was uniformly produced between the α-NPD layers, as confirmed by a 1.2 MV high-voltage transmission electron microscope. This device showed a Vth of 3.0 V, Vprogram of 4.3 V, and Verase of 6.3 V. In particular, this device exhibited excellent non-volatile memory behavior, with a bi-stability (Iprogram/Ierase) of >1×10^2, program/erase cycles of >1×10^5, and multi-level operation. In addition, previous reports on low-molecular organic non-volatile memories have shown poorly reproducible memory characteristics. However, this issue was completely solved by isolating the Al nano-crystals embedded in α-NPD via O2 plasma oxidation. The uniformities of Vth, Vp, and Ve were 9.91%, 6.94% and 7.92%, respectively. Furthermore, the effect of a buffer or barrier layer on the non-volatile memory characteristics was investigated to examine the ability to control Vth, Vp, and Ve. The 0.5-nm LiF showed barrier-layer behavior, suppressing the bi-stability of the non-volatile memory. In contrast, 15-nm CuPc exhibited buffer-layer behavior, enhancing the bi-stability of the non-volatile memory.
Style APA, Harvard, Vancouver, ISO itp.
32

Pedretti, Giacomo, Catherine E. Graves, Sergey Serebryakov, Ruibin Mao, Xia Sheng, Martin Foltin, Can Li i John Paul Strachan. "Tree-based machine learning performed in-memory with memristive analog CAM". Nature Communications 12, nr 1 (4.10.2021). http://dx.doi.org/10.1038/s41467-021-25873-0.

Pełny tekst źródła
Streszczenie:
Tree-based machine learning techniques, such as Decision Trees and Random Forests, are top performers in several domains as they do well with limited training datasets and offer improved interpretability compared to Deep Neural Networks (DNN). However, these models are difficult to optimize for fast inference at scale without accuracy loss in von Neumann architectures due to non-uniform memory access patterns. Recently, we proposed a novel analog content addressable memory (CAM) based on emerging memristor devices for fast look-up table operations. Here, we propose for the first time to use the analog CAM as an in-memory computational primitive to accelerate tree-based model inference. We demonstrate an efficient mapping algorithm leveraging the new analog CAM capabilities such that each root-to-leaf path of a Decision Tree is programmed into a row. This new in-memory compute concept enables few-cycle model inference, dramatically increasing throughput, by 103×, over conventional approaches.
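The mapping described here (each root-to-leaf path becomes one analog-CAM row that stores an acceptable range per feature) can be illustrated with a small sketch. The tree and row layouts below are assumptions for illustration, not the authors' actual mapping algorithm or device model.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: flatten a binary decision tree so that every root-to-leaf path
// becomes one "row" of per-feature [low, high) ranges, the way an analog CAM row would
// be programmed. Node/Row layouts are assumptions for illustration only.
final class TreeToCamRows {

    static final class Node {
        int feature = -1;        // -1 marks a leaf
        double threshold;        // internal node: go left if x[feature] < threshold
        int leafClass;           // valid only for leaves
        Node left, right;
    }

    static final class Row {
        final double[] low, high;
        final int leafClass;
        Row(double[] low, double[] high, int leafClass) {
            this.low = low.clone();
            this.high = high.clone();
            this.leafClass = leafClass;
        }
    }

    static List<Row> flatten(Node root, int numFeatures) {
        double[] low = new double[numFeatures], high = new double[numFeatures];
        java.util.Arrays.fill(low, Double.NEGATIVE_INFINITY);
        java.util.Arrays.fill(high, Double.POSITIVE_INFINITY);
        List<Row> rows = new ArrayList<>();
        walk(root, low, high, rows);
        return rows;
    }

    private static void walk(Node n, double[] low, double[] high, List<Row> rows) {
        if (n.feature < 0) {                       // leaf: emit one CAM row for this path
            rows.add(new Row(low, high, n.leafClass));
            return;
        }
        double savedHigh = high[n.feature], savedLow = low[n.feature];

        high[n.feature] = Math.min(savedHigh, n.threshold);   // left branch: x < threshold
        walk(n.left, low, high, rows);
        high[n.feature] = savedHigh;

        low[n.feature] = Math.max(savedLow, n.threshold);     // right branch: x >= threshold
        walk(n.right, low, high, rows);
        low[n.feature] = savedLow;
    }
}
```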
Style APA, Harvard, Vancouver, ISO itp.
33

Jaffer, Shehbaz, Kaveh Mahdaviani i Bianca Schroeder. "Improving the Endurance of Next Generation SSD’s using WOM-v codes". ACM Transactions on Storage, 14.11.2022. http://dx.doi.org/10.1145/3565027.

Pełny tekst źródła
Streszczenie:
High-density Solid State Drives, such as QLC drives, offer increased storage capacity, but an order of magnitude fewer Program and Erase (P/E) cycles, limiting their endurance and hence usability. We present the design and implementation of non-binary, Voltage-Based Write-Once-Memory (WOM-v) codes to improve the lifetime of QLC drives. First, we develop a FEMU-based simulator test-bed to evaluate the gains of WOM-v codes on real-world workloads. Second, we propose and implement two optimizations, an efficient garbage collection mechanism and an encoding optimization, to drastically improve WOM-v code endurance without compromising performance. Third, we propose analytical approaches to obtain estimates of the endurance gains under WOM-v codes. We analyze the Greedy garbage collection technique with a uniform page access distribution and the Least Recently Written (LRW) garbage collection technique with a skewed page access distribution in the context of WOM-v codes. We find that although both approaches overestimate the number of required erase operations, the model based on greedy garbage collection with a uniform page access distribution provides tighter bounds. A careful evaluation, including microbenchmarks and trace-driven evaluation, demonstrates that WOM-v codes can reduce erase cycles on QLC drives by 4.4x-11.1x for real-world workloads with minimal performance overheads, resulting in improved QLC SSD lifetime.
Style APA, Harvard, Vancouver, ISO itp.
34

Tao, Zhuofu, Chen Wu, Yuan Liang, Kun Wang i Lei He. "LW-GCN: A Lightweight FPGA-based Graph Convolutional Network Accelerator". ACM Transactions on Reconfigurable Technology and Systems, 4.08.2022. http://dx.doi.org/10.1145/3550075.

Pełny tekst źródła
Streszczenie:
Graph convolutional networks (GCNs) have been introduced to effectively process non-Euclidean graph data. However, GCNs incur large amounts of irregularity in computation and memory access, which prevents efficient use of traditional neural network accelerators. Moreover, existing dedicated GCN accelerators demand high memory volumes and are difficult to implement onto resource-limited edge devices. In this work, we propose LW-GCN, a lightweight FPGA-based accelerator with a software-hardware co-designed process to tackle irregularity in computation and memory access in GCN inference. LW-GCN decomposes the main GCN operations into Sparse Matrix-Matrix Multiplication (SpMM) and Matrix-Matrix Multiplication (MM). We propose a novel compression format to balance workload across PEs and prevent data hazards. Moreover, we apply data quantization and workload tiling, and map both SpMM and MM of GCN inference onto a uniform architecture on resource-limited hardware. Evaluations on GCN and GraphSAGE are performed on a Xilinx Kintex-7 FPGA with three popular datasets. Compared to existing CPU, GPU, and state-of-the-art FPGA-based accelerators, LW-GCN reduces latency by up to 60x, 12x and 1.7x and increases power efficiency by up to 912x, 511x and 3.87x, respectively. Furthermore, compared with NVIDIA's latest edge GPU Jetson Xavier NX, LW-GCN achieves a speedup and energy savings of 32x and 84x, respectively.
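LW-GCN's compression format is custom, but the SpMM kernel that GCN inference decomposes into can be illustrated with plain CSR (compressed sparse row) data. The sketch below is a generic software reference for the aggregation step (sparse adjacency times dense feature matrix), not the FPGA design or its packed format.

```java
// Generic CSR-based sparse x dense matrix multiply (SpMM), the kind of kernel a GCN
// aggregation step (A_hat * X) reduces to. Plain software reference only.
final class CsrSpmm {
    final int rows;            // number of graph nodes
    final int[] rowPtr;        // length rows + 1
    final int[] colIdx;        // column index of each non-zero
    final float[] values;      // non-zero values of the (normalized) adjacency matrix

    CsrSpmm(int rows, int[] rowPtr, int[] colIdx, float[] values) {
        this.rows = rows;
        this.rowPtr = rowPtr;
        this.colIdx = colIdx;
        this.values = values;
    }

    /** Computes out = A * dense, where dense is rows x cols in row-major order. */
    float[] multiply(float[] dense, int cols) {
        float[] out = new float[rows * cols];
        for (int r = 0; r < rows; r++) {
            for (int nz = rowPtr[r]; nz < rowPtr[r + 1]; nz++) {
                int c = colIdx[nz];
                float v = values[nz];
                for (int k = 0; k < cols; k++) {
                    out[r * cols + k] += v * dense[c * cols + k];
                }
            }
        }
        return out;
    }
}
```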
Style APA, Harvard, Vancouver, ISO itp.
35

Qian, Jiarui, Yong Wang, Xiaoxue Wang, Peng Zhang i Xiaofeng Wang. "Load balancing scheduling mechanism for OpenStack and Docker integration". Journal of Cloud Computing 12, nr 1 (28.04.2023). http://dx.doi.org/10.1186/s13677-023-00445-3.

Pełny tekst źródła
Streszczenie:
With the development of cloud-edge collaborative computing, cloud computing has become crucial in data analysis and data processing. OpenStack and Docker are important components of cloud computing, and the integration of the two has always attracted widespread attention in industry. The scheduling mechanism adopted by the existing fusion solutions uses a uniform resource weight for all containers, so the computing nodes' resources on the cloud platform become unbalanced under the differentiated resource requirements of the containers. Therefore, considering different network communication qualities, a load-balancing Docker scheduling mechanism based on OpenStack is proposed to meet containers' needs for various resources (CPU, memory, disk, and bandwidth). This mechanism uses a precise limitation strategy for container resources and a centralized strategy for container resources as the scheduling basis, and it generates exclusive weights for containers through a filtering stage, a weighing stage based on weight adaptation, and a non-uniform memory access (NUMA) lean stage. The experimental results show that, compared with Nova-docker and Yun, the proposed mechanism reduces the resource load imbalance within a node by 57.35% and 59.00% on average, and the average imbalance between nodes is reduced by 53.53% and 50.90%. This mechanism can also achieve better load balancing without regard to bandwidth.
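The abstract does not give the weighting formulas, so the following is only a loose sketch of the general shape of such a filter-and-weigh scheduler: discard nodes that cannot satisfy a container's CPU/memory/disk/bandwidth request, then score the remaining nodes with container-specific ("exclusive") weights proportional to that container's own demands. Every formula and name here is an assumption, not the mechanism proposed in the cited paper.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Loose sketch of a filter + weigh scheduler for containers with differentiated
// CPU / memory / disk / bandwidth demands. Scoring formula and names are illustrative.
final class ContainerScheduler {
    static final int RES = 4; // 0=CPU, 1=memory, 2=disk, 3=bandwidth

    static final class Node {
        final double[] free = new double[RES];     // remaining capacity per resource
        final double[] total = new double[RES];
        Node(double[] total, double[] free) {
            System.arraycopy(total, 0, this.total, 0, RES);
            System.arraycopy(free, 0, this.free, 0, RES);
        }
    }

    static final class Container {
        final double[] demand = new double[RES];
        Container(double[] demand) { System.arraycopy(demand, 0, this.demand, 0, RES); }
    }

    /** Filtering stage: a node must satisfy every resource demand of the container. */
    static boolean fits(Node n, Container c) {
        for (int r = 0; r < RES; r++) if (n.free[r] < c.demand[r]) return false;
        return true;
    }

    /** Weighing stage: container-specific weights = normalized demands; score = weighted free share after placement. */
    static double score(Node n, Container c) {
        double demandSum = 0;
        for (double d : c.demand) demandSum += d;
        double s = 0;
        for (int r = 0; r < RES; r++) {
            double weight = demandSum > 0 ? c.demand[r] / demandSum : 1.0 / RES;
            double freeAfter = (n.free[r] - c.demand[r]) / n.total[r];
            s += weight * freeAfter;   // prefer nodes left most balanced on the resources this container stresses
        }
        return s;
    }

    static Optional<Node> place(List<Node> nodes, Container c) {
        return nodes.stream()
                .filter(n -> fits(n, c))
                .max(Comparator.comparingDouble((Node n) -> score(n, c)));
    }
}
```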
Style APA, Harvard, Vancouver, ISO itp.
36

Redman, William T., Nora S. Wolcott, Luca Montelisciani, Gabriel Luna, Tyler D. Marks, Kevin K. Sit, Che-Hang Yu, Spencer Smith i Michael J. Goard. "Long-term transverse imaging of the hippocampus with glass microperiscopes". eLife 11 (1.07.2022). http://dx.doi.org/10.7554/elife.75391.

Pełny tekst źródła
Streszczenie:
The hippocampus consists of a stereotyped neuronal circuit repeated along the septal-temporal axis. This transverse circuit contains distinct subfields with stereotyped connectivity that support crucial cognitive processes, including episodic and spatial memory. However, comprehensive measurements across the transverse hippocampal circuit in vivo are intractable with existing techniques. Here, we developed an approach for two-photon imaging of the transverse hippocampal plane in awake mice via implanted glass microperiscopes, allowing optical access to the major hippocampal subfields and to the dendritic arbor of pyramidal neurons. Using this approach, we tracked dendritic morphological dynamics on CA1 apical dendrites and characterized spine turnover. We then used calcium imaging to quantify the prevalence of place and speed cells across subfields. Finally, we measured the anatomical distribution of spatial information, finding a non-uniform distribution of spatial selectivity along the DG-to-CA1 axis. This approach extends the existing toolbox for structural and functional measurements of hippocampal circuitry.
Style APA, Harvard, Vancouver, ISO itp.
37

Dülger, Özcan, i Tansel Dökeroğlu. "A new parallel tabu search algorithm for the optimization of the maximum vertex weight clique problem". Concurrency and Computation: Practice and Experience, 22.08.2023. http://dx.doi.org/10.1002/cpe.7891.

Pełny tekst źródła
Streszczenie:
The efficiency of metaheuristic algorithms depends significantly on the number of fitness value evaluations performed on candidate solutions. In addition to various intelligent techniques used to obtain better results, parallelization of calculations can substantially improve the solutions in cases where the problem is NP-hard and requires many evaluations. This study proposes a new parallel tabu search method for solving the Maximum Vertex Weight Clique Problem (MVWCP) on Non-Uniform Memory Access (NUMA) architectures using the OpenMP parallel programming paradigm. Achieving scalability on NUMA architectures presents significant challenges due to the high complexity of their memory systems, which can lead to performance loss. However, our proposed Tabu-NUMA algorithm provides speed-up with up to 64 cores for ten basic problem instances in the DIMACS-W and BHOSLIB-W benchmarks, and it improves the performance of the serial Multi Neighborhood Tabu Search (MN/TS) algorithm for 38 problem instances in the DIMACS-W and BHOSLIB-W benchmarks. We further evaluate our algorithm on larger datasets with thousands of edges and vertices from Network Data Repository benchmark problem instances, and we report significant improvements in terms of speed-up. Our results confirm that the Tabu-NUMA algorithm is among the best recent algorithms for solving MVWCP on NUMA architectures.
Style APA, Harvard, Vancouver, ISO itp.
38

M Shanthappa, Pallavi, i Rakshitha Kumar. "ProAll-D: protein allergen detection using long short term memory - a deep learning approach". ADMET and DMPK, 21.08.2022. http://dx.doi.org/10.5599/admet.1335.

Pełny tekst źródła
Streszczenie:
Background: An allergic reaction is the immune system's overreaction to a previously encountered, typically benign molecule, frequently a protein. Allergic reactions can result in rashes, itching, mucous membrane swelling, asthma, coughing, and other unusual symptoms. To anticipate allergies, a wide range of principles and methods have been applied in bioinformatics. The positive predictive value of the sequence similarity approach is very low, and it is ineffective for methods based on FAO/WHO criteria, making it difficult to predict possible allergens. Method: This work advocates the use of the deep learning model LSTM (Long Short-Term Memory) to overcome the limitations of traditional approaches and lower-performing machine learning models in predicting the allergenicity of dietary proteins. A total of 2,427 allergens and 2,427 non-allergens from a variety of sources, including the Central Science Laboratory and the NCBI, are used. The data was divided 80:20 for training and testing purposes. These techniques have all been implemented in Python. To describe the protein sequences of allergens and non-allergens, five E-descriptors were used: E1 (hydrophilic character of peptides), E2 (length), E3 (propensity to form helices), E4 (abundance and dispersion), and E5 (propensity of beta strands). These are used, via ACC transformation, to bring the variable-length protein sequences to a uniform length. A total of eight machine learning techniques have been taken into consideration. Results: Gaussian Naive Bayes has an accuracy of 64.14 %, the Radius Neighbours Classifier 49.2 %, the Bagging Classifier 85.8 %, ADA Boost 76.9 %, Linear Discriminant Analysis 76.13 %, Quadratic Discriminant Analysis 84.2 %, the Extra Tree Classifier 90 %, and the LSTM 91.5 %. Conclusion: As the LSTM has an AUC value of 91.5 %, it is regarded as the best at predicting allergens. A web server called ProAll-D has been created that successfully identifies novel allergens using the LSTM approach. Users can use the link https://doi.org/10.17632/tjmt97xpjf.1 to access the ProAll-D server and data. ©2022 by the authors. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/). Keywords: Allergen prediction; ACC transformation; LSTM model; Gaussian naive bayes; Classifier; Extra tree classifier; Bagging classifier; ADA boost; Linear discriminant analysis; Quadratic discriminant analysis
Style APA, Harvard, Vancouver, ISO itp.
39

Kairys, Anson, Ana Daugherty, Voyko Kavcic, Sarah Shair, Carol Persad, Judith Heidebrink, Arijit Bhaumik i Bruno Giordani. "Laptop-Administered NIH Toolbox and Cogstate Brief Battery in Community-Dwelling Black Adults: Unexpected Pattern of Cognitive Performance between MCI and Healthy Controls". Journal of the International Neuropsychological Society, 23.03.2021, 1–10. http://dx.doi.org/10.1017/s135561772100028x.

Pełny tekst źródła
Streszczenie:
Objective: Black adults are approximately twice as likely to develop Alzheimer's disease (AD) as non-Hispanic Whites and access diagnostic services later in their illness. This dictates the need to develop assessments that are cost-effective, easily administered, and sensitive to preclinical stages of AD, such as mild cognitive impairment (MCI). Two computerized cognitive batteries, NIH Toolbox-Cognition and Cogstate Brief Battery, have been developed. However, the utility of these measures for clinical characterization remains only partially determined. We sought to determine the convergent validity of these computerized measures in relation to consensus diagnosis in a sample of MCI and healthy controls (HC). Method: Participants were community-dwelling Black adults who completed the neuropsychological battery and other Uniform Data Set (UDS) forms from the AD centers program for consensus diagnosis (HC = 61; MCI = 43) and the NIH Toolbox-Cognition and Cogstate batteries. Discriminant function analysis was used to determine which cognitive tests best differentiated the groups. Results: NIH Toolbox crystallized measures, Oral Reading and Picture Vocabulary, were the most sensitive in identifying MCI apart from HC. Secondarily, deficits in memory and executive subtests were also predictive. UDS neuropsychological test analyses showed the expected pattern of memory and executive functioning tests differentiating MCI from HC. Conclusions: Contrary to expectation, NIH Toolbox crystallized abilities appeared preferentially sensitive to diagnostic group differences. This study highlights the importance of further research into the validity and clinical utility of computerized neuropsychological tests within ethnic minority populations.
Style APA, Harvard, Vancouver, ISO itp.
40

Jiang, Jiazhi, Zijian Huang, Dan Huang, Jiangsu Du, Lin Chen, Ziguan Chen i Yutong Lu. "Hierarchical Model Parallelism for Optimizing Inference on Many-core Processor Via Decoupled 3D-CNN Structure". ACM Transactions on Architecture and Code Optimization, 18.06.2023. http://dx.doi.org/10.1145/3605149.

Pełny tekst źródła
Streszczenie:
The tremendous success of the convolutional neural network (CNN) has made it ubiquitous in many fields of human endeavor. Many applications, such as biomedical analysis and scientific data analysis, involve analyzing volumetric data. This spawns a huge demand for 3D-CNNs. Although accelerators such as GPUs may provide higher throughput on deep learning applications, they may not be available in all scenarios. The CPU, especially a many-core CPU with a non-uniform memory access (NUMA) architecture, remains an attractive choice for deep learning inference in many scenarios. In this paper, we propose a distributed inference solution for 3D-CNNs that targets the emerging ARM many-core CPU platform. A hierarchical partition approach is proposed to accelerate 3D-CNN inference by exploiting the characteristics of memory and cache on ARM many-core CPUs. Based on the hierarchical model partition approach, other optimization techniques, such as NUMA-aware thread scheduling and optimization of the 3D-img2row convolution, are designed to exploit the potential of ARM many-core CPUs for 3D-CNNs. We evaluate our proposed inference solution with several classic 3D-CNNs: C3D, 3D-resnet34, 3D-resnet50, 3D-vgg11 and P3D. Our experimental results show that our solution can boost the performance of 3D-CNN inference and achieve much better scalability, with negligible fluctuation in accuracy. When employing our 3D-CNN inference solution with the ACL library, it outperforms naive ACL implementations by 11x to 50x on an ARM many-core processor. When employing our 3D-CNN inference solution with the NCNN library, it outperforms naive NCNN implementations by 5.2x to 14.2x on an ARM many-core processor.
Style APA, Harvard, Vancouver, ISO itp.
41

Lindeberg, Tony. "A time-causal and time-recursive scale-covariant scale-space representation of temporal signals and past time". Biological Cybernetics, 23.01.2023. http://dx.doi.org/10.1007/s00422-022-00953-6.

Pełny tekst źródła
Streszczenie:
AbstractThis article presents an overview of a theory for performing temporal smoothing on temporal signals in such a way that: (i) temporally smoothed signals at coarser temporal scales are guaranteed to constitute simplifications of corresponding temporally smoothed signals at any finer temporal scale (including the original signal) and (ii) the temporal smoothing process is both time-causal and time-recursive, in the sense that it does not require access to future information and can be performed with no other temporal memory buffer of the past than the resulting smoothed temporal scale-space representations themselves. For specific subsets of parameter settings for the classes of linear and shift-invariant temporal smoothing operators that obey this property, it is shown how temporal scale covariance can be additionally obtained, guaranteeing that if the temporal input signal is rescaled by a uniform temporal scaling factor, then also the resulting temporal scale-space representations of the rescaled temporal signal will constitute mere rescalings of the temporal scale-space representations of the original input signal, complemented by a shift along the temporal scale dimension. The resulting time-causal limit kernel that obeys this property constitutes a canonical temporal kernel for processing temporal signals in real-time scenarios when the regular Gaussian kernel cannot be used, because of its non-causal access to information from the future, and we cannot additionally require the temporal smoothing process to comprise a complementary memory of the past beyond the information contained in the temporal smoothing process itself, which in this way also serves as a multi-scale temporal memory of the past. We describe how the time-causal limit kernel relates to previously used temporal models, such as Koenderink’s scale-time kernels and the ex-Gaussian kernel. We do also give an overview of how the time-causal limit kernel can be used for modelling the temporal processing in models for spatio-temporal and spectro-temporal receptive fields, and how it more generally has a high potential for modelling neural temporal response functions in a purely time-causal and time-recursive way, that can also handle phenomena at multiple temporal scales in a theoretically well-founded manner. We detail how this theory can be efficiently implemented for discrete data, in terms of a set of recursive filters coupled in cascade. Hence, the theory is generally applicable for both: (i) modelling continuous temporal phenomena over multiple temporal scales and (ii) digital processing of measured temporal signals in real time. We conclude by stating implications of the theory for modelling temporal phenomena in biological, perceptual, neural and memory processes by mathematical models, as well as implications regarding the philosophy of time and perceptual agents. Specifically, we propose that for A-type theories of time, as well as for perceptual agents, the notion of a non-infinitesimal inner temporal scale of the temporal receptive fields has to be included in representations of the present, where the inherent nonzero temporal delay of such time-causal receptive fields implies a need for incorporating predictions from the actual time-delayed present in the layers of a perceptual hierarchy, to make it possible for a representation of the perceptual present to constitute a representation of the environment with timing properties closer to the actual present.
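The discrete implementation mentioned at the end of this abstract, a cascade of time-causal and time-recursive first-order smoothers whose internal states are the only memory of the past, can be sketched as follows. The specific geometric distribution of time constants is shown only as an example of how such a cascade might be parameterized for the time-causal limit kernel, and the variable names are illustrative.

```java
// Sketch of temporal smoothing by a cascade of first-order recursive filters, each
// updated as y_k[n] = y_k[n-1] + (1 / (1 + mu_k)) * (y_{k-1}[n] - y_k[n-1]),
// which is time-causal and time-recursive: the filter states themselves serve as the
// multi-scale memory of the past. The geometric choice of time constants is illustrative.
final class TemporalScaleSpaceCascade {
    private final double[] mu;      // time constants of the individual recursive filters
    private final double[] state;   // y_k[n-1]; doubles as the memory of the past
    private boolean initialized = false;

    TemporalScaleSpaceCascade(int depth, double mu0, double ratio) {
        mu = new double[depth];
        state = new double[depth];
        for (int k = 0; k < depth; k++) {
            mu[k] = mu0 * Math.pow(ratio, k);   // e.g. geometrically increasing time constants
        }
    }

    /** Feeds one new sample and returns the smoothed outputs at all temporal scales. */
    double[] step(double input) {
        if (!initialized) {                     // start the cascade from the first sample
            java.util.Arrays.fill(state, input);
            initialized = true;
            return state.clone();
        }
        double in = input;
        for (int k = 0; k < mu.length; k++) {
            state[k] = state[k] + (in - state[k]) / (1.0 + mu[k]);
            in = state[k];                      // output of scale k feeds scale k + 1
        }
        return state.clone();
    }
}
```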
Style APA, Harvard, Vancouver, ISO itp.
42

Webb, Damien, i Rachel Franks. "Metropolitan Collections: Reaching Out to Regional Australia". M/C Journal 22, nr 3 (19.06.2019). http://dx.doi.org/10.5204/mcj.1529.

Pełny tekst źródła
Streszczenie:
Special Care NoticeThis article discusses trauma and violence inflicted upon the Indigenous peoples of Tasmania through the processes of colonisation. Content within this article may be distressing to some readers. IntroductionThis article looks briefly at the collection, consultation, and digital sharing of stories essential to the histories of the First Nations peoples of Australia. Focusing on materials held in Sydney, New South Wales two case studies—the object known as the Proclamation Board and the George Augustus Robinson Papers—explore how materials can be shared with Aboriginal peoples of the region now known as Tasmania. Specifically, the authors of this article (a Palawa man and an Australian woman of European descent) ask how can the idea of the privileging of Indigenous voices, within Eurocentric cultural collections, be transformed from rhetoric to reality? Moreover, how can we navigate this complex work, that is made even more problematic by distance, through the utilisation of knowledge networks which are geographically isolated from the collections holding stories crucial to Indigenous communities? In seeking to answer these important questions, this article looks at how cultural, emotional, and intellectual ownership can be divested from the physical ownership of a collection in a way that repatriates—appropriately and sensitively—stories of Aboriginal Australia and of colonisation. Holding Stories, Not Always Our OwnCultural institutions, including libraries, have, in recent years, been drawn into discussions centred on the notion of digital disruption and “that transformative shift which has seen the ongoing realignment of business resources, relationships, knowledge, and value both facilitating the entry of previously impossible ideas and accelerating the competitive impact of those same impossible ideas” (Franks and Ensor n.p.). As Molly Brown has noted, librarians “are faced, on a daily basis, with rapidly changing technology and the ways in which our patrons access and use information. Thus, we need to look at disruptive technologies as opportunities” (n.p.). Some innovations, including the transition from card catalogues to online catalogues and the provision of a wide range of electronic resources, are now considered to be business as usual for most institutions. So, too, the digitisation of great swathes of materials to facilitate access to collections onsite and online, with digitising primary sources seen as an intermediary between the pillars of preserving these materials and facilitating access for those who cannot, for a variety of logistical and personal reasons, travel to a particular repository where a collection is held.The result has been the development of hybrid collections: that is, collections that can be accessed in both physical and digital formats. Yet, the digitisation processes conducted by memory institutions is often selective. Limited resources, even for large-scale digitisation projects usually only realise outcomes that focus on making visually rich, key, or canonical documents, or those documents that are considered high use and at risk, available online. Such materials are extracted from the larger full body of records while other lesser-known components are often omitted. Digitisation projects therefore tend to be devised for a broader audience where contextual questions are less central to the methodology in favour of presenting notable or famous documents online only. 
Documents can be profiled as an exhibition separate from their complete collection and, critically, their wider context. Libraries of course are not neutral spaces and this practice of (re)enforcing the canon through digitisation is a challenge that cultural institutions, in partnerships, need to address (Franks and Ensor n.p.). Indeed, our digital collections are as affected by power relationships and the ongoing impacts of colonisation as our physical collections. These power relationships can be seen through an organisation’s “processes that support acquisitions, as purchases and as the acceptance of artefacts offered as donations. Throughout such processes decisions are continually made (consciously and unconsciously) that affect what is presented and actively promoted as the official history” (Thorpe et al. 8). While it is important to acknowledge what we do collect, it is equally important to look, too, at what we do not collect and to consider how we continually privilege and exclude stories. Especially when these stories are not always our own, but are held, often as accidents of collecting. For example, an item comes in as part of a larger suite of materials while older, city-based institutions often pre-date regional repositories. An essential point here is that cultural institutions can often become comfortable in what they collect, building on existing holdings. This, in turn, can lead to comfortable digitisation. If we are to be truly disruptive, we need to embrace feeling uncomfortable in what we do, and we need to view digitisation as an intervention opportunity; a chance to challenge what we ‘know’ about our collections. This is especially relevant in any attempts to decolonise collections.Case Study One: The Proclamation BoardThe first case study looks at an example of re-digitisation. One of the seven Proclamation Boards known to survive in a public collection is held by the Mitchell Library, State Library of New South Wales, having been purchased from Tasmanian collector and photographer John Watt Beattie (1859–1930) in May 1919 for £30 (Morris 86). Why, with so much material to digitise—working in a program of limited funds and time—would the Library return to an object that has already been privileged? Unanswered questions and advances in digitisation technologies, created a unique opportunity. For the First Peoples of Van Diemen’s Land (now known as Tasmania), colonisation by the British in 1803 was “an emotionally, intellectually, physically, and spiritually confronting series of encounters” (Franks n.p.). Violent incidents became routine and were followed by a full-scale conflict, often referred to as the Black War (Clements 1), or more recently as the Tasmanian War, fought from the 1820s until 1832. Image 1: Governor Arthur’s Proclamation to the Aborigines, ca. 1828–1830. Image Credit: Mitchell Library, State Library of New South Wales, Call No.: SAFE / R 247.Behind the British combatants were various support staff, including administrators and propagandists. One of the efforts by the belligerents, behind the front line, to win the war and bring about peace was the production of approximately 100 Proclamation Boards. These four-strip pictograms were the result of a scheme introduced by Lieutenant Governor George Arthur (1784–1854), on the advice of Surveyor General George Frankland (1800–38), to communicate that all are equal under the rule of law (Arthur 1). 
Frankland wrote to Arthur in early 1829 to suggest these Proclamation Boards could be produced and nailed to trees (Morris 84), as a Eurocentric adaptation of a traditional method of communication used by Indigenous peoples who left images on the trunks of trees. The overtly stated purpose of the Boards was, like the printed proclamations exhorting peace, to assert, all people—black and white—were equal. That “British Justice would protect” everyone (Morris 84). The first strip on each of these pictogram Boards presents Indigenous peoples and colonists living peacefully together. The second strip shows “a conciliatory handshake between the British governor and an Aboriginal ‘chief’, highly reminiscent of images found in North America on treaty medals and anti-slavery tokens” (Darian-Smith and Edmonds 4). The third and fourth strips depict the repercussions for committing murder (or, indeed, any significant crime), with an Indigenous man hanged for spearing a colonist and a European man hanged for shooting an Aboriginal man. Both men executed in the presence of the Lieutenant Governor. The Boards, oil on Huon pine, were painted by “convict artists incarcerated in the island penal colony” (Carroll 73).The Board at the State Library of New South Wales was digitised quite early on in the Library’s digitisation program, it has been routinely exhibited (including for the Library’s centenary in 2010) and is written about regularly. Yet, many questions about this small piece of timber remain unanswered. For example, some Boards were outlined with sketches and some were outlined with pouncing, “a technique [of the Italian Renaissance] of pricking the contours of a drawing with a pin. Charcoal was then dusted on to the drawing” (Carroll 75–76). Could such a sketch or example of pouncing be seen beneath the surface layers of paint on this particular Board? What might be revealed by examining the Board more closely and looking at this object in different ways?An important, but unexpected, discovery was that while most of the pigments in the painting correlate with those commonly available to artists in the early nineteenth century there is one outstanding anomaly. X-ray analysis revealed cadmium yellow present in several places across the painting, including the dresses of the little girls in strip one, uniform details in strip two, and the trousers worn by the settler men in strips three and four (Kahabka 2). This is an extraordinary discovery, as cadmium yellows were available “commercially as an artist pigment in England by 1846” and were shown by “Winsor & Newton at the 1851 Exhibition held at the Crystal Palace, London” (Fiedler and Bayard 68). The availability of this particular type of yellow in the early 1850s could set a new marker for the earliest possible date for the manufacture of this Board, long-assumed to be 1828–30. Further, the early manufacture of cadmium yellow saw the pigment in short supply and a very expensive option when compared with other pigments such as chrome yellow (the darker yellow, seen in the grid lines that separate the scenes in the painting). This presents a clearly uncomfortable truth in relation to an object so heavily researched and so significant to a well-regarded collection that aims to document much of Australia’s colonial history. Is it possible, for example, the Board has been subjected to overpainting at a later date? 
Or, was this premium paint used to produce a display Board that was sent, by the Tasmanian Government, to the 1866 Intercolonial Exhibition in Melbourne? In seeking to see the finer details of the painting through re-digitisation, the results were much richer than anticipated. The sketch outlines are clearly visible in the new high-resolution files. There are, too, details unable to be seen clearly with the naked eye, including this warrior’s headdress and ceremonial scarring on his stomach, scars that tell stories “of pain, endurance, identity, status, beauty, courage, sorrow or grief” (Australian Museum n.p.). The image of this man has been duplicated and distributed since the 1830s, an anonymous figure deployed to tell a settler-centric story of the Black, or Tasmanian, War. This man can now be seen, for the first time nine decades later, to wear his own story. We do not know his name, but he is no longer completely anonymous. This image is now, in some ways, a portrait. The State Library of New South Wales acknowledges this object is part of an important chapter in the Tasmanian story and, though two Boards are in collections in Tasmania (the Tasmanian Museum and Art Gallery, Hobart and the Queen Victoria Museum and Art Gallery, Launceston), each Board is different. The Library holds an important piece of a large and complex puzzle and has a moral obligation to make this information available beyond its metropolitan location. Digitisation, in this case re-digitisation, is allowing for the disruption of this story in sparking new questions around provenance and for the relocating of a Palawa warrior to a more prominent, perhaps even equal role, within a colonial narrative. Image 2: Detail, Governor Arthur’s Proclamation to the Aborigines, ca. 1828–1830. Image Credit: Mitchell Library, State Library of New South Wales, Call No.: SAFE / R 247.Case Study Two: The George Augustus Robinson PapersThe second case study focuses on the work being led by the Indigenous Engagement Branch at the State Library of New South Wales on the George Augustus Robinson (1791–1866) Papers. In 1829, Robinson was granted a government post in Van Diemen’s Land to ‘conciliate’ with the Palawa peoples. More accurately, Robinson’s core task was dispossession and the systematic disconnection of the Palawa peoples from their Country, community, and culture. Robinson was a habitual diarist and notetaker documenting much of his own life as well as the lives of those around him, including First Nations peoples. His extensive suite of papers represents a familiar and peculiar kind of discomfort for Aboriginal Australians, one in which they are forced to learn about themselves through the eyes and words of their oppressors. For many First Nations peoples of Tasmania, Robinson remains a violent and terrible figure, but his observations of Palawa culture and language are as vital as they are problematic. Importantly, his papers include vibrant and utterly unique descriptions of people, place, flora and fauna, and language, as well as illustrations revealing insights into the routines of daily life (even as those routines were being systematically dismantled by colonial authorities). “Robinson’s records have informed much of the revitalisation of Tasmanian Aboriginal culture in the twentieth century and continue to provide the basis for investigations of identity and deep relationships to land by Aboriginal scholars” (Lehman n.p.). 
These observations and snippets of lived culture are of immense value to Palawa peoples today but the act of reading between Robinson’s assumptions and beyond his entrenched colonial views is difficult work.Image 3: George Augustus Robinson Papers, 1829–34. Image Credit: Mitchell Library, State Library of New South Wales, A 7023–A 7031.The canonical reference for Robinson’s archive is Friendly Mission: The Tasmanian Journals and Papers of George Augustus Robinson, 1829–1834, edited by N.J.B. Plomley. The volume of over 1,000 pages was first published in 1966. This large-scale project is recognised “as a monumental work of Tasmanian history” (Crane ix). Yet, this standard text (relied upon by Indigenous and non-Indigenous researchers) has clearly not reproduced a significant percentage of Robinson’s Tasmanian manuscripts. Through his presumptuous truncations Plomley has not simply edited Robinson’s work but has, quite literally, written many Palawa stories out of this colonial narrative. It is this lack of agency in determining what should be left out that is most troubling, and reflects an all-too-familiar approach which libraries, including the State Library of New South Wales, are now urgently trying to rectify. Plomley’s preface and introduction does not indicate large tranches of information are missing. Indeed, Plomley specifies “that in extenso [in full] reproduction was necessary” (4) and omissions “have been kept to a minimum” (8). A 32-page supplement was published in 1971. A new edition, including the supplement, some corrections made by Plomley, and some extra material was released in 2008. But much continues to be unknown outside of academic circles, and far too few Palawa Elders and language revival workers have had access to Robinson’s original unfiltered observations. Indeed, Plomley’s text is linear and neat when compared to the often-chaotic writings of Robinson. Digitisation cannot address matters of the materiality of the archive, but such projects do offer opportunities for access to information in its original form, unedited, and unmediated.Extensive consultation with communities in Tasmania is underpinning the digitisation and re-description of a collection which has long been assumed—through partial digitisation, microfilming, and Plomley’s text—to be readily available and wholly understood. Central to this project is not just challenging the canonical status of Plomley’s work but directly challenging the idea non-Aboriginal experts can truly understand the cultural or linguistic context of the information recorded in Robinson’s journals. One of the more exciting outcomes, so far, has been working with Palawa peoples to explore the possibility of Palawa-led transcriptions and translation, and not breaking up the tasks of this work and distributing them to consultants or to non-Indigenous student groups. In this way, people are being meaningfully reunited with their own histories and, crucially, given first right to contextualise and understand these histories. Again, digitisation and disruption can be seen here as allies with the facilitation of accessibility to an archive in ways that re-distribute the traditional power relations around interpreting and telling stories held within colonial-rich collections.Image 4: Detail, George Augustus Robinson Papers, 1829–34. 
Image Credit: Mitchell Library, State Library of New South Wales, A 7023–A 7031.As has been so brilliantly illustrated by Bruce Pascoe’s recent work Dark Emu (2014), when Aboriginal peoples are given the opportunity to interpret their own culture from the colonial records without interference, they are able to see strength and sophistication rather than victimhood. For, to “understand how the Europeans’ assumptions selectively filtered the information brought to them by the early explorers is to see how we came to have the history of the country we accept today” (4). Far from decrying these early colonial records Aboriginal peoples understand their vital importance in connecting to a culture which was dismantled and destroyed, but importantly it is known that far too much is lost in translation when Aboriginal Australians are not the ones undertaking the translating. ConclusionFor Aboriginal Australians, culture and knowledge is no longer always anchored to Country. These histories, once so firmly connected to communities through their ancestral lands and languages, have been dispersed across the continent and around the world. Many important stories—of family history, language, and ways of life—are held in cultural institutions and understanding the role of responsibly disseminating these collections through digitisation is paramount. In transitioning from physical collections to hybrid collections of the physical and digital, the digitisation processes conducted by memory institutions can be—and due to the size of some collections is inevitably—selective. Limited resources, even for large-scale and well-resourced digitisation projects usually realise outcomes that focus on making visually rich, key, or canonical documents, or those documents considered high use or at risk, available online. Such materials are extracted from a full body of records. Digitisation projects, as noted, tend to be devised for a broader audience where contextual questions are less central to the methodology in favour of presenting notable documents online, separate from their complete collection and, critically, their context. Our institutions carry the weight of past collecting strategies and, today, the pressure of digitisation strategies as well. Contemporary librarians should not be gatekeepers, but rather key holders. In collaborating across sectors and with communities we open doors for education, research, and the repatriation of culture and knowledge. We must, always, remember to open these doors wide: the call of Aboriginal Australians of ‘nothing about us without us’ is not an invitation to collaboration but an imperative. Libraries—as well as galleries, archives, and museums—cannot tell these stories alone. Also, these two case studies highlight what we believe to be one of the biggest mistakes that not just libraries but all cultural institutions are vulnerable to making, the assumption that just because a collection is open access it is also accessible. Digitisation projects are more valuable when communicated, contextualised and—essentially—the result of community consultation. Such work can, for some, be uncomfortable while for others it offers opportunities to embrace disruption and, by extension, opportunities to decolonise collections. For First Nations peoples this work can be more powerful than any simple measurement tool can record. 
Through examining our past collecting, through deliberate efforts to consult, and through digital sharing projects across metropolitan and regional Australia, we can make meaningful differences to the ways in which Aboriginal Australians can, again, own their histories.
Acknowledgements
The authors acknowledge the Palawa peoples: the traditional custodians of the lands known today as Tasmania. The authors acknowledge, too, the Gadigal people upon whose lands this article was researched and written. We are indebted to Dana Kahabka (Conservator), Joy Lai (Imaging Specialist), Richard Neville (Mitchell Librarian), and Marika Duczynski (Project Officer) at the State Library of New South Wales. Sincere thanks are also given to Jason Ensor of Western Sydney University.
References
Arthur, George. “Proclamation.” The Hobart Town Courier 19 Apr. 1828: 1.
———. Proclamation to the Aborigines. Graphic Materials. Sydney: Mitchell Library, State Library of New South Wales, SAFE R / 247, ca. 1828–1830.
Australian Museum. “Aboriginal Scarification.” 2018. 11 Jan. 2019 <https://australianmuseum.net.au/about/history/exhibitions/body-art/aboriginal-scarification/>.
Brown, Molly. “Disruptive Technology: A Good Thing for Our Libraries?” International Librarians Network (2016). 26 Aug. 2018 <https://interlibnet.org/2016/11/25/disruptive-technology-a-good-thing-for-our-libraries/>.
Carroll, Khadija von Zinnenburg. Art in the Time of Colony: Empires and the Making of the Modern World, 1650–2000. Farnham, UK: Ashgate Publishing, 2014.
Clements, Nicholas. The Black War: Fear, Sex and Resistance in Tasmania. St Lucia: U of Queensland P, 2014.
Crane, Ralph. “Introduction.” Friendly Mission: The Tasmanian Journals and Papers of George Augustus Robinson, 1829–1834. 2nd ed. Launceston and Hobart: Queen Victoria Museum and Art Gallery, and Quintus Publishing, 2008. ix.
Darian-Smith, Kate, and Penelope Edmonds. “Conciliation on Colonial Frontiers.” Conciliation on Colonial Frontiers: Conflict, Performance and Commemoration in Australia and the Pacific Rim. Eds. Kate Darian-Smith and Penelope Edmonds. New York: Routledge, 2015. 1–14.
Edmonds, Penelope. “‘Failing in Every Endeavour to Conciliate’: Governor Arthur’s Proclamation Boards to the Aborigines, Australian Conciliation Narratives and Their Transnational Connections.” Journal of Australian Studies 35.2 (2011): 201–18.
Fiedler, Inge, and Michael A. Bayard. Artist Pigments, a Handbook of Their History and Characteristics. Ed. Robert L. Feller. Cambridge: Cambridge UP, 1986. 65–108.
Franks, Rachel. “A True Crime Tale: Re-Imagining Governor Arthur’s Proclamation Board for the Tasmanian Aborigines.” M/C Journal 18.6 (2015). 1 Feb. 2019 <http://journal.media-culture.org.au/index.php/mcjournal/article/view/1036>.
Franks, Rachel, and Jason Ensor. “Challenging the Canon: Collaboration, Digitisation and Education.” ALIA Online: A Conference of the Australian Library and Information Association, 11–15 Feb. 2019, Sydney.
Kahabka, Dana. Condition Assessment [Governor Arthur’s Proclamation to the Aborigines, ca. 1828–1830, SAFE / R247]. Sydney: State Library of New South Wales, 2017.
Lehman, Greg. “Pleading Robinson: Reviews of Friendly Mission: The Tasmanian Journals and Papers of George Augustus Robinson (2008) and Reading Robinson: Companion Essays to Friendly Mission (2008).” Australian Humanities Review 49 (2010). 1 May 2019 <http://press-files.anu.edu.au/downloads/press/p41961/html/review-12.xhtml?referer=1294&page=15>.
Morris, John. “Notes on A Message to the Tasmanian Aborigines in 1829, popularly called ‘Governor Davey’s Proclamation to the Aborigines, 1816’.” Australiana 10.3 (1988): 84–7.
Pascoe, Bruce. Dark Emu. Broome: Magabala Books, 2014/2018.
Plomley, N.J.B. Friendly Mission: The Tasmanian Journals and Papers of George Augustus Robinson, 1829–1834. Hobart: Tasmanian Historical Research Association, 1966.
Robinson, George Augustus. Papers. Textual Records. Sydney: Mitchell Library, State Library of NSW, A 7023–A 7031, 1829–34.
Thorpe, Kirsten, Monica Galassi, and Rachel Franks. “Discovering Indigenous Australian Culture: Building Trusted Engagement in Online Environments.” Journal of Web Librarianship 10.4 (2016): 343–63.
APA, Harvard, Vancouver, ISO and other citation styles
43

Patterson-Ooi, Amber, and Natalie Araujo. "Beyond Needle and Thread". M/C Journal 25, no. 4 (5 Oct. 2022). http://dx.doi.org/10.5204/mcj.2927.

Full source text
Abstract:
Introduction
In the elite space of Haute Couture, fashion is presented through a theatrical array of dynamics—the engagement of specific bodies performing for select audiences in highly curated spaces. Each element is both very precise in its objectives and carefully selected for impact. In this way, the production of Haute Couture makes itself accessible to only a few select members of society. Globally, there are only an estimated 4,000 direct consumers of Haute Couture (Hendrik). Given this limited market, the work of elite couturiers relies on other forms of artistic media, namely film, photography, and increasingly, museum spaces, to reach broader audiences who are then enabled to participate in the fashion ‘space’ via a process of visual consumption. For these audiences, Haute Couture is less about material consumption than it is about the aspirational consumption and contestation of notions of identity. This article uses qualitative textual analysis and draws on semiotic theory to explore symbolism and values in Haute Couture. Semiotics, an approach popularised by the work of Roland Barthes, examines signifiers as elements of the construction of metalanguage and myth. Barthes recognised a broad understanding of language that extended beyond oral and written forms. He acknowledged that a photograph or artefact may also constitute “a kind of speech” (111). Similarly, fashion can be seen as both an important signifier and mode of communication. The model of fashion as communication is one extensively explored within culture studies (e.g. Hall; Lurie). Much of the discussion of semiotics in this literature is predicated on sender/receiver models. These models conceive of fashion as the mechanism through which individual senders communicate to another individual or to collective (and largely passive) audiences (Barnard). Yet, fashion is not a unidirectional form of communication. It can be seen as a dialogical and discursive space of encounter and contestation. To understand the role of Haute Couture as a contested space of identity and socio-political discourse, this article examines the work of Chinese couturier Guo Pei. An artisan such as Guo Pei places the results of needle and thread into spaces of the theatrical, the spectacular, and, significantly, the powerfully socio-political. Guo Pei’s contributions to Haute Couture are extravagant, fantastical productions that also serve as spaces of socio-cultural information exchange and debate. Guo Pei’s creations bring together political history, memory, and fantasy. Here we explore the socio-cultural and political semiotics that emerge when the humble stitch is dramatically amplified onto the Haute Couture runway. We argue that Guo Pei’s work speaks not only to a cultural imaginary but also to the contested nature of gender and socio-political authority in contemporary China.
The Politicisation of Fashion in China
The majority of literature regarding Chinese fashion in the twentieth and twenty-first centuries has focussed on the use of fashion to communicate socio-political messages (Finnane). This is most clearly seen in analyses of the connections between dress and egalitarian ideals during Mao Zedong’s Cultural Revolution. As Zhang (952-952) notes, revolutionary fashion emphasised simplicity, frugality, and homogenisation. It rejected style choices that reflected both traditional Chinese and Western fashions.
In Mao’s China, fashion was utilised by the state and adopted by the populace as a means of reinforcing the regime’s ideological orientations. For example, the ubiquitous Mao suit, worn by both men and women during the Cultural Revolution, “was intended not merely as a unisex garment but a means to deemphasise gender altogether” (Feng 79). The Maoist regime’s intention to create a type of social equality through sartorial homogenisation was clear. Reflecting on the ways in which fashion both responded to and shaped women’s positionality, Mao stated, “women are regarded as criminals to begin with, and tall buns and long skirts are the instruments of torture applied to them by men. There is also their facial makeup, which is the brand of the criminal, the jewellery on their hands, which constitutes shackles and their pierced ears and bound feet which represent corporal punishment” (Mao cited in Finnane 23). Mao’s suit—the homogenising militaristic uniform adopted by many citizens—may have been intended as a mechanism for promoting equality, freeing women from the bonds of gendered oppression and all citizens from visual markers of class. Nonetheless, in practice Maoist fashion and the policing of appearance during the Cultural Revolution enforced a politics of amnesia and perversely may have “entailed feminizing the undesirable, by conflating woman, bourgeoisie, and colour while also insisting on a type of gender equality that the belted Mao jacket belied” (Chen 161). In work on cultural transformations in the post-Maoist period, Braester argues that since the late 1980s Chinese cultural products—here taken to include artefacts such as Haute Couture—have similarly been defined by the politics of memory and identity. Evocation of historically important symbols and motifs may serve to impose a form of narrative continuity, connecting the present to the past. Yet, as Braester notes, such strategies may belie stability: “to contemplate memory and forgetting is tantamount to acknowledging the temporal and spatial instability of the post-industrial, globalizing world” (435). In this way, cultural products are not only sites of cultural continuity, but also of contestation.
Imperial Dreams of Feminine Power
The work of Chinese couturier Guo Pei showcases traditional Chinese embroidery techniques alongside more typically Western fashion design practices as a means of demonstrating not only Haute Couturier craftsmanship but also celebrating Chinese imperial culture through nostalgic fantasies in her contemporary designs. Born in Beijing in 1967, at the beginning of the Chinese Cultural Revolution, Guo Pei studied fashion at the Beijing Second Light Industry School before working in private and state-owned fashion houses. She eventually moved to establish her own fashion design studio and was recognised as “the designer of choice for high society and the political elite” in China (Yoong 19). Her work was catapulted into Western consciousness when her cape, titled ‘Yellow Empress’, was donned by Rihanna for the 2015 Met Gala. The design was a response to an era in which the colour yellow was forbidden to all but the emperor. In the same year, Guo Pei was named an invited member of La Federation de la Haute Couture, becoming the first and only Chinese-born and trained couturier to receive the honour. Recognition of her work at political and socio-economic levels earned her an award for ‘Outstanding Contribution to Economy and Cultural Diplomacy’ from the Asian Couture Federation in 2019.
While Maoist fashion influences pursued a vision of gender equality through the ‘unsexing’ of fashion, Guo Pei’s work presents a very different reading of female adornment. One example is her exquisite Snow Queen dress, which draws on imperial motifs in its design. An ensemble of silk, gold embroidery, and Swarovski crystals weighing 50 kilograms, the Snow Queen “characterises Guo Pei’s ideal woman who is noble, resilient and can bear the weight of responsibility” (Yoong 140). In its initial appearance on the Haute Couture runway, the dress was worn by 78-year-old American model Carmen Dell’Orefice, signalling the equation of age with strength and beauty. Rather than being a site of torture or corporal punishment, as suggested by Mao, the Snow Queen dress positions imagined traditional imperial fashion as a space for celebration and empowerment of the feminine form. The choice of model reinforces this message, while simultaneously contesting global narratives that conflate women’s beauty and physical ability with youthfulness. In this way, fashion can be understood as an intersectional space. On the one hand, Guo Pei's work reinvigorates a particular nostalgic vision of Chinese imperial culture and in doing so pushes back against the socio-political ‘non-fashion’ and uniformity of Maoist dress codes. Yet, on the other hand, positioning her work in the very elite space of Haute Couture serves to reinstate social stratification and class boundaries through the creation of economically inaccessible artefacts: a process that in turn involves the reification and museumification of fashion as material culture. Ideals of femininity, identity, individuality, and the expressions of either creating or dismantling power, are anchored within cultural, social, and temporal landscapes. Benedict Anderson argues that the museumising imagination is “profoundly political” (123). Like sacred texts and maps, fashion as material ephemera evokes and reinforces a sense of continuity and connection to history. Yet, the belonging engendered through engagement with material and imagined pasts is imprecise in its orientation. As much as it is about maintaining threads to an historical past, it is simultaneously an appeal to present possibilities. In his broader analysis, Anderson explores the notion of parallelity, the potentiality not to recreate some geographically or temporally removed place, but to open a space of “living lives parallel […] along the same trajectory” (131). Guo Pei’s creations appeal to a similar museumising imagination. At once, her work evokes a particular imagined past of imperial grandeur, set against the instability of the politically shifting present, and appeals to new possibilities of gendered emancipation within that imagined space.
Contesting and Complicating East-West Dualism
The design process frequently involves borrowing, reinterpretation, and renewal of ideas. The erasure of certain cultural and political aspects of social continuity through the Chinese Cultural Revolution, and the socio-political changes thereafter, has created fertile ground for an artist like Guo Pei. Her palimpsest reaches back through time, picks up those cultural threads of extravagance, and projects them wholesale into the spaces of fashion in the present moment. Cognisance of design intentionality and of historical and contemporary fashion discourses influences the various interpretations of fashion semiotics.
However, there are also audience-created meanings within the various modes of performance and consumption. Where Kaiser and Green assert that “the process of fashion is inevitably linked to making and sustaining as well as resisting and dismantling power” (1), we can also observe that sartorial semiotics can have different meanings at different times. In the documentary Yellow Is Forbidden, Guo Pei reflects on shifting semiotics in fashion. Speaking with a client, she remarks that “dragons and phoenixes used to represent the Chinese emperor—now they represent the spirit of the Chinese” (Brettkelly). Once a symbol of sacred, individual power, these iconic signifiers now communicate collective national identity. Both playing with and reimagining not only the grandeur of China’s imperial past, but also the particular role of the feminine form and female power therein, Guo Pei’s corpus evokes and complicates such contestations of power. On the one hand, her work serves to contest homogenising narratives of identity and femininity within China. Equally important, however, are the ways in which this work, which is possible both through and in spite of a Euro-American centric system of patronage within the fashion industry, complicates notions of East-West dualism. For Guo Pei, drawing on broadly accessible visual signifiers of Chinese heritage and culture has been critical in bringing attention to her endeavours. Her work draws significantly from her cultural heritage in terms of colour selections and traditional Chinese embroidery techniques. Symbols and motifs peculiar to Chinese culture are abundant: lotus flowers, dragons, phoenixes, auspicious numbers, and favourable Chinese language characters, such as buttons in the shape of ‘double happiness’ (囍), are often present in her designs. Likewise, her techniques pay homage to traditional craft work, including Peranakan beading. The parallelity conjured by these choices is deliberate. When Guo Pei’s work is staged for exhibition at museums such as the Asian Civilisations Museum, her designs are often showcased beside the historical artefacts that inspired them (Fu). On her Chinese website, Guo Pei highlights the historical connections between her designs and traditional Chinese embroidery craft through a sub-section of the “Spirit” header, entitled simply, “Inheritance”. These influences and expressions of Chinese culture are, in Guo Pei's own words, her “design language” (Brettkelly). However, Guo Pei has also expressed an ambivalence about her positioning as a Chinese designer. She has maintained that she does not want “to be labelled as a Chinese storyteller ... and thinks about a global audience” (Yoong). In her expression of this desire both to derive power through design choices and historically situated practices and symbols, and simultaneously to move beyond nationally bounded identity frameworks, Guo Pei positions herself in a space ‘betwixt and between.’ This is not only a space of encounter between East and West, but also a space that calls into question the limits and possibilities of semiotic expression.
Authenticity and Legitimacy
Global audiences of fashion rely on social devices of diffusion other than the runway: photography, film, museums, and galleries. Unique to Haute Couture, however, is the way in which such processes are often abstracted, decontextualised and pushed to the extremities of theatrical opulence.
De Perthuis argues that to remove context “greatly reduce[s] the social, political, psychological and semiotic meanings” of fashion (151). When iconic motifs are utilised, the Western gaze risks falling back on an essentialising reification of identity. To this extent, for non-Chinese audiences Guo Pei’s works may serve not so much to problematise historical and contemporary feminine identities and inheritances as to project an essentialisation of Chinese femininity. The double-bind created through Guo Pei’s simultaneous appeal to and resistance of archetypical notions of Chinese identity and femininity complicates the semiotic currency of her work. Moreover, Guo Pei’s work highlights tensions concerning understandings of Chinese culture between those in China and the diaspora. In her process of accessing reference material, Guo Pei has necessarily been driven to travel internationally, due to her concerns about a lack of access to material artefacts within China. She has sought out remnants of her ancestral culture both in the Chinese diaspora and in material culture designed for export (Yoong; Brettkelly). This borrowing of Chinese design as depicted outside of China proper, alongside the use of Western influences and patronage in Guo’s work, has resulted in her work being dismissed by critics as “superficial … export ware, reimported” (Thurman). The insinuation that her work is derivative is tinged with denigration. Such critiques question not only the authenticity of the motifs and techniques utilised in Guo Pei’s designs, but also the legitimacy of the narratives of both feminine and Chinese identity communicated therein. Questions of cultural ‘authenticity’ serve to deny how culture, both tangible and intangible, is mutable over time and space. In his work on tourism, Taylor suggests that wherever “the production of authenticity is dependent on some act of (re)production, it is conventionally the past which is seen to hold the model of the original” (9). In this way, the legitimacy of semiotic communication in works that evoke a temporally distant past is often seen to be adjudicated through notions of fidelity to the past. This authenticity of the ‘traditional’ associates ‘tradition’ with ‘truth’ and ‘authenticity.’ It is itself a form of mythmaking. As Guo Pei’s work is at once quintessentially Chinese and, through its audiences and capitalist modes of circulation, fundamentally Western, it challenges notions of authenticity and legitimacy both within the fashion world and in broader social discourses. Speaking about similar processes in literary fiction, Colavincenzo notes that works that attempt to “take on the myth of historical discourse and practice … expose the ways in which this discourse is constructed and how it fails to meet the various claims it makes for itself” (143). Rather than reinforcing imagined ‘truths’, appeals to an historical imagination such as that deployed by Guo Pei reveal its contingency.
Conclusion
In Fashion in Altermodern China, Feng suggests that we can “understand the sartorial as situating a set of visible codes and structures of meaning” (1). More than a reductionistic process of sender/receiver communication, fashion is profoundly embedded with intersectional dialogues. It is not the precision of signifiers, but their instability, fluidity, and mutability that is revealing. Guo Pei’s work offers narratives at the junction of Chinese and foreign, original and derivative, mythical and historical, that have an unsettled nature.
This ineffable tension between construction and deconstruction draws in both fashion creators and audiences. Whether encountering fashion on the runway, in museum cabinets, or on magazine pages, all renditions rely on their audiences to engage with processes of imagination, fantasy, and memory as the first step of comprehending the semiotic languages of cloth.
References
Anderson, Benedict. Imagined Communities: Reflections on the Origin and Spread of Nationalism. Rev. ed. London: Verso, 2016.
Barnard, Malcolm. “Fashion as Communication Revisited.” Fashion Theory. Routledge, 2020. 247–258.
Barthes, Roland. Mythologies. London: J. Cape, 1972.
Braester, Yomi. “The Post-Maoist Politics of Memory.” A Companion to Modern Chinese Literature. Ed. Yingjin Zhang. London: John Wiley and Sons. 434–51.
Brettkelly, Pietra (dir.). Yellow Is Forbidden. Madman Entertainment, 2019.
Chen, Tina Mai. “Dressing for the Party: Clothing, Citizenship, and Gender-Formation in Mao’s China.” Fashion Theory 5.2 (2001): 143–71.
Colavincenzo, Marc. “Trading Fact for Magic—Mythologizing History in Postmodern Historical Fiction.” Trading Magic for Fact, Fact for Magic. Ed. Marc Colavincenzo. Brill, 2003. 85–106.
De Perthuis, Karen. “The Utopian ‘No Place’ of the Fashion Photograph.” Fashion, Performance and Performativity: The Complex Spaces of Fashion. Eds. Andrea Kollnitz and Marco Pecorari. London: Bloomsbury, 2022. 145–60.
Feng, Jie. Fashion in Altermodern China. Dress Cultures. Eds. Reina Lewis and Elizabeth Wilson. London: Bloomsbury Publishing, 2022.
Finnane, Antonia. Changing Clothes in China: Fashion, History, Nation. New York: Columbia UP, 2008.
Fu, Courtney R. “Guo Pei: Chinese Art and Couture.” Fashion Theory 25.1 (2021): 127–140.
Hall, Stuart. “Encoding – Decoding.” Crime and Media. Ed. Chris Greer. London: Routledge, 2019.
Hendrik, Joris. “The History of Haute Couture in Numbers.” Vogue (France), 2021.
Kaiser, Susan B., and Denise N. Green. Fashion and Cultural Studies. London: Bloomsbury, 2021.
Lurie, Alison. The Language of Clothes. London: Bloomsbury, 1992.
Taylor, John P. “Authenticity and Sincerity in Tourism.” Annals of Tourism Research 28.1 (2001): 7–26.
Thurman, Judith. “The Empire’s New Clothes – China’s Rich Have Their First Homegrown Haute Couturier.” The New Yorker, 2016.
Yoong, Jackie. “Guo Pei: Chinese Art and Couture.” Singapore: Asian Civilisations Museum, 2019.
Zhang, Weiwei. “Politicizing Fashion: Inconspicuous Consumption and Anti-Intellectualism during the Cultural Revolution in China.” Journal of Consumer Culture 21.4 (2021): 950–966.
APA, Harvard, Vancouver, ISO and other citation styles
44

Drummond, Rozalind, Jondi Keane, and Patrick West. "Zones of Practice: Embodiment and Creative Arts Research". M/C Journal 15, no. 4 (14 Aug. 2012). http://dx.doi.org/10.5204/mcj.528.

Full source text
Abstract:
Introduction
This article presents the trans-disciplinary encounters with and perspectives on embodiment of three creative-arts practitioners within the Deakin University research project Flows & Catchments. The project explores how creative arts participate in community and the possibility of well-being. We discuss our preparations for creative work exhibited at the 2012 Lake Bolac Eel Festival in regional Western Victoria, Australia. This festival provided a fertile time-place-space context through which to meet with one regional community and engage with scales of geological and historical time (volcanoes, water flows, first contact), human and animal roots and routes (settlement, eel migrations, hunting and gathering), and cultural heritage (the eel stone traps used by indigenous people, settler stonewalling, indigenous language recovery). It also allowed us to learn from how a festival brings to the surface these scales of time, place and space. All these scales also require an embodied response—a physical relation to the land and to the people of a community—which involves how specific interests and ways of engaging coordinate experience and accentuate particular connections of material to cultural patterns of activity. The focus of our interest in “embody” and embodiment relates to the way in which the term constantly slides from metaphor (figural connection) to description (literal process). Our research question, therefore, addresses the specific interaction of these two tendencies. Rather than eliminate one in preference to the other, it is the interaction and movement from one to the other that an approach through creative-arts practices makes visible. The visibility of these tendencies and the mechanisms to which they are linked (media, organising principle or relational aesthetic) are highlighted by the particular time-place-space modalities that each of the creative arts deploys. When looking across different creative practices, the attachments and elisions become more fine-grained and clearer. A key aim of practice-led research is to observe, study and learn, but also to transform the production of meaning and its relationship to the community of users (Barrett and Bolt). The opportunity to work collaboratively with a community like the one at Lake Bolac provided an occasion to gauge our discerning and initiating skills within creative-arts research and to test the argument that the combination of our different approaches adds to community and individual well-being. Our approach is informed by Gilles Deleuze’s ethical proposition that the health of a community is directly influenced by the richness of the composition of its parts. With this in mind, each creative-arts practitioner will emphasize their encounter with an element of community.
Zones of Practice–Drawing Together (Jondi Keane)
Galleries are strange in-between places, both destinations and non-sites momentarily outside of history and place. The Lake Bolac Memorial Hall, however, retains its character of place, participating in the history of memorial halls through events such as the Eel Festival. The drawing project “Stone Soup” emphasizes the idea of encounter (O’Sullivan), particularly the interactions of sensibilities shaped by a land, a history and an orientation that comprise an affective field.
The artist’s brief in this situation—the encounter as the rupture of habitual modes of being (O’Sullivan 1)—provides a platform of relations to be filled with embodied experience that connects the interests, actions and observations produced outside the gallery to the amplified and dilated experience presented within the gallery. My work suggests that person-to-person in-situ encounters intensify the movement across embodied ways of knowing.
“Stone Soup”. Photograph by Daniel Armstrong.
Arts practice and practice-led research make available the spectrum of embodied engagements that are mixed to varying degrees with the conceptual positioning of material, both social and cultural. The exhibition and workshop I engaged with at the Eel Festival focused on three levels of attention: memory (highly personal), affection (intra-personal) and exchange (communal, non-individual). Attention, the cognitive activity of directing and guiding perception, observation and interpretation, is the thread that binds body to environment, body to history, and body to the constructs of person, family and community. Jean-Jacques Lecercle observes that, for Deleuze, “not only is the philosopher in possession of a specific techne, essential to the well-being of the community, a techne the practice of which demands the use of specialized tools, but he makes his own tools: a system of concepts is a box of tools” (Lecercle 100). This notion is further enhanced when informed by enactive theories of cognition in which “bodily practices including gesture are part of the activity in which concepts are formed” (Hutchins 429). Creative practices highlight the role of the body in the delicate interaction between a conceptually shaped gallery “space” and the communally constructed meeting “place.” My part of the exhibition consisted of a series of drawings/diagrams characterized under the umbrella of “making stone soup.” The notion of making stone soup is taken from folk tales about travelers in search of food who invent the idea of a magical stone soup to induce cooperation by asking local residents to garnish the “magical” stone soup with local produce. Other forms of the folk tale from around the world include nail soup, button soup and axe soup. Participants were able to choose from three different types of soup (communal drawing) that they would like to help produce. When a drawing was completed another one could be started. The mix of ideas and images constituted the soup. Three types of soup were on offer and required assistance to make:
Stone soup – communal drawing of what people like to eat, particularly earth-grown produce; what they would bring to a community event and how they associate these foods with the local identity.
Axe soup – communal drawing of places and spaces important to the participants because of connection to the land, to events and/or people. These might include floor plans, scenes of rooms or views, or memories of places that mix with the felt importance of spaces.
Heirloom soup – communal drawing of important objects associated with particular persons.
The drawings were given to the festival organizer to exhibit at the following year’s festival.
“Story Telling”. Photograph by Daniel Armstrong.
Drawing in: Like taking a breath, the act of drawing and putting one’s thought and affections into words or pictures is focused through the sensation of the drawing materials, the size of the paper, and the way one orients oneself to the paper and the activity.
These pre-drawing dispositions set up the way a conversation might occur and what the tenor of that exchange may bring. By asking participants to focus on three types of attachments or attentions and contributing to a collective drawing, the onus on art skills or poignancy is diminished, and the feeling of turning inward to access feeling and memory turns outward towards inscription and cooperation.
Drawing out: Like exhaling around vowels and consonants, the movement of the hand with brush and ink or pen and ink across a piece of paper follows our patterns of engagement, the embodied experience consistent with all our other daily activities. We each have a way of orchestrating the sequence of movements that constitute an image-story. The maker of stone soup must provide a new encounter, a platform for cooperation. I found that drawing alongside the participants, talking to them, inscribing and witnessing their stories in this way, heightened the collective activity and produced a new affective field of common experience. In this instance the stone soup became the medium for an emergent composition of relations.
Zones of Practice–Embodying Photographic Space (Rozalind Drummond)
Photography inevitably entails a certain characterization of reality. From being “out there” the world comes to be “inside” photographs—a visual sliver, a grab, and an upload, a perpetual tumble cycle of extruded images existing everywhere yet nowhere. While the outside, the “out there”, is brought within the frame of the photograph, I am interested rather in looking, through the viewfinder, to spaces that work the other way, which suggest the potential to locate a “non-space”—where the inside suggests an outside or empty space. Thus, the photograph becomes disembodied to reveal space. I consider embodiment as the trace of other embodiments that frame the subject. Marc Augé’s conception of “non-places” seems apt here. He writes about non-places as those that are lived or passed through on the way to some place else, an accumulation of spaces that can be understood and named (94). These are spaces that can be defined in everyday terms as places with which we are familiar, places in which the real erupts: a borderline separating the outside from the inside, temporary spaces that can exist for the camera. The viewer may well peer in and look for everything that appears to have been left out. Thus, the photograph becomes a recollection of what Roland Barthes calls “a disruption in the topography”—we imagine a “beyond” that evokes a sense of melancholy or of irrevocably sliding toward it (238). How then could the individual embody such a space? The groups of photographs of Lake Bolac are spread out on a table. I play some music awhile, Glenn Gould, whose performing embodies what, to me, represents such humanity. Hear him breathing? It is Prelude and Fugue No. 16 in G Minor by Bach, on vinyl; music becomes a tangible and physical presence. When we close our eyes, our ears determine a sound’s location in a room; we map out a space, by listening, and can create a measurable dimension to sound. Walking about the territory of a living room, in suburban Melbourne, I consider too a small but vital clue: that while scrutinizing these details of a photographic image on paper, simultaneously I am returning to a small town in the Western District of Victoria.
In the fluid act of looking at images in a house in Melbourne, I am now also walking down a road to Lake Bolac and can hear the incidental sounds of the environment—birdcalls and human voices—elements that inhabit and embody space: a borderline, alongside the photographs. What is imprinted in actual time, what is fundamental, is that the space of a photograph is actually devoid of sound and that I am still standing in a living room in Melbourne. In Against Architecture, Denis Hollier states of Bataille, “he wrote of the psychological power of space as a fluid, boundary effacing, always displaced and displacing medium. The non-spaces of cities and towns are locations where it is possible to be lost in a collective space, a progression of thoroughfares that are transitional, delivering the individual from one point and place to another—stairwells, laneways and roadsides—a constellation of streets…” (Hollier 79). Though photographs are sound-less, sound gives access to the outside of the image.
“Untitled”. Photograph by Rozalind Drummond from “Stay with me here.” 2012 Type C Digital Print.
Is there an outline of an image here? The enlargement of a snapshot of a photograph does not simply render what in any case was visible, though unclear. What is the viewer to look for in this photograph? Upon closer inspection a young woman stands to the right within the frame—she wears a school uniform; the pattern of the garment can be seen and read distinctly. In the detail it is finely striped, with a dark hue of blue, on a paler background, and the wearer’s body is imprinted upon the clothing, which receives the body’s details and impressions. The dress has a fold or pleat at the back; the distinct lines and patterns are reminiscent of a map, or an incidental grid. Here, the leitmotif of worn clothing is a poetic one. The young woman wears her hair piled, vertiginous, in a loosely constructed yet considered fashion; she stands assured, looking away and looking forward, within the compositional frame. The camera offers a momentary pause. This is our view. Our eye is directed to look further away past the figure, and the map of her clothing, to a long hallway in the school, before drifting to the left and right of the frame, where the outside world of Lake Bolac is clear and visible through the interior space of the hallway—the natural environment of daylight, luminescent and vivid. The time frame is late summer, the light reflecting and reverberating through glass doors, and gleaming painted surfaces, in a continuous rectangular pattern of grid lines. In the near distance, the viewer can see an open door, a pictorial breathing space, beyond the spatial line and coolness of the photograph, beyond the frame of the photograph and our knowing. The photograph becomes a signpost. What is outside, beyond the school corridors, recalled through the medium of photography, are other scenes, yet to be constructed from the spaces, streets and roads of Lake Bolac.
Zones of Practice–Time as the “Skin” of Writing, Embodiment and Place (Patrick West)
There is no writing without a body to write. Yet sometimes it feels that my creative writing, resisting its necessary embodiment, has by some trick of metaphor retreated into what Jondi Keane refers to as a purely conceptual mode of thought. This slippage between figural connection and literal process alerted me, in the process of my attempt to foster place-based well-being at Lake Bolac, to the importance of time to writerly embodiment.
My contribution to the Lake Bolac Eel Festival art exhibition was a written text, “Stay with me here”, conceived as my response to the themes of Rozalind Drummond’s photographs. To prepare this joint production, we mixed with staff and students at the Lake Bolac Secondary College. But this mode of embodiment made me feel curiously dis-embodied as a place-based writer. My embodiment was apparently superficial, only skin deep. Still this experience started me thinking about how the skin is actually thickly embodied as both body and where the body encounters, not only other bodies, but place itself—conceivably across many times. Skin is also the embodiment of writing to the degree that writing suggests an uncertain and queered form of embodiment. Skin, where the body reaches its limit, expires, touches other bodies or not, is inevitably implicated with writing as a fragile and always provisional, indexical embodiment. Nothing can be more easily either here or somewhere else than writing. Writing is an exhibition or gallery of anywhere, like skin in that both are un-placed in place. The one-pager “Stay with me here” explores how the instantaneous time and present-ness of Drummond’s photographs relate to the profusion of times and relations to other places immanent in Lake Bolac’s landscape and community (as evidenced, for example, in the image of a prep student yawning at the end of a long day in the midst of an ancient volcanic landscape, dreaming, perhaps, of somewhere else). To get to such issues of time and relationality of place, however, involves detouring via the notion of skin as suggested to me by my initial sense of dis-embodiment in Lake Bolac. “Stay with me here” works with an idea of skin as answer to the implied question, Where is here? It creates the (symbolic) embodiment of place precisely as a matter of skin, making skin-like writing an issue of transitory topography. The only permanent “here” is the skin. Emphasizing something valid for all writing, “here” (grammatically a context-dependent deictic) is the skin, where embodiment is defined by the constant possibility of re-embodiment, somewhere else, some time else. Reminding us that it is eminently possible to be elsewhere (from this place, from here), skin also suggests that you cannot be in two places at the one time (at least, not with the same embodiment). My skin is a sign that, because my embodiment in any particular place (any “here”) is only ever temporary, it is time that necessarily sustains my embodiment in any place whatsoever into the future. According to Henri Bergson, time must be creative, as the future hasn’t happened yet! “Time is invention or it is nothing at all” (341). The future of place, as much as of writing and of embodiment itself, is thus creatively sheathed in time as if within a skin. On Bergson’s view, time might be said to be least and greatest embodiment, for it is (dis-embodied) time that enables all future and currently un-created modes of embodiment. All of these time-inspired modes will involve a relationship to place (time can only “happen” in some version of place). And all of them will involve writing too, because time is the ultimate (dis-)embodiment of writing. As writing is like a skin, a minimal embodiment shared actually or potentially with more than one body, so time is the very possibility of writing (embodiment) into the future. 
“Stay with me here” explores how place is always already embodied in a relationship to other places, through the skin, and to the future of (a) place through the creativity of time as the skin of embodiment. By enriching descriptive and metaphoric practices of time, instability of place and awarenesses of the (dis-)embodied nature of writing—as a practice of skin—my text is useful to well-being as an analogue to the lived experience, in time and place, of the people of Lake Bolac. Theoretically, it weaves Bergson’s philosophy of time (time richly composed) into the fabric of Deleuze’s proposition that the health of a community is linked to the richness of the composition of its parts. Creatively, it celebrates the identity that the notion of “here” might enable, especially when read alongside and in dialogue with Drummond’s photographs in exhibition. Here is an abridged text of “Stay with me here”:
“Stay with me here”
There is salt in these lakes, anciently—rectilinear lakes never to be without ripple or stir. Pooling waters the islands of otherwise oceans, which people make out from hereabouts, make for, dream of. Stay with me here. Trusting to lessons delivered at the shore of a lake moves one closer to a deepness of instruction, where the water also learns. From our not being where we are, there. Stay with me here. What is perfection to water if not water? A time when photographs were born out of its swill and slosh. The image swimming knowingly to the surface—its first breaths of the perceiving air, its glimpsing itself once. The portraits of ourselves we do not dare. Such magical chemical reactions, as in, I react badly to you. Such salts! Stay with me here, elsewhere. As if one had simply washed up by chance, onto this desert island or any other place of sand and water trickling. Daring to imagine we’ll be there together. This is what I mean by… stay with me here. Notice these things—how music sounds different as one walks away; the emotional gymnastics with which you plan to impress; the skin of the eye that watches over you. Stay with me here—in your spectacular, careless brilliance. The edge of whatever it is one wants to say. The moment never to be photographed.
Conclusion
It is not for the artists to presume that they can empower a community. As Tamsin Lorraine notes, community is not a single person’s empowerment but “the empowerment of many assemblages of which one is part” (128). All communities, regional communities on the scale of Lake Bolac or communities of interest, are held in place by enthusiasm and common histories. We have focused on the embodiment of these common histories, which vary in an infinite number of degrees from the most literal to the most figurative, pulling from the filigree of experiences a web of interpersonal connections. Oscillating between metaphor and description, embodiment as variously presented in this article helps promote community and, by extension, individual well-being. The drawing out of sensations into forms that produce new experiences—like the drawing of breath, the drawing of a hot bath, or the drawing out of a story—enhances the permeability of boundaries opened to what touches upon them. It is not just that we can embody our values, but that we are able to craft, manifest, enact, sense and evoke the connections that take shape as our richly composed world, in which, as Deleuze notes, “it is no longer a matter of utilizations or captures, but of sociabilities and communities” (126).
References
Augé, Marc. Non-Places: An Introduction to an Anthropology of Supermodernity. London: Verso, 1995.
Barrett, Estelle, and Barbara Bolt, eds. Practice as Research: Approaches to Creative Arts Enquiry. London: I. B. Tauris, 2007.
Barthes, Roland. The Responsibility of Forms. New York: Hill and Wang, 1985.
Bergson, Henri. Creative Evolution. Mineola, New York: Dover Publications, 1998.
Deleuze, Gilles. Spinoza: Practical Philosophy. San Francisco: City Lights Books, 1988.
Hollier, Denis. Against Architecture: The Writings of Georges Bataille. Cambridge, MA: MIT Press, 1989.
Hutchins, Edwin. “Enaction, Imagination and Insight.” Enaction: Towards a New Paradigm for Cognitive Science. Eds. J. Stewart, O. Gapenne, and E.A. Di Paolo. Cambridge, MA: MIT Press, 2010. 425–450.
Lecercle, Jean-Jacques. Deleuze and Language. New York: Palgrave Macmillan, 2002.
Lorraine, Tamsin. Deleuze and Guattari’s Immanent Ethics: Theory, Subjectivity and Duration. Albany: State University of New York at Albany, 2011.
O’Sullivan, Simon. Art Encounters: Deleuze and Guattari—Thought beyond Representation. London: Palgrave Macmillan, 2006.
APA, Harvard, Vancouver, ISO and other citation styles
