Academic literature on the topic 'Memory optimisation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Memory optimisation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Memory optimisation"

1

Weinhardt, M., and W. Luk. "Memory access optimisation for reconfigurable systems." IEE Proceedings - Computers and Digital Techniques 148, no. 3 (May 1, 2001): 105–12. http://dx.doi.org/10.1049/ip-cdt:20010514.

2

Kulhavý, Rudolf, and Petya Ivanova. "Memory-based prediction in control and optimisation." IFAC Proceedings Volumes 32, no. 2 (July 1999): 4058–63. http://dx.doi.org/10.1016/s1474-6670(17)56692-6.

3

Sayadi, Fatma Elzahra, Marwa Chouchene, Haithem Bahri, Randa Khemiri, and Mohamed Atri. "CUDA memory optimisation strategies for motion estimation." IET Computers & Digital Techniques 13, no. 1 (September 6, 2018): 20–27. http://dx.doi.org/10.1049/iet-cdt.2017.0149.

4

Zhu, Zongwei, Xi Li, Chao Wang, and Xuehai Zhou. "Memory power optimisation on low-bit multi-access cross memory address mapping schema." International Journal of Embedded Systems 6, no. 2/3 (2014): 240. http://dx.doi.org/10.1504/ijes.2014.063822.

5

Martin, Jose L. Risco, Oscar Garnica, Juan Lanchares, J. Ignacio Hidalgo, and David Atienza. "Particle swarm optimisation of memory usage in embedded systems." International Journal of High Performance Systems Architecture 1, no. 4 (2008): 209. http://dx.doi.org/10.1504/ijhpsa.2008.024205.

6

Bossard, Antoine. "Memory Optimisation on AVR Microcontrollers for IoT Devices’ Minimalistic Displays." Chips 1, no. 1 (April 21, 2022): 2–13. http://dx.doi.org/10.3390/chips1010002.

Abstract:
The minimalistic hardware of most Internet of things (IoT) devices and sensors, especially those based on microcontrollers (MCU), imposes severe limitations on the memory capacity and interfacing capabilities of the device. Nevertheless, many applications prescribe not only textual but also graphical display features as output interface. Due to the aforementioned limitations, the storage of graphical data is however highly problematic and existing solutions have even resorted to requiring external storage (e.g., a microSD card) for that purpose. In this paper, we present, evaluate and discuss two solutions that enable loading fullscreen, optimal 18-bit colour image data directly from the MCU, that is, without having to rely on additional hardware. Importantly, these solutions retain a very low footprint to suit the microcontroller architecture; the AVR architecture has been selected given its popularity. The obtained results show the feasibility and practicability of the proposal: in the worst case, 21 Kbytes of memory are required, in other words approximately 33% of the flash memory of a 32-Kbyte MCU remain available.
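The storage approach described in the abstract above can be pictured with a toy sketch. The following is purely illustrative (made-up pixel data; the paper targets AVR firmware and its own encoding, not this code): run-length encoding lets a mostly uniform display image fit in a few bytes of flash.

```python
def rle_encode(pixels):
    """Encode a flat pixel list as (count, value) pairs, count capped at 255."""
    encoded = []
    for p in pixels:
        if encoded and encoded[-1][1] == p and encoded[-1][0] < 255:
            encoded[-1] = (encoded[-1][0] + 1, p)
        else:
            encoded.append((1, p))
    return encoded

def rle_decode(encoded):
    out = []
    for count, value in encoded:
        out.extend([value] * count)
    return out

# A mostly uniform splash screen compresses to a handful of pairs.
image = [0x3F] * 500 + [0x00] * 24 + [0x3F] * 500
packed = rle_encode(image)
assert rle_decode(packed) == image
assert len(packed) == 5   # (255,63) (245,63) (24,0) (255,63) (245,63)
```

Two bytes per pair here replace 1024 raw pixel bytes, which is the kind of trade-off that makes full-screen images viable on a 32-Kbyte MCU.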
7

Tirelli, D., and S. Mascelloni. "Characterisation and optimisation of shape memory alloys for seismic applications." Le Journal de Physique IV 10, PR9 (September 2000): Pr9-665–Pr9-670. http://dx.doi.org/10.1051/jp4:20009111.

8

Cao, Zhengnan, Xiaoqing Han, William Lyons, and Fergal O'Rourke. "Energy management optimisation using a combined Long Short-Term Memory recurrent neural network – Particle Swarm Optimisation model." Journal of Cleaner Production 326 (December 2021): 129246. http://dx.doi.org/10.1016/j.jclepro.2021.129246.

9

Ding, Weijie, Jin Huang, Guanyu Shang, Xuexuan Wang, Baoqiang Li, Yunfei Li, and Hourong Liu. "Short-Term Trajectory Prediction Based on Hyperparametric Optimisation and a Dual Attention Mechanism." Aerospace 9, no. 8 (August 20, 2022): 464. http://dx.doi.org/10.3390/aerospace9080464.

Abstract:
Highly accurate trajectory prediction models can achieve route optimisation and save airspace resources, which is a crucial technology and research focus for the new generation of intelligent air traffic control. Aiming at the problems of inadequate extraction of trajectory features and difficulty in overcoming the short-term memory of time series in existing trajectory prediction, a trajectory prediction model based on a convolutional neural network-bidirectional long short-term memory (CNN-BiLSTM) network combined with dual attention and genetic algorithm (GA) optimisation is proposed. First, to autonomously mine the data association between input features and trajectory features as well as highlight the influence of important features, an attention mechanism was added to a conventional CNN architecture to develop a feature attention module. An attention mechanism was introduced at the output of the BiLSTM network to form a temporal attention module to enhance the influence of important historical information, and GA was used to optimise the hyperparameters of the model to achieve the best performance. Finally, a multifaceted comparison with other typical time-series prediction models based on real flight data verifies that the prediction model based on hyperparameter optimisation and a dual attention mechanism has significant advantages in terms of prediction accuracy and applicability.
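The feature-attention idea summarised in this abstract can be sketched in a few lines. This is a hedged illustration with made-up numbers, not the paper's CNN-BiLSTM model: an attention module turns learned relevance scores into softmax weights and re-weights the input features accordingly.

```python
import math

def softmax(scores):
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(features, scores):
    """Weight each feature by its softmax attention score."""
    weights = softmax(scores)
    weighted = [w * f for w, f in zip(weights, features)]
    return weighted, weights

features = [0.9, 0.1, 0.5]   # hypothetical normalised trajectory features
scores = [2.0, 0.1, 0.5]     # hypothetical learned relevance scores
weighted, weights = attend(features, scores)
assert abs(sum(weights) - 1.0) < 1e-9   # softmax weights form a distribution
assert weights[0] == max(weights)       # the highest-scored feature dominates
```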
10

Thiruvady, Dhananjay, Asef Nazari, and Aldeida Aleti. "Multi-objective Beam-ACO for Maximising Reliability and Minimising Communication Overhead in the Component Deployment Problem." Algorithms 13, no. 10 (October 3, 2020): 252. http://dx.doi.org/10.3390/a13100252.

Abstract:
Automated deployment of software components into hardware resources is a highly constrained optimisation problem. Hardware memory limits which components can be deployed into the particular hardware unit. Interacting software components have to be deployed either into the same hardware unit, or connected units. Safety concerns could restrict the deployment of two software components into the same unit. All these constraints hinder the search for high quality solutions that optimise quality attributes, such as reliability and communication overhead. When the optimisation problem is multi-objective, as it is the case when considering reliability and communication overhead, existing methods often fail to produce feasible results. Moreover, this problem can be modelled by bipartite graphs with complicating constraints, but known methods do not scale well under the additional restrictions. In this paper, we develop a novel multi-objective Beam search and ant colony optimisation (Beam-ACO) hybrid method, which uses problem specific bounds derived from communication, co-localisation and memory constraints, to guide the search towards feasibility. We conduct an experimental evaluation on a range of component deployment problem instances with varying levels of difficulty. We find that Beam-ACO guided by the co-localisation constraint is most effective in finding high quality feasible solutions.
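A toy version of the feasibility constraints discussed in this abstract (memory capacity plus co-localisation of interacting components) might look as follows; the data and function names are invented for illustration and are not from the paper.

```python
def feasible(assignment, unit_memory, component_size, interacting, connected_units):
    """Check a component-to-unit assignment against memory and interaction limits."""
    # Memory: components mapped to a unit must fit within its capacity.
    used = {}
    for comp, unit in assignment.items():
        used[unit] = used.get(unit, 0) + component_size[comp]
    if any(mem > unit_memory[u] for u, mem in used.items()):
        return False
    # Interaction: interacting components must share a unit or use connected units.
    for a, b in interacting:
        ua, ub = assignment[a], assignment[b]
        if ua != ub and (ua, ub) not in connected_units and (ub, ua) not in connected_units:
            return False
    return True

unit_memory = {"u1": 64, "u2": 32}
component_size = {"nav": 40, "log": 20, "ui": 16}
interactions = [("nav", "ui")]
links = {("u1", "u2")}

assert feasible({"nav": "u1", "log": "u1", "ui": "u2"},
                unit_memory, component_size, interactions, links)
assert not feasible({"nav": "u2", "log": "u1", "ui": "u1"},   # nav overflows u2
                    unit_memory, component_size, interactions, links)
```

Beam-ACO uses bounds derived from exactly these constraints to steer the search towards assignments that pass such checks.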

Dissertations / Theses on the topic "Memory optimisation"

1

Forrest, B. M. "Memory and optimisation in neural network models." Thesis, University of Edinburgh, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.384164.

Abstract:
A numerical study of two classes of neural network models is presented. The performance of Ising spin neural networks as content-addressable memories for the storage of bit patterns is analysed. By studying systems of increasing sizes, behaviour consistent with finite-size scaling, characteristic of a first-order phase transition, is shown to be exhibited by the basins of attraction of the stored patterns in the Hopfield model. A local iterative learning algorithm is then developed for these models which is shown to achieve perfect storage of nominated patterns with near-optimal content-addressability. Similar scaling behaviour of the associated basins of attraction is observed. For both this learning algorithm and the Hopfield model, by extrapolating to the thermodynamic limit, estimates are obtained for the critical minimum overlap which an input pattern must have with a stored pattern in order to successfully retrieve it. The role of a neural network as a tool for optimising cost functions of binary valued variables is also studied. The particular application considered is that of restoring binary images which have become corrupted by noise. Image restorations are achieved by representing the array of pixel intensities as a network of analogue neurons. The performance of the network is shown to compare favourably with two other deterministic methods, a gradient descent on the same cost function and a majority-rule scheme, both in terms of restoring images and in terms of minimising the cost function. All of the computationally intensive simulations exploit the inherent parallelism in the models: both SIMD (the ICL DAP) and MIMD (the Meiko Computing Surface) machines are used.
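The Hopfield-style content-addressable memory studied in this thesis can be sketched minimally. The following toy (one stored pattern, Hebbian weights, deterministic updates) only illustrates the model class, not the thesis's learning algorithm: a corrupted input falls into the basin of attraction of the stored pattern and is recovered.

```python
def train(patterns):
    """Hebbian weights for +/-1 patterns, zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=5):
    """Deterministic sequential updates until (hopefully) a fixed point."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

stored = [1, -1, 1, -1, 1, -1, 1, -1]
w = train([stored])
noisy = list(stored)
noisy[0] = -noisy[0]            # corrupt one bit
assert recall(w, noisy) == stored   # content-addressable retrieval
```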
2

Fargus, Alexander. "Optimisation of correlation matrix memory prognostic and diagnostic systems." Thesis, University of York, 2015. http://etheses.whiterose.ac.uk/9032/.

Abstract:
Condition monitoring systems for prognostics and diagnostics can enable large and complex systems to be operated more safely, at a lower cost and have a longer lifetime than is possible without them. AURA Alert is a condition monitoring system that uses a fast approximate k-Nearest Neighbour (kNN) search of a timeseries database containing known system states to identify anomalous system behaviour. This search algorithm, AURA kNN, uses a type of binary associative neural network called a Correlation Matrix Memory (CMM) to facilitate the search of the historical database. AURA kNN is evaluated with respect to the state of the art Locality Sensitive Hashing (LSH) approximate kNN algorithm and shown to be orders of magnitude slower to search large historical databases. As a result, it is determined that the standard AURA kNN scales poorly for large historical databases. A novel method for generating CMM input tokens called Weighted Overlap Code Construction is presented and combined with Baum Coded output tokens to reduce the query time of the CMM. These modifications are shown to improve the ability of AURA kNN to scale with large databases, but this comes at the cost of accuracy. In the best case an AURA kNN search is 3.1 times faster than LSH with an accuracy penalty of 4% on databases with 1000 features and fewer than 100,000 samples. However the modified AURA kNN is still slower than LSH with databases with fewer features or more samples. These results suggest that it may be possible for AURA kNN to be improved so that it is competitive with the state of the art LSH algorithm.
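For context, the LSH baseline the thesis compares against can be sketched as random-hyperplane hashing: similar vectors agree on more signature bits, so candidate neighbours can be found without exhaustive search. A minimal illustration (made-up vectors, not the thesis's code):

```python
import random

def make_signature(dim, n_planes, seed=42):
    """Return a function mapping a vector to an n_planes-bit hash signature."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]
    def signature(vec):
        # One bit per hyperplane: which side of the plane the vector falls on.
        return tuple(1 if sum(p * v for p, v in zip(plane, vec)) >= 0 else 0
                     for plane in planes)
    return signature

sig = make_signature(dim=4, n_planes=8)
a = [1.0, 0.9, 0.0, 0.1]
b = [1.0, 0.9, 0.05, 0.1]    # a near neighbour of a
c = [-x for x in a]          # the opposite of a
same_ab = sum(x == y for x, y in zip(sig(a), sig(b)))
same_ac = sum(x == y for x, y in zip(sig(a), sig(c)))
assert same_ab > same_ac     # closer vectors agree on more signature bits
```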
3

Uzor, Chigozirim. "Compact dynamic optimisation algorithm." Thesis, De Montfort University, 2015. http://hdl.handle.net/2086/13056.

Abstract:
In recent years, the field of evolutionary dynamic optimisation has seen a significant increase in scientific developments and contributions. This is a result of its relevance to solving academic and real-world problems. Several techniques, such as hyper-mutation, hyper-learning, hyper-selection, change detection and many more, have been developed specifically for solving dynamic optimisation problems. However, the complex structure of algorithms employing these techniques makes them unsuitable for real-world, real-time dynamic optimisation problems on embedded systems with limited memory. The work presented in this thesis focuses on a compact approach as an alternative to population-based optimisation algorithms, suitable for solving real-time dynamic optimisation problems. Specifically, a novel compact dynamic optimisation algorithm suitable for embedded systems with limited memory is presented. Three novel dynamic approaches that augment and enhance the evolving properties of the compact genetic algorithm in dynamic environments are introduced: 1) a change detection scheme that measures the degree of dynamic change; 2) mutation schemes whereby the mutation rate is directly linked to the detected degree of change; and 3) a change trend scheme that monitors the change patterns exhibited by the system. The novel compact dynamic optimisation algorithm was applied to two differing dynamic optimisation problems. This work evaluates the algorithm in the context of tuning a controller for a physical target system in a dynamic environment and of solving a dynamic optimisation problem using an artificial dynamic environment generator. The algorithm was compared to several existing dynamic optimisation techniques. Through a series of experiments, it was shown that maintaining diversity at a population level is more efficient than diversity at an individual level. Among the five variants of the novel compact dynamic optimisation algorithm, the third variant showed the best performance in terms of response to dynamic changes and solution quality. Furthermore, it was demonstrated that information transfer based on dynamic change patterns can effectively minimise the exploration/exploitation dilemma in a dynamic environment.
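The compact genetic algorithm that this thesis builds on replaces an explicit population with a probability vector, so memory use is O(genome length), which is what makes it attractive for memory-limited embedded targets. A minimal sketch on the OneMax problem (parameters invented; this is not the thesis's dynamic variant, and we only assert near-optimality since the basic cGA can occasionally fix a bit wrongly through drift):

```python
import random

def compact_ga(fitness, n_bits, iterations=2000, pop_size=20, seed=1):
    rng = random.Random(seed)
    prob = [0.5] * n_bits                      # the "virtual population"
    for _ in range(iterations):
        # Sample two individuals from the probability vector.
        a = [1 if rng.random() < p else 0 for p in prob]
        b = [1 if rng.random() < p else 0 for p in prob]
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        # Shift probabilities toward the tournament winner where the two differ.
        for i in range(n_bits):
            if winner[i] != loser[i]:
                step = 1.0 / pop_size
                prob[i] += step if winner[i] else -step
                prob[i] = min(1.0, max(0.0, prob[i]))
    return [1 if p >= 0.5 else 0 for p in prob]

# OneMax: maximise the number of ones; typically converges near the optimum.
best = compact_ga(sum, n_bits=16)
assert sum(best) >= 12
```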
4

Beyler, Jean-Christophe. "Dynamic software memory access optimization: Dynamic low-cost reduction of memory latencies by binary analyses and transformations." Université Louis Pasteur (Strasbourg) (1971-2008), 2007. http://www.theses.fr/2007STR13171.

Abstract:
This thesis concerns the development of dynamic approaches for controlling the behaviour of the hardware/software pair at run time. More precisely, the work presented here has the main goal of minimising program execution times on mono- or multi-processor architectures by anticipating memory accesses through dynamic prefetching of useful data into cache memory, in a way that is entirely transparent to the user. The developed system consists of a dynamic analysis phase, in which memory access latencies are measured; a binary transformation phase, in which data-prefetching instructions are inserted into the binary code for accesses evaluated as worth optimising; a dynamic analysis phase that assesses the efficiency of the optimisations; and finally a cancellation phase for transformations evaluated as inefficient. Every phase applies individually to every memory access, and may apply several times if a memory access's behaviour varies during the execution of the target software. To our knowledge, this work constitutes the first proposal of an entirely software-based dynamic optimisation system that does not rely on interpreting the binary code.
5

Maalej, Kammoun Maroua. "Low-cost memory analyses for efficient compilers." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1167/document.

Abstract:
This thesis was motivated by the emergence of massively parallel processing and supercomputing, which tend to make computer programming extremely performance-driven. Speed, power consumption, and the efficiency of both software and hardware are nowadays the main concerns of the information systems community. Handling memory in a correct and efficient way is a step towards less complex and better-performing programs and architectures. This thesis falls into this context and contributes to the memory analysis and compilation fields in both theoretical and experimental aspects. Besides a deep study of the current state of the art of memory analyses and their limitations, our theoretical results consist in designing new algorithms to recover part of the imprecision that published techniques still show. Among the present limitations, we focus our research on pointer arithmetic, to disambiguate pointers within the same data structure. We develop our analyses in the abstract interpretation framework. The key idea behind this choice is correctness and scalability: two requisite criteria for analyses to be embedded into compiler construction. The first alias analysis we design is based on the range lattice of integer variables: given a pair of pointers defined from a common base pointer, they are disjoint if their offsets cannot have values that intersect at runtime. The second pointer analysis we develop is inspired by the Pentagon abstract domain: we conclude that two pointers do not alias whenever we are able to build a strict relation between them, valid at program points where the two variables are simultaneously alive. In a third algorithm, we combine the first and second analyses and enhance them with a coarse-grained but efficient analysis to deal with unrelated pointers. We implement these analyses on top of the LLVM compiler, and experimentally evaluate their performance based on two metrics: the number of disambiguated pairs of pointers compared to common analyses of the compiler, and the optimisations further enabled thanks to the extra precision they introduce.
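The first analysis described above has a very small core idea that can be sketched directly: two pointers derived from the same base cannot alias if their offset intervals are disjoint. An illustrative check (not the LLVM implementation):

```python
def may_alias(offset_range_p, offset_range_q):
    """Each range is (lo, hi), inclusive, in bytes from a common base pointer.

    Returns False only when the analysis can prove the offset intervals are
    disjoint; otherwise it must conservatively answer True ("may alias").
    """
    lo_p, hi_p = offset_range_p
    lo_q, hi_q = offset_range_q
    return not (hi_p < lo_q or hi_q < lo_p)

# p walks bytes 0..63 of a buffer, q walks bytes 64..127: provably no alias.
assert may_alias((0, 63), (64, 127)) is False
# Overlapping offset ranges: the analysis must report "may alias".
assert may_alias((0, 63), (32, 95)) is True
```

A compiler can use such a proof of non-aliasing to reorder or vectorise the two access streams safely.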
6

Munns, Joseph. "Optimisation and applications of a Raman quantum memory for temporal modes of light." Thesis, Imperial College London, 2018. http://hdl.handle.net/10044/1/63867.

Abstract:
Within any practical proposal for utilising quantum mechanics for information processing, whether for the efficient computation of problems intractable by classical methods or for secure communication, at some stage one must have the means to transfer quantum information between remote nodes of a network. For this, light is the obvious choice. To realise this vision, one requires the means to overcome the 'scaling problem' intrinsic to many photonic-based quantum technologies due to probabilistic operations. It has been identified that optical quantum memories, which facilitate the storage and retrieval of quantum states of light, are an enabling technology. Another requisite technology for ensuring the scalability of a quantum network is the means to interface dissimilar material nodes, which in practice means the translation of quantum light in bandwidth, frequency and temporal shape. The first part of this thesis presents experimental, theoretical and numerical investigations of noise reduction in the Raman memory protocol in thermal caesium vapour by means of a cavity. To do this, I develop a theoretical description of the cavity memory interaction, along with a model of the atom-cavity system to enable meeting the required resonance conditions. This is followed by a proof-of-concept experimental demonstration, showing suppression of noise in the retrieved state. To conclude this part, I investigate the optimisation of this system, provide a numerical framework for its design, and propose a route towards realising the Raman memory as a practical quantum memory. The second theme is an exploration of the practical application of the Raman memory as an interface for temporal modes of light. I perform a preliminary investigation, and develop characterisation tools, to experimentally verify the modal structure of the memory interaction. This work provides the basis for deploying the Raman memory as a temporal-mode selective device for GHz-bandwidth quantum states of light.
7

Alsaiari, Mabkhoot Abdullah. "High throughput optimisation of functional nanomaterials and composite structures for resistive switching memory." Thesis, University of Southampton, 2018. https://eprints.soton.ac.uk/422863/.

Abstract:
The semiconductor industry is investigating high-speed, low-power-consumption, high-density memory devices that can retain their information without a power supply. Resistive Random Access Memory (ReRAM) is one of the most attractive candidates as an alternative to conventional flash memory devices because of its simple metal-insulator-metal (MIM) structure. A compositional gradient of thin-film materials produced by the simultaneous combination of elements provides a powerful tool for the combinatorial synthesis of materials. It was applied here to control the composition, structure and morphology of materials in composite ReRAM devices. This allows the systematic high-throughput screening of the intrinsic properties of the materials, as well as the high-throughput optimisation of composite thin films that mimic memory device structures. The focus of this project is therefore to develop a novel capacitor for ReRAM applications. We present here details of the preparation technique and the screening methodologies of this approach by applying the synthesis to various phases of titania, for which there is an extensive literature, as a prelude to the screening of more complex systems. Inert Pt electrodes and active Cu electrodes were deposited on TiO2 as top electrodes using different mask sizes (50 micron and 250 micron). The bottom electrode, Si/SiO2/TiO2/Pt (SSTOP), was constant throughout this project. TiO2 was prepared using evaporative physical vapour deposition (PVD) with a variation of thickness between 10 nm and 300 nm on SSTOP. The synthetic conditions were chosen to produce oxygen-stoichiometric and sub-stoichiometric amorphous, anatase and rutile TiO2 materials. The oxides were fully characterised by X-Ray Diffraction (XRD), X-ray Photoelectron Spectroscopy (XPS), Raman Spectroscopy, Four-Point Probe (4pp) measurements and Atomic Force Microscopy (AFM). The electrical screening was carried out on capacitor-like structures produced using 250 micron diameter top electrodes deposited using a 14 x 14 array contact mask. Current-voltage (I-V) measurements were conducted employing a variety of current compliances (IC). The typical I-V switching of the unipolar mode (both states in one polarity) was achieved on all titania phases, whereas the bipolar mode (each state in a different polarity) was achieved only on the amorphous phase. The resistance differences between the High Resistance State (HRS) and the Low Resistance State (LRS) were clearly identified in each system. It was observed that, for all the devices investigated, a lower forming field was required on thicker active switching layers. Devices with copper electrodes, and composite devices with sub-stoichiometric titania adjacent to the stoichiometric titania, could be formed at lower voltages and electric fields. The results obtained here confirm the feasibility of the high-throughput approach to optimising functional nanomaterials and composite device structures for resistive switching memory applications.
8

Sahakyan, Marina. "Optimisation des mises à jours XML pour les systèmes main-memory: implémentation et expériences." PhD thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00641579.

9

Kaeslin, Alain E. "Performance Optimisation of Discrete-Event Simulation Software on Multi-Core Computers." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191132.

Abstract:
SIMLOX is a discrete-event simulation software developed by Systecon AB for analysing logistic support solution scenarios. To cope with ever larger problems, SIMLOX's simulation engine was recently enhanced with a parallel execution mechanism in order to take advantage of multi-core processors. However, this extension did not result in the desired reduction in runtime for all simulation scenarios even though the parallelisation strategy applied had promised linear speedup. Therefore, an in-depth analysis of the limiting scalability bottlenecks became necessary and has been carried out in this project. Through the use of a low-overhead profiler and microarchitecture analysis, the root causes were identified: atomic operations causing a high communication overhead, poor locality leading to translation lookaside buffer thrashing, and hot spots that consume significant amounts of CPU time. Subsequently, appropriate optimisations to overcome the limiting factors were implemented: eliminating the expensive operations, more efficient handling of heap memory through the use of a scalable memory allocator, and data structures that make better use of caches. Experimental evaluation using real world test cases demonstrated a speedup of at least 6.75x on an eight-core processor. Most cases even achieve a speedup of more than 7.2x. The various optimisations implemented further helped to lower run times for sequential execution by 1.5x or more. It can be concluded that achieving nearly linear speedup on a multi-core processor is possible in practice for discrete-event simulation.
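One of the optimisation families described above, removing shared atomic updates from the hot path, can be illustrated structurally in Python. Note the hedges: Python's GIL means this shows the pattern, not the speedup, and all names are invented; the thesis works on a C++ simulation engine.

```python
import threading

def count_events(n_threads=4, events_per_thread=10_000):
    """Sum per-thread counters instead of contending on one shared counter."""
    totals = [0] * n_threads           # one slot per thread: no shared hot state

    def worker(idx):
        local = 0
        for _ in range(events_per_thread):
            local += 1                 # thread-private update: no lock, no
                                       # cross-core cache-line ping-pong
        totals[idx] = local

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(totals)                 # single reduction outside the hot path

assert count_events() == 40_000
```

The same idea underlies scalable memory allocators: give each thread its own arena, and only synchronise at coarse boundaries.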
10

Laga, Arezki. "Optimisation des performance des logiciels de traitement de données sur les périphériques de stockage SSD." Thesis, Brest, 2018. http://www.theses.fr/2018BRES0087/document.

Abstract:
Today we are witnessing dramatic growth in data volumes, which puts pressure on storage infrastructures and data processing software such as DataBase Management Systems (DBMS). New technologies have emerged to face the data volume challenge. In this thesis we consider emerging secondary memory technologies, in particular flash-memory-based storage devices known as SSDs (Solid State Drives). SSD storage devices offer performance up to ten times higher than magnetic storage devices. However, these new storage devices exhibit a new performance model, which calls for optimising the I/O costs of data processing and management algorithms. In this thesis, we propose an I/O cost model on SSDs for data processing algorithms. This model mainly considers the data volume, the allocated memory space and the data distribution. We also propose a new external sorting algorithm, MONTRES, which is optimised to reduce the I/O cost when the volume of data to sort is several times the size of main memory. Finally, we propose a data prefetching mechanism, Lynx, which uses a machine learning technique to predict and anticipate future reads from secondary memory.

Book chapters on the topic "Memory optimisation"

1

Paviotti, Marco, Simon Cooksey, Anouk Paradis, Daniel Wright, Scott Owens, and Mark Batty. "Modular Relaxed Dependencies in Weak Memory Concurrency." In Programming Languages and Systems, 599–625. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44914-8_22.

Abstract:
We present a denotational semantics for weak memory concurrency that avoids thin-air reads, provides data-race free programs with sequentially consistent semantics (DRF-SC), and supports a compositional refinement relation for validating optimisations. Our semantics identifies false program dependencies that might be removed by compiler optimisation, and leaves in place just the dependencies necessary to rule out thin-air reads. We show that our dependency calculation can be used to rule out thin-air reads in any axiomatic concurrency model, in particular C++. We present a tool that automatically evaluates litmus tests, show that we can augment C++ to fix the thin-air problem, and we prove that our augmentation is compatible with the previously used compilation mappings over key processor architectures. We argue that our dependency calculation offers a practical route to fixing the longstanding problem of thin-air reads in the C++ specification.
2

Broderick, Ian, and Enda Howley. "Particle Swarm Optimisation with Enhanced Memory Particles." In Lecture Notes in Computer Science, 254–61. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09952-1_24.

3

Panda, Preeti Ranjan. "Power Optimisation Strategies Targeting the Memory Subsystem." In Designing Embedded Processors, 131–55. Dordrecht: Springer Netherlands, 2007. http://dx.doi.org/10.1007/978-1-4020-5869-1_6.

4

White, Leo, and Alan Mycroft. "Concise Analysis Using Implication Algebras for Task-Local Memory Optimisation." In Static Analysis, 433–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38856-9_23.

5

Catthoor, Francky, Sven Wuytack, Eddy De Greef, Florin Balasa, Lode Nachtergaele, and Arnout Vandecappelle. "Optimisation of Global Data Transfer and Storage Organisation for Decreased Power and Area in Custom Data-Dominated Real-Time Systems." In Custom Memory Management Methodology, 1–15. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4757-2849-1_1.

6

Bellizzi, Jennifer, Mark Vella, Christian Colombo, and Julio Hernandez-Castro. "Real-Time Triggering of Android Memory Dumps for Stealthy Attack Investigation." In Secure IT Systems, 20–36. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70852-8_2.

Abstract:
Attackers regularly target Android phones and come up with new ways to bypass detection mechanisms to achieve long-term stealth on a victim's phone. One way attackers do this is by leveraging critical benign app functionality to carry out specific attacks. In this paper, we present a novel generalised framework, JIT-MF (Just-in-time Memory Forensics), which aims to address the problem of timely collection of short-lived evidence in volatile memory to solve the stealthiest of Android attacks. The main components of this framework are i) Identification of critical data objects in memory linked with critical benign application steps that may be misused by an attacker; and ii) Careful selection of trigger points, which identify when memory dumps should be taken during benign app execution. The effectiveness and cost of trigger point selection, a cornerstone of this framework, are evaluated in a preliminary qualitative study using Telegram and Pushbullet as the victim apps targeted by stealthy malware. Our study identifies that JIT-MF is successful in dumping critical data objects on time, providing evidence that eludes all other forensic sources. Experimentation offers insight into identifying categories of trigger points that can strike a balance between the effort required for selection and the resulting effectiveness and storage costs. Several optimisation measures for the JIT-MF tools are presented, considering the typical resource constraints of Android devices.
7

Dodds, Mike, Mark Batty, and Alexey Gotsman. "Compositional Verification of Compiler Optimisations on Relaxed Memory." In Programming Languages and Systems, 1027–55. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-89884-1_36.

8

Czezowski, Adam, and Peter Strazdins. "Optimisations for the memory hierarchy of a Singular Value Decomposition algorithm implemented on the MIMD architecture." In High-Performance Computing and Networking, 215–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 1994. http://dx.doi.org/10.1007/3-540-57981-8_119.

9

Amine, Khalil. "Insights Into Simulated Annealing." In Advances in Computational Intelligence and Robotics, 121–39. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-2857-9.ch007.

Abstract:
Simulated annealing is a probabilistic local search method for global combinatorial optimisation problems allowing gradual convergence to a near-optimal solution. It consists of a sequence of moves from a current solution to a better one according to certain transition rules, while occasionally accepting some uphill solutions in order to guarantee diversity in the domain exploration and to avoid getting caught at local optima. The process is managed by a static or dynamic cooling schedule that controls the number of iterations. This meta-heuristic provides several advantages, including the ability to escape local optima and the use of only a small amount of short-term memory. A wide range of applications and variants have hitherto emerged as a consequence of its adaptability to many combinatorial as well as continuous optimisation cases, and also its guaranteed asymptotic convergence to the global optimum.
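The acceptance rule and cooling schedule described above can be sketched in a few lines of Python; this is a generic illustration (function names and the geometric cooling constant are illustrative choices, not from the chapter):

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.95,
                        iters=2000, seed=0):
    """Generic simulated annealing: always accept downhill moves, accept
    uphill moves with probability exp(-delta/T), under a static geometric
    cooling schedule. Only the current and best solutions are kept, which
    is the method's small short-term memory footprint."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbour(x, rng)
        fy = cost(y)
        delta = fy - fx
        # Metropolis acceptance criterion
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # temperature decreases each iteration
    return best, fbest

# Example: minimise f(x) = (x - 3)^2 over the reals
best, fbest = simulated_annealing(
    cost=lambda x: (x - 3.0) ** 2,
    neighbour=lambda x, rng: x + rng.uniform(-0.5, 0.5),
    x0=0.0,
)
```

Early on, the high temperature makes uphill acceptances likely (exploration); as T decays, the search degenerates into pure hill climbing (exploitation).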
10

Islekel, Ege Selin. "Nightmare Knowledges: Epistemologies of Disappearance." In Turkey's Necropolitical Laboratory, 253–72. Edinburgh University Press, 2019. http://dx.doi.org/10.3366/edinburgh/9781474450263.003.0012.

Abstract:
This chapter develops a conception of necropolitics as a power/knowledge assemblage by focusing on the games of truth and regimes of knowledge produced around death in the cases of mass graves and disappearance in Turkey. In particular, I am interested in the relations drawn between death, memory, and knowledge in necropolitical spaces, in spaces where life and the living are subsumed under the active production, regulation, and optimisation of death. The chapter consists of three parts: the first part analyses the relation between necropolitics and knowledge production, in order to establish necropolitics not only as a political technology, but also an epistemic one. The second section investigates the specific techniques of knowledge deployed in necropolitics, i.e., necro-epistemic methods, which target the temporal and logical coherence of memory in necropolitical spaces. The last section focuses on the practices of epistemic resistance, which work through mobilising perplexing realities in order to instigate counter-discourses. Overall, I argue that these counter-discourses, which I call ‘nightmare-knowledges,’ constitute necropolitical spaces as spaces of epistemic agency.

Conference papers on the topic "Memory optimisation"

1

Turkington, Kieron, George A. Constantinides, Peter Y. K. Cheung, and Konstantinos Masselos. "Co-optimisation of datapath and memory in outer loop pipelining." In 2008 International Conference on Field-Programmable Technology (FPT). IEEE, 2008. http://dx.doi.org/10.1109/fpt.2008.4762359.

2

Palkovic, Martin, Henk Corporaal, and Francky Catthoor. "Global memory optimisation for embedded systems allowed by code duplication." In the 2005 workshop. New York, New York, USA: ACM Press, 2005. http://dx.doi.org/10.1145/1140389.1140397.

3

Abalenkovs, Maksims. "Performance optimisation of stencil-based codes for shared memory architectures." In 2017 11th European Conference on Antennas and Propagation (EUCAP). IEEE, 2017. http://dx.doi.org/10.23919/eucap.2017.7928861.

4

Cheng, Chuan, and Christos-Savvas Bouganis. "Memory optimisation for hardware induction of axis-parallel decision tree." In 2014 International Conference on ReConFigurable Computing and FPGAs (ReConFig). IEEE, 2014. http://dx.doi.org/10.1109/reconfig.2014.7032538.

5

Vogel, Pirmin, Andrea Marongiu, and Luca Benini. "An Evaluation of Memory Sharing Performance for Heterogeneous Embedded SoCs with Many-Core Accelerators." In COSMIC '15: International Workshop on Code Optimisation for Multi and Many Cores. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2723772.2723775.

6

Sha, Jinze, Andrew Kadis, Fan Yang, and Timothy D. Wilkinson. "Limited-memory BFGS Optimisation of Phase-Only Computer-Generated Hologram for Fraunhofer Diffraction." In Digital Holography and Three-Dimensional Imaging. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/dh.2022.w3a.3.

Abstract:
We implement a novel limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) optimisation algorithm with cross entropy (CE) loss function, to produce phase-only computer-generated hologram (CGH) for holographic displays, with validation on a binary-phase modulation holographic projector.
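The cited work applies L-BFGS to hologram generation; as background, the two-loop recursion that gives L-BFGS its bounded memory footprint (only the last m curvature pairs are stored, never a full Hessian) can be sketched in pure Python. This is an illustrative sketch, not the authors' implementation:

```python
def lbfgs(f, grad, x0, m=5, iters=50, tol=1e-8):
    """Limited-memory BFGS: approximate the inverse-Hessian action on the
    gradient using only the last m (s, y) pairs, so memory stays O(m*n)."""
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    axpy = lambda a, x, y: [a * xi + yi for xi, yi in zip(x, y)]
    x, g = list(x0), grad(x0)
    history = []  # (s, y, rho) curvature pairs, newest last
    for _ in range(iters):
        if dot(g, g) ** 0.5 < tol:
            break
        # two-loop recursion: compute H_k @ g with O(m) vector operations
        q = list(g)
        alphas = []
        for s, y, rho in reversed(history):        # newest to oldest
            a = rho * dot(s, q)
            alphas.append(a)
            q = axpy(-a, y, q)
        if history:
            s, y, _ = history[-1]
            gamma = dot(s, y) / dot(y, y)          # initial Hessian scaling
            q = [gamma * qi for qi in q]
        for (s, y, rho), a in zip(history, reversed(alphas)):  # oldest first
            b = rho * dot(y, q)
            q = axpy(a - b, s, q)
        d = [-qi for qi in q]                      # search direction
        # backtracking line search on the Armijo condition
        t, fx, gd = 1.0, f(x), dot(g, d)
        while t > 1e-10 and f(axpy(t, d, x)) > fx + 1e-4 * t * gd:
            t *= 0.5
        x_new = axpy(t, d, x)
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        if dot(s, y) > 1e-12:  # curvature condition keeps H positive definite
            history.append((s, y, 1.0 / dot(s, y)))
            if len(history) > m:
                history.pop(0)  # drop the oldest pair: the "limited memory"
        x, g = x_new, g_new
    return x
```

In the hologram setting, x would be the vector of pixel phases and f the cross-entropy loss against the target far-field image, with n far too large to store an n-by-n Hessian, which is exactly why the limited-memory variant is used.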
7

Prakash, Ishan, Aniruddh Bansal, Rohit Verma, and Rajeev Shorey. "SmartSplit: Latency-Energy-Memory Optimisation for CNN Splitting on Smartphone Environment." In 2022 14th International Conference on COMmunication Systems & NETworkS (COMSNETS). IEEE, 2022. http://dx.doi.org/10.1109/comsnets53615.2022.9668610.

8

Santos, Luis C., Filipe N. Santos, Andrae S. Aguiar, Antonio Valente, and Pedro Costa. "Path Planning with Hybrid Maps for processing and memory usage optimisation." In 2022 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC). IEEE, 2022. http://dx.doi.org/10.1109/icarsc55462.2022.9784767.

9

Wang, Yongtian, and John Macdonald. "Memory-Saving Techniques In Damped-Least-Squares Optimisation Of Complex Optical Systems." In 1988 International Congress on Optical Science and Engineering, edited by Andre Masson, Joachim J. Schulte-in-den-Baeumen, and Hannfried Zuegge. SPIE, 1989. http://dx.doi.org/10.1117/12.949386.

10

Prakash, Sidharth S., and Binsu C. Kovoor. "Performance optimisation of web applications using In-memory caching and asynchronous job queues." In 2016 International Conference on Inventive Computation Technologies (ICICT). IEEE, 2016. http://dx.doi.org/10.1109/inventive.2016.7830234.
