Academic literature on the topic 'Core kernel operations'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Core kernel operations.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Core kernel operations"

1

Nadim, Mohammad, Wonjun Lee, and David Akopian. "Characteristic Features of the Kernel-level Rootkit for Learning-based Detection Model Training." Electronic Imaging 2021, no. 3 (June 18, 2021): 34–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.3.mobmu-034.

Full text
Abstract:
The core part of the operating system is the kernel, and it plays an important role in managing critical data structure resources for correct operation. Kernel-level rootkits are the most elusive type of malware: they can modify the running OS kernel in order to hide their presence and perform many malicious activities, such as process hiding, module hiding, network communication hiding, and many more. In past years, many approaches have been proposed to detect kernel-level rootkits. Still, it is challenging to detect new attacks and properly categorize kernel-level rootkits. Memory forensic approaches have shown efficient results but are limited against transient attacks. Cross-view-based and integrity-monitoring-based approaches have their own weaknesses. A learning-based detection approach is an excellent way to solve these problems. In this paper, we give an insight into the characteristic features of kernel-level rootkits and how these features can be represented to train learning-based models in order to detect known and unknown attacks. Our feature set combines memory forensic, cross-view, and integrity features to train learning-based detection models. We also suggest useful tools that can be used to collect the characteristic features of kernel-level rootkits.
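A minimal sketch of the feature-fusion idea described in this abstract, with entirely synthetic data and illustrative feature names (none taken from the paper's actual feature set):

```python
# Hypothetical sketch: fusing three feature families to train a rootkit detector.
# Feature names, data, and labels are synthetic illustrations, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

memory_forensic = rng.random((n, 4))  # e.g., hidden-object counts from memory scans
cross_view = rng.random((n, 3))       # e.g., user-view vs. kernel-view discrepancies
integrity = rng.random((n, 3))        # e.g., checksum mismatches of kernel tables

X = np.hstack([memory_forensic, cross_view, integrity])
y = rng.integers(0, 2, size=n)        # 1 = rootkit present, 0 = clean (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```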
2

GE, MING, and YANGSHENG XU. "KERNEL-BASED DATA COMPRESSION FOR ADVANCED MANUFACTURING SYSTEM." Journal of Advanced Manufacturing Systems 02, no. 01 (June 2003): 89–104. http://dx.doi.org/10.1142/s021968670300023x.

Full text
Abstract:
Manufacturing systems are becoming larger and more complicated, and global manufacturing chains have become common in the new millennium. The internet and intranets integrate the advanced manufacturing system. To perform remote monitoring and diagnosis in such chains and systems, real-time data compression has become a core factor in the efficient and effective exchange of information via computer networks. This paper presents a new technique for compressing data using a kernel-based method. Overcoming the drawback of support vector techniques — that is, fast decompression but slow compression — the new method exhibits high speed in both phases. In addition, the new method can also be applied to pattern classification. Based on tests with strain signals derived from sheet metal stamping operations, the new method is very effective. The proposed technology has enormous potential in the monitoring and control of advanced manufacturing systems through the internet or intranets.
3

Driessen, T., S. Muto, and M. Nakayama. "A cooperative game of information trading: The core, the nucleolus and the kernel." ZOR Zeitschrift für Operations Research Methods and Models of Operations Research 36, no. 1 (January 1992): 55–72. http://dx.doi.org/10.1007/bf01541032.

Full text
4

Savran, I., M. Demirci, and A. H. Yılmaz. "Accelerating Shor’s factorization algorithm on GPUs." Canadian Journal of Physics 96, no. 7 (July 2018): 759–61. http://dx.doi.org/10.1139/cjp-2017-0768.

Full text
Abstract:
Shor’s quantum algorithm is very important for cryptography, because it can factor large numbers much faster than classical algorithms. In this study, we implement a simulator for Shor’s quantum algorithm on graphics processor units (GPUs) and compare our results with Liquid, a Microsoft quantum simulation platform, and two classical CPU implementations. We evaluate 10 benchmarks comparing our GPU implementation with Liquid and a single-core implementation. The analysis shows that GPU vector operations are more suitable for Shor’s quantum algorithm. Our GPU kernel function is compute-bound, because all threads in a block access the same element of the state vector. Our implementation achieves a 52.5× speedup over the single-core implementation and a 20.5× speedup over Liquid.
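The speedup is attributed to bulk vector operations over the quantum state vector. A minimal NumPy sketch of that core operation, applying a single-qubit gate to every amplitude at once (a simplification: a full Shor simulator also needs modular-exponentiation and QFT circuits):

```python
# Vectorized single-qubit gate application on an n-qubit state vector.
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to the `target` qubit of a 2**n state vector."""
    state = state.reshape([2] * n_qubits)           # one axis per qubit
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)           # restore qubit order
    return state.reshape(-1)

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                      # |000>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard gate
for q in range(n):
    state = apply_single_qubit_gate(state, H, q, n)
print(np.round(state.real, 3))                      # uniform amplitudes 1/sqrt(8)
```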
5

Chiang, Mei-Ling, and Wei-Lun Su. "Thread-Aware Mechanism to Enhance Inter-Node Load Balancing for Multithreaded Applications on NUMA Systems." Applied Sciences 11, no. 14 (July 14, 2021): 6486. http://dx.doi.org/10.3390/app11146486.

Full text
Abstract:
NUMA multi-core systems divide system resources into several nodes. When a load imbalance between cores occurs, the kernel scheduler's load-balancing mechanism migrates threads between cores or across NUMA nodes. Remote memory access is required for a thread to access memory on its previous node, which degrades performance. Threads to be migrated must be selected effectively and efficiently, since the related operations run in the critical path of the kernel scheduler. This study focuses on improving inter-node load balancing for multithreaded applications. We propose a thread-aware selection policy that considers the distribution of threads on nodes for each thread group while migrating one thread for inter-node load balancing. The selected thread belongs to the thread group with the least exclusive thread distribution, i.e., the group whose members are distributed most evenly across nodes. This choice has less influence on data mapping and thread mapping for the thread group. We further devise several enhancements to eliminate superfluous evaluations for multithreaded processes, making the selection procedure more efficient. Experimental results for the commonly used PARSEC 3.0 benchmark suite show that the modified Linux kernel with the proposed selection policy increases performance by 10.7% compared with the unmodified Linux kernel.
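The selection policy lends itself to a toy model: among migration candidates, prefer a thread whose group is already spread evenly across nodes. The data structures below are hypothetical simplifications, not the kernel's:

```python
# Toy model of the thread-aware migration pick, not the actual Linux code.
from collections import Counter

def exclusiveness(group_threads):
    """Fraction of a group's threads concentrated on its most-loaded node."""
    counts = Counter(t["node"] for t in group_threads)
    return max(counts.values()) / len(group_threads)

def pick_thread_to_migrate(candidates, groups):
    """Pick the candidate from the least 'exclusive' (most evenly spread) group."""
    return min(candidates, key=lambda t: exclusiveness(groups[t["group"]]))

groups = {
    "A": [{"group": "A", "node": 0}, {"group": "A", "node": 0}],  # clustered
    "B": [{"group": "B", "node": 0}, {"group": "B", "node": 1}],  # spread out
}
candidates = [groups["A"][0], groups["B"][0]]
chosen = pick_thread_to_migrate(candidates, groups)
print("migrate a thread of group:", chosen["group"])  # -> B, the evenly spread group
```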
6

Ping, Yuan, Bin Hao, Xiali Hei, Jie Wu, and Baocang Wang. "Maximized Privacy-Preserving Outsourcing on Support Vector Clustering." Electronics 9, no. 1 (January 17, 2020): 178. http://dx.doi.org/10.3390/electronics9010178.

Full text
Abstract:
Despite its remarkable capability in handling arbitrary cluster shapes, support vector clustering (SVC) suffers from the pricey storage of the kernel matrix and costly computations. Outsourcing data or functions on demand is intuitively expected, yet it raises serious privacy concerns. We propose maximized privacy-preserving outsourcing on SVC (MPPSVC), which, to the best of our knowledge, is the first all-phase outsourceable solution. For privacy preservation, we exploit the properties of homomorphic encryption and secure two-party computation. To break through the operation limitation, we propose a reformative SVC with elementary operations (RSVC-EO, the core of MPPSVC), in which a series of designs makes selective outsourcing of each phase possible. In the training phase, we develop a dual coordinate descent solver, which avoids interactions before obtaining the encrypted coefficient vector. In the labeling phase, we design a fresh convex decomposition cluster labeling, in which no iteration is required by convex decomposition and no sampling checks exist in connectivity analysis. Afterward, we customize secure protocols to match these operations for essential interactions in the encrypted domain. Considering the privacy-preserving property and efficiency in a semi-honest environment, we prove MPPSVC's robustness against adversarial attacks. Our experimental results confirm that MPPSVC achieves accuracies comparable to RSVC-EO, which outperforms the state-of-the-art variants of SVC.
7

Polievoda, Yurii. "INVESTIGATION OF THE PROCESS OF CHIPPING OF THE WALNUT SHELL AS A RESULT OF THE FORCE ACTION OF HEMISPHERICAL SURFACES." Vibrations in engineering and technology, no. 3(98) (October 30, 2020): 111–19. http://dx.doi.org/10.37128/2306-8744-2020-3-12.

Full text
Abstract:
Knowledge of the technical properties of walnuts is essential to facilitate and improve the design of processing equipment and of procedures for collection, packaging and storage. In the process of obtaining the fruit of a walnut and its derivatives, the kernels are subjected to a number of single operations. At each stage, based on the mechanical properties of walnuts, various types of equipment operate, including vibration equipment for cleaning, sorting, separating and peeling. The article provides a detailed review of the process of splitting the walnut kernel. Examples of many ways to split the shell are given, with their disadvantages and advantages. The mechanical method of peeling is theoretically investigated and substantiated. To reduce mechanical damage and improve the quality of extraction of walnut kernels during peeling, one variety of walnut was selected as the object of study. The results of compression tests showed that the relationship between the breaking force and the deformation of the walnut was nonlinear, and that the process of peeling (chipping) the shell consisted mainly of three stages. The best method of splitting a walnut was «spherical compression» (between two spherical inserts), and the maximum value of the destructive force was also determined. The results showed that «spherical compression» was an appropriate way to break the walnut, leading to consistent patterns of crack propagation in the shell and, moreover, to better integrity of the removed kernel. Thus, the detailed results presented in this study provide useful data that can be used by engineers to design appropriate walnut processing machines, thereby improving process and quality control. The article also considers a number of small and large devices designed for splitting nuts, of different designs and capacities, both for home and industrial use.
8

Blanchfield, Paul J., Lori S. Tate, and Cheryl L. Podemski. "Survival and behaviour of rainbow trout (Oncorhynchus mykiss) released from an experimental aquaculture operation." This paper is part of the series "Forty Years of Aquatic Research at the Experimental Lakes Area." Canadian Journal of Fisheries and Aquatic Sciences 66, no. 11 (November 2009): 1976–88. http://dx.doi.org/10.1139/f09-127.

Full text
Abstract:
The potential for farmed fish that have escaped from open-cage aquaculture operations to affect native populations will depend on their survival and behaviour in the wild. We used standard commercial practices to rear 10 tonnes of rainbow trout (Oncorhynchus mykiss) in a 23 ha lake at the Experimental Lakes Area (Ontario, Canada). Each fall (2003–2005), we released farmed rainbow trout (escapees) into the study lake and monitored their movements using automated positioning telemetry. Rainbow trout experienced high annual mortality (~50%), with none surviving beyond 3 years. Farmed fish had narrowly defined pelagic distributions that comprised the upper few metres of the water column, even when at the cage site. Although released rainbow trout dispersed throughout the study lake, most spent significant portions of time at the cage site, especially during normal operation when commercial feed was available. Core use areas (50% kernel) included the farm for half of the released fish. Surviving rainbow trout showed continued reliance upon the cage site in their second year. However, wide dispersal, high growth rate, and lack of reliance on the cage site by some escaped fish warrant further research to assess potential effects of open-cage aquaculture in the water bodies where the industry occurs.
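The "core use area (50% kernel)" is the smallest region containing half of an animal's positional density. A small sketch of that estimate on synthetic telemetry fixes with a Gaussian KDE (the quantile thresholding is one common approximation):

```python
# 50% kernel "core use area" from synthetic telemetry positions.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic fixes (metres): most positions cluster near the cage site at (0, 0).
xy = np.vstack([rng.normal(0, 30, (800, 2)), rng.normal(200, 80, (200, 2))]).T

kde = gaussian_kde(xy)
dens = kde(xy)                        # estimated density at each observed fix
threshold = np.quantile(dens, 0.5)    # densities above this enclose ~50% of fixes
in_core = dens >= threshold
print(f"fixes inside the 50% core area: {in_core.mean():.0%}")
print("core-area centroid:", np.round(xy[:, in_core].mean(axis=1), 1))
```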
9

Ge, Fen, Ning Wu, Hao Xiao, Yuanyuan Zhang, and Fang Zhou. "Compact Convolutional Neural Network Accelerator for IoT Endpoint SoC." Electronics 8, no. 5 (May 5, 2019): 497. http://dx.doi.org/10.3390/electronics8050497.

Full text
Abstract:
As a classical artificial intelligence algorithm, the convolutional neural network (CNN) plays an important role in image recognition and classification and is gradually being applied in Internet of Things (IoT) systems. A compact CNN accelerator for the IoT endpoint System-on-Chip (SoC) is proposed in this paper to meet the needs of CNN computations. Based on an analysis of the CNN structure, basic functional modules of the CNN, such as the convolution circuit and pooling circuit, are designed with low data bandwidth and small area, and an accelerator is constructed in the form of four acceleration chains. After the acceleration unit design is completed, a Cortex-M3 is used to construct a verification SoC, and the designed verification platform is implemented on an FPGA to evaluate the resource consumption and performance of the CNN accelerator. The CNN accelerator achieved a throughput of 6.54 GOPS (giga operations per second) while consuming 4901 LUTs and no hardware multipliers. The comparison shows that the compact accelerator proposed in this paper gives the Cortex-M3-based SoC twice the CNN computational power of a quad-core Cortex-A7 SoC and 67% of that of an eight-core Cortex-A53 SoC.
10

MERRILL, DUANE, and ANDREW GRIMSHAW. "HIGH PERFORMANCE AND SCALABLE RADIX SORTING: A CASE STUDY OF IMPLEMENTING DYNAMIC PARALLELISM FOR GPU COMPUTING." Parallel Processing Letters 21, no. 02 (June 2011): 245–72. http://dx.doi.org/10.1142/s0129626411000187.

Full text
Abstract:
The need to rank and order data is pervasive, and many algorithms are fundamentally dependent upon sorting and partitioning operations. Prior to this work, GPU stream processors have been perceived as challenging targets for problems with dynamic and global data-dependences such as sorting. This paper presents: (1) a family of very efficient parallel algorithms for radix sorting; and (2) our allocation-oriented algorithmic design strategies that match the strengths of GPU processor architecture to this genre of dynamic parallelism. We demonstrate multiple factors of speedup (up to 3.8x) compared to state-of-the-art GPU sorting. We also reverse the performance differentials observed between GPU and multi/many-core CPU architectures by recent comparisons in the literature, including those with 32-core CPU-based accelerators. Our average sorting rates exceed 1B 32-bit keys/sec on a single GPU microprocessor. Our sorting passes are constructed from a very efficient parallel prefix scan "runtime" that incorporates three design features: (1) kernel fusion for locally generating and consuming prefix scan data; (2) multi-scan for performing multiple related, concurrent prefix scans (one for each partitioning bin); and (3) flexible algorithm serialization for avoiding unnecessary synchronization and communication within algorithmic phases, allowing us to construct a single implementation that scales well across all generations and configurations of programmable NVIDIA GPUs.
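One radix-sort pass is a histogram, an exclusive prefix scan, and a stable scatter; the paper's contribution is realizing this with fused, multi-scan GPU kernels. A serial Python sketch of the pass itself:

```python
# One 4-bit radix pass built on an exclusive prefix scan (serial illustration).
from itertools import accumulate

def radix_pass(keys, shift, bits=4):
    radix = 1 << bits
    digits = [(k >> shift) & (radix - 1) for k in keys]
    hist = [0] * radix
    for d in digits:
        hist[d] += 1
    # Exclusive prefix scan of the histogram gives each bin's start offset.
    offsets = [0] + list(accumulate(hist))[:-1]
    out = [0] * len(keys)
    for k, d in zip(keys, digits):    # stable scatter preserves prior ordering
        out[offsets[d]] = k
        offsets[d] += 1
    return out

def radix_sort(keys, key_bits=32, bits=4):
    for shift in range(0, key_bits, bits):
        keys = radix_pass(keys, shift, bits)
    return keys

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
```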

Dissertations / Theses on the topic "Core kernel operations"

1

He, Haifeng. "Memory Footprint Reduction of Operating System Kernels." Diss., The University of Arizona, 2009. http://hdl.handle.net/10150/196010.

Full text
Abstract:
As the complexity of embedded systems grows, there is an increasing use of operating systems (OSes) in embedded devices, such as mobile phones, media players and other consumer electronics. Despite their convenience and flexibility, such operating systems can be overly general and contain features and code that are not needed in every application context, which incurs unnecessary performance overheads. In most embedded systems, resources, such as processing power, available memory, and power consumption, are strictly constrained. In particular, the amount of memory on embedded devices is often very limited. This, together with the popular usage of operating systems in embedded devices, makes it important to reduce the memory footprint of operating systems. This dissertation addresses this challenge and presents automated ways to reduce the memory footprint of OS kernels for embedded systems. First, we present kernel code compaction, an automated approach that reduces the code size of an OS kernel statically by removing unused functionality. OS kernel code tends to be different from ordinary application code, including the presence of a significant amount of hand-written assembly code, multiple entry points, implicit control flow paths involving interrupt handlers, and frequent indirect control flow via function pointers. We use a novel "approximated compilation" technique to apply source-level pointer analysis to hand-written assembly code. A prototype implementation of our idea on an Intel x86 platform and a minimally configured Linux kernel obtains a code size reduction of close to 24%. Even though code compaction can remove a portion of the entire OS kernel code, when exercised with typical embedded benchmarks, such as MiBench, most kernel code is executed infrequently if at all. Our second contribution is on-demand code loading, an automated approach that keeps the rarely used code on secondary storage and loads it into main memory only when it is needed. In order to minimize the overhead of code loading, a greedy node-coalescing algorithm is proposed to group closely related code together. The experimental results show that this approach can reduce memory requirements for the Linux kernel code by about 53% with little degradation in performance. Last, we describe dynamic data structure compression, an approach that reduces the runtime memory footprint of dynamic data structures in an OS kernel. A prototype implementation for the Linux kernel reduces the memory consumption of the slab allocators in Linux by 17.5% when running the MediaBench suite while incurring only minimal increases in execution time (1.9%).
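The compaction step rests on reachability over the kernel's call graph, seeded with every entry point, including interrupt handlers and functions reached only through function pointers. A toy sketch with a hypothetical call graph:

```python
# Toy reachability analysis; unreached functions are compaction candidates.
def reachable(call_graph, entry_points):
    seen, stack = set(), list(entry_points)
    while stack:
        fn = stack.pop()
        if fn not in seen:
            seen.add(fn)
            stack.extend(call_graph.get(fn, []))
    return seen

call_graph = {
    "start_kernel": ["sched_init", "mm_init"],
    "irq_handler": ["do_irq"],            # implicit entry via interrupts
    "mm_init": ["alloc_pages"],
    "legacy_driver_init": ["old_probe"],  # never referenced -> removable
}
entries = {"start_kernel", "irq_handler"}  # must include interrupt entry points
live = reachable(call_graph, entries)
print("removable:", sorted(set(call_graph) - live))  # -> ['legacy_driver_init']
```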
2

Paolillo, Antonio. "Optimisation of Performance Metrics of Embedded Hard Real-Time Systems using Software/Hardware Parallelism." Doctoral thesis, Universite Libre de Bruxelles, 2018. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/277427.

Full text
Abstract:
Nowadays, embedded systems are part of our daily lives. Some of these systems are called safety-critical and have strong requirements in terms of safety and reliability. Additionally, these systems must have a long autonomy, good performance and minimal costs. Finally, these systems must exhibit predictable behaviour and provide their results within firm deadlines. When these different constraints are combined in the requirement specifications of a modern product, classic design techniques making use of single-core platforms are not sufficient. Academic research in the field of real-time embedded systems has produced numerous techniques to exploit the capabilities of modern hardware platforms. These techniques are often based on using the parallelism inherently present in modern hardware to improve system performance while reducing the platform power dissipation. However, very few systems existing on the market use these state-of-the-art techniques, and few of these techniques have been validated in the context of practical experiments. In this thesis, we study operating-system-level techniques that exploit hardware parallelism through the implementation of parallel software, in order to boost the performance of target applications and to reduce the overall system energy consumption while satisfying strict application timing requirements. We detail the theoretical foundations of the ideas applied in the dissertation and validate these ideas through experimental work. To this aim, we use a new real-time operating system kernel written in the context of the creation of a spin-off of the Université libre de Bruxelles. Our experiments are based on the execution of applications on this operating system running on a real-world platform for embedded systems. Our results show that, compared to traditional design techniques, using parallel and power-aware scheduling techniques to exploit hardware and software parallelism allows embedded applications to be executed with substantial savings in terms of energy consumption. We present future and ongoing research work that exploits the capabilities of recent embedded platforms, which combine multi-core processors and reconfigurable hardware logic, allowing further improvements in performance and energy consumption.
3

Aloui, Nadia. "Localisation sonore par retournement temporel." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT079/document.

Full text
Abstract:
The objective of this PhD is to propose a location solution that is simple and robust to the multipath propagation that characterizes indoor environments. First, a location system that exploits the time domain of channel parameters is proposed. The system adopts the time of arrival of the path of maximum amplitude as a signature and estimates the target position through nonparametric kernel regression. The system was evaluated in experiments for two main configurations: a privacy-oriented configuration with code-division multiple-access operation and a centralized configuration with time-division multiple-access operation. A comparison between our privacy-oriented system and another acoustic location system, based on code-division multiple-access operation and the lateration method, confirms the results found in radiofrequency-based localization. However, our experiments are the first to demonstrate the detrimental effect that reverberation has on acoustic localization approaches. Second, a location system based on the time-reversal technique, able to localize several sources simultaneously with different precisions, was tested through simulations for different numbers of sources and then validated by experiments. Finally, we investigated reducing the audibility of the localization signal through psycho-acoustics. A filter, set from the absolute threshold of hearing, is applied to the signal. Our results showed an improvement in precision compared to the location system without the psycho-acoustic model, thanks to the use of a filter matched to the psycho-acoustic model at the receiver. Moreover, listening to the signal after applying the psycho-acoustic model showed a significant reduction in its audibility compared to that of the original signal.
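The position estimate described above is nonparametric kernel (Nadaraya-Watson) regression over time-of-arrival signatures. A minimal sketch with a synthetic fingerprint database (bandwidth and data are purely illustrative):

```python
# Nadaraya-Watson kernel regression from a ToA signature to a 2-D position.
import numpy as np

def kernel_regression(signatures, positions, query, bandwidth=0.5):
    """Weight each stored fingerprint by a Gaussian kernel of its distance."""
    d2 = np.sum((signatures - query) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth**2))
    return w @ positions / w.sum()

# Synthetic database: ToA signatures (milliseconds) -> known positions (metres).
signatures = np.array([[1.0, 2.0], [1.2, 1.8], [2.0, 1.0]])
positions = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 3.0]])
query = np.array([1.1, 1.9])
print("estimated position:", kernel_regression(signatures, positions, query))
```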
4

Zhai, Shuang. "Transkernel: An Executor for Commodity Kernels on Peripheral Cores." Thesis, 2019.

Find full text
Abstract:

Modern mobile devices run numerous ephemeral tasks. These tasks are driven by background activities, such as push notifications and sensor readings. In order to execute these tasks, the whole platform has to periodically wake up beforehand and go back to sleep afterwards. During this process, the OS kernel operates on the power state of various IO devices, which has been identified as the bottleneck for energy efficiency. To this end, we want to offload this kernel phase to a more energy-efficient, microcontroller-level core, named the peripheral core.

To execute a commodity OS on a peripheral core, existing approaches either require much engineering effort or incur high execution cost. Therefore, we propose a new OS model called the transkernel. By utilizing a cross-ISA dynamic binary translation (DBT) technique, the transkernel creates a virtualized environment on the peripheral core. It relies on a small set of stable interfaces, is specialized for frequently executed kernel paths, and exploits ISA similarities to reduce DBT overhead.

We implement a transkernel model on the ARM platform. With novel design and optimization, we demonstrate that a transkernel can gain energy efficiency. Moreover, it provides a new OS design for harnessing heterogeneous SoCs.


Books on the topic "Core kernel operations"

1

Mauro, Jim. Solaris internals: Core kernel components. Palo Alto, CA: Sun Microsystems, Inc., 2001.

Find full text
2

Jolitz, William Frederick, ed. Source code secrets: The basic kernel. San Jose, Calif.: Peer-to-Peer Communications, 1996.

Find full text
3

Schulman, Andrew. Unauthorized Windows 95: Features, Bootprozess, Kernel 32, APIs, 32/16 Bit Code : Developer's Guide für die Grundlagen von Windows 95. Bonn: IWT Verlag, 1995.

Find full text
4

Linux Core Kernel Commentary: Guide to Insider's Knowledge on the Core Kernel of the Linux Code. Coriolis Group Books, 1999.

Find full text
5

Mauro, Jim, Richard McDougall, and Sun Microsystems Press. Solaris Internals: Core Kernel Architecture. Prentice Hall PTR, 2000.

Find full text
6

Maxwell, Scott Andrew. Linux Core Kernel Commentary. 2nd ed. Coriolis Group Books, 2001.

Find full text
7

Jolitz, Lynne G., and William Frederick Jolitz. The Basic Kernel: Source Code Secrets. Coriolis Group Books, 2000.

Find full text
8

Perla, Enrico. A Guide to Kernel Exploitation: Attacking the Core. Syngress, 2010.

Find full text

Book chapters on the topic "Core kernel operations"

1

Drescher, Gabor, Timo Hönig, Sebastian Maier, Benjamin Oechslein, and Wolfgang Schröder-Preikschat. "A Scalability-Aware Kernel Executive for Many-Core Operating Systems." In Euro-Par 2013: Parallel Processing Workshops, 823–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-54420-0_80.

Full text
2

Marwedel, Peter. "System Software." In Embedded Systems, 203–37. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-60910-8_4.

Full text
Abstract:
In order to cope with the complexity of applications of embedded systems, reuse of components is a key technique. As pointed out by Sangiovanni-Vincentelli (The context for platform-based design. IEEE Design and Test of Computers, 2002), software and hardware components must be reused in the platform-based design methodology (see p. 296). These components comprise knowledge from earlier design efforts and constitute intellectual property (IP). Standard software components that can be reused include system software components such as embedded operating systems (OSs) and middleware. The latter term denotes software that provides an intermediate layer between the OS and application software. This chapter starts with a description of general requirements for embedded operating systems. This includes real-time capabilities as well as adaptation techniques to provide just the required functionality. Mutually exclusive access to resources can result in priority inversion, which is a serious problem for real-time systems. Priority inversion can be circumvented with resource access protocols. We will present three such protocols: the priority inheritance, priority ceiling, and stack resource protocols. A separate section covers the ERIKA real-time system kernel. Furthermore, we will explain how Linux can be adapted to systems with tight resource constraints. Finally, we will provide pointers for additional reusable software components, like hardware abstraction layers (HALs), communication software, and real-time databases. Our description of embedded operating systems and of middleware in this chapter is consistent with the overall design flow.
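Of the three resource-access protocols named here, priority inheritance is the simplest to illustrate: a lock holder temporarily runs at the highest priority among the tasks it blocks. A toy sketch (the bookkeeping is illustrative, not ERIKA's or Linux's):

```python
# Toy priority-inheritance bookkeeping: the holder inherits waiters' top priority.
class Task:
    def __init__(self, name, priority):
        self.name, self.base, self.effective = name, priority, priority

class PIMutex:
    def __init__(self):
        self.holder, self.waiters = None, []

    def acquire(self, task):
        if self.holder is None:
            self.holder = task
        else:
            self.waiters.append(task)
            # Boost the holder past every task it is blocking.
            self.holder.effective = max(self.holder.effective, task.effective)

    def release(self):
        self.holder.effective = self.holder.base   # drop the inherited boost
        self.holder = self.waiters.pop(0) if self.waiters else None

low, high = Task("low", 1), Task("high", 10)
m = PIMutex()
m.acquire(low)         # low-priority task takes the lock
m.acquire(high)        # high-priority task blocks on it
print(low.effective)   # -> 10: inversion is bounded while low finishes
m.release()
print(low.effective)   # -> 1: boost removed once the lock is handed over
```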
3

Harbour, Daniel. "The Form of the Phi Kernel." In Impossible Persons. The MIT Press, 2016. http://dx.doi.org/10.7551/mitpress/9780262034739.003.0009.

Full text
Abstract:
This chapter considers the broader lessons that might be drawn from the current study of person. It draws out the core commonalities between the person and number features that have been proposed: that features are "operations" richer than first-order predicates held together by conjunction; that they are not subject to extrinsic constraints on order of composition or co-occurrence; and that there are semantic and morphological grounds for representing features of both kinds bivalently. The consequences of this study might ramify beyond linguistics by altering our understanding of, and means of investigating, the language of thought and the nature and evolution of mind. The author asserts that, contrary to widespread opinion, minds do leave fossils, but these are to be sought not by paleoanthropologists sifting through the archeological record, but by cognitive scientists, including linguists, via our theories of the structure of the mind itself.
4

Rai, Jitendra Kumar, Atul Negi, Rajeev Wankar, and K. D. Nayak. "A Machine Learning Based Meta-Scheduler for Multi-Core Processors." In Machine Learning, 522–34. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-60960-818-7.ch311.

Full text
Abstract:
Sharing resources such as caches and memory buses between the cores of multi-core processors may cause performance bottlenecks for running programs. In this paper, the authors describe a meta-scheduler, which adapts process scheduling decisions to reduce contention for shared L2 caches on multi-core processors. The meta-scheduler takes into account the multi-core topology as well as the L2-cache-related characteristics of the processes. Using the model generated by machine learning, it predicts the L2 cache behavior, i.e., solo-run-L2-cache-stress, of the programs. It runs in user mode and guides the underlying operating system process scheduler in intelligent scheduling of processes to reduce contention for shared L2 caches. In these experiments, the authors observed up to 12 percent speedup in individual as well as overall performance when using the meta-scheduler as compared to the default process scheduler (Completely Fair Scheduler) of the Linux kernel.
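A hedged sketch of the loop this abstract describes: predict each process's solo-run-L2-cache-stress with a learned model, then co-schedule so the predicted stress is spread across shared caches (model, features, and pairing rule are invented for illustration):

```python
# Illustrative meta-scheduling step: predict L2 stress, then pair processes so
# each shared L2 receives one high-stress and one low-stress process.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: hardware-counter features -> measured solo L2 stress.
features = np.array([[0.9, 0.8], [0.2, 0.1], [0.7, 0.6], [0.1, 0.3]])
solo_l2_stress = np.array([0.85, 0.10, 0.70, 0.20])
model = LinearRegression().fit(features, solo_l2_stress)

procs = ["blur", "gzip", "fft", "wc"]
order = np.argsort(model.predict(features))   # ascending predicted cache stress
pairs = [(procs[order[i]], procs[order[-1 - i]]) for i in range(len(procs) // 2)]
print(pairs)   # -> [('gzip', 'blur'), ('wc', 'fft')]
```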
5

Rai, Jitendra Kumar, Atul Negi, Rajeev Wankar, and K. D. Nayak. "A Machine Learning Based Meta-Scheduler for Multi-Core Processors." In Technological Innovations in Adaptive and Dependable Systems, 226–38. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0255-7.ch014.

Full text
Abstract:
Sharing resources such as caches and memory buses between the cores of multi-core processors may cause performance bottlenecks for running programs. In this paper, the authors describe a meta-scheduler, which adapts process scheduling decisions to reduce contention for shared L2 caches on multi-core processors. The meta-scheduler takes into account the multi-core topology as well as the L2-cache-related characteristics of the processes. Using the model generated by machine learning, it predicts the L2 cache behavior, i.e., solo-run-L2-cache-stress, of the programs. It runs in user mode and guides the underlying operating system process scheduler in intelligent scheduling of processes to reduce contention for shared L2 caches. In these experiments, the authors observed up to 12 percent speedup in individual as well as overall performance when using the meta-scheduler as compared to the default process scheduler (Completely Fair Scheduler) of the Linux kernel.
6

"Dense Linear Algebra." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 47–93. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-7082-1.ch003.

Full text
Abstract:
This chapter introduces innovative approaches for the efficient use of some of the most novel tasking-based techniques to optimize dense linear algebra operations. The idea is to explore high-level programming techniques that increase programming productivity and performance for dense linear algebra operations. The authors apply these techniques to some of the most important and widely used dense linear algebra kernels, such as the GEMM and TRSM routines of the BLAS-3 standard, as well as the LU factorization and solve of the LAPACK library. The authors use two different current HPC architectures as target platforms: a multi-core CPU and a GPU hardware accelerator. Different approaches are presented depending on the target platform, but always based on tasking.
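The tasking idea sketches naturally: decompose GEMM into independent output tiles and hand each tile to a runtime as a task. A Python thread pool stands in below for the OpenMP/OmpSs-style runtimes typically targeted (an assumption, not the chapter's code):

```python
# Tile-parallel GEMM: each (i, j) output tile of C is an independent task.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def gemm_tiled(A, B, tile=64):
    n = A.shape[0]
    C = np.zeros((n, n))

    def tile_task(i, j):
        # Accumulate the whole k-loop for one tile; tiles never overlap,
        # so tasks need no synchronization among themselves.
        for k in range(0, n, tile):
            C[i:i+tile, j:j+tile] += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]

    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(tile_task, i, j)
                   for i in range(0, n, tile) for j in range(0, n, tile)]
        for f in futures:
            f.result()   # wait; propagate any exception
    return C

rng = np.random.default_rng(0)
A, B = rng.random((256, 256)), rng.random((256, 256))
print(np.allclose(gemm_tiled(A, B), A @ B))   # -> True
```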
7

Tischler, Michael A. "Accelerating Geospatial Modeling in ArcGIS With Graphical Processor Units." In Geospatial Intelligence, 411–22. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-8054-6.ch019.

Full text
Abstract:
Geospatial data can be enormous in size and tedious to process efficiently on standard computational workstations. Distributing the processing tasks through highly parallelized processing reduces the burden on the primary processor, and processing times can drastically shorten as a result. ESRI's ArcGIS, while widely used in the military, does not natively support multi-core processing or utilization of graphics processor units (GPUs). However, the ArcPy Python library included in ArcGIS 10 provides geospatial developers with the means to process geospatial data in a flexible environment that can be linked with GPU application programming interfaces (APIs). This research extends a custom desktop geospatial model of spatial similarity for remote soil classification which takes advantage of both standard ArcPy/ArcGIS geoprocessing functions and custom GPU kernels, operating on an NVIDIA Tesla S2050 equipped with potential access to 1792 cores. The author presents results describing hardware and software configurations, processing efficiency gains, and lessons learned.
8

Huang, Yung-Sen, and Chung-Yen Lin. "Stimulatory Effects of Androgens on Eel Primary Ovarian Development - from Phenotypes to Genotypes." In Veterinary Medicine and Science. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.99582.

Full text
Abstract:
Androgens stimulate primary ovarian development in vertebrates. Japanese eels underwent surgery to sample pre- and post-treatment ovarian tissues from the same individual, and ovarian phenotypic and genotypic data were mined in pairs. A correlation between the initial ovarian status (determined by kernel density estimation (KDE), presented as a probability density of oocyte size) and the outcome of androgen (17MT) treatment (change in the ovary) has been shown. The initial ovarian status appeared to be important in influencing ovarian androgenic sensitivity, and the initial ovary was important to the outcomes of androgen treatments. Analysis of differentially expressed genes (DEGs) showed that expression presence-absence variation (ePAV) exists in the Japanese eel; core, unique, and accessory genes were identified, and the sensitivities of initial ovaries correlated with their gene expression profiles. We speculate that differential gene expression underlies the variation in phenotypic responses to 17MT, and the transcriptomic approach appears to allow the extraction of multiple layers of genomic data.
9

de Luca, P., A. Galletti, H. R. Ghehsareh, L. Marcellino, and M. Raei. "A GPU-CUDA Framework for Solving a Two-Dimensional Inverse Anomalous Diffusion Problem." In Parallel Computing: Technology Trends. IOS Press, 2020. http://dx.doi.org/10.3233/apc200056.

Full text
Abstract:
This paper deals with the solution of an inverse time-fractional diffusion equation described by a Caputo fractional derivative. Numerical simulations involving large domains give rise to huge practical problems. Hence, starting from an accurate meshless localized collocation method using radial basis functions (RBFs), we propose a fast algorithm which exploits GPU-CUDA capabilities. In more detail, we first developed a C code which uses the well-known numerical library LAPACK to perform basic linear algebra operations in order to implement an efficient sequential algorithm. We then propose GPU software based on ad hoc parallel CUDA kernels and efficient usage of the parallel numerical libraries available for GPUs. Performance analysis shows the reliability and efficiency of the proposed parallel implementation.
10

Yang, Dai, Tilman Küstner, Rami Al-Rihawi, and Martin Schulz. "Exploring High Bandwidth Memory for PET Image Reconstruction." In Parallel Computing: Technology Trends. IOS Press, 2020. http://dx.doi.org/10.3233/apc200044.

Full text
Abstract:
Memory bandwidth plays an essential role in high performance computing. Its impact on system performance is evident when running applications with a low arithmetic intensity. Therefore, high bandwidth memory is on the agenda of many vendors. However, depending on the memory architecture, other optimizations are required to exploit the performance gain from high bandwidth memory technology. In this paper, we present our optimizations for the Maximum Likelihood Expectation-Maximization (MLEM) algorithm, a method for positron emission tomography (PET) image reconstruction, with a sparse matrix-vector (SpMV) kernel. The results show significant improvement in performance when executing the code on an Intel Xeon Phi processor with MCDRAM when compared to multi-channel DRAM. We further identify that the latency of the MCDRAM becomes a new limiting factor, requiring further optimization. Ultimately, after implementing cache-blocking optimization, we achieved a total memory bandwidth of up to 180 GB/s for the SpMV operation.
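The hot loop behind such results is a CSR sparse matrix-vector product: one multiply-add per stored nonzero, so throughput is set by how fast memory streams the values and indices. A plain NumPy sketch of the kernel (the paper's MCDRAM placement and cache blocking sit below this level):

```python
# CSR sparse matrix-vector multiply, y = A @ x, one row at a time.
import numpy as np

def spmv_csr(indptr, indices, data, x):
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        start, end = indptr[row], indptr[row + 1]
        # Low arithmetic intensity: one multiply-add per streamed nonzero.
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

# 3x3 example matrix: [[4, 0, 9], [0, 7, 0], [5, 0, 0]]
indptr = np.array([0, 2, 3, 4])
indices = np.array([0, 2, 1, 0])
data = np.array([4.0, 9.0, 7.0, 5.0])
x = np.array([1.0, 2.0, 3.0])
print(spmv_csr(indptr, indices, data, x))   # -> [31. 14.  5.]
```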

Conference papers on the topic "Core kernel operations"

1

Dejon, Nicolas, Chrystel Gaber, and Gilles Grimaud. "From MMU to MPU: Adaptation of the Pip Kernel to Constrained Devices." In 12th International Conference on Artificial Intelligence, Soft Computing and Applications. Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.122309.

Full text
Abstract:
This article presents a hardware-based memory isolation solution for constrained devices. Existing solutions target high-end embedded systems (typically ARM Cortex-A with a Memory Management Unit, MMU), such as seL4 or Pip (formally verified kernels), or target low-end devices, such as ACES, MINION, TrustLite and EwoK, but with limited flexibility, proposing a single level of isolation. Our approach consists of adapting Pip to inherit its flexibility (multiple levels of isolation) while using the Memory Protection Unit (MPU) instead of the MMU, since the MPU is commonly available on constrained embedded systems (typically ARMv7 Cortex-M4 or ARMv8 Cortex-M33 and similar devices). This paper describes our design of Pip-MPU (Pip's variant based on the MPU) and the rationale behind our choices. We validate our proposal with an implementation on an nRF52840 development kit and perform various evaluations, such as memory footprint, CPU cycles and energy consumption. We demonstrate that although our prototyped Pip-MPU causes a 16% overhead on both performance and energy consumption, it can reduce the attack surface of the accessible application memory from 100% down to 2% and the privileged operations by 99%. Pip-MPU takes less than 10 kB of Flash (6 kB for its core components) and 550 B of RAM.
2

Chen, Zhi-Xuan, Cheng Jin, Tian-Jing Zhang, Xiao Wu, and Liang-Jian Deng. "SpanConv: A New Convolution via Spanning Kernel Space for Lightweight Pansharpening." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/118.

Full text
Abstract:
Standard convolution operations can effectively perform feature extraction and representation but result in high computational cost, largely due to the generation of the original convolution kernel corresponding to the channel dimension of the feature map, which causes unnecessary redundancy. In this paper, we focus on kernel generation and present an interpretable span strategy, named SpanConv, for the effective construction of kernel space. Specifically, we first learn two navigated kernels with a single channel as bases, then extend the two kernels by learnable coefficients, and finally span the two sets of kernels by their linear combination to construct the so-called SpanKernel. The proposed SpanConv is realized by replacing the plain convolution kernel with SpanKernel. To verify the effectiveness of SpanConv, we design a simple network with SpanConv. Experiments demonstrate that the proposed network significantly reduces parameters compared with benchmark networks for remote sensing pansharpening, while achieving competitive performance and excellent generalization. Code is available at https://github.com/zhi-xuan-chen/IJCAI-2022 SpanConv.
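A NumPy sketch of the spanning construction as we read this abstract: two single-channel base kernels are extended by learnable per-channel coefficients and linearly combined into the full kernel (shapes and the combination rule are our interpretation, not the authors' code):

```python
# Span a multi-channel convolution kernel from two single-channel bases.
import numpy as np

rng = np.random.default_rng(0)
k, c_in, c_out = 3, 8, 16

base1 = rng.standard_normal((k, k))   # first navigated single-channel kernel
base2 = rng.standard_normal((k, k))   # second navigated single-channel kernel

# Learnable coefficients extend each base across input/output channels.
alpha = rng.standard_normal((c_out, c_in, 1, 1))
beta = rng.standard_normal((c_out, c_in, 1, 1))

# SpanKernel: linear combination of the two extended bases.
span_kernel = alpha * base1 + beta * base2    # shape (c_out, c_in, k, k)
print(span_kernel.shape)

# Parameters vs. a plain kernel of the same shape: 274 instead of 1152.
print(c_out * c_in * k * k, 2 * k * k + 2 * c_out * c_in)
```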
3

Brandes, Tim, Stefano Scarso, Christian Koch, and Stephan Staudacher. "Data-Driven Analysis of Engine Mission Severity Using Non-Dimensional Groups." In ASME Turbo Expo 2021: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/gt2021-58673.

Full text
Abstract:
A numerical experiment of intentionally reduced complexity is used to demonstrate a method to classify flight missions in terms of the operational severity experienced by the engines. In this proof of concept, the general term of severity is limited to the erosion of the core-flow compressor blade and vane leading edges. A Monte Carlo simulation of varying operational conditions generates the required database of 10000 flight missions. Each flight is sampled at a rate of 1 Hz. Eleven measurable or synthesizable physical parameters are deemed relevant for the problem. They are reduced to seven universal non-dimensional groups, which are averaged for each flight. The application of principal component analysis allows a further reduction to three principal components. They are used to run a support-vector machine model in order to classify the flights. A linear kernel function is chosen for the support-vector machine due to its low computation time compared to other functions. The robustness of the classification approach against measurement precision error is evaluated. In addition, a minimum number of flights required for training and a sensible number of severity classes are documented. Furthermore, the importance of training the algorithms on a sufficiently wide range of operations is presented.
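The pipeline this abstract outlines, flight-averaged non-dimensional groups reduced to three principal components and classified by a linear-kernel SVM, can be sketched directly with scikit-learn (all data synthetic; the seven groups themselves are not reproduced here):

```python
# Severity classification: 7 dimensionless groups -> 3 PCs -> linear-kernel SVM.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 7))                        # per-flight averaged groups
y = (X[:, :3].mean(axis=1) > 0.5).astype(int)    # synthetic severity label

pipe = make_pipeline(
    StandardScaler(),
    PCA(n_components=3),          # three principal components, as in the paper
    SVC(kernel="linear"),         # linear kernel for low computation time
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pipe.fit(X_tr, y_tr)
print(f"test accuracy: {pipe.score(X_te, y_te):.2f}")
```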
4

He, Haifeng, John Trimble, Somu Perianayagam, Saumya Debray, and Gregory Andrews. "Code Compaction of an Operating System Kernel." In International Symposium on Code Generation and Optimization (CGO'07). IEEE, 2007. http://dx.doi.org/10.1109/cgo.2007.3.

Full text
5

Yuan, Qingbo, Jianbo Zhao, Mingyu Chen, and Ninghui Sun. "GenerOS: An asymmetric operating system kernel for multi-core systems." In 2010 IEEE International Symposium on Parallel & Distributed Processing (IPDPS). IEEE, 2010. http://dx.doi.org/10.1109/ipdps.2010.5470363.

Full text
6

Kudrjavets, Gunnar, Jeff Thomas, Nachiappan Nagappan, and Ayushi Rastogi. "Is Kernel Code Different From Non-Kernel Code? A Case Study of BSD Family Operating Systems." In 2022 IEEE International Conference on Software Maintenance and Evolution (ICSME). IEEE, 2022. http://dx.doi.org/10.1109/icsme55016.2022.00027.

Full text
7

Oliveira, Daniela Alvim Seabra de, and S. Felix Wu. "Protecting Kernel Code and Data with a Virtualization-Aware Collaborative Operating System." In 2009 Annual Computer Security Applications Conference (ACSAC). IEEE, 2009. http://dx.doi.org/10.1109/acsac.2009.49.

Full text
8

Golubev, Igor, and Igor Kadarmetov. "Development of the Code GOLT for Performance Evaluation of Coated-Particles Fuel." In Fourth International Topical Meeting on High Temperature Reactor Technology. ASMEDC, 2008. http://dx.doi.org/10.1115/htr2008-58156.

Full text
Abstract:
The coated-particle (CP) performance computer code GOLT (Russian abbreviation of Gas-Cooled Fuel) is under development at the A. A. Bochvar All-Russia Research Institute of Inorganic Materials. The main goal of the code is to support the development of fuel for the Gas-Turbine Modular Helium Reactor (GT-MHR). The first version, GOLT-v1, is capable of calculating the temperature distribution along the particle radius, fuel kernel swelling, the development of internal pressure under the coating due to the formation of gaseous fission products and CO, and the development of stresses and deformation in each coating layer. For TRISO-type particles, a special probabilistic failure model was developed. According to the failure model, the integrated probability of silicon carbide failure depends on the failure probability of each dense pyrocarbon layer. The probabilistic version GOLT-v2 takes into account the possibility of gap formation between the buffer and the inner dense pyrocarbon layer, or between the kernel and the buffer, which influences the maximal fuel temperature and the stress distribution in the coating. A more detailed model of buffer performance under irradiation was developed and included in the code, and the list of probable coating failure mechanisms was extended: coating failure due to kernel-coating mechanical interaction (KCMI), as well as a model of failure due to kernel migration, was added. The thermodynamic code ASTRA is used in some tasks as a supporting tool for calculating internal pressure and the chemical interaction between the SiC coating and fission products and CO. The version GOLT-v3 accumulates all the capabilities of previous versions and includes Monte Carlo analysis for estimating the fraction of failed particles, accounting for the statistical dispersion of structural, material and operating parameters. The paper gives a short description of the capabilities of the latest versions of the code, with main attention given to results of the development version GOLT-v2a for evaluating fuel performance during accidents.
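The internal-pressure failure driver that GOLT models can be illustrated with an ideal-gas estimate over the buffer's void volume. Every number below is an assumed round value for illustration, not a GOLT input:

```python
# Ideal-gas estimate of coated-particle internal pressure (illustrative only).
R = 8.314               # J/(mol*K)
T = 1273.0              # K, assumed irradiation temperature (~1000 degC)
void_volume = 5.0e-11   # m^3, assumed free volume in the porous buffer

n_fission_gas = 1.6e-7  # mol of released Xe + Kr (assumed)
n_co = 0.8e-7           # mol of CO from free oxygen + buffer carbon (assumed)

pressure = (n_fission_gas + n_co) * R * T / void_volume
print(f"internal pressure: {pressure / 1e6:.1f} MPa")   # ~50 MPa with these inputs
```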
9

Goto, Minoru, Shohei Ueta, Jun Aihara, Yoshitomo Inaba, Yuji Fukaya, Yukio Tachibana, and Koji Okamoto. "Development of Security and Safety Fuel for Pu-Burner HTGR: Part 2 — Design Study of Fuel and Reactor Core." In 2017 25th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/icone25-67110.

Full text
Abstract:
JAEA (Japan Atomic Energy Agency) has conducted feasibility studies of the fuel and the reactor core for the plutonium-burner HTGR (High Temperature Gas-cooled Reactor). The increase of internal pressure, which is caused by the generation of CO gas and stable noble gases, is considered to be one of the major causes of TRISO (TRI-structural ISO-tropic) fuel failure at high burn-up. The CO gas is generated by the chemical reaction of the graphite making up the buffer layer with the free oxygen released from the fuel kernel by fission. The stable noble gases, which are fission products, are also released from the fuel kernel. Although it is considered very difficult to suppress the increase of the partial pressure of the stable noble gases because of their chemically inert nature, the increase of the CO gas partial pressure can be suppressed by reducing the free-oxygen mole concentration using a chemical reaction. ZrC acts as an oxygen getter, which reduces the free oxygen generated by the fission reaction. An increase of the CO gas partial pressure with burn-up in a TRISO fuel is therefore expected to be suppressed by coating ZrC on the fuel kernel. A PuO2-YSZ (Yttria-Stabilized Zirconia) fuel kernel with a ZrC coating, which enhances safety, security and safeguards, namely the 3S-TRISO fuel, was proposed for the plutonium-burner HTGR. In this study, the efficiency of the ZrC coating as a free-oxygen getter under HTGR temperature conditions was examined based on a thermochemical calculation. A preliminary feasibility study on the 3S-TRISO fuel, aiming to attain a high burn-up of around 500 GWd/t, was also conducted, focusing on fuel failure caused by an increase of internal pressure. Additionally, a preliminary nuclear analysis was conducted for the plutonium-burner HTGR with fuel shuffling in the radial direction. The thermochemical calculation showed that all of the free oxygen is captured by a thin ZrC coating at 1600°C. The plutonium-burner HTGR will be designed to keep the fuel temperature below 1600°C under severe accident conditions, and hence coating ZrC on the fuel kernel was confirmed to be a very effective method of suppressing the internal pressure. The internal pressure of the 3S-TRISO fuel at 500 GWd/t is calculated to be lower than 60 MPa, which prevents fuel failure, and hence the feasibility of the 3S-TRISO fuel was also confirmed. Additionally, whole-core burn-up calculations showed that fuel shuffling in the radial direction allows a high burn-up of around 500 GWd/t to be achieved, and that the temperature coefficient of reactivity remains negative at rated power throughout the operation period.
10

Miao, Zhuang, Fangxiaozhi Yu, and Zhao Xu. "Research on Signal Reconstruction Technology for Nuclear Power Plant Based on Self-Associative Kernel Regression." In 2022 29th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/icone29-89258.

Full text
Abstract:
The instrument control system of a nuclear power plant is an important part of ensuring its safe and reliable operation. The sensor is the core component with which the instrument control system monitors the operating status of the nuclear power plant and triggers the operation of the instruments; therefore, sensor availability plays an important role in the safe operation of the nuclear power plant. Virtual sensor technology based on self-associative kernel regression can accurately identify sensor signal faults, such as instrument drift and missing instrument values, and it can replace the faulty sensor with reliable and accurate values, ensuring normal operation of the system for a short period of time. In addition, the output value of the virtual sensor can also be used to calculate the measured value at locations where no sensor is mounted. This paper introduces a method to replace anomalous measured sensor values with correct values; it concerns only the replacement of sensor values and assumes that sensor faults can be detected correctly and in a timely manner. The paper selects the main steam system of a pressurized-water-reactor nuclear power plant as an example to verify the feasibility and reliability of virtual sensor technology based on self-associative kernel regression. The research results show that it is basically feasible to implement the virtual sensor technology using the related technical solutions proposed in this paper.
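Self-associative (auto-associative) kernel regression reconstructs a full sensor vector as a kernel-weighted average of healthy historical observations, so a drifting channel is pulled back toward a value consistent with the other channels. A minimal sketch with an illustrative memory matrix and bandwidth:

```python
# Auto-associative kernel regression: reconstruct a sensor vector from memory.
import numpy as np

def aakr(memory, query, bandwidth=1.0):
    """Kernel-weighted average of memory vectors, weighted by distance to query."""
    d = np.linalg.norm(memory - query, axis=1)
    w = np.exp(-(d**2) / (2 * bandwidth**2))
    return w @ memory / w.sum()

# Memory of healthy states: [steam pressure, steam flow, temperature], scaled.
memory = np.array([[1.00, 0.98, 1.02],
                   [1.01, 1.00, 1.00],
                   [0.99, 1.01, 0.99]])

faulty = np.array([1.00, 1.35, 1.01])     # channel 2 has drifted upward
print(np.round(aakr(memory, faulty), 3))  # channel 2 restored near healthy band
```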