Journal articles on the topic 'Code-based masking'

Consult the top 50 journal articles for your research on the topic 'Code-based masking.'

1

Xiao, Yisheng, Ruiyang Xu, Lijun Wu, Juntao Li, Tao Qin, Tie-Yan Liu, and Min Zhang. "AMOM: Adaptive Masking over Masking for Conditional Masked Language Model." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13789–97. http://dx.doi.org/10.1609/aaai.v37i11.26615.

Abstract:
Transformer-based autoregressive (AR) methods have achieved appealing performance for varied sequence-to-sequence generation tasks, e.g., neural machine translation, summarization, and code generation, but suffer from low inference efficiency. To speed up the inference stage, many non-autoregressive (NAR) strategies have been proposed in the past few years. Among them, the conditional masked language model (CMLM) is one of the most versatile frameworks, as it can support many different sequence generation scenarios and achieve very competitive performance on these tasks. In this paper, we further introduce a simple yet effective adaptive masking over masking strategy to enhance the refinement capability of the decoder and make the encoder optimization easier. Experiments on 3 different tasks (neural machine translation, summarization, and code generation) with 15 datasets in total confirm that our proposed simple method achieves significant performance improvement over the strong CMLM model. Surprisingly, our proposed model yields state-of-the-art performance on neural machine translation (34.62 BLEU on WMT16 EN to RO, 34.82 BLEU on WMT16 RO to EN, and 34.84 BLEU on IWSLT DE to EN) and even better performance than the AR Transformer on 7 benchmark datasets with at least 2.2x speedup. Our code is available on GitHub.
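
The refinement loop that CMLM-style decoders rely on is easy to picture in code. The sketch below shows the generic mask-predict re-masking step, where the least-confident positions are masked again for the next decoding iteration; it is a minimal illustration of the idea AMOM adapts, not the authors' exact algorithm, and the confidence values are assumed inputs.

```python
import numpy as np

def remask_lowest_confidence(token_probs, ratio):
    """Re-mask the least-confident fraction of positions for the next
    CMLM refinement pass (generic mask-predict step, not AMOM itself).

    token_probs: per-position confidence of the current predictions.
    ratio: fraction of positions to re-mask; an adaptive scheme would
           vary this with the quality of the current hypothesis.
    """
    n_mask = max(1, int(len(token_probs) * ratio))
    worst = np.argsort(token_probs)[:n_mask]  # least confident first
    mask = np.zeros(len(token_probs), dtype=bool)
    mask[worst] = True
    return mask

# Example: re-mask the weakest 25% of an 8-token hypothesis.
probs = np.array([0.99, 0.42, 0.91, 0.88, 0.35, 0.97, 0.80, 0.76])
print(remask_lowest_confidence(probs, 0.25))  # True at positions 1 and 4
```
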
2

Carlet, Claude, Abderrahman Daif, Sylvain Guilley, and Cédric Tavernier. "Quasi-linear masking against SCA and FIA, with cost amortization." IACR Transactions on Cryptographic Hardware and Embedded Systems 2024, no. 1 (December 4, 2023): 398–432. http://dx.doi.org/10.46586/tches.v2024.i1.398-432.

Abstract:
The implementation of cryptographic algorithms must be protected against physical attacks. Side-channel and fault injection analyses are two prominent such implementation-level attacks. Protections against either do exist. Against side-channel attacks, they are characterized by SNI security orders: the higher the order, the more difficult the attack. In this paper, we leverage the fast discrete Fourier transform to reduce the complexity of high-order masking. The security paradigm is that of code-based masking. Coding theory is amenable both to masking material at a prescribed order, by mixing the information, and to detecting and/or correcting errors purposely injected by an attacker. For the first time, we show that quasi-linear masking (pioneered by Goudarzi, Joux and Rivain at ASIACRYPT 2018) can be achieved alongside cost amortisation. This technique consists in masking several symbols/bytes with the same masking material, therefore improving the efficiency of the masking. We provide a security proof, leveraging both coding and probing security arguments. Regarding fault detection, our masking is capable of detecting up to d faults, where 2d + 1 is the length of the code, at any place of the algorithm, including within gadgets. In addition to the theory, which makes use of the Frobenius Additive Fast Fourier Transform, we show performance results, in a C language implementation, which confirm in practice that the complexity is quasi-linear in the code length.
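
As background to the paradigm, the simplest instance of masking splits every sensitive byte into random shares; code-based masking generalizes this by viewing the shares as an encoding under a linear code. A minimal sketch of plain Boolean (XOR) sharing, the degenerate special case, follows; it is illustrative background, not the paper's quasi-linear construction.

```python
import secrets
from functools import reduce

def mask_byte(x, order):
    """Split a sensitive byte into order + 1 Boolean (XOR) shares.
    Any subset of `order` shares is statistically independent of x;
    code-based masking generalizes this by treating the shares as an
    encoding of x under a linear code."""
    randoms = [secrets.randbelow(256) for _ in range(order)]
    last = x ^ reduce(lambda a, b: a ^ b, randoms, 0)
    return randoms + [last]

def unmask(shares):
    """Recombine the shares by XOR-ing them together."""
    return reduce(lambda a, b: a ^ b, shares, 0)

x = 0xA7
shares = mask_byte(x, order=3)    # 4 shares for 3rd-order security
assert unmask(shares) == x
print(shares)
```
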
3

Levina, Alla, and Gleb Ryaskin. "Robust Code Constructions Based on Bent Functions and Spline Wavelet Decomposition." Mathematics 10, no. 18 (September 12, 2022): 3305. http://dx.doi.org/10.3390/math10183305.

Abstract:
The paper investigates new robust code constructions based on bent functions and spline-wavelet transformation. Implementation of bent functions in code construction increases the probability of error detection in the data channel and cryptographic devices. Meanwhile, the use of spline-wavelet theory for constructing the codes makes it possible to increase system security against the actions of an attacker. The presented constructions combine spline-wavelet functions and bent functions. Compared to existing ones, the developed robust codes have a higher value of the maximum error masking probability parameter. The illustrated codes ensure the security of transmitted information. Some of the presented constructions were implemented on an FPGA.
4

Wang, Weijia, Yu Yu, and Francois-Xavier Standaert. "Provable Order Amplification for Code-Based Masking: How to Avoid Non-Linear Leakages Due to Masked Operations." IEEE Transactions on Information Forensics and Security 14, no. 11 (November 2019): 3069–82. http://dx.doi.org/10.1109/tifs.2019.2912549.

5

Goy, Guillaume, Julien Maillard, Philippe Gaborit, and Antoine Loiseau. "Single trace HQC shared key recovery with SASCA." IACR Transactions on Cryptographic Hardware and Embedded Systems 2024, no. 2 (March 12, 2024): 64–87. http://dx.doi.org/10.46586/tches.v2024.i2.64-87.

Abstract:
This paper presents practicable single-trace attacks against the Hamming Quasi-Cyclic (HQC) Key Encapsulation Mechanism. These attacks are the first Soft Analytical Side-Channel Attacks (SASCA) against code-based cryptography. We mount SASCA based on Belief Propagation (BP) on several steps of HQC's decapsulation process. Firstly, we target the Reed-Solomon (RS) decoder involved in HQC's publicly known code. We perform simulated attacks under a Hamming weight leakage model and reach excellent accuracies (above 0.9) up to a high noise level (σ = 3), thanks to a re-decoding strategy. In a real attack scenario, on an STM32F407, this attack leads to a perfect success rate. Secondly, we conduct an analogous attack against the RS encoder used during the re-encryption step required by the Fujisaki-Okamoto-like transform. Both in simulation and in practice, the results are satisfactory and this attack represents a threat to the security of HQC. Finally, we analyze the strength of countermeasures based on masking and shuffling strategies. In line with previous SASCA literature targeting Kyber, we show that masking HQC is a limited countermeasure against BP attacks, as are shuffling countermeasures adapted from Kyber. We evaluate the “full shuffling” strategy, which thwarts our attack by introducing sufficient combinatorial complexity. Lastly, we highlight the difficulty of protecting the current RS encoder with a shuffling strategy. A possible countermeasure would be to consider another encoding algorithm for the scheme to support full shuffling. Since the encoding subroutine is only a small part of the implementation, this would come at a small cost.
6

Yao, Xincheng, Chongyang Zhang, Ruoqi Li, Jun Sun, and Zhenyu Liu. "One-for-All: Proposal Masked Cross-Class Anomaly Detection." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4792–800. http://dx.doi.org/10.1609/aaai.v37i4.25604.

Abstract:
One of the biggest challenges for anomaly detection (AD) is how to learn one unified and generalizable model that adapts to multi-class and especially cross-class settings: the model is trained with normal samples from seen classes with the objective of detecting anomalies from both seen and unseen classes. In this work, we propose a novel Proposal Masked Anomaly Detection (PMAD) approach for such challenging multi- and cross-class anomaly detection. The proposed PMAD can be adapted to seen and unseen classes by two key designs: MAE-based patch-level reconstruction and prototype-guided proposal masking. First, motivated by MAE (Masked AutoEncoder), we develop a patch-level reconstruction model rather than the image-level reconstruction adopted in most AD methods, for this reason: the masked patches in unseen classes can be reconstructed well by using the visible patches and the adaptive reconstruction capability of MAE. Moreover, we improve MAE with a ViT encoder-decoder architecture, combinational masking, and visual tokens as reconstruction objectives to make it more suitable for anomaly detection. Second, we develop a two-stage anomaly detection procedure for inference. In the proposal masking stage, the prototype-guided proposal masking module is utilized to generate proposals for suspicious anomalies as much as possible; masked patches can then be generated from the proposal regions. By masking the most likely anomalous patches, the “shortcut reconstruction” issue (i.e., anomalous regions can be well reconstructed) can be mostly avoided. In the reconstruction stage, these masked patches are then reconstructed by the trained patch-level reconstruction model to determine if they are anomalies. Extensive experiments show that the proposed PMAD can outperform current state-of-the-art models significantly under the multi- and especially cross-class settings. Code will be publicly available at https://github.com/xcyao00/PMAD.
7

Burke, Colin J., Patrick D. Aleo, Yu-Ching Chen, Xin Liu, John R. Peterson, Glenn H. Sembroski, and Joshua Yao-Yu Lin. "Deblending and classifying astronomical sources with Mask R-CNN deep learning." Monthly Notices of the Royal Astronomical Society 490, no. 3 (October 10, 2019): 3952–65. http://dx.doi.org/10.1093/mnras/stz2845.

Abstract:
We apply a new deep learning technique to detect, classify, and deblend sources in multiband astronomical images. We train and evaluate the performance of an artificial neural network built on the Mask Region-based Convolutional Neural Network image processing framework, a general code for efficient object detection, classification, and instance segmentation. After evaluating the performance of our network against simulated ground truth images for star and galaxy classes, we find a precision of 92 per cent at 80 per cent recall for stars and a precision of 98 per cent at 80 per cent recall for galaxies in a typical field with ∼30 galaxies arcmin⁻². We investigate the deblending capability of our code, and find that clean deblends are handled robustly during object masking, even for significantly blended sources. This technique, or extensions using similar network architectures, may be applied to current and future deep imaging surveys such as the Large Synoptic Survey Telescope and the Wide-Field Infrared Survey Telescope. Our code, astro r-cnn, is publicly available at https://github.com/burke86/astro_rcnn.
8

Zhu, Fengmin, Michael Sammler, Rodolphe Lepigre, Derek Dreyer, and Deepak Garg. "BFF: foundational and automated verification of bitfield-manipulating programs." Proceedings of the ACM on Programming Languages 6, OOPSLA2 (October 31, 2022): 1613–38. http://dx.doi.org/10.1145/3563345.

Abstract:
Low-level systems code often needs to interact with data, such as page table entries or network packet headers, in which multiple pieces of information are packaged together as bitfield components of a single machine integer and accessed via bitfield manipulations (e.g., shifts and masking). Most existing approaches to verifying such code employ SMT solvers, instantiated with theories for bit vector reasoning: these provide a powerful hammer, but also significantly increase the trusted computing base of the verification toolchain. In this work, we propose an alternative approach to the verification of bitfield-manipulating systems code, which we call BFF. Building on the RefinedC framework, BFF is not only highly automated (as SMT-based approaches are) but also foundational, i.e., it produces a machine-checked proof of program correctness against a formal semantics for C programs, fully mechanized in Coq. Unlike SMT-based approaches, we do not try to solve the general problem of arbitrary bit vector reasoning, but rather observe that real systems code typically accesses bitfields using simple, well-understood programming patterns: the layout of a bit vector is known up front, and its bitfields are accessed in predictable ways through a handful of bitwise operations involving bit masks. Correspondingly, we center our approach around the concept of a structured bit vector, i.e., a bit vector with a known bitfield layout, which we use to drive simple and predictable automation. We validate the BFF approach by verifying a range of bitfield-manipulating C functions drawn from real systems code, including page table manipulation code from the Linux kernel and the pKVM hypervisor.
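
The shift-and-mask access patterns BFF targets follow a small, predictable recipe. The sketch below shows the two basic operations on a word with a known layout; the page-table-entry-style field positions are hypothetical, chosen only for illustration.

```python
def get_field(word, shift, width):
    """Extract a bitfield: shift it down, then mask off the high bits."""
    return (word >> shift) & ((1 << width) - 1)

def set_field(word, shift, width, value):
    """Write a bitfield: clear the old bits with a mask, then OR in the
    new value shifted into place."""
    mask = ((1 << width) - 1) << shift
    return (word & ~mask) | ((value << shift) & mask)

# Hypothetical page-table-entry-like layout, for illustration only:
# bit 0 = present flag, bits 12..63 = physical frame number.
pte = 0
pte = set_field(pte, shift=0, width=1, value=1)
pte = set_field(pte, shift=12, width=52, value=0x12345)
assert get_field(pte, 0, 1) == 1 and get_field(pte, 12, 52) == 0x12345
```
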
9

Chen, Ying, Rebekah Wu, James Felton, David M. Rocke, and Anu Chakicherla. "A Method to Detect Differential Gene Expression in Cross-Species Hybridization Experiments at Gene and Probe Level." Biomedical Informatics Insights 3 (January 2010): BII.S3846. http://dx.doi.org/10.4137/bii.s3846.

Abstract:
Motivation: Whole genome microarrays are increasingly becoming the method of choice to study responses in model organisms to disease, stressors or other stimuli. However, whole genome sequences are available for only some model organisms, and there are still many species whose genome sequences are not yet available. Cross-species studies, where arrays developed for one species are used to study gene expression in a closely related species, have been used to address this gap, with some promising results. Current analytical methods have included filtration of some probes or genes that showed low hybridization activities. But consensus filtration schemes are still not available.

Results: A novel masking procedure based on currently available target-species sequences is proposed to filter out probes, and a cross-species data set is studied using this masking procedure and gene-set analysis. Gene-set analysis evaluates the association of some a priori defined gene groups with a phenotype of interest. Two methods, Gene Set Enrichment Analysis (GSEA) and Test of Test Statistics (ToTS), were investigated. The results showed that the masking procedure together with the ToTS method worked well on our data set. The results from an alternative way to study cross-species hybridization experiments, without masking, are also presented. We hypothesize that the multi-probe structure of Affymetrix microarrays makes it possible to aggregate the effects of both well-hybridized and poorly-hybridized probes to study a group of genes. The principles of gene-set analysis were applied to the probe-level data instead of gene-level data. The results showed that ToTS can give valuable information and thus can be used as a powerful technique for analyzing cross-species hybridization experiments.

Availability: Software in the form of R code is available at http://anson.ucdavis.edu/~ychen/cross-species.html

Supplementary Data: Supplementary data are available at http://anson.ucdavis.edu/~ychen/cross-species.html
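
The masking idea here filters out probes whose sequences are not supported by the target species' available sequence data. A toy sketch of that filtering step follows, assuming a hypothetical probe-id-to-sequence input format; the authors' actual procedure is more involved.

```python
def mask_probes(probes, target_sequences):
    """Keep only probes whose sequence occurs exactly in the available
    target-species sequence data (toy version of sequence-based probe
    masking; real procedures also score partial matches).

    probes: dict mapping probe id -> probe sequence (hypothetical format).
    target_sequences: list of available target-species sequence strings.
    """
    return {pid: seq for pid, seq in probes.items()
            if any(seq in target for target in target_sequences)}

probes = {"p1": "ACGTACGTACGT", "p2": "TTTTGGGGCCCC"}
targets = ["GGACGTACGTACGTAA"]          # toy target-species contig
print(mask_probes(probes, targets))     # only p1 survives the mask
```
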
10

Harrington, J. Patrick. "Polarized Continuum Radiation from Stellar Atmospheres." Proceedings of the International Astronomical Union 10, S305 (December 2014): 395–400. http://dx.doi.org/10.1017/s1743921315005116.

Abstract:
Continuum scattering by free electrons can be significant in early-type stars, while in late-type stars Rayleigh scattering by hydrogen atoms or molecules may be important. Computer programs used to construct models of stellar atmospheres generally treat the scattering of the continuum radiation as isotropic and unpolarized, but this scattering has a dipole angular dependence and will produce polarization. We review an accurate method for evaluating the polarization and limb darkening of the radiation from model stellar atmospheres. We use this method to obtain results for: (i) late-type stars, based on the MARCS code models (Gustafsson et al. 2008), and (ii) early-type stars, based on the NLTE code TLUSTY (Lanz and Hubeny 2003). These results are tabulated at http://www.astro.umd.edu/~jph/Stellar_Polarization.html. While the net polarization vanishes for an unresolved spherical star, this symmetry is broken by rapid rotation or by the masking of part of the star by a binary companion or during the transit of an exoplanet. We give some numerical results for these last cases.
11

Mak, Lesley, and Pooya Taheri. "An Automated Tool for Upgrading Fortran Codes." Software 1, no. 3 (August 13, 2022): 299–315. http://dx.doi.org/10.3390/software1030014.

Abstract:
With archaic coding techniques still in use, there will come a time when it is necessary to modernize vulnerable software. However, redeveloping out-of-date code can be a time-consuming task when dealing with a multitude of files. To reduce the amount of reassembly for Fortran-based projects, in this paper we develop a prototype for automating the manual labor of refactoring individual files. The ForDADT (Fortran Dynamic Autonomous Diagnostic Tool) project is a Python program designed to reduce the amount of refactoring necessary when compiling Fortran files. In this paper, we demonstrate how ForDADT is used to automate the process of upgrading Fortran codes, process the files, and automate the cleaning of compilation errors. The developed tool automatically updates thousands of files and builds the software to find and fix the errors using pattern matching and data masking algorithms. These modifications address the concerns of code readability, type safety, portability, and adherence to modern programming practices.
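
A pattern-matching rewrite pass of the kind described can be pictured as an ordered list of regex substitutions applied line by line. The rules below are invented examples of common Fortran modernizations, not ForDADT's actual rule set.

```python
import re

# Ordered regex substitutions that modernize a few archaic Fortran
# idioms; illustrative rules only, not ForDADT's actual patterns.
RULES = [
    (re.compile(r"REAL\*8", re.IGNORECASE), "real(kind=8)"),
    (re.compile(r"\.EQ\.", re.IGNORECASE), "=="),
    (re.compile(r"\.NE\.", re.IGNORECASE), "/="),
]

def upgrade_line(line):
    """Apply every rewrite rule to one source line."""
    for pattern, replacement in RULES:
        line = pattern.sub(replacement, line)
    return line

print(upgrade_line("      REAL*8 X, Y"))
print(upgrade_line("      IF (X .EQ. Y) GOTO 10"))
```
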
12

Liva, Gianluigi, Shumei Song, Lan Lan, Yifei Zhang, Shu Lin, and William E. Ryan. "Design of LDPC Codes: A Survey and New Results." Journal of Communications Software and Systems 2, no. 3 (April 5, 2017): 191. http://dx.doi.org/10.24138/jcomss.v2i3.283.

Abstract:
This survey paper provides fundamentals in the design of LDPC codes. To provide a target for the code designer, we first summarize the EXIT chart technique for determining (near-)optimal degree distributions for LDPC code ensembles. We also demonstrate the simplicity of representing codes by protographs and how this naturally leads to quasi-cyclic LDPC codes. The EXIT chart technique is then extended to the special case of protograph-based LDPC codes. Next, we present several design approaches for LDPC codes which incorporate one or more accumulators, including quasi-cyclic accumulator-based codes. The second half of the paper then surveys several algebraic LDPC code design techniques. First, codes based on finite geometries are discussed, and then codes whose designs are based on Reed-Solomon codes are covered. The algebraic designs lead to cyclic, quasi-cyclic, and structured codes. The masking technique for converting regular quasi-cyclic LDPC codes to irregular codes is also presented. Some of these results and codes have not been presented elsewhere. The paper focuses on the binary-input AWGN channel (BI-AWGNC). However, as discussed in the paper, good BI-AWGNC codes tend to be universally good across many channels. Alternatively, the reader may treat this paper as a starting point for extensions to more advanced channels. The paper concludes with a brief discussion of open problems.
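
The masking technique mentioned here has a compact matrix form: a regular base (exponent) matrix is multiplied elementwise by a 0/1 masking matrix, zeroing out selected circulant blocks so the resulting code becomes irregular. A toy illustration, with made-up matrices, is sketched below.

```python
import numpy as np

# Toy illustration of masking for QC-LDPC design: zero out selected
# circulant blocks of a regular base matrix B with a 0/1 mask Z so the
# masked code becomes irregular. Matrices are made up for illustration.
B = np.ones((3, 6), dtype=int)           # regular base: every block used
Z = np.array([[1, 1, 1, 1, 0, 1],
              [0, 1, 1, 1, 1, 1],
              [1, 1, 0, 1, 1, 0]])       # masking matrix
masked = B * Z                            # elementwise product
print("column degrees:", masked.sum(axis=0))  # degrees now vary: irregular
```
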
13

Liu, Yongxu, Yinghui Quan, Guoyao Xiao, Aobo Li, and Jinjian Wu. "Scaling and Masking: A New Paradigm of Data Sampling for Image and Video Quality Assessment." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (March 24, 2024): 3792–801. http://dx.doi.org/10.1609/aaai.v38i4.28170.

Abstract:
Quality assessment of images and videos emphasizes both local details and global semantics, whereas general data sampling methods (e.g., resizing, cropping or grid-based fragments) fail to capture them simultaneously. To address the deficiency, current approaches have to adopt multi-branch models and take as input the multi-resolution data, which burdens the model complexity. In this work, instead of stacking up models, a more elegant data sampling method (named SAMA, for scaling and masking) is explored, which compacts both the local and global content in a regular input size. The basic idea is to scale the data into a pyramid first, and reduce the pyramid into a regular data dimension with a masking strategy. Benefiting from the spatial and temporal redundancy in images and videos, the processed data maintains the multi-scale characteristics with a regular input size, and thus can be processed by a single-branch model. We verify the sampling method in image and video quality assessment. Experiments show that our sampling method can improve the performance of current single-branch models significantly, and achieves competitive performance to the multi-branch models without extra model complexity. The source code will be available at https://github.com/Sissuire/SAMA.
14

Li, Daixun, Weiying Xie, Jiaqing Zhang, and Yunsong Li. "MDFL: Multi-Domain Diffusion-Driven Feature Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8653–60. http://dx.doi.org/10.1609/aaai.v38i8.28710.

Abstract:
High-dimensional images, known for their rich semantic information, are widely applied in remote sensing and other fields. The spatial information in these images reflects the object's texture features, while the spectral information reveals the potential spectral representations across different bands. Currently, the understanding of high-dimensional images remains limited to a single-domain perspective with performance degradation. Motivated by the masking texture effect observed in the human visual system, we present a multi-domain diffusion-driven feature learning network (MDFL), a scheme to redefine the effective information domain that the model really focuses on. This method employs diffusion-based posterior sampling to explicitly consider joint information interactions between the high-dimensional manifold structures in the spectral, spatial, and frequency domains, thereby eliminating the influence of masking texture effects in visual models. Additionally, we introduce a feature reuse mechanism to gather deep and raw features of high-dimensional data. We demonstrate that MDFL significantly improves the feature extraction performance of high-dimensional data, thereby providing a powerful aid for revealing the intrinsic patterns and structures of such data. The experimental results on three multi-modal remote sensing datasets show that MDFL reaches an average overall accuracy of 98.25%, outperforming various state-of-the-art baseline schemes. Code is available at https://github.com/LDXDU/MDFL-AAAI-24.
15

Vakulenko, S. P., N. K. Volosova, and D. F. Pastukhov. "METHODS OF QR CODE TRANSMISSION IN COMPUTER STEGANOGRAPHY." World of Transport and Transportation 16, no. 5 (October 28, 2018): 14–25. http://dx.doi.org/10.30932/1992-3252-2018-16-5-2.

Abstract:
The article deals with the problems of using steganography methods for transmitting data. The authors justify their approach by arguing that while the importance of cryptography for the transfer of hidden data is obvious, encrypted information carries a potential threat in the mere existence of an encrypted message, that is, an indication that cryptanalysis should be applied. With the help of steganography, mathematical methods are constructed that remove this telltale indication from the carrier. The article proposes to transmit data encoded into a QR code, masking it beforehand. In particular, for reliable restoration of the «original», it is proposed to solve the same problem by different methods through iteration, using linear systems, the Radon transform, and the boundary problem for the Poisson equation, and, beyond this, methods based on probability theory. The advantages of this option include the possibility of using, with slight modification, modern computer tools and software developed in the field of various types of tomography and in mathematical physics.

Keywords: cryptography, steganography, quick response code, information security, Radon transform, Poisson equation, iterative methods.
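
For context on the QR-code side, masking in the QR standard (ISO/IEC 18004) XORs the data modules with one of eight fixed patterns to avoid problematic module arrangements; pattern 0 flips modules where (row + column) is even. A minimal sketch follows; unlike a real encoder, it applies the mask to the whole matrix instead of skipping the function patterns.

```python
import numpy as np

def apply_qr_mask0(modules):
    """XOR a module matrix with QR mask pattern 0, which flips every
    module where (row + col) is even. A real encoder would skip the
    function patterns (finders, timing, format info); this toy version
    masks the entire matrix for clarity."""
    rows, cols = np.indices(modules.shape)
    return modules ^ ((rows + cols) % 2 == 0).astype(int)

grid = np.zeros((5, 5), dtype=int)        # toy all-zero data modules
print(apply_qr_mask0(grid))               # checkerboard of flipped modules
```
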
16

Semenova, Elizaveta, Maria Luisa Guerriero, Bairu Zhang, Andreas Hock, Philip Hopcroft, Ganesh Kadamur, Avid M. Afzal, and Stanley E. Lazic. "Flexible Fitting of PROTAC Concentration–Response Curves with Changepoint Gaussian Processes." SLAS DISCOVERY: Advancing the Science of Drug Discovery 26, no. 9 (September 20, 2021): 1212–24. http://dx.doi.org/10.1177/24725552211028142.

Abstract:
A proteolysis-targeting chimera (PROTAC) is a new technology that marks proteins for degradation in a highly specific manner. During screening, PROTAC compounds are tested in concentration–response (CR) assays to determine their potency, and parameters such as the half-maximal degradation concentration (DC50) are estimated from the fitted CR curves. These parameters are used to rank compounds, with lower DC50 values indicating greater potency. However, PROTAC data often exhibit biphasic and polyphasic relationships, making standard sigmoidal CR models inappropriate. A common solution includes manual omitting of points (the so-called masking step), allowing standard models to be used on the reduced data sets. Due to its manual and subjective nature, masking becomes a costly and nonreproducible procedure. We therefore used a Bayesian changepoint Gaussian processes model that can flexibly fit both nonsigmoidal and sigmoidal CR curves without user input. Parameters such as the DC50, maximum effect Dmax, and point of departure (PoD) are estimated from the fitted curves. We then rank compounds based on one or more parameters and propagate the parameter uncertainty into the rankings, enabling us to confidently state if one compound is better than another. Hence, we used a flexible and automated procedure for PROTAC screening experiments. By minimizing subjective decisions, our approach reduces time and cost and ensures reproducibility of the compound-ranking procedure. The code and data are provided on GitHub ( https://github.com/elizavetasemenova/gp_concentration_response ).
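
For comparison with the changepoint-GP approach, the standard sigmoidal fit it replaces is a four-parameter logistic (Hill) curve, with the DC50 appearing directly as a parameter. A minimal sketch on synthetic data, using scipy, is below; it is the conventional baseline, not the authors' Bayesian model.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, dc50, slope):
    """Four-parameter logistic concentration-response curve; the
    response falls from `top` toward `bottom` as concentration grows,
    and `dc50` is the half-maximal degradation concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / dc50) ** slope)

conc = np.logspace(-3, 2, 10)                         # synthetic doses
resp = hill(conc, 10.0, 100.0, 0.5, 1.2)              # true DC50 = 0.5
resp += np.random.default_rng(0).normal(0, 2, conc.size)  # assay noise

popt, _ = curve_fit(hill, conc, resp, p0=[0.0, 100.0, 1.0, 1.0])
print(f"estimated DC50 ~ {popt[2]:.2f}")
```
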
17

Kurtin, Danielle L., Daniel A. J. Parsons, and Scott M. Stagg. "VTES: a stochastic Python-based tool to simulate viral transmission." F1000Research 9 (October 5, 2020): 1198. http://dx.doi.org/10.12688/f1000research.26786.1.

Abstract:
The spread of diseases like severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in human populations involves a large number of variables, making it difficult to predict how a disease will spread across communities and populations. Reduced-representation simulations allow us to reduce the complexity of disease spread and model transmission based on a few key variables. Here we have created a Viral Transmission Education Simulator (VTES) that simulates the spread of disease through the interactions between circles representing individual people bouncing around a bounded, 2D plane. Infections are transmitted via person-to-person contact, and the course of an outbreak can be tracked over time. Using this approach, we are able to simulate the influence of variables like infectivity, population density, and social distancing on the course of an outbreak. We also describe how VTES's code can be used to calculate R0 for the simulated pandemic. VTES is useful for modeling how small changes in variables that influence disease transmission can have large effects on the outcome of an epidemic. Additionally, VTES serves as an educational tool where users can easily visualize how disease spreads and test how interventions, like masking, can influence an outbreak. VTES is designed to be simple and clear to encourage user modifications. These properties make VTES an educational tool that uses accessible, clear code and dynamic simulations to provide a richer understanding of the behaviors and factors underpinning a pandemic. VTES is available from: https://github.com/sstagg/disease-transmission.
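
One simple way to compute R0 from such a simulation is to average the number of secondary infections caused by individuals whose infectious period has ended. The sketch below assumes a hypothetical transmission-log format; it illustrates the general idea rather than VTES's own calculation.

```python
from collections import Counter

def estimate_r0(transmissions, recovered):
    """Estimate R0 as the mean number of secondary infections caused by
    individuals whose infectious period has ended.

    transmissions: list of (infector_id, infectee_id) events, a
                   hypothetical log format rather than VTES's own.
    recovered: set of ids that are no longer infectious."""
    secondary = Counter(src for src, _ in transmissions)
    counts = [secondary.get(i, 0) for i in recovered]
    return sum(counts) / len(counts) if counts else float("nan")

log = [(1, 2), (1, 3), (2, 4), (3, 5), (3, 6)]
print(estimate_r0(log, recovered={1, 2, 3}))   # (2 + 1 + 2) / 3 ~ 1.67
```
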
18

Cui, Jianming, Wenxiu Kong, Xiaojun Zhang, Da Chen, and Qingtian Zeng. "DLSTM-Based Successive Cancellation Flipping Decoder for Short Polar Codes." Entropy 23, no. 7 (July 6, 2021): 863. http://dx.doi.org/10.3390/e23070863.

Abstract:
Polar codes have been adopted as the control channel coding scheme for the fifth generation (5G) standard, and the performance of short polar codes is receiving intensive attention. The successive cancellation flipping (SC flipping) algorithm suffers a significant performance loss at short block lengths. To address this issue, we propose a double long short-term memory (DLSTM) neural network to locate the first error bit. To enhance the prediction accuracy of the DLSTM network, all frozen bits are clipped in the output layer. Then, Gaussian approximation is applied to measure the channel reliability and rank the flipping set to choose the least reliable position for multi-bit flipping. To be robust across different codewords, padding and masking strategies help the network architecture remain compatible with multiple block lengths. Numerical results indicate that the error-correction performance of the proposed algorithm is competitive with that of the CA-SCL algorithm. It has better performance than the machine learning-based multi-bit flipping SC (ML-MSCF) decoder and the dynamic SC flipping (DSCF) decoder for short polar codes.
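
The padding-and-masking idea for handling multiple block lengths with one network is generic and easy to sketch: shorter inputs are zero-padded to the network's fixed width, and a mask records which positions are real. The version below is an illustrative assumption about the data layout, not the paper's exact scheme.

```python
import numpy as np

def pad_and_mask(sequences, max_len):
    """Zero-pad variable-length inputs to one fixed network width and
    build a boolean mask marking the real (non-padded) positions, a
    generic version of the padding-and-masking idea for supporting
    several block lengths with a single network."""
    batch = np.zeros((len(sequences), max_len))
    mask = np.zeros((len(sequences), max_len), dtype=bool)
    for i, seq in enumerate(sequences):
        batch[i, :len(seq)] = seq
        mask[i, :len(seq)] = True
    return batch, mask

llrs = [np.array([1.2, -0.4, 3.1]), np.array([0.7, -2.2, 0.1, 1.9, -0.8])]
batch, mask = pad_and_mask(llrs, max_len=8)
print(batch.shape, mask.sum(axis=1))   # (2, 8) [3 5]
```
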
19

Sinex, D. G., and D. C. Havey. "Neural mechanisms of tone-on-tone masking: patterns of discharge rate and discharge synchrony related to rates of spontaneous discharge in the chinchilla auditory nerve." Journal of Neurophysiology 56, no. 6 (December 1, 1986): 1763–80. http://dx.doi.org/10.1152/jn.1986.56.6.1763.

Abstract:
Responses of chinchilla auditory nerve fibers to brief probe tones in the presence of a fixed tonal masker were obtained. The stimulus conditions were analogous to those that have been used in many psychophysical experiments. The relation between previously described response properties of auditory nerve fibers and features of psychophysical tone-on-tone masking was examined. In psychophysical studies, a fixed narrowband masker produces a characteristic pattern of masked thresholds, which becomes broad and asymmetrical at high masker levels. In the present experiment 1, a 5,000-Hz masker was presented at 30, 50, and 70 dB SPL. Masked thresholds based on the average rate of response to probe tones were estimated for single auditory nerve fibers. The lowest of these masked thresholds formed a pattern similar to the psychophysical masking pattern, becoming broader and more asymmetrical as the masker was increased to 70 dB SPL. The masked thresholds of fibers with low and medium rates of spontaneous discharge (SR) were as low as or lower than the masked thresholds of fibers with high SRs. In certain frequency regions, masked thresholds based on responses to cochlear distortion products were lower than the masked thresholds of any fiber responding to the probe tone; this result is also similar to previous psychophysical observations. In experiment 2, responses of chinchilla auditory nerve fibers to probe tones in the presence of a masker at 1,000 Hz and 50 dB SPL were studied. Probe tone thresholds in the presence of this masker have been measured psychophysically in chinchillas. Thus the relation between behavioral and neural masked thresholds in the same species could be examined. Masked thresholds were estimated from average discharge rate responses and also from discharge synchrony. Good quantitative agreement was observed between the probe tone levels at which changes in average discharge rate were observed and the chinchilla's behavioral masked thresholds. For fibers matched for characteristic frequency, the masked thresholds based on average discharge rate of high-SR fibers tended to be elevated compared with the thresholds of medium-SR fibers. Changes in discharge rate synchronized to the probe tone occurred at levels lower than the chinchilla's behavioral masked thresholds. If discharge synchrony can be used for detection, the code would appear to be based on the relative synchrony to the probe tone and to the masking tone. Low synchrony masked thresholds were obtained from fibers with all SRs.(ABSTRACT TRUNCATED AT 400 WORDS)
20

Wang, Liang, Xiang Tao, Qiang Liu, Shu Wu, and Liang Wang. "Rethinking Graph Masked Autoencoders through Alignment and Uniformity." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15528–36. http://dx.doi.org/10.1609/aaai.v38i14.29479.

Abstract:
Self-supervised learning on graphs can be bifurcated into contrastive and generative methods. Contrastive methods, also known as graph contrastive learning (GCL), have dominated graph self-supervised learning in the past few years, but the recent advent of graph masked autoencoder (GraphMAE) rekindles the momentum behind generative methods. Despite the empirical success of GraphMAE, there is still a dearth of theoretical understanding regarding its efficacy. Moreover, while both generative and contrastive methods have been shown to be effective, their connections and differences have yet to be thoroughly investigated. Therefore, we theoretically build a bridge between GraphMAE and GCL, and prove that the node-level reconstruction objective in GraphMAE implicitly performs context-level GCL. Based on our theoretical analysis, we further identify the limitations of the GraphMAE from the perspectives of alignment and uniformity, which have been considered as two key properties of high-quality representations in GCL. We point out that GraphMAE's alignment performance is restricted by the masking strategy, and the uniformity is not strictly guaranteed. To remedy the aforementioned limitations, we propose an Alignment-Uniformity enhanced Graph Masked AutoEncoder, named AUG-MAE. Specifically, we propose an easy-to-hard adversarial masking strategy to provide hard-to-align samples, which improves the alignment performance. Meanwhile, we introduce an explicit uniformity regularizer to ensure the uniformity of the learned representations. Experimental results on benchmark datasets demonstrate the superiority of our model over existing state-of-the-art methods. The code is available at: https://github.com/AzureLeon1/AUG-MAE.
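
The alignment and uniformity properties discussed here are usually quantified with the metrics of Wang and Isola (2020): mean positive-pair distance for alignment, and the log mean Gaussian potential over all pairs for uniformity. A small numpy sketch of those two standard formulas follows; it is the common definition from the contrastive learning literature, not code from AUG-MAE.

```python
import numpy as np

def alignment(x, y, alpha=2):
    """Mean distance between positive pairs; lower means the paired
    embeddings are better aligned. x, y: (n, d) L2-normalized arrays."""
    return (np.linalg.norm(x - y, axis=1) ** alpha).mean()

def uniformity(x, t=2):
    """Log of the mean Gaussian potential over all distinct pairs;
    lower means embeddings spread more uniformly on the hypersphere."""
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)  # pairwise sq dists
    off_diag = d2[~np.eye(len(x), dtype=bool)]           # drop self-pairs
    return np.log(np.exp(-t * off_diag).mean())

rng = np.random.default_rng(0)
z = rng.normal(size=(128, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)   # project to unit sphere
print(alignment(z, z), uniformity(z))           # 0.0 and a negative value
```
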
21

Wu, Cong, Xiao-Jun Wu, Josef Kittler, Tianyang Xu, Sara Ahmed, Muhammad Awais, and Zhenhua Feng. "SCD-Net: Spatiotemporal Clues Disentanglement Network for Self-Supervised Skeleton-Based Action Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 5949–57. http://dx.doi.org/10.1609/aaai.v38i6.28409.

Abstract:
Contrastive learning has achieved great success in skeleton-based action recognition. However, most existing approaches encode the skeleton sequences as entangled spatiotemporal representations and confine the contrasts to the same level of representation. Instead, this paper introduces a novel contrastive learning framework, namely Spatiotemporal Clues Disentanglement Network (SCD-Net). Specifically, we integrate the decoupling module with a feature extractor to derive explicit clues from spatial and temporal domains respectively. As for the training of SCD-Net, with a constructed global anchor, we encourage the interaction between the anchor and extracted clues. Further, we propose a new masking strategy with structural constraints to strengthen the contextual associations, leveraging the latest development from masked image modelling into the proposed SCD-Net. We conduct extensive evaluations on the NTU-RGB+D (60&120) and PKU-MMD (I&II) datasets, covering various downstream tasks such as action recognition, action retrieval, transfer learning, and semi-supervised learning. The experimental results demonstrate the effectiveness of our method, which outperforms the existing state-of-the-art (SOTA) approaches significantly. Our code and supplementary material can be found at https://github.com/cong-wu/SCD-Net.
22

Thompson, William, Jensen Lawrence, Dori Blakely, Christian Marois, Jason Wang, Mosé Giordano, Timothy Brandt, et al. "Octofitter: Fast, Flexible, and Accurate Orbit Modeling to Detect Exoplanets." Astronomical Journal 166, no. 4 (September 20, 2023): 164. http://dx.doi.org/10.3847/1538-3881/acf5cc.

Abstract:
As next-generation imaging instruments and interferometers search for planets closer to their stars, they must contend with increasing orbital motion and longer integration times. These compounding effects make it difficult to detect faint planets but also present an opportunity. Increased orbital motion makes it possible to move the search for planets into the orbital domain, where direct images can be freely combined with the radial velocity and proper motion anomaly, even without a confirmed detection in any single epoch. In this paper, we present a fast and differentiable multimethod orbit-modeling and planet detection code called Octofitter. This code is designed to be highly modular and allows users to easily adjust priors, change parameterizations, and specify arbitrary function relations between the parameters of one or more planets. Octofitter further supplies tools for examining model outputs including prior and posterior predictive checks and simulation-based calibration. We demonstrate the capabilities of Octofitter on real and simulated data from different instruments and methods, including HD 91312, simulated JWST/NIRISS aperture masking interferometry observations, radial velocity curves, and grids of images from the Gemini Planet Imager. We show that Octofitter can reliably recover faint planets in long sequences of images with arbitrary orbital motion. This publicly available tool will enable the broad application of multiepoch and multimethod exoplanet detection, which could improve how future targeted ground- and space-based surveys are performed. Finally, its rapid convergence makes it a useful addition to the existing ecosystem of tools for modeling the orbits of directly imaged planets.
23

Zhao, Zhuoran, Jinbin Bai, Delong Chen, Debang Wang, and Yubo Pan. "Taming Diffusion Models for Music-Driven Conducting Motion Generation." Proceedings of the AAAI Symposium Series 1, no. 1 (October 3, 2023): 40–44. http://dx.doi.org/10.1609/aaaiss.v1i1.27474.

Abstract:
Generating the motion of orchestral conductors from a given piece of symphony music is a challenging task, since it requires a model to learn semantic music features and capture the underlying distribution of real conducting motion. Prior works have applied Generative Adversarial Networks (GAN) to this task, but the promising diffusion model, which recently showed its advantages in terms of both training stability and output quality, has not been exploited in this context. This paper presents Diffusion-Conductor, a novel DDIM-based approach for music-driven conducting motion generation, which integrates the diffusion model into a two-stage learning framework. We further propose a random masking strategy to improve the feature robustness, and use a pair of geometric loss functions to impose additional regularizations and increase motion diversity. We also design several novel metrics, including Fréchet Gesture Distance (FGD) and Beat Consistency Score (BC), for a more comprehensive evaluation of the generated motion. Experimental results demonstrate the advantages of our model. The code is released at https://github.com/viiika/Diffusion-Conductor.
24

Zhang, Nan, Liam O’Neill, Gautam Das, Xiuzhen Cheng, and Heng Huang. "No Silver Bullet." International Journal of Healthcare Information Systems and Informatics 7, no. 4 (October 2012): 48–58. http://dx.doi.org/10.4018/jhisi.2012100104.

Abstract:
In accordance with HIPAA regulations, patients’ personal information is typically removed or generalized prior to being released as public data files. However, it is not known if the standard method of de-identification is sufficient to prevent re-identification by an intruder. The authors conducted analytical processing to identify security vulnerabilities in the protocols to de-identify hospital data. Their techniques for discovering privacy leakage utilized three disclosure channels: (1) data inter-dependency, (2) biomedical domain knowledge, and (3) suppression algorithms and partial suppression results. One state’s inpatient discharge data set was used to represent the current practice of de-identification of health care data, where a systematic approach had been employed to suppress certain elements of the patient’s record. Of the 1,098 records for which the hospital ID was suppressed, the original hospital ID was recovered for 616 records, leading to a nullification rate of 56.1%. Utilizing domain knowledge based on the patient’s Diagnosis Related Group (DRG) code, the authors recovered the real age of 64 patients, the gender of 83 male patients and 713 female patients. They also successfully identified the ZIP code of 1,219 patients. The procedure used to de-identify hospital records was found to be inadequate to prevent disclosure of patient information. As the masking procedure described was found to be reversible, this increases the risk that an intruder could use this information to re-identify individual patients.
25

Zeng, Fan-Gang, Ying-Yee Kong, Henry J. Michalewski, and Arnold Starr. "Perceptual Consequences of Disrupted Auditory Nerve Activity." Journal of Neurophysiology 93, no. 6 (June 2005): 3050–63. http://dx.doi.org/10.1152/jn.00985.2004.

Abstract:
Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysical evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique contribution of neural synchrony to sensory perception but also provide guidance for translational research in terms of better diagnosis and management of human communication disorders.
26

Nechaeva, Anastasia L. "Otherness as Basis of Artistic Chronotope: Script vs. Film." SibScript 25, no. 4 (September 28, 2023): 567–76. http://dx.doi.org/10.21603/sibscript-2023-25-4-567-576.

Abstract:
The article describes the existential category of the Other and Otherness in V. Sorokin's screenplay for the film 4, which was directed by I. Hrzhanovsky in 2004. Otherness helped V. Sorokin realize his artistic idea of post-Soviet Russia. According to Jean-Paul Sartre, the Other conveys a feeling of the last days: without a single scene of physical violence, the reader develops a feeling of nausea. The essence of the Other is based on the principles of rejection, fear, and inevitability; the Other is the only verifier of one's existence. In 4, the frightening atmosphere of the Other World is created by such artistic means as coding, dying, masking, and the deconstruction of symmetry. They highlight the artistic intent to portray the chaotic nature of being and the absence of time, where human fate is meaningless and people are reduced to functions in the game of life. The Otherness of the chronotope gives the script and the film a new code: the loss of the holy number four desacralizes existence, thus condemning the world and the people in it to a life of hopeless dismay.
27

Lin, Zhiwei, Yongtao Wang, Shengxiang Qi, Nan Dong, and Ming-Hsuan Yang. "BEV-MAE: Bird’s Eye View Masked Autoencoders for Point Cloud Pre-training in Autonomous Driving Scenarios." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (March 24, 2024): 3531–39. http://dx.doi.org/10.1609/aaai.v38i4.28141.

Abstract:
Existing LiDAR-based 3D object detection methods for autonomous driving scenarios mainly adopt the training-from-scratch paradigm. Unfortunately, this paradigm heavily relies on large-scale labeled data, whose collection can be expensive and time-consuming. Self-supervised pre-training is an effective and desirable way to alleviate this dependence on extensive annotated data. In this work, we present BEV-MAE, an efficient masked autoencoder pre-training framework for LiDAR-based 3D object detection in autonomous driving. Specifically, we propose a bird's eye view (BEV) guided masking strategy to guide the 3D encoder to learn feature representations in a BEV perspective and avoid complex decoder design during pre-training. Furthermore, we introduce a learnable point token to maintain a consistent receptive field size of the 3D encoder with fine-tuning for masked point cloud inputs. Based on the property of outdoor point clouds in autonomous driving scenarios, i.e., that the point clouds of distant objects are sparser, we propose point density prediction to enable the 3D encoder to learn location information, which is essential for object detection. Experimental results show that BEV-MAE surpasses prior state-of-the-art self-supervised methods and achieves favorable pre-training efficiency. Furthermore, based on TransFusion-L, BEV-MAE achieves new state-of-the-art LiDAR-based 3D object detection results, with 73.6 NDS and 69.6 mAP on the nuScenes benchmark. The source code will be released at https://github.com/VDIGPKU/BEV-MAE.
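
BEV-guided masking can be pictured as masking whole bird's-eye-view cells rather than individual points. The sketch below buckets points into an (x, y) grid and drops a random fraction of occupied cells; it is an illustrative assumption about the mechanism, not the BEV-MAE reference implementation, and the cell size and ratio are made-up parameters.

```python
import numpy as np

def bev_mask_points(points, cell=0.5, mask_ratio=0.7, seed=0):
    """Mask a point cloud by whole bird's-eye-view cells: bucket points
    into an (x, y) grid and drop a random fraction of the occupied
    cells, so masking follows BEV structure rather than single points.
    Illustrative sketch only, not the BEV-MAE reference code."""
    rng = np.random.default_rng(seed)
    cells = np.floor(points[:, :2] / cell).astype(int)   # BEV cell index
    uniq, inv = np.unique(cells, axis=0, return_inverse=True)
    drop = rng.random(len(uniq)) < mask_ratio            # cells to mask
    keep = ~drop[inv.ravel()]
    return points[keep], points[~keep]                   # visible, masked

pts = np.random.default_rng(1).uniform(-10, 10, size=(1000, 3))
visible, masked = bev_mask_points(pts)
print(len(visible), len(masked))    # roughly a 30/70 split of the points
```
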
28

Fan, Deng-Ping, Ziling Huang, Peng Zheng, Hong Liu, Xuebin Qin, and Luc Van Gool. "Facial-sketch Synthesis: A New Challenge." Machine Intelligence Research 19, no. 4 (July 30, 2022): 257–87. http://dx.doi.org/10.1007/s11633-022-1349-9.

Abstract:
This paper aims to conduct a comprehensive study on facial-sketch synthesis (FSS). However, due to the high cost of obtaining hand-drawn sketch datasets, there is a lack of a complete benchmark for assessing the development of FSS algorithms over the last decade. We first introduce a high-quality dataset for FSS, named FS2K, which consists of 2,104 image-sketch pairs spanning three types of sketch styles, image backgrounds, lighting conditions, skin colors, and facial attributes. FS2K differs from previous FSS datasets in difficulty, diversity, and scalability and should thus facilitate the progress of FSS research. Second, we present the largest-scale FSS investigation by reviewing 89 classic methods, including 25 handcrafted feature-based facial-sketch synthesis approaches, 29 general translation methods, and 35 image-to-sketch approaches. In addition, we elaborate comprehensive experiments on the existing 19 cutting-edge models. Third, we present a simple baseline for FSS, named FSGAN. With only two straightforward components, i.e., facial-aware masking and style-vector expansion, our FSGAN surpasses the performance of all previous state-of-the-art models on the proposed FS2K dataset by a large margin. Finally, we conclude with lessons learned over the past years and point out several unsolved challenges. Our code is available at https://github.com/DengPingFan/FSGAN.
29

Zhao, Jianwei, Qiang Zhai, Pengbo Zhao, Rui Huang, and Hong Cheng. "Co-Visual Pattern-Augmented Generative Transformer Learning for Automobile Geo-Localization." Remote Sensing 15, no. 9 (April 22, 2023): 2221. http://dx.doi.org/10.3390/rs15092221.

Abstract:
Geolocation is a fundamental component of route planning and navigation for unmanned vehicles, but GNSS-based geolocation fails under denial-of-service conditions. Cross-view geo-localization (CVGL), which aims to estimate the geographic location of the ground-level camera by matching against enormous geo-tagged aerial (e.g., satellite) images, has received a lot of attention but remains extremely challenging due to the drastic appearance differences across aerial–ground views. In existing methods, global representations of different views are extracted primarily using Siamese-like architectures, but their interactive benefits are seldom taken into account. In this paper, we present a novel approach using cross-view knowledge generative techniques in combination with transformers, namely mutual generative transformer learning (MGTL), for CVGL. Specifically, by taking the initial representations produced by the backbone network, MGTL develops two separate generative sub-modules—one for aerial-aware knowledge generation from ground-view semantics and vice versa—and fully exploits the entirely mutual benefits through the attention mechanism. Moreover, to better capture the co-visual relationships between aerial and ground views, we introduce a cascaded attention masking algorithm to further boost accuracy. Extensive experiments on challenging public benchmarks, i.e., CVACT and CVUSA, demonstrate the effectiveness of the proposed method, which sets new records compared with the existing state-of-the-art models. Our code will be available upon acceptance.
30

Chayka, Larysa. "STRUCTURE AND DYNAMICS OF VERBAL CONFLICT SITUATION (BASED ON THE ENGLISH LANGUAGE DISCOURSE)." Odessa Linguistic Journal, no. 12 (2018): 39–46. http://dx.doi.org/10.32837/2312-3192/12/6.

Abstract:
The article provides the results of an analysis of verbal conflict based on English-language dialogical discourse, highlighting problems related to its definition, structure, and dynamics. The paper discusses a series of issues concerning verbal conflict characterized by linguistic manipulation, i.e., by the use of language features and principles of their application for the purpose of hidden influence on the addressee in the direction desired by the addresser. The article offers definitions of the concepts of the verbal conflict situation, its phases and components, identifies actual and potential kinds of verbal conflict, and depicts individual images of the verbal conflict situation. The article considers communicators' ideas of themselves and of their partners in conflicting speech communication: about goals, opportunities, social characteristics, and mental state; about the environment in which the verbal conflict occurs; about the code of the communicative act; and about the communication channel through which communicative interaction is carried out. The work characterises two types of actions that form a system of counter-actions in the context of emotional states, the purpose of which is to block, directly or indirectly, the intentions of the other communicator and to achieve the ultimate goal. Direct and indirect ways out of the verbal conflict situation are characterised. Among the various manipulative communicative steps are tactics well known to psychologists and specialists in communication theory, such as: masking one's own intentions; outright misinformation of the opponent; false consent; enticement; expectation; demonstration of false and true goals (distraction of attention); bluff, etc. Escalation of the verbal conflict is treated as a clash on a subject-activity basis or on a personal basis. Several variants of the actual course of conflict speech interaction are distinguished. Finally, we offer some concluding remarks and suggestions for further investigation.
31

Lin, Yuqi, Minghao Chen, Kaipeng Zhang, Hengjia Li, Mingming Li, Zheng Yang, Dongqin Lv, Binbin Lin, Haifeng Liu, and Deng Cai. "TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP without Training." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (March 24, 2024): 3513–21. http://dx.doi.org/10.1609/aaai.v38i4.28139.

Abstract:
Contrastive Language-Image Pre-training (CLIP) has demonstrated impressive capabilities in open-vocabulary classification. The class token in the image encoder is trained to capture the global features to distinguish different text descriptions supervised by contrastive loss, making it highly effective for single-label classification. However, it shows poor performance on multi-label datasets because the global feature tends to be dominated by the most prominent class, and the contrastive nature of the softmax operation aggravates this. In this study, we observe that the multi-label classification results heavily rely on discriminative local features that are overlooked by CLIP. As a result, we dissect the preservation of patch-wise spatial information in CLIP and propose a local-to-global framework to obtain image tags. It comprises three steps: (1) patch-level classification to obtain coarse scores; (2) a dual-masking attention refinement (DMAR) module to refine the coarse scores; (3) a class-wise reidentification (CWR) module to remedy predictions from a global perspective. This framework is solely based on frozen CLIP and significantly enhances its multi-label classification performance on various benchmarks without dataset-specific training. Besides, to comprehensively assess the quality and practicality of generated tags, we extend their application to the downstream task, i.e., weakly supervised semantic segmentation (WSSS), with generated tags as image-level pseudo labels. Experiments demonstrate that this classify-then-segment paradigm dramatically outperforms other annotation-free segmentation methods and validates the effectiveness of the generated tags. Our code is available at https://github.com/linyq2117/TagCLIP.
32

Tsmots, I. G., V. M. Teslyuk, Yu V. Opotiak, and I. V. Pikh. "MODELS AND TOOLS FOR DEBUGGING AND TESTING MOBILE SYSTEMS FOR NEURO-LIKE CRYPTOGRAPHIC PROTECTION OF DATA TRANSMISSION." Ukrainian Journal of Information Technology 4, no. 2 (2022): 45–55. http://dx.doi.org/10.23939/ujit2022.02.045.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The work revealed the need to provide cryptographic protection and immunity for data transmission and control commands when using a mobile robotic platform, as well as the importance of taking into account the limitations regarding dimensions, energy consumption and performance. It was found that one of the ways to meet the requirements of cryptographic protection is the use of neuro-like networks. Their feature is the ability to pre-calculate the weight coefficients that will be used when encrypting/decrypting data. It is suggested that during neuro-like encryption/decryption of data, the key should be generated taking into account the architecture of the neuro-like network (the number of neurons, the number of inputs and their bit width), the matrix of weight coefficients and the masking table. It was determined that a neuro-like network with pre-calculated weight coefficients makes it possible to use a table-algorithmic method for data encryption/decryption, which is based on the operations of reading from memory, adding and shifting. Limitations regarding dimensions, energy consumption and performance are analyzed. They can be overcome during implementation by using a universal processor core supplemented with specialized FPGA hardware for neuro-like elements; that is, the combined use of software and specialized hardware ensures the effective implementation of neuro-like algorithms for the encryption/decryption of data and control commands. Models and tools for debugging and testing a neuro-like cryptographic system are presented. A model of the preliminary settings of the neuro-like data encryption system has been developed, the main components of which are the generator of the neuro-like network architecture, the calculator of weight coefficient matrices and the calculator of tables of macro-partial products. A model of the process of neuro-like encryption of control commands using the table-algorithmic method has been developed. Models for testing and debugging the blocks for encryption (decryption), encoding (decoding), and masking (unmasking) of data have been developed, which, through the use of reference values for comparison, improve the quality of testing and debugging of the cryptographic system. A cryptographic system was developed which, as a result of dynamic changes in the type of neuro-like network architecture and in the values of the weight coefficients, mask codes and Barker-like code, increases the crypto-resistance of data transmission. The simulation model was tested on the example of message transmission for various configurations of the cryptographic system.
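
To make the table-algorithmic idea concrete, here is a toy Python sketch of a multiply-accumulate performed at runtime only with memory reads, additions and shifts, assuming a single neuro-like element with 8-bit inputs processed nibble-wise; the function names and parameters are illustrative, not taken from the paper.

    # Toy table-algorithmic multiply-accumulate: the product w*x is
    # precomputed per 4-bit chunk of x, so the runtime path needs only
    # table reads, adds and shifts (no hardware multiplier).

    def make_tables(w):
        lo = [w * n for n in range(16)]          # w * low nibble (precomputed)
        hi = [(w * n) << 4 for n in range(16)]   # w * high nibble, pre-shifted
        return lo, hi

    def neuron_output(weights, xs):
        acc = 0
        for w, x in zip(weights, xs):
            lo, hi = make_tables(w)              # in practice: built once offline
            acc += lo[x & 0xF] + hi[x >> 4]      # read + add + shift only
        return acc

    weights = [3, -7, 11]                        # assumed pre-calculated weights
    inputs = [0x5A, 0x3C, 0xF0]
    print(neuron_output(weights, inputs))        # equals sum(w*x) with no runtime '*'
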
33

Cayuso, Juan, Richard Bloch, Selim C. Hotinli, Matthew C. Johnson, and Fiona McCarthy. "Velocity reconstruction with the cosmic microwave background and galaxy surveys." Journal of Cosmology and Astroparticle Physics 2023, no. 02 (February 1, 2023): 051. http://dx.doi.org/10.1088/1475-7516/2023/02/051.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The kinetic Sunyaev Zel'dovich (kSZ) and moving lens effects, secondary contributions to the cosmic microwave background (CMB), carry significant cosmological information due to their dependence on the large-scale peculiar velocity field. Previous work identified a promising means of extracting this cosmological information using a set of quadratic estimators for the radial and transverse components of the velocity field. These estimators are based on the statistically anisotropic components of the cross-correlation between the CMB and a tracer of large scale structure, such as a galaxy redshift survey. In this work, we assess the challenges to the program of velocity reconstruction posed by various foregrounds and systematics in the CMB and galaxy surveys, as well as biases in the quadratic estimators. To do so, we further develop the quadratic estimator formalism and implement a numerical code for computing properly correlated spectra for all the components of the CMB (primary/secondary blackbody components and foregrounds) and a photometric redshift survey, with associated redshift errors, to allow for accurate forecasting. We create a simulation framework for generating realizations of properly correlated CMB maps and redshift binned galaxy number counts, assuming the underlying fields are Gaussian, and use this to validate a velocity reconstruction pipeline and assess map-based systematics such as masking. We highlight the most significant challenges for velocity reconstruction, which include biases associated with: modelling errors, characterization of redshift errors, and coarse graining of cosmological fields on our past light cone. Despite these challenges, the outlook for velocity reconstruction is quite optimistic, and we use our reconstruction pipeline to confirm that these techniques will be feasible with near-term CMB experiments and photometric galaxy redshift surveys.
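
The Gaussian simulation framework mentioned above rests on a standard construction: per mode, properly correlated fields can be drawn by Cholesky-factorizing the covariance built from the auto- and cross-spectra. A minimal sketch of that construction for two fields follows; the values are illustrative and this is not the authors' pipeline.

    import numpy as np

    rng = np.random.default_rng(0)

    def correlated_pair(c_aa, c_bb, c_ab, n):
        """Draw n samples of two Gaussian variables with variances c_aa, c_bb
        and covariance c_ab (Cholesky of the 2x2 covariance per mode)."""
        g1, g2 = rng.standard_normal(n), rng.standard_normal(n)
        a = np.sqrt(c_aa) * g1
        b = (c_ab / np.sqrt(c_aa)) * g1 + np.sqrt(c_bb - c_ab**2 / c_aa) * g2
        return a, b

    # e.g. a CMB mode and a galaxy-count mode with 60% correlation
    a, b = correlated_pair(1.0, 2.0, 0.6 * np.sqrt(2.0), 100_000)
    print(np.cov(a, b))   # ~ [[1.0, 0.85], [0.85, 2.0]]
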
34

Konduri, Raja Rajeswari, Nagireddi Roopavathi, Balantrapu Vijaya Lakshmi, and Putrevu Venkata Krishna Chaitanya. "Mitigating Peak Sidelobe Levels in Pulse Compression Radar using Artificial Neural Networks." Indian Journal of Artificial Intelligence and Neural Networking 3, no. 6 (March 30, 2024): 12–20. http://dx.doi.org/10.54105/ijainn.f9517.03061023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this paper, Artificial Neural Networks (ANNs) are considered for obtaining low-sidelobe patterns for binary codes and thereby improving the performance of pulse compression radar. Pulse compression is a popular technique used for improving range resolution in radar systems. The paper proposes a new approach to pulse compression using several types of ANN, namely the Multi-Layer Perceptron (MLP), Recursive Neural Network (RNN), Radial Basis Function (RBF) network and Recurrent Radial Basis Function (RRBF) network, as well as a special class of feed-forward Wavelet Neural Network (WNN) with one input layer, one hidden layer and one output layer. The 13-bit Barker code and extended binary Barker codes of lengths 35, 55 and 75 were used in the implementation to improve the performance of pulse compression radar. The WNN-based networks use Morlet and sigmoid activation functions in the hidden and output layers, respectively. The performance metrics used are Peak Sidelobe Ratio (PSLR), Integrated Sidelobe Ratio (ISLR) and Signal-to-Sidelobe Ratio (SSR). Performance in terms of range and Doppler resolution is also presented. Better sidelobe reduction can be achieved with ANNs than with the autocorrelation function (ACF), i.e., the matched filter. If sidelobe levels are high, weaker return signals may be masked and detection becomes difficult. The paper establishes that the RRBF gives better results than the other ANN networks, and that the WNN gives the best performance of all, including the RRBF, in terms of sidelobe reduction in pulse compression radar.
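
For orientation, the matched-filter baseline that the ANN approaches are measured against is simply the code's autocorrelation, from which the PSLR can be read off directly. A small numpy sketch for the 13-bit Barker code, whose known PSLR is about -22.3 dB:

    import numpy as np

    # 13-bit Barker code; the matched filter output is its autocorrelation (ACF)
    barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

    acf = np.correlate(barker13, barker13, mode="full")
    peak = acf.max()                                   # mainlobe = 13
    sidelobes = np.abs(np.delete(acf, acf.argmax()))   # all other lags
    pslr_db = 20 * np.log10(sidelobes.max() / peak)
    print(peak, sidelobes.max(), round(pslr_db, 1))    # 13 1 -22.3
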
35

Marin, Maximillian, Roger Vargas, Michael Harris, Brendan Jeffrey, L. Elaine Epperson, David Durbin, Michael Strong, et al. "Benchmarking the empirical accuracy of short-read sequencing across the M. tuberculosis genome." Bioinformatics 38, no. 7 (January 10, 2022): 1781–87. http://dx.doi.org/10.1093/bioinformatics/btac023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Motivation: Short-read whole-genome sequencing (WGS) is a vital tool for clinical applications and basic research. Genetic divergence from the reference genome, repetitive sequences and sequencing bias reduce the performance of variant calling using short-read alignment, but the loss in recall and specificity has not been adequately characterized. To benchmark short-read variant calling, we used 36 diverse clinical Mycobacterium tuberculosis (Mtb) isolates dually sequenced with Illumina short reads and PacBio long reads. We systematically studied the short-read variant calling accuracy and the influence of sequence uniqueness, reference bias and GC content.
Results: Reference-based Illumina variant calling demonstrated a maximum recall of 89.0% and minimum precision of 98.5% across the parameters evaluated. The approach that maximized variant recall while still maintaining high precision (>99%) was tuning the mapping quality filtering threshold, i.e. the confidence of the read mapping (recall = 85.8%, precision = 99.1%, MQ ≥ 40). Additional masking of repetitive sequence content is an alternative conservative approach to variant calling that increases precision at a cost to recall (recall = 70.2%, precision = 99.6%, MQ ≥ 40). Of the genomic positions typically excluded for Mtb, 68% are accurately called using Illumina WGS, including 52/168 PE/PPE genes (34.5%). From these results, we present a refined list of low-confidence regions across the Mtb genome, which we found to frequently overlap with regions with structural variation, low sequence uniqueness and low sequencing coverage. Our benchmarking results have broad implications for the use of WGS in the study of Mtb biology, inference of transmission in public health surveillance systems and, more generally, for WGS applications in other organisms.
Availability and implementation: All relevant code is available at https://github.com/farhat-lab/mtb-illumina-wgs-evaluation.
Supplementary information: Supplementary data are available at Bioinformatics online.
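
The MQ-threshold tuning reported above is conceptually just a per-call filter on mapping quality. A minimal hedged sketch follows; the VCF-style record fields are assumed, and this is not the authors' pipeline from the repository linked above.

    # Minimal sketch of the MQ >= 40 filtering described above, applied to
    # VCF-style records (assumed fields, not the authors' code).

    MQ_MIN = 40  # threshold reported to maximize recall at >99% precision

    def pass_mq(record):
        """Keep a variant call only if its mapping quality meets the threshold."""
        return record.get("MQ", 0) >= MQ_MIN

    calls = [
        {"pos": 1021, "MQ": 58},
        {"pos": 4433, "MQ": 22},   # discarded: ambiguous read mapping
        {"pos": 9090, "MQ": 41},
    ]
    kept = [c for c in calls if pass_mq(c)]
    print([c["pos"] for c in kept])  # [1021, 9090]
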
36

Dreifeld, O. V. "Pragmatic and Semantic Analysis of Anti-War Protest Utterances (on the Data of Public Opinion Discourse in the Spring of 2022 in Russia)." Discourse 9, no. 6 (December 21, 2023): 116–27. http://dx.doi.org/10.32603/2412-8562-2023-9-6-116-127.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Introduction. The main objective of the article is to analyze personal anti-war utterances on protest posters in order to determine their pragmatic functions in the discourse of public opinion. The theoretical and practical significance of the article lies in the identification, description and classification of types of personal speech tactics of anti-war protest discourse, based on verbalized statements at solitary pickets in the spring of 2022 in Russia.
Methodology and sources. Content, pragmatic and semantic analyses of 455 utterances were carried out; the utterances were obtained by the continuous sampling method from the Telegram channel "OVD-Info" in the period from 02.24.2022 to 05.31.2022 and from the open media sources indicated in the open database "Illustrative Material".
Results and discussion. Anti-war discourse, as a thematic variety of political discourse, is formed in the process of public expression of criticism of, or disagreement with, military actions as a way of resolving a geopolitical conflict and/or a specific military event, as well as political decisions leading to military actions. Anti-war discourse contains a specific semiotic code, which necessarily includes the concepts of "peace" and "war", and it is realized in a particular social and historical extralinguistic context. We confirmed the important role of speech tactics that significantly transform the text of the anti-war utterance, such as graphic euphemization, semiotic euphemization, abbreviation and neutralization. They implement a complex spectrum of the speaker's intentions, masking and revealing them at the same time.
Conclusion. We found not only differences between "direct" and "hidden" messages but also their similarities: for part of the audience in the spring of 2022 in Russia, the difference was leveled by the conditions of social communication, so that any statement in the public space was assessed as "anti-war" regardless of its form and content.
37

Rognes, Torbjørn, Tomáš Flouri, Ben Nichols, Christopher Quince, and Frédéric Mahé. "VSEARCH: a versatile open source tool for metagenomics." PeerJ 4 (October 18, 2016): e2584. http://dx.doi.org/10.7717/peerj.2584.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background: VSEARCH is an open source and free of charge multithreaded 64-bit tool for processing and preparing metagenomics, genomics and population genomics nucleotide sequence data. It is designed as an alternative to the widely used USEARCH tool (Edgar, 2010), for which the source code is not publicly available, algorithm details are only rudimentarily described, and only a memory-confined 32-bit version is freely available for academic use.
Methods: When searching nucleotide sequences, VSEARCH uses a fast heuristic based on words shared by the query and target sequences in order to quickly identify similar sequences; a similar strategy is probably used in USEARCH. VSEARCH then performs optimal global sequence alignment of the query against potential target sequences, using full dynamic programming instead of the seed-and-extend heuristic used by USEARCH. Pairwise alignments are computed in parallel using vectorisation and multiple threads.
Results: VSEARCH includes most commands for analysing nucleotide sequences available in USEARCH version 7 and several of those available in USEARCH version 8, including searching (exact or based on global alignment), clustering by similarity (using length pre-sorting, abundance pre-sorting or a user-defined order), chimera detection (reference-based or de novo), dereplication (full length or prefix), pairwise alignment, reverse complementation, sorting, and subsampling. VSEARCH also includes commands for FASTQ file processing, i.e., format detection, filtering, read quality statistics, and merging of paired reads. Furthermore, VSEARCH extends functionality with several new commands and improvements, including shuffling, rereplication, masking of low-complexity sequences with the well-known DUST algorithm, a choice among different similarity definitions, and FASTQ file format conversion. VSEARCH is here shown to be more accurate than USEARCH when performing searching, clustering, chimera detection and subsampling, while on a par with USEARCH for paired-end read merging. VSEARCH is slower than USEARCH when performing clustering and chimera detection, but significantly faster when performing paired-end read merging and dereplication. VSEARCH is available at https://github.com/torognes/vsearch under either the BSD 2-clause license or the GNU General Public License version 3.0.
Discussion: VSEARCH has been shown to be a fast, accurate and full-fledged alternative to USEARCH. A free and open-source versatile tool for sequence analysis is now available to the metagenomics community.
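
The "words shared by the query and target sequences" heuristic can be sketched as a k-mer shortlisting step that precedes full dynamic-programming alignment. The toy Python version below illustrates the principle only and is not VSEARCH's implementation:

    from collections import Counter

    def kmers(seq, k=8):
        """The set of k-length 'words' occurring in a sequence."""
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def shortlist(query, targets, k=8, top=4):
        """Rank targets by the number of k-mers shared with the query; only
        the best candidates would then receive a full optimal alignment."""
        q = kmers(query, k)
        shared = Counter({name: len(q & kmers(t, k)) for name, t in targets.items()})
        return [name for name, _ in shared.most_common(top)]

    targets = {"t1": "ACGTACGTGGTTAACC", "t2": "TTTTGGGGCCCCAAAA", "t3": "ACGTACGTGGTTAACG"}
    print(shortlist("ACGTACGTGGTT", targets))   # t1 and t3 lead, t2 shares nothing
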
38

Han, Changho, Youngjae Song, Hong-Seok Lim, Yunwon Tae, Jong-Hwan Jang, Byeong Tak Lee, Yeha Lee, Woong Bae, and Dukyong Yoon. "Automated Detection of Acute Myocardial Infarction Using Asynchronous Electrocardiogram Signals—Preview of Implementing Artificial Intelligence With Multichannel Electrocardiographs Obtained From Smartwatches: Retrospective Study." Journal of Medical Internet Research 23, no. 9 (September 10, 2021): e31129. http://dx.doi.org/10.2196/31129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background: When using a smartwatch to obtain electrocardiogram (ECG) signals from multiple leads, the device has to be placed on different parts of the body sequentially. The ECG signals measured from different leads are asynchronous. Artificial intelligence (AI) models for asynchronous ECG signals have barely been explored.
Objective: We aimed to develop an AI model for detecting acute myocardial infarction using asynchronous ECGs and compare its performance with that of the automatic ECG interpretations provided by a commercial ECG analysis software. We sought to evaluate the feasibility of implementing multiple lead–based AI-enabled ECG algorithms on smartwatches. Moreover, we aimed to determine the optimal number of leads for sufficient diagnostic power.
Methods: We extracted ECGs recorded within 24 hours from each visit to the emergency room of Ajou University Medical Center between June 1994 and January 2018 from patients aged 20 years or older. The ECGs were labeled on the basis of whether a diagnostic code corresponding to acute myocardial infarction was entered. We derived asynchronous ECG lead sets from standard 12-lead ECG reports and simulated a situation similar to the sequential recording of ECG leads via smartwatches. We constructed an AI model based on residual networks and self-attention mechanisms by randomly masking each lead channel during the training phase and then testing the model using various targeting lead sets with the remaining lead channels masked.
Results: The performance of lead sets with 3 or more leads compared favorably with that of the automatic ECG interpretations provided by a commercial ECG analysis software, with 8.1%-13.9% gain in sensitivity when the specificity was matched. Our results indicate that multiple lead-based AI-enabled ECG algorithms can be implemented on smartwatches. Model performance generally increased as the number of leads increased (12-lead sets: area under the receiver operating characteristic curve [AUROC] 0.880; 4-lead sets: AUROC 0.858, SD 0.008; 3-lead sets: AUROC 0.845, SD 0.011; 2-lead sets: AUROC 0.813, SD 0.018; single-lead sets: AUROC 0.768, SD 0.001). Considering the short amount of time needed to measure additional leads, measuring at least 3 leads—ideally more than 4 leads—is necessary for minimizing the risk of failing to detect acute myocardial infarction occurring in a certain spatial location or direction.
Conclusions: By developing an AI model for detecting acute myocardial infarction with asynchronous ECG lead sets, we demonstrated the feasibility of multiple lead-based AI-enabled ECG algorithms on smartwatches for automated diagnosis of cardiac disorders. We also demonstrated the necessity of measuring at least 3 leads for accurate detection. Our results can be used as reference for the development of other AI models using sequentially measured asynchronous ECG leads via smartwatches for detecting various cardiac disorders.
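
The training trick described above, randomly masking lead channels so the model copes with whatever subset of asynchronous leads is available, can be sketched in a few lines of numpy; the shapes and masking rate are assumptions, not the authors' settings:

    import numpy as np

    rng = np.random.default_rng(0)

    def mask_leads(ecg, min_keep=1):
        """Randomly zero out lead channels of a (leads, samples) ECG so the
        model learns to diagnose from whichever subset of leads is present."""
        n_leads = ecg.shape[0]
        keep = rng.random(n_leads) > 0.5
        if keep.sum() < min_keep:                 # always keep at least one lead
            keep[rng.integers(n_leads)] = True
        return ecg * keep[:, None], keep

    ecg = rng.standard_normal((12, 5000))         # assumed: 12 leads x 5000 samples
    masked, kept = mask_leads(ecg)
    print(kept.astype(int))                       # which leads survive this step
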
39

Stratonov, Vasyl. ""Computer crimes": some features and characteristics." Naukovyy Visnyk Dnipropetrovs'kogo Derzhavnogo Universytetu Vnutrishnikh Sprav 2, no. 2 (June 3, 2020): 134–41. http://dx.doi.org/10.31733/2078-3566-2020-2-134-141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Unfortunately, along with its positive achievements, informatization also has negative manifestations, namely the possibility of using computer technology to commit crimes. The world has long been talking about "cybercrime" and "computer crime", and Chapter 16 of the Criminal Code of Ukraine deals with crimes involving the use of computers, computer systems and networks, as well as telecommunications. We can therefore state that a unified approach to the definition of the concept does not exist. However, the introduction of certain norms into the law does not by itself solve the problems; problems arise with the direct implementation of these norms in everyday life. Since "computer crimes" are transnational in nature, forces must be joined to combat such crimes. In developed countries, this type of crime leads to huge losses and to significant funds being spent on the development and implementation of software, technical and other means of protection against unauthorized access to information, its distortion or destruction. With this in mind, it is fundamentally important to study the methods of committing crimes using computers, computer systems and telecommunication networks. We therefore characterize some of the most common ways of committing computer crimes. Such crimes are characterized by the following features: the complexity of their detection and investigation, the difficulty of proving them in court, and the high damage caused even by a single crime. Based on an analysis of both theory and the results of practice, we focus primarily on individual methods of committing "computer crimes". The article reveals the content, forms and methods of committing computer crimes in the realities of today. We focus on the main methods of unauthorized acquisition of information, namely: the use of a listening device (bug); remote photography; interception of electromagnetic radiation; hoaxing (masquerading as system requests); interception of acoustic radiation and reconstruction of printer text; theft of storage media and industrial waste (garbage collection); reading data from other users' arrays; copying storage media while overcoming protection measures; masquerading as a registered user; the use of software traps; illegal connection to equipment and communication lines; and disabling of protection mechanisms. We characterize the most common methods of unauthorized acquisition of information from computer and information networks. Knowing the ways in which crimes are committed will help to prevent the commission of crimes and to take preventive measures.
40

Zhang, Liang, Jinsong Su, Zijun Min, Zhongjian Miao, Qingguo Hu, Biao Fu, Xiaodong Shi, and Yidong Chen. "Exploring Self-Distillation Based Relational Reasoning Training for Document-Level Relation Extraction." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13967–75. http://dx.doi.org/10.1609/aaai.v37i11.26635.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Document-level relation extraction (RE) aims to extract relational triples from a document. One of its primary challenges is to predict implicit relations between entities, relations that are not explicitly expressed in the document but can usually be extracted through relational reasoning. Previous methods mainly model relational reasoning implicitly through the interaction among entities or entity pairs. However, they suffer from two deficiencies: 1) they often consider only one reasoning pattern, whose coverage of relational triples is limited; 2) they do not explicitly model the process of relational reasoning. In this paper, to deal with the first problem, we propose a document-level RE model with a reasoning module that contains a core unit, the reasoning multi-head self-attention unit. This unit is a variant of conventional multi-head self-attention and utilizes four attention heads to model four common reasoning patterns, respectively, which can cover more relational triples than previous methods. Then, to address the second issue, we propose a self-distillation training framework, which contains two branches sharing parameters. In the first branch, we first randomly mask some entity-pair feature vectors in the document, and then train our reasoning module to infer their relations by exploiting the feature information of other related entity pairs. By doing so, we explicitly model the process of relational reasoning. However, because the additional masking operation is not used during testing, it causes an input gap between training and testing scenarios, which would hurt the model performance. To reduce this gap, we perform conventional supervised training without the masking operation in the second branch and utilize a Kullback-Leibler divergence loss to minimize the difference between the predictions of the two branches. Finally, we conduct comprehensive experiments on three benchmark datasets, whose results demonstrate that our model consistently outperforms all competitive baselines. Our source code is available at https://github.com/DeepLearnXMU/DocRE-SD.
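
The consistency loss between the two parameter-sharing branches can be sketched as a KL divergence between their softmax outputs. A minimal numpy illustration follows; the direction of the KL term and the shapes are assumptions, and the authors' implementation is in the repository linked above.

    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(z):
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def kl(p, q, eps=1e-9):
        """Mean KL(p || q) over rows of two distribution matrices."""
        return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1).mean()

    # One branch sees masked entity-pair features (forcing relational
    # reasoning); the other sees the full input; parameters are shared.
    logits_masked = rng.standard_normal((8, 5))   # assumed: 8 pairs, 5 relations
    logits_full = rng.standard_normal((8, 5))

    consistency = kl(softmax(logits_full), softmax(logits_masked))
    print(consistency)   # added to the supervised loss during training
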
41

Drozd, Oleksandr V., Andrzej Rucinski, Kostiantyn V. Zashcholkin, Myroslav O. Drozd, and Yulian Yu Sulima. "IMPROVING FPGA COMPONENTS OF CRITICAL SYSTEMS BASED ON NATURAL VERSION REDUNDANCY." Applied Aspects of Information Technology 4, no. 2 (June 30, 2021): 168–77. http://dx.doi.org/10.15276/aait.02.2021.4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The article is devoted to the problem of improving FPGA (Field Programmable Gate Array) components developed for safety-related systems. FPGA components are improved in the checkability of their circuits and the trustworthiness of the results calculated on them to support fault-tolerant solutions, which are basic in ensuring the functional safety of critical systems. Fault-tolerant solutions need protection from sources of multiple failures, which include hidden faults. These can accumulate in significant quantities during long normal operation and disrupt the functionality of fault-tolerant circuits at the onset of the most critical emergency mode. Protection against hidden faults is ensured by the checkability of the circuits, which is aimed at the manifestation of faults and therefore must be supported in conjunction with the trustworthiness of the results, taking into account the decrease in trustworthiness in the event of the manifestation of faults. The problem of increasing the checkability of the FPGA component in normal operation and the trustworthiness of the results calculated in emergency mode is solved by using the natural version redundancy inherent in the LUT-oriented (Look-Up Table) architecture. This redundancy manifests itself in the existence of many versions of the program code that preserve the functionality of the FPGA component with the same hardware implementation. The checkability of the FPGA component and the trustworthiness of the calculated results are considered taking into account the typical faults of the LUT-oriented architecture. These faults are investigated from the standpoint of the consistency of their manifestation and masking, respectively, in normal and emergency modes across versions of the program code. Faults are identified with bit distortions in the memory of the LUT units. Bits that are observed only in emergency mode are potentially dangerous because they can hide faults in normal mode. Moving potentially dangerous bits to checkable positions observed in normal mode is performed by choosing appropriate versions of the program code and organizing the operation of the FPGA component on several versions. Experiments carried out on an FPGA component, using the example of an iterative array multiplier of binary codes, have shown the effectiveness of using the natural version redundancy of the LUT-oriented architecture to solve the problem of hidden faults.
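
A toy model of this natural version redundancy: a 4-input LUT is a 16-bit truth table, and permuting its inputs (together with a matching rewiring) yields several program-code versions that store different memory bits yet implement the same function. The Python sketch below enumerates such versions; it illustrates the principle only and omits the paper's checkability analysis:

    from itertools import permutations

    def eval_lut(table, addr):
        """A 4-input LUT is a 16-bit truth table; evaluation is one memory read."""
        return (table >> addr) & 1

    def versions(table):
        """Program-code versions obtained by permuting the LUT's inputs: each
        version stores different memory bits yet, with its inputs rewired
        accordingly, implements exactly the same logic function."""
        out = set()
        for p in permutations(range(4)):
            t = 0
            for x in range(16):
                y = sum(((x >> i) & 1) << p[i] for i in range(4))
                t |= eval_lut(table, x) << y
            out.add(t)
        return out

    # A single-input pass-through function admits 4 distinct bit layouts
    print(sorted(hex(v) for v in versions(0xAAAA)))
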
42

Costes, Nicolas, and Martijn Stam. "Redundant Code-based Masking Revisited." IACR Transactions on Cryptographic Hardware and Embedded Systems, December 3, 2020, 426–50. http://dx.doi.org/10.46586/tches.v2021.i1.426-450.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Masking schemes are a popular countermeasure against side-channel attacks. To mask bytes, the two classical options are Boolean masking and polynomial masking. The latter lends itself to redundant masking, where leakage emanates from more shares than are strictly necessary to reconstruct, raising the obvious question of how well such “redundant” leakage can be exploited by a side-channel adversary. We revisit the recent work by Chabanne et al. (CHES’18) and show that, contrary to their conclusions, said leakage can—in theory—always be exploited. For the Hamming weight scenario in the low-noise regime, we heuristically determine how security degrades in terms of the number of redundant shares for first- and second-order secure polynomial masking schemes.
Furthermore, we leverage a well-established link between linear secret sharing schemes and coding theory to determine when different masking schemes will end up with essentially equivalent leakage profiles. Surprisingly, we conclude that for typical field sizes and security orders, Boolean masking is a special case of polynomial masking. We also identify quasi-Boolean masking schemes as a special class of redundant polynomial masking and point out that the popular “Frobenius-stable” sets of interpolation points typically lead to such quasi-Boolean masking schemes, with subsequently degraded leakage performance.
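
For readers new to the two classical options, the following minimal Python sketch contrasts Boolean masking with first-order polynomial (Shamir) masking carrying one redundant share. The AES field polynomial 0x11B is an assumed choice, and the sketch covers sharing and reconstruction only, not masked computation:

    import secrets
    from functools import reduce

    def gf_mul(a, b):
        """Multiplication in GF(2^8) modulo x^8+x^4+x^3+x+1 (assumed choice)."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= 0x11B
            b >>= 1
        return r

    def gf_inv(a):
        r = 1
        for _ in range(254):          # a^254 = a^(-1) in GF(2^8)
            r = gf_mul(r, a)
        return r

    def boolean_share(x, n):
        """Boolean masking: x = s_1 xor ... xor s_n."""
        s = [secrets.randbelow(256) for _ in range(n - 1)]
        return s + [reduce(lambda u, v: u ^ v, s, x)]

    def poly_share(x, points):
        """First-order polynomial (Shamir) masking: shares are f(a_i) for
        f(X) = x + r*X; more than two points makes the masking redundant."""
        r = secrets.randbelow(256)
        return [x ^ gf_mul(r, a) for a in points]

    def poly_reconstruct(shares, points):
        """Lagrange interpolation at 0 recovers the secret f(0)."""
        x = 0
        for i, (si, ai) in enumerate(zip(shares, points)):
            li = 1
            for j, aj in enumerate(points):
                if j != i:
                    li = gf_mul(li, gf_mul(aj, gf_inv(ai ^ aj)))
            x ^= gf_mul(si, li)
        return x

    secret = 0xA7
    pts = [1, 2, 3]                                       # one redundant share
    sh = poly_share(secret, pts)
    assert poly_reconstruct(sh[:2], pts[:2]) == secret    # any 2 shares suffice
    assert poly_reconstruct(sh, pts) == secret            # redundant view agrees
    assert reduce(lambda u, v: u ^ v, boolean_share(secret, 3)) == secret

The redundant third share is exactly the kind of extra leakage surface the paper analyzes: an adversary observing all three shares sees more than is needed to reconstruct.
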
43

Wang, Weijia, Pierrick Méaux, Gaëtan Cassiers, and François-Xavier Standaert. "Efficient and Private Computations with Code-Based Masking." IACR Transactions on Cryptographic Hardware and Embedded Systems, March 2, 2020, 128–71. http://dx.doi.org/10.46586/tches.v2020.i2.128-171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Code-based masking is a very general type of masking scheme that covers Boolean masking, inner product masking, direct sum masking, and so on. The merits of the generalization are twofold. Firstly, the higher algebraic complexity of the sharing function decreases the information leakage in “low noise conditions” and may increase the “statistical security order” of an implementation (with linear leakages). Secondly, the underlying error-correction codes can offer improved fault resistance for the encoded variables. Nevertheless, this higher algebraic complexity also implies additional challenges. On the one hand, a generic multiplication algorithm applicable to any linear code is still unknown. On the other hand, masking schemes with higher algebraic complexity usually come with implementation overheads, as for example witnessed by inner-product masking. In this paper, we contribute to these challenges in two directions. Firstly, we propose a generic algorithm that allows us (to the best of our knowledge for the first time) to compute on data shared with linear codes. Secondly, we introduce a new amortization technique that can significantly mitigate the implementation overheads of code-based masking, and illustrate this claim with a case study. Precisely, we show that, although performing every single code-based masked operation is relatively complex, processing multiple secrets in parallel leads to much better performances. This property enables code-based masked implementations of the AES to compete with the state-of-the-art in randomness complexity. Since our masked operations can be instantiated with various linear codes, we hope that these investigations open new avenues for the study of code-based masking schemes, by specializing the codes for improved performances, better side-channel security or improved fault tolerance.
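
A tiny illustration of why linear codes are convenient to compute under: any GF(2)-linear function can be applied share by share, and the result still reconstructs correctly. Non-linear operations such as field multiplication are precisely the hard part addressed by the paper's generic algorithm; the hedged sketch below covers only the easy linear case, with Boolean masking as the simplest code-based instance:

    import secrets
    from functools import reduce

    def lin(x):
        """An arbitrary GF(2)-linear map on bytes: XOR of shifted copies."""
        return (x ^ (x << 1) ^ (x >> 3)) & 0xFF

    def share(x, n=3):
        s = [secrets.randbelow(256) for _ in range(n - 1)]
        return s + [reduce(lambda u, v: u ^ v, s, x)]

    def unshare(s):
        return reduce(lambda u, v: u ^ v, s)

    x = 0x5C
    shares = share(x)
    # The linear layer never needs to recombine the shares:
    assert unshare([lin(si) for si in shares]) == lin(x)
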
44

Cheng, Wei, Sylvain Guilley, Claude Carlet, Jean-Luc Danger, and Sihem Mesnager. "Information Leakages in Code-based Masking: A Unified Quantification Approach." IACR Transactions on Cryptographic Hardware and Embedded Systems, July 9, 2021, 465–95. http://dx.doi.org/10.46586/tches.v2021.i3.465-495.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper presents a unified approach to quantifying the information leakages in the most general code-based masking schemes. Specifically, by utilizing a uniform representation, we highlight first that all code-based masking schemes’ side-channel resistance can be quantified by an all-in-one framework consisting of two easy-to-compute parameters (the dual distance and the number of conditioned codewords) from a coding-theoretic perspective. In particular, we use signal-to-noise ratio (SNR) and mutual information (MI) as two complementary metrics, where a closed-form expression of SNR and an approximation of MI are proposed by connecting both metrics to the two coding-theoretic parameters. Secondly, considering the connection between Reed-Solomon code and SSS (Shamir’s Secret Sharing) scheme, the SSS-based masking is viewed as a particular case of generalized code-based masking. Hence as a straightforward application, we evaluate the impact of public points on the side-channel security of SSS-based masking schemes, namely the polynomial masking, and enhance the SSS-based masking by choosing optimal public points for it. Interestingly, we show that given a specific security order, more shares in SSS-based masking leak more information on secrets in an information-theoretic sense. Finally, our approach provides a systematic method for optimizing the side-channel resistance of every code-based masking. More precisely, this approach enables us to select optimal linear codes (parameters) for the generalized code-based masking by choosing appropriate codes according to the two coding-theoretic parameters. Summing up, we provide a best-practice guideline for the application of code-based masking to protect cryptographic implementations.
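
The SNR metric can also be estimated empirically from simulated traces. The toy simulation below contrasts unmasked Hamming-weight leakage with first-order Boolean masking (the simplest instance of code-based masking), assuming unit Gaussian noise; it illustrates the metric itself, not the paper's closed-form expression:

    import numpy as np

    rng = np.random.default_rng(1)
    HW = np.array([bin(v).count("1") for v in range(256)])   # Hamming weights

    def snr(leakage, secret):
        """SNR = variance of the per-class means over the mean per-class variance."""
        means = np.array([leakage[secret == s].mean() for s in range(256)])
        var_s = np.array([leakage[secret == s].var() for s in range(256)])
        return means.var() / var_s.mean()

    n = 200_000
    x = rng.integers(0, 256, n)                   # secret bytes
    noise = rng.normal(0.0, 1.0, n)

    unmasked = HW[x] + noise                      # leakage depends on x directly
    m = rng.integers(0, 256, n)                   # fresh Boolean mask
    masked = HW[x ^ m] + HW[m] + noise            # first-order Boolean masking

    print(round(snr(unmasked, x), 3), round(snr(masked, x), 3))   # ~2.0 vs ~0.0

The first-order moments of the masked leakage carry no signal, which is why the SNR collapses; higher-order moments (and the coding-theoretic parameters above) are needed to capture the residual leakage.
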
45

Wu, Qianmei, Wei Cheng, Sylvain Guilley, Fan Zhang, and Wei Fu. "On Efficient and Secure Code-based Masking: A Pragmatic Evaluation." IACR Transactions on Cryptographic Hardware and Embedded Systems, June 8, 2022, 192–222. http://dx.doi.org/10.46586/tches.v2022.i3.192-222.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Code-based masking is a highly generalized type of masking scheme, which can be instantiated into specific cases by assigning different encoders. It is attractive for its side-channel resistance against higher-order attacks and its potential to withstand fault injection attacks. However, similar to other algebraically-involved masking schemes, code-based masking is also burdened with expensive computational overhead. To mitigate this cost and make it efficient, we contribute several improvements to the original scheme proposed by Wang et al. in TCHES 2020. Specifically, we devise a computationally friendly encoder and accordingly accelerate the masked gadgets to leverage efficient implementations. In addition, we highlight that the amortization technique introduced by Wang et al. does not always lead to efficient implementations as expected, but actually decreases efficiency in some cases.
From the perspective of practical security, we carry out an extensive evaluation of the concrete security of code-based masking in the real world. On the one hand, we select three representative variations of code-based masking as targets for an extensive evaluation. On the other hand, we aim at a security assessment of both encoding and computations to investigate whether the state-of-the-art computational framework for code-based masking reaches the security of the corresponding encoding. By leveraging both a leakage assessment tool and side-channel attacks, we verify the existence of “security order amplification” in practice and validate the reliability of the leakage quantification method proposed by Cheng et al. in TCHES 2021. In addition, we also study the security decrease caused by the “cost amortization” technique and the redundancy of code-based masking. We identify a security bottleneck in the gadget computations which limits the whole masked implementation. To the best of our knowledge, this is the first work that allows us to narrow the gap between the theoretical security order under the probing model (sometimes with simulation experiments) and the concrete side-channel security level of implementations protected by code-based masking in practice.
46

Demange, Loïc, and Mélissa Rossi. "A provably masked implementation of BIKE Key Encapsulation Mechanism." IACR Communications in Cryptology, April 9, 2024. http://dx.doi.org/10.62056/aesgvua5v.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
BIKE is a post-quantum key encapsulation mechanism (KEM) selected for the 4th round of the NIST standardization campaign. It relies on the hardness of the syndrome decoding problem for quasi-cyclic codes and on the indistinguishability of the public key from a random element, and provides the most competitive performance among round 4 candidates, which makes it relevant for future real-world use cases. Analyzing its side-channel resistance has been highly encouraged by the community, and several works have already outlined various side-channel weaknesses and proposed ad-hoc countermeasures. However, in contrast to the well-documented research line on masking lattice-based algorithms, the possibility of generically protecting code-based algorithms by masking has only been marginally studied, in a paper by Chen et al. at SAC 2015. At this stage of the standardization campaign, it is important to assess the possibility of fully masking the BIKE scheme and the resulting cost in terms of performance. In this work, we provide the first high-order masked implementation of a code-based algorithm. We had to tackle many issues, such as finding proper ways to handle large sparse polynomials, masking the key-generation algorithm and keeping the benefit of the bitslicing. In this paper, we present all the gadgets necessary to provide a fully masked implementation of BIKE, we discuss our different implementation choices, and we propose a full proof of masking in the Ishai, Sahai and Wagner (Crypto 2003) model. More practically, we also provide an open-source C masked implementation of the key-generation, encapsulation and decapsulation algorithms with extensive benchmarks. While the obtained implementation is slower than existing masked lattice-based algorithms, we show that masking at orders 1, 2, 3, 4 and 5 implies a performance penalty of x5.8, x14.2, x24.4, x38 and x55.6 compared to order 0 (unmasked and unoptimized BIKE). This scaling is encouraging, and no Boolean-to-arithmetic conversion has been used.
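
The probing model referenced here is that of Ishai, Sahai and Wagner (ISW), whose classic multiplication gadget is the textbook basis for masked AND computations. A bit-level Python sketch of the ISW AND gadget follows; it is the generic construction for illustration, not BIKE's actual C gadgets:

    import secrets

    def share_bit(x, n):
        """Boolean sharing of a bit into n shares."""
        s = [secrets.randbelow(2) for _ in range(n - 1)]
        return s + [x ^ (sum(s) & 1)]

    def unshare_bit(s):
        acc = 0
        for b in s:
            acc ^= b
        return acc

    def isw_and(a, b):
        """ISW multiplication of two n-share Boolean sharings of bits:
        returns an n-share sharing of (a AND b), (n-1)-probing secure."""
        n = len(a)
        r = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                r[i][j] = secrets.randbelow(2)
                r[j][i] = (r[i][j] ^ (a[i] & b[j])) ^ (a[j] & b[i])
        c = [a[i] & b[i] for i in range(n)]
        for i in range(n):
            for j in range(n):
                if i != j:
                    c[i] ^= r[i][j]
        return c

    a, b = share_bit(1, 3), share_bit(1, 3)      # second-order sharings
    assert unshare_bit(isw_and(a, b)) == 1       # 1 AND 1 == 1
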
47

Tang, Peng, Cheng Xie, and Haoran Duan. "Node and edge dual-masked self-supervised graph representation." Knowledge and Information Systems, December 23, 2023. http://dx.doi.org/10.1007/s10115-023-01950-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Self-supervised graph representation learning has been widely used in many intelligent applications, since labeled information can hardly be found in these data environments. Currently, masking-and-reconstruction-based (MR-based) methods hold the state-of-the-art records in the self-supervised graph representation field. However, existing MR-based methods do not fully consider deep-level node and structure information, which can decrease the final performance of the graph representation. To this end, this paper proposes a node and edge dual-masked self-supervised graph representation model that considers both node and structure information. First, a dual masking model is proposed to perform node masking and edge masking on the original graph at the same time, generating two masked graphs. Second, a graph encoder is designed to encode the two generated masked graphs. Then, two reconstruction decoders are designed to reconstruct the nodes and edges from the masked graphs. Finally, the reconstructed nodes and edges are compared with the original nodes and edges to calculate the loss values, without using labeled information. The proposed method is validated on a total of 14 datasets for graph node classification and graph classification tasks. The experimental results show that the method is effective for self-supervised graph representation. The code is available at: https://github.com/TangPeng0627/Node-and-Edge-Dual-Mask.
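
The dual-masking step can be sketched as producing two views of the graph, one with node features zeroed and one with edges dropped, which the encoder and the two decoders then reconstruct. A minimal numpy sketch follows; the rates and shapes are assumptions, not the authors' settings:

    import numpy as np

    rng = np.random.default_rng(0)

    def dual_mask(features, adj, node_rate=0.3, edge_rate=0.3):
        """Build the two masked views: node features zeroed in one view,
        edges dropped in the other; the masked items become the targets."""
        node_mask = rng.random(features.shape[0]) < node_rate
        fx = features.copy()
        fx[node_mask] = 0.0                              # node-masked view

        edges = np.argwhere(np.triu(adj, 1) > 0)
        drop = rng.random(len(edges)) < edge_rate
        ax = adj.copy()
        for u, v in edges[drop]:
            ax[u, v] = ax[v, u] = 0.0                    # edge-masked view
        return fx, node_mask, ax, edges[drop]

    X = rng.standard_normal((6, 4))                      # 6 nodes, 4-dim features
    A = np.triu((rng.random((6, 6)) > 0.6).astype(float), 1)
    A = A + A.T
    fx, masked_nodes, ax, dropped_edges = dual_mask(X, A)
    print(masked_nodes.astype(int), dropped_edges.tolist())
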
48

Cheng, Wei, Sylvain Guilley, and Jean-Luc Danger. "Information Leakage in Code-based Masking: A Systematic Evaluation by Higher-Order Attacks." IEEE Transactions on Information Forensics and Security, 2022, 1. http://dx.doi.org/10.1109/tifs.2022.3167914.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

"Higher-Order Masking Scheme against DPA Attack in Practice: McEliece Cryptosystem Based on QD-MDPC Code." KSII Transactions on Internet and Information Systems 13, no. 2 (February 28, 2019). http://dx.doi.org/10.3837/tiis.2019.02.033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Zhou, Zhecheng, Zhenya Du, Xin Jiang, Linlin Zhuo, Yixin Xu, Xiangzheng Fu, Mingzhe Liu, and Quan Zou. "GAM-MDR: probing miRNA–drug resistance using a graph autoencoder based on random path masking." Briefings in Functional Genomics, February 22, 2024. http://dx.doi.org/10.1093/bfgp/elae005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
MicroRNAs (miRNAs) are found ubiquitously in biological cells and play a pivotal role in regulating the expression of numerous target genes. Therapies centered around miRNAs are emerging as a promising strategy for disease treatment, aiming to intervene in disease progression by modulating abnormal miRNA expression. The accurate prediction of miRNA–drug resistance (MDR) is crucial for the success of miRNA therapies. Computational models based on deep learning have demonstrated exceptional performance in predicting potential MDRs. However, their effectiveness can be compromised by errors in the data acquisition process, leading to inaccurate node representations. To address this challenge, we introduce the GAM-MDR model, which combines a graph autoencoder (GAE) with random path masking techniques to precisely predict potential MDRs. The reliability and effectiveness of the GAM-MDR model are mainly reflected in two aspects. Firstly, it efficiently extracts the representations of miRNA and drug nodes in the miRNA–drug network. Secondly, our designed random path masking strategy efficiently reconstructs critical paths in the network, thereby reducing the adverse impact of noisy data. To our knowledge, this is the first time that a random path masking strategy has been integrated into a GAE to infer MDRs. Our method was subjected to multiple validations on public datasets and yielded promising results. We are optimistic that our model can offer valuable insights for miRNA therapeutic strategies and deepen the understanding of the regulatory mechanisms of miRNAs. Our data and code are publicly available at GitHub: https://github.com/ZZCrazy00/GAM-MDR.
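
Random path masking can be sketched as removing the edges along a short random walk and training the decoder to reconstruct them. A minimal numpy illustration follows; the walk length and the undirected-graph setting are assumptions, not the authors' configuration:

    import numpy as np

    rng = np.random.default_rng(0)

    def random_path_mask(adj, walk_len=4):
        """Remove the edges along a short random walk; the GAE decoder is
        then trained to reconstruct exactly these masked path edges."""
        masked = adj.copy()
        node = int(rng.integers(adj.shape[0]))
        path = []
        for _ in range(walk_len):
            nbrs = np.flatnonzero(masked[node])
            if nbrs.size == 0:
                break
            nxt = int(rng.choice(nbrs))
            masked[node, nxt] = masked[nxt, node] = 0.0
            path.append((node, nxt))
            node = nxt
        return masked, path

    A = np.zeros((5, 5))
    for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]:
        A[u, v] = A[v, u] = 1.0
    masked, path = random_path_mask(A)
    print(path)   # reconstruction targets for the decoder
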

To the bibliography