Academic literature on the topic 'Binary code learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Binary code learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Binary code learning"

1

Liu, Mohan, Xiaoming Tang, and Hanming Fei. "Design of Malicious Code Detection System Based on Binary Code Slicing." 電腦學刊 33, no. 3 (June 2022): 225–38. http://dx.doi.org/10.53106/199115992022063303018.

Abstract:
Malicious code threatens the safety of computer systems. Researching malicious code design techniques and mastering code behavior patterns are basic tasks of network security prevention. Amid the contest between network offense and defense, malicious code exhibits stealth, polymorphism, and frequent mutation. Correctly and effectively understanding malicious code and extracting its key malicious features is the main goal of malicious code detection technology. As an important method of program understanding, program slicing analyzes program code using the idea of "decomposition" and extracts the code fragments that the analyst is interested in. In recent years, data mining and machine learning techniques have been applied to malicious code detection. They have become a research focus because data mining can uncover meaningful patterns from large amounts of existing code, while machine learning helps summarize identification knowledge of known malicious code, enabling similarity search and the discovery of unknown malicious code. Heuristic machine-learning-based detection first requires extracting the structural, functional, and behavioral characteristics of malicious code, automatically or manually, so the malicious code can be sliced before detection is performed. By improving the classic program slicing algorithm, this paper effectively addresses the interprocedural slicing problem for binary code and implements a malicious code detection system. Variable-length N-grams over machine code byte sequences are used as the feature extraction method, further demonstrating the efficiency and accuracy of malicious code detection technology based on data mining and machine learning.
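As a rough illustration of the feature-extraction step this abstract mentions, the following Python sketch counts variable-length N-grams over a machine-code byte sequence; the sample bytes, N-gram lengths, and vocabulary size are illustrative assumptions rather than the paper's actual settings.

from collections import Counter

def byte_ngram_features(raw_bytes, min_n=2, max_n=4, top_k=1000):
    # Count byte n-grams of several lengths and keep the most frequent ones.
    counts = Counter()
    for n in range(min_n, max_n + 1):
        for i in range(len(raw_bytes) - n + 1):
            counts[raw_bytes[i:i + n]] += 1
    return dict(counts.most_common(top_k))

# Hypothetical usage: extract features from a (sliced) binary.
# with open("sample.exe", "rb") as f:
#     features = byte_ngram_features(f.read())
print(byte_ngram_features(b"\x90\x90\x55\x8b\xec\x90\x90\x55\x8b\xec"))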
2

Zhou, Xiang, Fumin Shen, Yang Yang, Guangwei Gao, and Yuan Wang. "Binary code learning via optimal class representations." Neurocomputing 208 (October 2016): 59–65. http://dx.doi.org/10.1016/j.neucom.2015.12.129.

3

Zhou, Lei, Xiao Bai, Xianglong Liu, Jun Zhou, and Edwin R. Hancock. "Learning binary code for fast nearest subspace search." Pattern Recognition 98 (February 2020): 107040. http://dx.doi.org/10.1016/j.patcog.2019.107040.

4

Li, Xiang, Yuanping Nie, Zhi Wang, Xiaohui Kuang, Kefan Qiu, Cheng Qian, and Gang Zhao. "BMOP: Bidirectional Universal Adversarial Learning for Binary OpCode Features." Wireless Communications and Mobile Computing 2020 (December 2, 2020): 1–11. http://dx.doi.org/10.1155/2020/8876632.

Abstract:
For malware detection, current state-of-the-art research concentrates on machine learning techniques. Binary n-gram OpCode features are commonly used for malicious code identification and classification with high accuracy. Modifying binary OpCodes is much more difficult than modifying image pixels, so traditional adversarial perturbation methods cannot be applied to OpCodes directly. In this paper, we propose a bidirectional universal adversarial learning method for effective binary OpCode perturbation from both benign and malicious perspectives. Benign features are OpCodes that represent benign behaviours, while malicious features are OpCodes for malicious behaviours. From a large dataset of benign and malicious binary applications, we select the most significant benign and malicious OpCode features based on their SHAP values in the trained machine learning model. We implement an OpCode modification method that inserts benign OpCodes into executables as garbage code that is never executed and modifies malicious OpCodes by equivalent replacement, preserving execution semantics. The experimental results show that the benign and malicious OpCode perturbation (BMOP) method can bypass malicious code detection models based on the SVM, XGBoost, and DNN algorithms.
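To make the two perturbation directions concrete, here is a minimal Python sketch that inserts assumed-benign opcodes as dead code and swaps assumed-malicious opcodes using a small equivalence table; the opcode lists and the table are hypothetical, and the sketch operates on mnemonic sequences rather than real executables as the paper does.

import random

BENIGN_OPCODES = ["nop", "push", "pop"]        # assumed benign-behaviour opcodes
EQUIVALENT = {"xor": "sub", "test": "cmp"}     # assumed semantics-preserving swaps (illustrative only)

def perturb(opcodes, n_insert=3, seed=0):
    rng = random.Random(seed)
    out = list(opcodes)
    # Direction 1: insert benign opcodes as garbage code that is never executed.
    for _ in range(n_insert):
        out.insert(rng.randrange(len(out) + 1), rng.choice(BENIGN_OPCODES))
    # Direction 2: replace selected malicious opcodes with assumed-equivalent ones.
    return [EQUIVALENT.get(op, op) for op in out]

print(perturb(["mov", "xor", "test", "call", "jmp"]))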
5

Jeong, Junho, Yangsun Lee, Uduakobong George Offong, and Yunsik Son. "A Type Information Reconstruction Scheme Based on Long Short-Term Memory for Weakness Analysis in Binary File." International Journal of Software Engineering and Knowledge Engineering 28, no. 09 (September 2018): 1267–86. http://dx.doi.org/10.1142/s0218194018400156.

Abstract:
Because of the increasing complexity of software development, the lack of management of legacy code, and the nature of embedded software, the use of third-party libraries for which no source code is available keeps growing. Without the source code, it is difficult to analyze these libraries for vulnerabilities. Therefore, to analyze weaknesses inherent in binary code, various studies have performed static analysis on intermediate code, and the conversion from binary code to intermediate language differs depending on the execution environment. In this paper, we propose a deep learning-based analysis method to reconstruct the data types that are missing when binary code is converted to intermediate language, and a method to generate supervised learning data for the deep learning model.
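A minimal sketch of the kind of sequence model this abstract describes, assuming PyTorch and treating type reconstruction as classifying a sequence of lifted-instruction tokens into one of several candidate types; the vocabulary size, label set, and dimensions are all assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class TypeReconstructor(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128, n_types=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_types)   # logits over candidate data types

    def forward(self, token_ids):                    # token_ids: (batch, seq_len) instruction-token ids
        emb = self.embed(token_ids)
        _, (h_n, _) = self.lstm(emb)
        return self.head(h_n[-1])

model = TypeReconstructor()
logits = model(torch.randint(0, 5000, (2, 30)))      # two dummy token sequences of length 30
print(logits.shape)                                   # torch.Size([2, 8])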
6

Lo, James Ting-Ho, and Bryce Mackey-Williams Carey. "A Cortical Learning Machine for Learning Real-Valued and Ranked Data." International Journal of Clinical Medicine and Bioengineering 1, no. 1 (December 30, 2021): 12–24. http://dx.doi.org/10.35745/ijcmb2021v01.01.0003.

Abstract:
The cortical learning machine (CLM) introduced in [1-3] is a low-order computational model of the neocortex. It has real-time, photographic, unsupervised, and hierarchical learning capabilities, which existing learning machines such as the multilayer perceptron and the convolutional neural network do not have. The CLM is a network of processing units (PUs), each comprising novel computational models of dendrites (for encoding), synapses (for storing code covariance matrices), spiking/nonspiking somas (for evaluating empirical probabilities and generating spikes), and unsupervised/supervised Hebbian learning schemes. In this paper, the masking matrix in the CLM of [1-3] is generalized to enable the CLM to learn ranked and real-valued data in the form of binary numbers and unary (thermometer) codes. The general masking matrix assigns weights to the bits in the binary and unary codes to reflect their relative significance. Numerical examples illustrate that a single PU with the general masking matrix is a pattern recognizer whose efficacy is comparable to those of leading statistical and machine learning methods, showing the potential of CLMs with multiple PUs, especially in view of the aforementioned capabilities of the CLM.
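The following small Python sketch illustrates the binary and unary (thermometer) codes mentioned in the abstract, together with a weight vector that plays the role of a masking matrix by giving more significant bits larger weights; the specific weighting scheme is an assumption for illustration only.

import numpy as np

def binary_bits(value, n_bits):
    # n_bits-bit binary representation, most significant bit first.
    return np.array([(value >> (n_bits - 1 - i)) & 1 for i in range(n_bits)], dtype=np.uint8)

def thermometer(value, n_levels):
    # Unary (thermometer) code: the first `value` bits are 1, the rest 0.
    return (np.arange(n_levels) < value).astype(np.uint8)

n_bits = 4
bits = binary_bits(11, n_bits)                       # array([1, 0, 1, 1])
weights = 2.0 ** np.arange(n_bits - 1, -1, -1)       # more significant bits get larger weights
print(bits, thermometer(3, 6), float(weights @ bits))  # weighted binary code recovers the value 11.0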
7

Shen, Fumin, Xiang Zhou, Yang Yang, Jingkuan Song, Heng Tao Shen, and Dacheng Tao. "A Fast Optimization Method for General Binary Code Learning." IEEE Transactions on Image Processing 25, no. 12 (December 2016): 5610–21. http://dx.doi.org/10.1109/tip.2016.2612883.

8

Do, Thanh-Toan, Tuan Hoang, Dang-Khoa Le Tan, Anh-Dzung Doan, and Ngai-Man Cheung. "Compact Hash Code Learning With Binary Deep Neural Network." IEEE Transactions on Multimedia 22, no. 4 (April 2020): 992–1004. http://dx.doi.org/10.1109/tmm.2019.2935680.

9

Gao, Hao, Tong Zhang, Songqiang Chen, Lina Wang, and Fajiang Yu. "FUSION: Measuring Binary Function Similarity with Code-Specific Embedding and Order-Sensitive GNN." Symmetry 14, no. 12 (December 2, 2022): 2549. http://dx.doi.org/10.3390/sym14122549.

Abstract:
Binary code similarity measurement is a popular research area in binary analysis with the recent development of deep learning-based models. Current state-of-the-art methods often use a pre-trained language model (PTLM) to embed instructions into basic blocks as representations of nodes within a control flow graph (CFG). These methods then use a graph neural network (GNN) to embed the whole CFG and measure binary similarity between these code embeddings. However, these methods largely treat assembly code as natural language text and ignore its code-specific features when training the PTLM. Moreover, they either ignore the direction of edges in the CFG or handle it inefficiently. These weaknesses may limit the performance of previous methods. In this paper, we propose a novel method for function similarity using code-specific PTLMs and an order-sensitive GNN (FUSION). Since the similarity of binary codes is a symmetric/asymmetric problem, we were guided by the ideas of symmetry and asymmetry in our research. FUSION measures binary function similarity with two code-specific PTLM training strategies and an order-sensitive GNN, which respectively alleviate the aforementioned weaknesses. FUSION outperforms state-of-the-art binary similarity methods by up to 5.4% in accuracy and performs significantly better overall.
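As a loose illustration of the pipeline sketched in this abstract, the following Python code aggregates basic-block embeddings along directed CFG edges and compares two pooled function embeddings by cosine similarity; the update rule, dimensions, and example graph are assumptions, not FUSION's actual architecture.

import numpy as np

def function_embedding(block_vecs, adj, hops=2):
    # block_vecs: (n_blocks, d) basic-block embeddings; adj[i, j] = 1 for a directed CFG edge i -> j.
    h = block_vecs
    for _ in range(hops):
        incoming = adj.T @ h                  # each block aggregates only its predecessors (direction-aware)
        h = np.tanh(h + incoming)
    return h.mean(axis=0)                     # pool block states into one function embedding

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
blocks = rng.normal(size=(4, 16))             # stand-in for PTLM block embeddings
adj = np.array([[0, 1, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 0]])
print(cosine(function_embedding(blocks, adj), function_embedding(blocks, adj)))   # 1.0 for identical functions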
10

Zhang, Daokun, Jie Yin, Xingquan Zhu, and Chengqi Zhang. "Search Efficient Binary Network Embedding." ACM Transactions on Knowledge Discovery from Data 15, no. 4 (June 2021): 1–27. http://dx.doi.org/10.1145/3436892.

Abstract:
Traditional network embedding primarily focuses on learning a continuous vector representation for each node, preserving network structure and/or node content information, such that off-the-shelf machine learning algorithms can be easily applied to the vector-format node representations for network analysis. However, the learned continuous vector representations are inefficient for large-scale similarity search, which often involves finding nearest neighbors measured by distance or similarity in a continuous vector space. In this article, we propose a search efficient binary network embedding algorithm called BinaryNE to learn a binary code for each node, by simultaneously modeling node context relations and node attribute relations through a three-layer neural network. BinaryNE learns binary node representations using a stochastic gradient descent-based online learning algorithm. The learned binary encoding not only reduces memory usage to represent each node, but also allows fast bit-wise comparisons to support faster node similarity search than using Euclidean or other distance measures. Extensive experiments and comparisons demonstrate that BinaryNE not only delivers more than 25 times faster search speed, but also provides comparable or better search quality than traditional continuous vector based network embedding methods. The binary codes learned by BinaryNE also render competitive performance on node classification and node clustering tasks. The source code of the BinaryNE algorithm is available at https://github.com/daokunzhang/BinaryNE.
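The speed argument in this abstract rests on replacing floating-point distance computations with bit-wise operations; the following NumPy sketch shows a Hamming-distance nearest-neighbour search over packed binary node codes, with the code length and data sizes chosen arbitrarily for illustration.

import numpy as np

rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(100_000, 128), dtype=np.uint8)   # 128-bit binary code per node
packed = np.packbits(codes, axis=1)                               # 16 bytes per node

def hamming_search(query_bits, database, k=10):
    # k nearest codes by Hamming distance: XOR the packed bytes, then count differing bits.
    q = np.packbits(query_bits)
    dists = np.unpackbits(np.bitwise_xor(database, q), axis=1).sum(axis=1)
    return np.argsort(dists)[:k]

print(hamming_search(codes[0], packed))    # the query's own code (index 0) ranks first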

Dissertations / Theses on the topic "Binary code learning"

1

Koseler, Kaan Tamer. "Realization of Model-Driven Engineering for Big Data: A Baseball Analytics Use Case." Miami University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=miami1524832924255132.

2

Olby, Linnea, and Isabel Thomander. "A Step Toward GDPR Compliance : Processing of Personal Data in Email." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-238754.

Abstract:
The General Data Protection Regulation, enforced on 25 May 2018, is a response to the growing importance of IT in today's society, accompanied by public demand for control over personal data. In contrast to the previous directive, the new regulation applies to personal data stored in an unstructured format, such as email, rather than solely to structured data. Companies are now forced to accommodate this change, among others, in order to be compliant. This study aims to provide a code of conduct for the processing of personal data in email as a measure for reaching compliance. Furthermore, this study investigates whether Named Entity Recognition (NER) can aid this process as a means of finding personal data in the form of names. A literature review of current research and recommendations was conducted for the code of conduct proposal. A NER system was constructed using a hybrid approach with binary logistic regression, hand-crafted rules, and gazetteers. The model was applied to a selection of emails, including attachments, obtained from a small consultancy company in the automotive industry. The proposed code of conduct consists of six items, applied to the consultancy firm. The NER model demonstrated a low ability to identify names and was therefore deemed insufficient for this task.
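For illustration, here is a toy Python sketch of the hybrid approach described above, combining a gazetteer lookup and simple hand-crafted rules as features for a binary logistic-regression token classifier; the gazetteer entries, features, and training data are invented for the example.

import numpy as np
from sklearn.linear_model import LogisticRegression

GAZETTEER = {"anna", "erik", "maria"}             # assumed list of known first names

def token_features(token):
    # Gazetteer hit plus two hand-crafted rules, used as features for the classifier.
    return [token.lower() in GAZETTEER, token[:1].isupper(), token.isalpha()]

tokens = ["Anna", "sent", "an", "email", "to", "Erik", "yesterday"]
labels = [1, 0, 0, 0, 0, 1, 0]                    # 1 = token is a personal name
X = np.array([token_features(t) for t in tokens], dtype=float)
clf = LogisticRegression().fit(X, labels)

for t in ["Maria", "invoice"]:
    print(t, int(clf.predict(np.array([token_features(t)], dtype=float))[0]))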
3

Lin, Guosheng. "Structured output prediction and binary code learning in computer vision." Thesis, 2015. http://hdl.handle.net/2440/91777.

Abstract:
Machine learning techniques play essential roles in many computer vision applications. This thesis is dedicated to two types of machine learning techniques that are important to computer vision: structured learning and binary code learning. Structured learning predicts complex structured outputs whose components are inter-dependent. Structured outputs are common in real-world applications; an image segmentation mask is one example. Binary code learning learns hash functions that map data points into binary codes. The binary code representation is popular for large-scale similarity search, indexing and storage. This thesis makes practical and theoretical contributions to these two types of learning techniques.

The first part of this thesis focuses on boosting-based structured output prediction. Boosting is a family of methods for learning a single accurate predictor by linearly combining a set of less accurate weak learners. As a special case of structured learning, we first propose an efficient boosting method for multi-class classification, which can be applied to image classification. Different from many existing multi-class boosting methods, we train class-specific weak learners by separately learning weak learners for each class. We also develop a fast coordinate descent method for solving the optimization problem, in which each coordinate update has a closed-form solution. For general structured output prediction, we propose a new boosting-based method, which we refer to as StructBoost. StructBoost supports nonlinear structured learning by combining a set of weak structured learners, and generalizes standard boosting approaches such as AdaBoost or LPBoost to structured learning. The resulting optimization problem is challenging in the sense that it may involve exponentially many variables and constraints. We develop cutting plane and column generation based algorithms to efficiently solve the optimization. We show the versatility and usefulness of StructBoost on a range of problems, such as optimizing the tree loss for hierarchical multi-class classification, optimizing the Pascal overlap criterion for robust visual tracking, and learning conditional random field parameters for image segmentation.

The last part of this thesis focuses on hashing methods for binary code learning. We develop three novel hashing methods which focus on different aspects of binary code learning. We first present a column generation based hash function learning method for preserving triplet-based relative pairwise similarity. Given a set of triplets that encode pairwise similarity comparison information, our method learns hash functions within the large-margin learning framework; at each iteration of the column generation procedure, the best hash function is selected. We show that our method, with its triplet-based formulation and large-margin learning, is able to learn high-quality hash functions. The second hashing method is a flexible and general method with a two-step learning scheme. Most existing approaches to hashing apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of the method to respond to the data, and can result in complex optimization problems that are difficult to solve. We therefore propose a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. This framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problem-specific hashing methods. Our framework decomposes the hashing learning problem into two steps: hash bit learning, and hash function learning based on the learned bits. The first step can typically be formulated as a binary quadratic problem, and the second step can be accomplished by training standard binary classifiers. Both steps can be solved by leveraging sophisticated algorithms in the literature. The third hashing method aims for efficient and effective hash function learning on large-scale and high-dimensional data, and is an extension of our general two-step hashing method. Non-linear hash functions have demonstrated their advantage over linear ones due to their powerful generalization capability. In the literature, kernel functions are typically used to achieve non-linearity in hashing, which yields encouraging retrieval performance at the price of slow evaluation and training time. We propose instead to use boosted decision trees to achieve non-linearity in hashing; they are fast to train and evaluate, and hence more suitable for hashing with high-dimensional data. In our approach, we first propose sub-modular formulations for the hashing binary code inference problem and an efficient GraphCut-based block search method for solving large-scale inference. We then learn hash functions by training boosted decision trees to fit the binary codes. We show that our method significantly outperforms most existing methods in both retrieval precision and training time, especially for high-dimensional data.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2015
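As a toy illustration of the two-step hashing scheme summarized in this abstract, the following Python sketch first derives target bits for the training data (here with a simple sign-of-PCA projection standing in for the binary quadratic step) and then trains one binary classifier per bit as the hash function; all modeling choices are assumptions, not the thesis's actual algorithms.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))                     # toy training features

# Step 1: infer target binary codes (8 bits per point) for the training data.
Z = PCA(n_components=8).fit_transform(X)
bits = (Z > 0).astype(int)

# Step 2: train one standard binary classifier per bit; together they form the hash function.
hash_funcs = [LogisticRegression(max_iter=1000).fit(X, bits[:, b]) for b in range(8)]

def hash_code(x):
    # Apply the learned per-bit classifiers to map a new point to its 8-bit code.
    return np.array([h.predict(x.reshape(1, -1))[0] for h in hash_funcs])

print(hash_code(X[0]))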

Book chapters on the topic "Binary code learning"

1

Ju, Zhen-fei, Xiao-jiao Mao, Ning Li, and Yu-bin Yang. "Binary Code Learning via Iterative Distance Adjustment." In MultiMedia Modeling, 83–94. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-14445-0_8.

2

Lin, Guosheng, Chunhua Shen, and Jianxin Wu. "Optimizing Ranking Measures for Compact Binary Code Learning." In Computer Vision – ECCV 2014, 613–27. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10578-9_40.

3

Wang, Zhongmin, Zhen Feng, Zhenzhou Tian, and Lingwei Chen. "Binary Code Authorship Identification with Neural Representation Learning." In Advances in Natural Computation, Fuzzy Systems and Knowledge Discovery, 1407–15. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70665-4_153.

4

Wang, Zhongmin, Zhen Feng, and Zhenzhou Tian. "Neural Representation Learning Based Binary Code Authorship Attribution." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 244–49. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68734-2_15.

5

da Silva Gomes, João, and Roman Borisyuk. "Biological Brain and Binary Code: Quality of Coding for Face Recognition." In Artificial Neural Networks and Machine Learning – ICANN 2012, 427–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33269-2_54.

6

Vinodhkumar, N., M. G. Rajendrakumar, and S. Muthumanickam. "Performance Analysis of Gray to Binary Code Converter Using GDI Techniques." In Proceedings of the 2nd International Conference on Recent Trends in Machine Learning, IoT, Smart Cities and Applications, 419–29. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-6407-6_38.

7

Xia, Fengliang, Guixing Wu, Guochao Zhao, and Xiangyu Li. "SimCGE: Simple Contrastive Learning of Graph Embeddings for Cross-Version Binary Code Similarity Detection." In Information and Communications Security, 458–71. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-15777-6_25.

8

Priyanga, S., Roopak Suresh, Sandeep Romana, and V. S. Shankar Sriram. "The Good, The Bad, and The Missing: A Comprehensive Study on the Rise of Machine Learning for Binary Code Analysis." In Computational Intelligence in Data Mining, 397–406. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9447-9_31.

9

Leng, Cong, Jian Cheng, Ting Yuan, Xiao Bai, and Hanqing Lu. "Learning Binary Codes with Bagging PCA." In Machine Learning and Knowledge Discovery in Databases, 177–92. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-44851-9_12.

10

Grauman, Kristen, and Rob Fergus. "Learning Binary Hash Codes for Large-Scale Image Search." In Machine Learning for Computer Vision, 49–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-28661-2_3.


Conference papers on the topic "Binary code learning"

1

Aumpansub, Amy, and Zhen Huang. "Learning-based Vulnerability Detection in Binary Code." In ICMLC 2022: 2022 14th International Conference on Machine Learning and Computing. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3529836.3529926.

2

Lu, Zhi, Yang Hu, Yunchao Jiang, Yan Chen, and Bing Zeng. "Learning Binary Code for Personalized Fashion Recommendation." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.01081.

3

Nguyen, Viet-Anh, and Minh N. Do. "Binary code learning with semantic ranking based supervision." In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016. http://dx.doi.org/10.1109/icassp.2016.7471859.

4

Bai, Chaochao, Weiqiang Wang, Tong Zhao, and Mingqiang Li. "Learning compact binary quantization of Minutia Cylinder Code." In 2016 International Conference on Biometrics (ICB). IEEE, 2016. http://dx.doi.org/10.1109/icb.2016.7550054.

5

Xiao, Qiao, Qinyu Zhang, Xi Wu, Xiao Han, and Ronghua Li. "Learning binary code features for UAV target tracking." In 2017 3rd IEEE International Conference on Control Science and Systems Engineering (ICCSSE). IEEE, 2017. http://dx.doi.org/10.1109/ccsse.2017.8087896.

6

Liu, Hong, Rongrong Ji, Yongjian Wu, Feiyue Huang, and Baochang Zhang. "Cross-Modality Binary Code Learning via Fusion Similarity Hashing." In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017. http://dx.doi.org/10.1109/cvpr.2017.672.

7

Duan, Yue, Xuezixiang Li, Jinghan Wang, and Heng Yin. "DeepBinDiff: Learning Program-Wide Code Representations for Binary Diffing." In Network and Distributed System Security Symposium. Reston, VA: Internet Society, 2020. http://dx.doi.org/10.14722/ndss.2020.24311.

8

Fan, Lixin. "Supervised Binary Hash Code Learning with Jensen Shannon Divergence." In 2013 IEEE International Conference on Computer Vision (ICCV). IEEE, 2013. http://dx.doi.org/10.1109/iccv.2013.325.

9

Ma, Changyi, Fangchen Yu, Yueyao Yu, and Wenye Li. "Learning Sparse Binary Code for Maximum Inner Product Search." In CIKM '21: The 30th ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3459637.3482132.

10

Tian, Zhenzhou, Jinrui Li, Peng Xue, Jie Tian, Hengchao Mao, and Yaqian Huang. "Functionality Recognition on Binary Code with Neural Representation Learning." In AIPR 2021: 2021 4th International Conference on Artificial Intelligence and Pattern Recognition. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3488933.3489033.


Reports on the topic "Binary code learning"

1

Obert, James, and Timothy James Loffredo. Efficient Binary Static Code Data Flow Analysis Using Unsupervised Learning. Office of Scientific and Technical Information (OSTI), November 2019. http://dx.doi.org/10.2172/1592974.
