MalwareViT

Training Vision Transformers from Scratch for Malware Classification

An image is worth 16x16 words, so what is a malware worth? Maybe a malware is worth a 66x66 image.

Neural network methods have reached a level that may exceed the limits of earlier machine learning approaches, and most image-based malware classification techniques are built on convolutional neural networks (CNNs), which cleverly recast the malware classification problem as an image classification problem. However, the Vision Transformer (ViT), which extends the Transformer architecture from natural language processing to computer vision, has gradually attained state-of-the-art results on many computer vision benchmarks and has come to be regarded as an alternative to existing CNN architectures.

Motivated by the visual similarity between malware samples of the same family and by the success of ViT on vision tasks, we propose MalwareViT, a file-agnostic deep learning approach that applies Vision Transformers to malware classification: the co-occurrence matrix built from the opcode frequencies extracted from the disassembled (ASM) code is treated as an image, which is used to efficiently group malicious software into families.
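For concreteness, a from-scratch ViT over such images could look roughly like the PyTorch sketch below. The patch size, depth, width, and 9-way output (e.g. the nine families of the Microsoft Malware Classification Challenge) are illustrative assumptions, not the repository's actual configuration.

```python
import torch
import torch.nn as nn

class MalwareViTSketch(nn.Module):
    """Minimal ViT for 66x66 single-channel co-occurrence images.

    Hyperparameters here (patch size 6, embed dim 192, 6 layers, 3 heads,
    9 classes) are assumptions for illustration only.
    """
    def __init__(self, img_size=66, patch_size=6, in_chans=1,
                 num_classes=9, embed_dim=192, depth=6, num_heads=3):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2  # 11 * 11 = 121

        # Patch embedding: a strided convolution splits the image into patches.
        self.patch_embed = nn.Conv2d(in_chans, embed_dim,
                                     kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.randn(1, 1, embed_dim) * 0.02)
        self.pos_embed = nn.Parameter(torch.randn(1, num_patches + 1, embed_dim) * 0.02)

        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, dim_feedforward=4 * embed_dim,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                      # x: (B, 1, 66, 66)
        x = self.patch_embed(x)                # (B, D, 11, 11)
        x = x.flatten(2).transpose(1, 2)       # (B, 121, D)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        return self.head(x[:, 0])              # classify from the [CLS] token

# e.g. logits = MalwareViTSketch()(torch.randn(8, 1, 66, 66))  # shape (8, 9)
```

A patch size of 6 gives an 11x11 grid of patches (121 tokens plus the class token), which keeps the sequence short enough to train from scratch on a modest dataset.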

An image is worth 16x16 words in ViT; similarly, a malware is worth a 66x66 image in MalwareViT. After computing the frequencies of the 66 opcodes obtained by disassembling the malicious binary, we sort them in ascending order of total frequency, normalize them to the interval from 0 to 255, treat them as pixel values, and arrange one copy horizontally and one vertically to form a two-dimensional array. Since some studies have shown that the rarer an opcode, the better it distinguishes malware, we apply an "inverse frequency" operation so that the smaller the frequency, the larger the grayscale value. The value at each intersection of a row and a column is taken as the maximum of the two, which yields the co-occurrence matrix. Finally, we save these matrices as images of size 66x66.

$\text{Inverse frequency value} = \dfrac{255}{\text{normalized frequency} + 1}$
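The preprocessing step above can be sketched in a few lines. This is a minimal sketch assuming NumPy and Pillow; the function name, the epsilon guarding the division, and the handling of ties are assumptions, not the repository's released code.

```python
import numpy as np
from PIL import Image

NUM_OPCODES = 66  # opcode vocabulary size used throughout the README

def opcode_freqs_to_image(freqs, out_path="sample.png"):
    """Turn the 66 opcode frequencies of one sample into a 66x66 grayscale image.

    `freqs` holds the raw opcode counts extracted from the disassembled (.asm)
    file; the normalization details here are illustrative assumptions.
    """
    freqs = np.asarray(freqs, dtype=np.float64)
    assert freqs.shape == (NUM_OPCODES,)

    # 1) Sort ascending by frequency, then normalize to [0, 255].
    sorted_freqs = np.sort(freqs)
    span = sorted_freqs.max() - sorted_freqs.min() + 1e-9
    normalized = 255.0 * (sorted_freqs - sorted_freqs.min()) / span

    # 2) Inverse frequency: rarer opcodes map to larger grayscale values.
    inverse = 255.0 / (normalized + 1.0)

    # 3) Co-occurrence matrix: each cell is the max of its row and column value.
    cooc = np.maximum(inverse[:, None], inverse[None, :])   # (66, 66)

    Image.fromarray(cooc.astype(np.uint8), mode="L").save(out_path)
    return cooc
```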

Todo List:

  • Update the code.

  • Evaluate the suitability of our approach against two benchmarks: the MalImg dataset and the Microsoft Malware Classification Challenge dataset.

  • Compare performance experimentally against state-of-the-art techniques.

  • Upload the paper to arXiv.
