Text Classification · Transformers · Safetensors · roberta · Generated from Trainer
cedricbonhomme committed
Commit eb48d26 · verified · 1 Parent(s): 97369e2

End of training

Files changed (2)
  1. README.md +18 -49
  2. emissions.csv +1 -1
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  library_name: transformers
- license: cc-by-4.0
+ license: mit
  base_model: roberta-base
  tags:
  - generated_from_trainer
@@ -9,55 +9,29 @@ metrics:
  model-index:
  - name: vulnerability-severity-classification-roberta-base
    results: []
- datasets:
- - CIRCL/vulnerability-scores
  ---
 
- # VLAI: A RoBERTa-Based Model for Automated Vulnerability Severity Classification
-
- # Severity classification
-
- This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the dataset [CIRCL/vulnerability-scores](https://huggingface.co/datasets/CIRCL/vulnerability-scores).
-
- The model was presented in the paper [VLAI: A RoBERTa-Based Model for Automated Vulnerability Severity Classification](https://huggingface.co/papers/2507.03607) [[arXiv](https://arxiv.org/abs/2507.03607)].
-
- **Abstract:** VLAI is a transformer-based model that predicts software vulnerability severity levels directly from text descriptions. Built on RoBERTa, VLAI is fine-tuned on over 600,000 real-world vulnerabilities and achieves over 82% accuracy in predicting severity categories, enabling faster and more consistent triage ahead of manual CVSS scoring. The model and dataset are open-source and integrated into the Vulnerability-Lookup service.
-
- You can read [this page](https://www.vulnerability-lookup.org/user-manual/ai/) for more information.
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # vulnerability-severity-classification-roberta-base
+
+ This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4988
+ - Accuracy: 0.8239
 
  ## Model description
 
- It is a classification model and is aimed to assist in classifying vulnerabilities by severity based on their descriptions.
-
- ## How to get started with the model
-
- ```python
- from transformers import AutoModelForSequenceClassification, AutoTokenizer
- import torch
-
- labels = ["low", "medium", "high", "critical"]
-
- model_name = "CIRCL/vulnerability-severity-classification-roberta-base"
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModelForSequenceClassification.from_pretrained(model_name)
- model.eval()
-
- test_description = "SAP NetWeaver Visual Composer Metadata Uploader is not protected with a proper authorization, allowing unauthenticated agent to upload potentially malicious executable binaries \
- that could severely harm the host system. This could significantly affect the confidentiality, integrity, and availability of the targeted system."
- inputs = tokenizer(test_description, return_tensors="pt", truncation=True, padding=True)
-
- # Run inference
- with torch.no_grad():
-     outputs = model(**inputs)
-     predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
-
- # Print results
- print("Predictions:", predictions)
- predicted_class = torch.argmax(predictions, dim=-1).item()
- print("Predicted severity:", labels[predicted_class])
- ```
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
 
  ## Training procedure
 
@@ -72,20 +46,15 @@ The following hyperparameters were used during training:
  - lr_scheduler_type: linear
  - num_epochs: 5
 
- It achieves the following results on the evaluation set:
- - Loss: 0.5053
- - Accuracy: 0.8195
-
-
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:-----:|:---------------:|:--------:|
- | 0.6458 | 1.0 | 14962 | 0.6352 | 0.7394 |
- | 0.4643 | 2.0 | 29924 | 0.5741 | 0.7702 |
- | 0.5519 | 3.0 | 44886 | 0.5261 | 0.7922 |
- | 0.3822 | 4.0 | 59848 | 0.5054 | 0.8111 |
- | 0.344 | 5.0 | 74810 | 0.5053 | 0.8195 |
+ | 0.5754 | 1.0 | 14998 | 0.6411 | 0.7426 |
+ | 0.5831 | 2.0 | 29996 | 0.5794 | 0.7704 |
+ | 0.4704 | 3.0 | 44994 | 0.5280 | 0.7950 |
+ | 0.3535 | 4.0 | 59992 | 0.5139 | 0.8138 |
+ | 0.3464 | 5.0 | 74990 | 0.4988 | 0.8239 |
 
 
  ### Framework versions
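
This commit drops the quick-start snippet from the README. For readers of the diff who still want to run the retrained checkpoint, here is a minimal inference sketch reconstructed from the removed example; the checkpoint name comes from the old README, and the severity labels are read from the model config rather than hard-coded, since the new card no longer documents them:

```python
# Minimal inference sketch based on the example removed in this commit.
# Checkpoint name taken from the old README; verify the label mapping
# via model.config.id2label rather than relying on a fixed list.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "CIRCL/vulnerability-severity-classification-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Hypothetical vulnerability description used only for illustration.
description = (
    "An upload endpoint lacks authorization checks, letting an "
    "unauthenticated attacker store and execute malicious binaries."
)
inputs = tokenizer(description, return_tensors="pt", truncation=True, padding=True)

with torch.no_grad():
    logits = model(**inputs).logits

probabilities = torch.softmax(logits, dim=-1)
predicted = probabilities.argmax(dim=-1).item()
print("Predicted severity:", model.config.id2label[predicted])
```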
emissions.csv CHANGED
@@ -1,2 +1,2 @@
  timestamp,project_name,run_id,experiment_id,duration,emissions,emissions_rate,cpu_power,gpu_power,ram_power,cpu_energy,gpu_energy,ram_energy,energy_consumed,country_name,country_iso_code,region,cloud_provider,cloud_region,os,python_version,codecarbon_version,cpu_count,cpu_model,gpu_count,gpu_model,longitude,latitude,ram_total_size,tracking_mode,on_cloud,pue
- 2025-12-27T02:30:50,codecarbon,053a0694-69e4-4f2a-b6b7-c48517c95402,5b0fa12a-3dd7-45bb-9766-cc326314d9f1,13934.410282625999,0.6770843193932654,4.859081264726949e-05,42.5,598.0692513088488,755.7507977485657,0.16420767100260394,3.3485438799440677,2.919559131755385,6.432310682702044,Luxembourg,LUX,,,,Linux-6.8.0-90-generic-x86_64-with-glibc2.39,3.12.3,2.8.4,224,Intel(R) Xeon(R) Platinum 8480+,4,4 x NVIDIA L40S,6.1661,49.7498,2015.3354606628418,machine,N,1.0
+ 2026-01-02T13:33:24,codecarbon,6f8649af-63f5-4be4-be38-b3a5e984904d,5b0fa12a-3dd7-45bb-9766-cc326314d9f1,14058.473727937904,0.6821544073612349,4.852265050690487e-05,42.5,627.1709501771732,755.7507977485657,0.16565008514747048,3.369631439036325,2.9451950664628326,6.480476590646617,Luxembourg,LUX,,,,Linux-6.8.0-90-generic-x86_64-with-glibc2.39,3.12.3,2.8.4,224,Intel(R) Xeon(R) Platinum 8480+,4,4 x NVIDIA L40S,6.1661,49.7498,2015.3354606628418,machine,N,1.0
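
The emissions.csv rows follow CodeCarbon's standard column layout: duration in seconds, energy in kWh, emissions in kg CO2eq. A minimal sketch for pulling the headline numbers out of the updated row; the file path and the single-data-row layout are assumptions based on the diff above:

```python
# Minimal sketch for reading the CodeCarbon emissions.csv shown above.
# Assumes the file sits in the working directory and holds one data row.
import csv

with open("emissions.csv", newline="") as f:
    row = next(csv.DictReader(f))

print(f"duration:        {float(row['duration']) / 3600:.2f} h")
print(f"emissions:       {float(row['emissions']):.3f} kg CO2eq")
print(f"energy_consumed: {float(row['energy_consumed']):.3f} kWh")
```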