CompRealVul_Bin Dataset

Dataset Summary

CompRealVul_Bin is a binary-level dataset derived from the CompRealVul_C dataset, designed for training and evaluating machine learning models on vulnerability detection in compiled binaries. This dataset contains cross-compiled .exe binaries corresponding to each C function in the source dataset, packaged as a single .zip archive.

The dataset bridges the gap between source-level vulnerability datasets and practical binary-level learning pipelines.
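
If you only need the two files rather than a full clone of the repository, they can be fetched individually with huggingface_hub (a minimal sketch; the repo id is taken from this card and the filenames from the Dataset Structure section below):

from huggingface_hub import hf_hub_download

# Download the metadata CSV and the ZIP archive from the dataset repo
metadata_path = hf_hub_download(
    repo_id="compAgent/CompRealVul_Bin",
    filename="CompRealVul_metadata.csv",
    repo_type="dataset",
)
zip_path = hf_hub_download(
    repo_id="compAgent/CompRealVul_Bin",
    filename="CompRealVul_Bin.zip",
    repo_type="dataset",
)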

Key Features

  • ✅ One compiled binary per sample of the original CompRealVul_C dataset
  • ✅ Labeled with vulnerability information (binary class label)
  • ✅ Suitable for binary-focused tasks such as vulnerability detection, similarity learning, and binary classification (see the split sketch after this list)
  • ✅ Provided as a ZIP archive for efficient download and storage
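
For example, the metadata alone is enough to set up a label-stratified train/test split before touching any binaries (a minimal sketch using pandas and scikit-learn; the column names are those documented in the Dataset Structure section below):

import pandas as pd
from sklearn.model_selection import train_test_split

# Split the metadata into train/test partitions, stratified on the label
# so both partitions keep the same vulnerable / non-vulnerable ratio
df = pd.read_csv("CompRealVul_metadata.csv")
train_df, test_df = train_test_split(
    df, test_size=0.2, stratify=df["label"], random_state=42
)
print(len(train_df), "training rows,", len(test_df), "test rows")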

Dataset Structure

  • CompRealVul_Bin.zip: Contains all compiled .exe binary files
  • CompRealVul_metadata.csv: Metadata file with the following fields:
      Column           Description
      file_name        Name of the .exe file
      function_name    Name of the target function inside the .exe file
      label            Vulnerability label (1.0 = vulnerable, 0.0 = non-vulnerable)

Example:

file_name,function_name,label
func_123.exe,func_123,1.0
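
Before training, it is worth checking that the metadata and the archive agree, i.e. that every file_name listed in the CSV has a matching entry in the ZIP (a minimal sketch; it assumes both files are in the working directory):

import zipfile
import pandas as pd

df = pd.read_csv("CompRealVul_metadata.csv")

with zipfile.ZipFile("CompRealVul_Bin.zip", "r") as zip_ref:
    # Strip any leading folder component from the archive entries so they
    # can be compared against the bare file_name values
    archive_names = {name.split("/")[-1] for name in zip_ref.namelist()}

missing = [name for name in df["file_name"] if name not in archive_names]
print(f"{len(missing)} metadata rows have no matching binary in the archive")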

Usage

The metadata can be loaded with pandas, and individual binaries can then be pulled out of the ZIP archive with Python's standard zipfile module:

import pandas as pd
import zipfile

# Load metadata (if pandas raises a UnicodeDecodeError, try encoding="latin-1")
df = pd.read_csv("CompRealVul_metadata.csv")

# Access the ZIP archive
with zipfile.ZipFile("CompRealVul_Bin.zip", "r") as zip_ref:
    # Extract the binary referenced by the first metadata row
    file_name = df.iloc[0]["file_name"]
    zip_ref.extract(file_name, "extracted_binaries/")

# Read the raw binary data
with open(f"extracted_binaries/{file_name}", "rb") as f:
    binary_data = f.read()
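
For batch processing it is usually faster to read the binaries straight out of the archive instead of extracting them to disk. The sketch below pairs each binary's raw bytes with its label; the byte-histogram feature is just an illustrative choice, not something defined by the dataset:

import zipfile
import numpy as np
import pandas as pd

df = pd.read_csv("CompRealVul_metadata.csv")

features, labels = [], []
with zipfile.ZipFile("CompRealVul_Bin.zip", "r") as zip_ref:
    for _, row in df.iterrows():
        # Read the raw bytes of one binary without extracting it
        data = zip_ref.read(row["file_name"])
        # Toy feature: normalized 256-bin byte histogram
        hist = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
        features.append(hist / max(len(data), 1))
        labels.append(row["label"])

X = np.stack(features)
y = np.array(labels)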

License

This dataset is released under the GPL-3.0 license.

Citation

@misc{comprealvul_bin,
  author       = {Compote},
  title        = {CompRealVul_Bin: A Binary Dataset for Vulnerability Detection},
  howpublished = {\url{https://huggingface.co/datasets/compAgent/CompRealVul_Bin}},
  year         = {2025}
}