Add initial dataset card for CodeSense

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +120 -0
README.md ADDED
@@ -0,0 +1,120 @@
+ ---
+ language:
+ - en
+ task_categories:
+ - text-generation
+ tags:
+ - code-reasoning
+ - benchmark
+ - python
+ - c
+ - java
+ - software-engineering
+ - llm-evaluation
+ license: unknown
+ ---
+
+ # CodeSense: A Real-World Benchmark and Dataset for Code Semantic Reasoning
+
+ This repository contains the dataset and resources for **CodeSense**, the first benchmark for evaluating Large Language Models (LLMs) on fine-grained code semantic reasoning tasks in real-world software engineering contexts. The benchmark was presented in the paper [CodeSense: a Real-World Benchmark and Dataset for Code Semantic Reasoning](https://huggingface.co/papers/2506.00750).
+
+ CodeSense aims to bridge the gap between existing synthetic or educational coding problems and the practical demands of software engineering. It uses Python, C, and Java software projects from real-world repositories, collecting execution traces to construct a ground-truth dataset for detailed semantic reasoning tasks.
+
+ - **Paper:** [https://huggingface.co/papers/2506.00750](https://huggingface.co/papers/2506.00750)
+ - **Project Page:** [https://codesense-bench.github.io/](https://codesense-bench.github.io/)
+ - **Code Repository:** [https://github.com/codesense-bench/codesense-codes](https://github.com/codesense-bench/codesense-codes)
+
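+ ## Loading the Dataset
+
+ A minimal sketch of loading the benchmark with the `datasets` library. The repository id and split name below are placeholders for illustration; substitute this dataset's actual id and configuration names.
+
+ ```python
+ from datasets import load_dataset
+
+ # Hypothetical repo id and split name; check the dataset page for the real ones.
+ ds = load_dataset("codesense-bench/CodeSense", split="test")
+
+ # Inspect one example to see the available fields.
+ print(ds[0])
+ ```
+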
+ ## Codebase Overview
+ The associated code repository ([codesense-bench/codesense-codes](https://github.com/codesense-bench/codesense-codes)) contains three main components, covering execution tracing, benchmark dataset creation, and LLM evaluation:
+
+ ### Benchmark Collection
+ - **Purpose:** Scripts to process and clean raw execution traces.
+ - **Description:** Converts raw traces into task-specific datasets for code understanding and reasoning benchmarks; a sketch of the idea follows below.
+
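+ As a minimal sketch under an assumed schema (the field names below are illustrative, not the dataset's actual format), one raw trace record could be turned into a value-prediction task instance like this:
+
+ ```python
+ # Hypothetical raw trace record; the real CodeSense schema may differ.
+ raw_trace = {
+     "source": "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))",
+     "inputs": {"x": 12, "lo": 0, "hi": 10},
+     "return_value": 10,
+ }
+
+ def to_value_prediction_task(trace):
+     """Turn a raw execution trace into one task-specific example:
+     predict the function's return value for the given inputs."""
+     prompt = (
+         f"Given the function:\n{trace['source']}\n"
+         f"and the inputs {trace['inputs']}, what value does it return?"
+     )
+     return {"prompt": prompt, "ground_truth": trace["return_value"]}
+
+ print(to_value_prediction_task(raw_trace))
+ ```
+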
+ ### Tracing Framework
+ - **Purpose:** Tools for collecting execution traces.
+ - **Description:** Traces Python, C, and Java programs to capture their runtime behavior and execution steps; a Python sketch follows below.
+
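+ For Python, line-level traces can be captured with the standard-library `sys.settrace` hook. This is a minimal sketch of the idea, not the repository's actual implementation; C and Java need different mechanisms (e.g., debugger- or instrumentation-based tracing).
+
+ ```python
+ import sys
+
+ events = []
+
+ def tracer(frame, event, arg):
+     # Record each executed line together with the locals at that point.
+     if event == "line":
+         events.append((frame.f_lineno, dict(frame.f_locals)))
+     return tracer
+
+ def clamp(x, lo, hi):
+     return max(lo, min(x, hi))
+
+ sys.settrace(tracer)   # install the trace hook
+ clamp(12, 0, 10)
+ sys.settrace(None)     # uninstall it
+
+ for lineno, local_vars in events:
+     print(lineno, local_vars)
+ ```
+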
+ ### LLM Evaluation
+ - **Purpose:** Scripts for evaluating Large Language Models (LLMs) on the task-specific datasets.
+ - **Description:** Runs evaluations, computes metrics, and benchmarks model performance on the curated datasets; a minimal sketch of such a loop closes this overview.
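+
+ The evaluation loop can be sketched as follows. `query_model` is a placeholder for whatever LLM client the scripts actually use, and the repository's metrics may be more involved than exact match.
+
+ ```python
+ def query_model(prompt: str) -> str:
+     # Placeholder for a real LLM call (API client, local model, etc.);
+     # it returns a canned answer here so the sketch runs end to end.
+     return "10"
+
+ def evaluate(examples):
+     """Exact-match accuracy of model answers against ground-truth values."""
+     correct = sum(
+         query_model(ex["prompt"]).strip() == str(ex["ground_truth"])
+         for ex in examples
+     )
+     return correct / len(examples)
+
+ examples = [{"prompt": "What does clamp(12, 0, 10) return?", "ground_truth": 10}]
+ print(f"exact-match accuracy: {evaluate(examples):.2%}")
+ ```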