Modalities: Tabular, Text · Formats: parquet · Languages: English · Tags: code · Libraries: Datasets, Dask · License: mit
jugalgajjar committed · Commit e8dc9be · Parent: e03fc04

Update README.md (+110 −94)
README.md CHANGED
@@ -1,71 +1,91 @@
- ---
- license: mit
- language:
- - en
- tags:
- - code
- ---
  # MultiLang Code Parser Dataset (MLCPD)

- ## Dataset Description

- The MultiLang Code Parser Dataset (MLCPD) is a comprehensive multi-language code dataset designed to benchmark language-agnostic AI code parsers. It currently offers a filtered version of the StarCoder dataset, parsed with language-specific parsers, with future plans to unify outputs into a standard JSON format for complete AST representation.

- ### Key Features

- - **Cleaned and Filtered Code**: Samples have been processed to remove outliers in line length and code size
- - **Quality Metrics**: Each sample includes metadata on average line length and line count, along with AST node count and error count
- - **Multi-Language Support**: 10 programming languages, represented in separate subsets
- - **Consistent Format**: All samples follow the same Parquet structure for easy processing

- ### Dataset Size

- The complete dataset is approximately 35 GB. Individual language files vary in size, with the largest being C++ (5.85 GB) and the smallest Ruby (1.71 GB).

- ### Dataset Statistics

- | Language   | Sample Count | Avg. Line Length | Avg. Line Count |
- |------------|--------------|------------------|-----------------|
- | C          | 700,821      | 28.08            | 61.76           |
- | C++        | 707,641      | 28.16            | 87.88           |
- | C#         | 705,203      | 29.53            | 44.26           |
- | Go         | 700,331      | 25.18            | 68.22           |
- | Java       | 711,922      | 30.85            | 54.40           |
- | JavaScript | 687,775      | 27.69            | 44.15           |
- | Python     | 706,126      | 32.67            | 54.70           |
- | Ruby       | 703,473      | 27.35            | 27.41           |
- | Scala      | 702,833      | 35.30            | 44.38           |
- | TypeScript | 695,597      | 29.18            | 36.89           |

- ## Dataset Structure

- The dataset is organized into separate Parquet files for each programming language:
- - `c_parsed_1.parquet` ... `c_parsed_4.parquet` - C samples
- - `cpp_parsed_1.parquet` ... `cpp_parsed_4.parquet` - C++ samples
- - `c_sharp_parsed_1.parquet` ... `c_sharp_parsed_4.parquet` - C# samples
- - `go_parsed_1.parquet` ... `go_parsed_4.parquet` - Go samples
- - `java_parsed_1.parquet` ... `java_parsed_4.parquet` - Java samples
- - `javascript_parsed_1.parquet` ... `javascript_parsed_4.parquet` - JavaScript samples
- - `python_parsed_1.parquet` ... `python_parsed_4.parquet` - Python samples
- - `ruby_parsed_1.parquet` ... `ruby_parsed_4.parquet` - Ruby samples
- - `scala_parsed_1.parquet` ... `scala_parsed_4.parquet` - Scala samples
- - `typescript_parsed_1.parquet` ... `typescript_parsed_4.parquet` - TypeScript samples

- Within each file, data is stored with the following schema:

- ```
- - language: string (the programming language of the code sample)
- - code: string (the complete code content)
- - avg_line_length: float (average character count per line)
- - line_count: integer (total number of lines in the code)
- - lang_specific_parse: string (tree-sitter parse output of the code sample)
- - ast_node_count: integer (total number of nodes in the AST)
- - num_errors: integer (total number of parse errors in the code)
- ```

- Each sample is stored as a row in the Parquet file with these seven columns.

- ## How to Access the Dataset

  ### Using the Hugging Face `datasets` Library
@@ -79,21 +99,21 @@ pip install datasets

 #### Import Library

- ```python
  from datasets import load_dataset
  ```

 #### Load the Entire Dataset

- ```python
  dataset = load_dataset(
      "jugalgajjar/MultiLang-Code-Parser-Dataset"
  )
  ```

- #### Load a Specific Language

- ```python
  dataset = load_dataset(
      "jugalgajjar/MultiLang-Code-Parser-Dataset",
      data_files="python_parsed_1.parquet"
@@ -102,7 +122,7 @@ dataset = load_dataset(

 #### Stream Data

- ```python
  dataset = load_dataset(
      "jugalgajjar/MultiLang-Code-Parser-Dataset",
      data_files="python_parsed_1.parquet",
@@ -112,7 +132,7 @@ dataset = load_dataset(

 #### Access Data Content (After Downloading)

- ```python
  try:
      for example in dataset["train"].take(5):
          print(example)
@@ -125,40 +145,36 @@ except Exception as e:

 You can also manually download specific language files from the Hugging Face repository page:

- 1. Visit `https://huggingface.co/datasets/jugalgajjar/MultiLang-Code-Parser-Dataset`
- 2. Navigate to the "Files" tab
- 3. Click on the language file you want to download (e.g., `python_parsed_1.parquet`)
- 4. Use the download button to save the file locally
-
- ## Dataset Creation
-
- This dataset was created through the following process:
-
- 1. Original code samples were collected from the StarCoder dataset ([URL](https://huggingface.co/datasets/bigcode/starcoderdata))
- 2. Statistical analysis was performed to identify quality metrics
- 3. Outliers were removed using the IQR (interquartile range) method
- 4. Samples were filtered to remove excessively long or short code examples
- 5. Data was normalized and standardized across languages
- 6. Metadata (average line length and line count) was calculated for each sample
- 7. Data was serialized in the efficient Parquet format for optimal storage and access speed
- 8. Code samples from each language were parsed using language-specific tree-sitter parsers
- 9. Metadata (AST node count and number of errors in the code) was recorded for each sample
- 10. Final data was split into four files per language and stored in the Parquet format
-
- ## Citation
-
- If you use this dataset in your research or project, please cite it as follows:
-
- ```bibtex
- @misc{mlcpd2025,
-   author = {Jugal Gajjar and Kamalasankari Subramaniakuppusamy and Kaustik Ranaware},
-   title = {Filtered CodeStar Dataset Mini},
-   year = {2025},
-   publisher = {Hugging Face},
-   howpublished = {\url{https://huggingface.co/datasets/jugalgajjar/MultiLang-Code-Parser-Dataset}}
- }
- ```

- ## License

- This dataset is released under the MIT License. See the LICENSE file for more details.
  # MultiLang Code Parser Dataset (MLCPD)

+ [![License](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
+ [![Python](https://img.shields.io/badge/python-3.11%2B-blue)](https://www.python.org/downloads/)
+ [![HuggingFace](https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Dataset-blue)](https://huggingface.co/datasets/jugalgajjar/MultiLang-Code-Parser-Dataset)
+
+ **MultiLang-Code-Parser-Dataset (MLCPD)** provides a large-scale, unified dataset of parsed source code across 10 major programming languages, represented under a universal schema that captures syntax, semantics, and structure in a consistent format.
+
+ Each entry corresponds to one parsed source file and includes:
+ - Language metadata
+ - Code-level statistics (lines, errors, AST nodes)
+ - Universal Schema JSON (normalized structural representation)
+
+ MLCPD enables robust cross-language analysis, code understanding, and representation learning by providing a consistent, language-agnostic data structure suitable for both traditional ML and modern LLM-based workflows.
+
+ ---

20
 
21
+ ```
22
+ MultiLang-Code-Parser-Dataset/
23
+ β”œβ”€β”€ c_parsed_1.parquet
24
+ β”œβ”€β”€ c_parsed_2.parquet
25
+ β”œβ”€β”€ c_parsed_3.parquet
26
+ β”œβ”€β”€ c_parsed_4.parquet
27
+ β”œβ”€β”€ c_sharp_parsed_1.parquet
28
+ β”œβ”€β”€ ...
29
+ └── typescript_parsed_4.parquet
30
+ ```
31
+ Each file corresponds to one partition of a language (~175k rows each).
32
 
33
+ Each record contains:
34
 
35
+ | Field | Type | Description |
36
+ |--------|------|-------------|
37
+ | `language` | `str` | Programming language name |
38
+ | `code` | `str` | Raw source code |
39
+ | `avg_line_length` | `float` | Average line length |
40
+ | `line_count` | `int` | Number of lines |
41
+ | `lang_specific_parse` | `str` | TreeSitter parse output |
42
+ | `ast_node_count` | `int` | Number of AST nodes |
43
+ | `num_errors` | `int` | Parse errors |
44
+ | `universal_schema` | `str` | JSON-formatted unified schema |
 
 
45
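As a quick orientation to the record layout above, the sketch below builds a single illustrative row following this schema (the values are invented for demonstration, not drawn from the dataset; a real row would come from e.g. `pd.read_parquet("python_parsed_1.parquet")`) and shows how the JSON-encoded `universal_schema` column is decoded:

```python
import json

# Illustrative record following the MLCPD schema table above.
# All values here are made up for demonstration purposes.
record = {
    "language": "python",
    "code": "def add(a, b):\n    return a + b\n",
    "avg_line_length": 15.5,
    "line_count": 2,
    "lang_specific_parse": "(module (function_definition ...))",
    "ast_node_count": 12,
    "num_errors": 0,
    # universal_schema is stored as a JSON *string*, so it must be parsed:
    "universal_schema": json.dumps({"type": "module", "children": []}),
}

schema = json.loads(record["universal_schema"])
print(schema["type"])  # -> module
```

The only schema-specific detail to remember is that `universal_schema` arrives as a string and needs one `json.loads` call per row before structural use.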
 
+ ---

+ ## 📊 Key Statistics

+ | Metric | Value |
+ |--------|--------|
+ | Total Languages | 10 |
+ | Total Files | 40 |
+ | Total Records | 7,021,722 |
+ | Successful Conversions | 7,021,718 (99.9999%) |
+ | Failed Conversions | 4 (3 in C, 1 in C++) |
+ | Disk Size | ~105 GB (Parquet format) |
+ | In-Memory Size | ~580 GB (decompressed) |

+ The dataset is clean, lossless, and statistically balanced across languages.
+ It offers both per-language and combined cross-language representations.

+ ---

+ ## 🚀 Use Cases

+ MLCPD can be directly used for:
+ - Cross-language code representation learning
+ - Program understanding and code similarity tasks
+ - Syntax-aware pretraining for LLMs
+ - Code summarization, clone detection, and bug prediction
+ - Graph-based learning on universal ASTs
+ - Benchmark creation for cross-language code reasoning

+ ---

+ ## 🔍 Features

+ - **Universal Schema:** A unified structural representation harmonizing AST node types across languages.
+ - **Compact Format:** Stored in Apache Parquet, allowing fast access and efficient querying.
+ - **Cross-Language Compatibility:** Enables comparative code structure analysis across multiple programming ecosystems.
+ - **Near-Error-Free Parsing:** 99.9999% successful schema conversions across ~7M code files.
+ - **Statistical Richness:** Includes per-language metrics such as mean line count, AST size, and error ratios.
+ - **Ready for ML Pipelines:** Compatible with PyTorch, TensorFlow, Hugging Face Transformers, and graph-based models.

+ ---

+ ## 📥 How to Access the Dataset

  ### Using the Hugging Face `datasets` Library

 #### Import Library

+ ```python
  from datasets import load_dataset
  ```

 #### Load the Entire Dataset

+ ```python
  dataset = load_dataset(
      "jugalgajjar/MultiLang-Code-Parser-Dataset"
  )
  ```

+ #### Load a Specific Language File

+ ```python
  dataset = load_dataset(
      "jugalgajjar/MultiLang-Code-Parser-Dataset",
      data_files="python_parsed_1.parquet"

 #### Stream Data

+ ```python
  dataset = load_dataset(
      "jugalgajjar/MultiLang-Code-Parser-Dataset",
      data_files="python_parsed_1.parquet",

 #### Access Data Content (After Downloading)

+ ```python
  try:
      for example in dataset["train"].take(5):
          print(example)

 You can also manually download specific language files from the Hugging Face repository page:

+ 1. Visit https://huggingface.co/datasets/jugalgajjar/MultiLang-Code-Parser-Dataset
+ 2. Navigate to the Files tab
+ 3. Click on the language file you want (e.g., `python_parsed_1.parquet`)
+ 4. Use the Download button to save locally
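The manual steps above can also be scripted. The helper below is a hypothetical sketch that builds a direct-download URL for any file in the repository; it assumes the Hub's standard `/resolve/<revision>/<filename>` URL layout, which is a convention of the Hugging Face Hub rather than something documented in this README:

```python
# Hypothetical helper: direct-download URL for one Parquet file,
# assuming the Hub's standard /resolve/<revision>/<filename> layout.
REPO = "https://huggingface.co/datasets/jugalgajjar/MultiLang-Code-Parser-Dataset"

def file_url(filename: str, revision: str = "main") -> str:
    return f"{REPO}/resolve/{revision}/{filename}"

print(file_url("python_parsed_1.parquet"))
```

The resulting URL can be fetched with any HTTP client (e.g. `curl -L -O <url>`); the `huggingface_hub` library offers the same functionality with caching if it is installed.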

+ ---

+ ## 📜 License

+ This dataset is released under the MIT License.<br>
+ You are free to use, modify, and redistribute it for research and educational purposes, with proper attribution.

+ ---

+ ## 🙏 Acknowledgements

+ - [StarCoder Dataset](https://huggingface.co/datasets/bigcode/starcoderdata) for source code samples
+ - [Tree-sitter](https://tree-sitter.github.io/tree-sitter/) for parsing
+ - [Hugging Face](https://huggingface.co/) for dataset hosting

+ ---

+ ## 📧 Contact

+ For questions, collaborations, or feedback:

+ - **Primary Author**: Jugal Gajjar
+ - **Email**: [812jugalgajjar@gmail.com](mailto:812jugalgajjar@gmail.com)
+ - **LinkedIn**: [linkedin.com/in/jugal-gajjar/](https://www.linkedin.com/in/jugal-gajjar/)

+ ---

+ ⭐ If you find this dataset useful, consider liking it on the Hub, starring the [GitHub repository](https://github.com/JugalGajjar/MultiLang-Code-Parser-Dataset), and sharing work that uses it.