jugalgajjar committed · Commit 9b62ca6 · 1 Parent(s): 6966282

Update README.md

Files changed (1):
  1. README.md +70 -40
README.md CHANGED
 
1
+ ---
2
+ license: mit
3
+ language:
4
+ - en
5
+ tags:
6
+ - code
7
+ ---
8
  # Filtered StarCoder Dataset Mini
9
 
10
  ## Dataset Description
 
@@ -9,7 +16,11 @@ This dataset contains filtered and processed code samples from 12 popular progra
  - **Cleaned and Filtered Code**: Samples have been processed to remove outliers in terms of line length and code size
  - **Quality Metrics**: Each sample includes metadata about average line length and line count
  - **Multi-language Support**: 12 programming languages represented in separate subsets
- - **Consistent Format**: All samples follow the same JSON structure for easy processing
+ - **Consistent Format**: All samples follow the same Parquet structure for easy processing
+
+ ### Dataset Size
+
+ The complete dataset is approximately 14GB in size. Individual language files vary in size, with the largest being C++ (~2GB) and the smallest being Scala (~700MB).

  ### Dataset Statistics

 
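As a quick cross-check of these size figures, the repository contents can be listed programmatically. A minimal sketch, assuming a recent `huggingface_hub` release that provides `HfApi.list_repo_tree`:

```python
from huggingface_hub import HfApi

# List the Parquet files in the dataset repository along with their sizes (bytes).
api = HfApi()
for entry in api.list_repo_tree(
    "jugalgajjar/Filtered-StarCoder-Dataset-Mini",
    repo_type="dataset",
):
    if entry.path.endswith(".parquet"):
        print(f"{entry.path}: {entry.size / 1e9:.2f} GB")
```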
@@ -30,44 +41,30 @@

  ## Dataset Structure

- The dataset is organized with separate JSON files for each programming language:
- - `c.json` - C language samples
- - `cpp.json` - C++ language samples
- - `c-sharp.json` - C# language samples
- - `go.json` - Go language samples
- - `java.json` - Java language samples
- - `javascript.json` - JavaScript language samples
- - `kotlin.json` - Kotlin language samples
- - `python.json` - Python language samples
- - `ruby.json` - Ruby language samples
- - `rust.json` - Rust language samples
- - `scala.json` - Scala language samples
- - `typescript.json` - TypeScript language samples
-
- Within each file, data is stored in the following format:
-
- ```json
- {
-   "lang_0": {
-     "code": "ENTIRE CODE",
-     "avg_line_length": 35.7,
-     "line_count": 120
-   },
-   "lang_1": {
-     "code": "ANOTHER CODE EXAMPLE",
-     "avg_line_length": 42.3,
-     "line_count": 85
-   },
-   ...
-   "lang_N": {
-     "code": "ENTIRE CODE",
-     "avg_line_length": 28.9,
-     "line_count": 67
-   }
- }
+ The dataset is organized with separate Parquet files for each programming language:
+ - `c.parquet` - C language samples
+ - `cpp.parquet` - C++ language samples
+ - `c-sharp.parquet` - C# language samples
+ - `go.parquet` - Go language samples
+ - `java.parquet` - Java language samples
+ - `javascript.parquet` - JavaScript language samples
+ - `kotlin.parquet` - Kotlin language samples
+ - `python.parquet` - Python language samples
+ - `ruby.parquet` - Ruby language samples
+ - `rust.parquet` - Rust language samples
+ - `scala.parquet` - Scala language samples
+ - `typescript.parquet` - TypeScript language samples
+
+ Within each file, data is stored with the following schema:
+
+ ```
+ - language: string (the programming language of the code sample)
+ - code: string (the complete code content)
+ - avg_line_length: float (average character count per line)
+ - line_count: integer (total number of lines in the code)
  ```

- Each sample is assigned a unique integer ID (0 to N) prefixed with the language identifier.
+ Each sample is stored as a row in the Parquet file with these four columns.

  ## How to Access the Dataset

 
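The schema introduced above can be inspected directly by downloading one language file and reading it with `pandas` (plus `pyarrow` for Parquet support). A sketch using `scala.parquet` from the file list:

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Fetch a single language file and inspect the four columns described above.
path = hf_hub_download(
    repo_id="jugalgajjar/Filtered-StarCoder-Dataset-Mini",
    filename="scala.parquet",
    repo_type="dataset",
)
df = pd.read_parquet(path)
print(df.dtypes)   # expect: language/code as object, avg_line_length as float, line_count as int
print(df[["language", "avg_line_length", "line_count"]].head())
```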
@@ -81,16 +78,48 @@ This dataset is hosted on the Hugging Face Hub and can be easily accessed using
  pip install datasets
  ```

+ #### Import Library
+
+ ```python
+ from datasets import load_dataset
+ ```
+
  #### Load the Entire Dataset

  ```python
- TODO
+ dataset = load_dataset(
+     "jugalgajjar/Filtered-StarCoder-Dataset-Mini"
+ )
  ```

  #### Load a Specific Language

  ```python
- TODO
+ dataset = load_dataset(
+     "jugalgajjar/Filtered-StarCoder-Dataset-Mini",
+     data_files="scala.parquet"
+ )
+ ```
+
+ #### Stream Data
+
+ ```python
+ dataset = load_dataset(
+     "jugalgajjar/Filtered-StarCoder-Dataset-Mini",
+     data_files="scala.parquet",
+     streaming=True
+ )
+ ```
+
+ #### Access Data Content (After Downloading)
+
+ ```python
+ try:
+     for example in dataset["train"].take(5):
+         print(example)
+         print("-"*25)
+ except Exception as e:
+     print(f"An error occurred: {e}")
  ```

  ### Manual Download
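In addition to the `load_dataset` examples above, `data_files` also accepts a mapping, which loads several languages as named splits in one call. A sketch using file names from the Dataset Structure section:

```python
from datasets import load_dataset

# Load a few languages as separate named splits of one DatasetDict.
dataset = load_dataset(
    "jugalgajjar/Filtered-StarCoder-Dataset-Mini",
    data_files={
        "python": "python.parquet",
        "go": "go.parquet",
        "rust": "rust.parquet",
    },
)
print(dataset)                          # one split per language
print(dataset["go"][0]["line_count"])   # metadata columns are available per row
```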
 
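Building on the access example above, the metadata columns can be used to slice a language subset after loading. An illustrative sketch; the thresholds are arbitrary, not values used by the dataset:

```python
from datasets import load_dataset

# Keep only mid-sized Python samples using the line_count / avg_line_length columns.
dataset = load_dataset(
    "jugalgajjar/Filtered-StarCoder-Dataset-Mini",
    data_files="python.parquet",
    split="train",
)
filtered = dataset.filter(
    lambda ex: 20 <= ex["line_count"] <= 200 and ex["avg_line_length"] < 80
)
print(len(dataset), "->", len(filtered))
if len(filtered):
    print(filtered[0]["code"][:200])   # peek at the first remaining sample
```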
@@ -99,7 +128,7 @@ You can also manually download specific language files from the Hugging Face rep

  1. Visit `https://huggingface.co/datasets/jugalgajjar/Filtered-StarCoder-Dataset-Mini`
  2. Navigate to the "Files" tab
- 3. Click on the language file you want to download (e.g., `python.json`)
+ 3. Click on the language file you want to download (e.g., `python.parquet`)
  4. Use the download button to save the file locally

  ## Dataset Creation
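The manual steps above can also be scripted. A sketch using `huggingface_hub.snapshot_download`; narrow `allow_patterns` (e.g., to `python.parquet`) if only one language is needed:

```python
from huggingface_hub import snapshot_download

# Download all Parquet files from the dataset repository into the local cache.
local_dir = snapshot_download(
    repo_id="jugalgajjar/Filtered-StarCoder-Dataset-Mini",
    repo_type="dataset",
    allow_patterns=["*.parquet"],
)
print("Files saved under:", local_dir)
```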
 
@@ -112,6 +141,7 @@ This dataset was created through the following process:
  4. Samples were filtered to remove excessively long or short code examples
  5. Data was normalized and standardized across languages
  6. Metadata (average line length and line count) was calculated for each sample
+ 7. Final data was serialized in the efficient Parquet format for optimal storage and access speed

  The processing pipeline included steps to:
  - Remove code samples with abnormal line lengths (potential formatting issues)
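To make steps 4-7 concrete, here is an illustrative sketch of a filter-compute-serialize pass of this kind; the helper and thresholds are placeholders, not the pipeline actually used to build the dataset:

```python
import pyarrow as pa
import pyarrow.parquet as pq

def process_sample(language, code):
    """Filter one sample and attach the two metadata fields (illustrative only)."""
    lines = code.splitlines()
    if not lines:
        return None
    avg_line_length = sum(len(line) for line in lines) / len(lines)
    # Placeholder outlier thresholds; the real cut-offs are not documented here.
    if not (5 <= len(lines) <= 2000) or avg_line_length > 200:
        return None
    return {
        "language": language,
        "code": code,
        "avg_line_length": avg_line_length,
        "line_count": len(lines),
    }

raw_samples = ["def greet():\n    print('hello')\n    return True\n" * 3]
records = [r for r in (process_sample("python", code) for code in raw_samples) if r]
pq.write_table(pa.Table.from_pylist(records), "processed.parquet")  # step 7: serialize to Parquet
```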