1、datasets.load_dataset()

jy: this calls the `load_dataset` function defined in /datasets/load.py;

```python
ds = load_dataset("./dataset_script_jy/csv.py", data_files="nli_for_simcse.csv")
print(ds)
```

The signature of `load_dataset` is as follows:

```python
def load_dataset(
    path: str,
    name: Optional[str] = None,
    data_dir: Optional[str] = None,
    data_files: Union[Dict, List] = None,
    split: Optional[Union[str, Split]] = None,
    cache_dir: Optional[str] = None,
    features: Optional[Features] = None,
    download_config: Optional[DownloadConfig] = None,
    download_mode: Optional[GenerateMode] = None,
    ignore_verifications: bool = False,
    save_infos: bool = False,
    script_version: Optional[Union[str, Version]] = None,
    **config_kwargs,
) -> Union[DatasetDict, Dataset]:
```

(1) Functionality

  • Load a dataset. This method does the following under the hood:
  • 1) Download and import in the library the dataset loading script from `path` if it's not already cached inside the library.
    • Processing scripts are small python scripts that define the citation, info and format of the dataset, contain the URL to the original data files and the code to load examples from the original data files.
    • You can find some of the scripts here: https://github.com/huggingface/datasets/datasets and easily upload yours to share them using the CLI datasets-cli.
  • 2) Run the dataset loading script which will:
    • Download the dataset file from the original URL (see the script) if it's not already downloaded and cached.
    • Process and cache the dataset in typed Arrow tables.
      • Arrow tables are arbitrarily long, typed tables which can store nested objects and be mapped to numpy/pandas/python standard types.
      • They can be directly accessed from drive, loaded in RAM or even streamed over the web.
  • 3) Return a dataset built from the requested splits in `split` (default: all). A minimal sketch of both loading styles follows this list.
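
A minimal sketch of the two loading styles described above, i.e. loading by a dataset identifier versus by a local processing script. The `"squad"` identifier is borrowed from the parameter examples below and requires network access on the first call; the local script call is the one from the top of this note.

```python
from datasets import load_dataset

# 1) Load by identifier: the "squad" loading script is fetched (if not cached),
#    then the data files are downloaded and cached as typed Arrow tables.
squad = load_dataset("squad")

# 2) Load by local script: the script at `path` is imported directly and its
#    builder is run on the CSV file passed via data_files.
ds = load_dataset("./dataset_script_jy/csv.py", data_files="nli_for_simcse.csv")
```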

(2) Parameters

  • path: path to the dataset processing script with the dataset builder. Can be either:

    • a local path to a processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py'
    • a dataset identifier on the HuggingFace AWS bucket (list all available datasets and ids with datasets.list_datasets()), e.g. 'squad', 'glue' or 'openai/webtext'
  • name: defining the name of the dataset configuration
  • data_files: defining the data_files of the dataset configuration
  • data_dir: defining the data_dir of the dataset configuration
  • split (datasets.Split or str): which split of the data to load.
    • If None, will return a dict with all splits (typically datasets.Split.TRAIN and datasets.Split.TEST).
    • If given, will return a single Dataset.
    • Splits can be combined and specified like in tensorflow-datasets.
  • cache_dir: directory to read/write data. Defaults to ~/datasets.
  • features (Optional datasets.Features): set the features type to use for this dataset.
  • download_config (Optional datasets.DownloadConfig): specific download configuration parameters.
  • download_mode (Optional datasets.GenerateMode): select the download/generate mode. Defaults to REUSE_DATASET_IF_EXISTS.
  • ignore_verifications: ignore the verifications of the downloaded/processed dataset information (checksums/size/splits/…)
  • save_infos: save the dataset information (checksums/size/splits/…)
  • script_version (Optional Union[str, datasets.Version]): if specified, the module will be loaded from the datasets repository at this version. By default it is set to the local version of the lib. Specifying a version that is different from your local version of the lib might cause compatibility issues.
  • **config_kwargs (Optional dict): keyword arguments to be passed to the datasets.BuilderConfig and used in the datasets.DatasetBuilder. A sketch combining data_files and cache_dir follows this list.
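
A hedged sketch of how `data_files` and `cache_dir` from the list above can be combined; only the local csv script path is taken from the example at the top of this note, while the train.csv/test.csv names and the ./hf_cache directory are hypothetical placeholders.

```python
from datasets import load_dataset

ds = load_dataset(
    "./dataset_script_jy/csv.py",
    # A dict maps split names to data files (these file names are hypothetical).
    data_files={"train": "train.csv", "test": "test.csv"},
    # Overrides the default cache directory (hypothetical path).
    cache_dir="./hf_cache",
)
print(ds)  # a DatasetDict with a "train" and a "test" split
```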

(3) Return value

  • `datasets.Dataset` or `datasets.DatasetDict` (a quick check is sketched after this list):

    • if split is not None: the dataset requested,
    • if split is None, a datasets.DatasetDict with each split.
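
A quick check of this return-type rule, reusing the call from the beginning of this note; `Dataset` and `DatasetDict` are both importable from the top-level `datasets` package.

```python
from datasets import Dataset, DatasetDict, load_dataset

# split=None (the default) -> a DatasetDict containing every split
ds = load_dataset("./dataset_script_jy/csv.py", data_files="nli_for_simcse.csv")
assert isinstance(ds, DatasetDict)

# split given -> a single Dataset
train = load_dataset("./dataset_script_jy/csv.py", data_files="nli_for_simcse.csv", split="train")
assert isinstance(train, Dataset)
```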

2、Processing scripts

(1) csv

```python
# coding=utf-8
import logging
from dataclasses import dataclass
from typing import List, Optional, Union

import pandas as pd
import pyarrow as pa

import datasets


logger = logging.getLogger(__name__)


@dataclass
class CsvConfig(datasets.BuilderConfig):
    """BuilderConfig for CSV."""

    sep: str = ","
    delimiter: Optional[str] = None
    header: Optional[Union[int, List[int], str]] = "infer"
    names: Optional[List[str]] = None
    column_names: Optional[List[str]] = None
    index_col: Optional[Union[int, str, List[int], List[str]]] = None
    usecols: Optional[Union[List[int], List[str]]] = None
    prefix: Optional[str] = None
    mangle_dupe_cols: bool = True
    engine: Optional[str] = None
    true_values: Optional[list] = None
    false_values: Optional[list] = None
    skipinitialspace: bool = False
    skiprows: Optional[Union[int, List[int]]] = None
    nrows: Optional[int] = None
    na_values: Optional[Union[str, List[str]]] = None
    keep_default_na: bool = True
    na_filter: bool = True
    verbose: bool = False
    skip_blank_lines: bool = True
    thousands: Optional[str] = None
    decimal: str = b"."
    lineterminator: Optional[str] = None
    quotechar: str = '"'
    quoting: int = 0
    escapechar: Optional[str] = None
    comment: Optional[str] = None
    encoding: Optional[str] = None
    dialect: str = None
    error_bad_lines: bool = True
    warn_bad_lines: bool = True
    skipfooter: int = 0
    doublequote: bool = True
    memory_map: bool = False
    float_precision: Optional[str] = None
    chunksize: int = 10_000
    features: datasets.Features = None

    def __post_init__(self):
        if self.delimiter is not None:
            self.sep = self.delimiter
        if self.column_names is not None:
            self.names = self.column_names

class Csv(datasets.ArrowBasedBuilder):
    BUILDER_CONFIG_CLASS = CsvConfig

    def _info(self):
        return datasets.DatasetInfo(features=self.config.features)

    def _split_generators(self, dl_manager):
        """We handle string, list and dicts in datafiles"""
        if not self.config.data_files:
            raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}")
        data_files = dl_manager.download_and_extract(self.config.data_files)
        if isinstance(data_files, (str, list, tuple)):
            # A plain path or list of paths: everything goes into a single "train" split.
            files = data_files
            if isinstance(files, str):
                files = [files]
            return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"files": files})]
        # A dict maps split names to one or more files: one SplitGenerator per split.
        splits = []
        for split_name, files in data_files.items():
            if isinstance(files, str):
                files = [files]
            splits.append(datasets.SplitGenerator(name=split_name, gen_kwargs={"files": files}))
        return splits

    def _generate_tables(self, files):
        schema = pa.schema(self.config.features.type) if self.config.features is not None else None
        for file_idx, file in enumerate(files):
            # Read each CSV file in chunks (see CsvConfig.chunksize) and convert
            # every pandas chunk into a typed Arrow table.
            csv_file_reader = pd.read_csv(
                file,
                iterator=True,
                sep=self.config.sep,
                header=self.config.header,
                names=self.config.names,
                index_col=self.config.index_col,
                usecols=self.config.usecols,
                prefix=self.config.prefix,
                mangle_dupe_cols=self.config.mangle_dupe_cols,
                engine=self.config.engine,
                true_values=self.config.true_values,
                false_values=self.config.false_values,
                skipinitialspace=self.config.skipinitialspace,
                skiprows=self.config.skiprows,
                nrows=self.config.nrows,
                na_values=self.config.na_values,
                keep_default_na=self.config.keep_default_na,
                na_filter=self.config.na_filter,
                verbose=self.config.verbose,
                skip_blank_lines=self.config.skip_blank_lines,
                thousands=self.config.thousands,
                decimal=self.config.decimal,
                lineterminator=self.config.lineterminator,
                quotechar=self.config.quotechar,
                quoting=self.config.quoting,
                escapechar=self.config.escapechar,
                comment=self.config.comment,
                encoding=self.config.encoding,
                dialect=self.config.dialect,
                error_bad_lines=self.config.error_bad_lines,
                warn_bad_lines=self.config.warn_bad_lines,
                skipfooter=self.config.skipfooter,
                doublequote=self.config.doublequote,
                memory_map=self.config.memory_map,
                float_precision=self.config.float_precision,
                chunksize=self.config.chunksize,
            )
            for batch_idx, df in enumerate(csv_file_reader):
                pa_table = pa.Table.from_pandas(df, schema=schema)
                # Uncomment for debugging (will print the Arrow table size and elements)
                # logger.warning(f"pa_table: {pa_table} num rows: {pa_table.num_rows}")
                # logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows)))
                yield (file_idx, batch_idx), pa_table

```
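
The heart of `_generate_tables` is the chunked pandas read followed by converting each chunk into a `pyarrow.Table`. The standalone sketch below reproduces just that pattern outside the builder; the CSV file name is a hypothetical placeholder and the chunk size mirrors the `CsvConfig.chunksize` default.

```python
import pandas as pd
import pyarrow as pa

# iterator=True + chunksize makes read_csv return an iterator of DataFrame chunks.
csv_file_reader = pd.read_csv("some_file.csv", iterator=True, chunksize=10_000)  # hypothetical file

for batch_idx, df in enumerate(csv_file_reader):
    # Each pandas chunk becomes one typed Arrow table; the builder yields
    # ((file_idx, batch_idx), pa_table) pairs that datasets then caches on disk.
    pa_table = pa.Table.from_pandas(df)
    print(batch_idx, pa_table.num_rows)
```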