---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 171055893.125
num_examples: 1087
download_size: 170841790
dataset_size: 171055893.125
task_categories:
- text-to-image
annotations_creators:
- machine-generated
language:
- en
---
# Disclaimer
This dataset was inspired by https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions
# Dataset Card for A subset of Vivian Maier's photographs BLIP captions
The captions are generated with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
Each row of the dataset contains an `image` key and a `caption` key: `image` is a variable-size PIL JPEG image, and `caption` is its accompanying text caption. Only a train split is provided.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)