---
dataset_info:
  features:
    - name: messages
      dtype: string
    - name: prompt_length
      dtype: int64
    - name: response_length
      dtype: int64
  splits:
    - name: train
      num_bytes: 528492
      num_examples: 2000
  download_size: 467665
  dataset_size: 528492
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: mit
task_categories:
  - text-generation
size_categories:
  - 1K<n<10K
tags:
  - synthetic
  - benchmarking
  - vllm
  - llm-inference
---

# Synthetic Dataset: Low Context, Medium Generation

## Dataset Description

This is a synthetic benchmark dataset designed to test LLM inference performance in low-context, medium-generation scenarios. The dataset consists of 2,000 samples of randomly generated tokens that simulate workloads where models receive short prompts but generate longer responses.

## Use Cases

This dataset is ideal for benchmarking:

- Creative writing and content generation
- Code generation from brief descriptions
- Story/article generation from short prompts
- Generation throughput and tokens-per-second optimization

## Dataset Characteristics

- Number of Samples: 2,000
- Prompt Length Distribution: Beta distribution (skewed toward lower values; see the sampling sketch below)
  - Alpha (α): 2.0 (controls the left tail)
  - Beta (β): 6.0 (controls the right tail)
  - Range: 10-120 tokens
- Response Length Distribution: Normal distribution
  - Mean: 1,500 tokens
  - Standard deviation: 300 tokens
- Tokenizer: `meta-llama/Llama-3.1-8B-Instruct`
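
The generation script is not included in this repository; the following is a minimal sketch, assuming a Beta(2, 6) draw rescaled to the 10-120 token range and a clipped normal draw for response lengths, of how such lengths could be sampled:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_prompt_length() -> int:
    # Beta(alpha=2, beta=6) on [0, 1], rescaled to the 10-120 token range
    return int(round(10 + rng.beta(2.0, 6.0) * (120 - 10)))

def sample_response_length() -> int:
    # Normal(mean=1500, std=300), clipped to at least 1 token (assumption)
    return max(1, int(round(rng.normal(loc=1500, scale=300))))

print([(sample_prompt_length(), sample_response_length()) for _ in range(3)])
```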

## Dataset Structure

Each sample contains:

- `prompt`: A sequence of randomly generated tokens (low context)
- `prompt_length`: Number of tokens in the prompt
- `response_length`: Target number of tokens for the generated response

```python
{
    'prompt': str,
    'prompt_length': int,
    'response_length': int
}
```
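
For example, loading the dataset and inspecting the first sample (field names as documented above):

```python
from datasets import load_dataset

dataset = load_dataset("jonasluehrs-jaai/synthetic_dataset_low-mid", split="train")

sample = dataset[0]
print(sample["prompt_length"], sample["response_length"])
print(sample["prompt"][:80])  # first few characters of the random-token prompt
```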

## Token Generation

- Tokens are randomly sampled from the vocabulary of the Llama-3.1-8B-Instruct tokenizer (see the sketch below)
- Prompt lengths follow a beta distribution, creating a realistic skew toward shorter prompts
- Response lengths follow a normal distribution, simulating typical generation patterns
- Each sample is independently generated, with lengths drawn from the specified distributions
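
The exact generation code is not published with the dataset; as a rough sketch under these assumptions (uniform sampling with replacement over the full vocabulary), a random prompt of a given length could be produced like this:

```python
import random
from transformers import AutoTokenizer

# Note: the Llama 3.1 tokenizer is gated and requires accepting the license on Hugging Face
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

def random_prompt(num_tokens: int) -> str:
    # Draw token IDs uniformly (with replacement) from the tokenizer vocabulary and decode them
    token_ids = random.choices(range(tokenizer.vocab_size), k=num_tokens)
    return tokenizer.decode(token_ids, skip_special_tokens=True)

print(random_prompt(32))
```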

## Related Datasets

This dataset is part of a suite of three synthetic benchmark datasets, each designed for different workload patterns:

1. synthetic_dataset_high-low
   - High context (32k tokens), low generation (200 tokens)
   - Focus: Prompt processing efficiency, TTFT optimization
2. synthetic_dataset_mid-mid
   - Medium context (1k tokens), medium generation (1k tokens)
   - Focus: Balanced workload, realistic API scenarios
3. 🔷 synthetic_dataset_low-mid (this dataset)
   - Low context (10-120 tokens), medium generation (1.5k tokens)
   - Focus: Generation throughput, creative writing scenarios

## Benchmarking with vLLM

This dataset is designed for use with the vLLM inference framework. The vLLM engine supports a `min_tokens` parameter, allowing you to pass `min_tokens = max_tokens = response_length` for each prompt. This ensures that the response length follows the defined distribution.

### Setup

First, install vLLM and start the server:

```bash
pip install vllm

# Start the vLLM server
vllm serve meta-llama/Llama-3.1-8B-Instruct
```

### Usage Example

```python
from datasets import load_dataset
from openai import OpenAI

# Load the dataset
dataset = load_dataset("jonasluehrs-jaai/synthetic_dataset_low-mid")

# Initialize vLLM client (OpenAI-compatible API)
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

# Use a sample from the dataset
sample = dataset['train'][0]

# Make a completion request with a controlled response length
completion = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": sample['prompt']}],
    max_tokens=sample['response_length'],
    extra_body={"min_tokens": sample['response_length']},
)

print(f"Generated {len(completion.choices[0].message.content)} characters")
```

For more information, see the vLLM OpenAI-compatible server documentation.
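
To turn the single-request example into a rough throughput number, one option is a simple sequential loop over a subset of the dataset, reusing the `dataset` and `client` objects from above. A real benchmark would typically send requests concurrently; this sketch only illustrates the idea:

```python
import time

num_requests = 100  # quick run over a subset of the 2,000 samples
start = time.perf_counter()
generated_tokens = 0

for sample in dataset["train"].select(range(num_requests)):
    completion = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",
        messages=[{"role": "user", "content": sample["prompt"]}],
        max_tokens=sample["response_length"],
        extra_body={"min_tokens": sample["response_length"]},
    )
    # Count tokens actually generated, as reported by the server
    generated_tokens += completion.usage.completion_tokens

elapsed = time.perf_counter() - start
print(f"{generated_tokens / elapsed:.1f} generated tokens/s over {num_requests} requests")
```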

## License

This dataset is released under the MIT License.