| section | name | description | payload_json | version | created_at |
|---|---|---|---|---|---|
| sampling | iot23-train-dev-test | IoT-23 primary dataset sampling metadata (train/dev/test splits) | `{"mode":"full","seed":42,"total":1800,"by_split":{"train":1253,"dev":269,"test":278}}` | v1 | 2025-10-23T02:16:29.295365Z |
| sampling | e1-ood-datasets | Out-of-distribution datasets sampling metadata (CIC-IDS-2017, UNSW-NB15) | `{"mode":"full","seed":42,"n_requested":600,"deduplication":"content-based (5-tuple + bytes + duration + state + service)","datasets":{"cic-ids-2017":{"hf_id":"bvk/CICIDS-2017","total":600,"labels":{"Malicious":300,"Benign":300},"attack_families":{"NA":600},"unique_hashes":600},"unsw-nb15":{"hf_id":"Mireu-Lab/UNSW-NB15","total":600,"labels":{"Malicious":330,"Benign":270},"attack_families":{"Normal":270,"Exploits":178,"Fuzzers":81,"DoS":41,"Reconnaissance":9,"Backdoor":5,"Generic":8,"Analysis":4,"Shellcode":3,"Worms":1},"unique_hashes":600}}}` | v1 | 2025-10-23T02:16:29.295365Z |
| ood | cic-ids-2017-ood | CIC-IDS-2017 out-of-distribution test set (600 samples) | `{"hf_id":"bvk/CICIDS-2017","total":600,"labels":{"Malicious":300,"Benign":300},"attack_families":{"NA":600},"unique_hashes":600}` | v1 | 2025-10-23T02:16:29.295365Z |
| ood | unsw-nb15-ood | UNSW-NB15 out-of-distribution test set (600 samples) | `{"hf_id":"Mireu-Lab/UNSW-NB15","total":600,"labels":{"Malicious":330,"Benign":270},"attack_families":{"Normal":270,"Exploits":178,"Fuzzers":81,"DoS":41,"Reconnaissance":9,"Backdoor":5,"Generic":8,"Analysis":4,"Shellcode":3,"Worms":1},"unique_hashes":600}` | v1 | 2025-10-23T02:16:29.295365Z |
| provenance | dataset-sources | Original dataset sources and references | `{"iot23":{"source":"Avast AIC IoT-23 dataset","url":"https://www.stratosphereips.org/datasets-iot23","hf_id":"stratosphereips/iot-23"},"cic-ids-2017":{"source":"Canadian Institute for Cybersecurity IDS 2017","url":"https://www.unb.ca/cic/datasets/ids-2017.html","hf_id":"bvk/CICIDS-2017"},"unsw-nb15":{"source":"UNSW-NB15 Network Intrusion Dataset","url":"https://research.unsw.edu.au/projects/unsw-nb15-dataset","hf_id":"Mireu-Lab/UNSW-NB15"}}` | v1 | 2025-10-23T02:16:29.295365Z |
| notes | privacy-rationale | Explanation of private dataset hosting to prevent training contamination | `{"reason":"Training contamination prevention","details":"Datasets are private to preserve benchmark validity and enable fair model comparisons over time.","access":"Request access via GitHub Issues: https://github.com/intertwine/security-verifiers/issues"}` | v1 | 2025-10-23T02:16:29.295365Z |
# Security Verifiers E1: Network Log Anomaly Detection (Public Metadata)

> ⚠️ This is a **public, metadata-only** repository. The full datasets are hosted privately to prevent training contamination. See below for access instructions.
## Overview
E1 is a network log anomaly detection environment with calibrated classification and abstention. This repository contains only the sampling metadata that describes how the private datasets were constructed.
## Why Private Datasets?
Training contamination is a critical concern for benchmark integrity. If datasets leak into public training corpora:
- Models can memorize answers instead of learning to reason
- Evaluation metrics become unreliable
- Research reproducibility suffers
- True capabilities become obscured
By keeping evaluation datasets private with gated access, we:
- ✅ Preserve benchmark validity over time
- ✅ Enable fair model comparisons
- ✅ Maintain research integrity
- ✅ Allow controlled access for legitimate research
## Dataset Composition
The private E1 datasets include:
### Primary Dataset: IoT-23
- Samples: 1,800 network flows (train/dev/test splits)
- Source: IoT-23 botnet dataset
- Features: Network flow statistics, timestamps, protocols
- Labels: Benign vs Malicious with confidence scores
- Sampling: Stratified by label and split
### Out-of-Distribution Datasets
- CIC-IDS-2017: 600 samples (different attack patterns)
- UNSW-NB15: 600 samples (different network environment)
- Purpose: Test generalization and OOD detection
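The "stratified by label and split" strategy above can be sketched in a few lines. This is an illustrative reconstruction (the actual sampling pipeline is not public, so the function and field names are assumptions); only the fixed seed of 42 comes from the published metadata:

```python
import random
from collections import defaultdict

def stratified_sample(rows, key, n_per_group, seed=42):
    """Draw n_per_group rows from each stratum (e.g. each label), reproducibly.

    Illustrative sketch of stratified sampling; not the pipeline's actual code.
    """
    rng = random.Random(seed)      # fixed seed => reproducible draws
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row)
    sample = []
    for label in sorted(groups):   # stable iteration order across runs
        sample.extend(rng.sample(groups[label], min(n_per_group, len(groups[label]))))
    return sample

# Toy flows: 10 benign and 10 malicious; draw 3 from each class.
flows = ([{"label": "Benign", "id": i} for i in range(10)]
         + [{"label": "Malicious", "id": i} for i in range(10)])
picked = stratified_sample(flows, "label", 3)
```

Because the generator is seeded per call, repeating the draw with the same seed reproduces the same sample, which is what makes the published seed sufficient to rebuild the splits.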
## What's in This Repository?
This public repository contains:
**Sampling Metadata** (`sampling-*.json`):
- Dataset versions and sources
- Sampling strategies and random seeds
- Label distributions
- Split ratios
- Reproducibility parameters
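These records are plain JSON, so the reproducibility parameters can be checked programmatically. A minimal sketch, using the `iot23-train-dev-test` record's actual payload (the inline dict stands in for loading a `sampling-*.json` file):

```python
import json

# One sampling record, mirroring the "iot23-train-dev-test" row in this
# repository's metadata. In practice you would json.load() the file.
record = {
    "section": "sampling",
    "name": "iot23-train-dev-test",
    "payload_json": (
        '{"mode":"full","seed":42,"total":1800,'
        '"by_split":{"train":1253,"dev":269,"test":278}}'
    ),
    "version": "v1",
}

payload = json.loads(record["payload_json"])

# Reproducibility checks: the seed is fixed and split sizes sum to the total.
assert payload["seed"] == 42
assert payload["total"] == sum(payload["by_split"].values())
```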
**Tool Versions** (referenced in metadata):
- Exact versions of all preprocessing tools
- Dataset library versions
- Python environment specifications
**This README**: Instructions for requesting access
## Reward Components
E1 uses composable reward functions:
- Accuracy: Correctness of malicious/benign classification
- Calibration: Alignment between confidence and actual accuracy
- Abstention: Reward for declining on uncertain examples
- Asymmetric Costs: Higher penalty for false negatives (security context)
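A toy sketch of how such components might compose. The weights, the abstention credit, and the confidence scaling below are assumptions for illustration, not the environment's published reward values:

```python
def e1_reward(pred, truth, confidence, abstained,
              w_acc=1.0, w_fn=2.0, abstain_credit=0.3):
    """Toy composite reward; all constants are illustrative assumptions."""
    if abstained:
        # Abstention: partial credit for declining on uncertain examples.
        return abstain_credit
    if pred == truth:
        # Accuracy + calibration: correct answers earn more when the
        # stated confidence is high.
        return w_acc * confidence
    # Asymmetric cost: a missed attack (false negative) is penalized
    # harder than a false alarm.
    penalty = w_fn if truth == "Malicious" else 1.0
    return -penalty * confidence

print(e1_reward("Malicious", "Malicious", 0.8, False))  # confident true positive
print(e1_reward("Benign", "Malicious", 0.9, False))     # confident false negative
```

Scaling errors by confidence gives the calibration pressure: a confidently wrong answer loses more than a hesitant one, so the optimal policy matches its stated confidence to its actual accuracy or abstains.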
## Requesting Access

To access the full private datasets:
1. Open an access request issue: [Security Verifiers Issues](https://github.com/intertwine/security-verifiers/issues)
2. Use the title: "Dataset Access Request: E1"
3. Include:
   - Your name and affiliation
   - Research purpose / use case
   - HuggingFace username
   - A commitment not to redistribute or publish the raw data
Approval criteria:
- Legitimate research or educational use
- Understanding of contamination concerns
- Agreement to usage terms
We typically respond within 2-3 business days.
## Citation
If you use this environment or metadata in your research:
```bibtex
@misc{security-verifiers-2025,
  title={Open Security Verifiers: Composable RL Environments for AI Safety},
  author={intertwine},
  year={2025},
  url={https://github.com/intertwine/security-verifiers},
  note={E1: Network Log Anomaly Detection}
}
```
## Related Resources
- GitHub Repository: intertwine/security-verifiers
- Documentation: See `EXECUTIVE_SUMMARY.md` and `PRD.md` in the repo
- Framework: Built on Prime Intellect Verifiers
- Other Environments: E2 (Config Verification), E3-E6 (in development)
## License
MIT License - See repository for full terms.
## Contact
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Built with ❤️ for the AI safety research community