WikiDBGraph Dataset
WikiDBGraph is a comprehensive dataset for database graph analysis, containing structural and semantic properties of 100,000 Wikidata-derived databases. The dataset includes graph representations, similarity metrics, community structures, and various statistical properties designed for federated learning research and database schema matching tasks.
Dataset Overview
This dataset provides graph-based analysis of database schemas, enabling research in:
- Database similarity and matching: Finding structurally and semantically similar databases
- Federated learning: Training machine learning models across distributed database pairs
- Graph analysis: Community detection, connected components, and structural properties
- Schema analysis: Statistical properties of database schemas including cardinality, entropy, and sparsity
Statistics
- Total Databases: 100,000
- Total Edges: 17,858,194 (at threshold 0.94)
- Connected Components: 6,109
- Communities: 6,133
- Largest Component: 10,703 nodes
- Modularity Score: 0.5366
Dataset Structure
The dataset consists of 15 files organized into six categories:
1. Graph Structure Files
graph_raw_0.94.dgl
DGL (Deep Graph Library) graph file containing the complete database similarity graph.
Structure:
- Nodes: 100,000 database IDs
- Edges: 17,858,194 pairs with similarity ≥ 0.94
- Node Data:
  - embedding: 768-dimensional node embeddings (if available)
- Edge Data:
  - weight: Edge similarity scores (float32)
  - gt_edge: Ground truth edge labels (float32)
Loading:
import dgl
import torch
# Load the graph
graphs, _ = dgl.load_graphs('graph_raw_0.94.dgl')
graph = graphs[0]
# Access nodes and edges
num_nodes = graph.num_nodes() # 100000
num_edges = graph.num_edges() # 17858194
# Access edge data
src, dst = graph.edges()
edge_weights = graph.edata['weight']
edge_labels = graph.edata['gt_edge']
# Access node embeddings (if available)
if 'embedding' in graph.ndata:
    node_embeddings = graph.ndata['embedding']  # shape: (100000, 768)
database_embeddings.pt
PyTorch tensor file containing pre-computed 768-dimensional embeddings for all databases.
Structure:
- Tensor shape: (100000, 768)
- Data type: float32
- Embeddings generated using BGE (BAAI General Embedding) model
Loading:
import torch
embeddings = torch.load('database_embeddings.pt', weights_only=True)
print(embeddings.shape) # torch.Size([100000, 768])
# Get embedding for specific database
db_idx = 42
db_embedding = embeddings[db_idx]
2. Edge Files (Database Pair Relationships)
filtered_edges_threshold_0.94.csv
Main edge list with database pairs having similarity ≥ 0.94.
Columns:
- src (float): Source database ID
- tgt (float): Target database ID
- similarity (float): Cosine similarity score in [0.94, 1.0]
- label (float): Ground truth label (0.0 or 1.0)
- edge (int): Edge indicator (always 1)
Loading:
import pandas as pd
edges = pd.read_csv('filtered_edges_threshold_0.94.csv')
print(f"Total edges: {len(edges):,}")
# Find highly similar pairs
high_sim = edges[edges['similarity'] >= 0.99]
print(f"Pairs with similarity ≥ 0.99: {len(high_sim):,}")
Example rows:
src tgt similarity label edge
26218.0 44011.0 0.9896456 0.0 1
26218.0 44102.0 0.9908572 0.0 1
edges_list_th0.6713.csv
Extended edge list with lower similarity threshold (≥ 0.6713).
Columns:
- src (str): Source database ID (padded format, e.g., "00000")
- tgt (str): Target database ID (padded format)
- similarity (float): Cosine similarity score in [0.6713, 1.0]
- label (float): Ground truth label
Loading:
import pandas as pd
edges = pd.read_csv('edges_list_th0.6713.csv')
# Database IDs are strings with leading zeros
print(edges['src'].head()) # "00000", "00000", "00000", ...
# Filter by similarity threshold
threshold = 0.90
filtered = edges[edges['similarity'] >= threshold]
edge_structural_properties_GED_0.94.csv
Detailed structural properties for database pairs at threshold 0.94.
Columns:
- db_id1 (int): First database ID
- db_id2 (int): Second database ID
- jaccard_table_names (float): Jaccard similarity of table names [0.0, 1.0]
- jaccard_columns (float): Jaccard similarity of column names [0.0, 1.0]
- jaccard_data_types (float): Jaccard similarity of data types [0.0, 1.0]
- hellinger_distance_data_types (float): Hellinger distance between data type distributions
- graph_edit_distance (float): Graph edit distance between schemas
- common_tables (int): Number of common table names
- common_columns (int): Number of common column names
- common_data_types (int): Number of common data types
Loading:
import pandas as pd
edge_props = pd.read_csv('edge_structural_properties_GED_0.94.csv')
# Find pairs with high structural similarity
high_jaccard = edge_props[edge_props['jaccard_columns'] >= 0.5]
print(f"Pairs with ≥50% column overlap: {len(high_jaccard):,}")
# Analyze graph edit distance
print(f"Mean GED: {edge_props['graph_edit_distance'].mean():.2f}")
print(f"Median GED: {edge_props['graph_edit_distance'].median():.2f}")
distdiv_results.csv
Distribution divergence metrics for database pairs.
Columns:
- src (int): Source database ID
- tgt (int): Target database ID
- distdiv (float): Distribution divergence score
- overlap_ratio (float): Column overlap ratio [0.0, 1.0]
- shared_column_count (int): Number of shared columns
Loading:
import pandas as pd
distdiv = pd.read_csv('distdiv_results.csv')
# Find pairs with low divergence (more similar distributions)
similar_dist = distdiv[distdiv['distdiv'] < 15.0]
# Analyze overlap patterns
high_overlap = distdiv[distdiv['overlap_ratio'] >= 0.3]
print(f"Pairs with ≥30% overlap: {len(high_overlap):,}")
all_join_size_results_est.csv
Estimated join sizes for databases (cardinality estimation).
Columns:
- db_id (int): Database ID
- all_join_size (float): Estimated size of the full outer join across all tables
Loading:
import pandas as pd
join_sizes = pd.read_csv('all_join_size_results_est.csv')
# Analyze join complexity
print(f"Mean join size: {join_sizes['all_join_size'].mean():.2f}")
print(f"Max join size: {join_sizes['all_join_size'].max():.2f}")
# Large databases (complex joins)
large_dbs = join_sizes[join_sizes['all_join_size'] > 1000]
3. Node Files (Database Properties)
node_structural_properties.csv
Comprehensive structural properties for each database schema.
Columns:
- db_id (int): Database ID
- num_tables (int): Number of tables in the database
- num_columns (int): Total number of columns across all tables
- foreign_key_density (float): Ratio of foreign keys to possible relationships
- avg_table_connectivity (float): Average number of connections per table
- median_table_connectivity (float): Median connections per table
- min_table_connectivity (float): Minimum connections for any table
- max_table_connectivity (float): Maximum connections for any table
- data_type_proportions (str): JSON string with data type distribution
- data_types (str): JSON string with count of each data type
- wikidata_properties (int): Number of Wikidata properties used
Loading:
import pandas as pd
import json
node_props = pd.read_csv('node_structural_properties.csv')
# Parse JSON columns
node_props['data_type_dist'] = node_props['data_type_proportions'].apply(
lambda x: json.loads(x.replace("'", '"'))
)
# Analyze database complexity
complex_dbs = node_props[node_props['num_tables'] > 10]
print(f"Databases with >10 tables: {len(complex_dbs):,}")
# Foreign key density analysis
print(f"Mean FK density: {node_props['foreign_key_density'].mean():.4f}")
Example row:
db_id: 88880
num_tables: 2
num_columns: 24
foreign_key_density: 0.0833
avg_table_connectivity: 1.5
data_type_proportions: {'string': 0.417, 'wikibase-entityid': 0.583}
data_volume.csv
Storage size information for each database.
Columns:
- db_id (str/int): Database ID (may have leading zeros)
- volume_bytes (int): Total data volume in bytes
Loading:
import pandas as pd
volumes = pd.read_csv('data_volume.csv')
# Convert to more readable units
volumes['volume_mb'] = volumes['volume_bytes'] / (1024 * 1024)
volumes['volume_gb'] = volumes['volume_bytes'] / (1024 * 1024 * 1024)
# Find largest databases
top_10 = volumes.nlargest(10, 'volume_bytes')
print(top_10[['db_id', 'volume_gb']])
4. Column-Level Statistics
column_cardinality.csv
Distinct value counts for all columns.
Columns:
- db_id (str/int): Database ID
- table_name (str): Table name
- column_name (str): Column name
- n_distinct (int): Number of distinct values
Loading:
import pandas as pd
cardinality = pd.read_csv('column_cardinality.csv')
# High cardinality columns (potentially good as keys)
high_card = cardinality[cardinality['n_distinct'] > 100]
# Analyze cardinality distribution
print(f"Mean distinct values: {cardinality['n_distinct'].mean():.2f}")
print(f"Median distinct values: {cardinality['n_distinct'].median():.2f}")
Example rows:
db_id table_name column_name n_distinct
6 scholarly_articles article_title 275
6 scholarly_articles article_description 197
6 scholarly_articles pub_med_id 269
column_entropy.csv
Shannon entropy for column value distributions.
Columns:
- db_id (str): Database ID (padded format)
- table_name (str): Table name
- column_name (str): Column name
- entropy (float): Shannon entropy value [0.0, ∞)
Loading:
import pandas as pd
entropy = pd.read_csv('column_entropy.csv')
# High entropy columns (high information content)
high_entropy = entropy[entropy['entropy'] > 3.0]
# Low entropy columns (low diversity)
low_entropy = entropy[entropy['entropy'] < 0.5]
# Distribution analysis
print(f"Mean entropy: {entropy['entropy'].mean():.3f}")
Example rows:
db_id table_name column_name entropy
00001 descendants_of_john_i full_name 3.322
00001 descendants_of_john_i gender 0.881
00001 descendants_of_john_i father_name 0.000
column_sparsity.csv
Missing value ratios for all columns.
Columns:
- db_id (str): Database ID (padded format)
- table_name (str): Table name
- column_name (str): Column name
- sparsity (float): Ratio of missing values [0.0, 1.0]
Loading:
import pandas as pd
sparsity = pd.read_csv('column_sparsity.csv')
# Dense columns (few missing values)
dense = sparsity[sparsity['sparsity'] < 0.1]
# Sparse columns (many missing values)
sparse = sparsity[sparsity['sparsity'] > 0.5]
# Quality assessment
print(f"Columns with >50% missing: {len(sparse):,}")
print(f"Mean sparsity: {sparsity['sparsity'].mean():.3f}")
Example rows:
db_id table_name column_name sparsity
00009 FamousPencilMoustacheWearers Name 0.000
00009 FamousPencilMoustacheWearers Biography 0.000
00009 FamousPencilMoustacheWearers ViafId 0.222
5. Clustering and Community Files
community_assignment_0.94.csv
Community detection results using Louvain algorithm.
Columns:
- node_id (int): Database ID
- partition (int): Community/partition ID
Loading:
import pandas as pd
communities = pd.read_csv('community_assignment_0.94.csv')
# Analyze community structure
community_sizes = communities['partition'].value_counts()
print(f"Number of communities: {len(community_sizes)}")
print(f"Largest community size: {community_sizes.max()}")
# Get databases in a specific community
community_1 = communities[communities['partition'] == 1]['node_id'].tolist()
Statistics:
- Total communities: 6,133
- Largest community: 4,825 nodes
- Modularity: 0.5366
cluster_assignments_dim2_sz100_msNone.csv
Clustering results from dimensionality reduction (e.g., t-SNE, UMAP).
Columns:
- db_id (int): Database ID
- cluster (int): Cluster ID
Loading:
import pandas as pd
clusters = pd.read_csv('cluster_assignments_dim2_sz100_msNone.csv')
# Analyze cluster distribution
cluster_sizes = clusters['cluster'].value_counts()
print(f"Number of clusters: {len(cluster_sizes)}")
# Get databases in a specific cluster
cluster_9 = clusters[clusters['cluster'] == 9]['db_id'].tolist()
6. Analysis Reports
analysis_0.94_report.txt
Comprehensive text report of graph analysis at threshold 0.94.
Contents:
- Graph statistics (nodes, edges)
- Connected components analysis
- Community detection results
- Top components and communities by size
Loading:
with open('analysis_0.94_report.txt', 'r') as f:
    report = f.read()
print(report)
Key Metrics:
- Total Nodes: 100,000
- Total Edges: 17,858,194
- Connected Components: 6,109
- Largest Component: 10,703 nodes
- Communities: 6,133
- Modularity: 0.5366
Usage Examples
Example 1: Finding Similar Database Pairs
import pandas as pd
# Load edges with high similarity
edges = pd.read_csv('filtered_edges_threshold_0.94.csv')
# Find database pairs with similarity > 0.98
high_sim_pairs = edges[edges['similarity'] >= 0.98]
print(f"Found {len(high_sim_pairs)} pairs with similarity ≥ 0.98")
# Get top 10 most similar pairs
top_pairs = edges.nlargest(10, 'similarity')
for idx, row in top_pairs.iterrows():
    print(f"DB {int(row['src'])} ↔ DB {int(row['tgt'])}: {row['similarity']:.4f}")
Example 2: Analyzing Database Properties
import pandas as pd
import json
# Load node properties
nodes = pd.read_csv('node_structural_properties.csv')
# Find complex databases
complex_dbs = nodes[
(nodes['num_tables'] > 10) &
(nodes['num_columns'] > 100)
]
print(f"Complex databases: {len(complex_dbs)}")
# Analyze data type distribution
for idx, row in complex_dbs.head().iterrows():
    db_id = row['db_id']
    types = json.loads(row['data_types'].replace("'", '"'))
    print(f"DB {db_id}: {types}")
Example 3: Loading and Analyzing the Graph
import dgl
import torch
import pandas as pd
# Load DGL graph
graphs, _ = dgl.load_graphs('graph_raw_0.94.dgl')
graph = graphs[0]
# Basic statistics
print(f"Nodes: {graph.num_nodes():,}")
print(f"Edges: {graph.num_edges():,}")
# Analyze degree distribution
in_degrees = graph.in_degrees()
out_degrees = graph.out_degrees()
print(f"Average in-degree: {in_degrees.float().mean():.2f}")
print(f"Average out-degree: {out_degrees.float().mean():.2f}")
# Find highly connected nodes
top_nodes = torch.topk(in_degrees, k=10)
print(f"Top 10 most connected databases: {top_nodes.indices.tolist()}")
Example 4: Federated Learning Pair Selection
import pandas as pd
# Load edges and structural properties
edges = pd.read_csv('filtered_edges_threshold_0.94.csv')
edge_props = pd.read_csv('edge_structural_properties_GED_0.94.csv')
# Align key dtypes (src/tgt are floats in the edge file) and merge
edges['src'] = edges['src'].astype(int)
edges['tgt'] = edges['tgt'].astype(int)
pairs = edges.merge(
    edge_props,
    left_on=['src', 'tgt'],
    right_on=['db_id1', 'db_id2'],
    how='inner'
)
# Select pairs for federated learning
# Criteria: high similarity + high column overlap + low GED
fl_candidates = pairs[
(pairs['similarity'] >= 0.98) &
(pairs['jaccard_columns'] >= 0.4) &
(pairs['graph_edit_distance'] <= 3.0)
]
print(f"FL candidate pairs: {len(fl_candidates)}")
# Sample pairs for experiments
sample = fl_candidates.sample(n=100, random_state=42)
Example 5: Column Statistics Analysis
import pandas as pd
# Load column-level statistics
cardinality = pd.read_csv('column_cardinality.csv')
entropy = pd.read_csv('column_entropy.csv')
sparsity = pd.read_csv('column_sparsity.csv')
# db_id formats differ across these files (integer vs. zero-padded string),
# so normalize the key before merging on (db_id, table_name, column_name)
for df in (cardinality, entropy, sparsity):
    df['db_id'] = df['db_id'].astype(str).str.zfill(5)
merged = cardinality.merge(entropy, on=['db_id', 'table_name', 'column_name'])
merged = merged.merge(sparsity, on=['db_id', 'table_name', 'column_name'])
# Find high-quality columns for machine learning
# Criteria: high cardinality, high entropy, low sparsity
quality_columns = merged[
(merged['n_distinct'] > 50) &
(merged['entropy'] > 2.0) &
(merged['sparsity'] < 0.1)
]
print(f"High-quality columns: {len(quality_columns)}")
Example 6: Community Analysis
import pandas as pd
# Load community assignments
communities = pd.read_csv('community_assignment_0.94.csv')
nodes = pd.read_csv('node_structural_properties.csv')
# Merge to get properties by community
community_props = communities.merge(
nodes,
left_on='node_id',
right_on='db_id'
)
# Analyze each community
for comm_id in community_props['partition'].unique()[:5]:
    comm_data = community_props[community_props['partition'] == comm_id]
    print(f"\nCommunity {comm_id}:")
    print(f"  Size: {len(comm_data)}")
    print(f"  Avg tables: {comm_data['num_tables'].mean():.2f}")
    print(f"  Avg columns: {comm_data['num_columns'].mean():.2f}")
Applications
1. Federated Learning Research
Use the similarity graph to identify database pairs for federated learning experiments. The high-similarity pairs (≥0.98) are ideal for horizontal federated learning scenarios.
2. Schema Matching
Leverage structural properties and similarity metrics for automated schema matching and integration tasks.
3. Database Clustering
Use embeddings and community detection results to group similar databases for analysis or optimization.
4. Data Quality Assessment
Column-level statistics (cardinality, entropy, sparsity) enable systematic data quality evaluation across large database collections.
5. Graph Neural Networks
The DGL graph format is ready for training GNN models for link prediction, node classification, or graph classification tasks.
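As a hedged illustration (not part of the released code), the sketch below feeds the graph into a two-layer GraphSAGE encoder with DGL and scores edges by the dot product of endpoint representations. The SAGEEncoder class, hidden size, and random-feature fallback are assumptions for demonstration only; a real run over 17.8M edges would typically use neighbor sampling and negative sampling for link prediction.
import dgl
import dgl.nn as dglnn
import torch
import torch.nn as nn

# Load the released similarity graph
graphs, _ = dgl.load_graphs('graph_raw_0.94.dgl')
graph = graphs[0]

class SAGEEncoder(nn.Module):
    """Two-layer GraphSAGE encoder producing node representations (illustrative)."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.conv1 = dglnn.SAGEConv(in_dim, hidden_dim, aggregator_type='mean')
        self.conv2 = dglnn.SAGEConv(hidden_dim, hidden_dim, aggregator_type='mean')

    def forward(self, g, feats):
        h = torch.relu(self.conv1(g, feats))
        return self.conv2(g, h)

# Use the stored embeddings as input features if present; otherwise fall back
# to random features (an assumption purely for demonstration)
if 'embedding' in graph.ndata:
    feats = graph.ndata['embedding'].float()
else:
    feats = torch.randn(graph.num_nodes(), 768)

model = SAGEEncoder(feats.shape[1], 128)
node_repr = model(graph, feats)  # shape: (100000, 128)

# Score existing edges; for link prediction these scores would be contrasted
# with scores of sampled negative (non-edge) pairs
src, dst = graph.edges()
edge_scores = (node_repr[src] * node_repr[dst]).sum(dim=1)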
Technical Details
Similarity Computation
- Method: BGE (BAAI General Embedding) model for semantic embeddings
- Metric: Cosine similarity
- Thresholds: Multiple thresholds available (0.6713, 0.94, 0.96)
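For example, the edge similarities can be approximately reproduced from the released embeddings. This is a minimal sketch, assuming database IDs index the rows of database_embeddings.pt:
import torch
import torch.nn.functional as F

embeddings = torch.load('database_embeddings.pt', weights_only=True)

# Cosine similarity for one pair; should roughly match the 'similarity'
# column of the edge files if IDs correspond to row indices (assumption)
def pair_similarity(db_a: int, db_b: int) -> float:
    return F.cosine_similarity(
        embeddings[db_a].unsqueeze(0),
        embeddings[db_b].unsqueeze(0)
    ).item()

print(pair_similarity(26218, 44011))

# Top-k most similar databases to a query database
query = F.normalize(embeddings[42].unsqueeze(0), dim=1)
scores = F.normalize(embeddings, dim=1) @ query.T
topk = torch.topk(scores.squeeze(1), k=6)  # the query itself ranks first at 1.0
print(topk.indices.tolist())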
Graph Construction
- Nodes: Database IDs (0 to 99,999)
- Edges: Database pairs with similarity above threshold
- Edge weights: Cosine similarity scores
- Format: DGL binary format for efficient loading
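A minimal sketch of rebuilding such a graph from the released edge list; the shipped graph_raw_0.94.dgl already contains this structure, so this is only illustrative:
import dgl
import pandas as pd
import torch

# Rebuild a similarity graph from the thresholded edge list
edges = pd.read_csv('filtered_edges_threshold_0.94.csv')
src = torch.tensor(edges['src'].astype(int).values)
dst = torch.tensor(edges['tgt'].astype(int).values)

g = dgl.graph((src, dst), num_nodes=100_000)
g.edata['weight'] = torch.tensor(edges['similarity'].values, dtype=torch.float32)
g.edata['gt_edge'] = torch.tensor(edges['label'].values, dtype=torch.float32)

# Optionally attach the released embeddings as node features
g.ndata['embedding'] = torch.load('database_embeddings.pt', weights_only=True)
print(g)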
Community Detection
- Algorithm: Louvain method
- Modularity: 0.5366 (indicates well-defined communities)
- Resolution: Default parameter
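The released community_assignment_0.94.csv already stores the detected partitions. The sketch below shows one way to run weighted Louvain on the edge list with NetworkX; the exact parameters of the released run are not specified here, and building a NetworkX graph over all 17.8M edges is memory-intensive, so subsampling may be needed in practice.
import networkx as nx
import pandas as pd

# Louvain community detection on the thresholded edge list (illustrative)
edges = pd.read_csv('filtered_edges_threshold_0.94.csv')
G = nx.Graph()
G.add_weighted_edges_from(
    zip(edges['src'].astype(int), edges['tgt'].astype(int), edges['similarity'])
)

communities = nx.community.louvain_communities(G, weight='weight', seed=42)
print(f"Communities found: {len(communities)}")
print(f"Modularity: {nx.community.modularity(G, communities, weight='weight'):.4f}")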
Data Processing Pipeline
- Schema extraction from Wikidata databases
- Semantic embedding generation using BGE
- Similarity computation across all pairs
- Graph construction and filtering
- Property extraction and statistical analysis
- Community detection and clustering
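As a hedged sketch of steps 2-3 of this pipeline, the snippet below embeds two hypothetical schema descriptions with a 768-dimensional BGE checkpoint and compares them by cosine similarity. The exact checkpoint and schema serialization used to produce the released embeddings are assumptions here.
from sentence_transformers import SentenceTransformer
import numpy as np

# Assumed 768-dim BGE checkpoint; the released embeddings may use a different one
model = SentenceTransformer('BAAI/bge-base-en-v1.5')

# Hypothetical schema serializations, for illustration only
schema_a = "Database: scholarly_articles. Columns: article_title, article_description, pub_med_id"
schema_b = "Database: research_papers. Columns: title, abstract, pubmed_identifier"

emb = model.encode([schema_a, schema_b], normalize_embeddings=True)
print(f"Cosine similarity: {float(np.dot(emb[0], emb[1])):.4f}")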
Data Format Standards
Database ID Formats
- Integer IDs: Used in most files (0-99999)
- Padded strings: Used in some files (e.g., "00000", "00001")
- Conversion: str(db_id).zfill(5) converts an integer ID to the padded-string form (see the sketch below)
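A small sketch of normalizing IDs before joining files that use different formats; the file and column choices here are only for illustration:
import pandas as pd

# Keep padded strings intact on read (read_csv would otherwise parse "00001" as 1)
entropy = pd.read_csv('column_entropy.csv', dtype={'db_id': str})
volumes = pd.read_csv('data_volume.csv')

# Integer -> padded string
volumes['db_id'] = volumes['db_id'].astype(int).astype(str).str.zfill(5)

# Padded string -> integer (the reverse direction)
entropy['db_id_int'] = entropy['db_id'].astype(int)

# Now both frames share the padded-string key
merged = entropy.merge(volumes, on='db_id', how='left')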
Missing Values
- Numerical columns: May contain NaN or -0.0
- String columns: Empty strings or missing entries
- Sparsity column: Explicit ratio of missing values
Data Types
- float32: Similarity scores, weights, entropy
- float64: Statistical measures, ratios
- int64: Counts, IDs
- string: Names, identifiers
File Size Information
Approximate file sizes:
- graph_raw_0.94.dgl: ~2.5 GB
- database_embeddings.pt: ~300 MB
- filtered_edges_threshold_0.94.csv: ~800 MB
- edge_structural_properties_GED_0.94.csv: ~400 MB
- node_structural_properties.csv: ~50 MB
- Column statistics CSVs: ~20-50 MB each
- Other files: <10 MB each
Citation
If you use this dataset in your research, please cite:
@article{wu2025wikidbgraph,
title={WikiDBGraph: Large-Scale Database Graph of Wikidata for Collaborative Learning},
author={Wu, Zhaomin and Wang, Ziyang and He, Bingsheng},
journal={arXiv preprint arXiv:2505.16635},
year={2025}
}
License
This dataset is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0).
Acknowledgments
This dataset is derived from Wikidata and builds upon the WikiDBGraph system for graph-based database analysis and federated learning. We acknowledge the Wikidata community for providing the underlying data infrastructure.