Tasks: Object Detection
Modalities: Image
Languages: English
Size: 10K<n<100K
Libraries: FiftyOne
	Update README.md
README.md CHANGED
```diff
@@ -63,7 +63,9 @@ The GQA (Visual Reasoning in the Real World) dataset is a large-scale visual que
 
 This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 35000 samples.
 
-Note: This 
+Note: This is a 35,000 sample subset which does not contain questions, only the scene graph annotations as detection-level attributes.
+
+You can find the recipe notebook for creating the dataset [here](https://colab.research.google.com/drive/1IjyvUSFuRtW2c5ErzSnz1eB9syKm0vo4?usp=sharing)
 
 ## Installation
 
@@ -95,9 +97,7 @@ session = fo.launch_app(dataset)
 ## Scene Graph Annotations
 
 - Each of the 113K images in GQA is associated with a detailed scene graph describing the objects, attributes and relations present.
-- 
 - The scene graphs are based on a cleaner version of the Visual Genome scene graphs.
-- 
 - For each image, the scene graph is provided as a dictionary (sceneGraph) containing:
   - Image metadata like width, height, location, weather
   - A dictionary (objects) mapping each object ID to its name, bounding box coordinates, attributes, and relations[6]
```
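The second hunk header above references `session = fo.launch_app(dataset)`. As a minimal sketch of how this 35,000-sample subset might be loaded and browsed, assuming FiftyOne's Hugging Face Hub integration (`fiftyone.utils.huggingface.load_from_hub`) is available in your FiftyOne version, and using a placeholder repo ID since the dataset's Hub ID is not shown in this excerpt:

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Placeholder repo ID -- substitute this dataset's actual Hugging Face Hub ID,
# which is not named in this excerpt
dataset = fouh.load_from_hub("<org>/<gqa-scene-graphs-subset>")

print(dataset)  # 35,000 samples; scene graph info stored as detection-level attributes

# Browse the images and their scene graph annotations in the App,
# as in the README's launch snippet
session = fo.launch_app(dataset)
```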


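To make the bullet points about the per-image `sceneGraph` dictionary concrete, here is an illustrative Python sketch of a single entry. The object ID and values are invented; the keys mirror the structure described in the diff (image metadata plus an `objects` map of name, bounding box, attributes, and relations) in the published GQA scene graph format:

```python
# Illustrative only: one sceneGraph entry with invented values
scene_graph = {
    "width": 640,            # image metadata
    "height": 480,
    "location": "outdoors",
    "weather": "sunny",
    "objects": {
        "1159123": {         # object ID -> object record
            "name": "dog",
            "x": 120, "y": 80, "w": 200, "h": 150,  # bounding box coordinates
            "attributes": ["brown", "small"],
            "relations": [
                {"name": "to the left of", "object": "1159456"},
            ],
        },
    },
}
```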