# Adversarial Inception v3

**Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements, including the use of [Label Smoothing](https://paperswithcode.com/method/label-smoothing), factorized 7 x 7 convolutions, and an [auxiliary classifier](https://paperswithcode.com/method/auxiliary-classifier) to propagate label information lower down the network (along with the use of batch normalization for layers in the side head). The key building block is an [Inception Module](https://paperswithcode.com/method/inception-v3-module).
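
The Label Smoothing mentioned above replaces hard one-hot training targets with a mixture of the one-hot vector and a uniform distribution over all classes. A minimal sketch, assuming a smoothing factor of 0.1 (the factor here is illustrative, not taken from this model's training recipe):

```py
import torch
import torch.nn.functional as F

def smoothed_targets(labels, num_classes, eps=0.1):
    # (1 - eps) on the true class, eps spread uniformly over all classes
    one_hot = F.one_hot(labels, num_classes).float()
    return one_hot * (1.0 - eps) + eps / num_classes

logits = torch.randn(4, 1000)          # dummy model outputs
labels = torch.randint(0, 1000, (4,))  # dummy integer class labels
targets = smoothed_targets(labels, num_classes=1000)
loss = (-targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```

Recent PyTorch versions expose the same behaviour directly via `torch.nn.CrossEntropyLoss(label_smoothing=0.1)`.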
This particular model was trained for the study of adversarial examples (adversarial training).
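
An adversarial example is an input perturbed so that the model misclassifies it. Purely as an illustration, here is the generic Fast Gradient Sign Method; this is not the specific recipe used to train these weights, and the `eps` value is an arbitrary assumption:

```py
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.01):
    # Perturb the input along the sign of the loss gradient to increase
    # the loss as much as possible within an L-infinity ball of radius eps.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()
```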
The weights for this model were ported from [Tensorflow/Models](https://github.com/tensorflow/models).

## How do I use this model on an image?

To load a pretrained model:

```py
import timm
model = timm.create_model('adv_inception_v3', pretrained=True)
model.eval()
```
To load and preprocess the image:

```py
import urllib
from PIL import Image
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform

config = resolve_data_config({}, model=model)
transform = create_transform(**config)

url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
urllib.request.urlretrieve(url, filename)
img = Image.open(filename).convert('RGB')
tensor = transform(img).unsqueeze(0)  # transform and add batch dimension
```
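
To see what preprocessing the model expects, you can print the resolved config. The input size, interpolation, and crop fraction below come from this model's metadata; the mean/std shown are the Inception-style normalization this family typically uses in timm, and the exact dictionary contents may vary with the timm version:

```py
print(config)
# e.g. {'input_size': (3, 299, 299), 'interpolation': 'bicubic',
#       'mean': (0.5, 0.5, 0.5), 'std': (0.5, 0.5, 0.5), 'crop_pct': 0.875}
```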
To get the model predictions:

```py
import torch

with torch.no_grad():
    out = model(tensor)
probabilities = torch.nn.functional.softmax(out[0], dim=0)
print(probabilities.shape)
# prints: torch.Size([1000])
```
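
Note that `out[0]` selects the first (and only) image in the batch; for a batch of several images, keep the batch dimension and apply softmax per row:

```py
# probabilities for every image in a batch, shape (batch_size, 1000)
batch_probabilities = torch.nn.functional.softmax(out, dim=1)
```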
| To get the top-5 predictions class names: | |
| ```py | |
| # Get imagenet class mappings | |
| url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") | |
| urllib.request.urlretrieve(url, filename) | |
| with open("imagenet_classes.txt", "r") as f: | |
| categories = [s.strip() for s in f.readlines()] | |
| # Print top categories per image | |
| top5_prob, top5_catid = torch.topk(probabilities, 5) | |
| for i in range(top5_prob.size(0)): | |
| print(categories[top5_catid[i]], top5_prob[i].item()) | |
| # prints class names and probabilities like: | |
| # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] | |
| ``` | |
Replace the model name with the variant you want to use, e.g. `adv_inception_v3`. You can find the IDs in the model summaries at the top of this page.

To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction); just change the name of the model you want to use.
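
For instance, a minimal sketch using the `features_only` argument of `timm.create_model`, which returns intermediate feature maps instead of classification logits:

```py
import timm
import torch

feature_model = timm.create_model('adv_inception_v3', pretrained=True, features_only=True)
feature_model.eval()
with torch.no_grad():
    features = feature_model(torch.randn(1, 3, 299, 299))  # dummy 299 x 299 input
for f in features:
    print(f.shape)  # one tensor per feature stage
```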
## How do I finetune this model?

You can finetune any of the pre-trained models just by changing the classifier (the last layer).

```py
model = timm.create_model('adv_inception_v3', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```
To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
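
A minimal sketch of such a loop; `NUM_FINETUNE_CLASSES`, `my_dataloader`, the optimizer, and the learning rate are all placeholder assumptions for illustration:

```py
import timm
import torch

NUM_FINETUNE_CLASSES = 10  # hypothetical number of target classes
model = timm.create_model('adv_inception_v3', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # illustrative choice
criterion = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in my_dataloader:  # hypothetical DataLoader over your dataset
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```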
## How do I train this model?

You can follow the [timm recipe scripts](../training_script) for training a new model afresh.
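
Training afresh means starting from randomly initialized weights, which you get by passing `pretrained=False`:

```py
import timm

# randomly initialized weights for training from scratch
model = timm.create_model('adv_inception_v3', pretrained=False, num_classes=1000)
```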
## Citation

```BibTeX
@article{DBLP:journals/corr/abs-1804-00097,
  author    = {Alexey Kurakin and
               Ian J. Goodfellow and
               Samy Bengio and
               Yinpeng Dong and
               Fangzhou Liao and
               Ming Liang and
               Tianyu Pang and
               Jun Zhu and
               Xiaolin Hu and
               Cihang Xie and
               Jianyu Wang and
               Zhishuai Zhang and
               Zhou Ren and
               Alan L. Yuille and
               Sangxia Huang and
               Yao Zhao and
               Yuzhe Zhao and
               Zhonglin Han and
               Junjiajia Long and
               Yerkebulan Berdibekov and
               Takuya Akiba and
               Seiya Tokui and
               Motoki Abe},
  title     = {Adversarial Attacks and Defences Competition},
  journal   = {CoRR},
  volume    = {abs/1804.00097},
  year      = {2018},
  url       = {http://arxiv.org/abs/1804.00097},
  archivePrefix = {arXiv},
  eprint    = {1804.00097},
  timestamp = {Thu, 31 Oct 2019 16:31:22 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1804-00097.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<!--
Type: model-index
Collections:
- Name: Adversarial Inception v3
  Paper:
    Title: Adversarial Attacks and Defences Competition
    URL: https://paperswithcode.com/paper/adversarial-attacks-and-defences-competition
Models:
- Name: adv_inception_v3
  In Collection: Adversarial Inception v3
  Metadata:
    FLOPs: 7352418880
    Parameters: 23830000
    File Size: 95549439
    Architecture:
    - 1x1 Convolution
    - Auxiliary Classifier
    - Average Pooling
    - Batch Normalization
    - Convolution
    - Dense Connections
    - Dropout
    - Inception-v3 Module
    - Max Pooling
    - ReLU
    - Softmax
    Tasks:
    - Image Classification
    Training Data:
    - ImageNet
    ID: adv_inception_v3
    Crop Pct: '0.875'
    Image Size: '299'
    Interpolation: bicubic
    Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/inception_v3.py#L456
    Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/adv_inception_v3-9e27bd63.pth
    Results:
    - Task: Image Classification
      Dataset: ImageNet
      Metrics:
        Top 1 Accuracy: 77.58%
        Top 5 Accuracy: 93.74%
-->