examples/qualcomm/oss_scripts/README.md

Usage Guide for Models Provided by ExecuTorch

This guide provides examples and instructions for running the open-source models in this folder. Some of these models also ship a customized runner of their own.

Model categories

The following models can be categorized based on their primary use cases.

  1. Language Models:

    • albert
    • bert
    • distilbert
    • eurobert
    • llama
    • roberta
  2. Vision Models:

    • conv_former
    • convnext_small
    • cvt
    • deit
    • dino_v2
    • dit
    • efficientnet
    • efficientSAM
    • esrgan
    • fastvit
    • fbnet
    • focalnet
    • gMLP_image_classification
    • maxvit_t
    • mobilevit_v1
    • mobilevit_v2
    • pvt
    • regnet
    • retinanet
    • squeezenet
    • ssd300_vgg16
    • swin_transformer
    • swin_v2_t
    • vit_b_16

Prerequisites

Please follow the README under examples/qualcomm first to set up the environment.

Running models

Some models require specific datasets. Please download them in advance and place them in the appropriate folders.

Detailed instructions for each model are provided below. If you only want to export a model without running it on device, add --compile_only to the command.
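As an illustration, a compile-only export might look like the following sketch. The SoC model SM8650, the script deit.py, and the paths are placeholders rather than values prescribed by this guide, and the command is only printed, not executed:

```shell
# Sketch of a compile-only export; all values below are placeholders.
SOC_MODEL=SM8650                    # assumed SoC model name
BUILD_DIR=path/to/build-android/    # prebuilt Android build artifacts
# Print the command instead of executing it:
echo "python deit.py -m ${SOC_MODEL} -b ${BUILD_DIR} --compile_only"
```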

  1. albert, bert, distilbert, eurobert, roberta:

    • Required Dataset : wikisent2

      Download the wikisent2 dataset first and place it in a valid folder. The command below uses albert.py; substitute the matching script name for the other models.

      python albert.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} -d path/to/wikisent2
      
      
  2. conv_former, convnext_small, cvt, deit, dino_v2, efficientnet, fbnet, focalnet, gMLP_image_classification, maxvit_t, mobilevit_v1, mobilevit_v2, pvt, squeezenet, swin_transformer, swin_v2_t, vit_b_16:

    • Required Dataset : ImageNet

      Download the ImageNet dataset first and place it in a valid folder. Replace SCRIPT_NAME with the script of the model you want to run.

      python SCRIPT_NAME.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} -d path/to/ImageNet
      
      
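Because every script in this group accepts the same arguments, several models can be swept in one loop. The sketch below only prints each command; the model subset, SoC model, serial, and paths are illustrative placeholders:

```shell
SOC_MODEL=SM8650    # placeholder SoC model
SERIAL=abcd1234     # placeholder device serial
for script in conv_former cvt deit squeezenet; do
  # Each script takes the same flags; print the command it would run with.
  echo "python ${script}.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${SERIAL} -d path/to/ImageNet"
done
```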
  3. dit:

      python dit.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} 
    
  4. esrgan:

    • Required Dataset: B100

      The B100 dataset is downloaded automatically if -d is specified. Alternatively, you can provide your own dataset using --hr_ref_dir and --lr_dir.

    • Required OSS Repo: Real-ESRGAN

      Clone the Real-ESRGAN repo first and place it in a valid folder.

      python esrgan.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} --oss_repo path/to/Real-ESRGAN
      
      
  5. fastvit:

    • Required Dataset: ImageNet

      Download dataset first, and place it in a valid folder.

    • Required OSS Repo: ml-fastvit

      Clone the ml-fastvit repo first and place it in a valid folder.

    • Pretrained weight:

      Download the pretrained weight first and place it in a valid folder (the file should be fastvit_s12_reparam.pth.tar).

      python fastvit.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} --oss_repo path/to/ml-fastvit -p path/to/pretrained_weight -d path/to/ImageNet
      
      
  6. regnet:

    • Required Dataset: ImageNet

      Download dataset first, and place it in a valid folder.

    • Weights: regnet_y_400mf, regnet_x_400mf

      Use --weights to specify which regnet weights/model to execute.

      python regnet.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} -d path/to/ImageNet --weights <WEIGHTS>
    
    
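To compare both published weight variants back to back, one can loop over the two names. This sketch only prints the commands; the SoC model, serial, and paths are placeholders:

```shell
# Iterate over the two regnet weight variants named above.
for weights in regnet_y_400mf regnet_x_400mf; do
  echo "python regnet.py -m SM8650 -b path/to/build-android/ -s abcd1234 -d path/to/ImageNet --weights ${weights}"
done
```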
  7. retinanet:

    • Required Dataset: COCO

      Download val2017 and the annotations first, and place them in a valid folder.

      python retinanet.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} -d path/to/COCO # folder containing 'val2017' and 'annotations'
      
      
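Before invoking retinanet.py, it can help to sanity-check that the COCO root has the layout described above (the check assumes the folder must contain val2017 and annotations; the path is a placeholder):

```shell
COCO_ROOT=./coco   # placeholder; point this at your real COCO folder
# Create the expected layout here just so the check below has something to
# inspect; with a real download these folders already exist.
mkdir -p "${COCO_ROOT}/val2017" "${COCO_ROOT}/annotations"
for sub in val2017 annotations; do
  if [ -d "${COCO_ROOT}/${sub}" ]; then
    echo "ok: ${sub}"
  else
    echo "missing: ${sub}" >&2
  fi
done
```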
  8. ssd300_vgg16:

    • Required OSS Repo: a-PyTorch-Tutorial-to-Object-Detection

      Clone OSS Repo first, and place it in a valid folder.

    • Pretrained weight:

      Download the pretrained weight first and place it in a valid folder (the file should be checkpoint_ssd300.pth.tar).

    • Required Dataset: VOCSegmentation

      Download the VOC 2007 dataset first and place it in a valid folder.

      python ssd300_vgg16.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} --oss_repo path/to/a-PyTorch-Tutorial-to-Object-Detection -p path/to/pretrained_weight 
      
      
  9. llama: For llama, please check the README under the llama folder for more details.

  10. efficientSAM: For efficientSAM, please refer to the efficientSAM folder.

    • Pretrained weight:

      Download the EfficientSAM-S or EfficientSAM-Ti checkpoint first and place it in a valid folder.

    • Required Dataset: ImageNet

      Download dataset first, and place it in a valid folder.

    • Required OSS Repo: EfficientSAM

      Clone OSS Repo first, and place it in a valid folder.

      python efficientSAM.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} --oss_repo path/to/EfficientSAM -p path/to/pretrained_weight -d path/to/ImageNet