This guide provides examples and instructions for running open source models. Some models in this folder may also have their own customized runner.
The following models are categorized by their primary use cases.
Language Model:
Vision Model:
Please follow the relevant README first to set up the environment.
Some models require specific datasets. Please download them in advance and place them in the appropriate folders.
Detailed instructions for each model are provided below. To export a model without running it, add --compile_only to the command.
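As a sketch of how the shared flags fit together, the hypothetical helper below assembles a runner invocation; the flag names match the commands in this guide, while the script name, SoC model, serial, and paths are placeholders you must replace.

```python
def build_cmd(script, soc_model, build_dir, serial, dataset=None, compile_only=False):
    """Assemble a runner invocation; flag names follow the commands below."""
    cmd = ["python", script, "-m", soc_model, "-b", build_dir, "-s", serial]
    if dataset is not None:
        cmd += ["-d", dataset]
    if compile_only:
        cmd.append("--compile_only")  # export only, skip on-device execution
    return cmd

# Example: export albert without running it on device (placeholder values).
print(" ".join(build_cmd("albert.py", "SOC_MODEL", "path/to/build-android/",
                         "DEVICE_SERIAL", dataset="path/to/wikisent2",
                         compile_only=True)))
```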
albert, bert, distilbert, eurobert, roberta:
Required Dataset: wikisent2
Download the dataset first and place it in a valid folder.
python albert.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} -d path/to/wikisent2
conv_former, convnext_small, cvt, deit, dino_v2, efficientnet, fbnet, focalnet, gMLP_image_classification, maxvit_t, mobilevit1, mobilevit_v2, pvt, squeezenet, swin_transformer, swin_v2_t, vit_b_16:
Required Dataset: ImageNet
Download the dataset first and place it in a valid folder.
python SCRIPT_NAME.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} -d path/to/ImageNet
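All the classification runners above share the same CLI, so they can be launched in sequence. The loop below is a sketch that only prints each command; the SoC model, serial, and paths are placeholders to fill in.

```python
# Scripts listed above that all take the same ImageNet arguments.
IMAGENET_SCRIPTS = [
    "conv_former", "convnext_small", "cvt", "deit", "dino_v2",
    "efficientnet", "fbnet", "focalnet", "gMLP_image_classification",
    "maxvit_t", "mobilevit1", "mobilevit_v2", "pvt", "squeezenet",
    "swin_transformer", "swin_v2_t", "vit_b_16",
]

def imagenet_cmds(soc_model, build_dir, serial, imagenet_dir):
    """Build one command string per classification script."""
    return [
        f"python {name}.py -m {soc_model} -b {build_dir} -s {serial} -d {imagenet_dir}"
        for name in IMAGENET_SCRIPTS
    ]

for cmd in imagenet_cmds("SOC_MODEL", "path/to/build-android/",
                         "DEVICE_SERIAL", "path/to/ImageNet"):
    print(cmd)
```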
dit:
python dit.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL}
esrgan:
Required Dataset: B100
The dataset will be downloaded automatically if -d is specified. Alternatively, you can provide your own dataset using --hr_ref_dir and --lr_dir.
Required OSS Repo: Real-ESRGAN
Clone the OSS repo first and place it in a valid folder.
python esrgan.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} --oss_repo path/to/Real-ESRGAN
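Since esrgan accepts either an auto-download folder or a prepared dataset pair, the hypothetical helper below sketches how to pick the right flags; the flag names come from the notes above, the helper itself is not part of esrgan.py.

```python
def esrgan_dataset_args(download_dir=None, hr_ref_dir=None, lr_dir=None):
    """Pick the dataset flags for esrgan.py: either let it download B100
    into `download_dir` (-d), or point it at prepared high-/low-resolution
    folders (--hr_ref_dir / --lr_dir)."""
    if download_dir is not None:
        return ["-d", download_dir]
    if hr_ref_dir is not None and lr_dir is not None:
        return ["--hr_ref_dir", hr_ref_dir, "--lr_dir", lr_dir]
    raise ValueError("provide -d, or both --hr_ref_dir and --lr_dir")

# Example: use a prepared dataset pair instead of auto-download.
print(esrgan_dataset_args(hr_ref_dir="path/to/hr", lr_dir="path/to/lr"))
```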
fastvit:
Required Dataset: ImageNet
Download the dataset first and place it in a valid folder.
Required OSS Repo: ml-fastvit
Clone the OSS repo first and place it in a valid folder.
Pretrained weight:
Download the pretrained weight first and place it in a valid folder (the file should be fastvit_s12_reparam.pth.tar).
python fastvit.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} --oss_repo path/to/ml-fastvit -p path/to/pretrained_weight -d path/to/ImageNet
regnet:
Required Dataset: ImageNet
Download the dataset first and place it in a valid folder.
Weights: regnet_y_400mf, regnet_x_400mf
Use --weights to specify which regnet weights/model to execute.
python regnet.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} -d path/to/ImageNet --weights <WEIGHTS>
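Both supported weight variants use the same invocation, with only --weights changing. The loop below is a sketch that prints one command per variant; SoC model, serial, and paths are placeholders.

```python
# The two weight variants listed above.
REGNET_WEIGHTS = ["regnet_y_400mf", "regnet_x_400mf"]

def regnet_cmd(weights, soc_model, build_dir, serial, imagenet_dir):
    """Build the regnet.py command for one --weights variant."""
    return (f"python regnet.py -m {soc_model} -b {build_dir} -s {serial} "
            f"-d {imagenet_dir} --weights {weights}")

for w in REGNET_WEIGHTS:
    print(regnet_cmd(w, "SOC_MODEL", "path/to/build-android/",
                     "DEVICE_SERIAL", "path/to/ImageNet"))
```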
retinanet:
Required Dataset: COCO
Download val2017 and the annotations first, and place them in a valid folder.
python retinanet.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} -d path/to/COCO # (the folder should contain 'val2017' & 'annotations')
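A missing subfolder is a common reason this run fails, so a quick sanity check on the folder passed via -d can help. The helper below is only an illustrative sketch; retinanet.py itself defines the exact layout it needs.

```python
import os

def check_coco_layout(coco_root):
    """Return the names of expected COCO subfolders ('val2017' and
    'annotations') that are missing under coco_root; an empty list
    means the layout looks right."""
    return [d for d in ("val2017", "annotations")
            if not os.path.isdir(os.path.join(coco_root, d))]
```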
ssd300_vgg16:
Required OSS Repo: a-PyTorch-Tutorial-to-Object-Detection
Clone the OSS repo first and place it in a valid folder.
Pretrained weight:
Download the pretrained weight first and place it in a valid folder (the file should be checkpoint_ssd300.pth.tar).
Required Dataset: VOCSegmentation
Download VOC 2007 first and place it in a valid folder.
python ssd300_vgg16.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} --oss_repo path/to/a-PyTorch-Tutorial-to-Object-Detection -p path/to/pretrained_weight
llama: For llama, please check the README under the llama folder for more details.
efficientSAM: For efficientSAM, please refer to the efficientSAM folder.
Pretrained weight:
Download EfficientSAM-S or EfficientSAM-Ti first and place it in a valid folder.
Required Dataset: ImageNet
Download the dataset first and place it in a valid folder.
Required OSS Repo: EfficientSAM
Clone the OSS repo first and place it in a valid folder.
python efficientSAM.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} --oss_repo path/to/EfficientSAM -p path/to/pretrained_weight -d path/to/ImageNet