How to deploy YOLO model on AI Inference Server with a package larger than 2.2GB
I tried to deploy the official YOLOv5 model on AI Inference Server, but the pipeline package is too large to deploy (>2.2GB). Does anyone know how to make this work? For example, by making the package smaller or by raising the size limit in AI Inference Server. Thanks in advance.
asked
Wang Ping
1 answer
Hello Ping Wang,
As of now, 2.2GB is the maximum package size (for memory-optimization reasons), so reducing the package size would be necessary. Hope this helps. Thank you.
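Since the 2.2GB limit is fixed, shrinking the package is the practical route. Common options for a YOLOv5 pipeline include picking a smaller variant (e.g. yolov5s instead of yolov5l/x), exporting to ONNX, and storing weights in FP16 instead of FP32. As a rough, stdlib-only sketch of why half precision roughly halves the weight payload (this is an illustration, not the actual export workflow; the parameter count here is made up):

```python
import struct

# Hypothetical example: a weight tensor with 1M float32 parameters.
n_params = 1_000_000
weights = [0.1] * n_params

# Pack as float32 ("f", 4 bytes each) vs float16 ("e", 2 bytes each).
fp32_bytes = len(struct.pack(f"{n_params}f", *weights))
fp16_bytes = len(struct.pack(f"{n_params}e", *weights))

print(fp32_bytes)  # 4000000
print(fp16_bytes)  # 2000000
```

In practice, the YOLOv5 repo's export script exposes similar options (e.g. something like `python export.py --weights yolov5s.pt --include onnx --half`; check the flags of the version you are using). Note FP16 trades a small amount of numeric precision for the size reduction, which is usually acceptable for inference.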