I’ve been having loads of fun playing around with all of the new LLMs that have come out on AWS recently. Last week we saw AWS announce support for the new gpt-oss models from OpenAI.
While those models are getting most of the press, others receive less attention. This week I put together a quick intro on how to deploy the Qwen2.5 VL model to Amazon Bedrock.
Even if this isn't the model you end up using in your stack, the process is much the same for importing other models from HuggingFace.
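At its core, Bedrock Custom Model Import boils down to one API call once the model weights are in S3. Here's a rough sketch of the request for the `CreateModelImportJob` API; the bucket name, role ARN, and job/model names below are placeholder assumptions, not values from this post, and the live boto3 call is left commented out since it needs AWS credentials:

```python
# Sketch of a Bedrock Custom Model Import job request. The bucket, role ARN,
# and names are placeholders; the boto3 call itself is commented out because
# it requires AWS credentials and an IAM role Bedrock can assume.

def build_import_job_request(job_name, model_name, role_arn, s3_uri):
    """Assemble parameters for Bedrock's CreateModelImportJob API.

    The weights (e.g. the Qwen2.5 VL safetensors downloaded from
    HuggingFace) are assumed to have been uploaded to the S3 prefix first.
    """
    return {
        "jobName": job_name,
        "importedModelName": model_name,
        "roleArn": role_arn,  # IAM role Bedrock assumes to read the bucket
        "modelDataSource": {"s3DataSource": {"s3Uri": s3_uri}},
    }

request = build_import_job_request(
    job_name="qwen2-5-vl-import",
    model_name="qwen2-5-vl-instruct",
    role_arn="arn:aws:iam::111122223333:role/BedrockImportRole",  # placeholder
    s3_uri="s3://example-model-bucket/qwen2.5-vl/",  # placeholder
)

# To actually start the import (requires credentials and the role above):
# import boto3
# bedrock = boto3.client("bedrock")
# job = bedrock.create_model_import_job(**request)
# Then poll bedrock.get_model_import_job(jobIdentifier=job["jobArn"])
# until the status reaches "Completed".
```

Once the job completes, the imported model gets its own ARN, which you invoke through the Bedrock runtime just like any other model.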
Resources
- The code snippet mentioned is in this GitHub Gist
- HuggingFace model card for Qwen2.5VL Instruct
- Original blog post: Deploy Qwen models with Amazon Bedrock Custom Model Import
If you’re looking for support with your next AI project, drop us a message or reach out to the MakeOps Team.