The Easiest and Cheapest Way to Deploy Finetuned Mistral 7B Instruct Model (or Any Model) | by Qendel AI

# Imports required to run this snippet (Colab drive mount, PyTorch, PEFT, Transformers)
from google.colab import drive
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Mount your Google Drive so the finetuned weights are accessible
drive.mount('/content/drive')

# Use bfloat16 on GPUs with compute capability 8 or higher (Ampere and newer), otherwise float16
dtype = torch.bfloat16 if torch.cuda.get_device_capability()[0] >= 8 else torch.float16

# Your finetuned model's path (either in Google Drive or local)
model_path = "/path/to/your/finetuned/model/"

# Load the finetuned (PEFT adapter) model and its tokenizer
finetuned_model = AutoPeftModelForCausalLM.from_pretrained(
    model_path,
    low_cpu_mem_usage=True,
    torch_dtype=dtype,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
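
Once loaded, the model can be used for generation like any Transformers causal LM. Below is a minimal inference sketch, assuming a Mistral 7B Instruct-style [INST] prompt template; the prompt text and generation parameters (max_new_tokens, temperature, top_p) are illustrative assumptions, not values from the original article.

# Minimal inference sketch (illustrative example, not from the original article)
prompt = "[INST] Summarize the benefits of parameter-efficient finetuning. [/INST]"

# Tokenize the prompt and move the inputs to the model's device
inputs = tokenizer(prompt, return_tensors="pt").to(finetuned_model.device)

# Generate a response; sampling settings here are placeholders to tune for your use case
with torch.no_grad():
    outputs = finetuned_model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))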

https://freedium.cfd/https://medium.com/@qendelai/the-easiest-and-cheapest-way-to-deploy-finetuned-mistral-7b-instruct-model-or-any-model-3f236182e8b8