Mistral-7B-Instruct-v0.2 model reading issue when using Transformer imported from mistral_inference.model

#142
by EnRaoufi - opened

I downloaded all of the Mistral-7B-Instruct-v0.2 files and tried to use the code snippet given in the model card to load the model with Transformer.from_folder. I ran into this error:

File "/home/user/.local/lib/python3.10/site-packages/mistral_inference/model.py", line 378, in from_folder
    with open(Path(folder) / "params.json", "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'mistral_model/mistralai/Mistral-7B-Instruct-v0.2/params.json'
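
For context, the load path I'm following is essentially the sketch below (the folder path is the one from the traceback; whether MistralTokenizer.v1() is the right tokenizer for this model is part of a question I ask further down):

```python
from mistral_inference.model import Transformer
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

# Local folder with the downloaded Hugging Face files (same path as in the traceback)
model_path = "mistral_model/mistralai/Mistral-7B-Instruct-v0.2"

tokenizer = MistralTokenizer.v1()            # built-in v1 instruct tokenizer
model = Transformer.from_folder(model_path)  # raises FileNotFoundError: no params.json
```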

I found a suggested workaround here: rename config.json to params.json. After doing that, I'm hitting this error instead:

Traceback (most recent call last):
  File "/home/user/.local/lib/python3.10/site-packages/simple_parsing/helpers/serialization/serializable.py", line 893, in from_dict
    instance = cls(**init_args)  # type: ignore
TypeError: ModelArgs.__init__() missing 7 required positional arguments: 'dim', 'n_layers', 'head_dim', 'hidden_dim', 'n_heads', 'n_kv_heads', and 'norm_eps'
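
My reading of this is that the Hugging Face config.json simply doesn't use the field names that mistral_inference's ModelArgs expects. If that's right, a params.json would need to be built along these lines, starting from the original config.json; the key correspondence below is my own guess, not an official mapping:

```python
import json
from pathlib import Path

folder = Path("mistral_model/mistralai/Mistral-7B-Instruct-v0.2")
hf = json.loads((folder / "config.json").read_text())

# Hugging Face config.json keys -> mistral_inference ModelArgs fields
# (my best guess at the correspondence, not an official mapping)
params = {
    "dim": hf["hidden_size"],
    "n_layers": hf["num_hidden_layers"],
    "head_dim": hf["hidden_size"] // hf["num_attention_heads"],
    "hidden_dim": hf["intermediate_size"],
    "n_heads": hf["num_attention_heads"],
    "n_kv_heads": hf["num_key_value_heads"],
    "norm_eps": hf["rms_norm_eps"],
    "vocab_size": hf["vocab_size"],  # carries over unchanged
}
(folder / "params.json").write_text(json.dumps(params, indent=2))
```

Even with a rebuilt params.json, I suspect from_folder still won't load the transformers-format safetensors, since their weight names differ from the consolidated checkpoint layout mistral_inference expects, which is really what my questions below are about.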

My questions are:

  1. How can I fix this and load the model from a local folder? (A sketch of the flow I'm aiming for is below.)
  2. How can I load the model with Transformer imported from mistral_inference.model without saving the model locally first?
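
To make question 1 concrete, the flow I'm hoping to end up with looks roughly like this. It assumes the repo ships raw-format files named params.json, consolidated.safetensors, and tokenizer.model; those file names are my assumption, not something I've verified:

```python
from pathlib import Path

from huggingface_hub import snapshot_download
from mistral_inference.model import Transformer
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

local_dir = Path.home() / "mistral_models" / "7B-Instruct-v0.2"
local_dir.mkdir(parents=True, exist_ok=True)

# Only fetch the raw-format files (assumed names); if the repo doesn't ship
# them under these names, nothing useful gets downloaded.
snapshot_download(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",
    allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model"],
    local_dir=local_dir,
)

tokenizer = MistralTokenizer.from_file(str(local_dir / "tokenizer.model"))
model = Transformer.from_folder(str(local_dir))
```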

Any help would be greatly appreciated.

me too

@patrickvonplaten @Jacoboooooooo Regarding whether the token IDs produced by transformers.AutoTokenizer and MistralTokenizer.v1() actually line up: is it still necessary to use the Mistral tokenizer when generating text from a prompt with "Mistral-7B-Instruct-v0.2"? If so, could you please help with the issue above so that I can use the Mistral tokenizer?
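
To make the comparison concrete, the check I have in mind is something like the sketch below; the prompt is just a placeholder, and I'm assuming MistralTokenizer.v1() is the appropriate tokenizer version for v0.2:

```python
from transformers import AutoTokenizer
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

prompt = "Explain RMSNorm in one sentence."  # placeholder prompt

# transformers route: chat template -> token IDs
hf_tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
hf_ids = hf_tokenizer.apply_chat_template([{"role": "user", "content": prompt}])

# mistral_common route: ChatCompletionRequest -> token IDs
mistral_tokenizer = MistralTokenizer.v1()
mistral_ids = mistral_tokenizer.encode_chat_completion(
    ChatCompletionRequest(messages=[UserMessage(content=prompt)])
).tokens

print(hf_ids == mistral_ids)  # do the two encodings agree?
```

If the two token sequences match, I'd expect either tokenizer to work for generation; if they don't, that would explain why the Mistral tokenizer is still recommended.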
