Name and Version
version: 5731 (bb16041)
built with cc (Gentoo 14.3.0 p8) 14.3.0 for x86_64-pc-linux-gnu
Operating systems
Linux
Which llama.cpp modules do you know to be affected?
Other (Please specify in the next section)
Command line
# huggingface-to-gguf is a symlink to convert_hf_to_gguf.py
./huggingface-to-gguf models/aya-expanse-8b/ --outtype q8_0
Problem description & steps to reproduce
I'm no longer able to convert Hugging Face models to GGUF via convert_hf_to_gguf.py.
I tried the last few commits of the file but always get errors like:
Traceback (most recent call last):
  File "/opt/ggml/./huggingface-to-gguf", line 2020, in <module>
    class ArceeModel(LlamaModel):
  File "/opt/ggml/./huggingface-to-gguf", line 2021, in ArceeModel
    model_arch = gguf.MODEL_ARCH.ARCEE
                 ^^^^^^^^^^^^^^^^^^^^^
AttributeError: type object 'MODEL_ARCH' has no attribute 'ARCEE'
When going back to older versions of the file I got the same error, but regarding the 'DOTS1' arch.
This occurred when trying with aya-expanse and command-r-plus.
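In case it helps narrow this down, here is a minimal diagnostic sketch based on an assumption on my part (not confirmed): if the script resolves an older gguf package installed elsewhere (e.g. in site-packages) instead of the in-tree gguf-py, its MODEL_ARCH enum would be missing newer entries like ARCEE and DOTS1, which would produce exactly this AttributeError. The snippet just prints which gguf module gets imported and whether those entries exist.

# check_gguf.py -- diagnostic sketch (assumption: an outdated 'gguf' package
# is shadowing the in-tree gguf-py that convert_hf_to_gguf.py expects)
import gguf

# Show where the gguf module was actually loaded from and its version, if any.
print("gguf loaded from:", gguf.__file__)
print("gguf version:", getattr(gguf, "__version__", "unknown"))

# Check whether the enum entries the converter references are present.
for arch in ("ARCEE", "DOTS1"):
    print(f"MODEL_ARCH.{arch} present:", hasattr(gguf.MODEL_ARCH, arch))

If this points at an install outside the llama.cpp tree, my guess at a workaround would be updating or removing that package so the repo's own gguf-py is picked up instead.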