
Not able to convert hf models to gguf anymore #14315

@DigitalRudeness

Description

Name and Version

version: 5731 (bb16041)
built with cc (Gentoo 14.3.0 p8) 14.3.0 for x86_64-pc-linux-gnu

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

Other (Please specify in the next section)

Command line

#huggingface-to-gguf is a symlink to the real file
./huggingface-to-gguf models/aya-expanse-8b/ --outtype q8_0

Problem description & steps to reproduce

I'm not able to convert Hugging Face models to GGUF anymore via convert_hf_to_gguf.py.

I tried the last few commits of the file, but I always get errors like:

Traceback (most recent call last):
  File "/opt/ggml/./huggingface-to-gguf", line 2020, in <module>
    class ArceeModel(LlamaModel):
  File "/opt/ggml/./huggingface-to-gguf", line 2021, in ArceeModel
    model_arch = gguf.MODEL_ARCH.ARCEE
                 ^^^^^^^^^^^^^^^^^^^^^
AttributeError: type object 'MODEL_ARCH' has no attribute 'ARCEE'

Going back to older versions of the file, I got the same kind of error regarding the 'DOTS1' arch.

This occurred when trying to convert aya-expanse and command-r-plus.
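
For reference, a minimal diagnostic sketch (assumption on my part: the AttributeError comes from the converter importing an older, separately installed gguf Python package instead of the repo's gguf-py, which can happen when the script is invoked through a symlink). It only checks which gguf module gets picked up and whether it knows the newer architectures:

# minimal diagnostic sketch, assuming a stale gguf package is shadowing
# the in-tree gguf-py; run with the same Python environment as the converter
import gguf

print(gguf.__file__)                      # path of the gguf module actually imported
print(hasattr(gguf.MODEL_ARCH, "ARCEE"))  # False would mean gguf is older than the script
print(hasattr(gguf.MODEL_ARCH, "DOTS1"))  # same check for the DOTS1 arch

If the printed path points outside the llama.cpp checkout, one thing to try may be running the real convert_hf_to_gguf.py from the repo root instead of the symlink, so the script can find the matching gguf-py next to it.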

First Bad Commit

9ae4143

Relevant log output
