Commit 60325fa

Remove .attention from skipped tensors to match more accurately (#7051)
1 parent 6ecf318 commit 60325fa

File tree

1 file changed (+1, -1 lines changed)

convert-hf-to-gguf.py

Lines changed: 1 addition & 1 deletion
@@ -1427,7 +1427,7 @@ def write_tensors(self):
         experts = dict()
         for name, data_torch in self.get_tensors():
             # we don't need these
-            if name.endswith((".attention.masked_bias", ".attention.bias", ".attention.rotary_emb.inv_freq")):
+            if name.endswith((".attention.masked_bias", ".attention.bias", ".rotary_emb.inv_freq")):
                 continue
 
             old_dtype = data_torch.dtype
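Why dropping the ".attention" prefix broadens the match: the skip check uses str.endswith with a tuple of suffixes, so only tensor names ending in one of those exact strings are dropped. A rotary_emb.inv_freq buffer that does not sit under an ".attention" module was not caught by the old suffix. Below is a minimal sketch (not part of the commit; the tensor names are purely illustrative) of how the old and new suffix tuples compare:

# Illustrative only: these tensor names are hypothetical examples,
# not taken from any specific model conversion.
old_suffixes = (".attention.masked_bias", ".attention.bias", ".attention.rotary_emb.inv_freq")
new_suffixes = (".attention.masked_bias", ".attention.bias", ".rotary_emb.inv_freq")

names = [
    "gpt_neox.layers.0.attention.rotary_emb.inv_freq",  # skipped before and after
    "model.layers.0.self_attn.rotary_emb.inv_freq",     # skipped only with the new suffixes
    "model.layers.0.self_attn.q_proj.weight",           # never skipped
]

for name in names:
    # str.endswith accepts a tuple and returns True if any suffix matches
    print(name, name.endswith(old_suffixes), name.endswith(new_suffixes))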
