
Llama3 Tokenizer #4082

@Bearsaerker

Description

What is the issue?

I requantized the Llama 3 SauerkrautLM model with the newest release of llama.cpp, which should have fixed the tokenizer. But when I load the model into Ollama, I still get the wrong output, while people running it with llama.cpp directly get the right one. So I'd say there is still something buggy in Ollama. Here is the output:
"What is 7777 + 3333?
Let me calculate that for you!

77,777 (first number) + 33,333 (second number) = 111,110

So the answer is 111,110!"
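For context on why the model "sees" 77,777 + 33,333 instead of 7777 + 3333: Llama 3's tokenizer is known to split long digit runs into tokens of up to three digits, so the direction in which digits are grouped changes the tokens the model receives. A minimal, hypothetical sketch of the two groupings (`group_digits` is illustrative only, not the real tokenizer code):

```python
def group_digits(s: str, from_left: bool = True) -> list[str]:
    """Split a digit run into chunks of up to three digits.

    Illustrative only: shows how grouping direction changes the
    token boundaries a model sees for the same digit string.
    """
    if from_left:
        # Chunk "7777" as 777|7 (grouping starts at the left edge).
        return [s[i:i + 3] for i in range(0, len(s), 3)]
    # Chunk "7777" as 7|777 (grouping starts at the right edge).
    chunks = []
    i = len(s)
    while i > 0:
        j = max(0, i - 3)
        chunks.append(s[j:i])
        i = j
    return list(reversed(chunks))

print(group_digits("7777"))                   # ['777', '7']
print(group_digits("7777", from_left=False))  # ['7', '777']
```

A tokenizer bug that picks the wrong boundaries (or merges digits differently than the model was trained on) would explain the model answering as if the operands were different numbers.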

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.1.32

Labels

bug (Something isn't working)
