Regarding Support for the Gemma 3n Model #54

Description

@jiangdi0924

Thanks to the author for creating this useful package 🚀.
Recently, new models such as Gemma 3n have been released. If we want LLM.swift to support them, how complex would that be? (Would it require keeping pace with upstream llama.cpp releases?) I'd like to understand the plans for iterating on new model support. Also, is there any consideration of supporting VLMs (vision-language models)?
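For reference, here is a minimal sketch of how I imagine loading a Gemma 3n GGUF with the existing LLM.swift API, assuming the llama.cpp version bundled with the package already recognizes the architecture. The `LLM(from:template:)` initializer, `preprocess`, and `getCompletion(from:)` follow my reading of the README; the `Template` field labels, the Gemma-style chat markers, and the model file name are my own assumptions and may need adjusting:

```swift
import LLM

// Assumption: a Gemma-style chat template built with LLM.swift's Template type;
// the exact initializer labels may differ from the released API.
let gemmaTemplate = Template(
    user: ("<start_of_turn>user\n", "<end_of_turn>\n"),
    bot: ("<start_of_turn>model\n", "<end_of_turn>\n"),
    stopSequence: "<end_of_turn>",
    systemPrompt: nil
)

// Placeholder file name for a locally downloaded Gemma 3n GGUF.
let url = Bundle.main.url(forResource: "gemma-3n-E2B-it-Q4_K_M", withExtension: "gguf")!

// This only works if the bundled llama.cpp already supports the Gemma 3n architecture.
let bot = LLM(from: url, template: gemmaTemplate)
let question = bot.preprocess("Hello, Gemma 3n!", [])
let answer = await bot.getCompletion(from: question)
print(answer)
```

If the architecture is unknown to the bundled llama.cpp, model loading fails regardless of the Swift-side code, which is why I'm asking how closely the package needs to track upstream.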
