Thank you to the author for creating this useful package 🚀.
Recently, models like Gemma 3n have been released. How complex would it be for LLM.swift to support them? (Would it require tracking upstream llama.cpp releases at the same update cadence?) I'd like to understand the future iteration plans for new models. Additionally, is VLM support under consideration?