
Commit 7dce8a4

mudler authored and github-actions[bot] committed

⬆️ Checksum updates in gallery/index.yaml

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

1 parent 22067e3 commit 7dce8a4

File tree

1 file changed: +2 −13 lines


gallery/index.yaml

Lines changed: 2 additions & 13 deletions
@@ -3112,8 +3112,8 @@
       model: gemma-3-270m-it-qat-Q4_0.gguf
   files:
     - filename: gemma-3-270m-it-qat-Q4_0.gguf
-      sha256: 154546607c34d1509e95e2f9371bb0aef1dc6bc9ceba52a66112852cc65cf447
       uri: huggingface://ggml-org/gemma-3-270m-it-qat-GGUF/gemma-3-270m-it-qat-Q4_0.gguf
+      sha256: 3626e245220ca4a1c5911eb4010b3ecb7bdbf5bc53c79403c21355354d1e2dc6
 - &llama4
   url: "github:mudler/LocalAI/gallery/llama3.1-instruct.yaml@master"
   icon: https://avatars.githubusercontent.com/u/153379578
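This hunk swaps in a fresh SHA-256 checksum for the Gemma GGUF file (and moves it below the `uri` key). Verifying a downloaded model against such a checksum can be sketched in a few lines of Python; the filename matches the gallery entry above, while the helper name is illustrative:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so large GGUF models never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digest taken from the diff above.
expected = "3626e245220ca4a1c5911eb4010b3ecb7bdbf5bc53c79403c21355354d1e2dc6"
# assert sha256_of("gemma-3-270m-it-qat-Q4_0.gguf") == expected
```

Streaming in chunks matters here: quantized GGUF files can run to many gigabytes, so hashing them whole-file in memory is not practical.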
@@ -9499,18 +9499,7 @@
   urls:
     - https://huggingface.co/bartowski/baichuan-inc_Baichuan-M2-32B-GGUF
     - https://huggingface.co/baichuan-inc/Baichuan-M2-32B
-  description: |
-    Baichuan-M2-32B is Baichuan AI's medical-enhanced reasoning model, the second medical model released by Baichuan. Designed for real-world medical reasoning tasks, this model builds upon Qwen2.5-32B with an innovative Large Verifier System. Through domain-specific fine-tuning on real-world medical questions, it achieves breakthrough medical performance while maintaining strong general capabilities.
-
-    Model Features:
-
-    Baichuan-M2 incorporates three core technical innovations: First, through the Large Verifier System, it combines medical scenario characteristics to design a comprehensive medical verification framework, including patient simulators and multi-dimensional verification mechanisms; second, through medical domain adaptation enhancement via Mid-Training, it achieves lightweight and efficient medical domain adaptation while preserving general capabilities; finally, it employs a multi-stage reinforcement learning strategy, decomposing complex RL tasks into hierarchical training stages to progressively enhance the model's medical knowledge, reasoning, and patient interaction capabilities.
-
-    Core Highlights:
-
-    🏆 World's Leading Open-Source Medical Model: Outperforms all open-source models and many proprietary models on HealthBench, achieving medical capabilities closest to GPT-5
-    🧠 Doctor-Thinking Alignment: Trained on real clinical cases and patient simulators, with clinical diagnostic thinking and robust patient interaction capabilities
-    ⚡ Efficient Deployment: Supports 4-bit quantization for single-RTX4090 deployment, with 58.5% higher token throughput in MTP version for single-user scenarios
+  description: "Baichuan-M2-32B is Baichuan AI's medical-enhanced reasoning model, the second medical model released by Baichuan. Designed for real-world medical reasoning tasks, this model builds upon Qwen2.5-32B with an innovative Large Verifier System. Through domain-specific fine-tuning on real-world medical questions, it achieves breakthrough medical performance while maintaining strong general capabilities.\n\nModel Features:\n\nBaichuan-M2 incorporates three core technical innovations: First, through the Large Verifier System, it combines medical scenario characteristics to design a comprehensive medical verification framework, including patient simulators and multi-dimensional verification mechanisms; second, through medical domain adaptation enhancement via Mid-Training, it achieves lightweight and efficient medical domain adaptation while preserving general capabilities; finally, it employs a multi-stage reinforcement learning strategy, decomposing complex RL tasks into hierarchical training stages to progressively enhance the model's medical knowledge, reasoning, and patient interaction capabilities.\n\nCore Highlights:\n\n \U0001F3C6 World's Leading Open-Source Medical Model: Outperforms all open-source models and many proprietary models on HealthBench, achieving medical capabilities closest to GPT-5\n \U0001F9E0 Doctor-Thinking Alignment: Trained on real clinical cases and patient simulators, with clinical diagnostic thinking and robust patient interaction capabilities\n ⚡ Efficient Deployment: Supports 4-bit quantization for single-RTX4090 deployment, with 58.5% higher token throughput in MTP version for single-user scenarios\n"
   overrides:
     parameters:
       model: baichuan-inc_Baichuan-M2-32B-Q4_K_M.gguf
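The second hunk rewrites the Baichuan `description` from a literal block scalar (`|`) into a single double-quoted scalar with explicit `\n` escapes and `\U0001F3C6`-style Unicode escapes for the emoji. Both YAML scalar styles decode to the same multi-paragraph string, which the following minimal sketch checks (it assumes the third-party PyYAML package is installed; the sample text is illustrative, not the gallery entry itself):

```python
import yaml  # third-party PyYAML: pip install pyyaml

# Literal block scalar, as in the old gallery entry.
block = """\
description: |
  First paragraph.

  Second paragraph.
"""

# Double-quoted flow scalar with \n escapes, as in the new entry.
quoted = 'description: "First paragraph.\\n\\nSecond paragraph.\\n"'

# Both styles parse to the identical Python string.
assert yaml.safe_load(block) == yaml.safe_load(quoted)
```

So the change is purely a serialization choice (likely made by the tool that regenerated the file); consumers of gallery/index.yaml see an unchanged description.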

0 commit comments
