RL researcher
- ZJU
- Hangzhou, Zhejiang, China
- 09:07 (UTC +08:00)
- [email protected]
Pinned
- PKU-Alignment/beavertails: BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs).
- PKU-Alignment/safe-rlhf: Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
- PKU-Alignment/align-anything: Align Anything: Training All-modality Model with Feedback