Reinforcement Learning from Human Feedback
Summary
An in-depth exploration of Reinforcement Learning from Human Feedback (RLHF), a core technique for making language models helpful and harmless. The video covers the three-stage process: supervised fine-tuning, reward model training on human preference data, and policy optimization with PPO. Learn how RLHF steers models toward nuanced human preferences, producing outputs that are genuinely useful and aligned with user intent.
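To make the two learned stages concrete, below is a minimal PyTorch sketch (not from the video) of the losses they typically use: a pairwise Bradley-Terry loss for the reward model, and a PPO clipped surrogate with a KL penalty toward the frozen SFT/reference policy. Function names and hyperparameters (clip_eps, kl_coef) are illustrative assumptions, not the video's implementation.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise preference loss: push the reward of the human-preferred
    response above the rejected one (Bradley-Terry model)."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

def rlhf_policy_loss(logp_new, logp_old, logp_ref, advantages,
                     clip_eps=0.2, kl_coef=0.1):
    """PPO clipped surrogate plus a per-token KL penalty that keeps the
    policy close to the reference (SFT) model, as is common in RLHF."""
    ratio = torch.exp(logp_new - logp_old)          # importance ratio
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    ppo_term = torch.minimum(ratio * advantages, clipped * advantages)
    kl_penalty = kl_coef * (logp_new - logp_ref)    # approximate KL term
    return -(ppo_term - kl_penalty).mean()

# Toy usage with random values standing in for model outputs.
r_chosen, r_rejected = torch.randn(4), torch.randn(4)
print(reward_model_loss(r_chosen, r_rejected))

logp_new, logp_old, logp_ref = -torch.rand(4), -torch.rand(4), -torch.rand(4)
advantages = torch.randn(4)
print(rlhf_policy_loss(logp_new, logp_old, logp_ref, advantages))
```

In practice the KL penalty is what prevents the policy from drifting into degenerate, reward-hacking outputs while it optimizes the learned reward.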
Source
Source: Google Cloud Tech
Duration: 19:39
Level: Advanced
Topics:
RLHF, Alignment, Training
