
Machine Learning | Building an LLM from Scratch: Reproducing DeepSeek's Aha Moment


The previous article, "Building an LLM from Scratch: DeepSeek's GRPO", introduced GRPO and implemented a simple version of the GRPO code, but from an engineering point of view it did not really reproduce DeepSeek-R1. So I recently requested 48 GB of GPU memory and, drawing on several open-source efforts, reproduced the aha moment. This article provides the complete code and toolchain.

1. What is the aha moment

The DeepSeek-R1 paper notes that the model let the authors "witness the power and beauty of reinforcement learning": in an intermediate version of DeepSeek-R1-Zero the "aha moment" arrived, and the model learned to reflect on its own reasoning in a human-like tone.

(Figure: the aha moment, as shown in the DeepSeek-R1 paper)

2. Base model and training data

  • Since only 48 GB of GPU memory is available, Qwen2.5 works well as the base model, at 0.5B, 1.5B, or 3B parameters.
  • There are plenty of training datasets to choose from (all available directly on Hugging Face); a quick way to inspect the first one is sketched right after this list:

   a. AI-MO/NuminaMath-TIR: about 72K rows of math problems with solutions and answers, distilled from the NuminaMath-CoT dataset

   b. FreedomIntelligence/medical-o1-verifiable-problem: about 40K rows of medical data, but without reasoning traces

   c. https://raw.githubusercontent.com/hkust-nlp/simpleRL-reason/refs/heads/main/train/data/math_level3to5_data_processed_with_qwen_prompt.json: the training set from the simpleRL-reason open-source project
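Below is a minimal sketch of how to peek at the NuminaMath-TIR data; it assumes the dataset exposes the "problem" and "solution" columns that the training script in Section 7 relies on.

from datasets import load_dataset

# Load AI-MO/NuminaMath-TIR and print one sample; the GRPO script below
# builds prompts from "problem" and scores completions against "solution".
ds = load_dataset("AI-MO/NuminaMath-TIR", "default")
print(ds)
print(ds["train"][0]["problem"][:200])
print(ds["train"][0]["solution"][:200])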

3. How to train

3.1 Designing the reward functions

The previous article, "Building an LLM from Scratch: DeepSeek's GRPO", already covered how GRPO works, including the role of reward functions, so the design details are omitted here. Following other R1-reproduction projects, this article uses the following reward functions:

  • accuracy_reward: checks the correctness of the answer; returns 1 if correct, 0 otherwise
  • format_reward: checks the output format; returns 1 if it matches ^<think>.*?</think><answer>.*?</answer>$, otherwise 0
  • reasoning_steps_reward: counts explicit reasoning-step markers matching (Step \d+:|^\d+\.|\n-|\n\*|First,|Second,|Next,|Finally,); the reward is min(1, count / 3), so three or more steps earn the full reward
  • cosine_reward: scales the reward with completion length on a cosine schedule, parameterized by maximum/minimum reward values for correct answers and for incorrect answers
  • repetition_penalty_reward: computes an N-gram repetition penalty
  • length_reward: from the Kimi k1.5 paper (https://arxiv.org/abs/2501.12599); a short worked example follows the two formulas below

   a. Correct-answer length reward: reward = 0.5 - (len - min_len)/(max_len - min_len)

   b. Incorrect-answer length reward: reward = min(0, 0.5 - (len - min_len)/(max_len - min_len))
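To make the formulas concrete, here is a tiny sketch with made-up completion lengths (the full len_reward implementation is in Section 7): short correct answers keep most of the 0.5 bonus, while long incorrect answers receive the strongest penalty.

# Kimi-style length reward on hypothetical lengths: the first two
# completions are treated as correct, the last two as incorrect.
lengths = [120, 200, 360, 500]
correct = [True, True, False, False]

min_len, max_len = min(lengths), max(lengths)
rewards = []
for length, ok in zip(lengths, correct):
    lam = 0.5 - (length - min_len) / (max_len - min_len)
    rewards.append(lam if ok else min(0.0, lam))

print([round(r, 3) for r in rewards])  # [0.5, 0.289, -0.132, -0.5]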

3.2 Using vLLM

To improve throughput and save GPU memory, vLLM is used here. vLLM is an open-source inference acceleration framework for large models; by managing the cached attention tensors with PagedAttention, it achieves 14-24x higher throughput than HuggingFace Transformers. In the experiments for this article, a setup that previously needed about 60 GB of GPU memory ran in roughly 40 GB.

Since vLLM loads models in a way that is directly compatible with Hugging Face checkpoints, the following code is enough to get it running:

from vllm import LLM, SamplingParams

if __name__ == '__main__':
    model_path = "{model name}"  # HF model id or local path
    model = LLM(model=model_path,
        tensor_parallel_size=1,
        trust_remote_code=True,
        max_model_len=10000,
        enforce_eager=True,
        gpu_memory_utilization=0.5,  # cap vLLM at half of the GPU memory
        block_size=32)
    sampling_params = SamplingParams(temperature=0, max_tokens=1, prompt_logprobs=20)

    prompt = "How is vLLM implemented?"
    response = model.generate(prompt, sampling_params, use_tqdm=False)[0]
    print(response, '\n\n', response.outputs)

3.3 Accelerating training with Accelerate + DeepSpeed

Accelerate is Hugging Face's lightweight wrapper for distributed PyTorch training, while DeepSpeed is Microsoft's distributed-training framework. The main difference is the model scale they target: DeepSpeed handles much larger models and offers more optimization strategies and tools, such as ZeRO and offloading, while Accelerate is simpler and more stable and suits small-to-medium training jobs. Since Hugging Face has integrated DeepSpeed into Accelerate, adapting a training script only takes a few lines of code, as shown below:

#!pip install accelerate
#!pip install deepspeed
import torch
import torch.nn.functional as F
from datasets import load_dataset
# import the Accelerate library
from accelerate import Accelerator

# create the accelerator
accelerator = Accelerator()
# use the device chosen by the accelerator
device = accelerator.device
model = torch.nn.Transformer().to(device)
optimizer = torch.optim.Adam(model.parameters())

dataset = load_dataset("{dataset to load}")
data = torch.utils.data.DataLoader(dataset, shuffle=True)

# let the accelerator wrap the model, optimizer and dataloader
model, optimizer, data = accelerator.prepare(model, optimizer, data)
model.train()
for epoch in range(10):
    for source, targets in data:
        source = source.to(device)
        targets = targets.to(device)

        optimizer.zero_grad()

        output = model(source)
        loss = F.cross_entropy(output, targets)

        # backward through the accelerator instead of loss.backward()
        accelerator.backward(loss)

        optimizer.step()

The related configuration is in the zero3.yaml file below, or can be generated by running accelerate config.

4. Complete code

4.1 Commands

Python >= 3.10 is required, along with the following libraries:

pip install transformers
pip install trl
pip install --upgrade trl
pip install latex2sympy2_extended math_verify
pip install flash_attn
pip install vllm
pip install deepspeed
pip install accelerate

Run training with:

accelerate launch --config_file zero3.yaml 0-grpotrainer_r1.py

where zero3.yaml contains:

compute_environment: LOCAL_MACHINE
debug: false
deepspeed_config:
  deepspeed_multinode_launcher: standard
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero_stage: 3 
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 1 
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false

4.2 Code

The full training script is long; see Section 7 at the end of this article.

5. Observing the aha moment

(Figure: a sampled completion during training)

As the figure shows, the model initially fails to solve the problem with its direct attempt, but after repeatedly adding reflection steps it eventually reaches the correct answer.

6. Notes and troubleshooting

(1) Error during installation: ImportError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: the package flash_attn seems to be not installed. Please refer to the documentation of https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2 to install Flash Attention 2. Fix:
pip install -U flash-attn

(2) Error during installation: ImportError: vLLM is not available and use_vllm is set to True. Please install vLLM with pip install vllm to use it. Fix:
pip install -U vllm

(3) How do you convert the trained DeepSpeed checkpoint into a model that can be loaded for inference? Fix:

from deepspeed.utils.zero_to_fp32 import convert_zero_checkpoint_to_fp32_state_dict 

convert_zero_checkpoint_to_fp32_state_dict(
    checkpoint_dir="./output/GRPO-R1-1.5B",
    output_dir="./output/GRPO-R1-1.5B",
    tag="global_step9055", # 模型保存的step文件
)

(4) How do you test the trained model? Fix:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the Qwen base model
# model_name = "Qwen/Qwen2.5-1.5B"
# or load the locally trained model
model_name = "./output/GRPO-R1-1.5B"
model = AutoModelForCausalLM.from_pretrained(model_name, cache_dir="./model")

tokenizer = AutoTokenizer.from_pretrained(model_name)
model.eval()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("device: ", device)
model.to(device)

chat_history_ids = None
while True:
    user_input = input("User: ")
    if user_input.lower() == "exit":
        break

    new_user_input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors='pt').to(device)

    if chat_history_ids is not None:
        input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1)
    else:
        input_ids = new_user_input_ids

    chat_history_ids = model.generate(input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    bot_response = tokenizer.decode(chat_history_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True)

    print("Bot: ", bot_response)

7. Code (full training script)

from typing import Optional, Dict
import re, logging, os, sys, torch, math
import transformers
from transformers import (
    AutoModelForCausalLM,
    set_seed,
)
from transformers.trainer_utils import get_last_checkpoint
import datasets
from datasets import load_dataset
from trl import ModelConfig, ScriptArguments, GRPOConfig, GRPOTrainer, get_peft_config
from dataclasses import dataclass, field
from latex2sympy2_extended import NormalizationConfig
from math_verify import LatexExtractionConfig, parse, verify

logger = logging.getLogger(__name__)

def verify_answer(contents, solution):
    rewards = []
    for content, sol in zip(contents, solution):
        gold_parsed = parse(
            sol,
            extraction_mode="first_match",
            extraction_config=[LatexExtractionConfig()],
        )
        print('-'*100)
        print(f'\ncontent:{content}\nsol:{sol}')
        if len(gold_parsed) != 0:
            answer_parsed = parse(
                content,
                extraction_config=[
                    LatexExtractionConfig(
                        normalization_config=NormalizationConfig(
                            nits=False,
                            malformed_operators=False,
                            basic_latex=True,
                            equations=True,
                            boxed="all",
                            units=True,
                        ),
                        # Ensures that boxed is tried first
                        boxed_match_priority=0,
                        try_extract_without_anchor=False,
                    )
                ],
                extraction_mode="first_match",
            )
            # Reward 1 if the content is the same as the ground truth, 0 otherwise
            reward = float(verify(answer_parsed, gold_parsed))
            print('-'*100)
            print(f'\nanswer_parsed:{answer_parsed}\ngold_parsed:{gold_parsed}\nreward:{reward}')
        else:
            reward = 1.0
            print(f'Failed to parse gold solution: {sol}')
        rewards.append(reward)

    return rewards

def accuracy_reward(completions, solution, **kwargs):
    """Reward function that checks if the completion is the same as the ground truth."""
    contents = [completion[0]["content"] for completion in completions]
    rewards = verify_answer(contents, solution)
    print(f'\naccuracy rewards:{rewards}')
    return rewards

def format_reward(completions, **kwargs):
    """Reward function that checks if the completion has a specific format."""
    pattern = r"^<think>.*?</think><answer>.*?</answer>$"
    completion_contents = [completion[0]["content"] for completion in completions]
    matches = [re.match(pattern, content) for content in completion_contents]
    rewards = [1.0 if match else 0.0 for match in matches]
    print('-'*100)
    print('\nformat rewards:', rewards)
    return rewards

def reasoning_steps_reward(completions, **kwargs):
    """Reward function that checks for clear step-by-step reasoning.
    Regex pattern:
        Step \d+: - matches "Step 1:", "Step 2:", etc.
        ^\d+\. - matches numbered lists like "1.", "2.", etc. at start of line
        \n- - matches bullet points with hyphens
        \n\* - matches bullet points with asterisks
        First,|Second,|Next,|Finally, - matches transition words
    """
    pattern = r"(Step \d+:|^\d+\.|\n-|\n\*|First,|Second,|Next,|Finally,)"
    completion_contents = [completion[0]["content"] for completion in completions]
    matches = [len(re.findall(pattern, content)) for content in completion_contents]
    # Magic number 3 to encourage 3 steps and more, otherwise partial reward
    return [min(1.0, count / 3) for count in matches]

def len_reward(completions: list[Dict[str, str]], solution: list[str], **kwargs) -> list[float]:
    """Compute length-based rewards to discourage overthinking and promote token efficiency.

    Taken from the Kimi 1.5 tech report: https://arxiv.org/abs/2501.12599

    Args:
        completions: List of model completions
        solution: List of ground truth solutions

    Returns:
        List of rewards where:
        - For correct answers: reward = 0.5 - (len - min_len)/(max_len - min_len)
        - For incorrect answers: reward = min(0, 0.5 - (len - min_len)/(max_len - min_len))
    """
    contents = [completion[0]["content"] for completion in completions]

    # First check correctness of answers
    correctness = verify_answer(contents, solution)

    # Calculate lengths
    lengths = [len(content) for content in contents]
    min_len = min(lengths)
    max_len = max(lengths)

    # If all responses have the same length, return zero rewards
    if max_len == min_len:
        return [0.0] * len(completions)

    rewards = []
    for length, is_correct in zip(lengths, correctness):
        lambda_val = 0.5 - (length - min_len) / (max_len - min_len)
        reward = lambda_val if is_correct > 0.0 else min(0, lambda_val)
        rewards.append(float(reward))

    return rewards

def get_cosine_scaled_reward(
    min_value_wrong: float = -1.0,
    max_value_wrong: float = -0.5,
    min_value_correct: float = 0.5,
    max_value_correct: float = 1.0,
    max_len: int = 1000,
):
    def cosine_scaled_reward(completions, solution, **kwargs):
        """Reward function that scales based on completion length using a cosine schedule.

        Shorter correct solutions are rewarded more than longer ones.
        Longer incorrect solutions are penalized less than shorter ones.

        Args:
            completions: List of model completions
            solution: List of ground truth solutions

        This function is parameterized by the following arguments:
            min_value_wrong: Minimum reward for wrong answers
            max_value_wrong: Maximum reward for wrong answers
            min_value_correct: Minimum reward for correct answers
            max_value_correct: Maximum reward for correct answers
            max_len: Maximum length for scaling
        """
        contents = [completion[0]["content"] for completion in completions]
        rewards = []
        correctness = verify_answer(contents, solution)
        lengths = [len(content) for content in contents]
        for gen_len, is_correct in zip(lengths, correctness):
            # Apply cosine scaling based on length
            progress = gen_len / max_len
            cosine = math.cos(progress * math.pi)

            if is_correct > 0:
                min_value = min_value_correct
                max_value = max_value_correct
            else:
                # Swap min/max for incorrect answers
                min_value = max_value_wrong
                max_value = min_value_wrong

            reward = min_value + 0.5 * (max_value - min_value) * (1.0 + cosine)
            rewards.append(float(reward))

        return rewards

    return cosine_scaled_reward


def get_repetition_penalty_reward(ngram_size: int, max_penalty: float):
    """
    Computes N-gram repetition penalty as described in Appendix C.2 of https://arxiv.org/abs/2502.03373.
    Reference implementation from: https://github.com/eddycmu/demystify-long-cot/blob/release/openrlhf/openrlhf/reward/repetition.py

    Args:
    ngram_size: size of the n-grams
    max_penalty: Maximum (negative) penalty for wrong answers
    """
    if max_penalty > 0:
        raise ValueError(f"max_penalty {max_penalty} should not be positive")

    def zipngram(text: str, ngram_size: int):
        words = text.lower().split()
        return zip(*[words[i:] for i in range(ngram_size)])

    def repetition_penalty_reward(completions, **kwargs) -> list[float]:
        """
        Reward function that penalizes repetitions.
        ref implementation: https://github.com/eddycmu/demystify-long-cot/blob/release/openrlhf/openrlhf/reward/repetition.py

        Args:
            completions: List of model completions
        """

        contents = [completion[0]["content"] for completion in completions]
        rewards = []
        for completion in contents:
            if completion == "":
                rewards.append(0.0)
                continue
            if len(completion.split()) < ngram_size:
                rewards.append(0.0)
                continue

            ngrams = set()
            total = 0
            for ng in zipngram(completion, ngram_size):
                ngrams.add(ng)
                total += 1

            scaling = 1 - len(ngrams) / total
            reward = scaling * max_penalty
            rewards.append(reward)
        return rewards

    return repetition_penalty_reward

SYSTEM_PROMPT = (
    "A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant "
    "first thinks about the reasoning process in the mind and then provides the user with the answer. The reasoning "
    "process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., "
    "<think> reasoning process here </think><answer> answer here </answer>"
)

@dataclass
class R1GRPOScriptArguments(ScriptArguments):
    reward_funcs: list[str] = field(
        default_factory = lambda: ["accuracy", "format"],
        metadata = {
            "help": f"List of reward functions. Available options: 'accuracy', 'format', 'reasoning_steps', 'len', 'get_cosine_scaled', 'get_repetition_penalty'"
        },
    )
    cosine_min_value_wrong: float = field(
        default=0.0,
        metadata={"help": "Minimum reward for wrong answers"},
    )
    cosine_max_value_wrong: float = field(
        default=-0.5,
        metadata={"help": "Maximum reward for wrong answers"},
    )
    cosine_min_value_correct: float = field(
        default=0.5,
        metadata={"help": "Minimum reward for correct answers"},
    )
    cosine_max_value_correct: float = field(
        default=1.0,
        metadata={"help": "Maximum reward for correct answers"},
    )
    cosine_max_len: int = field(
        default=1000,
        metadata={"help": "Maximum length for scaling"},
    )
    repetition_n_grams: int = field(
        default=3,
        metadata={"help": "Number of n-grams for repetition penalty reward"},
    )
    repetition_max_penalty: float = field(
        default=-1.0,
        metadata={"help": "Maximum (negative) penalty for for repetition penalty reward"},
    )

@dataclass
class R1GRPOConfig(GRPOConfig):
    """
    args for callbacks, benchmarks etc
    """
    benchmarks: list[str] = field(
        default_factory=lambda: [], metadata={"help": "The benchmarks to run after training."}
    )
    callbacks: list[str] = field(
        default_factory=lambda: [], metadata={"help": "The callbacks to run during training."}
    )
    system_prompt: Optional[str] = field(
        default=None, metadata={"help": "The optional system prompt to use for benchmarking."}
    )


def main(script_args, training_args, model_args):
    # Set seed for reproducibility
    set_seed(training_args.seed)

    ###############
    # Setup logging
    ###############
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
        handlers=[logging.StreamHandler(sys.stdout)],
    )
    log_level = training_args.get_process_log_level()
    logger.setLevel(log_level)
    datasets.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.enable_default_handler()
    transformers.utils.logging.enable_explicit_format()

    # Log on each process a small summary
    logger.warning(
        f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
        + f" distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}"
    )
    logger.info(f"Model parameters {model_args}")
    logger.info(f"Script parameters {script_args}")
    logger.info(f"Data parameters {training_args}")

    # Check for last checkpoint
    last_checkpoint = None
    if os.path.isdir(training_args.output_dir):
        last_checkpoint = get_last_checkpoint(training_args.output_dir)
    if last_checkpoint is not None and training_args.resume_from_checkpoint is None:
        logger.info(f"Checkpoint detected, resuming training at {last_checkpoint=}.")

    # Load the dataset
    dataset = load_dataset(script_args.dataset_name, name=script_args.dataset_config)

    # Get reward functions
    REWARD_FUNCS_REGISTRY = {
        "accuracy": accuracy_reward,
        "format": format_reward,
        "reasoning_steps": reasoning_steps_reward,
        "cosine": get_cosine_scaled_reward(
            min_value_wrong=script_args.cosine_min_value_wrong,
            max_value_wrong=script_args.cosine_max_value_wrong,
            min_value_correct=script_args.cosine_min_value_correct,
            max_value_correct=script_args.cosine_max_value_correct,
            max_len=script_args.cosine_max_len,
        ),
        "repetition_penalty": get_repetition_penalty_reward(
            ngram_size=script_args.repetition_n_grams,
            max_penalty=script_args.repetition_max_penalty,
        ),
        "length": len_reward,
    }
    reward_funcs = [REWARD_FUNCS_REGISTRY[func] for func in script_args.reward_funcs]

    # Format into conversation
    def make_conversation(example):
        return {
            "prompt": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": example["problem"]},
            ],
        }

    dataset = dataset.map(make_conversation)
    for split in dataset:
        if"messages"in dataset[split].column_names:
            dataset[split] = dataset[split].remove_columns("messages")

    logger.info("*** Initializing model kwargs ***")
    torch_dtype = (
        model_args.torch_dtype if model_args.torch_dtype in ["auto", None] else getattr(torch, model_args.torch_dtype)
    )

    training_args.gradient_checkpointing = True
    model_kwargs = dict(
        revision = model_args.model_revision,
        trust_remote_code = model_args.trust_remote_code,
        attn_implementation = model_args.attn_implementation,
        torch_dtype = torch_dtype,
        use_cache = False if training_args.gradient_checkpointing else True,
    )

    model = AutoModelForCausalLM.from_pretrained(model_args.model_name_or_path, 
                                                 load_in_4bit=False, **model_kwargs)

    print(model_args.model_name_or_path)
    #############################
    # Initialize the R1GRPO trainer
    #############################
    trainer = GRPOTrainer(
        model = model,
        reward_funcs = reward_funcs,
        args = training_args,
        train_dataset = dataset[script_args.dataset_train_split],
        eval_dataset = dataset[script_args.dataset_test_split] if training_args.eval_strategy != "no" else None,
        peft_config = get_peft_config(model_args),
    )

    ###############
    # Training loop
    ###############
    logger.info("*** Train ***")
    checkpoint = None
    if training_args.resume_from_checkpoint is not None:
        checkpoint = training_args.resume_from_checkpoint
    elif last_checkpoint is not None:
        checkpoint = last_checkpoint
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
    metrics = train_result.metrics
    metrics["train_samples"] = len(dataset[script_args.dataset_train_split])
    trainer.log_metrics("train", metrics)
    trainer.save_metrics("train", metrics)
    trainer.save_state()

    ##################################
    # Save model and create model card
    ##################################
    logger.info("*** Save model ***")
    trainer.save_model(training_args.output_dir)
    logger.info(f"Model saved to {training_args.output_dir}")

    # Save everything else on main process
    kwargs = {
        "dataset_name": script_args.dataset_name,
        "tags": ["GRPOTrainer-R1"],
    }
    if trainer.accelerator.is_main_process:
        trainer.create_model_card(**kwargs)
        # Restore k,v cache for fast inference
        trainer.model.config.use_cache = True
        trainer.model.config.save_pretrained(training_args.output_dir)

script_config = {
    "dataset_name": "AI-MO/NuminaMath-TIR",
    "dataset_config": "default",
    "reward_funcs": [
        "accuracy",
        "format",
        "reasoning_steps",
    ]
}

training_config = {
    "output_dir": "output/GRPO-R1-1.5B",  # model output directory
    "overwrite_output_dir": True,  # whether to overwrite the output directory
    "do_eval": True,  # whether to run evaluation
    "eval_strategy": "steps",  # evaluation strategy: evaluate by steps
    "eval_steps": 100,  # evaluate every 100 steps
    "per_device_train_batch_size": 4,  # training batch size per device
    "per_device_eval_batch_size": 4,  # evaluation batch size per device
    "gradient_accumulation_steps": 8,  # gradient accumulation steps
    "learning_rate": 1.0e-06,  # learning rate
    "num_train_epochs": 1.0,  # total number of training epochs
    "max_steps": -1,  # maximum training steps, -1 means no limit
    "lr_scheduler_type": "cosine",  # learning-rate scheduler: cosine annealing
    "warmup_ratio": 0.1,  # warmup ratio
    "log_level": "info",  # logging level
    "logging_strategy": "steps",  # logging strategy: log by steps
    "logging_steps": 100,  # log every 100 steps
    "save_strategy": "no",  # save strategy: do not save intermediate checkpoints
    "seed": 42,  # random seed
    "bf16": True,  # use bfloat16 precision
    "gradient_checkpointing": True,  # enable gradient checkpointing
    "gradient_checkpointing_kwargs": {
        "use_reentrant": False  # extra gradient-checkpointing arg: disable reentrant mode
    },
    "max_prompt_length": 128,  # maximum prompt length
    "num_generations": 4,  # number of generations per prompt
    "max_completion_length": 256,  # maximum completion length
    "use_vllm": True,  # use vLLM for generation
    "vllm_device": "auto",  # vLLM device, chosen automatically
    "vllm_gpu_memory_utilization": 0.8,  # vLLM GPU memory utilization
    "resume_from_checkpoint": "output/GRPO-R1-1.5B",  # resume checkpoint; if there is no `latest` file, add one pointing at a step dir such as `global_step9055`
}

model_config = {
    "model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct",
    "model_revision": "main",
    "torch_dtype": "bfloat16",
    "attn_implementation": "flash_attention_2",
}

if __name__ == "__main__":
    script_args = R1GRPOScriptArguments(**script_config)
    training_args = R1GRPOConfig(**training_config)
    model_args = ModelConfig(**model_config)
    main(script_args, training_args, model_args)

References

(1)https://github.com/agentica-project/deepscaler

(2)https://huggingface.co/datasets/agentica-org/DeepScaleR-Preview-Dataset

(3)https://zhuanlan.zhihu.com/p/21393382793

(4)https://github.com/hkust-nlp/simpleRL-reason

(5)https://mp.weixin.qq.com/s/RbQnInTa00ZISvJL7vORzA

(6)https://zhuanlan.zhihu.com/p/629644249
