Hello! The paper states that sampling with temperature 0.6 and top_p 0.95 is used for all evaluations. However, in `run_math.py` I see that
```python
output = model.generate(
    **tokenized_prompts,
    max_length=args.max_length,
    do_sample=False,
    num_beams=1,
)
```
so there is no sampling. Could you please clarify this?
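For context on what the two decoding modes mean: with `do_sample=False` and `num_beams=1` the call above is plain greedy decoding (argmax at each step), whereas the paper's setting corresponds to temperature-scaled nucleus (top-p) sampling. Here is a minimal, stdlib-only sketch of that sampling rule for a single step's logits; it is an illustration of the concept, not the repo's or Hugging Face's implementation:

```python
import math
import random


def sample_top_p(logits, temperature=0.6, top_p=0.95, rng=None):
    """Temperature + nucleus (top-p) sampling over one step's logits.

    Minimal sketch for illustration; parameter values mirror the
    paper's reported setting (temperature 0.6, top_p 0.95).
    """
    rng = rng or random.Random()
    # Temperature scaling: divide logits by temperature before softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep the smallest set of highest-probability tokens whose
    # cumulative mass reaches top_p (the "nucleus").
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    # Renormalize within the nucleus and draw one token.
    kept_mass = sum(probs[i] for i in kept)
    r = rng.random() * kept_mass
    acc = 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]
```

With greedy decoding the result is deterministic, so if the released script really ran with `do_sample=False`, repeated runs would give identical outputs, unlike the sampled setting the paper describes.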