In this video, let's look into fine-tuning the open-source 20B-parameter GPT-OSS model. We will use the Hugging Face ecosystem for this, which includes the transformers, peft, datasets, and related libraries. The fine-tuning process mostly follows boilerplate code; beyond that, we need to be mindful of which dataset we use, what hardware resources we need, and, at times, which parameters to set before starting the training. Hope it's useful! (A minimal code sketch follows the links below.)

Github Notebook: (look for )
Fine-tuned model -
GPT-OSS -
HF dataset -
OpenAI's cookbook -

AI BITES KEY LINKS
Website:
YouTube: @AIBites
Twitter:
Patreon:
Github:
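
For reference, here is a minimal sketch of what the LoRA fine-tuning loop looks like with transformers, peft, and datasets. The model ID is the public Hub ID for GPT-OSS, but the dataset name, LoRA target modules, and hyperparameters below are illustrative assumptions rather than the exact values from the notebook.

```python
# A minimal sketch of LoRA fine-tuning GPT-OSS 20B with the Hugging Face
# ecosystem (transformers + peft + datasets). Dataset name, target modules,
# and hyperparameters are placeholders -- swap in the notebook's values.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "openai/gpt-oss-20b"  # public Hub ID for the 20B GPT-OSS model

tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # causal LMs often lack a pad token

model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# LoRA freezes the 20B base weights and trains small adapter matrices,
# which is what makes a model this size trainable on a single large GPU.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # assumed names; check model.named_modules()
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# Placeholder instruction dataset; any Hub dataset with a text field works.
dataset = load_dataset("HuggingFaceH4/no_robots", split="train")

def tokenize(example):
    return tokenizer(example["prompt"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt-oss-20b-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,  # effective batch size of 8
        learning_rate=2e-4,
        num_train_epochs=1,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal LM) labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("gpt-oss-20b-lora")  # saves only the LoRA adapter weights
```

Even with LoRA, a 20B model generally wants an 80 GB-class GPU in bf16, and the video's notebook may additionally rely on quantization or TRL's SFTTrainer, so treat this as a skeleton rather than a drop-in replacement.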










