The Hugging Face toolkit provides a useful Trainer class that helps users fine-tune pre-trained models in most standard use cases. A custom Dataset class is handy for fine-tuning with the Trainer:

```python
class NetflixDataset(Dataset):
    def __init__(self, txt_list, tokenizer, max_length):
        self.input_ids = []
        self.attn_masks = []
        self.labels = []
        for txt in txt_list:
            # Encode the descriptions using the GPT-Neo tokenizer,
            # wrapping each one in the special start/end tokens
            encodings_dict = tokenizer('<|startoftext|>' + txt + '<|endoftext|>',
                                       truncation=True,
                                       max_length=max_length,
                                       padding="max_length")
            self.input_ids.append(torch.tensor(encodings_dict['input_ids']))
            self.attn_masks.append(torch.tensor(encodings_dict['attention_mask']))

    def __len__(self):
        return len(self.input_ids)

    def __getitem__(self, idx):
        return self.input_ids[idx], self.attn_masks[idx]
```

Now initialize the dataset:

```python
dataset = NetflixDataset(descriptions, tokenizer, max_length)
```

Next, you need to split the whole dataset into the training (90%) and validation (10%) sets:

```python
train_size = int(0.9 * len(dataset))
train_dataset, val_dataset = random_split(dataset,
                                          [train_size, len(dataset) - train_size])
```

All the training parameters should be configured using TrainingArguments:

```python
# Pass the output directory where the model predictions and checkpoints
# will be stored, batch sizes for the training and validation steps,
# and warmup_steps to gradually increase the learning rate
# (parameter values here are illustrative)
training_args = TrainingArguments(output_dir='./results',
                                  num_train_epochs=5,
                                  per_device_train_batch_size=2,
                                  per_device_eval_batch_size=2,
                                  warmup_steps=100,
                                  weight_decay=0.01)
```
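The excerpt configures TrainingArguments and mentions the Trainer, but never shows how the two are wired together with the dataset. The sketch below illustrates the collate step such a setup performs for causal language modeling, using plain Python lists in place of torch tensors; `collate_batch` and the sample ids are hypothetical names and values, not part of the original article.

```python
def collate_batch(batch):
    """Group (input_ids, attention_mask) pairs into a model-ready dict.

    For causal language modeling the labels are simply the input ids:
    the model learns to predict each token from the tokens before it.
    """
    input_ids = [ids for ids, _mask in batch]
    attn_masks = [mask for _ids, mask in batch]
    return {
        "input_ids": input_ids,
        "attention_mask": attn_masks,
        # Copy the ids so labels can be modified independently later
        "labels": [list(ids) for ids in input_ids],
    }

# Two toy "encoded" samples, already padded to the same length
sample_batch = [
    ([101, 7, 8, 102, 0], [1, 1, 1, 1, 0]),
    ([101, 9, 102, 0, 0], [1, 1, 1, 0, 0]),
]
features = collate_batch(sample_batch)
print(features["labels"][0])  # → [101, 7, 8, 102, 0], mirroring input_ids
```

With the real classes, a function like this is what you would pass as the Trainer's `data_collator` argument, along with `model`, `training_args`, `train_dataset`, and `val_dataset`, before calling `trainer.train()`.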
The next step is to read the Netflix dataset and calculate the maximum tokenized length of a movie description in the dataset:

```python
descriptions = pd.read_csv('netflix_titles.csv')['description']
max_length = max([len(tokenizer.encode(description)) for description in descriptions])
```
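Note that the maximum length here is a token count, not a character count. A toy sketch of the same scan, where a whitespace splitter stands in for the GPT-Neo tokenizer's `encode` and the sample descriptions are made up for illustration:

```python
def toy_encode(text):
    # One "token" per whitespace-separated word; the real GPT-Neo
    # tokenizer produces subword ids instead, so counts will differ
    return text.split()

sample_descriptions = [
    "A retired hitman returns for one last job.",
    "Three friends open a bakery in a small coastal town and learn to live.",
]

# Tokenize every description and keep the largest token count;
# the real code runs this over the full CSV column
max_length = max(len(toy_encode(d)) for d in sample_descriptions)
print(max_length)  # → 14, the word count of the longest sample
```

Padding every sample to this length lets the Dataset class produce fixed-size tensors without ever truncating a description.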
A CUDA device is used for this project; please note that GPT-Neo is a very VRAM-demanding model! First, we need to download and prepare the GPT-Neo model:

```python
# Set the random seed to a fixed value to get reproducible results
torch.manual_seed(42)

# Download the pre-trained GPT-Neo model's tokenizer.
# Add the custom tokens denoting the beginning and the end
# of the sequence and a special token for padding
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B",
                                          bos_token='<|startoftext|>',
                                          eos_token='<|endoftext|>',
                                          pad_token='<|pad|>')

# Download the pre-trained GPT-Neo model and transfer it to the GPU
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B").cuda()

# Resize the token embeddings because we've just added 3 new tokens
model.resize_token_embeddings(len(tokenizer))
```
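The embedding resize is needed because the three new special tokens have no rows in the pre-trained embedding matrix. A toy model of that bookkeeping, in plain Python with illustrative sizes (GPT-Neo's real table is far larger, roughly 50k vocabulary entries), with `resize_embeddings` as a hypothetical name:

```python
import random

embed_dim = 4
old_vocab = 6  # vocabulary size before adding special tokens
embeddings = [[0.01 * i] * embed_dim for i in range(old_vocab)]

def resize_embeddings(matrix, new_vocab, dim):
    """Grow (or shrink) the embedding table to new_vocab rows.

    Existing rows are kept as-is so pre-trained knowledge survives;
    new rows are randomly initialized and learned during fine-tuning.
    """
    resized = [row[:] for row in matrix[:new_vocab]]
    while len(resized) < new_vocab:
        resized.append([random.uniform(-0.02, 0.02) for _ in range(dim)])
    return resized

# After adding <|startoftext|>, <|endoftext|>, and <|pad|>,
# len(tokenizer) grows by 3, so the table needs 3 extra rows
new_table = resize_embeddings(embeddings, old_vocab + 3, embed_dim)
print(len(new_table))  # → 9
```

This is also why the resize must happen after the tokens are added to the tokenizer: the model sizes its table from `len(tokenizer)`.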