Summarisation condenses a large body of text into its important points, and can be done with either extractive or abstractive methods.
Extractive summarisation treats each sentence as a binary classification problem (keep or discard), which simplifies the task but limits what the summary can express.
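As a toy illustration of that framing, the sketch below replaces a trained sentence classifier with a naive keep/discard rule; the rule and the sentences are purely illustrative and not part of the project.

```python
# Toy sketch of extractive summarisation: each sentence gets a binary
# keep/discard decision, and the summary is the kept sentences in order.
# The keep() rule below is a naive placeholder, not a trained classifier.
sentences = [
    "LoRA freezes the pretrained weights of the base model.",
    "It injects small trainable low-rank matrices into selected layers.",
    "Nice.",
]

def keep(sentence: str) -> bool:
    # Placeholder "classifier": a real extractive model would be trained
    # to predict this label from the sentence and its context.
    return len(sentence.split()) > 3

summary = " ".join(s for s in sentences if keep(s))
print(summary)
```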
Abstractive summarisation instead generates new text that captures the meaning of the original, which is more expressive but harder to implement.
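For contrast, here is a minimal sketch of abstractive summarisation with a vanilla (not yet fine-tuned) small T5 checkpoint through the Hugging Face pipeline API; the checkpoint name t5-small and the generation lengths are assumptions, since the write-up only says "a small T5 variant".

```python
# Minimal sketch: abstractive summarisation with a vanilla t5-small checkpoint
# via the Hugging Face summarization pipeline. Checkpoint and length settings
# are illustrative assumptions, not the project's exact setup.
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")

article = (
    "LoRA freezes the pretrained weights of a model and injects small trainable "
    "low-rank matrices into selected layers, so only a tiny fraction of the "
    "parameters needs to be updated during fine-tuning."
)

result = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```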
Fine-tuning a small T5 variant with LoRA, a technique that freezes the pretrained weights and trains only small low-rank update matrices, strikes a good balance between model performance and efficiency.
LoRA is well suited to training in low-resource environments and to serving multiple task-specific adapters on top of a single base model, since it drastically reduces the number of trainable parameters while retaining expressivity.
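A minimal sketch of what attaching LoRA adapters to a small T5 looks like with the peft library; the rank, alpha, dropout, and target modules below are illustrative assumptions rather than the project's actual hyperparameters.

```python
# Minimal sketch: wrap a small T5 with LoRA adapters using peft.
# Hyperparameters below are assumptions chosen for illustration.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling applied to the low-rank update
    lora_dropout=0.05,
    target_modules=["q", "v"],  # query/value projections in T5 attention
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```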
The project fine-tunes the model with LoRA using the Hugging Face ecosystem, emphasising fast prototyping and a pipeline that adapts easily to real-world tasks.
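A minimal end-to-end sketch of such a pipeline with the Seq2SeqTrainer API follows; the dataset (cnn_dailymail), sequence lengths, and training arguments are assumptions made for illustration, not the project's exact configuration.

```python
# Minimal sketch: fine-tune a LoRA-wrapped t5-small on a summarisation dataset
# with the Hugging Face Trainer API. Dataset choice, sequence lengths, and
# training arguments are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = get_peft_model(
    AutoModelForSeq2SeqLM.from_pretrained("t5-small"),
    LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, r=8, lora_alpha=16, target_modules=["q", "v"]),
)

dataset = load_dataset("cnn_dailymail", "3.0.0", split="train[:1%]")

def preprocess(batch):
    # T5 expects a task prefix; truncation enforces the token limits
    inputs = tokenizer(
        ["summarize: " + doc for doc in batch["article"]],
        max_length=512, truncation=True,
    )
    labels = tokenizer(text_target=batch["highlights"], max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="t5-small-lora-summarisation",
        per_device_train_batch_size=8,
        learning_rate=1e-3,       # LoRA typically tolerates a higher learning rate
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```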
The BERTScore F1 metric was used to evaluate summarisation quality, with the LoRA fine-tuned model showing an improvement over the vanilla model.
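A sketch of how such a comparison can be scored with the Hugging Face evaluate library; the reference and prediction strings below are placeholders for the actual model outputs.

```python
# Minimal sketch: compare vanilla vs. LoRA-fine-tuned outputs with BERTScore F1
# via the evaluate library. Strings are placeholders for real model outputs.
import evaluate

bertscore = evaluate.load("bertscore")

references    = ["Reference summary of the first article."]
vanilla_preds = ["Summary produced by the vanilla t5-small model."]
lora_preds    = ["Summary produced by the LoRA fine-tuned model."]

vanilla = bertscore.compute(predictions=vanilla_preds, references=references, lang="en")
lora    = bertscore.compute(predictions=lora_preds, references=references, lang="en")

print("vanilla mean F1:", sum(vanilla["f1"]) / len(vanilla["f1"]))
print("LoRA mean F1:   ", sum(lora["f1"]) / len(lora["f1"]))
```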
While the project showcases adapter-based fine-tuning, open questions remain around domain-transfer testing, training duration, inference optimization, and the token limits imposed for efficiency.
Despite these limitations, the project demonstrates that minimal fine-tuning updates to a pretrained model can meaningfully improve performance.
The project serves as a learning experience in implementing adapter-based fine-tuning with LoRA and welcomes input on improving evaluation metrics and techniques.