MetaTT: A Global Tensor-Train Adapter for Parameter-Efficient Fine-Tuning

  • MetaTT is a Tensor-Train (TT) adapter framework designed for global low-rank fine-tuning of pre-trained transformers.
  • Unlike LoRA, which attaches a separate low-rank adapter to each weight matrix, MetaTT uses a single shared TT to factorize all transformer sub-modules (query, key, value, projection, and feed-forward layers) by indexing structural axes such as layer and matrix type; a sketch of this mechanism follows the list.
  • For a given rank, MetaTT adds parameters proportional to the sum across modes, which yields a far more compact final adapter than LoRA, whose parameter count grows with the number of adapted matrices; a rough count is worked through below.
  • Benchmarks comparing MetaTT with LoRA and other tensor-based methods on standard language-modeling tasks show that MetaTT achieves a significant reduction in parameters while maintaining accuracy similar to LoRA and outperforming the other tensor-based methods.
  • The TT ansatz benefits from mature optimization routines, such as DMRG-style rank-adaptive minimization and Adam, which makes training simpler than with other tensor-factorization methods.
  • MetaTT allows new modes to be appended cheaply, so a single adapter can be shared across multiple tasks without redesigning the core tensor.
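
The bullets describe the mechanism only at a high level, so the following is a minimal, hypothetical PyTorch sketch of the idea: one shared tensor train whose leading modes index layer and module and whose trailing modes span the input and output dimensions of a weight update. The class name, mode ordering, initialization, and contraction order are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


class SharedTTAdapter(nn.Module):
    """Illustrative shared tensor-train adapter (not the authors' code).

    A single TT with modes (layer, module, d_in, d_out) generates the
    low-rank weight update for every adapted matrix; a concrete
    (layer, module) pair is selected by slicing the first two cores.
    """

    def __init__(self, n_layers, n_modules, d_in, d_out, rank):
        super().__init__()
        scale = 0.02
        # TT cores shaped (left_rank, mode_size, right_rank); boundary ranks are 1.
        self.layer_core = nn.Parameter(scale * torch.randn(1, n_layers, rank))
        self.module_core = nn.Parameter(scale * torch.randn(rank, n_modules, rank))
        self.in_core = nn.Parameter(scale * torch.randn(rank, d_in, rank))
        # Zero-initialise the last core so the update starts at zero (as with LoRA's B matrix).
        self.out_core = nn.Parameter(torch.zeros(rank, d_out, 1))

    def delta_w(self, layer_idx: int, module_idx: int) -> torch.Tensor:
        """Contract the TT for one (layer, module) slot into a (d_in, d_out) update."""
        g = self.layer_core[:, layer_idx, :]                   # (1, rank)
        g = g @ self.module_core[:, module_idx, :]             # (1, rank)
        g = torch.einsum('or,rif->if', g, self.in_core)        # (d_in, rank)
        return torch.einsum('if,fou->io', g, self.out_core)    # (d_in, d_out)


# Toy usage: add the shared adapter's update to one frozen weight matrix.
adapter = SharedTTAdapter(n_layers=12, n_modules=4, d_in=768, d_out=768, rank=8)
W_frozen = torch.randn(768, 768)   # stands in for a pre-trained projection weight
x = torch.randn(2, 768)            # a small batch of activations
h = x @ (W_frozen + adapter.delta_w(layer_idx=3, module_idx=1))
```

Every adapted matrix reads its update from the same four cores, which is what makes the adapter global rather than per-matrix; extending the contraction chain with one extra core indexed by task is the cheap "append a new mode" operation the last bullet refers to.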

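To make the sum-versus-product scaling concrete, the snippet below tallies adapter parameters for the same hypothetical shapes; the layer count, module count, hidden size, and rank are illustrative assumptions, not figures from the paper.

```python
# Rough parameter-count comparison under illustrative settings
# (12 layers, 4 adapted matrices per layer, hidden size 768, rank 8).
L, M, d, r = 12, 4, 768, 8

# LoRA: an independent (A, B) pair per adapted matrix, so the count grows
# with the number of matrices, L * M.
lora_params = L * M * (2 * d * r)                                    # 589_824

# Shared TT with modes (layer, module, d_in, d_out): the count grows with
# the sum of the mode sizes (weighted by bond ranks), not their product.
tt_params = (1 * L * r) + (r * M * r) + (r * d * r) + (r * d * 1)    # 55_648

print(f"LoRA: {lora_params:,}  shared TT: {tt_params:,}")
```

The exact ratio depends on the rank and on which matrices are adapted; the point is only that the shared-TT count does not multiply with the number of layers and modules the way a per-matrix adapter does.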
