Fast Transformer Decoding: One Write-Head is All You Need

Multi-query attention: the queries keep multiple heads, but a single key head and a single value head are shared across all of them, shrinking the incremental-decoding K/V cache and the memory bandwidth needed per decoding step.
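A minimal NumPy sketch of the idea (illustrative shapes and names, not the paper's code): each of the `h` query heads attends against one shared key/value head, so the K/V tensors are a factor of `h` smaller than in standard multi-head attention.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_query_attention(q, k, v):
    """Multi-query attention.

    q: (batch, heads, seq_q, d_k)  -- per-head queries
    k: (batch, seq_k, d_k)         -- single shared key head
    v: (batch, seq_k, d_v)         -- single shared value head
    returns: (batch, heads, seq_q, d_v)
    """
    d_k = q.shape[-1]
    # Broadcast the shared keys across all query heads.
    scores = np.einsum('bhqd,bkd->bhqk', q, k) / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    # Weighted sum of the shared values.
    return np.einsum('bhqk,bkd->bhqd', weights, v)
```

During autoregressive decoding only `k` and `v` need to be cached, so the cache is `(batch, seq, d)` rather than `(batch, heads, seq, d)`.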