Complete transformer model (Encoder + Decoder + Interconnections)

14 views (last 30 days)
Hello
I am wondering whether a MATLAB keyboard warrior has already coded (in MATLAB) a full transformer model:
  1. Inputs: Input Embedding + Positional Encoding
  2. Encoder: Multihead Attention + Add & Normalisation + Feedforward + Add & Normalisation
  3. Outputs: Output Embedding + Positional Encoding
  4. Decoder: Masked Multihead Attention + Add & Normalisation + Multihead Attention + Add & Normalisation + Feedforward + Add & Normalisation
  5. Final: Linear and Softmax.
Including all the interconnections between them.
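For clarity, here is a minimal numeric sketch of the wiring I mean, in plain MATLAB matrix algebra (one encoder block and one decoder block, random untrained weights, no toolboxes; helper names such as encoderBlock and multiheadAttention are my own illustrative choices, not built-ins; needs R2016b+ for implicit expansion and local functions in scripts):

% Minimal one-block transformer forward pass, plain matrix algebra.
% Everything is random/untrained -- this only illustrates the wiring.
rng(0);
T = 6; dModel = 16; numHeads = 4; dFF = 32; vocabSize = 50;

% 1. Inputs: input embedding + positional encoding
X = randn(T, dModel) + positionalEncoding(T, dModel);
W = initWeights(dModel, dFF, numHeads);

% 2. Encoder: multihead attention + add & norm + feedforward + add & norm
encOut = encoderBlock(X, W);

% 3. Outputs: (shifted) output embedding + positional encoding
Y = randn(T, dModel) + positionalEncoding(T, dModel);

% 4. Decoder: masked self-attention + add & norm + encoder-decoder
%    attention + add & norm + feedforward + add & norm
decOut = decoderBlock(Y, encOut, W);

% 5. Final: linear projection + softmax over the vocabulary
probs = rowSoftmax(decOut * (randn(dModel, vocabSize) / sqrt(dModel)));
disp(sum(probs, 2)');   % sanity check: every row sums to 1

function PE = positionalEncoding(T, d)
    % Sinusoidal positional encoding from "Attention Is All You Need"
    pos = (0:T-1)'; i = 0:2:d-2;
    ang = pos ./ (10000 .^ (i / d));
    PE = zeros(T, d);
    PE(:, 1:2:end) = sin(ang);
    PE(:, 2:2:end) = cos(ang);
end

function W = initWeights(d, dFF, h)
    % Random projection matrices; W.heads carries the head count
    W.q = randn(d)/sqrt(d); W.k = randn(d)/sqrt(d);
    W.v = randn(d)/sqrt(d); W.o = randn(d)/sqrt(d);
    W.ff1 = randn(d, dFF)/sqrt(d); W.ff2 = randn(dFF, d)/sqrt(dFF);
    W.heads = h;
end

function out = multiheadAttention(Q, K, V, W, mask)
    % Scaled dot-product attention, one slice of columns per head
    [Tq, d] = size(Q); dk = d / W.heads;
    Qp = Q * W.q; Kp = K * W.k; Vp = V * W.v;
    out = zeros(Tq, d);
    for n = 1:W.heads
        idx = (n-1)*dk + (1:dk);
        S = Qp(:, idx) * Kp(:, idx)' / sqrt(dk);
        S(mask) = -inf;                             % block masked positions
        out(:, idx) = rowSoftmax(S) * Vp(:, idx);
    end
    out = out * W.o;
end

function Y = encoderBlock(X, W)
    T = size(X, 1);
    A = multiheadAttention(X, X, X, W, false(T));   % self-attention
    X = layerNorm(X + A);                           % add & norm
    Y = layerNorm(X + feedForward(X, W));           % add & norm
end

function Y = decoderBlock(Y, enc, W)
    T = size(Y, 1);
    causal = triu(true(T), 1);                      % hide future tokens
    Y = layerNorm(Y + multiheadAttention(Y, Y, Y, W, causal));
    Y = layerNorm(Y + multiheadAttention(Y, enc, enc, W, false(T)));
    Y = layerNorm(Y + feedForward(Y, W));
end

function Y = feedForward(X, W)
    Y = max(X * W.ff1, 0) * W.ff2;                  % linear-ReLU-linear
end

function Y = layerNorm(X)
    Y = (X - mean(X, 2)) ./ sqrt(var(X, 1, 2) + 1e-5);
end

function P = rowSoftmax(S)
    S = S - max(S, [], 2);                          % numerical stability
    P = exp(S) ./ sum(exp(S), 2);
end

Stacking several such blocks and learning the weights would give the full model; the sketch only shows how the components connect.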
Thank you
Will

Answers (1)

Yash Sharma 2024-8-5
Hi Will,
You can take a look at the following File Exchange submission.
  1 comment
Will Serrano 2024-8-7
Hello Yash
Thank you for your answer.
I have read that one; it is based on a pre-trained transformer and does not directly represent the transformer components. It also provides the same functionality as a standard LSTM for text classification.
It is generally acknowledged that attention-based transformers are superior to LSTM-based deep learning, but I have yet to prove it myself.
Thank you
Will
