GJLi
[Paper Notes] Toolformer: Language Models Can Teach Themselves to Use Tools (2023) — Methods
LLMs have progressed at a breathtaking pace over the past six months; perhaps we are now at a critical turning point on the road to AGI? In any case, I'm getting ready to get back into reading papers. Mar 19, 2023
A Short Note On Visualizing Attention of Vision Transformer (ViT)
Two years ago, when I was first studying the Vision Transformer paper [1], it was not very clear to me how the model attention is being… Mar 15, 2023
Notes on (Deep) Successor Representation
The idea of successor representation (SR) first caught my attention during my rotation survey project, which was about reinforcement… Oct 16, 2018
Machine Learning Engineer in Europe - 0
I am currently looking for machine learning engineer jobs in Europe (ML Engineer/ML Research Engineer), and there are many interesting details along the way. I think the ML engineer role still leans toward software engineering. I originally come from a mathematics background and never received formal programming training, but during my master's studies in Germany, matlab… Aug 18, 2018