The Wayback Machine - https://web.archive.org/web/20220323062252/https://github.com/topics/self-attention
Here are 175 public repositories matching this topic.

- The GitHub repository for the paper "Informer", accepted by AAAI 2021. (Python, updated Dec 23, 2021)
- Datasets, tools, and benchmarks for representation learning of code. (Jupyter Notebook, updated Jan 31, 2022)
- My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy histograms, and I support both Cora (transductive) and PPI (inductive) examples. (Jupyter Notebook, updated Feb 12, 2021)
- CCNet: Criss-Cross Attention for Semantic Segmentation (TPAMI 2020 & ICCV 2019). (Python, updated Mar 19, 2021)
- Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN. (Python, updated Jan 1, 2019)
- The implementation of DeBERTa. (Python, updated Mar 22, 2022)
- A list of efficient attention modules. (Python, updated Aug 23, 2021)
- Implementations of various self-attention mechanisms focused on computer vision; an ongoing repository. (Python, updated Sep 14, 2021)
- Recent Transformer-based CV and related works.
- Text classification using deep learning models in PyTorch. (Python, updated Nov 17, 2018)
- A PyTorch implementation of Speech Transformer, an end-to-end ASR system with a Transformer network on Mandarin Chinese. (Python, updated May 7, 2020)
- A Structured Self-attentive Sentence Embedding. (Python, updated Sep 22, 2019)
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" (Python, updated Mar 20, 2022)
- Keras, PyTorch, and NumPy implementations of deep learning architectures for NLP. (Jupyter Notebook, updated Mar 11, 2020)
- Implementation of Stand-Alone Self-Attention in Vision Models using PyTorch. (Python, updated Feb 13, 2020)
- A PyTorch implementation of "Attention Is All You Need" and "Weighted Transformer Network for Machine Translation". (Python, updated Oct 1, 2020)
- Universal Graph Transformer Self-Attention Networks (TheWebConf WWW 2022) (PyTorch and TensorFlow). (Python, updated Mar 10, 2022)
- An implementation of DeepMind's Relational Recurrent Neural Networks in PyTorch. (Python, updated Dec 27, 2018)
- Awesome Transformers (self-attention) in Computer Vision.
- Neat (Neural Attention) Vision is a visualization tool for the attention mechanisms of deep learning models for Natural Language Processing (NLP) tasks; framework-agnostic.
- Code for the paper "MASTER: Multi-Aspect Non-local Network for Scene Text Recognition" (Pattern Recognition 2021). (Python, updated Dec 26, 2021)
- Important paper implementations for question answering using PyTorch. (Jupyter Notebook, updated Dec 29, 2020)
- SOFT: Softmax-free Transformer with Linear Complexity (NeurIPS 2021 Spotlight). (Python, updated Nov 13, 2021)
- TensorFlow implementation of "A Structured Self-Attentive Sentence Embedding". (Python, updated Sep 8, 2021)
- Representation learning on dynamic graphs using self-attention networks. (Python, updated Feb 10, 2022)
- Multi-turn dialogue baselines written in PyTorch. (Python, updated Mar 10, 2020)
- Implementation of Lambda Networks using PyTorch. (Python, updated Nov 24, 2020)
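The repositories above build, in one form or another, on scaled dot-product self-attention: each token is projected into query, key, and value vectors, pairwise query-key similarities are turned into a softmax distribution, and the output is the resulting weighted sum of values. As a point of reference for the listings, here is a minimal NumPy sketch of a single attention head; the function name, weight shapes, and toy dimensions are illustrative choices, not taken from any specific repository on this page.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention.

    x: (seq_len, d_model) token representations
    w_q, w_k, w_v: (d_model, d_k) projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v          # project to queries/keys/values
    d_k = k.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)              # pairwise similarities, scaled by sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True) # numerical stability before exp
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax over key positions
    return weights @ v                           # attention-weighted sum of values

# Toy example: 4 tokens, model dim 8, head dim 8 (hypothetical sizes).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8): one attended vector per input token
```

Most of the listed projects vary exactly this core: restricting which positions attend to which (criss-cross, stand-alone vision attention), replacing the softmax (SOFT), or applying the mechanism to graphs, speech, and text.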