@Dao-AILab

Dao AI Lab

We are a CS research group led by Prof. Tri Dao

Popular repositories

  1. flash-attention (Public)

    Fast and memory-efficient exact attention

    Python · 11.9k stars · 1.1k forks

  2. causal-conv1d (Public)

    Causal depthwise conv1d in CUDA, with a PyTorch interface

    Cuda · 221 stars · 43 forks

  3. fast-hadamard-transform (Public)

    Fast Hadamard transform in CUDA, with a PyTorch interface

    C · 62 stars · 6 forks

Repositories

Showing 3 of 3 repositories
  • flash-attention (Public)

    Fast and memory-efficient exact attention

    Python · 11,853 stars · BSD-3-Clause · 1,050 forks · 407 issues · 37 pull requests · Updated Jul 3, 2024
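For context, the operation flash-attention implements is standard exact (softmax) attention; the repo's contribution is computing it in an IO-aware, tiled fashion rather than materializing the full score matrix. Below is a minimal NumPy sketch of the reference computation it matches — an illustration only, not the repo's API, and `attention_reference` is a hypothetical name:

```python
import numpy as np

def attention_reference(q, k, v, causal=False):
    """Plain exact attention: softmax(q @ k.T / sqrt(d)) @ v.

    flash-attention produces this same result, but tiles the work so the
    (L, L) score matrix is never fully materialized in memory.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                    # (L, L) attention scores
    if causal:
        L = scores.shape[0]
        mask = np.tril(np.ones((L, L), dtype=bool))  # position t sees <= t
        scores = np.where(mask, scores, -np.inf)
    scores -= scores.max(axis=-1, keepdims=True)     # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
out = attention_reference(q, k, v, causal=True)
```

With the causal mask, row 0 can only attend to position 0, so the first output row simply copies `v[0]` — a handy sanity check against any fused implementation.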
  • causal-conv1d (Public)

    Causal depthwise conv1d in CUDA, with a PyTorch interface

    Cuda · 221 stars · BSD-3-Clause · 43 forks · 8 issues · 4 pull requests · Updated Jun 29, 2024
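The op behind causal-conv1d is a per-channel (depthwise) 1-D convolution with left-only padding, so the output at step t never sees future inputs. A minimal NumPy reference of that semantics — illustrative only; the repo's actual kernel is CUDA with a PyTorch interface, and the function name here is hypothetical:

```python
import numpy as np

def causal_depthwise_conv1d(x, weight, bias=None):
    """Reference for a causal depthwise conv1d.

    x:      (dim, seqlen) input sequence, one row per channel
    weight: (dim, width) one filter per channel (depthwise, no mixing)
    Causality: left-pad by width-1 zeros so out[:, t] uses x[:, :t+1] only.
    """
    dim, seqlen = x.shape
    width = weight.shape[1]
    xpad = np.concatenate([np.zeros((dim, width - 1)), x], axis=1)
    out = np.zeros_like(x)
    for t in range(seqlen):
        # sliding window ending at time t, weighted per channel;
        # weight[:, -1] multiplies the current timestep
        out[:, t] = (xpad[:, t:t + width] * weight).sum(axis=1)
    if bias is not None:
        out += bias[:, None]
    return out

# Impulse at t=0 through a width-3 filter exposes the tap ordering:
x = np.zeros((1, 4)); x[0, 0] = 1.0
w = np.array([[1.0, 2.0, 3.0]])
y = causal_depthwise_conv1d(x, w)   # -> [[3., 2., 1., 0.]]
```

The impulse response shows the last filter tap aligning with the current timestep, which is the convention that keeps the op strictly causal.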
  • fast-hadamard-transform (Public)

    Fast Hadamard transform in CUDA, with a PyTorch interface

    C · 62 stars · BSD-3-Clause · 6 forks · 1 issue · 1 pull request · Updated May 24, 2024
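The Hadamard transform that this repo accelerates can be computed with the O(n log n) butterfly recursion known as the fast Walsh–Hadamard transform. A plain-NumPy sketch for power-of-two lengths, purely illustrative of the math the CUDA kernel fuses:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform along the last axis (unnormalized).

    Length must be a power of two. Each pass pairs elements h apart and
    replaces (a, b) with (a + b, a - b) — the classic butterfly.
    """
    x = np.asarray(x, dtype=float).copy()
    n = x.shape[-1]
    assert n & (n - 1) == 0 and n > 0, "length must be a power of two"
    h = 1
    while h < n:
        y = x.reshape(x.shape[:-1] + (-1, 2, h))  # blocks of 2h, split in halves
        a = y[..., 0, :].copy()
        b = y[..., 1, :].copy()
        y[..., 0, :] = a + b
        y[..., 1, :] = a - b
        h *= 2
    return x

sig = np.array([1.0, 0.0, 0.0, 0.0])
spec = fwht(sig)        # -> [1., 1., 1., 1.]
```

Since the Hadamard matrix satisfies H·H = n·I, applying `fwht` twice recovers the input scaled by n — a quick correctness check for any fused implementation.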

People

This organization has no public members. You must be a member to see who’s a part of this organization.

Top languages

Python, C, Cuda
