Colossal-AI

Colossal-AI: Making large AI models cheaper, faster, and more accessible

GPU Cloud HPC-AI.COM Coming!!

For a limited time, you can access an H100 Server for just $1! This is your chance to leverage premium GPU power at an unbeatable price.
Plus, when you refer a friend, you’ll receive 20% cashback or compute credits equal to 100% of their top-up!

Our platform offers on-demand premium compute, ensuring safe, permanent data storage even after stopping your instance.
Don’t miss this incredible opportunity to accelerate your AI projects!

Unlock premium GPUs and register now at HPC-AI.COM to receive $10!

Special Bonuses:

  • Top up $1,000 and receive 300 credits
  • Top up $500 and receive 100 credits

Why Colossal-AI

Prof. James Demmel (UC Berkeley): Colossal-AI makes training AI models efficient, easy, and scalable.

(back to top)

Features

Colossal-AI provides a collection of parallel components for you. We aim to let you write distributed deep learning models just as you would write a model on your laptop, and we provide user-friendly tools to kickstart distributed training and inference in a few lines.
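
For example, once a training script is written with Colossal-AI, it can be launched across multiple GPUs with the bundled CLI. A minimal sketch (train.py stands in for your own script):

    # launch a training script on 8 GPUs of the current node
    colossalai run --nproc_per_node 8 train.py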

(back to top)

Colossal-AI in the Real World

Open-Sora

Open-Sora: Revealing Complete Model Parameters, Training Details, and Everything for Sora-like Video Generation Models
[code]
[blog]
[Model weights]
[Demo]
[GPU Cloud Playground]
[OpenSora Image]

(back to top)

Colossal-LLaMA-2

[GPU Cloud Playground]
[LLaMA3 Image]

ColossalChat

ColossalChat: An open-source solution for cloning ChatGPT with a complete RLHF pipeline.
[code]
[blog]
[demo]
[tutorial]

  • Up to 10 times faster for RLHF PPO Stage3 Training

  • Up to 7.73 times faster for single server training and 1.42 times faster for single-GPU inference

  • Up to 10.3x growth in model capacity on one GPU
  • A mini demo training process requires only 1.62GB of GPU memory (any consumer-grade GPU)

  • Increases fine-tuning model capacity by up to 3.7x on a single GPU while maintaining sufficiently high running speed

(back to top)

AIGC

Acceleration of AIGC (AI-Generated Content) models such as Stable Diffusion v1 and Stable Diffusion v2.

  • Training: Reduce Stable Diffusion memory consumption by up to 5.6x and hardware cost by up to 46x (from A100 to RTX3060).

  • Inference: Reduce inference GPU memory consumption by 2.5x.

(back to top)

Biomedicine

Acceleration of AlphaFold Protein Structure

  • FastFold: accelerates training and inference on GPU clusters, speeds up data processing, and supports inference for sequences with more than 10,000 residues.

  • xTrimoMultimer: accelerates structure prediction of protein monomers and multimers by 11x.

(back to top)

Parallel Training Demo

LLaMA3

LLaMA2

  • 70 billion parameter LLaMA2 model training accelerated by 195%
    [code]
    [blog]

LLaMA1

  • 65-billion-parameter large model pretraining accelerated by 38%
    [code]
    [blog]

MoE

  • Enhanced MoE parallelism: open-source MoE model training can be 9x more efficient
    [code]
    [blog]

GPT-3

  • Saves 50% of GPU resources with a 10.7% acceleration

GPT-2

  • 11x lower GPU memory consumption and superlinear scaling efficiency with Tensor Parallelism

  • 24x larger model size on the same hardware
  • over 3x acceleration

BERT

  • 2x faster training, or 50% longer sequence length

PaLM

OPT

  • Open Pretrained Transformer (OPT) is a 175-billion-parameter AI language model released by Meta; its publicly available pretrained weights let developers build various downstream tasks and application deployments.
  • 45% speedup when fine-tuning OPT at low cost, in just a few lines of code. [Example] [Online Serving]

Please visit our documentation and examples for more details.

ViT

  • 14x larger batch size and 5x faster training with Tensor Parallelism = 64

Recommendation System Models

  • Cached Embedding: uses a software cache to train larger embedding tables with a smaller GPU memory budget.

(back to top)

Single GPU Training Demo

GPT-2

  • 20x larger model size on the same hardware

  • 120x larger model size on the same hardware (RTX 3080)

PaLM

  • 34x larger model size on the same hardware

(back to top)

Inference

Colossal-Inference

Grok-1

  • Inference of the 314-billion-parameter Grok-1 accelerated by 3.8x; an easy-to-use Python + PyTorch + HuggingFace version.

[code]
[blog]
[HuggingFace Grok-1 PyTorch model weights]
[ModelScope Grok-1 PyTorch model weights]

SwiftInfer

  • SwiftInfer: inference performance improved by 46%; an open-source solution that breaks the LLM length limit for multi-round conversations

(back to top)

Installation

Requirements:

If you encounter any problem with installation, you may want to raise an issue in this repository.

Install from PyPI

You can easily install Colossal-AI with the following command. By default, we do not build PyTorch extensions during installation.
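
    # install the latest release from PyPI
    pip install colossalai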

Note: only Linux is supported for now.

However, if you want to build the PyTorch extensions during installation, you can set BUILD_EXT=1.
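
    # build the PyTorch extensions at install time
    BUILD_EXT=1 pip install colossalai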

Otherwise, CUDA kernels will be built during runtime when you actually need them.

We also release a nightly version to PyPI every week, giving you access to unreleased features and bug fixes from the main branch. Install it via:
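
    # weekly nightly build published to PyPI
    pip install colossalai-nightly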

Download From Source

The version of Colossal-AI will be in line with the main branch of the repository. Feel free to raise an issue if you encounter any problems. :)
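
A typical from-source install looks like this:

    # get the source and install
    git clone https://github.com/hpcaitech/ColossalAI.git
    cd ColossalAI
    pip install .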

By default, we do not compile CUDA/C++ kernels. ColossalAI will build them during runtime.
If you want to install and enable CUDA kernel fusion (required when using fused optimizers):
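
    # compile the CUDA/C++ kernels during installation
    BUILD_EXT=1 pip install .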

For users with CUDA 10.2, you can still build Colossal-AI from source. However, you need to manually download the cub library and copy it to the corresponding directory.

(back to top)

Use Docker

Pull from DockerHub

You can directly pull the docker image from our DockerHub page. The image is automatically uploaded upon release.
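
For example (the tag below is a placeholder; pick an actual release tag from the DockerHub page):

    # pull a released image; replace <tag> with a release tag
    docker pull hpcaitech/colossalai:<tag>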

Build On Your Own

Run the following command to build a docker image from the provided Dockerfile.
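
Assuming the repository has been cloned as above:

    # build an image from the Dockerfile shipped in the repository
    cd ColossalAI
    docker build -t colossalai ./docker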

Building Colossal-AI from scratch requires GPU support; you need to use the Nvidia Docker Runtime as the default when running docker build. More details can be found here.
We recommend you install Colossal-AI from our project page directly.

Run the following command to start the docker container in interactive mode.
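
    # start an interactive container with all GPUs visible
    docker run -ti --gpus all --rm --ipc=host colossalai bash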

(back to top)

Community

Join the Colossal-AI community on Forum,
Slack,
and WeChat (微信) to share your suggestions, feedback, and questions with our engineering team.

Contributing

Following the successful examples of BLOOM and Stable Diffusion, all developers and partners with computing power, datasets, or models are welcome to join and build the Colossal-AI community, working toward the era of big AI models!

You may contact us or participate in the following ways:

  1. Leave a Star ⭐ to show your support. Thanks!
  2. Post an issue or submit a PR on GitHub, following the guidelines in Contributing.
  3. Send your official proposal to contact@hpcaitech.com.

Thanks so much to all of our amazing contributors!

(back to top)

CI/CD

We leverage the power of GitHub Actions to automate our development, release and deployment workflows. Please check out this documentation on how the automated workflows are operated.

Cite Us

This project is inspired by some related projects (some by our team and some by other organizations). We would like to credit these amazing projects as listed in the Reference List.

To cite this project, you can use the following BibTeX citation.

Colossal-AI has been accepted as an official tutorial at top conferences including NeurIPS, SC, AAAI, PPoPP, CVPR, ISC, and NVIDIA GTC.

(back to top)
