We’ve been working with the community and collecting feedback to improve ChatLLaMA. With ChatLLaMA, you can create a hyper-personalized ChatGPT-like assistant using your own data and as little compute as possible.
Today we are happy to announce a new release of the project, with many new features and fixed issues 🚀
- Now you can easily parallelize your training with Hugging Face Accelerate and DeepSpeed (see the Accelerate sketch after this list).
- You can efficiently fine-tune your Hugging Face LLMs with PEFT/LoRA (a minimal LoRA example follows below).
- Use your own dataset without worrying about breaking your training, thanks to automatic dataset checks (a hypothetical sketch of such a check follows the list).
- Your training stats are logged and easily accessible, for full transparency into what is happening.
- You can also apply ChatLLaMA to your favorite Hugging Face models for maximum flexibility.
- Align the assistant with your personal/company values, culture, brand and manifesto.
- Training is more robust with checkpoints automatically managed for you.
- Schedulers and hyperparameters are already initialized for an optimal starting point.
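To give a feel for the distributed-training point, here is a minimal sketch of the usual Hugging Face Accelerate training-loop pattern. The tiny model, optimizer and dataset below are placeholders rather than ChatLLaMA's own API; the point is how `Accelerator` wraps a plain PyTorch loop.

```python
# Minimal sketch of data-parallel training with Hugging Face Accelerate.
# The model, optimizer and dataset are placeholders, not ChatLLaMA's API.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # reads the setup chosen via `accelerate config`

model = torch.nn.Linear(128, 2)  # stand-in for an LLM
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
dataset = TensorDataset(torch.randn(256, 128), torch.randint(0, 2, (256,)))
dataloader = DataLoader(dataset, batch_size=32)

# Accelerate wraps everything for the current distributed setup.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, labels in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```

The same script runs on one GPU, multiple GPUs, or with a DeepSpeed backend; you pick the configuration with `accelerate config` and start training with `accelerate launch`.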
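For the PEFT/LoRA point, this is the standard pattern from Hugging Face's peft library for wrapping a causal LM with LoRA adapters. The model name and the LoRA hyperparameter values are illustrative choices, not ChatLLaMA defaults.

```python
# Minimal LoRA fine-tuning setup with Hugging Face's peft library.
# The model name and LoRA hyperparameters are illustrative choices.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

lora_config = LoraConfig(
    r=8,              # rank of the low-rank update matrices
    lora_alpha=16,    # scaling factor for the LoRA updates
    lora_dropout=0.05,
    bias="none",
    task_type=TaskType.CAUSAL_LM,
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a tiny fraction of weights is trainable
```

After wrapping, the model trains like any other Hugging Face model, but only the small adapter matrices receive gradients, which is what makes the fine-tuning so cheap.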
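The automatic dataset checks themselves aren't shown in this post; as a rough illustration only, a check of this kind might verify that every record has the expected fields before training starts. The `validate_records` helper and the field names below are hypothetical, not ChatLLaMA's actual implementation.

```python
# Hypothetical illustration of a pre-training dataset check;
# the helper and field names are NOT ChatLLaMA's actual code.
import json

REQUIRED_KEYS = {"user_input", "completion"}  # assumed schema for illustration

def validate_records(path: str) -> None:
    with open(path) as f:
        records = json.load(f)
    for i, record in enumerate(records):
        missing = REQUIRED_KEYS - record.keys()
        if missing:
            raise ValueError(f"record {i} is missing keys: {sorted(missing)}")
        if not record["user_input"].strip():
            raise ValueError(f"record {i} has an empty 'user_input'")

if __name__ == "__main__":
    validate_records("dataset.json")  # fails fast instead of breaking mid-training
```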
If you'd like to remain updated on our progress, join our Discord community.