Stability AI is best known for its Stable Diffusion text-to-image generative AI models, but it is expanding its offerings. Today, Stability AI introduced StableLM Zephyr 3B, a new 3 billion parameter large language model (LLM) designed for chat applications, including text generation, summarization, and content personalization. This latest model is a streamlined version of the StableLM text generation model first discussed in April.
StableLM Zephyr 3B offers several advantages due to its smaller size compared to the 7 billion parameter models. Its compact design allows for deployment on a broader range of hardware with lower resource usage while still delivering quick responses. The model has been optimized for tasks such as Q&A and following instructions.
StableLM Zephyr 3B is an extension of the existing StableLM 3B-4e1t model. Its design is inspired by Hugging Face's Zephyr 7B model, which is released under the open-source MIT license and is intended to function as an assistant. Zephyr is trained with a method called Direct Preference Optimization (DPO), which Stability AI has now adopted as well.
DPO is an alternative to the reinforcement learning from human feedback (RLHF) approach used in earlier models to align outputs with human preferences: instead of training a separate reward model and optimizing against it, DPO optimizes the language model directly on pairs of preferred and rejected responses. Historically, DPO has been applied to larger 7 billion parameter models, and StableLM Zephyr is among the first to apply the technique at the 3 billion parameter scale.
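To make the idea concrete, here is a minimal sketch of the DPO loss for a single preference pair. This is an illustrative implementation of the published DPO objective, not Stability AI's training code; the inputs are summed log-probabilities of the chosen and rejected responses under the model being trained and a frozen reference model, and the `beta` value is a commonly used default, not one disclosed for StableLM Zephyr.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one (chosen, rejected) preference pair.

    Each argument is the total log-probability of a full response
    under either the policy being trained or the frozen reference
    model. beta scales how strongly the policy may deviate from
    the reference (0.1 is a common default, assumed here).
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen response over the rejected one, relative to the
    # reference model's own preference.
    margin = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # Negative log-sigmoid of the margin: the loss falls toward 0
    # as the policy's preference for the chosen response grows.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy numbers: the policy assigns the chosen response a higher
# log-probability than the reference does, and the rejected
# response a lower one, so the margin is positive.
loss = dpo_loss(-10.0, -14.0, -12.0, -12.0)
```

Because no reward model or reinforcement-learning loop is involved, each training step is an ordinary gradient update on this loss, which is part of why the method is practical for smaller models.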
Stability AI used the UltraFeedback dataset from the OpenBMB research group for training. This dataset includes over 64,000 prompts and 256,000 responses. The combination of DPO, the smaller model size, and optimized data training results in strong performance. For instance, on the MT Bench evaluation, StableLM Zephyr 3B outperformed larger models like Meta’s Llama-2-70b-chat and Anthropic’s Claude-V1.
The release of StableLM Zephyr 3B is part of a series of new model launches by Stability AI. In August, they introduced StableCode for application code development, followed by Stable Audio in September for text-to-audio generation, and Stable Video Diffusion in November for video generation. Despite these new ventures, they have continued to innovate in text-to-image generation with the recent release of SDXL Turbo, a faster version of their flagship SDXL model.
Stability AI CEO Emad Mostaque emphasized that the company will continue to innovate. He believes small, open, high-performing models customized with user data will eventually outperform larger, general models. Future releases of new StableLM models aim to further democratize generative language models.