Open-source AI
A collective of researchers and engineers from Red Hat & IBM building LLM toolkits you can use today.
LLM Hubs
its_hub
Inference-time scaling for LLMs (a minimal best-of-N sketch of the idea follows this list).
sdg_hub
Synthetic data generation pipelines.
training_hub
Post-training algorithms for LLMs.
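To give a flavor of what inference-time scaling means in practice, here is a minimal best-of-N sketch: sample several candidate completions and keep the one a scorer ranks highest. The generate and score functions are toy stand-ins invented for this illustration; this is not its_hub's actual API.

```python
import random

def generate(prompt: str, rng: random.Random) -> str:
    # Toy stand-in: a real implementation would sample a completion
    # from an LLM here.
    return f"{prompt} -> draft #{rng.randint(0, 999)}"

def score(completion: str) -> float:
    # Toy stand-in for a reward model (higher = better); seeding on the
    # completion string keeps the example deterministic.
    return random.Random(completion).random()

def best_of_n(prompt: str, n: int = 8, seed: int = 0) -> str:
    # Larger n spends more inference-time compute for a better expected
    # score, with no retraining of the underlying model.
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=score)

print(best_of_n("What is 2 + 2?", n=8))
```

Fancier variants, such as the particle filtering behind probabilistic-inference-scaling below, replace independent sampling with guided search, but the compute-for-quality trade-off is the same.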
LLM Tools
async-grpo
Asynchronous GRPO for scalable reinforcement learning (a sketch of GRPO's group-relative advantage follows this list).
hopscotch
A method for skipping redundant attention blocks in language models.
mini_trainer
Efficient training library for large language models up to 70B parameters on a single node.
orthogonal-subspace-learning
Adaptive SVD-based continual learning method for LLMs.
probabilistic-inference-scaling
Inference-time scaling with particle filtering.
reward_hub
State-of-the-art reward models for preference data generation and acceptance criteria.
SQuat
KV cache quantization for efficient inference-time scaling (a baseline quantization sketch follows this list).
training
Efficient messages-format SFT library for language models.
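As background on async-grpo: GRPO (Group Relative Policy Optimization) scores a group of completions sampled from the same prompt against each other, removing the need for a learned value network. Here is a minimal sketch of the group-relative advantage computation, in plain Python rather than async-grpo's own API:

```python
from statistics import mean, pstdev

def grpo_advantages(rewards: list[float]) -> list[float]:
    # Each completion's advantage is its reward relative to the group
    # mean, normalized by the group's standard deviation.
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against a zero-variance group
    return [(r - mu) / sigma for r in rewards]

# Rewards for four completions of one prompt; the best-scoring
# completion receives the largest positive advantage.
print(grpo_advantages([0.1, 0.4, 0.9, 0.4]))
```

A reasonable reading of "asynchronous" here is that rollout generation and gradient updates are decoupled so neither blocks the other, which is what lets the loop scale.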
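SQuat implements a more sophisticated scheme than this, but as generic background, here is the baseline idea of KV cache quantization: store the cache as int8 values plus a scale factor instead of float32, cutting cache memory roughly 4x so longer or more numerous generations fit the same budget. The function names below are illustrative, not SQuat's API.

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    # Symmetric per-tensor int8 quantization: one float scale plus int8
    # values replaces float32 storage (~4x smaller KV cache).
    scale = float(np.abs(x).max() / 127.0) or 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

kv = np.random.randn(4, 8).astype(np.float32)  # toy slice of a KV cache
q, s = quantize_int8(kv)
print("max reconstruction error:", np.abs(kv - dequantize_int8(q, s)).max())
```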
Recent posts
Post-Training Methods for Language Models
Post-training adapts language models for specific, safe, and practical uses. This overview highlights key methods and the open-source training_hub library.
Getting Reasoning Models Enterprise Ready
Customize reasoning models with synthetic data generation for enterprise deployment. Learn techniques from Red Hat's AI Innovation Team.
Recent videos
📹 Instance-Adaptive Inference-Time Scaling with Calibrated Process Reward Models
👤 Speaker: Young Jin Park
📹 Language Model Post-Training in 2025: an Overview of Customization Options Today
👤 Speaker: Mustafa Eyceoz
📹 SDG_Hub: An Open-Source Toolkit for Synthetic Data Generation & LLM Customization
👤 Speaker: Shivchander Sudalairaj & Abhishek Bhandwaldar