
Polish Whisper

At bards.ai, we fine-tuned OpenAI’s Whisper model specifically for the Polish language as part of Hugging Face’s global Whisper Fine-Tuning Sprint. Our model took first place in the Polish category, standing out for its exceptional transcription accuracy and robustness across diverse Polish audio sources.
We focused on minimizing word error rate while preserving the natural flow of spoken Polish, ensuring the model performs well across interviews, podcasts, casual speech, and formal recordings. The result is a state-of-the-art Polish ASR model ready for real-world use in transcription, accessibility, and voice-driven applications.
- Large: https://huggingface.co/bardsai/whisper-large-v2-pl-v2
- Medium: https://huggingface.co/bardsai/whisper-medium-pl-v2
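Both checkpoints are standard Whisper models on the Hugging Face Hub, so they can be loaded with the `transformers` ASR pipeline. A minimal usage sketch follows; the audio file name is a placeholder, and `chunk_length_s=30` is simply Whisper's native window size, not a setting taken from our training recipe.

```python
from transformers import pipeline

MODEL_ID = "bardsai/whisper-large-v2-pl-v2"


def transcribe(audio_path: str, chunk_length_s: int = 30) -> str:
    """Transcribe a Polish audio file with the fine-tuned Whisper model.

    Long recordings are split into 30-second chunks (Whisper's input
    window) and the chunk transcripts are stitched back together.
    """
    asr = pipeline(
        "automatic-speech-recognition",
        model=MODEL_ID,
        chunk_length_s=chunk_length_s,
    )
    return asr(audio_path)["text"]


if __name__ == "__main__":
    # "interview.wav" is a hypothetical example file.
    print(transcribe("interview.wav"))
```

The smaller `bardsai/whisper-medium-pl-v2` checkpoint can be swapped in via `MODEL_ID` when inference speed or memory matters more than the last point of accuracy.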
Performance
| Metric | Value |
|---|---|
| Loss | 0.3684 |
| WER | 7.2802% |
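A WER of 7.28% means roughly 7 out of every 100 reference words are transcribed incorrectly. WER is the word-level edit distance (substitutions + deletions + insertions) divided by the number of words in the reference; a small self-contained implementation for checking transcripts:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution
            )
    return dp[len(ref)][len(hyp)] / len(ref)


# One substituted word out of three -> WER of 1/3.
print(word_error_rate("ala ma kota", "ala ma psa"))
```

In practice, evaluation scripts for the sprint normalized punctuation and casing before scoring, so raw transcripts should be normalized the same way before comparing numbers.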
Training Params
| Parameter | Value |
|---|---|
| Learning rate | |
| Train batch size | |
| Eval batch size | |
| Seed | |
| Gradient accumulation steps | |
| Total train batch size | |
| Optimizer | Adam |
| LR scheduler type | |
| LR scheduler warmup steps | |
| Training steps | |
| Mixed precision training | Native AMP |
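These parameters map directly onto Hugging Face's `Seq2SeqTrainingArguments`, which the sprint's reference recipes used. The sketch below shows that mapping only; every numeric value is an illustrative placeholder typical of Whisper fine-tuning setups, not our actual configuration.

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative sketch: all numeric values are placeholders, not the
# settings used to train bardsai/whisper-large-v2-pl-v2.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-pl",   # placeholder path
    learning_rate=1e-5,                   # placeholder
    per_device_train_batch_size=16,       # placeholder
    per_device_eval_batch_size=8,         # placeholder
    seed=42,                              # placeholder
    gradient_accumulation_steps=2,        # placeholder; total train batch
                                          # size = per-device batch * accum steps
    optim="adamw_torch",                  # Adam-family optimizer
    lr_scheduler_type="linear",           # placeholder
    warmup_steps=500,                     # placeholder
    max_steps=5000,                       # placeholder
    fp16=True,                            # mixed precision (native AMP)
    predict_with_generate=True,           # decode during eval to compute WER
)
```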
Enough reading! Let’s talk.
Our team is ready to support you in delivering Custom AI Solutions.
FAQs
How can I evaluate the potential of custom AI solutions?
What are the main challenges in developing custom AI solutions?
What are the first steps a decision maker should take to start evaluating custom AI solutions?
What are the costs involved in developing custom AI solutions?
What types of data are needed for AI development?
Looking to integrate AI into your product or project?
Get a free consultation with our AI experts.



