[Bug]: Quantization In MambaMixer2 Not Supported when Tensor Parallel is enabled #14618

Open
fabianlim opened this issue Mar 11, 2025 · 0 comments
Labels
bug Something isn't working


@fabianlim
Contributor

Your current environment

The output of `python collect_env.py`
Environment is not relevant for this issue.

🐛 Describe the bug

The current tensor-parallel (TP) implementation for Mamba2 is complicated for the in_proj, because the gate, projection, state space, and heads are all fused into this one layer. Furthermore, we also need to consider the different cases of whether or not the number of groups divides the number of heads, see #13660. A minimal sketch of the fused layout follows below.
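
To make the fusion concrete, here is a minimal sketch of the output-dimension layout that a column-parallel shard of in_proj has to respect. The sizes are illustrative assumptions, not values from any particular model config:

```python
# Sketch of the Mamba2 fused in_proj output layout; the sizes below are
# illustrative assumptions, not values from a real model config.
num_heads = 128
head_dim = 64
num_groups = 8
state_size = 128
tp_size = 4

intermediate_size = num_heads * head_dim

# The fused in_proj concatenates, along its output dimension:
#   [ gate z | hidden x | B (num_groups * state_size) | C (num_groups * state_size) | dt (num_heads) ]
out_dim = 2 * intermediate_size + 2 * num_groups * state_size + num_heads

# A column-parallel shard must slice each logical segment separately so every
# rank gets a consistent piece of z, x, B, C, and dt. This is only clean when
# tp_size divides both num_heads and num_groups; when num_groups does not
# divide num_heads, the group tensors must be replicated or padded instead
# (see #13660).
assert num_heads % tp_size == 0 and num_groups % tp_size == 0
per_rank = [
    intermediate_size // tp_size,        # z shard
    intermediate_size // tp_size,        # x shard
    num_groups * state_size // tp_size,  # B shard
    num_groups * state_size // tp_size,  # C shard
    num_heads // tp_size,                # dt shard
]
print(out_dim, per_rank)
```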

For now, the TP implementation is simplified:

  • limited to the case of num_groups == 1 when num_groups does not divide num_heads, see #13660.
  • does not support TP > 1 if the Mamba2 mixer is quantized, see #14617.

However, for large models it may be useful to support TP > 1 with quantized layers, even if only for special cases of num_heads and num_groups; a sketch of the main complication follows below. cc: @tlrmchlsmth
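
To illustrate why quantization makes the sharding harder, here is a hedged sketch assuming a generic group-quantized weight format (int4 values packed 8-per-int32 with group-wise fp16 scales); the layout and names are assumptions, not vLLM's actual storage format or API:

```python
# Hedged sketch of the extra bookkeeping a quantized fused weight needs under
# TP; the packed-int4 format below is a generic assumption, not vLLM's actual
# quantized storage layout.
import torch

in_features, out_features, group_size = 4096, 16384, 128

qweight = torch.randint(0, 2**31 - 1, (in_features // 8, out_features), dtype=torch.int32)
scales = torch.rand(in_features // group_size, out_features, dtype=torch.float16)

# A column-parallel shard of the fused in_proj is not one contiguous column
# slice: each logical segment (z, x, B, C, dt) must be sliced independently,
# and the packed weights and their scales must be sliced consistently.
tp_rank, tp_size = 0, 4
seg_start, seg_end = 0, out_features // tp_size  # placeholder bounds for one segment
qweight_shard = qweight[:, seg_start:seg_end]
scales_shard = scales[:, seg_start:seg_end]

# Quantized-weight loaders typically assume one contiguous shard per rank, so
# the interleaved, multi-segment slicing above is what the current
# MambaMixer2 TP path does not implement for quantized layers (see #14617).
```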

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.