
Proper nemotron H and 3 and 2 #45569

Draft
ArthurZucker wants to merge 14 commits into main from nemotron-h-split-dense-sparse

Conversation

@ArthurZucker
Collaborator

What does this PR do?

Fixes # (issue)

Code Agent Policy

The Transformers repo is currently being overwhelmed by a large number of PRs and issue comments written by
code agents. We are currently bottlenecked by our ability to review and respond to them. As a result,
we ask that new users do not submit pure code agent PRs at this time.
You may use code agents in drafting or to help you diagnose issues. We'd also ask autonomous "OpenClaw"-like agents
not to open any PRs or issues for the moment.

PRs that appear to be fully agent-written will probably be closed without review, and we may block users who do this
repeatedly or maliciously.

This is a rapidly-evolving situation that's causing significant shockwaves in the open-source community. As a result,
this policy is likely to be updated regularly in the near future. For more information, please read CONTRIBUTING.md.

  • I confirm that this is not a pure code agent PR.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

xenova and others added 11 commits March 16, 2026 12:58
…tralExperts

- Replace Block/mixer abstraction with NemotronH{Dense,Sparse}DecoderLayer
  holding exactly one of self.mamba / self.self_attn / self.mlp|block_sparse_moe
  based on layer_type, matching the standard decoder-layer pattern (see the
  sketch after this list).
- Pass attention_mask (4D causal) + mamba_attention_mask (raw 2D) uniformly
  to every layer instead of pre-dispatching per layer type.
- NemotronHSparseExperts now inherits from MixtralExperts (non-gated variant).
- Drop moe_latent_size from sparse config and the matching latent projections —
  the field is not present in the released Nemotron-3 config.
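
To make the layer split above concrete, here is a minimal, self-contained sketch of the decoder-layer shape the commit describes: exactly one mixer per layer chosen by `layer_type`, both masks passed to every layer, and the FFN tail behind its own norm. The class names, the `_Mixer` placeholder, and the toy MLP are illustrations only, not the PR's actual `NemotronHDense*DecoderLayer` code.

```python
import torch
from torch import nn


class _Mixer(nn.Module):
    """Placeholder standing in for the real Mamba or attention mixer."""

    def __init__(self, hidden_size):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden_states, attention_mask=None):
        return self.proj(hidden_states)


class DenseDecoderLayer(nn.Module):
    """norm -> mixer -> residual -> norm -> ffn -> residual, holding exactly one mixer."""

    def __init__(self, hidden_size, layer_type):
        super().__init__()
        self.layer_type = layer_type
        if layer_type == "mamba":
            self.mamba = _Mixer(hidden_size)
        else:
            self.self_attn = _Mixer(hidden_size)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, 4 * hidden_size),
            nn.ReLU(),
            nn.Linear(4 * hidden_size, hidden_size),
        )
        self.input_layernorm = nn.LayerNorm(hidden_size)
        self.pre_ff_layernorm = nn.LayerNorm(hidden_size)

    def forward(self, hidden_states, attention_mask=None, mamba_attention_mask=None):
        # Every layer receives both masks; the mixer consumes only the one it understands.
        residual = hidden_states
        hidden_states = self.input_layernorm(hidden_states)
        if self.layer_type == "mamba":
            hidden_states = self.mamba(hidden_states, attention_mask=mamba_attention_mask)
        else:
            hidden_states = self.self_attn(hidden_states, attention_mask=attention_mask)
        hidden_states = residual + hidden_states

        residual = hidden_states
        hidden_states = self.pre_ff_layernorm(hidden_states)
        return residual + self.mlp(hidden_states)


# Quick smoke test of the sketch with made-up sizes.
layer = DenseDecoderLayer(hidden_size=8, layer_type="attention")
out = layer(torch.randn(1, 4, 8))
```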
- Replace the single-class `if layer_type == ...` decoder layer with two
  dedicated classes per architecture, mirroring Jamba's pattern:
  `NemotronH{Dense,Sparse}{Mamba,Attention}DecoderLayer`, each containing
  norm -> mixer -> residual -> norm -> ffn -> residual.
- `NemotronHDenseMLP` now inherits `NemotronMLP`; dense/sparse Models and
  ForCausalLM inherit from the Jamba stack via the modular converter.
- Collapse the per-char `hybrid_override_pattern` into a per-decoder-layer
  `layer_types` list: each `M`/`*` becomes one layer (with mlp or moe FFN
  tail), `-` / `E` are absorbed. `num_hidden_layers` is now the count of
  logical decoder layers (not raw pattern characters); see the conversion
  sketch after this list.
- Sparse MoE experts continue to inherit from MixtralExperts (non-gated
  variant), MoE block from DeepseekV3MoE.
- Tests updated for the new layer structure; BC dispatcher still routes
  `NemotronHConfig(hybrid_override_pattern=...)` to the right subclass.
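
The pattern-to-`layer_types` collapse can be illustrated with a short helper. This is not the PR's converter; the character meanings are taken from the commit message above, and the returned labels (`"mamba"`, `"attention"`) are placeholders for whatever strings the split configs actually use.

```python
def pattern_to_layer_types(hybrid_override_pattern: str) -> list[str]:
    """Collapse a per-character pattern into one entry per logical decoder layer.

    Assumed meanings: M = Mamba mixer, * = attention mixer,
    - = dense MLP tail, E = MoE tail (absorbed into the preceding layer).
    """
    layer_types = []
    for char in hybrid_override_pattern:
        if char == "M":
            layer_types.append("mamba")
        elif char == "*":
            layer_types.append("attention")
        elif char in ("-", "E"):
            continue  # FFN tail of the previous layer, no new decoder layer
        else:
            raise ValueError(f"unknown pattern character {char!r}")
    return layer_types


# num_hidden_layers now counts logical decoder layers, not raw pattern characters:
print(pattern_to_layer_types("M-M-*-M-"))  # ['mamba', 'mamba', 'attention', 'mamba']
print(len("M-M-*-M-"), len(pattern_to_layer_types("M-M-*-M-")))  # 8 4
```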
@ArthurZucker changed the title from "Split" to "Proper nemotron H and 3 and 2" on Apr 22, 2026
…e-sparse

# Conflicts:
#	src/transformers/models/nemotron_h/configuration_nemotron_h.py
#	src/transformers/models/nemotron_h/modeling_nemotron_h.py
#	src/transformers/models/nemotron_h/modular_nemotron_h.py
Rewrite the per-model test files to inherit `CausalLMModelTester` / `CausalLMModelTest`
(the LLMTester base), which drastically shrinks the boilerplate and gives us
the standard ModelTester / Generation / Pipeline / Training / TensorParallel
mixin coverage. Override only the hybrid-cache and attention-count specifics:

- `test_attention_outputs` re-counted from `hybrid_override_pattern.count("*")`.
- Hybrid cache / continue-from-cache / single-layer tests skipped with reasons.

(`use_experts_implementation(has_gate=False)` is only used by
`nemotron_h_sparse` — no other model in the library uses it, so
`NemotronHSparseExperts` keeps inheriting from `MixtralExperts`.)
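
For illustration, the override-and-skip approach might look roughly like the following. The real test files inherit the `CausalLMModelTester` / `CausalLMModelTest` stack; this sketch uses plain `unittest` so it stands alone, and the pattern string is invented.

```python
import unittest


class NemotronHDenseModelTestSketch(unittest.TestCase):
    # The real class inherits the shared CausalLM test mixins; a plain TestCase
    # keeps this sketch self-contained. The pattern below is made up.
    hybrid_override_pattern = "M-M-*-M-*-"

    def test_attention_outputs(self):
        # Attention maps only come from '*' layers, so the expected count is
        # derived from the pattern rather than from num_hidden_layers.
        expected_num_attentions = self.hybrid_override_pattern.count("*")
        self.assertEqual(expected_num_attentions, 2)

    @unittest.skip(reason="NemotronH uses a hybrid Mamba/attention cache")
    def test_generate_continue_from_cache(self):
        pass


if __name__ == "__main__":
    unittest.main()
```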
…split

- `modeling_nemotron_h.py` dispatcher: suppress TRF009 (cross-model imports
  are the whole point of this BC shim) and drop the explicit
  `trust_remote_code=` kwarg (TRF014).
- `configuration_nemotron_h.py`: wire `@auto_docstring(checkpoint=...)` so
  `check_config_docstrings` finds a Hub model link.
- Add stub `tests/models/nemotron_h/test_modeling_nemotron_h.py` covering
  the BC dispatcher (config + model + ForCausalLM) and register it in
  `TEST_FILES_WITH_NO_COMMON_TESTS` (the dispatcher has no model classes
  for the standard common-test machinery).
- Add model docs for `NemotronHDense` / `NemotronHSparse`, update the TOC,
  and drop the `- forward` autodoc lines for the BC dispatcher classes
  (they route via `__new__`, no `forward` of their own).
- Allow-list the small set of config attributes inherited from the hub
  `config.json` format that the split architectures no longer consume.
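
The `__new__`-based routing mentioned above can be sketched generically. This is not the dispatcher from `modeling_nemotron_h.py`; the class names are placeholders, and the sparse-vs-dense criterion (an MoE marker in the pattern) is only an assumption about how such a shim could decide.

```python
class DenseModel:
    def __init__(self, config):
        self.config = config


class SparseModel:
    def __init__(self, config):
        self.config = config


class NemotronHModelShim:
    """BC shim: constructing it hands back the dense or sparse implementation."""

    def __new__(cls, config, *args, **kwargs):
        # Assumed criterion: an 'E' (MoE tail) in the pattern marks a sparse checkpoint;
        # the actual dispatcher may decide differently.
        pattern = getattr(config, "hybrid_override_pattern", "")
        target_cls = SparseModel if "E" in pattern else DenseModel
        # Returning a foreign instance from __new__ skips NemotronHModelShim.__init__.
        return target_cls(config, *args, **kwargs)


class Cfg:
    hybrid_override_pattern = "M-M-*-E-"


print(type(NemotronHModelShim(Cfg())).__name__)  # SparseModel
```

Because `__new__` returns an instance of another class, the shim never defines a `forward` of its own, which is consistent with dropping the `- forward` autodoc lines for the dispatcher classes.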
@github-actions
Contributor

[For maintainers] Suggested jobs to run (before merge)

run-slow: auto, nemotron_h, nemotron_h_dense, nemotron_h_sparse

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
