Conversation
…xecution-in-functions
Unify and extend parallel configuration for samplers and posterior evaluations
Summary by CodeRabbit
Walkthrough: The changes refactor parallel execution configuration across multiple sampler classes by renaming …
Estimated code review effort: 🎯 4 (Complex), ⏱️ ~45 minutes
Pre-merge checks: ✅ 2 passed, ❌ 1 inconclusive
Pull request overview
This PR standardizes and expands how parallelism is configured across inference backends and posterior-intensity utilities.
Changes:
- Introduces a unified `parallel` setting (bool/int/device sequence) for some samplers and adds a parser to convert it to a worker count.
- Extends posterior intensity CI evaluation (`flux`, `lumin`, `eiso`) to accept `parallel`/`n_parallel` controls.
- Removes the existing end-to-end fit tests that exercised multiple backends and parallel options.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| `tests/test_fit.py` | Deletes broad integration tests that covered multiple fitting methods and parallel settings. |
| `src/elisa/infer/results.py` | Adds `parallel` / `n_parallel` plumbing to posterior intensity CI computations (`flux`/`lumin`/`eiso`). |
| `src/elisa/infer/fit.py` | Renames/reshapes parallel configuration for multiple samplers and adds a `_parse_parallel_setting` helper. |
Comments suppressed due to low confidence (2)
src/elisa/infer/fit.py:1
- The public API now accepts `parallel` as a `Sequence[Device]`, but the implementation only uses `len(parallel)` and discards which devices were provided. This can surprise callers who pass an explicit device list expecting sampling to be pinned to those devices. Consider either (a) narrowing the accepted type to `bool | int` (and updating docs accordingly), or (b) honoring the provided device list by passing it through to the sampler/backend (or introducing a separate `devices=` argument) instead of treating it as only a count.

"""Model fit in a maximum likelihood or Bayesian way."""
src/elisa/infer/results.py:1
- The docstring says `n_parallel` defaults to `jax.local_device_count()`, but the implementation routes through `get_parallel_number(self._n_parallel if n_parallel is None else n_parallel)`, which may resolve to a configured `self._n_parallel` value rather than `local_device_count()`. Also, the wording 'parallel processes' is potentially misleading here since JAX parallelism is typically device-based; consider updating the docs to reflect the actual default resolution and whether this is device-count vs process-count parallelism.

"""Subsequent analysis of maximum likelihood or Bayesian fit."""
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/elisa/infer/fit.py (1)
1549-1570: Move this parser into a shared utility.

Keeping the normalization private to `BayesFit` means `results.py` still has to hand-roll the same fallback logic, and that copy has already drifted into a runtime bug. Extracting this into a common helper would actually unify the parallel API surface.

♻️ Suggested direction

```diff
-from elisa.util.config import get_parallel_number
+from elisa.util.config import get_parallel_number, parse_parallel_setting

-    @staticmethod
-    def _parse_parallel_setting(
-        parallel: bool | int | Sequence[Device],
-    ) -> int:
-        ...
+    # Reuse parse_parallel_setting() here and from results.py.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/elisa/infer/fit.py` around lines 1549 - 1570, Extract the parallel-normalization logic from BayesFit._parse_parallel_setting into a shared helper (e.g., parse_parallel_setting) in a common utils module and update callers to use it: implement the helper to accept parallel: bool|int|Sequence[Device] and reproduce the exact behavior (bool -> get_parallel_number(None) or 1, int -> get_parallel_number(int), non-empty Sequence -> get_parallel_number(len(sequence)), raise ValueError on empty sequence and TypeError otherwise), replace the private BayesFit._parse_parallel_setting to call/delegate to this new helper, and update results.py (and any other callers) to use the new parse_parallel_setting so the logic is unified and no duplicated fallback remains.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/elisa/infer/results.py`:
- Around line 1535-1537: PosteriorResult.covar raises NameError because it
references n_parallel which isn't defined; add an explicit parameter n_parallel:
int | None = None to the PosteriorResult.covar public signature and update its
docstring to document this override, then use the existing expression
get_parallel_number(self._n_parallel if n_parallel is None else n_parallel) (as
in the diff) so the method falls back to self._n_parallel when not provided.
---
Nitpick comments:
In `@src/elisa/infer/fit.py`:
- Around line 1549-1570: Extract the parallel-normalization logic from
BayesFit._parse_parallel_setting into a shared helper (e.g.,
parse_parallel_setting) in a common utils module and update callers to use it:
implement the helper to accept parallel: bool|int|Sequence[Device] and reproduce
the exact behavior (bool -> get_parallel_number(None) or 1, int ->
get_parallel_number(int), non-empty Sequence ->
get_parallel_number(len(sequence)), raise ValueError on empty sequence and
TypeError otherwise), replace the private BayesFit._parse_parallel_setting to
call/delegate to this new helper, and update results.py (and any other callers)
to use the new parse_parallel_setting so the logic is unified and no duplicated
fallback remains.
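The behavior spelled out in the prompt above can be sketched as a standalone helper; `get_parallel_number` is stubbed here (the real one lives in `elisa.util.config`), so the exact default count is an assumption for illustration.

```python
from collections.abc import Sequence


def get_parallel_number(n=None):
    # Stub for elisa.util.config.get_parallel_number; assume 4 local
    # devices when no explicit count is given.
    return 4 if n is None else int(n)


def parse_parallel_setting(parallel):
    """Normalize a bool/int/device-sequence setting to a worker count,
    reproducing the behavior the review prompt describes."""
    if isinstance(parallel, bool):  # check before int: True is an int too
        return get_parallel_number(None) if parallel else 1
    if isinstance(parallel, int):
        return get_parallel_number(parallel)
    if isinstance(parallel, Sequence):
        if not parallel:
            raise ValueError('parallel device sequence must be non-empty')
        return get_parallel_number(len(parallel))
    raise TypeError(f'unsupported type for parallel: {type(parallel)!r}')
```

Once such a helper lives in a shared module, both `BayesFit` and `results.py` can delegate to it and the duplicated fallback logic disappears.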
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1a30c5bd-5e41-477f-848e-335bcb7ae59f
📒 Files selected for processing (3)
- `src/elisa/infer/fit.py`
- `src/elisa/infer/results.py`
- `tests/test_fit.py`
💤 Files with no reviewable changes (1)
- tests/test_fit.py
No description provided.