
Add log space fallback for PPF func #2

Merged
hbmartin merged 10 commits into main from log-fallback on Jul 17, 2025

Conversation

@hbmartin
Owner

@hbmartin hbmartin commented Jul 17, 2025

PR Type

Enhancement


Description

  • Add log-space fallback mechanism for PPF function

  • Implement numerical stability for extreme probability values

  • Update scipy dependency to version 1.14.1


Diagram Walkthrough

flowchart LR
  A["PPF function"] --> B["Check for NaN values"]
  B --> C{"Fallback needed?"}
  C -->|Yes| D["Log-space calculation"]
  C -->|No| E["Standard beta PPF"]
  D --> F["Brentq solver"]
  F --> G["Final result"]
  E --> G
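The flow above can be sketched as a small, self-contained Python function. This is an illustrative sketch only, not the betapert implementation: it assumes the standard PERT shape parameters (alpha = 1 + lambd·(mode−mini)/(maxi−mini), beta = 1 + lambd·(maxi−mode)/(maxi−mini)); the actual `_calc_alpha_beta` in betapert/funcs.py may differ.

```python
import numpy as np
import scipy.optimize
import scipy.stats


def ppf_log_space(q, mini, mode, maxi, lambd=4):
    """Invert the beta CDF in log space for numerical stability.

    Sketch under assumed standard PERT parameterization; the real
    _calc_alpha_beta in betapert/funcs.py may differ.
    """
    alpha = 1 + lambd * (mode - mini) / (maxi - mini)
    beta = 1 + lambd * (maxi - mode) / (maxi - mini)
    log_q = np.log(np.clip(q, 1e-15, 1 - 1e-15))

    # Root of logcdf(x) - log(q) = 0 on the normalized [0, 1] support;
    # brentq guarantees convergence given a sign change over the bracket.
    def log_cdf_eq(x):
        return scipy.stats.beta.logcdf(x, alpha, beta) - log_q

    x_normalized = scipy.optimize.brentq(log_cdf_eq, 1e-10, 1 - 1e-10)
    return mini + (maxi - mini) * x_normalized
```

For moderate q this agrees with the direct `scipy.stats.beta.ppf` call; the payoff is for q near 0 or 1, where `logcdf` retains precision that `cdf` loses.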

File Walkthrough

Relevant files
Enhancement
__init__.py
Add fallback parameter support to PERT class                         

betapert/__init__.py

  • Add fallback parameter to PERT class constructor
  • Define valid fallback options in _fallbacks set
  • Pass fallback parameter to PPF function call
+7/-1     
funcs.py
Implement log-space PPF fallback mechanism                             

betapert/funcs.py

  • Implement _ppf_fallback_log_space function for numerical stability
  • Add fallback parameter to ppf function with NaN detection
  • Use log-space calculations and brentq solver for extreme probabilities
  • Include multiple fallback levels for robustness
+45/-2   
Dependencies
pyproject.toml
Update scipy dependency version requirement                           

pyproject.toml

  • Update scipy dependency from "^1.14.0" to ">=1.14.1"
+1/-1     

Summary by Sourcery

Provide a fallback mechanism for the PPF (inverse CDF) calculation to avoid NaN results on extreme probabilities by attempting log-space root finding and progressive fallbacks, and bump the scipy version requirement.

New Features:

  • Add fallback parameter to PERT class constructor and ppf function to enable alternative PPF methods
  • Introduce log-space fallback mechanism for beta PPF using brentq solver with multiple fallback levels

Enhancements:

  • Improve numerical stability for extreme probability inputs in the PPF computation

Build:

  • Update scipy dependency requirement to >=1.14.1

Summary by CodeRabbit

  • New Features

    • Added a log-space fallback method to improve numerical stability in percentile calculations for the PERT distribution.
    • Users can specify a fallback option when creating a PERT distribution instance.
  • Bug Fixes

    • Improved handling of extreme probability values in percentile calculations with the new fallback method.
  • Chores

    • Updated the minimum required version of the scipy dependency to 1.14.1.
    • Expanded linting ignore rules for test files.
    • Reordered test execution steps in the CI workflow for improved linting and testing sequence.
  • Tests

    • Added comprehensive tests validating the new log-space fallback functionality and its integration with the PERT distribution.
  • Documentation

    • Configured pytest to treat specific warnings as errors to improve test reliability.

Important

Add log-space fallback for PPF function to handle extreme probabilities and update scipy dependency.

  • Behavior:
    • Add log-space fallback mechanism in funcs.py for PPF function to handle extreme probabilities using _ppf_fallback_log_space().
    • Integrate fallback parameter in PERT and ModifiedPERT classes in __init__.py.
    • Update ppf() in funcs.py to use fallback if NaN values are detected.
  • Dependencies:
    • Update scipy version in pyproject.toml to >=1.14.1.
  • Tests:
    • Add test_ppf_log_fallback.py to validate log-space fallback functionality and integration with PERT distribution.

This description was created by Ellipsis for 6f9d3a5.

@coderabbitai

coderabbitai Bot commented Jul 17, 2025

Warning

Rate limit exceeded

@hbmartin has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 21 minutes and 37 seconds before requesting another review.


📥 Commits

Reviewing files that changed from the base of the PR and between 6a25960 and 6f9d3a5.

⛔ Files ignored due to path filters (1)
  • poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (4)
  • .github/workflows/pytest-poetry.yml (1 hunks)
  • betapert/funcs.py (2 hunks)
  • pyproject.toml (3 hunks)
  • tests/test_ppf_log_fallback.py (1 hunks)

Walkthrough

A new fallback mechanism for the percent point function (PPF) calculation in the PERT distribution was introduced. The PERT and ModifiedPERT classes now pass a module-level FALLBACK parameter to the PPF computation. The underlying PPF function in funcs.py supports fallback logic, including a numerically stable log-space root-finding approach. The scipy dependency version was updated in the project configuration. Comprehensive tests were added to verify the fallback functionality and robustness.

Changes

File(s) Change Summary
betapert/__init__.py Added module-level FALLBACK variable; modified PERT._ppf and ModifiedPERT._ppf to pass fallback=FALLBACK to funcs.ppf.
betapert/funcs.py Added _ppf_fallback_log_space fallback using root-finding in log-space; updated ppf to accept optional fallback parameter and invoke fallback logic if needed; added _ppf_fallbacks mapping.
pyproject.toml Changed scipy dependency from ^1.14.0 to >=1.14.1; expanded lint ignore codes for tests/*.py.
tests/test_ppf_log_fallback.py Added comprehensive tests for log-space fallback PPF method covering edge cases, failure modes, input shapes, integration with PERT, and fallback triggering scenarios.
.github/workflows/pytest-poetry.yml Reordered steps to run tests after linting (ruff).
pytest.ini Added filterwarnings = error::pytest.PytestCollectionWarning to treat collection warnings as errors.
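The pytest.ini change listed above amounts to a one-line configuration entry (the surrounding file contents are assumed, not taken from the repository):

```ini
[pytest]
filterwarnings = error::pytest.PytestCollectionWarning
```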

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant PERT
    participant funcs

    User->>PERT: Instantiate PERT (uses FALLBACK)
    User->>PERT: Call PERT._ppf(q, mini, mode, maxi)
    PERT->>funcs: ppf(q, mini, mode, maxi, fallback=FALLBACK)
    funcs->>funcs: Compute standard PPF
    alt Standard PPF contains NaN and fallback specified
        funcs->>funcs: Call _ppf_fallback_log_space
        funcs-->>PERT: Return fallback PPF result
    else No fallback needed
        funcs-->>PERT: Return standard PPF result
    end
    PERT-->>User: Return PPF value(s)

Suggested labels

Review effort 3/5

Poem

In the land of PERT, a fallback appears,
For tricky quantiles that once caused us tears.
Now log-space is here, with a hop and a cheer,
Rabbits rejoice, the results are more clear!
With dependencies neat and the future in sight,
Our code leaps ahead, both stable and light.
🐇✨


@sourcery-ai

sourcery-ai Bot commented Jul 17, 2025

Reviewer's Guide

Introduces a log-space fallback mechanism in the PPF function for enhanced numerical stability, extends the PERT class to accept a fallback option, and updates the scipy dependency to >=1.14.1.

Class diagram for updated PERT class and PPF fallback mechanism

classDiagram
    class PERT {
        +fallback
        +__init__(*args, **kwargs)
        +_get_support(mini, mode, maxi)
        +_stats(mini, mode, maxi)
        +_ppf(q, mini, mode, maxi)
        +_rvs(mini, mode, maxi, size=None, random_state=None)
    }
    class funcs {
        +ppf(q, mini, mode, maxi, lambd=4, fallback=None)
        +_ppf_fallback_log_space(q, mini, mode, maxi, lambd=4)
    }
    PERT --> funcs : uses
    funcs <|-- _ppf_fallback_log_space : fallback
    funcs <|-- ppf : main + fallback

Flow diagram for PPF function with log-space fallback

flowchart LR
  A["PPF function"] --> B["Check for NaN values"]
  B --> C{"Fallback needed?"}
  C -->|Yes| D["Log-space calculation"]
  C -->|No| E["Standard beta PPF"]
  D --> F["Brentq solver"]
  F --> G["Final result"]
  E --> G

File-Level Changes

Change Details Files
Implement log-space fallback mechanism for PPF calculations
  • Add _ppf_fallback_log_space using logcdf and brentq solver
  • Extend ppf() to detect NaN results and dispatch to fallback
  • Chain fallback strategies: log-space, standard PPF, then linear interpolation
betapert/funcs.py
Add fallback parameter support in PERT distribution
  • Introduce fallback argument and validation in PERT.__init__
  • Relay fallback option to funcs.ppf in the _ppf method
betapert/__init__.py
Update scipy dependency version requirement
  • Bump scipy requirement to ">=1.14.1" in pyproject.toml
pyproject.toml



@gemini-code-assist gemini-code-assist Bot left a comment


Summary of Changes

Hello @hbmartin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the numerical stability and robustness of the PERT distribution's ppf (percent point function). It introduces a configurable fallback mechanism, specifically a log-space approach, to handle cases where standard calculations might yield NaN values, particularly for extreme probabilities. This ensures more reliable quantile calculations across a wider range of inputs.

Highlights

  • Enhanced PERT Constructor: The PERT distribution now accepts an optional fallback argument in its constructor. This allows users to specify alternative calculation methods for the ppf (percent point function) when initializing the distribution.
  • Log-Space PPF Fallback Implementation: A new private function, _ppf_fallback_log_space, has been added. This method employs a log-space approach and scipy.optimize.brentq to solve for quantiles, providing a more numerically stable solution for extreme probabilities. It includes additional fallbacks to standard ppf and linear interpolation if the brentq method encounters issues.
  • Conditional Fallback Application in PPF: The core ppf function in betapert/funcs.py now first attempts its standard calculation. If the result contains NaN (Not a Number) values and a fallback method (e.g., 'log') is specified, the new log-space fallback is invoked to provide a more reliable result, improving robustness for edge cases.
  • Dependency Update: The scipy dependency has been updated in pyproject.toml from ^1.14.0 to >=1.14.1. This update likely ensures compatibility or leverages improvements in newer SciPy versions relevant to numerical computations.

@qodo-code-review

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 4 🔵🔵🔵🔵⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Missing Import

The _ppf_fallback_log_space function uses scipy.optimize.brentq but scipy.optimize is not imported. This will cause a runtime error when the fallback is triggered.

x_normalized = scipy.optimize.brentq(log_cdf_eq, 1e-10, 1 - 1e-10)
results[i] = mini + (maxi - mini) * x_normalized
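A fix is to import the subpackage explicitly at the top of the module rather than relying on `scipy.stats` pulling `scipy.optimize` in transitively (which can work incidentally but is not guaranteed). A minimal sketch:

```python
# Import the optimize subpackage explicitly; `import scipy.stats` alone
# does not guarantee that `scipy.optimize` is a usable attribute.
import scipy.optimize
import scipy.stats

# Hypothetical call mirroring the flagged line: find the root of a
# sign-changing function on [0, 1] with a bracketing solver.
root = scipy.optimize.brentq(lambda x: x - 0.5, 0.0, 1.0)
```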
Exception Handling

The bare except: clause on line 41 catches all exceptions including system exits and keyboard interrupts, which is generally bad practice. Should catch specific exceptions or at least Exception.

except:
    # Final fallback: linear interpolation
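A sketch of the narrower handler the reviewer suggests (the function names here are illustrative, not the betapert code):

```python
import scipy.optimize


def solve_or_default(f, lo, hi, default):
    """Try brentq; fall back to a default only on the failures it raises."""
    try:
        return scipy.optimize.brentq(f, lo, hi)
    except (ValueError, RuntimeError):
        # ValueError: f(lo) and f(hi) do not bracket a root (same sign);
        # RuntimeError: the solver failed to converge.
        return default


# A bracketed root is found; an unbracketed one falls back cleanly,
# while KeyboardInterrupt/SystemExit would still propagate.
good = solve_or_default(lambda x: x - 0.5, 0.0, 1.0, default=-1.0)
bad = solve_or_default(lambda x: x + 2.0, 0.0, 1.0, default=-1.0)
```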
Assertion Error

Using assert for input validation in production code is not recommended as assertions can be disabled with -O flag. Should use proper exception raising instead.

assert self.fallback in _fallbacks, "Invalid fallback value"
super().__init__(*args, **kwargs)
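The assert can be replaced with an explicit raise. A minimal stand-in (the `_fallbacks` set and class body here are simplified from the PR description, not copied from betapert):

```python
_fallbacks = {None, "log"}  # assumed valid options, per the PR description


class PERT:
    """Simplified stand-in showing validation that survives `python -O`."""

    def __init__(self, **kwargs):
        self.fallback = kwargs.pop("fallback", None)
        # Raise instead of assert: ValueError cannot be optimized away.
        if self.fallback not in _fallbacks:
            raise ValueError(f"Invalid fallback value: {self.fallback!r}")
```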


@sourcery-ai sourcery-ai Bot left a comment


Hey @hbmartin - I've reviewed your changes and they look great!

Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments

### Comment 1
<location> `betapert/funcs.py:36` </location>
<code_context>
+            x_normalized = scipy.optimize.brentq(log_cdf_eq, 1e-10, 1 - 1e-10)
+            results[i] = mini + (maxi - mini) * x_normalized
+
+        except (ValueError, RuntimeError):
+            # Fallback to regular ppf if log-space fails
+            try:
+                x_normalized = scipy.stats.beta.ppf(qi_safe, alpha, beta)
+                results[i] = mini + (maxi - mini) * x_normalized
</code_context>

<issue_to_address>
Catching broad exceptions may obscure underlying issues.

Catching all exceptions with a bare except can hide unexpected errors. Catch only specific exceptions like ValueError or RuntimeError from scipy.stats.beta.ppf.
</issue_to_address>

### Comment 2
<location> `betapert/funcs.py:88` </location>
<code_context>
-    return mini + (maxi - mini) * scipy.stats.beta.ppf(q, alpha, beta)
+    _beta_ppf = mini + (maxi - mini) * scipy.stats.beta.ppf(q, alpha, beta)
+    # Use fallback if any values are NaN
+    if fallback is not None and np.isnan(_beta_ppf).any():
+        return _ppf_fallbacks[fallback](q, mini, mode, maxi, lambd)
+    return _beta_ppf
</code_context>

<issue_to_address>
NaN check may not handle scalar inputs as intended.

np.isnan(_beta_ppf).any() will fail if _beta_ppf is a scalar. Use np.any(np.isnan(_beta_ppf)) or handle scalars explicitly to avoid AttributeError.
</issue_to_address>
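An aside on Comment 2: `np.isnan` returns a NumPy boolean even for Python scalars, so the method-call form often works in practice; the free-function spelling is still the unambiguous choice for scalars and arrays alike. A minimal sketch:

```python
import numpy as np


def needs_fallback(result):
    # np.any / np.isnan as free functions accept scalars and
    # arrays of any shape alike; no .any() attribute is assumed.
    return bool(np.any(np.isnan(result)))
```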

### Comment 3
<location> `betapert/__init__.py:44` </location>
<code_context>

+    def __init__(self, *args, **kwargs):
+        self.fallback = kwargs.pop('fallback', None)
+        assert self.fallback in _fallbacks, "Invalid fallback value"
+        super().__init__(*args, **kwargs)
+
</code_context>

<issue_to_address>
Using assert for input validation is not recommended in production code.

Assertions can be disabled at runtime; use ValueError to ensure input validation is always enforced.
</issue_to_address>

### Comment 4
<location> `betapert/funcs.py:12` </location>
<code_context>
 import scipy.stats


+def _ppf_fallback_log_space(q, mini, mode, maxi, lambd=4):
+    """Use log-space to avoid numerical issues with extreme probabilities"""
+    alpha, beta = _calc_alpha_beta(mini, mode, maxi, lambd)
</code_context>

<issue_to_address>
Consider refactoring the fallback logic in `ppf` to use a vectorized approach that only applies log-space backup to NaN results, eliminating the need for a separate fallback function.

Here’s one way to collapse all of that nested looping/inner‐function/try-except machinery into a single, small vectorized block in `ppf`.  Note that you can drop the separate `_ppf_fallback_log_space` altogether and just do your log-space backup *only* on the NaNs that `beta.ppf` spits out:

```python
import numpy as np
from scipy.optimize import brentq

def ppf(q, mini, mode, maxi, lambd=4, *, fallback=None):
    α, β = _calc_alpha_beta(mini, mode, maxi, lambd)
    # core ppf call (vectorized)
    out = scipy.stats.beta.ppf(q, α, β)

    if fallback == 'log':
        eps = 1e-15
        # clip once
        q_safe = np.clip(q, eps, 1 - eps)
        nan_mask = np.isnan(out)
        if nan_mask.any():
            log_q = np.log(q_safe[nan_mask])
            # solve only for the NaN slots
            def _solve_log_ppf(logqi):
                return brentq(
                    lambda x: scipy.stats.beta.logcdf(x, α, β) - logqi,
                    eps, 1 - eps
                )
            # map over just the bad entries
            out[nan_mask] = [ _solve_log_ppf(lqi) for lqi in log_q ]

    # scale back to [mini, maxi]
    return mini + (maxi - mini) * out
```

Key wins:

- No per‐element `for` loop over all q’s, only over the *failed* entries.
- All clipping, log, and `beta.ppf` calls are fully vectorized.
- Single try (inside `brentq`) rather than nested try/except for each path.
- Eliminates `_ppf_fallback_log_space` and its dict lookup.
</issue_to_address>


@qodo-code-review

qodo-code-review Bot commented Jul 17, 2025

PR Code Suggestions ✨

Latest suggestions up to 32b0bae

Category | Suggestion | Impact
Possible issue
Fix scalar return logic
Suggestion Impact: The commit implemented the core fix suggested - replacing len(results) == 1 with np.isscalar(q) for scalar return detection. However, it used a different variable naming approach (_q instead of q_array) and made additional changes not in the suggestion.

code diff:

-    q = np.atleast_1d(q)
-    results = np.zeros_like(q, dtype=float)
+    _q = np.atleast_1d(q)
+    results = np.zeros_like(_q, dtype=float)
 
     # Define the equation to solve: log(CDF(x)) - log(q) = 0
     def make_log_cdf_eq(qi_val):
+        log_qi = np.log(np.clip(qi_val, _CLIP_EPSILON, 1 - _CLIP_EPSILON))
         def log_cdf_eq(x_normalized):
             # Ensure x_normalized stays in [0,1]
-            log_qi = np.log(np.clip(qi_val, _CLIP_EPSILON, 1 - _CLIP_EPSILON))
             x_clamped = np.clip(x_normalized, _CLIP_EPSILON, 1 - _CLIP_EPSILON)
             return scipy.stats.beta.logcdf(x_clamped, alpha, beta) - log_qi
 
         return log_cdf_eq
 
-    for i, qi in enumerate(q):
+    for i, qi in enumerate(_q.flat):
         try:
             # Use brentq instead of fsolve, guaranteed convergence within bounds
             x_normalized = scipy.optimize.brentq(
@@ -40,7 +40,7 @@
                 _BRENTQ_BOUND,
                 1 - _BRENTQ_BOUND,
             )
-            results[i] = mini + (maxi - mini) * x_normalized
+            results.flat[i] = mini + (maxi - mini) * x_normalized
 
         except (ValueError, RuntimeError):
             # ValueError: Invalid function values, convergence issues, or invalid bounds
@@ -51,7 +51,7 @@
             results[i] = mini + (maxi - mini) * x_normalized
 
     # Returns scalar for scalar input, array for array input
-    return results[0] if len(results) == 1 else results
+    return results[0] if np.isscalar(q) else results

The scalar/array return logic is incorrect and will fail for multi-dimensional
arrays. Using len(results) == 1 only checks the first dimension, not the total
number of elements. This will incorrectly return a scalar for single-element
multi-dimensional arrays like np.array([[0.5]]).

betapert/funcs.py [17-54]

 def _ppf_fallback_log_space(q, mini, mode, maxi, lambd):
     """Use log-space to avoid numerical issues with extreme probabilities"""
     alpha, beta = _calc_alpha_beta(mini, mode, maxi, lambd)
 
     # Handle scalar and array inputs consistently
-    q = np.atleast_1d(q)
-    results = np.zeros_like(q, dtype=float)
+    q_array = np.atleast_1d(q)
+    results = np.zeros_like(q_array, dtype=float)
     ...
     # Returns scalar for scalar input, array for array input
-    return results[0] if len(results) == 1 else results
+    return results.item() if np.isscalar(q) else results

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 9


Why: The suggestion correctly identifies that using len(results) == 1 to handle scalar returns is buggy for multi-dimensional arrays, and the proposed fix using np.isscalar(q) is a robust solution to this critical logic error.

High
Fix multi-dimensional array iteration
Suggestion Impact: The suggestion was directly implemented - the code now uses _q.flat for enumeration and results.flat for assignment, fixing the multi-dimensional array iteration issue

code diff:

+    for i, qi in enumerate(_q.flat):
         try:
             # Use brentq instead of fsolve, guaranteed convergence within bounds
             x_normalized = scipy.optimize.brentq(
@@ -40,7 +40,7 @@
                 _BRENTQ_BOUND,
                 1 - _BRENTQ_BOUND,
             )
-            results[i] = mini + (maxi - mini) * x_normalized
+            results.flat[i] = mini + (maxi - mini) * x_normalized

The loop iterates over q but assigns to results[i] which is based on q_array
from np.atleast_1d(q). For multi-dimensional arrays, this indexing mismatch will
cause incorrect assignments or potential index errors since enumerate(q) doesn't
handle multi-dimensional arrays properly.

betapert/funcs.py [35-43]

-for i, qi in enumerate(q):
+for i, qi in enumerate(q_array.flat):
     try:
         # Use brentq instead of fsolve, guaranteed convergence within bounds
         x_normalized = scipy.optimize.brentq(
             make_log_cdf_eq(qi),
             _BRENTQ_BOUND,
             1 - _BRENTQ_BOUND,
         )
-        results[i] = mini + (maxi - mini) * x_normalized
+        results.flat[i] = mini + (maxi - mini) * x_normalized

[Suggestion processed]

Suggestion importance[1-10]: 9


Why: The suggestion correctly identifies a critical bug where iterating over a multi-dimensional numpy array with enumerate(q) is incorrect and will fail, and the proposed fix of iterating over q.flat is the correct way to handle arrays of any dimension.

High
Incremental [*]
Add convergence tolerance parameters

Add explicit tolerance parameters to brentq to ensure consistent convergence
behavior and prevent potential infinite loops or excessive iterations in edge
cases.

betapert/funcs.py [38-42]

 x_normalized = scipy.optimize.brentq(
     make_log_cdf_eq(qi),
     _BRENTQ_BOUND,
     1 - _BRENTQ_BOUND,
+    xtol=1e-12,
+    rtol=1e-12,
+    maxiter=100,
 )
Suggestion importance[1-10]: 7


Why: The suggestion correctly points out that adding explicit tolerance parameters to brentq improves the robustness and predictability of the numerical optimization.

Medium
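
For illustration, the same call shape on a toy root-finding problem (a cubic instead of the log-CDF equation):

```python
import scipy.optimize

# Root of x**3 - 2 on [0, 2]; xtol/rtol/maxiter are the explicit values
# proposed above, not scipy's defaults.
root = scipy.optimize.brentq(
    lambda x: x**3 - 2,
    0.0,
    2.0,
    xtol=1e-12,
    rtol=1e-12,
    maxiter=100,
)
print(abs(root**3 - 2) < 1e-9)  # → True
```

Note that brentq rejects an rtol below roughly 4× machine epsilon, so 1e-12 is close to the tightest portable setting.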
Optimize redundant log calculation
Suggestion Impact:The suggestion was directly implemented - log_qi calculation was moved from inside the log_cdf_eq function to the outer make_log_cdf_eq function, exactly as suggested

code diff:

+        log_qi = np.log(np.clip(qi_val, _CLIP_EPSILON, 1 - _CLIP_EPSILON))
         def log_cdf_eq(x_normalized):
             # Ensure x_normalized stays in [0,1]
-            log_qi = np.log(np.clip(qi_val, _CLIP_EPSILON, 1 - _CLIP_EPSILON))

The log_qi calculation should be moved outside the inner function to avoid
redundant computation. Currently, log_qi is recalculated on every function call
during root finding, but it only depends on qi_val which is constant for each
root-finding iteration.

betapert/funcs.py [26-33]

 def make_log_cdf_eq(qi_val):
+    log_qi = np.log(np.clip(qi_val, _CLIP_EPSILON, 1 - _CLIP_EPSILON))
     def log_cdf_eq(x_normalized):
         # Ensure x_normalized stays in [0,1]
-        log_qi = np.log(np.clip(qi_val, _CLIP_EPSILON, 1 - _CLIP_EPSILON))
         x_clamped = np.clip(x_normalized, _CLIP_EPSILON, 1 - _CLIP_EPSILON)
         return scipy.stats.beta.logcdf(x_clamped, alpha, beta) - log_qi
     return log_cdf_eq

[Suggestion processed]

Suggestion importance[1-10]: 6


Why: This is a valid performance optimization that moves the constant log_qi calculation out of the inner loop, improving efficiency without changing logic.

Low
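
A toy version of the factory shows the saving (a hypothetical `make_eq` standing in for `make_log_cdf_eq`, with a call counter in place of profiling):

```python
import math

log_calls = 0

def make_eq(qi_val):
    global log_calls
    # Hoisted: computed once per root-finding problem, not per evaluation.
    log_calls += 1
    log_qi = math.log(qi_val)

    def eq(x):
        # The solver may call this hundreds of times; log_qi is reused.
        return math.log(x) - log_qi

    return eq

eq = make_eq(0.5)
for x in (0.1, 0.4, 0.5, 0.6):  # simulate repeated solver evaluations
    eq(x)

print(log_calls)  # → 1
```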
Verify mock function invocation
Suggestion Impact:The suggestion was implemented by adding an assertion to verify the mock function was called exactly once

code diff:

+            assert TestPPFLogFallback.mock_ppf_nan_calls == 1

Add verification that the mock was actually called to ensure the test is
properly exercising the intended code path and not passing due to the mock not
being invoked.

tests/test_ppf_log_fallback.py [132-139]

 def test_ppf_fallback_not_triggered_when_not_configured(self):
     """Test that fallback is NOT triggered by default when regular ppf returns NaN."""
     mini, mode, maxi = 0, 1, 10
 
     with unittest.mock.patch("scipy.stats.beta.ppf", new=TestPPFLogFallback.mock_ppf_nan):
         dist = betapert.pert(mini, mode, maxi)
         result = dist.ppf(0.5)
         assert np.isnan(result)
+        assert TestPPFLogFallback.mock_ppf_nan_calls > 0

[Suggestion processed]

Suggestion importance[1-10]: 6


Why: This suggestion correctly improves the test by asserting that the mock was called, making the test more robust by ensuring the intended code path is exercised.

Low
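
The pattern can be sketched with `unittest.mock.patch` on a stand-in target (`math.sqrt` here plays the role of `scipy.stats.beta.ppf`):

```python
import math
import unittest.mock

with unittest.mock.patch("math.sqrt", return_value=float("nan")) as mock_sqrt:
    result = math.sqrt(4.0)
    assert math.isnan(result)
    # Without this assertion the test could pass vacuously if the code
    # under test stopped calling the patched function altogether.
    mock_sqrt.assert_called_once_with(4.0)
```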
General
Handle scalar NaN consistently
Suggestion Impact:The exact suggested change was implemented - line 49 shows np.any(np.atleast_1d(np.isnan(_beta_ppf))) replacing the original np.any(np.isnan(_beta_ppf))

code diff:

+    if fallback is not None and np.any(np.atleast_1d(np.isnan(_beta_ppf))):

The fallback logic doesn't handle the case where _beta_ppf is a scalar NaN
value. np.any() expects an array-like input, and calling it on a scalar NaN will
work but is inconsistent with the array case and could be more explicit.

betapert/funcs.py [95-101]

 def ppf(q, mini, mode, maxi, lambd=4, *, fallback=None):
     alpha, beta = _calc_alpha_beta(mini, mode, maxi, lambd)
     _beta_ppf = mini + (maxi - mini) * scipy.stats.beta.ppf(q, alpha, beta)
     # Use fallback if any values are NaN
-    if fallback is not None and np.any(np.isnan(_beta_ppf)):
+    if fallback is not None and np.any(np.atleast_1d(np.isnan(_beta_ppf))):
         return _ppf_fallbacks[fallback](q, mini, mode, maxi, lambd)
     return _beta_ppf

[Suggestion processed]

Suggestion importance[1-10]: 3


Why: The suggestion correctly notes that np.any() on a scalar NaN is unusual, but the existing code functions correctly as np.any(np.nan) evaluates to True; the proposed change to wrap the input in np.atleast_1d improves explicitness but offers only a minor readability improvement.

Low
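
A quick check confirms the two spellings agree on both scalars and arrays:

```python
import numpy as np

scalar_result = float("nan")
array_result = np.array([1.0, np.nan, 3.0])

# np.any already accepts scalar input (np.isnan on a scalar returns a
# 0-d boolean), so both spellings agree; np.atleast_1d is purely explicit.
for value in (scalar_result, array_result):
    assert np.any(np.isnan(value)) == np.any(np.atleast_1d(np.isnan(value)))

print(bool(np.any(np.isnan(scalar_result))))  # → True
```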

Previous suggestions

✅ Suggestions up to commit b3835b0
Category | Suggestion | Impact
General
Optimize function definition in loop
Suggestion Impact:The suggestion was directly implemented. The function definition was moved outside the loop using a factory function pattern (make_log_cdf_eq) that returns the equation function, eliminating the inefficient function creation on each iteration.

code diff:

+     # Define the equation to solve: log(CDF(x)) - log(q) = 0
+    def make_log_cdf_eq(qi_val):
+        def log_cdf_eq(x_normalized):
             # Ensure x_normalized stays in [0,1]
-            log_qi = np.log(np.clip(qi, _CLIP_EPSILON, 1 - _CLIP_EPSILON))
+            log_qi = np.log(np.clip(qi_val, _CLIP_EPSILON, 1 - _CLIP_EPSILON))
             x_clamped = np.clip(x_normalized, _CLIP_EPSILON, 1 - _CLIP_EPSILON)
             return scipy.stats.beta.logcdf(x_clamped, alpha, beta) - log_qi
+        return log_cdf_eq
 
+    for i, qi in enumerate(q):
         try:
             # Use brentq instead of fsolve, guaranteed convergence within bounds
-            x_normalized = scipy.optimize.brentq(log_cdf_eq, _BRENTQ_BOUND, 1 - _BRENTQ_BOUND)
+            x_normalized = scipy.optimize.brentq(make_log_cdf_eq(qi), _BRENTQ_BOUND, 1 - _BRENTQ_BOUND)

The function definition inside the loop creates a new function object on each
iteration, which is inefficient. Consider defining the equation function outside
the loop and passing qi as a parameter, or use a lambda function with proper
closure.

betapert/funcs.py [25-40]

+def make_log_cdf_eq(qi_val):
+    def log_cdf_eq(x_normalized):
+        log_qi = np.log(np.clip(qi_val, _CLIP_EPSILON, 1 - _CLIP_EPSILON))
+        x_clamped = np.clip(x_normalized, _CLIP_EPSILON, 1 - _CLIP_EPSILON)
+        return scipy.stats.beta.logcdf(x_clamped, alpha, beta) - log_qi
+    return log_cdf_eq
+
 for i, qi in enumerate(q):
-    # Define the equation to solve: log(CDF(x)) - log(q) = 0
-    def log_cdf_eq(x_normalized, qi=qi):  # def does not bind loop variable `qi`
-        ...
+    log_cdf_eq = make_log_cdf_eq(qi)
     try:
-        # Use brentq instead of fsolve, guaranteed convergence within bounds
         x_normalized = scipy.optimize.brentq(log_cdf_eq, _BRENTQ_BOUND, 1 - _BRENTQ_BOUND)
         results[i] = mini + (maxi - mini) * x_normalized
     except (ValueError, RuntimeError):
         ...
Suggestion importance[1-10]: 2


Why: The suggestion to move the function definition out of the loop is a valid micro-optimization, but its performance impact is negligible compared to the brentq call within the loop.

Low
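
The `qi=qi` default-argument trick in the original loop exists because Python closures bind loop variables late; both behaviors in a minimal demonstration:

```python
# Closures capture the variable, not its value at definition time.
late_bound = [lambda: i for i in range(3)]
default_bound = [lambda i=i: i for i in range(3)]  # default evaluated per iteration

print([f() for f in late_bound])     # → [2, 2, 2]: every closure sees the final i
print([f() for f in default_bound])  # → [0, 1, 2]
```

The factory-function version accepted above achieves the same early binding as the default argument, but without the extra parameter in the signature.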
✅ Suggestions up to commit 3e2074d
Category | Suggestion | Impact
Possible issue
Add missing scipy.optimize import
Suggestion Impact:The suggestion was directly implemented - scipy.optimize import was added on line 10 of the diff, which is exactly what was suggested to fix the missing import for the brentq function call

code diff:

+import scipy.optimize

The function is missing the required import for scipy.optimize which is used in
the brentq call. This will cause a NameError at runtime when the log-space
fallback is triggered.

betapert/funcs.py [15-46]

+import scipy.optimize
+
 def _ppf_fallback_log_space(q, mini, mode, maxi, lambd=4):
     """Use log-space to avoid numerical issues with extreme probabilities"""
     alpha, beta = _calc_alpha_beta(mini, mode, maxi, lambd)
 
     # Handle scalar and array inputs consistently
     q = np.atleast_1d(q)
     results = np.zeros_like(q, dtype=float)
 
     for i, qi in enumerate(q):
         # Clamp to avoid log(0) or log(1)
         qi_safe = np.clip(qi, _CLIP_EPSILON, 1 - _CLIP_EPSILON)
         log_qi = np.log(qi_safe)
 
         # Define the equation to solve: log(CDF(x)) - log(q) = 0
         def log_cdf_eq(x_normalized):
             # Ensure x_normalized stays in [0,1]
             x_clamped = np.clip(x_normalized, _CLIP_EPSILON, 1 - _CLIP_EPSILON)
             return scipy.stats.beta.logcdf(x_clamped, alpha, beta) - log_qi
 
         try:
             # Use brentq with bounds instead of fsolve for more stability
             x_normalized = scipy.optimize.brentq(
                 log_cdf_eq, _BRENTQ_LOWER_BOUND, 1 - _BRENTQ_LOWER_BOUND
             )
             results[i] = mini + (maxi - mini) * x_normalized
 
         except (ValueError, RuntimeError):
             # Fallback to regular ppf if log-space fails
             x_normalized = scipy.stats.beta.ppf(qi_safe, alpha, beta)
             results[i] = mini + (maxi - mini) * x_normalized
 
     return results[0] if len(results) == 1 else results
Suggestion importance[1-10]: 9


Why: The suggestion correctly identifies that scipy.optimize is used without being imported, which would cause a NameError and prevent the new fallback functionality from working.

High
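
The fix itself is one line; a self-contained sketch (note that whether `import scipy` alone exposes `scipy.optimize` has varied across scipy versions, so the explicit submodule import is the reliable form):

```python
import scipy.optimize  # explicit submodule import; don't rely on `import scipy` alone

# brentq requires a sign change across the bracket [a, b]
root = scipy.optimize.brentq(lambda x: x - 1.5, 0.0, 2.0)
print(abs(root - 1.5) < 1e-9)  # → True
```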
Incremental [*]
Fix incorrect fallback expectation
Suggestion Impact:The test was modified to remove the incorrect expectation of linear interpolation fallback and instead test that the function works correctly with proper fallback behavior, though not exactly as suggested

code diff:

-    def test_log_fallback_multiple_fallback_levels(self):
+    def test_log_fallback_with_secondary_clip_fallback(self):
         """Test that multiple fallback levels work correctly."""
         mini, mode, maxi = 0, 1, 10
         q = 0.5
@@ -158,19 +194,14 @@
         def mock_brentq_fail(*args, **kwargs):
             raise ValueError("Brentq failed")
 
-        def mock_ppf_fail(*args, **kwargs):
-            raise ValueError("PPF failed")
-
         scipy.optimize.brentq = mock_brentq_fail
-        scipy.stats.beta.ppf = mock_ppf_fail
 
         try:
             # This should fall back to linear interpolation
-            result = funcs._ppf_fallback_log_space(q, mini, mode, maxi)
-
-            # Should fall back to linear interpolation: mini + q * (maxi - mini)
-            expected = mini + q * (maxi - mini)
-            assert result == expected
+            result = funcs._ppf_fallback_log_space(q, mini, mode, maxi, 4)
+
+            assert np.isfinite(result)
+            assert mini <= result <= maxi
 
         finally:
             # Restore original functions
@@ -181,22 +212,11 @@

The test expects linear interpolation fallback when both brentq and beta.ppf
fail, but the actual implementation doesn't have this fallback behavior. The
function will likely raise an exception instead of returning linear
interpolation.

tests/test_ppf_log_fallback.py [149-178]

 def test_log_fallback_multiple_fallback_levels(self):
     """Test that multiple fallback levels work correctly."""
     mini, mode, maxi = 0, 1, 10
     q = 0.5
 
     # Mock both scipy functions to fail
     original_brentq = scipy.optimize.brentq
     original_ppf = scipy.stats.beta.ppf
 
     def mock_brentq_fail(*args, **kwargs):
         raise ValueError("Brentq failed")
 
     def mock_ppf_fail(*args, **kwargs):
         raise ValueError("PPF failed")
 
     scipy.optimize.brentq = mock_brentq_fail
     scipy.stats.beta.ppf = mock_ppf_fail
 
     try:
-        # This should fall back to linear interpolation
-        result = funcs._ppf_fallback_log_space(q, mini, mode, maxi)
-
-        # Should fall back to linear interpolation: mini + q * (maxi - mini)
-        expected = mini + q * (maxi - mini)
-        assert result == expected
+        # This should raise an exception since there's no linear interpolation fallback
+        with pytest.raises((ValueError, RuntimeError)):
+            funcs._ppf_fallback_log_space(q, mini, mode, maxi)
 
     finally:
         # Restore original functions
         scipy.optimize.brentq = original_brentq
         scipy.stats.beta.ppf = original_ppf
Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies that a test asserts an incorrect behavior, as the implementation lacks the linear interpolation fallback the test expects, which would cause the test to fail.

Medium
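
The corrected expectation can be sketched without the real module; the `solve` function below is a hypothetical stand-in for an implementation that raises rather than silently interpolating:

```python
def solve(q):
    # Hypothetical stand-in: raises instead of returning a linear
    # interpolation, mirroring code with no last-resort fallback.
    if not 0.0 < q < 1.0:
        raise ValueError("q must lie strictly in (0, 1)")
    return q

# The test should assert that the exception propagates, not that a
# linear-interpolation value comes back.
raised = False
try:
    solve(1.5)
except (ValueError, RuntimeError):
    raised = True
assert raised
```

With pytest available, the same check collapses to the `with pytest.raises((ValueError, RuntimeError)):` form shown in the suggested diff.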
✅ Suggestions up to commit 774bc77
Category | Suggestion | Impact
Possible issue
Fix scalar NaN check
Suggestion Impact:The exact suggested change was implemented - line 59 was changed from "np.isnan(_beta_ppf).any()" to "np.any(np.isnan(_beta_ppf))" to fix the scalar NaN check issue

code diff:

-    if fallback is not None and np.isnan(_beta_ppf).any():
+    if fallback is not None and np.any(np.isnan(_beta_ppf)):

The np.isnan(_beta_ppf).any() check will fail if _beta_ppf is a scalar value, as
scalars don't have an .any() method. Use np.any(np.isnan(_beta_ppf)) to handle
both scalar and array inputs consistently.

betapert/funcs.py [87-90]

 # Use fallback if any values are NaN
-if fallback is not None and np.isnan(_beta_ppf).any():
+if fallback is not None and np.any(np.isnan(_beta_ppf)):
     return _ppf_fallbacks[fallback](q, mini, mode, maxi, lambd)
 return _beta_ppf

[Suggestion processed]

Suggestion importance[1-10]: 9


Why: This is a critical bug fix, as np.isnan(_beta_ppf).any() will raise an AttributeError for scalar inputs, which is a likely use case, preventing the fallback logic from working correctly.

High
Validate fallback parameter exists

The function doesn't validate that the fallback parameter exists in
_ppf_fallbacks before accessing it. This could cause a KeyError if an invalid
fallback is passed. Add validation to ensure the fallback exists.

betapert/funcs.py [84-90]

 def ppf(q, mini, mode, maxi, lambd=4, *, fallback=None):
     alpha, beta = _calc_alpha_beta(mini, mode, maxi, lambd)
     _beta_ppf = mini + (maxi - mini) * scipy.stats.beta.ppf(q, alpha, beta)
     # Use fallback if any values are NaN
     if fallback is not None and np.isnan(_beta_ppf).any():
+        if fallback not in _ppf_fallbacks:
+            raise ValueError(f"Invalid fallback '{fallback}'. Valid options: {list(_ppf_fallbacks.keys())}")
         return _ppf_fallbacks[fallback](q, mini, mode, maxi, lambd)
     return _beta_ppf
Suggestion importance[1-10]: 5


Why: The suggestion correctly points out a potential KeyError, but the main entry point via the PERT class already performs this validation, making this a minor improvement for direct function usage.

Low
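
A minimal sketch of the proposed fail-fast lookup (the registry and function names here are hypothetical, not the library's API):

```python
_fallbacks = {"log": lambda q: q}  # hypothetical registry of fallback methods

def get_fallback(name):
    # Fail fast with a descriptive ValueError rather than a bare KeyError,
    # so direct callers get an actionable message.
    if name not in _fallbacks:
        raise ValueError(
            f"Invalid fallback '{name}'. Valid options: {list(_fallbacks)}"
        )
    return _fallbacks[name]

print(get_fallback("log")(0.25))  # → 0.25
```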


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces a valuable fallback mechanism for the PPF function to handle extreme probability values, enhancing numerical stability. The implementation uses a log-space calculation with brentq for robustness.

My review focuses on improving code robustness, maintainability, and consistency. Key suggestions include:

  • Replacing assert with proper exception handling for input validation.
  • Avoiding bare except clauses to prevent catching unintended exceptions.
  • Using named constants instead of magic numbers for better readability.
  • Adhering to PEP 8 naming conventions for constants.

A significant point to consider is that the new fallback functionality has been added to the PERT class but not to the ModifiedPERT class. This creates an inconsistency in the library's features, which might be unintentional. I recommend extending this functionality to ModifiedPERT as well for a more consistent user experience.


@ellipsis-dev ellipsis-dev Bot left a comment


Caution

Changes requested ❌

Reviewed everything up to 774bc77 in 1 minute and 51 seconds. Click for details.
  • Reviewed 112 lines of code in 3 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. betapert/funcs.py:41
  • Draft comment:
    Avoid using a bare 'except:' clause; catch Exception explicitly to prevent masking unexpected errors.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50%. This is a valid Python best practice suggestion. Bare except clauses can hide serious errors like KeyboardInterrupt or SystemExit. However, this is intentionally a last-resort fallback that needs to catch any possible numerical computation error to provide a simple linear interpolation result. The code's purpose is clear from the comments. The bare except could potentially mask serious errors that should bubble up. We might want to at least let system-level exceptions through. Given this is explicitly a final fallback for numerical computation, and the fallback operation is very simple and safe (linear interpolation), a bare except is actually appropriate here to ensure we always return some reasonable value. While generally good advice, in this specific case the bare except is an intentional design choice for a final fallback, so the comment should be removed.
2. betapert/funcs.py:20
  • Draft comment:
    Consider vectorizing the loop over q in _ppf_fallback_log_space for performance improvements if large arrays are expected.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50%. While vectorization could potentially improve performance, this is a fallback function that's only called when the main ppf implementation fails. The loop exists because it needs to handle complex error cases and multiple fallback options per element. Vectorizing this would make the error handling much more complex and potentially less reliable. The performance impact is likely minimal since this is not the primary code path. The comment identifies a legitimate performance optimization opportunity. The use of numpy suggests vectorization could be possible. However, this is an error-handling fallback path where reliability and clarity are more important than performance. The complexity cost of vectorization likely outweighs the benefits. The comment should be deleted because it suggests an optimization that would make the error-handling code more complex for minimal benefit in an exceptional code path.

Workflow ID: wflow_93OIzBMBNjf0B8GZ

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.


try:
    # Use brentq with bounds instead of fsolve for more stability
    x_normalized = scipy.optimize.brentq(log_cdf_eq, 1e-10, 1 - 1e-10)


Import 'scipy.optimize' is missing. Add 'import scipy.optimize' at the top to use brentq.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🔭 Outside diff range comments (1)
betapert/__init__.py (1)

75-129: Consider adding fallback support to ModifiedPERT for consistency.

The PERT class now supports a fallback parameter, but ModifiedPERT doesn't. Since the underlying funcs.ppf already accepts a fallback parameter, consider adding the same functionality to ModifiedPERT to maintain API consistency between the two distribution classes.

🧹 Nitpick comments (2)
betapert/funcs.py (2)

5-5: Fix typo in docstring.

-lambda) and implementsa specific statistical operation like pdf, cdf, etc.
+lambda) and implements a specific statistical operation like pdf, cdf, etc.

84-91: Add validation for fallback parameter.

The function could raise a KeyError if an invalid fallback value is passed. While validation exists in the PERT class, this function might be called directly.

 def ppf(q, mini, mode, maxi, lambd=4, *, fallback=None):
     alpha, beta = _calc_alpha_beta(mini, mode, maxi, lambd)
     _beta_ppf = mini + (maxi - mini) * scipy.stats.beta.ppf(q, alpha, beta)
     # Use fallback if any values are NaN
     if fallback is not None and np.isnan(_beta_ppf).any():
+        if fallback not in _ppf_fallbacks:
+            raise ValueError(f"Unknown fallback method: {fallback}")
         return _ppf_fallbacks[fallback](q, mini, mode, maxi, lambd)
     return _beta_ppf
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8f03cad and 774bc77.

📒 Files selected for processing (3)
  • betapert/__init__.py (3 hunks)
  • betapert/funcs.py (2 hunks)
  • pyproject.toml (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
betapert/__init__.py (1)
betapert/funcs.py (1)
  • ppf (84-90)


@ellipsis-dev ellipsis-dev Bot left a comment


Important

Looks good to me! 👍

Reviewed 3e2074d in 2 minutes and 7 seconds. Click for details.
  • Reviewed 364 lines of code in 3 files
  • Skipped 0 files when reviewing.
  • Skipped posting 3 draft comments. View those below.
1. betapert/funcs.py:41
  • Draft comment:
    If both brentq and beta.ppf fail in _ppf_fallback_log_space, the intended linear interpolation fallback (mini + q*(maxi-mini)) is never reached. Consider adding a nested try/except around the beta.ppf call to catch its failure and then fall back to linear interpolation as expected by the tests.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 30% vs. threshold = 50%. The code previously had a nested try/except that would fall back to linear interpolation if beta.ppf failed. This was deliberately removed in the diff. The comment is technically correct about the behavior change. However, we don't know if this fallback was actually needed or if it was removed intentionally. The tests are mentioned but we can't see them. Without more context about why the change was made, we can't be certain the comment is correct. I might be too hesitant - if tests are mentioned, maybe there really is a regression. The comment seems well-reasoned and specific. While the comment is detailed, we still lack crucial context about whether this change was intentional or if the tests actually require this behavior. The safe approach is to assume the author had a reason for the change. Without being able to verify the test requirements or understand why the fallback was removed, we should err on the side of trusting the author's changes and remove this comment.
2. betapert/__init__.py:127
  • Draft comment:
    Only the PERT class supports a fallback parameter while ModifiedPERT does not. Ensure this is intentional and, if so, consider documenting the discrepancy to avoid confusion.
  • Reason this comment was not posted:
    Confidence changes required: 50% <= threshold 50% None
3. betapert/funcs.py:87
  • Draft comment:
    The ppf() function uses a dictionary lookup on the 'fallback' parameter and will raise a KeyError if an invalid value is supplied. It may be beneficial to validate the fallback parameter and raise a more descriptive error before attempting the lookup for clarity of API usage.
  • Reason this comment was not posted:
    Confidence changes required: 50% <= threshold 50% None

Workflow ID: wflow_M9pWHEWsAWeEJOsv



@ellipsis-dev ellipsis-dev Bot left a comment


Caution

Changes requested ❌

Reviewed 226c28e in 2 minutes and 35 seconds. Click for details.
  • Reviewed 378 lines of code in 5 files
  • Skipped 0 files when reviewing.
  • Skipped posting 4 draft comments. View those below.
1. .github/workflows/pytest-poetry.yml:31
  • Draft comment:
    Consider adding a newline at the end of the file. The diff shows 'No newline at end of file', which is a minor formatting issue.
  • Reason this comment was not posted:
    Comment was on unchanged code.
2. betapert/funcs.py:93
  • Draft comment:
    Add fallback type validation in the ppf() function: before using the fallback, verify that it is a valid key in _ppf_fallbacks and raise a ValueError if not, so that tests expecting a ValueError for an invalid fallback pass.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. tests/test_ppf_log_fallback.py:226
  • Draft comment:
    The test for invalid fallback type expects a ValueError, so ensure that the fallback validation added in ppf() (or init) correctly triggers ValueError for unsupported fallback values.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. The comment is asking the author to ensure that a specific behavior (raising a ValueError) is correctly implemented in the code. This is against the rules as it asks the author to ensure behavior rather than making a specific suggestion or pointing out a specific issue.
4. .github/workflows/pytest-poetry.yml:28
  • Draft comment:
    Typographical consistency: the step name 'run black' starts with a lowercase letter whereas other step names are capitalized (e.g., 'Run ruff', 'Run tests'). Consider updating it to 'Run black' for consistency.
  • Reason this comment was not posted:
    Comment was on unchanged code.

Workflow ID: wflow_NUdo3rhOcaWFdi0D



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
.github/workflows/pytest-poetry.yml (1)

31-33: Good workflow optimization with missing newline.

The reordering to run tests after linting is logical and can save CI time by catching style issues first. However, the file is missing a newline at the end.

Add a newline at the end of the file to fix the YAMLlint warning:

       - name: Run tests
-        run: poetry run pytest
+        run: poetry run pytest
+
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3e2074d and 226c28e.

📒 Files selected for processing (5)
  • .github/workflows/pytest-poetry.yml (1 hunks)
  • betapert/__init__.py (3 hunks)
  • betapert/funcs.py (2 hunks)
  • pyproject.toml (2 hunks)
  • tests/test_ppf_log_fallback.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • pyproject.toml
  • betapert/__init__.py
  • betapert/funcs.py
🧰 Additional context used
🧬 Code Graph Analysis (1)
tests/test_ppf_log_fallback.py (3)
betapert/funcs.py (4)
  • stats (166-173)
  • _ppf_fallback_log_space (22-52)
  • ppf (93-99)
  • cdf (83-85)
tests/conftest.py (1)
  • lambd (27-28)
betapert/__init__.py (1)
  • PERT (13-79)
🪛 YAMLlint (1.37.1)
.github/workflows/pytest-poetry.yml

[error] 33-33: no new line character at the end of file

(new-line-at-end-of-file)

🔇 Additional comments (18)
tests/test_ppf_log_fallback.py (18)

1-14: Well-structured test module with comprehensive imports.

The imports are appropriate and include all necessary dependencies for testing the log fallback functionality. The module structure follows good testing practices.


16-34: Excellent basic functionality test with proper assertions.

The test correctly validates the fundamental requirements of the log fallback:

  • Results within bounds
  • Finite values
  • Monotonicity preservation

The test design is solid and covers the essential behavioral requirements.


35-54: Good coverage of extreme probability edge cases.

The test appropriately handles very small (1e-10 to 1e-20) and very large (1-1e-10 to 1-1e-20) probability values, which are exactly the scenarios where numerical stability issues would occur and the log fallback would be most valuable.


55-66: Proper handling of boundary conditions.

Testing q=0 and q=1 edge cases is crucial since these can cause numerical issues in PPF calculations. The test correctly validates that the fallback handles these gracefully.


67-77: Good scalar input validation.

The test ensures that scalar inputs return scalar outputs, which is important for API consistency. The type checking with isinstance(result, (int, float, np.number)) is appropriate.


78-95: Comprehensive parameter combination testing.

The test covers a good range of parameter combinations including negative values, large values, and small decimal ranges. This helps ensure the fallback works across different PERT distribution configurations.


96-110: Effective integration test with PERT class.

This test validates that the fallback mechanism integrates properly with the PERT class when fallback="log" is specified. It ensures the end-to-end functionality works as expected.


111-126: Excellent mock testing for NaN fallback trigger.

The mock implementation correctly simulates the scenario where scipy.stats.beta.ppf returns NaN, which would trigger the fallback mechanism. The test validates that the fallback produces finite, bounded results.


127-143: Strong consistency validation with CDF inverse relationship.

This test is particularly valuable as it validates the mathematical correctness of the fallback by ensuring that cdf(ppf(q)) ≈ q. The tolerance of rtol=1e-6 is appropriate for numerical computations.
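
The round-trip property is cheap to check directly against `scipy.stats.beta` (the shape parameters below are arbitrary, not the PERT-derived ones):

```python
import numpy as np
import scipy.stats

alpha, beta_param = 2.0, 5.0  # arbitrary illustrative shape parameters
q = np.array([1e-6, 0.25, 0.5, 0.75, 1 - 1e-6])

# Round trip: cdf(ppf(q)) should recover q to tight tolerance.
x = scipy.stats.beta.ppf(q, alpha, beta_param)
q_back = scipy.stats.beta.cdf(x, alpha, beta_param)

print(np.allclose(q_back, q, rtol=1e-6))  # → True
```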


144-174: Thorough multi-level fallback testing with proper cleanup.

This test effectively validates the fallback hierarchy by mocking both scipy.optimize.brentq and scipy.stats.beta.ppf to fail. The test correctly expects linear interpolation as the final fallback. The proper cleanup in the finally block is good practice.


175-189: Good validation of non-fallback behavior.

This test ensures that when no fallback is specified, NaN values are returned as expected. This validates the opt-in nature of the fallback mechanism.


190-209: Comprehensive array shape testing.

The test validates that the fallback preserves array shapes for both 1D and 2D inputs, which is crucial for maintaining NumPy array broadcasting behavior.


210-221: Important monotonicity preservation test.

Monotonicity is a critical property of PPF functions. This test ensures the fallback maintains this mathematical property across a range of probability values.


222-228: Appropriate error handling test.

The test correctly validates that invalid fallback types raise ValueError, ensuring proper API validation.


229-248: Valuable numerical stability testing.

Testing with extreme parameter ranges (very small: 1e-10 to 1e-9, very large: 1e6 to 1e10) helps ensure the fallback mechanism remains stable under challenging numerical conditions.


249-271: Good multi-dimensional array support validation.

The test validates that the fallback correctly handles 2D and 3D input arrays, preserving their shapes and ensuring all elements remain within bounds.


272-295: Excellent scalar/array return type consistency testing.

This test ensures that scalar inputs return scalar outputs and single-element arrays also return scalars, which is important for API consistency and user experience.


296-310: Final validation of array return behavior.

The test confirms that multi-element arrays return arrays with preserved shapes, completing the comprehensive coverage of input/output type handling.


@ellipsis-dev ellipsis-dev Bot left a comment


Important

Looks good to me! 👍

Reviewed 2160dea in 26 seconds. Click for details.
  • Reviewed 11 lines of code in 1 files
  • Skipped 1 files when reviewing.
  • Skipped posting 1 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. .github/workflows/pytest-poetry.yml:33
  • Draft comment:
    Minor: A trailing newline was added at the end of the file for consistency and adherence to best practices.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None

Workflow ID: wflow_rDhBnNxY3NIMhH51

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.


@ellipsis-dev ellipsis-dev Bot left a comment


Caution

Changes requested ❌

Reviewed b3835b0 in 1 minute and 58 seconds. Click for details.
  • Reviewed 286 lines of code in 5 files
  • Skipped 0 files when reviewing.
  • Skipped posting 4 draft comments. View those below.
1. betapert/__init__.py:64
  • Draft comment:
    Using a global FALLBACK instead of an instance-level attribute removes per‐instance configurability. The previous design allowed setting fallback via the constructor; consider whether this change is intentional.
  • Reason this comment was not posted:
    Comment looked like it was already resolved.
2. betapert/__init__.py:12
  • Draft comment:
The `__init__` method that accepted a fallback parameter was removed. If per-instance fallback configuration is desired, update documentation or reconsider this design.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is informative and suggests updating documentation or reconsidering the design. It doesn't ask for confirmation or suggest a specific code change. It doesn't fit the criteria for a useful comment as per the rules.
3. betapert/funcs.py:29
  • Draft comment:
    Direct use of np.clip instead of a helper _clip improves clarity. Ensure that the clipping bounds (_CLIP_EPSILON, 1-_CLIP_EPSILON) are appropriate in all scenarios.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None
4. tests/test_ppf_log_fallback.py:257
  • Draft comment:
    When an invalid fallback type is provided, a KeyError is raised. Consider raising a more descriptive error (e.g. ValueError) that details the valid fallback options.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 0% vs. threshold = 50% The test is explicitly checking for KeyError, which suggests this is the intended behavior. The change in the test from ValueError to KeyError appears deliberate. KeyError is a reasonable choice if fallback types are implemented as a dictionary lookup. Without seeing the implementation, we can't be certain which error type is more appropriate. I don't have access to the implementation code to see how fallback types are handled. The KeyError could be a perfectly valid design choice if using a dictionary lookup pattern. Since this is a test file and the error type was explicitly changed to KeyError, we should respect that design decision without strong evidence it's wrong. The comment should be deleted. The KeyError appears to be an intentional design choice, and without seeing the implementation code, we can't justify suggesting a different error type.

Workflow ID: wflow_XIlGziuFS5u4SDKp

Comment thread: tests/test_ppf_log_fallback.py

@ellipsis-dev ellipsis-dev Bot left a comment


Important

Looks good to me! 👍

Reviewed 6a25960 in 2 minutes and 2 seconds. Click for details.
  • Reviewed 46 lines of code in 2 files
  • Skipped 0 files when reviewing.
  • Skipped posting 3 draft comments. View those below.
1. betapert/funcs.py:25
  • Draft comment:
    Good use of a closure (make_log_cdf_eq) to correctly bind the loop variable (qi). This avoids the common pitfall with late binding in inner functions.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, praising the use of a closure to bind a loop variable. It doesn't provide any actionable feedback or suggestions for improvement.
2. betapert/funcs.py:36
  • Draft comment:
    Using scipy.optimize.brentq with fixed clipping bounds (_BRENTQ_BOUND and 1 - _BRENTQ_BOUND) ensures numerical stability. Consider reviewing these bounds for edge cases across all valid parameters.
  • Reason this comment was not posted:
    Comment was on unchanged code.
3. tests/test_ppf_log_fallback.py:133
  • Draft comment:
    The updated docstring in 'test_ppf_fallback_not_triggered_when_not_configured' clarifies that fallback is not active by default. This improves test readability and intent.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, as it only describes what the updated docstring does without providing any actionable feedback or suggestions. It does not align with the rules for useful comments.
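The late-binding point raised in the first draft comment above is worth illustrating; make_log_cdf_eq is the helper named in the review, while the shape parameters here are arbitrary:

```python
import numpy as np
import scipy.stats

def make_log_cdf_eq(qi, a, b):
    # Passing qi as an argument binds its current value; a bare lambda
    # defined inside the loop would capture the variable, not the value,
    # and every equation would see the final qi.
    log_qi = np.log(qi)
    return lambda x: scipy.stats.beta.logcdf(x, a, b) - log_qi

eqs = [make_log_cdf_eq(qi, 2.0, 5.0) for qi in (0.1, 0.5, 0.9)]
```

Each returned equation is zero exactly where the PPF would place its probability, which is what makes it a valid target for a root finder.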

Workflow ID: wflow_fzcOv7W5tXjwx1lF


@ellipsis-dev ellipsis-dev Bot left a comment


Important

Looks good to me! 👍

Reviewed a18437c in 44 seconds. Click for details.
  • Reviewed 22 lines of code in 2 files
  • Skipped 1 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
1. .github/workflows/pytest-poetry.yml:33
  • Draft comment:
    Using '--random-order' helps catch order dependencies. Ensure all tests are truly order-independent to avoid flakiness.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
2. pyproject.toml:30
  • Draft comment:
    New 'pytest-random-order' dependency added. Verify that its version fits your testing needs and that it doesn't conflict with other dev packages.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None

Workflow ID: wflow_nQAjQRdJovFkR1KY


@ellipsis-dev ellipsis-dev Bot left a comment


Important

Looks good to me! 👍

Reviewed 32b0bae in 1 minute and 3 seconds. Click for details.
  • Reviewed 44 lines of code in 2 files
  • Skipped 1 files when reviewing.
  • Skipped posting 3 draft comments. View those below.
1. betapert/funcs.py:38
  • Draft comment:
    Good formatting improvements: using a multi-line call for brentq improves readability. The fallback via np.clip for extreme probabilities is clear. Consider documenting the choice of _BRENTQ_BOUND value and potential tolerance settings for edge cases.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None
2. pyproject.toml:4
  • Draft comment:
    Version bump to 0.1.7 and scipy dependency update to >=1.14.1 are in line with the PR's enhancements.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None
3. pyproject.toml:7

Workflow ID: wflow_7LmNTvIaSi3duYDN

@openhands-ai

openhands-ai Bot commented Jul 17, 2025

Looks like there are a few issues preventing this PR from being merged!

  • GitHub Actions are failing:
    • Pytest, ruff, and black

If you'd like me to help, just leave a comment, like

@OpenHands please fix the failing actions on PR #2

Feel free to include any additional details that might help me get this PR into a better state.


@ellipsis-dev ellipsis-dev Bot left a comment


Important

Looks good to me! 👍

Reviewed 6f9d3a5 in 36 seconds. Click for details.
  • Reviewed 12 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 1 draft comments. View those below.
1. betapert/funcs.py:28
  • Draft comment:
    Extra blank line added after computing log_qi improves readability; no changes necessary.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None

Workflow ID: wflow_brg33a6qCLNXNRio


@ellipsis-dev ellipsis-dev Bot left a comment


Important

Looks good to me! 👍

Reviewed 1eb871a in 2 minutes and 5 seconds. Click for details.
  • Reviewed 89 lines of code in 2 files
  • Skipped 0 files when reviewing.
  • Skipped posting 3 draft comments. View those below.
1. betapert/funcs.py:45
  • Draft comment:
    In the exception block, use 'results.flat[i]' instead of 'results[i]' for consistency with the flat-indexing in the try block. This ensures correct element assignment for multidimensional inputs.
  • Reason this comment was not posted:
    Comment was on unchanged code.
2. tests/test_ppf_log_fallback.py:322
  • Draft comment:
    The test now expects a single‐element array to return an ndarray (instead of a scalar). Ensure that this behavior is intentional and well‐documented, as scalar inputs (like 0.5) still return a scalar.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 40% <= threshold 50% The comment is asking the author to ensure that the behavior change is intentional and well-documented. This is against the rules as it asks for confirmation of intention and documentation. However, it does point out a specific change in behavior which could be useful for the author to consider. The comment could be rephrased to suggest a test or documentation update without asking for confirmation.
3. betapert/funcs.py:54
  • Draft comment:
    Typographical Error: The variable 'q' is used in the scalar check (np.isscalar(q)) but it is not defined in this function. It appears this might be a typo; perhaps you meant to check 'x_normalized' or another variable?
  • Reason this comment was not posted:
    Comment looked like it was already resolved.
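The flat-indexing point in the first draft comment is easy to demonstrate; the doubling below is a placeholder for the real per-element solve:

```python
import numpy as np

q = np.array([[0.1, 0.5], [0.9, 0.99]])
results = np.empty_like(q)
for i, qi in enumerate(q.flat):
    # results.flat[i] addresses the i-th element regardless of ndim;
    # results[i] would select whole rows of a 2-D array and go out of
    # bounds once i reaches the number of rows.
    results.flat[i] = qi * 2  # placeholder computation
```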

Workflow ID: wflow_OXs33F7XUeH8X83m

@hbmartin hbmartin merged commit 3db5815 into main Jul 17, 2025
4 checks passed
@hbmartin hbmartin deleted the log-fallback branch July 17, 2025 21:17