Merged
Changes from 2 commits
10 changes: 10 additions & 0 deletions CONTRIBUTING.md
Original file line number Diff line number Diff line change
@@ -129,6 +129,16 @@ git fetch upstream
git rebase upstream/main
```

### 5. Add Changelog Entry

We use [towncrier](https://towncrier.readthedocs.io/) to manage our changelog. This prevents merge conflicts and ensures standardized release notes.

When you create a Pull Request, please add a changelog entry file in `docs/changes/devel/`. The file name should correspond to your PR number and change type (e.g., `42.feature.md`).

For detailed instructions and available types, see [docs/changes/README.md](https://github.com/mne-tools/mne-denoise/blob/main/docs/changes/README.md).

**Author Attribution**: We encourage contributors to include their name in the changelog entry if they wish to be highlighted. In Markdown, you can link to your GitHub profile (e.g., `... (by [@YourUser](...))`).

## Code Style

We use **Ruff** for linting and formatting, configured to follow PEP 8 with NumPy docstring conventions.
35 changes: 35 additions & 0 deletions docs/changes/README.md
@@ -0,0 +1,35 @@
# Changelog Guide

We use `towncrier` to manage our changelog. Documenting changes as they happen prevents merge conflicts in the changelog file and keeps release notes consistent and high quality.

## Adding a Changelog Entry

When you make a change (feature, bugfix, documentation update), you should add a fragment file to the `docs/changes/devel/` directory.

The filename should be your **PR number** (or Issue number if no PR yet) followed by the type of change and the extension `.md`.

Format: `<ISSUE_OR_PR_NUMBER>.<TYPE>.md`

### Available types:

* `feature`: New feature.
* `bugfix`: Bug fix.
* `doc`: Documentation improvement.
* `removal`: Deprecation or removal of a feature.
* `misc`: Internal changes, tooling, etc.

## Example

If you fixed a bug in PR #42, create a file `docs/changes/devel/42.bugfix.md`:

```markdown
Fixed a bug where the ZapLine algorithm would crash on empty data.
```

## Building the Changelog

To preview the changelog (requires `towncrier`):

```bash
towncrier build --draft
```
5 changes: 5 additions & 0 deletions docs/changes/devel/10.feature.md
@@ -0,0 +1,5 @@
**DSS Refactor**:
- Consolidated DSS implementation from `dev-benchmarch` branch, including:
- 20+ denoiser classes in `mne_denoise.dss.denoisers`.
- Updated `mne_denoise.utils` for consistent MNE object handling.
- Visualization updates.
2 changes: 2 additions & 0 deletions docs/changes/devel/16.bugfix.md
@@ -0,0 +1,2 @@
**ZapLine**:
- Fixed a bug in `ZapLine` adaptive mode where sampling rate mismatch caused incorrect frequency detection and potential crashes (Issue #16).
14 changes: 14 additions & 0 deletions docs/changes/template.jinja
@@ -0,0 +1,14 @@
{% for section, _ in sections.items() %}
{% if section %}
### {{ section }}

{% endif %}
{% for category, val in definitions.items() if category in sections[section] %}
#### {{ definitions[category]['name'] }}

{% for text, values in sections[section][category].items() %}
- {{ text }} ({{ values|join(', ') }})
{% endfor %}

{% endfor %}
{% endfor %}
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -31,7 +31,7 @@
]

templates_path = ["_templates"]
exclude_patterns: list[str] = ["_build", "Thumbs.db", ".DS_Store"]
exclude_patterns: list[str] = ["_build", "Thumbs.db", ".DS_Store", "changes"]

autosummary_generate = True
napoleon_google_docstring = False
62 changes: 29 additions & 33 deletions mne_denoise/dss/denoisers/spectral.py
@@ -223,60 +223,56 @@ def _apply_fft(self, data: np.ndarray) -> np.ndarray:
raise ValueError(f"Data must be 2D or 3D, got {data.ndim}D")

def _get_target_indices(self, nfft: int) -> list:
"""Get FFT bin indices for target frequencies."""
freq_bins = np.fft.fftfreq(nfft, 1 / self.sfreq)
"""Get FFT bin indices for target frequencies.

Selects exactly one bin per harmonic (no neighbor padding).
Negative-frequency conjugates are included automatically for
real-valued IFFT reconstruction.
"""
target_indices = []

for f in self._harmonic_freqs:
idx = np.argmin(np.abs(freq_bins - f))
if idx not in target_indices:
# Positive-frequency bin: round(f / sfreq * nfft)
idx = int(round(f / self.sfreq * nfft))
if 0 <= idx < nfft and idx not in target_indices:
target_indices.append(idx)
# Negative frequency
idx_neg = np.argmin(np.abs(freq_bins + f))
if idx_neg not in target_indices:

# Negative-frequency (conjugate symmetric) bin
idx_neg = nfft - idx
if 0 <= idx_neg < nfft and idx_neg not in target_indices:
target_indices.append(idx_neg)

return target_indices

def _apply_fft_2d(self, data: np.ndarray) -> np.ndarray:
"""Apply bias to 2D data using FFT."""
"""Apply bias to 2D data using FFT.

Process the data in non-overlapping rectangular blocks of length
*nfft* (no windowing, no overlap-add). Short trailing blocks are
zero-padded to *nfft* and the output is truncated to the true block
length.
"""
n_channels, n_times = data.shape

# Use data length or nfft, whichever is smaller
actual_nfft = min(self.nfft, n_times)
target_indices = self._get_target_indices(actual_nfft)

# If data is shorter than nfft, process as single block
if n_times <= actual_nfft:
X = fft(data, n=actual_nfft, axis=1)
X_bias = np.zeros_like(X)
for idx in target_indices:
X_bias[:, idx] = X[:, idx]
biased = np.real(ifft(X_bias, axis=1))[:, :n_times]
return biased

# Welch-style block processing
step = int(actual_nfft * (1 - self.overlap))
step = max(step, 1)

biased = np.zeros_like(data)
counts = np.zeros(n_times)
pos = 0

for start in range(0, n_times - actual_nfft + 1, step):
end = start + actual_nfft
segment = data[:, start:end]
while pos < n_times:
end = min(pos + actual_nfft, n_times)
block_len = end - pos

X = fft(segment, axis=1)
# FFT (zero-pads short blocks automatically)
X = fft(data[:, pos:end], n=actual_nfft, axis=1)
X_bias = np.zeros_like(X)
for idx in target_indices:
X_bias[:, idx] = X[:, idx]
y = np.real(ifft(X_bias, axis=1))

segment_biased = np.real(ifft(X_bias, axis=1))
biased[:, start:end] += segment_biased
counts[start:end] += 1

# Normalize by overlap counts
counts = np.maximum(counts, 1)
biased /= counts
biased[:, pos:end] = y[:, :block_len]
pos = end

return biased
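The revised `_get_target_indices` replaces the `argmin` search over `np.fft.fftfreq` with a direct index computation. A minimal sketch, with assumed `sfreq`/`nfft`/`f` values, showing that both approaches pick the same positive-frequency bin and that `nfft - idx` is its conjugate-symmetric partner:

```python
import numpy as np

# Assumed illustrative values; the actual sfreq/nfft come from the denoiser.
sfreq, nfft, f = 1000.0, 512, 60.0
freq_bins = np.fft.fftfreq(nfft, 1 / sfreq)

idx_direct = int(round(f / sfreq * nfft))            # new direct computation
idx_search = int(np.argmin(np.abs(freq_bins - f)))   # previous argmin search
print(idx_direct, idx_search)                        # 31 31

idx_neg = nfft - idx_direct                          # conjugate-symmetric bin
print(np.isclose(freq_bins[idx_neg], -freq_bins[idx_direct]))  # True
```

The two can disagree only at tie or Nyquist edge cases; for typical line frequencies well inside the band they select the same bin, and keeping both `idx` and `nfft - idx` ensures the `ifft` reconstruction stays real-valued.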
25 changes: 19 additions & 6 deletions mne_denoise/dss/denoisers/temporal.py
@@ -177,21 +177,34 @@ def __init__(self, window: int = 10, iterations: int = 1) -> None:
self.iterations = iterations

def apply(self, data: np.ndarray) -> np.ndarray:
"""Apply smoothing bias."""
from scipy.ndimage import uniform_filter1d
"""Apply smoothing bias.

Uses a causal running-mean filter:
``y[t] = mean(x[t-W+1 : t+1])`` for ``t >= W``, with an expanding
window for the first ``W`` samples. Repeated ``iterations`` passes
approximate a Gaussian kernel.
"""
orig_shape = data.shape
if data.ndim == 3:
data_2d = data.reshape(data.shape[0], -1)
else:
data_2d = data

W = int(self.window)
smoothed = data_2d.copy()

for _ in range(self.iterations):
# Use axis=-1 to support 1D (n_times) and 2D (n_ch, n_times)
smoothed = uniform_filter1d(
smoothed, size=self.window, axis=-1, mode="reflect"
)
mean_head = np.mean(smoothed[..., : W + 1], axis=-1, keepdims=True)
centered = smoothed - mean_head

# Causal running mean via cumulative sums
cs = np.cumsum(centered, axis=-1)
out = np.empty_like(centered)
# First W samples: expanding window
out[..., :W] = cs[..., :W] / np.arange(1, W + 1)
# Remaining samples: fixed-width causal window
out[..., W:] = (cs[..., W:] - cs[..., :-W]) / W
smoothed = out + mean_head

if data.ndim == 3:
return smoothed.reshape(orig_shape)
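The cumulative-sum trick above can be sketched in isolation (hypothetical helper name, toy input): the first `W` outputs use an expanding window, after which each output is the mean of the previous `W` samples.

```python
import numpy as np

def causal_running_mean(x, W):
    # y[t] = mean(x[max(0, t - W + 1) : t + 1]) via cumulative sums.
    cs = np.cumsum(x, axis=-1)
    out = np.empty_like(x, dtype=float)
    out[..., :W] = cs[..., :W] / np.arange(1, W + 1)   # expanding window
    out[..., W:] = (cs[..., W:] - cs[..., :-W]) / W    # fixed-width window
    return out

x = np.arange(6, dtype=float)        # [0, 1, 2, 3, 4, 5]
print(causal_running_mean(x, 3))     # [0.  0.5 1.  2.  3.  4.]
```

Unlike `scipy.ndimage.uniform_filter1d` with `mode="reflect"`, this filter is strictly causal: no sample after `t` contributes to `y[t]`, which is why the diff swaps the implementation.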
Expand Down
2 changes: 1 addition & 1 deletion mne_denoise/dss/linear.py
@@ -209,7 +209,7 @@ def compute_dss(
# =========================================================================
dss_filters = unmixing_matrix.T

# DSS patterns: for interpretation
# DSS patterns: L2-normalized for topographic visualization (Haufe et al. 2014)
dss_patterns = covariance_baseline @ unmixing_matrix
pattern_norms = np.sqrt(np.sum(dss_patterns**2, axis=0))
pattern_norms = np.where(pattern_norms > 1e-15, pattern_norms, 1.0)
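The column-wise L2 normalization used for the patterns can be sketched as follows (toy matrices; the `cov` and `unmixing` names are illustrative, not the module's API):

```python
import numpy as np

rng = np.random.default_rng(0)
cov = np.eye(3)                            # stand-in baseline covariance
unmixing = rng.standard_normal((3, 3))     # stand-in unmixing matrix

patterns = cov @ unmixing
norms = np.sqrt(np.sum(patterns**2, axis=0))
norms = np.where(norms > 1e-15, norms, 1.0)   # guard against zero columns
patterns /= norms
print(np.allclose(np.linalg.norm(patterns, axis=0), 1.0))  # True
```

Normalizing each pattern to unit length changes only the color scale of a topographic plot, not its shape, which is why it is safe for visualization.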
Expand Down
55 changes: 43 additions & 12 deletions mne_denoise/zapline/adaptive.py
@@ -341,8 +341,11 @@ def segment_data(
if len(dists) == 0:
return [(0, n_times)]

# 4. Detect peaks (boundaries)
peak_indices, _ = find_peaks(dists, prominence=np.std(dists) * 0.5)
    # 4. Detect peaks (boundaries), enforcing a minimum spacing between them.
min_distance = max(1, int(min_chunk_len * sfreq / n_win))
peak_indices, _ = find_peaks(
dists, prominence=np.std(dists) * 0.5, distance=min_distance
)
boundary_indices = (peak_indices + 1) * n_win

# 5. Enforce min length
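The effect of the new `distance` argument can be shown on a toy signal (illustrative values, not segment distances from real data): peaks closer than `distance` samples to a taller peak are suppressed.

```python
import numpy as np
from scipy.signal import find_peaks

dists = np.array([0.0, 1.0, 0.0, 0.9, 0.0, 0.0, 0.0, 1.2, 0.0])
peaks_all, _ = find_peaks(dists)
peaks_spaced, _ = find_peaks(dists, distance=3)
print(peaks_all)     # [1 3 7]
print(peaks_spaced)  # [1 7]: the peak at 3 is within 3 samples of the one at 1
```

This mirrors how `segment_data` avoids producing chunk boundaries closer together than `min_chunk_len` worth of windows.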
@@ -557,13 +560,23 @@ def detect_harmonics(
return detected


def check_spectral_qa(data: np.ndarray, sfreq: float, target_freq: float) -> str:
def check_spectral_qa(
data: np.ndarray,
sfreq: float,
target_freq: float,
max_prop_above: float = 0.005,
max_prop_below: float = 0.005,
freq_detect_mult: float = 2.0,
) -> str:
"""Assess quality of noise cleaning using spectral analysis.

Analyzes the power spectrum around the target frequency to determine
if cleaning was appropriate, too weak (residual noise), or too strong
(over-cleaning causing spectral notch).

    This implementation uses proportion-based thresholds rather than
    flagging on any single outlying frequency bin.

Parameters
----------
data : ndarray, shape (n_channels, n_times)
@@ -572,6 +585,14 @@ def check_spectral_qa(data: np.ndarray, sfreq: float, target_freq: float) -> str
Sampling frequency in Hz.
target_freq : float
Target noise frequency (Hz).
max_prop_above : float, default=0.005
Maximum proportion of frequency samples that may be above the upper
threshold before cleaning is considered too weak. Default is 0.5%.
max_prop_below : float, default=0.005
Maximum proportion of frequency samples that may be below the lower
threshold before cleaning is considered too strong. Default is 0.5%.
freq_detect_mult : float, default=2.0
        Multiplier applied to the deviation between the center power and the
        5th-percentile baseline when computing the upper and lower thresholds.
        Default is 2.

Returns
-------
@@ -602,28 +623,38 @@
right = win_psd[-n_third:]
center_power = np.mean(np.concatenate([left, right]))

# Use lower quantile as indicator of variability
q_left = np.percentile(left, 5)
q_right = np.percentile(right, 5)
deviation = center_power - np.mean([q_left, q_right])
mean_lower_quantile = np.mean([q_left, q_right])
deviation = center_power - mean_lower_quantile

# Threshold calculation
thresh_upper = center_power + freq_detect_mult * deviation
thresh_lower = center_power - freq_detect_mult * deviation

# Weak check
# Weak check - detailedFreqBoundsUpper = [-0.05, 0.05]
f_tight_low = target_freq - 0.05
f_tight_high = target_freq + 0.05
mask_tight = (freqs >= f_tight_low) & (freqs <= f_tight_high)
tight_psd = mean_log_psd[mask_tight]
thresh_weak = center_power + 2 * deviation

if np.any(tight_psd > thresh_weak):
return "weak"
# Proportion-based check
if len(tight_psd) > 0:
prop_above = np.sum(tight_psd > thresh_upper) / len(tight_psd)
if prop_above > max_prop_above:
return "weak"

# Strong check
# Strong check - detailedFreqBoundsLower = [-0.4, 0.1]
f_notch_low = target_freq - 0.4
f_notch_high = target_freq + 0.1
mask_notch = (freqs >= f_notch_low) & (freqs <= f_notch_high)
notch_psd = mean_log_psd[mask_notch]
thresh_strong = center_power - 2 * deviation

if np.any(notch_psd < thresh_strong):
return "strong"
# Proportion-based check
if len(notch_psd) > 0:
prop_below = np.sum(notch_psd < thresh_lower) / len(notch_psd)
if prop_below > max_prop_below:
return "strong"

return "ok"
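The proportion-based decision that replaces the old `np.any` checks can be sketched with toy log-PSD values (the thresholds below are illustrative; `max_prop_above=0.005` mirrors the function's default):

```python
import numpy as np

tight_psd = np.array([1.0, 1.1, 5.0, 1.2])   # toy log-power near target freq
thresh_upper = 3.0                           # illustrative upper threshold
max_prop_above = 0.005

prop_above = np.sum(tight_psd > thresh_upper) / len(tight_psd)
print(prop_above)                                         # 0.25
print("weak" if prop_above > max_prop_above else "ok")    # weak
```

A single stray bin above threshold still trips the 0.5% default when the tight band contains few bins, but the proportional form scales gracefully as spectral resolution increases.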