Commit c219d3d

Merge pull request #17 from BabaSanfour/fix/core-updates

Bugfix: ZapLine sampling rate mismatch, returning patterns for plotting & Changelog setup

2 parents: db23c8f + a2fed1d

15 files changed: 586 additions & 91 deletions

Lines changed: 59 additions & 0 deletions
@@ -0,0 +1,59 @@
```python
#!/usr/bin/env python3

# Authors: The MNE-Python contributors.
# License: BSD-3-Clause
# Copyright the MNE-Python contributors.
# Copied from mne-python:
# https://github.com/mne-tools/mne-python/blob/main/.github/actions/rename_towncrier/rename_towncrier.py

import json
import os
import re
import subprocess
import sys
from pathlib import Path

from github import Github
from tomllib import loads

event_name = os.getenv("GITHUB_EVENT_NAME", "pull_request")
if not event_name.startswith("pull_request"):
    print(f"No-op for {event_name}")
    sys.exit(0)
if "GITHUB_EVENT_PATH" in os.environ:
    with open(os.environ["GITHUB_EVENT_PATH"], encoding="utf-8") as fin:
        event = json.load(fin)
    pr_num = event["number"]
    basereponame = event["pull_request"]["base"]["repo"]["full_name"]
    real = True
else:  # local testing
    pr_num = 12318  # added some towncrier files
    basereponame = "mne-tools/mne-python"
    real = False

g = Github(os.environ.get("GITHUB_TOKEN"))
baserepo = g.get_repo(basereponame)

# Grab config from upstream's default branch
toml_cfg = loads(Path("pyproject.toml").read_text("utf-8"))

config = toml_cfg["tool"]["towncrier"]
pr = baserepo.get_pull(pr_num)
modified_files = [f.filename for f in pr.get_files()]

# Get types from config
types = [ent["directory"] for ent in toml_cfg["tool"]["towncrier"]["type"]]
type_pipe = "|".join(types)

# Get files that potentially match the types
directory = toml_cfg["tool"]["towncrier"]["directory"]
assert directory.endswith("/"), directory

file_re = re.compile(rf"^{directory}({type_pipe})\.rst$")
found_stubs = [f for f in modified_files if file_re.match(f)]
for stub in found_stubs:
    fro = stub
    to = file_re.sub(rf"{directory}{pr_num}.\1.rst", fro)
    print(f"Renaming {fro} to {to}")
    if real:
        subprocess.check_call(["mv", fro, to])
```
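The stub-matching and renaming step above is easy to exercise without the GitHub API. A standalone sketch (the directory, type list, PR number, and file list below are illustrative, hard-coded instead of being read from `pyproject.toml` and the pull request):

```python
import re

# Illustrative values; the real script reads these from pyproject.toml
# and from the pull request via the GitHub API.
directory = "docs/changes/devel/"
types = ["feature", "bugfix", "doc", "removal", "misc"]
pr_num = 17
modified_files = ["docs/changes/devel/bugfix.rst", "README.md"]

# Same regex construction as the script: anchored path + change type
file_re = re.compile(rf"^{directory}({'|'.join(types)})\.rst$")

found_stubs = [f for f in modified_files if file_re.match(f)]
renamed = [file_re.sub(rf"{directory}{pr_num}.\1.rst", f) for f in found_stubs]
print(renamed)  # ['docs/changes/devel/17.bugfix.rst']
```

Only files whose full path matches `<directory><type>.rst` are touched; everything else in the PR is ignored.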

CONTRIBUTING.md

Lines changed: 10 additions & 0 deletions
````diff
@@ -129,6 +129,16 @@ git fetch upstream
 git rebase upstream/main
 ```
 
+### 5. Add Changelog Entry
+
+We use [towncrier](https://towncrier.readthedocs.io/) to manage our changelog. This prevents merge conflicts and ensures standardized release notes.
+
+When you create a Pull Request, please add a changelog entry file in `docs/changes/devel/`. The file name should be the change type (e.g., `feature.rst`, `bugfix.rst`).
+
+For detailed instructions and available types, see [docs/changes/README.md](https://github.com/mne-tools/mne-denoise/blob/main/docs/changes/README.md).
+
+**Author Attribution**: We encourage contributors to include their name in the changelog entry if they wish to be highlighted. In Markdown, you can link to your GitHub profile (e.g., `... (by [@YourUser](...))`).
+
 ## Code Style
 
 We use **Ruff** for linting and formatting, configured to follow PEP 8 with NumPy docstring conventions.
````

docs/changes/README.md

Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@

````markdown
# Changelog Guide

We use `towncrier` to manage our changelog. This ensures that changes are documented as they happen, preventing merge conflicts in the changelog file and ensuring high-quality release notes.

## Adding a Changelog Entry

When you make a change (feature, bugfix, documentation update), you should add a fragment file to the `docs/changes/devel/` directory.

The filename should be the type of change with the extension `.rst`. The PR number will be added automatically.

Format: `<TYPE>.rst`

### Available types:

* `feature`: New feature.
* `bugfix`: Bug fix.
* `doc`: Documentation improvement.
* `removal`: Deprecation or removal of a feature.
* `misc`: Internal changes, tooling, etc.

## Example

If you fixed a bug in a PR, create a file `docs/changes/devel/bugfix.rst`:

```rst
Fixed a bug where the ZapLine algorithm would crash on empty data.
```

## Building the Changelog

To preview the changelog (requires `towncrier`):

```bash
towncrier build --draft
```
````

docs/changes/devel/10.feature.rst

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@

```rst
**DSS Refactor**:
- Consolidated DSS implementation from `dev-benchmarch` branch, including:
  - 20+ denoiser classes in `mne_denoise.dss.denoisers`.
  - Updated `mne_denoise.utils` for consistent MNE object handling.
  - Visualization updates.
```

docs/changes/devel/17.bugfix.rst

Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@

```rst
**ZapLine**:
- Fixed a bug in `ZapLine` adaptive mode where sampling rate mismatch caused incorrect frequency detection and potential crashes (Issue #16).
```

docs/changes/template.jinja

Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@

```jinja
{% for section, _ in sections.items() %}
{% if section %}
### {{ section }}

{% endif %}
{% for category, val in definitions.items() if category in sections[section] %}
#### {{ definitions[category]['name'] }}

{% for text, values in sections[section][category].items() %}
- {{ text }} ({{ values|join(', ') }})
{% endfor %}

{% endfor %}
{% endfor %}
```

docs/conf.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -31,7 +31,7 @@
 ]
 
 templates_path = ["_templates"]
-exclude_patterns: list[str] = ["_build", "Thumbs.db", ".DS_Store"]
+exclude_patterns: list[str] = ["_build", "Thumbs.db", ".DS_Store", "changes"]
 
 autosummary_generate = True
 napoleon_google_docstring = False
```

mne_denoise/dss/denoisers/spectral.py

Lines changed: 29 additions & 33 deletions
```diff
@@ -223,60 +223,56 @@ def _apply_fft(self, data: np.ndarray) -> np.ndarray:
         raise ValueError(f"Data must be 2D or 3D, got {data.ndim}D")
 
     def _get_target_indices(self, nfft: int) -> list:
-        """Get FFT bin indices for target frequencies."""
-        freq_bins = np.fft.fftfreq(nfft, 1 / self.sfreq)
+        """Get FFT bin indices for target frequencies.
+
+        Selects exactly one bin per harmonic (no neighbor padding).
+        Negative-frequency conjugates are included automatically for
+        real-valued IFFT reconstruction.
+        """
         target_indices = []
 
         for f in self._harmonic_freqs:
-            idx = np.argmin(np.abs(freq_bins - f))
-            if idx not in target_indices:
+            # Positive-frequency bin: round(f / sfreq * nfft)
+            idx = int(round(f / self.sfreq * nfft))
+            if 0 <= idx < nfft and idx not in target_indices:
                 target_indices.append(idx)
-            # Negative frequency
-            idx_neg = np.argmin(np.abs(freq_bins + f))
-            if idx_neg not in target_indices:
+
+            # Negative-frequency (conjugate symmetric) bin
+            idx_neg = nfft - idx
+            if 0 <= idx_neg < nfft and idx_neg not in target_indices:
                 target_indices.append(idx_neg)
 
         return target_indices
 
     def _apply_fft_2d(self, data: np.ndarray) -> np.ndarray:
-        """Apply bias to 2D data using FFT."""
+        """Apply bias to 2D data using FFT.
+
+        Process the data in non-overlapping rectangular blocks of length
+        *nfft* (no windowing, no overlap-add). Short trailing blocks are
+        zero-padded to *nfft* and the output is truncated to the true block
+        length.
+        """
         n_channels, n_times = data.shape
 
         # Use data length or nfft, whichever is smaller
         actual_nfft = min(self.nfft, n_times)
         target_indices = self._get_target_indices(actual_nfft)
 
-        # If data is shorter than nfft, process as single block
-        if n_times <= actual_nfft:
-            X = fft(data, n=actual_nfft, axis=1)
-            X_bias = np.zeros_like(X)
-            for idx in target_indices:
-                X_bias[:, idx] = X[:, idx]
-            biased = np.real(ifft(X_bias, axis=1))[:, :n_times]
-            return biased
-
-        # Welch-style block processing
-        step = int(actual_nfft * (1 - self.overlap))
-        step = max(step, 1)
-
         biased = np.zeros_like(data)
-        counts = np.zeros(n_times)
+        pos = 0
 
-        for start in range(0, n_times - actual_nfft + 1, step):
-            end = start + actual_nfft
-            segment = data[:, start:end]
+        while pos < n_times:
+            end = min(pos + actual_nfft, n_times)
+            block_len = end - pos
 
-            X = fft(segment, axis=1)
+            # FFT (zero-pads short blocks automatically)
+            X = fft(data[:, pos:end], n=actual_nfft, axis=1)
             X_bias = np.zeros_like(X)
             for idx in target_indices:
                 X_bias[:, idx] = X[:, idx]
+            y = np.real(ifft(X_bias, axis=1))
 
-            segment_biased = np.real(ifft(X_bias, axis=1))
-            biased[:, start:end] += segment_biased
-            counts[start:end] += 1
-
-        # Normalize by overlap counts
-        counts = np.maximum(counts, 1)
-        biased /= counts
+            biased[:, pos:end] = y[:, :block_len]
+            pos = end
 
         return biased
```
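The replacement bin arithmetic (`round(f / sfreq * nfft)` plus its conjugate `nfft - idx`) can be sanity-checked in plain Python; the sampling rate, FFT length, and harmonic frequencies below are illustrative:

```python
# Illustrative parameters: 50 Hz line noise and two harmonics.
sfreq, nfft = 500.0, 1000
harmonic_freqs = [50.0, 100.0, 150.0]

target_indices = []
for f in harmonic_freqs:
    # Positive-frequency bin; bin k corresponds to k * sfreq / nfft Hz
    idx = int(round(f / sfreq * nfft))
    if 0 <= idx < nfft and idx not in target_indices:
        target_indices.append(idx)
    # Conjugate-symmetric (negative-frequency) bin for a real-valued IFFT
    idx_neg = nfft - idx
    if 0 <= idx_neg < nfft and idx_neg not in target_indices:
        target_indices.append(idx_neg)

print(target_indices)  # [100, 900, 200, 800, 300, 700]
```

Bin 100 of a 1000-point FFT at 500 Hz is exactly 50 Hz, so for frequencies that land on a bin this agrees with the old `argmin`-over-`np.fft.fftfreq` search while avoiding building the frequency axis at all.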

mne_denoise/dss/denoisers/temporal.py

Lines changed: 19 additions & 6 deletions
```diff
@@ -177,21 +177,34 @@ def __init__(self, window: int = 10, iterations: int = 1) -> None:
         self.iterations = iterations
 
     def apply(self, data: np.ndarray) -> np.ndarray:
-        """Apply smoothing bias."""
-        from scipy.ndimage import uniform_filter1d
+        """Apply smoothing bias.
+
+        Uses a causal running-mean filter:
+        ``y[t] = mean(x[t-W+1 : t+1])`` for ``t >= W``, with an expanding
+        window for the first ``W`` samples. Repeated ``iterations`` passes
+        approximate a Gaussian kernel.
+        """
         orig_shape = data.shape
         if data.ndim == 3:
             data_2d = data.reshape(data.shape[0], -1)
         else:
             data_2d = data
 
+        W = int(self.window)
         smoothed = data_2d.copy()
+
         for _ in range(self.iterations):
-            # Use axis=-1 to support 1D (n_times) and 2D (n_ch, n_times)
-            smoothed = uniform_filter1d(
-                smoothed, size=self.window, axis=-1, mode="reflect"
-            )
+            mean_head = np.mean(smoothed[..., : W + 1], axis=-1, keepdims=True)
+            centered = smoothed - mean_head
+
+            # Causal running mean via cumulative sums
+            cs = np.cumsum(centered, axis=-1)
+            out = np.empty_like(centered)
+            # First W samples: expanding window
+            out[..., :W] = cs[..., :W] / np.arange(1, W + 1)
+            # Remaining samples: fixed-width causal window
+            out[..., W:] = (cs[..., W:] - cs[..., :-W]) / W
+            smoothed = out + mean_head
 
         if data.ndim == 3:
             return smoothed.reshape(orig_shape)
```
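The cumulative-sum formulation above computes the causal running mean in O(n) per pass. A plain-Python sketch of the same recurrence (hypothetical `causal_mean` helper; the patch additionally centers the data on the mean of the first `W + 1` samples to limit round-off, which this sketch omits):

```python
from itertools import accumulate

def causal_mean(x, W):
    """Causal running mean: expanding window for the first W samples,
    then the mean of the W samples ending at t (inclusive)."""
    cs = list(accumulate(x))  # cs[t] = x[0] + ... + x[t]
    head = [cs[t] / (t + 1) for t in range(min(W, len(x)))]
    tail = [(cs[t] - cs[t - W]) / W for t in range(W, len(x))]
    return head + tail

print(causal_mean([1.0, 2.0, 3.0, 4.0, 5.0], 3))  # [1.0, 1.5, 2.0, 3.0, 4.0]
```

Each output sample depends only on current and past inputs, unlike the previous `uniform_filter1d(..., mode="reflect")`, which applies a centered (zero-phase) window.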

mne_denoise/dss/linear.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -209,7 +209,7 @@ def compute_dss(
     # =========================================================================
     dss_filters = unmixing_matrix.T
 
-    # DSS patterns: for interpretation
+    # DSS patterns: L2-normalized for topographic visualization (Haufe et al. 2014)
    dss_patterns = covariance_baseline @ unmixing_matrix
     pattern_norms = np.sqrt(np.sum(dss_patterns**2, axis=0))
     pattern_norms = np.where(pattern_norms > 1e-15, pattern_norms, 1.0)
```
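The normalization here divides each pattern (column) by its L2 norm, substituting 1.0 for near-zero norms to avoid division by zero. A minimal plain-Python sketch of that column-wise step (matrix values illustrative):

```python
import math

# Illustrative 2 channels x 2 components; the second column is degenerate.
patterns = [[3.0, 0.0],
            [4.0, 0.0]]
n_ch, n_comp = len(patterns), len(patterns[0])

# Column-wise L2 norms, with the same near-zero guard as compute_dss
norms = [math.sqrt(sum(patterns[i][j] ** 2 for i in range(n_ch)))
         for j in range(n_comp)]
norms = [n if n > 1e-15 else 1.0 for n in norms]

normalized = [[patterns[i][j] / norms[j] for j in range(n_comp)]
              for i in range(n_ch)]
print(normalized)  # [[0.6, 0.0], [0.8, 0.0]]
```

Normalizing patterns this way keeps topographic maps comparable across components without changing their shape.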
