[MRG] updated visualisation methods and unit handling #264
Conversation
Thanks @cjayb, I'll go ahead and merge once the CIs go green. I actually really like the smoothing for illustrative purposes. Might require a quick statement in the docstring like "MEG/EEG signals undergo low-pass filtering when passing through the skull. We can apply a smoothing to the simulated current dipole to better reflect experimentally recorded signals." Totally fine to wait to merge if you want to go ahead and add the smoothing.
yikes, Matti will disown me if he sees this sentence in hnn-core :) I believe the low-pass hypothesis has never been proven experimentally ... but I am happy to be proven wrong |
I also saw this sentence in the example:
but no call to "scale". Remove the sentence? |
I wonder if the |
Ah good to know, quick reading suggests that it may reflect how synchronous high frequency activity occurs with smaller volumes of cortical tissue. Point stills stands, if we want to include smoothing do you have a suggested explanation that won't offend too many people? Perhaps just "Several factors contribute to the appearance of low-pass filtered signal when recording neural signals outside the head. We can apply a smoothing to the simulated current dipole to better reflect experimentally recorded signals." |
So -- low-pass filtering the data will create a corner frequency whereas I have mostly seen the spectrum decay (what's usually called a 1/f but that's a bit of a misnomer too). I am also not exactly sure what the smoothing with Hamming window does. It low-pass filters but what's the corner frequency? Okay I have said enough ... floor is yours @cjayb to pitch opinions :) |
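The corner-frequency question can be probed numerically. A sketch, assuming the smoothing is a unit-gain Hamming-window moving average (a `_hammfilt`-style convolution); the function name and parameter values are illustrative, not hnn-core's API:

```python
import numpy as np
from scipy.signal import freqz

def hamming_corner_freq(winsz, sfreq):
    """Estimate the -3 dB ('corner') frequency of a unit-gain
    Hamming-window moving average of length winsz samples."""
    win = np.hamming(winsz)
    win /= win.sum()                        # unit gain at DC
    freqs, h = freqz(win, worN=8192, fs=sfreq)
    mag_db = 20 * np.log10(np.maximum(np.abs(h), 1e-12))
    return freqs[np.argmax(mag_db < -3.0)]  # first bin below -3 dB

# e.g. a 500-sample window on a 40 kHz dipole trace
fc = hamming_corner_freq(500, 40000.0)
print(f"-3 dB point: {fc:.1f} Hz")
```

Doubling the window length roughly halves the corner frequency, so the "cutoff" is implicit in the window size rather than specified directly.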
Codecov Report

```
@@            Coverage Diff             @@
##           master     #264      +/-   ##
==========================================
- Coverage   52.92%   52.83%   -0.10%
==========================================
  Files          23       24       +1
  Lines        2749     2843      +94
==========================================
+ Hits         1455     1502      +47
- Misses       1294     1341      +47
```

Continue to review full report at Codecov.
I think Matti and @jasmainak are vindicated: http://dx.doi.org/10.1523/ENEURO.0291-16.2016 :) Any real evoked or even rhythmic response is riding on top of that 1/f-like sea of, you know, that ninety-something% of "brain activity" we rarely focus on. So in essence we have hundreds of thousands of 'other' HNN networks active, creating a stochastic smoothing. Furthermore, one of our 200 pyramidal neuron-strong networks is probably only on the order of a percent of the total number of microcolumns creating the macroscopic signals we measure. And each of these is also doing other computations, involving other dynamics that interact and interfere. I see smoothing here as a way to 'simulate' the combined effects of 1) non-zero background and 2) hundreds or thousands more networks. I'll take a look at the smoothing code (agree it's currently a little ad hoc), and see if I can write some prose about the background (without drowning the example in too many details). @ntolley had a delightfully 'Swiss' proposal above, maybe best to dodge the issue like that :)
yes, agree with @cjayb 's explanation. Another point I can't resist making is that intracranial EEG signals look pretty smooth as well ... at least what I've seen so far from recent explorations. So the destructive/constructive interference between multiple HH networks seems to make more sense to me. I'm okay with either @ntolley 's swiss explanation or a more detailed explanation that is less wrong than the first explanation we had :) |
I like the returning of what's the benefit of EDIT: okay I see you can specify |
I guess it depends what you're trying to find. Welch will dampen any local effects but periodogram may even show artifacts.
sure. Okay I am deferring the review of the rest of the PR to @ntolley . Need to go underground for grant writing again :) |
hnn_core/dipole.py (Outdated)

```diff
@@ -318,3 +329,38 @@ def write(self, fname):
                          self.data['L5']]].T
         np.savetxt(fname, X, fmt=['%3.3f', '%5.4f', '%5.4f', '%5.4f'],
                    delimiter='\t')
+
+    # Savitzky-Golay filtering, lifted and adapted from mne-python (0.22)
+    def savgol_filter(self, h_freq):
```
you have to lift the test as well :) getting lazy? :P
I like the idea of a Savitzky–Golay filter, but I think this code should go in the `smooth()` method. I'm late to the party here so if this has already been discussed and resolved, my suggestion isn't a big deal.
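For reference, a sketch of how an `h_freq`-parameterised Savitzky–Golay smoother can be built on `scipy.signal.savgol_filter`. The window-length formula and `polyorder=5` follow mne-python 0.22; the function name and signature are illustrative, not hnn-core's final API:

```python
import numpy as np
from scipy.signal import savgol_filter

def savgol_smooth(data, sfreq, h_freq):
    """Savitzky-Golay smoothing with an approximate cutoff frequency.

    The window length is derived from h_freq as in mne-python 0.22;
    polyorder=5 is likewise mne's choice (both are assumptions here).
    """
    if h_freq >= sfreq / 2.:
        raise ValueError('h_freq must be less than half the sampling rate')
    window_length = (int(np.round(sfreq / h_freq)) // 2) * 2 + 1  # odd
    return savgol_filter(data, axis=-1, polyorder=5,
                         window_length=window_length)

# usage: smooth a noisy 40 kHz trace with an approximate 30 Hz cutoff
rng = np.random.RandomState(0)
times = np.arange(0, 0.3, 1 / 40000.)
sig = np.sin(2 * np.pi * 10 * times) + 0.1 * rng.randn(times.size)
smoothed = savgol_smooth(sig, 40000., 30.)
```

A nice property, compared to a plain moving average, is that the polynomial fit preserves peak shapes and amplitudes of features slower than the cutoff.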
@rythorpe @ntolley could you weigh in on these? I'd like to get your immediate reactions before moving forward (yes: even with tests ;)
Our windows are so short that there's hardly a case where we'd benefit from Welch, or is there? The Hamming window applied in the periodogram should take care of our simulation edge-artefacts, too! Though it might be prudent to let the |
Agree that this sounds reasonable as well.
Unit argument sounds good since a large amount of fitting waveforms to real data involves appropriate scaling. Perhaps we could rename I'll need to refresh my memory on these spectral methods but will comment on the proposed functions soon. |
Back from some light reading. Definitely supportive of the savgol filter. Specifying the frequency seems quite useful, but I am particularly a fan of the shape-preserving properties. Tough decision with Welch vs. periodogram. While this isn't the ideal example, the periodogram may prove more useful in practice. PSDs from this sort of experiment would typically be averaged over multiple trials. I believe this would have a similar benefit to Welch's method if applied to a single longer time series? To stay consistent with the lab's literature, I'll plan to put together an example for beta events that shows where averaged spectral methods can be misleading (after the 0.1 release).
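The trial-averaging idea can be sketched directly with `scipy.signal.periodogram` (the 40 Hz test signal and noisy 'trials' below are illustrative, not an actual HNN simulation):

```python
import numpy as np
from scipy.signal import periodogram

sfreq, n_trials = 1000., 20
rng = np.random.RandomState(42)
times = np.arange(0, 0.3, 1 / sfreq)   # 300 ms per trial

# one full-length periodogram per 'trial', then average across trials
psds = []
for _ in range(n_trials):
    trial = np.sin(2 * np.pi * 40 * times) + rng.randn(times.size)
    freqs, psd = periodogram(trial, fs=sfreq, window='hamming')
    psds.append(psd)
avg_psd = np.mean(psds, axis=0)
# variance of the estimate drops with n_trials, much like Welch's
# segment averaging, but the full 1/T frequency resolution is kept
```

This is the sense in which averaging periodograms over trials gives "a little of that sweet Welch feeling" without chopping an already-short window into even shorter segments.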
hnn_core/viz.py (Outdated)

```diff
-def plot_spectrogram(dpl, *, fmin, fmax, winlen=None, tmin=None, tmax=None,
-                     layer='agg', ax=None, show=True):
-    """Plot Welch spectrogram (power spectrum) of dipole time course
+def plot_periodogram(dpl, *, fmin, fmax, tmin=None, tmax=None, layer='agg',
```
What about `plot_psd()`, to be explicit that we're plotting spectral density in units of power rather than e.g. spectral mass in units of energy?
hnn_core/viz.py (Outdated)

```diff
@@ -110,7 +110,7 @@ def plot_dipole(dpl, tmin=None, tmax=None, ax=None, layer='agg', decim=None,

     ax.ticklabel_format(axis='both', scilimits=(-2, 3))
     ax.set_xlabel('Time (ms)')
-    ax.set_ylabel('Dipole moment')
+    ax.set_ylabel(f'Dipole moment ({dpl[0].units})')
```
```diff
-    ax.set_ylabel(f'Dipole moment ({dpl[0].units})')
+    ax.set_ylabel(f'Dipole moment (scaled {dpl[0].units})')
```
Regarding the discussion about how to handle units, what if we add 'scaled' in the y-label and then add a label+legend to the figure that specifies the scaling factor. Here's an example snippet of how I've handled it in some of my figures:
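The original snippet isn't reproduced above, but the label+legend idea might look roughly like this hedged sketch (the scaling factor, units, and trace below are all made-up stand-ins):

```python
import matplotlib
matplotlib.use('Agg')          # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

scaling_factor = 3000          # hypothetical value, e.g. read from a param file
times = np.linspace(0., 170., 500)
scaled_dpl = 50. * np.sin(times / 20.)   # stand-in for a scaled dipole trace

fig, ax = plt.subplots()
ax.plot(times, scaled_dpl, label=f'scaling factor: {scaling_factor}')
ax.set_xlabel('Time (ms)')
ax.set_ylabel('Dipole moment (scaled nAm)')  # 'scaled' flags a factor applies
ax.legend()                                  # legend carries the actual factor
```

The ylabel warns the reader that the axis is not in raw units, while the legend documents the exact factor used.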
might be a radical solution, but can we get rid of |
Are we OK with …? How about dropping Welch and re-implementing …? Next up: units scaling.
Before we go into the rabbit hole of implementing more sophisticated periodograms, let's ask ourselves ... do we really need this or is a TFR plot good enough for everything? Because that's what I have seen in HNN so far. I feel the efforts might be better spent in trying to copy the TFR code from MNE. It's also easier to interpret in HNN contexts where we can have transient oscillations. The periodogram that I had in the alpha oscillation was a lazy effort to prevent an MNE dependency while showing alpha for cheap with scipy |
Need to sleep on it a bit. You could consider the following alternatives:

```python
dipole.smooth_savgol(30)
dipole.smooth_hamming(500)
```

or

```python
dipole.smooth('savgol', 30)
dipole.smooth('hamming', 500)
```

also not entirely sure about forcing named arguments in functions with only a few arguments. Might be annoying for a new user who doesn't know Python that much.
Enables chaining, e.g.: `dpl.smooth(...).scale(...).plot(...)`
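A minimal illustration of the chaining pattern (a stand-in class, not hnn_core's actual `Dipole`): in-place methods return `self`, so calls read left to right while mutating one object.

```python
import numpy as np

class DipoleSketch:
    """Minimal stand-in showing why in-place methods return self."""

    def __init__(self, data):
        self.data = np.asarray(data, dtype=float)

    def smooth(self, winsz):
        win = np.hamming(winsz)
        self.data = np.convolve(self.data, win / win.sum(), mode='same')
        return self          # in-place, but chainable

    def scale(self, factor):
        self.data *= factor
        return self

dpl = DipoleSketch(np.ones(1000))
out = dpl.smooth(50).scale(3000.)   # reads left to right, mutates dpl
```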
Fair enough, but I do believe HNN has PSD plots too? The fundamental problem of too short data segments applies to either approach. With 300 ms of data it makes little sense even pretending to estimate power <10 Hz. The simulations we run are sampled at 40 kHz, but we know our aggregate dipoles are effectively silent above 200 Hz. I think it would be great to have a simple PSD routine available, such as the periodogram that could be used to average over trials to obtain a little of that sweet Welch feeling.
Agreed, but I wouldn't sell the simple spectrum quite that short: many non-engineers balk at TFRs, but can easily get behind a power spectrum.
I guess I see what you're trying to avoid (e.g. forced named arguments), but having to memorise different smoothing methods is also an overhead. I still like my proposal best, but can easily be swayed :) (P.S. Travis will have his pep8 with the next commit) |
Take a look at what happens in MNE. I don't think chaining is supposed to work with in-place operations, e.g., EDIT: I take it back. Looks like it's not how I was thinking :) |
We should separate style checks into a separate CI build so it fails fast ... |
hnn_core/parallel_backends.py (Outdated)

```diff
@@ -371,6 +376,7 @@ def simulate(self, net, n_trials, postproc=True):
         """

         # just use the joblib backend for a single core
+        # XXX this does not work: if n_procs==1, empty dipole list returned
```
something feels fishy here. I'd rather fix it here than in the example if it's easy enough to do?
When you say "I'd" do you really mean "you'd"? ;) I'm a bit stumped about that behaviour. I don't know enough about the context manager to understand how you can nest a `JoblibBackend().simulate` inside a `with MPIBackend...`?
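For what it's worth, the fallback being debated can be sketched abstractly. Everything below is hypothetical scaffolding based on the XXX comment, not hnn-core's real backends:

```python
# Hypothetical sketch: when only one process is requested, delegate to
# the serial (joblib-style) path instead of spawning MPI. All names
# here are illustrative, not the actual hnn-core API.

def _simulate_trial(trial_idx):
    return [0.1 * trial_idx]        # stand-in for one trial's dipole

class JoblibBackendSketch:
    def simulate(self, n_trials):
        return [_simulate_trial(i) for i in range(n_trials)]

class MPIBackendSketch:
    def __init__(self, n_procs=2):
        self.n_procs = n_procs

    def simulate(self, n_trials):
        if self.n_procs == 1:
            # the MPI path previously returned an empty dipole list
            # here; falling back to the serial backend avoids that
            return JoblibBackendSketch().simulate(n_trials)
        raise NotImplementedError('MPI spawning elided from this sketch')

dpls = MPIBackendSketch(n_procs=1).simulate(3)
```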
okay let me give this a shot after lunch
see db0555c
what would you do without my reviews? ;-)
oops I forgot to remove the XXX comment
anything to update in |
Did some work on making the visualisations more robust to different label widths and subplot combos (this is a great resource). I made all the implicit post-processing stuff private, so we can easily remove them in the next round. Behaviour remains GUI-like (smoothing and scaling read from param-file by default). Beginning to feel pretty done here... Edit: I'm not impressed by CodeCov, it doesn't seem to be updating and I can't figure out how to trigger it! |
```diff
@@ -100,7 +100,7 @@
 ###############################################################################
 # We can plot the firing pattern of individual cells by indexing with the gid
 gid = 170
-plt.figure(figsize=(4, 4))
+plt.figure(figsize=(4, 4), constrained_layout=True)
```
unfortunately this doesn't work for more complicated plots (like the one in `plot_simulate_gamma`). Without some extra effort, many of our plots get the ylabel cut out (see e.g. the firing rate demo in our master docs). The challenge is to find something that respects `sharex=True`, allows variable-width labels (1-2 lines), and understands colorbars... Any suggestions welcome! (`tight_layout` doesn't cut it)
I wonder if we need to worry about that warning though? I think `hnn-core` will be developing a lot faster than `matplotlib`, so I'm sure we'll come up with new plots faster than they can remove functionality from us ;)
can't we just get rid of this argument? Does it look too bad? The aim of the examples is not to make beautiful figures. People can always add `plt.tight_layout` to their scripts.
But the warnings don't look too bad. I am fine with letting it stay :)
Other than the MPI issue, the PR looks good to me. I'll give it a shot myself after lunch. Thanks @cjayb for the awesome PR as always!
Force-pushed from e1e0be3 to db0555c (compare)
.circleci/config.yml (Outdated)

```diff
@@ -26,7 +26,7 @@ jobs:
           command: |
             export PATH=~/miniconda/bin:$PATH
             conda update --yes --quiet conda
-            conda create -n testenv --yes pip python=3.6
+            conda create -n testenv --yes pip python=3.7
```
Are you certain this has nothing to do with this PR? One way to verify is to make a "blank PR" with a tiny doc change and see if you have the same CircleCI problem. If so, I can live with this. It's not good to create such a tight bleeding-edge Python dependency. Because Neuron doesn't work on 3.8, it means we have only one version of Python for which the MPI backend would work.
Switching back to 3.6, works on master (#272)
```python
dpls[-1]._baseline_renormalize(N_pyr_x, N_pyr_y)  # XXX cf. #270
dpls[-1]._convert_fAm_to_nAm()  # always applied, cf. #264
```
this is different from the definition of a "raw dipole" in the GUI? I am fine with this but perhaps GUI users should weigh in.
@kohl-carmen @rythorpe any opinions?
I'm not sure what a "raw dipole" is?
```python
        for key in self.data.keys():
            self.data[key] = _hammfilt(self.data[key], winsz)

        return self

    def savgol_filter(self, h_freq):
```
no test for this function?
Only implicitly (`test_viz.py:L36`). Eric implemented some actual tests on this in `mne`, but I decided against copying those, since (again) we don't have that much data to make FFTs of...
@cjayb I left a few more comments. But let me know if you're too saturated with this PR, we can move those discussions elsewhere |
```diff
@@ -63,7 +63,8 @@
 # ``openmpi``, which must be installed on the system
 from hnn_core import MPIBackend

-with MPIBackend(n_procs=2, mpi_cmd='mpiexec'):
+n_procs = 1
```
think we should revert this to `n_procs=2` as it was before? not sure if that was causing the CircleCI issue
Will do. In fact, #272 passed just fine with python=3.6, so I'll revert it here too. I'm hoping it's just been a fluke. I really don't see how this PR could have broken MPI...
All good merging. Thanks a lot @cjayb !! 🥳 |
Sweet! |
NB 81ff283 removes testing of `MPIBackend` due to CircleCI shitting kittens (see #215)

This PR updates methods for visualisation of `Dipole`:

- `scale`-operation (closes Units missing in dipole plots #243)
- `copy` method
- `test_viz.py` for testing visualisations
- `plot_tfr_morlet` (and `plot_psd`)

Closes #265: baseline normalisation left in to have parity with HNN GUI, this PR addresses units-issues.

To follow up on a discussion in #249 (@jasmainak @rythorpe): I think this example might benefit from smoothing the waveforms? I tried with `smooth(1000)`, which looks quite illustrative for both cases. We could leave it for 0.2 release, though. There's a lot we need to figure out regarding visualisation.