HDBSCAN validation paper notebook#274
Conversation
…sion tests A new per-user regression test exposed a real stop-table concat bug when some users had empty outputs.

This commit hardens empty stop-table construction by deriving exact output columns and explicit dtypes from shared helpers, then using those typed empties in stop-detection paths. It also applies reset_index(drop=True) after grouped stop summarization and adds passthrough guards to avoid duplicate user_id columns.

Per-user regression tests were cleaned up and made faster:
- compare labels directly by (user_id, timestamp)
- remove offset/parts-style expectation logic
- run on a 4-user sample
- parameterize n_jobs with 1 and 2

For now, this is prototyped in dbstop.py and sequential.py via the focused per-user regression path that originally surfaced the bug, with shared helper changes ready for wider consolidation.
Replace split empty-stop schema helpers (column names + dtype map) with one shared helper that directly returns a typed empty stop DataFrame. Update all active stop-detection summarization callsites (dbstop, dbscan, density_based, hdbscan, lachesis, sequential, grid_based) to use the unified helper, removing duplicated empty-frame construction logic.
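To illustrate the unified-helper idea, here is a minimal sketch of a typed empty stop DataFrame builder. The column names and dtypes below are illustrative stand-ins, not nomad's actual stop schema, and `empty_stop_frame` is a hypothetical name:

```python
# Sketch of a single shared helper that returns a typed empty stop table,
# so per-user concat keeps a stable schema even when some users have no stops.
# Columns/dtypes here are assumptions for illustration only.
import pandas as pd

STOP_SCHEMA = {
    "user_id": "object",
    "start_timestamp": "int64",
    "end_timestamp": "int64",
    "latitude": "float64",
    "longitude": "float64",
}

def empty_stop_frame(schema: dict = STOP_SCHEMA) -> pd.DataFrame:
    """Return an empty stop table with exact output columns and explicit dtypes."""
    return pd.DataFrame({col: pd.Series(dtype=dt) for col, dt in schema.items()})

# Users with no stops contribute a typed empty frame, so pd.concat does not
# silently upcast dtypes or reorder columns.
stops = pd.concat([empty_stop_frame(), empty_stop_frame()], ignore_index=True)
```

Because every empty comes from one helper, all seven callsites agree on schema by construction instead of by duplicated literals.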
|
We should merge #251 first once its tests pass, then merge main into this branch again. |
refreshed stop detection utilities and needed to resolve conflicts
This branch cleans up the HDBSCAN validation notebook, with some deeper
refactors to nomad's functions.
## Validation
The first part of the change improves the validation path around
`compute_visitation_errors`. It now lives with the rest of the
stop-detection validation logic in validation.py, and the overlap /
validation code can accept a separate traj_cols mapping for the
right-hand table when the predicted stops and the truth table do not use
the same column names. That let me remove a lot of notebook-side
transformations that were only there to work around that fragile code.
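A rough sketch of the two-mapping idea (this is not nomad's real API; the function, defaults, and mapping keys below are hypothetical): rename both tables to a canonical schema before computing overlap, with the right-hand table allowed its own column mapping.

```python
# Hypothetical illustration of accepting a separate traj_cols mapping for
# the right-hand table, so predicted stops and the truth diary can keep
# different column names. Names and defaults are assumptions.
import pandas as pd

DEFAULT_TRAJ_COLS = {"user": "user_id", "start": "start_timestamp"}

def align_tables(left, right, traj_cols=None, right_traj_cols=None):
    """Rename both tables to canonical keys; right falls back to left's mapping."""
    left_map = {**DEFAULT_TRAJ_COLS, **(traj_cols or {})}
    right_map = {**left_map, **(right_traj_cols or {})}
    # Invert each mapping: actual column name -> canonical key.
    left = left.rename(columns={v: k for k, v in left_map.items()})
    right = right.rename(columns={v: k for k, v in right_map.items()})
    return left, right

pred = pd.DataFrame({"user_id": ["a"], "start_timestamp": [0]})
truth = pd.DataFrame({"uid": ["a"], "t0": [0]})
left, right = align_tables(pred, truth, right_traj_cols={"user": "uid", "start": "t0"})
```

With the right-hand mapping handled inside the library, the notebook no longer needs to pre-rename the truth diary before validation.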
## Notebook
The notebook `hdbscan_validation_paper` is leaner. It no longer passes
default traj_cols mappings into loaders just to restate the defaults,
and it no longer drops diary rows with missing building IDs before
validation. The general metrics now use the full truth diary, while
category-specific slices happen naturally where the categories are
actually used. I also fixed the stale `start_timestamp` / `timestamp`
mismatch after the summarize-stop output switched to
`keep_col_names=True`, and cleaned up the generation path so regenerated
diaries keep `user_id`.
## Plotting
The plotting code also got reorganized. The notebook was mixing up two
different statistical objects: the per-user distribution of a metric,
and uncertainty in the median metric estimate. Those are now shown
separately. `validation.py` now provides a small bootstrap summary
helper plus two plotting helpers: one for per-user boxplots, and one for
bootstrapped median estimates with interval whiskers. The boxplots are
there to show the spread across users; the point-and-whisker plot is
there to compare the estimated medians. That split makes the
interpretation much clearer for this notebook.
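The bootstrap-summary helper can be sketched as follows. This is a minimal illustration of the technique, not validation.py's actual signature; `bootstrap_median`, the default resample count, and the seed are assumptions:

```python
# Sketch of a bootstrap summary for the median metric estimate: resample the
# per-user values with replacement, recompute the median each time, and take
# a percentile interval over the bootstrap medians. Names are illustrative.
import numpy as np

def bootstrap_median(values, n_boot=1000, alpha=0.05, seed=0):
    """Return (sample median, interval low, interval high) for the median."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    # Draw n_boot resamples at once, one median per row.
    medians = np.median(
        rng.choice(values, size=(n_boot, len(values)), replace=True), axis=1
    )
    lo, hi = np.quantile(medians, [alpha / 2, 1 - alpha / 2])
    return float(np.median(values)), float(lo), float(hi)

med, lo, hi = bootstrap_median([0.1, 0.2, 0.2, 0.3, 0.9])
```

The per-user boxplot would still show the raw `values` spread; the point-and-whisker plot would show `med` with `[lo, hi]` whiskers, keeping the two statistical objects visually distinct.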
For the grouped colors, the x-axis still uses the registry family labels
such as `lachesis_coarse` and `lachesis_fine`, but the colors are
grouped by the underlying base algorithm. That is piped through from the
registry as `{algo['family']: algo['algorithm']}`, so variants of the
same base method share a hue family without hardcoding the palette in
the notebook.
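The hue grouping reduces to a small dict pipeline. The registry entries and hex colors below are made-up examples for illustration; only the `{algo['family']: algo['algorithm']}` shape comes from the text above:

```python
# Sketch: x-axis keeps registry family labels, but colors are assigned per
# base algorithm, so variants of the same method share a hue family.
# Registry contents and colors here are assumptions for illustration.
registry = [
    {"family": "lachesis_coarse", "algorithm": "lachesis"},
    {"family": "lachesis_fine", "algorithm": "lachesis"},
    {"family": "hdbscan_default", "algorithm": "hdbscan"},
]

# The mapping piped through from the registry, as described above.
family_to_algo = {algo["family"]: algo["algorithm"] for algo in registry}

# One color per base algorithm, then fan it out to every family label.
base_colors = {"lachesis": "#1f77b4", "hdbscan": "#ff7f0e"}
palette = {fam: base_colors[alg] for fam, alg in family_to_algo.items()}
```

A plotting call can then take `palette` directly, so the notebook never hardcodes per-family colors.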
I ran the validation notebook end to end on the 250-agent dataset after
these changes. It completes successfully and writes figures that make
sense to me.
|
@carolineychen8, I forget whether all we wanted from this PR was to touch up this notebook and bring your branch and work up to date. If so, shall we merge it? |
|
We can't merge yet because we still have 3 failing tests related to hdbscan, and I don't know if they are new failures. However, we do know that hdbscan is about to change, so let's wait until we are debugging it. |