PyMC/PyTensor Implementation of Pathfinder VI #387
Conversation
`fit_pathfinder`
- Edited `fit_pathfinder` to produce `pathfinder_state`, `pathfinder_info`, `pathfinder_samples` and `pathfinder_idata` for closer examination of the outputs.
- Changed the `num_samples` argument name to `num_draws` to avoid a `TypeError: got multiple values for keyword argument 'num_samples'`.
- Initial points are automatically set to jitter, as jitter is required for Pathfinder.

Extras
- New function `get_jaxified_logp_ravel_inputs` to simplify the previous code structure in `fit_pathfinder`.

Tests
- Added an extra test for Pathfinder to check that the `pathfinder_info` variables and `pathfinder_idata` are consistent for a given random seed.
Add a new PyMC-based implementation of Pathfinder VI that uses PyTensor operations and supports both PyMC and BlackJAX backends in `fit_pathfinder`.
- Implemented in to support running multiple Pathfinder instances in parallel.
- Implemented function in for Pareto Smoothed Importance Resampling (PSIR).
- Moved relevant pathfinder files into the directory.
- Updated tests to reflect changes in the Pathfinder implementation and added tests for new functionalities.
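For reference, a minimal sketch of what a PSIR step can look like, assuming ArviZ's `psislw` for the Pareto smoothing (an illustration under stated assumptions, not the PR's actual implementation; the names `psir`, `samples`, and `logiw` are hypothetical):

```python
import numpy as np
import arviz as az

def psir(samples: np.ndarray, logiw: np.ndarray, num_draws: int, random_seed=None) -> np.ndarray:
    """Pareto Smoothed Importance Resampling: smooth the log importance
    weights, then resample draws with replacement according to them."""
    rng = np.random.default_rng(random_seed)
    smoothed_logiw, pareto_k = az.psislw(logiw)        # Pareto-smoothed log weights and k-hat
    p = np.exp(smoothed_logiw - smoothed_logiw.max())  # stabilize before normalizing
    p /= p.sum()
    idx = rng.choice(len(p), size=num_draws, replace=True, p=p)
    return samples[idx]
```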
This version runs much faster than #386, but the code is messier due to the numerous PyTensor symbolic variables created for the compiled PyTensor functions.
```python
g: np.ndarray


class LBFGSHistoryManager:
```
Cleaner to use a data class? Don't know.
Yep, I agree. A dataclass is now added.
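For illustration, a minimal sketch of the dataclass version (only `g: np.ndarray` appears in the diff above; the remaining fields and method are assumptions):

```python
from dataclasses import dataclass, field

import numpy as np

@dataclass
class LBFGSHistoryManager:
    # `g` appears in the diff above; everything else here is illustrative
    x: np.ndarray
    g: np.ndarray
    x_history: list[np.ndarray] = field(default_factory=list)
    g_history: list[np.ndarray] = field(default_factory=list)

    def add_entry(self, x: np.ndarray, g: np.ndarray) -> None:
        # store each LBFGS iterate so the trajectory can be replayed later
        self.x_history.append(x)
        self.g_history.append(g)
```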
Summary of changes:
- Removed multiprocessing code in favour of reusing the compiled function for each path
- takes only `random_seed` as argument for each path
- Compute graph significantly smaller by using pure PyTensor ops and symbolic variables
- Added LBFGSOp to compile with `pytensor.function`
- Cleaned up code using PyTensor variables
…and .
- Corrected the dimensions in comments for matrices Q and R in the function.
- Improved numerical stability in the calculation by changing from to .
pymc_experimental/inference/fit.py (outdated)
```
@@ -31,11 +31,13 @@ def fit(method, **kwargs):
    arviz.InferenceData
    """
    if method == "pathfinder":
        # TODO: Remove this once we have a pure PyMC implementation
```
This PR will provide that, no?
The latest commit addresses this.
Fixed incorrect and inconsistent posterior approximations in the Pathfinder VI algorithm by:
1. Adding missing parentheses in the phi calculation to ensure proper order of operations in matrix multiplications
2. Changing the sign in the mu calculation from 'x +' to 'x -' to match Stan's implementation (which differs from the original paper)
The resulting changes make the posterior approximations more reliable.
Implements both sparse and dense BFGS sampling approaches for Pathfinder VI:
- Adds `bfgs_sample_dense` for cases where `2 * maxcor >= num_params`.
- Moved existing and computations to `bfgs_sample_sparse`, making the sparse use cases more explicit.

Other changes:
- Sets default `maxcor=5` instead of dynamic sizing based on parameters.

Dense approximations are recommended when the target distribution has higher dependencies among the parameters.
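A hedged sketch of the dense/sparse dispatch this commit describes (the argument lists are assumptions, not the PR's exact signatures):

```python
# choose the dense approximation when the LBFGS history window covers the
# parameter space; otherwise fall back to the sparse low-rank form
if 2 * maxcor >= num_params:
    phi, logq = bfgs_sample_dense(x, g, alpha, beta, gamma, num_samples)
else:
    phi, logq = bfgs_sample_sparse(x, g, alpha, beta, gamma, num_samples)
```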
Bigger changes:
- Made `pmx.fit` compatible with `method='pathfinder'`
- Removed the JAX dependency when `inference_backend='pymc'` to support Windows users
- Improved runtime performance by setting `trust_input=True` for compiled functions

Minor changes:
- Changed default `num_paths` from 1 to 4 for stable and reliable approximations
- Changed LBFGS code to use dataclasses
- Updated tests to handle both PyMC and BlackJAX backends
- Add LBFGSInitFailed exception for failed LBFGS initialisation
- Skip failed paths in `multipath_pathfinder` and track the number of failures
- Handle NaN values from Cholesky decomposition in `bfgs_sample`
- Add checks for numerical stability in matrix operations

Slight performance improvements:
- Set `allow_gc=False` in scan ops
- Use FAST_RUN mode consistently
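One hedged way to get NaN-instead-of-raise behaviour from the Cholesky decomposition is PyTensor's `on_error="nan"` option (the variable names below are assumptions, not the PR's exact code):

```python
import pytensor.tensor as pt
from pytensor.tensor.slinalg import cholesky

# return NaNs rather than raising when the matrix is not positive definite,
# so failed draws can be detected and skipped downstream
chol = cholesky(cov, on_error="nan")
is_valid = pt.all(pt.isfinite(chol))  # flag for numerically unstable paths
```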
Major:
- Added progress bar support.

Minor:
- Added an exception for non-finite log prob values
- Removed .
- Allowed the `maxcor` argument to be None, dynamically set based on the number of model parameters.
- Improved logging to inform users about failed paths and LBFGS initialisation.
```python
class LBFGSOp(Op):
    __props__ = ("fn", "grad_fn", "maxcor", "maxiter", "ftol", "gtol", "maxls")

    def __init__(self, fn, grad_fn, maxcor, maxiter=1000, ftol=1e-5, gtol=1e-8, maxls=1000):
```
Add type hints throughout (and docstrings, ideally).
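For example, the `__init__` from the diff above could be annotated roughly like this (a sketch of the reviewer's suggestion, not the final code):

```python
from collections.abc import Callable

import numpy as np
from pytensor.graph import Op

class LBFGSOp(Op):
    def __init__(
        self,
        fn: Callable[[np.ndarray], np.floating],      # objective value at x
        grad_fn: Callable[[np.ndarray], np.ndarray],  # gradient at x
        maxcor: int,
        maxiter: int = 1000,
        ftol: float = 1e-5,
        gtol: float = 1e-8,
        maxls: int = 1000,
    ) -> None:
        self.fn = fn
        self.grad_fn = grad_fn
        self.maxcor = maxcor
        self.maxiter = maxiter
        self.ftol = ftol
        self.gtol = gtol
        self.maxls = maxls
```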
```python
    return idata


def alpha_recover(x, g, epsilon):
```
Incomplete docstring and missing type hints.
Done. Docstrings and type hints have been added.
Add something to the docs?
```python
value = self.fn(self.x0)
grad = self.grad_fn(self.x0)
```
You may want to consider a value_and_grad function, as it can avoid many repeated operations?
For my understanding, does "repeated operations" refer to the value being computed twice: once in `fn(x0)` and again as part of `grad_fn(x0)`? Or does "repeated operations" refer to something else?

I could change to `scipy.optimize.minimize(fun=logp_dlogp_fn, jac=True)`, where `logp_dlogp_fn` is:

First approach:
```python
logp_dlogp_fn = model.logp_dlogp_function(
    ravel_inputs=True,
    dtype="float64",
    mode=pytensor.compile.mode.Mode(linker="cvm_nogc"),
)
logp_dlogp_fn.set_extra_values({})  # without this, I'd get an error
logp_dlogp_fn._pytensor_function.trust_input = True
```
I would prefer this approach since I can toggle between `jacobian=True/False` if I need to:

Second approach:
```python
outputs, inputs = pm.pytensorf.join_nonshared_inputs(
    model.initial_point(),
    [model.logp(jacobian=jacobian), model.dlogp(jacobian=jacobian)],
    model.value_vars,
)
logp_dlogp_fn = compile_pymc(
    [inputs], outputs, mode=pytensor.compile.mode.Mode(linker="cvm_nogc")
)
logp_dlogp_fn.trust_input = True
```
How does the second approach look?
"Repeated operations" means operations that are shared between the logp and dlogp functions. The second approach is fine, except for the hardcoded mode; you should allow the user to pass `compile_kwargs` where they can define a custom Mode if they need to.
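A minimal sketch of that pass-through, assuming a helper named `get_logp_dlogp_fn` (the name is hypothetical) built on the second approach above:

```python
import pymc as pm
from pymc.pytensorf import compile_pymc

def get_logp_dlogp_fn(model, jacobian: bool = True, compile_kwargs: dict | None = None):
    compile_kwargs = compile_kwargs or {}
    outputs, inputs = pm.pytensorf.join_nonshared_inputs(
        model.initial_point(),
        [model.logp(jacobian=jacobian), model.dlogp(jacobian=jacobian)],
        model.value_vars,
    )
    # the user can supply mode=... (e.g. a custom Mode) via compile_kwargs
    fn = compile_pymc([inputs], outputs, **compile_kwargs)
    fn.trust_input = True
    return fn
```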
Ahh I see. In that case, I might have considered the repeated operations here: #387 (comment). I can make the changes if repeated operations weren't considered.
```python
    return phi, logQ_phi


class LogLike(Op):
```
Why is this an Op? How is it constructed? Why can't you work directly with PyTensor graphs?
`LogLike` is initialised using `logp_func`, which is a compiled function that we already have. `logp_func` cannot take in a PyTensor variable since it's already compiled, so it has to be a numpy array with ndim=1. I needed to vectorise `logp_func` so that it can take in an array with ndim=3. I wasn't sure how to make `logp_func` take in the symbolic input with batched dims, `phi` or `psi`, which was why I used an Op.

How would you make an already compiled function receive symbolic PyTensor variables as inputs?
You can use `pt.vectorize` or `pytensor.graph.replace.vectorize_graph`.
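For instance, the symbolic graph could be vectorized before compiling, avoiding the wrapper Op entirely (a sketch; `logp` and `inputs` are assumed to be the symbolic output and joined input from `join_nonshared_inputs`, as in the snippets above):

```python
import pytensor.tensor as pt
from pytensor.graph.replace import vectorize_graph

phi = pt.tensor3("phi")                                  # batched draws, ndim=3
logp_phi = vectorize_graph(logp, replace={inputs: phi})  # broadcast logp over the batch dims
# compile once after vectorizing, instead of calling a compiled fn inside an Op
```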
tests/test_pathfinder.py (outdated)
```python
np.testing.assert_allclose(idata.posterior["mu"].mean(), 5.0, atol=1.6)
np.testing.assert_allclose(idata.posterior["tau"].mean(), 4.15, atol=1.5)
```
These are quite big atols
They are. I think it's due to the Pathfinder algorithm. I have compared it to Stan's multipath Pathfinder (teal), and the mu appears about -2 away from the reference posterior (red), which is taken from posteriordb.

The image below shows PyMC's Pathfinder and the NUTS (blue) estimate. The default multipath Pathfinder (orange) agrees with Stan's default multipath Pathfinder (teal) above. Getting closer to the reference or NUTS posterior requires a different setting for jitter and num_paths.

As for tau, the original reference value was 4.15. I just had a look at what PyMC NUTS returns, and it's somewhere around 3.5, whereas estimates from PyMC Pathfinder are around 3.0. I guess I can set it to `np.testing.assert_allclose(idata.posterior["tau"].mean(), 3.5, atol=0.6)`.
```python
assert beta.eval().shape == (L, N, 2 * J)
assert gamma.eval().shape == (L, 2 * J, 2 * J)
assert phi.eval().shape == (L, num_samples, N)
assert logq.eval().shape == (L, num_samples)
```
Can you test something more than the shapes?
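One hedged possibility beyond shapes would be checking the draws against a target with known moments (the standard-normal target and tolerance here are assumptions for illustration):

```python
# with a standard-normal test target, each path's draws should roughly
# recover the target mean, and the log-densities should all be finite
phi_draws = phi.eval()  # shape (L, num_samples, N)
assert np.isfinite(phi_draws).all()
np.testing.assert_allclose(phi_draws.mean(axis=1), np.zeros((L, N)), atol=0.5)
assert np.isfinite(logq.eval()).all()
```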
```python
# setting jacobian = True, otherwise get very high values for pareto k.
outputs, inputs = pm.pytensorf.join_nonshared_inputs(
    model.initial_point(),
    [model.logp(jacobian=jacobian), model.dlogp(jacobian=jacobian)],
    model.value_vars,
)

logp_func = compile_pymc(
    [inputs], outputs[0], mode=pytensor.compile.mode.Mode(linker="cvm_nogc")
)
logp_func.trust_input = True

dlogp_func = compile_pymc(
    [inputs], outputs[1], mode=pytensor.compile.mode.Mode(linker="cvm_nogc")
)
dlogp_func.trust_input = True

return logp_func, dlogp_func
```
Relates to #387 (comment). Would the operations be shared between logp and dlogp here, considering `outputs` comes from both `model.logp` and `model.dlogp`? Or wouldn't they be, since the functions are compiled separately (one for logp, another for dlogp)?

This would help me determine whether I need to change the existing code to the one below:
```python
outputs, inputs = pm.pytensorf.join_nonshared_inputs(
    model.initial_point(),
    [model.logp(jacobian=jacobian), model.dlogp(jacobian=jacobian)],
    model.value_vars,
)
logp_dlogp_fn = compile_pymc([inputs], outputs)
logp_dlogp_fn.trust_input = True
```
You have to compile both outputs (logp and dlogp) together; then the compiled function will avoid repeated operations. If you compile different functions, it won't. Does that answer your question?
Yup, sure does. Thanks! The `logp` and `dlogp` are now a combined output of a single `pytensor.function`.
It would be useful to have a progress bar. At the moment we only get the number of parameters and maxcor, which is hard to translate into expected runtime.
Changes:
- Add rich table summary display for results
- Added PathStatus and LBFGSStatus for error handling, status tracking and displaying results
- Changed `importance_sampling` return type to ImportanceSamplingResult
- Changed `multipath_pathfinder` return type to MultiPathfinderResult
- Added dataclass containers for results (ImportanceSamplingResult, PathfinderResult, MultiPathfinderResult)
- Refactored LBFGS by removing PyTensor Op classes in favor of pure functions
- Added timing and configuration tracking
- Improved concurrency with better error handling
- Improved docstrings and type hints
- Simplified logp and gradient computation by combining into a single function
- Added `compile_kwargs` parameter for PyTensor compilation options
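A minimal sketch of what one of those result containers might look like (the field names are assumptions, not the PR's actual definition):

```python
from dataclasses import dataclass

import numpy as np

@dataclass(frozen=True)
class ImportanceSamplingResult:
    samples: np.ndarray            # resampled draws
    pareto_k: float | None = None  # k-hat diagnostic when PSIS/PSIR is used
    method: str = "psis"           # which importance-sampling variant produced this
```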
- Move pathfinder module from pymc_experimental to pymc_extras
- Update directory structure to match upstream repository
@fonnesbeck progress bar is available, which can be enabled by setting `progressbar=True`.
@ricardoV94, comprehensive tests are not yet done in commit baad3d9; updating the tests in pymc-extras/tests/test_pathfinder.py will be done shortly.
@aphc14 faster performance would be `progressbar=False`, no? My inclination would be to turn it on by default, to be consistent with all our other model fitting algos.
```python
num_draws: int,
method: Literal["psis", "psir", "identity", "none"] | None,
random_seed: int | None = None,
):
```
Add return typehint: `) -> ImportanceSamplingResult:`
Type hints are now added for all function outputs. Let me know if there are any more I've missed.
"This might indicate invalid probability weights or insufficient valid samples." | ||
) | ||
raise ValueError( | ||
"Importance sampling failed with both with and without replacement" |
suggestion: "... for both with and ..."
Done
```python
self.gtol = gtol
self.maxls = maxls

def minimise(self, x0):
```
As a Canadian, it pains me to say this, but it should probably be `minimize` to be consistent with other Python packages with optimization tools (scipy, numpy, scikit-learn).
Thanks for checking that! I forgot to control my typing reflexes. I have updated the spelling to follow US/Canada conventions for minimize, optimize, reparameterize, and maximize. Let me know if I've missed anything.
- Add proper type hints throughout the pathfinder module
- Improve error handling in concurrent execution paths
- Better handling of when all paths fail by displaying results before the assertion
- Changed Australian English spelling to US
- Update compile_pymc usage to handle deprecation warning
- Add tests for concurrent execution and seed reproducibility
- Clean up imports and remove redundant code
- Improve docstrings and error messages
```python
jitter: float = 2.0,
epsilon: float = 1e-8,
importance_sampling: Literal["psis", "psir", "identity", "none"] = "psis",
progressbar: bool = True,
```
> @aphc14 faster performance would be `progressbar=False`, no? My inclination would be to turn it on by default, to be consistent with all our other model fitting algos.

@fonnesbeck `progressbar` now defaults to `True`. Agreed that it should be `True` to keep it consistent.
LGTM
Looks good. The only other thing that occurs to me is that …
Test failures appear to be due to tests being conducted on Python 3.10. Not sure for how much longer we are going to support this version, but in the meantime you may need to refactor so as not to rely on …
Or use …
Another version of draft PR #386 that uses more of PyTensor's symbolic variables and compiled functions.
Questions for Review