fix autoreparam because dims are no longer static #363

Merged (12 commits, Jul 27, 2024)

Changes from 9 commits
pymc_experimental/model/transforms/autoreparam.py (17 changes: 10 additions & 7 deletions)
@@ -1,3 +1,4 @@
+import logging
 from dataclasses import dataclass
 from functools import singledispatch
 from typing import Dict, List, Optional, Sequence, Tuple, Union
@@ -8,7 +9,6 @@
 import pytensor.tensor as pt
 import scipy.special
 from pymc.distributions import SymbolicRandomVariable
-from pymc.exceptions import NotConstantValueError
 from pymc.logprob.transforms import Transform
 from pymc.model.fgraph import (
     ModelDeterministic,
@@ -19,10 +19,13 @@
     model_from_fgraph,
     model_named,
 )
-from pymc.pytensorf import constant_fold, toposort_replace
+from pymc.pytensorf import toposort_replace
 from pytensor.graph.basic import Apply, Variable
+from pytensor.tensor.basic import infer_static_shape
 from pytensor.tensor.random.op import RandomVariable
 
+_log = logging.getLogger("pmx")
+
 
 @dataclass
 class VIP:
@@ -175,14 +178,14 @@ def vip_reparam_node(
     if not isinstance(node.op, RandomVariable | SymbolicRandomVariable):
         raise TypeError("Op should be RandomVariable type")
ricardoV94 (Member) commented on Jul 26, 2024:
Do you want to raise NotImplementedError for op.ndim_supp>0?

Not sure whether for those you would want one lambda per shape or per size; mentioning it because the two will always differ for multivariate RVs.

ferrine (Member, Author) replied:
It will not pass the dispatch anyway, so it is not needed.

ferrine (Member, Author) commented on Jul 26, 2024:
ndim_supp>0 will never be supported, I think, but I'm not sure; maybe it will, e.g. for Dirichlet there are some reparameterizations via Gamma: https://stats.stackexchange.com/questions/548620/reparameterization-trick-for-the-dirichlet-distribution

ferrine (Member, Author) commented on Jul 26, 2024:
> one lambda per shape or size

Per size, for sure, 100%.

I believe that if I work with a multivariate RV, one lambda per size (one per independent draw) will be the only way.
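As an editorial aside (not part of the PR), a minimal sketch of the shape-versus-size distinction the thread above refers to for multivariate RVs (ndim_supp > 0); the Dirichlet example and printed values are illustrative assumptions:

```python
# Sketch: for a multivariate RV the support dimension is part of the shape
# but not of the "size" (the number of independent draws).
import pymc as pm

d = pm.Dirichlet.dist(a=[1.0, 1.0, 1.0], shape=(4, 3))
print(d.owner.op.ndim_supp)  # 1: the last axis is the support dimension
print(d.type.shape)          # (4, 3): full shape of the draw
# The "size" here is (4,), so a per-size lambda would have shape (4,),
# while a per-shape lambda would have shape (4, 3).
```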

     rv = node.default_output()
-    try:
-        [rv_shape] = constant_fold([rv.shape])
-    except NotConstantValueError:
-        raise ValueError("Size should be static for autoreparametrization.")
+    rv_shape_t, _ = infer_static_shape(rv.shape)
+    rv_shape = pt.as_tensor(rv_shape_t).eval(mode="FAST_COMPILE")
+    lam_name = f"{name}::lam_logit__"
+    _log.debug(f"Creating {lam_name} with shape of {rv_shape}")
     logit_lam_ = pytensor.shared(
         np.zeros(rv_shape),
         shape=rv_shape,
-        name=f"{name}::lam_logit__",
+        name=lam_name,
     )
     logit_lam = model_named(logit_lam_, *dims)
     lam = pt.sigmoid(logit_lam)
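As an editorial aside (not part of the PR), a minimal sketch of what this swap changes, using an assumed toy graph: a shape that depends on a shared (mutable) dim length is not constant-foldable, but `infer_static_shape` plus an eager eval still recovers concrete sizes:

```python
# Sketch: a dim length held in a shared variable, the way mutable coords now
# enter the graph, cannot be constant-folded but can be evaluated eagerly.
import numpy as np
import pytensor
import pytensor.tensor as pt
from pymc.exceptions import NotConstantValueError
from pymc.pytensorf import constant_fold
from pytensor.tensor.basic import infer_static_shape

a_len = pytensor.shared(np.array(5, dtype="int64"), name="a_len")
x = pt.zeros((a_len, 3))  # first dim is only known at runtime

try:
    constant_fold([x.shape])          # the old code path
except NotConstantValueError:
    print("shape is not a compile-time constant")

shape_t, static_shape = infer_static_shape(x.shape)
print(static_shape)                   # roughly (None, 3): first entry unknown statically
rv_shape = pt.as_tensor(shape_t).eval(mode="FAST_COMPILE")
print(rv_shape)                       # [5 3]: concrete values after evaluation
```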
tests/model/transforms/test_autoreparam.py (9 changes: 5 additions & 4 deletions)
@@ -7,20 +7,21 @@
 
 @pytest.fixture
 def model_c():
-    with pm.Model() as mod:
+    # TODO: Restructure tests so they check one dist at a time
+    with pm.Model(coords=dict(a=range(5))) as mod:
         m = pm.Normal("m")
         s = pm.LogNormal("s")
-        pm.Normal("g", m, s, shape=5)
+        pm.Normal("g", m, s, dims="a")
         pm.Exponential("e", scale=s, shape=7)
     return mod
 
 
 @pytest.fixture
 def model_nc():
-    with pm.Model() as mod:
+    with pm.Model(coords=dict(a=range(5))) as mod:
         m = pm.Normal("m")
         s = pm.LogNormal("s")
-        pm.Deterministic("g", pm.Normal("z", shape=5) * s + m)
+        pm.Deterministic("g", pm.Normal("z", dims="a") * s + m)
         pm.Deterministic("e", pm.Exponential("z_e", 1, shape=7) * s)
     return mod
 
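As an editorial aside (not part of the PR), a hedged sketch of what these dims-based fixtures exercise end to end; the `vip_reparametrize` entry point and its return values are recalled from pymc_experimental at the time of this PR and should be treated as assumptions:

```python
# Sketch: a model whose variable "g" gets its shape from coords/dims (no
# static shape) run through the VIP reparametrization touched by this PR.
import pymc as pm
from pymc_experimental.model.transforms.autoreparam import vip_reparametrize

with pm.Model(coords=dict(a=range(5))) as model:
    m = pm.Normal("m")
    s = pm.LogNormal("s")
    pm.Normal("g", m, s, dims="a")

# Before this fix the transform raised because the dim-based shape was no
# longer a compile-time constant; now the shape is resolved by evaluation.
model_reparam, vip = vip_reparametrize(model, ["g"])
```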