[Bug]: Inconsistent output dimensions #2740

Open · AdrianSosic opened this issue Feb 11, 2025 · 1 comment · May be fixed by #2743
Labels
bug Something isn't working

Comments

@AdrianSosic (Contributor)

What happened?

Not sure if this is intentional, but it seems like a bug to me. When all features are fixed in optimize_acqf, the returned acquisition value changes from a scalar (0-d) tensor to a 1-D tensor. I haven't checked what happens for the other optimize_* functions.

Of course, this is a degenerate case, since calling the optimization routine with zero degrees of freedom is sort of meaningless, but it can still be useful when invoking BoTorch procedurally from wrappers, as in my case.

Please provide a minimal, reproducible example of the unexpected behavior.

Here is a minimal example adapted from the landing page code:

import torch
from botorch.acquisition import LogExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.models.transforms import Normalize, Standardize
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

def run(fixed_features):

    train_X = torch.rand(10, 2, dtype=torch.double) * 2
    Y = 1 - torch.linalg.norm(train_X - 0.5, dim=-1, keepdim=True)
    Y = Y + 0.1 * torch.randn_like(Y)

    gp = SingleTaskGP(
        train_X=train_X,
        train_Y=Y,
        input_transform=Normalize(d=2),
        outcome_transform=Standardize(m=1),
    )
    mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
    fit_gpytorch_mll(mll)
    logEI = LogExpectedImprovement(model=gp, best_f=Y.max())
    bounds = torch.stack([torch.zeros(2), torch.ones(2)]).to(torch.double)
    candidate, acq_value = optimize_acqf(
        logEI,
        bounds=bounds,
        q=1,
        num_restarts=5,
        raw_samples=20,
        fixed_features=fixed_features,
    )
    print(acq_value)

run({0: 0})        # one of two features fixed -> 0-d acquisition value
run({0: 0, 1: 0})  # all features fixed -> 1-D acquisition value

Please paste any relevant traceback/logs produced by the example provided.

tensor(-4.7173, dtype=torch.float64)
tensor([-8.1727], dtype=torch.float64)
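
For reference, one way a wrapper could paper over the mismatch in the meantime is sketched below. This is a minimal illustration, not part of the BoTorch API, and scalar_acq_value is just a hypothetical helper name: it flattens the returned tensor and takes the first element, which yields a 0-d tensor in both of the cases above.

# Hypothetical helper (illustrative only, not a BoTorch API): normalize the
# acquisition value returned by optimize_acqf to a 0-d tensor in both cases.
def scalar_acq_value(acq_value: torch.Tensor) -> torch.Tensor:
    # reshape(-1) turns both a 0-d tensor and a 1-element 1-D tensor into
    # shape (1,); indexing then gives back a 0-d tensor.
    return acq_value.reshape(-1)[0]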

BoTorch Version

0.13.0

Python Version

3.10

Operating System

macOS

Code of Conduct

  • I agree to follow BoTorch's Code of Conduct
AdrianSosic added the bug label on Feb 11, 2025
@sdaulton (Contributor)

Thanks for pointing this out. Would you mind putting up a PR to make it consistent?
