
[BUG] - Duplicate key validation error on conda-store-server #1070

Open
viniciusdc opened this issue Jan 30, 2025 · 1 comment · May be fixed by #1071
Labels
type: bug 🐛 Something isn't working

Comments


Describe the bug

I tested the latest release as part of Nebari's upcoming integration release and found the following error message from the conda-store-server pod:

INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade bf065abf375b -> 89637f546129, remove conda package build channel
Traceback (most recent call last):
  File "/opt/conda/envs/conda-store-server/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1964, in _exec_single_context
    self.dialect.do_execute(
  File "/opt/conda/envs/conda-store-server/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 942, in do_execute
    cursor.execute(statement, parameters)
psycopg2.errors.UniqueViolation: could not create unique index "_conda_package_build_uc"
DETAIL:  Key (package_id, subdir, build, build_number, sha256)=(182273, linux-64, h924138e_3, 3, 76c7405bcf2af639971150f342550484efac18219c0203c5ee2e38b8956fe2a0) is duplicated.

full log attached

dev-nebari-conda-store-server-59448568dd-7mbn9-1738275853668991320.log
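The migration fails because `CREATE UNIQUE INDEX _conda_package_build_uc` finds rows that already collide on the `(package_id, subdir, build, build_number, sha256)` tuple. A standard way to surface the offending rows before (or after) the migration is a `GROUP BY ... HAVING COUNT(*) > 1` query. A minimal sketch of that check, assuming the table is named `conda_package_build` (names inferred from the error message; the real deployment runs PostgreSQL, and sqlite3 here is only a self-contained stand-in):

```python
import sqlite3

# In-memory stand-in for the conda-store database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE conda_package_build (
        id INTEGER PRIMARY KEY,
        package_id INTEGER, subdir TEXT, build TEXT,
        build_number INTEGER, sha256 TEXT
    )
""")

# Two rows that collide on the key the new unique index covers,
# plus one row that does not.
rows = [
    (182273, "linux-64", "h924138e_3", 3, "76c7405b..."),
    (182273, "linux-64", "h924138e_3", 3, "76c7405b..."),
    (99999, "noarch", "py_0", 0, "abcd..."),
]
conn.executemany(
    "INSERT INTO conda_package_build "
    "(package_id, subdir, build, build_number, sha256) VALUES (?,?,?,?,?)",
    rows,
)

# Key tuples that would violate _conda_package_build_uc.
dupes = conn.execute("""
    SELECT package_id, subdir, build, build_number, sha256, COUNT(*) AS n
    FROM conda_package_build
    GROUP BY package_id, subdir, build, build_number, sha256
    HAVING COUNT(*) > 1
""").fetchall()
print(dupes)  # -> one duplicated key tuple with n == 2
```

Running the equivalent query against the real Postgres instance would show which package builds need deduplicating before the index can be created.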

Since I was not entirely sure whether this was caused by my own workspace (i.e. Nebari's fault), I switched to this repo's docker-compose setup to recreate the same workspace I had on k8s. To my surprise, as soon as the service came online, it crashed with a similar error message, as seen in the screenshot below:

[Screenshot: duplicate key error on docker-compose startup]

Curiously, the issue then went away and the service started its usual self-checks, though the same can't be said for its Kubernetes counterpart, which enters a CrashLoopBackOff state.

[Screenshot: service running its usual self-checks]

Expected behavior

At least on Nebari's side, the expected behavior is pods running as usual.

How to Reproduce the problem?

  • From a local nebari (Kubernetes):

    • Deploy a local Nebari from pip install nebari with nebari init local -p mytest and then nebari deploy -c nebari-config.yaml;
    • After the deployment completes, upgrade with pip install git+https://github.com/nebari-dev/nebari.git@upgrade-conda-store and run the deploy command once again -- check pod status with k9s
  • From docker-compose:

    • Run docker-compose from main

Output

No response

Versions and dependencies used.

conda-store==2025.1.1

Anything else?

No response

@viniciusdc viniciusdc added the type: bug 🐛 Something isn't working label Jan 30, 2025
@viniciusdc
Copy link
Contributor Author

viniciusdc commented Jan 30, 2025

Observation: before upgrading to the Nebari RC (k8s only), I updated the DB by replacing the metadata_ value from {} to null for a few namespaces (namely admin, analyst, developer, global, nebari-git), since I was using that as a way to test the recent fixes.
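For clarity, the manual change described above amounts to an UPDATE flipping metadata_ from '{}' to NULL for the listed namespaces. A hypothetical sketch, where the table name `namespace` and column `metadata_` are assumptions (the actual schema lives in conda-store-server, and sqlite3 stands in for the real PostgreSQL database):

```python
import sqlite3

# Stand-in database with a namespace table holding a JSON metadata column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE namespace (name TEXT, metadata_ TEXT)")
conn.executemany(
    "INSERT INTO namespace VALUES (?, ?)",
    [("admin", "{}"), ("analyst", "{}"), ("global", "{}"), ("other", "{}")],
)

# The manual change: set metadata_ to NULL for the selected namespaces.
conn.execute(
    "UPDATE namespace SET metadata_ = NULL "
    "WHERE name IN ('admin', 'analyst', 'developer', 'global', 'nebari-git')"
)

changed = conn.execute(
    "SELECT name FROM namespace WHERE metadata_ IS NULL ORDER BY name"
).fetchall()
print(changed)  # namespaces whose metadata_ is now NULL; 'other' is untouched
```

Whether this manual edit interacts with the failing migration is an open question in this issue; the sketch only documents the shape of the change.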

@soapy1 soapy1 linked a pull request Jan 31, 2025 that will close this issue