
Short name resolution invoked on temporary images in multi-stage build when image name composed from ARGs #4820

Open
chonton opened this issue May 27, 2023 · 8 comments · May be fixed by #4839
Labels
kind/bug Categorizes issue or PR as related to a bug. stale-issue

Comments


chonton commented May 27, 2023

Issue Description

version: 4.5.0

Steps to reproduce the issue


  1. Multistage Dockerfile with contents:
ARG ALPINE_BASE

# azul alpine
FROM ${ALPINE_BASE} AS base

FROM base AS base-arm64
ENV SUFFIX=aarch64

FROM base AS base-amd64
ENV SUFFIX=x64

FROM base-${TARGETARCH} AS repackage
ARG JAVA_VERSION
ARG AZUL_BUILD

ARG JAVA_MINIMAL=/opt/jre

ARG BINARY=zulu${AZUL_BUILD}-ca-jdk${JAVA_VERSION}-linux_musl_${SUFFIX}.tar.gz

ENV JAVA_HOME=/opt/zulu
ENV PATH="${PATH}:${JAVA_HOME}/bin"

# download azul openjdk
RUN mkdir -p ${JAVA_HOME}\
  && wget "https://cdn.azul.com/zulu/bin/${BINARY}" -O "${JAVA_HOME}/${BINARY}"\
  && tar -xzf "${JAVA_HOME}/${BINARY}" -C "${JAVA_HOME}" --strip-components=1\
# build modules distribution
  && jlink --verbose --add-modules \
java.base,\
java.compiler,\
java.desktop,\
java.instrument,\
jdk.jcmd,\
java.logging,\
java.management,\
java.naming,\
java.net.http,\
java.security.sasl,\
java.sql,\
java.scripting,\
java.xml,\
jdk.charsets,\
jdk.compiler,\
jdk.crypto.cryptoki,\
jdk.crypto.ec,\
jdk.dynalink,\
jdk.httpserver,\
jdk.jdi,\
jdk.jdwp.agent,\
jdk.jfr,\
jdk.localedata,\
jdk.management,\
jdk.management.agent,\
jdk.management.jfr,\
jdk.net,\
jdk.security.auth,\
jdk.security.jgss,\
jdk.unsupported,\
jdk.xml.dom,\
jdk.zipfs\
 --compress 2 --strip-java-debug-attributes --no-header-files --no-man-pages --vm=server --output ${JAVA_MINIMAL}
RUN cp ${JAVA_HOME}/bin/javac ${JAVA_MINIMAL}/bin/

# Second stage, copy minimal JRE distribution
FROM ${ALPINE_BASE}
ARG JAVA_MINIMAL=/opt/jre

RUN apk add --update --no-cache ca-certificates jq\
  && rm -rf /var/cache/apk/*

COPY --from=repackage ${JAVA_MINIMAL}/ ${JAVA_MINIMAL}/

ENV JAVA_HOME=${JAVA_MINIMAL}
ENV PATH=${PATH}:${JAVA_HOME}/bin
  2. podman invocation:
podman --url tcp://localhost:30558 build -t repocache.internal.net/temp-docker-local/java-base:17.0.6-3.17.3 --build-arg ALPINE_BASE=repocache.internal.net/temp-docker-local/tool/alpine:3.18.0 --build-arg AZUL_BUILD=17.40.19 --build-arg JAVA_VERSION=17.0.6 src
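For context, `TARGETARCH` and `TARGETOS` (used in the `FROM base-${TARGETARCH}` line above) are built-in build args that the builder derives from the target platform. A minimal, purely illustrative sketch of that derivation (not buildah's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// builtinArgs derives the built-in build args from a platform string
// such as "linux/arm64" or "linux/arm/v7". Illustrative sketch only.
func builtinArgs(platform string) map[string]string {
	parts := strings.Split(platform, "/")
	args := map[string]string{"TARGETPLATFORM": platform}
	if len(parts) > 0 {
		args["TARGETOS"] = parts[0]
	}
	if len(parts) > 1 {
		args["TARGETARCH"] = parts[1]
	}
	if len(parts) > 2 {
		args["TARGETVARIANT"] = parts[2]
	}
	return args
}

func main() {
	// On an arm64 host, TARGETARCH resolves to "arm64", so the
	// Dockerfile's FROM line should expand to "base-arm64".
	fmt.Println(builtinArgs("linux/arm64")["TARGETARCH"]) // prints "arm64"
}
```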

Describe the results you received

  1. Error output:
[5/5] STEP 1/6: FROM repocache.internal.net/temp-docker-local/tool/alpine:3.18.0
[5/5] STEP 2/6: ARG JAVA_MINIMAL=/opt/jre
--> Using cache b544ed193fa48e7e92f8ecd0143c96597220cbc0afa50ffa253becdc9d2e7e07
--> b544ed193fa4
[5/5] STEP 3/6: RUN apk add --update --no-cache ca-certificates jq  && rm -rf /var/cache/apk/*
--> Using cache 2ff0c0f55695335b709a1fd3668cb83436b5560c8c2e1b723b65435eaf7535e3
--> 2ff0c0f55695
[5/5] STEP 4/6: COPY --from=repackage ${JAVA_MINIMAL}/ ${JAVA_MINIMAL}/
[4/5] STEP 1/9: FROM base-arm64 AS repackage
Error: creating build container: short-name resolution enforced but cannot prompt without a TTY
  2. Snippet from podman service logs:
...
time="2023-05-27T00:22:02Z" level=debug msg="FROM \"base-arm64 AS repackage\""
time="2023-05-27T00:22:02Z" level=debug msg="Pulling image base-arm64 (policy: missing)"
time="2023-05-27T00:22:02Z" level=debug msg="Looking up image \"base-arm64\" in local containers storage"
time="2023-05-27T00:22:02Z" level=debug msg="Normalized platform linux/arm64 to {arm64 linux  [] }"
time="2023-05-27T00:22:02Z" level=debug msg="Trying \"localhost/base-arm64:latest\" ..."
time="2023-05-27T00:22:02Z" level=debug msg="Trying \"registry.fedoraproject.org/base-arm64:latest\" ..."
time="2023-05-27T00:22:02Z" level=debug msg="Trying \"registry.access.redhat.com/base-arm64:latest\" ..."
time="2023-05-27T00:22:02Z" level=debug msg="Trying \"docker.io/library/base-arm64:latest\" ..."
time="2023-05-27T00:22:02Z" level=debug msg="Trying \"quay.io/base-arm64:latest\" ..."
time="2023-05-27T00:22:02Z" level=debug msg="Trying \"docker.io/library/base-arm64:latest\" ..."
time="2023-05-27T00:22:02Z" level=debug msg="Trying \"base-arm64\" ..."
...
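The candidate list in the log above is podman's short-name resolution at work: because `base-arm64` was not recognized as a stage, the unqualified name is tried against localhost and each configured search registry (see `registries.search` in the `podman info` output below). A rough, purely illustrative sketch of that expansion (not the real containers/image code):

```go
package main

import "fmt"

// shortNameCandidates mimics what the debug log shows: an unqualified
// ("short") image name is expanded against localhost and each search
// registry, with docker.io getting the implicit "library/" namespace.
// Illustrative sketch only.
func shortNameCandidates(name string, searchRegistries []string) []string {
	candidates := []string{"localhost/" + name + ":latest"}
	for _, reg := range searchRegistries {
		c := reg + "/" + name + ":latest"
		if reg == "docker.io" {
			c = "docker.io/library/" + name + ":latest"
		}
		candidates = append(candidates, c)
	}
	return candidates
}

func main() {
	regs := []string{
		"registry.fedoraproject.org",
		"registry.access.redhat.com",
		"docker.io",
		"quay.io",
	}
	for _, c := range shortNameCandidates("base-arm64", regs) {
		fmt.Println(c)
	}
}
```

Without a TTY, podman cannot prompt the user to pick one of these candidates, hence the "cannot prompt without a TTY" error.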

Describe the results you expected

Successful build

podman info output

podman --url tcp://localhost:30558 info

host:
  arch: arm64
  buildahVersion: 1.30.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  cgroupManager: cgroupfs
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc38.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 96.69
    systemPercent: 1.35
    userPercent: 1.96
  cpus: 8
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: container
    version: "38"
  eventLogger: file
  hostname: podman-5978fdd8fc-4dfxx
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 1
      size: 999
    - container_id: 1000
      host_id: 1001
      size: 64535
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 1
      size: 999
    - container_id: 1000
      host_id: 1001
      size: 64535
  kernel: 5.15.49-linuxkit
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 14941433856
  memTotal: 25440960512
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.8.5-1.fc38.aarch64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.5
      commit: b6f80f766c9a89eb7b1440c0a70ab287434b17ed
      rundir: /tmp/podman-run-1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: tcp://0.0.0.0:8080
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-12.fc38.aarch64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 2147479552
  swapTotal: 2147479552
  uptime: 1h 17m 50.00s (Approximately 0.04 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/podman/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/podman/.local/share/containers/storage
  graphRootAllocated: 235900760064
  graphRootUsed: 29493710848
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 41
  runRoot: /tmp/containers-user-1000/containers
  transientStore: false
  volumePath: /home/podman/.local/share/containers/storage/volumes
version:
  APIVersion: 4.5.0
  Built: 1681486872
  BuiltTime: Fri Apr 14 15:41:12 2023
  GitCommit: ""
  GoVersion: go1.20.2
  Os: linux
  OsArch: linux/arm64
  Version: 4.5.0


### Podman in a container

Yes

### Privileged Or Rootless

Rootless

### Upstream Latest Release

Yes

### Additional environment details

podman rootless container running as docker-desktop kubernetes pod

### Additional information

all architectures
@chonton chonton added the kind/bug Categorizes issue or PR as related to a bug. label May 27, 2023
@rhatdan rhatdan transferred this issue from containers/podman May 27, 2023

rhatdan commented May 27, 2023

This looks like a buildah bug. For some reason buildah is looking for the image via github.com/containers/common rather than using the builtin name.

@flouthoc PTAL

I think if you had a tty this would have just worked. Did you set strict lookup in your containers.conf?


rhatdan commented May 27, 2023

@flouthoc you can probably get this to happen if you just do

buildah ... < /dev/null

flouthoc added a commit to flouthoc/buildah that referenced this issue Jun 2, 2023
While creating a dependency map, the executor must consider expanding base
images with `builtInArgs` like `TARGETARCH`, `TARGETOS`, etc., so buildah can
still keep and use the stages as dependencies later on.

Closes: containers#4820

Signed-off-by: Aditya R <[email protected]>

flouthoc commented Jun 2, 2023

Thanks, I think #4839 and openshift/imagebuilder#258 should close this.


flouthoc commented Jun 2, 2023

The issue happens because the first stages never get added to baseMap, where buildah records whether stages can be used later. It works with an explicit --platform but not by default. The PRs above should take care of this, and an additional test verifies the fix.
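In other words, the fix expands built-in args before checking whether a FROM reference names an earlier stage. A minimal sketch of that idea (hypothetical names, not buildah's actual implementation):

```go
package main

import (
	"fmt"
	"os"
)

// resolveStageBase expands build args (including built-ins such as
// TARGETARCH) inside a FROM reference, then checks whether the expanded
// name matches an earlier stage. If it does, no registry lookup (and no
// short-name resolution) is needed. Hypothetical sketch only.
func resolveStageBase(from string, args map[string]string, stages map[string]bool) (string, bool) {
	expanded := os.Expand(from, func(key string) string { return args[key] })
	return expanded, stages[expanded]
}

func main() {
	// Stage names from the Dockerfile in this issue.
	stages := map[string]bool{"base": true, "base-arm64": true, "base-amd64": true}
	args := map[string]string{"TARGETARCH": "arm64"}

	name, isStage := resolveStageBase("base-${TARGETARCH}", args, stages)
	fmt.Println(name, isStage) // prints "base-arm64 true"
}
```

Without the built-in args in the map, `base-${TARGETARCH}` never expands to `base-arm64`, the stage lookup fails, and buildah falls through to short-name registry resolution, which is exactly what the debug log shows.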

flouthoc added a commit to flouthoc/buildah that referenced this issue Jun 21, 2023
While creating a dependency map, the executor must consider expanding base
images with `builtInArgs` like `TARGETARCH`, `TARGETOS`, etc., so buildah can
still keep and use the stages as dependencies later on.

Closes: containers#4820

Signed-off-by: Aditya R <[email protected]>

github-actions bot commented Jul 3, 2023

A friendly reminder that this issue had no activity for 30 days.


flouthoc commented Jul 3, 2023

Waiting on a review here: openshift/imagebuilder#258


github-actions bot commented Aug 4, 2023

A friendly reminder that this issue had no activity for 30 days.


github-actions bot commented Sep 5, 2023

A friendly reminder that this issue had no activity for 30 days.

Urfoex added a commit to getml/getml-demo that referenced this issue Apr 22, 2024
Urfoex added a commit to getml/getml-demo that referenced this issue Apr 22, 2024
* &4 #34 adjusting path to requirements.txt in README, adjust dockerfile for usage with x86_64 and arm64

* dodgers: fix prophet usage

* Adjusting Dockerfile and docker-compose.yml to actually work with Docker, simplify by removing multiarch parts. Using heredoc for multiline RUN

* Adjusting Dockerfile and compose to work with Docker-Desktop; add hint in README for better bind-mount use with alternatives

* Adjusting README, shortening Podman part

* renaming GETML_VERSION_NUMBER to GETML_VERSION; using multi-stages to have ARCH specific arguments

* Using RUN script instead of multiple stages because of buildah problem with composed stage name resolution (containers/buildah#4820)

* Tiny improvement on getml version grep