podman-compose (rootless mode) processes the "networks:" directive incorrectly for external networks #1127

Open
VitSimon opened this issue Feb 2, 2025 · 0 comments
Labels
bug Something isn't working

Comments


VitSimon commented Feb 2, 2025

Describe the bug

I have two standard users and I am using Podman in rootless mode. I have two compose.yaml files. One contains a network definition. The second one, run later, contains a network definition with the external: true option. Unfortunately this is not processed correctly. The network is created, but it is not shared across the rootless users. The second compose.yaml fails with "network already exists" and later also with an error that the network does not exist.

If I try again with external: false, another network named [user name]_[network name] is created instead.
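As far as I understand, rootless Podman keeps a separate network store per user. The quickest way to see what each user actually has is to list the networks as each user (a minimal check; the exit-status test below mirrors the command podman-compose runs internally, as seen in the traceback further down):

# as the 1st user, after its compose.yaml has been brought up
podman network ls                          # ninternal and idbshared are listed

# as the 2nd user
podman network ls                          # idbshared is not listed here
podman network exists idbshared; echo $?   # prints 1, i.e. the network does not exist for this user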

To Reproduce

  1. The 1st user has this compose.yaml:
version: "3.9"

networks:
  ninternal:
    name: "ninternal"
    driver: bridge
    ipam:
      config:
        - subnet: "10.1.0.0/16"
#    external: false

  idbshared:
    name: "idbshared"
    driver: bridge
    ipam:
      config:
        - subnet: "10.2.0.0/28"
#    external: false

services:
  idb:
    image: postgres:alpine
    container_name: idb
    restart: always
    shm_size: 128mb
    volumes:
      - ./dbdata:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: usr1
      POSTGRES_PASSWORD: pwd1
    networks:
      - ninternal
      - idbshared
  2. The 2nd user has this compose.yaml:
version: "3.9"

networks:
  nvpn:
    name: "nvpn"
    driver: bridge
    ipam:
      config:
        - subnet: "192.169.0.0/24"
    external: false

  idbshared:
    external: true

services:
  vcs:
    image: gitea/gitea
    container_name: vcs
    environment:
      - GITEA__database__DB_TYPE=postgres
      - GITEA__database__HOST=idb:5432
      - GITEA__database__NAME=test
      - GITEA__database__USER=user
      - GITEA__database__PASSWD=pwd
    restart: always
    networks:
      - nvpn
      - idbshared
    volumes:
      - ./giteadata:/data
    ports:
      - "80:3000"
    depends_on:
      - idb

Steps to reproduce the behavior:

  1. Both users are standard users on an Alpine Linux system
  2. Log in as the 1st user and run the 1st compose.yaml with podman-compose up -d
  3. Log in as the 2nd user and run the 2nd compose.yaml with podman-compose up -d (see the note after this list)
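Note to step 3: one way to get past the existence check for the 2nd user would presumably be to create a same-named network for that user as well, for example:

podman network create --driver bridge --subnet 10.2.0.0/28 idbshared

But as far as I can tell this only creates a second, separate network inside the 2nd user's namespace, not the one created in step 2, so the two containers still cannot reach each other.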

Expected behavior
I expected that both yaml files would be processed successfully, without errors.

Actual behavior
Only the 1st compose.yaml was processed correctly.
The 2nd one failed, even though the network had already been created before (under the other user, in rootless mode).

Output

For reproduction point 1:
After calling:
podman-compose up -d
the list of networks is:
podman network ls
NETWORK ID    NAME       DRIVER
508f6cf7a0bd  idbshared  bridge
bf6db76ddbb8  ninternal  bridge
2f259bab93aa  podman     bridge

(this is OK, everything was processed and the networks were created)

For reproduction point 2:
After calling:
podman-compose up -d
I get this error:
fe93da32ac198bf114e7d21251f336538b53e7a8682df824bd0c3b87d2b1a4ef
Traceback (most recent call last):
File "/usr/lib/python3.12/site-packages/podman_compose.py", line 851, in assert_cnt_nets
await compose.podman.output([], "network", ["exists", net_name])
File "/usr/lib/python3.12/site-packages/podman_compose.py", line 1362, in output
raise subprocess.CalledProcessError(p.returncode, " ".join(cmd_ls), stderr_data)
subprocess.CalledProcessError: Command 'podman network exists idbshared' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/bin/podman-compose", line 8, in
sys.exit(main())
^^^^^^
File "/usr/lib/python3.12/site-packages/podman_compose.py", line 3504, in main
asyncio.run(async_main())
File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/base_events.py", line 686, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/podman_compose.py", line 3500, in async_main
await podman_compose.run()
File "/usr/lib/python3.12/site-packages/podman_compose.py", line 1743, in run
retcode = await cmd(self, args)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/podman_compose.py", line 2500, in compose_up
podman_args = await container_to_args(compose, cnt, detached=args.detach)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/podman_compose.py", line 1094, in container_to_args
await assert_cnt_nets(compose, cnt)
File "/usr/lib/python3.12/site-packages/podman_compose.py", line 854, in assert_cnt_nets
raise RuntimeError(f"External network [{net_name}] does not exists") from e
RuntimeError: External network [idbshared] does not exists

The network list afterwards:
podman network ls
NETWORK ID    NAME    DRIVER
fc76a6495d83  nvpn    bridge
2f259bab93aa  podman  bridge
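To double-check that this is a per-user view and not a globally broken network, the network can also be inspected as each user (a minimal sketch; the exact error text may differ):

# as the 1st user
podman network inspect idbshared   # prints the network configuration (JSON)

# as the 2nd user
podman network inspect idbshared   # fails, the network is not visible for this user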

Environment:

  • OS: Linux - Alpine Linux 3.21, Kernel 6.12.11-0-lts x86_64
  • podman version: 5.3.2
  • podman compose version: 1.2.0

Additional context

Is my expectation even valid with Podman in rootless mode?

I found that a similar problem has been discussed here earlier:

I think only the default podman network has the same hash id across all Podman users (see "Default network": https://github.com/containers/podman/blob/main/docs/tutorials/basic_networking.md#default-network). Is there any other way to define a shared network? Since a pod has to contain the complete list of its containers and cannot be shared between users, pods are not an option for me, because I need a shared DB server for my deployment. Thanks for your help.
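The only workaround I can think of (a sketch only, not verified here; whether host.containers.internal resolves depends on the Podman version and network backend) is to not share a network at all: publish the database port on the host from the 1st user's compose.yaml and point Gitea at the host instead:

# 1st user's compose.yaml: publish the DB port on the host
services:
  idb:
    image: postgres:alpine
    ports:
      - "5432:5432"

# 2nd user's compose.yaml: reach the DB via the host instead of a shared network
services:
  vcs:
    image: gitea/gitea
    environment:
      - GITEA__database__HOST=host.containers.internal:5432

This keeps both rootless users fully isolated and only shares a published TCP port, at the cost of exposing Postgres on the host.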
