WSL2, Docker Desktop and Shared Drives

A pain in the bum but I got there.

I was trying to share an NFS share with an Immich docker container to have a play with importing videos. The Debian WSL2 distro could see it but the container could not. I am not sure if it is down to the client IP range allowed on the OMV NFS export or something else, but I knew that if my Windows machine could see the share then the damn WSL2 should be able to see it too, natively!

In PowerShell, wsl --list --verbose lists which WSL distros are in use. Confusingly it gives two, but the * suggested that Debian was the one to work with.

  NAME              STATE           VERSION
* Debian            Running         2
  docker-desktop    Running         2
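
For the record, the kind of mount I was attempting inside Debian was something like this (the server IP and export path here are placeholders, and nfs-common needs installing first):

sudo apt install nfs-common
sudo mkdir /mnt/videos
sudo mount -t nfs 10.10.10.20:/export/videos /mnt/videos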

I tried and tried variations of that mount, but though Debian could see the share, the container could not. I share the same folder over SMB too, so I was able to get that into WSL using:

wsl -d Debian
sudo mkdir /mnt/p
sudo mount -t drvfs P: /mnt/p

This makes the P: drive appear at /mnt/p, which I could then get working in the yaml with:

    volumes:
      - /mnt/p:/mnt/p_drive
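
One thing worth knowing: a drvfs mount like that does not survive a WSL restart. If you want it back automatically, an entry in /etc/fstab inside the Debian distro should do it (a sketch, assuming the same drive letter and mount point):

P: /mnt/p drvfs defaults 0 0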

This worked so nicely that I couldn't think why the NFS share hadn't worked, so I went back and tried it all over again. It immediately worked!

Argh.

Samba Machine Visible on Windows Network

This has bothered me for ages, but today I worked it out.

I am using a docker image for Samba (dperson/samba) and it works fine after a bit of setup, but it always bothered me that I could not see the computer on the home network. I know it is more secure not to, but sometimes convenience wins.

I finally found the answer, and it is wsdd, a Web Service Discovery daemon that answers the discovery requests Windows sends out when it browses the network.

I found a wsdd docker image with 500k downloads on Docker Hub but no documentation at all. This yaml was enough to let me call the machine what I wanted rather than its rather prosaic default name, and it came up INSTANTLY:

  wsdd:
    image: viniciusleterio/wsdd
    container_name: wsdd
    network_mode: host   # WS-Discovery relies on multicast, so host networking is the easy path
    restart: always
    command: >   # -i pins the interface, -n sets the name advertised to Windows
      -i eth0
      -n MyServerName
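
If you want to test it before wiring it into a stack, the one-off equivalent is roughly this (assuming, as the yaml implies, that the image's entrypoint passes its arguments straight through to wsdd):

docker run -d --network host --name wsdd viniciusleterio/wsdd -i eth0 -n MyServerName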

Multiple Paperless Instances

This nearly killed me; I thought I had nuked everything, but in the end it was way easier than I had feared.

Assuming you used one of the default .yaml and .env files from the paperless-ngx GitHub repo, you first need to add a database. I use Portainer, so I went to the db container's console and did this to log in, list the databases, create a new one, check it is there and quit:

psql -U paperless
\l
CREATE DATABASE mynewdb;
\l
\q
exit
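
If you are in a normal shell rather than Portainer, the same thing as a one-liner is roughly this (assuming the database service is called db, as it is in the stock compose file):

docker compose exec db createdb -U paperless mynewdb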

Next I needed to make a new env file. I just did a Save As… on the existing one, called it docker-compose-mynewone.env, and added these lines:

PAPERLESS_DBNAME=mynewdb
PAPERLESS_SECRET_KEY=an+all+new+random+set+of+characters
PAPERLESS_URL=https://mynewpaperless.myfinedomain.com
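
The secret key just needs to be long and random; one easy way to generate one is:

openssl rand -base64 48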

The .yaml file needs one new section – I copied the webserver one and changed only these lines (the first instance is on port 8010). The /1 on the end of PAPERLESS_REDIS tells it to use a second Redis database, and you need all-new volumes or things go awry (I found this out the hard way).

  webserver-newone:
    ports:
      - "8011:8000"
    env_file: docker-compose-mynewone.env
    environment:
      PAPERLESS_REDIS: redis://broker:6379/1
    volumes:
      - data-new:/usr/src/paperless/data
      - media-new:/usr/src/paperless/media
      - ./export-new:/usr/src/paperless/export
      - ./consume-new:/usr/src/paperless/consume
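
Because data-new and media-new are named volumes, they also need declaring in the top-level volumes: section alongside the existing ones – something like this, assuming the stock file's naming:

volumes:
  data:
  media:
  data-new:
  media-new:
  # ...plus the db/broker volumes from the stock file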

You need to tweak the original webserver too, changing just this one line to add the /0.

  webserver:
      PAPERLESS_REDIS: redis://broker:6379/0

Then you go to the folder with the yaml in a console, stop all the instances, pull new images and create a superuser as per usual, but with a minor difference (the new webserver name):

docker compose down
docker compose pull
docker compose run --rm webserver-newone createsuperuser
docker compose up -d
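
One small gotcha: docker will create the ./consume-new and ./export-new bind-mount folders if they do not exist, but it creates them owned by root, so it can be tidier to make them yourself first:

mkdir -p consume-new export-new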

And that should be it!

Caddy, SEC_ERROR_UNKNOWN_ISSUER and TLS Internal

I have solved this before but forgot how. Maybe typing something will help me remember the next time.

I use Caddy in a docker container to reverse proxy to services on my docker box and other computers on my network. Something like this:

# test subdomain
test.mydomain.net {

        reverse_proxy http://10.10.10.15:8010
        tls internal
}

It always gives a SEC_ERROR_UNKNOWN_ISSUER error (Firefox's name for a certificate from an untrusted issuer) and I can just accept the risk, which works for a while. Ideally you want it to just work, especially if you are setting up a site your 83-year-old mum might access.

I had forgotten that if you want it to just work, you need to add the subdomain to your DNS as an A record – and then remove the tls internal line, so Caddy can complete the ACME challenge and fetch a publicly trusted certificate from Let's Encrypt. Job done.
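
So the public-facing version of the block above is just:

test.mydomain.net {

        reverse_proxy http://10.10.10.15:8010
}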

This does mean that anyone can hit that domain – which is fine in some cases and not fine in others. So I updated the internal-only domains to this:

fileserver.mydomain.net {

        @denied not client_ip 10.10.0.0/16 172.16.0.0/12

        handle @denied {
                 abort
        }

        reverse_proxy http://10.10.10.15:8081
}

The 172.x range comes from Docker – its internal networks hand out addresses like 172.26.x.x (inside the 172.16.0.0/12 private block), and from memory the rule fails without it, presumably because requests arriving through Docker's NAT appear to come from there rather than from the LAN.
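
If you need to check which subnet your own docker network actually uses, this prints it (swap bridge for the name of your compose network if you use one):

docker network inspect bridge -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}'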