

I think the restrictions are just for publishing containers on Docker Hub. If you aren’t doing that, you aren’t impacted.
My pleasure! Getting this stuff together can be a pain, so I’m always trying to pay it forward. Good luck and let me know if you have any questions!
Here you go. I commented out what isn't necessary. There are a few placeholder passwords you'll want to set to your own values. Also, pay attention to the volume mappings… I left my values in there, but you'll almost certainly need to change them to make sense for your host system. Hopefully this is helpful!
services:
  mongodb:
    image: "mongo:6.0"
    volumes:
      - "/mnt/user/appdata/mongo-graylog:/data/db"
      # - "/mnt/user/backup/mongodb:/backup"
    restart: "on-failure"
    # logging:
    #   driver: "gelf"
    #   options:
    #     gelf-address: "udp://10.9.8.7:12201"
    #     tag: "mongodb"
  opensearch:
    image: "opensearchproject/opensearch:2.13.0"
    environment:
      - "OPENSEARCH_JAVA_OPTS=-Xms1g -Xmx1g"
      - "bootstrap.memory_lock=true"
      - "discovery.type=single-node"
      - "action.auto_create_index=false"
      - "plugins.security.ssl.http.enabled=false"
      - "plugins.security.disabled=true"
      - "OPENSEARCH_INITIAL_ADMIN_PASSWORD=[yourpasswordhere]"
    ulimits:
      nofile: 64000
      memlock:
        hard: -1
        soft: -1
    volumes:
      - "/mnt/user/appdata/opensearch-graylog:/usr/share/opensearch/data"
    restart: "on-failure"
    # logging:
    #   driver: "gelf"
    #   options:
    #     gelf-address: "udp://10.9.8.7:12201"
    #     tag: "opensearch"
  graylog:
    image: "graylog/graylog:6.2.0"
    depends_on:
      opensearch:
        condition: "service_started"
      mongodb:
        condition: "service_started"
    entrypoint: "/usr/bin/tini -- wait-for-it opensearch:9200 -- /docker-entrypoint.sh"
    environment:
      GRAYLOG_TIMEZONE: "America/Los_Angeles"
      TZ: "America/Los_Angeles"
      GRAYLOG_ROOT_TIMEZONE: "America/Los_Angeles"
      GRAYLOG_NODE_ID_FILE: "/usr/share/graylog/data/config/node-id"
      GRAYLOG_PASSWORD_SECRET: "[anotherpasswordhere]"
      GRAYLOG_ROOT_PASSWORD_SHA2: "[aSHA2passwordhash]"
      GRAYLOG_HTTP_BIND_ADDRESS: "0.0.0.0:9000"
      GRAYLOG_HTTP_EXTERNAL_URI: "http://localhost:9000/"
      GRAYLOG_ELASTICSEARCH_HOSTS: "http://opensearch:9200/"
      GRAYLOG_MONGODB_URI: "mongodb://mongodb:27017/graylog"
    ports:
      - "5044:5044/tcp"   # Beats
      - "5140:5140/udp"   # Syslog UDP
      - "5140:5140/tcp"   # Syslog TCP
      - "5141:5141/udp"   # Syslog UDP - dd-wrt
      - "5555:5555/tcp"   # RAW TCP
      - "5555:5555/udp"   # RAW UDP
      - "9000:9000/tcp"   # Server API / web UI
      - "12201:12201/tcp" # GELF TCP
      - "12201:12201/udp" # GELF UDP
      - "10000:10000/tcp" # Custom TCP port
      - "10000:10000/udp" # Custom UDP port
      - "13301:13301/tcp" # Forwarder data
      - "13302:13302/tcp" # Forwarder config
    volumes:
      - "/mnt/user/appdata/graylog/data:/usr/share/graylog/data/data"
      - "/mnt/user/appdata/graylog/journal:/usr/share/graylog/data/journal"
      - "/mnt/user/appdata/graylog/etc:/etc/graylog"
    restart: "on-failure"

# Note: these named volumes aren't referenced by the services above, since
# everything uses bind mounts. Harmless to leave in, or delete this block,
# or switch the services over to them if you prefer named volumes.
volumes:
  mongodb_data:
  os_data:
  graylog_data:
  graylog_journal:
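One note on the GRAYLOG_ROOT_PASSWORD_SHA2 placeholder: it expects a SHA-256 hash of your admin password, not the password itself. Something like this should do it (assuming a Linux shell and Compose v2; adjust to your setup):

# Generate the hash for GRAYLOG_ROOT_PASSWORD_SHA2
echo -n 'YourAdminPassword' | sha256sum | cut -d ' ' -f1

# Bring the stack up from the directory containing the compose file
docker compose up -d
# Watch Graylog start; the web UI lands on port 9000
docker compose logs -f graylog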
Can you clarify what your concern is with “heavy” logging solutions that require a database or Elasticsearch? If you’re worried about system resources, that’s one thing, but if it’s just that it seems “complicated,” I have a docker compose file that handles Graylog, OpenSearch, and MongoDB. Just give it a couple of persistent storage volumes and it’s good to go. You can send logs directly to it with syslog or GELF, or set up a Filebeat container to ingest file logs (rough sketch below).
There’s a LOT you can do with it once you’ve got your logs into the system, but you don’t NEED to do anything else. Just something to consider!
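If you go the Filebeat route, here's a rough sketch of what that could look like; this isn't part of my running stack, so treat it as a starting point. It assumes you've created a Beats input in Graylog on port 5044, and 10.9.8.7 stands in for your Docker host's IP:

services:
  filebeat:
    image: "docker.elastic.co/beats/filebeat:8.13.0"
    volumes:
      - "/mnt/user/appdata/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"
      - "/var/log:/var/log:ro" # whatever host logs you want shipped
    restart: "on-failure"

And a filebeat.yml along these lines:

filebeat.inputs:
  - type: filestream
    id: host-logs # filestream inputs need a unique id
    paths:
      - /var/log/*.log
output.logstash:
  hosts: ["10.9.8.7:5044"] # Graylog's Beats input speaks the same protocol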
I’m far from an expert, but it seems to me that if you’re setting up your containers according to best practice, you would only map the specific ports needed for the service, which renders a wayward “open port” useless. If there’s some kind of UI exploit, that’s a different story; perhaps this is why most people suggest not exposing your containerized services to the WAN. If we’re talking about a virus that might affect files, it can only see the files that are mapped into the container, which limits the damage it can do. If you are exposing sensitive files to your container, it’s worth vetting the container more thoroughly (and making sure you have good backups).
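To make that concrete: if a service's UI only needs to be reachable from the host itself (behind a reverse proxy, say), you can bind the published port to loopback instead of every interface. Hypothetical service name, but the syntax is standard compose:

services:
  someapp:
    image: "someapp:latest" # hypothetical
    ports:
      - "127.0.0.1:8080:8080" # reachable only from the host
      # - "8080:8080"         # this, by contrast, listens on every interface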
That’s adding insult to injury… Docker Desktop is already way worse than running Docker on Linux!