The number of containers I’m running on my server keeps increasing, and I want to make sure I’m not pushing it beyond its capabilities. I would like a simple interface accessible on my home network (that does not make any fishy connections out) that shows me CPU and RAM usage, the storage status of my hard drives, and network usage. It should be FOSS, and I want to run it as a Docker container.

Is Grafana the way to go, or are there other options I should consider?

  • ReversalHatchery@beehaw.org · 3 months ago

    it’s a bit more complicated than that. grafana only displays the collected info. you still need a database, and something that collects the data from your systems.

    what I do is grafana for the dashboards + prometheus for storage + the prometheus node exporter for collection.
    but I’m not totally satisfied with this setup, because long-term storage is unsolved (cranking up the retention time in prometheus means it will eat a lot of storage after a few months), and I haven’t found a way to collect info about the top consumers of resources (e.g. the top 10 processes by cpu usage)
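
    For reference, a minimal docker-compose sketch of that stack could look roughly like this (the image names are the official Docker Hub ones; ports, volumes and paths are example assumptions to adapt):

    ```yaml
    # Sketch of a Grafana + Prometheus + node_exporter stack.
    # Image names are the official Docker Hub ones; everything else is an example.
    services:
      grafana:
        image: grafana/grafana
        ports:
          - "3000:3000"              # Grafana web UI on the home network
        volumes:
          - grafana-data:/var/lib/grafana

      prometheus:
        image: prom/prometheus
        ports:
          - "9090:9090"
        volumes:
          - ./prometheus.yml:/etc/prometheus/prometheus.yml
          - prometheus-data:/prometheus

      node-exporter:
        image: prom/node-exporter
        pid: host                    # so it can see host-level stats
        volumes:
          - /:/host:ro,rslave        # read-only view of the host filesystem
        command:
          - '--path.rootfs=/host'

    volumes:
      grafana-data:
      prometheus-data:
    ```

    prometheus.yml then needs a scrape job pointing at node-exporter:9100, and Prometheus gets added to Grafana as a data source.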

    • Scott@lem.free.as · 3 months ago

      VictoriaMetrics is a time-series database built for long-term storage. It can be used as a remote storage backend for Prometheus et al.
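
      If you go that route, the wiring is (as far as I know) just a remote_write block in prometheus.yml pointing at the VictoriaMetrics container, along these lines (the hostname "victoriametrics" is an assumption, i.e. whatever you name the service in your compose file):

      ```yaml
      # Sketch: forward Prometheus samples to VictoriaMetrics for long-term storage.
      # 8428 is VictoriaMetrics' default port; the hostname is an assumption.
      remote_write:
        - url: http://victoriametrics:8428/api/v1/write
      ```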

    • cyberwolfie@lemmy.ml (OP) · 3 months ago

      Ah, I see. What kind of disk usage are we talking about over e.g. one month? I am (at least for now) not necessarily interested in long term storage (but the data hoarder in me might quickly change that).

      • ReversalHatchery@beehaw.org · 3 months ago

        I set it up last December for 3 systems, changed the collection interval from the default 1 minute to 15 sec, and it now uses 15 GB
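
        For what it’s worth, local disk usage can also be capped with Prometheus’ retention flags; a compose-file sketch (the 30d / 10GB values are arbitrary examples):

        ```yaml
        # Sketch: cap Prometheus' local TSDB by age and/or size (values are examples).
        # When overriding `command` in compose, pass --config.file again explicitly.
        services:
          prometheus:
            image: prom/prometheus
            command:
              - '--config.file=/etc/prometheus/prometheus.yml'
              - '--storage.tsdb.retention.time=30d'
              - '--storage.tsdb.retention.size=10GB'
        ```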

  • ReversalHatchery@beehaw.org · 3 months ago (edited)

    btw grafana does make connections out, at least when installing plugins, possibly more.

    if you are not in the EU, they even load fucking fecesbook scripts on their main website! a few months ago that was happening in the EU too. if you’re in the EU, you can see it for yourself with a VPN or the Tor browser: request a new circuit until the exit node (the bottom one) is in the USA or somewhere like that, and check the network traffic in the devtools (reload the site if you don’t see it there)

    even if this is not the case in the EU (for now), there are no excuses for doing this. no, letting your website be handled solely by marketing heads is not an excuse.

    • cyberwolfie@lemmy.ml (OP) · 3 months ago

      For installing plugins I am fine with it, but I would not want any telemetry sent anywhere without my knowledge. The data collected should stay on my server.
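
      For what it’s worth, Grafana’s usage reporting and update checks can reportedly be switched off through its [analytics] settings, which in Docker map to GF_* environment variables; a sketch (verify the exact keys against the Grafana docs):

      ```yaml
      # Sketch: turn off Grafana's usage reporting and update pings via the
      # GF_<SECTION>_<KEY> overrides of grafana.ini; plugin installs still go out on demand.
      services:
        grafana:
          image: grafana/grafana
          environment:
            - GF_ANALYTICS_REPORTING_ENABLED=false
            - GF_ANALYTICS_CHECK_FOR_UPDATES=false
      ```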

  • jlow (he/him)@beehaw.org · 3 months ago

    Netdata is far simpler to set up than Grafana from what I remember, but it does phone home by default (you can disable that via options in Docker or something). On one of my servers it doesn’t show container names, which is kind of a bummer, but I didn’t care enough to troubleshoot that since I mostly ssh in and use btop anyway …
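
    A Netdata compose sketch roughly along the lines of its documented Docker setup; the telemetry variable is from memory, so double-check the current docs, and the docker.sock mount is what lets it resolve container names:

    ```yaml
    # Sketch of a Netdata container (based on the project's documented Docker setup).
    services:
      netdata:
        image: netdata/netdata
        ports:
          - "19999:19999"          # web UI
        environment:
          - DISABLE_TELEMETRY=1    # opt out of anonymous stats (variable name from memory; verify)
        cap_add:
          - SYS_PTRACE
        security_opt:
          - apparmor:unconfined
        volumes:
          - /proc:/host/proc:ro
          - /sys:/host/sys:ro
          - /etc/os-release:/host/etc/os-release:ro
          - /var/run/docker.sock:/var/run/docker.sock:ro   # needed to resolve container names
    ```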

    • atimehoodie@lemmy.ml · 3 months ago

      Seconded. Netdata has a generic and forgettable name but is powerful and easy to set up.

      Open source, runs in docker or LXC. Web UI with more metrics than you will ever want, plus plugins. Support for alerts and some log aggregation, though I have not tried logging yet.

  • grapemix@lemmy.ml · 3 months ago

    I wouldn’t waste time on anything other than Grafana if your setup is serious, because you will always want more: log aggregation, log queries, alerts, tracing, profiling, OIDC, S3 buckets, more and more dashboards. It’s addictive. Why waste time redoing it later?

  • mbirth@lemmy.ml · 3 months ago

    Possibly a bit overkill, but I’m running Zabbix in 3 containers (core, web UI, database). Using its agent, installed on all my machines, I can monitor basically anything. Of course you can also set limits and alerts, draw graphs, etc.
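
    A sketch of that three-container layout with the official zabbix/* images (credentials, ports and image tags are placeholders to adapt):

    ```yaml
    # Sketch: Zabbix server + web UI + database as three containers.
    services:
      postgres:
        image: postgres:16
        environment:
          - POSTGRES_USER=zabbix
          - POSTGRES_PASSWORD=changeme   # placeholder
          - POSTGRES_DB=zabbix
        volumes:
          - zabbix-db:/var/lib/postgresql/data

      zabbix-server:
        image: zabbix/zabbix-server-pgsql
        environment:
          - DB_SERVER_HOST=postgres
          - POSTGRES_USER=zabbix
          - POSTGRES_PASSWORD=changeme
        ports:
          - "10051:10051"              # port the agents report to

      zabbix-web:
        image: zabbix/zabbix-web-nginx-pgsql
        environment:
          - ZBX_SERVER_HOST=zabbix-server
          - DB_SERVER_HOST=postgres
          - POSTGRES_USER=zabbix
          - POSTGRES_PASSWORD=changeme
        ports:
          - "8080:8080"                # web UI

    volumes:
      zabbix-db:
    ```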

    • cyberwolfie@lemmy.ml (OP) · 3 months ago

      That looks cool, but as you said, maybe a little overkill, hehe. I’ll still check it out in more detail; good to know about for later in any case!

  • Brewchin@lemmy.world · 3 months ago

    I have Netdata running in a container. It has a useful all-in-one-pane view and does a good job of auto-detecting other containers and the host OS. It’s essentially zero-config.

    It also has alerting capability, which is not zero-config (configuring it properly is a bit of a chore). 😅

    They try to push a pro/paid version, but it’s subtle and completely optional (a bit like the way Portainer does it).