Infra call on Tue, Apr 17 at 16:30 UTC

Hi there,

The next infra call will take place at `date -d 'Tue Apr 17 16:30:00 UTC 2018'`
(18:30:00 Berlin time).

See https://pad.documentfoundation.org/p/infra for details; agenda TBA.

See you there!
Cheers,

Reminder: it's in a bit more than 16h!

Here is what we currently have in the agenda:

  * [Florian Reisinger (unable to participate) reisi007]
    + Can this folder [ https://dev-builds.libreoffice.org/daily/ ] be cleaned
      up? The LibreOffice 4.2 to LibreOffice 5.3 branches have not been active
      for over a year, and the same goes for some master tinderboxes. Accessing
      the "current" folder often results in a 4xx return code
      [ https://dev-builds.libreoffice.org/daily/master/Win-x86@39/current ].
      Deleting such tinderboxes and branches would significantly reduce the
      server load, because currently I need to test the /current folder of
      every tinderbox to see whether it is active! At the moment it seems like
      I am getting blocked from the servers because of the high number of
      useless requests I have to make to check whether a tinderbox is active
      or not. Thanks for considering this!
    - G.: *not* an argument against cleaning up, but FWIW dentries are
      buffered so the overhead should be minimal. Also we're not blocking or
      throttling peers with many requests; IMHO it's more likely that you've
      been affected by the network issue we currently have at our datacenter.
    - FYI if you have access to shell on that box,
        `find /srv/www/dev-builds.libreoffice.org/daily -mindepth 3 -maxdepth 3 -name current -xtype d -printf '/%P\n'`
      will do what you want
    - Alternatively
        `lynx -listonly -nonumbers -dump "$URL" | grep -Px "\Q$URL\E.+/" | sed 's,.*,output=/dev/null\nurl=&current/,' | curl -w '%{http_code} %{url_effective}\n' -sIK-`
      (where URL=https://dev-builds.libreoffice.org/daily/master/ ) is pretty
      efficient (two HTTP connections for each branch; replace `grep -P` with
      perl or similar if your grep isn't GNU's)
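    - A cleanup along those lines could be sketched as below. The one-year
      cutoff and the reliance on directory mtimes are assumptions on my part;
      it only prints candidates, so review the list before deleting anything:

      ```shell
      # stale_dirs ROOT: print tinderbox directories (ROOT/<branch>/<tinderbox>)
      # not modified for more than a year.  The 365-day cutoff is an assumption;
      # swap -print for -exec rm -r {} + only after reviewing the output.
      stale_dirs() {
          find "$1" -mindepth 2 -maxdepth 2 -type d -mtime +365 -print
      }

      # Example, using the path mentioned above:
      # stale_dirs /srv/www/dev-builds.libreoffice.org/daily
      ```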
  * Monitoring
    + Grafana and prometheus deployed on https://monitoring.documentfoundation.org
    + Do we want to install the node exporter on each VM? All recent VMs
      already expose their state to the host through the QEMU guest agent
      (https://wiki.qemu.org/Features/GuestAgent), so we could use that
      instead
    + Alerting: do we want that in prometheus, or on the Grafana panels? (so
      far only graph panels support that, but it should be possible with
      singlestats and table panels in later versions)
    + collectd: OK to remove from salt core, and uninstall from the VMs?
    + Exporters to look for, test and adopt:
      - libvirtd (so we see whether we overcommit RAM or vCPUs)
      - gluster: throughput, latency, healing stats
      - postfix: mail queue
      - RDBMS: slow queries, transactions
      - nginx: cache hits/misses, requests count, method, status code
    + Would be nice to expose the metrics from the blackbox exporter (or
      associated dashboards), not sure if that's feasible without exposing the
      whole prometheus data source
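    + If we do go with the node exporter, the prometheus side would just be a
      scrape job along these lines (job name and target hostnames below are
      placeholders; 9100 is the node exporter's default port):

      ```yaml
      scrape_configs:
        - job_name: node
          static_configs:
            - targets:
                - vm1.documentfoundation.org:9100   # placeholder hostnames
                - vm2.documentfoundation.org:9100
      ```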
  * Bugzilla upgrade: currently hosted on our last prod Debian 7 VM (codename
    Wheezy). Trash it and replace it with a fresh Debian 9 VM from the new
    baseline?

See you there