Participants
============
 1. guilhem
 2. Brett
 3. cloph

Agenda
======
 * Brett: I've noticed a lot of CI builds spending hours in the queue. I have
   two sets of hardware I can donate as tinderboxes:
   + Intel i7 6700k CPU 4.00 GHz, 16 GB RAM (DDR4 @ 3200 MHz) → sitting in a
     closet right now, tower format
   + 1x Xeon E3-1270v2 3.50GHz, 4GB RAM (DDR3 @ 1333 MHz) (can purchase RAM
     upgrade), rack format but noisy
   + Also have a European Hetzner Debian host operating as a Tor relay/libgen
     seedbox, could share resources as CPU usage only hovers around 20%
     (Intel Xeon CPU E31245 3.30GHz, 16 GB RAM, DDR3 @ 1333 MHz ECC).
   + Happy to provide whichever OS is needed most. Looks like windows is in
     highest demand, but we also seem to have few mac builders. So long as
     hackintoshes are acceptable, happy to put the work in to provide another
     mac machine.
     - cloph: happy to make use of the offer :-)
       . easiest to use as a linux tb (centos7) — for mac we're waiting for the
         new mac minis; for windows it's easier to stick to a homogeneous
         baseline hosted at tdf
       . can deploy as a salt minion from a bare centos7
       . guilhem: only need the minion pubkey (and client ip to allow in the
         firewall)
     - cloph: can change tb88 to be win-only then
   + Brett: will start with the 6700k
   + guilhem: not sure long queue times suggest a hardware shortage though;
     we've had connection issues lately and jnlpxyz doesn't always restart
     automatically when stopped — should wrap it in a `while :; do … done`
     loop (see the sketch below)
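     A minimal sketch of that wrapper, assuming the agent is the usual Jenkins
     JNLP agent started with `java -jar agent.jar` (the exact command line, URL
     and paths on the tinderboxes may differ):

       #!/bin/sh
       # keep the build agent running: restart it whenever it exits
       # (the agent.jar path, URL and secret below are placeholders)
       while :; do
           java -jar /opt/ci/agent.jar -jnlpUrl "$JNLP_URL" -secret "$SECRET"
           echo "agent exited, restarting in 30s" >&2
           sleep 30
       done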
 * hypervisor crashes
   + excelsior crashed last week, and also in early dec, but no other time since
     2019 (update: replaced faulty memory module since the call)
   + charly also crashed twice last summer — been stable since though
   + it's not that we have to change hardware asap, but it is aging and it
     makes sense to replace/upgrade in 2022 or so (and to move away from
     spinning drives at the same time)
     - guilhem to check with manitu what their current offer is
 * upcoming os upgrades:
   + weblate (already python3, hopefully smooth)
     - cloph: please hold upgrade until FOSDEM
   + extensions.lo → can upgrade whenever
   + tb88, tb89
     - cloph: will shutdown the linux VM but stick to a virtualized setup
     - guilhem: an occasion to “test” the hypervisor upgrade there before
       proceeding with the prod setup, and also to unify the hypervisor setup
       in salt
   + FYI vm161 is still running Debian 10; on hold for now because planetvenus
     is not ported to python3
 * pg backups
   + Brett: I've been running pgBackRest on vm191 for some time now; it's been
     behaving well. We can easily run full, diff, and incremental backups with
     fairly minimal setup/maintenance.
   + Created three templated systemd service/timer pairs,
     pgbackrest-backup-{incr,diff,full}@.{service,timer} (see the usage sketch
     at the end of this item).
   + The issues are in scalability. Creating separate users on the backup host
     (e.g. vm191 logs in and pushes backups via a different user than vm221, so
     they cannot view each other's backups) is difficult. pgBackRest's docs
     don't really cover this use-case, and Debian's package assumes everything
     runs under a single user (all of the package's files/directories are owned
     by the postgres user).
   + Been playing with alterations to fit the vision above but, while any one of
     these issues is easily rectified, all of them put together make for a very
     onerous and spaghetti-like structure
   + How uncomfortable would Guilhem be with iterating (since this project has
     been going on for far too long): starting with a single user for a simple
     deployment, then working toward splitting up users?
     - guilhem: no problem
   + If Hetzner ever gets S3-compatible storage, pgBackRest is well-equipped to
     push to buckets; that would remove the need for these complexities
     entirely, since per-server permissions on the buckets could be employed.
   + example of SSH key provisioning:
     https://git.libreoffice.org/infra/salt/+/refs/heads/master/users/root.sls
   + rsnapshot-validate can be adapted at a later stage as a form of poor man's
     chroot / privilege drop:
     https://git.libreoffice.org/infra/salt/+/refs/heads/master/rsnapshot/rsnapshot-validate
   + Brett: will send that email to hostmaster@ but the only difficulty left is
     the key provisioning — guilhem: awesome!
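     A hedged sketch of how those templated units would be used, assuming the
     instance name is the pgBackRest stanza (here "main", a placeholder) and
     that each service simply runs the corresponding backup type:

       # what each templated service would run (stanza name is illustrative)
       pgbackrest --stanza=main --type=full backup
       pgbackrest --stanza=main --type=diff backup
       pgbackrest --stanza=main --type=incr backup

       # enabling the timers for that stanza
       systemctl enable --now pgbackrest-backup-full@main.timer
       systemctl enable --now pgbackrest-backup-diff@main.timer
       systemctl enable --now pgbackrest-backup-incr@main.timer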
 * Next call: Feb 15, 17:30 UTC

-- 
Guilhem.

