Recent comments on posts in the blog:

Look at Cosmos

Your post prompted me to write something about the system I have been using for my servers/laptops for a couple of years now... I hope you can find some inspiration in Cosmos. See the write-up here:

http://blog.josefsson.org/2015/09/24/cosmos-simple-configuration-management-system/

/Simon

Comment by josefsson.org
comment 3

Propellor has some nascent ability to run on a host without ghc. It does this by sending over a precompiled binary along with all the libraries it needs. Currently this is only done when the host doesn't seem to have a functional apt at all -- I implemented it for OS takeover purposes -- but it wouldn't be hard to add a property to a host that makes propellor always use this mode when spinning it. Of course, architecture compatibility limitations apply. Also, it means uploading some number of megabytes (around 10 IIRC) each time. It might be possible to use rsync to make the bandwidth usage somewhat more tractable.

The relevant code is in sendPrecompiled.
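
To illustrate the general idea (this is only a rough sketch, not the actual sendPrecompiled code; the target host name is made up), bundling the binary with its shared libraries and pushing it with rsync could look something like:

    # Hypothetical sketch of the general technique, not Propellor's code.
    # Bundle a locally built binary plus the shared libraries it links against,
    # then push it with rsync so unchanged files cost little on later spins.
    bin=./propellor
    stage=$(mktemp -d)
    mkdir -p "$stage/bin" "$stage/lib"
    cp "$bin" "$stage/bin/"
    # Copy every shared library the binary depends on (paths taken from ldd).
    ldd "$bin" | awk '/=> \// { print $3 }' | while read -r lib; do
        cp "$lib" "$stage/lib/"
    done
    # rsync only transfers changed data, which keeps repeat uploads small.
    rsync -az "$stage/" root@target.example.com:/usr/local/propellor-precompiled/
    # On the target, run the binary against the bundled libraries.
    ssh root@target.example.com \
        'LD_LIBRARY_PATH=/usr/local/propellor-precompiled/lib /usr/local/propellor-precompiled/bin/propellor'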

Comment by joeyh.name
ansible

I have been working with Ansible for the past few months. I chose it mainly for two things: it's push-based and you don't have to install anything on the targets. I would also try Salt, as installing just that is not a big deal.

The thing is actually pretty powerful, but its biggest problem is the mess they made with YAML and Jinja. It is really badly designed (I'd say not designed at all): you never know for sure when the Jinja parts are evaluated, and the variables don't have clear precedence rules, much less scopes. All this makes writing nice, reusable roles very hard. I'd say that if you just rewrote the front end with a better DSL, Ansible could be a killer.

Comment by Martín Ferrari
shell

It’s there, and easy to use. Ph33r the mighty power of the Korn Shell!

When I arrived at the current $orkplace there was a system in action that downloaded and ran a script every 10 minutes, as root. I was shocked.

The system is still in use, maintained by me, and has so far survived all attempts by other admins (most of whom came to the company even later) to replace it with e.g. Puppet; it has been robust yet simple to use. It now also calls out to a CGI for some basic monitoring (we have another CGI that displays an overview page) and can signal problems (such as “no NTP running” or “Java < 7 installed”), which turn the host's line on the overview page a signalling colour. The script installed on the servers that does the actual downloading and running exports some useful info beforehand (hostname, MAC of the primary interface, OS, version, host type/class, etc.).
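
The general shape of such a script (a hypothetical sketch, not the actual one; the URL, paths and host class are made up) is roughly:

    #!/bin/sh
    # Hypothetical sketch of a cron-driven fetch-and-run script.
    # Export some identifying info for the downloaded script to use.
    HOSTNAME=$(hostname)
    PRIMARY_IF=$(ip route show default | awk '{ print $5; exit }')
    PRIMARY_MAC=$(cat "/sys/class/net/$PRIMARY_IF/address")
    OS=$(. /etc/os-release && echo "$ID $VERSION_ID")
    HOST_CLASS=webserver    # site-specific host type/class
    export HOSTNAME PRIMARY_MAC OS HOST_CLASS

    # Fetch the central script and run it as root; cron calls this every 10 minutes.
    tmp=$(mktemp)
    if curl -fsS -o "$tmp" https://config.example.com/run.sh; then
        sh "$tmp"
    fi
    rm -f "$tmp"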

Comment by mirabilos
puppet or propellor here

You are going to have trouble not installing software. Even Ansible pretends not to install anything, when in fact it does deploy a bunch of Python code on the other side and assumes you have some version of Python installed as well (which is now pretty much universal in Debian, but still)...

Ansible drives me nuts: they basically built a DSL on top of YAML, with Jinja in there, and it's not quite clear to me how that is an improvement over pure Python (or why that isn't the default) or a DSL like Puppet's.

I use Puppet at $work, probably because it was the only thing available besides cfengine back then, and now that I have learned it, I don't want to bother learning something else or (worse) converting the thousands of lines of code we have running in Puppet into something else. Puppet is also pretty flexible: you can use a central server, but you can also run locally with manifests checked out of git. Some friends are running it through gpg-validated git hooks in a decentralized manner, so I know that's possible as well. You'd have to learn a DSL, which is annoying, but the DSL is not so bad, although it changes too quickly for my taste.
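
For what it's worth, running Puppet locally from a git checkout (a rough sketch; the repository URL and paths are assumptions) looks roughly like:

    # Hypothetical sketch of masterless Puppet from a git checkout.
    # Clone once, then update and apply (e.g. from cron or a git hook).
    test -d /etc/puppet-manifests || \
        git clone https://git.example.com/puppet.git /etc/puppet-manifests
    cd /etc/puppet-manifests && git pull
    # Apply the manifests locally instead of talking to a central server.
    puppet apply --modulepath=/etc/puppet-manifests/modules \
        manifests/site.pp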

Starting again, I might go with Chef, but I'm not sure I want to rely on Ruby for anything after running into problems with it in both Puppet and Redmine.

If configuration management weren't my day job, I would go with Propellor, especially given your requirements. I like the idea of not building a new DSL (or at least, building one based on a real language, as opposed to freaking YAML or a completely new language like Puppet's). And joeyh builds stuff to last, so I trust him. Of course, Haskell needs to work, but GHC seems to run pretty much everywhere Debian runs, according to the build logs, so I'd be curious to hear which platforms you are having problems with.

The problem with Propellor, of course, aside from Haskell, is that it isn't meant to be a serious project (yet?). The API is changing quickly and it's mostly a hobby project right now. But it could be fine for personal projects.

For my personal stuff, I manage around 3 hosts (a laptop and workstations at home and in the office). I just run etckeeper to keep an eye on /etc, put /home under git as well, and sync stuff around by hand, including package lists. I gave up on using Puppet for these; it was too much overhead for too few machines.
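
The moving parts of that kind of low-tech setup are roughly (a sketch, with made-up paths):

    # Hypothetical sketch of the low-tech approach described above.
    # Keep /etc under version control with etckeeper.
    apt-get install etckeeper && etckeeper init && etckeeper commit "initial commit"
    # Track /home in git as well.
    cd /home/user && git init && git add -A && git commit -m "snapshot"
    # Record the package list so it can be replayed on another host.
    dpkg --get-selections > /home/user/package-list
    # On another machine: dpkg --set-selections < package-list && apt-get dselect-upgrade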

Hoping that helps...

Comment by anarcat [id.koumbit.net]
comment 1

I don't understand, but I assume it's amusing that they fail at emails. As long as they don't fail at keeping my site address :)

Comment by Holly
Verifiable content

It seems that if the script being curled could be verified (beyond just HTTPS), that would solve most of the problems you outlined.

That is exactly what Notary appears to solve (there may be other similar tools).

With verifiable content (via cryptographic signatures), the curl command doesn't get much harder, but it becomes immensely more trustworthy, assuming you trust the author. And since you are planning on running their software, we can assume you do. The new command would be something like:

curl http://runme.sh | notary author/project | sudo bash -

This will check the curled content to verify that it is unmodified from what the author signed.
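
For comparison, the same idea with a plain GPG detached signature (just a sketch; the URLs are made up, and other similar tools exist) would be a download-verify-run sequence rather than a single pipe:

    # Hypothetical sketch using a GPG detached signature instead of notary.
    curl -fsSO https://example.com/runme.sh
    curl -fsSO https://example.com/runme.sh.asc
    # Only run the script if the signature matches a key we already trust.
    gpg --verify runme.sh.asc runme.sh && sudo sh runme.sh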

Comment by rothgar
Disagree :)

You're ignoring language ecosystems in your analysis of ease of use and user experience.

Many things are installed via e.g. 'pip install THING' or 'npm install THING', and they are much better than curl|sh precisely because they provide centralised points for dealing with platform quirks, rather than every vendor learning the same lessons. Yes, dpkg can do that... in the Debian ecosystem, etc. Language-centric package managers fill much the same spot but are cross-platform by their very nature.

On the review thing: I think you have to either trust or not trust your vendor, and work from there. If you do trust them, there are obviously other boxes to tick, such as whether you actually got their code or something someone else created. And maybe you don't trust them with root :).

Comment by lifeless
sandboxing

I agree, it's a sad state of affairs. See also: Docker.

I'm hopeful about application sandboxing as a way of solving this problem for certain types of software. GNOME are doing some good work there (although it's still quite incomplete): http://gnome.org/Projects/SandboxedApps

For system-level software the problem is much harder, but if distros can stop having to care about packaging 3rd-party apps that only interface with the host machine in well-defined ways, that's a step forward.

Comment by samthursfield