In a past life, this is an interview question that I would ask people: "We have thousands of servers, and you need to run the same command on 10 of them, what's one way you would do it?" and follow up with "What if you wanted to run the command on hundreds or thousands - what problems would you expect with this approach, and what might you do differently?"
I didn't really expect them to write code on the spot (hate that in interviews), but just to describe a possible solution; there were no wrong answers. Seeing how people came up with a quick hack for 10 and then being able to evolve their thinking for thousands was the point of the question and it could be enlightening to see.
I had a lot of people come up with things that I never even thought of, such as SSH'ing in to all 10 machines and then using a terminal feature to type into all of the windows at once.
That's a great question you used in interviews! I'm curious – how would you personally approach this situation? What would your solution be for running a command on 10 servers, and how would you scale it to thousands?
For a handful of servers I would just do a for loop like `for i in {01..10};do echo "command" | ssh -T server$i.example.com;done` because it was quick and dirty and worked fine for one-off tasks, but obviously it doesn't ensure a common state or handle errors at all (I still used a for loop like that many times a day for quick stuff like "I wonder what size a specific file is on each server" or "let me quickly grep a config file across a range of servers because I forget which one this thing is running on").
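A rough variant of that same loop that fans out in parallel and keeps per-host output (the hostnames, sample command, and file names are just illustrative; BatchMode makes ssh fail instead of prompting):

    for i in {01..10}; do
      ssh -T -o BatchMode=yes "server$i.example.com" 'uname -r' > "out.$i" 2>&1 &
    done
    wait
    grep . out.*   # eyeball which hosts answered and with what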
For more than that, I used Puppet at the time (this was a decade and a half ago); I was a contributor to Puppet and standardized on it in my company. Eventually we moved to Ansible. I sold that business, but last I heard they are still using Ansible, likely with playbooks that were ported over from my Puppet stuff.
I dunno about standard, but it's been done a bunch.
* As sibling notes, there's ansible (or chef/puppet/salt/...)
* The traditional solution was https://github.com/duncs/clusterssh which opens an xterm to each target plus a window that you type into to mirror input to all of them
* I do the same-ish in tmux with
bind-key a set-window-option synchronize-panes
and I expect screen and such have equivalent features (see the sketch after this list)
* Likewise, there are terminal emulators that let you do splits and then select a menu option to mirror input to all of them
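For the tmux route, a quick ad-hoc version of that could be scripted roughly like this (session name and hosts are illustrative):

    tmux new-session -d -s fleet 'ssh server01.example.com'
    tmux split-window -t fleet 'ssh server02.example.com'
    tmux split-window -t fleet 'ssh server03.example.com'
    tmux select-layout -t fleet tiled
    tmux set-window-option -t fleet synchronize-panes on
    tmux attach -t fleet

After that, anything typed goes to every pane; toggling synchronize-panes off returns things to normal.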
Around 10 years ago I was supporting infrastructure at a PHP shop and we needed a similar thing, but for ~3000 servers, and the library that we were using (libpssh) didn't support async SSH agent authentication, so I built this small tool in Go to make it possible to implement such tooling in any language (PHP, Python, whatever) in a simple way: https://github.com/YuriyNasretdinov/GoSSHa
Its main advantage is that it allows you to do SSH agent forwarding that actually works at scale, since it limits concurrency when talking to the SSH agent to a configurable amount (by default 128, the default connection backlog in OpenSSH's ssh-agent).
ansible my_servers -m shell -a 'fortune && reboot' -b
I know it is easy to be a hater, but I sincerely do not see a reason to use something like that over Ansible, or just pure sh, ssh, and scp. All you have to do is set up keys and the inventory. It takes 10 minutes, even if you are doing it for the first time. And you can expand it if you need to.
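For reference, that minimal setup is roughly an inventory file (group and host names here are made up):

    [my_servers]
    server01.example.com
    server02.example.com

plus SSH keys, and then ad-hoc commands like `ansible my_servers -i hosts.ini -m shell -a 'uptime'`.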
I use pssh often (not this tool, but as I understand it, it is similar).
The reasons I prefer it over Ansible are:
- takes the same syntax and options as plain SSH, just run over multiple hosts, so if you already know SSH, you already know how to use pssh; it is an extension of the command. Ansible requires study. The configuration format is trivial: just a file with one host per line (see the example below), no need to learn complex formats like Ansible's.
- doesn't require dependencies on the target machine. Ansible, as far as I know, requires a Python 3 installation on the target machine, which is not a given in all settings (e.g. embedded devices that are not strictly GNU/Linux machines; consider the many network devices that expose an SSH server, like MikroTik devices, which pssh can configure in batch), or on legacy machines stuck with an outdated Python version.
- sometimes simpler tools that just do one thing are better. pssh is like a Swiss Army knife to me: the kind of tool you reach for when you are bodging something together in a hurry, and it saves your day.
Of course, if you already use Ansible to manage your infrastructure you may as well use it to run a simple command. But if you have to run a command on some devices that were not previously set up for Ansible, and over which you may not have a lot of control (e.g. a bunch of embedded devices of some sort), pssh is a tool that can come in handy.
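For illustration, a typical pssh invocation against such a one-host-per-line file might look like this (file name and user are made up):

    pssh -h hosts.txt -l root -i 'uptime'

where -h points at the host file, -l sets the remote user, and -i prints each host's output inline.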
Ansible is the easiest configuration management tool to onboard and start using. Great documentation, large community. It is as complex as you want it to be, and its complexity scales with your infra. Of course, YMMV.
I disagree. It requires knowing Python, YAML, Jinja2, and its own set of commands. It requires careful development of playbooks. It is slow. It is complicated for non-standard cases where you don't have ready-to-use modules.
Is there a better approach? I think yes: Pyinfra - just pure Python, no additional DSLs. For configuration there is also Terraform (but with some limitations).
I know ansible or even custom shell scripts are way better and optimized for such use cases. However, I just wanted to show something I built that might be useful to someone.
My comparison is most likely unfair because I am looking at it through the distorted lens of having run all sorts of configuration management in production and at home for years. So I might be the wrong person to pass judgement on it, and may just be a hater for no good reason.
I was getting bored, this seemed like a cool project to work on outside of work, that's why. One of my colleagues found it useful for his needs, so I figured there might be other people who'd find this useful too.
How are commands that require user input handled? E.g. the sudo password in your example:
sshsync group web-servers "sudo systemctl restart nginx"
I like that you included a demo in the README, but it is too long for a gif, as I can't pause/rewind/fast-forward. Splitting it into multiple short gifs or converting it into a video (if GitHub supports them) could improve the experience.
As of now there is no way to take user input in transit, so the user is required to either have the privilege to execute the specified command or have passwordless sudo available.
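For the passwordless-sudo route, the usual pattern is a sudoers entry scoped to the one command, e.g. (user and command here are illustrative, typically dropped into /etc/sudoers.d/):

    deploy ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart nginx

and running the remote side as `sudo -n ...` so it fails immediately instead of hanging on a password prompt when the entry is missing.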
And yeah, now that you've mentioned it, multiple shorter gifs would be better.
Have you looked at what PowerShell does? Invoke-Command (and the Job stuff it meshes perfectly with via -AsJob) is really nice.
I only needed a very small fraction of what it can do to bail a client out of a problem their customer caused on several hundred computers the night before an event, but it absolutely saved the day and a lot of money.
I’m surprised no one has mentioned pdsh yet. Piped to dshbak, the output was grouped by response. I’d probably use a config management tool for anything more than simple commands now, but that tool was indispensable for managing our fleet, back when we used to actually connect to machines.
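Something along these lines, for example (the node range is illustrative):

    pdsh -w 'web[01-10]' uptime | dshbak -c

where -c makes dshbak coalesce hosts that produced identical output into one block.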
To send the same command to multiple servers, use pdsh: https://linux.die.net/man/1/pdsh
To collect all the results and show which ones are the same or different, use dshbak (i.e., "pdsh <parameters including servers>|dshbak"): https://linux.die.net/man/1/dshbak
Similar things, sometimes more convenient but less efficient for a large number of servers, are to use the konsole terminal program and link multiple window tabs together so the same typed command goes to all, and quickly view the results across the tabs; or to use tmux and send the same commands to multiple windows (possibly useful "man tmux" page terms: link-window, pipe-pane, related things to those, activity, focus, hooks, control mode).
And others that I haven't used but which also look possibly interesting for platforms where pdsh and dshbak might not be available (like OpenBSD at least):
- https://github.com/duncs/clusterssh/wiki (available on OpenBSD as a package)
- https://www.gnu.org/software/parallel/ (also available as a package on OpenBSD 7.6: named "parallel-20221122"; might relate to "pdksh")
- Also clusterit.
Very cool project! But it's quite a saturated market. Any sysadmin or similar who needs this sort of functionality has most likely already found a solution (5-20 years ago).
Personally, I'm a tmux synchronized-panes kind of guy, so that I can see each host's output immediately.
The dry-run option is nice, but you can do this easily in a normal environment without special tooling (GNU parallel, etc).
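For instance, with GNU parallel something like this prints the exact commands without running them (the host file name is illustrative; drop --dry-run to actually execute):

    parallel --dry-run ssh {} 'uptime' :::: hosts.txt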
I have made scripts to do this with filter parameters over VMs on cloud providers, which is very valuable. Maybe you can extend this to have those options, so potential users are more attracted to it?
what is the use case? why wouldn't you use something else (like ansible or puppet or something)? I do not understand why someone would do things like this.
well usually it's already in place, right? maybe at home it wouldn't be, but at work, that stuff would already be in place on the stock OS install.
and I would be afraid to run SSH commands on multiple machines at once in case one of them errored out and needed manual intervention. ansible or puppet would let me know about that stuff.
I guess I don't understand why running a command in multiple places at once is preferable to running a shell script in four places in sequence.
maybe it's me! there is approximately zero chance that running the same script on four of my machines would result in four cleanly run scripts. one or more would fail, and if more than one failed, they would each fail for a different reason.
You said Ansible and Puppet; they are not installed by default on any Linux distribution or BSD that I know of.
I personally do not prefer "run command in multiple places at once" over "run command in four places in sequence", however. I think it would just be a "random" choice in my case, and I might just write a script that does either. I do not mind as long as it is a one-time thing, but if I had to do this more than once, I would just automate it via scripts. I would probably just have it run in sequence.
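A sketch of that sequential version (script name and hosts are made up), logging each failure instead of silently ploughing on:

    for h in host1 host2 host3 host4; do
      ssh "$h" 'bash -s' < setup.sh || echo "FAILED on $h" | tee -a failures.log
    done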
I'm curious what happens with output. Is it one line per each computer you've connected to or does it get merged until it diverges, kinda like in that Rick and Morty episode with timeline splits?
Haven't set a limit on how many connections are shown; once all the commands are executing, each result (success/failure) is shown at once. So if you connect to 1000 computers, your shell will be flooded with progress bars first and then the output.
Maybe I should set a limit, or let the user set a limit on how many results are shown once the process is completed - showing m results from the start and n from the end.
Dude, your tool does so much more than just run SSH commands. I just took a quick glance at your project; I just wanted to know, does this have support for Vultr?
yeah, ansible is very nice also in that it can have multiple inventories across cloud providers for running whatever, with minimal setup, and without needing to modify your ssh host config
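As a sketch of what that looks like for one provider, an EC2 dynamic inventory is a single YAML file using the aws_ec2 plugin (the region and grouping below are illustrative):

    # inventory_aws_ec2.yml
    plugin: amazon.aws.aws_ec2
    regions:
      - us-east-1
    keyed_groups:
      - key: tags.Role
        prefix: role

and `ansible-inventory -i inventory_aws_ec2.yml --graph` shows the generated groups.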
To be completely honest, I didn't even think about Ansible when creating this (probably because I haven't used it yet); I looked at pssh and clusterssh and just decided to build one myself.
The lack of research, the AI-generated README and comments, and the "Pythonic" approach (while Ansible exists) made me laugh. I guess it's a good CS50 project, but it's not presentable at all and doesn't have real-world usage.
Hey, yeah, I admit I should've written the README myself, but I'm kinda lazy, so I let GPT handle both the README and the post. And I do know there are other tools way better than this and battle-tested, but I just built this for fun and not to compete with any of them.
I do agree with your point; sometimes it's just easier to use native tools or simple wrappers around native tools. Use whatever makes your job easier.
Stop assuming your method works across the universe of edge cases.
https://linux.die.net/man/1/pssh
For the most common cases I have it aliased to just `p`: https://github.com/Julian/dotfiles/blob/main/.config/zsh/com...
Or https://github.com/Julian/dotfiles/blob/4d36e6b17e9804a887ba...
Likewise, a basic list of servers can also have this done via "ansible -m shell -a 'echo something' <server group>".