Victor Reyes
5/26/2008 2:12:00 PM
[Note: parts of this message were removed to make it a legal post.]
Actually, ssh was my first choice and we used it for a short period until it
became impractical.
Here is what I would like to do so you have a better understanding.
BTW, I've been playing with *gserver* this weekend but I still don't get
what I need. I have problems on the receiving side: I don't get all the data
sent by the server.
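A guess at the receiving-side problem, sketched with plain TCPSocket/TCPServer from the stdlib (the names and chunk sizes here are illustrative, not your actual code): a single `recv` or `gets` only returns whatever has arrived so far, so a reply sent in several chunks gets truncated. Reading until EOF (or using a length prefix) gets everything:

```ruby
require 'socket'

# Toy server that sends its reply in several chunks, the way a real
# command's output often arrives over the wire.
server = TCPServer.new('127.0.0.1', 0)   # port 0 = pick any free port
port = server.addr[1]

server_thread = Thread.new do
  client = server.accept
  5.times { |i| client.write("chunk #{i}\n"); sleep 0.05 }
  client.close   # closing the socket signals EOF to the reader
end

reply = TCPSocket.open('127.0.0.1', port) do |sock|
  sock.read      # read *everything* until the peer closes, not just one recv
end
server_thread.join
puts reply.lines.count   # prints 5
```

The same idea applies inside a GServer `serve` method: keep reading until EOF or until you have consumed the number of bytes the peer announced up front.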
That being said, here is what we do and the trouble we ran into.
Some facts:
   1. I am a Ruby neophyte but I don't give up until I get what I need. My
   solutions are not always elegant but they do the job!
2. ssh *IS permitted.*
   3. We are fewer than 10 UNIX admins.
   4. We have over 100 AIX servers split among VLANs, each behind
   different firewalls.
   5. My second solution, a TCP server/client, worked very well. That is,
   until the security people discovered the listening port and the fact that my
   server, which was listening on EVERY server, would execute any cmd. True, the
   client version only runs as *root*, providing just a bit more security since
   you first have to log in with your ID and then *su* to *root*.
   6. *root* can only be used via *su*.
7. The solution I am looking for is to be used only by the sys admins.
   8. My *first solution* used *ssh*, as it is fully allowed by the sec
   group. Since authenticating would be impractical when executing a cmd on
   over 100 servers, we created public/private keys, which were a pain below the
   waist to distribute for everyone. Also, since in many instances we needed to
   run *root* commands, that was a real problem: we would have to
   either set up keys for root or implement *sudo*. That's why I decided to
   create my own poor man's distributed remote command processor.
So, this is what I need to do.
Create an environment where a sys admin:
   1. Log in with her userid, as we do daily, and su to root.
   2. Execute a root cmd remotely on a server or multiple servers and
   receive the reply on the local server. We use one server as the main
   server, kind of a control workstation.
   3. The communication between the main (local) server and the remote
   server(s) must be "secured" (ssh, ssl, encryption, whatever).
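For what it's worth, the three steps above can be sketched as a small fan-out runner on the control workstation: one thread per host, each shelling out to ssh and collecting the reply locally. Everything here is illustrative (host names, ssh options, the `fan_out` helper are my inventions, not an existing tool); `BatchMode=yes` just makes ssh fail fast instead of prompting when a key is missing:

```ruby
require 'open3'

# Run `cmd` on every host concurrently and return { host => reply }.
# The `runner` lambda builds the actual command line, so a test can swap
# ssh out for something local.
def fan_out(hosts, cmd, runner: ->(host) { ['ssh', '-o', 'BatchMode=yes', host, cmd] })
  threads = hosts.map do |host|
    Thread.new do
      out, err, status = Open3.capture3(*runner.call(host))
      [host, status.success? ? out : err]   # keep stderr when the cmd fails
    end
  end
  threads.map(&:value).to_h
end

# Dry run with a local echo standing in for ssh:
results = fan_out(%w[web1 web2], 'uptime',
                  runner: ->(h) { ['echo', "reply from #{h}"] })
results.each { |host, reply| puts "#{host}: #{reply}" }
```

In real use you would drop the custom `runner` so it invokes ssh, which keeps the transport "secured" without any home-grown crypto, per the advice below.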
That's in a nutshell!
All suggestions are greatly appreciated.
Thank you
Victor
On Mon, May 26, 2008 at 8:49 AM, Robert Klemme <shortcutter@googlemail.com>
wrote:
> 2008/5/24 Aaron Turner <synfinatic@gmail.com>:
> > The easiest way to add SSL to any application is to run stunnel on
> > each of your servers and have it proxy to your server listing on a
> > port on the loopback interface. That way your server doesn't even
> > have to know SSL and it's easy to debug. Whatever you do DO NOT
> > design your own crypto solution- notice the Debian guys couldn't even
> > make a small "fix" without breaking ssh horribly.
>
> Definitively not cook your own!
>
> > On a side note, there are already free solutions for this sort of
> > thing... just search freshmeat.net.
>
> Yet another alternative might be to just use ssh, i.e. replace your
> demon with sshd and execute commands directly via ssh. This also
> allows for secure file transfers (scp). dshc and dshp then become
> wrapper for a ssh call. Note that with ssh-agent you don't even have
> to enter passwords for all the servers.
>
> Kind regards
>
> robert
>
> --
> use.inject do |as, often| as.you_can - without end
>
>