`sos-collector` can be run either on a node in a cluster or on a workstation. In either case, a successful run will result in a tarball on the system that ran `sos-collector` that contains sosreports from the nodes provided to or discovered by `sos-collector`.
For our purposes, "cluster" simply means "a multi-node environment"; it does not refer to any specific technology or product.
If running on a cluster node, that node needs to be able to enumerate the other nodes that make up the cluster. For example, you can run `sos-collector` on any node in a pacemaker cluster, but you would need to run it on the manager system for an oVirt environment.
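Depending on your version of `sos-collector`, you may also be able to skip auto-detection and name the cluster type or node list yourself; check `sos-collector --help` to confirm these options exist in your build. A rough sketch:

```
# Force a specific cluster profile rather than auto-detecting it
# (option availability may vary between sos-collector versions)
sos-collector --cluster-type=pacemaker

# Or hand sos-collector an explicit list of nodes to collect from
sos-collector --nodes=node1.example.com,node2.example.com
```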
If you have SSH keys installed on the node you're running `sos-collector` from for the other nodes in the cluster, then you can just run it directly:
```
[root@ovirtm2 ~]# sos-collector
sos-collector (version 1.0)
This utility is used to collect sosreports from multiple nodes simultaneously
Please Note: sos-collector ASSUMES that SSH keys are installed on all nodes unless the --password option is provided.
Cluster type set to ovirt
Please provide the engine database password:
The following is a list of nodes to collect from:
ovirt1.example.com
ovirt2.example.com
ovirt3.example.com
ovirt4.example.com
ovirtm2.example.com
Please enter your first inital and last name: tturtle
Please enter the case id you are collecting reports for: 12345678
Begining collection of sosreports from 5 nodes, collecting a maximum of 4 concurrently
ovirtm2.example.com : Generating sosreport...
ovirt3.example.com : Generating sosreport...
ovirt4.example.com : Generating sosreport...
ovirt1.example.com : Generating sosreport...
[...]
ovirt1.example.com : Retrieving sosreport...
ovirt1.example.com : Successfully collected sosreport
Collecting additional data from master node...
Successfully captured 5 of 5 sosreports
Creating archive of sosreports...
The following archive has been created. Please provide it to your support team.
/var/tmp/sos-collector-tturtle-12345678-2018-04-27-isqed.tar.gz
```
As you can see, `sos-collector` detected that the local system was an oVirt manager and then enumerated the nodes in this environment. From there it connected to the enumerated remote nodes and began generating sosreports. A sosreport from the local node is collected as well. By default, `sos-collector` will only collect 4 sosreports at a time in an effort to not put undue stress on the cluster; this can be adjusted as necessary using the `--threads` option. As nodes finish being collected, any remaining nodes will be started.
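For example, raising the limit on a larger cluster is just a matter of passing `--threads` (a minimal sketch; the value here is purely illustrative):

```
# Collect from up to 8 nodes concurrently instead of the default 4
sos-collector --threads=8
```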
Once all nodes are either collected or detected as having failed to collect, `sos-collector` will create a tarball of the sosreports and report its location. The tarball that `sos-collector` generates is what you should give to your support team or representative.
If you'd prefer to run `sos-collector` on your local workstation, you can do so by defining a 'master' node. The 'master' in this case is just a node that can enumerate the other nodes, not necessarily a controller. Using the same environment from the previous example, you could run `sos-collector` like so on a local workstation:
```
sos-collector --master=ovirtm2.example.com
```
If you don't have SSH keys available on the cluster nodes, you can specify a password to use for the SSH sessions with the `--password` option, in which case you will be prompted for an SSH password (defaulting to the root user):
```
[turtle@terra]$ sos-collector --master=ovirtm2.example.com --password
sos-collector (version 1.0)
This utility is used to collect sosreports from multiple nodes simultaneously
User requested password authentication.
Provide the SSH password for user root:
```
Note that the password defined here is assumed to be the same on all nodes. If you have different passwords for each node, deploy SSH keys and use the default behavior.
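As a minimal sketch of that key deployment, the standard `ssh-copy-id` utility can push your public key to each node (reusing the hostnames from the earlier example):

```
# Deploy the local SSH public key to the root account on each cluster node
for node in ovirt1 ovirt2 ovirt3 ovirt4 ovirtm2; do
    ssh-copy-id root@${node}.example.com
done
```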
Can `sos-collector` be run as a non-root user? Yes and no. `sos-collector` itself can be run as a non-root user, but `sosreport` does at this point require root privileges. Because of this, the default user used to open the SSH sessions to the nodes is root, so you'll either need the root password or have your SSH keys tied to the root account.
Otherwise, you can change the SSH user using the `--ssh-user` option like so:
```
$ sos-collector --master=ovirtm2.example.com --password --ssh-user=joe
sos-collector (version 1.0)
This utility is used to collect sosreports from multiple nodes simultaneously
A non-root user has been provided. Provide sudo password for joe on remote nodes:
User requested password authentication.
Provide the SSH password for user joe:
```
Notice two things have happened:

- You are prompted for a `sudo` password for `joe`.
- In this case, because `--password` is also specified, you are prompted for the SSH password for the `joe` user. If you do not specify `--password`, you will only be prompted for the sudo password, and the expectation is that you have SSH keys deployed for the `joe` user on the cluster nodes (see the example after this list).
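For instance, once keys are deployed for `joe`, the same collection can be run without `--password`, and only the sudo prompt will appear (hosts reused from the earlier examples):

```
# SSH keys are deployed for 'joe', so only the sudo password is requested
sos-collector --master=ovirtm2.example.com --ssh-user=joe
```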
If you cannot open SSH connections as root and do not have `sudo` access, but do have the root password on the remote machines, use `--become` along with `--ssh-user`:
```
$ ./sos-collector --master=ovirtm2.example.com --password --ssh-user=joe --become
sos-collector (version 1.0)
This utility is used to collect sosreports from multiple nodes simultaneously
User joe will attempt to become root. Provide root password:
User requested password authentication.
Provide the SSH password for user joe:
```
You're first prompted for the root password and then for the SSH password for `joe`. Again, in this example, because `--password` was specified you are also prompted for the password for `joe`; if `--password` had been left off, you would only be prompted for the root password.
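As a sketch of that last case, with keys deployed for `joe` and escalation done via the root password, the invocation would simply drop `--password` (hosts reused from the earlier examples):

```
# Keys are deployed for 'joe'; only the root password is prompted for
sos-collector --master=ovirtm2.example.com --ssh-user=joe --become
```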