Spacewalk cluster with self-signed certs.

Spacewalk has a lot of options. There are a lot of good docs out there, but the default docs are somewhat hard to follow. This post is intended to be a framework to create a self-signed CA (or use a third-party CA and skip the self-signed portion), and then apply the root CA cert to the entire cluster, simplifying configuration. If you go self-signed, you're definitely going to want to create the CA certs first.

For the purpose of this post, we’re making a master and slave configuration with proxies.
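
The CA-generation commands didn't survive in this copy; here's a minimal sketch using Spacewalk's bundled rhn-ssl-tool, assuming the default /root/ssl-build working directory (all the org values are placeholders):

    rhn-ssl-tool --gen-ca --password=SECRET \
        --dir=/root/ssl-build \
        --set-country=US --set-state=MO --set-city=KC \
        --set-org="Example Org" --set-org-unit="Ops" \
        --set-email=root@server.com

This drops RHN-ORG-PRIVATE-SSL-KEY (the signing key) and RHN-ORG-TRUSTED-SSL-CERT (the public CA cert) into /root/ssl-build.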

Wewt. We have our signing key. Time to sign some certs. For this post we’re going to have 7 servers:
master.server.com – the Spacewalk master server.
a.slave.server.com
b.slave.server.com
c.slave.server.com – These are the Spacewalk portal/slave servers. These will pull packages and channels from the master server.
proxy.a.slave.server.com
proxy.b.slave.server.com
proxy.c.slave.server.com – These are the Spacewalk proxies. Each sits in front of a Spacewalk slave and caches packages.

A quick note on how this configuration works. The master server pulls in packages and assigns them to channels (repos). The slave servers sync content with the master. Errata are assigned to the slaves and tied to channels and to the packages in those channels. You can run clients directly against the slaves or the master, but they can be tipped over under heavy load. The proxies use Squid to cache packages, offloading much of that work, but are otherwise just relays.

Speaking of relays, there are two basic forms of client management: OSAD and RHN. OSAD keeps a persistent session open from each client to the Spacewalk server to allow for pushes. It's very handy, but it can be a pain to maintain with a large number of clients. RHN is the other method: each client checks in to the proxy/slave/master on an interval (60 minutes in this setup) and sees if there are commands queued.
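
For reference, that check-in interval lives in rhnsd's config on each EL client; a quick sketch (the 60-minute value is this cluster's choice, not a recommendation):

    # /etc/sysconfig/rhn/rhnsd
    # Minutes between check-ins with the Spacewalk server.
    INTERVAL=60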

Back to the build. We’re going to make some client certs, then sign them.
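
The per-host generation and signing was presumably one rhn-ssl-tool call per server; a sketch, reusing the CA built above:

    # Run once per hostname, changing --set-hostname each time.
    rhn-ssl-tool --gen-server --password=SECRET \
        --dir=/root/ssl-build \
        --set-org="Example Org" --set-org-unit="Ops" \
        --set-email=root@server.com \
        --set-hostname=master.server.com

Each run signs the new server cert with the CA key and drops the results in a per-host subdirectory.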

Now we have SSL certs for each of the servers in this cluster. Copy them over to each server. The structure of the directories on each server will need to look like this:
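
The original listing is gone; based on how rhn-ssl-tool lays out its build directory, it should look roughly like this (one subdirectory per host; adjust to however your build directory is actually named):

    /root/ssl-build/
        RHN-ORG-TRUSTED-SSL-CERT        # public CA cert
        RHN-ORG-PRIVATE-SSL-KEY         # CA signing key (keep on the master only)
        master.server.com/
            server.crt
            server.csr
            server.key
        a.slave.server.com/
            ...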

Getting the certs laid out this way is what the Spacewalk application expects. Let's create the RPM for the CA, and one for each server:
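
A sketch of the RPM builds, again with rhn-ssl-tool (--rpm-only packages the certs generated earlier without regenerating them):

    # CA cert RPM, installed on every box:
    rhn-ssl-tool --gen-ca --rpm-only --dir=/root/ssl-build

    # Per-host web server key/cert RPM:
    rhn-ssl-tool --gen-server --rpm-only --dir=/root/ssl-build \
        --set-hostname=proxy.c.slave.server.com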

Substitute each of the hostnames listed previously for proxy.c.slave; there will be 7 RPMs in total. Validate that each cert matches the CA:
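
openssl can do the validation; a sketch that should print OK for each of the 7 certs (again, adjust the per-host paths to your build directory):

    for host in master.server.com {a,b,c}.slave.server.com proxy.{a,b,c}.slave.server.com; do
        openssl verify -CAfile /root/ssl-build/RHN-ORG-TRUSTED-SSL-CERT \
            /root/ssl-build/$host/server.crt
    done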

If there are any problems, recreate or re-sign the certs.

Now that we have the actual SSL certs, let's start applying them. Spacewalk's tools have created RPMs for the CA (/root/RHN-ORG-TRUSTED-SSL-CERT) and for each server's certs.
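
Applying them is mostly rpm installs; a sketch, assuming the RPMs built above have been copied to each box (the CA RPM name comes from rhn-ssl-tool's defaults):

    # On every server in the cluster:
    rpm -Uvh rhn-org-trusted-ssl-cert-1.0-1.noarch.rpm

    # On each Spacewalk server, publish the CA cert for clients to fetch:
    cp /root/ssl-build/RHN-ORG-TRUSTED-SSL-CERT /var/www/html/pub/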

There's an additional step on the slaves and the master (a sketch follows the list):

master
a.slave
b.slave
c.slave
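
Presumably that step installs each host's web-server key-pair RPM and bounces the stack; a sketch for one host (the RPM name varies per hostname):

    # e.g. on master.server.com:
    rpm -Uvh rhn-org-httpd-ssl-key-pair-master.server.com-1.0-1.noarch.rpm
    spacewalk-service restart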

On the proxies:
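
The proxy equivalent would be the proxy's own key-pair RPM plus a restart of the proxy services; a sketch:

    # e.g. on proxy.a.slave.server.com:
    rpm -Uvh rhn-org-httpd-ssl-key-pair-proxy.a.slave.server.com-1.0-1.noarch.rpm
    rhn-proxy restart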

There's some basic management needed next, and there are good guides out there for that stage of things. At a minimum, you need to go into the slave servers, add the proxies, grant them access to any channels you want, and make sure they have a provisioning entitlement.

On each client that will connect to these servers:
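
The registration block is missing; a sketch, assuming an activation key was created on the slave (1-examplekey is a placeholder):

    # Trust the cluster CA:
    rpm -Uvh http://proxy.a.slave.server.com/pub/rhn-org-trusted-ssl-cert-1.0-1.noarch.rpm

    # Register against the nearest proxy:
    rhnreg_ks --serverUrl=https://proxy.a.slave.server.com/XMLRPC \
        --sslCACert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT \
        --activationkey=1-examplekey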

This took a lot of piecing things together. It's not really complex, and there are some good guides out there, but it's hard to get everything working end to end.

Media de-duplication script.

I enjoy using Plex Media Server, to the point where I have backed up all of my media, tossed the cases, and put everything into folders. A big plug for Plex and MakeMKV.

There are filename formats that you should use for TV shows:

https://support.plex.tv/hc/en-us/articles/200220687-Naming-Series-Season-Based-TV-Shows

That's nice, but once in a while you might replace content (Blu-ray instead of DVD quality, etc.). It can be a pain to re-rip all that content, and if you use automated tools to pull content, sort it, and move it around, sometimes you'll wind up with multiple versions of the same file, possibly with different filenames, formats, etc.

That was the case for me, and I had changed filename formats due to the conversion from XBMC to Plex. That's the biggest reason for this (simple) script:
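
The script itself didn't make it into this copy; the real one is linked below, but a minimal sketch of the approach looks like this (assumes bash 4 for the associative array, and may differ from the original in the details):

    #!/bin/bash
    # Walk the library, key each file on its sXXeXX tag, and prompt
    # before removing any later file that reuses a tag already seen.
    declare -A seen

    while IFS= read -r -d '' file; do
        # Pull the s01e01-style episode tag out of the filename.
        tag=$(basename "$file" | grep -oiE 's[0-9]{2}e[0-9]{2}' | tr 'A-Z' 'a-z')
        [ -z "$tag" ] && continue
        if [ -n "${seen[$tag]}" ]; then
            echo "Possible duplicate of: ${seen[$tag]}"
            rm -i "$file"   # -i prompts before every removal
        else
            seen[$tag]="$file"
        fi
    done < <(find . -type f -print0)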

While most of the filename was different, what wasn't was the sXXeXX part (s01e01, s01e02, etc.). With some regex, that's enough to compare and remove duplicates. Note: this will prompt for every match, even if you've removed the first instance.

From: https://github.com/tuxbiker/dupe_cleanup

Auto tar find results.

You're a standard sysadmin. You get an alert that your filesystem is filling up (usually /var). You don't need to spend a lot of time cleaning it up, so like any good sysadmin you write a script. Here are a couple of quick ones.
First, grab file set a, but exclude set b. (Maybe you want to archive log_files.may.txt, but not log_files.june.txt.tgz.)
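
Something like this would do it (the names come from the example above; the ! -name test is what keeps the already-compressed set out of the run):

    find /var/log -type f -name 'log_files.*.txt' ! -name '*.tgz' -print0 \
        | xargs -0 -I{} tar -czf {}.tgz {}
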
A little simpler: grab anything that matches the wildcard and compress it.
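
Same shape, minus the exclusion:

    find /var/log -type f -name 'secure-2015*' -print0 \
        | xargs -0 -I{} tar -czf {}.tgz {}
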
Assuming the following:
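
The listing didn't survive; assume /var/log holds rotated logs along these lines (the middle date is made up):

    $ ls /var/log/secure-2015*
    /var/log/secure-20150301  /var/log/secure-20150308  /var/log/secure-20150315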

 

You want to keep secure-20150315 but compress everything else.
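
Reconstructed from the explanation below, the one-liner would be something like:

    find /var/log -name 'secure-20150315' -prune -o -type f -name 'secure-2015*' -print0 \
        | xargs -0 -I{} tar -czf {}.tgz {}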


What does this do?

Find files in /var/log with the name secure-20150315 and exclude them from processing (-prune). Then (-o) search for anything not excluded with the name secure-2015*. The -print0 and xargs options are really the neat thing here. Normally xargs would lump all of find's results into one command line; with -I{} it iterates through each file individually instead. {} is the placeholder, so {}.tgz will create a file called 'secure-20150301.tgz' containing secure-20150301, and so on down the list.

This is super useful if you're trying to condense directories. All you have to change? -type f to instead be -type d.

 

Bash mass change permissions using find

A quick one-liner to recursively change all files in a directory to a set permission:
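
A sketch (the path and the 644 mode are placeholders):

    find /path/to/dir -type f -exec chmod 644 {} \;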

You can substitute chown if you want to set ownership:
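
Same pattern, swapping in chown (user:group is a placeholder):

    find /path/to/dir -type f -exec chown user:group {} \;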

This comes in handy when you want to give a group the ability to navigate through directories without blindly handing out write/execute permissions on everything.
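
For that case, target directories only and add just the group execute (traversal) bit; a sketch:

    find /path/to/dir -type d -exec chmod g+x {} \;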

If you want to avoid recursing all the way down, use -maxdepth x, where x is the number of directory levels to descend:
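
For example, with the same placeholder path:

    find /path/to/dir -maxdepth 1 -type f -exec chmod 644 {} \;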

This will only modify files in the path directory.

PuTTY reverse forwarding command-line options.
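
The command itself was lost; based on the description below (local port 12000 tunneled to port 80 on the remote box), the invocation would look something like this, using PuTTY's -L forwarding flag (plink.exe accepts the same options for scripted use):

    putty.exe -ssh -L 12000:localhost:80 user@remote.server.com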

Port 80 is the port that you're connecting to on the remote server. Port 12000 is the port you connect to locally. Say this is an HTTP connection: the path to connecting is simply pulling up localhost:12000 in any web browser.

Bash renaming utility

Ubuntu/Debian ships a rename utility built on Perl with regex support. For distros that do not, this one-liner is handy:
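
A sketch using a bash loop and sed (pattern and replacement are placeholders for your regex):

    for f in *pattern*; do
        new=$(echo "$f" | sed 's/pattern/replacement/')
        [ "$f" != "$new" ] && mv -- "$f" "$new"
    done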

As with anything Linux, there are many ways to accomplish this.

Replacing Google Fiber’s Network Box

Google Fiber is amazing. Their network box does a pretty decent job, but it lacks a lot of features for more advanced users. Their router handles active connections well, and considering the bandwidth the average person uses, it's fine. If, however, you wish to do more with it (DMZ, bridging, better port forwarding, or even just using your own router), it's not currently possible with their device.

There are some projects in play (pfSense, etc.) that allow you to connect your own hardware. Google doesn't discourage you from replacing their network box and even gives you some basic information on how to proceed:

https://support.google.com/fiber/answer/3333210?hl=en

The takeaway there is that you need to put a VLAN on a port and set QoS bits on egress traffic. Once that is done, you can hook any Linux machine directly up to the fiber jack.

I'm using eth3 for WAN in this example. Replace eth3 with whatever your machine's WAN port is.

You need to VLAN the interface that is connected to the fiber jack. If this is a single machine, this is easy; it's typically eth0.

On EL systems:
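
The commands were lost in the copy; presumably the first step loads the 802.1Q kernel module so tagged interfaces work (the persistence file follows the EL6 convention):

    modprobe 8021q
    echo 'modprobe 8021q' > /etc/sysconfig/modules/8021q.modules
    chmod +x /etc/sysconfig/modules/8021q.modules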

Now create the VLAN:
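
A sketch with vconfig; VLAN ID 2 matches the eth3.2 device referenced below:

    vconfig add eth3 2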

Create the VLAN device:
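
And a matching ifcfg file so the device survives reboots (a sketch; the fiber jack hands out the address via DHCP):

    # /etc/sysconfig/network-scripts/ifcfg-eth3.2
    DEVICE=eth3.2
    VLAN=yes
    ONBOOT=yes
    BOOTPROTO=dhcp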

Finally add the route:
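
The original route is gone; for Google Fiber TV the usual candidate is a multicast route over the tagged interface. Treat this as an assumption and adjust to your setup:

    ip route add 224.0.0.0/4 dev eth3.2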

The route here is necessary for TV services. If you just have internet, you won't need it.
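
A plain network restart should do it on EL:

    service network restart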

This gets all of your changes active. You should now have a new address on eth3.2.

Finally, set the QoS/CoS bit on egress traffic. Until this is done you will max out at roughly 10 Mb/s upload (I was getting 500 Mb/s down even so).
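
The lost command presumably set the 802.1p egress priority map on the VLAN device. With vconfig, mapping the default skb priority (0) to CoS 3 looks like this (the priority value is an assumption based on community write-ups, not something preserved from the original post):

    vconfig set_egress_map eth3.2 0 3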

Special thanks to my friends Josh Bergland and John Narron who helped me with some packet diving to get everything working!