Posted: April 6th, 2017 | Author: jordan | Filed under: Linux, Work | No Comments »
Recently I was lucky enough to be a crew member on a sailboat making passage through the Caribbean. The Captain of the vessel, who lived aboard, was telling me about data storage and what a difficult equation it is. Sailboats have very little power available when they're underway: most don't run their engine, which is the only source for charging the limited batteries kept onboard. He was thinking about picking up a Drobo Mini and using SSDs to reduce the draw on his system, however that solution is DAS-based, so he couldn't get at the data without plugging a computer directly into the box, which means yet more draw on the electrical system.
After a quick think and a look around the Internet I decided that the best way to address this issue would be to use a Raspberry Pi 3, a four-port USB hub, multi-SD card readers, and mdadm, with smb, nfs, and upnp. I'm not going to go into the nitty gritty of how to set up a Raspberry Pi as there are many tutorials available online already. However I will touch on some performance metrics that I was able to pull.
It'll be physically small with very little power draw: each microSD card draws between 66–330 mW during data transfer and about 0.2 mA at idle. Each bank should draw less than 1 mA at idle and 1.2 W during transfer, and each bank should yield close to 800GB. All together I've calculated 3.2TB of data storage at 6-8W. Pretty dope hey?
The issue is cost. Prices in CAD.

Base system:
Raspberry Pi $60
case and parts $20
USB hub $26
total: $106 plus tax / shipping

Each 800GB bank:
four-card reader $20
200GB microSD card $91 (× 4 = $364)
bank total: $384 plus tax / shipping
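The capacity and per-bank figures above check out; here's a quick shell sanity check of the arithmetic:

```shell
# 4 banks, each a 4-slot reader filled with 200GB cards at $91 apiece
banks=4; cards_per_bank=4; card_gb=200; card_cost=91; reader_cost=20
echo "capacity: $(( banks * cards_per_bank * card_gb )) GB"         # capacity: 3200 GB
echo "per bank: \$$(( reader_cost + cards_per_bank * card_cost ))"  # per bank: $384
```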
Performance will max out at around 40MB/sec, which isn't great, but we're not looking for performance; we're looking for efficiency.
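For the curious, the array side of this would look something like the sketch below. This is untested on real hardware: the device names, filesystem, mount point, and export options are all assumptions (the readers typically enumerate as /dev/sd* on the Pi), not a recipe I've run on the boat.

```shell
# Hedged sketch: stripe the four banks into one md device, then share it out.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/storage && sudo mount /dev/md0 /mnt/storage
# persist the array definition and export the mount over NFS
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
echo '/mnt/storage *(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
```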
I welcome comments on this plan below.
My next step to this plan would be to get the Pi to be a wireless access point as well.
Posted: February 29th, 2016 | Author: jordan | Filed under: Uncategorized | No Comments »
This is a follow-up post to my previous article about how to set up RADIUS on Server 5.x.
In my Open Directory user list I have a user called scanner with a password of, you guessed it, scanner. Now I know this isn't the most secure thing ever, but the user only has very limited access. Recently I wanted to implement RADIUS so that the VPN concentrator could authenticate against Open Directory, but I certainly don't want the scanner user to be able to authenticate. Previously I would fire up Workgroup Manager and build a service access control list (SACL). However, with WGM now gone I have to do it on the command line. After some hacking I figured it out.
First, you’ll need to make a group in OD called VPN and put the users you would like to have VPN access in it. Then whip open a terminal and get the GUID of that group.
dscl localhost read /LDAPv3/127.0.0.1/Groups/VPN
You're looking for the "GeneratedUID" value; record it somewhere. Next, edit the following script and paste your GeneratedUID into the line where it says NestedGroups.
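If you'd rather not eyeball the output, you can ask dscl for just that attribute and strip the label with awk (the field split assumes dscl's usual "GeneratedUID: <uuid>" single-line output):

```shell
dscl localhost read /LDAPv3/127.0.0.1/Groups/VPN GeneratedUID | awk '{print $2}'
```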
dscl localhost create /Local/Default/Groups/com.apple.access_radius
dscl localhost create /Local/Default/Groups/com.apple.access_radius RealName com.apple.access_radius
dscl localhost create /Local/Default/Groups/com.apple.access_radius passwd "*"
dscl localhost create /Local/Default/Groups/com.apple.access_radius gid 260
dscl localhost create /Local/Default/Groups/com.apple.access_radius NestedGroups PASTE_GUID_HERE
Then restart RADIUS (serveradmin stop radius, then serveradmin start radius) and you should be good to go!
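To sanity-check the SACL, dsmemberutil can confirm how membership resolves for a given user. The user names here are from the example above; someVpnUser is a hypothetical member of the VPN group:

```shell
# Prints whether each user resolves as a member of the RADIUS SACL group;
# scanner should not be a member, anyone nested via VPN should be.
dsmemberutil checkmembership -U scanner -G com.apple.access_radius
dsmemberutil checkmembership -U someVpnUser -G com.apple.access_radius
```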
Posted: February 15th, 2016 | Author: jordan | Filed under: SSL | No Comments »
Now that you have your own Certificate Authority set up from my previous article, you'll want a fast way of whipping up new certs. Never fear, for making certs is here. Drop the following code into an executable file and place it in the myCA folder. You'll be able to make certificates on a whim by issuing a command like:
make-cert.sh hostname.lan.domain.com 3650 01
#!/bin/bash
if [[ $# -lt 3 ]]; then
    echo "$0: requires an FQDN for the certificate, the number of days it is valid, and a two digit serial. Please document these."
    echo "example: $0 hostname.lan.domain.com 3650 01"
    echo "place this script into the myCA root folder"
    exit 1
fi

HOSTNAME=$1
DAYS=$2
SERIAL=$3

mkdir -p "$HOSTNAME"
openssl genrsa -des3 -out $HOSTNAME/$HOSTNAME.key 4096
openssl req -new -key $HOSTNAME/$HOSTNAME.key -out $HOSTNAME/$HOSTNAME.csr
openssl x509 -req -days $DAYS -in $HOSTNAME/$HOSTNAME.csr -CA cert/cert.pem -CAkey key/ca.key.pem -set_serial $SERIAL -out $HOSTNAME/$HOSTNAME.crt
openssl rsa -in $HOSTNAME/$HOSTNAME.key -out $HOSTNAME/$HOSTNAME.key.insecure
mv $HOSTNAME/$HOSTNAME.key $HOSTNAME/$HOSTNAME.key.secure
mv $HOSTNAME/$HOSTNAME.key.insecure $HOSTNAME/$HOSTNAME.key
Posted: February 15th, 2016 | Author: jordan | Filed under: Open Directory, RADIUS, SSL | No Comments »
Quite simple to set up. First, paste in the following commands.
radiusconfig -setconfig auth yes
radiusconfig -setconfig auth_badpass yes
Now install an SSL cert/key pair for your host. The built-in ones are found in /etc/certificates; or, if you followed my previous article about becoming a certificate authority, you already have certs on hand.
radiusconfig -installcerts /path/to/key /path/to/cert
Next, add some clients
radiusconfig -addclient other
Then start the radius server
serveradmin start radius
When I did this recently I didn't have a way to test the server, so I installed the FreeRADIUS server via brew.
brew install freeradius-server
And then tested the server using the radtest binary that ships with FreeRADIUS.
The syntax of the command is as follows:
radtest username password radius-server[:port] nas-port-number secret
Here’s an example:
radtest username password 192.168.1.1 10 secret
An Access-Accept is a passing grade!
Posted: February 11th, 2016 | Author: jordan | Filed under: Mac OS X, Mac OS X Server, SSL | 3 Comments »
I use Open Directory a lot; I can't think of many days when I don't use it at least once in some way, whether direct or indirect. It's not the best directory system out there. Hell, it's not even very good, but it's what the fine people at Apple have supplied us with and it's what I use. I have long had a pet peeve with the way OD is built if you just run through the setup wizard: the certificate expires in one year. Nine times out of ten when I encounter an OD server in the wild the certs are a mess. They're either expired or about to expire and all the services that depend on them are freaking out. The clients are constantly being prompted to accept an invalid cert and OD fail-over tends to stop working. My solution has been to build a certificate authority for all my OD installs and to mint my own certs that are valid for ten years. That way I won't have to worry about them expiring, and let's be honest, there's no Mac server on the planet that's going to last ten years lol (sense the cynicism yet?)
Create the Root Key
First we're going to hop into a terminal on any Mac OS X box, navigate to somewhere safe in the file system, and build the master key. Make sure the password you use for this is kept secret and safe.
mkdir -p myCA/cert myCA/key
cd myCA
openssl genrsa -aes256 -out key/ca.key.pem 4096
chmod 400 key/ca.key.pem
The openssl req command in the next step is going to ask you a bunch of questions. Answer them as you see fit, and for the common name put the organization that you're building this certificate authority for. Do not put an actual domain. Record what you wrote for Common Name because we're going to need it later; whitespace is allowed. Oh, and one more pro tip: make sure the value of the certificate common name is unique and can't be half-matched. For example I made a root cert called "Client Name", but when it came time to deploy the certificate via munki, the check install script found a cert called "Client Name Open Directory", and thus the command matched and wouldn't deploy the new root cert.
Create the Root Cert
openssl req -key key/ca.key.pem -new -x509 -days 3650 -sha256 -extensions v3_ca -out cert/cert.pem
chmod 444 cert/cert.pem
Now we have two files.
cert/cert.pem – This is your CA’s certificate and can be publicly available and of course world readable. You will need to load this certificate into all the clients in your network.
key/ca.key.pem – This is your CA’s private key. Although it is protected with a passphrase you should restrict access to it, so that only root can read it.
Create the First Server key/cert combo
We can now start creating SSL certificates for our various servers and services. Create a directory named after the hostname of the computer or service you are creating the certificate for.
Create the first key
openssl genrsa -aes256 -out hostname/hostname.domainname.key 4096
When answering the questions for the signing request below, make sure you type the FQDN of the server or service you want to secure into Common Name.
Create the server cert
openssl req -new -key hostname/hostname.domainname.key -out hostname/hostname.domainname.csr
Sign the cert
openssl x509 -req -days 3650 -in hostname/hostname.domainname.csr -CA cert/cert.pem -CAkey key/ca.key.pem -set_serial 01 -out hostname/hostname.domainname.crt
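If you want to sanity-check the whole key, CSR, and signing flow end to end, here's a non-interactive version of the steps above. The -subj flags skip the question prompts, and the throwaway names, passphrase-free 2048-bit keys, and temp directory are purely for illustration:

```shell
# Hedged sketch: same key -> CSR -> signed cert flow, unattended.
cd "$(mktemp -d)"
openssl genrsa -out ca.key 2048
openssl req -key ca.key -new -x509 -days 3650 -sha256 \
    -subj "/CN=Demo Root CA" -out ca.pem
openssl genrsa -out host.key 2048
openssl req -new -key host.key -subj "/CN=host.lan.example.com" -out host.csr
openssl x509 -req -days 3650 -in host.csr -CA ca.pem -CAkey ca.key \
    -set_serial 01 -out host.crt
openssl verify -CAfile ca.pem host.crt
# prints: host.crt: OK
```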
Now before we continue I have to drill into you that it's imperative that you document these certs as you make them: which hosts they're deployed to and what the serial number is. It will help you in the long run.
Finally, we're going to make a passwordless version of the server key; this is the key that we'll ultimately use on our server. We need a passwordless key so that the Mac OS X services do not need human intervention when using the certs. Otherwise you'll have to type the cert password in every time you restart a service.
openssl rsa -in hostname/hostname.domainname.key -out hostname/hostname.domainname.key.insecure
mv hostname/hostname.domainname.key hostname/hostname.domainname.key.secure
mv hostname/hostname.domainname.key.insecure hostname/hostname.domainname.key
Import the CA and Cert
Copy the myCA/cert/cert.pem, hostname/hostname.domainname.key, and hostname/hostname.domainname.crt files to your Mac OS X server. Double click on cert.pem; this should open the file with Keychain Access, which will ask you which keychain to import the cert into. Select the system keychain, then double click on the entry in Keychain Access and set the trust setting to Always Trust.
Next open Server.app and click on Certificates. Click on the gear menu and select Import Certificate Identity. Drag in the hostname/hostname.domainname.key and hostname/hostname.domainname.crt files; when you do this you should see that the files are signed by the organization that you entered in step one. Finally select the cert from the list of certs in Server.app and give it time to switch over.
Finally fire up the wiki service in Server.app, whip open your browser of choice, and connect to the FQDN that you created the SSL cert for. Remember, if your cert common name doesn't match the way you address the wiki in the URL bar you will get a hostname mismatch error. The name you typed into the certificate's common name MUST match how you connect to the wiki.
Root Cert Deployment
So now maybe you want to use this cert with some of your clients, right? You can do this in a multitude of ways, such as:
Copy the file to /tmp across your network and run the following command as root
security add-trusted-cert -d -r trustRoot -k "/Library/Keychains/System.keychain" "/tmp/cert.pem"; srm "/tmp/cert.pem"
Or if you're cool you'll use some sort of package deployment system. I use munki because it's not backed by some money hungry corporation. Here's looking at you Cohen. 😉 To do this I made a new package with Composer, dropped my root cert into /tmp, and then finished the Composer wizard. Import this package into munki using munkiimport and then drop the aforementioned command into the post-install script.
For extra bonus points it would be cool to check whether the root cert is installed before we go pushing this package to all the workstations in the network. To do this I added a little check install script to munki. Note that you have to change the -c flag to match whatever you wrote for the root cert common name way back in step one.
security find-certificate -c "Root Cert Common Name" /Library/Keychains/System.keychain
if [ $? != 0 ]; then
    exit 0
else
    exit 1
fi
You’ll note that the exit codes are reversed here, it’s because Munki will only install if the check install script exits on 0 which is how our security check command will exit if it finds the cert installed. So we flip the exit code to make munki do our bidding.
At this point you should be feeling like a rock star for the following reasons:
- You haven’t given Go Daddy any money
- You’ve successfully built and deployed your own cert authority
- You won't have to worry about onboarding new machines with the cert because you've got it in munki
Posted: February 8th, 2016 | Author: jordan | Filed under: DNS, Mac OS X, Mac OS X Server | No Comments »
I was faced with a DNS migration from Snow Leopard Server to Server.app 5.x. There were only 9 zones but there were hundreds of records, and Apple provides zero tools to make this migration easy. But I found a hack. I'll say right now that I simply found that this worked, and YMMV.
First, on your Snow Leopard Server box do an
ls /var/named/zones

and make a primary zone in Server.app 5.x for every file listed in this directory. Then tarball up all these files and copy them over to your Server.app 5.x machine. One by one, copy the zone files from this tarball into /Library/Server/named, matching the names as you go with some tab-auto-complete action.
For example, if the zone files on your Snow Leopard server are:

db.1.5.10.in-addr.arpa.zone.apple
db.lan.clientname.com.zone.apple
db.mgmt.clientname.com.zone.apple
db.remote.clientname.com.zone.apple
db.backup.clientname.com.zone.apple

Then you would issue the following commands:
sudo cp db.1.5.10.in-addr.arpa.zone.apple /Library/Server/named/db.1.5.10.in-addr.arpa
sudo cp db.lan.clientname.com.zone.apple /Library/Server/named/db.lan.clientname.com
sudo cp db.mgmt.clientname.com.zone.apple /Library/Server/named/db.mgmt.clientname.com
sudo cp db.remote.clientname.com.zone.apple /Library/Server/named/db.remote.clientname.com
sudo cp db.backup.clientname.com.zone.apple /Library/Server/named/db.backup.clientname.com
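The per-file copies above can be collapsed into one loop that strips the ".zone.apple" suffix with shell parameter expansion (run from the directory where you unpacked the tarball; the destination path is the one from the article):

```shell
for f in db.*.zone.apple; do
    sudo cp "$f" "/Library/Server/named/${f%.zone.apple}"
done
```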
Posted: February 2nd, 2016 | Author: jordan | Filed under: Mac OS X, Mac OS X Server, Mountain Lion, postgres, Wiki | No Comments »
Recently I received a panicked phone call from a fellow sysadmin who was in a real jam. He had a customer who was dumping all their knowledge into Apple's Wiki system running on top of Mountain Lion and Server 2.2.5. The storage system in the mini failed and they had to recover from backup; however, the backup was set up using Carbon Copy Cloner, and as we all know you cannot rely on a file-based backup system to back up a running postgres database.
After the data was restored the machine did boot but all the postgres services would not start, including the wiki. After reviewing the logs for quite some time I found some entries of
pgstat wait timeout

and then no log entries for about a day. I assumed that this was our hard drive failure window. Then two days later the log started producing tons of postgres crash statements, launchctl statements, and this little nugget:

Jan 19th 13:29 database system was interrupted

This was all the information I needed. From what I can tell, between the time that Carbon Copy Cloner calculated changes and the time that it copied the data, some minute things changed within the database, so CCC didn't get a proper clone. It appears this error occurs when the database engine no longer knows where to start writing data back into the database. Basically, the counters were broken and had to be reset. Luckily postgres ships a tool for exactly this, called pg_resetxlog.
The command takes these switches:
-x XID set next transaction ID
-m XID set next multitransaction ID
-o OID set next OID
-l TLI,FILE,SEG force minimum WAL starting location for new transaction log
Now the Apple Wiki postgres data is held within /Library/Server/PostgreSQL\ For\ Server\ Services/Data, which is an important detail to hold onto. Within this directory are all the bits of info you'll need to run the following calculations. You'll also need to convert between decimal and hexadecimal as you go.
A safe value for the next transaction ID (-x) can be determined by looking for the numerically largest file name in the directory
pg_clog under the aforementioned postgres data directory, adding one, and then multiplying by 1048576. Note that the file names are in hexadecimal. It is usually easiest to specify the switch value in hexadecimal too. For example, if 0011 is the largest entry in pg_clog, -x 0x1200000 will work (five trailing zeroes provide the proper multiplier).
A safe value for the next multitransaction ID (-m) can be determined by looking for the numerically largest file name in the directory
pg_multixact/offsets under the data directory, adding one, and then multiplying by 65536. As above, the file names are in hexadecimal, so the easiest way to do this is to specify the switch value in hexadecimal and add four zeroes.
A safe value for the next multitransaction offset (-O) can be determined by looking for the numerically largest file name in the directory
pg_multixact/members under the data directory, adding one, and then multiplying by 65536. As above, the file names are in hexadecimal, so the easiest way to do this is to specify the switch value in hexadecimal and add four zeroes.
The WAL starting address (-l) should be larger than any WAL segment file name currently existing in the directory
pg_xlog under the data directory. These names are also in hexadecimal and have three parts. The first part is the “timeline ID” and should usually be kept the same. Do not choose a value larger than 255 (0xFF) for the third part; instead increment the second part and reset the third part to 0. For example, if 00000001000000320000004A is the largest entry in pg_xlog, -l 0x1,0x32,0x4B will work; but if the largest entry is 000000010000003A000000FF, choose -l 0x1,0x3B,0x0 or more.
Once you have these four values you're ready to try it out on your database. Before I began, I requested a full bootable clone of the server as it was when they restored it, placed that clone into a VM in Fusion, and snapshotted the VM before trying anything. Also, don't forget that when you want to issue commands to the Apple postgres service you have to use the full path to the commands and run them as the _postgres user. My final command, which recovered the wiki system AND Profile Manager, looked like this:
sudo -u _postgres /Applications/Server.app/Contents/ServerRoot/usr/bin/pg_resetxlog -f -x 0x100000 -m 0x10000 -o 0x10000 -l 0x1,0x2,0x18 /Library/Server/PostgreSQL\ For\ Server\ Services/Data
Feel free to reach out if you are having issues.
Posted: July 24th, 2015 | Author: jordan | Filed under: Mac OS X, munki | Tags: munki nopkg profile | No Comments »
I recently needed to push some user level profiles, CardDAV to be specific. I use Meraki MDM, but the custom mobileconfig profile would only install as a device profile, so I turned to my new munki install instead. Check out this post if you're not familiar with nopkg: http://grahamgilbert.com/blog/2014/07/27/personal-automation-munki-part-2/
First make sure you know the unique identifier of your profile; for this example we'll use com.company.carddav. Then create a folder on your munki repo called profiles and copy the profile into it.
Use the following bash scripts in your pkginfo files as the install check, install, and uninstall scripts respectively.

Install check (munki installs only when this exits 0, so the exit codes are flipped):

USER=`/usr/bin/who | grep console | cut -d ' ' -f1`
sudo /usr/bin/profiles -P | grep com.company.carddav | grep $USER
if [ $? -eq 0 ]; then
    exit 1
else
    exit 0
fi

Install:

USER=`/usr/bin/who | grep console | cut -d ' ' -f1`
/usr/bin/curl -L1 http://munki.yourmunkirepo.com/profiles/com.company.carddav.mobileconfig -o /tmp/profile.mobileconfig
sudo -u $USER /usr/bin/profiles -L -I -F /tmp/profile.mobileconfig

Uninstall:

USER=`/usr/bin/who | grep console | cut -d ' ' -f1`
/usr/bin/curl -L1 http://munki.yourmunkirepo.com/profiles/com.company.carddav.mobileconfig -o /tmp/profile.mobileconfig
sudo -u $USER /usr/bin/profiles -L -R -F /tmp/profile.mobileconfig
Posted: March 14th, 2015 | Author: jordan | Filed under: LDAP, Mac OS X Server, SSL | No Comments »
I encountered an issue recently where I imported a wildcard certificate into an Open Directory server. The import itself was fine, but once I tried to select the cert, Open Directory immediately stopped working and launchd started throttling slapd:
2015-03-14 10:42:07.113 AM com.apple.launchd: (org.openldap.slapd) Exited with code: 1
2015-03-14 10:42:07.113 AM com.apple.launchd: (org.openldap.slapd) Throttling respawn: Will start in 10 seconds
2015-03-14 10:42:17.150 AM com.apple.launchd: (org.openldap.slapd) Exited with code: 1
2015-03-14 10:42:17.150 AM com.apple.launchd: (org.openldap.slapd) Throttling respawn: Will start in 10 seconds
To diagnose I turned ldap off by way of launchd
sudo launchctl unload /System/Library/LaunchDaemons/org.openldap.slapd.plist
And then told OpenLDAP to launch in debug mode without forking.
sudo /usr/libexec/slapd -d 99 -F /etc/openldap/slapd.d/
To which I received this reply:
TLS: attempting to read `/etc/certificates/server.inside.tld.ca.6C66FD3E997A9FD902DEA9050EE3F9A58EF63742.key.pem'.
TLS: could not use key file `/etc/certificates/server.inside.tld.ca.6C66FD3E997A9FD902DEA9050EE3F9A58EF63742.key.pem'.
TLS: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch /SourceCache/OpenSSL098/OpenSSL098-47.2/src/crypto/x509/x509_cmp.c:406
55047382 main: TLS init def ctx failed: -1
That's strange, I thought, so I cracked open /etc/openldap/slapd.d/cn=config.ldif in vim and found that at the bottom of the file the cert and the key had not changed over properly.
olcTLSCertificatePassphrase: "Mac OS X Server certificate management.6C66FD3E9
Notice how the cert key file does not match the cert or chain file? It's like Server.app b0rk3d on parsing the wildcard symbol while modifying this file. The only way I've figured out to get OD back on its feet after this disaster is to remove these lines from cn=config.ldif and reboot the OD server. Even when I tried hand-coding the cert entries, Open Directory stopped crashing but the secure LDAP service still wouldn't come up.
I've since switched to an internal CA and minting certs for each FQDN, which has been a way better experience.
Posted: November 10th, 2014 | Author: jordan | Filed under: Uncategorized | No Comments »
Recently, I was granted access to the Windows beta agent. In a word: amazing. Truly, Allen and the folks at Watchman Monitoring have done an amazing job. I have most of my clients enrolled in Meraki Systems Manager and I wanted to be able to push this agent to them without getting in the user's face. I came up with the following, and please keep in mind, I'm NOT a Windows sysadmin.
bitsadmin.exe /transfer "MSI" http://www.yourdomain.com/path/to/MonitoringClient.msi C:\temp\MonitoringClient.msi
bitsadmin.exe /transfer "regfile" http://www.yourdomain.com/path/to/monitoringclient.reg C:\temp\MonitoringClient.msi C:\temp\monitoringclient.reg
Regedit /s C:\temp\monitoringclient.reg
Msiexec.exe /I C:\temp\MonitoringClient.msi
I take this code and paste it line by line into the “Command Line” feature of Meraki Systems Manager.
For more info on Watchman Monitoring Windows Beta go here.
For Meraki Systems Manager go here.