Automate adding nodes to SolarWinds with Puppet

At work we use Puppet for configuration management. Recently we decided to move to SolarWinds for monitoring.

I loathe any sort of “true-up” process where people are trying to figure out what nodes they have and how many of them are missing from the monitoring tool.

To that end, we knew we needed to get nodes into SolarWinds automatically. Naturally, Puppet would be the tool for that, since it’s already everywhere and controlling everything for us.

Since SolarWinds has an API, it made sense to use a Puppet custom function to reach out to SolarWinds and ask if the node was present; if it wasn’t, go ahead and add it. This also lays the groundwork for expanding on that capability, such as reconciling partitions to add new ones automatically without intervention from the Operations folks.
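To make the shape of that concrete, here is a minimal Ruby sketch of the kind of lookup such a custom function performs. This is not the code we released; the SWIS endpoint, port, field names, and helper names are assumptions based on the typical SolarWinds Information Service JSON API:

```ruby
require 'json'
require 'net/http'
require 'uri'

# Hypothetical helper: build the SWIS query URL that checks whether a node
# is already present in SolarWinds. Endpoint path and port are assumptions.
def swis_query_uri(host, node_name)
  query = "SELECT NodeID FROM Orion.Nodes WHERE Caption = '#{node_name}'"
  URI("https://#{host}:17778/SolarWinds/InformationService/v3/Json/Query" \
      "?query=#{URI.encode_www_form_component(query)}")
end

# Sketch of the lookup a custom function could perform. A real version
# needs error handling, timeouts, and certificate verification choices.
def node_in_solarwinds?(host, user, pass, node_name)
  uri = swis_query_uri(host, node_name)
  req = Net::HTTP::Get.new(uri)
  req.basic_auth(user, pass)
  res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |h| h.request(req) }
  JSON.parse(res.body).fetch('results', []).any?
end
```

If the query comes back empty, the function (or the manifest calling it) would follow up with a POST to create the node.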

We have open-sourced what we’ve done. While it’s not a ton of code, it should get a lot of people up and running. And please, if you extend it, send me a pull request!

Using Spark Core without the Cloud

Recently I was generously given a Spark Core (note: not the as-yet-unreleased Photon) by my friend Bill.

The Core is a tiny Arduino-compatible board with WiFi built in. I’ll assume you can figure out why that is cool on your own.

As soon as I got it I ran through the online Getting Started Guide, had it up and running, and was controlling pins from my phone. One of the features of the board is that it ships with an entire cloud-based ecosystem. You can use their WebIDE to flash your code to the board, you can send data to and from the board via their REST API, and you can control aspects of the board via your phone and their cloud. I think that’s great, as it gives people unfamiliar with microcontrollers a way to hit the ground running. I also think it’s interesting from an IoT perspective.

But…

But I want to use it for home automation projects, and I don’t particularly fancy my data hitting some other person’s servers. The great thing is, Spark.io realizes this and provides a way to use the Spark Core with no cloud interaction at all. (Gold star, Spark.io!)

The downside is that the information on how to do this is scattered across their commit messages and docs. To be fair, they aren’t hiding anything at all, but my tiny brain always wants to see the simplest possible set-up that does what I want.

For the purposes of this post I want to take a factory-reset (or brand-new) Core, upload my own code, and start the WiFi, all without the Core talking to the Spark Cloud.

Having said that, I had a few false starts getting this going, and if you do things in a different order you might find the Core does try to reach out to the cloud (again, that’s a totally fine thing for it to do, because that’s what they are trying to accomplish with the product).

Let’s begin.

Step 1

The first step is to factory reset the Core. You might be able to skip this step if you just got the device, but it certainly won’t hurt to do it anyway. Plug the Core into your laptop using the micro-USB cable; this will power the device. Now hold the Core up so the USB port is at the top. Press and hold the left button. Click and release the right button. Continue to hold the left button for about 10 seconds, until the main LED goes solid white. Release the left button and you can now put the Core back down on the table (sofa, rock, railing, seat of Ferris wheel…wherever you are). We’re done with the reset, and the main LED should be blinking blue.

Step 2

In step 2 we need a local development environment. This was trivial to set up by following the README on the GitHub page. I deviated slightly in that I used Homebrew to get version 4.8 of the GCC ARM compiler:

brew tap PX4/homebrew-px4
brew update
brew install gcc-arm-none-eabi-48

If you end up with another version of the compiler, you could run into issues (I did when I tried to use 4.7).

Step 3

Now that you have a local development environment, we can dig in. Ultimately, your entry point to the code is /path/to/core-firmware/src/application.cpp. Move that file out of the way, and put the following in its place:

#include "application.h"

SYSTEM_MODE(SEMI_AUTOMATIC);
int led = D7;

void setup() {
  WiFi.connect();
  pinMode(led, OUTPUT);
}

void loop() {
  digitalWrite(led, HIGH);
  delay(5000);
  digitalWrite(led, LOW);
  delay(5000);
}

This code does a few things for us. First, it puts the system into SEMI_AUTOMATIC mode, meaning it will not reach out to the cloud unless your code tells it to. Second, it fires off WiFi.connect(). The reason we have that here is that if the Core doesn’t already have credentials for a network, it will listen for them to be delivered (which we’ll do in a minute). The main loop then just turns the on-board LED on and off so we know it’s working. Note that if you set up WiFi before uploading code with the new SYSTEM_MODE, the Core will reach out to the cloud to see if it’s already associated with an account.

Step 4

OK, almost done. Let’s build and upload our code. Navigate to the /path/to/core-firmware/build directory and issue:

make clean
make

This will clean up any past attempts to build the code and rebuild the firmware. When that completes you should have the file core-firmware.bin in the build directory.

Now put the Core into DFU mode: hold the Core with the USB port at the top, hold down the left button, then tap and release the right button. When the main LED begins flashing yellow (about 3 seconds), release the left button. Now, if you followed the README linked above, you should be able to query the device with something like:

plum:build me$ dfu-util -l
dfu-util 0.8
[...]
Found DFU: [1d50:607f] ver=0200, devnum=6, cfg=1, intf=0, alt=1, name="@SPI Flash : SST25x/0x00000000/512*04Kg", serial="6D8B368A4857"
Found DFU: [1d50:607f] ver=0200, devnum=6, cfg=1, intf=0, alt=0, name="@Internal Flash  /0x08000000/20*001Ka,108*001Kg", serial="6D8B368A4857"

Great, now we can upload the firmware.

plum:build me$ dfu-util -d 1d50:607f -a 0 -s 0x08005000:leave -D core-firmware.bin
[...]
Opening DFU capable USB device...
[...]
Download        [=========================] 100%        77312 bytes
Download done.
File downloaded successfully
Transitioning to dfuMANIFEST state

At this point the Core should be blinking blue, which says “I’m happy, but I’d take some WiFi credentials if you had some”.

If you don’t want to use the WiFi you could stop here and use it as a serial-connected device; otherwise, follow the next step.

Step 5

We’re going to create a serial connection to the device. The Core responds to only two inputs when connected this way: w, which will prompt you for WiFi credentials, and i, which will show you the Core ID. We obviously want w.

Create a serial connection:

plum:build me$ screen /dev/cu.usbmodemfa131  9600
SSID: hardtoguessssid
Security 0=unsecured, 1=WEP, 2=WPA, 3=WPA2: 3
Password: super sekret
Thanks! Wait about 7 seconds while I save those credentials...

Awesome. Now we'll connect!

If you see a pulsing cyan light, your Spark Core
has connected to the Cloud and is ready to go!

If your LED flashes red or you encounter any other problems,
visit https://www.spark.io/support to debug.

    Spark <3 you!

That’s it. It doesn’t do anything interesting yet; that part is up to you. However, it gets you into a state that circumvents the cloud.

Happy hacking!

Simple on/off rules with OpenHAB

So as a quick follow-up to yesterday’s post I wanted to give you a simple rule-set to get started with.

In that post we got OpenHAB up and running and were able to turn the Christmas tree off and on via the web UI. In this post we’ll automate that.

Log into the OpenHAB box and cd into the OpenHAB directory (in our case, /opt/openhab):

cd /opt/openhab/configurations/rules
vim default.rules

Sock the following code in there:

rule "TurnOnTree"
when
  Time cron "0 0 18 * * ?"
then
  sendCommand(Z_socket1, ON)
end

rule "TurnOffTree"
when
  Time cron "0 0 22 * * ?"
then
  sendCommand(Z_socket1, OFF)
end

That’s really it. OpenHAB can re-source the config files while running, so at 6pm your xmas tree will light up, and at 10pm it will turn off.

Because the project uses the Quartz Scheduler for its cron triggers, it is helpful to read their documentation.
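For reference, Quartz cron expressions add a seconds field in front of the classic five (plus an optional year field). The 6pm trigger above breaks down like this:

```
# field:  sec  min  hour  day-of-month  month  day-of-week
# value:  0    0    18    *             *      ?
#
# i.e. every day at 18:00:00. The ? means "no specific value";
# Quartz requires it in one of the two day fields.
```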

The Simplest OpenHAB Z-Wave set up

I have recently become interested in home automation. Mostly because I like to play with technology, not because of any real pressing need for it.

I decided to pick up some Z-Wave equipment at the recommendation of my friend Bob.

I purchased the following:

The Aeon Labs ZStick

And a GE 45603 Socket

The first step is to associate the dongle with the socket. This is easily done. Just plug the socket into the wall and walk up to it with the dongle in your hand. Press the button on the dongle, and then, after the light begins to blink, press the button on the socket. The dongle will blip and then resume its slow pulsing. That’s it: you now have a mesh Z-Wave network of two nodes.

Then take your dongle and plug it into the machine you plan to use as your OpenHAB server. This can be almost anything that runs Java. I used an old tower I had on hand and slapped CentOS 7 on it.

Install Oracle’s Java via RPM (jdk-8u25-linux-x64.rpm):

# yum install jdk-8u25-linux-x64.rpm

Then pull down the latest OpenHAB. You’ll need the Runtime Core and the Addons archives.

Sock OpenHAB on the server under /opt:

# mkdir /opt/openhab
# cd /opt/openhab
# wget https://github.com/openhab/openhab/releases/download/v1.6.1/distribution-1.6.1-runtime.zip
# unzip distribution-1.6.1-runtime.zip
# cd ..
# mkdir addons-openhab
# cd addons-openhab
# wget https://github.com/openhab/openhab/releases/download/v1.6.1/distribution-1.6.1-addons.zip
# unzip distribution-1.6.1-addons.zip
# cp org.openhab.binding.zwave-1.6.1.jar ../openhab/addons/

Now we have the code deployed, and our z-wave bindings in place. Let’s finish up the configuration and fire this up.

First, openhab.cfg:

# cd /opt/openhab/configurations
# cp openhab_default.cfg openhab.cfg
# vim openhab.cfg

Next, find out where Linux put the ZStick:

# dmesg | grep cp210x\ converter\ now\ attached
[    6.237436] usb 6-2: cp210x converter now attached to ttyUSB0

Alright, now that we have that, change the following line:

zwave:port=/dev/ttyUSB0

Now, let’s create our items file:

# vim /opt/openhab/configurations/items/default.items

And add this:

Switch Z_socket1 "Christmas Tree" (Lights) {zwave="2:command=SWITCH_BINARY"}

The 2 in the binding is the Z-Wave node id of the socket (the controller stick itself is node 1). Now, let’s edit the sitemap so we can manipulate that item using the web interface:

# vim /opt/openhab/configurations/sitemaps/default.sitemap

And add this.

sitemap default label="Main Menu"
{
    Frame label="Christmas Lights" {
        Switch item=Z_socket1 label="Christmas Tree" icon="switch"
    }
}

Now fire up the server and navigate to the web interface.

# cd /opt/openhab/
# bash start.sh

You should now be able to navigate to http://yourhost:8080 and there should be an on/off switch for you to play with.

Certainly not plug and play. But if you’re technically savvy, this should get you up and running.

Next up you can start to write rules for controlling your home.

mcollective-yum-agent release 0.4

I decided that I will actually do releases of the MCollective yum agent, though there likely won’t be many.

Because I use GitHub as my only remote, I sometimes push broken code so I can pull it to another machine later and keep working. This could result in an end user pulling the broken code and, well, being generally annoyed.

With the latest additions to add `list` as a subcommand, I figured I’d tag a release and then people can nab that instead.

I picked the .4 out of the air. I feel like I am about 2/5ths done, so .4 seemed appropriate on the way to a 1.0 release.

I am also planning to produce an RPM that I will publish in a Yum repository. It seems like blasphemy that a yum agent doesn’t have its own repo/RPM.

Releases URL: https://github.com/slaney/mcollective-yum-agent/releases


Change Log for 0.4
* Added `list` subcommand
** See docs for usage
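For a flavor of what a `list` action has to deal with on the agent side, here is a hedged Ruby sketch (not the agent’s actual code) of turning `yum list installed` style output into structured data; the helper name and field handling are made up:

```ruby
# Hypothetical helper: parse "name.arch  version-release  repo" lines,
# as printed by `yum list installed`, into an array of hashes.
# Not the actual agent code; real yum output has continuation-line quirks.
def parse_yum_list(output)
  output.lines.map do |line|
    name, version, repo = line.split
    # Skip header lines and anything that doesn't look like a package row
    next unless name && version && repo && name.include?('.')
    { package: name.sub(/\.[^.]+\z/, ''), # strip the ".x86_64" arch suffix
      version: version,
      repo: repo }
  end.compact
end
```

Feeding it a line like `kernel.x86_64 3.10.0-123.el7 @base` yields one hash with the arch stripped off the package name.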

yum mcollective agent

Almost 3 years ago I created two MCollective agents for use at my last employer. One was a `yum`-specific agent that allowed us to automate patching their brand-new footprint of 500+ Red Hat Enterprise Linux hosts. The second agent, Shellout, allows you to execute arbitrary shell commands on hosts concurrently. Both agents enjoy the rich filtering capability you get for free with MCollective.

The yum agent was by far the more popular of the two. I think the Shellout agent scares people, and it should, but nonetheless it has seen poor adoption.

My concern recently has been that if you wanted to manage the yum agent in your environment, you either had to “cut” your own release and make an RPM of it, or just copy and paste it into your config management class to have it distributed. Chances are you want to use git in those workflows to get the agent from me, and then you have the Shellout agent tagging along for the ride.

Future state: I have renamed the main repository to be just the yum repo and split the other agent into a separate repository so they can be managed independently. As of today you can get them at the following GitHub links (though the old URLs will continue to work):

Yum: https://github.com/slaney/mcollective-yum-agent
Shellout: https://github.com/slaney/mcollective-shellout-agent

I am planning a post on my justification of the maligned Shellout agent in the very near future, so please come back.

iTunes “Helper”

I have a Bluetooth headset I like. I use it at night, sitting on the sofa messing around on my laptop, to listen to music or YouTube videos. However, one annoyance I have always had is that connecting to it would launch iTunes.

So I googled around a bit and noticed an app called iTunesHelper in my Login Items. I removed it from there, and that seemed to do the trick. At first.

After a few weeks I noticed the behavior came back. I was perplexed, until I realized iTunes must be restarting the helper whenever it ran. A two-second test confirmed it. So I helped the helper.


$ cd /Applications/iTunes.app/Contents/MacOS/
$ sudo mv iTunesHelper.app iTunesHelper-shutup.app
$ sudo pkill iTunesHelper

Ugh, done.