Brain Candy: Things to watch

July 26th, 2017

I have an ever-growing list of things I wanna watch or listen to. Here are a few:

DevOps

  1. DynaTrace interview with Sofico: covers pipeline metrics and Selenium integration for DevOps CI/CD (1 hour, YouTube)

Change Chef node environments

July 18th, 2017

This little snippet is useful for two reasons. First, if you want to progress all (or a subset) of your Chef nodes to a different environment, this is the secret sauce.

More importantly, if you want to convert the output of `knife node list` to a space-delimited array (instead of the \n-delimited list), the sed command is your weapon.

First, let’s get all of the nodes into an array:


nodes=($(knife node list | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/ /g'))

*Humbly snagged from this Stack Overflow post.
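
From there, actually moving the nodes is just a loop over that array. A minimal sketch, with ‘production’ as a placeholder target environment (double-check `knife node environment set` against your knife version):

# Move every node (trim the array for a subset) into the target environment.
nodes=($(knife node list | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/ /g'))
for node in "${nodes[@]}"; do
  knife node environment set "$node" production
done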


Debug with KitchenCI

July 11th, 2017

This is just a super quick note about how to dump tons of logging information during a `kitchen converge` run. In the .kitchen.yml file, put the following in the provisioner section:


provisioner:
  name: chef_zero
  log_level: debug

This enables debug logging during the chef-client converge. If you want to debug what Test Kitchen itself is doing during the run, pass the log level on the command line:


kitchen converge 2012 -l debug
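
Debug output scrolls by fast, so I usually capture it to a file as well; this is plain shell redirection, nothing Kitchen-specific:

# Capture both stdout and stderr while still watching the run live.
kitchen converge 2012 -l debug 2>&1 | tee converge-debug.log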


Chef node attributes with KitchenCI

June 21st, 2017

I’ve been using KitchenCI/Vagrant/VirtualBox to create “Local Development Areas” for quite some time. There are many benefits to using these light-weight, disposable, local VMs.

  • Providing a safe sandbox; what happens in the VM is (generally) separate from the host machine
  • Use them like Kleenex; fire one up, install/reconfigure something and delete it when you’re done with no impact to the host system
  • Keeps the host light-weight; you can trash your PC and get going on another with one “kitchen converge”

and so on…

One of the issues I’ve historically had with these KitchenCI images is that I didn’t know what the Chef node attributes were. KitchenCI uses ChefZero, an in-memory, ephemeral version of the Chef server, to configure the VirtualBox VMs. As such, there’s no Chef server to look at or to issue knife commands against.

While debugging one of my cookbooks this week, I stumbled upon something that solves this.

When you work with temporary files (e.g. zip files you need to expand), it’s a best practice to place them in the Chef temp cache. To maintain platform independence, that location is exposed via Chef::Config['file_cache_path'] (interpolated in recipes as “#{Chef::Config['file_cache_path']}”). This enables the following:


windows_zipfile "#{Chef::Config['file_cache_path']}\\sxs" do
  source "#{Chef::Config[:file_cache_path]}\\sxs.zip"
  action :nothing
end

# Download "CAB"
remote_file "#{Chef::Config['file_cache_path']}\\sxs.zip" do
  source 'https://withimpact.blob.core.windows.net/software/sxs.zip'
  action :create
  notifies :unzip, "windows_zipfile[#{Chef::Config['file_cache_path']}\\sxs]", :immediately
end

My cookbook was having (unrelated) issues, so I went digging… On Windows Server 2016, that path resolves to C:\Users\vagrant\AppData\Local\Temp\kitchen\cache (on CentOS 7.3, it resolves to /tmp/kitchen/cache). Poking around in that tree, I found C:\Users\vagrant\AppData\Local\Temp\kitchen\nodes, which happened to have a JSON file in it that contains all of the node attributes from the prior chef-client converge.

A bit hidden, but very useful…
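
If you just want to eyeball those attributes, a quick loop from inside the guest (after a `kitchen login`) does the trick. A minimal sketch, assuming a Linux guest with Python available for pretty-printing; on Windows guests, substitute the path noted above:

# Pretty-print every node JSON that Chef Zero left behind.
for f in /tmp/kitchen/nodes/*.json; do
  echo "== $f"
  python -m json.tool "$f"
done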


Chef Recipes all around…

March 31st, 2016

So, in working with Chef, I needed to spread some changes to every node in the shop. After quickly realizing that Ruby takes FOREVER to start on Windows 2012 and that there is no way to batch commands to knife, I jumped over to my Linux box and crafted this quick bash script:


for i in $( knife node list ); do
  /opt/chefdk/bin/knife node show $i --run-list
  /opt/chefdk/bin/knife node run_list set $i 'role[roleA],recipe[recipeA],recipe[recipeB]'
done

This showed the before state, updated the node’s run list and showed the end state. It ran fairly quickly on a modest RHEL 6.6 VM (~5 seconds per node).
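
If you only need to touch a slice of the fleet instead of every node, knife search with --id-only should slot right into the same loop. A sketch, with ‘role:webserver’ as a placeholder query:

# Same loop, scoped by a search query instead of the whole node list.
for i in $( knife search node 'role:webserver' -i ); do
  /opt/chefdk/bin/knife node show $i --run-list
  /opt/chefdk/bin/knife node run_list set $i 'role[roleA],recipe[recipeA],recipe[recipeB]'
done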

Nifty and hopefully helpful…


git tutorial creating global excludes

March 29th, 2016

If you want to exclude a file or directory from ALL git repositories, you can use a global exclude… and this git tutorial shows you how.

In my case, the git repositories are on a separate drive, identified as drive E: in a directory called “Repositories”. Within that directory are individual trees for each repository.

In E:\Repositories\.gitignore (just to follow convention), I placed a line “/sqlDumps”, which is where I tend to store DB backups. Then I issue the command using Git Bash in each repository I create:
git config --global core.excludesfile ../.gitignore

This loads the E:\Repositories\.gitignore file for all repositories but doesn’t gum up the individual repositories’ .gitignore files with these Ray-specific nuances…
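
One caveat I noticed afterward: because the setting is --global, it lands once in ~/.gitconfig, so re-running it per repository just rewrites the same value. And, as far as I can tell, git resolves a relative core.excludesfile path against the current working directory, so an absolute path (sketched here with my drive layout) is the safer bet:

# One-time setup; the absolute path resolves the same from any directory.
git config --global core.excludesFile "E:/Repositories/.gitignore"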

Just a quick how-to note. Enjoy!


Splitting a repository chunk

March 9th, 2016

On a project I’m working on, I was interested in cloning a large git repository on GitHub (Azure Quickstart ARM Templates) and splitting out a few pieces for further development while still merging changes from the original repository.

Essentially, I wanted to split out a folder of the parent repo.

Now, I’m not a git expert (yet) and this still needs some further testing, but I think I figured it out, and this post serves as my way of recording it for future enhancement. If you’ve got suggestions/questions, please post them in the comments or e-mail me as I’d love to grow this.

Original repo: https://github.com/Azure/azure-quickstart-templates.git
Directory of interest: https://github.com/Azure/azure-quickstart-templates/tree/master/chef-json-parameters-ubuntu-vm
Destination working repo: https://mygitlab.server/pg_chef-json-parameters-ubuntu-vm

Clone the original repo:
cd C:\my\local\repos\
git clone https://github.com/Azure/azure-quickstart-templates.git

Add the new repo as a remote:
git remote add pg_chef-json-parameters-ubuntu-vm https://mygitlab.server/azure-infra/pg_chef-json-parameters-ubuntu-vm.git

Split the desired folder into a new branch (named here specific to my project):
git subtree split --prefix=chef-json-parameters-ubuntu-vm -b pg_chef-json-parameters-ubuntu-vm
Note: this takes a while on this particular repo…

Push the branch to the new repo:
git push https://mygitlab.server/azure-infra/pg_chef-json-parameters-ubuntu-vm.git pg_chef-json-parameters-ubuntu-vm:master

Clone the new repo locally:
cd C:\my\local\repos\
git clone https://mygitlab.server/azure-infra/pg_chef-json-parameters-ubuntu-vm.git

And now you’re ready to go!!
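
One thing I still need to test is pulling later upstream changes into the split repo. My working theory (a sketch, not gospel): re-run the split in the refreshed clone; git subtree split regenerates identical commit IDs for unchanged history, so the push should fast-forward unless the destination repo has diverged:

# In the clone of the original repo, pick up the latest upstream changes:
cd C:\my\local\repos\azure-quickstart-templates
git pull origin master

# Re-split; unchanged history keeps the same synthetic commit IDs:
git subtree split --prefix=chef-json-parameters-ubuntu-vm -b pg_chef-json-parameters-ubuntu-vm

# Push the refreshed branch; if the destination repo has its own commits,
# merge on that side rather than expecting a fast-forward:
git push https://mygitlab.server/azure-infra/pg_chef-json-parameters-ubuntu-vm.git pg_chef-json-parameters-ubuntu-vm:master
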
This was adapted from John Teague’s post on Se Habla Code at Los Techies. His example is a bit more complex and way cooler in that he sets up a scenario where he splits out the web UI of an application that is identical between two different supported app layers. In his example, the web UI is split out, deleted from the original repository and then added back into both of the app-layer repos as a remote.

Sounds very cool and I need to try that… Thanks John!!!


Selling Sh*t on YouTube

December 19th, 2015

Alright, so I just couldn’t help myself. These days, it seems as if to make an engaging YouTube video, you have to resort to 5th grade potty humor. Literally. Check out these ridiculously engaging YouTube ads.

The Squatty Potty (yes, the one that was on Shark Tank)

And it features the unicorn that will change how you poop (and view ice cream). They even have a behind the scenes reel (which is pretty interesting, actually). Oh, and BTW, that video has 10,600,000 views!

And then, while watching YouTube, I saw yet another (less compelling, in my opinion) product… that has 51,000,000 (MILLION!!) views.

PooPourri (their business is to make it smell like your business never even happened)

My first thought is that I would love to be in the pitch room with any of these. Imagine saying to a client, “we’re going to use a unicorn that craps rainbow ice cream to explain the product benefits…” I can only imagine. :)

But, for all us marketers, it’s an interesting (and highly effective) case study in getting attention and communicating product benefits. While the PooPourri video is a little zany, the Squatty Potty video nails it in a way that keeps you watching.

Think about that next time you’re trying to communicate product benefits, no matter how obscure or taboo.


Cache on Facebook

December 18th, 2015

I learned a neat little trick today thanks to a home I have listed on Zillow.com this week.

We recently took a home we’re selling back from the listing agent (a beautiful 5BR/3.5BA, if you’re in the Greater Cincinnati market…). Dutifully, I reclaimed the Zillow ad and dropped the price. Wanting, of course, to leverage the power of social media, I tried to post it to Facebook via Zillow’s “share to Facebook” functionality.

WHOA, WAIT A MINUTE…

I immediately noticed the “For Sale: $XXX,XXX” price was the prior price, despite the fact I had updated it.

The URL was scraped and cached and, for the life of me, I couldn’t figure out how to “uncache” it. Heather at Zillow (via the Zillow help link) to the rescue… The solution is simple.

  1. Log into Facebook
  2. Go to the “Open Graph Object Debugger”
  3. Paste in the URL that is being shared to Facebook
  4. View the existing scrape information; it’s probably wrong…
  5. Click “Fetch new scrape information”
  6. Check to see if it is updated (if it’s not, you either have the wrong URL, or the original site (Zillow, in this case) isn’t updating what’s being served to Facebook)
  7. If it’s right, go back to the original site, and reshare it

Voila!
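
For the scripting-minded, Facebook’s Graph API exposes the same rescrape; a minimal sketch, assuming you have a valid access token (the listing URL here is a placeholder):

# Ask Facebook to re-scrape the Open Graph data for a URL.
curl -X POST "https://graph.facebook.com/" \
  -d "id=https://www.zillow.com/your-listing-url" \
  -d "scrape=true" \
  -d "access_token=ACCESS_TOKEN"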

In the case of Zillow, I tried scraping my home’s URL first. That did not work. So, I posted to my timeline, right-clicked on the post’s image to get the URL that was shared, and refreshed the scrape of that URL. As it turns out, what gets shared to Facebook from Zillow (and most places) is a shortened URL, and it is this URL that Facebook scrapes and references…

That is that. Now you know how to clear the Facebook cache when your shares aren’t showing up right! You can even do this for content on your own site that others are sharing, if you run into issues there.

Hope this helps!


Facebook contest drives e-mail CRM return on investment

November 18th, 2015

Today we launched a cool campaign for our new client, Backyard Savers. It’s the first e-mail list-building campaign for Backyard Savers, a local media and marketing magazine in the Greater Cincinnati area. The campaign involved creating a Facebook sweepstakes which integrates with the client’s back-end Constant Contact CRM solution. The execution includes timed and boosted Facebook posts to drive traffic to the sweepstakes.

In addition to setting up the FB campaign, we helped our client with a little housekeeping, which included:

  • Creating a high-level business-to-consumer (B2C) and business-to-business (B2B) e-mail CRM strategy
  • Setting up their Facebook business portal
  • Setting up their Constant Contact account (get yours here)
  • Because this is for a local publication to whom zip codes are important, we set up e-mail autoresponders to collect participant zip codes and page likes
  • Seeding the contest to local bloggers who posted the contest as well (aka, Influencer Marketing)

The contest ends 11/25/15 and I’m really excited to analyze the results… I anticipate great things! So, come sign up. You may win a $50 American Express gift card… and you will be exposed to a great way to build your business’ B2C and B2B e-mail CRM databases.

Want to do a similar execution for your business or need help driving value from your e-mail CRM efforts? Reach out to us or schedule a 15 minute join up.