Thursday, December 21, 2017

Uh oh, automated tests and bad assumptions


So I'm looking at a test that's telling me it Passed, but this test is looking for a docker container and, well, no containers are RUNNING yet!  WTF?

This one returns 0, and since we assert that we want 0, the test passes:
assert $r 0 "Should not see db connection warnings"
======================================

root@ip-172-31-40-182:/tmp/HUB4.4.0# sudo /usr/bin/docker logs 40e0533b4f76 2>&1 | grep  "WARN <HIDDEN>- Unable to manage connection" | wc -l
0
======================================
When I strip out the 2>&1 we see there's a failure that's been getting swallowed:


root@ip-172-31-40-182:/tmp/HUB4.4.0# sudo /usr/bin/docker logs 40e0533b4f76 | grep  "WARN <HIDDEN>- Unable to manage connection" | wc -l
Error: No such container: 40e0533b4f76
0
======================================

Time to refactor!
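Before the refactor lands, here's the shape of the problem and the fix as I see it - a runnable sketch, with a plain failing command standing in for docker logs (the real test would guard with something like docker inspect on the container ID first):

```shell
#!/usr/bin/env bash
# Why the failure gets swallowed: with 2>&1 the error text feeds grep,
# grep matches nothing, wc -l prints 0, and the pipeline's exit status
# is wc's success - so the assert on 0 happily passes.
count=$(ls /no/such/container 2>&1 | grep "Unable to manage connection" | wc -l)
echo "count=${count}"   # prints count=0, even though the command blew up

# The refactor: verify the target exists BEFORE counting warnings.
# (ls stands in for `docker inspect "$container"` here.)
check_target() {
  if ! ls "$1" >/dev/null 2>&1; then
    echo "FAIL: target $1 not found"
    return 1
  fi
  echo "target $1 ok"
}

missing_msg=$(check_target /no/such/container)
ok_msg=$(check_target /tmp)
echo "$missing_msg"
echo "$ok_msg"
```

Guard first, count second - then a missing container fails loudly instead of masquerading as zero warnings.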

As always, what I blog about are my views and opinions and only mine, and are never the views or opinions of my past | present | future employers.

Wednesday, December 13, 2017

From Demo to Deployed in 1 week!

Last week I posted about a cool, impactful project I worked on and demo'd to the company at our weekly Engineering demos.  I also posted this about setting up a bastion host so that anyone can clone the project in git and run it themselves.

The bastion host setup was in preparation to go from demoWare to making the demo work within the Kippernetes CI/CD framework and getting my shit merged to master - which it is now!  😎

What was amazing to me is not just the work I did (hell, I only got to the 1 yard line), it was folks on my team like +jay vyas and Alan Bradley who took my work over the goal line.  Could I have done it?  Yes.  Would we have done the work in 1 week?  Maybe, maybe not.  Team effort is needed to succeed, and for me to learn and improve.  Sometimes that's a PR and code review; other times it's just everyone doing what they do best to get it done.

I was so damn happy when I saw the first real CI/CD test get kicked off and work!

See that?  Failing: 0


What's even cooler: today I found that some of the docker-compose tests were failing - and it was because of a real bug that had been checked in last night!

This really reinforces a couple things for me:

  1. I actually do have the skill set to do DevOps, and my request to transfer to a DevOps role is the right thing for me
  2. Automate everything you can, and push back on manual testing beyond using it as a first pass and as a means to the end of automating that manual test.  Now don't get me wrong: there are some tests that either can't be automated, or automating them is so complex that there's little to no ROI

More cool shit coming!

Hit me with any comments below!

As always, what I blog about are my views and opinions and only mine, and are never the views or opinions of my past | present | future employers.

Wednesday, December 6, 2017

Vagrant - now we're talking infra as Code!!

Two weeks ago I'm in an Infrastructure team meeting and we get some news; people are clamoring for a docker-compose ephemeral lab for R&D and Test Engineering to work on and test the Black Duck Software Hub.
Damn, the team's been focusing on more Cloud Native deployments like Kubernetes and OpenShift - docker-compose wasn't even in our peripheral vision.  Shit!  Okay, time for me to make my move.  One of our team's technical leaders is +jay vyas, and he asked if I could take ownership of the docker-compose work.  Of course I will.  We decided I could probably get this done in about a week using Vagrant by HashiCorp.
So last Tuesday I ran 'brew install vagrant' and set off to change the world!

So in three days I went from never having spun up any infra as software to giving a demo to the entire engineering team on Friday.

Doc Vyas helped me with some seed code and I ran with that to get started.  By the end of that 1st day I already had a Vagrant recipe that spun up an Ubuntu 16.04 instance in EC2.  Hell yea!

Day two I set forth to write some solid bash script that would:
update_os
inst_packs #installs the proper packages I need
add_gpgkey
add_stable_repo
update_packg_index
inst_docker
hello_world
inst_compose
start_hub
hub_ready

These are literally the functions I ended up with, self documenting - w00t!
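For what it's worth, the driver tying those together can be dead simple.  Here's a sketch (not the repo's actual driver) of chaining the functions fail-fast: with `set -e`, any failing step aborts the provision.  The function bodies are stubbed out here so the sketch runs standalone; the real ones do the actual work.

```shell
#!/usr/bin/env bash
# Fail-fast driver sketch: run each provisioning step in order and
# bail out the moment one fails (set -e).
set -euo pipefail

steps=(update_os inst_packs add_gpgkey add_stable_repo update_packg_index
       inst_docker hello_world inst_compose start_hub hub_ready)

# Stub every function so this sketch runs anywhere; the real bodies
# live in the provisioning script.
for fn in "${steps[@]}"; do
  eval "${fn}() { echo \"-> ${fn}\"; }"
done

for step in "${steps[@]}"; do
  "$step"
done
```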
The most complex code in this whole shebang is this:

=======================================
# Smoke test: is the HUB ready and healthy?
# Polls `docker ps` once a second until every hub container reports
# (healthy), giving up after 120 tries. Expects NUM_CONTAINERS to be
# set earlier in the script.
function hub_ready() {
  TRIES=0
  while [[ $(sudo docker ps -a | grep 'blackduck' | grep '(healthy)' | wc -l) -lt $NUM_CONTAINERS ]] && [[ $TRIES -lt 120 ]]; do
    sudo docker ps -a
    sleep 1
    let TRIES+=1
  done
}

========================================
I'm just checking that all the hub docker containers are up and healthy within 120 ticks of the clock.
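That polling shape generalizes nicely.  Here's a little helper I'd extract (wait_for is my name for it, it's not in the repo): run any check once a second until it passes or we run out of tries, and report the failure instead of silently giving up.

```shell
# Generic poll-until-ready helper in the same shape as hub_ready:
# retry a command every second, up to max tries; return non-zero
# if it never succeeds.
wait_for() {
  local max=$1 tries=0
  shift
  until "$@"; do
    tries=$(( tries + 1 ))
    if [ "$tries" -ge "$max" ]; then
      return 1
    fi
    sleep 1
  done
}

# e.g. wait_for 120 some_health_check_command
wait_for 5 test -d /tmp && echo "ready"
```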

So once I got all the things working locally I committed and pushed, and was merged upstream to our cool infrastructure project over on github: https://github.com/blackducksoftware/kippernetes

My demo project is in /hack/compose-vagrant-up
and I'm actively working on wiring the demo into the Kippernetes CI/CD, with a goal of completing that work before Christmas break.
I've been blogging a bit about a test bastion host I've built in EC2 to run through the setup and configuration I'll need to do on the upstream Kippernetes bastion host to wire in my demoware.  You can check that out here: https://tinyurl.com/sheppvagrantup01

Vagrant up!

As always, what I blog about are my views and opinions and only mine, and are never the views or opinions of my past | present | future employers.

Tuesday, December 5, 2017

Setting up a CentOS Bastion host for Vagrant

Well I'm up against it: I need to get my Vagrant recipe out of hack-this-shit-together mode and productize it a bit.
First I need to stop running 'vagrant up' from my MacBook; that doesn't work well when I need to go home and close my Mac yet keep vagrant'd-up instances running in EC2.  So what shall I do?  Yea, I could use TMUX and just jump back into a session, but that's lame.

Well how about a bastion host in EC2 that has Vagrant installed so I can push my bits and run them out of EC2?  Sounds cool AF - Let's try it out!

I've got a CentOS 7.x instance running, using an official CentOS.org image off AWS:

Okay, and we'll go ahead and use the t2.medium instance type.

Now that the instance is up let's update it:

Great!  I've gone ahead and downloaded the 64-bit version of Vagrant for CentOS and, using Filezilla, uploaded it to the AWS instance.  Time to install Vagrant.

Cool, now we're cooking!

Now I need to get my CentOS into a DevOps state :)

I suggest running 'yum update' first.

Install the pre-reqs:
yum install -y ruby gcc
Next we need to get Vagrant onto the CentOS box. I'm not going to tell you how to do that; I personally DL to my Mac and then use FileZilla to push the bits to the bastion. Others may want to just curl or wget the bits - do what makes you happy, just stay in your own lane and out of mine. ;)
Vagrant strongly suggests not installing via package managers, as these don't always have the most updated bits, dependencies, etc. So save yourself the time & hassle and install from the rpm.
Grab Vagrant's CentOS rpm here: https://www.vagrantup.com/downloads.html
Okay, the bits are on the CentOS bastion host, let's install it: rpm -Uvh vagrant_*.rpm
Once the install is complete make sure it is installed: vagrant --version
Now let's install the vagrant-aws plugin:
vagrant plugin install vagrant-aws
Kewl, we're ready to rock and roll!

Have fun DevOps peeps!  Vagrant Up!

As always, what I blog about are my views and opinions and only mine, and are never the views or opinions of my past | present | future employers.

Wednesday, November 1, 2017

Working around OpenShift's docker storage check in OpenShift 3.6+


So you've stood up some servers in AWS and you have everything ready to install OpenShift.  You invoke atomic-openshift-installer install, and after answering the questions about the hosts and adding all the master and minion nodes, the installer fails:


So, let's do what this message tells us to and modify
vi /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/config.yml
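For reference, openshift-ansible's health checks can be skipped with the openshift_disable_check variable, so the edit is along these lines (exact placement varies by release; this is a sketch, not the literal diff):

```
# Skip the docker storage health check. This can go in the play's vars
# in config.yml, or in your inventory under [OSEv3:vars]:
openshift_disable_check=docker_storage
```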

Remember this is a YAML file so don't be tabbing and shit - spaces only.
:wq! that shit and re-run atomic-openshift-installer install, then go eat lunch, cuz when you come back you're gonna have an OpenShift Cluster to work on, yo!

As always, what I blog about are my views and opinions and only mine, and are never the views or opinions of my past | present | future employers.

Wednesday, October 18, 2017

pdsh? hell yes please and thank you!

So this dude I know +jay vyas, he's constantly telling me about new linux shit I have to try.  Recently he's been talking about pdsh, a multithreaded remote shell client that executes commands on multiple remote hosts, in parallel.  DaFuk, have I been living under a rock?!

Today I'm adding NFS storage to my OpenShift cluster in AWS.  After setting up my EFS I needed to create a directory and then mount that shit on 4 OpenShift nodes.  While I could have just ssh'd into each one, using TMUX of course, and run the mount command on each, I couldn't resist busting a nut with pdsh.
So I create a text file in /var/tmp with all the servers I want to push commands to, called oseServers.txt, and get to work:
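The file is just one target host per line - something like this (the hostnames here are made up, obviously):

```
# /var/tmp/oseServers.txt - one target host per line
ip-172-31-47-57
ip-172-31-35-72
ip-172-31-44-101
ip-172-31-39-8
```

The ^ in the pdsh commands below tells pdsh to read the target list from that file.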

First let me create the mount point on all the Nodes:

[root@ip-172-31-35-71 tmp]# pdsh -R ssh -w ^oseServers.txt sudo mkdir /mnt/efs 

Now let me mount that AWS EFS
[root@ip-172-31-35-71 tmp]# pdsh -R ssh -w ^oseServers.txt sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-<hidden-from-your-eyes...>.efs.us-east-2.amazonaws.com:/ /mnt/efs 

I don't believe that shit really mounted anything on the nodes...
So I ssh into one of the nodes and check:

[root@ip-172-31-47-57 ~]# df -h /mnt/efs/
Filesystem                                 Size  Used Avail Use% Mounted on
fs-8854bef1.efs.us-east-2.amazonaws.com:/  8.0E     0  8.0E   0% /mnt/efs

[root@ip-172-31-47-57 ~]# 

Awww yea!  pdsh is the shit!

As always, what I blog about are my views and opinions and only mine, and are never the views or opinions of my past | present | future employers.

Tuesday, October 10, 2017

Pigeon Holed - dafuk?!

I'm a software test engineer: I test shit and find problems - and yea, I also verify stuff works the way someone who matters says it should  :).  Over the last 5+ years all I have heard is "Automate all the tests!".  I've never been a programmer, and while I've tried a few times to 'teach myself' to code, each attempt ends with me knowing how to do the classic "Hello World" bullshit and never using anything I learned at work - like, ever.  Yea, I've done a little Java, fixing code that already existed, doing real simple shit that a high-schooler could probably do.  Java, Python, Perl, Ruby - yea, I've hello-world'd the shit out of those...

Being a non-programmer never used to bother me; I'm wicked fucking good at what I do and no automation is going to replace that, period.  With that said, people and organizations are actively looking for ways to eliminate people like me with automation.  I may not be a programmer but I'm not dumb - I see the writing on the wall... Can I last another 15 years in this game without coding, without automating shit, or will I be automated out of my livelihood?  I lose sleep thinking about this shit, for real.

Someone close to me was recently automated out of part of his part-time job; he went from working 3 days a week to 2.  That shit really hit him hard, and it hit me too.

So when I recently thought I was about to move in a new career direction - really going to be put in a position to learn programming from folks like +jay vyas who seem like they genuinely want to help me (as long as I put in the effort, of course) and shift my career to a more in-demand skill set - I was like HELL YEA!!!

I got to work on that new team for ONE full day, then I got pulled/pushed back to 'my day job', executing manual tests for a product launch with yet another tight deadline.

There's talk I'll be back, that I'll be on the team, writing code and submitting PRs... and I'm trying to believe that shit's going to happen... but there's this voice in my head telling me something different...
Failure's not a motherfucking option...

As always, what I blog about are my views and opinions and only mine, and are never the views or opinions of my past | present | future employers.

TMUX, cool AF

Thanks to +jay vyas I've been trying new shit at work.  After he rode my ass for a few weeks about not using TMUX, I gave it a go and honestly, at first I hated it.  It made me feel like I was a moron: "what's the key combo again to...?"  The dude was probably thinking 'WTF is with this guy?'

"What's TMUX" you ask (I sure had to!)? This quote comes directly from the TMUX project's github home:

"tmux is a terminal multiplexer. It lets you switch easily between several programs in one terminal, detach them (they keep running in the background) and reattach them to a different terminal"

I gave up on TMUX - that is, until I had to monitor like 5 separate RHEL instances in AWS running OpenShift.
I forgot to mention that by now +jay vyas had hijacked my MacBook and installed oh-my-zsh.  Seems cool; I got along fine before, but hey - the kids are using it so I shall too!  Come to think of it, every time this dude touches my MacBook he's bitching about something I don't have installed... and then he goes and installs it!

So I recently fired up TMUX again, determined to figure this shit out and get on with it, and I did!

I still had one beef with TMUX: I don't like the default behavior of new pane sizes; it annoys the shit out of me some days.  Today I learned a new trick to resize panes:
ctrl+b, then : to open the command prompt, then type 'resize-pane -D' (or -U, -L, -R)

Now I can size and space my panes the way I want to!
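If you resize the same way every time, you can also bake repeatable bindings into ~/.tmux.conf (these bindings are my own picks, not tmux defaults):

```
# ~/.tmux.conf - hold prefix (ctrl+b), then tap H/J/K/L repeatedly
# to resize the active pane 5 cells at a time (-r = repeatable)
bind -r H resize-pane -L 5
bind -r J resize-pane -D 5
bind -r K resize-pane -U 5
bind -r L resize-pane -R 5
```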


Why am I blogging about TMUX though?  I have no fucking idea honestly! I'm a dork!

As always, what I blog about are my views and opinions and only mine, and are never the views or opinions of my past | present | future employers.