TL;DR: cd /opt/boxen/rbenv/plugins/ruby-build && git pull origin master
A while ago, I set up my development environment on my two machines using GitHub's Boxen. To be honest, I got it going, moved on, and haven't been updating it. Then I recently started a new Ruby project and wanted the latest version of Ruby, but rbenv had no recent versions to offer - none newer than when I set up Boxen.
How do I update the list of Ruby versions available in my Boxen environment?
I could update Boxen, but that's more than I wanted to bite off. My first guess was to run brew update and brew upgrade rbenv ruby-build, but that failed because rbenv is part of the bootstrap of Boxen and not installed by Boxen's version of Homebrew.
So, I thought I'd need to update rbenv directly, and I figured it was installed as a git repo. The question was: where? which rbenv says it's a shell function, and that function's code un-aliases itself in order to run the real command.
which -a shows that rbenv is in /opt/boxen/rbenv/bin/rbenv. From there I found the ruby-build plugin, which leads to the solution:
cd /opt/boxen/rbenv/plugins/ruby-build
git pull origin master
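With ruby-build refreshed, rbenv should now see the newer releases. As a quick sanity check, something like the following should work (the 2.3.1 version number is just an example of a then-recent release, not anything specific to my setup):
rbenv install --list | tail   # the newest releases should now appear
rbenv install 2.3.1
rbenv rehash
rbenv global 2.3.1
ruby --version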
It's not like it's rocket-surgery, but it was a curious puzzle to sort out. Soon I should update my Boxen environment, but that's a story for another day.
enjoy,
Charles.
Friday, April 22, 2016
Tuesday, March 15, 2016
Getting Started
How to begin the HashiQuest?
The world of AWS is huge these days. The HashiCorp tools can be counted on two hands, but since they interface with AWS, that limited count is deceiving.
I actually started by getting the lay of the land from AWS in Action, which Manning conveniently had on special right about the time I was interested in learning more about it. The book isn't an exhaustive treatment of all of the AWS services, but it's an excellent overview. I did their tutorial for building a WordPress site, and the authors provide their code examples on GitHub, which is excellent.
After that, Terraform seemed like the logical place to start since it deals with building infrastructure in AWS. Again, I followed the online tutorial, and was pleased by the lack of drama.
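For anyone following along, the heart of that tutorial is a short loop run from a directory holding a .tf configuration. Roughly (and keep in mind that apply creates real resources in your AWS account):
terraform plan      # preview what Terraform would create or change
terraform apply     # build the resources in AWS
terraform destroy   # tear everything down when you're done poking around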
I don't for a minute think that doing either of these tutorials amounts to any real expertise, but I found that just typing the commands and checking the results in the AWS console starts to build both the physical and mental "muscle memory." Both tutorials were done with the free tier in AWS, and free (as in beer) is always good.
Next, I checked out a new tool from HashiCorp - Otto. Otto is a successor to Vagrant, but it's heading in very different directions. If you start thinking that Otto is just Vagrant++, it's hard to understand the infrastructure and deployment functionality that Otto provides. Otto provides a path from a development environment on a single machine, to a simple AWS deployment, to a more sophisticated AWS deployment.
Because Otto is based on some opinionated policies and best practices, it provides a great way to see how all the pieces of the HashiCorp ecosystem and AWS fit together. It generates plain-text configurations and scripts in the .otto directory in your project's tree. These are there to read and learn from. Some AWS masters might chafe at the best practices, but everyone's gotta start somewhere, so it might as well be something sane.
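To give a feel for it, the basic Otto workflow at the time looked roughly like this, run from the root of your project (the later steps stand up real resources in AWS):
otto compile   # generate the .otto directory of configs and scripts
otto dev       # bring up a local development environment (Vagrant underneath)
otto infra     # stand up the base AWS infrastructure (Terraform underneath)
otto build     # build an AMI for the app (Packer underneath)
otto deploy    # deploy the built artifact onto that infrastructure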
I'm not sure if this is the best way to learn about AWS and the HashiCorp tools, but it's what I've done. Your mileage may vary.
Enjoy,
Charles.
Monday, February 22, 2016
HashiQuest
I'm starting a new project at work to build a new infrastructure for hosting our apps. My objectives/requirements include:
- Elastic - something we can easily scale up and down.
- Redundant - something that can tolerate reasonable outages. With the recent Xen security issues, I've had more than one ride in the reboot-rodeo, and I'm getting tired of that.
- Invented Here as opposed to Not Invented Here (NIH) - I inherited the existing infrastructure, and there's nothing wrong with it per se, but because I didn't build it, it surprises me from time to time.
- Immutable - I want to get on the immutable infrastructure bandwagon because it's what the cool kids have been doing. But, as I get into this, I realize that immutable infrastructure can lead to...
- Fearless - I want to be able to make changes quickly and easily without uttering "what could possibly go wrong?" before each change.
To achieve these objectives, I plan on using tools from HashiCorp to build out a pretty traditional infrastructure on Amazon Web Services. I'm a big fan of HashiCorp and their tools. Most of their tools are open source, which I like for cost and "religious" reasons. Mitchell Hashimoto was my first guest on the SE Radio podcast when HashiCorp was just launching, and he's great. Once my infrastructure is up and running, I look forward to using their Atlas tool to manage it all and pay Mitchell for all the great stuff he's done.
As mentioned, my initial plan is to build the first version of the infrastructure using AMIs running on EC2 instances as opposed to building Docker containers or running on Google Compute Engine. I made that decision in part to be more conservative (I hate explaining our current environment to prospective customers - no one ever got fired for picking IBM/Cisco/Amazon). However, by using HashiCorp tools, I am hoping that I can keep my options open in the future.
Hence, I have begun what I'm calling HashiQuest. Stay tuned.
Charles.
Wednesday, January 13, 2016
A Pair of Interviews
I had two interviews released back-to-back on Software Engineering Radio around New Years:
- Episode 245: John Sonmez and I discussed his book Soft Skills - in particular the chapters on career management and marketing yourself. He was a great guest. I wish we could have gone over the whole book.
- Episode 246: I interviewed John Wilkes from Google about Borg, the cluster management software used at Google, and Kubernetes. No one, except John, will ever know what a pain in the butt it was to record that episode - epic Skype fails the first time, but John was exceedingly helpful and understanding.
Check them out,
Charles.