The Productive Programmer by Neal Ford
ISBN: 978-0-596-51978-0
I've always had a more-than-passing interest in productivity porn, as Merlin Mann calls it. I'm not obsessive about seeking out productivity books (I can quit any time), but I've read more than a few such books in my day, and I do keep my tasks organized with OmniFocus. When I saw The Productive Programmer at the Powell's table at OSCON 2008, I confess I got pretty tingly, kinda like "when we used to climb the rope in gym class." Here was a book talking about productivity aimed specifically at what I do every day - programming, not management, not sales, not coaching a football team, but programming. In the words of Cartman, "sweet."
That said, I wouldn't classify this book as hard-core productivity porn. It doesn't lay out a dogmatic formula, nor does it suggest or require specific tools or techniques. And that's probably a good thing; programmers are some of the most opinionated people I've known, especially when it comes to their craft (e.g., editor wars). If the author had attempted to prescribe a specific set of practices, I think almost every programmer would have found something to hate about the book, and what's the point of that? We could just as easily go back to ragging on emacs, Windows, or Steve Jobs.
Instead, Mr. Ford offers a number of suggestions that one can take or leave. These are organized into two parts: mechanics (the productivity principles) and practice (philosophy), or what I would call tactical (little picture) and strategic (big picture) techniques. Mechanics covers things like controlling interruptions from email and using tools like Quicksilver (which I finally started using after reading this book). Philosophy covers things like test-driven development/design (TDD) and static analysis tools.
The various suggestions were all well and good, but what I liked most about this book is that it made me think about mom-and-apple-pie topics like TDD (of course, we all write unit tests, right?) from a totally different angle - productivity. Sure, that's what the agile folks have been saying all along, but somehow this book shed a whole new light on it and helped drive it even deeper. And the whole book got me thinking about the bigger question - "how can I be more productive and effective in my programming?"
Like I said, this isn't hard-core productivity porn, but it's a very useful and approachable guide to productivity by a programmer for programmers. Maybe that makes it productivity literary erotica?
enjoy,
Charles.
Wednesday, November 26, 2008
In Praise of TDD and Mocking
As noted elsewhere in the blog, I've been doing a bunch of work to script ESXi using VMware's (sometimes troublesome) RCLI tools. I'm developing a higher-level set of code written in Python. (In theory, using VMware's Perl toolkit would be cleaner, and I wouldn't have to bitch about the RCLI tools, but I've done enough Perl for one lifetime.) In addition to the mom-and-apple-pie goodness of TDD, I'm also getting a huge productivity boost by employing TDD via mocking.
In the code I'm writing, each method typically makes one or more RCLI calls, and each RCLI call takes 3-5 seconds. I structured the code so that the RCLI invocations all funnel through one point in the code, which can easily be monkey-patched to go through a mock function. (More details on that in a later post.) After mocking, the result is that I can execute 20 tests with dozens of mocked RCLI calls in less than a tenth of a second. After the unit tests pass, I can push the code out to a real host and run real RCLI commands, and for the most part, "it just works."
When I started, I figured mocking was the only way to unit test this code, and it was more convenient to develop the code on a machine where I don't have the RCLI installed. The performance boost was unexpected but by far the most significant benefit of the unit tests. And when I needed to perform a couple of refactorings during the development, I got to enjoy wicked fast speed plus safety - two great tastes that taste great together.
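To give a flavor of the shape (a minimal sketch, not my actual code - run_rcli and register_vm here are hypothetical stand-ins for the single choke point and one of the methods that calls it):

import subprocess
import sys
import unittest

def run_rcli(*args):
    # The single choke point: every RCLI invocation funnels through here.
    return subprocess.check_output(list(args)).decode()

def register_vm(vmx_path):
    # One of the methods that would otherwise burn 3-5 seconds per RCLI call.
    out = run_rcli('vmware-cmd', '-s', 'register', vmx_path)
    return 'register()' in out

class RegisterVmTest(unittest.TestCase):
    def test_register_reports_success(self):
        module = sys.modules[__name__]
        saved = module.run_rcli
        # Monkey-patch the choke point so no real RCLI call happens.
        module.run_rcli = lambda *args: 'register() =1'
        try:
            self.assertTrue(register_vm('[datastore] vm/vm.vmx'))
        finally:
            module.run_rcli = saved

if __name__ == '__main__':
    unittest.main()

Every mocked call returns instantly, which is where the speedup comes from.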
Charles.
Thursday, November 20, 2008
VMware RCLI and exit status
Yet another gripe about VMware's RCLI commands: they almost always return an exit status of zero, even if the command failed. For example, 'vmware-cmd -s unregister' for a non-existent virtual machine returns a status of zero (the same as if the command had succeeded). One has to look at the standard output of the command to see "No virtual machine found." This is OK for an interactive user, but it's a pain-in-the-ass if you're trying to write scripts, as I happen to be. For every command, I have to parse the output to see whether it succeeded or not.
But then some commands (e.g., vifs) do return useful exit codes, at least some of the time. This whole process has to be handled on a case-by-case basis.
Speaking of parsing output, the outputs are not consistent. For example, when that unregister command is successful, it returns "unregister() = 1". Note there is a space on both sides of the equals sign. When the corresponding register command succeeds, it returns "register() =1". Note that there is no space on the left side of the equal sign. As I write code to parse these outputs (because I can't count on the exit status), I can't help but wonder how brittle this code will be...
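For what it's worth, the parsing ends up looking something like this (a sketch; the pattern just reflects the two outputs quoted above, not anything VMware documents):

import re

# Tolerate zero or more spaces on either side of the equals sign, since
# "unregister() = 1" and "register() =1" both count as success.
RCLI_RESULT = re.compile(r'(\w+)\(\)\s*=\s*(\d+)')

def rcli_succeeded(output):
    # The exit status is useless, so the output is all we have to go on.
    return RCLI_RESULT.search(output) is not None

assert rcli_succeeded('unregister() = 1')
assert rcli_succeeded('register() =1')
assert not rcli_succeeded('No virtual machine found.')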
Charles.
Friday, November 07, 2008
Book: Working Effectively with Legacy Code
Working Effectively with Legacy Code by Michael Feathers
ISBN: 0-13-117705-2
"...legacy code is simply code without tests... Code without tests is bad code."
From those statements, it doesn't take much to figure out what this book is about - how to write unit tests for code without tests. Of course, if you've ever tried to do that, you know it's easier said than done. Many (if not most) programmers who inherit a big ball of mud without tests (legacy code) just punt: the existing code has no tests, I can't see how to get any of it under test, so I'll just hack and pray - just like the original author(s) did.
That's where this book comes in. It's primarily a large collection of recipes for writing unit tests for legacy code. That said, the focus is not really on how to write the tests but rather on how to get chunks of the legacy code into a test harness so that you can write unit tests to characterize the existing functionality before adding or modifying it. It also contains techniques for adding new functionality in such a way that you can test it immediately, and for finally executing the "clean up" that the original author(s) promised would happen as soon as that next deadline was reached - all those years and deadlines ago.
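To illustrate what characterizing existing functionality means, here's a toy example of my own (not one from the book):

import unittest

def legacy_discount(total_cents):
    # Inherited and untested; the odd boundary is whatever it is.
    if total_cents > 10000:
        return total_cents * 9 // 10
    return total_cents

class DiscountCharacterizationTest(unittest.TestCase):
    def test_pins_down_current_boundary_behavior(self):
        # These assertions record what the code does today, not what
        # anyone decided it should do - that's the characterization.
        self.assertEqual(legacy_discount(10000), 10000)
        self.assertEqual(legacy_discount(10001), 9000)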
The bulk of the book (Part 2) is organized as a series of complaints or excuses and how to deal with them. These include such topics as "My application has no structure," "Dependencies on libraries are killing me," and "This class is too big, and I don't want it to get any bigger." In each chapter, the author provides examples (in multiple languages - Java, C++, C, etc.) of these problems and specific techniques that can be used to address them. The last part (Part 3) is an encyclopedia of the techniques for easy reference.
If you're lucky enough to do only green-field development, you might think this book would be useless. However, one interpretation of this book is that it's a list of sins to avoid while you're playing in the green field. And many of the techniques can be read as best practices for writing your code to ensure it's testable. (Of course, you're following test-driven development and achieving near-100% coverage, so that would never be a problem with your new code, would it? :-)
My one disappointment with this book was that I was hoping it would provide ideas about how to create higher-level (e.g., functional) tests. Of course, high-level tests are no substitute for unit tests. It's just that I was tasked with creating "some tests" quickly for an entire application, and unit tests are not practical in this particular case, which is my problem, not the author's.
This book is an excellent resource and cookbook for how to add unit tests to an existing code base that lacks tests, and it also provides design and implementation templates to ensure that new code is testable as it's created.
enjoy,
Charles.
Wednesday, November 05, 2008
vmware-cmd: SystemError=HASH(0x95d8d70)
More tales from the dark side. Trying to start a VM out on an ESXi host:
vmware-cmd [conn options] '[datastore] path-to-vm/conf.vmx' start hard
Returns this output:
Fault string: A general system error occurred: Internal error
Fault detail: SystemError=HASH(0x95d8d70)
I spent hours permuting the parameters. And vmware-cmd's -v option doesn't dump out all of the SOAP guts, so I had no view on what was going on. Finally I ran a vifs --dir out on the VM dir and found a bunch of vmware.log files. I fetched one that showed a normal startup and then this:
Nov 05 05:51:51.883: vmx| [msg.License.product.expired] This product has expired.
Nov 05 05:51:51.883: vmx| Be sure that your host machine's date and time are set correctly.
There was a botched version of ESXi that we installed that contained a time bomb, and it had come home to roost. Fair enough - our bad, but certainly there could be a better error message than SystemError=HASH(0x95d8d70).
Charles.
VMware ESXi - Unable to clone virtual disk
To import a VM from VMware Server to ESXi, one must convert the disk by cloning the old one using vmkfstools. Fair enough. Doing this with the ESXi RCLI tools should look something like this (based on what I've seen of the non-RCLI tools on ESX):
vmkfstools [conn-options] -i old-disk new-disk -d thin
However, when you do this with RCLI, you get the following useless error message:
Unable to clone virtual disk : A general system error occurred: Internal error
Using the --verbose flag to look at the gory SOAP details, I could see that the thin option wasn't getting sent. Looking through the Perl code, it looks like one needs to specify both the -d and -a (adapter type) options. Once I added -a lsilogic, it worked like a charm.
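So, presumably, the full working invocation looks like this (same placeholders as before):
vmkfstools [conn-options] -i old-disk new-disk -d thin -a lsilogic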
My big complaint (and I have other examples of this up my sleeve) is that if the user makes a simple, bonehead parameter error, the command should point it out rather than saying "general system error...internal error." I hope this post will help someone else out if s/he is unlucky enough to encounter this error message.
Charles.
Thursday, October 30, 2008
WebTest Groovy example throws NoSuchMethodError
I've been working on a project to do functional testing of a web application using WebTest. I've coded up some prototypes using the standard XML/Ant format/language/DSL, but I wanted to try doing it in Groovy, since I'm not a big fan of XML. I copied the sample application from the manual, but when I ran it, I got an exception:
Caught: : java.lang.NoSuchMethodError: org.apache.xpath.compiler.FunctionTable.installFunction(Ljava/lang/String;Ljava/lang/Class;)I
at test.run(test.groovy:33)
at test.main(test.groovy)
(If you're following along at home, you'll note that that line number is too large for the example - I've already hacked some stuff into it.)
I couldn't find any examples online of people having exactly the same problem, but I'd seen similar errors blamed on an old version of xalan.jar, especially under Mac OS X 10.5 - my platform. (BTW, this happened under both Java 5 and Java 6.) Although I didn't have CLASSPATH set to anything, I figured I'd try pointing it at the version of xalan.jar that ships with WebTest, and it worked. So, my complete invocation line is:
groovy -cp /usr/local/java/webtest_1720/lib/xalan-2.7.0.jar -Dwebtest.home=/usr/local/java/webtest_1720 test.groovy
enjoy,
Charles.
Wednesday, October 29, 2008
VMware ESXi default resource pool name
I've been playing around with ESXi, trying to figure out how to use their Remote Command Line Interface (RCLI) to import and run a VM. This has been a major PITA, which I suppose I could have been documenting as I went, but I didn't. (Full disclosure - I'm doing this without any formal training, without their snazzy Virtual Infrastructure tools, or anything else.)
According to VMware's RCLI Installation and Reference Guide, the data center and resource pool parameters to vmware-cmd -s register are optional, but nonetheless, it kept telling me "Must specify resource pool".
Eventually, I found a forum thread that contained the answer: Resources. (Kinda like "plastics" only different.) I don't have a data center set up, but I found that using almost any word for the data center name seemed to work.
So, in the end, this is what worked for me (your mileage may vary):
vmware-cmd [conn options] -s register '[datastore-name] vm-dir/conf.vmx' root Resources
enjoy,
Charles.
Thursday, September 04, 2008
Increasing Memory for Ant
I had an issue today with the javac Ant task running out of memory when compiling a project (~800 files) under Java 6 on the Mac. It was fine under the default Java 5 compiler on the Mac, and it was fine under Java 6 on the PC. Initially, I set memoryMaximumSize on the javac task, which also required that fork be set.
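That first fix looked something like this (paths illustrative):
<javac srcdir="src" destdir="build/classes"
       fork="true" memoryMaximumSize="512m"/>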
This was OK, but since this was specific to compiling on the Mac, I figured the solution should be specific to my environment on the Mac. After looking at the source of the ant script, I realized that the ANT_OPTS variable could be set to contain an option (-Xmx512m) for the JRE running Ant, and that ANT_OPTS could be set from either ~/.antrc or ~/.ant/ant.conf. So, I created ~/.antrc with one line: ANT_OPTS=-DXmx512m.
(Update: oops, that should be ANT_OPTS=-Xmx512m)
And it worked perfectly, of course - so simple it had to work.
enjoy,
Charles.
Thursday, August 28, 2008
Ant, JavaDoc, and CreateProcess error=87
I've been developing an Ant build file for a large-ish base of existing code. I've been using my Mac as the primary development platform, but the ultimate target (for the other developers) is Windows. My task to build the javadocs runs fine on OS X, but the first time I ran it on Windows, I got the following:
c:\demo\build.xml:180: Javadoc failed: java.io.IOException: Cannot run program "c:\Program Files\Java\jdk1.6.0\bin\javadoc.exe": CreateProcess error=87, The parameter is incorrect
After about 10 minutes of Googling, I found the simple solution: add the useexternalfile attribute to the javadoc task in Ant. Voilà, back in business, on the PC at least, and it didn't seem to break the build on the Mac.
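For the record, the change amounts to something like this (destdir and the fileset are illustrative; useexternalfile is the operative part):
<javadoc destdir="build/docs/api" useexternalfile="yes">
    <fileset dir="src" includes="**/*.java"/>
</javadoc>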
Charles.